Three-point Correlation Functions

There are currently three different classes for calculating the various possible three-point correlation functions.

Each of these classes is a sub-class of the base class Corr3, so they have a number of features in common regarding how they are constructed. The common features are documented here.

class treecorr.Corr3(config=None, *, logger=None, rng=None, **kwargs)[source]

This class stores the results of a 3-point correlation calculation, along with some ancillary data.

This is a base class that is not intended to be constructed directly. But it has a few helper functions that derived classes can use to help perform their calculations. See the derived classes for more details.

Three-point correlations are a bit more complicated than two-point, since the data need to be binned by triangle shape and size, not just by the separation between two points.

There are currently three different ways to quantify the triangle shapes; a worked example follows this list.

  1. The triangle can be defined by its three side lengths (i.e. SSS congruence). In this case, we characterize the triangles according to the following three parameters based on the three side lengths with d1 >= d2 >= d3.

    \[\begin{split}r &= d_2 \\ u &= \frac{d_3}{d_2} \\ v &= \pm \frac{d_1 - d_2}{d_3}\end{split}\]

    The orientation of the triangle is specified by the sign of v. Positive v triangles have the three sides d1,d2,d3 in counter-clockwise orientation. Negative v triangles have the three sides d1,d2,d3 in clockwise orientation.

    Note

    We always bin the same way for positive and negative v values, and the binning specification for v should just be for the positive values. E.g. if you specify min_v=0.2, max_v=0.6, then TreeCorr will also accumulate triangles with -0.6 < v < -0.2 in addition to those with 0.2 < v < 0.6.

  2. The triangle can be defined by two of the sides and the angle between them (i.e. SAS congruence). The vertex point between the two sides is considered point “1” (P1), so the two sides (opposite points 2 and 3) are called d2 and d3. The angle between them is called phi, and it is measured in radians.

    The orientation is defined such that 0 <= phi <= pi is the angle sweeping from d2 to d3 counter-clockwise.

    Unlike the SSS definition, where every triangle is uniquely placed in a single bin, this definition forms a triangle with each object at the central vertex, P1, so for auto-correlations, each triangle is placed in bins three times. For cross-correlations, the order of the points is such that objects in the first catalog are at the central vertex, P1; objects in the second catalog are at P2, which is opposite d2 (i.e. at the end of line segment d3 from P1); and objects in the third catalog are at P3, opposite d3 (i.e. at the end of d2 from P1).

  3. The third option is a multipole expansion of the SAS description. This idea was initially developed by Chen & Szapudi (2005, ApJ, 635, 743) and then further refined by Slepian & Eisenstein (2015, MNRAS, 454, 4142), Philcox et al (2022, MNRAS, 509, 2457), and Porth et al (arXiv:2309.08601). The latter in particular showed how to use this method for non-spin-0 correlations (GGG in particular).

    The basic idea is to do a Fourier transform of the phi binning to convert the phi bins into n bins.

    \[\zeta(d_2, d_3, \phi) = \frac{1}{2\pi} \sum_n \mathcal{Z}_n(d_2,d_3) e^{i n \phi}\]

    Formally, this is exact if the sum goes from \(-\infty\) to \(\infty\). Truncating this sum at \(\pm n_\mathrm{max}\) is similar to binning in \(\phi\) with this many bins over the range \(0 \le \phi \le \pi\).

    The above papers show that this multipole expansion allows for a much more efficient calculation, since it can be done with a kind of 2-point calculation. We provide methods to convert the multipole output into the SAS binning if desired, since that is often more convenient in practice.
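To make these definitions concrete, here is a minimal sketch (using plain numpy rather than the TreeCorr API, which does this binning internally) that computes the SSS and SAS parameters for a single triangle with vertices P1, P2, P3:

>>> import numpy as np
>>> p1, p2, p3 = np.array([0., 0.]), np.array([3., 0.]), np.array([1., 2.])
>>> # Each side is named opposite its vertex: d1 is opposite P1, etc.
>>> d1 = np.linalg.norm(p3 - p2)
>>> d2 = np.linalg.norm(p3 - p1)
>>> d3 = np.linalg.norm(p2 - p1)
>>> # SSS: relabel the sides so that d1 >= d2 >= d3, then form r, u, v.
>>> s1, s2, s3 = sorted([d1, d2, d3], reverse=True)
>>> r = s2
>>> u = s3 / s2
>>> v = (s1 - s2) / s3  # negative if the sides are in clockwise order
>>> # SAS: phi is the angle at P1 between side d3 (to P2) and side d2 (to P3).
>>> phi = np.arccos(np.dot(p2 - p1, p3 - p1) / (d3 * d2))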

The constructors for all derived classes take a config dict as the first argument, since this is often a convenient way to keep track of parameters. If you don’t want to use one, or if you want to change some parameters from the values in a config dict, you can use normal kwargs, which take precedence over anything in the config dict.
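For example, the following sketch (using NNNCorrelation, but the same pattern applies to any derived class) sets most parameters via a config dict and overrides one of them with a kwarg:

>>> import treecorr
>>> config = {'min_sep': 1., 'max_sep': 100., 'nbins': 10}
>>> nnn = treecorr.NNNCorrelation(config, nbins=20)  # the kwarg wins: nbins is 20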

There are three implemented definitions for the metric, which defines how distances between two points are calculated, for three-point correlations:

  • ‘Euclidean’ = straight line Euclidean distance between two points. For spherical coordinates (ra,dec without r), this is the chord distance between points on the unit sphere.

  • ‘Arc’ = the true great circle distance for spherical coordinates.

  • ‘Periodic’ = Like Euclidean, but with periodic boundaries.

    Note

    The triangles for three-point correlations can become ambiguous if a triangle side length d > period/2, which means for the SSS triangle definition, max_sep (the maximum d2) should be less than period/4, and for SAS, max_sep should be less than period/2. This is not enforced.
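    For instance, a sketch of a periodic-box setup that respects the SAS constraint (max_sep < period/2):

    >>> kkk = treecorr.KKKCorrelation(min_sep=1., max_sep=20., nbins=8,
    ...                               metric='Periodic', period=100.)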

There are three allowed values for the bin_type for three-point correlations.

  • ‘LogRUV’ uses the SSS description given above converted to r,u,v. The bin steps will be uniform in log(r) from log(min_sep) .. log(max_sep). The u and v values are binned linearly from min_u .. max_u and min_v .. max_v.

  • ‘LogSAS’ uses the SAS description given above. The bin steps will be uniform in log(d) for both d2 and d3 from log(min_sep) .. log(max_sep). The phi values are binned linearly from min_phi .. max_phi. This is the default.

  • ‘LogMultipole’ uses the multipole description given above. The bin steps will be uniform in log(d) for both d2 and d3 from log(min_sep) .. log(max_sep), and the n value range from -max_n .. max_n, inclusive.
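As an illustration, here is one way to construct a correlation object with each bin_type (the binning parameters are arbitrary, and NNNCorrelation is just one of the derived classes):

>>> sas = treecorr.NNNCorrelation(min_sep=5., max_sep=50., nbins=10,
...                               min_phi=0.1, max_phi=2.8, nphi_bins=20,
...                               bin_type='LogSAS')
>>> ruv = treecorr.NNNCorrelation(min_sep=5., max_sep=50., nbins=10,
...                               min_u=0., max_u=1., nubins=10,
...                               min_v=0., max_v=1., nvbins=10,
...                               bin_type='LogRUV')
>>> mult = treecorr.NNNCorrelation(min_sep=5., max_sep=50., nbins=10,
...                                max_n=20, bin_type='LogMultipole')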

Parameters:
  • config (dict) – A configuration dict that can be used to pass in the below kwargs if desired. This dict is allowed to have additional entries besides those listed below, which are ignored here. (default: None)

  • logger – If desired, a logger object for logging. (default: None, in which case one will be built according to the config dict’s verbose level.)

Keyword Arguments:
  • nbins (int) – How many bins to use. (Exactly three of nbins, bin_size, min_sep, max_sep are required. If nbins is not given or set to None, it will be calculated from the values of the other three, rounding up to the next highest integer. In this case, bin_size will be readjusted to account for this rounding up.)

  • bin_size (float) – The width of the bins in log(separation). (Exactly three of nbins, bin_size, min_sep, max_sep are required. If bin_size is not given or set to None, it will be calculated from the values of the other three.)

  • min_sep (float) – The minimum separation in units of sep_units, if relevant. (Exactly three of nbins, bin_size, min_sep, max_sep are required. If min_sep is not given or set to None, it will be calculated from the values of the other three.)

  • max_sep (float) – The maximum separation in units of sep_units, if relevant. (Exactly three of nbins, bin_size, min_sep, max_sep are required. If max_sep is not given or set to None, it will be calculated from the values of the other three.)

  • sep_units (str) – The units to use for the separation values, given as a string. This includes both min_sep and max_sep above, as well as the units of the output distance values. Valid options are arcsec, arcmin, degrees, hours, radians. (default: radians if angular units make sense, but for 3-d or flat 2-d positions, the default will just match the units of x,y[,z] coordinates)

  • bin_slop (float) – How much slop to allow in the placement of triangles in the bins. If bin_slop = 1, then the bin into which a particular pair is placed may be incorrect by at most 1.0 bin widths. (default: None, which means to use a bin_slop that gives a maximum error of 10% on any bin, which has been found to yield good results for most applications.)

  • angle_slop (float) – How much slop to allow in the angular direction. This works very similarly to bin_slop, but applies to the projection angle of a pair of cells. The projection angle for any two objects in a pair of cells will differ by no more than angle_slop radians from the projection angle defined by the centers of the cells. (default: 0.1)

  • brute (bool) –

    Whether to use the “brute force” algorithm. (default: False) Options are:

    • False (the default): Stop at non-leaf cells whenever the error in the separation is compatible with the given bin_slop and angle_slop.

    • True: Go to the leaves for both catalogs.

    • 1: Always go to the leaves for cat1, but stop at non-leaf cells of cat2 when the error is compatible with the given slop values.

    • 2: Always go to the leaves for cat2, but stop at non-leaf cells of cat1 when the error is compatible with the given slop values.

  • nphi_bins (int) – Analogous to nbins for the phi values when bin_type=LogSAS. (The default is to calculate from phi_bin_size = bin_size, min_phi = 0, max_phi = np.pi, but this can be overridden by specifying up to 3 of these four parameters.)

  • phi_bin_size (float) – Analogous to bin_size for the phi values. (default: bin_size)

  • min_phi (float) – Analogous to min_sep for the phi values. (default: 0)

  • max_phi (float) – Analogous to max_sep for the phi values. (default: np.pi)

  • phi_units (str) – The units to use for the phi values, given as a string. This includes both min_phi and max_phi above, as well as the units of the output meanphi values. Valid options are arcsec, arcmin, degrees, hours, radians. (default: radians)

  • max_n (int) – The maximum value of n to store for the multipole binning. (required if bin_type=LogMultipole)

  • nubins (int) – Analogous to nbins for the u values when bin_type=LogRUV. (The default is to calculate from ubin_size = bin_size, min_u = 0, max_u = 1, but this can be overridden by specifying up to 3 of these four parameters.)

  • ubin_size (float) – Analogous to bin_size for the u values. (default: bin_size)

  • min_u (float) – Analogous to min_sep for the u values. (default: 0)

  • max_u (float) – Analogous to max_sep for the u values. (default: 1)

  • nvbins (int) – Analogous to nbins for the positive v values when bin_type=LogRUV. (The default is to calculate from vbin_size = bin_size, min_v = 0, max_v = 1, but this can be overridden by specifying up to 3 of these four parameters.)

  • vbin_size (float) – Analogous to bin_size for the v values. (default: bin_size)

  • min_v (float) – Analogous to min_sep for the positive v values. (default: 0)

  • max_v (float) – Analogous to max_sep for the positive v values. (default: 1)

  • verbose (int) –

    If no logger is provided, this will optionally specify a logging level to use:

    • 0 means no logging output

    • 1 means to output warnings only (default)

    • 2 means to output various progress information

    • 3 means to output extensive debugging information

  • log_file (str) – If no logger is provided, this will specify a file to write the logging output. (default: None; i.e. output to standard output)

  • output_dots (bool) – Whether to output progress dots during the calculation of the correlation function. (default: False unless verbose is given and >= 2, in which case True)

  • split_method (str) –

    How to split the cells in the tree when building the tree structure. Options are:

    • mean = Use the arithmetic mean of the coordinate being split. (default)

    • median = Use the median of the coordinate being split.

    • middle = Use the middle of the range; i.e. the average of the minimum and maximum value.

    • random = Use a random point somewhere in the middle two quartiles of the range.

  • min_top (int) – The minimum number of top layers to use when setting up the field. (default: \(\max(3, \log_2(N_{\rm cpu}))\))

  • max_top (int) – The maximum number of top layers to use when setting up the field. The top-level cells are where each calculation job starts. There will typically be of order \(2^{\rm max\_top}\) top-level cells. (default: 10)

  • precision (int) – The precision to use for the output values. This specifies how many digits to write. (default: 4)

  • metric (str) – Which metric to use for distance measurements. Options are listed above. (default: ‘Euclidean’)

  • bin_type (str) – What type of binning should be used. Options are listed above. (default: ‘LogSAS’)

  • period (float) – For the ‘Periodic’ metric, the period to use in all directions. (default: None)

  • xperiod (float) – For the ‘Periodic’ metric, the period to use in the x direction. (default: period)

  • yperiod (float) – For the ‘Periodic’ metric, the period to use in the y direction. (default: period)

  • zperiod (float) – For the ‘Periodic’ metric, the period to use in the z direction. (default: period)

  • var_method (str) – Which method to use for estimating the variance. Options are: ‘shot’, ‘jackknife’, ‘sample’, ‘bootstrap’, ‘marked_bootstrap’. (default: ‘shot’)

  • num_bootstrap (int) – How many bootstrap samples to use for the ‘bootstrap’ and ‘marked_bootstrap’ var_methods. (default: 500)

  • rng (RandomState) – If desired, a numpy.random.RandomState instance to use for bootstrap random number generation. (default: None)

  • num_threads (int) –

    How many OpenMP threads to use during the calculation. (default: use the number of cpu cores; this value can also be given in the constructor in the config dict.)

    Note

    This won’t work if the system’s C compiler cannot use OpenMP (e.g. clang prior to version 3.7.)

build_cov_design_matrix(method, *, func=None, comm=None)[source]

Build the design matrix that is used for estimating the covariance matrix.

The design matrix for patch-based covariance estimates is a matrix where each row corresponds to a different estimate of the data vector, \(\zeta_i\) (or \(f(\zeta_i)\) if using the optional func parameter).

The rows in the matrix for each valid method are:

  • ‘shot’: This method is not valid here.

  • ‘jackknife’: The data vector when excluding a single patch.

  • ‘sample’: The data vector using only a single patch for the first catalog.

  • ‘bootstrap’: The data vector for a random resampling of the patches keeping the same total number, but allowing some to repeat. Cross terms from repeated patches are excluded (since they are really auto terms).

  • ‘marked_bootstrap’: The data vector for a random resampling of patches in the first catalog, using all patches for the second catalog. Based on the algorithm in Loh(2008).

See estimate_cov for more details.

The return value includes both the design matrix and a vector of weights (the total weight array in the computed correlation functions). The weights are used for the sample method when estimating the covariance matrix. The other methods ignore them, but they are provided here in case they are useful.
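For example, a sketch assuming ggg is a correlation object that has been processed and finalized using catalogs with patches:

>>> A, w = ggg.build_cov_design_matrix('jackknife')
>>> # Each row of A is the data vector with one patch excluded.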

Parameters:
  • method (str) – Which method to use to estimate the covariance matrix.

  • func (function) – A unary function that takes the list corrs and returns the desired full data vector. [default: None, which is equivalent to lambda corrs: np.concatenate([c.getStat() for c in corrs])]

  • comm (mpi comm) – If desired, an mpi4py communicator to parallelize the calculation over patches. (default: None)

Returns:

A, w – numpy arrays with the design matrix and weights, respectively.

clear()[source]

Clear all data vectors, the results dict, and any related values.

copy()[source]

Make a copy

property cov

The estimated covariance matrix

property cov_diag

A possibly more efficient way to access just the diagonal of the covariance matrix.

If var_method == ‘shot’, this computes the diagonal directly rather than building the full covariance matrix and then extracting the diagonal.

estimate_cov(method, *, func=None, comm=None)[source]

Estimate the covariance matrix based on the data

This function will calculate an estimate of the covariance matrix according to the given method.

Options for method include:

  • ‘shot’ = The variance based on “shot noise” only. This includes the Poisson counts of points for N statistics, shape noise for G statistics, and the observed scatter in the values for K statistics. In this case, the returned value will only be the diagonal. Use np.diag(cov) if you actually want a full matrix from this.

  • ‘jackknife’ = A jackknife estimate of the covariance matrix based on the scatter in the measurement when excluding one patch at a time.

  • ‘sample’ = An estimate based on the sample covariance of a set of samples, taken as the patches of the input catalog.

  • ‘bootstrap’ = A bootstrap covariance estimate. It selects patches at random with replacement and then generates the statistic using all the auto-correlations at their selected repetition plus all the cross terms that aren’t actually auto terms.

  • ‘marked_bootstrap’ = An estimate based on a marked-point bootstrap resampling of the patches. Similar to bootstrap, but only samples the patches of the first catalog and uses all patches from the second catalog that correspond to each patch selection of the first catalog. cf. https://ui.adsabs.harvard.edu/abs/2008ApJ...681..726L/

Both ‘bootstrap’ and ‘marked_bootstrap’ use the num_bootstrap parameter, which can be set on construction.

Note

For most classes, there is only a single statistic, zeta, so this calculates a covariance matrix for that vector. GGGCorrelation has four: gam0, gam1, gam2, and gam3, so in this case the full data vector is gam0 followed by gam1, then gam2, then gam3, and this calculates the covariance matrix for that full vector including all four statistics. The helper function getStat returns the relevant statistic in all cases.

In all cases, the relevant processing needs to already have been completed and finalized. And for all methods other than ‘shot’, the processing should have involved an appropriate number of patches – preferably more patches than the length of the vector for your statistic, although this is not checked.

The default data vector to use for the covariance matrix is given by the method getStat. As noted above, this is usually just self.zeta. However, there is an option to compute the covariance of some other function of the correlation object by providing an arbitrary function, func, which should act on the current correlation object and return the data vector of interest.

For instance, for a GGGCorrelation, you might want to compute the covariance of just gam0 and ignore the others. In this case you could use

>>> func = lambda ggg: ggg.gam0

The return value from this func should be a single numpy array. (This is not directly checked, but you’ll probably get some kind of exception if it doesn’t behave as expected.)

Note

The optional func parameter is not valid in conjunction with method='shot'. It only works for the methods that are based on patch combinations.

This function can be parallelized by passing an mpi4py communicator as the comm argument. For MPI, all processes should have the same inputs. If method == “shot”, then parallelization has no effect.
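A typical usage sketch, assuming cat is a Catalog constructed with multiple patches (e.g. npatch=30), and ggg in the last line is a similarly processed GGGCorrelation:

>>> kkk = treecorr.KKKCorrelation(min_sep=5., max_sep=50., nbins=8)
>>> kkk.process(cat)
>>> cov = kkk.estimate_cov('jackknife')
>>> # Or, for a GGGCorrelation, the covariance of just gam0:
>>> cov0 = ggg.estimate_cov('jackknife', func=lambda c: c.gam0.ravel())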

Parameters:
  • method (str) – Which method to use to estimate the covariance matrix.

  • func (function) – A unary function that acts on the current correlation object and returns the desired data vector. [default: None, which is equivalent to lambda corr: corr.getStat()]

  • comm (mpi comm) – If desired, an mpi4py communicator to parallelize the calculation over patches. (default: None)

Returns:

A numpy array with the estimated covariance matrix.

classmethod from_file(file_name, *, file_type=None, logger=None, rng=None)[source]

Create a new instance from an output file.

This should be a file that was written by TreeCorr.

Note

This classmethod may be called either using the base class or the class type that wrote the file. E.g. if the file was written by GGGCorrelation, then either of the following would work and be equivalent:

>>> ggg = treecorr.GGGCorrelation.from_file(file_name)
>>> ggg = treecorr.Corr3.from_file(file_name)
Parameters:
  • file_name (str) – The name of the file to read in.

  • file_type (str) – The type of file (‘ASCII’, ‘FITS’, or ‘HDF’). (default: determine the type automatically from the extension of file_name.)

  • logger (Logger) – If desired, a logger object to use for logging. (default: None)

  • rng (RandomState) – If desired, a numpy.random.RandomState instance to use for bootstrap random number generation. (default: None)

Returns:

A Correlation object, constructed from the information in the file.

getStat()[source]

The standard statistic for the current correlation object as a 1-d array.

Usually, this is just self.zeta. But in case we have a multi-dimensional array at some point (like TwoD for 2pt), use self.zeta.ravel().

And for GGGCorrelation, it is the concatenation of the four different correlations [gam0.ravel(), gam1.ravel(), gam2.ravel(), gam3.ravel()].

getWeight()[source]

The weight array for the current correlation object as a 1-d array.

This is the weight array corresponding to getStat. Usually just self.weight.ravel(), but duplicated for GGGCorrelation to match what getStat does in that case.

property nonzero

Return whether there are any values accumulated yet. (i.e. ntri > 0)

process(cat1, cat2=None, cat3=None, *, metric=None, ordered=True, num_threads=None, comm=None, low_mem=False, initialize=True, finalize=True, patch_method=None, algo=None, max_n=None)[source]

Compute the 3pt correlation function.

  • If only 1 argument is given, then compute an auto-correlation function.

  • If 2 arguments are given, then compute a cross-correlation function with the first catalog taking one corner of the triangles, and the second taking two corners.

  • If 3 arguments are given, then compute a three-way cross-correlation function.

For cross correlations, the default behavior is to use cat1 for the first vertex (P1), cat2 for the second vertex (P2), and cat3 for the third vertex (P3). If only two catalogs are given, vertices P2 and P3 both come from cat2. The sides d1, d2, d3, used to define the binning, are taken to be opposite P1, P2, P3 respectively.

However, if you want to accumulate triangles where objects from each catalog can take any position in the triangles, you can set ordered=False. In this case, triangles will be formed where P1, P2 and P3 can come from any input catalog, so long as there is one from cat1, one from cat2, and one from cat3 (or two from cat2 if cat3 is None).

All arguments may be lists, in which case all items in the list are used for that element of the correlation.
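The different calling patterns look like this (a sketch; nnn can be any derived correlation object):

>>> nnn.process(cat1)                             # auto-correlation
>>> nnn.process(cat1, cat2)                       # cat1 at P1; cat2 at P2 and P3
>>> nnn.process(cat1, cat2, cat3)                 # one vertex from each catalog
>>> nnn.process(cat1, cat2, cat3, ordered=False)  # any vertex assignment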

Parameters:
  • cat1 (Catalog) – A catalog or list of catalogs for the first field.

  • cat2 (Catalog) – A catalog or list of catalogs for the second field. (default: None)

  • cat3 (Catalog) – A catalog or list of catalogs for the third field. (default: None)

  • metric (str) – Which metric to use. See Metrics for details. (default: ‘Euclidean’; this value can also be given in the constructor in the config dict.)

  • ordered (bool) – Whether to fix the order of the triangle vertices to match the catalogs. (see above; default: True)

  • num_threads (int) – How many OpenMP threads to use during the calculation. (default: use the number of cpu cores; this value can also be given in the constructor in the config dict.)

  • comm (mpi4py.Comm) – If running MPI, an mpi4py Comm object to communicate between processes. If used, the rank=0 process will have the final computation. This only works if using patches. (default: None)

  • low_mem (bool) – Whether to sacrifice a little speed to try to reduce memory usage. This only works if using patches. (default: False)

  • initialize (bool) – Whether to begin the calculation with a call to Corr3.clear. (default: True)

  • finalize (bool) – Whether to complete the calculation with a call to finalize. (default: True)

  • patch_method (str) – Which patch method to use. (default is to use ‘local’ if bin_type=LogMultipole, and ‘global’ otherwise)

  • algo (str) – Which accumulation algorithm to use. (options are ‘triangle’ or ‘multipole’; default is ‘multipole’ unless bin_type is ‘LogRUV’, which can only use ‘triangle’)

  • max_n (int) – If using the multipole algorithm, and this is not directly using bin_type=’LogMultipole’, then this is the value of max_n to use for the multipole part of the calculation. (default is to use 2pi/phi_bin_size; this value can also be given in the constructor in the config dict.)

process_cross(cat1, cat2, cat3, *, metric=None, ordered=True, num_threads=None)[source]

Process a set of three catalogs, accumulating the 3pt cross-correlation.

This accumulates the cross-correlation for the given catalogs as part of a larger auto- or cross-correlation calculation. E.g. when splitting up a large catalog into patches, this is appropriate to use for the cross correlation between different patches as part of the complete auto-correlation of the full catalog.

Parameters:
  • cat1 (Catalog) – The first catalog to process

  • cat2 (Catalog) – The second catalog to process

  • cat3 (Catalog) – The third catalog to process

  • metric (str) – Which metric to use. See Metrics for details. (default: ‘Euclidean’; this value can also be given in the constructor in the config dict.)

  • ordered (bool) – Whether to fix the order of the triangle vertices to match the catalogs. (default: True)

  • num_threads (int) – How many OpenMP threads to use during the calculation. (default: use the number of cpu cores; this value can also be given in the constructor in the config dict.)

read(file_name, *, file_type=None)[source]

Read in values from a file.

This should be a file that was written by TreeCorr, preferably a FITS or HDF5 file, so there is no loss of information.

Warning

The current object should be constructed with the same configuration parameters as the one being read. e.g. the same min_sep, max_sep, etc. This is not checked by the read function.

Parameters:
  • file_name (str) – The name of the file to read in.

  • file_type (str) – The type of file (‘ASCII’, ‘FITS’, or ‘HDF’). (default: determine the type automatically from the extension of file_name.)
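For example, a sketch in which the file name is hypothetical and the binning parameters match those used when the file was written:

>>> nnn = treecorr.NNNCorrelation(min_sep=5., max_sep=50., nbins=10)
>>> nnn.read('nnn_results.fits')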

toSAS(*, target=None, **kwargs)[source]

Convert a multipole-binned correlation to the corresponding SAS binning.

This is only valid for bin_type == LogMultipole.

Keyword Arguments:
  • target – A target Correlation object with LogSAS binning to write to. If this is not given, a new object will be created based on the configuration parameters of the current object. (default: None)

  • **kwargs – Any kwargs that you want to use to configure the returned object. Typically, might include min_phi, max_phi, nphi_bins, phi_bin_size. The default phi binning is [0,pi] with nphi_bins = self.max_n.

Returns:

An object with bin_type=LogSAS containing the same information as this object, but with the SAS binning.
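A usage sketch, assuming cat is a Catalog with shear values:

>>> ggg = treecorr.GGGCorrelation(min_sep=5., max_sep=50., nbins=10,
...                               max_n=30, bin_type='LogMultipole')
>>> ggg.process(cat)
>>> sas = ggg.toSAS(nphi_bins=30)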