
Galaxy bias as cobaya theory #41

Merged
merged 12 commits into master on Jan 20, 2022

Conversation

itrharrison
Collaborator

This will implement a cobaya theory class to calculate power spectra with a bias term (e.g. linear galaxy bias).

To begin with this will just be a constant linear bias,
Pk_gg = b_lin**2 * Pk_mm,
but other subclasses can contain more sophisticated bias models (including ones calculated by external codes such as velocileptors or fastpt).

Merging will close #33.
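The constant linear bias relation above can be sketched in plain Python (illustrative only, with no cobaya dependency; the actual theory class will operate on cobaya's P(k) grids):

```python
def linear_bias_pk(pk_mm, b_lin):
    """Apply a constant linear galaxy bias to a matter power spectrum:
    Pk_gg(k) = b_lin**2 * Pk_mm(k), elementwise over a P(k) grid."""
    return [b_lin**2 * p for p in pk_mm]

# A bias of 2 scales the power spectrum amplitude by 4
pk_gg = linear_bias_pk([1.0, 0.5, 0.25], b_lin=2.0)  # -> [4.0, 2.0, 1.0]
```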

@itrharrison itrharrison added the enhancement New feature or request label Oct 29, 2021
@itrharrison itrharrison self-assigned this Oct 29, 2021
@itrharrison
Collaborator Author

itrharrison commented Oct 29, 2021

I'm attempting to run tests evaluating the model, using cobaya's "one" likelihood as a dummy. You can see the tests here, but the info is:

    info = {"params": {
                       "b_lin": 1.,
                       "H0": 70.,
                       "ombh2": 0.0245,
                       "omch2": 0.1225,
                       "ns": 0.96,
                       "As": 2.2e-9,
                       "tau": 0.05
                       },
            "likelihood": {"one": None},
            "theory": {"camb": None,
                       "linear_bias": {"external": Linear_bias}
                       },
            "sampler": {"evaluate": None},
            "debug": True
           }

This currently fails both with and without the linear bias theory, when trying to run the sampler:

>       updated_info, sampler = run(info)

soliket/tests/test_bias.py:64:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../../../opt/anaconda3/envs/cobaya/lib/python3.9/site-packages/cobaya/run.py:157: in run
    sampler.run()
../../../../opt/anaconda3/envs/cobaya/lib/python3.9/site-packages/cobaya/samplers/evaluate/evaluate.py:66: in run
    self.logposterior = self.model.logposterior(reference_point)
../../../../opt/anaconda3/envs/cobaya/lib/python3.9/site-packages/cobaya/model.py:419: in logposterior
    like = self._loglikes_input_params(input_params,
../../../../opt/anaconda3/envs/cobaya/lib/python3.9/site-packages/cobaya/model.py:310: in _loglikes_input_params
    result = self.logps(input_params, return_derived=return_derived, cached=cached,
../../../../opt/anaconda3/envs/cobaya/lib/python3.9/site-packages/cobaya/model.py:245: in logps
    compute_success = component.check_cache_and_compute(
../../../../opt/anaconda3/envs/cobaya/lib/python3.9/site-packages/cobaya/theory.py:260: in check_cache_and_compute
    if self.calculate(state, want_derived, **params_values_dict) is False:
../../../../opt/anaconda3/envs/cobaya/lib/python3.9/site-packages/cobaya/theories/camb/camb.py:485: in calculate
    params, results = self.provider.get_CAMB_transfers()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <cobaya.theory.Provider object at 0x161e98f70>
name = 'get_CAMB_transfers'

    def __getattr__(self, name):
        if name.startswith('get_'):
            requirement = name[4:]
            try:
                return getattr(self.requirement_providers[requirement], name)
            except KeyError:  # requirement not listed (parameter or result)
>               raise AttributeError
E               AttributeError

../../../../opt/anaconda3/envs/cobaya/lib/python3.9/site-packages/cobaya/theory.py:462: AttributeError
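The traceback bottoms out in cobaya's Provider.__getattr__: no component was registered as a provider of CAMB_transfers, because nothing in the run declared a requirement that forces CAMB to compute anything. A stripped-down mock of that dispatch pattern (class names here are hypothetical stand-ins, mirroring the snippet in the traceback):

```python
class Camb:
    """Stand-in for the CAMB theory component (hypothetical)."""
    def get_CAMB_transfers(self):
        return "transfers"

class Provider:
    """Stripped-down mock of cobaya's Provider.__getattr__ dispatch:
    get_<X> is resolved through a registry of components that declared
    they can provide requirement <X>. If nothing requested <X>, the
    lookup fails with AttributeError, as in the traceback above."""
    def __init__(self, requirement_providers):
        self.requirement_providers = requirement_providers

    def __getattr__(self, name):
        if name.startswith('get_'):
            requirement = name[4:]
            try:
                return getattr(self.requirement_providers[requirement], name)
            except KeyError:  # requirement not listed (parameter or result)
                raise AttributeError(name)
        raise AttributeError(name)

# With the requirement registered, dispatch succeeds:
provider = Provider({'CAMB_transfers': Camb()})
result = provider.get_CAMB_transfers()  # -> "transfers"
# With an empty registry (nothing requested), the same call
# raises AttributeError, matching the failure above.
```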

@cmbant
Collaborator

cmbant commented Nov 1, 2021

I guess your calculate function always gets the Pk even if the class isn't actually asked to calculate anything.

Rather than sampling a mock, you could call get_model() and then add_requirements (https://cobaya.readthedocs.io/en/latest/models.html?highlight=get_model#model.Model.add_requirements) to tell it something that you need to compute. There's an example at https://cobaya.readthedocs.io/en/latest/cosmo_model.html

Or maybe you can add a "requires" argument for Pkgg to the likelihood in the yaml (I haven't tried it).
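A sketch of that get_model() + add_requirements() flow (this requires cobaya and camb installed, so it is not runnable standalone here; the Pk_grid settings are illustrative, not taken from the PR):

```python
from cobaya.model import get_model

# Same info dict as above, minus the "sampler" block
model = get_model(info)

# Declaring a requirement is what registers CAMB as its provider;
# without this step, Provider.get_CAMB_transfers has nothing to dispatch to
model.add_requirements({"Pk_grid": {"z": [0.0], "k_max": 10.0,
                                    "nonlinear": False}})

# Force the computation by evaluating the posterior at a point
# (empty dict here since all params in info are fixed, not sampled)
model.logposterior({})
```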

@itrharrison
Collaborator Author

That works, thanks! I had tried add_requirements but hadn't forced the model computation.

@itrharrison itrharrison marked this pull request as ready for review November 3, 2021 15:17
soliket/bias.py Outdated
'Cannot do log extrapolation with zero-crossing pk '
'for %s, %s' % var_pair)
result = PowerSpectrumInterpolator(self.z, self.k, pk, logP=log_p, logsign=sign,
extrap_kmax=extrap_kmax)

I am not sure I would pass these downstream as interpolators, as opposed to arrays. Other downstream codes (e.g. CCL, but there must be others) will use their own interpolating functions based on input arrays, so you'll need to resample these upstream interpolators and reinterpolate them, which will be inefficient and give rise to errors.

@itrharrison
Collaborator Author

itrharrison commented Nov 18, 2021

(sorry for the delay, was moving house)

Okay, I agree this seems sensible. There are a couple of SOLikeT likelihoods (xcorr, clusters) which request Pk_interpolator but it would be trivial to have them request grids and do any interpolation inside the likelihood.

@itrharrison itrharrison requested a review from damonge November 25, 2021 16:33
@itrharrison itrharrison merged commit 79c4ecc into master Jan 20, 2022
Labels
enhancement New feature or request

Successfully merging this pull request may close these issues.

Implement bias models as cobaya theory
3 participants