bilby.core.sampler.dynesty.Dynesty

class bilby.core.sampler.dynesty.Dynesty(likelihood, priors, outdir='outdir', label='label', use_ratio=False, plot=False, skip_import_verification=False, check_point=True, check_point_plot=True, n_check_point=None, check_point_delta_t=600, resume=True, nestcheck=False, exit_code=130, print_method='tqdm', maxmcmc=5000, nact=2, naccept=60, rejection_sample_posterior=True, proposals=None, **kwargs)[source]

Bases: NestedSampler

bilby wrapper of dynesty.NestedSampler (https://dynesty.readthedocs.io/en/latest/)

All positional and keyword arguments (i.e., the args and kwargs) passed to run_sampler will be propagated to dynesty.NestedSampler; see the documentation for that class for further help. Under Other Parameters below, we list commonly used kwargs and the Bilby defaults.
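
As a minimal usage sketch (the Gaussian likelihood, model, and prior below are illustrative assumptions, not part of this class):

    import numpy as np
    import bilby

    def model(x, mu):
        # Trivial constant model; illustrative only
        return mu * np.ones_like(x)

    likelihood = bilby.core.likelihood.GaussianLikelihood(
        x=np.linspace(0, 1, 100),
        y=np.random.normal(0, 1, 100),
        func=model,
        sigma=1,
    )
    priors = dict(mu=bilby.core.prior.Uniform(-5, 5, name="mu"))

    result = bilby.run_sampler(
        likelihood=likelihood,
        priors=priors,
        sampler="dynesty",
        outdir="outdir",
        label="dynesty_example",
        nlive=500,                   # forwarded to dynesty
        print_method="interval-60",  # progress update every 60 seconds
    )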

Parameters:
likelihood: likelihood.Likelihood

An object with a log_l method

priors: bilby.core.prior.PriorDict, dict

Priors to be used in the search. This has attributes for each parameter to be sampled.

outdir: str, optional

Name of the output directory

label: str, optional

Naming scheme of the output files

use_ratio: bool, optional

Whether to sample using the log-likelihood ratio rather than the log-likelihood

plot: bool, optional

Whether to create trace plots during sampling

skip_import_verification: bool

If true, skip the check that the sampler is installed. This is only advisable for testing environments

print_method: str (‘tqdm’)

The method to use for printing. The options are:

  • ‘tqdm’: use a tqdm progress bar; this is the default.
  • ‘interval-$TIME’: print to stdout every $TIME seconds, e.g., ‘interval-10’ prints every ten seconds; this does not print every iteration.
  • anything else: print to stdout at every iteration.

exit_code: int

The code the sampler exits with if it hasn’t finished sampling

check_point: bool,

If true, use checkpointing.

check_point_plot: bool,

If true, generate a trace plot along with the check-point

check_point_delta_t: float (600)

The minimum checkpoint period (in seconds). Should the run be interrupted, it can be resumed from the last checkpoint.

n_check_point: int, optional (None)

The number of steps to take before checking whether to checkpoint.

resume: bool

If true, resume run from checkpoint (if available)

maxmcmc: int (5000)

The maximum length of the MCMC exploration to find a new point

nact: int (2)

The number of autocorrelation lengths for MCMC exploration. For use with the act-walk and rwalk sample methods. See the dynesty guide in the Bilby docs for more details.

naccept: int (60)

The expected number of accepted steps for MCMC exploration when using the acceptance-walk sampling method.

rejection_sample_posterior: bool (True)

Whether to form the posterior by rejection sampling the nested samples. If False, the nested samples are resampled with repetition. This was the default behaviour in Bilby<=1.4.1 and leads to non-independent samples being produced.

proposals: iterable (None)

The proposal methods to use during MCMC. This can be some combination of "diff" and "volumetric". See the dynesty guide in the Bilby docs for more details, and the usage sketch after this parameter list. default=["diff"].

rstate: numpy.random.Generator (None)

Instance of a numpy random generator for generating random numbers. Also see seed in ‘Other Parameters’.
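
A rough sketch of how the checkpointing and MCMC-tuning arguments above can be passed; the values are illustrative, not recommendations, and likelihood and priors are assumed to be defined as in the earlier sketch:

    result = bilby.run_sampler(
        likelihood=likelihood,
        priors=priors,
        sampler="dynesty",
        outdir="outdir",
        label="dynesty_example",
        check_point=True,
        check_point_delta_t=600,          # checkpoint at most every ten minutes
        resume=True,                      # restart from the last checkpoint if present
        sample="acceptance-walk",
        naccept=60,                       # target accepted steps per MCMC chain
        proposals=["diff", "volumetric"],
        rejection_sample_posterior=True,  # independent posterior samples
    )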

Other Parameters:
nlive: int, (1000)

The number of live points, note this can also equivalently be given as one of [nlive, nlives, n_live_points, npoints]

bound: {‘live’, ‘live-multi’, ‘none’, ‘single’, ‘multi’, ‘balls’, ‘cubes’}, (‘live’)

Method used to select new points

sample: {‘act-walk’, ‘acceptance-walk’, ‘unif’, ‘rwalk’, ‘slice’, ‘rslice’, ‘hslice’, ‘rwalk_dynesty’}, (‘act-walk’)

Method used to sample uniformly within the likelihood constraints, conditioned on the provided bounds

walks: int (100)

Number of walks taken if using the dynesty-implemented sample methods. Note that the default walks in dynesty itself is 25, although using ndim * 10 can be a reasonable rule of thumb for new problems. For sample="act-walk" and sample="rwalk" this parameter has no impact on the sampling.

dlogz: float, (0.1)

Stopping criterion: sampling terminates when the estimated remaining log-evidence falls below this value

seed: int (None)

Use to seed the random number generator if rstate is not specified; see the sketch below.
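
A sketch combining the dynesty-native settings above (values illustrative; likelihood and priors as before):

    result = bilby.run_sampler(
        likelihood=likelihood,
        priors=priors,
        sampler="dynesty",
        nlive=1000,         # number of live points
        sample="act-walk",  # default sampling method
        dlogz=0.1,          # stop when the remaining log-evidence falls below this
        seed=42,            # seeds the random number generator
    )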

__init__(likelihood, priors, outdir='outdir', label='label', use_ratio=False, plot=False, skip_import_verification=False, check_point=True, check_point_plot=True, n_check_point=None, check_point_delta_t=600, resume=True, nestcheck=False, exit_code=130, print_method='tqdm', maxmcmc=5000, nact=2, naccept=60, rejection_sample_posterior=True, proposals=None, **kwargs)[source]
__call__(*args, **kwargs)

Call self as a function.

Methods

__init__(likelihood, priors[, outdir, ...])

calc_likelihood_count()

check_draw(theta[, warning])

Checks if the draw will generate an infinite prior or likelihood

dump_samples_to_dat()

Save the current posterior samples to a space-separated plain-text file.

finalize_sampler_kwargs(sampler_kwargs)

get_initial_points_from_prior([npoints])

Method to draw a set of live points from the prior

get_random_draw_from_prior()

Get a random draw from the prior distribution

log_likelihood(theta)

Since some nested samplers don't call the log_prior method, evaluate the prior constraint here.

log_prior(theta)

Parameters:

nestcheck_data(out_file)

plot_current_state()

Make diagnostic plots of the history and current state of the sampler.

prior_transform(theta)

Prior transform method that is passed into the external sampler.

read_saved_state([continuing])

Read a pickled saved state of the sampler to disk.

reorder_loglikelihoods(...)

Reorders the stored log-likelihoods after they have been reweighted

run_sampler(*args, **kwargs)

A template method to run in subclasses

write_current_state()

Write the current state of the sampler to disk.

write_current_state_and_exit([signum, frame])

Make sure that if a pool of jobs is running only the parent tries to checkpoint and exit.

Attributes

check_point_equiv_kwargs

constraint_parameter_keys

list: List of parameters providing prior constraints

default_kwargs

external_sampler_name

fixed_parameter_keys

list: List of parameter keys that are not being sampled

hard_exit

kwargs

dict: Container for the kwargs.

ndim

int: Number of dimensions of the search parameter space

nlive

npoints_equiv_kwargs

npool

npool_equiv_kwargs

sampler_class

sampler_function_kwargs

sampler_init

sampler_init_kwargs

sampling_seed_equiv_kwargs

sampling_seed_key

Name of the keyword argument for setting the sampling seed for the specific sampler.

search_parameter_keys

list: List of parameter keys that are being sampled

walks_equiv_kwargs

check_draw(theta, warning=True)[source]

Checks if the draw will generate an infinite prior or likelihood

Also catches the output of numpy.nan_to_num.

Parameters:
theta: array_like

Parameter values at which to evaluate likelihood

warning: bool

Whether or not to print a warning

Returns:
bool

True if the likelihood and prior are finite, False otherwise
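
A schematic of the check described above (a hypothetical helper, not bilby's implementation); numpy.nan_to_num replaces infinities with the largest finite float, so the sketch also flags values at that magnitude:

    import numpy as np

    def draw_is_finite(log_prior_value, log_likelihood_value):
        # Hypothetical sketch: reject nan/inf, and also the huge finite
        # values that numpy.nan_to_num substitutes for infinities.
        values = np.array([log_prior_value, log_likelihood_value])
        return bool(
            np.all(np.isfinite(values))
            and np.all(np.abs(values) < np.finfo(float).max)
        )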

property constraint_parameter_keys

list: List of parameters providing prior constraints

property default_kwargs
dump_samples_to_dat()[source]

Save the current posterior samples to a space-separated plain-text file. These are unbiased posterior samples, however, there will not be many of them until the analysis is nearly over.

property fixed_parameter_keys

list: List of parameter keys that are not being sampled

get_initial_points_from_prior(npoints=1)[source]

Method to draw a set of live points from the prior

This iterates over draws from the prior until all the samples have a finite prior and likelihood (relevant for constrained priors).

Parameters:
npoints: int

The number of values to return

Returns:
unit_cube, parameters, likelihood: tuple of array_like

unit_cube (nlive, ndim) is an array of the prior samples from the unit cube, parameters (nlive, ndim) is the unit_cube array transformed to the target space, while likelihood (nlive) are the likelihood evaluations.
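
The behaviour described can be sketched as a simple rejection loop (hypothetical code; assumes priors is a bilby PriorDict, keys lists the sampled parameters, and log_likelihood maps a parameter list to a float):

    import numpy as np

    def initial_points_sketch(priors, keys, log_likelihood, npoints=1):
        # Hypothetical sketch of the draw-until-finite loop
        rng = np.random.default_rng()
        unit_cube, parameters, likelihoods = [], [], []
        while len(unit_cube) < npoints:
            u = rng.uniform(size=len(keys))  # point in the unit cube
            theta = priors.rescale(keys, u)  # map to the target space
            ln_prior = priors.ln_prob(dict(zip(keys, theta)))
            ln_l = log_likelihood(theta)
            if np.isfinite(ln_prior) and np.isfinite(ln_l):
                unit_cube.append(u)
                parameters.append(theta)
                likelihoods.append(ln_l)
        return np.array(unit_cube), np.array(parameters), np.array(likelihoods)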

get_random_draw_from_prior()[source]

Get a random draw from the prior distribution

Returns:
draw: array_like

An ndim-length array of values drawn from the prior. Parameters with delta-function (or fixed) priors are not returned

property kwargs

dict: Container for the kwargs. Has more sophisticated logic in subclasses

log_likelihood(theta)[source]

Since some nested samplers don’t call the log_prior method, evaluate the prior constraint here.

Parameters:
theta: array_like

Parameter values at which to evaluate likelihood

Returns:
float: log_likelihood
log_prior(theta)[source]
Parameters:
theta: list

List of sampled parameter values

Returns:
float: Joint ln prior probability of theta
property ndim

int: Number of dimensions of the search parameter space

plot_current_state()[source]

Make diagnostic plots of the history and current state of the sampler.

These plots are a mixture of dynesty-implemented run and trace plots and our custom stats plot. We also make a copy of the trace plot using the unit hypercube samples to reflect the internal state of the sampler.

Any errors during plotting should be handled so that sampling can continue.

prior_transform(theta)[source]

Prior transform method that is passed into the external sampler: values sampled on the unit cube are mapped to the target prior space.

Parameters:
theta: list

List of sampled values on a unit interval

Returns:
list: Properly rescaled sampled values
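
Conceptually this wraps PriorDict.rescale; a minimal self-contained sketch (the prior and values are illustrative):

    # Hypothetical illustration: map unit-interval draws to the prior space
    from bilby.core.prior import PriorDict, Uniform

    priors = PriorDict(dict(mu=Uniform(-5, 5)))
    theta = [0.5]  # values in [0, 1], one per sampled parameter
    rescaled = priors.rescale(["mu"], theta)  # -> [0.0]
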
read_saved_state(continuing=False)[source]

Read a pickled saved state of the sampler to disk.

If the live points are present and the run is continuing they are removed. The random state must be reset, as this isn’t saved by the pickle. nqueue is set to a negative number to trigger the queue to be refilled before the first iteration. The previously accumulated sampling time is restored from the saved state.

Parameters:
continuing: bool

Whether the run is continuing or terminating; if True, the loaded state is mostly written back to disk.

static reorder_loglikelihoods(unsorted_loglikelihoods, unsorted_samples, sorted_samples)[source]

Reorders the stored log-likelihood after they have been reweighted

This creates a sorting index by matching the reweighted result.samples against the raw samples, then uses this index to sort the loglikelihoods

Parameters:
sorted_samples, unsorted_samples: array-like

Sorted and unsorted values of the samples. These should be of the same shape and contain the same sample values, but in different orders

unsorted_loglikelihoods: array-like

The loglikelihoods corresponding to the unsorted_samples

Returns:
sorted_loglikelihoods: array-like

The loglikelihoods reordered to match that of the sorted_samples
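
A sketch of the matching logic (illustrative; not necessarily bilby's exact implementation):

    import numpy as np

    def reorder_loglikelihoods_sketch(
        unsorted_loglikelihoods, unsorted_samples, sorted_samples
    ):
        # For each sorted sample, locate the identical row among the raw
        # samples, then reorder the log-likelihoods with those indices.
        idxs = [
            np.where(np.all(row == unsorted_samples, axis=1))[0][0]
            for row in sorted_samples
        ]
        return unsorted_loglikelihoods[np.asarray(idxs)]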

run_sampler(*args, **kwargs)[source]

A template method to run in subclasses

sampling_seed_key = 'seed'

Name of the keyword argument for setting the sampling seed for the specific sampler. If a specific sampler does not have a sampling seed option, then it should be left as None.

property search_parameter_keys

list: List of parameter keys that are being sampled

write_current_state()[source]

Write the current state of the sampler to disk.

The sampler is pickled using dill. The sampling time is also stored to get the full CPU time for the run.

The check of whether the sampler is picklable is to catch an error when using pytest. Hopefully, this message won’t be triggered during normal running.

write_current_state_and_exit(signum=None, frame=None)[source]

Make sure that if a pool of jobs is running only the parent tries to checkpoint and exit. Only the parent has a ‘pool’ attribute.

For samplers that must hard exit (typically due to a non-Python process), use os._exit, which cannot be caught. Other samplers’ exits can be caught as a SystemExit.
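
A schematic of the control flow described above (hypothetical class; attribute and method names follow the text rather than bilby's exact code):

    import os
    import sys

    class CheckpointExitSketch:
        """Hypothetical illustration of the checkpoint-and-exit logic."""

        exit_code = 130
        hard_exit = False

        def write_current_state(self):
            ...  # pickle the sampler state to disk

        def write_current_state_and_exit(self, signum=None, frame=None):
            # Only the parent process is given a 'pool' attribute, so
            # child workers fall through without checkpointing.
            if hasattr(self, "pool"):
                self.write_current_state()
                if self.hard_exit:
                    os._exit(self.exit_code)  # cannot be caught
                else:
                    sys.exit(self.exit_code)  # raises SystemExit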