bilby.core.sampler.dynamic_dynesty.DynamicDynesty

class bilby.core.sampler.dynamic_dynesty.DynamicDynesty(likelihood, priors, outdir='outdir', label='label', use_ratio=False, plot=False, skip_import_verification=False, check_point=True, check_point_plot=True, n_check_point=None, check_point_delta_t=600, resume=True, nestcheck=False, exit_code=130, print_method='tqdm', maxmcmc=5000, nact=2, naccept=60, rejection_sample_posterior=True, proposals=None, **kwargs)[source]

Bases: Dynesty

bilby wrapper of dynesty.DynamicNestedSampler (https://dynesty.readthedocs.io/en/latest/)

All positional and keyword arguments (i.e., the args and kwargs) passed to run_sampler will be propagated to dynesty.DynamicNestedSampler; see the documentation for that class for further help.

For additional documentation see bilby.core.sampler.Dynesty.
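
A minimal usage sketch, assuming the sampler is registered under the name "dynamic_dynesty" and using an illustrative toy likelihood and prior; extra keyword arguments such as nlive_init are forwarded to dynesty.DynamicNestedSampler:

    import numpy as np
    import bilby

    # Illustrative toy problem: fit a constant offset to noisy data.
    def model(x, mu):
        return mu * np.ones_like(x)

    x = np.linspace(0, 1, 100)
    y = model(x, mu=0.5) + np.random.normal(0, 0.1, len(x))

    likelihood = bilby.core.likelihood.GaussianLikelihood(x=x, y=y, func=model, sigma=0.1)
    priors = dict(mu=bilby.core.prior.Uniform(0, 1, "mu"))

    # Keyword arguments not consumed by bilby (e.g. nlive_init) are passed
    # through to dynesty.DynamicNestedSampler.
    result = bilby.run_sampler(
        likelihood=likelihood,
        priors=priors,
        sampler="dynamic_dynesty",
        outdir="outdir",
        label="dynamic_dynesty_example",
        nlive_init=500,
    )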

__init__(likelihood, priors, outdir='outdir', label='label', use_ratio=False, plot=False, skip_import_verification=False, check_point=True, check_point_plot=True, n_check_point=None, check_point_delta_t=600, resume=True, nestcheck=False, exit_code=130, print_method='tqdm', maxmcmc=5000, nact=2, naccept=60, rejection_sample_posterior=True, proposals=None, **kwargs)[source]
__call__(*args, **kwargs)

Call self as a function.

Methods

__init__(likelihood, priors[, outdir, ...])

calc_likelihood_count()

check_draw(theta[, warning])

Checks if the draw will generate an infinite prior or likelihood

dump_samples_to_dat()

Save the current posterior samples to a space-separated plain-text file.

finalize_sampler_kwargs(sampler_kwargs)

get_initial_points_from_prior([npoints])

Method to draw a set of live points from the prior

get_random_draw_from_prior()

Get a random draw from the prior distribution

log_likelihood(theta)

Since some nested samplers don't call the log_prior method, evaluate the prior constraint here.

log_prior(theta)

Parameters:

nestcheck_data(out_file)

plot_current_state()

Make diagnostic plots of the history and current state of the sampler.

prior_transform(theta)

Prior transform method that is passed into the external sampler.

read_saved_state([continuing])

Read a pickled saved state of the sampler from disk.

reorder_loglikelihoods(...)

Reorders the stored log-likelihoods after they have been reweighted

run_sampler(*args, **kwargs)

A template method to run in subclasses

write_current_state()

Write the current state of the sampler to disk.

write_current_state_and_exit([signum, frame])

Make sure that if a pool of jobs is running only the parent tries to checkpoint and exit.

Attributes

check_point_equiv_kwargs

constraint_parameter_keys

list: List of parameters providing prior constraints

default_kwargs

external_sampler_name

fixed_parameter_keys

list: List of parameter keys that are not being sampled

hard_exit

kwargs

dict: Container for the kwargs.

ndim

int: Number of dimensions of the search parameter space

nlive

Users can specify either nlive_init or nlive (in that order of precedence); if neither is given, the default of 500 is used.

npoints_equiv_kwargs

npool

npool_equiv_kwargs

sampler_class

sampler_function_kwargs

sampler_init

sampler_init_kwargs

sampling_seed_equiv_kwargs

sampling_seed_key

Name of the keyword argument for setting the sampling seed for the specific sampler.

search_parameter_keys

list: List of parameter keys that are being sampled

walks_equiv_kwargs

check_draw(theta, warning=True)[source]

Checks if the draw will generate an infinite prior or likelihood

Also catches the output of numpy.nan_to_num.

Parameters:
theta: array_like

Parameter values at which to evaluate likelihood

warning: bool

Whether or not to print a warning

Returns:
bool

True if the likelihood and prior are finite, False otherwise
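
A schematic version of this check (a sketch, not the bilby implementation): a draw is rejected if its log prior or log likelihood is NaN, infinite, or one of the sentinel values produced by numpy.nan_to_num:

    import numpy as np

    # Sketch only: reject draws whose log prior or log likelihood is NaN,
    # infinite, or a numpy.nan_to_num sentinel (~1.8e308).
    def draw_is_finite(log_prior_value, log_likelihood_value, warning=True):
        bad_values = [np.inf, -np.inf, np.nan_to_num(np.inf), np.nan_to_num(-np.inf)]
        for name, value in [("prior", log_prior_value), ("likelihood", log_likelihood_value)]:
            if np.isnan(value) or value in bad_values:
                if warning:
                    print(f"Prior draw has a non-finite {name}")
                return False
        return True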

property constraint_parameter_keys

list: List of parameters providing prior constraints

property default_kwargs
dump_samples_to_dat()[source]

Save the current posterior samples to a space-separated plain-text file. These are unbiased posterior samples; however, there will not be many of them until the analysis is nearly over.
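
The samples file can be inspected with standard tools; the path below is an assumption based on the sampler's outdir/label convention, not a guaranteed filename:

    import pandas as pd

    # Illustrative path; substitute the actual outdir and label used.
    samples = pd.read_csv("outdir/label_samples.dat", sep=r"\s+")
    print(samples.describe())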

property fixed_parameter_keys

list: List of parameter keys that are not being sampled

get_initial_points_from_prior(npoints=1)[source]

Method to draw a set of live points from the prior

This iterates over draws from the prior until all the samples have a finite prior and likelihood (relevant for constrained priors).

Parameters:
npoints: int

The number of values to return

Returns:
unit_cube, parameters, likelihood: tuple of array_like

unit_cube (nlive, ndim) is an array of the prior samples from the unit cube, parameters (nlive, ndim) is the unit_cube array transformed to the target space, while likelihood (nlive) are the likelihood evaluations.
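
A schematic sketch of this rejection loop (not the bilby implementation), assuming priors is a bilby PriorDict and check_draw / log_likelihood behave as documented on this page:

    import numpy as np

    # Sketch only: keep drawing from the unit hypercube until npoints draws
    # pass the finiteness check (relevant for constrained priors).
    def initial_points_sketch(priors, keys, log_likelihood, check_draw, npoints):
        unit, params, logls = [], [], []
        rng = np.random.default_rng()
        while len(unit) < npoints:
            u = rng.uniform(0, 1, len(keys))
            theta = priors.rescale(keys, u)
            if check_draw(theta, warning=False):
                unit.append(u)
                params.append(theta)
                logls.append(log_likelihood(theta))
        return np.array(unit), np.array(params), np.array(logls)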

get_random_draw_from_prior()[source]

Get a random draw from the prior distribution

Returns:
draw: array_like

An ndim-length array of values drawn from the prior. Parameters with delta-function (or fixed) priors are not returned.
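
For context, a comparable draw can be made directly from a bilby PriorDict; the parameter names and values here are purely illustrative:

    from bilby.core.prior import DeltaFunction, PriorDict, Uniform

    # Illustrative priors: "fixed" has a delta-function prior and is excluded
    # from the draw, mirroring the behaviour described above.
    priors = PriorDict(dict(mu=Uniform(0, 1, "mu"), fixed=DeltaFunction(3, "fixed")))
    search_keys = [key for key in priors if not isinstance(priors[key], DeltaFunction)]
    draw = priors.sample_subset(search_keys)
    print(draw)  # e.g. {'mu': 0.42}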

property kwargs

dict: Container for the kwargs. Has more sophisticated logic in subclasses

log_likelihood(theta)[source]

Since some nested samplers don’t call the log_prior method, evaluate the prior constraint here.

Parameters:
theta: array_like

Parameter values at which to evaluate likelihood

Returns:
float: log_likelihood
log_prior(theta)[source]
Parameters:
theta: list

List of sampled parameter values

Returns:
float: Joint ln prior probability of theta
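
For illustration, the joint ln prior of a list of sampled values can be evaluated through a bilby PriorDict; the parameter names and values are examples only:

    from bilby.core.prior import PriorDict, Uniform

    # Example only: joint ln prior of theta for two uniform parameters.
    priors = PriorDict(dict(mu=Uniform(0, 10, "mu"), sigma=Uniform(0, 1, "sigma")))
    theta = [2.5, 0.5]
    ln_prior = priors.ln_prob({key: value for key, value in zip(["mu", "sigma"], theta)})
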
property ndim

int: Number of dimensions of the search parameter space

property nlive

Users can specify either nlive_init or nlive (in that order of precedence); if neither is given, the default of 500 is used.

plot_current_state()[source]

Make diagnostic plots of the history and current state of the sampler.

These plots are a mixture of dynesty's built-in run and trace plots and our custom stats plot. We also make a copy of the trace plot using the unit hypercube samples to reflect the internal state of the sampler.

Any errors during plotting should be handled so that sampling can continue.

prior_transform(theta)[source]

Prior transform method that is passed into the external sampler; it maps samples from the unit hypercube onto the prior space.

Parameters:
theta: list

List of sampled values on a unit interval

Returns:
list: Properly rescaled sampled values
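
For illustration, the same rescaling can be performed directly with a bilby PriorDict; parameter names and values are examples only:

    from bilby.core.prior import PriorDict, Uniform

    # Example only: map points on the unit hypercube to the prior space.
    priors = PriorDict(dict(mu=Uniform(0, 10, "mu"), sigma=Uniform(0, 1, "sigma")))
    unit_cube = [0.25, 0.5]
    rescaled = priors.rescale(["mu", "sigma"], unit_cube)
    print(rescaled)  # [2.5, 0.5]
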
read_saved_state(continuing=False)[source]

Read a pickled saved state of the sampler from disk.

If the live points are present and the run is continuing they are removed. The random state must be reset, as this isn't saved by the pickle. nqueue is set to a negative number to trigger the queue to be refilled before the first iteration. The previous run time is restored from the saved state.

Parameters:
continuing: bool

Whether the run is continuing or terminating; if True, the loaded state is mostly written back to disk.
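
Resuming is driven by the constructor keywords shown in the signature above. A hedged sketch, assuming bilby is imported and likelihood and priors are defined as in the earlier example; the values are illustrative:

    # resume and check_point_delta_t appear in the DynamicDynesty signature.
    result = bilby.run_sampler(
        likelihood=likelihood,
        priors=priors,
        sampler="dynamic_dynesty",
        outdir="outdir",
        label="label",
        resume=True,              # pick up from a saved state if one exists
        check_point_delta_t=600,  # checkpoint roughly every 600 seconds
    )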

static reorder_loglikelihoods(unsorted_loglikelihoods, unsorted_samples, sorted_samples)[source]

Reorders the stored log-likelihoods after they have been reweighted

This creates a sorting index by matching the reweighted result.samples against the raw samples, then uses this index to sort the loglikelihoods

Parameters:
sorted_samples, unsorted_samples: array-like

Sorted and unsorted values of the samples. These should be of the same shape and contain the same sample values, but in different orders

unsorted_loglikelihoods: array-like

The loglikelihoods corresponding to the unsorted_samples

Returns:
sorted_loglikelihoods: array-like

The loglikelihoods reordered to match that of the sorted_samples
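
A schematic version of the reordering (a sketch, not the bilby implementation), assuming each sample row is unique:

    import numpy as np

    # Sketch only: for each row of sorted_samples, locate its position in
    # unsorted_samples and use that index to reorder the log-likelihoods.
    def reorder_loglikelihoods_sketch(unsorted_loglikelihoods, unsorted_samples, sorted_samples):
        idxs = [
            np.where((unsorted_samples == row).all(axis=1))[0][0]
            for row in sorted_samples
        ]
        return unsorted_loglikelihoods[np.asarray(idxs)]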

run_sampler(*args, **kwargs)[source]

A template method to run in subclasses

sampling_seed_key = 'seed'

Name of the keyword argument for setting the sampling seed for the specific sampler. If a specific sampler does not have a sampling seed option, then it should be left as None.

property search_parameter_keys

list: List of parameter keys that are being sampled

write_current_state()[source]

Write the current state of the sampler to disk.

The sampler is pickled to disk using dill. The sampling time is also stored so that the full CPU time for the run can be recovered.

The check of whether the sampler is picklable is to catch an error when using pytest. Hopefully, this message won’t be triggered during normal running.
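
A minimal sketch of the dill-based checkpoint described above, where sampler stands for the running sampler instance and the filename is an assumption rather than the exact convention used by bilby:

    import dill

    # Sketch only: verify the sampler object is picklable, then dump it.
    if dill.pickles(sampler):
        with open("outdir/label_resume.pickle", "wb") as f:
            dill.dump(sampler, f)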

write_current_state_and_exit(signum=None, frame=None)[source]

Make sure that if a pool of jobs is running only the parent tries to checkpoint and exit. Only the parent has a ‘pool’ attribute.

For samplers that must hard exit (typically due to a non-Python process) use os._exit, which cannot be caught. Other samplers exit by raising SystemExit, which can be caught.
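
A schematic of this checkpoint-and-exit logic (a sketch, not bilby's implementation), assuming worker processes lack a pool attribute and that hard_exit and exit_code are set as documented above:

    import os
    import sys

    # Sketch only: children (no `pool` attribute) ignore the signal; the
    # parent checkpoints, then either hard-exits or raises SystemExit.
    def write_current_state_and_exit_sketch(sampler, signum=None, frame=None):
        if signum is not None and not hasattr(sampler, "pool"):
            return
        sampler.write_current_state()
        if getattr(sampler, "hard_exit", False):
            os._exit(sampler.exit_code)  # cannot be caught by the caller
        sys.exit(sampler.exit_code)      # raises SystemExit, which can be caught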