bilby.core.prior.dict.PriorDict

class bilby.core.prior.dict.PriorDict(dictionary=None, filename=None, conversion_function=None)[source]

Bases: dict

__init__(dictionary=None, filename=None, conversion_function=None)[source]

A dictionary of priors

Parameters:
dictionary: Union[dict, str, None]

If given, a dictionary to generate the prior set.

filename: Union[str, None]

If given, a file containing the prior to generate the prior set.

conversion_function: func

Function to convert between sampled parameters and constraints. Default is no conversion.

__call__(*args, **kwargs)

Call self as a function.
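At heart, a PriorDict is a dict mapping parameter names to prior objects, with joint operations (sampling, probabilities) layered on top. A minimal stdlib-only sketch of that idea (UniformPrior is a hypothetical stand-in, not the bilby class):

```python
import random

class UniformPrior:
    """Toy uniform prior standing in for a bilby prior object."""
    def __init__(self, minimum, maximum):
        self.minimum = minimum
        self.maximum = maximum

    def sample(self):
        return random.uniform(self.minimum, self.maximum)

# A PriorDict behaves like a plain dict of name -> prior ...
priors = {
    "a": UniformPrior(0, 1),
    "b": UniformPrior(-5, 5),
}

# ... plus joint operations, e.g. drawing one value per parameter:
draw = {key: prior.sample() for key, prior in priors.items()}
assert set(draw) == {"a", "b"}
assert 0 <= draw["a"] <= 1 and -5 <= draw["b"] <= 5
```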

Methods

__init__([dictionary, filename, ...])

A dictionary of priors

cdf(sample)

Evaluate the cumulative distribution function at the provided points

check_ln_prob(sample, ln_prob)

check_prob(sample, prob)

clear()

convert_floats_to_delta_functions()

Convert all float parameters to delta functions

copy()

We have to overwrite the copy method as it fails due to the presence of defaults.

default_conversion_function(sample)

Placeholder parameter conversion function.

evaluate_constraints(sample)

fill_priors(likelihood[, default_priors_file])

Fill dictionary of priors based on required parameters of likelihood

from_dictionary(dictionary)

from_file(filename)

Reads in a prior from a file specification

from_json(filename)

Reads in a prior from a json file

fromkeys(iterable[, value])

Create a new dictionary with keys from iterable and values set to value.

get(key[, default])

Return the value for key if key is in the dictionary, else default.

items()

keys()

ln_prob(sample[, axis])

Evaluate the joint log probability of the samples

normalize_constraint_factor(keys[, ...])

pop(key[, default])

If the key is not found, return the default if given; otherwise, raise a KeyError.

popitem(/)

Remove and return a (key, value) pair as a 2-tuple.

prob(sample, **kwargs)

Evaluate the joint probability of the samples

rescale(keys, theta)

Rescale samples from unit cube to prior

sample([size])

Draw samples from the prior set

sample_subset([keys, size])

Draw samples from the prior set for parameters which are not a DeltaFunction

sample_subset_constrained([keys, size])

sample_subset_constrained_as_array([keys, size])

Return an array of samples

setdefault(key[, default])

Insert key with a value of default if key is not in the dictionary.

test_has_redundant_keys()

Test whether there are redundant keys in self.

test_redundancy(key[, disable_logging])

Empty redundancy test; should be overridden in subclasses

to_file(outdir, label)

Write the prior distribution to file.

to_json(outdir, label)

update([E, ]**F)

If E is present and has a .keys() method: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]

values()

Attributes

constraint_keys

fixed_keys

non_fixed_keys

cdf(sample)[source]

Evaluate the cumulative distribution function at the provided points

Parameters:
sample: dict, pandas.DataFrame

Dictionary of the samples at which to evaluate the CDF

Returns:
dict, pandas.DataFrame: Dictionary containing the CDF values
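For a uniform prior, for example, the CDF is the clipped linear map from the support to [0, 1]. A stdlib-only sketch of the per-key evaluation (the bounds and helper below are hypothetical illustrations, not bilby API):

```python
def uniform_cdf(x, minimum, maximum):
    """CDF of a uniform prior: 0 below minimum, 1 above maximum, linear between."""
    return min(max((x - minimum) / (maximum - minimum), 0.0), 1.0)

# PriorDict.cdf evaluates the CDF key-by-key over a sample dict:
sample = {"a": 0.25, "b": 2.5}
bounds = {"a": (0, 1), "b": (0, 5)}   # hypothetical prior bounds
cdf_values = {k: uniform_cdf(v, *bounds[k]) for k, v in sample.items()}
assert cdf_values == {"a": 0.25, "b": 0.5}
```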
clear() → None. Remove all items from D.
convert_floats_to_delta_functions()[source]

Convert all float parameters to delta functions
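The effect is that bare numeric entries become point-mass priors. A toy sketch under stated assumptions (DeltaFunctionPrior is a hypothetical stand-in for bilby's DeltaFunction):

```python
class DeltaFunctionPrior:
    """Toy stand-in for a delta-function prior: all mass at one point."""
    def __init__(self, peak):
        self.peak = peak

    def sample(self):
        return self.peak

priors = {"a": 3.0, "b": DeltaFunctionPrior(1.0)}

# Replace bare numbers with delta-function priors, as the method does:
for key, value in priors.items():
    if isinstance(value, (int, float)):
        priors[key] = DeltaFunctionPrior(value)

assert priors["a"].sample() == 3.0
```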

copy()[source]

We have to overwrite the copy method as it fails due to the presence of defaults.

default_conversion_function(sample)[source]

Placeholder parameter conversion function.

Parameters:
sample: dict

Dictionary to convert

Returns:
sample: dict

Same as input

fill_priors(likelihood, default_priors_file=None)[source]

Fill dictionary of priors based on required parameters of likelihood

Any floats in the prior will be converted to delta function priors. Any required, non-specified parameters will use the default.

Note: if the likelihood has non_standard_sampling_parameter_keys, then this will set up default priors for those as well.

Parameters:
likelihood: bilby.likelihood.GravitationalWaveTransient instance

Used to infer the set of parameters to fill the prior with

default_priors_file: str, optional

If given, a file containing the default priors.

Returns:
prior: dict

The filled prior dictionary

from_file(filename)[source]

Reads in a prior from a file specification

Parameters:
filename: str

Name of the file to be read in

Notes

Lines beginning with ‘#’ or empty lines will be ignored. Priors can be loaded from:

  • bilby.core.prior as, e.g., foo = Uniform(minimum=0, maximum=1)

  • floats, e.g., foo = 1

  • bilby.gw.prior as, e.g., foo = bilby.gw.prior.AlignedSpin()

  • other external modules, e.g., foo = my.module.CustomPrior(...)
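The file format above is line-oriented: one `key = specification` pair per line, with comments and blank lines skipped. A toy parser sketching the first stage of that loading (the real method goes on to evaluate each specification; the file content here is only an example):

```python
prior_file_text = """
# Example prior file, following the format described in the notes
mass = Uniform(minimum=0, maximum=1)
phase = 0.5
"""

# Skip '#' comments and empty lines; split each remaining line into
# a key and a specification string at the first '='.
specs = {}
for line in prior_file_text.splitlines():
    line = line.strip()
    if not line or line.startswith("#"):
        continue
    key, _, value = line.partition("=")
    specs[key.strip()] = value.strip()

assert specs == {"mass": "Uniform(minimum=0, maximum=1)", "phase": "0.5"}
```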

classmethod from_json(filename)[source]

Reads in a prior from a json file

Parameters:
filename: str

Name of the file to be read in

fromkeys(iterable, value=None, /)

Create a new dictionary with keys from iterable and values set to value.

get(key, default=None, /)

Return the value for key if key is in the dictionary, else default.

items() → a set-like object providing a view on D's items
keys() → a set-like object providing a view on D's keys
ln_prob(sample, axis=None)[source]
Parameters:
sample: dict

Dictionary of the samples for which to calculate the log probability

axis: None or int

Axis along which the summation is performed

Returns:
float or ndarray:

Joint log probability of all the individual sample probabilities
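Since the joint log probability is the sum of the per-parameter log probabilities, the computation can be sketched with stdlib tools (the uniform helper and bounds here are hypothetical illustrations):

```python
import math

def uniform_ln_prob(x, minimum, maximum):
    """Log density of a uniform prior: -log(width) inside the support, -inf outside."""
    if minimum <= x <= maximum:
        return -math.log(maximum - minimum)
    return -math.inf

sample = {"a": 0.5, "b": 2.0}
bounds = {"a": (0, 1), "b": (0, 4)}   # hypothetical prior bounds

# The joint log probability is the sum over parameters:
ln_p = sum(uniform_ln_prob(v, *bounds[k]) for k, v in sample.items())
assert ln_p == -math.log(4)
```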

pop(key, default=<unrepresentable>, /)

If the key is not found, return the default if given; otherwise, raise a KeyError.

popitem(/)

Remove and return a (key, value) pair as a 2-tuple.

Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.

prob(sample, **kwargs)[source]
Parameters:
sample: dict

Dictionary of the samples for which to calculate the probability

kwargs:

The keyword arguments are passed directly to np.prod

Returns:
float: Joint probability of all individual sample probabilities
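The pass-through to np.prod reflects that the joint probability is simply the product of the individual probabilities. A stdlib equivalent (the per-parameter densities are made-up numbers for illustration):

```python
import math

sample_probs = {"a": 0.5, "b": 0.25}  # hypothetical per-parameter densities

# Joint probability = product of the individual probabilities:
joint = math.prod(sample_probs.values())
assert joint == 0.125
```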
rescale(keys, theta)[source]

Rescale samples from unit cube to prior

Parameters:
keys: list

List of prior keys to be rescaled

theta: list

List of randomly drawn values on a unit cube associated with the prior keys

Returns:
list: List of floats containing the rescaled sample
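For a uniform prior, rescaling a unit-cube draw u in [0, 1] is the inverse CDF: minimum + u * (maximum - minimum). A sketch of the key-by-key rescaling (the bounds and helper are hypothetical, not bilby API):

```python
def uniform_rescale(u, minimum, maximum):
    """Inverse CDF of a uniform prior on [minimum, maximum]."""
    return minimum + u * (maximum - minimum)

keys = ["a", "b"]
theta = [0.5, 0.1]                     # unit-cube draws, one per key
bounds = {"a": (0, 2), "b": (10, 20)}  # hypothetical prior bounds

rescaled = [uniform_rescale(u, *bounds[k]) for k, u in zip(keys, theta)]
assert rescaled == [1.0, 11.0]
```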
sample(size=None)[source]

Draw samples from the prior set

Parameters:
size: int or tuple of ints, optional

See numpy.random.uniform docs

Returns:
dict: Dictionary of the samples
sample_subset(keys=<list_iterator object>, size=None)[source]

Draw samples from the prior set for parameters which are not a DeltaFunction

Parameters:
keys: list

List of prior keys to draw samples from

size: int or tuple of ints, optional

See numpy.random.uniform docs

Returns:
dict: Dictionary of the drawn samples
sample_subset_constrained_as_array(keys=<list_iterator object>, size=None)[source]

Return an array of samples

Parameters:
keys: list

A list of keys to sample in

size: int

The number of samples to draw

Returns:
array: array_like

An array of shape (len(keys), size) of the samples (ordered by keys)
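The returned shape can be sketched with nested lists: one row per key, one column per draw (the bounds here are hypothetical, and nested lists stand in for the numpy array):

```python
import random

priors = {"a": (0, 1), "b": (-1, 1)}  # hypothetical uniform bounds per key
keys = list(priors)
size = 3

# Build a (len(keys), size) table of draws, ordered by keys:
samples = [
    [random.uniform(*priors[k]) for _ in range(size)]
    for k in keys
]
assert len(samples) == len(keys)
assert all(len(row) == size for row in samples)
```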

setdefault(key, default=None, /)

Insert key with a value of default if key is not in the dictionary.

Return the value for key if key is in the dictionary, else default.

test_has_redundant_keys()[source]

Test whether there are redundant keys in self.

Returns:
bool: Whether there are redundancies or not
test_redundancy(key, disable_logging=False)[source]

Empty redundancy test; should be overridden in subclasses

to_file(outdir, label)[source]

Write the prior distribution to file.

Parameters:
outdir: str

Output directory name

label: str

Output file naming scheme

update([E, ]**F) → None. Update D from dict/iterable E and F.

If E is present and has a .keys() method: for k in E: D[k] = E[k]
If E is present and lacks a .keys() method: for k, v in E: D[k] = v
In either case, this is followed by: for k in F: D[k] = F[k]

values() → an object providing a view on D's values