Data Analysis
Module to run parallel bilby using MPI.
Command line interface for data analysis
usage: parallel_bilby_analysis [-h] [--version] [-n NLIVE] [--dlogz DLOGZ]
[--n-effective N_EFFECTIVE]
[--dynesty-sample DYNESTY_SAMPLE]
[--dynesty-bound DYNESTY_BOUND] [--walks WALKS]
[--proposals PROPOSALS] [--maxmcmc MAXMCMC]
[--nact NACT] [--naccept NACCEPT]
[--min-eff MIN_EFF] [--facc FACC]
[--enlarge ENLARGE]
[--n-check-point N_CHECK_POINT]
[--max-its MAX_ITS]
[--max-run-time MAX_RUN_TIME]
[--fast-mpi FAST_MPI] [--mpi-timing MPI_TIMING]
[--mpi-timing-interval MPI_TIMING_INTERVAL]
[--nestcheck]
[--rejection-sample-posterior REJECTION_SAMPLE_POSTERIOR]
[--bilby-zero-likelihood-mode]
[--sampling-seed SAMPLING_SEED] [-c]
[--no-plot] [--do-not-save-bounds-in-resume]
[--check-point-deltaT CHECK_POINT_DELTAT]
[--rotate-checkpoints] [--outdir OUTDIR]
[--label LABEL] [--result-format RESULT_FORMAT]
data_dump
Positional Arguments
- data_dump
The pickled data dump generated by parallel_bilby_generation; see the example invocation after the Named Arguments list below
Named Arguments
- --version
show program’s version number and exit
- --outdir
Output directory to overwrite the input outdir
- --label
Label to overwrite the input label
- --result-format
Format to save the result
Default: “hdf5”
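As a rough illustration, the analysis is typically launched under MPI with the data dump as the positional argument. The MPI task count and data-dump path below are assumptions; they depend on how the generation step and your cluster are configured.

    # Illustrative launch; adjust the MPI task count and the data-dump path.
    mpirun -n 16 parallel_bilby_analysis outdir/data/my_label_data_dump.pickle \
        --outdir outdir \
        --label my_label \
        --result-format hdf5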
Dynesty Settings
- -n, --nlive
Number of live points
Default: 1000
- --dlogz
Stopping criterion: remaining evidence (default=0.1)
Default: 0.1
- --n-effective
Stopping criterion: effective number of samples (default=inf)
Default: inf
- --dynesty-sample
Dynesty sampling method (default=acceptance-walk). Note that the dynesty rwalk method is replaced by parallel bilby with an optimised version; see the example invocation at the end of this list
Default: “acceptance-walk”
- --dynesty-bound
Dynesty bounding method (default=live)
Default: “live”
- --walks
Minimum number of walks, defaults to 100
Default: 100
- --proposals
The jump proposals to use; the options are ‘diff’ and ‘volumetric’
- --maxmcmc
Maximum number of walks, defaults to 5000
Default: 5000
- --nact
Number of autocorrelation times to take, defaults to 2
Default: 2
- --naccept
The average number of accepted steps per MCMC chain, defaults to 60
Default: 60
- --min-eff
The minimum efficiency at which to switch from uniform sampling.
Default: 10
- --facc
See dynesty.NestedSampler
Default: 0.5
- --enlarge
See dynesty.NestedSampler
Default: 1.5
- --n-check-point
Steps to take before attempting checkpoint
Default: 1000
- --max-its
Maximum number of iterations to sample for (default=1.e10)
Default: 10000000000
- --max-run-time
Maximum time to run for (default=1.e10 s)
Default: 10000000000.0
- --fast-mpi
Fast MPI communication pattern (default=False)
Default: False
- --mpi-timing
Print MPI timing when finished (default=False)
Default: False
- --mpi-timing-interval
Interval to write timing snapshot to disk (default=0 – disabled)
Default: 0
- --nestcheck
Save a ‘nestcheck’ pickle in the outdir (default=False). This pickle stores a nestcheck.data_processing.process_dynesty_run object, which can be used during post processing to compute the implementation and bootstrap errors explained by Higson et al (2018) in “Sampling Errors In Nested Sampling Parameter Estimation”.
Default: False
- --rejection-sample-posterior
Whether to generate the posterior samples by rejection sampling the nested samples or resampling with replacement
Default: True
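For instance, the sampler settings above can be adjusted on the command line as sketched below; the values are illustrative, not recommendations.

    # Illustrative sampler tuning: more live points, a tighter evidence
    # tolerance, and explicit walk settings.
    mpirun -n 16 parallel_bilby_analysis outdir/data/my_label_data_dump.pickle \
        --nlive 2000 \
        --dlogz 0.05 \
        --dynesty-sample acceptance-walk \
        --walks 100 \
        --naccept 60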
Misc. Settings
- --bilby-zero-likelihood-mode
Default: False
- --sampling-seed
Random seed for sampling; the seed is incremented across parallel runs
- -c, --clean
Run clean: ignore any resume files
Default: False
- --no-plot
If true, don’t generate check-point plots
Default: False
- --do-not-save-bounds-in-resume
If true, do not store bounds in the resume file; storing bounds can make resume files large (~GB)
Default: True
- --check-point-deltaT
Write a checkpoint resume file and diagnostic plots every deltaT [s] (default: 1 hour); see the example invocation at the end of this section
Default: 3600
- --rotate-checkpoints
If true, back up the checkpoint before overwriting (backup files end in ‘.bk’)
Default: False
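As a final sketch, the checkpointing and reproducibility options might be combined as follows; the values are illustrative.

    # Illustrative run: checkpoint every 30 minutes, keep .bk backups of the
    # checkpoint files, fix the sampling seed, and ignore any existing resume files.
    mpirun -n 16 parallel_bilby_analysis outdir/data/my_label_data_dump.pickle \
        --check-point-deltaT 1800 \
        --rotate-checkpoints \
        --sampling-seed 1234 \
        --clean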