inspiral_pipe module¶
- class inspiral_pipe.LIGOLWContentHandler(document, start_handlers={})[source]¶
Bases:
LIGOLWContentHandler
- startColumn(parent, attrs)¶
- startStream(parent, attrs, __orig_startStream=<function LIGOLWContentHandler.startStream>)¶
- startTable(parent, attrs, __orig_startTable=<function use_in.<locals>.startTable>)¶
- inspiral_pipe.analysis_segments(analyzable_instruments_set, allsegs, boundary_seg, max_template_length, min_instruments=2)[source]¶
Get a dictionary of the disjoint analysis segments for every combination of two or more detectors.
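The idea behind these disjoint detector-combination segments can be sketched as follows. This is an illustrative re-implementation, not the gstlal code: it models each instrument's analyzable time as a set of integer seconds rather than a GPS segment list, and the function name `combo_segments` is hypothetical.

```python
from itertools import combinations

def combo_segments(analyzable, min_instruments=2):
    """Return {frozenset(ifos): set(times)}, where each time set is the
    epoch during which exactly that detector combination was observing.
    Times are modelled as sets of integer seconds for simplicity; the
    real pipeline operates on segment lists. Illustrative sketch only."""
    ifos = sorted(analyzable)
    out = {}
    for n in range(min_instruments, len(ifos) + 1):
        for combo in combinations(ifos, n):
            # times when every detector in this combination was on
            on = set.intersection(*(analyzable[i] for i in combo))
            # subtract times when any other detector was also on,
            # so the resulting combination segments are disjoint
            for other in set(ifos) - set(combo):
                on -= analyzable[other]
            if on:
                out[frozenset(combo)] = on
    return out
```

For example, with H1 on during 0-9, L1 during 5-14, and V1 during 8-11, the H1+L1 entry covers only 5-7, because 8-9 belongs to the triple-coincident H1+L1+V1 entry.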
- inspiral_pipe.build_bank_groups(cachedict, numbanks=[2], maxjobs=None)[source]¶
Given a dictionary of bank cache files keyed by ifo, e.g. from parse_cache_str(), group the banks into suitably sized chunks for a single SVD bank file according to numbanks. Note that numbanks should be a list; grouping uses the algorithm in the group() function.
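The grouping algorithm referred to above can be illustrated with a minimal sketch. This is an assumption about the behavior implied by the docstring (chunk sizes cycling through the numbanks list), not the gstlal group() implementation:

```python
def group(items, numbanks):
    """Split items into consecutive chunks whose sizes cycle through the
    numbanks list, e.g. numbanks=[2] yields chunks of 2. Illustrative
    sketch of the grouping idea; not the gstlal implementation."""
    chunks, i, k = [], 0, 0
    while i < len(items):
        n = numbanks[k % len(numbanks)]  # chunk size for this round
        chunks.append(items[i:i + n])
        i += n
        k += 1
    return chunks
```

With numbanks=[2], six banks become three SVD groups of two; a list like [2, 3] alternates chunk sizes.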
- inspiral_pipe.calc_rank_pdf_layer(dag, jobs, marg_nodes, options, boundary_seg, instrument_set, with_zero_lag=False)[source]¶
- inspiral_pipe.clean_merger_products_layer(dag, jobs, plotnodes, dbs_to_delete, margfiles_to_delete)[source]¶
clean intermediate merger products
- inspiral_pipe.compute_far_layer(dag, jobs, margnodes, injdbs, noninjdb, final_sqlite_nodes, options, with_zero_lag=False)[source]¶
compute FAPs and FARs
- inspiral_pipe.expected_snr_layer(dag, jobs, ref_psd_parent_nodes, options, num_split_inj_snr_jobs)[source]¶
- inspiral_pipe.final_marginalize_layer(dag, jobs, rankpdf_nodes, rankpdf_zerolag_nodes, options, with_zero_lag=False)[source]¶
- inspiral_pipe.get_threshold_values(template_mchirp_dict, bgbin_indices, svd_bank_strings, options)[source]¶
Calculate the appropriate ht-gate-threshold values according to the scale given
- inspiral_pipe.horizon_dist_layer(dag, jobs, marg_nodes, output_dir, instruments)[source]¶
Calculate the horizon distance from the marginalized diststats.
- inspiral_pipe.injection_template_match_layer(dag, jobs, parent_nodes, options, instruments)[source]¶
- inspiral_pipe.inspiral_layer(dag, jobs, psd_nodes, svd_nodes, segsdict, options, channel_dict, template_mchirp_dict)[source]¶
- inspiral_pipe.likelihood_layer(dag, jobs, marg_nodes, lloid_output, lloid_diststats, options, boundary_seg, instrument_set)[source]¶
- inspiral_pipe.lnlrcdf_signal_layer(dag, jobs, parent_nodes, inj_tmplt_match_nodes, options, boundary_seg, instrument_set)[source]¶
- inspiral_pipe.make_mc_vtplot_layer(dag, jobs, parent_nodes, add_parent_node, options, instrument_set, output_dir, injdbs=None)[source]¶
- inspiral_pipe.marginalize_layer(dag, jobs, svd_nodes, lloid_output, lloid_diststats, options, boundary_seg, instrument_set, model_node, model_file, ref_psd, svd_dtdphi_map, idq_file=None)[source]¶
- inspiral_pipe.mass_model_layer(dag, jobs, parent_nodes, instruments, options, seg, psd)[source]¶
mass model node
- inspiral_pipe.median_psd_layer(dag, jobs, parent_nodes, options, boundary_seg, instruments)[source]¶
- inspiral_pipe.merge_cluster_layer(dag, jobs, parent_nodes, db, db_cache, sqlfile, input_files=None)[source]¶
Merge and cluster results from sqlite databases.
- inspiral_pipe.parse_cache_str(instr)[source]¶
A way to decode a command line option that specifies different bank caches for different detectors, e.g.,
>>> bankcache = parse_cache_str("H1=H1_split_bank.cache,L1=L1_split_bank.cache,V1=V1_split_bank.cache")
>>> bankcache
{'V1': 'V1_split_bank.cache', 'H1': 'H1_split_bank.cache', 'L1': 'L1_split_bank.cache'}
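The documented decoding behavior amounts to splitting on commas and then on the first "=". A minimal re-implementation for illustration (the name `parse_cache_str_sketch` is hypothetical; the real function lives in this module):

```python
def parse_cache_str_sketch(instr):
    """Decode 'H1=H1.cache,L1=L1.cache' into {'H1': 'H1.cache', ...}.
    Minimal sketch of the documented behavior, for illustration only."""
    if instr is None:
        return {}
    # split detector=cache pairs on commas, then key=value on the first '='
    return dict(pair.split("=", 1) for pair in instr.split(","))
```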
- inspiral_pipe.snrchi2_pdf_plot_layer(dag, jobs, marg_nodes, output_dir)[source]¶
create snrchi2 PDF plot for each template bank bin
- inspiral_pipe.sql_cluster_and_merge_layer(dag, jobs, likelihood_nodes, ligolw_add_nodes, options, boundary_seg, instruments, with_zero_lag=False)[source]¶
- inspiral_pipe.summary_page_layer(dag, jobs, plotnodes, options, boundary_seg, injdbs, output_dir)[source]¶
create a summary page
- inspiral_pipe.summary_plot_layer(dag, jobs, farnode, options, injdbs, noninjdb, output_dir)[source]¶