glue.pipeline module

This module contains objects that make it simple for the user to create Python scripts that build Condor DAGs to run code on the LSC Data Grid. A minimal usage sketch is given below, after the license text.

This file is part of the Grid LSC User Environment (GLUE)

GLUE is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.
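
As an overview of how the classes below fit together, here is a minimal sketch of a pipeline script. The executable, file names and options are placeholders, and the Condor log file is assumed to live on a local (non-NFS) file system:

    from glue import pipeline

    # every job in the DAG shares a single Condor log file
    logfile = "example.log"

    dag = pipeline.CondorDAG(logfile)
    dag.set_dag_file("example")              # basename of the DAG file (example.dag)

    job = pipeline.CondorDAGJob("vanilla", "/bin/echo")
    job.set_sub_file("echo.sub")
    job.set_log_file(logfile)
    job.set_stdout_file("echo-$(cluster)-$(process).out")
    job.set_stderr_file("echo-$(cluster)-$(process).err")

    previous = None
    for i in range(3):
        node = job.create_node()
        node.add_var_arg("chunk-%d" % i)     # a different argument for each node
        if previous is not None:
            node.add_parent(previous)        # run the nodes one after another
        dag.add_node(node)
        previous = node

    dag.write_sub_files()                    # writes echo.sub
    dag.write_dag()                          # writes the DAG file
    dag.write_script()                       # optional shell-script version of the workflow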

class glue.pipeline.AnalysisChunk(start, end, trig_start=0, trig_end=0)[source]

Bases: object

An AnalysisChunk is the unit of data that a node works with, usually some subset of a ScienceSegment.

dur()[source]

Returns the length (duration) of the chunk in seconds.

end()[source]

Returns the GPS end time of the chunk.

set_trig_end(end)[source]

Set the last GPS time at which triggers for this chunk should be generated.

set_trig_start(start)[source]

Set the first GPS time at which triggers for this chunk should be generated.

start()[source]

Returns the GPS start time of the chunk.

trig_end()[source]

Return the last GPS time at which triggers for this chunk should be generated.

trig_start()[source]

Return the first GPS time at which triggers for this chunk should be generated.
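
For example, a chunk covering 2048 seconds of data with triggers restricted to a slightly smaller window (all GPS times are illustrative):

    from glue import pipeline

    chunk = pipeline.AnalysisChunk(1000000000, 1000002048,
                                   trig_start=1000000064, trig_end=1000001984)

    print(chunk.start(), chunk.end(), chunk.dur())   # 1000000000 1000002048 2048
    print(chunk.trig_start(), chunk.trig_end())      # 1000000064 1000001984
    chunk.set_trig_end(1000002000)                   # widen the trigger window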

class glue.pipeline.AnalysisJob(cp)[source]

Bases: object

Describes a generic analysis job that filters LIGO data as configured by an ini file.

channel()[source]

Returns the name of the channel that this job is filtering. Note that channel is defined to be IFO independent, so this may be LSC-AS_Q or IOO-MC_F. The IFO is set on a per node basis, not a per job basis.

get_config(sec, opt)[source]

Get the configuration variable in a particular section of this job's ini file. @param sec: ini file section. @param opt: option from section sec.

set_channel(channel)[source]

Set the name of the channel that this job is filtering. This will overwrite the value obtained at initialization.
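
A short sketch of how an AnalysisJob reads its configuration. It assumes the channel is taken from the [input] section of the ini file, as the stock jobs do; the other section and option names are invented:

    from glue import pipeline

    cp = pipeline.DeepCopyableConfigParser()
    cp.add_section("input")
    cp.set("input", "channel", "LSC-AS_Q")         # IFO-independent channel name
    cp.add_section("example")
    cp.set("example", "threshold", "5.5")

    job = pipeline.AnalysisJob(cp)
    print(job.channel())                           # channel obtained at initialization
    print(job.get_config("example", "threshold"))  # 5.5
    job.set_channel("LSC-DARM_ERR")                # overwrite the initial value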

class glue.pipeline.AnalysisNode[source]

Bases: object

Contains the methods that allow an object to be built to analyse LIGO data in a Condor DAG.

calibration()[source]

Set the path to the calibration cache file for the given IFO. During S2 the Hanford 2km IFO had two calibration epochs, so if the start time is during S2, we use the correct cache file.

calibration_cache_path()[source]

Determine the path to the correct calibration cache file to use.

get_calibration()[source]

Return the calibration cache file to be used by the DAG.

get_data_end()[source]

Get the GPS end time of the data needed by this node.

get_data_start()[source]

Get the GPS start time of the data needed by this node.

get_end()[source]

Get the GPS end time of the node.

get_ifo()[source]

Returns the two letter IFO code for this node.

get_ifo_tag()[source]

Returns the IFO tag string

get_input()[source]

Get the file that will be passed as input.

get_output()[source]

Get the file that will be passed as output.

get_pad_data()[source]

Get the amount of data padding (in seconds) used by this node.

get_start()[source]

Get the GPS start time of the node.

get_trig_end()[source]

Get the trig end time of the node.

get_trig_start()[source]

Get the trig start time of the node.

get_user_tag()[source]

Returns the usertag string

set_cache(filename)[source]

Set the LAL frame cache to use. The frame cache is passed to the job with the --frame-cache argument. @param filename: frame cache file to use.

set_data_end(time)[source]

Set the GPS end time of the data needed by this analysis node. @param time: GPS end time of job.

set_data_start(time)[source]

Set the GPS start time of the data needed by this analysis node. @param time: GPS start time of job.

set_end(time, pass_to_command_line=True)[source]

Set the GPS end time of the analysis node by setting a --gps-end-time option to the node when it is executed. @param time: GPS end time of job. @bool pass_to_command_line: add gps-end-time as variable option.

set_ifo(ifo)[source]

Set the ifo name to analyze. If the channel name for the job is defined, then the name of the ifo is prepended to the channel name obtained from the job configuration file and passed with a --channel-name option. @param ifo: two letter ifo code (e.g. L1, H1 or H2).

set_ifo_tag(ifo_tag, pass_to_command_line=True)[source]

Set the ifo tag that is passed to the analysis code. @param ifo_tag: a string to identify one or more IFOs @bool pass_to_command_line: add ifo-tag as a variable option.

set_input(filename, pass_to_command_line=True)[source]

Add an input to the node by adding a --input option. @param filename: option argument to pass as input. @bool pass_to_command_line: add input as a variable option.

set_output(filename, pass_to_command_line=True)[source]

Add an output to the node by adding a --output option. @param filename: option argument to pass as output. @bool pass_to_command_line: add output as a variable option.

set_pad_data(pad)[source]

Set the amount of data padding (in seconds) for this analysis node. @param pad: number of seconds of padding.

set_start(time, pass_to_command_line=True)[source]

Set the GPS start time of the analysis node by setting a --gps-start-time option to the node when it is executed. @param time: GPS start time of job. @bool pass_to_command_line: add gps-start-time as variable option.

set_trig_end(time, pass_to_command_line=True)[source]

Set the trig end time of the analysis node by setting a --trig-end-time option to the node when it is executed. @param time: trig end time of job. @bool pass_to_command_line: add trig-end-time as a variable option.

set_trig_start(time, pass_to_command_line=True)[source]

Set the trig start time of the analysis node by setting a --trig-start-time option to the node when it is executed. @param time: trig start time of job. @bool pass_to_command_line: add trig-start-time as a variable option.

set_user_tag(usertag, pass_to_command_line=True)[source]

Set the user tag that is passed to the analysis code. @param usertag: the user tag to identify the job. @bool pass_to_command_line: add user-tag as a variable option.
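
AnalysisNode is a mix-in: it supplies the GPS time and tag bookkeeping and is normally combined with CondorDAGNode, as the LSCDataFindNode and other classes below do. A sketch of that pattern, with a placeholder executable and invented class names:

    from glue import pipeline

    class ExampleJob(pipeline.CondorDAGJob, pipeline.AnalysisJob):
        def __init__(self, cp):
            pipeline.CondorDAGJob.__init__(self, "vanilla", "/usr/bin/env")
            pipeline.AnalysisJob.__init__(self, cp)
            self.set_sub_file("example.sub")
            self.set_log_file("pipeline.log")
            self.set_stdout_file("example-$(cluster)-$(process).out")
            self.set_stderr_file("example-$(cluster)-$(process).err")

    class ExampleNode(pipeline.CondorDAGNode, pipeline.AnalysisNode):
        def __init__(self, job):
            pipeline.CondorDAGNode.__init__(self, job)
            pipeline.AnalysisNode.__init__(self)

    cp = pipeline.DeepCopyableConfigParser()
    cp.add_section("input")
    cp.set("input", "channel", "LSC-AS_Q")

    node = ExampleNode(ExampleJob(cp))
    node.set_start(1000000000)     # adds --gps-start-time 1000000000 to this node
    node.set_end(1000002048)       # adds --gps-end-time 1000002048
    node.set_ifo("H1")             # adds --channel-name H1:LSC-AS_Q
    node.set_user_tag("EXAMPLE")   # adds --user-tag EXAMPLE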

class glue.pipeline.CondorDAG(log)[source]

Bases: object

A CondorDAG is a Condor Directed Acyclic Graph that describes a collection of Condor jobs and the order in which to run them. All Condor jobs in the DAG must write their Condor logs to the same file. NOTE: The log file must not be on an NFS-mounted file system, as the Condor jobs must be able to get an exclusive file lock on the log file.

add_maxjobs_category(categoryName, maxJobsNum)[source]

Add a category to this DAG called categoryName with a maxjobs of maxJobsNum. @param categoryName: the name of the category. @param maxJobsNum: the maximum number of jobs in this category to run at once.

add_node(node)[source]

Add a CondorDAGNode to this DAG. The CondorJob that the node uses is also added to the list of Condor jobs in the DAG so that a list of the submit files needed by the DAG can be maintained. Each unique CondorJob will be added once to prevent duplicate submit files being written. @param node: CondorDAGNode to add to the CondorDAG.

get_dag_file()[source]

Return the path to the DAG file.

get_jobs()[source]

Return a list containing all the jobs in the DAG

get_maxjobs_categories()[source]

Return an array of tuples containing (categoryName,maxJobsNum)

get_nodes()[source]

Return a list containing all the nodes in the DAG

set_dag_file(path)[source]

Set the name of the file into which the DAG is written. @param path: path to DAG file.

set_integer_node_names()[source]

Use integer node names for the DAG

write_concrete_dag()[source]

Write all the nodes in the DAG to the DAG file.

write_dag()[source]

Write a dag.

write_maxjobs(fh, category)[source]

Write the DAG entry for this category’s maxjobs to the DAG file descriptor. @param fh: descriptor of open DAG file. @param category: tuple containing the type of jobs to set a maxjobs limit for and the maximum number of jobs of that type to run at once.

write_script()[source]

Write the workflow to a script (.sh instead of .dag).

Assuming that parents were added to the DAG before their children, dependencies should be handled correctly.

write_sub_files()[source]

Write all the submit files used by the dag to disk. Each submit file is written to the file name set in the CondorJob.
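
A sketch of throttling a class of nodes with a maxjobs category before writing the DAG out; the executable, file names and limits are placeholders:

    from glue import pipeline

    dag = pipeline.CondorDAG("pipeline.log")
    dag.set_dag_file("throttled")
    dag.add_maxjobs_category("heavy", 5)    # run at most 5 "heavy" nodes at once

    job = pipeline.CondorDAGJob("vanilla", "/bin/sleep")
    job.set_sub_file("sleep.sub")
    job.set_log_file("pipeline.log")
    job.set_stdout_file("sleep-$(cluster)-$(process).out")
    job.set_stderr_file("sleep-$(cluster)-$(process).err")

    for i in range(20):
        node = job.create_node()
        node.add_var_arg("60")
        node.set_category("heavy")          # ties the node to the maxjobs category
        dag.add_node(node)

    dag.write_sub_files()
    dag.write_dag()                         # the DAG file now carries the maxjobs limit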

exception glue.pipeline.CondorDAGError(args=None)[source]

Bases: CondorError

class glue.pipeline.CondorDAGJob(universe, executable)[source]

Bases: CondorJob

A Condor DAG job never notifies the user on completion and can have variable options that are set for a particular node in the DAG. Inherits methods from a CondorJob.

add_var_arg(arg_index, quote=False)[source]

Add a command to the submit file to allow variable (macro) arguments to be passed to the executable.

add_var_condor_cmd(command)[source]

Add a condor command to the submit file that allows variable (macro) arguments to be passed to the executable.

add_var_opt(opt, short=False)[source]

Add a variable (or macro) option to the condor job. The option is added to the submit file and a different argument to the option can be set for each node in the DAG. @param opt: name of option to add.

create_node()[source]

Create a condor node from this job. This provides a basic interface to the CondorDAGNode class. Most jobs in a workflow will subclass the CondorDAGNode class and override this to give more details when initializing the node. However, this will work fine for jobs with very simple input/output.
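
The usual pattern is to subclass both the job and the node and override create_node() so that it returns the richer node class. A sketch with invented class names and a placeholder executable:

    from glue import pipeline

    class FilterNode(pipeline.CondorDAGNode):
        def __init__(self, job):
            pipeline.CondorDAGNode.__init__(self, job)

    class FilterJob(pipeline.CondorDAGJob):
        def __init__(self):
            pipeline.CondorDAGJob.__init__(self, "vanilla", "/usr/bin/env")
            self.set_sub_file("filter.sub")
            self.set_log_file("pipeline.log")
            self.set_stdout_file("filter-$(cluster)-$(process).out")
            self.set_stderr_file("filter-$(cluster)-$(process).err")
            self.add_condor_cmd("getenv", "True")

        def create_node(self):
            return FilterNode(self)         # hand back the subclassed node

    job = FilterJob()
    node = job.create_node()
    node.add_var_opt("seed", "1234")        # becomes --seed 1234 for this node only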

exception glue.pipeline.CondorDAGJobError(args=None)[source]

Bases: CondorError

class glue.pipeline.CondorDAGManJob(dag, dir=None)[source]

Bases: object

Condor DAGMan job class. Appropriate for setting up DAGs to run within a DAG.

create_node()[source]

Create a condor node from this job. This provides a basic interface to the CondorDAGManNode class. Most jobs in a workflow will subclass the CondorDAGManNode class and override this to give more details when initializing the node. However, this will work fine for jobs with very simple input/output.

get_dag()[source]

Return the name of any associated dag file

get_dag_directory()[source]

Get the directory where the dag will be run

get_sub_file()[source]

Return the name of the dag as the submit file name for the SUBDAG EXTERNAL command in the uber-dag

set_dag_directory(dir)[source]

Set the directory where the dag will be run @param dir: the name of the directory where the dag will be run

set_notification(value)[source]

Set the email address to send notification to. @param value: email address or never for no notification.

write_sub_file()[source]

Do nothing, as there is no need for a sub file with the SUBDAG EXTERNAL command in the uber-dag.
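
A sketch of nesting one workflow inside another via SUBDAG EXTERNAL; inner.dag is assumed to exist already (for example, written by another pipeline script):

    from glue import pipeline

    subdag_job = pipeline.CondorDAGManJob("inner.dag", dir="inner")
    subdag_node = pipeline.CondorDAGManNode(subdag_job)

    uberdag = pipeline.CondorDAG("uber.log")
    uberdag.set_dag_file("uber")
    uberdag.add_node(subdag_node)

    uberdag.write_sub_files()   # a no-op for the DAGMan job itself
    uberdag.write_dag()         # the uber-dag gains a SUBDAG EXTERNAL entry for inner.dag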

class glue.pipeline.CondorDAGManNode(job)[source]

Bases: CondorDAGNode

Condor DAGMan node class. Appropriate for setting up DAGs to run within a DAG. Adds the user-tag functionality to condor_dagman processes running in the DAG. May also be used to extend dagman-node specific functionality.

add_maxjobs_category(categoryName, maxJobsNum)[source]

Add a category to this DAG called categoryName with a maxjobs of maxJobsNum. @param categoryName: the name of the category. @param maxJobsNum: the maximum number of jobs in this category to run at once.

get_maxjobs_categories()[source]

Return an array of tuples containing (categoryName,maxJobsNum)

get_user_tag()[source]

Returns the usertag string

set_user_tag(usertag)[source]

Set the user tag that is passed to the analysis code. @param usertag: the user tag to identify the job.

class glue.pipeline.CondorDAGNode(job)[source]

Bases: object

A CondorDAGNode represents a node in the DAG. It corresponds to a particular condor job (and so a particular submit file). If the job has variable (macro) options, they can be set here so that each node executes with the correct options.

add_checkpoint_file(filename)[source]

Add filename as a checkpoint file for this DAG node. @param filename: checkpoint filename to add.

add_checkpoint_macro(filename)[source]
add_file_arg(filename)[source]

Add a variable (or macro) file name argument to the condor job. The argument is added to the submit file and a different value of the argument can be set for each node in the DAG. The file name is also added to the list of input files. @param filename: file name to add as an argument.

add_file_opt(opt, filename, file_is_output_file=False)[source]

Add a variable (macro) option for this node. If the option specified does not exist in the CondorJob, it is added so the submit file will be correct when written. The value of the option is also added to the list of input files. @param opt: option name. @param filename: value of the option for this node in the DAG. @param file_is_output_file: a boolean; if True the file will be treated as an output file instead of an input file. The default is to have it be an input.

add_input_file(filename)[source]

Add filename as a necessary input file for this DAG node.

@param filename: input filename to add

add_input_macro(filename)[source]

Add a variable (macro) for storing the input files associated with this node. @param filename: filename of input file

add_io_macro(io, filename)[source]

Add a variable (macro) for storing the input/output files associated with this node. @param io: macroinput or macrooutput @param filename: filename of input/output file

add_macro(name, value)[source]

Add a variable (macro) for this node. This can be different for each node in the DAG, even if they use the same CondorJob. Within the CondorJob, the value of the macro can be referenced as ‘$(name)’ – for instance, to define a unique output or error file for each node. @param name: macro name. @param value: value of the macro for this node in the DAG

add_output_file(filename)[source]

Add filename as an output file for this DAG node.

@param filename: output filename to add

add_output_macro(filename)[source]

Add a variable (macro) for storing the output files associated with this node. @param filename: filename of output file

add_parent(node)[source]

Add a parent to this node. This node will not be executed until the parent node has run successfully. @param node: CondorDAGNode to add as a parent.

add_post_script_arg(arg)[source]

Adds an argument to the post script that is executed after the DAG node is run.

add_pre_script_arg(arg)[source]

Adds an argument to the pre script that is executed before the DAG node is run.

add_var_arg(arg, quote=False)[source]

Add a variable (or macro) argument to the condor job. The argument is added to the submit file and a different value of the argument can be set for each node in the DAG. @param arg: name of the argument to add.

add_var_condor_cmd(command, value)[source]

Add a variable (macro) condor command for this node. If the command specified does not exist in the CondorJob, it is added so the submit file will be correct. PLEASE NOTE: as with other add_var commands, the variable must be set for all nodes that use the CondorJob instance. @param command: command name. @param value: value of the command for this node in the DAG.

add_var_opt(opt, value, short=False)[source]

Add a variable (macro) option for this node. If the option specified does not exist in the CondorJob, it is added so the submit file will be correct when written. @param opt: option name. @param value: value of the option for this node in the DAG.

finalize()[source]

The finalize method of a node is called before the node is finally added to the DAG and can be overridden to do any last minute clean up (such as setting extra command line arguments)

get_args()[source]

Return the arguments for this node. Note that this returns only the arguments for this instance of the node and not those associated with the underlying job template.

get_category()[source]

Get the category for this node in the DAG.

get_checkpoint_files()[source]

Return a list of checkpoint files for this DAG node and its job.

get_cmd_line()[source]

Return the full command line that will be used when this node is run by DAGman.

get_cmd_tuple_list()[source]

Return a list of tuples containing the command line arguments.

get_input_files()[source]

Return list of input files for this DAG node and its job.

get_name()[source]

Get the name for this node in the DAG.

get_opts()[source]

Return the opts for this node. Note that this returns only the options for this instance of the node and not those associated with the underlying job template.

get_output_files()[source]

Return list of output files for this DAG node and its job.

get_post_script()[source]

Returns the name of the post script that is executed after the DAG node is run.

get_post_script_arg()[source]

Returns an array of arguments to the post script that is executed after the DAG node is run.

get_priority()[source]

Get the priority for this node in the DAG.

get_retry()[source]

Return the number of times that this node in the DAG should retry.

job()[source]

Return the CondorJob that this node is associated with.

set_category(category)[source]

Set the category for this node in the DAG.

set_log_file(log)[source]

Set the Condor log file to be used by this CondorJob. @param log: path of Condor log file.

set_name(name)[source]

Set the name for this node in the DAG.

set_post_script(script)[source]

Sets the name of the post script that is executed after the DAG node is run. @param script: path to script.

set_pre_script(script)[source]

Sets the name of the pre script that is executed before the DAG node is run. @param script: path to script

set_priority(priority)[source]

Set the priority for this node in the DAG.

set_retry(retry)[source]

Set the number of times that this node in the DAG should retry. @param retry: number of times to retry node.

write_category(fh)[source]

Write the DAG entry for this node’s category to the DAG file descriptor. @param fh: descriptor of open DAG file.

write_input_files(fh)[source]

Write as a comment into the DAG file the list of input files for this DAG node.

@param fh: descriptor of open DAG file.

write_job(fh)[source]

Write the DAG entry for this node’s job to the DAG file descriptor. @param fh: descriptor of open DAG file.

write_output_files(fh)[source]

Write as a comment into the DAG file the list of output files for this DAG node.

@param fh: descriptor of open DAG file.

write_parents(fh)[source]

Write the parent/child relations for this job to the DAG file descriptor. @param fh: descriptor of open DAG file.

write_post_script(fh)[source]

Write the post script for the job, if there is one. @param fh: descriptor of open DAG file.

write_pre_script(fh)[source]

Write the pre script for the job, if there is one. @param fh: descriptor of open DAG file.

write_priority(fh)[source]

Write the DAG entry for this node’s priority to the DAG file descriptor. @param fh: descriptor of open DAG file.

write_vars(fh)[source]

Write the variable (macro) options and arguments to the DAG file descriptor. @param fh: descriptor of open DAG file.
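
A sketch of per-node variable options and dependencies. Note that every node built from the same CondorJob should define the same set of variable options; the executable and file names are placeholders:

    from glue import pipeline

    job = pipeline.CondorDAGJob("vanilla", "/usr/bin/env")
    job.set_sub_file("stage.sub")
    job.set_log_file("pipeline.log")
    job.set_stdout_file("stage-$(cluster)-$(process).out")
    job.set_stderr_file("stage-$(cluster)-$(process).err")

    first = job.create_node()
    first.add_file_opt("input", "raw.xml")
    first.add_file_opt("output", "stage1.xml", file_is_output_file=True)

    second = job.create_node()
    second.add_file_opt("input", "stage1.xml")
    second.add_file_opt("output", "stage2.xml", file_is_output_file=True)
    second.add_parent(first)     # second runs only after first succeeds
    second.set_retry(2)          # retry up to twice on failure
    second.set_priority(10)
    second.set_category("stage")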

exception glue.pipeline.CondorDAGNodeError(args=None)[source]

Bases: CondorError

exception glue.pipeline.CondorError(args=None)[source]

Bases: Exception

Error thrown by Condor Jobs

class glue.pipeline.CondorJob(universe, executable, queue)[source]

Bases: object

Generic condor job class. Provides methods to set the options in the condor submit file for a particular executable

add_arg(arg)[source]

Add an argument to the executable. Arguments are appended after any options and their order is guaranteed. @param arg: argument to add.

add_checkpoint_file(filename)[source]

Add filename as a checkpoint file for this DAG job.

add_condor_cmd(cmd, value)[source]

Add a Condor command to the submit file (e.g. a ClassAd attribute or environment setting). @param cmd: Condor command directive. @param value: value for command.

add_file_arg(filename)[source]

Add a file argument to the executable. Arguments are appended after any options and their order is guaranteed. Also adds the file name to the list of required input data for this job. @param filename: file to add as argument.

add_file_opt(opt, filename)[source]

Add a command line option to the executable. The order that the arguments will be appended to the command line is not guaranteed, but they will always be added before any command line arguments. The name of the option is prefixed with double hyphen and the program is expected to parse it with getopt_long(). @param opt: command line option to add. @param value: value to pass to the option (None for no argument).

add_ini_opts(cp, section)[source]

Parse command line options from a given section in an ini file and pass to the executable. @param cp: ConfigParser object pointing to the ini file. @param section: section of the ini file to add to the options.

add_input_file(filename)[source]

Add filename as a necessary input file for this DAG node.

@param filename: input filename to add

add_opt(opt, value)[source]

Add a command line option to the executable. The order that the arguments will be appended to the command line is not guaranteed, but they will always be added before any command line arguments. The name of the option is prefixed with double hyphen and the program is expected to parse it with getopt_long(). @param opt: command line option to add. @param value: value to pass to the option (None for no argument).

add_output_file(filename)[source]

Add filename as an output file for this DAG node.

@param filename: output filename to add

add_short_opt(opt, value)[source]

Add a command line option to the executable. The order that the arguments will be appended to the command line is not guaranteed, but they will always be added before any command line arguments. The name of the option is prefixed with single hyphen and the program is expected to parse it with getopt() or getopt_long() (if a single character option), or getopt_long_only() (if multiple characters). Long and (single-character) short options may be mixed if the executable permits this. @param opt: command line option to add. @param value: value to pass to the option (None for no argument).

get_args()[source]

Return the list of arguments that are to be passed to the executable.

get_checkpoint_files()[source]

Return a list of checkpoint files for this DAG node

get_condor_cmds()[source]

Return the dictionary of condor keywords to add to the job

get_executable()[source]

Return the name of the executable for this job.

get_grid_scheduler()[source]

Return the grid scheduler.

get_grid_server()[source]

Return the grid server on which the job will run.

get_grid_type()[source]

Return the grid type of the job.

get_input_files()[source]

Return list of input files for this DAG node.

get_opt(opt)[source]

Returns the value associated with the given command line option. Returns None if the option does not exist in the options list. @param opt: command line option

get_opts()[source]

Return the dictionary of opts for the job.

get_output_files()[source]

Return list of output files for this DAG node.

get_short_opts()[source]

Return the dictionary of short options for the job.

get_stderr_file()[source]

Get the file to which Condor directs the stderr of the job.

get_stdin_file()[source]

Get the file from which Condor directs the stdin of the job.

get_stdout_file()[source]

Get the file to which Condor directs the stdout of the job.

get_sub_file()[source]

Get the name of the file which the Condor submit file will be written to when write_sub_file() is called.

get_universe()[source]

Return the condor universe that the job will run in.

set_executable(executable)[source]

Set the name of the executable for this job.

set_grid_scheduler(grid_scheduler)[source]

Set the grid scheduler. @param grid_scheduler: grid scheduler on which to run.

set_grid_server(grid_server)[source]

Set the grid server on which to run the job. @param grid_server: grid server on which to run.

set_grid_type(grid_type)[source]

Set the type of grid resource for the job. @param grid_type: type of grid resource.

set_log_file(path)[source]

Set the Condor log file. @param path: path to log file.

set_notification(value)[source]

Set the email address to send notification to. @param value: email address or never for no notification.

set_stderr_file(path)[source]

Set the file to which Condor directs the stderr of the job. @param path: path to stderr file.

set_stdin_file(path)[source]

Set the file from which Condor directs the stdin of the job. @param path: path to stdin file.

set_stdout_file(path)[source]

Set the file to which Condor directs the stdout of the job. @param path: path to stdout file.

set_sub_file(path)[source]

Set the name of the file to write the Condor submit file to when write_sub_file() is called. @param path: path to submit file.

set_universe(universe)[source]

Set the condor universe for the job to run in. @param universe: the condor universe to run the job in.

write_sub_file()[source]

Write a submit file for this Condor job.
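
A sketch of building a stand-alone submit file; the executable, options and file names are placeholders:

    from glue import pipeline

    job = pipeline.CondorJob("vanilla", "/usr/bin/env", 1)   # universe, executable, queue
    job.set_sub_file("env.sub")
    job.set_log_file("env.log")
    job.set_stdout_file("env-$(cluster)-$(process).out")
    job.set_stderr_file("env-$(cluster)-$(process).err")

    job.add_condor_cmd("getenv", "True")          # extra submit file directive
    job.add_opt("gps-start-time", "1000000000")   # written as --gps-start-time 1000000000
    job.add_arg("extra-argument")                 # appended after the options

    job.write_sub_file()                          # writes env.sub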

exception glue.pipeline.CondorJobError(args=None)[source]

Bases: CondorError

exception glue.pipeline.CondorSubmitError(args=None)[source]

Bases: CondorError

class glue.pipeline.DeepCopyableConfigParser(defaults=None, dict_type=<class 'dict'>, allow_no_value=False, *, delimiters=('=', ':'), comment_prefixes=('#', ';'), inline_comment_prefixes=None, strict=True, empty_lines_in_values=True, default_section='DEFAULT', interpolation=<object object>, converters=<object object>)[source]

Bases: ConfigParser

The standard SafeConfigParser no longer supports deepcopy() as of Python 2.7 (see http://bugs.python.org/issue16058). This subclass restores that functionality.
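
A quick illustration of the restored behaviour:

    import copy

    from glue import pipeline

    cp = pipeline.DeepCopyableConfigParser()
    cp.add_section("condor")
    cp.set("condor", "universe", "vanilla")

    cp_copy = copy.deepcopy(cp)               # works with this subclass
    cp_copy.set("condor", "universe", "local")

    print(cp.get("condor", "universe"))       # vanilla: the original is untouched
    print(cp_copy.get("condor", "universe"))  # local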

class glue.pipeline.LSCDataFindJob(cache_dir, log_dir, config_file, lsync_cache_file=None, lsync_type_regex=None)[source]

Bases: CondorDAGJob, AnalysisJob

An LSCdataFind job used to locate data. The static options are read from the section [datafind] in the ini file. The stdout from LSCdataFind contains the paths to the frame files and is directed to a file in the cache directory named by site and GPS start and end times. The stderr is directed to the logs directory. The job always runs in the scheduler universe. The path to the executable is determined from the ini file.

get_cache_dir()[source]

Returns the directory that the cache files are written to.

get_config_file()[source]

Return the configuration file object.

lsync_cache()[source]
class glue.pipeline.LSCDataFindNode(job)[source]

Bases: CondorDAGNode, AnalysisNode

A DataFindNode runs an instance of LSCdataFind in a Condor DAG.

get_end()[source]

Return the end time of the datafind query.

get_observatory()[source]

Return the observatory for the datafind query.

get_output()[source]

Return the output file, i.e. the file containing the frame cache data.

get_output_cache()[source]
get_start()[source]

Return the start time of the datafind query

get_type()[source]

Gets the frame type that we are querying.

set_end(time)[source]

Set the end time of the datafind query. @param time: GPS end time of query.

set_observatory(obs)[source]

Set the IFO to retrieve data for. Since the data from both Hanford interferometers is stored in the same frame file, this takes the first letter of the IFO (e.g. L or H) and passes it to the --observatory option of LSCdataFind. @param obs: IFO to obtain data for.

set_start(time, pad=None)[source]

Set the start time of the datafind query. @param time: GPS start time of query.

set_type(type)[source]

Sets the frame type that we are querying.
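
A sketch of setting up a datafind query. The executable path, the [condor] option name, the frame type and the contents of the [datafind] section are assumptions about the ini file this job expects:

    from glue import pipeline

    cp = pipeline.DeepCopyableConfigParser()
    cp.add_section("condor")
    cp.set("condor", "datafind", "/usr/bin/gw_data_find")  # assumed executable path
    cp.add_section("datafind")
    cp.set("datafind", "url-type", "file")                 # illustrative static option

    df_job = pipeline.LSCDataFindJob("cache", "logs", cp)
    df_job.set_log_file("pipeline.log")

    df_node = pipeline.LSCDataFindNode(df_job)
    df_node.set_observatory("H")          # first letter of the IFO
    df_node.set_type("H1_HOFT_C00")       # illustrative frame type
    df_node.set_start(1000000000)
    df_node.set_end(1000002048)

    print(df_node.get_output())           # path of the frame cache file in the cache directory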

class glue.pipeline.LigolwAddJob(log_dir, cp)[source]

Bases: CondorDAGJob, AnalysisJob

A ligolw_add job can be used to concatenate several LIGO light-weight XML files.

class glue.pipeline.LigolwAddNode(job)[source]

Bases: CondorDAGNode, AnalysisNode

Runs an instance of ligolw_add in a Condor DAG.

class glue.pipeline.LigolwCutJob(log_dir, cp)[source]

Bases: CondorDAGJob, AnalysisJob

A ligolw_cut job can be used to remove parts of a LIGO light-weight XML file.

class glue.pipeline.LigolwCutNode(job)[source]

Bases: CondorDAGNode, AnalysisNode

Runs an instance of ligolw_cut in a Condor DAG.

class glue.pipeline.LigolwSqliteJob(cp)[source]

Bases: SqliteJob

A LigolwSqlite job. The static options are read from the section [ligolw_sqlite] in the ini file.

set_replace()[source]

Sets the --replace option. This will cause the job to overwrite existing databases rather than add to them.

class glue.pipeline.LigolwSqliteNode(job)[source]

Bases: SqliteNode

A LigolwSqlite node.

get_input_cache()[source]

Gets input cache.

get_output()[source]

Override standard get_output to return xml-file if xml-file is specified. Otherwise, will return database.

set_input_cache(input_cache)[source]

Sets input cache.

set_xml_input(xml_file)[source]

Sets xml input file instead of cache

set_xml_output(xml_file)[source]

Tell ligolw_sqlite to dump the contents of the database to a file.
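
A sketch of loading a cache of XML files into an SQLite database. The executable path and option values are placeholders, and the ini file is assumed to provide the [condor] and [ligolw_sqlite] sections this job reads:

    from glue import pipeline

    cp = pipeline.DeepCopyableConfigParser()
    cp.add_section("condor")
    cp.set("condor", "ligolw_sqlite", "/usr/bin/ligolw_sqlite")
    cp.add_section("ligolw_sqlite")

    job = pipeline.LigolwSqliteJob(cp)
    job.set_log_file("pipeline.log")

    node = pipeline.LigolwSqliteNode(job)
    node.set_input_cache("triggers.cache")
    node.set_database("triggers.sqlite")
    node.set_tmp_space("/tmp")            # scratch space on a local disk
    print(node.get_output())              # the database, since no xml output was set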

class glue.pipeline.LsyncCache(path)[source]

Bases: object

get_lfns(site, frameType, gpsStart, gpsEnd)[source]
group(lst, n)[source]

Group an iterable into an n-tuples iterable. Incomplete tuples are discarded

parse(type_regex=None)[source]

Each line of the frame cache file is like the following:

/frames/E13/LHO/frames/hoftMon_H1/H-H1_DMT_C00_L2-9246,H,H1_DMT_C00_L2,1,16 1240664820 6231 {924600000 924646720 924646784 924647472 924647712 924700000}

The description is as follows:

1.1) Directory path of files
1.2) Site
1.3) Type
1.4) Number of frames in the files (assumed to be 1)
1.5) Duration of the frame files.

2) UNIX timestamp for directory modification time.

3) Number of files that match the above pattern in the directory.

4) List of time ranges or segments [start, stop)

We store the cache for each site and frameType combination as a dictionary where the keys are (directory, duration) tuples and the values are segment lists.

Since the cache file is already coalesced we do not have to call the coalesce method on the segment lists.

class glue.pipeline.NoopJob(log_dir, cp)[source]

Bases: CondorDAGJob, AnalysisJob

A Noop Job does nothing.

class glue.pipeline.NoopNode(job)[source]

Bases: CondorDAGNode, AnalysisNode

Run a noop job in a Condor DAG.

class glue.pipeline.ScienceData[source]

Bases: object

An object that can contain all the science data used in an analysis. Can contain multiple ScienceSegments and has a method to generate these from a text file produced by the LIGOtools segwizard program.

append_from_tuple(seg_tuple)[source]
coalesce()[source]

Coalesces any adjacent ScienceSegments. Returns the number of ScienceSegments in the coalesced list.

intersect_3(second, third)[source]

Intersection routine for three inputs. Built out of the intersect, coalesce and play routines

intersect_4(second, third, fourth)[source]

Intersection routine for four inputs.

intersection(other)[source]

Replaces the ScienceSegments contained in this instance of ScienceData with the intersection of those in the instance other. Returns the number of segments in the intersection. @param other: ScienceData to use to generate the intersection

invert()[source]

Inverts the ScienceSegments in the class (i.e. set NOT). Returns the number of ScienceSegments after inversion.

make_chunks(length, overlap=0, play=0, sl=0, excl_play=0, pad_data=0)[source]

Divide each ScienceSegment contained in this object into AnalysisChunks. @param length: length of chunk in seconds. @param overlap: overlap between segments. @param play: if true, only generate chunks that overlap with S2 playground data. @param sl: slide by sl seconds before determining playground data. @param excl_play: exclude the first excl_play second from the start and end of the chunk when computing if the chunk overlaps with playground.

make_chunks_from_unused(length, trig_overlap, play=0, min_length=0, sl=0, excl_play=0, pad_data=0)[source]

Create an extra chunk that uses up the unused data in the science segment. @param length: length of chunk in seconds. @param trig_overlap: length of time to start generating triggers before the start of the unused data. @param play:

  • 1 : only generate chunks that overlap with S2 playground data.

  • 2 : as 1, plus compute trig start and end times to coincide with the start/end of the playground.

@param min_length: the unused data must be greater than min_length to make a chunk. @param sl: slide by sl seconds before determining playground data. @param excl_play: exclude the first excl_play second from the start and end of the chunk when computing if the chunk overlaps with playground. @param pad_data: exclude the first and last pad_data seconds of the segment when generating chunks

make_optimised_chunks(min_length, max_length, pad_data=0)[source]

Splits ScienceSegments up into chunks of a given maximum length. The lengths of the last two chunks are chosen so that the data utilisation is optimised. @param min_length: minimum chunk length. @param max_length: maximum chunk length. @param pad_data: exclude the first and last pad_data seconds of the segment when generating chunks.

make_short_chunks_from_unused(min_length, overlap=0, play=0, sl=0, excl_play=0)[source]

Create a chunk that uses up the unused data in the science segment @param min_length: the unused data must be greater than min_length to make a chunk. @param overlap: overlap between chunks in seconds. @param play: if true, only generate chunks that overlap with S2 playground data. @param sl: slide by sl seconds before determining playground data. @param excl_play: exclude the first excl_play second from the start and end of the chunk when computing if the chunk overlaps with playground.

play()[source]

Keep only times in ScienceSegments which are in the playground

read(filename, min_length, slide_sec=0, buffer=0)[source]

Parse the science segments from the segwizard output contained in file. @param filename: input text file containing a list of science segments generated by segwizard. @param min_length: only append science segments that are longer than min_length. @param slide_sec: Slide each ScienceSegment by:

delta > 0:
  [s, e] -> [s+delta, e].
delta < 0:
  [s, e] -> [s, e-delta].

@param buffer: shrink the ScienceSegment:

[s, e] -> [s+buffer, e-buffer]

split(dt)[source]

Split the segments in the list into subsegments at least as long as dt.

tama_read(filename)[source]

Parse the science segments from a tama list of locked segments contained in file. @param filename: input text file containing a list of tama segments.

union(other)[source]

Replaces the ScienceSegments contained in this instance of ScienceData with the union of those in the instance other. Returns the number of ScienceSegments in the union. @param other: ScienceData to use to generate the union.
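
A sketch of the typical segment-handling flow: write a small segwizard-format file (four columns: segment id, GPS start, GPS end, duration), read it back and divide the result into chunks. The file contents and GPS times are invented:

    from glue import pipeline

    with open("segments.txt", "w") as f:
        f.write("# id  start       end         duration\n")
        f.write("1  1000000000  1000002000  2000\n")
        f.write("2  1000004000  1000010000  6000\n")

    data = pipeline.ScienceData()
    data.read("segments.txt", 1800)   # keep only segments longer than 1800 s
    data.make_chunks(1024)            # 1024 s chunks, no overlap

    for seg in data:
        print(seg.id(), seg.start(), seg.end(), "unused:", seg.unused())
        for chunk in seg:
            print("   chunk", chunk.start(), chunk.end())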

class glue.pipeline.ScienceSegment(segment)[source]

Bases: object

A ScienceSegment is a period of time where the experimenters determine that the interferometer is in a state where the data is suitable for scientific analysis. A science segment can have a list of AnalysisChunks associated with it that break the segment up into (possibly overlapping) smaller time intervals for analysis.

add_chunk(start, end, trig_start=0, trig_end=0)[source]

Add an AnalysisChunk to the list associated with this ScienceSegment. @param start: GPS start time of chunk. @param end: GPS end time of chunk. @param trig_start: GPS start time for triggers from chunk. @param trig_end: GPS end time for triggers from chunk.

dur()[source]

Returns the length (duration) in seconds of this ScienceSegment.

end()[source]

Returns the GPS end time of this ScienceSegment.

get_df_node()[source]

Returns the DataFind node for this ScienceSegment.

id()[source]

Returns the ID of this ScienceSegment.

make_chunks(length=0, overlap=0, play=0, sl=0, excl_play=0, pad_data=0)[source]

Divides the science segment into chunks of length seconds overlapped by overlap seconds. If the play option is set, only chunks that contain S2 playground data are generated. If the user has a more complicated way of generating chunks, this method should be overridden in a sub-class. Any data at the end of the ScienceSegment that is too short to contain a chunk is ignored. The length of this unused data is stored and can be retrieved with the unused() method. @param length: length of chunk in seconds. @param overlap: overlap between chunks in seconds. @param play: 1 : only generate chunks that overlap with S2 playground data.

2 : as play = 1, plus compute trig start and end times to coincide with the start/end of the playground.

@param sl: slide by sl seconds before determining playground data. @param excl_play: exclude the first excl_play second from the start and end of the chunk when computing if the chunk overlaps with playground. @param pad_data: exclude the first and last pad_data seconds of the segment when generating chunks

set_df_node(df_node)[source]

Set the DataFind node associated with this ScienceSegment to df_node. @param df_node: the DataFind node for this ScienceSegment.

set_end(t)[source]

Override the GPS end time (and set the duration) of this ScienceSegment. @param t: new GPS end time.

set_start(t)[source]

Override the GPS start time (and set the duration) of this ScienceSegment. @param t: new GPS start time.

set_unused(unused)[source]

Set the length of data in the science segment not used to make chunks.

start()[source]

Returns the GPS start time of this ScienceSegment.

unused()[source]

Returns the length of data in the science segment not used to make chunks.
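
A ScienceSegment can also be built and chunked directly from an (id, start, end, duration) tuple; the numbers here are illustrative:

    from glue import pipeline

    seg = pipeline.ScienceSegment((1, 1000000000, 1000002000, 2000))
    seg.make_chunks(length=512, overlap=64)

    print(seg.dur(), seg.unused())    # duration, and the leftover too short for a chunk
    for chunk in seg:
        print(chunk.start(), chunk.end())

    # the leftover can still be covered by a hand-made chunk if desired
    seg.add_chunk(seg.end() - seg.unused(), seg.end())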

exception glue.pipeline.SegmentError(args=None)[source]

Bases: Exception

class glue.pipeline.SqliteJob(cp, sections, exec_name)[source]

Bases: CondorDAGJob, AnalysisJob

A cbc sqlite job adds to CondorDAGJob and AnalysisJob features common to jobs which read or write to a sqlite database. Of note, the universe is always set to local regardless of what’s in the cp file, the extension is set to None so that it may be set by individual SqliteNodes, log files do not have macrogpsstarttime and endtime in them, and get_env is set to True.

get_exec_name()[source]

Get the exec_name.

set_exec_name(exec_name)[source]

Set the exec_name.

class glue.pipeline.SqliteNode(job)[source]

Bases: CondorDAGNode, AnalysisNode

A cbc sqlite node adds to the standard AnalysisNode features common to nodes which read or write to a sqlite database. Specifically, it adds the set_tmp_space_path and set_database methods.

get_database()[source]

Gets database option.

get_tmp_space()[source]

Gets tmp-space path.

set_database(database)[source]

Sets database option.

set_tmp_space(tmp_space)[source]

Sets temp-space path. This should be on a local disk.

glue.pipeline.s2play(t)[source]

Return True if t is in the S2 playground, False otherwise. @param t: GPS time to test.