pycbc.types package
Submodules
pycbc.types.aligned module
This module provides a class derived from numpy.ndarray that also indicates whether or not its memory is aligned. It further provides functions for creating zeros and empty (uninitialized) arrays with this class.
pycbc.types.array module
This module provides a device-independent Array class based on PyCUDA and NumPy.
class pycbc.types.array.Array(initial_array, dtype=None, copy=True)
Bases: object
Array used to do numeric calculations on various compute devices. It is a convenience wrapper around NumPy and PyCUDA.
abs_max_loc()
Return the maximum elementwise norm in the array along with its index location.
almost_equal_elem(other, tol, relative=True)
Compare whether two array types are almost equal, element by element.
If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(self[i]-other[i]) <= tol*abs(self[i]) for all elements of the array.
If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(self[i]-other[i]) <= tol for all elements of the array.
Other meta-data (type, dtype, and length) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.
Parameters: - other – Another Python object that should be tested for almost-equality with ‘self’, element-by-element.
- tol – A non-negative number, the tolerance, interpreted as either a relative tolerance (the default) or an absolute tolerance.
- relative – A boolean indicating whether ‘tol’ should be interpreted as a relative tolerance (if True, the default) or as an absolute tolerance (if False).
Returns: ‘True’ if the data agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, and dtypes are exactly the same.
Return type: boolean
almost_equal_norm(other, tol, relative=True)
Compare whether two array types are almost equal, normwise.
If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(norm(self-other)) <= tol*abs(norm(self)).
If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(norm(self-other)) <= tol.
Other meta-data (type, dtype, and length) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.
Parameters: - other – Another Python object that should be tested for almost-equality with ‘self’, based on their norms.
- tol – A non-negative number, the tolerance, interpreted as either a relative tolerance (the default) or an absolute tolerance.
- relative – A boolean indicating whether ‘tol’ should be interpreted as a relative tolerance (if True, the default) or as an absolute tolerance (if False).
Returns: ‘True’ if the data agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, and dtypes are exactly the same.
Return type: boolean
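As an illustrative sketch of both comparison styles (assuming a CPU scheme; the values are arbitrary):
>>> import numpy as np
>>> from pycbc.types import Array
>>> a = Array(np.array([1.0, 2.0, 3.0]))
>>> b = Array(np.array([1.0, 2.0, 3.0 + 1e-9]))
>>> a.almost_equal_elem(b, tol=1e-6)                   # relative, element-wise
True
>>> a.almost_equal_norm(b, tol=1e-12, relative=False)  # absolute, normwise
False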
data
Returns the internal Python array.
dtype
itemsize
kind
multiply_and_add(other, mult_fac)
Return other multiplied by mult_fac and added to self. Self is modified in place and returned as output. Precisions of the inputs must match.
nbytes
ndim
precision
ptr
Returns a pointer to the memory of this array.
save(path, group=None)
Save array to a Numpy .npy, hdf, or text file. When saving a complex array as text, the real and imaginary parts are saved as the first and second columns, respectively. When using hdf format, the data is stored as a single vector, along with relevant attributes.
Parameters: - path (string) – Destination file path. Must end with either .hdf, .npy or .txt.
- group (string) – Additional name for internal storage use. Ex. hdf storage uses this as the key value.
Raises: ValueError – If path does not end in .hdf, .npy, or .txt.
shape
view(dtype)
Return a ‘view’ of the array with its bytes now interpreted according to ‘dtype’. The location in memory is unchanged, and changing elements in a view of an array will also change the original array.
Parameters: dtype (numpy dtype, one of float32, float64, complex64 or complex128) – The new dtype that should be used to interpret the bytes of self.
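For example (a minimal sketch; the values are arbitrary), a float64 array can be reinterpreted as complex128 without copying:
>>> import numpy as np
>>> from pycbc.types import Array
>>> a = Array(np.array([1.0, 2.0, 3.0, 4.0]))  # float64 data
>>> c = a.view(np.complex128)                  # same memory: pairs of floats become complex values
>>> c *= 2                                     # modifying the view also modifies a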
pycbc.types.array.empty(length, dtype=numpy.float64)
Return an empty Array (no initialization).
pycbc.types.array.load_array(path, group=None)
Load an Array from a .hdf, .txt or .npy file. The default data type will be double-precision floating point.
Parameters: - path (string) – Source file path. Must end with either .hdf, .npy or .txt.
- group (string) – Additional name for internal storage use. Ex. hdf storage uses this as the key value.
Raises: ValueError – If path does not end in .hdf, .npy, or .txt.
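A save/load round trip through an .npy file might look like this (a minimal sketch; the file name is hypothetical):
>>> import numpy as np
>>> from pycbc.types import Array
>>> from pycbc.types.array import load_array
>>> a = Array(np.arange(8, dtype=np.float64))
>>> a.save('vector.npy')
>>> b = load_array('vector.npy')
>>> b.almost_equal_elem(a, tol=0, relative=False)   # exact round trip
True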
pycbc.types.array_cpu module
NumPy-based CPU backend for the PyCBC Array.
pycbc.types.array_cpu.abs_arg_max()
pycbc.types.array_cpu.abs_arg_max_complex
pycbc.types.array_cpu.abs_max_loc()
pycbc.types.array_cpu.clear()
pycbc.types.array_cpu.cumsum()
pycbc.types.array_cpu.dot()
pycbc.types.array_cpu.empty()
pycbc.types.array_cpu.inner()
Return the inner product of the array with complex conjugation.
pycbc.types.array_cpu.inner_real
pycbc.types.array_cpu.max()
pycbc.types.array_cpu.max_loc()
pycbc.types.array_cpu.min()
pycbc.types.array_cpu.multiply_and_add()
Return other multiplied by mult_fac and added to self. Self will be modified in place. This requires all inputs to be of the same precision.
pycbc.types.array_cpu.numpy()
pycbc.types.array_cpu.ptr()
pycbc.types.array_cpu.squared_norm()
Return the elementwise squared norm of the array.
pycbc.types.array_cpu.sum()
pycbc.types.array_cpu.take()
pycbc.types.array_cpu.vdot()
Return the inner product of the array with complex conjugation.
pycbc.types.array_cpu.weighted_inner()
Return the inner product of the array with complex conjugation.
pycbc.types.array_cpu.zeros()
pycbc.types.config module
This module provides a wrapper around the ConfigParser utilities for PyCBC.
class pycbc.types.config.DeepCopyableConfigParser(*args, **kwargs)
Bases: configparser.SafeConfigParser
The standard SafeConfigParser no longer supports deepcopy() as of Python 2.7 (see http://bugs.python.org/issue16058). This subclass restores that functionality.
class pycbc.types.config.InterpolatingConfigParser(configFiles=None, overrideTuples=None, parsedFilePath=None, deleteTuples=None, skip_extended=False)
Bases: pycbc.types.config.DeepCopyableConfigParser
This is a sub-class of DeepCopyableConfigParser that adds a few helper features useful in workflows.
add_options_to_section(section, items, overwrite_options=False)
Add a set of options and values to a section of a ConfigParser object. By default this will throw an error if any of the options being added already exist; this behaviour can be overridden if desired.
Parameters: - section (string) – The name of the section to add options+values to
- items (list of tuples) – Each tuple contains (at [0]) the option and (at [1]) the value to add to the section of the ini file
- overwrite_options (Boolean, optional) – By default this function will throw a ValueError if an option exists in both the original section in the ConfigParser and in the provided items. If overwrite_options is set to True, the options+values given in items will instead replace the original values. Default = False.
check_duplicate_options(section1, section2, raise_error=False)
Check for duplicate options in two sections, section1 and section2. Returns a list of the duplicate options.
Parameters: - section1 (string) – The name of the first section to compare
- section2 (string) – The name of the second section to compare
- raise_error (Boolean, optional (default=False)) – If True, raise an error if duplicates are present.
Returns: duplicates – List of duplicate options
Return type: List
classmethod from_cli(opts)
Initialize the config parser using options parsed from the command line.
The parsed options opts must include the options provided by add_workflow_command_line_group().
Parameters: opts (argparse.ArgumentParser) – The command line arguments parsed by argparse.
get_opt_tag(section, option, tag)
Convenience function accessing get_opt_tags() for a single tag: see the documentation of that function. NB: calling get_opt_tags() directly is preferred for simplicity.
Parameters: - self (ConfigParser object) – The ConfigParser object (automatically passed when this is appended to the ConfigParser class)
- section (string) – The section of the ConfigParser object to read
- option (string) – The ConfigParser option to look for
- tag (string) – The name of the subsection to look in, if not found in [section]
Returns: The value of the options being searched for
Return type: string
get_opt_tags(section, option, tags)
Supplement to ConfigParser.ConfigParser.get(). This will search for an option in [section] and, if it doesn’t find it, will also try in [section-tag] for every value of tag in tags. Will raise a ConfigParser.Error if it cannot find a value.
Parameters: - self (ConfigParser object) – The ConfigParser object (automatically passed when this is appended to the ConfigParser class)
- section (string) – The section of the ConfigParser object to read
- option (string) – The ConfigParser option to look for
- tags (list of strings) – The name of subsections to look in, if not found in [section]
Returns: The value of the options being searched for
Return type: string
has_option_tag(section, option, tag)
Convenience function accessing has_option_tags() for a single tag: see the documentation of that function. NB: calling has_option_tags() directly is preferred for simplicity.
Parameters: - self (ConfigParser object) – The ConfigParser object (automatically passed when this is appended to the ConfigParser class)
- section (string) – The section of the ConfigParser object to read
- option (string) – The ConfigParser option to look for
- tag (string) – The name of the subsection to look in, if not found in [section]
Returns: Is the option in the section or [section-tag]
Return type: Boolean
has_option_tags(section, option, tags)
Supplement to ConfigParser.ConfigParser.has_option(). This will search for an option in [section] and, if it doesn’t find it, will also try in [section-tag] for each value in tags. Returns True if the option is found and False if not.
Parameters: - self (ConfigParser object) – The ConfigParser object (automatically passed when this is appended to the ConfigParser class)
- section (string) – The section of the ConfigParser object to read
- option (string) – The ConfigParser option to look for
- tags (list of strings) – The names of the subsection to look in, if not found in [section]
Returns: Is the option in the section or [section-tag] (for tag in tags)
Return type: Boolean
interpolate_string(test_string, section)
Take a string and replace all examples of ExtendedInterpolation formatting within the string with the exact value.
For values like ${example} this is replaced with the value of the option called example *in the same section*.
For values like ${common|example} this is replaced with the value of the option example in the section [common]. Note that in the Python 3 config parser this is ${common:example}, but Python 2.7 interprets the ‘:’ the same as an ‘=’, which breaks things.
Nested interpolation is not supported here.
Parameters: - test_string (String) – The string to parse and interpolate
- section (String) – The current section of the ConfigParser object
Returns: test_string – Interpolated string
Return type: String
perform_extended_interpolation()
Filter through an ini file and replace all examples of ExtendedInterpolation formatting with the exact value. For values like ${example} this is replaced with the value of the option called example *in the same section*.
For values like ${common|example} this is replaced with the value of the option example in the section [common]. Note that in the Python 3 config parser this is ${common:example}, but Python 2.7 interprets the ‘:’ the same as an ‘=’, which breaks things.
Nested interpolation is not supported here.
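For illustration, a minimal sketch of the ${section|option} syntax (the file name and option names are hypothetical; this assumes interpolation is performed at construction, i.e. skip_extended=False):
>>> from pycbc.types.config import InterpolatingConfigParser
>>> with open('example.ini', 'w') as f:
...     _ = f.write('[common]\nsample-rate = 4096\n\n[inspiral]\nrate = ${common|sample-rate}\n')
>>> cp = InterpolatingConfigParser(configFiles=['example.ini'])
>>> cp.get('inspiral', 'rate')
'4096'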
Parse the [sharedoptions] section of the ini file.
That section should contain entries according to:
- massparams = inspiral, tmpltbank
- dataparams = tmpltbank
This will result in all options in [sharedoptions-massparams] being copied into the [inspiral] and [tmpltbank] sections, and the options in [sharedoptions-dataparams] being copied into [tmpltbank]. In the case of duplicates an error will be raised.
read_ini_file(fpath)
Read a .ini file and return it as a ConfigParser class. This function does none of the parsing/combining of sections. It simply reads the file and returns it unedited.
Stub awaiting more functionality - see configparser_test.py.
Parameters: fpath (path to .ini file, or list of paths) – The path(s) to a .ini file to be read in.
Returns: cp – The ConfigParser class containing the read-in .ini file.
Return type: ConfigParser
sanity_check_subsections()
This function goes through the ConfigParser and checks that any options given in the [SECTION_NAME] section are not also given in any [SECTION_NAME-SUBSECTION] sections.
split_multi_sections()
Parse through the WorkflowConfigParser instance and split any sections labelled with an “&” sign (e.g. [inspiral&tmpltbank]) into [inspiral] and [tmpltbank] sections. If these individual sections already exist they will be appended to. If an option exists in both the [inspiral] and [inspiral&tmpltbank] sections an error will be thrown.
pycbc.types.frequencyseries module
Provides a class representing a frequency series.
class pycbc.types.frequencyseries.FrequencySeries(initial_array, delta_f=None, epoch='', dtype=None, copy=True)
Bases: pycbc.types.array.Array
Models a frequency series consisting of uniformly sampled scalar values.
Parameters: - initial_array (array-like) – Array containing sampled data.
- delta_f (float) – Frequency between consecutive samples in Hertz.
- epoch ({None, lal.LIGOTimeGPS}, optional) – Start time of the associated time domain data in seconds.
- dtype ({None, data-type}, optional) – Sample data type.
- copy (boolean, optional) – If True, samples are copied to a new array.
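A minimal construction sketch (arbitrary zero data; the length corresponds to an even, 4096-sample time series):
>>> import numpy as np
>>> from pycbc.types import FrequencySeries
>>> fs = FrequencySeries(np.zeros(2049, dtype=np.complex128), delta_f=0.25)
>>> fs.duration      # 1 / delta_f = 4.0 seconds
>>> fs.sample_rate   # (len(fs) - 1) * 2 * delta_f = 1024 Hz, assuming even length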
epoch
Time at index 0.
Type: lal.LIGOTimeGPS
almost_equal_elem(other, tol, relative=True, dtol=0.0)
Compare whether two frequency series are almost equal, element by element.
If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(self[i]-other[i]) <= tol*abs(self[i]) for all elements of the series.
If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(self[i]-other[i]) <= tol for all elements of the series.
The method also checks that self.delta_f is within ‘dtol’ of other.delta_f; if ‘dtol’ has its default value of 0 then exact equality between the two is required.
Other meta-data (type, dtype, length, and epoch) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.
Parameters: - other – Another Python object that should be tested for almost-equality with ‘self’, element-by-element.
- tol – A non-negative number, the tolerance, interpreted as either a relative tolerance (the default) or an absolute tolerance.
- relative – A boolean indicating whether ‘tol’ should be interpreted as a relative tolerance (if True, the default) or as an absolute tolerance (if False).
- dtol – A non-negative number, the tolerance for delta_f. Like ‘tol’, it is interpreted as relative or absolute based on the value of ‘relative’. This parameter defaults to zero, enforcing exact equality between the delta_f values of the two FrequencySeries.
Returns: ‘True’ if the data and delta_fs agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, dtypes, and epochs are exactly the same.
Return type: boolean
almost_equal_norm(other, tol, relative=True, dtol=0.0)
Compare whether two frequency series are almost equal, normwise.
If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(norm(self-other)) <= tol*abs(norm(self)).
If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(norm(self-other)) <= tol.
The method also checks that self.delta_f is within ‘dtol’ of other.delta_f; if ‘dtol’ has its default value of 0 then exact equality between the two is required.
Other meta-data (type, dtype, length, and epoch) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.
Parameters: - other – Another Python object that should be tested for almost-equality with ‘self’, based on their norms.
- tol – A non-negative number, the tolerance, interpreted as either a relative tolerance (the default) or an absolute tolerance.
- relative – A boolean indicating whether ‘tol’ should be interpreted as a relative tolerance (if True, the default) or as an absolute tolerance (if False).
- dtol – A non-negative number, the tolerance for delta_f. Like ‘tol’, it is interpreted as relative or absolute based on the value of ‘relative’. This parameter defaults to zero, enforcing exact equality between the delta_f values of the two FrequencySeries.
Returns: ‘True’ if the data and delta_fs agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, dtypes, and epochs are exactly the same.
Return type: boolean
cyclic_time_shift(dt)
Shift the data and timestamps by a given number of seconds.
Shift the data and timestamps in the time domain by a given number of seconds. To change only the time stamps, do ts.start_time += dt. The time shift may be smaller than the intrinsic sample spacing of the data. Note that the data will be cyclically rotated, so if you shift by 2 seconds, the final 2 seconds of your data will now be at the beginning of the data set.
Parameters: dt (float) – Amount of time to shift the vector.
Returns: data – The time-shifted frequency series.
Return type: pycbc.types.FrequencySeries
delta_f
Frequency between consecutive samples in Hertz.
delta_t
Return the time between samples if this were a time series. This assumes the time series is even in length!
duration
Return the time duration of this vector.
end_time
Return the end time of this vector.
epoch
Frequency series epoch as a LIGOTimeGPS.
lal()
Produces a LAL frequency series object equivalent to self.
Returns: lal_data – LAL frequency series object containing the same data as self. The actual type depends on the sample dtype. If the epoch of self was ‘None’, the epoch of the returned LAL object will be LIGOTimeGPS(0,0); otherwise, the same as that of self.
Return type: {lal.*FrequencySeries}
Raises: TypeError – If the frequency series is stored in GPU memory.
match(other, psd=None, low_frequency_cutoff=None, high_frequency_cutoff=None)
Return the match between two TimeSeries or FrequencySeries.
Return the match between two waveforms. This is equivalent to the overlap maximized over time and phase. By default, the other vector will be resized to match self. Beware, this may remove high-frequency content or the end of the vector.
Parameters: - other (TimeSeries or FrequencySeries) – The input vector containing a waveform.
- psd (FrequencySeries) – A power spectral density to weight the overlap.
- low_frequency_cutoff ({None, float}, optional) – The frequency at which to begin the match.
- high_frequency_cutoff ({None, float}, optional) – The frequency at which to stop the match.
Returns: - match (float)
- index (int) – The number of samples to shift to get the match.
sample_frequencies
Array of the sample frequencies.
sample_rate
Return the sample rate this would have in the time domain. This assumes an even-length time series!
save(path, group=None, ifo='P1')
Save frequency series to a Numpy .npy, hdf, or text file. The first column contains the sample frequencies, the second contains the values. In the case of a complex frequency series saved as text, the imaginary part is written as a third column. When using hdf format, the data is stored as a single vector, along with relevant attributes.
Parameters: - path (string) – Destination file path. Must end with either .hdf, .npy or .txt.
- group (string) – Additional name for internal storage use. Ex. hdf storage uses this as the key value.
Raises: ValueError – If path does not end in .hdf, .npy, or .txt.
start_time
Return the start time of this vector.
to_timeseries(delta_t=None)
Return the inverse Fourier transform of this frequency series.
Note that this assumes an even-length time series!
Parameters: delta_t ({None, float}, optional) – The time resolution of the returned series. By default the resolution is determined by the length and delta_f of this frequency series.
Returns: The inverse Fourier transform of this frequency series.
Return type: pycbc.types.TimeSeries
pycbc.types.frequencyseries.load_frequencyseries(path, group=None)
Load a FrequencySeries from a .hdf, .txt or .npy file. The default data type will be double-precision floating point.
Parameters: - path (string) – Source file path. Must end with either .hdf, .npy or .txt.
- group (string) – Additional name for internal storage use. Ex. hdf storage uses this as the key value.
Raises: ValueError – If path does not end in .hdf, .npy, or .txt.
pycbc.types.optparse module
This module contains extensions for use with argparse.
class pycbc.types.optparse.DictWithDefaultReturn
Bases: collections.defaultdict
default_set = False
ifo_set = False
class
pycbc.types.optparse.
MultiDetMultiColonOptionAction
(option_strings, dest, nargs='+', const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)[source]¶ Bases:
pycbc.types.optparse.MultiDetOptionAction
A special case of MultiDetOptionAction which allows one to use arguments containing colons, such as V1:FOOBAR:1. The first colon is assumed to be the separator between the detector and the argument. All subsequent colons are kept as part of the argument. Unlike MultiDetOptionAction, all arguments must be prefixed by the corresponding detector.
class pycbc.types.optparse.MultiDetOptionAction(option_strings, dest, nargs='+', const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)
Bases: argparse.Action
class pycbc.types.optparse.MultiDetOptionActionSpecial(option_strings, dest, nargs='+', const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)
Bases: pycbc.types.optparse.MultiDetOptionAction
This class is an extension of the MultiDetOptionAction class to handle cases where the ‘:’ is already a special character. For example, the channel name is something like H1:CHANNEL_NAME. Here the channel name must be provided uniquely for each ifo. The dictionary key is set to H1 and the value to H1:CHANNEL_NAME for this example.
class pycbc.types.optparse.MultiDetOptionAppendAction(option_strings, dest, nargs='+', const=None, default=None, type=None, choices=None, required=False, help=None, metavar=None)
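Typical usage of the multi-detector actions with argparse looks like the following (a minimal sketch; the option name is hypothetical, and the per-detector dictionary result reflects the documented DETECTOR:VALUE convention):
>>> import argparse
>>> from pycbc.types.optparse import MultiDetOptionAction
>>> parser = argparse.ArgumentParser()
>>> _ = parser.add_argument('--trigger-snr', action=MultiDetOptionAction, type=float)
>>> opts = parser.parse_args(['--trigger-snr', 'H1:5.5', 'L1:6.0'])
>>> opts.trigger_snr   # a dictionary keyed on detector, e.g. {'H1': 5.5, 'L1': 6.0}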
pycbc.types.optparse.convert_to_process_params_dict(opt)
Takes the namespace object (opt) from the multi-detector interface and returns a dictionary of command line options that will be handled correctly by the register_to_process_params ligolw function.
pycbc.types.optparse.copy_opts_for_single_ifo(opt, ifo)
Takes the namespace object (opt) from the multi-detector interface and returns a namespace object for a single ifo that can be used with functions expecting output from the single-detector interface.
pycbc.types.optparse.ensure_one_opt(opt, parser, opt_list)
Check that one and only one of the options in opt_list is defined in opt.
pycbc.types.optparse.ensure_one_opt_multi_ifo(opt, parser, ifo, opt_list)
Check that one and only one of the options in opt_list is defined in opt.
pycbc.types.optparse.nonnegative_float(s)
Ensure the argument is a positive real number or zero and return it as a float.
To be used as the type of an argparse argument.
pycbc.types.optparse.positive_float(s)
Ensure the argument is a positive real number and return it as a float.
To be used as the type of an argparse argument.
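Both validators are meant to be passed as the type of an argparse argument (a minimal sketch; the option names are hypothetical):
>>> import argparse
>>> from pycbc.types.optparse import positive_float, nonnegative_float
>>> parser = argparse.ArgumentParser()
>>> _ = parser.add_argument('--low-frequency-cutoff', type=positive_float)
>>> _ = parser.add_argument('--pad-seconds', type=nonnegative_float)
>>> opts = parser.parse_args(['--low-frequency-cutoff', '20', '--pad-seconds', '0'])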
pycbc.types.optparse.required_opts(opt, parser, opt_list, required_by=None)
Check that all the options in opt_list are defined in opt.
pycbc.types.timeseries module
Provides a class representing a time series.
class pycbc.types.timeseries.TimeSeries(initial_array, delta_t=None, epoch=None, dtype=None, copy=True)
Bases: pycbc.types.array.Array
Models a time series consisting of uniformly sampled scalar values.
Parameters: - initial_array (array-like) – Array containing sampled data.
- delta_t (float) – Time between consecutive samples in seconds.
- epoch ({None, lal.LIGOTimeGPS}, optional) – Time of the first sample in seconds.
- dtype ({None, data-type}, optional) – Sample data type.
- copy (boolean, optional) – If True, samples are copied to a new array.
delta_t
duration
start_time
end_time
sample_times
sample_rate
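A minimal construction sketch with arbitrary data:
>>> import numpy as np
>>> from pycbc.types import TimeSeries
>>> ts = TimeSeries(np.random.normal(size=4096), delta_t=1.0/1024)
>>> ts.duration      # 4096 * delta_t = 4.0 seconds
>>> ts.sample_rate   # 1 / delta_t = 1024 Hz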
add_into(other, copy=True)
Return a copy of self with other injected into it.
The other vector will be resized and time shifted with sub-sample precision before adding. This assumes the data are zero outside of the original vector range.
almost_equal_elem(other, tol, relative=True, dtol=0.0)
Compare whether two time series are almost equal, element by element.
If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(self[i]-other[i]) <= tol*abs(self[i]) for all elements of the series.
If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(self[i]-other[i]) <= tol for all elements of the series.
The method also checks that self.delta_t is within ‘dtol’ of other.delta_t; if ‘dtol’ has its default value of 0 then exact equality between the two is required.
Other meta-data (type, dtype, length, and epoch) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.
Parameters: - other – Another Python object that should be tested for almost-equality with ‘self’, element-by-element.
- tol – A non-negative number, the tolerance, interpreted as either a relative tolerance (the default) or an absolute tolerance.
- relative – A boolean indicating whether ‘tol’ should be interpreted as a relative tolerance (if True, the default) or as an absolute tolerance (if False).
- dtol – A non-negative number, the tolerance for delta_t. Like ‘tol’, it is interpreted as relative or absolute based on the value of ‘relative’. This parameter defaults to zero, enforcing exact equality between the delta_t values of the two TimeSeries.
Returns: ‘True’ if the data and delta_ts agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, dtypes, and epochs are exactly the same.
Return type: boolean
almost_equal_norm(other, tol, relative=True, dtol=0.0)
Compare whether two time series are almost equal, normwise.
If the ‘relative’ parameter is ‘True’ (the default) then the ‘tol’ parameter (which must be positive) is interpreted as a relative tolerance, and the comparison returns ‘True’ only if abs(norm(self-other)) <= tol*abs(norm(self)).
If ‘relative’ is ‘False’, then ‘tol’ is an absolute tolerance, and the comparison is true only if abs(norm(self-other)) <= tol.
The method also checks that self.delta_t is within ‘dtol’ of other.delta_t; if ‘dtol’ has its default value of 0 then exact equality between the two is required.
Other meta-data (type, dtype, length, and epoch) must be exactly equal. If either object’s memory lives on the GPU it will be copied to the CPU for the comparison, which may be slow. But the original object itself will not have its memory relocated nor scheme changed.
Parameters: - other – Another Python object that should be tested for almost-equality with ‘self’, based on their norms.
- tol – A non-negative number, the tolerance, interpreted as either a relative tolerance (the default) or an absolute tolerance.
- relative – A boolean indicating whether ‘tol’ should be interpreted as a relative tolerance (if True, the default) or as an absolute tolerance (if False).
- dtol – A non-negative number, the tolerance for delta_t. Like ‘tol’, it is interpreted as relative or absolute based on the value of ‘relative’. This parameter defaults to zero, enforcing exact equality between the delta_t values of the two TimeSeries.
Returns: ‘True’ if the data and delta_ts agree within the tolerance, as interpreted by the ‘relative’ keyword, and if the types, lengths, dtypes, and epochs are exactly the same.
Return type: boolean
crop(left, right)
Remove the given number of seconds from either end of the time series.
Parameters: - left (float) – Number of seconds to remove from the start of the time series.
- right (float) – Number of seconds to remove from the end of the time series.
Returns: cropped – The reduced time series.
Return type: pycbc.types.TimeSeries
cyclic_time_shift(dt)
Shift the data and timestamps by a given number of seconds.
Shift the data and timestamps in the time domain by a given number of seconds. To change only the time stamps, do ts.start_time += dt. The time shift may be smaller than the intrinsic sample spacing of the data. Note that the data will be cyclically rotated, so if you shift by 2 seconds, the final 2 seconds of your data will now be at the beginning of the data set.
Parameters: dt (float) – Amount of time to shift the vector.
Returns: data – The time-shifted time series.
Return type: pycbc.types.TimeSeries
delta_f
Return the delta_f this time series would have in the frequency domain.
delta_t
Time between consecutive samples in seconds.
detrend(type='linear')
Remove a linear trend from the data.
Removing the trend improves the approximation that the data is circularly convolved, which helps reduce the size of filter transients from a circular convolution / filter.
Parameters: type (str) – The choice of detrending. The default (‘linear’) removes a least-squares linear fit; ‘constant’ removes only the mean of the data.
duration
Duration of the time series in seconds.
end_time
Time series end time as a LIGOTimeGPS.
filter_psd(segment_duration, delta_f, flow)
Calculate the power spectral density of this time series.
Use the pycbc.psd.welch method to estimate the PSD of this time segment. The PSD is then truncated in the time domain to the segment duration and interpolated to the requested sample frequency.
Parameters: - segment_duration (float) – Duration in seconds to use for each sample of the spectrum.
- delta_f (float) – Frequency spacing to which the PSD is interpolated.
- flow (float) – The low-frequency cutoff to apply when truncating the inverse spectrum.
Returns: psd – Frequency series containing the estimated PSD.
Return type: pycbc.types.FrequencySeries
fir_zero_filter(coeff)
Filter the time series with a set of FIR coefficients.
Parameters: coeff (numpy.ndarray) – FIR coefficients. Should be of odd length and symmetric.
Returns: filtered_series (pycbc.types.TimeSeries) – The filtered time series, properly shifted to account for the FIR filter delay and with the corrupted regions zeroed out.
gate(time, window=0.25, method='taper', copy=True, taper_width=0.25, invpsd=None)
Gate out a portion of the time series.
Parameters: - time (float) – Central time of the gate in seconds.
- window (float) – Half-length in seconds of the data to remove around the gate time.
- method (str) – Method used to apply the gate; options are ‘hard’, ‘taper’, and ‘paint’.
- copy (bool) – If False, do the operation in place on this time series; otherwise return a new time series.
- taper_width (float) – Length of the tapering region on either side of the excised data. Only applies to the ‘taper’ gating method.
- invpsd (pycbc.types.FrequencySeries) – The inverse PSD to use for the ‘paint’ method. If not given, a PSD is generated using default settings.
Returns: data – Gated time series.
Return type: pycbc.types.TimeSeries
highpass_fir(frequency, order, beta=5.0, remove_corrupted=True)
Highpass filter the time series using an FIR filter generated from the ideal response passed through a Kaiser window (beta = 5.0).
Parameters: - Time Series – The time series to be high-passed.
- frequency (float) – The frequency below which the signal is suppressed.
- order (int) – Number of corrupted samples on each side of the time series.
- beta (float) – Beta parameter of the Kaiser window that sets the side lobe attenuation.
- remove_corrupted ({True, boolean}) – If True, the region of the time series corrupted by the filtering is excised before returning. If False, the corrupted regions are not excised and the full time series is returned.
inject(other, copy=True)
Return a copy of self with other injected into it.
The other vector will be resized and time shifted with sub-sample precision before adding. This assumes the data are zero outside of the original vector range.
lal()
Produces a LAL time series object equivalent to self.
Returns: lal_data – LAL time series object containing the same data as self. The actual type depends on the sample dtype. If the epoch of self is ‘None’, the epoch of the returned LAL object will be LIGOTimeGPS(0,0); otherwise, the same as that of self.
Return type: {lal.*TimeSeries}
Raises: TypeError – If the time series is stored in GPU memory.
lowpass_fir(frequency, order, beta=5.0, remove_corrupted=True)
Lowpass filter the time series using an FIR filter generated from the ideal response passed through a Kaiser window (beta = 5.0).
Parameters: - Time Series – The time series to be low-passed.
- frequency (float) – The frequency above which the signal is suppressed.
- order (int) – Number of corrupted samples on each side of the time series.
- beta (float) – Beta parameter of the Kaiser window that sets the side lobe attenuation.
- remove_corrupted ({True, boolean}) – If True, the region of the time series corrupted by the filtering is excised before returning. If False, the corrupted regions are not excised and the full time series is returned.
match(other, psd=None, low_frequency_cutoff=None, high_frequency_cutoff=None)
Return the match between two TimeSeries or FrequencySeries.
Return the match between two waveforms. This is equivalent to the overlap maximized over time and phase. By default, the other vector will be resized to match self. This may remove high-frequency content or the end of the vector.
Parameters: - other (TimeSeries or FrequencySeries) – The input vector containing a waveform.
- psd (Frequency Series) – A power spectral density to weight the overlap.
- low_frequency_cutoff ({None, float}, optional) – The frequency to begin the match.
- high_frequency_cutoff ({None, float}, optional) – The frequency to stop the match.
Returns: - match (float)
- index (int) – The number of samples to shift to get the match.
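A self-contained sketch with a toy sinusoid (no PSD weighting; a pure time shift should be recovered by the returned index, with a match close to 1):
>>> import numpy as np
>>> from pycbc.types import TimeSeries
>>> t = np.arange(4096) / 1024.0
>>> ts = TimeSeries(np.sin(2 * np.pi * 30 * t), delta_t=1.0/1024)
>>> other = ts.cyclic_time_shift(0.5)
>>> m, idx = ts.match(other)   # m is near 1 for a pure cyclic shift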
notch_fir(f1, f2, order, beta=5.0, remove_corrupted=True)
Notch filter the time series using an FIR filter generated from the ideal response passed through a time-domain Kaiser window (beta = 5.0).
The suppression of the notch filter is related to the bandwidth and the number of samples in the filter length. For a few Hz bandwidth, a length corresponding to a few seconds is typically required to create significant suppression in the notched band.
Parameters: - Time Series – The time series to be notched.
- f1 (float) – The start of the frequency suppression.
- f2 (float) – The end of the frequency suppression.
- order (int) – Number of corrupted samples on each side of the time series
- beta (float) – Beta parameter of the kaiser window that sets the side lobe attenuation.
prepend_zeros(num)
Prepend num zeros onto the beginning of this TimeSeries. The epoch is also updated to account for the prepended samples.
psd(segment_duration, **kwds)
Calculate the power spectral density of this time series.
Use the pycbc.psd.welch method to estimate the psd of this time segment. For more complete options, please see that function.
Parameters: - segment_duration (float) – Duration in seconds to use for each sample of the spectrum.
- kwds (keywords) – Additional keyword arguments are passed on to the pycbc.psd.welch method.
Returns: psd – Frequency series containing the estimated PSD.
Return type: pycbc.types.FrequencySeries
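For example (a minimal sketch on white noise):
>>> import numpy as np
>>> from pycbc.types import TimeSeries
>>> ts = TimeSeries(np.random.normal(size=16 * 1024), delta_t=1.0/1024)
>>> p = ts.psd(4)    # 4-second Welch segments
>>> p.delta_f        # 1 / (4 s) = 0.25 Hz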
qtransform(delta_t=None, delta_f=None, logfsteps=None, frange=None, qrange=(4, 64), mismatch=0.2, return_complex=False)
Return the interpolated 2d qtransform of this data.
Parameters: - delta_t ({self.delta_t, float}) – The time resolution to interpolate to
- delta_f (float, Optional) – The frequency resolution to interpolate to
- logfsteps (int) – Do a log interpolation (incompatible with delta_f option) and set the number of steps to take.
- frange ({(30, nyquist*0.8), tuple of ints}) – frequency range
- qrange ({(4, 64), tuple}) – q range
- mismatch (float) – Mismatch between frequency tiles
- return_complex ({False, bool}) – return the raw complex series instead of the normalized power.
Returns: - times (numpy.ndarray) – The time that the qtransform is sampled.
- freqs (numpy.ndarray) – The frequencies that the qtransform is sampled.
- qplane (numpy.ndarray (2d)) – The two dimensional interpolated qtransform of this time series.
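A minimal sketch on white noise (the resolution and ranges here are arbitrary illustrative choices):
>>> import numpy as np
>>> from pycbc.types import TimeSeries
>>> ts = TimeSeries(np.random.normal(size=4 * 1024), delta_t=1.0/1024)
>>> times, freqs, qplane = ts.qtransform(delta_t=0.01, logfsteps=50,
...                                      qrange=(8, 8), frange=(20, 100))
>>> qplane.shape   # 2d power array over the (frequency, time) grid, per the docs above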
sample_rate
The sample rate of the time series.
sample_times
Array containing the sample times.
save(path, group=None)
Save time series to a Numpy .npy, hdf, or text file. The first column contains the sample times, the second contains the values. In the case of a complex time series saved as text, the imaginary part is written as a third column. When using hdf format, the data is stored as a single vector, along with relevant attributes.
Parameters: - path (string) – Destination file path. Must end with either .hdf, .npy or .txt.
- group (string) – Additional name for internal storage use. Ex. hdf storage uses this as the key value.
Raises: ValueError – If path does not end in .hdf, .npy, or .txt.
save_to_wav(file_name)
Save this time series to a wav format audio file.
Parameters: file_name (string) – The output file name.
start_time
Return the time series start time as a LIGOTimeGPS.
time_slice(start, end, mode='floor')
Return the slice of the time series that contains the time range in GPS seconds.
to_frequencyseries(delta_f=None)
Return the Fourier transform of this time series.
Parameters: delta_f ({None, float}, optional) – The frequency resolution of the returned frequency series. By default the resolution is determined by the duration of the time series.
Returns: The Fourier transform of this time series.
Return type: pycbc.types.FrequencySeries
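Round-tripping through the frequency domain (a minimal sketch; an even-length series keeps delta_t and delta_f consistent):
>>> import numpy as np
>>> from pycbc.types import TimeSeries
>>> ts = TimeSeries(np.random.normal(size=4096), delta_t=1.0/1024)
>>> fs = ts.to_frequencyseries()           # length is len(ts)//2 + 1
>>> back = fs.to_timeseries()
>>> back.almost_equal_norm(ts, tol=1e-6)   # True up to FFT round-off
True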
whiten(segment_duration, max_filter_duration, trunc_method='hann', remove_corrupted=True, low_frequency_cutoff=None, return_psd=False, **kwds)
Return a whitened time series.
Parameters: - segment_duration (float) – Duration in seconds to use for each sample of the spectrum.
- max_filter_duration (int) – Maximum length of the time-domain filter in seconds.
- trunc_method ({None, 'hann'}) – Function used for truncating the time-domain filter. None produces a hard truncation at max_filter_len.
- remove_corrupted ({True, boolean}) – If True, the region of the time series corrupted by the whitening is excised before returning. If false, the corrupted regions are not excised and the full time series is returned.
- low_frequency_cutoff ({None, float}) – Low frequency cutoff to pass to the inverse spectrum truncation. This should be matched to a known low frequency cutoff of the data if there is one.
- return_psd ({False, Boolean}) – Return the estimated and conditioned PSD that was used to whiten the data.
- kwds (keywords) – Additional keyword arguments are passed on to the pycbc.psd.welch method.
Returns: whitened_data – The whitened time series.
Return type: pycbc.types.TimeSeries
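For example (a minimal sketch on white noise, with 4-second PSD segments and a 4-second maximum filter length):
>>> import numpy as np
>>> from pycbc.types import TimeSeries
>>> ts = TimeSeries(np.random.normal(size=64 * 1024), delta_t=1.0/1024)
>>> white = ts.whiten(4, 4)
>>> white.duration   # shorter than ts.duration: the corrupted ends are removed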
pycbc.types.timeseries.load_timeseries(path, group=None)
Load a TimeSeries from a .hdf, .txt or .npy file. The default data type will be double-precision floating point.
Parameters: - path (string) – Source file path. Must end with either .hdf, .npy or .txt.
- group (string) – Additional name for internal storage use. Ex. hdf storage uses this as the key value.
Raises: ValueError – If path does not end in .hdf, .npy, or .txt.