user_parameters.C
~~~~~~~~~~~~~~~~~~~~~~

The pipeline has a default parameter file, called cwb_parameters.C, which contains the list of variables needed for the analysis, each set to a default value.
The user can override some of these values by redefining them in the config/user_parameters.C file.
We report here all the variables contained in the cwb_parameters.C file; the user can change each of them according to his/her preferences.
Some variables are common to the 1G and 2G pipelines, others are specific to one of them. In the following sections we distinguish between:

- parameters for both analyses (1G/2G)
- parameters only for 1G
- parameters only for 2G

To obtain the complete list of parameters with their default settings:

.. code-block:: bash

   root -b -l $CWB_PARAMETERS_FILE
   cwb[0] CWB::config cfg;   // creates the config object
   cwb[1] cfg.Import();      // import parameters from CINT
   cwb[2] cfg.Print();       // print the default parameters

These are the complete files:

- :cwb_library:`cwb1G_parameters.C`
- :cwb_library:`cwb2G_parameters.C`
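As an illustration of how the defaults are overridden, a minimal config/user_parameters.C (an unnamed ROOT macro) could look like the following sketch; only the parameters that differ from the defaults need to be redefined, and the values shown are purely indicative, not recommended settings:

.. code-block:: bash

   {
     strcpy(analysis,"2G");    // select the 2G pipeline
     nIFO = 3;                 // number of detectors in the network
     strcpy(refIFO,"L1");      // reference IFO
     strcpy(ifo[0],"L1");
     strcpy(ifo[1],"H1");
     strcpy(ifo[2],"V1");
     segLen = 1200.;           // job segment length [sec]
     fLow   = 32.;             // low frequency bound of the search [Hz]
   }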
We divide the file into different sections for simplicity:

+------------------------------------------------------------+------------------------------------------------------------+
| `Analysis <#analysis>`__                                    | Type of analysis (1G/2G and polarization constraint)       |
+------------------------------------------------------------+------------------------------------------------------------+
| `Detectors <#detectors>`__                                  | How to include detectors                                    |
+------------------------------------------------------------+------------------------------------------------------------+
| `Wavelet TF transformation <#wavelet-tf-transformation>`__  | How to define the wavelet decomposition level               |
+------------------------------------------------------------+------------------------------------------------------------+
| `1G conditioning <#1g-conditioning>`__                      | Parameters for Linear Prediction Filter                     |
+------------------------------------------------------------+------------------------------------------------------------+
| `2G conditioning <#2g-conditioning>`__                      | Parameters for Regression                                   |
+------------------------------------------------------------+------------------------------------------------------------+
| `Cluster thresholds <#cluster-thresholds>`__                | Pixels and Cluster selection                                |
+------------------------------------------------------------+------------------------------------------------------------+
| `Wave Packet parameters <#wave-packet-parameters>`__        | Pixels Selection & Reconstruction                           |
+------------------------------------------------------------+------------------------------------------------------------+
| `Job settings <#job-settings>`__                            | Time segments definition                                    |
+------------------------------------------------------------+------------------------------------------------------------+
| `Production parameters <#production-parameters>`__          | Typical parameters for background                           |
+------------------------------------------------------------+------------------------------------------------------------+
| `Simulation parameters <#simulation-parameters>`__          | Typical parameters for simulation (MDC)                     |
+------------------------------------------------------------+------------------------------------------------------------+
| `Data manipulating <#data-manipulating>`__                  | Change frame data (amplitude and time shift)                |
+------------------------------------------------------------+------------------------------------------------------------+
| `Regulator <#regulator>`__                                  | Likelihood regulators                                       |
+------------------------------------------------------------+------------------------------------------------------------+
| `Sky settings <#sky-settings>`__                            | How to define the sky grid                                  |
+------------------------------------------------------------+------------------------------------------------------------+
| `CED parameters <#ced-parameters>`__                        | Parameters for CED generation                               |
+------------------------------------------------------------+------------------------------------------------------------+
| `Files list <#files-list>`__                                | How to include frame and DQ files                           |
+------------------------------------------------------------+------------------------------------------------------------+
| `Plugin <#plugin>`__                                        | How to include Plugins                                      |
+------------------------------------------------------------+------------------------------------------------------------+
| `Output settings <#output-settings>`__                      | Decide which information to store in the final root files   |
+------------------------------------------------------------+------------------------------------------------------------+
| `Working directories <#working-directories>`__              | Set up of working dir                                       |
+------------------------------------------------------------+------------------------------------------------------------+

Analysis
^^^^^^^^^^^^^^

.. code-block:: bash

   char analysis[8]="1G";  // 1G or 2G analysis
   bool online=false;      // true/false -> online/offline
   char search = 'r';      // see description below

- **analysis**: selects the first generation detector (1G) or second generation detector (2G) analysis. The differences between the two analyses are explained here: ...
- **online**: defines whether the analysis is ONLINE or not.
- **search**: a single character which defines the search constraints on the waveform polarization.

**for 1G**

.. code-block:: bash

   // statistics:
   //   L - likelihood
   //   c - network correlation coefficient
   //   A - energy disbalance asymmetry
   //   P - penalty factor based on correlation coefficients <x,y>/sqrt(<x,x>*<y,y>)
   //   E - total energy in the data streams

   // 1G search modes
   //   'c' - un-modeled search, fast S5 cWB version, requires constraint settings
   //   'h' - un-modeled search, S5 cWB version, requires constraint settings
   //   'B' - un-modeled search, max(P*L*c/E)
   //   'b' - un-modeled search, max(P*L*c*A/E)
   //   'I' - elliptical polarisation, max(P*L*c/E)
   //   'S' - linear polarisation, max(P*L*c/E)
   //   'G' - circular polarisation, max(P*L*c/E)
   //   'i' - elliptical polarisation, max(P*L*c*A/E)
   //   's' - linear polarisation, max(P*L*c*A/E)
   //   'g' - circular polarisation, max(P*L*c*A/E)

**for 2G**

.. code-block:: bash

   // r   - un-modeled
   // i   - iota - wave (no dispersion correction)
   // p   - Psi - wave
   // l,s - linear
   // c,g - circular
   // e,b - elliptical (no dispersion correction)

   // low/upper case search (like 'i'/'I') & optim=false - standard MRA
   // low case search (like 'i')           & optim=true  - extract PCs from a single resolution
   // upper case search (like 'I')         & optim=true  - standard single resolution analysis
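For example, a standard un-modeled multi-resolution 2G search is selected with the following sketch (optim is the single-resolution flag referred to above; the settings are illustrative):

.. code-block:: bash

   strcpy(analysis,"2G");   // 2G pipeline
   search = 'r';            // un-modeled search
   optim  = false;          // standard multi-resolution analysis (MRA)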
Detectors
^^^^^^^^^^^^^^^

| List of the detectors included in the network.
| cWB already contains complete information about a set of existing or possible future detectors:

- L1: 4km Livingston
- H1: 4km Hanford
- H2: 2km Hanford
- V1: 3km Virgo
- I1: Indian
- J1: KAGRA

Moreover, it is possible to define a detector not included in this list by specifying its position on the Earth and the direction of its arms.

.. code-block:: bash

   int  nIFO = 3;          // size of network starting with first detector ifo[]
   char refIFO[4] = "L1";  // reference IFO
   char ifo[NIFO_MAX][8];
   for(int i=6;i<NIFO_MAX;i++) strcpy(ifo[i],"");
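As a sketch of how a detector not in the built-in list can be declared: the corresponding ifo[] entry is left empty and a detectorParams entry is filled in. The field order assumed here is {name, latitude, longitude, elevation, X-arm altitude, X-arm azimuth, Y-arm altitude, Y-arm azimuth}, and the values of the user-defined detector are purely illustrative:

.. code-block:: bash

   nIFO = 3;
   strcpy(ifo[0],"L1");
   strcpy(ifo[1],"H1");
   strcpy(ifo[2],"");    // empty -> user-defined detector, parameters taken from detParms[2]

   detectorParams detParms[3] = {
      {"L1", 30.5629,  -90.7742, 0.0, 0, (+90-197.716), 0, (-197.716)},   // built-in Livingston parameters
      {"H1", 46.4551, -119.4077, 0.0, 0, (+90-125.998), 0, (-125.998)},   // built-in Hanford parameters
      {"X1", 43.6314,   10.5045, 0.0, 0, (+90- 70.567), 0, ( -70.567)},   // hypothetical user-defined detector
   };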
Wavelet TF transformation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

These parameters define the level of the time-frequency decomposition and the settings used for the whitening.

.. code-block:: bash

   int levelD = 8;            // decomposition level
   double whiteWindow = 60.;  // [sec] time window dT. if = 0 - dT=T, where T is segment duration
   double whiteStride = 20.;  // [sec] noise sampling time stride

- **levelD**: decomposition level at which the pipeline applies the regression algorithm and the whitening procedure.

Cluster thresholds
^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: bash

   double x2or  = 1.5;      // 2 OR threshold
   double netRHO= 3.5;      // threshold on rho
   double netCC = 0.5;      // threshold on network correlation
   double bpp   = 0.0001;   // probability for pixel selection
   double Acore = sqrt(2);  // threshold for selection of core pixels
   double Tgap  = 0.05;     // time gap between clusters (sec)
   double Fgap  = 128.;     // frequency gap between clusters (Hz)
   double TFgap = 6.;       // threshold on the time-frequency separation between two pixels
   double fLow  = 64.;      // low frequency of the search
   double fHigh = 2048.;    // high frequency of the search

- **x2or**: (1G only) threshold on the pixel energies during the coherence stage: the energy in a single detector should not be too large with respect to the others.
- **netRHO**: clusters are selected in the production stage if rho is greater than netRHO.
- **netCC**: clusters are selected in the production stage if the network correlation is greater than netCC.
- **bpp**: Black Pixel Probability, the fraction of the most energetic pixels selected from the TF map to construct events.
- **Acore**: ...
- **Tgap** and **Fgap**: maximum gaps between two TF pixels at the same decomposition level that can still be grouped into a single event.
- **TFgap**: threshold on the time-frequency separation between two pixels.
- **fLow** and **fHigh**: frequency boundaries of the analysis. Note: these limits are applied directly in the TF decomposition, so the pipeline chooses the frequencies nearest to these values according to the decomposition level.

Wave Packet parameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Pixels Selection & Reconstruction (see `The WDM packets <#the-wdm-packets>`__)

|image23|

| **Select the pixel pattern used to produce the energy max maps for pixel selection**

.. code-block:: bash

   // patterns: "/" - ring-up, "\" - ring-down, "|" - delta, "-" line, "*" - single

   pattern =  0 - "*"  1-pixel  standard search
   pattern =  1 - "3|" 3-pixels vertical packet (delta)
   pattern =  2 - "3-" 3-pixels horizontal packet (line)
   pattern =  3 - "3/" 3-pixels diagonal packet (ring-up)
   pattern =  4 - "3\" 3-pixels anti-diagonal packet (ring-down)
   pattern =  5 - "5/" 5-pixels diagonal packet (ring-up)
   pattern =  6 - "5\" 5-pixels anti-diagonal packet (ring-down)
   pattern =  7 - "3+" 5-pixels plus packet (plus)
   pattern =  8 - "3x" 5-pixels cross packet (cross)
   pattern =  9 - "9p" 9-pixels square packet (box)
   pattern = else - "*" 1-pixel packet (single)

| **Select the reconstruction method**

.. code-block:: bash

   pattern==0              Standard Search : std-pixel    selection + likelihood2G
   pattern!=0 && pattern<0 Mixed    Search : packet-pixel selection + likelihood2G
   pattern!=0 && pattern>0 Packed   Search : packet-pixel selection + likelihoodWP
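For instance, to select the 5-pixel diagonal packet one could set in config/user_parameters.C (a sketch; the sign of the value selects the reconstruction method as listed above):

.. code-block:: bash

   pattern =  5;    // "5/" packet : packet-pixel selection + likelihoodWP (packed search)
   // pattern = -5; // same packet : packet-pixel selection + likelihood2G (mixed search)
   // pattern =  0; // standard search : std-pixel selection + likelihood2G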
Job settings
^^^^^^^^^^^^^^^^^^

This section sets the time length of the jobs (see also `How job segments are created <#how-job-segments-are-created>`__).

.. code-block:: bash

   // segments
   int    runID   = 0;     // run number, set in the production job
   double segLen  = 600.;  // Segment length [sec]
   double segMLS  = 300.;  // Minimum Segment Length after DQ_CAT1 [sec]
   double segTHR  = 30.;   // Minimum Segment Length after DQ_CAT2 [sec]
   double segEdge = 8.;    // wavelet boundary offset [sec]

- **runID**: job number to be analysed. This parameter is automatically overwritten when using condor submission (see `cwb_condor <#cwb-condor>`__) and the `cwb_inet <#cwb-inet>`__ command.
- **segLen** [s]: typical and maximum job length. It is the only possible length when super-lags are used (for super-lags see `Production parameters <#production-parameters>`__).
- **segMLS** [s]: minimum job length in seconds. It can happen that, after the application of the Data Quality, it is not possible to have a continuous period of length segLen; in that case the pipeline keeps the remaining period if it is longer than segMLS. This means that a job can have segMLS < length < segLen.
- **segTHR** [s]: minimum live time of each job after the application of DQ_CAT2. If a job of, say, 600 s has a period surviving CAT2 shorter than segTHR, it is discarded from the analysis. If segTHR = 0, this check is disabled.
- **segEdge** [s]: scratch period used by the pipeline for the wavelet decomposition. For each job the first and last segEdge seconds are not considered for the trigger selection.

Production parameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^

| The production stage consists of time-shifting the data of the detectors so that the reconstructed events are surely due to detector noise and not to gravitational waves.
| cWB can perform two types of shift: the first inside the job segment, the second between different job segments. We call the first case *lag shifts* and the second case *super-lag shifts*.
| In the **lag** case, the pipeline performs circular shifts on the data of a job. Suppose that a job has a length T: the pipeline can perform shifts of step ts, with a maximum number of shifts M such that M*ts <= T. Since the shifts are performed circularly, no data are lost, and shifting detector A with respect to B by a time K is the same as shifting detector B by -K with respect to A. So, considering N detectors composing the network, the number of possible lag shifts is (N-1)*M for each job. For the N>2 case, the algorithm can proceed in two different ways:

- shift only the first detector with respect to the others (inadvisable);
- randomly choose from the list of available shifts a subset to be used in the analysis, according to the user definition.

The randomization algorithm depends only on the number of detectors and on the maximum possible shift. It is possible to write the list of applied lags to a text file (see the sketch after the examples below). The lags are stored with a progressive number which identifies each lag univocally. The lag parameters are:

- **lagSize**: number of lags used (for the simulation stage it should be set to 1).
- **lagStep**: time step of the shifts.
- **lagOff**: progressive number in the lag list from which the selection of the subset starts.
- **lagMax**: maximum allowed shift. If lagMax=0, only the first detector is shifted and the maximum allowed shift is given by the lagSize parameter. If lagMax > 0 it is better to check that lagMax*lagStep < T, otherwise some lags could be lost.
- **lagMode**: whether to write (w) or read (r) the lag list to/from a file.
- **lagFile**: name of the file which is written/read according to the previous parameter. If lagMode=w and lagFile=NULL no file is written. If lagMode=r and lagFile=NULL the pipeline returns an error.
- **lagSite**: pointer to a size_t array, used to declare the detectors which share the same site location, like H1H2. This information is used by the built-in lag generator to associate the same lags to the detectors which are in the same site. If the detectors are all in different sites the default value must be used (lagSite=NULL).

  .. code-block:: bash

     Example : L1H1H2V1
     lagSite = new size_t[4];
     lagSite[0]=0; lagSite[1]=1; lagSite[2]=1; lagSite[3]=2;

- **shifts**: array giving the possibility to apply a constant circular shift to each detector (historical, no longer used).
- **mlagStep**: to limit the memory load, it is possible to cycle over the lagSize lags in subsets of size mlagStep instead of processing all the lags together. This reduces the memory load but increases the computational time.

**Examples :**

.. code-block:: bash

   2 detectors L1,H1 : 351 standard built-in lags (include zero lag)

     lagSize = 351;  // number of lags
     lagStep = 1.;   // time interval between lags = 1 sec
     lagOff  = 0;    // start from lag=0, include zero lag
     lagMax  = 0;    // standard lags

   the output lag list is :

     lag    ifoL1       ifoH1
     0      0.00000     0.00000
     1      1.00000     0.00000
     2      2.00000     0.00000
     ...    .........   .......
     350    350.00000   0.00000

   note : values ifoDX are in secs

   3 detectors L1,H1,V1 : 350 random built-in lags (exclude zero lag)

     lagSize = 351;  // number of lags
     lagStep = 1.;   // time interval between lags = 1 sec
     lagOff  = 1;    // start from lag=1, exclude zero lag
     lagMax  = 300;  // random lags : max lag = 300

   the output lag list is :

     lag    ifoL1       ifoH1       ifoV1
     1      158.00000   223.00000   0.00000
     2      0.00000     195.00000   236.00000
     3      28.00000    0.00000     179.00000
     ...    .........   .........   .........
     350    283.00000   0.00000     142.00000

   note : values ifoDX are in secs

   3 detectors L1,H1,V1 : load 201 custom lags from file

     lagSize = 201;  // number of lags
     lagOff  = 0;    // start from lag=0
     lagMax  = 300;  // random lags : max lag = 300
     lagFile = new char[1024];
     strcpy(lagFile,"custom_lags_list.txt");  // lag file list name
     lagMode[0] = 'r';                        // read mode

   an example of input lag list is :

     0     0     0     0
     1     0     1     200
     2     0     200   1
     3     0     3     198
     ...   ...   ...   ...
     200   0     2     199

   note : all values must be integers
          lags must be in the range [0:lagMax]
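For completeness, a sketch of how the generated lag list can be dumped to a text file, using the lagMode/lagFile parameters described above (the file name is purely illustrative):

.. code-block:: bash

   lagFile = new char[1024];
   strcpy(lagFile,"lag_list.txt");   // output file name (illustrative)
   lagMode[0] = 'w';                 // write mode : dump the applied lag list to lagFile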
In the **super-lags** case, the pipeline considers data of each detector belonging to different segments, i.e. shifted by a multiple of T. In this way the number of time lags can be increased easily, because it allows shifts between data larger than T (especially useful when only two detectors are available). Once the different segments are selected, the standard circular lag shifts are applied as if the different segments were the same one. The meaning of the parameters is similar to that of the lag case, but here the values are expressed in segments and not in seconds.

| A detailed description of the slag configuration parameters is here :

- :cwb_library:`CWB::Toolbox::getSlagList`

**Examples :**

.. code-block:: bash

   use standard segments

     slagSize = 0;  // Standard Segments : segments are not shifted, only lags are applied
                    // segments length is variable and it is selected in the range [segMLS:segLen]

   3 detectors L1,H1,V1 : select 4 built-in slags

     slagSize = 4;  // number of super lags
     slagMin  = 0;  // select the minimum available slag distance : slagMin must be <= slagMax
     slagMax  = 3;  // select the maximum available slag distance
     slagOff  = 0;  // start from the first slag in the list

   the output slag list is :

     SLAG   ifo[0]   ifo[1]   ifo[2]
     0      0        0        0
     1      0        1       -1
     2      0       -1        1
     3      0        2        1

   3 detectors L1,H1,V1 : load 4 custom slags from file

     slagSize = 4;  // number of super lags
     slagOff  = 0;  // start from the first slag in the list
     slagFile = new char[1024];
     strcpy(slagFile,"custom_slags_list.txt");  // slag file list name

   an example of input slag list is :

     1   0   -4    4
     2   0    4   -4
     3   0   -8    8
     4   0    8   -8

   note : all values must be integers

Simulation parameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

| The simulation stage allows to test the detection efficiency of the pipeline. It consists in injecting simulated waveforms (MDC) into the detector data. When the simulation stage is chosen, the pipeline performs the analysis only around the injection times (and not on the whole segments) to reduce the computational load.
| Waveforms are injected at different amplitudes: each waveform is injected at the same time for each amplitude factor, so the analysis is repeated for each factor.

.. code-block:: bash

   int    simulation = 0;   // 1 for simulation, 0 for production
   double iwindow = 5.;     // analysis time window for injections (Range = Tinj +/- iwindow/2)
   int    nfactor = 0;      // number of strain factors
   double factors[100];     // array of strain factors
   char   injectionList[1024]="";

- **simulation**: variable that sets the simulation mode: 1=simulation, 0=production. If set to 2, the injections are performed at constant network SNR over the sky instead of constant hrss.
- **iwindow**: time window around the injection time that is analysed (Tinj +/- iwindow/2).
- **nfactor**: number of factors used.
- **factors**: list of factors, whose meaning depends on the value of simulation:

  #. amplitude factors multiplying the hrss written in the injectionList;
  #. network SNR (the waveform is rescaled according to these values);
  #. time shift applied to the waveforms;
  #. progressive number referring to the multiple trials for an injection volume distribution.

- **injectionList**: path of the file containing all the information about the injections (waveform type, amplitude, source direction, detector arrival times, ...).
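As an illustration, a simulation setup rescaling the injection hrss by three factors could look like the following sketch (the factor values and the injection list path are purely illustrative):

.. code-block:: bash

   simulation = 1;    // simulation mode : factors rescale the injected hrss
   nfactor = 3;       // number of strain factors
   factors[0] = 0.5;  // illustrative amplitude factors
   factors[1] = 1.0;
   factors[2] = 2.0;
   strcpy(injectionList,"input/my_injections.inj");  // illustrative path to the injection list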
Data manipulating
^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to apply constant time shifts and/or uniform amplitude rescalings to the detector data. Here are the parameters that allow to do these things:

- Calibration

.. code-block:: bash

   double dcCal[NIFO_MAX];
   for(int i=0;i<NIFO_MAX;i++) dcCal[i]=1.0;

Possibility to apply a constant amplitude (calibration) factor to the data of each detector.

- MDC time shift

.. code-block:: bash

   // use this parameter to shift in time the injections (sec)
   // use {0,0,0} to set mdc_shift to 0
   // if {-1,0,0} the shift is automatically selected
   // {startMDC, stopMDC, offset}
   // see description in the method CWB::Toolbox::getMDCShift
   mdcshift mdc_shift = {0, 0, 0};

Possibility to apply a constant time shift (in seconds) to the injections (MDC). This allows to increase the statistics for the efficiency curves by running more simulation jobs, see `How to apply a time shift to the input MDC and noise frame files <#how-to-apply-a-time-shift-to-the-input-mdc-and-noise-frame-files>`__ .

Regulator
^^^^^^^^^^^^^^^^

.. code-block:: bash

   double delta = 1.0;   // [0/1] -> [weak/soft]
   double gamma = 0.2;   // set params in net5, [0/1]->net5=[nIFO/0],
                         // if net5>[threshold=(nIFO-1)] weak/soft[according to delta] else hard
   bool eDisbalance = true;

For the meaning of these parameters see `What is the role of the regulators in 1G analysis <#what-is-the-role-of-the-regulators-in-1g-analysis>`__ and What is the role of the regulators in the 2G analysis.

Sky settings
^^^^^^^^^^^^^^^^^^^

| For each event the pipeline computes the most probable location of the source in the sky. This is done by scanning a discrete grid of sky positions. cWB can use two grid types: one is HEALPix (`What is HEALPix <#what-is-healpix>`__) and the other is the native cWB grid. The cWB grid can use two coordinate systems: Earth-fixed and Celestial.
| In the Earth-fixed system phi runs on longitude, with 0 at Greenwich, and theta runs on latitude, with 0 at the North Pole and 180 at the South Pole.
| The Celestial system is ...
| The resolution of the cWB sky grid can be defined by the user, say L degrees. The angular difference between two consecutive points at the same longitude is equal to L, while the difference between two consecutive points at the same latitude depends on the latitude: DL = L/cos(lat).
| The HEALPix grid is more uniform over the sky. An example of the differences between the two grids is here: `What is HEALPix <#what-is-healpix>`__
| It is possible to restrict the sky grid to a limited region of the sky, either by limiting the range of longitude and latitude (see the sketch at the end of this section), or by creating a SkyMask which flags with a boolean 1/0 the sky positions of the grid that should be considered.

.. code-block:: bash

   bool   EFEC   = true;   // Earth Fixed / Celestial coordinates
   size_t mode   = 0;      // sky search mode
   double angle  = 0.4;    // angular resolution
   double Theta1 = 0.;     // start theta
   double Theta2 = 180.;   // end theta
   double Phi1   = 0.;     // start phi
   double Phi2   = 360.;   // end phi
   double mask   = 0.00;   // sky mask fraction
   char   skyMaskFile[1024]="";
   char   skyMaskCCFile[1024]="";
   size_t healpix= 0;      // if not 0 use healpix sky map (SK: please check if duplicated)
   int    Psave  = 0;      // Skymap probability to be saved in the final output root file
                           // (saved if !=0 : see nSky)
   long   nSky   = 0;      // if nSky>0 -> # of skymap prob pixels dumped to ascii
                           // if nSky=0 -> (#pixels==1000 || cum prob > 0.99)
                           // if nSky<0 -> nSky=-XYZ... save all pixels with prob < 0.XYZ...
   double precision = 0.001;     // Error region: No = nIFO*(K+KZero)+precision*E
   size_t upTDF  = 4;            // upsample factor to obtain rate of TD filter :
                                 // TDRate = (inRate>>levelR)*upTDF
   char   filter[1024] = "up2";  // 1G delay filter suffix: "", or "up1", or "up2" (SK: may replace it with tdUP)

- **EFEC**: boolean selecting Earth coordinates (true) or Celestial coordinates (false) (DELETE??????)
- **mode**: if set to 0, the pipeline considers the full grid. If set to 1, the pipeline excludes from the grid the sky locations whose network time delays are equal to those of an already considered sky location. This parameter should not be changed.
- **angle**: angular resolution of the sky grid, used for the cWB grid.
- **Theta1** and **Theta2**: latitude boundaries.
- **Phi1** and **Phi2**: longitude boundaries.
- **skyMaskFile** and **mask**

  | **1G pipeline** : skyMaskFile labels each sky location with a probability value; the integrated probability is equal to 1. The **mask** parameter selects the fraction of the most probable pixels that the pipeline considers in the analysis. This uses Earth coordinates.
  | **2G pipeline** : file assigning a number to each sky location. If the number is different from 0, the sky location is used. This uses Earth coordinates. As an alternative to the file name (generic skymask) it is possible to use the built-in skymask. The built-in skymask is a circle defined by its center in Earth coordinates and its radius in degrees.
  | The syntax is :

  .. code-block:: bash

     --theta THETA --phi PHI --radius RADIUS
     define a circle centered in (THETA,PHI) and radius=RADIUS
     THETA : [-90,90], PHI : [0,360], RADIUS : degrees

  | Example : sprintf(skyMaskFile,"--theta -20.417 --phi 210.783 --radius 10");

- **skyMaskCCFile**

  | File assigning a number to each sky location. If the number is different from 0, the sky location is used. This uses Celestial coordinates. As an alternative to the file name (generic skymask) it is possible to use the built-in skymask. The built-in skymask is a circle defined by its center in celestial coordinates and its radius in degrees.
  | The syntax is :

  .. code-block:: bash

     --theta DEC --phi RA --radius RADIUS
     define a circle centered in (DEC,RA) and radius=RADIUS
     DEC : [-90,90], RA : [0,360], RADIUS : degrees

  | Example : sprintf(skyMaskCCFile,"--theta -20.417 --phi 240 --radius 10");

  To see how to define a skymask with a file see `How to create a celestial skymask <#how-to-create-a-celestial-skymask>`__

- **healpix**: HEALPix parameter; if equal to 0 the pipeline uses the cWB grid, if > 0 the pipeline uses the HEALPix grid.
- **Psave**: skymap probability to be saved in the final output root file (saved if !=0 : see nSky).
- **nSky**

  | This is the number of sky positions reported in the ascii file and (if Psave=true) in the root file.
  | If nSky = 0, the number of sky positions reported is such that the cumulative probability over the sky reaches 0.99. If this number is greater than 1000, the list is truncated at 1000.
  | if nSky>0 -> # of skymap prob pixels dumped to ascii
  | if nSky=0 -> (#pixels==1000 || cum prob > 0.99)
  | if nSky<0 -> nSky=-XYZ... save all pixels with prob < 0.XYZ...

- **precision**

  | precision = GetPrecision(csize,order);
  | sets the parameters for the management of big cluster events
  | csize : cluster size threshold
  | order : order of the healpix resampled skymap (clusters with size >= csize are downsampled to skymap(order))

- **upTDF**: ...
- **filter**: ...
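For example, a sketch restricting the cWB grid to a band around the equator with a finer resolution (the values are purely illustrative):

.. code-block:: bash

   EFEC   = true;   // Earth fixed coordinates
   angle  = 0.2;    // finer angular resolution of the cWB grid [deg]
   Theta1 = 60.;    // analyse only the band theta = [60,120] deg
   Theta2 = 120.;
   Phi1   = 0.;     // keep the full phi range
   Phi2   = 360.;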
CED parameters
^^^^^^^^^^^^^^^^^^^^^

These are the parameters regarding the `CED <#ced>`__:

.. code-block:: bash

   bool   cedDump = false;  // dump ced plots with rho>cedRHO
   double cedRHO  = 4.0;

- **cedDump**: boolean value; if true the CED pages are produced, otherwise they are not.
- **cedRHO**: CED pages are produced only for triggers which have rho > cedRHO.

Output settings
^^^^^^^^^^^^^^^^^^^^^^

The pipeline can produce different pieces of information in different formats. Here are the parameters controlling them.

.. code-block:: bash

   unsigned int jobfOptions = CWB_JOBF_SAVE_DISABLE;  // job file options
   bool dumpHistory = true;   // dump history into output root file
   bool dump        = true;   // dump triggers into ascii file
   bool savemode    = true;   // temporary save clusters on disc

- **jobfOptions**: job file options (flags controlling what is saved in the job files).
- **dumpHistory**: save in the output file all the parameters and configuration files used.
- **dump**: save the trigger information also in ASCII files, in addition to the ROOT files.
- **savemode**: temporarily save the cluster information on disk, to save memory.

Working directories
^^^^^^^^^^^^^^^^^^^^^^^^^^

| The analysis needs a working directory, called *nodedir*, where the temporary files necessary for the analysis are stored.
| This directory is automatically chosen on the ATLAS and CIT clusters, but for other clusters it should be specified.

.. code-block:: bash

   // read and dump data on local disk (nodedir)
   char nodedir[1024] = "";
   cout << "nodedir : " << nodedir << endl;

| The following directories define where the results and the various information about the analysis are stored.
| We suggest not to change them; however, for completeness, we report all the directories.
| The content of each directory is already explained in `pre-production <#pre-production>`__

.. code-block:: bash

   char work_dir[512];
   sprintf(work_dir,"%s",gSystem->WorkingDirectory());

   char config_dir[512] = "config";
   char input_dir[512]  = "input";
   char output_dir[512] = "output";
   char merge_dir[512]  = "merge";
   char condor_dir[512] = "condor";
   char report_dir[512] = "report";
   char macro_dir[512]  = "macro";
   char log_dir[512]    = "log";
   char data_dir[512]   = "data";
   char tmp_dir[512]    = "tmp";
   char ced_dir[512]    = "report/ced";
   char pp_dir[512]     = "report/postprod";
   char dump_dir[512]   = "report/dump";
   char www_dir[512];

Files list
^^^^^^^^^^^^^^^^^

These are the parameters describing the lists of frame files and data quality files.

- Frame files:

.. code-block:: bash

   // If all mdc channels are in a single frame file -> mdc must be declared in the nIFO position
   char frFiles[2*NIFO_MAX][256];
   for(int i=0;i<2*NIFO_MAX;i++) strcpy(frFiles[i],"");

   // frame reading retry time (sec) : 0 -> disable
   // retry time = frRetryTime*(num of trials) : max trials = 3
   int frRetryTime=60;

   char channelNamesRaw[NIFO_MAX][50];
   char channelNamesMDC[NIFO_MAX][50];

| If we have N detectors, the [0,N-1] positions refer to the detector data frame files. The [N,2N-1] positions are for the MDC frame files used in the simulation stage. If the frame file is the same for all MDCs, it is sufficient to fill only the position N (see the sketch below).
| The channel names of the detector strain and of the MDC strain are saved in channelNamesRaw and channelNamesMDC respectively.
| Sometimes the frames are temporarily not available for reading; if the pipeline is not able to read a frame, it retries after frRetryTime*(number of trials) seconds. After a maximum of 3 trials, the pipeline exits with an error.
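As an illustration, for a three-detector network with all MDC channels in a single set of frame files, the frame lists and channel names could be set as in the following sketch (file and channel names are purely illustrative):

.. code-block:: bash

   // detector frame file lists (positions [0,N-1])
   strcpy(frFiles[0],"input/L1_frames.in");
   strcpy(frFiles[1],"input/H1_frames.in");
   strcpy(frFiles[2],"input/V1_frames.in");
   // single MDC frame list for all detectors (position N = nIFO)
   strcpy(frFiles[3],"input/MDC_frames.in");

   // strain and MDC channel names
   strcpy(channelNamesRaw[0],"L1:LDAS-STRAIN");
   strcpy(channelNamesRaw[1],"H1:LDAS-STRAIN");
   strcpy(channelNamesRaw[2],"V1:h_16384Hz");
   strcpy(channelNamesMDC[0],"L1:GW-H");
   strcpy(channelNamesMDC[1],"H1:GW-H");
   strcpy(channelNamesMDC[2],"V1:GW-H");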
- Data quality

.. code-block:: bash

   // {ifo, dqcat_file, dqcat[0/1/2], shift[sec], inverse[false/true], 4columns[true/false]}
   int nDQF = 0;
   dqfile DQF[20];

See `data quality <#data-quality>`__ for details on how to write the data quality entries.

Plugin
^^^^^^^^^^^^^

These are the parameters regarding the `Plugins <#plugins>`__:

.. code-block:: bash

   TMacro plugin;        // Macro source
   TMacro configPlugin;  // Macro config
   plugin.SetName("");
   configPlugin.SetName("");
   bool dataPlugin = false;  // if dataPlugin=true disable read data from frames
   bool mdcPlugin  = false;  // if mdcPlugin=true disable read mdc from frames
   bool dcPlugin   = false;  // if dcPlugin=true disable the built-in data conditioning

- **plugin**: insert the Plugin source code.
- **configPlugin**: insert the Plugin configuration source code.
- **plugin.SetName("")**: insert the compiled Plugin code.
- **configPlugin.SetName("")**: insert the compiled Plugin configuration code.
- **dataPlugin**: disable the reading of the detector strain from frames.
- **mdcPlugin**: disable the reading of the MDC strain from frames.
- **dcPlugin**: disable the built-in conditioning of the data (`1G conditioning <#1g-conditioning>`__ or `2G conditioning <#2g-conditioning>`__).
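As a sketch, a plugin and its configuration macro could be attached as follows (the macro paths are hypothetical; the example assumes the TMacro constructor reads the source from the given file and that the plugin produces the MDC on the fly, hence mdcPlugin=true):

.. code-block:: bash

   plugin = TMacro("macro/CWB_Plugin_MyMDC.C");               // hypothetical plugin source
   configPlugin = TMacro("macro/CWB_Plugin_MyMDC_Config.C");  // hypothetical plugin configuration
   mdcPlugin = true;   // MDC produced by the plugin, not read from frames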