///////////////////////////////////////////////////////////////
/// ///
/// This is the History file for the Laura++ package. ///
/// ///
///////////////////////////////////////////////////////////////
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Laura++ v3r5
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
6th August 2019 Thomas Latham
* Add some extra charmonium states to list of known resonances
* Minor modifications to code for importing parameter values from file
17th May 2019 Thomas Latham
* Fix licences and Doxygen in pole, nonresonant form factors, and rescattering line shapes
* Add journal reference for pole line shape
* Fix class names and add Doxygen to LauCalcChiSq, LauResultsExtractor, LauMergeDataFiles
* Update Doxyfile from more recent Doxygen version
* Make consistent the use of ClassDef and ClassImp everywhere
* Add functions to return the "sanitised" names of parent and daughters
8th January 2019 Thomas Latham
* Workaround for issue with splitting toy MC files when generating large number of experiments
5th December 2018 Thomas Latham
* Move sources and headers for the utilities library into the main library
  - makes examples dir executables only (in preparation for CMake build)
22nd, 23rd July and 4th August 2018 Juan Otalora
* Add pole, nonresonant form factors, and rescattering line shapes
17th April 2018 Daniel O'Hanlon
* Fix bug in rho-omega mixing fit-fraction calculation: rho fit-fraction was not normalised to the total DP rate
23rd March 2018 Thomas Latham
* Some fixes in LauRhoOmegaMix that propagate the following settings to the subcomponents:
- spinType_
- flipHelicity_
- ignoreMomenta_
- ignoreSpin_
- ignoreBarrierScaling_
23rd February 2018 Thomas Latham
* Add section on structure of package to README file (requested by CPC technical editor)
* Add copyright notice to files in test directory
21st February 2018 Thomas Latham
* Improve comments in functions calculating helicity angles in LauKinematics
19th February 2018 Daniel O'Hanlon
* Fix bug in LauCPFitModel introduced in code for importing parameters (25/01/18)
  - When parameters are asked to be randomised via useRandomInitFitPars, those
    parameters that were already fixed in the model, in addition to those that
    are fixed on import, were being freed.
19th February 2018 Daniel O'Hanlon
* Bug fixes in rho/omega fit fractions calculation
25th January 2018 Daniel O'Hanlon
* Add feature for importing parameters into LauCPFitModels from previous fit output files.
23rd January 2018 Daniel O'Hanlon
* Calculate separate rho and omega fit-fractions for LauRhoOmegaMix, when turned on with calculateRhoOmegaFitFractions in LauIsobarDynamics.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Laura++ v3r4
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
16th January 2018 Thomas Latham
* Update licence for all files to the Apache Software License Version 2.0
14th December 2017 Thomas Latham
* Correct LauFlatteRes to use the exact formulae from the papers that report the default parameter values for each supported resonance
- Deals correctly now with the cases where the m0 is absorbed into the couplings
- Now uses the correct value of g1 for the a_0(980) (needed to be squared)
- Also sets the mass values to those reported in those papers
- Added printout to specify what is being done in each case
* Improve the consistency between the weightEvents functions in LauSimpleFitModel and LauCPFitModel
- Also remove the unnecessary division by ASqMax in LauIsobarDynamics::getEventWeight
4th December 2017 Thomas Latham
* Allow background yields to be specified as any LauAbsRValue (i.e. LauParameter or now also LauFormulaPar)
30th November 2017 Thomas Latham
* In LauFitDataTree::readExperimentData add a check that the tree contains the branch "iExpt" and if not:
- If the requested experiment is 0 then read all data in the tree (and print a warning)
- Otherwise print an error message and return
29th November 2017 Thomas Latham
* Improve error messages in LauCPFitModel::weightEvents
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Laura++ v3r3
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
23rd November 2017 Thomas Latham
* Add an example written as a python script: GenFit3pi.py
* Various improvements to the other example code
23rd November 2017 Thomas Latham
* Fix bug in the LauAbsResonance constructor used by the K-matrix
- "resonance" charge is now set to the sum of the daughter charges
1st November 2017 Thomas Latham
* Fix bug in LauFlatteRes
- m_0 factor multiplying the width was, if useAdlerTerm_ is true, replaced by
f_Adler rather than being multiplied by it
- only affected case where using LauFlatteRes to describe K_0*(1430)
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Laura++ v3r2
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
18th October 2017 Thomas Latham
* Modify LauDaughters::testDPSymmetry to:
- produce an ERROR message and exit if the symmetric DP does not have its daughters arranged correctly
- detect also flavour-conjugate DPs and set a corresponding flag that can be retrieved via LauDaughters::gotFlavourConjugateDP
- in this case if the daughters are sub-optimally arranged print a WARNING
* Modify LauIsobarDynamics::calcDPNormalisationScheme so that any narrow resonance regions are symmetrised appropriately if the DP is fully-symmetric, symmetric or is flavour-conjugate (and in this last case, symmetrisation of the integration has been forced via a new boolean function)
11th October 2017 Thomas Latham
* Allow the user to specify the randomiser to be used to randomise the initial values of the isobar parameters
- simply use the new static function LauAbsCoeffSet::setRandomiser
- if not specified it defaults to the current use of LauRandom::zeroSeedRandom
10th October 2017 Thomas Latham
* Make symmetrisation of vetoes automatic in LauVetoes for symmetric DPs
- but add new allowed indices (4 and 5) to allow vetoes to be applied to mMin or mMax if desired
* Also apply to fully symmetric DPs
* In addition implement the more efficient calculation of the amplitude in the fully symmetric case
23rd September 2017 Thomas Latham
* Fix expressions for Covariant option for Blatt-Weisskopf momentum for spin > 1
* Make use of setSpinFormalism to set the formalism to Legendre for NR types (instead of kludging with ignoreMomentum)
15th September 2017 Thomas Latham
* Various improvements to the examples
6th September 2017 Thomas Latham
* Improve doxygen comments for event-embedding functions in fit models
5th September 2017 Thomas Latham
* Improve efficiency of Covariant spin factor calculations
31st August 2017 Thomas Latham
* Add further option for parent Blatt-Weisskopf momentum - Covariant (corresponding to Covariant spin factor)
8th August 2017 Thomas Latham
* Implement expressions for the spin factor based on covariant tensor formalism
* Make it much simpler to switch between formalism options via new functions in LauResonanceMaker:
- setSpinFormalism, setBWType, setBWBachelorRestFrame
- default settings are unchanged (and are equivalent to setting LauAbsResonance::Zemach_P, LauBlattWeisskopfFactor::BWPrimeBarrier, LauBlattWeisskopfFactor::ResonanceFrame, respectively)
21st July 2017 Thomas Latham
* Add note to README file about compilation issue on Ubuntu 16.04 LTS
21st July 2017 Thomas Latham
* Create public functions to update the kinematics based on one invariant mass and the corresponding helicity angle
20th June 2017 Daniel O'Hanlon
* Terminate when asked to read a non-existent K-matrix parameter file
8th June 2017 Thomas Latham
* Fix compilation error on gcc 4.9
30th May 2017 Thomas Latham
* Permit different efficiency histograms to be defined in classic/square DP and force enable calculation of square DP coords (here and for background PDFs) if required
29th May 2017 Thomas Latham
* Propagate information on EDM from the minimiser to the fit models and into the fit results ntuple
29th May 2017 Thomas Latham
* Ensure that the kinematics will calculate the square DP co-ordinates if the integration requires them
29th May 2017 Thomas Latham
* Reintegrate the RooFit-slave branch into the trunk (and delete the branch)
28th March 2017 Thomas Latham
(in branch for developing RooFit-based slave)
* Rename cacheFitData to verifyFitData in all fit models and RF-slave
28th March 2017 Daniel O'Hanlon
* Fix bug in LauCPFitModel::weightEvents
24th March 2017 Thomas Latham
(in branch for developing RooFit-based slave)
* Refactor code between fit models, master, slave and fit-object classes
22nd March 2017 Thomas Latham
(in branch for developing RooFit-based slave)
* Make the compilation of the RF-based slave class, and its associated example binary, optional
22nd March 2017 Thomas Latham
(in branch for developing RooFit-based slave)
* Complete working example of RooFit-based slave
- Have identified some scope for refactoring of code
- Also want to make the compilation of this class, and its associated example binary, optional
15th March 2017 Mark Whitehead
(in branch for developing time-dependent model)
* Handle event-by-event mistag probabilities
7th March 2017 Thomas Latham
* Rename the command for weighting events from "reweight" to "weight"
3rd March 2017 Daniel O'Hanlon
* Implement LauCPFitModel::weightEvents based on the same function in LauSimpleFitModel
1st March 2017 Thomas Latham
(in branch for developing RooFit-based slave)
* Fix root-cling related errors
28th February 2017 Thomas Latham
(in branch for developing RooFit-based slave)
* Start work on creation of RooFit-based slave class
- Will allow RooFit-based fitters to plug-in straightforwardly to the simultaneous fitting (JFit) framework
20th February 2017 Thomas Latham
* Add warning messages to LauBkgndDPModel::setBkgndHisto when supplied background histogram pointers are null
31st January 2017 Thomas Latham
* Audit of the code to automate the creation of the integration binning scheme.
- Fix one bug, some tidy-ups, and reintroduce a couple of lost features:
- the use of a single square-DP grid if a narrow resonance is found in m12
- extension of region down to threshold if the narrow resonance is close to threshold
22nd January 2017 Daniel O'Hanlon
* Fix bug in automated integration scheme where resonances were assumed to have been added in order of ascending mass
14th December 2016 Daniel O'Hanlon
* Add several light resonances to the list in LauResonanceMaker
13th December 2016 Daniel O'Hanlon
* Automate the determination of the integration scheme for an arbitrary number of narrow resonances in m13 and m23
12th December 2016 Daniel O'Hanlon
* Efficiency saving from modifying the behaviour of LauIsobarDynamics::calculateAmplitudes, LauIsobarDynamics::resAmp and LauIsobarDynamics::incohResAmp in the case of symmetric DPs
- Previously, there were 2N calls to LauKinematics::flipAndUpdateKinematics for N resonances, now reduced to 2
21st November 2016 John Back
* Added two K-matrix examples for creating pedagogical plots (with associated data files):
- B3piKMatrixPlots for K-matrix elements, propagator, and pole and SVP production terms
- B3piKMatrixMassProj for pole and SVP mass projections generated over the DP
17th November 2016 John Back
* Modifications of the K matrix implementation:
- Changed the format of the parameter file to use keywords, with updated examples
- The scattering coefficients are now symmetrised, unless "ScattSymmetry 0" is used
- Allow the option to use the Adler zero suppression term for the production poles and SVPs (default = off).
These are specified with the useProdAdler boolean in the LauKMatrixProdPole/SVP constructors
- Added a few helper functions for obtaining coefficients and (internal) matrices
- Added a method to return the Lorentz-invariant transition amplitude for plots/tests
1st November 2016 Wenbin Qian
* Add calculation of so-called covariant factors for spin amplitude
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Laura++ v3r1
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
9th September 2016 Thomas Latham
* Modification of LauFitNtuple to check the size of the covariance matrix wrt known number of parameters and act accordingly:
- If it is empty, just fill a diagonal correlation matrix and issue a warning.
- If it results from a failed first stage of a two-stage fit then the correlation matrix is padded with 1s and 0s for the parameters that were fixed in the first stage.
* Remove the feature that allows parameters to float in first stage of a two-stage fit but to be fixed in second.
* Minor fix to LauAbsCoeffSet: the names of parameters would be mangled if they were the same as the basename, e.g. A1_A would become A1_ while A1_B would be correct.
8th September 2016 Thomas Latham
* Modifications to LauResonanceInfo to allow customisation of which extra parameters are shared between charge conjugate or shared-parameter records.
- Where the parameters are not shared they are independently created instead of being cloned.
* Modification of LauRhoOmegaMix to take advantage of the above to have the magB and phiB parameters independent.
* Addition to LauResonanceMaker of rho(770)_COPY record that is a shared record with rho(770) - this allows the above new feature to be used.
* Minor unrelated improvement to information messages in LauIsobarDynamics.
25th August 2016 John Back
* Modified LauRhoOmegaMix to allow either a RelBW or GS lineshape for the rho, as well as
allowing the option to set the second-order denominator term to be equal to unity.
Also fixed the bug where the spinTerm was not included in the rho-omega amplitude.
Changed the LauAbsResonance enum from RhoOmegaMix to RhoOmegaMix_GS, RhoOmegaMix_RBW,
RhoOmegaMix_GS_1 and RhoOmegaMix_RBW_1, where _1 sets the denominator term to unity and
_GS or _RBW sets the appropriate rho lineshape. The omega is always a RelBW.
* Added to LauAbsResonance: ignoreSpin and ignoreBarrierScaling boolean flags, which ignore
either the spin terms or the barrier scaling factors for the amplitude. The latter does not
turn off the resonance barrier factor for mass-dependent widths
* Added to LauResonanceMaker: getResInfo(resonanceName), which retrieves the resonance information.
This is used to obtain the PDG omega values for initialising LauRhoOmegaMix
* Made LauRelBreitWignerRes ignore momentum-dependent terms for the resonance width if ignoreMomenta is set.
This is used in the LauRhoOmegaMix for the omega lineshape where its width does not depend on momentum
23rd May 2016 John Back
* Added new lineshape model for rho-omega mass mixing, LauRhoOmegaMix.
12th April 2016 Thomas Latham
* Switch to integrating in square DP when narrow resonances are found in m12.
- The integration grid size can be specified by the user
19th January 2016 John Back
* Correct the f(m^2) factor in the denominator of the LauGounarisSakuraiRes
lineshape to use Gamma_0 instead of Gamma(m)
14th January 2016 Thomas Latham
* Documentation improvements
7th December 2015 Thomas Latham
* Resolve bug that meant the order of resonances in LauIsobarDynamics was assumed to match with the order in which the complex coefficients are supplied to the fit model
- The ordering of resonances is defined by LauIsobarDynamics:
- Firstly all coherent resonances in order of addition
- Followed by all incoherent resonances in order of addition
- The complex coefficients are now rearranged to match that order
- Printout of the model summary at the end of initialisation has been enhanced to indicate the ordering
- Doxygen updated to reflect these changes
12th November 2015 Daniel Craik
* Added support for Akima splines and linear interpolation to Lau1DCubicSpline
* LauAbsModIndPartWave, LauModIndPartWaveRealImag and LauModIndPartWaveMagPhase updated to allow choice of spline interpolation method
* LauEFKLLMRes updated to use Akima splines
10th November 2015 Thomas Latham & Daniel Craik
* Add the EFKLLM form-factor model for the Kpi S-wave and an example using this lineshape
* Modify LauResonanceMaker::getResonance to use a switch for greater clarity and easier checking on missing cases
4th November 2015 Daniel Craik
* Add checks to LauIsobarDynamics::addResonance and LauIsobarDynamics::addIncohResonance to stop the wrong type of LauResonanceModel being used
- LauAbsResonance::isIncoherentModel method added to identify incoherent models
8th September 2015 Mark Whitehead
* Add the ability to modify the error of parameters via the CoeffSet
- setParameterError added to LauAbsCoeffSet
* Tweak the handling of initial error values passed to MINUIT (to determine initial step size) in LauMinuit
7th September 2015 Mark Whitehead
* Add the ability to Gaussian constrain parameters via the CoeffSet
- addGaussianConstraint added to LauAbsCoeffSet
12th June 2015 Thomas Latham
* Modifications to Belle-style nonresonant models
- LauBelleNR modified to use pure Legendre polynomials of cos(theta) in the spin term (i.e. to remove the q*p factors)
- New form added to LauBelleSymNR (LauAbsResonance::BelleSymNRNoInter) that removes the interference term between the two DP halves
- The new form also works with non-zero spin (warning added if non-zero spin specified for BelleSymNR and TaylorNR)
8th June 2015 Thomas Latham
* Further work on the blinding mechanism:
- New method added LauParameter::blindParameter that activates the blinding.
- The rest of the framework updated to use another new method LauParameter::unblindedValue in all likelihood calculations etc.
- Example GenFitNoDP updated to include lines to optionally blind the yield parameters.
29th May 2015 Daniel Craik
* Added LauBlind class for blinding and unblinding a value with an offset based on a blinding string
26th May 2015 Daniel Craik
* Stopped LauCPFitModel passing fixed signal/background yields or
asymmetries to Minuit to avoid hitting limit of 101 fixed parameters
22nd April 2015 Daniel Craik
* Updated MIPW classes to use Lau1DCubicSpline
19th April 2015 Daniel Craik
* Added Lau1DCubicSpline class for 1D spline interpolation
26th March 2015 Thomas Latham
* Reworked MIPW code into abstract base class and derived classes
to allow different representations of the amplitude at each knot
31st December 2015 Daniel Craik
* Added unbinned goodness of fit tests to examples
12th January 2015 Daniel Craik
* Calculate effective masses for virtual resonances above the upper kinematic limit
10th December 2014 Daniel Craik
* Added coefficient sets to extract gamma from a simultaneous fit to CP and nonCP final states,
such as the B0->D_CP K pi and B0->D0bar K pi Dalitz plots, as proposed in Phys. Rev. D79, 051301 (2009)
- LauPolarGammaCPCoeffSet uses the CP parameters r, delta and gamma directly
- LauRealImagGammaCPCoeffSet parameterises CPV as X_CP+/- and Y_CP+/-
- LauCartesianGammaCPCoeffSet parameterises CPV as X_CP, Y_CP DeltaX_CP DeltaY_CP
- Fixed CP parameters are not passed to the fitter so the same coefficient sets can be used for both the
CP and nonCP Dalitz plots
- LauPolarGammaCPCoeffSet allows for a single gamma parameter to be shared between multiple resonances
- LauAbsCoeffSet::adjustName made virtual to allow global variables such as gamma to not receive a prefix
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Laura++ v3r0p1
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
19th June 2015 Thomas Latham
* Factor out the JFit slave code from LauAbsFitModel into a new base class LauSimFitSlave
19th June 2015 Thomas Latham
* Fix check in LauIsobarDynamics::calcDPNormalisationScheme to avoid using hardcoded number
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Laura++ v3r0
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
24th October 2014 Thomas Latham
* Fixed bug in floating of Blatt-Weisskopf barrier radii
- The values at the pole mass were not being updated when the radii changed
21st October 2014 Daniel Craik
* Fixed bug in LauIsobarDynamics where multiple incoherent amplitudes led to nonsensical fit fractions
17th October 2014 John Back
* Added the ability to calculate the transition amplitude matrix T in LauKMatrixPropagator,
as well as a few other minor speed-up changes and code checks. Example/PlotKMatrixTAmp.cc
can be used to check the T amplitude variation, phase shift and inelasticity, for a given
K matrix channel, as a function of the invariant mass squared variable s
15th October 2014 Thomas Latham
* Add methods to LauIsobarDynamics to make the integration binning more tunable by the user:
- setNarrowResonanceThreshold - modify the value below which a resonance is considered to be narrow (defaults to 0.02 GeV/c2)
- setIntegralBinningFactor - modify the factor by which the narrow resonance width is divided to obtain the bin size (defaults to 100)
* Print warning messages if the memory usage is likely to be very large
13th October 2014 Thomas Latham
* Modify Makefile to allow compilation with ROOT 6 (in addition to maintaining support for ROOT 5)
* Fix a few compilation errors on MacOSX 10.9
13th October 2014 Daniel Craik
* Update LauModIndPartWave to allow knots at kinematic limits to be modified
- Add new method setKnotAmp to modify existing knots (and the knot at the upper kinematic limit which is automatically added at initialisation)
* Update doxygen for LauIsobarDynamics::addIncoherentResonance to mention that incoherent resonances must be added last
10th October 2014 Thomas Latham
* Add new method to LauResonanceMaker to set whether the radius of a given Blatt-Weisskopf category should be fixed or floated
* Modify the methods of LauResonanceMaker to set the radius value and whether it should be floated so that they work before and after the resonances have been created
9th October 2014 John Back
* Corrected the eta-eta' and 4pi phase space factors in LauKMatrixPropagator,
which is used for the K-matrix amplitude:
- calcEtaEtaPRho() does not include the mass difference term m_eta - m_eta'
following the recommendation in hep-ph/0204328 and from advice from M Pennington
- calcFourPiRho() incorporates a better parameterisation of the double integral of Eq 4 in
hep-ph/0204328 which avoids the exponential increase for small values of s (~< 0.1)
- More detailed comments are provided in the above two functions to explain what is
going on and the reason for the choices made
6th October 2014 Thomas Latham
* Implement the mechanism for floating Blatt-Weisskopf barrier factor radius parameters
30th September 2014 Thomas Latham
* Fix issue in the checks on toy MC generation validity
- in the case of exceeding max iterations it was possible to enter an infinite loop
- the checks now detect all three possible states:
- aSqMaxSet_ is too low (generation is biased) => increase aSqMaxSet_ value
- aSqMaxSet_ is too high => reduce aSqMaxSet_ value to improve efficiency
- aSqMaxSet_ is high (causing low efficiency) but cannot be lowered without biasing => increase iterationsMax_ limit
* Update resonance parameter in LauResonanceMaker to match PDG 2014
* Modify behaviour when TTree objects are saved into files to avoid having multiple cycle numbers present
29th September 2014 Daniel Craik
* Add support for incoherent resonances in the signal model
- LauIsobarDynamics updated to include incoherent terms
- ABC for incoherent resonances, LauAbsIncohRes, added deriving from LauAbsResonance
- LauGaussIncohRes added to implement a Gaussian incoherent resonance, deriving from LauAbsIncohRes
- Small changes to various other classes to enable incoherent terms
* Fixed small bug in LauMagPhaseCoeffSet which could hang if phase is exactly pi or -pi
* Added charged version of the BelleNR resonances to LauResonanceMaker
* Updated parameters in LauConstants to match PDG 2014
14th July 2014 Thomas Latham
* Add initial support for fully-symmetric final states such as B0 -> KS KS KS
- Performs the symmetrisation of the signal model
- Background (and efficiency) histogram classes need some work if the user wants to provide folded histograms
8th July 2014 Daniel Craik
* Add class for model-independent partial wave
- Uses splines to produce a smooth amplitude from a set of magnitude and phase values at given invariant masses
- The individual magnitudes and phases can be floated in the fit
16th June 2014 Thomas Latham
* Allow floating of resonance parameters in simultaneous fits
13th June 2014 Thomas Latham
* Fix bug in LauResonanceInfo cloning method, where the width parameter was given a clone of the mass
10th June 2014 Thomas Latham
* Add new function to allow sharing of resonance parameters between components that are not charged conjugates, e.g. LASS_BW and LASS_NR
9th June 2014 Thomas Latham and Daniel Craik
* Fix bug in the new integration scheme
- Was not accounting for cases where several resonances share a floating parameter
- Meant that the integrals and caches for that resonance were not being updated
* Introduce a change in the implementation of the helicity flip for neutral parent decays
- Prior to this change the helicity was flipped for any neutral resonance in the decay of a neutral particle.
- Now the flip no longer occurs in flavour-specific decays (such as Bs0 -> D0bar K- pi+ or B0 -> K+ pi- pi0) since it is only required in flavour-conjugate modes (such as B0 -> KS pi+ pi-).
- This does not affect any physical results but it does contribute a pi phase flip to some odd-spin resonances (for example K*(892)0 in Bs0->D0barKpi).
- Therefore results are not necessarily comparable between fits run before and after this changeset.
- This change will first be released in v3r0.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Laura++ v2r2
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
5th June 2014 Thomas Latham
* Fix issue in asymmetric efficiency histogram errors
- Fluctuation of bins was being sampled incorrectly - the area on each side of the peak was assumed to be the same
5th June 2014 Thomas Latham
(in branch for release in v3r0)
* Introduce intelligent calculation of amplitudes during recalculation of integrals and recaching of data points
- Floating resonance parameters is now much more efficient
* Make resonance parameters second-stage, also improves fit timing when floating them
3rd June 2014 Rafael Coutinho
* Implement generation of toy MC from fit results in fitSlave
27th May 2014 Thomas Latham
(in branch for release in v3r0)
* Complete audit of special functions
* Remove unnecessary LauAbsDPDynamics base class and move all functionality into LauIsobarDynamics
20th May 2014 Daniel Craik
(in branch for release in v3r0)
* Add support for K*_0(1430) and a_0(980) to LauFlatteRes
16th-19th May 2014 Thomas Latham and Daniel Craik
(in branch for release in v3r0)
* Update all other lineshapes so that their parameters can float
- The only resonance parameters that now cannot float are the Blatt-Weisskopf barrier factor radii
15th May 2014 Thomas Latham
(in branch for release in v3r0)
* Change the mechanism for getting floating resonance parameters into the fit
- Moved from LauResonanceMaker to the resonances themselves
- Lays some groundwork for improving the efficiency of recalculating the integrals
- LauBelleNR and LauBelleSymNR lineshape parameters can now be floated
13th May 2014 Daniel Craik
(in branch for release in v3r0)
* Fix bug where illegal characters were being propagated from resonance names into TBranch names
6th May 2014 Thomas Latham
(in branch for release in v3r0)
* Provide accessors for mass and width parameters
5th May 2014 Louis Henry
* Fix compilation problem by making LauDatabasePDG destructor virtual
4th May 2014 Thomas Latham
* Provide a new argument to Lau*FitModel::splitSignalComponent to allow fluctuation of the bins on the SCF fraction histogram
29th April 2014 Thomas Latham
* Fix bug in the determination of the integration scheme
- Nearby narrow resonances caused problems if their "zones" overlap
- These zones are now merged together
29th April 2014 Thomas Latham
(in branch for release in v3r0)
* Some improvements to integration scheme storage
26th April 2014 Juan Otalora
(in branch for release in v3r0)
* Make integration scheme fixed after first determination
- is stored in a new class LauDPPartialIntegralInfo
- used on subsequent re-evaluations of the integrals
23rd April 2014 Thomas Latham
* Attempt to improve clarity of LauIsobarDynamics::addResonance function
- the 3rd argument is now an enumeration of the various resonance models
- removed the optional arguments regarding the change of mass, width & spin
- the same functionality can be obtained by using the returned pointer and calling changeResonance
- the resonance name now only affects the lineshape in the LauPolNR case
- the BelleSymNR / TaylorNR choice is now made in the 3rd argument
- similarly for the BelleNR / (newly introduced) PowerLawNR choice
* Add new PowerLawNR nonresonant model (part of LauBelleNR)
* All examples updated to use new interface
23rd April 2014 Thomas Latham
* Address issue of setting values of resonance parameters for all models
- decided to do away with need to have LauIsobarDynamics know everything
- LauIsobarDynamics::addResonance now returns a pointer to LauAbsResonance
- parameters can be changed through LauAbsResonance::setResonanceParameter
- LauIsobarDynamics still knows about Blatt-Weisskopf factors - better to only need to set this once
- Update GenFit3pi example to demonstrate mechanism
22nd April 2014 Thomas Latham
* Allow Gaussian constraints to be added in simultaneous fitting
- constraints will be ignored by the slaves
- those added directly to fit parameters will be propagated to the master and handled there
- those on combinations of parameters should be added in the master process
22nd April 2014 Mark Whitehead
* Update Laura++ to cope with resonances of spin 4 and spin 5
- Zemach spin terms added to src/LauAbsResonance.cc
- BW barrier factors added to src/LauRelBreitWignerRes.cc
19th April 2014 Daniel Craik
* Add LauWeightedSumEffModel which gives an efficiency model from the weighted sum of several LauEffModel objects.
* Added pABC, LauAbsEffModel, for LauEffModel and LauWeightedSumEffModel.
* Various classes updated to use pointers to LauAbsEffModel instead of LauEffModel.
15th April 2014 Daniel Craik
* Enable LauEfficiencyModel to contain several Lau2DAbsDP objects with the total efficiency calculated as the product.
10th April 2014 Mark Whitehead
* Fix an issue with the likelihood penalty term for Gaussian constraints
- Factor two missing in the denominator
- New penalty term is: ( x-mean )^2 / 2*(width^2)
4th April 2014 Thomas Latham
* Add storage of fit fractions that have not been efficiency corrected
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Laura++ v2r1
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
1st April 2014 Thomas Latham
* Fix issue in LauFitter that prevents compilation with g++ 4.8
- Missing virtual destructor
- Take opportunity to audit other special functions
31st March 2014 Mark Whitehead
(in branch for release in v2r1)
* Added an efficiency branch to the ntuple produced for toy data samples
- Both LauSimpleFitModel and LauCPFitModel updated
28th March 2014 Daniel Craik
(in branch for release in v2r2)
* Added support for asymmetric errors to Lau2DHistDP, Lau2DSplineDP and LauEffModel.
27th March 2014 Daniel Craik
* Changed histogram classes to use a seeded random number generator for
fluctuation and raising or lowering of bins, and updated Doxygen.
20th March 2014 Mark Whitehead
(in branch for release in v2r1)
* Added the ability to add Gaussian constraints to LauFormulaPars of fit parameters
- User supplies the information but the LauFormulaPar is constructed behind the scenes
18th March 2014 Thomas Latham
* Improve behaviour of toy generation from fit results
13th March 2014 Juan Otalora
(in branch for release in v3r0)
* Extended ability to float mass and width to other resonance lineshapes (Flatte, LASS and G-S)
11th March 2014 Mark Whitehead
(in branch for release in v2r1)
* Added the functionality to make LauFormulaPars usable in fits
- Added a new class LauAbsRValue which LauParameter and LauFormularPar inherit from
- Many files updated to accept LauAbsRValues instead of LauParameters
* Fairly major clean up of obsolete functions in LauAbsPdf
- Only LauLinearPdf used any of them, this has now been updated
10th March 2014 Thomas Latham
(in branch for release in v3r0)
* First attempt at floating resonance parameters (work mostly from Juan)
- Only works for RelBW lineshape
- Can only float mass and width
- Works nicely!
- Still needs much work to generalise and make more efficient
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Laura++ v2r0
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
8th March 2014 Thomas Latham
* Some additional functionality for the CoeffSet classes:
- allow the parameter values to be set (optionally setting the initial and generated values as well)
- allow the parameters to be set to float or to be fixed in the fit
These are needed when cloning a set while wanting some of the parameters to have different values and/or floating behaviour from the cloned set.
* Improve the printout of the setting of the coefficient values in the fit models and the creation of resonances
* Add LauFlatNR for the uniform NR model - ending its rather strange special treatment
* Fix bug setting resAmpInt to 0 for LauPolNR
* Many other improvements to the info/warning/error printouts
* Modify GenFitBelleCPKpipi example to demonstrate cloning in action
* Add -Werror to compiler flags (treats warnings as errors)
5th March 2014 Thomas Latham
* Some improvements to LauPolNR to speed up amplitude calculation
2nd March 2014 Thomas Latham
* A number of updates to the CoeffSet classes:
- allow specification of the basename just after construction (before being given to the fit model)
- allow configuration of the parameter fit ranges (through static methods of base class)
- more adaptable cloning of the parameters (e.g. can only clone phase but allow magnitude to float independently)
- allow CP-violating parameters to be second-stage in Belle and CLEO formats
* Some improvements to the Doxygen and runtime information printouts
20th February 2014 Louis Henry
* Add LauPolNR - class for modelling the nonresonant contribution based on BaBar 3K model (arXiv:1201.5897)
6th February 2014 Thomas Latham
* Correct helicity convention information in Doxygen for LauKinematics
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Laura++ v1r2
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
5th February 2014 Thomas Latham
* Add rule to the Makefile that creates a rootmap file for the library
4th February 2014 Daniel Craik
* Fixed bug in Lau2DSplineDPPdf - normalisation was not being calculated
* Added out-of-range warning in LauBkgndDPModel and suppressed excessive warnings
3rd February 2014 Mark Whitehead
(in branch for release in v2r0)
* Added a new class to allow parameters to be a function of other parameters
- inc/LauFormulaPar.hh
- src/LauFormulaPar.cc
28th January 2014 Daniel Craik
* Improved out-of-range efficiency warnings in LauEffModel and suppressed excessive errors
* Modified LauIsobarDynamics to allow LASS parameters to be configured for LauLASSBWRes and
LauLASSNRRes
27th January 2014 Daniel Craik
* Added spline interpolation to DP backgrounds
- Added Lau2DSplineDPPdf which uses a spline to model a normalised PDF across a DP
- Added pABC, Lau2DAbsDPPdf, for Lau2DHistDPPdf and Lau2DSplineDPPdf and moved common
code in Lau2DHistDPPdf and Lau2DSplineDPPdf into ABC Lau2DAbsHistDPPdf
- LauBkgndDPModel, modified to use Lau2DAbsDPPdf instead of Lau2DHistDPPdf
- setBkgndSpline method added to LauBkgndDPModel to allow use of splines
22nd January 2014 Thomas Latham
* Improve some error checks and corresponding warning messages in
LauCPFitModel::setSignalDPParameters
16th January 2014 Thomas Latham
* Add LauRealImagCPCoeffSet, which provides an (x,y), (xbar,ybar) way of
parametrising the complex coefficients.
* Try to improve timing in the *CoeffSet classes by making the complex coeffs
into member variables.
20th December 2013 Daniel Craik
* Added Lau2DCubicSpline which provides cubic spline interpolation of a histogram
- Added Lau2DSplineDP which uses a spline to model variation across a DP (eg efficiency)
- Added pABC, Lau2DAbsDP, for Lau2DHistDP and Lau2DSplineDP and moved common code
in Lau2DHistDP and Lau2DSplineDP into ABC Lau2DAbsHistDP
- LauEffModel, LauDPDepSumPdf and LauDPDepMapPdf modified to use Lau2DAbsDP instead of
Lau2DHistDP
- setEffSpline method added to LauEffModel and useSpline flag added to constructor for
LauDPDepSumPdf to allow use of splines
18th December 2013 Mark Whitehead
(in branch for release in v2r0)
* Added functionality to include Gaussian constraints on floated
parameters in the fit.
The files updated are:
- inc/LauAbsFitModel.hh
- inc/LauParameter.hh
- src/LauAbsFitModel.cc
- src/LauParameter.cc
5th December 2013 Thomas Latham
* Fix small bug in GenFitKpipi example where background asymmetry parameter had
its limits the wrong way around
4th December 2013 Daniel Craik
* Updated 2D chi-squared code to use adaptive binning.
3rd December 2013 Thomas Latham
* Generalise the Makefile in the examples directory
- results in minor changes to the names of 3 of the binaries
3rd December 2013 Thomas Latham
(in branch for release in v2r0)
* Have the master save an ntuple with all fitter parameters and the full correlation matrix information.
29th November 2013 Thomas Latham
* Fixed bug in ResultsExtractor where the output file was not written
29th November 2013 Thomas Latham
(in branch for release in v2r0)
* Allow the slave ntuples to store the partial covariance matrices in the simultaneous fitting
26th November 2013 Thomas Latham
(in branch for release in v2r0)
* Added first version of the simultaneous fitting framework
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Laura++ v1r1p1
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
22nd November 2013 Thomas Latham
* Fixed bug in LauCPFitModel where the values of the q = -1 extra PDFs
were used regardless of the event charge.
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Laura++ v1r1
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
20th November 2013 Mark Whitehead
* Changed convention for the barrier factors, swapping from p* to p.
This seems to give more physically reasonable results.
The files updated are:
- src/LauGounarisSakuraiRes.cc
- src/LauRelBreitWignerRes.cc
18th October 2013 Thomas Latham
* Fix dependency problem in Makefile
8th October 2013 Thomas Latham
* Some fixes to yield implementation
* Minor bug fix in DP background histogram class
7th October 2013 Mark Whitehead
* Update to accept the yields and yield asymmetries as LauParameters.
All examples have been updated to reflect this change.
This updated the following files:
- inc/LauCPFitModel.hh
- inc/LauSimpleFitModel.hh
- inc/LauAbsFitModel.hh
- src/LauCPFitModel.cc
- src/LauSimpleFitModel.cc
* Addition of the following particles to src/LauResonanceMaker.cc
Ds*+-, Ds0*(2317)+-, Ds2*(2573)+-, Ds1*(2700)+- and Bs*0
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Laura++ v1r0
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
13th September 2013 Thomas Latham
* Initial import of the package into HEPforge
diff --git a/inc/LauAbsFitModel.hh b/inc/LauAbsFitModel.hh
index dfc9f06..1c99548 100644
--- a/inc/LauAbsFitModel.hh
+++ b/inc/LauAbsFitModel.hh
@@ -1,836 +1,834 @@
/*
Copyright 2004 University of Warwick
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
Laura++ package authors:
John Back
Paul Harrison
Thomas Latham
*/
/*! \file LauAbsFitModel.hh
\brief File containing declaration of LauAbsFitModel class.
*/
/*! \class LauAbsFitModel
\brief Abstract interface to the fitting and toy MC model
Abstract interface to the fitting and toy MC model
Any class inheriting from this must implement the following functions:
- cacheInputFitVars()
- checkInitFitParams()
- finaliseFitResults()
- fixdSpeciesNames()
- freeSpeciesNames()
- genExpt()
- getEventSum()
- getTotEvtLikelihood()
- initialise()
- initialiseDPModels()
- propagateParUpdates()
- recalculateNormalisation()
- scfDPSmear()
- setAmpCoeffSet()
- setNBkgndEvents()
- setNSigEvents()
- setupBkgndVectors()
- setupGenNtupleBranches()
- setupSPlotNtupleBranches()
- splitSignal()
- storePerEvtLlhds()
- twodimPDFs()
- updateCoeffs()
- variableNames()
- weightEvents()
- writeOutTable()
*/
#ifndef LAU_ABS_FIT_MODEL
#define LAU_ABS_FIT_MODEL
#include "TMatrixDfwd.h"
#include "TString.h"
#include "TStopwatch.h"
#include <iosfwd>
#include <set>
#include <vector>
#include "LauFitObject.hh"
#include "LauFormulaPar.hh"
#include "LauSimFitSlave.hh"
// LauSPlot included to get LauSPlot::NameSet typedef
#include "LauSPlot.hh"
class LauAbsCoeffSet;
class LauAbsPdf;
class LauFitDataTree;
class LauGenNtuple;
class LauAbsRValue;
class LauParameter;
class LauAbsFitModel : public LauSimFitSlave {
public:
//! Constructor
LauAbsFitModel();
//! Destructor
virtual ~LauAbsFitModel();
//! Is the Dalitz plot term in the likelihood
Bool_t useDP() const { return usingDP_; }
//! Switch on/off the Dalitz plot term in the likelihood (allows fits to other quantities, e.g. B mass)
/*!
\param [in] usingDP the boolean flag
*/
void useDP(Bool_t usingDP) { usingDP_ = usingDP; }
//! Return the flag to store the status of using an sFit or not
Bool_t doSFit() const { return doSFit_; }
//! Do an sFit (use sWeights to isolate signal decays rather than using background histograms)
/*!
\param [in] sWeightBranchName name of the branch of the tree containing the sWeights
\param [in] scaleFactor scaling factor to get the uncertainties correct
*/
void doSFit( const TString& sWeightBranchName, Double_t scaleFactor = 1.0 );
//! Determine whether an extended maximum likelihood fit is being performed
Bool_t doEMLFit() const {return emlFit_;}
//! Choice to perform an extended maximum likelihood fit
/*!
\param [in] emlFit boolean specifying whether or not to perform the EML
*/
void doEMLFit(Bool_t emlFit) {emlFit_ = emlFit;}
//! Determine whether Poisson smearing is enabled for the toy MC generation
Bool_t doPoissonSmearing() const {return poissonSmear_;}
//! Turn Poisson smearing (for the toy MC generation) on or off
/*!
\param [in] poissonSmear boolean specifying whether or not to do Poisson smearing
*/
void doPoissonSmearing(Bool_t poissonSmear) {poissonSmear_ = poissonSmear;}
//! Determine whether embedding of events is enabled in the generation
Bool_t enableEmbedding() const {return enableEmbedding_;}
//! Turn on or off embedding of events in the generation
/*!
\param [in] enable boolean specifying whether to embed events
*/
void enableEmbedding(Bool_t enable) {enableEmbedding_ = enable;}
//! Determine whether writing out of the latex table is enabled
Bool_t writeLatexTable() const {return writeLatexTable_;}
//! Turn on or off the writing out of the latex table
/*!
\param [in] writeTable boolean specifying whether or not the latex table should be written out
*/
void writeLatexTable(Bool_t writeTable) {writeLatexTable_ = writeTable;}
//! Save files containing graphs of the resonances' PDFs
Bool_t saveFilePDF() const {return savePDF_;}
//! Turn on or off the saving of files containing graphs of the resonances' PDFs
/*!
\param [in] savePDF boolean specifying whether or not to save files containing graphs of the resonances' PDFs
*/
void saveFilePDF(Bool_t savePDF) {savePDF_ = savePDF;}
//! Set up the sPlot ntuple
/*!
\param [in] fileName the sPlot file name
\param [in] treeName the sPlot tree name
\param [in] storeDPEfficiency whether or not to store the efficiency information too
\param [in] verbosity define the level of output
*/
void writeSPlotData(const TString& fileName, const TString& treeName, Bool_t storeDPEfficiency, const TString& verbosity = "q");
//! Determine whether the sPlot data is to be written out
Bool_t writeSPlotData() const {return writeSPlotData_;}
//! Determine whether the efficiency information should be stored in the sPlot ntuple
Bool_t storeDPEff() const {return storeDPEff_;}
//! Determine whether the initial values of the fit parameters, in particular the isobar coefficient parameters, are to be randomised
Bool_t useRandomInitFitPars() const {return randomFit_;}
//! Randomise the initial values of the fit parameters, in particular the isobar coefficient parameters
void useRandomInitFitPars(Bool_t boolean) {randomFit_ = boolean;}
//! Setup the background class names
/*!
\param [in] names a vector of all the background names
*/
virtual void setBkgndClassNames( const std::vector<TString>& names );
//! Returns the number of background classes
inline UInt_t nBkgndClasses() const {return bkgndClassNames_.size();}
//! Set the number of signal events
/*!
\param [in] nSigEvents contains the signal yield and option to fix it
*/
virtual void setNSigEvents(LauParameter* nSigEvents) = 0;
//! Set the number of background events
/*!
The name of the parameter must be that of the corresponding background category (so that it can be correctly assigned)
\param [in] nBkgndEvents contains the name, yield and option to fix the yield of the background
*/
virtual void setNBkgndEvents(LauAbsRValue* nBkgndEvents) = 0;
//! Set the DP amplitude coefficients
/*!
The name of the coeffSet must match the name of one of the resonances in the DP model.
The supplied order of coefficients will be rearranged to match the order in which the resonances are stored in the dynamics, see LauIsobarDynamics::addResonance.
\param [in] coeffSet the set of coefficients
*/
virtual void setAmpCoeffSet(LauAbsCoeffSet* coeffSet) = 0;
//! Specify that a toy MC sample should be created for a successful fit to an experiment
/*!
Generation uses the fitted parameters so that the user can compare the fit to the data
\param [in] toyMCScale the scale factor to get the number of events to generate
\param [in] mcFileName the file name where the toy sample will be stored
\param [in] tableFileName name of the output tex file
\param [in] poissonSmearing turn smearing on or off
*/
void compareFitData(UInt_t toyMCScale = 10, const TString& mcFileName = "fitToyMC.root",
const TString& tableFileName = "fitToyMCTable.tex", Bool_t poissonSmearing = kTRUE);
//! Start the toy generation / fitting
/*!
\param [in] applicationCode specifies what to do, perform a fit ("fit") or generate toy MC ("gen")
\param [in] dataFileName the name of the input data file
\param [in] dataTreeName the name of the tree containing the data
\param [in] histFileName the file name for the output histograms
\param [in] tableFileName the file name for the latex output file
*/
void run(const TString& applicationCode, const TString& dataFileName, const TString& dataTreeName,
const TString& histFileName, const TString& tableFileName = "");
//! This function sets the parameter values from Minuit
/*!
This function has to be public since it is called from the global FCN.
It should not be called otherwise!
\param [in] par an array storing the various parameter values
\param [in] npar the number of free parameters
*/
virtual void setParsFromMinuit(Double_t* par, Int_t npar);
//! Calculates the total negative log-likelihood
/*!
This function has to be public since it is called from the global FCN.
It should not be called otherwise!
*/
virtual Double_t getTotNegLogLikelihood();
//! Set model parameters from a file
/*!
\param [in] fileName the name of the file with parameters to set
\param [in] treeName the name of the tree in the file corresponding to the parameters to set
\param [in] fix whether to fix (set constant) the loaded parameters, or leave them floating
*/
- void setParametersFromFile(TString fileName, TString treeName, Bool_t fix);
+ void setParametersFromFile(const TString& fileName, const TString& treeName, const Bool_t fix);
//! Set model parameters from a given std::map
/*!
-
Only parameters named in the map are imported, all others are set to their values specified
in the model configuration.
\param [in] parameters map from parameter name to imported value
\param [in] fix whether to fix (set constant) the loaded parameters, or leave them floating
*/
- void setParametersFromMap(std::map<std::string, Double_t> parameters, Bool_t fix);
+ void setParametersFromMap(const std::map<TString, Double_t>& parameters, const Bool_t fix);
//! Set named model parameters from a file
/*!
-
Identical to setParametersFromFile, but only imports the parameters named in the parameters set.
All others are set to their values specified in the model configuration.
\param [in] fileName the name of the file with parameters to set
\param [in] treeName the name of the tree in the file corresponding to the parameters to set
\param [in] parameters the set of parameters to import from the file
\param [in] fix whether to fix (set constant) the loaded parameters, or leave them floating
*/
- void setNamedParameters(TString fileName, TString treeName, std::set<std::string> parameters, Bool_t fix);
+ void setNamedParameters(const TString& fileName, const TString& treeName, const std::set<TString>& parameters, const Bool_t fix);
//! Set named model parameters from a given std::map, with fallback to those from a file
/*!
-
Parameters named in the map are imported with their specified values. All other parameters are set
to the values found in the given file.
\param [in] fileName the name of the file with parameters to set
\param [in] treeName the name of the tree in the file corresponding to the parameters to set
\param [in] parameters map from parameter name to imported value (overrides parameters from the file)
\param [in] fix whether to fix (set constant) the loaded parameters, or leave them floating
*/
- void setParametersFileFallback(TString fileName, TString treeName, std::map<std::string, Double_t> parameters, Bool_t fix);
+ void setParametersFileFallback(const TString& fileName, const TString& treeName, const std::map<TString, Double_t>& parameters, const Bool_t fix);
protected:
// Some typedefs
//! List of Pdfs
typedef std::vector<LauAbsPdf*> LauPdfList;
//! List of parameter pointers
typedef std::vector<LauParameter*> LauParameterPList;
//! List of parameter pointers
typedef std::vector<LauAbsRValue*> LauAbsRValuePList;
//! Set of parameter pointers
typedef std::set<LauParameter*> LauParameterPSet;
//! List of parameters
typedef std::vector<LauParameter> LauParameterList;
//! A type to store background classes
typedef std::map<UInt_t,TString> LauBkgndClassMap;
//! Clear the vectors containing fit parameters
void clearFitParVectors();
//! Clear the vectors containing extra ntuple variables
void clearExtraVarVectors();
//! Weighting - allows e.g. MC events to be weighted by the DP model
/*!
\param [in] dataFileName the name of the data file
\param [in] dataTreeName the name of the tree containing the data
*/
virtual void weightEvents( const TString& dataFileName, const TString& dataTreeName ) = 0;
//! Generate toy MC
/*!
\param [in] dataFileName the name of the file where the generated events are stored
\param [in] dataTreeName the name of the tree used to store the variables
\param [in] histFileName the name of the histogram output file (currently not used)
\param [in] tableFileNameBase the name of the latex output file
*/
virtual void generate(const TString& dataFileName, const TString& dataTreeName, const TString& histFileName, const TString& tableFileNameBase);
//! The method that actually generates the toy MC events for the given experiment
/*!
\return the success/failure flag of the generation procedure
*/
virtual Bool_t genExpt() = 0;
//! Perform the total fit
/*!
\param [in] dataFileName the name of the data file
\param [in] dataTreeName the name of the tree containing the data
\param [in] histFileName the name of the histogram output file
\param [in] tableFileNameBase the name of the latex output file
*/
void fit(const TString& dataFileName, const TString& dataTreeName, const TString& histFileName, const TString& tableFileNameBase);
//! Routine to perform the actual fit for a given experiment
void fitExpt();
//! Routine to perform the minimisation
/*!
\return the success/failure flag of the fit
*/
Bool_t runMinimisation();
//! Create a toy MC sample from the fitted parameters
/*!
\param [in] mcFileName the file name where the toy sample will be stored
\param [in] tableFileName name of the output tex file
*/
void createFitToyMC(const TString& mcFileName, const TString& tableFileName);
//! Read in the data for the current experiment
/*!
\return the number of events read in
*/
virtual UInt_t readExperimentData();
//! Open the input file and verify that all required variables are present
/*!
\param [in] dataFileName the name of the input file
\param [in] dataTreeName the name of the input tree
*/
virtual Bool_t verifyFitData(const TString& dataFileName, const TString& dataTreeName);
//! Cache the input data values to calculate the likelihood during the fit
virtual void cacheInputFitVars() = 0;
//! Cache the value of the sWeights to be used in the sFit
virtual void cacheInputSWeights();
//! Initialise the fit par vectors
/*!
Each class that inherits from this one must implement
this sensibly for all vectors specified in
clearFitParVectors, i.e. specify parameter names,
initial, min, max and fixed values
*/
virtual void initialise() = 0;
//! Recalculate the normalisation of the signal DP model(s)
virtual void recalculateNormalisation() = 0;
//! Initialise the DP models
virtual void initialiseDPModels() = 0;
/*!
For each amp in the fit this function takes its
particular parameters and from them calculates the
single complex number that is its coefficient.
The vector of these coeffs can then be passed to the
signal dynamics.
*/
virtual void updateCoeffs() = 0;
//! This function (specific to each model) calculates anything that depends on the fit parameter values
virtual void propagateParUpdates() = 0;
//! Calculate the sum of the log-likelihood over the specified events
/*!
\param [in] iStart the event number of the first event to be considered
\param [in] iEnd the event number of the final event to be considered
*/
Double_t getLogLikelihood( UInt_t iStart, UInt_t iEnd );
//! Calculate the penalty terms to the log likelihood from Gaussian constraints
Double_t getLogLikelihoodPenalty();
//! Calculates the likelihood for a given event
/*!
\param [in] iEvt the event number
*/
virtual Double_t getTotEvtLikelihood(UInt_t iEvt) = 0;
//! Returns the sum of the expected events over all hypotheses; used in the EML fit scenario
virtual Double_t getEventSum() const = 0;
//! Prints the values of all the fit variables for the specified event - useful for diagnostics
/*!
\param [in] iEvt the event number
*/
virtual void printEventInfo(UInt_t iEvt) const;
//! Same as printEventInfo, but printing out the values of the variables in the fit
virtual void printVarsInfo() const;
//! Update initial fit parameters if required
virtual void checkInitFitParams() = 0;
//! Setup saving of fit results to ntuple/LaTeX table etc.
/*!
\param [in] histFileName the file name for the output histograms
\param [in] tableFileName the file name for the latex output file
*/
virtual void setupResultsOutputs( const TString& histFileName, const TString& tableFileName );
//! Package the initial fit parameters for transmission to the master
/*!
\param [out] array the array to be filled with the LauParameter objects
*/
virtual void prepareInitialParArray( TObjArray& array );
//! Perform all finalisation actions
/*!
- Receive the results of the fit from the master
- Perform any finalisation routines
- Package the finalised fit parameters for transmission back to the master
\param [in] fitStat the status of the fit, e.g. status code, EDM, NLL
\param [in] parsFromMaster the parameters at the fit minimum
\param [in] covMat the fit covariance matrix
\param [out] parsToMaster the array to be filled with the finalised LauParameter objects
*/
virtual void finaliseExperiment( const LauAbsFitter::FitStatus& fitStat, const TObjArray* parsFromMaster, const TMatrixD* covMat, TObjArray& parsToMaster );
//! Write the results of the fit into the ntuple
/*!
\param [in] tableFileName the structure containing the results of the fit
*/
virtual void finaliseFitResults(const TString& tableFileName) = 0;
//! Save the PDF plots for all the resonances of experiment number fitExp
/*!
\param [in] label prefix for the file name to be saved
*/
virtual void savePDFPlots(const TString& label) = 0;
//! Save the PDF plots for the sum of resonances corresponding to the given spin for experiment number fitExp
/*!
\param [in] label prefix for the file name to be saved
\param [in] spin spin of the wave to be saved
*/
virtual void savePDFPlotsWave(const TString& label, const Int_t& spin) = 0;
//! Write the latex table
/*!
\param [in] outputFile the name of the output file
*/
virtual void writeOutTable(const TString& outputFile) = 0;
//! Store the per-event likelihood values
- virtual void storePerEvtLlhds() = 0;
+ virtual void storePerEvtLlhds() = 0;
//! Calculate the sPlot data
virtual void calculateSPlotData();
//! Make sure all parameters hold their genValue as the current value
void setGenValues();
//! Method to set up the storage for background-related quantities called by setBkgndClassNames
virtual void setupBkgndVectors() = 0;
//! Check if the given background class is in the list
/*!
\param [in] className the name of the class to check
\return true or false
*/
Bool_t validBkgndClass( const TString& className ) const;
//! The number assigned to a background class
/*!
\param [in] className the name of the class to check
\return the background class ID number
*/
UInt_t bkgndClassID( const TString& className ) const;
//! Get the name of a background class from the number
/*!
\param [in] classID the ID number of the background class
\return the class name
*/
const TString& bkgndClassName( UInt_t classID ) const;
//! Setup the generation ntuple branches
virtual void setupGenNtupleBranches() = 0;
//! Add a branch to the gen tree for storing an integer
/*!
\param [in] name the name of the branch
*/
virtual void addGenNtupleIntegerBranch(const TString& name);
//! Add a branch to the gen tree for storing a double
/*!
\param [in] name the name of the branch
*/
virtual void addGenNtupleDoubleBranch(const TString& name);
//! Set the value of an integer branch in the gen tree
/*!
\param [in] name the name of the branch
\param [in] value the value to be stored
*/
virtual void setGenNtupleIntegerBranchValue(const TString& name, Int_t value);
//! Set the value of a double branch in the gen tree
/*!
\param [in] name the name of the branch
\param [in] value the value to be stored
*/
virtual void setGenNtupleDoubleBranchValue(const TString& name, Double_t value);
//! Get the value of an integer branch in the gen tree
/*!
\param [in] name the name of the branch
\return the value of the parameter
*/
virtual Int_t getGenNtupleIntegerBranchValue(const TString& name) const;
//! Get the value of a double branch in the gen tree
/*!
\param [in] name the name of the branch
\return the value of the parameter
*/
virtual Double_t getGenNtupleDoubleBranchValue(const TString& name) const;
//! Fill the gen tuple branches
virtual void fillGenNtupleBranches();
//! Setup the branches of the sPlot tuple
virtual void setupSPlotNtupleBranches() = 0;
//! Add a branch to the sPlot tree for storing an integer
/*!
\param [in] name the name of the branch
*/
virtual void addSPlotNtupleIntegerBranch(const TString& name);
//! Add a branch to the sPlot tree for storing a double
/*!
\param [in] name the name of the branch
*/
virtual void addSPlotNtupleDoubleBranch(const TString& name);
//! Set the value of an integer branch in the sPlot tree
/*!
\param [in] name the name of the branch
\param [in] value the value to be stored
*/
virtual void setSPlotNtupleIntegerBranchValue(const TString& name, Int_t value);
//! Set the value of a double branch in the sPlot tree
/*!
\param [in] name the name of the branch
\param [in] value the value to be stored
*/
virtual void setSPlotNtupleDoubleBranchValue(const TString& name, Double_t value);
//! Fill the sPlot tuple
virtual void fillSPlotNtupleBranches();
//! Returns the names of all variables in the fit
virtual LauSPlot::NameSet variableNames() const = 0;
//! Returns the names and yields of species that are free in the fit
virtual LauSPlot::NumbMap freeSpeciesNames() const = 0;
//! Returns the names and yields of species that are fixed in the fit
virtual LauSPlot::NumbMap fixdSpeciesNames() const = 0;
//! Returns the species and variables for all 2D PDFs in the fit
virtual LauSPlot::TwoDMap twodimPDFs() const = 0;
//! Check if the signal is split into well-reconstructed and mis-reconstructed types
virtual Bool_t splitSignal() const = 0;
//! Check if the mis-reconstructed signal is to be smeared in the DP
virtual Bool_t scfDPSmear() const = 0;
//! Add parameters of the PDFs in the list to the list of all fit parameters
/*!
\param [in] pdfList a list of Pdfs
\return the number of parameters added
*/
UInt_t addFitParameters(LauPdfList& pdfList);
//! Add parameters to the list of Gaussian constrained parameters
void addConParameters();
//! Print the fit parameters for all PDFs in the list
/*!
\param [in] pdfList a list of Pdfs
\param [in] fout the output stream to write to
*/
void printFitParameters(const LauPdfList& pdfList, std::ostream& fout) const;
//! Update the fit parameters for all PDFs in the list
/*!
\param [in] pdfList a list of Pdfs
*/
void updateFitParameters(LauPdfList& pdfList);
//! Have all PDFs in the list cache the data
/*!
\param [in] pdfList the list of pdfs
\param [in] theData the data from the fit
*/
void cacheInfo(LauPdfList& pdfList, const LauFitDataTree& theData);
//! Calculate the product of the per-event likelihoods of the PDFs in the list
/*!
\param [in] pdfList the list of pdfs
\param [in] iEvt the event number
*/
Double_t prodPdfValue(LauPdfList& pdfList, UInt_t iEvt);
//! Do any of the PDFs have a dependence on the DP?
/*!
\return the flag to indicate if there is a DP dependence
*/
Bool_t pdfsDependOnDP() const {return pdfsDependOnDP_;}
//! Do any of the PDFs have a dependence on the DP?
/*!
\param [in] dependOnDP the flag to indicate if there is a DP dependence
*/
void pdfsDependOnDP(Bool_t dependOnDP) { pdfsDependOnDP_ = dependOnDP; }
//! Const access the fit variables
const LauParameterPList& fitPars() const {return fitVars_;}
//! Access the fit variables
LauParameterPList& fitPars() {return fitVars_;}
//! Const access the fit variables which affect the DP normalisation
const LauParameterPSet& resPars() const {return resVars_;}
//! Access the fit variables which affect the DP normalisation
LauParameterPSet& resPars() {return resVars_;}
//! Const access the extra variables
const LauParameterList& extraPars() const {return extraVars_;}
//! Access the extra variables
LauParameterList& extraPars() {return extraVars_;}
//! Const access the Gaussian constrained variables
const LauAbsRValuePList& conPars() const {return conVars_;}
//! Access the Gaussian constrained variables
LauAbsRValuePList& conPars() {return conVars_;}
//! Const access the gen ntuple
const LauGenNtuple* genNtuple() const {return genNtuple_;}
//! Access the gen ntuple
LauGenNtuple* genNtuple() {return genNtuple_;}
//! Const access the sPlot ntuple
const LauGenNtuple* sPlotNtuple() const {return sPlotNtuple_;}
//! Access the sPlot ntuple
LauGenNtuple* sPlotNtuple() {return sPlotNtuple_;}
//! Const access the data store
const LauFitDataTree* fitData() const {return inputFitData_;}
//! Access the data store
LauFitDataTree* fitData() {return inputFitData_;}
//! Imported parameters file name
TString fixParamFileName_;
//! Imported parameters tree name
- TString fixParamTreeName_;
+ TString fixParamTreeName_;
//! Map from imported parameter name to value
- std::map<std::string, Double_t> fixParamMap_;
+ std::map<TString, Double_t> fixParamMap_;
//! Imported parameter names
- std::set<std::string> fixParamNames_;
+ std::set<TString> fixParamNames_;
//! Whether to fix the loaded parameters (kTRUE) or leave them floating (kFALSE)
- Bool_t fixParams_;
+ Bool_t fixParams_;
//! The set of parameters that are imported (either from a file or by value) and not
// set to be fixed in the fit. In addition to those from fixParamNames_, these
- // include those imported from a file
- std::set<LauParameter *> allImportedFreeParams_;
+ // include those imported from a file.
+ std::set<LauParameter*> allImportedFreeParams_;
+
private:
//! Copy constructor (not implemented)
LauAbsFitModel(const LauAbsFitModel& rhs);
//! Copy assignment operator (not implemented)
LauAbsFitModel& operator=(const LauAbsFitModel& rhs);
// Various control booleans
//! Option to make toy from 1st successful experiment
Bool_t compareFitData_;
//! Option to output a .C file of PDFs
Bool_t savePDF_;
//! Option to output a Latex format table
Bool_t writeLatexTable_;
//! Option to write sPlot data
Bool_t writeSPlotData_;
//! Option to store DP efficiencies in the sPlot ntuple
Bool_t storeDPEff_;
//! Option to randomise the initial values of the fit parameters
Bool_t randomFit_;
//! Option to perform an extended ML fit
Bool_t emlFit_;
//! Option to perform Poisson smearing
Bool_t poissonSmear_;
//! Option to enable embedding
Bool_t enableEmbedding_;
//! Option to include the DP as part of the fit
Bool_t usingDP_;
//! Option to state if pdfs depend on DP position
Bool_t pdfsDependOnDP_;
// Info on number of experiments and number of events
//! Internal vector of fit parameters
LauParameterPList fitVars_;
//! Internal set of fit parameters upon which the DP normalisation depends
LauParameterPSet resVars_;
//! Extra variables that aren't in the fit but are stored in the ntuple
LauParameterList extraVars_;
//! Internal vectors of Gaussian parameters
LauAbsRValuePList conVars_;
// Input data and output ntuple
//! The input data
LauFitDataTree* inputFitData_;
//! The generated ntuple
LauGenNtuple* genNtuple_;
//! The sPlot ntuple
LauGenNtuple* sPlotNtuple_;
// Background class names
//! The background class names
LauBkgndClassMap bkgndClassNames_;
//! An empty string
const TString nullString_;
// sFit related variables
//! Option to perform the sFit
Bool_t doSFit_;
//! The name of the sWeight branch
TString sWeightBranchName_;
//! The vector of sWeights
std::vector<Double_t> sWeights_;
//! The sWeight scaling factor
Double_t sWeightScaleFactor_;
// Fit timers
//! The fit timer
TStopwatch timer_;
//! The total fit timer
TStopwatch cumulTimer_;
//! The output table name
TString outputTableName_;
// Comparison toy MC related variables
//! The output file name for Toy MC
TString fitToyMCFileName_;
//! The output table name for Toy MC
TString fitToyMCTableName_;
//! The scaling factor (toy vs data statistics)
UInt_t fitToyMCScale_;
//! Option to perform Poisson smearing
Bool_t fitToyMCPoissonSmear_;
// sPlot related variables
//! The name of the sPlot file
TString sPlotFileName_;
//! The name of the sPlot tree
TString sPlotTreeName_;
//! Control the verbosity of the sPlot
TString sPlotVerbosity_;
ClassDef(LauAbsFitModel,0) // Abstract interface to fit/toyMC model
};
#endif
diff --git a/inc/LauCPFitModel.hh b/inc/LauCPFitModel.hh
index b7c95c1..8a052f4 100644
--- a/inc/LauCPFitModel.hh
+++ b/inc/LauCPFitModel.hh
@@ -1,712 +1,712 @@
/*
Copyright 2004 University of Warwick
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
Laura++ package authors:
John Back
Paul Harrison
Thomas Latham
*/
/*! \file LauCPFitModel.hh
\brief File containing declaration of LauCPFitModel class.
*/
/*! \class LauCPFitModel
\brief Class for defining a CP fit model.
LauCPFitModel is a class that allows the user to define a three-body Dalitz
plot according to the isobar model, i.e. defining a set of resonances that
have complex amplitudes that can interfere with each other.
It extends the LauSimpleFitModel in that it allows simultaneous fitting of
the two parent flavours. By default, it assumes perfect tagging
of those flavours, but it can also be used in an untagged scenario.
*/
#ifndef LAU_CP_FIT_MODEL
#define LAU_CP_FIT_MODEL
#include <vector>
#include "TString.h"
#include "LauAbsFitModel.hh"
#include "LauComplex.hh"
#include "LauParameter.hh"
class TH2;
class LauAbsBkgndDPModel;
class LauAbsCoeffSet;
class LauIsobarDynamics;
class LauAbsPdf;
class LauEffModel;
class LauEmbeddedData;
class LauKinematics;
class LauScfMap;
class LauCPFitModel : public LauAbsFitModel {
public:
//! Constructor
/*!
\param [in] negModel DP model for the antiparticle
\param [in] posModel DP model for the particle
\param [in] tagged is the analysis tagged or untagged?
\param [in] tagVarName the variable name in the data tree that specifies the event tag
*/
LauCPFitModel(LauIsobarDynamics* negModel, LauIsobarDynamics* posModel, Bool_t tagged = kTRUE, const TString& tagVarName = "charge");
//! Destructor
virtual ~LauCPFitModel();
//! Set the signal event yield
/*!
\param [in] nSigEvents contains the signal yield and option to fix it
*/
virtual void setNSigEvents(LauParameter* nSigEvents);
//! Set the signal event yield if there is an asymmetry
/*!
\param [in] nSigEvents contains the signal yield and option to fix it
\param [in] sigAsym contains the signal asymmetry and option to fix it
\param [in] forceAsym the option to force there to be an asymmetry
*/
virtual void setNSigEvents(LauParameter* nSigEvents, LauParameter* sigAsym, Bool_t forceAsym = kFALSE);
//! Set the background event yield(s)
/*!
The name of the parameter must be that of the corresponding background category (so that it can be correctly assigned)
\param [in] nBkgndEvents contains the name, yield and option to fix the yield of the background
*/
virtual void setNBkgndEvents(LauAbsRValue* nBkgndEvents);
//! Set the background event yield(s)
/*!
The names of the parameters must be that of the corresponding background category (so that they can be correctly assigned)
\param [in] nBkgndEvents contains the name, yield and option to fix the yield of the background
\param [in] bkgndAsym contains the background asymmetry and option to fix it
*/
virtual void setNBkgndEvents(LauAbsRValue* nBkgndEvents, LauAbsRValue* bkgndAsym);
//! Set the background DP models
/*!
\param [in] bkgndClass the name of the background class
\param [in] negModel the DP model of the B- background
\param [in] posModel the DP model of the B+ background
*/
void setBkgndDPModels(const TString& bkgndClass, LauAbsBkgndDPModel* negModel, LauAbsBkgndDPModel* posModel);
//! Split the signal component into well-reconstructed and mis-reconstructed parts
/*!
The nomenclature used here is TM (truth-matched) and SCF (self cross feed)
In this option, the SCF fraction is DP-dependent
Can also optionally provide a smearing matrix to smear the SCF DP PDF
\param [in] dpHisto the DP histogram of the SCF fraction value
\param [in] upperHalf boolean flag to specify that the supplied histogram contains only the upper half of a symmetric DP (or lower half if using square DP coordinates)
\param [in] fluctuateBins whether the bins on the histogram should be varied in accordance with their uncertainties (for evaluation of systematic uncertainties)
\param [in] scfMap the (optional) smearing matrix
*/
void splitSignalComponent( const TH2* dpHisto, const Bool_t upperHalf = kFALSE, const Bool_t fluctuateBins = kFALSE, LauScfMap* scfMap = 0 );
//! Split the signal component into well reconstructed and mis-reconstructed parts
/*!
The nomenclature used here is TM (truth-matched) and SCF (self cross feed)
In this option, the SCF fraction is a single global number
\param [in] scfFrac the SCF fraction value
\param [in] fixed whether the SCF fraction is fixed or floated in the fit
*/
void splitSignalComponent( const Double_t scfFrac, const Bool_t fixed );
//! Determine whether we are splitting the signal into TM and SCF parts
Bool_t useSCF() const { return useSCF_; }
//! Determine whether the SCF fraction is DP-dependent
Bool_t useSCFHist() const { return useSCFHist_; }
//! Determine if we are smearing the SCF DP PDF
Bool_t smearSCFDP() const { return (scfMap_ != 0); }
// Set the DeltaE and mES models, i.e. give us the PDFs
//! Set the signal PDFs
/*!
\param [in] negPdf the PDF to be added to the B- signal model
\param [in] posPdf the PDF to be added to the B+ signal model
*/
void setSignalPdfs(LauAbsPdf* negPdf, LauAbsPdf* posPdf);
//! Set the SCF PDF for a given variable
/*!
\param [in] negPdf the PDF to be added to the B- signal model
\param [in] posPdf the PDF to be added to the B+ signal model
*/
void setSCFPdfs(LauAbsPdf* negPdf, LauAbsPdf* posPdf);
//! Set the background PDFs
/*!
\param [in] bkgndClass the name of the background class
\param [in] negPdf the PDF to be added to the B- background model
\param [in] posPdf the PDF to be added to the B+ background model
*/
void setBkgndPdfs(const TString& bkgndClass, LauAbsPdf* negPdf, LauAbsPdf* posPdf);
//! Embed full simulation events for the B- signal, rather than generating toy from the PDFs
/*!
\param [in] fileName the name of the file containing the events
\param [in] treeName the name of the tree
\param [in] reuseEventsWithinEnsemble sample with replacement but only replace events once each experiment has been generated
\param [in] reuseEventsWithinExperiment sample with immediate replacement
\param [in] useReweighting perform an accept/reject routine using the configured signal amplitude model based on the MC-truth DP coordinate
*/
void embedNegSignal(const TString& fileName, const TString& treeName,
Bool_t reuseEventsWithinEnsemble, Bool_t reuseEventsWithinExperiment = kFALSE,
- Bool_t useReweighting = kFALSE);
+ Bool_t useReweighting = kFALSE);
//! Embed full simulation events for the given background class, rather than generating toy from the PDFs
/*!
\param [in] bgClass the name of the background class
\param [in] fileName the name of the file containing the events
\param [in] treeName the name of the tree
\param [in] reuseEventsWithinEnsemble sample with replacement but only replace events once each experiment has been generated
\param [in] reuseEventsWithinExperiment sample with immediate replacement
*/
void embedNegBkgnd(const TString& bgClass, const TString& fileName, const TString& treeName,
Bool_t reuseEventsWithinEnsemble, Bool_t reuseEventsWithinExperiment = kFALSE);
//! Embed full simulation events for the B+ signal, rather than generating toy from the PDFs
/*!
\param [in] fileName the name of the file containing the events
\param [in] treeName the name of the tree
\param [in] reuseEventsWithinEnsemble sample with replacement but only replace events once each experiment has been generated
\param [in] reuseEventsWithinExperiment sample with immediate replacement
\param [in] useReweighting perform an accept/reject routine using the configured signal amplitude model based on the MC-truth DP coordinate
*/
void embedPosSignal(const TString& fileName, const TString& treeName,
Bool_t reuseEventsWithinEnsemble, Bool_t reuseEventsWithinExperiment = kFALSE,
- Bool_t useReweighting = kFALSE);
+ Bool_t useReweighting = kFALSE);
//! Embed full simulation events for the given background class, rather than generating toy from the PDFs
/*!
\param [in] bgClass the name of the background class
\param [in] fileName the name of the file containing the events
\param [in] treeName the name of the tree
\param [in] reuseEventsWithinEnsemble sample with replacement but only replace events once each experiment has been generated
\param [in] reuseEventsWithinExperiment sample with immediate replacement
*/
void embedPosBkgnd(const TString& bgClass, const TString& fileName, const TString& treeName,
Bool_t reuseEventsWithinEnsemble, Bool_t reuseEventsWithinExperiment = kFALSE);
//! Set the DP amplitude coefficients
/*!
The name of the coeffSet must match the name of one of the resonances in the DP model for the antiparticle (the name of the conjugate state in the model for the particle will be automatically determined).
The supplied order of coefficients will be rearranged to match the order in which the resonances are stored in the dynamics, see LauIsobarDynamics::addResonance.
\param [in] coeffSet the set of coefficients
*/
virtual void setAmpCoeffSet(LauAbsCoeffSet* coeffSet);
protected:
//! Define a map to be used to store a category name and numbers
typedef std::map< std::pair<TString,Int_t>, std::pair<Int_t,Double_t> > LauGenInfo;
//! Typedef for a vector of background DP models
typedef std::vector<LauAbsBkgndDPModel*> LauBkgndDPModelList;
//! Typedef for a vector of background PDFs
typedef std::vector<LauPdfList> LauBkgndPdfsList;
//! Typedef for a vector of background yields
typedef std::vector<LauAbsRValue*> LauBkgndYieldList;
//! Typedef for a vector of embedded data objects
typedef std::vector<LauEmbeddedData*> LauBkgndEmbDataList;
//! Typedef for a vector of booleans to flag if events are reused
typedef std::vector<Bool_t> LauBkgndReuseEventsList;
//! Weight events based on the DP model
/*!
\param [in] dataFileName the name of the data file
\param [in] dataTreeName the name of the data tree
*/
virtual void weightEvents( const TString& dataFileName, const TString& dataTreeName );
//! Initialise the fit
virtual void initialise();
//! Initialise the signal DP models
virtual void initialiseDPModels();
//! Recalculate the normalisation of the signal DP models
virtual void recalculateNormalisation();
//! Update the coefficients
virtual void updateCoeffs();
//! Toy MC generation and fitting overloaded functions
virtual Bool_t genExpt();
//! Calculate things that depend on the fit parameters after they have been updated by Minuit
virtual void propagateParUpdates();
//! Read in the input fit data variables, e.g. m13Sq and m23Sq
virtual void cacheInputFitVars();
//! Check the initial fit parameters
virtual void checkInitFitParams();
//! Get the fit results and store them
/*!
\param [in] tablePrefixName prefix for the name of the output file
*/
virtual void finaliseFitResults(const TString& tablePrefixName);
//! Print the fit fractions, total DP rate and mean efficiency
/*!
\param [out] output the stream to which to print
*/
virtual void printFitFractions(std::ostream& output);
//! Print the asymmetries
/*!
\param [out] output the stream to which to print
*/
virtual void printAsymmetries(std::ostream& output);
//! Write the fit results in latex table format
/*!
\param [in] outputFile the name of the output file
*/
virtual void writeOutTable(const TString& outputFile);
//! Save the PDF plots for all the resonances
/*!
\param [in] label prefix for the file name to be saved
*/
virtual void savePDFPlots(const TString& label);
//! Save the PDF plots for the sum of resonances of a given spin
/*!
\param [in] label prefix for the file name to be saved
\param [in] spin spin of the wave to be saved
*/
virtual void savePDFPlotsWave(const TString& label, const Int_t& spin);
//! Store the per event likelihood values
- virtual void storePerEvtLlhds();
+ virtual void storePerEvtLlhds();
// Methods to do with calculating the likelihood functions
// and manipulating the fitting parameters.
//! Get the total likelihood for each event
/*!
\param [in] iEvt the event number
*/
virtual Double_t getTotEvtLikelihood(UInt_t iEvt);
//! Calculate the signal and background likelihoods for the DP for a given event
/*!
\param [in] iEvt the event number
*/
virtual void getEvtDPLikelihood(UInt_t iEvt);
//! Calculate the SCF likelihood for the DP for a given event
/*!
\param [in] iEvt the event number
*/
virtual Double_t getEvtSCFDPLikelihood(UInt_t iEvt);
//! Determine the signal and background likelihood for the extra variables for a given event
/*!
\param [in] iEvt the event number
*/
virtual void getEvtExtraLikelihoods(UInt_t iEvt);
//! Get the total number of events
/*!
\return the total number of events
*/
virtual Double_t getEventSum() const;
//! Set the fit parameters for the DP model
void setSignalDPParameters();
//! Set the fit parameters for the extra PDFs
void setExtraPdfParameters();
//! Set the initial yields
void setFitNEvents();
//! Set-up other parameters that are derived from the fit results, e.g. fit fractions
void setExtraNtupleVars();
//! Randomise the initial fit parameters
void randomiseInitFitPars();
//! Calculate the CP-conserving and CP-violating fit fractions
/*!
\param [in] initValues is this before or after the fit
*/
void calcExtraFractions(Bool_t initValues = kFALSE);
//! Calculate the CP asymmetries
/*!
\param [in] initValues is this before or after the fit
*/
void calcAsymmetries(Bool_t initValues = kFALSE);
//! Define the length of the background vectors
virtual void setupBkgndVectors();
//! Determine the number of events to generate for each hypothesis
std::pair<LauGenInfo,Bool_t> eventsToGenerate();
//! Generate signal event
Bool_t generateSignalEvent();
//! Generate background event
/*!
\param [in] bgID ID number of the background class
*/
Bool_t generateBkgndEvent(UInt_t bgID);
//! Setup the required ntuple branches
void setupGenNtupleBranches();
//! Store all of the DP information
void setDPBranchValues();
//! Generate from the extra PDFs
/*!
\param [in] extraPdfs the list of extra PDFs
\param [in] embeddedData the embedded data sample
*/
void generateExtraPdfValues(LauPdfList* extraPdfs, LauEmbeddedData* embeddedData);
//! Store the MC truth info on the TM/SCF nature of the embedded signal event
/*!
\param [in] embeddedData the full simulation information
*/
- Bool_t storeSignalMCMatch(LauEmbeddedData* embeddedData);
+ Bool_t storeSignalMCMatch(LauEmbeddedData* embeddedData);
//! Add sPlot branches for the extra PDFs
/*!
\param [in] extraPdfs the list of extra PDFs
\param [in] prefix the list of prefixes for the branch names
*/
void addSPlotNtupleBranches(const LauPdfList* extraPdfs, const TString& prefix);
//! Set the branches for the sPlot ntuple with extra PDFs
/*!
\param [in] extraPdfs the list of extra PDFs
\param [in] prefix the list of prefixes for the branch names
\param [in] iEvt the event number
*/
Double_t setSPlotNtupleBranchValues(LauPdfList* extraPdfs, const TString& prefix, UInt_t iEvt);
//! Update the signal events after Minuit sets background parameters
void updateSigEvents();
//! Add branches to store experiment number and the event number within the experiment
virtual void setupSPlotNtupleBranches();
//! Returns the names of all variables in the fit
virtual LauSPlot::NameSet variableNames() const;
//! Returns the names and yields of species that are free in the fit
virtual LauSPlot::NumbMap freeSpeciesNames() const;
//! Returns the names and yields of species that are fixed in the fit
virtual LauSPlot::NumbMap fixdSpeciesNames() const;
//! Returns the species and variables for all 2D PDFs in the fit
virtual LauSPlot::TwoDMap twodimPDFs() const;
//! Check if the signal is split into well-reconstructed and mis-reconstructed types
virtual Bool_t splitSignal() const {return this->useSCF();}
//! Check if the mis-reconstructed signal is to be smeared in the DP
virtual Bool_t scfDPSmear() const {return (scfMap_ != 0);}
//! Append fake data points to the inputData for each bin in the SCF smearing matrix
/*!
We'll be caching the DP amplitudes and efficiencies of the centres of the true bins.
To do so, we append fake points to the end of inputData; the entry number
minus the total number of events gives the index of the histogram for the
corresponding true bin in the LauScfMap object. (This means that when the user
provides the LauScfMap object, it is their responsibility to ensure that it contains
the right number of histograms, in exactly the right order!)
\param [in] inputData the fit data
*/
void appendBinCentres( LauFitDataTree* inputData );
LauIsobarDynamics* getNegSigModel() {return negSigModel_;}
LauIsobarDynamics* getPosSigModel() {return posSigModel_;}
- //! Retrieve a named parameter from a TTree
+ //! Retrieve a named parameter from a TTree
/*!
\param [in] tree a reference to the tree from which to obtain the parameters
\param [in] name the name of the parameter to retrieve
*/
- Double_t getParamFromTree(TTree & tree, TString name);
+ Double_t getParamFromTree( TTree& tree, const TString& name );
//! Set a LauParameter to a given value
/*!
\param [in] param a pointer to the LauParameter to set
\param [in] val the value to set
\param [in] fix whether to fix the LauParameter or leave it floating
*/
- void fixParam(LauParameter * param, Double_t val, Bool_t fix);
+ void fixParam( LauParameter* param, const Double_t val, const Bool_t fix );
//! Set a vector of LauParameters according to the specified method
/*!
\param [in] params the vector of pointers to LauParameter to set values of
*/
- void fixParams(std::vector<LauParameter *> params);
+ void fixParams( std::vector<LauParameter*>& params );
private:
//! Copy constructor (not implemented)
LauCPFitModel(const LauCPFitModel& rhs);
//! Copy assignment operator (not implemented)
LauCPFitModel& operator=(const LauCPFitModel& rhs);
//! The B- signal Dalitz plot model
LauIsobarDynamics *negSigModel_;
//! The B+ signal Dalitz plot model
LauIsobarDynamics *posSigModel_;
//! The B- background Dalitz plot models
LauBkgndDPModelList negBkgndDPModels_;
//! The B+ background Dalitz plot models
LauBkgndDPModelList posBkgndDPModels_;
//! The B- Dalitz plot kinematics object
LauKinematics *negKinematics_;
//! The B+ Dalitz plot kinematics object
LauKinematics *posKinematics_;
//! The B- signal PDFs
LauPdfList negSignalPdfs_;
//! The B+ signal PDFs
LauPdfList posSignalPdfs_;
//! The B- SCF PDFs
LauPdfList negScfPdfs_;
//! The B+ SCF PDFs
LauPdfList posScfPdfs_;
//! The B- background PDFs
LauBkgndPdfsList negBkgndPdfs_;
//! The B+ background PDFs
LauBkgndPdfsList posBkgndPdfs_;
//! Background boolean
Bool_t usingBkgnd_;
//! Number of signal components
UInt_t nSigComp_;
//! Number of signal DP parameters
UInt_t nSigDPPar_;
//! Number of extra PDF parameters
UInt_t nExtraPdfPar_;
//! Number of normalisation parameters (yields, asymmetries)
UInt_t nNormPar_;
//! Magnitudes and Phases
std::vector<LauAbsCoeffSet*> coeffPars_;
//! The B- fit fractions
LauParArray negFitFrac_;
//! The B+ fit fractions
LauParArray posFitFrac_;
//! Fit B- fractions (uncorrected for the efficiency)
LauParArray negFitFracEffUnCorr_;
//! Fit B+ fractions (uncorrected for the efficiency)
LauParArray posFitFracEffUnCorr_;
//! The CP violating fit fraction
LauParArray CPVFitFrac_;
//! The CP conserving fit fraction
LauParArray CPCFitFrac_;
//! The fit fraction asymmetries
std::vector<LauParameter> fitFracAsymm_;
//! A_CP parameter
std::vector<LauParameter> acp_;
//! The mean efficiency for B- model
LauParameter negMeanEff_;
//! The mean efficiency for B+ model
LauParameter posMeanEff_;
//! The average DP rate for B-
LauParameter negDPRate_;
//! The average DP rate for B+
LauParameter posDPRate_;
//! Signal yield
LauParameter* signalEvents_;
//! Signal asymmetry
LauParameter* signalAsym_;
//! Option to force an asymmetry
Bool_t forceAsym_;
//! Background yield(s)
LauBkgndYieldList bkgndEvents_;
//! Background asymmetry(ies)
LauBkgndYieldList bkgndAsym_;
//! Is the analysis tagged?
const Bool_t tagged_;
//! Event charge
const TString tagVarName_;
//! Current event charge
Int_t curEvtCharge_;
//! Vector to store event charges
std::vector<Int_t> evtCharges_;
//! Is the signal split into TM and SCF
Bool_t useSCF_;
//! Is the SCF fraction DP-dependent
Bool_t useSCFHist_;
//! The (global) SCF fraction parameter
LauParameter scfFrac_;
//! The histogram giving the DP-dependence of the SCF fraction
LauEffModel* scfFracHist_;
//! The smearing matrix for the SCF DP PDF
LauScfMap* scfMap_;
//! The cached values of the SCF fraction for each event
std::vector<Double_t> recoSCFFracs_;
//! The cached values of the SCF fraction for each bin centre
std::vector<Double_t> fakeSCFFracs_;
//! The cached values of the sqDP jacobians for each event
std::vector<Double_t> recoJacobians_;
//! The cached values of the sqDP jacobians for each true bin
std::vector<Double_t> fakeJacobians_;
//! Run choice variables
Bool_t compareFitData_;
//! Name of the parent particle
- TString negParent_;
+ TString negParent_;
//! Name of the parent particle
TString posParent_;
//! The complex coefficients for B-
std::vector<LauComplex> negCoeffs_;
//! The complex coefficients for B+
std::vector<LauComplex> posCoeffs_;
// Embedding full simulation events
//! The B- signal event tree
LauEmbeddedData *negSignalTree_;
//! The B+ signal event tree
LauEmbeddedData *posSignalTree_;
//! The B- background event tree
LauBkgndEmbDataList negBkgndTree_;
//! The B+ background event tree
LauBkgndEmbDataList posBkgndTree_;
//! Boolean to reuse signal events
Bool_t reuseSignal_;
//! Boolean to use reweighting for B-
- Bool_t useNegReweighting_;
+ Bool_t useNegReweighting_;
//! Boolean to use reweighting for B+
Bool_t usePosReweighting_;
//! Vector of booleans to reuse background events
LauBkgndReuseEventsList reuseBkgnd_;
// Likelihood values
//! Signal DP likelihood value
Double_t sigDPLike_;
//! SCF DP likelihood value
Double_t scfDPLike_;
//! Background DP likelihood value(s)
std::vector<Double_t> bkgndDPLike_;
//! Signal likelihood from extra PDFs
Double_t sigExtraLike_;
//! SCF likelihood from extra PDFs
Double_t scfExtraLike_;
//! Background likelihood value(s) from extra PDFs
std::vector<Double_t> bkgndExtraLike_;
//! Total signal likelihood
Double_t sigTotalLike_;
//! Total SCF likelihood
Double_t scfTotalLike_;
//! Total background likelihood(s)
std::vector<Double_t> bkgndTotalLike_;
ClassDef(LauCPFitModel,0) // CP fit/ToyMC model
};
#endif
diff --git a/inc/LauIsobarDynamics.hh b/inc/LauIsobarDynamics.hh
index b8123b8..1b522a7 100644
--- a/inc/LauIsobarDynamics.hh
+++ b/inc/LauIsobarDynamics.hh
@@ -1,983 +1,983 @@
/*
Copyright 2005 University of Warwick
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
Laura++ package authors:
John Back
Paul Harrison
Thomas Latham
*/
/*! \file LauIsobarDynamics.hh
\brief File containing declaration of LauIsobarDynamics class.
*/
/*! \class LauIsobarDynamics
\brief Class for defining signal dynamics using the isobar model.
*/
#ifndef LAU_ISOBAR_DYNAMICS
#define LAU_ISOBAR_DYNAMICS
#include <set>
#include <vector>
#include "TString.h"
#include "LauAbsResonance.hh"
#include "LauComplex.hh"
class LauCacheData;
class LauDaughters;
class LauAbsEffModel;
class LauAbsIncohRes;
class LauFitDataTree;
class LauKMatrixPropagator;
class LauDPPartialIntegralInfo;
class LauKinematics;
class LauIsobarDynamics {
public:
//! The type used for containing multiple self cross feed fraction models for different categories (e.g. tagging categories)
typedef std::map<Int_t,LauAbsEffModel*> LauTagCatScfFractionModelMap;
//! The possible statuses for toy MC generation
enum ToyMCStatus {
GenOK, /*!< Generation completed OK */
MaxIterError, /*!< Maximum allowed number of iterations completed without success (ASqMax is too high) */
ASqMaxError /*!< An amplitude squared value was returned that was larger than the maximum expected (ASqMax is too low) */
};
//! Constructor
/*!
\param [in] daughters the daughters of the decay
\param [in] effModel the model to describe the efficiency across the Dalitz plot
\param [in] scfFractionModel the model to describe the fraction of poorly reconstructed events (the self cross feed fraction) across the Dalitz plot
*/
LauIsobarDynamics(LauDaughters* daughters, LauAbsEffModel* effModel, LauAbsEffModel* scfFractionModel = 0);
//! Constructor
/*!
\param [in] daughters the daughters of the decay
\param [in] effModel the model to describe the efficiency across the Dalitz plot
\param [in] scfFractionModel the models to describe the fraction of poorly reconstructed events (the self cross feed fraction) across the Dalitz plot for various tagging categories
*/
LauIsobarDynamics(LauDaughters* daughters, LauAbsEffModel* effModel, LauTagCatScfFractionModelMap scfFractionModel);
//! Destructor
virtual ~LauIsobarDynamics();
//! Initialise the Dalitz plot dynamics
/*!
\param [in] coeffs the complex coefficients for the resonances
*/
void initialise(const std::vector<LauComplex>& coeffs);
//! Recalculate the normalisation
void recalculateNormalisation();
//! Set the name of the file to which to save the results of the integrals
/*!
\param [in] fileName the name of the file
*/
inline void setIntFileName(const TString& fileName) {intFileName_ = fileName;}
// Integration
//! Set the widths of the bins to use when integrating across the Dalitz plot or square Dalitz plot
/*!
Specify the bin widths required when performing the DP integration.
Note that the integration is not performed in m13^2 vs m23^2 space
but in either m13 vs m23 space or mPrime vs thetaPrime space,
with the appropriate Jacobian applied.
The default bin widths in m13 vs m23 space are 0.005 GeV.
The default bin widths in mPrime vs thetaPrime space are 0.001.
\param [in] m13BinWidth the bin width to use when integrating over m13
\param [in] m23BinWidth the bin width to use when integrating over m23
\param [in] mPrimeBinWidth the bin width to use when integrating over mPrime
\param [in] thPrimeBinWidth the bin width to use when integrating over thetaPrime
*/
void setIntegralBinWidths(const Double_t m13BinWidth, const Double_t m23BinWidth,
const Double_t mPrimeBinWidth = 0.001, const Double_t thPrimeBinWidth = 0.001);
//! Set the value below which a resonance width is considered to be narrow
/*!
Narrow resonances trigger different integration behaviour - dividing the DP into regions where a finer binning is used.
This can cause high memory usage, so use this method and LauIsobarDynamics::setIntegralBinningFactor to tune this behaviour, if needed.
\param [in] narrowWidth the value below which a resonance is considered to be narrow (defaults to 0.02 GeV/c^2)
*/
void setNarrowResonanceThreshold(const Double_t narrowWidth) { narrowWidth_ = narrowWidth; }
//! Set the factor relating the width of a narrow resonance and the binning size in its integration region
/*!
Narrow resonances trigger different integration behaviour - dividing the DP into regions where a finer binning is used.
This can cause high memory usage, so use this method and LauIsobarDynamics::setNarrowResonanceThreshold to tune this behaviour, if needed.
\param [in] binningFactor the factor by which the resonance width is divided to obtain the bin size (defaults to 100)
*/
void setIntegralBinningFactor(const Double_t binningFactor) { binningFactor_ = binningFactor; }
//! Force the symmetrisation of the integration in m13 <-> m23 for non-symmetric but flavour-conjugate final states
/*!
This can be necessary for time-dependent fits (where interference terms between A and Abar need to be integrated)
\param [in] force toggle forcing symmetrisation of the integration for apparently flavour-conjugate final states
*/
void forceSymmetriseIntegration(const Bool_t force) { forceSymmetriseIntegration_ = force; }
//! Add a resonance to the Dalitz plot
/*!
NB the stored order of resonances is:
- Firstly, all coherent resonances (i.e. those added using addResonance() or addKMatrixProdPole() or addKMatrixProdSVP()) in order of addition
- Followed by all incoherent resonances (i.e. those added using addIncoherentResonance()) in order of addition
\param [in] resName the name of the resonant particle
\param [in] resPairAmpInt the index of the daughter not produced by the resonance
\param [in] resType the model for the resonance dynamics
\param [in] bwCategory the Blatt-Weisskopf barrier factor category
\return the newly created resonance
*/
LauAbsResonance* addResonance(const TString& resName, const Int_t resPairAmpInt, const LauAbsResonance::LauResonanceModel resType, const LauBlattWeisskopfFactor::BlattWeisskopfCategory bwCategory = LauBlattWeisskopfFactor::Default);
//! Add an incoherent resonance to the Dalitz plot
/*!
NB the stored order of resonances is:
- Firstly, all coherent resonances (i.e. those added using addResonance() or addKMatrixProdPole() or addKMatrixProdSVP()) in order of addition
- Followed by all incoherent resonances (i.e. those added using addIncoherentResonance()) in order of addition
\param [in] resName the name of the resonant particle
\param [in] resPairAmpInt the index of the daughter not produced by the resonance
\param [in] resType the model for the resonance dynamics
\return the newly created resonance
*/
LauAbsResonance* addIncoherentResonance(const TString& resName, const Int_t resPairAmpInt, const LauAbsResonance::LauResonanceModel resType);
//! Define a new K-matrix Propagator
/*!
\param [in] propName the name of the propagator
\param [in] paramFileName the file that defines the propagator
\param [in] resPairAmpInt the index of the bachelor
\param [in] nChannels the number of channels
\param [in] nPoles the number of poles
\param [in] rowIndex the index of the row to be used when summing over all amplitude channels: S-wave corresponds to rowIndex = 1.
*/
void defineKMatrixPropagator(const TString& propName, const TString& paramFileName,
Int_t resPairAmpInt, Int_t nChannels, Int_t nPoles, Int_t rowIndex = 1);
//! Add a K-matrix production pole term to the model
/*!
NB the stored order of resonances is:
- Firstly, all coherent resonances (i.e. those added using addResonance() or addKMatrixProdPole() or addKMatrixProdSVP()) in order of addition
- Followed by all incoherent resonances (i.e. those added using addIncoherentResonance()) in order of addition
\param [in] poleName the name of the pole
\param [in] propName the name of the propagator to use
\param [in] poleIndex the index of the pole within the propagator
- \param [in] useProdAdler boolean to turn on/off the production Adler zero factor (default = off)
+ \param [in] useProdAdler boolean to turn on/off the production Adler zero factor (default = off)
*/
- void addKMatrixProdPole(const TString& poleName, const TString& propName, Int_t poleIndex, Bool_t useProdAdler = kFALSE);
+ void addKMatrixProdPole(const TString& poleName, const TString& propName, Int_t poleIndex, Bool_t useProdAdler = kFALSE);
//! Add a K-matrix slowly-varying part (SVP) term to the model
/*!
NB the stored order of resonances is:
- Firstly, all coherent resonances (i.e. those added using addResonance() or addKMatrixProdPole() or addKMatrixProdSVP()) in order of addition
- Followed by all incoherent resonances (i.e. those added using addIncoherentResonance()) in order of addition
\param [in] SVPName the name of the term
\param [in] propName the name of the propagator to use
\param [in] channelIndex the index of the channel within the propagator
\param [in] useProdAdler boolean to turn on/off the production Adler zero factor (default = off)
*/
- void addKMatrixProdSVP(const TString& SVPName, const TString& propName, Int_t channelIndex, Bool_t useProdAdler = kFALSE);
+ void addKMatrixProdSVP(const TString& SVPName, const TString& propName, Int_t channelIndex, Bool_t useProdAdler = kFALSE);
//! Set the maximum value of A squared to be used in the accept/reject
/*!
\param [in] value the new value
*/
inline void setASqMaxValue(Double_t value) {aSqMaxSet_ = value;}
//! Retrieve the maximum value of A squared to be used in the accept/reject
/*!
\return the maximum value of A squared
*/
inline Double_t getASqMaxSetValue() const { return aSqMaxSet_; }
//! Retrieve the maximum of A squared that has been found while generating
/*!
\return the maximum of A squared that has been found
*/
inline Double_t getASqMaxVarValue() const { return aSqMaxVar_; }
//! Generate a toy MC signal event
/*!
\return kTRUE if the event is successfully generated, kFALSE otherwise
*/
Bool_t generate();
//! Check the status of the toy MC generation
/*!
\param [in] printErrorMessages whether error messages should be printed
\param [in] printInfoMessages whether info messages should be printed
\return the status of the toy MC generation
*/
ToyMCStatus checkToyMC(Bool_t printErrorMessages = kTRUE, Bool_t printInfoMessages = kFALSE);
//! Retrieve the maximum number of iterations allowed when generating an event
/*!
\return the maximum number of iterations allowed
*/
inline Int_t maxGenIterations() const {return iterationsMax_;}
//! Calculate the likelihood (and all associated information) for the given event number
/*!
\param [in] iEvt the event number
*/
void calcLikelihoodInfo(const UInt_t iEvt);
//! Calculate the likelihood (and all associated information) given values of the Dalitz plot coordinates
/*!
\param [in] m13Sq the invariant mass squared of the first and third daughters
\param [in] m23Sq the invariant mass squared of the second and third daughters
*/
void calcLikelihoodInfo(const Double_t m13Sq, const Double_t m23Sq);
//! Calculate the likelihood (and all associated information) given values of the Dalitz plot coordinates and the tagging category
/*!
Also obtain the self cross feed fraction to cache with the rest of the Dalitz plot quantities.
\param [in] m13Sq the invariant mass squared of the first and third daughters
\param [in] m23Sq the invariant mass squared of the second and third daughters
\param [in] tagCat the tagging category
*/
void calcLikelihoodInfo(const Double_t m13Sq, const Double_t m23Sq, const Int_t tagCat);
//! Calculate the fit fractions, mean efficiency and total DP rate
/*!
\param [in] init whether the calculated values should be stored as the initial/generated values or the fitted values
*/
void calcExtraInfo(const Bool_t init = kFALSE);
//! Calculate whether an event with the current kinematics should be accepted in order to produce a distribution of events that matches the model, e.g. when reweighting embedded data
/*!
Uses the accept/reject method.
\return kTRUE if the event has been accepted, kFALSE otherwise
*/
Bool_t gotReweightedEvent();
//! Calculate the acceptance rate for events with the current kinematics when generating events according to the model
/*!
\return the weight for the current kinematics
*/
Double_t getEventWeight();
//! Retrieve the total amplitude for the current event
/*!
\return the total amplitude
*/
inline const LauComplex& getEvtDPAmp() const {return totAmp_;}
//! Retrieve the invariant mass squared of the first and third daughters in the current event
/*!
\return the invariant mass squared of the first and third daughters in the current event
*/
inline Double_t getEvtm13Sq() const {return m13Sq_;}
//! Retrieve the invariant mass squared of the second and third daughters in the current event
/*!
\return the invariant mass squared of the second and third daughters in the current event
*/
inline Double_t getEvtm23Sq() const {return m23Sq_;}
//! Retrieve the square Dalitz plot coordinate, m', for the current event
/*!
\return the square Dalitz plot coordinate, m', for the current event
*/
inline Double_t getEvtmPrime() const {return mPrime_;}
//! Retrieve the square Dalitz plot coordinate, theta', for the current event
/*!
\return the square Dalitz plot coordinate, theta', for the current event
*/
inline Double_t getEvtthPrime() const {return thPrime_;}
//! Retrieve the efficiency for the current event
/*!
\return the efficiency for the current event
*/
inline Double_t getEvtEff() const {return eff_;}
//! Retrieve the fraction of events that are poorly reconstructed (the self cross feed fraction) for the current event
/*!
\return the self cross feed fraction for the current event
*/
inline Double_t getEvtScfFraction() const {return scfFraction_;}
//! Retrieve the Jacobian, for the transformation into square DP coordinates, for the current event
/*!
\return the Jacobian for the current event
*/
inline Double_t getEvtJacobian() const {return jacobian_;}
//! Retrieve the total intensity multiplied by the efficiency for the current event
/*!
\return the total intensity multiplied by the efficiency for the current event
*/
inline Double_t getEvtIntensity() const {return ASq_;}
//! Retrieve the likelihood for the current event
/*!
The likelihood is the normalised total intensity:
evtLike_ = ASq_/DPNorm_
\return the likelihood for the current event
*/
inline Double_t getEvtLikelihood() const {return evtLike_;}
//! Retrieve the normalised dynamic part of the amplitude of the given amplitude component at the current point in the Dalitz plot
/*!
\param [in] resID the index of the component within the model
\return the amplitude of the given component
*/
inline LauComplex getDynamicAmp(const Int_t resID) const {return ff_[resID].scale(fNorm_[resID]);}
//! Retrieve the Amplitude of resonance resID
/*!
\param [in] resID the index of the component within the model
\return the amplitude of the given component
*/
inline LauComplex getFullAmplitude(const Int_t resID) const {return Amp_[resID] * this->getDynamicAmp(resID);}
//! Retrieve the event-by-event running totals of amplitude cross terms for all pairs of amplitude components
/*!
\return the event-by-event running totals of amplitude cross terms
*/
inline const std::vector< std::vector<LauComplex> >& getFiFjSum() const {return fifjSum_;}
//! Retrieve the event-by-event running totals of efficiency corrected amplitude cross terms for all pairs of amplitude components
/*!
\return the event-by-event running totals of amplitude cross terms with efficiency corrections applied
*/
inline const std::vector< std::vector<LauComplex> >& getFiFjEffSum() const {return fifjEffSum_;}
//! Retrieve the normalisation factors for the dynamic parts of the amplitudes for all of the amplitude components
/*!
\return the normalisation factors
*/
inline const std::vector<Double_t>& getFNorm() const {return fNorm_;}
//! Fill the internal data structure that caches the resonance dynamics
/*!
\param [in] fitDataTree the data source
*/
void fillDataTree(const LauFitDataTree& fitDataTree);
//! Recache the amplitude values for those that have changed
void modifyDataTree();
//! Check whether this model includes a named resonance
/*!
\param [in] resName the resonance
\return true if the resonance is present, false otherwise
*/
Bool_t hasResonance(const TString& resName) const;
//! Retrieve the index for the given resonance
/*!
\param [in] resName the resonance
\return the index of the resonance if it is present, -1 otherwise
*/
Int_t resonanceIndex(const TString& resName) const;
//! Retrieve the name of the charge conjugate of a named resonance
/*!
\param [in] resName the resonance
\return the name of the charge conjugate
*/
TString getConjResName(const TString& resName) const;
//! Retrieve the named resonance
/*!
\param [in] resName the name of the resonance to retrieve
\return the requested resonance
*/
const LauAbsResonance* findResonance(const TString& resName) const;
//! Retrieve a resonance by its index
/*!
\param [in] resIndex the index of the resonance to retrieve
\return the requested resonance
*/
const LauAbsResonance* getResonance(const UInt_t resIndex) const;
//! Update the complex coefficients for the resonances
/*!
\param [in] coeffs the new set of coefficients
*/
void updateCoeffs(const std::vector<LauComplex>& coeffs);
- //! Collate the resonance parameters to initialise (or re-initialise) the model
+ //! Collate the resonance parameters to initialise (or re-initialise) the model
/*!
NB: This has been factored out of the initialise() method to allow for use in the
- importation of parameters in LauAbsFitModel
+ importation of parameters in LauAbsFitModel
*/
- void collateResonanceParameters();
+ void collateResonanceParameters();
//! Set the helicity flip flag for new amplitude components
/*!
\param [in] boolean the helicity flip flag
*/
- inline void flipHelicityForCPEigenstates(Bool_t boolean) {flipHelicity_ = boolean;}
+ inline void flipHelicityForCPEigenstates(const Bool_t boolean) {flipHelicity_ = boolean;}
//! Retrieve the mean efficiency across the Dalitz plot
/*!
\return the mean efficiency across the Dalitz plot
*/
inline const LauParameter& getMeanEff() const {return meanDPEff_;}
//! Retrieve the overall Dalitz plot rate
/*!
\return the overall Dalitz plot rate
*/
inline const LauParameter& getDPRate() const {return DPRate_;}
//! Retrieve the fit fractions for the amplitude components
/*!
\return the fit fractions
*/
inline const LauParArray& getFitFractions() const {return fitFrac_;}
//! Retrieve the fit fractions for the amplitude components
/*!
\return the fit fractions
*/
inline const LauParArray& getFitFractionsEfficiencyUncorrected() const {return fitFracEffUnCorr_;}
//! Retrieve the total number of amplitude components
/*!
\return the total number of amplitude components
*/
inline UInt_t getnTotAmp() const {return nAmp_+nIncohAmp_;}
//! Retrieve the number of coherent amplitude components
/*!
\return the number of coherent amplitude components
*/
inline UInt_t getnCohAmp() const {return nAmp_;}
//! Retrieve the number of incoherent amplitude components
/*!
\return the number of incoherent amplitude components
*/
inline UInt_t getnIncohAmp() const {return nIncohAmp_;}
//! Retrieve the normalisation factor for the log-likelihood function
/*!
\return the normalisation factor
*/
inline Double_t getDPNorm() const {return DPNorm_;}
//! Retrieve the daughters
/*!
\return the daughters
*/
inline const LauDaughters* getDaughters() const {return daughters_;}
//! Retrieve the Dalitz plot kinematics
/*!
\return the Dalitz plot kinematics
*/
inline const LauKinematics* getKinematics() const {return kinematics_;}
//! Retrieve the Dalitz plot kinematics
/*!
\return the Dalitz plot kinematics
*/
inline LauKinematics* getKinematics() {return kinematics_;}
//! Retrieve the model for the efficiency across the Dalitz plot
/*!
\return the efficiency model
*/
inline const LauAbsEffModel* getEffModel() const {return effModel_;}
//! Check whether a self cross feed fraction model is being used
/*!
\return true if a self cross feed fraction model is being used, false otherwise
*/
inline Bool_t usingScfModel() const { return ! scfFractionModel_.empty(); }
//! Retrieve any extra parameters/quantities (e.g. K-matrix total fit fractions)
/*!
\return any extra parameters
*/
inline const std::vector<LauParameter>& getExtraParameters() const {return extraParameters_;}
//! Retrieve the floating parameters of the resonance models
/*!
\return the list of floating parameters
*/
inline std::vector<LauParameter*>& getFloatingParameters() {return resonancePars_;}
//! Whether to calculate separate rho and omega fit-fractions from LauRhoOmegaMix
- inline void calculateRhoOmegaFitFractions(Bool_t calcFF) { calculateRhoOmegaFitFractions_ = calcFF; }
+ inline void calculateRhoOmegaFitFractions(const Bool_t calcFF) { calculateRhoOmegaFitFractions_ = calcFF; }
protected:
//! Print a summary of the model to be used
void initSummary();
//! Initialise the internal storage for this model
void initialiseVectors();
//! Zero the various values used to store integrals info
void resetNormVectors();
//! Calculate the Dalitz plot normalisation integrals across the whole Dalitz plot
void calcDPNormalisation();
//! Form the regions that are produced by the spaces between narrow resonances
/*!
\param [in] regions the regions defined around narrow resonances
\param [in] min the minimum value of the invariant mass
\param [in] max the maximum value of the invariant mass
\return vector of pairs of invariant-mass values defining the gaps between the given regions
*/
std::vector< std::pair<Double_t,Double_t> > formGapsFromRegions(const std::vector< std::pair<Double_t,Double_t> >& regions, const Double_t min, const Double_t max) const;
//! Remove entries in the vector of LauDPPartialIntegralInfo* that are null
/*!
\param [in] regions the list of region pointers
*/
void cullNullRegions(std::vector<LauDPPartialIntegralInfo*>& regions) const;
//! Wrapper for LauDPPartialIntegralInfo constructor
/*!
\param [in] minm13 the minimum of the m13 range
\param [in] maxm13 the maximum of the m13 range
\param [in] minm23 the minimum of the m23 range
\param [in] maxm23 the maximum of the m23 range
\param [in] m13BinWidth the m13 bin width
\param [in] m23BinWidth the m23 bin width
\param [in] precision the precision required for the Gauss-Legendre weights
\param [in] nAmp the number of coherent amplitude components
\param [in] nIncohAmp the number of incoherent amplitude components
\return 0 if the integration region has no internal points, otherwise returns a pointer to the newly constructed LauDPPartialIntegralInfo object
*/
LauDPPartialIntegralInfo* newDPIntegrationRegion(const Double_t minm13, const Double_t maxm13,
const Double_t minm23, const Double_t maxm23,
const Double_t m13BinWidth, const Double_t m23BinWidth,
const Double_t precision,
const UInt_t nAmp,
const UInt_t nIncohAmp) const;
//! Correct regions to ensure that the finest integration grid takes precedence
/*!
\param [in] regions the windows in invariant mass
\param [in] binnings the corresponding binnings for each window
*/
void correctDPOverlap(std::vector< std::pair<Double_t,Double_t> >& regions, const std::vector<Double_t>& binnings) const;
//! Create the integration grid objects for the m23 narrow resonance regions, including the overlap regions with the m13 narrow resonances
/*!
The overlap regions will have an m13Binnings x m23Binnings grid.
The other regions will have a defaultBinning x m23Binnings grid.
\param [in] m13Regions the limits of each narrow-resonance region in m13
\param [in] m23Regions the limits of each narrow-resonance region in m23
\param [in] m13Binnings the binning of each narrow-resonance region in m13
\param [in] m23Binnings the binning of each narrow-resonance region in m23
\param [in] precision the precision required for the Gauss-Legendre weights
\param [in] defaultBinning the binning used in the bulk of the phase space
\return vector of pointers to LauDPPartialIntegralInfo objects that contain the individual regions
*/
std::vector<LauDPPartialIntegralInfo*> m23IntegrationRegions(const std::vector< std::pair<Double_t,Double_t> >& m13Regions,
const std::vector< std::pair<Double_t,Double_t> >& m23Regions,
const std::vector<Double_t>& m13Binnings,
const std::vector<Double_t>& m23Binnings,
const Double_t precision,
const Double_t defaultBinning) const;
//! Create the integration grid objects for the m13 narrow resonance regions, excluding the overlap regions with the m23 narrow resonances
/*!
The regions will have an m13Binnings x defaultBinning grid.
The overlap regions are created by the m23IntegrationRegions function.
\param [in] m13Regions the limits of each narrow-resonance region in m13
\param [in] m23Regions the limits of each narrow-resonance region in m23
\param [in] m13Binnings the binning of each narrow-resonance region in m13
\param [in] precision the precision required for the Gauss-Legendre weights
\param [in] defaultBinning the binning used in the bulk of the phase space
\return vector of pointers to LauDPPartialIntegralInfo objects that contain the individual regions
*/
std::vector<LauDPPartialIntegralInfo*> m13IntegrationRegions(const std::vector< std::pair<Double_t,Double_t> >& m13Regions,
const std::vector< std::pair<Double_t,Double_t> >& m23Regions,
const std::vector<Double_t>& m13Binnings,
const Double_t precision,
const Double_t defaultBinning) const;
//! Calculate the Dalitz plot normalisation integrals across the whole Dalitz plot
void calcDPNormalisationScheme();
//! Determine which amplitudes and integrals need to be recalculated
void findIntegralsToBeRecalculated();
//! Calculate the Dalitz plot normalisation integrals over a given range
/*!
\param [in] intInfo the integration information object
*/
void calcDPPartialIntegral(LauDPPartialIntegralInfo* intInfo);
//! Write the results of the integrals (and related information) to a file
void writeIntegralsFile();
//! Set the dynamic part of the amplitude for a given amplitude component at the current point in the Dalitz plot
/*!
\param [in] index the index of the amplitude component
\param [in] realPart the real part of the amplitude
\param [in] imagPart the imaginary part of the amplitude
*/
void setFFTerm(const UInt_t index, const Double_t realPart, const Double_t imagPart);
//! Set the dynamic part of the intensity for a given incoherent amplitude component at the current point in the Dalitz plot
/*!
\param [in] index the index of the incoherent amplitude component
\param [in] value the intensity
*/
void setIncohIntenTerm(const UInt_t index, const Double_t value);
//! Calculate the amplitudes for all resonances for the current kinematics
void calculateAmplitudes();
//! Calculate or retrieve the cached value of the amplitudes for all resonances at the specified integration grid point
/*!
\param [in,out] intInfo the integration information object
\param [in] m13Point the grid index in m13
\param [in] m23Point the grid index in m23
*/
void calculateAmplitudes( LauDPPartialIntegralInfo* intInfo, const UInt_t m13Point, const UInt_t m23Point );
//! Add the amplitude values (with the appropriate weight) at the current grid point to the running integral values
/*!
\param [in] weight the weight to apply
*/
void addGridPointToIntegrals(const Double_t weight);
//! Calculate the total Dalitz plot amplitude at the current point in the Dalitz plot
/*!
\param [in] useEff whether to apply efficiency corrections
*/
void calcTotalAmp(const Bool_t useEff);
//! Obtain the efficiency of the current event from the model
/*!
\return the efficiency
*/
Double_t retrieveEfficiency();
//! Obtain the self cross feed fraction of the current event from the model
/*!
\param [in] tagCat the tagging category of the current event
\return the self cross feed fraction
*/
Double_t retrieveScfFraction(Int_t tagCat);
//! Set the maximum of A squared that has been found
/*!
\param [in] value the new value
*/
inline void setASqMaxVarValue(Double_t value) {aSqMaxVar_ = value;}
//! Calculate the normalisation factor for the log-likelihood function
/*!
\return the normalisation factor
*/
Double_t calcSigDPNorm();
//! Calculate the dynamic part of the amplitude for a given component at the current point in the Dalitz plot
/*!
\param [in] index the index of the amplitude component within the model
\return the dynamic part of the amplitude for the given component
*/
LauComplex resAmp(const UInt_t index);
//! Calculate the dynamic part of the intensity for a given incoherent component at the current point in the Dalitz plot
/*!
\param [in] index the index of the incoherent component within the model
\return the dynamic part of the intensity for the given component
*/
Double_t incohResAmp(const UInt_t index);
//! Load the data for a given event
/*!
\param [in] iEvt the number of the event
*/
void setDataEventNo(UInt_t iEvt);
//! Retrieve the named resonance
/*!
\param [in] resName the name of the resonance to retrieve
\return the requested resonance
*/
LauAbsResonance* findResonance(const TString& resName);
//! Retrieve a resonance by its index
/*!
\param [in] resIndex the index of the resonance to retrieve
\return the requested resonance
*/
LauAbsResonance* getResonance(const UInt_t resIndex);
//! Remove the charge from the given particle name
/*!
\param [in,out] string the particle name
*/
void removeCharge(TString& string) const;
//! Check whether a resonance is a K-matrix component of a given propagator
/*!
\param [in] resAmpInt the index of the resonance within the model
\param [in] propName the name of the K-matrix propagator
\return true if the resonance is a component of the given propagator, otherwise return false
*/
Bool_t gotKMatrixMatch(UInt_t resAmpInt, const TString& propName) const;
private:
//! Copy constructor (not implemented)
LauIsobarDynamics(const LauIsobarDynamics& rhs);
//! Copy assignment operator (not implemented)
LauIsobarDynamics& operator=(const LauIsobarDynamics& rhs);
//! The type used for containing the K-matrix propagators
typedef std::map<TString, LauKMatrixPropagator*> KMPropMap;
//! The type used for mapping K-matrix components to their propagators
typedef std::map<TString, TString> KMStringMap;
//! The daughters of the decay
LauDaughters* daughters_;
//! The kinematics of the decay
LauKinematics* kinematics_;
//! The efficiency model across the Dalitz plot
LauAbsEffModel* effModel_;
//! The self cross feed fraction models across the Dalitz plot
/*!
These model the fraction of signal events that are poorly reconstructed (the self cross feed fraction) as a function of Dalitz plot position.
If the self cross feed is dependent on the tagging category then separate models can be defined.
*/
LauTagCatScfFractionModelMap scfFractionModel_;
//! The number of amplitude components
UInt_t nAmp_;
//! The number of incoherent amplitude components
UInt_t nIncohAmp_;
//! The complex coefficients for the amplitude components
std::vector<LauComplex> Amp_;
//! The normalisation factor for the log-likelihood function
Double_t DPNorm_;
//! The fit fractions for the amplitude components
LauParArray fitFrac_;
//! The efficiency-uncorrected fit fractions for the amplitude components
LauParArray fitFracEffUnCorr_;
//! The overall Dalitz plot rate
LauParameter DPRate_;
//! The mean efficiency across the Dalitz plot
LauParameter meanDPEff_;
//! The cached data for all events
std::vector<LauCacheData*> data_;
//! The cached data for the current event
LauCacheData* currentEvent_;
//! any extra parameters/quantities (e.g. K-matrix total fit fractions)
std::vector<LauParameter> extraParameters_;
//! The resonances in the model
std::vector<LauAbsResonance*> sigResonances_;
//! The incoherent resonances in the model
std::vector<LauAbsIncohRes*> sigIncohResonances_;
//! The K-matrix propagators
KMPropMap kMatrixPropagators_;
//! The names of the K-matrix components in the model mapped to their propagators
KMStringMap kMatrixPropSet_;
//! The resonance types of all of the amplitude components
std::vector<TString> resTypAmp_;
//! The index of the daughter not produced by the resonance for each amplitude component
std::vector<Int_t> resPairAmp_;
//! The resonance types of all of the incoherent amplitude components
std::vector<TString> incohResTypAmp_;
//! The index of the daughter not produced by the resonance for each incoherent amplitude component
std::vector<Int_t> incohResPairAmp_;
//! The PDG codes of the daughters
std::vector<Int_t> typDaug_;
//! Whether the Dalitz plot is symmetrical
Bool_t symmetricalDP_;
//! Whether the Dalitz plot is fully symmetric
Bool_t fullySymmetricDP_;
//! Whether the Dalitz plot is a flavour-conjugate final state
Bool_t flavConjDP_;
//! Whether the integrals have been performed
Bool_t integralsDone_;
//! Whether the scheme for the integration has been determined
Bool_t normalizationSchemeDone_;
//! Force the symmetrisation of the integration in m13 <-> m23 for non-symmetric but flavour-conjugate final states
Bool_t forceSymmetriseIntegration_;
//! The storage of the integration scheme
std::vector<LauDPPartialIntegralInfo*> dpPartialIntegralInfo_;
//! The name of the file to save integrals to
TString intFileName_;
//! The bin width to use when integrating over m13
Double_t m13BinWidth_;
//! The bin width to use when integrating over m23
Double_t m23BinWidth_;
//! The bin width to use when integrating over mPrime
Double_t mPrimeBinWidth_;
//! The bin width to use when integrating over thetaPrime
Double_t thPrimeBinWidth_;
//! The value below which a resonance width is considered to be narrow
Double_t narrowWidth_;
//! The factor relating the width of the narrowest resonance and the binning size
Double_t binningFactor_;
//! The invariant mass squared of the first and third daughters
Double_t m13Sq_;
//! The invariant mass squared of the second and third daughters
Double_t m23Sq_;
//! The square Dalitz plot coordinate, m'
Double_t mPrime_;
//! The square Dalitz plot coordinate theta'
Double_t thPrime_;
//! The tagging category
Int_t tagCat_;
//! The efficiency at the current point in the Dalitz plot
Double_t eff_;
//! The fraction of events that are poorly reconstructed (the self cross feed fraction) at the current point in the Dalitz plot
Double_t scfFraction_;
//! The Jacobian, for the transformation into square DP coordinates at the current point in the Dalitz plot
Double_t jacobian_;
//! The value of A squared for the current event
Double_t ASq_;
//! The normalised likelihood for the current event
Double_t evtLike_;
//! The total amplitude for the current event
LauComplex totAmp_;
//! The event-by-event running total of efficiency corrected amplitude cross terms for each pair of amplitude components
/*!
Calculated as the sum of ff_[i]*ff_[j]*efficiency for all events
*/
std::vector< std::vector<LauComplex> > fifjEffSum_;
//! The event-by-event running total of the amplitude cross terms for each pair of amplitude components
/*!
Calculated as the sum of ff_[i]*ff_[j] for all events
*/
std::vector< std::vector<LauComplex> > fifjSum_;
//! The dynamic part of the amplitude for each amplitude component at the current point in the Dalitz plot
std::vector<LauComplex> ff_;
//! The dynamic part of the intensity for each incoherent amplitude component at the current point in the Dalitz plot
std::vector<Double_t> incohInten_;
//! The event-by-event running total of the dynamical amplitude squared for each amplitude component
std::vector<Double_t> fSqSum_;
//! The event-by-event running total of the dynamical amplitude squared for each amplitude component
std::vector<Double_t> fSqEffSum_;
//! The normalisation factors for the dynamic parts of the amplitude for each amplitude component
std::vector<Double_t> fNorm_;
//! The maximum allowed number of attempts when generating an event
Int_t iterationsMax_;
//! The number of unsuccessful attempts to generate an event so far
Int_t nSigGenLoop_;
//! The maximum allowed value of A squared
Double_t aSqMaxSet_;
//! The maximum value of A squared that has been seen so far while generating
Double_t aSqMaxVar_;
//! The helicity flip flag for new amplitude components
Bool_t flipHelicity_;
//! Flag to recalculate the normalisation
Bool_t recalcNormalisation_;
//! List of floating resonance parameters
std::vector<LauParameter*> resonancePars_;
//! List of floating resonance parameter values from previous calculation
std::vector<Double_t> resonanceParValues_;
//! Indices in sigResonances_ to point to the corresponding signal resonance(s) for each floating parameter
std::vector< std::vector<UInt_t> > resonanceParResIndex_;
//! Resonance indices for which the amplitudes and integrals should be recalculated
std::set<UInt_t> integralsToBeCalculated_;
- //! Whether to calculate separate rho and omega fit fractions from the LauRhoOmegaMix model
- Bool_t calculateRhoOmegaFitFractions_;
+ //! Whether to calculate separate rho and omega fit fractions from the LauRhoOmegaMix model
+ Bool_t calculateRhoOmegaFitFractions_;
ClassDef(LauIsobarDynamics,0)
};
#endif
diff --git a/src/LauAbsFitModel.cc b/src/LauAbsFitModel.cc
index 4e96ce5..6bd4899 100644
--- a/src/LauAbsFitModel.cc
+++ b/src/LauAbsFitModel.cc
@@ -1,1089 +1,1090 @@
/*
Copyright 2004 University of Warwick
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
Laura++ package authors:
John Back
Paul Harrison
Thomas Latham
*/
/*! \file LauAbsFitModel.cc
\brief File containing implementation of LauAbsFitModel class.
*/
#include <iostream>
#include <limits>
#include <vector>
#include "TMessage.h"
#include "TMonitor.h"
#include "TServerSocket.h"
#include "TSocket.h"
#include "TSystem.h"
#include "TVirtualFitter.h"
#include "LauAbsFitModel.hh"
#include "LauAbsFitter.hh"
#include "LauAbsPdf.hh"
#include "LauComplex.hh"
#include "LauFitter.hh"
#include "LauFitDataTree.hh"
#include "LauGenNtuple.hh"
#include "LauParameter.hh"
#include "LauParamFixed.hh"
#include "LauPrint.hh"
#include "LauSPlot.hh"
ClassImp(LauAbsFitModel)
LauAbsFitModel::LauAbsFitModel() :
+ fixParams_(kFALSE),
compareFitData_(kFALSE),
savePDF_(kFALSE),
writeLatexTable_(kFALSE),
writeSPlotData_(kFALSE),
storeDPEff_(kFALSE),
randomFit_(kFALSE),
emlFit_(kFALSE),
poissonSmear_(kFALSE),
enableEmbedding_(kFALSE),
usingDP_(kTRUE),
pdfsDependOnDP_(kFALSE),
inputFitData_(0),
genNtuple_(0),
sPlotNtuple_(0),
nullString_(""),
doSFit_(kFALSE),
sWeightBranchName_(""),
sWeightScaleFactor_(1.0),
outputTableName_(""),
fitToyMCFileName_("fitToyMC.root"),
fitToyMCTableName_("fitToyMCTable"),
fitToyMCScale_(10),
fitToyMCPoissonSmear_(kFALSE),
sPlotFileName_(""),
sPlotTreeName_(""),
sPlotVerbosity_("")
{
}
LauAbsFitModel::~LauAbsFitModel()
{
delete inputFitData_; inputFitData_ = 0;
delete genNtuple_; genNtuple_ = 0;
delete sPlotNtuple_; sPlotNtuple_ = 0;
// Remove the components created to apply constraints to fit parameters
for (std::vector<LauAbsRValue*>::iterator iter = conVars_.begin(); iter != conVars_.end(); ++iter){
if ( !(*iter)->isLValue() ){
delete (*iter);
(*iter) = 0;
}
}
}
void LauAbsFitModel::run(const TString& applicationCode, const TString& dataFileName, const TString& dataTreeName,
const TString& histFileName, const TString& tableFileName)
{
// Choose whether to generate or fit events in the Dalitz plot.
// To generate events choose applicationCode = "gen"; to fit events choose
// applicationCode = "fit".
TString runCode(applicationCode);
runCode.ToLower();
TString histFileNameCopy(histFileName);
TString tableFileNameCopy(tableFileName);
TString dataFileNameCopy(dataFileName);
TString dataTreeNameCopy(dataTreeName);
// Initialise the fit par vectors. Each class that inherits from this one
// must implement this sensibly for all vectors specified in clearFitParVectors,
// i.e. specify parameter names, initial, min, max and fixed values
this->initialise();
// Add any variables to be Gaussian constrained to a list
this->addConParameters();
if (dataFileNameCopy == "") {dataFileNameCopy = "data.root";}
if (dataTreeNameCopy == "") {dataTreeNameCopy = "genResults";}
if (runCode.Contains("gen")) {
if (histFileNameCopy == "") {histFileNameCopy = "parInfo.root";}
if (tableFileNameCopy == "") {tableFileNameCopy = "genResults";}
this->setGenValues();
this->generate(dataFileNameCopy, dataTreeNameCopy, histFileNameCopy, tableFileNameCopy);
} else if (runCode.Contains("fit")) {
if (histFileNameCopy == "") {histFileNameCopy = "parInfo.root";}
if (tableFileNameCopy == "") {tableFileNameCopy = "fitResults";}
this->fit(dataFileNameCopy, dataTreeNameCopy, histFileNameCopy, tableFileNameCopy);
} else if (runCode.Contains("plot")) {
this->savePDFPlots("plot");
} else if (runCode.Contains("weight")) {
this->weightEvents(dataFileNameCopy, dataTreeNameCopy);
}
}
void LauAbsFitModel::doSFit( const TString& sWeightBranchName, Double_t scaleFactor )
{
if ( sWeightBranchName == "" ) {
std::cerr << "WARNING in LauAbsFitModel::doSFit : sWeight branch name is an empty string, not setting up sFit." << std::endl;
return;
}
doSFit_ = kTRUE;
sWeightBranchName_ = sWeightBranchName;
sWeightScaleFactor_ = scaleFactor;
}
void LauAbsFitModel::setBkgndClassNames( const std::vector<TString>& names )
{
if ( !bkgndClassNames_.empty() ) {
std::cerr << "WARNING in LauAbsFitModel::setBkgndClassNames : Names already stored, not changing them." << std::endl;
return;
}
UInt_t nBkgnds = names.size();
for ( UInt_t i(0); i < nBkgnds; ++i ) {
bkgndClassNames_.insert( std::make_pair( i, names[i] ) );
}
this->setupBkgndVectors();
}
Bool_t LauAbsFitModel::validBkgndClass( const TString& className ) const
{
if ( bkgndClassNames_.empty() ) {
return kFALSE;
}
Bool_t found(kFALSE);
for ( LauBkgndClassMap::const_iterator iter = bkgndClassNames_.begin(); iter != bkgndClassNames_.end(); ++iter ) {
if ( iter->second == className ) {
found = kTRUE;
break;
}
}
return found;
}
UInt_t LauAbsFitModel::bkgndClassID( const TString& className ) const
{
if ( ! this->validBkgndClass( className ) ) {
std::cerr << "ERROR in LauAbsFitModel::bkgndClassID : Request for ID for invalid background class \"" << className << "\"." << std::endl;
return (bkgndClassNames_.size() + 1);
}
UInt_t bgID(0);
for ( LauBkgndClassMap::const_iterator iter = bkgndClassNames_.begin(); iter != bkgndClassNames_.end(); ++iter ) {
if ( iter->second == className ) {
bgID = iter->first;
break;
}
}
return bgID;
}
const TString& LauAbsFitModel::bkgndClassName( UInt_t classID ) const
{
LauBkgndClassMap::const_iterator iter = bkgndClassNames_.find( classID );
if ( iter == bkgndClassNames_.end() ) {
std::cerr << "ERROR in LauAbsFitModel::bkgndClassName : Request for name of invalid background class ID " << classID << "." << std::endl;
return nullString_;
}
return iter->second;
}
void LauAbsFitModel::clearFitParVectors()
{
std::cout << "INFO in LauAbsFitModel::clearFitParVectors : Clearing fit variable vectors" << std::endl;
// Remove the components created to apply constraints to fit parameters
for (std::vector<LauAbsRValue*>::iterator iter = conVars_.begin(); iter != conVars_.end(); ++iter){
if ( !(*iter)->isLValue() ){
delete (*iter);
(*iter) = 0;
}
}
conVars_.clear();
fitVars_.clear();
}
void LauAbsFitModel::clearExtraVarVectors()
{
std::cout << "INFO in LauAbsFitModel::clearExtraVarVectors : Clearing extra ntuple variable vectors" << std::endl;
extraVars_.clear();
}
void LauAbsFitModel::setGenValues()
{
// makes sure each parameter holds its genValue as its current value
for (LauParameterPList::iterator iter = fitVars_.begin(); iter != fitVars_.end(); ++iter) {
(*iter)->value((*iter)->genValue());
}
this->propagateParUpdates();
}
void LauAbsFitModel::writeSPlotData(const TString& fileName, const TString& treeName, Bool_t storeDPEfficiency, const TString& verbosity)
{
if (this->writeSPlotData()) {
std::cerr << "ERROR in LauAbsFitModel::writeSPlotData : Already have an sPlot ntuple setup, not doing it again." << std::endl;
return;
}
writeSPlotData_ = kTRUE;
sPlotFileName_ = fileName;
sPlotTreeName_ = treeName;
sPlotVerbosity_ = verbosity;
storeDPEff_ = storeDPEfficiency;
}
// TODO : histFileName isn't used here at the moment but could be used for
// storing the values of the parameters used in the generation.
// These could then be read and used for setting the "true" values
// in a subsequent fit.
void LauAbsFitModel::generate(const TString& dataFileName, const TString& dataTreeName, const TString& /*histFileName*/, const TString& tableFileNameBase)
{
// Create the ntuple for storing the results
std::cout << "INFO in LauAbsFitModel::generate : Creating generation ntuple." << std::endl;
if (genNtuple_ != 0) {delete genNtuple_; genNtuple_ = 0;}
genNtuple_ = new LauGenNtuple(dataFileName,dataTreeName);
// add branches for storing the experiment number and the number of
// the event within the current experiment
this->addGenNtupleIntegerBranch("iExpt");
this->addGenNtupleIntegerBranch("iEvtWithinExpt");
this->setupGenNtupleBranches();
// Start the cumulative timer
cumulTimer_.Start();
const UInt_t firstExp = this->firstExpt();
const UInt_t nExp = this->nExpt();
Bool_t genOK(kTRUE);
do {
// Loop over the number of experiments
for (UInt_t iExp = firstExp; iExp < (firstExp+nExp); ++iExp) {
// Start the timer to see how long each experiment takes to generate
timer_.Start();
// Store the experiment number in the ntuple
this->setGenNtupleIntegerBranchValue("iExpt",iExp);
// Do the generation for this experiment
std::cout << "INFO in LauAbsFitModel::generate : Generating experiment number " << iExp << std::endl;
genOK = this->genExpt();
// Stop the timer and see how long the program took so far
timer_.Stop();
timer_.Print();
if (!genOK) {
// delete and recreate an empty tree
genNtuple_->deleteAndRecreateTree();
// then break out of the experiment loop
std::cerr << "WARNING in LauAbsFitModel::generate : Problem in toy MC generation. Starting again with updated parameters..." << std::endl;
break;
}
if (this->writeLatexTable()) {
TString tableFileName(tableFileNameBase);
tableFileName += "_";
tableFileName += iExp;
tableFileName += ".tex";
this->writeOutTable(tableFileName);
}
} // Loop over number of experiments
} while (!genOK);
// Print out total timing info.
cumulTimer_.Stop();
std::cout << "INFO in LauAbsFitModel::generate : Finished generating all experiments." << std::endl;
std::cout << "INFO in LauAbsFitModel::generate : Cumulative timing:" << std::endl;
cumulTimer_.Print();
// Build the event index
std::cout << "INFO in LauAbsFitModel::generate : Building experiment:event index." << std::endl;
// TODO - can test this return value?
//Int_t nIndexEntries =
genNtuple_->buildIndex("iExpt","iEvtWithinExpt");
// Write out toy MC ntuple
std::cout << "INFO in LauAbsFitModel::generate : Writing data to file " << dataFileName << "." << std::endl;
genNtuple_->writeOutGenResults();
}
void LauAbsFitModel::addGenNtupleIntegerBranch(const TString& name)
{
genNtuple_->addIntegerBranch(name);
}
void LauAbsFitModel::addGenNtupleDoubleBranch(const TString& name)
{
genNtuple_->addDoubleBranch(name);
}
void LauAbsFitModel::setGenNtupleIntegerBranchValue(const TString& name, Int_t value)
{
genNtuple_->setIntegerBranchValue(name,value);
}
void LauAbsFitModel::setGenNtupleDoubleBranchValue(const TString& name, Double_t value)
{
genNtuple_->setDoubleBranchValue(name,value);
}
Int_t LauAbsFitModel::getGenNtupleIntegerBranchValue(const TString& name) const
{
return genNtuple_->getIntegerBranchValue(name);
}
Double_t LauAbsFitModel::getGenNtupleDoubleBranchValue(const TString& name) const
{
return genNtuple_->getDoubleBranchValue(name);
}
void LauAbsFitModel::fillGenNtupleBranches()
{
genNtuple_->fillBranches();
}
void LauAbsFitModel::addSPlotNtupleIntegerBranch(const TString& name)
{
sPlotNtuple_->addIntegerBranch(name);
}
void LauAbsFitModel::addSPlotNtupleDoubleBranch(const TString& name)
{
sPlotNtuple_->addDoubleBranch(name);
}
void LauAbsFitModel::setSPlotNtupleIntegerBranchValue(const TString& name, Int_t value)
{
sPlotNtuple_->setIntegerBranchValue(name,value);
}
void LauAbsFitModel::setSPlotNtupleDoubleBranchValue(const TString& name, Double_t value)
{
sPlotNtuple_->setDoubleBranchValue(name,value);
}
void LauAbsFitModel::fillSPlotNtupleBranches()
{
sPlotNtuple_->fillBranches();
}
void LauAbsFitModel::fit(const TString& dataFileName, const TString& dataTreeName, const TString& histFileName, const TString& tableFileNameBase)
{
// Routine to perform the total fit.
const UInt_t firstExp = this->firstExpt();
const UInt_t nExp = this->nExpt();
std::cout << "INFO in LauAbsFitModel::fit : First experiment = " << firstExp << std::endl;
std::cout << "INFO in LauAbsFitModel::fit : Number of experiments = " << nExp << std::endl;
// Start the cumulative timer
cumulTimer_.Start();
this->resetFitCounters();
// Create and setup the fit results ntuple
this->setupResultsOutputs( histFileName, tableFileNameBase );
// Create and setup the sPlot ntuple
if (this->writeSPlotData()) {
std::cout << "INFO in LauAbsFitModel::fit : Creating sPlot ntuple." << std::endl;
if (sPlotNtuple_ != 0) {delete sPlotNtuple_; sPlotNtuple_ = 0;}
sPlotNtuple_ = new LauGenNtuple(sPlotFileName_,sPlotTreeName_);
this->setupSPlotNtupleBranches();
}
// This reads in the given dataFile and creates an input
// fit data tree that stores them for all events and experiments.
Bool_t dataOK = this->verifyFitData(dataFileName,dataTreeName);
if (!dataOK) {
std::cerr << "ERROR in LauAbsFitModel::fit : Problem caching the fit data." << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
// Loop over the number of experiments
for (UInt_t iExp = firstExp; iExp < (firstExp+nExp); ++iExp) {
// Start the timer to see how long each fit takes
timer_.Start();
this->setCurrentExperiment( iExp );
UInt_t nEvents = this->readExperimentData();
if (nEvents < 1) {
std::cerr << "WARNING in LauAbsFitModel::fit : Zero events in experiment " << iExp << ", skipping..." << std::endl;
timer_.Stop();
continue;
}
// Now the sub-classes must implement whatever they need to do
// to cache any more input fit data they need in order to
// calculate the likelihoods during the fit.
// They need to use the inputFitData_ tree as input. For example,
// inputFitData_ contains m13Sq and m23Sq. The appropriate fit model
// then caches the resonance dynamics for the signal model, as
// well as the background likelihood values in the Dalitz plot
this->cacheInputFitVars();
if ( this->doSFit() ) {
this->cacheInputSWeights();
}
// Do the fit for this experiment
this->fitExpt();
// Write the results into the ntuple
this->finaliseFitResults( outputTableName_ );
// Stop the timer and see how long the program took so far
timer_.Stop();
timer_.Print();
// Store the per-event likelihood values
if ( this->writeSPlotData() ) {
this->storePerEvtLlhds();
}
// Create a toy MC sample using the fitted parameters so that
// the user can compare the fit to the data.
if (compareFitData_ == kTRUE && this->statusCode() == 3) {
this->createFitToyMC(fitToyMCFileName_, fitToyMCTableName_);
}
} // Loop over number of experiments
// Print out total timing info.
cumulTimer_.Stop();
std::cout << "INFO in LauAbsFitModel::fit : Cumulative timing:" << std::endl;
cumulTimer_.Print();
// Print out stats on OK fits.
const UInt_t nOKFits = this->numberOKFits();
const UInt_t nBadFits = this->numberBadFits();
std::cout << "INFO in LauAbsFitModel::fit : Number of OK Fits = " << nOKFits << std::endl;
std::cout << "INFO in LauAbsFitModel::fit : Number of Failed Fits = " << nBadFits << std::endl;
Double_t fitEff(0.0);
if (nExp != 0) {fitEff = nOKFits/(1.0*nExp);}
std::cout << "INFO in LauAbsFitModel::fit : Fit efficiency = " << fitEff*100.0 << "%." << std::endl;
// Write out any fit results (ntuples etc...).
this->writeOutAllFitResults();
if ( this->writeSPlotData() ) {
this->calculateSPlotData();
}
}
void LauAbsFitModel::setupResultsOutputs( const TString& histFileName, const TString& tableFileName )
{
this->LauSimFitSlave::setupResultsOutputs( histFileName, tableFileName );
outputTableName_ = tableFileName;
}
Bool_t LauAbsFitModel::verifyFitData(const TString& dataFileName, const TString& dataTreeName)
{
// From the input data stream, store the variables into the
// internal tree inputFitData_ that can be used by the sub-classes
// in calculating their likelihood functions for the fit
delete inputFitData_;
inputFitData_ = new LauFitDataTree(dataFileName,dataTreeName);
Bool_t dataOK = inputFitData_->findBranches();
if (!dataOK) {
delete inputFitData_; inputFitData_ = 0;
}
return dataOK;
}
void LauAbsFitModel::cacheInputSWeights()
{
Bool_t hasBranch = inputFitData_->haveBranch( sWeightBranchName_ );
if ( ! hasBranch ) {
std::cerr << "ERROR in LauAbsFitModel::cacheInputSWeights : Input data does not contain variable \"" << sWeightBranchName_ << "\".\n";
std::cerr << " : Turning off sFit!" << std::endl;
doSFit_ = kFALSE;
return;
}
UInt_t nEvents = this->eventsPerExpt();
sWeights_.clear();
sWeights_.reserve( nEvents );
for (UInt_t iEvt = 0; iEvt < nEvents; ++iEvt) {
const LauFitData& dataValues = inputFitData_->getData(iEvt);
LauFitData::const_iterator iter = dataValues.find( sWeightBranchName_ );
sWeights_.push_back( iter->second * sWeightScaleFactor_ );
}
}
void LauAbsFitModel::fitExpt()
{
// Routine to perform the actual fit for the given experiment
// Update initial fit parameters if required (e.g. if using random numbers).
this->checkInitFitParams();
// Initialise the fitter
LauFitter::fitter()->useAsymmFitErrors( this->useAsymmFitErrors() );
LauFitter::fitter()->twoStageFit( this->twoStageFit() );
LauFitter::fitter()->initialise( this, fitVars_ );
this->startNewFit( LauFitter::fitter()->nParameters(), LauFitter::fitter()->nFreeParameters() );
// Now ready for minimisation step
std::cout << "\nINFO in LauAbsFitModel::fitExpt : Start minimisation...\n";
LauAbsFitter::FitStatus fitResult = LauFitter::fitter()->minimise();
// If we're doing a two stage fit we can now release (i.e. float)
// the 2nd stage parameters and re-fit
if (this->twoStageFit()) {
if ( fitResult.status != 3 ) {
std::cerr << "WARNING in LauAbsFitModel::fitExpt : Not running second stage fit since first stage failed." << std::endl;
LauFitter::fitter()->releaseSecondStageParameters();
} else {
LauFitter::fitter()->releaseSecondStageParameters();
this->startNewFit( LauFitter::fitter()->nParameters(), LauFitter::fitter()->nFreeParameters() );
fitResult = LauFitter::fitter()->minimise();
}
}
const TMatrixD& covMat = LauFitter::fitter()->covarianceMatrix();
this->storeFitStatus( fitResult, covMat );
// Store the final fit results and errors into protected internal vectors that
// all sub-classes can use within their own finaliseFitResults implementation
// used below (e.g. putting them into an ntuple in a root file)
LauFitter::fitter()->updateParameters();
}
void LauAbsFitModel::calculateSPlotData()
{
if (sPlotNtuple_ != 0) {
sPlotNtuple_->addFriendTree(inputFitData_->fileName(), inputFitData_->treeName());
sPlotNtuple_->writeOutGenResults();
LauSPlot splot(sPlotNtuple_->fileName(), sPlotNtuple_->treeName(), this->firstExpt(), this->nExpt(),
this->variableNames(), this->freeSpeciesNames(), this->fixdSpeciesNames(), this->twodimPDFs(),
this->splitSignal(), this->scfDPSmear());
splot.runCalculations(sPlotVerbosity_);
splot.writeOutResults();
}
}
void LauAbsFitModel::compareFitData(UInt_t toyMCScale, const TString& mcFileName, const TString& tableFileName, Bool_t poissonSmearing)
{
compareFitData_ = kTRUE;
fitToyMCScale_ = toyMCScale;
fitToyMCFileName_ = mcFileName;
fitToyMCTableName_ = tableFileName;
fitToyMCPoissonSmear_ = poissonSmearing;
}
void LauAbsFitModel::createFitToyMC(const TString& mcFileName, const TString& tableFileName)
{
// Create a toy MC sample so that the user can compare the fitted
// result with the data.
// Generate more toy MC to reduce statistical fluctuations:
// - use the rescaling value fitToyMCScale_
// Store the info on the number of experiments, first expt and current expt
const UInt_t oldNExpt(this->nExpt());
const UInt_t oldFirstExpt(this->firstExpt());
const UInt_t oldIExpt(this->iExpt());
// Turn off Poisson smearing if required
const Bool_t poissonSmearing(this->doPoissonSmearing());
this->doPoissonSmearing(fitToyMCPoissonSmear_);
// Turn off embedding, since we need toy MC, not reco'ed events
const Bool_t enableEmbeddingOrig(this->enableEmbedding());
this->enableEmbedding(kFALSE);
// Need to make sure that the generation of the DP co-ordinates is
// switched on if any of our PDFs depend on it
const Bool_t origUseDP = this->useDP();
if ( !origUseDP && this->pdfsDependOnDP() ) {
this->useDP( kTRUE );
this->initialiseDPModels();
}
// Construct a unique filename for this experiment
TString exptString("_expt");
exptString += oldIExpt;
TString fileName( mcFileName );
fileName.Insert( fileName.Last('.'), exptString );
// Generate the toy MC
std::cout << "INFO in LauAbsFitModel::createFitToyMC : Generating toy MC in " << fileName << " to compare fit with data..." << std::endl;
std::cout << " : Number of experiments to generate = " << fitToyMCScale_ << "." << std::endl;
std::cout << " : This is to allow the toy MC to be made with reduced statistical fluctuations." << std::endl;
// Set the genValue of each parameter to its current (fitted) value
// but first store the original genValues for restoring later
std::vector<Double_t> origGenValues; origGenValues.reserve(this->nTotParams());
Bool_t blind(kFALSE);
for (LauParameterPList::iterator iter = fitVars_.begin(); iter != fitVars_.end(); ++iter) {
origGenValues.push_back((*iter)->genValue());
(*iter)->genValue((*iter)->unblindValue());
if ( (*iter)->blind() ) {
blind = kTRUE;
}
}
if ( blind ) {
std::cerr << "WARNING in LauAbsFitModel::createFitToyMC : One or more parameters are blind but the toy will be created using the unblind values - use with caution!!" << std::endl;
}
// If we're asked to generate more than 100 experiments then split it
// up into multiple files, since otherwise we can run into memory issues
// when building the index
// TODO - this obviously depends on the number of events per experiment as well, so should do this properly
UInt_t totalExpts = fitToyMCScale_;
if ( totalExpts > 100 ) {
UInt_t nFiles = totalExpts/100;
if ( totalExpts%100 ) {
nFiles += 1;
}
TString fileNameBase {fileName};
for ( UInt_t iFile(0); iFile < nFiles; ++iFile ) {
UInt_t firstExp( iFile*100 );
// Set number of experiments and first experiment to generate
UInt_t nExp = ((firstExp + 100)>totalExpts) ? totalExpts-firstExp : 100;
this->setNExpts(nExp, firstExp);
// Create a unique filename and generate the events
fileName = fileNameBase;
TString extraname = "_file";
extraname += iFile;
fileName.Insert( fileName.Last('.'), extraname );
this->generate(fileName, "genResults", "dummy.root", tableFileName);
}
} else {
// Set number of experiments to new value
this->setNExpts(fitToyMCScale_, 0);
// Generate the toy
this->generate(fileName, "genResults", "dummy.root", tableFileName);
}
// Reset number of experiments to original value
this->setNExpts(oldNExpt, oldFirstExpt);
this->setCurrentExperiment(oldIExpt);
// Restore the Poisson smearing to its former value
this->doPoissonSmearing(poissonSmearing);
// Restore the embedding status
this->enableEmbedding(enableEmbeddingOrig);
// Restore "useDP" to its former status
this->useDP( origUseDP );
// Restore the original genValue to each parameter
for (UInt_t i(0); i<this->nTotParams(); ++i) {
fitVars_[i]->genValue(origGenValues[i]);
}
std::cout << "INFO in LauAbsFitModel::createFitToyMC : Finished creating toy MC." << std::endl;
}
Double_t LauAbsFitModel::getTotNegLogLikelihood()
{
// Calculate the total negative log-likelihood over all events.
// This function assumes that the fit parameters and data tree have
// already been set-up correctly.
// Loop over the data points to calculate the log likelihood
Double_t logLike = this->getLogLikelihood( 0, this->eventsPerExpt() );
// Include the Poisson term in the extended likelihood if required
if (this->doEMLFit()) {
logLike -= this->getEventSum();
}
// Calculate any penalty terms from Gaussian constrained variables
if ( ! conVars_.empty() ){
logLike -= this->getLogLikelihoodPenalty();
}
Double_t totNegLogLike = -logLike;
return totNegLogLike;
}
Double_t LauAbsFitModel::getLogLikelihoodPenalty()
{
Double_t penalty(0.0);
for ( LauAbsRValuePList::const_iterator iter = conVars_.begin(); iter != conVars_.end(); ++iter ) {
Double_t val = (*iter)->unblindValue();
Double_t mean = (*iter)->constraintMean();
Double_t width = (*iter)->constraintWidth();
Double_t term = ( val - mean )*( val - mean );
penalty += term/( 2*width*width );
}
return penalty;
}
Double_t LauAbsFitModel::getLogLikelihood( UInt_t iStart, UInt_t iEnd )
{
// Calculate the log-likelihood over the given range of events.
// This function assumes that the fit parameters and data tree have
// already been set-up correctly.
// Loop over the data points to calculate the log likelihood
Double_t logLike(0.0);
const Double_t worstLL = this->worstLogLike();
// Loop over the number of events in this experiment
Bool_t ok(kTRUE);
for (UInt_t iEvt = iStart; iEvt < iEnd; ++iEvt) {
Double_t likelihood = this->getTotEvtLikelihood(iEvt);
if (likelihood > std::numeric_limits<Double_t>::min()) { // Is the likelihood zero?
Double_t evtLogLike = TMath::Log(likelihood);
if ( doSFit_ ) {
evtLogLike *= sWeights_[iEvt];
}
logLike += evtLogLike;
} else {
ok = kFALSE;
std::cerr << "WARNING in LauAbsFitModel::getLogLikelihood : Strange likelihood value for event " << iEvt << ": " << likelihood << "\n";
this->printEventInfo(iEvt);
this->printVarsInfo(); //Write the values of the floated variables for which the likelihood is zero
break;
}
}
if (!ok) {
std::cerr << " : Returning worst NLL found so far to force MINUIT out of this region." << std::endl;
logLike = worstLL;
} else if (logLike < worstLL) {
this->worstLogLike( logLike );
}
return logLike;
}
void LauAbsFitModel::setParsFromMinuit(Double_t* par, Int_t npar)
{
// This function sets the internal parameters based on the values
// that Minuit is using when trying to minimise the total likelihood function.
// MINOS reports different numbers of free parameters depending on the
// situation, so disable this check
if ( ! this->withinAsymErrorCalc() ) {
const UInt_t nFreePars = this->nFreeParams();
if (static_cast<UInt_t>(npar) != nFreePars) {
std::cerr << "ERROR in LauAbsFitModel::setParsFromMinuit : Unexpected number of free parameters: " << npar << ".\n";
std::cerr << " Expected: " << nFreePars << ".\n" << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
}
// Despite npar being the number of free parameters,
// the par array actually contains all the parameters,
// free and fixed...
// Update all the floating ones with their new values
// Also check if we have any parameters on which the DP integrals depend
// and whether they have changed since the last iteration
Bool_t recalcNorm(kFALSE);
const LauParameterPSet::const_iterator resVarsEnd = resVars_.end();
for (UInt_t i(0); i<this->nTotParams(); ++i) {
if (!fitVars_[i]->fixed()) {
if ( resVars_.find( fitVars_[i] ) != resVarsEnd ) {
if ( fitVars_[i]->value() != par[i] ) {
recalcNorm = kTRUE;
}
}
fitVars_[i]->value(par[i]);
}
}
// If so, then recalculate the normalisation
if (recalcNorm) {
this->recalculateNormalisation();
}
this->propagateParUpdates();
}
UInt_t LauAbsFitModel::addFitParameters(LauPdfList& pdfList)
{
UInt_t nParsAdded(0);
for (LauPdfList::iterator pdf_iter = pdfList.begin(); pdf_iter != pdfList.end(); ++pdf_iter) {
LauAbsPdf* pdf = (*pdf_iter);
if ( pdf->isDPDependent() ) {
this->pdfsDependOnDP( kTRUE );
}
LauAbsRValuePList& pars = pdf->getParameters();
for (LauAbsRValuePList::iterator pars_iter = pars.begin(); pars_iter != pars.end(); ++pars_iter) {
LauParameterPList params = (*pars_iter)->getPars();
for (LauParameterPList::iterator params_iter = params.begin(); params_iter != params.end(); ++params_iter) {
if ( !(*params_iter)->clone() && ( !(*params_iter)->fixed() || ( this->twoStageFit() && (*params_iter)->secondStage() ) ) ) {
fitVars_.push_back(*params_iter);
++nParsAdded;
}
}
}
}
return nParsAdded;
}
void LauAbsFitModel::addConParameters()
{
for ( LauParameterPList::const_iterator iter = fitVars_.begin(); iter != fitVars_.end(); ++iter ) {
if ( (*iter)->gaussConstraint() ) {
conVars_.push_back( *iter );
std::cout << "INFO in LauAbsFitModel::addConParameters : Added Gaussian constraint to parameter "<< (*iter)->name() << std::endl;
}
}
// Add penalties from the constraints to fit parameters
const std::vector<StoreConstraints>& storeCon = this->constraintsStore();
for ( std::vector<StoreConstraints>::const_iterator iter = storeCon.begin(); iter != storeCon.end(); ++iter ) {
const std::vector<TString>& names = (*iter).conPars_;
std::vector<LauParameter*> params;
for ( std::vector<TString>::const_iterator iternames = names.begin(); iternames != names.end(); ++iternames ) {
for ( LauParameterPList::const_iterator iterfit = fitVars_.begin(); iterfit != fitVars_.end(); ++iterfit ) {
if ( (*iternames) == (*iterfit)->name() ){
params.push_back(*iterfit);
}
}
}
// If the parameters are not found, skip it
if ( params.size() != (*iter).conPars_.size() ) {
std::cerr << "WARNING in LauAbsFitModel::addConParameters: Could not find parameters to constrain in the formula... skipping" << std::endl;
continue;
}
LauFormulaPar* formPar = new LauFormulaPar( (*iter).formula_, (*iter).formula_, params );
formPar->addGaussianConstraint( (*iter).mean_, (*iter).width_ );
conVars_.push_back(formPar);
std::cout << "INFO in LauAbsFitModel::addConParameters : Added Gaussian constraint to formula\n";
std::cout << " : Formula: " << (*iter).formula_ << std::endl;
for ( std::vector<LauParameter*>::iterator iterparam = params.begin(); iterparam != params.end(); ++iterparam ) {
std::cout << " : Parameter: " << (*iterparam)->name() << std::endl;
}
}
}
void LauAbsFitModel::updateFitParameters(LauPdfList& pdfList)
{
for (LauPdfList::iterator pdf_iter = pdfList.begin(); pdf_iter != pdfList.end(); ++pdf_iter) {
(*pdf_iter)->updatePulls();
}
}
void LauAbsFitModel::printFitParameters(const LauPdfList& pdfList, std::ostream& fout) const
{
LauPrint print;
for (LauPdfList::const_iterator pdf_iter = pdfList.begin(); pdf_iter != pdfList.end(); ++pdf_iter) {
const LauAbsRValuePList& pars = (*pdf_iter)->getParameters();
for (LauAbsRValuePList::const_iterator pars_iter = pars.begin(); pars_iter != pars.end(); ++pars_iter) {
LauParameterPList params = (*pars_iter)->getPars();
for (LauParameterPList::iterator params_iter = params.begin(); params_iter != params.end(); ++params_iter) {
if (!(*params_iter)->clone()) {
fout << (*params_iter)->name() << " & $";
print.printFormat(fout, (*params_iter)->value());
if ((*params_iter)->fixed() == kTRUE) {
fout << "$ (fixed) \\\\";
} else {
fout << " \\pm ";
print.printFormat(fout, (*params_iter)->error());
fout << "$ \\\\" << std::endl;
}
}
}
}
}
}
void LauAbsFitModel::cacheInfo(LauPdfList& pdfList, const LauFitDataTree& theData)
{
for (LauPdfList::iterator pdf_iter = pdfList.begin(); pdf_iter != pdfList.end(); ++pdf_iter) {
(*pdf_iter)->cacheInfo(theData);
}
}
Double_t LauAbsFitModel::prodPdfValue(LauPdfList& pdfList, UInt_t iEvt)
{
Double_t pdfVal = 1.0;
for (LauPdfList::iterator pdf_iter = pdfList.begin(); pdf_iter != pdfList.end(); ++pdf_iter) {
(*pdf_iter)->calcLikelihoodInfo(iEvt);
pdfVal *= (*pdf_iter)->getLikelihood();
}
return pdfVal;
}
void LauAbsFitModel::printEventInfo(UInt_t iEvt) const
{
const LauFitData& data = inputFitData_->getData(iEvt);
std::cerr << " : Input data values for this event:" << std::endl;
for (LauFitData::const_iterator iter = data.begin(); iter != data.end(); ++iter) {
std::cerr << " " << iter->first << " = " << iter->second << std::endl;
}
}
void LauAbsFitModel::printVarsInfo() const
{
std::cerr << " : Current values of fit parameters:" << std::endl;
for (UInt_t i(0); i<this->nTotParams(); ++i) {
std::cerr << " " << (fitVars_[i]->name()).Data() << " = " << fitVars_[i]->value() << std::endl;
}
}
void LauAbsFitModel::prepareInitialParArray( TObjArray& array )
{
// Update initial fit parameters if required (e.g. if using random numbers).
this->checkInitFitParams();
// Store the total number of parameters and the number of free parameters
UInt_t nPars = fitVars_.size();
UInt_t nFreePars = 0;
// Send the fit parameters
for ( LauParameterPList::iterator iter = fitVars_.begin(); iter != fitVars_.end(); ++iter ) {
if ( ! (*iter)->fixed() ) {
++nFreePars;
}
array.Add( *iter );
}
this->startNewFit( nPars, nFreePars );
}
void LauAbsFitModel::finaliseExperiment( const LauAbsFitter::FitStatus& fitStat, const TObjArray* parsFromMaster, const TMatrixD* covMat, TObjArray& parsToMaster )
{
// Copy the fit status information
this->storeFitStatus( fitStat, *covMat );
// Now process the parameters
const UInt_t nPars = this->nTotParams();
UInt_t nParsFromMaster = parsFromMaster->GetEntries();
if ( nParsFromMaster != nPars ) {
std::cerr << "ERROR in LauAbsFitModel::finaliseExperiment : Unexpected number of parameters received from master" << std::endl;
std::cerr << " : Received " << nParsFromMaster << " when expecting " << nPars << std::endl;
gSystem->Exit( EXIT_FAILURE );
}
for ( UInt_t iPar(0); iPar < nParsFromMaster; ++iPar ) {
LauParameter* parameter = dynamic_cast<LauParameter*>( (*parsFromMaster)[iPar] );
if ( ! parameter ) {
std::cerr << "ERROR in LauAbsFitModel::finaliseExperiment : Error reading parameter from master" << std::endl;
gSystem->Exit( EXIT_FAILURE );
}
if ( parameter->name() != fitVars_[iPar]->name() ) {
std::cerr << "ERROR in LauAbsFitModel::finaliseExperiment : Error reading parameter from master" << std::endl;
gSystem->Exit( EXIT_FAILURE );
}
*(fitVars_[iPar]) = *parameter;
}
// Write the results into the ntuple
this->finaliseFitResults( outputTableName_ );
// Store the per-event likelihood values
if ( this->writeSPlotData() ) {
this->storePerEvtLlhds();
}
// Create a toy MC sample using the fitted parameters so that
// the user can compare the fit to the data.
if (compareFitData_ == kTRUE && fitStat.status == 3) {
this->createFitToyMC(fitToyMCFileName_, fitToyMCTableName_);
}
// Send the finalised fit parameters
for ( LauParameterPList::iterator iter = fitVars_.begin(); iter != fitVars_.end(); ++iter ) {
parsToMaster.Add( *iter );
}
}
UInt_t LauAbsFitModel::readExperimentData()
{
// retrieve the data and find out how many events have been read
const UInt_t exptIndex = this->iExpt();
inputFitData_->readExperimentData( exptIndex );
const UInt_t nEvent = inputFitData_->nEvents();
this->eventsPerExpt( nEvent );
return nEvent;
}
-void LauAbsFitModel::setParametersFromFile(TString fileName, TString treeName, Bool_t fix)
+void LauAbsFitModel::setParametersFromFile(const TString& fileName, const TString& treeName, const Bool_t fix)
{
- this->fixParamFileName_ = fileName;
- this->fixParamTreeName_ = treeName;
- this->fixParams_ = fix;
+ fixParamFileName_ = fileName;
+ fixParamTreeName_ = treeName;
+ fixParams_ = fix;
}
-void LauAbsFitModel::setParametersFromMap(std::map<std::string, Double_t> parameters, Bool_t fix)
+void LauAbsFitModel::setParametersFromMap(const std::map<TString, Double_t>& parameters, const Bool_t fix)
{
- this->fixParamMap_ = parameters;
- this->fixParams_ = fix;
+ fixParamMap_ = parameters;
+ fixParams_ = fix;
}
-void LauAbsFitModel::setNamedParameters(TString fileName, TString treeName, std::set<std::string> parameters, Bool_t fix)
+void LauAbsFitModel::setNamedParameters(const TString& fileName, const TString& treeName, const std::set<TString>& parameters, const Bool_t fix)
{
- this->fixParamFileName_ = fileName;
- this->fixParamTreeName_ = treeName;
- this->fixParamNames_ = parameters;
- this->fixParams_ = fix;
+ fixParamFileName_ = fileName;
+ fixParamTreeName_ = treeName;
+ fixParamNames_ = parameters;
+ fixParams_ = fix;
}
-void LauAbsFitModel::setParametersFileFallback(TString fileName, TString treeName, std::map<std::string, Double_t> parameters, Bool_t fix)
+void LauAbsFitModel::setParametersFileFallback(const TString& fileName, const TString& treeName, const std::map<TString, Double_t>& parameters, const Bool_t fix)
{
- this->fixParamFileName_ = fileName;
- this->fixParamTreeName_ = treeName;
- this->fixParamMap_ = parameters;
- this->fixParams_ = fix;
+ fixParamFileName_ = fileName;
+ fixParamTreeName_ = treeName;
+ fixParamMap_ = parameters;
+ fixParams_ = fix;
}
diff --git a/src/LauCPFitModel.cc b/src/LauCPFitModel.cc
index b4db278..15c01ca 100644
--- a/src/LauCPFitModel.cc
+++ b/src/LauCPFitModel.cc
@@ -1,3445 +1,3440 @@
/*
Copyright 2004 University of Warwick
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
Laura++ package authors:
John Back
Paul Harrison
Thomas Latham
*/
/*! \file LauCPFitModel.cc
\brief File containing implementation of LauCPFitModel class.
*/
#include <iostream>
#include <iomanip>
#include <fstream>
#include <vector>
#include "TVirtualFitter.h"
#include "TSystem.h"
#include "TMinuit.h"
#include "TRandom.h"
#include "TFile.h"
+#include "TTree.h"
+#include "TBranch.h"
+#include "TLeaf.h"
#include "TMath.h"
#include "TH2.h"
+#include "TGraph2D.h"
+#include "TGraph.h"
+#include "TStyle.h"
+#include "TCanvas.h"
#include "LauAbsBkgndDPModel.hh"
#include "LauAbsCoeffSet.hh"
#include "LauIsobarDynamics.hh"
#include "LauAbsPdf.hh"
#include "LauAsymmCalc.hh"
#include "LauComplex.hh"
#include "LauConstants.hh"
#include "LauCPFitModel.hh"
#include "LauDaughters.hh"
#include "LauEffModel.hh"
#include "LauEmbeddedData.hh"
#include "LauFitNtuple.hh"
#include "LauGenNtuple.hh"
#include "LauKinematics.hh"
#include "LauPrint.hh"
#include "LauRandom.hh"
#include "LauScfMap.hh"
-#include "TGraph2D.h"
-#include "TGraph.h"
-#include "TStyle.h"
-#include "TCanvas.h"
-#include "TGraph2D.h"
-#include "TGraph.h"
-#include "TStyle.h"
-#include "TCanvas.h"
ClassImp(LauCPFitModel)
LauCPFitModel::LauCPFitModel(LauIsobarDynamics* negModel, LauIsobarDynamics* posModel, Bool_t tagged, const TString& tagVarName) : LauAbsFitModel(),
negSigModel_(negModel), posSigModel_(posModel),
negKinematics_(negModel ? negModel->getKinematics() : 0),
posKinematics_(posModel ? posModel->getKinematics() : 0),
usingBkgnd_(kFALSE),
nSigComp_(0),
nSigDPPar_(0),
nExtraPdfPar_(0),
nNormPar_(0),
negMeanEff_("negMeanEff",0.0,0.0,1.0), posMeanEff_("posMeanEff",0.0,0.0,1.0),
negDPRate_("negDPRate",0.0,0.0,100.0), posDPRate_("posDPRate",0.0,0.0,100.0),
signalEvents_(0),
signalAsym_(0),
forceAsym_(kFALSE),
tagged_(tagged),
tagVarName_(tagVarName),
curEvtCharge_(0),
useSCF_(kFALSE),
useSCFHist_(kFALSE),
scfFrac_("scfFrac",0.0,0.0,1.0),
scfFracHist_(0),
scfMap_(0),
compareFitData_(kFALSE),
negParent_("B-"), posParent_("B+"),
negSignalTree_(0), posSignalTree_(0),
reuseSignal_(kFALSE),
useNegReweighting_(kFALSE), usePosReweighting_(kFALSE),
sigDPLike_(0.0),
scfDPLike_(0.0),
sigExtraLike_(0.0),
scfExtraLike_(0.0),
sigTotalLike_(0.0),
scfTotalLike_(0.0)
{
const LauDaughters* negDaug = negSigModel_->getDaughters();
if (negDaug != 0) {negParent_ = negDaug->getNameParent();}
const LauDaughters* posDaug = posSigModel_->getDaughters();
if (posDaug != 0) {posParent_ = posDaug->getNameParent();}
}
LauCPFitModel::~LauCPFitModel()
{
delete negSignalTree_;
delete posSignalTree_;
for (LauBkgndEmbDataList::iterator iter = negBkgndTree_.begin(); iter != negBkgndTree_.end(); ++iter) {
delete (*iter);
}
for (LauBkgndEmbDataList::iterator iter = posBkgndTree_.begin(); iter != posBkgndTree_.end(); ++iter) {
delete (*iter);
}
delete scfFracHist_;
}
void LauCPFitModel::setupBkgndVectors()
{
UInt_t nBkgnds = this->nBkgndClasses();
negBkgndDPModels_.resize( nBkgnds );
posBkgndDPModels_.resize( nBkgnds );
negBkgndPdfs_.resize( nBkgnds );
posBkgndPdfs_.resize( nBkgnds );
bkgndEvents_.resize( nBkgnds );
bkgndAsym_.resize( nBkgnds );
negBkgndTree_.resize( nBkgnds );
posBkgndTree_.resize( nBkgnds );
reuseBkgnd_.resize( nBkgnds );
bkgndDPLike_.resize( nBkgnds );
bkgndExtraLike_.resize( nBkgnds );
bkgndTotalLike_.resize( nBkgnds );
}
void LauCPFitModel::setNSigEvents(LauParameter* nSigEvents)
{
if ( nSigEvents == 0 ) {
std::cerr << "ERROR in LauCPFitModel::setNSigEvents : The LauParameter pointer is null." << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
if ( signalEvents_ != 0 ) {
std::cerr << "ERROR in LauCPFitModel::setNSigEvents : You are trying to overwrite the signal yield." << std::endl;
return;
}
if ( signalAsym_ != 0 ) {
std::cerr << "ERROR in LauCPFitModel::setNSigEvents : You are trying to overwrite the signal asymmetry." << std::endl;
return;
}
signalEvents_ = nSigEvents;
TString name = signalEvents_->name();
if ( ! name.Contains("signalEvents") && !( name.BeginsWith("signal") && name.EndsWith("Events") ) ) {
signalEvents_->name("signalEvents");
}
Double_t value = nSigEvents->value();
signalEvents_->range(-2.0*(TMath::Abs(value)+1.0), 2.0*(TMath::Abs(value)+1.0));
signalAsym_ = new LauParameter("signalAsym",0.0,-1.0,1.0,kTRUE);
}
void LauCPFitModel::setNSigEvents( LauParameter* nSigEvents, LauParameter* sigAsym, Bool_t forceAsym )
{
if ( nSigEvents == 0 ) {
std::cerr << "ERROR in LauCPFitModel::setNSigEvents : The event LauParameter pointer is null." << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
if ( sigAsym == 0 ) {
std::cerr << "ERROR in LauCPFitModel::setNSigEvents : The asym LauParameter pointer is null." << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
if ( signalEvents_ != 0 ) {
std::cerr << "ERROR in LauCPFitModel::setNSigEvents : You are trying to overwrite the signal yield." << std::endl;
return;
}
if ( signalAsym_ != 0 ) {
std::cerr << "ERROR in LauCPFitModel::setNSigEvents : You are trying to overwrite the signal asymmetry." << std::endl;
return;
}
signalEvents_ = nSigEvents;
signalEvents_->name("signalEvents");
Double_t value = nSigEvents->value();
signalEvents_->range(-2.0*(TMath::Abs(value)+1.0), 2.0*(TMath::Abs(value)+1.0));
signalAsym_ = sigAsym;
signalAsym_->name("signalAsym");
signalAsym_->range(-1.0,1.0);
forceAsym_ = forceAsym;
}
void LauCPFitModel::setNBkgndEvents( LauAbsRValue* nBkgndEvents )
{
if ( nBkgndEvents == 0 ) {
std::cerr << "ERROR in LauCPFitModel::setNBgkndEvents : The background yield LauParameter pointer is null." << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
if ( ! this->validBkgndClass( nBkgndEvents->name() ) ) {
std::cerr << "ERROR in LauCPFitModel::setNBkgndEvents : Invalid background class \"" << nBkgndEvents->name() << "\"." << std::endl;
std::cerr << " : Background class names must be provided in \"setBkgndClassNames\" before any other background-related actions can be performed." << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
UInt_t bkgndID = this->bkgndClassID( nBkgndEvents->name() );
if ( bkgndEvents_[bkgndID] != 0 ) {
std::cerr << "ERROR in LauCPFitModel::setNBkgndEvents : You are trying to overwrite the background yield." << std::endl;
return;
}
if ( bkgndAsym_[bkgndID] != 0 ) {
std::cerr << "ERROR in LauCPFitModel::setNBkgndEvents : You are trying to overwrite the background asymmetry." << std::endl;
return;
}
nBkgndEvents->name( nBkgndEvents->name()+"Events" );
if ( nBkgndEvents->isLValue() ) {
Double_t value = nBkgndEvents->value();
LauParameter* yield = dynamic_cast<LauParameter*>( nBkgndEvents );
yield->range(-2.0*(TMath::Abs(value)+1.0), 2.0*(TMath::Abs(value)+1.0));
}
bkgndEvents_[bkgndID] = nBkgndEvents;
bkgndAsym_[bkgndID] = new LauParameter(nBkgndEvents->name()+"Asym",0.0,-1.0,1.0,kTRUE);
}
void LauCPFitModel::setNBkgndEvents(LauAbsRValue* nBkgndEvents, LauAbsRValue* bkgndAsym)
{
if ( nBkgndEvents == 0 ) {
std::cerr << "ERROR in LauCPFitModel::setNBkgndEvents : The background yield LauParameter pointer is null." << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
if ( bkgndAsym == 0 ) {
std::cerr << "ERROR in LauCPFitModel::setNBkgndEvents : The background asym LauParameter pointer is null." << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
if ( ! this->validBkgndClass( nBkgndEvents->name() ) ) {
std::cerr << "ERROR in LauCPFitModel::setNBkgndEvents : Invalid background class \"" << nBkgndEvents->name() << "\"." << std::endl;
std::cerr << " : Background class names must be provided in \"setBkgndClassNames\" before any other background-related actions can be performed." << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
UInt_t bkgndID = this->bkgndClassID( nBkgndEvents->name() );
if ( bkgndEvents_[bkgndID] != 0 ) {
std::cerr << "ERROR in LauCPFitModel::setNBkgndEvents : You are trying to overwrite the background yield." << std::endl;
return;
}
if ( bkgndAsym_[bkgndID] != 0 ) {
std::cerr << "ERROR in LauCPFitModel::setNBkgndEvents : You are trying to overwrite the background asymmetry." << std::endl;
return;
}
nBkgndEvents->name( nBkgndEvents->name()+"Events" );
if ( nBkgndEvents->isLValue() ) {
Double_t value = nBkgndEvents->value();
LauParameter* yield = dynamic_cast<LauParameter*>( nBkgndEvents );
yield->range(-2.0*(TMath::Abs(value)+1.0), 2.0*(TMath::Abs(value)+1.0));
}
bkgndEvents_[bkgndID] = nBkgndEvents;
bkgndAsym->name( nBkgndEvents->name()+"Asym" );
if ( bkgndAsym->isLValue() ) {
LauParameter* asym = dynamic_cast<LauParameter*>( bkgndAsym );
asym->range(-1.0, 1.0);
}
bkgndAsym_[bkgndID] = bkgndAsym;
}
void LauCPFitModel::splitSignalComponent( const TH2* dpHisto, const Bool_t upperHalf, const Bool_t fluctuateBins, LauScfMap* scfMap )
{
if ( useSCF_ == kTRUE ) {
std::cerr << "ERROR in LauCPFitModel::splitSignalComponent : Have already setup SCF." << std::endl;
return;
}
if ( dpHisto == 0 ) {
std::cerr << "ERROR in LauCPFitModel::splitSignalComponent : The histogram pointer is null." << std::endl;
return;
}
const LauDaughters* daughters = negSigModel_->getDaughters();
scfFracHist_ = new LauEffModel( daughters, 0 );
scfFracHist_->setEffHisto( dpHisto, kTRUE, fluctuateBins, 0.0, 0.0, upperHalf, daughters->squareDP() );
scfMap_ = scfMap;
useSCF_ = kTRUE;
useSCFHist_ = kTRUE;
}
void LauCPFitModel::splitSignalComponent( const Double_t scfFrac, const Bool_t fixed )
{
if ( useSCF_ == kTRUE ) {
std::cerr << "ERROR in LauCPFitModel::splitSignalComponent : Have already setup SCF." << std::endl;
return;
}
scfFrac_.range( 0.0, 1.0 );
scfFrac_.value( scfFrac ); scfFrac_.initValue( scfFrac ); scfFrac_.genValue( scfFrac );
scfFrac_.fixed( fixed );
useSCF_ = kTRUE;
useSCFHist_ = kFALSE;
}
void LauCPFitModel::setBkgndDPModels(const TString& bkgndClass, LauAbsBkgndDPModel* negModel, LauAbsBkgndDPModel* posModel)
{
if ((negModel==0) || (posModel==0)) {
std::cerr << "ERROR in LauCPFitModel::setBkgndDPModels : One or both of the model pointers is null." << std::endl;
return;
}
// check that this background name is valid
if ( ! this->validBkgndClass( bkgndClass) ) {
std::cerr << "ERROR in LauCPFitModel::setBkgndDPModel : Invalid background class \"" << bkgndClass << "\"." << std::endl;
std::cerr << " : Background class names must be provided in \"setBkgndClassNames\" before any other background-related actions can be performed." << std::endl;
return;
}
UInt_t bkgndID = this->bkgndClassID( bkgndClass );
negBkgndDPModels_[bkgndID] = negModel;
posBkgndDPModels_[bkgndID] = posModel;
usingBkgnd_ = kTRUE;
}
void LauCPFitModel::setSignalPdfs(LauAbsPdf* negPdf, LauAbsPdf* posPdf)
{
if ( tagged_ ) {
if (negPdf==0 || posPdf==0) {
std::cerr << "ERROR in LauCPFitModel::setSignalPdfs : One or both of the PDF pointers is null." << std::endl;
return;
}
} else {
// if we're doing an untagged analysis we will only use the negative PDFs
if ( negPdf==0 ) {
std::cerr << "ERROR in LauCPFitModel::setSignalPdfs : The negative PDF pointer is null." << std::endl;
return;
}
if ( posPdf!=0 ) {
std::cerr << "WARNING in LauCPFitModel::setSignalPdfs : Doing an untagged fit so will not use the positive PDF." << std::endl;
}
}
negSignalPdfs_.push_back(negPdf);
posSignalPdfs_.push_back(posPdf);
}
void LauCPFitModel::setSCFPdfs(LauAbsPdf* negPdf, LauAbsPdf* posPdf)
{
if ( tagged_ ) {
if (negPdf==0 || posPdf==0) {
std::cerr << "ERROR in LauCPFitModel::setSCFPdfs : One or both of the PDF pointers is null." << std::endl;
return;
}
} else {
// if we're doing an untagged analysis we will only use the negative PDFs
if ( negPdf==0 ) {
std::cerr << "ERROR in LauCPFitModel::setSCFPdfs : The negative PDF pointer is null." << std::endl;
return;
}
if ( posPdf!=0 ) {
std::cerr << "WARNING in LauCPFitModel::setSCFPdfs : Doing an untagged fit so will not use the positive PDF." << std::endl;
}
}
negScfPdfs_.push_back(negPdf);
posScfPdfs_.push_back(posPdf);
}
void LauCPFitModel::setBkgndPdfs(const TString& bkgndClass, LauAbsPdf* negPdf, LauAbsPdf* posPdf)
{
if ( tagged_ ) {
if (negPdf==0 || posPdf==0) {
std::cerr << "ERROR in LauCPFitModel::setBkgndPdfs : One or both of the PDF pointers is null." << std::endl;
return;
}
} else {
// if we're doing an untagged analysis we will only use the negative PDFs
if ( negPdf==0 ) {
std::cerr << "ERROR in LauCPFitModel::setBkgndPdfs : The negative PDF pointer is null." << std::endl;
return;
}
if ( posPdf!=0 ) {
std::cerr << "WARNING in LauCPFitModel::setBkgndPdfs : Doing an untagged fit so will not use the positive PDF." << std::endl;
}
}
// check that this background name is valid
if ( ! this->validBkgndClass( bkgndClass ) ) {
std::cerr << "ERROR in LauCPFitModel::setBkgndPdfs : Invalid background class \"" << bkgndClass << "\"." << std::endl;
std::cerr << " : Background class names must be provided in \"setBkgndClassNames\" before any other background-related actions can be performed." << std::endl;
return;
}
UInt_t bkgndID = this->bkgndClassID( bkgndClass );
negBkgndPdfs_[bkgndID].push_back(negPdf);
posBkgndPdfs_[bkgndID].push_back(posPdf);
usingBkgnd_ = kTRUE;
}
void LauCPFitModel::setAmpCoeffSet(LauAbsCoeffSet* coeffSet)
{
// Resize the coeffPars vector if not already done
if ( coeffPars_.empty() ) {
const UInt_t nNegAmp = negSigModel_->getnTotAmp();
const UInt_t nPosAmp = posSigModel_->getnTotAmp();
if ( nNegAmp != nPosAmp ) {
std::cerr << "ERROR in LauCPFitModel::setAmpCoeffSet : Unequal number of signal DP components in the negative and positive models: " << nNegAmp << " != " << nPosAmp << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
coeffPars_.resize( nNegAmp );
for (std::vector<LauAbsCoeffSet*>::iterator iter = coeffPars_.begin(); iter != coeffPars_.end(); ++iter) {
(*iter) = 0;
}
fitFracAsymm_.resize( nNegAmp );
acp_.resize( nNegAmp );
}
// Is there a component called compName in the signal model?
TString compName(coeffSet->name());
TString conjName = negSigModel_->getConjResName(compName);
const Int_t negIndex = negSigModel_->resonanceIndex(compName);
const Int_t posIndex = posSigModel_->resonanceIndex(conjName);
if ( negIndex < 0 ) {
std::cerr << "ERROR in LauCPFitModel::setAmpCoeffSet : " << negParent_ << " signal DP model doesn't contain component \"" << compName << "\"." << std::endl;
return;
}
if ( posIndex < 0 ) {
std::cerr << "ERROR in LauCPFitModel::setAmpCoeffSet : " << posParent_ << " signal DP model doesn't contain component \"" << conjName << "\"." << std::endl;
return;
}
if ( posIndex != negIndex ) {
std::cerr << "ERROR in LauCPFitModel::setAmpCoeffSet : " << negParent_ << " signal DP model and " << posParent_ << " signal DP model have different indices for components \"" << compName << "\" and \"" << conjName << "\"." << std::endl;
return;
}
// Do we already have it in our list of names?
if ( coeffPars_[negIndex] != 0 && coeffPars_[negIndex]->name() == compName) {
std::cerr << "ERROR in LauCPFitModel::setAmpCoeffSet : Have already set coefficients for \"" << compName << "\"." << std::endl;
return;
}
coeffSet->index(negIndex);
coeffPars_[negIndex] = coeffSet;
TString parName = coeffSet->baseName(); parName += "FitFracAsym";
fitFracAsymm_[negIndex] = LauParameter(parName, 0.0, -1.0, 1.0);
acp_[negIndex] = coeffSet->acp();
++nSigComp_;
std::cout << "INFO in LauCPFitModel::setAmpCoeffSet : Added coefficients for component \"" << compName << "\" to the fit model." << std::endl;
coeffSet->printParValues();
}
void LauCPFitModel::initialise()
{
// Initialisation
if (!this->useDP() && negSignalPdfs_.empty()) {
std::cerr << "ERROR in LauCPFitModel::initialise : Signal model doesn't exist for any variable." << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
if ( this->useDP() ) {
// Check that we have all the Dalitz-plot models
if ((negSigModel_ == 0) || (posSigModel_ == 0)) {
std::cerr << "ERROR in LauCPFitModel::initialise : the pointer to one (neg or pos) of the signal DP models is null.\n";
std::cerr << " : Removing the Dalitz Plot from the model." << std::endl;
this->useDP(kFALSE);
}
if ( usingBkgnd_ ) {
if ( negBkgndDPModels_.empty() || posBkgndDPModels_.empty() ) {
std::cerr << "ERROR in LauCPFitModel::initialise : No background DP models found.\n";
std::cerr << " : Removing the Dalitz plot from the model." << std::endl;
this->useDP(kFALSE);
}
for (LauBkgndDPModelList::const_iterator dpmodel_iter = negBkgndDPModels_.begin(); dpmodel_iter != negBkgndDPModels_.end(); ++dpmodel_iter ) {
if ( (*dpmodel_iter) == 0 ) {
std::cerr << "ERROR in LauCPFitModel::initialise : The pointer to one of the background DP models is null.\n";
std::cerr << " : Removing the Dalitz Plot from the model." << std::endl;
this->useDP(kFALSE);
break;
}
}
for (LauBkgndDPModelList::const_iterator dpmodel_iter = posBkgndDPModels_.begin(); dpmodel_iter != posBkgndDPModels_.end(); ++dpmodel_iter ) {
if ( (*dpmodel_iter) == 0 ) {
std::cerr << "ERROR in LauCPFitModel::initialise : The pointer to one of the background DP models is null.\n";
std::cerr << " : Removing the Dalitz Plot from the model." << std::endl;
this->useDP(kFALSE);
break;
}
}
}
// Need to check that the number of components we have matches that of the dynamics
const UInt_t nNegAmp = negSigModel_->getnTotAmp();
const UInt_t nPosAmp = posSigModel_->getnTotAmp();
if ( nNegAmp != nPosAmp ) {
std::cerr << "ERROR in LauCPFitModel::initialise : Unequal number of signal DP components in the negative and positive models: " << nNegAmp << " != " << nPosAmp << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
if ( nNegAmp != nSigComp_ ) {
std::cerr << "ERROR in LauCPFitModel::initialise : Number of signal DP components in the model (" << nNegAmp << ") not equal to number of coefficients supplied (" << nSigComp_ << ")." << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
- if( !this->fixParamFileName_.IsNull() || !this->fixParamMap_.empty() ) {
- std::vector<LauAbsCoeffSet*>::iterator itr;
+ if ( !fixParamFileName_.IsNull() || !fixParamMap_.empty() ) {
- // Set coefficients
+ // Set coefficients
- std::vector<LauParameter *> params;
- for (itr = coeffPars_.begin(); itr!= coeffPars_.end(); itr++) {
- std::vector<LauParameter*> p = (*itr)->getParameters();
- params.insert(params.end(), p.begin(), p.end());
- }
+ std::vector<LauParameter*> params;
+ for ( auto itr = coeffPars_.begin(); itr != coeffPars_.end(); ++itr ) {
+ std::vector<LauParameter*> p = (*itr)->getParameters();
+ params.insert(params.end(), p.begin(), p.end());
+ }
- this->fixParams(params);
+ this->fixParams(params);
- // Set resonance parameters (if they exist)
+ // Set resonance parameters (if they exist)
- negSigModel_->collateResonanceParameters();
- posSigModel_->collateResonanceParameters();
+ negSigModel_->collateResonanceParameters();
+ posSigModel_->collateResonanceParameters();
- this->fixParams(negSigModel_->getFloatingParameters());
- this->fixParams(posSigModel_->getFloatingParameters());
- }
+ this->fixParams(negSigModel_->getFloatingParameters());
+ this->fixParams(posSigModel_->getFloatingParameters());
+ }
// From the initial parameter values calculate the coefficients
// so they can be passed to the signal model
this->updateCoeffs();
// If all is well, go ahead and initialise them
this->initialiseDPModels();
}
// Next check that, if a given component is being used we've got the
// right number of PDFs for all the variables involved
// TODO - should probably check variable names and so on as well
UInt_t nsigpdfvars(0);
for ( LauPdfList::const_iterator pdf_iter = negSignalPdfs_.begin(); pdf_iter != negSignalPdfs_.end(); ++pdf_iter ) {
std::vector<TString> varNames = (*pdf_iter)->varNames();
for ( std::vector<TString>::const_iterator var_iter = varNames.begin(); var_iter != varNames.end(); ++var_iter ) {
if ( (*var_iter) != "m13Sq" && (*var_iter) != "m23Sq" ) {
++nsigpdfvars;
}
}
}
if (useSCF_) {
UInt_t nscfpdfvars(0);
for ( LauPdfList::const_iterator pdf_iter = negScfPdfs_.begin(); pdf_iter != negScfPdfs_.end(); ++pdf_iter ) {
std::vector<TString> varNames = (*pdf_iter)->varNames();
for ( std::vector<TString>::const_iterator var_iter = varNames.begin(); var_iter != varNames.end(); ++var_iter ) {
if ( (*var_iter) != "m13Sq" && (*var_iter) != "m23Sq" ) {
++nscfpdfvars;
}
}
}
if (nscfpdfvars != nsigpdfvars) {
std::cerr << "ERROR in LauCPFitModel::initialise : There are " << nsigpdfvars << " TM signal PDF variables but " << nscfpdfvars << " SCF signal PDF variables." << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
}
if (usingBkgnd_) {
for (LauBkgndPdfsList::const_iterator bgclass_iter = negBkgndPdfs_.begin(); bgclass_iter != negBkgndPdfs_.end(); ++bgclass_iter) {
UInt_t nbkgndpdfvars(0);
const LauPdfList& pdfList = (*bgclass_iter);
for ( LauPdfList::const_iterator pdf_iter = pdfList.begin(); pdf_iter != pdfList.end(); ++pdf_iter ) {
std::vector<TString> varNames = (*pdf_iter)->varNames();
for ( std::vector<TString>::const_iterator var_iter = varNames.begin(); var_iter != varNames.end(); ++var_iter ) {
if ( (*var_iter) != "m13Sq" && (*var_iter) != "m23Sq" ) {
++nbkgndpdfvars;
}
}
}
if (nbkgndpdfvars != nsigpdfvars) {
std::cerr << "ERROR in LauCPFitModel::initialise : There are " << nsigpdfvars << " signal PDF variables but " << nbkgndpdfvars << " bkgnd PDF variables." << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
}
}
// Clear the vectors of parameter information so we can start from scratch
this->clearFitParVectors();
// Set the fit parameters for signal and background models
this->setSignalDPParameters();
// Set the fit parameters for the various extra PDFs
this->setExtraPdfParameters();
// Set the initial bg and signal events
this->setFitNEvents();
// Check that we have the expected number of fit variables
const LauParameterPList& fitVars = this->fitPars();
if (fitVars.size() != (nSigDPPar_ + nExtraPdfPar_ + nNormPar_)) {
std::cerr << "ERROR in LauCPFitModel::initialise : Number of fit parameters not of expected size. Exiting" << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
this->setExtraNtupleVars();
}
void LauCPFitModel::recalculateNormalisation()
{
//std::cout << "INFO in LauCPFitModel::recalculateNormalizationInDPModels : Recalc Norm in DP model" << std::endl;
negSigModel_->recalculateNormalisation();
posSigModel_->recalculateNormalisation();
negSigModel_->modifyDataTree();
posSigModel_->modifyDataTree();
}
void LauCPFitModel::initialiseDPModels()
{
std::cout << "INFO in LauCPFitModel::initialiseDPModels : Initialising signal DP model" << std::endl;
negSigModel_->initialise(negCoeffs_);
posSigModel_->initialise(posCoeffs_);
if (usingBkgnd_ == kTRUE) {
for (LauBkgndDPModelList::iterator iter = negBkgndDPModels_.begin(); iter != negBkgndDPModels_.end(); ++iter) {
(*iter)->initialise();
}
for (LauBkgndDPModelList::iterator iter = posBkgndDPModels_.begin(); iter != posBkgndDPModels_.end(); ++iter) {
(*iter)->initialise();
}
}
}
void LauCPFitModel::setSignalDPParameters()
{
// Set the fit parameters for the signal model.
nSigDPPar_ = 0;
if ( ! this->useDP() ) {
return;
}
std::cout << "INFO in LauCPFitModel::setSignalDPParameters : Setting the initial fit parameters for the signal DP model." << std::endl;
// Place isobar coefficient parameters in vector of fit variables
LauParameterPList& fitVars = this->fitPars();
for (UInt_t i = 0; i < nSigComp_; i++) {
LauParameterPList pars = coeffPars_[i]->getParameters();
for (LauParameterPList::iterator iter = pars.begin(); iter != pars.end(); ++iter) {
if ( !(*iter)->clone() ) {
fitVars.push_back(*iter);
++nSigDPPar_;
}
}
}
// Obtain the resonance parameters and place them in the vector of fit variables and in a separate vector
// Need to make sure that they are unique because some might appear in both DP models
LauParameterPSet& resVars = this->resPars();
resVars.clear();
LauParameterPList& negSigDPPars = negSigModel_->getFloatingParameters();
LauParameterPList& posSigDPPars = posSigModel_->getFloatingParameters();
for ( LauParameterPList::iterator iter = negSigDPPars.begin(); iter != negSigDPPars.end(); ++iter ) {
if ( resVars.insert(*iter).second ) {
fitVars.push_back(*iter);
++nSigDPPar_;
}
}
for ( LauParameterPList::iterator iter = posSigDPPars.begin(); iter != posSigDPPars.end(); ++iter ) {
if ( resVars.insert(*iter).second ) {
fitVars.push_back(*iter);
++nSigDPPar_;
}
}
}
void LauCPFitModel::setExtraPdfParameters()
{
// Include all the parameters of the PDF in the fit
// NB all of them are passed to the fit, even though some have been fixed through parameter.fixed(kTRUE)
// With the new "cloned parameter" scheme only "original" parameters are passed to the fit.
// Their clones are updated automatically when the originals are updated.
nExtraPdfPar_ = 0;
nExtraPdfPar_ += this->addFitParameters(negSignalPdfs_);
if ( tagged_ ) {
nExtraPdfPar_ += this->addFitParameters(posSignalPdfs_);
}
if (useSCF_ == kTRUE) {
nExtraPdfPar_ += this->addFitParameters(negScfPdfs_);
if ( tagged_ ) {
nExtraPdfPar_ += this->addFitParameters(posScfPdfs_);
}
}
if (usingBkgnd_ == kTRUE) {
for (LauBkgndPdfsList::iterator iter = negBkgndPdfs_.begin(); iter != negBkgndPdfs_.end(); ++iter) {
nExtraPdfPar_ += this->addFitParameters(*iter);
}
if ( tagged_ ) {
for (LauBkgndPdfsList::iterator iter = posBkgndPdfs_.begin(); iter != posBkgndPdfs_.end(); ++iter) {
nExtraPdfPar_ += this->addFitParameters(*iter);
}
}
}
}
void LauCPFitModel::setFitNEvents()
{
if ( signalEvents_ == 0 ) {
std::cerr << "ERROR in LauCPFitModel::setFitNEvents : Signal yield not defined." << std::endl;
return;
}
nNormPar_ = 0;
// initialise the total number of events to be the sum of all the hypotheses
Double_t nTotEvts = signalEvents_->value();
for (LauBkgndYieldList::const_iterator iter = bkgndEvents_.begin(); iter != bkgndEvents_.end(); ++iter) {
if ( (*iter) == 0 ) {
std::cerr << "ERROR in LauCPFitModel::setFitNEvents : Background yield not defined." << std::endl;
return;
}
nTotEvts += (*iter)->value();
}
this->eventsPerExpt(TMath::FloorNint(nTotEvts));
LauParameterPList& fitVars = this->fitPars();
// if doing an extended ML fit add the number of signal events into the fit parameters
if (this->doEMLFit()) {
std::cout << "INFO in LauCPFitModel::setFitNEvents : Initialising number of events for signal and background components..." << std::endl;
// add the signal fraction to the list of fit parameters
if(!signalEvents_->fixed()) {
fitVars.push_back(signalEvents_);
++nNormPar_;
}
} else {
std::cout << "INFO in LauCPFitModel::setFitNEvents : Initialising number of events for background components (and hence signal)..." << std::endl;
}
// if not using the DP in the model we need an explicit signal asymmetry parameter
if (this->useDP() == kFALSE) {
if(!signalAsym_->fixed()) {
fitVars.push_back(signalAsym_);
++nNormPar_;
}
}
if (useSCF_ && !useSCFHist_) {
if(!scfFrac_.fixed()) {
fitVars.push_back(&scfFrac_);
++nNormPar_;
}
}
if (usingBkgnd_ == kTRUE) {
for (LauBkgndYieldList::iterator iter = bkgndEvents_.begin(); iter != bkgndEvents_.end(); ++iter) {
std::vector<LauParameter*> parameters = (*iter)->getPars();
for ( LauParameter* parameter : parameters ) {
if(!parameter->clone()) {
fitVars.push_back(parameter);
++nNormPar_;
}
}
}
for (LauBkgndYieldList::iterator iter = bkgndAsym_.begin(); iter != bkgndAsym_.end(); ++iter) {
std::vector<LauParameter*> parameters = (*iter)->getPars();
for ( LauParameter* parameter : parameters ) {
if(!parameter->clone()) {
fitVars.push_back(parameter);
++nNormPar_;
}
}
}
}
}
void LauCPFitModel::setExtraNtupleVars()
{
// Set up other parameters derived from the fit results, e.g. fit fractions.
if (this->useDP() != kTRUE) {
return;
}
// First clear the vectors so we start from scratch
this->clearExtraVarVectors();
LauParameterList& extraVars = this->extraPars();
// Add the positive and negative fit fractions for each signal component
negFitFrac_ = negSigModel_->getFitFractions();
if (negFitFrac_.size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::setExtraNtupleVars : Initial Fit Fraction array of unexpected dimension: " << negFitFrac_.size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
for (UInt_t i(0); i<nSigComp_; ++i) {
if (negFitFrac_[i].size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::setExtraNtupleVars : Initial Fit Fraction array of unexpected dimension: " << negFitFrac_[i].size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
}
posFitFrac_ = posSigModel_->getFitFractions();
if (posFitFrac_.size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::setExtraNtupleVars : Initial Fit Fraction array of unexpected dimension: " << posFitFrac_.size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
for (UInt_t i(0); i<nSigComp_; ++i) {
if (posFitFrac_[i].size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::setExtraNtupleVars : Initial Fit Fraction array of unexpected dimension: " << posFitFrac_[i].size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
}
// Add the positive and negative fit fractions that have not been corrected for the efficiency for each signal component
negFitFracEffUnCorr_ = negSigModel_->getFitFractionsEfficiencyUncorrected();
if (negFitFracEffUnCorr_.size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::setExtraNtupleVars : Initial Fit Fraction array of unexpected dimension: " << negFitFracEffUnCorr_.size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
for (UInt_t i(0); i<nSigComp_; ++i) {
if (negFitFracEffUnCorr_[i].size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::setExtraNtupleVars : Initial Fit Fraction array of unexpected dimension: " << negFitFracEffUnCorr_[i].size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
}
posFitFracEffUnCorr_ = posSigModel_->getFitFractionsEfficiencyUncorrected();
if (posFitFracEffUnCorr_.size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::setExtraNtupleVars : Initial Fit Fraction array of unexpected dimension: " << posFitFracEffUnCorr_.size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
for (UInt_t i(0); i<nSigComp_; ++i) {
if (posFitFracEffUnCorr_[i].size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::setExtraNtupleVars : Initial Fit Fraction array of unexpected dimension: " << posFitFracEffUnCorr_[i].size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
}
for (UInt_t i(0); i<nSigComp_; ++i) {
for (UInt_t j(i); j<nSigComp_; ++j) {
TString name = negFitFrac_[i][j].name();
name.Insert( name.Index("FitFrac"), "Neg" );
negFitFrac_[i][j].name(name);
extraVars.push_back(negFitFrac_[i][j]);
name = posFitFrac_[i][j].name();
name.Insert( name.Index("FitFrac"), "Pos" );
posFitFrac_[i][j].name(name);
extraVars.push_back(posFitFrac_[i][j]);
name = negFitFracEffUnCorr_[i][j].name();
name.Insert( name.Index("FitFrac"), "Neg" );
negFitFracEffUnCorr_[i][j].name(name);
extraVars.push_back(negFitFracEffUnCorr_[i][j]);
name = posFitFracEffUnCorr_[i][j].name();
name.Insert( name.Index("FitFrac"), "Pos" );
posFitFracEffUnCorr_[i][j].name(name);
extraVars.push_back(posFitFracEffUnCorr_[i][j]);
}
}
// Store any extra parameters/quantities from the DP model (e.g. K-matrix total fit fractions)
std::vector<LauParameter> negExtraPars = negSigModel_->getExtraParameters();
std::vector<LauParameter>::iterator negExtraIter;
for (negExtraIter = negExtraPars.begin(); negExtraIter != negExtraPars.end(); ++negExtraIter) {
LauParameter negExtraParameter = (*negExtraIter);
extraVars.push_back(negExtraParameter);
}
std::vector<LauParameter> posExtraPars = posSigModel_->getExtraParameters();
std::vector<LauParameter>::iterator posExtraIter;
for (posExtraIter = posExtraPars.begin(); posExtraIter != posExtraPars.end(); ++posExtraIter) {
LauParameter posExtraParameter = (*posExtraIter);
extraVars.push_back(posExtraParameter);
}
// Now add in the DP efficiency value
Double_t initMeanEff = negSigModel_->getMeanEff().initValue();
negMeanEff_.value(initMeanEff);
negMeanEff_.genValue(initMeanEff);
negMeanEff_.initValue(initMeanEff);
extraVars.push_back(negMeanEff_);
initMeanEff = posSigModel_->getMeanEff().initValue();
posMeanEff_.value(initMeanEff);
posMeanEff_.genValue(initMeanEff);
posMeanEff_.initValue(initMeanEff);
extraVars.push_back(posMeanEff_);
// Also add in the DP rates
Double_t initDPRate = negSigModel_->getDPRate().initValue();
negDPRate_.value(initDPRate);
negDPRate_.genValue(initDPRate);
negDPRate_.initValue(initDPRate);
extraVars.push_back(negDPRate_);
initDPRate = posSigModel_->getDPRate().initValue();
posDPRate_.value(initDPRate);
posDPRate_.genValue(initDPRate);
posDPRate_.initValue(initDPRate);
extraVars.push_back(posDPRate_);
// Calculate the CPC and CPV Fit Fractions, ACPs and FitFrac asymmetries
this->calcExtraFractions(kTRUE);
this->calcAsymmetries(kTRUE);
// Add the CP violating and CP conserving fit fractions for each signal component
for (UInt_t i = 0; i < nSigComp_; i++) {
for (UInt_t j = i; j < nSigComp_; j++) {
extraVars.push_back(CPVFitFrac_[i][j]);
}
}
for (UInt_t i = 0; i < nSigComp_; i++) {
for (UInt_t j = i; j < nSigComp_; j++) {
extraVars.push_back(CPCFitFrac_[i][j]);
}
}
// Add the Fit Fraction asymmetry for each signal component
for (UInt_t i = 0; i < nSigComp_; i++) {
extraVars.push_back(fitFracAsymm_[i]);
}
// Add the calculated CP asymmetry for each signal component
for (UInt_t i = 0; i < nSigComp_; i++) {
extraVars.push_back(acp_[i]);
}
}
void LauCPFitModel::calcExtraFractions(Bool_t initValues)
{
// Calculate the CP-conserving and CP-violating fit fractions
if (initValues) {
// create the structure
CPCFitFrac_.clear();
CPVFitFrac_.clear();
CPCFitFrac_.resize(nSigComp_);
CPVFitFrac_.resize(nSigComp_);
for (UInt_t i(0); i<nSigComp_; ++i) {
CPCFitFrac_[i].resize(nSigComp_);
CPVFitFrac_[i].resize(nSigComp_);
for (UInt_t j(i); j<nSigComp_; ++j) {
TString name = negFitFrac_[i][j].name();
name.Replace( name.Index("Neg"), 3, "CPC" );
CPCFitFrac_[i][j].name( name );
CPCFitFrac_[i][j].valueAndRange( 0.0, -100.0, 100.0 );
name = negFitFrac_[i][j].name();
name.Replace( name.Index("Neg"), 3, "CPV" );
CPVFitFrac_[i][j].name( name );
CPVFitFrac_[i][j].valueAndRange( 0.0, -100.0, 100.0 );
}
}
}
Double_t denom = negDPRate_.value() + posDPRate_.value();
for (UInt_t i(0); i<nSigComp_; ++i) {
for (UInt_t j(i); j<nSigComp_; ++j) {
Double_t negTerm = negFitFrac_[i][j].value()*negDPRate_.value();
Double_t posTerm = posFitFrac_[i][j].value()*posDPRate_.value();
Double_t cpcFitFrac = (negTerm + posTerm)/denom;
Double_t cpvFitFrac = (negTerm - posTerm)/denom;
CPCFitFrac_[i][j].value(cpcFitFrac);
CPVFitFrac_[i][j].value(cpvFitFrac);
if (initValues) {
CPCFitFrac_[i][j].genValue(cpcFitFrac);
CPCFitFrac_[i][j].initValue(cpcFitFrac);
CPVFitFrac_[i][j].genValue(cpvFitFrac);
CPVFitFrac_[i][j].initValue(cpvFitFrac);
}
}
}
}
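The combination performed in calcExtraFractions() above can be summarised in a standalone sketch. The helper name below is hypothetical; only the arithmetic is taken from the function:

```cpp
#include <cassert>
#include <utility>

// Hypothetical standalone sketch of the arithmetic in calcExtraFractions():
// given charge-separated fit fractions (FFneg, FFpos) and DP rates (Gneg, Gpos),
//   CP-conserving FF = (FFneg*Gneg + FFpos*Gpos) / (Gneg + Gpos)
//   CP-violating  FF = (FFneg*Gneg - FFpos*Gpos) / (Gneg + Gpos)
std::pair<double,double> cpcCpvFitFrac(double FFneg, double Gneg, double FFpos, double Gpos)
{
	const double negTerm = FFneg * Gneg;
	const double posTerm = FFpos * Gpos;
	const double denom   = Gneg + Gpos;
	return { (negTerm + posTerm) / denom, (negTerm - posTerm) / denom };
}
```

For equal rates and equal charge-separated fit fractions the CP-violating component vanishes, as expected.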
void LauCPFitModel::calcAsymmetries(Bool_t initValues)
{
// Calculate the CP asymmetries
// Also calculate the fit fraction asymmetries
for (UInt_t i = 0; i < nSigComp_; i++) {
acp_[i] = coeffPars_[i]->acp();
LauAsymmCalc asymmCalc(negFitFrac_[i][i].value(), posFitFrac_[i][i].value());
Double_t asym = asymmCalc.getAsymmetry();
fitFracAsymm_[i].value(asym);
if (initValues) {
fitFracAsymm_[i].genValue(asym);
fitFracAsymm_[i].initValue(asym);
}
}
}
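The fit-fraction asymmetry computed via LauAsymmCalc above is, in essence, the raw asymmetry of the two charge-separated fractions. A minimal sketch (helper name hypothetical):

```cpp
#include <cassert>

// Hypothetical sketch of the quantity stored in fitFracAsymm_ above:
// A = (neg - pos) / (neg + pos)
double rawAsymmetry(double neg, double pos)
{
	return (neg - pos) / (neg + pos);
}
```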
void LauCPFitModel::finaliseFitResults(const TString& tablePrefixName)
{
// Retrieve parameters from the fit results for calculations and toy generation
// and eventually store these in output root ntuples/text files
// Now take the fit parameters and update them as necessary
// i.e. to make mag > 0.0, phase in the right range.
// This function will also calculate any other values, such as the
// fit fractions, using any errors provided by fitParErrors as appropriate.
// Also obtain the pull values: (measured - generated)/(average error)
if (this->useDP() == kTRUE) {
for (UInt_t i = 0; i < nSigComp_; ++i) {
// Check whether we have "a/b > 0.0", and phases in the right range
coeffPars_[i]->finaliseValues();
}
}
// update the pulls on the event fractions and asymmetries
if (this->doEMLFit()) {
signalEvents_->updatePull();
}
if (this->useDP() == kFALSE) {
signalAsym_->updatePull();
}
if (useSCF_ && !useSCFHist_) {
scfFrac_.updatePull();
}
if (usingBkgnd_ == kTRUE) {
for (LauBkgndYieldList::iterator iter = bkgndEvents_.begin(); iter != bkgndEvents_.end(); ++iter) {
std::vector<LauParameter*> parameters = (*iter)->getPars();
for ( LauParameter* parameter : parameters ) {
parameter->updatePull();
}
}
for (LauBkgndYieldList::iterator iter = bkgndAsym_.begin(); iter != bkgndAsym_.end(); ++iter) {
std::vector<LauParameter*> parameters = (*iter)->getPars();
for ( LauParameter* parameter : parameters ) {
parameter->updatePull();
}
}
}
// Update the pulls on all the extra PDFs' parameters
this->updateFitParameters(negSignalPdfs_);
this->updateFitParameters(posSignalPdfs_);
if (useSCF_ == kTRUE) {
this->updateFitParameters(negScfPdfs_);
this->updateFitParameters(posScfPdfs_);
}
if (usingBkgnd_ == kTRUE) {
for (LauBkgndPdfsList::iterator iter = negBkgndPdfs_.begin(); iter != negBkgndPdfs_.end(); ++iter) {
this->updateFitParameters(*iter);
}
for (LauBkgndPdfsList::iterator iter = posBkgndPdfs_.begin(); iter != posBkgndPdfs_.end(); ++iter) {
this->updateFitParameters(*iter);
}
}
// Fill the fit results to the ntuple
// update the coefficients and then calculate the fit fractions and ACPs
if (this->useDP() == kTRUE) {
this->updateCoeffs();
negSigModel_->updateCoeffs(negCoeffs_); negSigModel_->calcExtraInfo();
posSigModel_->updateCoeffs(posCoeffs_); posSigModel_->calcExtraInfo();
LauParArray negFitFrac = negSigModel_->getFitFractions();
if (negFitFrac.size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::finaliseFitResults : Fit Fraction array of unexpected dimension: " << negFitFrac.size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
for (UInt_t i(0); i<nSigComp_; ++i) {
if (negFitFrac[i].size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::finaliseFitResults : Fit Fraction array of unexpected dimension: " << negFitFrac[i].size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
}
LauParArray posFitFrac = posSigModel_->getFitFractions();
if (posFitFrac.size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::finaliseFitResults : Fit Fraction array of unexpected dimension: " << posFitFrac.size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
for (UInt_t i(0); i<nSigComp_; ++i) {
if (posFitFrac[i].size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::finaliseFitResults : Fit Fraction array of unexpected dimension: " << posFitFrac[i].size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
}
LauParArray negFitFracEffUnCorr = negSigModel_->getFitFractionsEfficiencyUncorrected();
if (negFitFracEffUnCorr.size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::finaliseFitResults : Fit Fraction array of unexpected dimension: " << negFitFracEffUnCorr.size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
for (UInt_t i(0); i<nSigComp_; ++i) {
if (negFitFracEffUnCorr[i].size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::finaliseFitResults : Fit Fraction array of unexpected dimension: " << negFitFracEffUnCorr[i].size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
}
LauParArray posFitFracEffUnCorr = posSigModel_->getFitFractionsEfficiencyUncorrected();
if (posFitFracEffUnCorr.size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::finaliseFitResults : Fit Fraction array of unexpected dimension: " << posFitFracEffUnCorr.size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
for (UInt_t i(0); i<nSigComp_; ++i) {
if (posFitFracEffUnCorr[i].size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::finaliseFitResults : Fit Fraction array of unexpected dimension: " << posFitFracEffUnCorr[i].size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
}
for (UInt_t i(0); i<nSigComp_; ++i) {
for (UInt_t j(i); j<nSigComp_; ++j) {
negFitFrac_[i][j].value(negFitFrac[i][j].value());
posFitFrac_[i][j].value(posFitFrac[i][j].value());
negFitFracEffUnCorr_[i][j].value(negFitFracEffUnCorr[i][j].value());
posFitFracEffUnCorr_[i][j].value(posFitFracEffUnCorr[i][j].value());
}
}
negMeanEff_.value(negSigModel_->getMeanEff().value());
posMeanEff_.value(posSigModel_->getMeanEff().value());
negDPRate_.value(negSigModel_->getDPRate().value());
posDPRate_.value(posSigModel_->getDPRate().value());
this->calcExtraFractions();
this->calcAsymmetries();
// Then store the final fit parameters, and any extra parameters for
// the signal model (e.g. fit fractions, FF asymmetries, ACPs, mean efficiency and DP rate)
this->clearExtraVarVectors();
LauParameterList& extraVars = this->extraPars();
// Add the positive and negative fit fractions for each signal component
for (UInt_t i(0); i<nSigComp_; ++i) {
for (UInt_t j(i); j<nSigComp_; ++j) {
extraVars.push_back(negFitFrac_[i][j]);
extraVars.push_back(posFitFrac_[i][j]);
extraVars.push_back(negFitFracEffUnCorr_[i][j]);
extraVars.push_back(posFitFracEffUnCorr_[i][j]);
}
}
// Store any extra parameters/quantities from the DP model (e.g. K-matrix total fit fractions)
std::vector<LauParameter> negExtraPars = negSigModel_->getExtraParameters();
std::vector<LauParameter>::iterator negExtraIter;
for (negExtraIter = negExtraPars.begin(); negExtraIter != negExtraPars.end(); ++negExtraIter) {
LauParameter negExtraParameter = (*negExtraIter);
extraVars.push_back(negExtraParameter);
}
std::vector<LauParameter> posExtraPars = posSigModel_->getExtraParameters();
std::vector<LauParameter>::iterator posExtraIter;
for (posExtraIter = posExtraPars.begin(); posExtraIter != posExtraPars.end(); ++posExtraIter) {
LauParameter posExtraParameter = (*posExtraIter);
extraVars.push_back(posExtraParameter);
}
extraVars.push_back(negMeanEff_);
extraVars.push_back(posMeanEff_);
extraVars.push_back(negDPRate_);
extraVars.push_back(posDPRate_);
for (UInt_t i = 0; i < nSigComp_; i++) {
for (UInt_t j(i); j<nSigComp_; ++j) {
extraVars.push_back(CPVFitFrac_[i][j]);
}
}
for (UInt_t i = 0; i < nSigComp_; i++) {
for (UInt_t j(i); j<nSigComp_; ++j) {
extraVars.push_back(CPCFitFrac_[i][j]);
}
}
for (UInt_t i(0); i<nSigComp_; ++i) {
extraVars.push_back(fitFracAsymm_[i]);
}
for (UInt_t i(0); i<nSigComp_; ++i) {
extraVars.push_back(acp_[i]);
}
this->printFitFractions(std::cout);
this->printAsymmetries(std::cout);
}
const LauParameterPList& fitVars = this->fitPars();
const LauParameterList& extraVars = this->extraPars();
LauFitNtuple* ntuple = this->fitNtuple();
ntuple->storeParsAndErrors(fitVars, extraVars);
// find out the correlation matrix for the parameters
ntuple->storeCorrMatrix(this->iExpt(), this->fitStatus(), this->covarianceMatrix());
// Fill the data into ntuple
ntuple->updateFitNtuple();
// Print out the partial fit fractions, phases and the
// averaged efficiency, reweighted by the dynamics (and anything else)
if (this->writeLatexTable()) {
TString sigOutFileName(tablePrefixName);
sigOutFileName += "_"; sigOutFileName += this->iExpt(); sigOutFileName += "Expt.tex";
this->writeOutTable(sigOutFileName);
}
}
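The pull mentioned in the comments of finaliseFitResults(), (measured - generated)/(average error), can be sketched in isolation. The function name is hypothetical; the actual update is done by LauParameter::updatePull():

```cpp
#include <cassert>

// Hypothetical sketch of the pull definition used above:
// pull = (fitted value - generated value) / (average error)
double pull(double fitted, double generated, double avgError)
{
	return (fitted - generated) / avgError;
}
```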
void LauCPFitModel::printFitFractions(std::ostream& output)
{
// Print out Fit Fractions, total DP rate and mean efficiency
// First for the negative sample (e.g. B- events)
for (UInt_t i = 0; i < nSigComp_; i++) {
const TString compName(coeffPars_[i]->name());
output << negParent_ << " FitFraction for component " << i << " (" << compName << ") = " << negFitFrac_[i][i] << std::endl;
}
output << negParent_ << " overall DP rate (integral of matrix element squared) = " << negDPRate_ << std::endl;
output << negParent_ << " average efficiency weighted by whole DP dynamics = " << negMeanEff_ << std::endl;
// Then for the positive sample
for (UInt_t i = 0; i < nSigComp_; i++) {
const TString compName(coeffPars_[i]->name());
const TString conjName(negSigModel_->getConjResName(compName));
output << posParent_ << " FitFraction for component " << i << " (" << conjName << ") = " << posFitFrac_[i][i] << std::endl;
}
output << posParent_ << " overall DP rate (integral of matrix element squared) = " << posDPRate_ << std::endl;
output << posParent_ << " average efficiency weighted by whole DP dynamics = " << posMeanEff_ << std::endl;
}
void LauCPFitModel::printAsymmetries(std::ostream& output)
{
for (UInt_t i = 0; i < nSigComp_; i++) {
const TString compName(coeffPars_[i]->name());
output << "Fit Fraction asymmetry for component " << i << " (" << compName << ") = " << fitFracAsymm_[i] << std::endl;
}
for (UInt_t i = 0; i < nSigComp_; i++) {
const TString compName(coeffPars_[i]->name());
output << "ACP for component " << i << " (" << compName << ") = " << acp_[i] << std::endl;
}
}
void LauCPFitModel::writeOutTable(const TString& outputFile)
{
// Write out the results of the fit to a tex-readable table
// TODO - need to include the yields in this table
std::ofstream fout(outputFile);
LauPrint print;
std::cout << "INFO in LauCPFitModel::writeOutTable : Writing out results of the fit to the tex file " << outputFile << std::endl;
if (this->useDP() == kTRUE) {
// print the fit coefficients in one table
coeffPars_.front()->printTableHeading(fout);
for (UInt_t i = 0; i < nSigComp_; i++) {
coeffPars_[i]->printTableRow(fout);
}
fout << "\\hline" << std::endl;
fout << "\\end{tabular}" << std::endl << std::endl;
// print the fit fractions and asymmetries in another
fout << "\\begin{tabular}{|l|c|c|c|c|}" << std::endl;
fout << "\\hline" << std::endl;
fout << "Component & " << negParent_ << " Fit Fraction & " << posParent_ << " Fit Fraction & Fit Fraction Asymmetry & ACP \\\\" << std::endl;
fout << "\\hline" << std::endl;
Double_t negFitFracSum(0.0);
Double_t posFitFracSum(0.0);
for (UInt_t i = 0; i < nSigComp_; i++) {
TString resName = coeffPars_[i]->name();
resName = resName.ReplaceAll("_", "\\_");
Double_t negFitFrac = negFitFrac_[i][i].value();
Double_t posFitFrac = posFitFrac_[i][i].value();
negFitFracSum += negFitFrac;
posFitFracSum += posFitFrac;
Double_t fitFracAsymm = fitFracAsymm_[i].value();
Double_t acp = acp_[i].value();
Double_t acpErr = acp_[i].error();
fout << resName << " & $";
print.printFormat(fout, negFitFrac);
fout << "$ & $";
print.printFormat(fout, posFitFrac);
fout << "$ & $";
print.printFormat(fout, fitFracAsymm);
fout << "$ & $";
print.printFormat(fout, acp);
fout << " \\pm ";
print.printFormat(fout, acpErr);
fout << "$ \\\\" << std::endl;
}
fout << "\\hline" << std::endl;
// Also print out sum of fit fractions
fout << "Fit Fraction Sum & $";
print.printFormat(fout, negFitFracSum);
fout << "$ & $";
print.printFormat(fout, posFitFracSum);
fout << "$ & & \\\\" << std::endl;
fout << "\\hline" << std::endl;
fout << "DP rate & $";
print.printFormat(fout, negDPRate_.value());
fout << "$ & $";
print.printFormat(fout, posDPRate_.value());
fout << "$ & & \\\\" << std::endl;
fout << "$< \\varepsilon > $ & $";
print.printFormat(fout, negMeanEff_.value());
fout << "$ & $";
print.printFormat(fout, posMeanEff_.value());
fout << "$ & & \\\\" << std::endl;
fout << "\\hline" << std::endl;
fout << "\\end{tabular}" << std::endl << std::endl;
}
if (!negSignalPdfs_.empty()) {
fout << "\\begin{tabular}{|l|c|}" << std::endl;
fout << "\\hline" << std::endl;
if (useSCF_ == kTRUE) {
fout << "\\Extra TM Signal PDFs' Parameters: & \\\\" << std::endl;
} else {
fout << "\\Extra Signal PDFs' Parameters: & \\\\" << std::endl;
}
this->printFitParameters(negSignalPdfs_, fout);
if ( tagged_ ) {
this->printFitParameters(posSignalPdfs_, fout);
}
if (useSCF_ == kTRUE && !negScfPdfs_.empty()) {
fout << "\\hline" << std::endl;
fout << "\\Extra SCF Signal PDFs' Parameters: & \\\\" << std::endl;
this->printFitParameters(negScfPdfs_, fout);
if ( tagged_ ) {
this->printFitParameters(posScfPdfs_, fout);
}
}
if (usingBkgnd_ == kTRUE && !negBkgndPdfs_.empty()) {
fout << "\\hline" << std::endl;
fout << "\\Extra Background PDFs' Parameters: & \\\\" << std::endl;
for (LauBkgndPdfsList::const_iterator iter = negBkgndPdfs_.begin(); iter != negBkgndPdfs_.end(); ++iter) {
this->printFitParameters(*iter, fout);
}
if ( tagged_ ) {
for (LauBkgndPdfsList::const_iterator iter = posBkgndPdfs_.begin(); iter != posBkgndPdfs_.end(); ++iter) {
this->printFitParameters(*iter, fout);
}
}
}
fout << "\\hline \n\\end{tabular}" << std::endl << std::endl;
}
}
void LauCPFitModel::checkInitFitParams()
{
// Update the number of signal events to be total-sum(background events)
this->updateSigEvents();
// Check whether we want to have randomised initial fit parameters for the signal model
if (this->useRandomInitFitPars() == kTRUE) {
std::cout << "INFO in LauCPFitModel::checkInitFitParams : Setting random parameters for the signal model" << std::endl;
this->randomiseInitFitPars();
}
}
void LauCPFitModel::randomiseInitFitPars()
{
// Only randomise those parameters that are not fixed!
std::cout << "INFO in LauCPFitModel::randomiseInitFitPars : Randomising the initial fit magnitudes and phases of the components..." << std::endl;
if ( fixParamFileName_.IsNull() && fixParamMap_.empty() ) {
// No params are imported - randomise as normal
for (UInt_t i = 0; i < nSigComp_; i++) {
coeffPars_[i]->randomiseInitValues();
}
} else {
// Only randomise those that are not imported (i.e. not found in allImportedFreeParams_).
// Do this by temporarily fixing all imported parameters, randomising, and then
// freeing the imported parameters again (parameters that were already fixed stay fixed).
// Convoluted, but it beats changing the behaviour of the functions that call
// checkInitFitParams or of the coeffSet itself.
for (auto p : allImportedFreeParams_) { p->fixed(kTRUE); }
for (UInt_t i = 0; i < nSigComp_; i++) {
coeffPars_[i]->randomiseInitValues();
}
for (auto p : allImportedFreeParams_) { p->fixed(kFALSE); }
}
}
std::pair<LauCPFitModel::LauGenInfo,Bool_t> LauCPFitModel::eventsToGenerate()
{
// Determine the number of events to generate for each hypothesis
// If we're smearing then smear each one individually
LauGenInfo nEvtsGen;
// Keep track of whether any yield or asymmetry parameters are blinded
Bool_t blind = kFALSE;
// Signal
Double_t evtWeight(1.0);
Double_t nEvts = signalEvents_->genValue();
if ( nEvts < 0.0 ) {
evtWeight = -1.0;
nEvts = TMath::Abs( nEvts );
}
if ( signalEvents_->blind() ) {
blind = kTRUE;
}
Double_t asym(0.0);
Double_t sigAsym(0.0);
// need to include this as an alternative in case the DP isn't in the model
if ( !this->useDP() || forceAsym_ ) {
sigAsym = signalAsym_->genValue();
if ( signalAsym_->blind() ) {
blind = kTRUE;
}
} else {
Double_t negRate = negSigModel_->getDPNorm();
Double_t posRate = posSigModel_->getDPNorm();
if (negRate+posRate>1e-30) {
sigAsym = (negRate-posRate)/(negRate+posRate);
}
}
asym = sigAsym;
Int_t nPosEvts = static_cast<Int_t>((nEvts/2.0 * (1.0 - asym)) + 0.5);
Int_t nNegEvts = static_cast<Int_t>((nEvts/2.0 * (1.0 + asym)) + 0.5);
if (this->doPoissonSmearing()) {
nNegEvts = LauRandom::randomFun()->Poisson(nNegEvts);
nPosEvts = LauRandom::randomFun()->Poisson(nPosEvts);
}
nEvtsGen[std::make_pair("signal",-1)] = std::make_pair(nNegEvts,evtWeight);
nEvtsGen[std::make_pair("signal",+1)] = std::make_pair(nPosEvts,evtWeight);
// backgrounds
const UInt_t nBkgnds = this->nBkgndClasses();
for ( UInt_t bkgndID(0); bkgndID < nBkgnds; ++bkgndID ) {
const TString& bkgndClass = this->bkgndClassName(bkgndID);
const LauAbsRValue* evtsPar = bkgndEvents_[bkgndID];
const LauAbsRValue* asymPar = bkgndAsym_[bkgndID];
if ( evtsPar->blind() || asymPar->blind() ) {
blind = kTRUE;
}
evtWeight = 1.0;
nEvts = TMath::FloorNint( evtsPar->genValue() );
if ( nEvts < 0 ) {
evtWeight = -1.0;
nEvts = TMath::Abs( nEvts );
}
asym = asymPar->genValue();
nPosEvts = static_cast<Int_t>((nEvts/2.0 * (1.0 - asym)) + 0.5);
nNegEvts = static_cast<Int_t>((nEvts/2.0 * (1.0 + asym)) + 0.5);
if (this->doPoissonSmearing()) {
nNegEvts = LauRandom::randomFun()->Poisson(nNegEvts);
nPosEvts = LauRandom::randomFun()->Poisson(nPosEvts);
}
nEvtsGen[std::make_pair(bkgndClass,-1)] = std::make_pair(nNegEvts,evtWeight);
nEvtsGen[std::make_pair(bkgndClass,+1)] = std::make_pair(nPosEvts,evtWeight);
}
// Print out the information on what we're generating, but only if none of the parameters are blind (otherwise we risk unblinding them!)
if ( !blind ) {
std::cout << "INFO in LauCPFitModel::eventsToGenerate : Generating toy MC with:" << std::endl;
std::cout << " : Signal asymmetry = " << sigAsym << " and number of signal events = " << signalEvents_->genValue() << std::endl;
for ( UInt_t bkgndID(0); bkgndID < nBkgnds; ++bkgndID ) {
const TString& bkgndClass = this->bkgndClassName(bkgndID);
const LauAbsRValue* evtsPar = bkgndEvents_[bkgndID];
const LauAbsRValue* asymPar = bkgndAsym_[bkgndID];
std::cout << " : " << bkgndClass << " asymmetry = " << asymPar->genValue() << " and number of " << bkgndClass << " events = " << evtsPar->genValue() << std::endl;
}
}
return std::make_pair( nEvtsGen, blind );
}
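The rounding used when splitting the total yield by asymmetry can be reproduced in a small sketch (helper name hypothetical; the arithmetic matches the nPosEvts/nNegEvts lines above):

```cpp
#include <cassert>
#include <utility>

// Hypothetical sketch of the charge split in eventsToGenerate():
// with A = (N- - N+)/(N- + N+), the expected yields are
//   N+ = N/2 * (1 - A) and N- = N/2 * (1 + A),
// each rounded to the nearest integer (before any Poisson smearing).
std::pair<int,int> splitByAsymmetry(double nEvts, double asym)
{
	const int nNeg = static_cast<int>((nEvts/2.0 * (1.0 + asym)) + 0.5);
	const int nPos = static_cast<int>((nEvts/2.0 * (1.0 - asym)) + 0.5);
	return { nNeg, nPos };
}
```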
Bool_t LauCPFitModel::genExpt()
{
// Routine to generate toy Monte Carlo events according to the various models we have defined.
// Determine the number of events to generate for each hypothesis
std::pair<LauGenInfo,Bool_t> info = this->eventsToGenerate();
LauGenInfo nEvts = info.first;
const Bool_t blind = info.second;
Bool_t genOK(kTRUE);
Int_t evtNum(0);
const UInt_t nBkgnds = this->nBkgndClasses();
std::vector<TString> bkgndClassNames(nBkgnds);
std::vector<TString> bkgndClassNamesGen(nBkgnds);
for ( UInt_t iBkgnd(0); iBkgnd < nBkgnds; ++iBkgnd ) {
TString name( this->bkgndClassName(iBkgnd) );
bkgndClassNames[iBkgnd] = name;
bkgndClassNamesGen[iBkgnd] = "gen"+name;
}
const Bool_t storeSCFTruthInfo = ( useSCF_ || ( this->enableEmbedding() &&
negSignalTree_ != 0 && negSignalTree_->haveBranch("mcMatch") &&
posSignalTree_ != 0 && posSignalTree_->haveBranch("mcMatch") ) );
// Loop over the hypotheses and generate the requested number of events for each one
for (LauGenInfo::const_iterator iter = nEvts.begin(); iter != nEvts.end(); ++iter) {
const TString& type(iter->first.first);
curEvtCharge_ = iter->first.second;
Double_t evtWeight( iter->second.second );
Int_t nEvtsGen( iter->second.first );
for (Int_t iEvt(0); iEvt<nEvtsGen; ++iEvt) {
this->setGenNtupleDoubleBranchValue( "evtWeight", evtWeight );
this->setGenNtupleDoubleBranchValue( "efficiency", 1.0 );
if (type == "signal") {
this->setGenNtupleIntegerBranchValue("genSig",1);
for ( UInt_t iBkgnd(0); iBkgnd < nBkgnds; ++iBkgnd ) {
this->setGenNtupleIntegerBranchValue( bkgndClassNamesGen[iBkgnd], 0 );
}
genOK = this->generateSignalEvent();
if ( curEvtCharge_ > 0 ){
this->setGenNtupleDoubleBranchValue( "efficiency", posSigModel_->getEvtEff() );
} else {
this->setGenNtupleDoubleBranchValue( "efficiency", negSigModel_->getEvtEff() );
}
} else {
this->setGenNtupleIntegerBranchValue("genSig",0);
if ( storeSCFTruthInfo ) {
this->setGenNtupleIntegerBranchValue("genTMSig",0);
this->setGenNtupleIntegerBranchValue("genSCFSig",0);
}
UInt_t bkgndID(0);
for ( UInt_t iBkgnd(0); iBkgnd < nBkgnds; ++iBkgnd ) {
Int_t gen(0);
if ( bkgndClassNames[iBkgnd] == type ) {
gen = 1;
bkgndID = iBkgnd;
}
this->setGenNtupleIntegerBranchValue( bkgndClassNamesGen[iBkgnd], gen );
}
genOK = this->generateBkgndEvent(bkgndID);
}
if (!genOK) {
// If there was a problem with the generation then break out and return.
// The problem model will have adjusted itself so that all should be OK next time.
break;
}
if (this->useDP() == kTRUE) {
this->setDPBranchValues();
}
// Store the event charge
this->setGenNtupleIntegerBranchValue(tagVarName_,curEvtCharge_);
// Store the event number (within this experiment)
// and then increment it
this->setGenNtupleIntegerBranchValue("iEvtWithinExpt",evtNum);
++evtNum;
this->fillGenNtupleBranches();
if ( !blind && (iEvt%500 == 0) ) {
std::cout << "INFO in LauCPFitModel::genExpt : Generated event number " << iEvt << " out of " << nEvtsGen << " " << type << " events." << std::endl;
}
}
if (!genOK) {
break;
}
}
if (this->useDP() && genOK) {
negSigModel_->checkToyMC(kTRUE,kTRUE);
posSigModel_->checkToyMC(kTRUE,kTRUE);
// Get the fit fractions if they're to be written into the latex table
if (this->writeLatexTable()) {
LauParArray negFitFrac = negSigModel_->getFitFractions();
if (negFitFrac.size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::genExpt : Fit Fraction array of unexpected dimension: " << negFitFrac.size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
for (UInt_t i(0); i<nSigComp_; ++i) {
if (negFitFrac[i].size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::genExpt : Fit Fraction array of unexpected dimension: " << negFitFrac[i].size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
}
LauParArray posFitFrac = posSigModel_->getFitFractions();
if (posFitFrac.size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::genExpt : Fit Fraction array of unexpected dimension: " << posFitFrac.size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
for (UInt_t i(0); i<nSigComp_; ++i) {
if (posFitFrac[i].size() != nSigComp_) {
std::cerr << "ERROR in LauCPFitModel::genExpt : Fit Fraction array of unexpected dimension: " << posFitFrac[i].size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
}
for (UInt_t i(0); i<nSigComp_; ++i) {
for (UInt_t j(i); j<nSigComp_; ++j) {
negFitFrac_[i][j].value(negFitFrac[i][j].value());
posFitFrac_[i][j].value(posFitFrac[i][j].value());
}
}
negMeanEff_.value(negSigModel_->getMeanEff().value());
posMeanEff_.value(posSigModel_->getMeanEff().value());
negDPRate_.value(negSigModel_->getDPRate().value());
posDPRate_.value(posSigModel_->getDPRate().value());
}
}
// If we're reusing embedded events or if the generation is being
// reset then clear the lists of used events
if (reuseSignal_ || !genOK) {
if (negSignalTree_) {
negSignalTree_->clearUsedList();
}
if (posSignalTree_) {
posSignalTree_->clearUsedList();
}
}
for ( UInt_t bkgndID(0); bkgndID < nBkgnds; ++bkgndID ) {
LauEmbeddedData* data = negBkgndTree_[bkgndID];
if (reuseBkgnd_[bkgndID] || !genOK) {
if (data) {
data->clearUsedList();
}
}
}
for ( UInt_t bkgndID(0); bkgndID < nBkgnds; ++bkgndID ) {
LauEmbeddedData* data = posBkgndTree_[bkgndID];
if (reuseBkgnd_[bkgndID] || !genOK) {
if (data) {
data->clearUsedList();
}
}
}
return genOK;
}
Bool_t LauCPFitModel::generateSignalEvent()
{
// Generate signal event
Bool_t genOK(kTRUE);
Bool_t genSCF(kFALSE);
LauIsobarDynamics* model(0);
LauKinematics* kinematics(0);
LauEmbeddedData* embeddedData(0);
LauPdfList* sigPdfs(0);
LauPdfList* scfPdfs(0);
Bool_t doReweighting(kFALSE);
if (curEvtCharge_<0) {
model = negSigModel_;
kinematics = negKinematics_;
sigPdfs = &negSignalPdfs_;
scfPdfs = &negScfPdfs_;
if (this->enableEmbedding()) {
embeddedData = negSignalTree_;
doReweighting = useNegReweighting_;
}
} else {
model = posSigModel_;
kinematics = posKinematics_;
if ( tagged_ ) {
sigPdfs = &posSignalPdfs_;
scfPdfs = &posScfPdfs_;
} else {
sigPdfs = &negSignalPdfs_;
scfPdfs = &negScfPdfs_;
}
if (this->enableEmbedding()) {
embeddedData = posSignalTree_;
doReweighting = usePosReweighting_;
}
}
if (this->useDP()) {
if (embeddedData) {
if (doReweighting) {
// Select a (random) event from the generated data. Then store the
// reconstructed DP co-ords, together with other pdf information,
// as the embedded data.
genOK = embeddedData->getReweightedEvent(model);
} else {
// Just get the information of a (randomly) selected event in the
// embedded data
embeddedData->getEmbeddedEvent(kinematics);
}
genSCF = this->storeSignalMCMatch( embeddedData );
} else {
genOK = model->generate();
if ( genOK && useSCF_ ) {
Double_t frac(0.0);
if ( useSCFHist_ ) {
frac = scfFracHist_->calcEfficiency( kinematics );
} else {
frac = scfFrac_.genValue();
}
if ( frac < LauRandom::randomFun()->Rndm() ) {
this->setGenNtupleIntegerBranchValue("genTMSig",1);
this->setGenNtupleIntegerBranchValue("genSCFSig",0);
genSCF = kFALSE;
} else {
this->setGenNtupleIntegerBranchValue("genTMSig",0);
this->setGenNtupleIntegerBranchValue("genSCFSig",1);
genSCF = kTRUE;
// Optionally smear the DP position
// of the SCF event
if ( scfMap_ != 0 ) {
Double_t xCoord(0.0), yCoord(0.0);
if ( kinematics->squareDP() ) {
xCoord = kinematics->getmPrime();
yCoord = kinematics->getThetaPrime();
} else {
xCoord = kinematics->getm13Sq();
yCoord = kinematics->getm23Sq();
}
// Find the bin number where this event is generated
Int_t binNo = scfMap_->binNumber( xCoord, yCoord );
// Retrieve the migration histogram
TH2* histo = scfMap_->trueHist( binNo );
const LauAbsEffModel * effModel = model->getEffModel();
do {
// Get a random point from the histogram
histo->GetRandom2( xCoord, yCoord );
// Update the kinematics
if ( kinematics->squareDP() ) {
kinematics->updateSqDPKinematics( xCoord, yCoord );
} else {
kinematics->updateKinematics( xCoord, yCoord );
}
} while ( ! effModel->passVeto( kinematics ) );
}
}
}
}
} else {
if (embeddedData) {
embeddedData->getEmbeddedEvent(0);
genSCF = this->storeSignalMCMatch( embeddedData );
} else if ( useSCF_ ) {
Double_t frac = scfFrac_.genValue();
if ( frac < LauRandom::randomFun()->Rndm() ) {
this->setGenNtupleIntegerBranchValue("genTMSig",1);
this->setGenNtupleIntegerBranchValue("genSCFSig",0);
genSCF = kFALSE;
} else {
this->setGenNtupleIntegerBranchValue("genTMSig",0);
this->setGenNtupleIntegerBranchValue("genSCFSig",1);
genSCF = kTRUE;
}
}
}
if (genOK) {
if ( useSCF_ ) {
if ( genSCF ) {
this->generateExtraPdfValues(scfPdfs, embeddedData);
} else {
this->generateExtraPdfValues(sigPdfs, embeddedData);
}
} else {
this->generateExtraPdfValues(sigPdfs, embeddedData);
}
}
// Check for problems with the embedding
if (embeddedData && (embeddedData->nEvents() == embeddedData->nUsedEvents())) {
std::cerr << "WARNING in LauCPFitModel::generateSignalEvent : Source of embedded signal events used up, clearing the list of used events." << std::endl;
embeddedData->clearUsedList();
}
return genOK;
}
Bool_t LauCPFitModel::generateBkgndEvent(UInt_t bkgndID)
{
// Generate Bkgnd event
Bool_t genOK(kTRUE);
LauAbsBkgndDPModel* model(0);
LauEmbeddedData* embeddedData(0);
LauPdfList* extraPdfs(0);
LauKinematics* kinematics(0);
if (curEvtCharge_<0) {
model = negBkgndDPModels_[bkgndID];
if (this->enableEmbedding()) {
embeddedData = negBkgndTree_[bkgndID];
}
extraPdfs = &negBkgndPdfs_[bkgndID];
kinematics = negKinematics_;
} else {
model = posBkgndDPModels_[bkgndID];
if (this->enableEmbedding()) {
embeddedData = posBkgndTree_[bkgndID];
}
if ( tagged_ ) {
extraPdfs = &posBkgndPdfs_[bkgndID];
} else {
extraPdfs = &negBkgndPdfs_[bkgndID];
}
kinematics = posKinematics_;
}
if (this->useDP()) {
if (embeddedData) {
embeddedData->getEmbeddedEvent(kinematics);
} else {
if (model == 0) {
const TString& bkgndClass = this->bkgndClassName(bkgndID);
std::cerr << "ERROR in LauCPFitModel::generateBkgndEvent : Can't find the DP model for background class \"" << bkgndClass << "\"." << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
genOK = model->generate();
}
} else {
if (embeddedData) {
embeddedData->getEmbeddedEvent(0);
}
}
if (genOK) {
this->generateExtraPdfValues(extraPdfs, embeddedData);
}
// Check for problems with the embedding
if (embeddedData && (embeddedData->nEvents() == embeddedData->nUsedEvents())) {
const TString& bkgndClass = this->bkgndClassName(bkgndID);
std::cerr << "WARNING in LauCPFitModel::generateBkgndEvent : Source of embedded " << bkgndClass << " events used up, clearing the list of used events." << std::endl;
embeddedData->clearUsedList();
}
return genOK;
}
void LauCPFitModel::setupGenNtupleBranches()
{
// Setup the required ntuple branches
this->addGenNtupleDoubleBranch("evtWeight");
this->addGenNtupleIntegerBranch("genSig");
this->addGenNtupleDoubleBranch("efficiency");
if ( useSCF_ || ( this->enableEmbedding() &&
negSignalTree_ != 0 && negSignalTree_->haveBranch("mcMatch") &&
posSignalTree_ != 0 && posSignalTree_->haveBranch("mcMatch") ) ) {
this->addGenNtupleIntegerBranch("genTMSig");
this->addGenNtupleIntegerBranch("genSCFSig");
}
const UInt_t nBkgnds = this->nBkgndClasses();
for ( UInt_t iBkgnd(0); iBkgnd < nBkgnds; ++iBkgnd ) {
TString name( this->bkgndClassName(iBkgnd) );
name.Prepend("gen");
this->addGenNtupleIntegerBranch(name);
}
this->addGenNtupleIntegerBranch("charge");
if (this->useDP() == kTRUE) {
this->addGenNtupleDoubleBranch("m12");
this->addGenNtupleDoubleBranch("m23");
this->addGenNtupleDoubleBranch("m13");
this->addGenNtupleDoubleBranch("m12Sq");
this->addGenNtupleDoubleBranch("m23Sq");
this->addGenNtupleDoubleBranch("m13Sq");
this->addGenNtupleDoubleBranch("cosHel12");
this->addGenNtupleDoubleBranch("cosHel23");
this->addGenNtupleDoubleBranch("cosHel13");
if (negKinematics_->squareDP() && posKinematics_->squareDP()) {
this->addGenNtupleDoubleBranch("mPrime");
this->addGenNtupleDoubleBranch("thPrime");
}
}
for (LauPdfList::const_iterator pdf_iter = negSignalPdfs_.begin(); pdf_iter != negSignalPdfs_.end(); ++pdf_iter) {
std::vector<TString> varNames = (*pdf_iter)->varNames();
for ( std::vector<TString>::const_iterator var_iter = varNames.begin(); var_iter != varNames.end(); ++var_iter ) {
if ( (*var_iter) != "m13Sq" && (*var_iter) != "m23Sq" ) {
this->addGenNtupleDoubleBranch( (*var_iter) );
}
}
}
}
void LauCPFitModel::setDPBranchValues()
{
LauKinematics* kinematics(0);
if (curEvtCharge_<0) {
kinematics = negKinematics_;
} else {
kinematics = posKinematics_;
}
// Store all the DP information
this->setGenNtupleDoubleBranchValue("m12", kinematics->getm12());
this->setGenNtupleDoubleBranchValue("m23", kinematics->getm23());
this->setGenNtupleDoubleBranchValue("m13", kinematics->getm13());
this->setGenNtupleDoubleBranchValue("m12Sq", kinematics->getm12Sq());
this->setGenNtupleDoubleBranchValue("m23Sq", kinematics->getm23Sq());
this->setGenNtupleDoubleBranchValue("m13Sq", kinematics->getm13Sq());
this->setGenNtupleDoubleBranchValue("cosHel12", kinematics->getc12());
this->setGenNtupleDoubleBranchValue("cosHel23", kinematics->getc23());
this->setGenNtupleDoubleBranchValue("cosHel13", kinematics->getc13());
if (kinematics->squareDP()) {
this->setGenNtupleDoubleBranchValue("mPrime", kinematics->getmPrime());
this->setGenNtupleDoubleBranchValue("thPrime", kinematics->getThetaPrime());
}
}
void LauCPFitModel::generateExtraPdfValues(LauPdfList* extraPdfs, LauEmbeddedData* embeddedData)
{
LauKinematics* kinematics(0);
if (curEvtCharge_<0) {
kinematics = negKinematics_;
} else {
kinematics = posKinematics_;
}
if (!extraPdfs) {
std::cerr << "ERROR in LauCPFitModel::generateExtraPdfValues : Null pointer to PDF list." << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
if (extraPdfs->empty()) {
//std::cerr << "WARNING in LauCPFitModel::generateExtraPdfValues : PDF list is empty." << std::endl;
return;
}
// Generate from the extra PDFs
for (LauPdfList::iterator pdf_iter = extraPdfs->begin(); pdf_iter != extraPdfs->end(); ++pdf_iter) {
LauFitData genValues;
if (embeddedData) {
genValues = embeddedData->getValues( (*pdf_iter)->varNames() );
} else {
genValues = (*pdf_iter)->generate(kinematics);
}
for ( LauFitData::const_iterator var_iter = genValues.begin(); var_iter != genValues.end(); ++var_iter ) {
TString varName = var_iter->first;
if ( varName != "m13Sq" && varName != "m23Sq" ) {
Double_t value = var_iter->second;
this->setGenNtupleDoubleBranchValue(varName,value);
}
}
}
}
Bool_t LauCPFitModel::storeSignalMCMatch(LauEmbeddedData* embeddedData)
{
// Default to TM
Bool_t genSCF(kFALSE);
Int_t match(1);
// Check that we have a valid pointer and that embedded data has
// the mcMatch branch. If so then get the match value.
if ( embeddedData && embeddedData->haveBranch("mcMatch") ) {
match = TMath::Nint( embeddedData->getValue("mcMatch") );
}
// Set the variables accordingly.
if (match) {
this->setGenNtupleIntegerBranchValue("genTMSig",1);
this->setGenNtupleIntegerBranchValue("genSCFSig",0);
genSCF = kFALSE;
} else {
this->setGenNtupleIntegerBranchValue("genTMSig",0);
this->setGenNtupleIntegerBranchValue("genSCFSig",1);
genSCF = kTRUE;
}
return genSCF;
}
void LauCPFitModel::propagateParUpdates()
{
// Update the signal parameters and then the total normalisation for the signal likelihood
if (this->useDP() == kTRUE) {
this->updateCoeffs();
negSigModel_->updateCoeffs(negCoeffs_);
posSigModel_->updateCoeffs(posCoeffs_);
}
// Update the signal fraction from the background fractions if not doing an extended fit
if ( !this->doEMLFit() && !signalEvents_->fixed() ) {
this->updateSigEvents();
}
}
void LauCPFitModel::updateSigEvents()
{
// The background parameters will have been set from Minuit.
// We need to update the signal events using these.
Double_t nTotEvts = this->eventsPerExpt();
signalEvents_->range(-2.0*nTotEvts,2.0*nTotEvts);
for (LauBkgndYieldList::iterator iter = bkgndEvents_.begin(); iter != bkgndEvents_.end(); ++iter) {
LauAbsRValue* nBkgndEvents = (*iter);
if ( nBkgndEvents->isLValue() ) {
LauParameter* yield = dynamic_cast<LauParameter*>( nBkgndEvents );
yield->range(-2.0*nTotEvts,2.0*nTotEvts);
}
}
if (signalEvents_->fixed()) {
return;
}
// Subtract background events (if any) from signal.
Double_t signalEvents = nTotEvts;
if (usingBkgnd_ == kTRUE) {
for (LauBkgndYieldList::const_iterator iter = bkgndEvents_.begin(); iter != bkgndEvents_.end(); ++iter) {
signalEvents -= (*iter)->value();
}
}
signalEvents_->value(signalEvents);
}
void LauCPFitModel::cacheInputFitVars()
{
// Fill the internal data trees of the signal and background models.
// Note that we store the events of both charges in both the
// negative and the positive models. It's only later, at the stage
// when the likelihood is being calculated, that we separate them.
LauFitDataTree* inputFitData = this->fitData();
// First the Dalitz plot variables (m_ij^2)
if (this->useDP() == kTRUE) {
// need to append SCF smearing bins before caching DP amplitudes
if ( scfMap_ != 0 ) {
this->appendBinCentres( inputFitData );
}
negSigModel_->fillDataTree(*inputFitData);
posSigModel_->fillDataTree(*inputFitData);
if (usingBkgnd_ == kTRUE) {
for (LauBkgndDPModelList::iterator iter = negBkgndDPModels_.begin(); iter != negBkgndDPModels_.end(); ++iter) {
(*iter)->fillDataTree(*inputFitData);
}
for (LauBkgndDPModelList::iterator iter = posBkgndDPModels_.begin(); iter != posBkgndDPModels_.end(); ++iter) {
(*iter)->fillDataTree(*inputFitData);
}
}
}
// ...and then the extra PDFs
this->cacheInfo(negSignalPdfs_, *inputFitData);
this->cacheInfo(negScfPdfs_, *inputFitData);
for (LauBkgndPdfsList::iterator iter = negBkgndPdfs_.begin(); iter != negBkgndPdfs_.end(); ++iter) {
this->cacheInfo((*iter), *inputFitData);
}
if ( tagged_ ) {
this->cacheInfo(posSignalPdfs_, *inputFitData);
this->cacheInfo(posScfPdfs_, *inputFitData);
for (LauBkgndPdfsList::iterator iter = posBkgndPdfs_.begin(); iter != posBkgndPdfs_.end(); ++iter) {
this->cacheInfo((*iter), *inputFitData);
}
}
// the SCF fractions and jacobians
if ( useSCF_ && useSCFHist_ ) {
if ( !inputFitData->haveBranch( "m13Sq" ) || !inputFitData->haveBranch( "m23Sq" ) ) {
std::cerr << "ERROR in LauCPFitModel::cacheInputFitVars : Input data does not contain DP branches and so can't cache the SCF fraction." << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
UInt_t nEvents = inputFitData->nEvents();
recoSCFFracs_.clear();
recoSCFFracs_.reserve( nEvents );
if ( negKinematics_->squareDP() ) {
recoJacobians_.clear();
recoJacobians_.reserve( nEvents );
}
for (UInt_t iEvt = 0; iEvt < nEvents; iEvt++) {
const LauFitData& dataValues = inputFitData->getData(iEvt);
LauFitData::const_iterator m13_iter = dataValues.find("m13Sq");
LauFitData::const_iterator m23_iter = dataValues.find("m23Sq");
negKinematics_->updateKinematics( m13_iter->second, m23_iter->second );
Double_t scfFrac = scfFracHist_->calcEfficiency( negKinematics_ );
recoSCFFracs_.push_back( scfFrac );
if ( negKinematics_->squareDP() ) {
recoJacobians_.push_back( negKinematics_->calcSqDPJacobian() );
}
}
}
// finally cache the event charge
evtCharges_.clear();
if ( tagged_ ) {
if ( !inputFitData->haveBranch( tagVarName_ ) ) {
std::cerr << "ERROR in LauCPFitModel::cacheInputFitVars : Input data does not contain branch \"" << tagVarName_ << "\"." << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
UInt_t nEvents = inputFitData->nEvents();
evtCharges_.reserve( nEvents );
for (UInt_t iEvt = 0; iEvt < nEvents; iEvt++) {
const LauFitData& dataValues = inputFitData->getData(iEvt);
LauFitData::const_iterator iter = dataValues.find( tagVarName_ );
curEvtCharge_ = static_cast<Int_t>( iter->second );
evtCharges_.push_back( curEvtCharge_ );
}
}
}
void LauCPFitModel::appendBinCentres( LauFitDataTree* inputData )
{
// We'll be caching the DP amplitudes and efficiencies at the centres of the true bins.
// To do so, we append fake points to the end of inputData; for each fake entry, its
// entry number minus the total number of real events gives the index of the histogram
// for the corresponding true bin in the LauScfMap object. (This means that the user who
// supplies the LauScfMap object must ensure that it contains the right number of
// histograms, in exactly the right order!)
// Get the x and y co-ordinates of the bin centres
std::vector<Double_t> binCentresXCoords;
std::vector<Double_t> binCentresYCoords;
scfMap_->listBinCentres(binCentresXCoords, binCentresYCoords);
// The SCF histograms may be binned in square Dalitz plot coordinates.
// The dynamics takes conventional Dalitz plot coords, so we may have to convert them back.
Bool_t sqDP = negKinematics_->squareDP();
UInt_t nBins = binCentresXCoords.size();
fakeSCFFracs_.clear();
fakeSCFFracs_.reserve( nBins );
if ( sqDP ) {
fakeJacobians_.clear();
fakeJacobians_.reserve( nBins );
}
for (UInt_t iBin = 0; iBin < nBins; ++iBin) {
if ( sqDP ) {
negKinematics_->updateSqDPKinematics(binCentresXCoords[iBin],binCentresYCoords[iBin]);
binCentresXCoords[iBin] = negKinematics_->getm13Sq();
binCentresYCoords[iBin] = negKinematics_->getm23Sq();
fakeJacobians_.push_back( negKinematics_->calcSqDPJacobian() );
} else {
negKinematics_->updateKinematics(binCentresXCoords[iBin],binCentresYCoords[iBin]);
}
fakeSCFFracs_.push_back( scfFracHist_->calcEfficiency( negKinematics_ ) );
}
// Set up inputFitVars_ object to hold the fake events
inputData->appendFakePoints(binCentresXCoords,binCentresYCoords);
}
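// (Sketch of the indexing convention established here, under the assumption that
// inputData held nDataEvents real events before this call: after appendFakePoints,
// cached entry (nDataEvents + trueBin) corresponds to the centre of true bin
// "trueBin", which is exactly the index used later in getEvtSCFDPLikelihood to look
// up the amplitudes, while fakeSCFFracs_[trueBin] and fakeJacobians_[trueBin] hold
// the matching SCF fraction and Jacobian.)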
Double_t LauCPFitModel::getTotEvtLikelihood(UInt_t iEvt)
{
// Find out whether we have B- or B+
if ( tagged_ ) {
curEvtCharge_ = evtCharges_[iEvt];
// check that the charge is either +1 or -1
if (TMath::Abs(curEvtCharge_)!=1) {
std::cerr << "ERROR in LauCPFitModel::getTotEvtLikelihood : Charge/tag is not an accepted value: " << curEvtCharge_ << std::endl;
if (curEvtCharge_>0) {
curEvtCharge_ = +1;
} else {
curEvtCharge_ = -1;
}
std::cerr << " : Making it: " << curEvtCharge_ << "." << std::endl;
}
}
// Get the DP likelihood for signal and backgrounds
this->getEvtDPLikelihood(iEvt);
// Get the combined extra PDFs likelihood for signal and backgrounds
this->getEvtExtraLikelihoods(iEvt);
// If appropriate, combine the TM and SCF likelihoods
Double_t sigLike = sigDPLike_ * sigExtraLike_;
if ( useSCF_ ) {
Double_t scfFrac(0.0);
if (useSCFHist_) {
scfFrac = recoSCFFracs_[iEvt];
} else {
scfFrac = scfFrac_.unblindValue();
}
sigLike *= (1.0 - scfFrac);
if ( (scfMap_ != 0) && (this->useDP() == kTRUE) ) {
// if we're smearing the SCF DP PDF then the SCF frac
// is already included in the SCF DP likelihood
sigLike += (scfDPLike_ * scfExtraLike_);
} else {
sigLike += (scfFrac * scfDPLike_ * scfExtraLike_);
}
}
// Get the correct event fractions depending on the charge
// Signal asymmetry is built into the DP model... but when the DP
// isn't in the fit we need an explicit parameter
Double_t signalEvents = signalEvents_->unblindValue() * 0.5;
if (this->useDP() == kFALSE) {
signalEvents *= (1.0 - curEvtCharge_ * signalAsym_->unblindValue());
}
// Construct the total event likelihood
Double_t likelihood(0.0);
if (usingBkgnd_) {
likelihood = sigLike*signalEvents;
const UInt_t nBkgnds = this->nBkgndClasses();
for ( UInt_t bkgndID(0); bkgndID < nBkgnds; ++bkgndID ) {
Double_t bkgndEvents = bkgndEvents_[bkgndID]->unblindValue() * 0.5 * (1.0 - curEvtCharge_ * bkgndAsym_[bkgndID]->unblindValue());
likelihood += bkgndEvents*bkgndDPLike_[bkgndID]*bkgndExtraLike_[bkgndID];
}
} else {
likelihood = sigLike*0.5;
}
return likelihood;
}
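// Schematically, the likelihood assembled above for event i with charge q_i is
//
//   L_i = (N_sig/2) * a_sig * [ (1 - f_SCF) * L_TM + f_SCF * L_SCF ]
//       + sum_b (N_b/2) * (1 - q_i * A_b) * L_b^DP * L_b^extra
//
// where L_TM and L_SCF are the products of the DP and extra-PDF likelihoods for
// truth-matched and self-cross-feed signal, a_sig = (1 - q_i * A_sig) only when
// the DP is not in the fit (otherwise the signal asymmetry is built into the DP
// model and a_sig = 1), and the f_SCF factor is already absorbed into L_SCF when
// SCF map smearing is enabled.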
Double_t LauCPFitModel::getEventSum() const
{
Double_t eventSum(0.0);
eventSum += signalEvents_->unblindValue();
if (usingBkgnd_) {
for (LauBkgndYieldList::const_iterator iter = bkgndEvents_.begin(); iter != bkgndEvents_.end(); ++iter) {
eventSum += (*iter)->unblindValue();
}
}
return eventSum;
}
void LauCPFitModel::getEvtDPLikelihood(UInt_t iEvt)
{
// Function to return the signal and background likelihoods for the
// Dalitz plot for the given event iEvt.
if ( ! this->useDP() ) {
// There's always going to be a term in the likelihood for the
// signal, so we'd better not zero it.
sigDPLike_ = 1.0;
scfDPLike_ = 1.0;
const UInt_t nBkgnds = this->nBkgndClasses();
for ( UInt_t bkgndID(0); bkgndID < nBkgnds; ++bkgndID ) {
if (usingBkgnd_ == kTRUE) {
bkgndDPLike_[bkgndID] = 1.0;
} else {
bkgndDPLike_[bkgndID] = 0.0;
}
}
return;
}
const UInt_t nBkgnds = this->nBkgndClasses();
if ( tagged_ ) {
if (curEvtCharge_==+1) {
posSigModel_->calcLikelihoodInfo(iEvt);
sigDPLike_ = posSigModel_->getEvtIntensity();
for ( UInt_t bkgndID(0); bkgndID < nBkgnds; ++bkgndID ) {
if (usingBkgnd_ == kTRUE) {
bkgndDPLike_[bkgndID] = posBkgndDPModels_[bkgndID]->getLikelihood(iEvt);
} else {
bkgndDPLike_[bkgndID] = 0.0;
}
}
} else {
negSigModel_->calcLikelihoodInfo(iEvt);
sigDPLike_ = negSigModel_->getEvtIntensity();
for ( UInt_t bkgndID(0); bkgndID < nBkgnds; ++bkgndID ) {
if (usingBkgnd_ == kTRUE) {
bkgndDPLike_[bkgndID] = negBkgndDPModels_[bkgndID]->getLikelihood(iEvt);
} else {
bkgndDPLike_[bkgndID] = 0.0;
}
}
}
} else {
posSigModel_->calcLikelihoodInfo(iEvt);
negSigModel_->calcLikelihoodInfo(iEvt);
sigDPLike_ = 0.5 * ( posSigModel_->getEvtIntensity() + negSigModel_->getEvtIntensity() );
for ( UInt_t bkgndID(0); bkgndID < nBkgnds; ++bkgndID ) {
if (usingBkgnd_ == kTRUE) {
bkgndDPLike_[bkgndID] = 0.5 * ( posBkgndDPModels_[bkgndID]->getLikelihood(iEvt) +
negBkgndDPModels_[bkgndID]->getLikelihood(iEvt) );
} else {
bkgndDPLike_[bkgndID] = 0.0;
}
}
}
if ( useSCF_ == kTRUE ) {
if ( scfMap_ == 0 ) {
// we're not smearing the SCF DP position
// so the likelihood is the same as the TM
scfDPLike_ = sigDPLike_;
} else {
// calculate the smeared SCF DP likelihood
scfDPLike_ = this->getEvtSCFDPLikelihood(iEvt);
}
}
// Calculate the signal normalisation
// NB the 2.0 is there so that the 0.5 factor is applied to
// signal and background in the same place; otherwise you get
// normalisation problems when you switch off the DP in the fit
Double_t norm = negSigModel_->getDPNorm() + posSigModel_->getDPNorm();
sigDPLike_ *= 2.0/norm;
scfDPLike_ *= 2.0/norm;
}
Double_t LauCPFitModel::getEvtSCFDPLikelihood(UInt_t iEvt)
{
Double_t scfDPLike(0.0);
Double_t recoJacobian(1.0);
Double_t xCoord(0.0);
Double_t yCoord(0.0);
Bool_t squareDP = negKinematics_->squareDP();
if ( squareDP ) {
xCoord = negSigModel_->getEvtmPrime();
yCoord = negSigModel_->getEvtthPrime();
recoJacobian = recoJacobians_[iEvt];
} else {
xCoord = negSigModel_->getEvtm13Sq();
yCoord = negSigModel_->getEvtm23Sq();
}
// Find the bin that our reco event falls in
Int_t recoBin = scfMap_->binNumber( xCoord, yCoord );
// Find out which true bins contribute to the given reco bin
const std::vector<Int_t>* trueBins = scfMap_->trueBins(recoBin);
const Int_t nDataEvents = this->eventsPerExpt();
// Loop over the true bins
for (std::vector<Int_t>::const_iterator iter = trueBins->begin(); iter != trueBins->end(); ++iter)
{
Int_t trueBin = (*iter);
// prob of a true event in the given true bin migrating to the reco bin
Double_t pRecoGivenTrue = scfMap_->prob( recoBin, trueBin );
Double_t pTrue(0.0);
// We've cached the DP amplitudes and the efficiency for the
// true bin centres, just after the data points
if ( tagged_ ) {
LauIsobarDynamics* sigModel(0);
if (curEvtCharge_<0) {
sigModel = negSigModel_;
} else {
sigModel = posSigModel_;
}
sigModel->calcLikelihoodInfo( nDataEvents + trueBin );
pTrue = sigModel->getEvtIntensity();
} else {
posSigModel_->calcLikelihoodInfo( nDataEvents + trueBin );
negSigModel_->calcLikelihoodInfo( nDataEvents + trueBin );
pTrue = 0.5 * ( posSigModel_->getEvtIntensity() + negSigModel_->getEvtIntensity() );
}
// Get the cached SCF fraction (and jacobian if we're using the square DP)
Double_t scfFraction = fakeSCFFracs_[ trueBin ];
Double_t jacobian(1.0);
if ( squareDP ) {
jacobian = fakeJacobians_[ trueBin ];
}
scfDPLike += pTrue * jacobian * scfFraction * pRecoGivenTrue;
}
// Divide by the reco jacobian
scfDPLike /= recoJacobian;
return scfDPLike;
}
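// Schematically, the sum above computes the smeared SCF likelihood
//
//   L_SCF(reco) = [ sum_true P(reco|true) * f_SCF(true) * |A(true)|^2 * J(true) ] / J(reco)
//
// where P(reco|true) is the migration probability from the LauScfMap, f_SCF(true)
// and |A(true)|^2 are evaluated at the true-bin centre (|A|^2 being the average of
// the B+ and B- intensities in the untagged case), and the Jacobians J enter only
// when working in the square Dalitz plot.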
void LauCPFitModel::getEvtExtraLikelihoods(UInt_t iEvt)
{
// Function to return the signal and background likelihoods for the
// extra variables for the given event iEvt.
sigExtraLike_ = 1.0;
const UInt_t nBkgnds = this->nBkgndClasses();
if ( ! tagged_ || curEvtCharge_ < 0 ) {
sigExtraLike_ = this->prodPdfValue( negSignalPdfs_, iEvt );
if (useSCF_) {
scfExtraLike_ = this->prodPdfValue( negScfPdfs_, iEvt );
}
for ( UInt_t bkgndID(0); bkgndID < nBkgnds; ++bkgndID ) {
if (usingBkgnd_) {
bkgndExtraLike_[bkgndID] = this->prodPdfValue( negBkgndPdfs_[bkgndID], iEvt );
} else {
bkgndExtraLike_[bkgndID] = 0.0;
}
}
} else {
sigExtraLike_ = this->prodPdfValue( posSignalPdfs_, iEvt );
if (useSCF_) {
scfExtraLike_ = this->prodPdfValue( posScfPdfs_, iEvt );
}
for ( UInt_t bkgndID(0); bkgndID < nBkgnds; ++bkgndID ) {
if (usingBkgnd_) {
bkgndExtraLike_[bkgndID] = this->prodPdfValue( posBkgndPdfs_[bkgndID], iEvt );
} else {
bkgndExtraLike_[bkgndID] = 0.0;
}
}
}
}
void LauCPFitModel::updateCoeffs()
{
negCoeffs_.clear(); posCoeffs_.clear();
negCoeffs_.reserve(nSigComp_); posCoeffs_.reserve(nSigComp_);
for (UInt_t i = 0; i < nSigComp_; i++) {
negCoeffs_.push_back(coeffPars_[i]->antiparticleCoeff());
posCoeffs_.push_back(coeffPars_[i]->particleCoeff());
}
}
void LauCPFitModel::setupSPlotNtupleBranches()
{
// add branches for storing the experiment number and the number of
// the event within the current experiment
this->addSPlotNtupleIntegerBranch("iExpt");
this->addSPlotNtupleIntegerBranch("iEvtWithinExpt");
// Store the efficiency of the event (for inclusive BF calculations).
if (this->storeDPEff()) {
this->addSPlotNtupleDoubleBranch("efficiency");
if ( negSigModel_->usingScfModel() && posSigModel_->usingScfModel() ) {
this->addSPlotNtupleDoubleBranch("scffraction");
}
}
// Store the total event likelihood for each species.
if (useSCF_) {
this->addSPlotNtupleDoubleBranch("sigTMTotalLike");
this->addSPlotNtupleDoubleBranch("sigSCFTotalLike");
this->addSPlotNtupleDoubleBranch("sigSCFFrac");
} else {
this->addSPlotNtupleDoubleBranch("sigTotalLike");
}
if (usingBkgnd_) {
const UInt_t nBkgnds = this->nBkgndClasses();
for ( UInt_t iBkgnd(0); iBkgnd < nBkgnds; ++iBkgnd ) {
TString name( this->bkgndClassName(iBkgnd) );
name += "TotalLike";
this->addSPlotNtupleDoubleBranch(name);
}
}
// Store the DP likelihoods
if (this->useDP()) {
if (useSCF_) {
this->addSPlotNtupleDoubleBranch("sigTMDPLike");
this->addSPlotNtupleDoubleBranch("sigSCFDPLike");
} else {
this->addSPlotNtupleDoubleBranch("sigDPLike");
}
if (usingBkgnd_) {
const UInt_t nBkgnds = this->nBkgndClasses();
for ( UInt_t iBkgnd(0); iBkgnd < nBkgnds; ++iBkgnd ) {
TString name( this->bkgndClassName(iBkgnd) );
name += "DPLike";
this->addSPlotNtupleDoubleBranch(name);
}
}
}
// Store the likelihoods for each extra PDF
if (useSCF_) {
this->addSPlotNtupleBranches(&negSignalPdfs_, "sigTM");
this->addSPlotNtupleBranches(&negScfPdfs_, "sigSCF");
} else {
this->addSPlotNtupleBranches(&negSignalPdfs_, "sig");
}
if (usingBkgnd_) {
const UInt_t nBkgnds = this->nBkgndClasses();
for ( UInt_t iBkgnd(0); iBkgnd < nBkgnds; ++iBkgnd ) {
const TString& bkgndClass = this->bkgndClassName(iBkgnd);
const LauPdfList* pdfList = &(negBkgndPdfs_[iBkgnd]);
this->addSPlotNtupleBranches(pdfList, bkgndClass);
}
}
}
void LauCPFitModel::addSPlotNtupleBranches(const LauPdfList* extraPdfs, const TString& prefix)
{
if (extraPdfs) {
// Loop through each of the PDFs
for (LauPdfList::const_iterator pdf_iter = extraPdfs->begin(); pdf_iter != extraPdfs->end(); ++pdf_iter) {
// Count the number of input variables that are not
// DP variables (used in the case where there is DP
// dependence for e.g. MVA)
UInt_t nVars(0);
std::vector<TString> varNames = (*pdf_iter)->varNames();
for ( std::vector<TString>::const_iterator var_iter = varNames.begin(); var_iter != varNames.end(); ++var_iter ) {
if ( (*var_iter) != "m13Sq" && (*var_iter) != "m23Sq" ) {
++nVars;
}
}
if ( nVars == 1 ) {
// If the PDF only has one variable then
// simply add one branch for that variable
TString varName = (*pdf_iter)->varName();
TString name(prefix);
name += varName;
name += "Like";
this->addSPlotNtupleDoubleBranch(name);
} else if ( nVars == 2 ) {
// If the PDF has two variables then we
// need a branch for them both together and
// branches for each
TString allVars("");
for ( std::vector<TString>::const_iterator var_iter = varNames.begin(); var_iter != varNames.end(); ++var_iter ) {
allVars += (*var_iter);
TString name(prefix);
name += (*var_iter);
name += "Like";
this->addSPlotNtupleDoubleBranch(name);
}
TString name(prefix);
name += allVars;
name += "Like";
this->addSPlotNtupleDoubleBranch(name);
} else {
std::cerr << "WARNING in LauCPFitModel::addSPlotNtupleBranches : Can't yet deal with 3D PDFs." << std::endl;
}
}
}
}
Double_t LauCPFitModel::setSPlotNtupleBranchValues(LauPdfList* extraPdfs, const TString& prefix, UInt_t iEvt)
{
// Store the PDF value for each variable in the list
Double_t totalLike(1.0);
Double_t extraLike(0.0);
if (extraPdfs) {
for (LauPdfList::iterator pdf_iter = extraPdfs->begin(); pdf_iter != extraPdfs->end(); ++pdf_iter) {
// calculate the likelihood for this event
(*pdf_iter)->calcLikelihoodInfo(iEvt);
extraLike = (*pdf_iter)->getLikelihood();
totalLike *= extraLike;
// Count the number of input variables that are not
// DP variables (used in the case where there is DP
// dependence for e.g. MVA)
UInt_t nVars(0);
std::vector<TString> varNames = (*pdf_iter)->varNames();
for ( std::vector<TString>::const_iterator var_iter = varNames.begin(); var_iter != varNames.end(); ++var_iter ) {
if ( (*var_iter) != "m13Sq" && (*var_iter) != "m23Sq" ) {
++nVars;
}
}
if ( nVars == 1 ) {
// If the PDF only has one variable then
// simply store the value for that variable
TString varName = (*pdf_iter)->varName();
TString name(prefix);
name += varName;
name += "Like";
this->setSPlotNtupleDoubleBranchValue(name, extraLike);
} else if ( nVars == 2 ) {
// If the PDF has two variables then we
// store the value for them both together
// and for each on their own
TString allVars("");
for ( std::vector<TString>::const_iterator var_iter = varNames.begin(); var_iter != varNames.end(); ++var_iter ) {
allVars += (*var_iter);
TString name(prefix);
name += (*var_iter);
name += "Like";
Double_t indivLike = (*pdf_iter)->getLikelihood( (*var_iter) );
this->setSPlotNtupleDoubleBranchValue(name, indivLike);
}
TString name(prefix);
name += allVars;
name += "Like";
this->setSPlotNtupleDoubleBranchValue(name, extraLike);
} else {
std::cerr << "WARNING in LauCPFitModel::setSPlotNtupleBranchValues : Can't yet deal with 3D PDFs." << std::endl;
}
}
}
return totalLike;
}
LauSPlot::NameSet LauCPFitModel::variableNames() const
{
LauSPlot::NameSet nameSet;
if (this->useDP()) {
nameSet.insert("DP");
}
// Loop through all the signal PDFs
for (LauPdfList::const_iterator pdf_iter = negSignalPdfs_.begin(); pdf_iter != negSignalPdfs_.end(); ++pdf_iter) {
// Loop over the variables involved in each PDF
std::vector<TString> varNames = (*pdf_iter)->varNames();
for ( std::vector<TString>::const_iterator var_iter = varNames.begin(); var_iter != varNames.end(); ++var_iter ) {
// If they are not DP coordinates then add them
if ( (*var_iter) != "m13Sq" && (*var_iter) != "m23Sq" ) {
nameSet.insert( (*var_iter) );
}
}
}
return nameSet;
}
LauSPlot::NumbMap LauCPFitModel::freeSpeciesNames() const
{
LauSPlot::NumbMap numbMap;
if (!signalEvents_->fixed() && this->doEMLFit()) {
numbMap["sig"] = signalEvents_->genValue();
}
if ( usingBkgnd_ ) {
const UInt_t nBkgnds = this->nBkgndClasses();
for ( UInt_t iBkgnd(0); iBkgnd < nBkgnds; ++iBkgnd ) {
const TString& bkgndClass = this->bkgndClassName(iBkgnd);
const LauAbsRValue* par = bkgndEvents_[iBkgnd];
if (!par->fixed()) {
numbMap[bkgndClass] = par->genValue();
if ( ! par->isLValue() ) {
std::cerr << "WARNING in LauCPFitModel::freeSpeciesNames : \"" << par->name() << "\" is a LauFormulaPar, which implies it is perhaps not entirely free to float in the fit, so the sWeight calculation may not be reliable" << std::endl;
}
}
}
}
return numbMap;
}
LauSPlot::NumbMap LauCPFitModel::fixdSpeciesNames() const
{
LauSPlot::NumbMap numbMap;
if (signalEvents_->fixed() && this->doEMLFit()) {
numbMap["sig"] = signalEvents_->genValue();
}
if ( usingBkgnd_ ) {
const UInt_t nBkgnds = this->nBkgndClasses();
for ( UInt_t iBkgnd(0); iBkgnd < nBkgnds; ++iBkgnd ) {
const TString& bkgndClass = this->bkgndClassName(iBkgnd);
const LauAbsRValue* par = bkgndEvents_[iBkgnd];
if (par->fixed()) {
numbMap[bkgndClass] = par->genValue();
}
}
}
return numbMap;
}
LauSPlot::TwoDMap LauCPFitModel::twodimPDFs() const
{
// This makes the assumption that the forms of the positive and
// negative PDFs are the same, which seems a reasonable assumption
LauSPlot::TwoDMap twodimMap;
for (LauPdfList::const_iterator pdf_iter = negSignalPdfs_.begin(); pdf_iter != negSignalPdfs_.end(); ++pdf_iter) {
// Count the number of input variables that are not DP variables
UInt_t nVars(0);
std::vector<TString> varNames = (*pdf_iter)->varNames();
for ( std::vector<TString>::const_iterator var_iter = varNames.begin(); var_iter != varNames.end(); ++var_iter ) {
if ( (*var_iter) != "m13Sq" && (*var_iter) != "m23Sq" ) {
++nVars;
}
}
if ( nVars == 2 ) {
if (useSCF_) {
twodimMap.insert( std::make_pair( "sigTM", std::make_pair( varNames[0], varNames[1] ) ) );
} else {
twodimMap.insert( std::make_pair( "sig", std::make_pair( varNames[0], varNames[1] ) ) );
}
}
}
if ( useSCF_ ) {
for (LauPdfList::const_iterator pdf_iter = negScfPdfs_.begin(); pdf_iter != negScfPdfs_.end(); ++pdf_iter) {
// Count the number of input variables that are not DP variables
UInt_t nVars(0);
std::vector<TString> varNames = (*pdf_iter)->varNames();
for ( std::vector<TString>::const_iterator var_iter = varNames.begin(); var_iter != varNames.end(); ++var_iter ) {
if ( (*var_iter) != "m13Sq" && (*var_iter) != "m23Sq" ) {
++nVars;
}
}
if ( nVars == 2 ) {
twodimMap.insert( std::make_pair( "sigSCF", std::make_pair( varNames[0], varNames[1] ) ) );
}
}
}
if (usingBkgnd_) {
const UInt_t nBkgnds = this->nBkgndClasses();
for ( UInt_t iBkgnd(0); iBkgnd < nBkgnds; ++iBkgnd ) {
const TString& bkgndClass = this->bkgndClassName(iBkgnd);
const LauPdfList& pdfList = negBkgndPdfs_[iBkgnd];
for (LauPdfList::const_iterator pdf_iter = pdfList.begin(); pdf_iter != pdfList.end(); ++pdf_iter) {
// Count the number of input variables that are not DP variables
UInt_t nVars(0);
std::vector<TString> varNames = (*pdf_iter)->varNames();
for ( std::vector<TString>::const_iterator var_iter = varNames.begin(); var_iter != varNames.end(); ++var_iter ) {
if ( (*var_iter) != "m13Sq" && (*var_iter) != "m23Sq" ) {
++nVars;
}
}
if ( nVars == 2 ) {
twodimMap.insert( std::make_pair( bkgndClass, std::make_pair( varNames[0], varNames[1] ) ) );
}
}
}
}
return twodimMap;
}
void LauCPFitModel::storePerEvtLlhds()
{
std::cout << "INFO in LauCPFitModel::storePerEvtLlhds : Storing per-event likelihood values..." << std::endl;
// if we've not been using the DP model then we need to cache all
// the info here so that we can get the efficiency from it
LauFitDataTree* inputFitData = this->fitData();
if (!this->useDP() && this->storeDPEff()) {
negSigModel_->initialise(negCoeffs_);
posSigModel_->initialise(posCoeffs_);
negSigModel_->fillDataTree(*inputFitData);
posSigModel_->fillDataTree(*inputFitData);
}
UInt_t evtsPerExpt(this->eventsPerExpt());
LauIsobarDynamics* sigModel(0);
LauPdfList* sigPdfs(0);
LauPdfList* scfPdfs(0);
LauBkgndPdfsList* bkgndPdfs(0);
for (UInt_t iEvt = 0; iEvt < evtsPerExpt; ++iEvt) {
this->setSPlotNtupleIntegerBranchValue("iExpt",this->iExpt());
this->setSPlotNtupleIntegerBranchValue("iEvtWithinExpt",iEvt);
// Find out whether we have B- or B+
if ( tagged_ ) {
const LauFitData& dataValues = inputFitData->getData(iEvt);
LauFitData::const_iterator iter = dataValues.find("charge");
curEvtCharge_ = static_cast<Int_t>(iter->second);
if (curEvtCharge_==+1) {
sigModel = posSigModel_;
sigPdfs = &posSignalPdfs_;
scfPdfs = &posScfPdfs_;
bkgndPdfs = &posBkgndPdfs_;
} else {
sigModel = negSigModel_;
sigPdfs = &negSignalPdfs_;
scfPdfs = &negScfPdfs_;
bkgndPdfs = &negBkgndPdfs_;
}
} else {
sigPdfs = &negSignalPdfs_;
scfPdfs = &negScfPdfs_;
bkgndPdfs = &negBkgndPdfs_;
}
// the DP information
this->getEvtDPLikelihood(iEvt);
if (this->storeDPEff()) {
if (!this->useDP()) {
posSigModel_->calcLikelihoodInfo(iEvt);
negSigModel_->calcLikelihoodInfo(iEvt);
}
if ( tagged_ ) {
this->setSPlotNtupleDoubleBranchValue("efficiency",sigModel->getEvtEff());
if ( negSigModel_->usingScfModel() && posSigModel_->usingScfModel() ) {
this->setSPlotNtupleDoubleBranchValue("scffraction",sigModel->getEvtScfFraction());
}
} else {
this->setSPlotNtupleDoubleBranchValue("efficiency",0.5*(posSigModel_->getEvtEff() + negSigModel_->getEvtEff()) );
if ( negSigModel_->usingScfModel() && posSigModel_->usingScfModel() ) {
this->setSPlotNtupleDoubleBranchValue("scffraction",0.5*(posSigModel_->getEvtScfFraction() + negSigModel_->getEvtScfFraction()));
}
}
}
if (this->useDP()) {
sigTotalLike_ = sigDPLike_;
if (useSCF_) {
scfTotalLike_ = scfDPLike_;
this->setSPlotNtupleDoubleBranchValue("sigTMDPLike",sigDPLike_);
this->setSPlotNtupleDoubleBranchValue("sigSCFDPLike",scfDPLike_);
} else {
this->setSPlotNtupleDoubleBranchValue("sigDPLike",sigDPLike_);
}
if (usingBkgnd_) {
const UInt_t nBkgnds = this->nBkgndClasses();
for ( UInt_t iBkgnd(0); iBkgnd < nBkgnds; ++iBkgnd ) {
TString name = this->bkgndClassName(iBkgnd);
name += "DPLike";
this->setSPlotNtupleDoubleBranchValue(name,bkgndDPLike_[iBkgnd]);
}
}
} else {
sigTotalLike_ = 1.0;
if (useSCF_) {
scfTotalLike_ = 1.0;
}
if (usingBkgnd_) {
const UInt_t nBkgnds = this->nBkgndClasses();
for ( UInt_t iBkgnd(0); iBkgnd < nBkgnds; ++iBkgnd ) {
bkgndTotalLike_[iBkgnd] = 1.0;
}
}
}
// the signal PDF values
if ( useSCF_ ) {
sigTotalLike_ *= this->setSPlotNtupleBranchValues(sigPdfs, "sigTM", iEvt);
scfTotalLike_ *= this->setSPlotNtupleBranchValues(scfPdfs, "sigSCF", iEvt);
} else {
sigTotalLike_ *= this->setSPlotNtupleBranchValues(sigPdfs, "sig", iEvt);
}
// the background PDF values
if (usingBkgnd_) {
const UInt_t nBkgnds = this->nBkgndClasses();
for ( UInt_t iBkgnd(0); iBkgnd < nBkgnds; ++iBkgnd ) {
const TString& bkgndClass = this->bkgndClassName(iBkgnd);
LauPdfList& pdfs = (*bkgndPdfs)[iBkgnd];
bkgndTotalLike_[iBkgnd] *= this->setSPlotNtupleBranchValues(&(pdfs), bkgndClass, iEvt);
}
}
// the total likelihoods
if (useSCF_) {
Double_t scfFrac(0.0);
if ( useSCFHist_ ) {
scfFrac = recoSCFFracs_[iEvt];
} else {
scfFrac = scfFrac_.unblindValue();
}
this->setSPlotNtupleDoubleBranchValue("sigSCFFrac",scfFrac);
sigTotalLike_ *= ( 1.0 - scfFrac );
if ( scfMap_ == 0 ) {
scfTotalLike_ *= scfFrac;
}
this->setSPlotNtupleDoubleBranchValue("sigTMTotalLike",sigTotalLike_);
this->setSPlotNtupleDoubleBranchValue("sigSCFTotalLike",scfTotalLike_);
} else {
this->setSPlotNtupleDoubleBranchValue("sigTotalLike",sigTotalLike_);
}
if (usingBkgnd_) {
const UInt_t nBkgnds = this->nBkgndClasses();
for ( UInt_t iBkgnd(0); iBkgnd < nBkgnds; ++iBkgnd ) {
TString name = this->bkgndClassName(iBkgnd);
name += "TotalLike";
this->setSPlotNtupleDoubleBranchValue(name,bkgndTotalLike_[iBkgnd]);
}
}
// fill the tree
this->fillSPlotNtupleBranches();
}
std::cout << "INFO in LauCPFitModel::storePerEvtLlhds : Finished storing per-event likelihood values." << std::endl;
}
void LauCPFitModel::embedNegSignal(const TString& fileName, const TString& treeName,
Bool_t reuseEventsWithinEnsemble, Bool_t reuseEventsWithinExperiment,
Bool_t useReweighting)
{
if (negSignalTree_) {
std::cerr << "ERROR in LauCPFitModel::embedNegSignal : Already embedding signal from a file." << std::endl;
return;
}
negSignalTree_ = new LauEmbeddedData(fileName,treeName,reuseEventsWithinExperiment);
Bool_t dataOK = negSignalTree_->findBranches();
if (!dataOK) {
delete negSignalTree_; negSignalTree_ = 0;
std::cerr << "ERROR in LauCPFitModel::embedNegSignal : Problem creating data tree for embedding." << std::endl;
return;
}
reuseSignal_ = reuseEventsWithinEnsemble;
useNegReweighting_ = useReweighting;
if (this->enableEmbedding() == kFALSE) {this->enableEmbedding(kTRUE);}
}
void LauCPFitModel::embedNegBkgnd(const TString& bkgndClass, const TString& fileName, const TString& treeName,
Bool_t reuseEventsWithinEnsemble, Bool_t reuseEventsWithinExperiment)
{
if ( ! this->validBkgndClass( bkgndClass ) ) {
std::cerr << "ERROR in LauCPFitModel::embedNegBkgnd : Invalid background class \"" << bkgndClass << "\"." << std::endl;
std::cerr << " : Background class names must be provided in \"setBkgndClassNames\" before any other background-related actions can be performed." << std::endl;
return;
}
UInt_t bkgndID = this->bkgndClassID( bkgndClass );
if (negBkgndTree_[bkgndID]) {
std::cerr << "ERROR in LauCPFitModel::embedNegBkgnd : Already embedding background from a file." << std::endl;
return;
}
negBkgndTree_[bkgndID] = new LauEmbeddedData(fileName,treeName,reuseEventsWithinExperiment);
Bool_t dataOK = negBkgndTree_[bkgndID]->findBranches();
if (!dataOK) {
delete negBkgndTree_[bkgndID]; negBkgndTree_[bkgndID] = 0;
std::cerr << "ERROR in LauCPFitModel::embedNegBkgnd : Problem creating data tree for embedding." << std::endl;
return;
}
reuseBkgnd_[bkgndID] = reuseEventsWithinEnsemble;
if (this->enableEmbedding() == kFALSE) {this->enableEmbedding(kTRUE);}
}
void LauCPFitModel::embedPosSignal(const TString& fileName, const TString& treeName,
Bool_t reuseEventsWithinEnsemble, Bool_t reuseEventsWithinExperiment,
Bool_t useReweighting)
{
if (posSignalTree_) {
std::cerr << "ERROR in LauCPFitModel::embedPosSignal : Already embedding signal from a file." << std::endl;
return;
}
posSignalTree_ = new LauEmbeddedData(fileName,treeName,reuseEventsWithinExperiment);
Bool_t dataOK = posSignalTree_->findBranches();
if (!dataOK) {
delete posSignalTree_; posSignalTree_ = 0;
std::cerr << "ERROR in LauCPFitModel::embedPosSignal : Problem creating data tree for embedding." << std::endl;
return;
}
reuseSignal_ = reuseEventsWithinEnsemble;
usePosReweighting_ = useReweighting;
if (this->enableEmbedding() == kFALSE) {this->enableEmbedding(kTRUE);}
}
void LauCPFitModel::embedPosBkgnd(const TString& bkgndClass, const TString& fileName, const TString& treeName,
Bool_t reuseEventsWithinEnsemble, Bool_t reuseEventsWithinExperiment)
{
if ( ! this->validBkgndClass( bkgndClass ) ) {
std::cerr << "ERROR in LauCPFitModel::embedPosBkgnd : Invalid background class \"" << bkgndClass << "\"." << std::endl;
std::cerr << " : Background class names must be provided in \"setBkgndClassNames\" before any other background-related actions can be performed." << std::endl;
return;
}
UInt_t bkgndID = this->bkgndClassID( bkgndClass );
if (posBkgndTree_[bkgndID]) {
std::cerr << "ERROR in LauCPFitModel::embedPosBkgnd : Already embedding background from a file." << std::endl;
return;
}
posBkgndTree_[bkgndID] = new LauEmbeddedData(fileName,treeName,reuseEventsWithinExperiment);
Bool_t dataOK = posBkgndTree_[bkgndID]->findBranches();
if (!dataOK) {
delete posBkgndTree_[bkgndID]; posBkgndTree_[bkgndID] = 0;
std::cerr << "ERROR in LauCPFitModel::embedPosBkgnd : Problem creating data tree for embedding." << std::endl;
return;
}
reuseBkgnd_[bkgndID] = reuseEventsWithinEnsemble;
if (this->enableEmbedding() == kFALSE) {this->enableEmbedding(kTRUE);}
}
void LauCPFitModel::weightEvents( const TString& dataFileName, const TString& dataTreeName )
{
// Routine to provide weights for events that are uniformly distributed
// in the DP (or square DP) so as to reproduce the given DP model
if ( posKinematics_->squareDP() ) {
std::cout << "INFO in LauCPFitModel::weightEvents : will create weights assuming events were generated flat in the square DP" << std::endl;
} else {
std::cout << "INFO in LauCPFitModel::weightEvents : will create weights assuming events were generated flat in phase space" << std::endl;
}
// This reads in the given dataFile and creates an input
// fit data tree that stores them for all events and experiments.
Bool_t dataOK = this->verifyFitData(dataFileName,dataTreeName);
if (!dataOK) {
std::cerr << "ERROR in LauCPFitModel::weightEvents : Problem caching the data." << std::endl;
return;
}
LauFitDataTree* inputFitData = this->fitData();
if ( ! inputFitData->haveBranch( "m13Sq_MC" ) || ! inputFitData->haveBranch( "m23Sq_MC" ) ) {
std::cerr << "WARNING in LauCPFitModel::weightEvents : Cannot find MC truth DP coordinate branches in supplied data, aborting." << std::endl;
return;
}
if ( ! inputFitData->haveBranch( "charge" ) ) {
std::cerr << "WARNING in LauCPFitModel::weightEvents : Cannot find branch specifying event charge in supplied data, aborting." << std::endl;
return;
}
// Create the ntuple to hold the DP weights
TString weightsFileName( dataFileName );
Ssiz_t index = weightsFileName.Last('.');
weightsFileName.Insert( index, "_DPweights" );
LauGenNtuple * weightsTuple = new LauGenNtuple( weightsFileName, dataTreeName );
weightsTuple->addIntegerBranch("iExpt");
weightsTuple->addIntegerBranch("iEvtWithinExpt");
weightsTuple->addDoubleBranch("dpModelWeight");
UInt_t iExpmt = this->iExpt();
UInt_t nExpmt = this->nExpt();
UInt_t firstExpmt = this->firstExpt();
for (iExpmt = firstExpmt; iExpmt < (firstExpmt+nExpmt); ++iExpmt) {
inputFitData->readExperimentData(iExpmt);
UInt_t nEvents = inputFitData->nEvents();
if (nEvents < 1) {
std::cerr << "WARNING in LauCPFitModel::weightEvents : Zero events in experiment " << iExpmt << ", skipping..." << std::endl;
continue;
}
weightsTuple->setIntegerBranchValue( "iExpt", iExpmt );
// Calculate and store the weights for the events in this experiment
for ( UInt_t iEvent(0); iEvent < nEvents; ++iEvent ) {
weightsTuple->setIntegerBranchValue( "iEvtWithinExpt", iEvent );
const LauFitData& evtData = inputFitData->getData( iEvent );
Double_t m13Sq_MC = evtData.find("m13Sq_MC")->second;
Double_t m23Sq_MC = evtData.find("m23Sq_MC")->second;
Int_t charge = static_cast<Int_t>( evtData.find("charge")->second );
Double_t dpModelWeight(0.0);
LauKinematics * kinematics;
LauIsobarDynamics * dpModel;
if (charge > 0) {
kinematics = posKinematics_;
dpModel = posSigModel_;
} else {
kinematics = negKinematics_;
dpModel = negSigModel_;
}
if ( kinematics->withinDPLimits( m13Sq_MC, m23Sq_MC ) ) {
kinematics->updateKinematics( m13Sq_MC, m23Sq_MC );
dpModelWeight = dpModel->getEventWeight();
if ( kinematics->squareDP() ) {
dpModelWeight *= kinematics->calcSqDPJacobian();
}
const Double_t norm = (negSigModel_->getDPNorm() + posSigModel_->getDPNorm())/2.0;
dpModelWeight /= norm;
}
weightsTuple->setDoubleBranchValue( "dpModelWeight", dpModelWeight );
weightsTuple->fillBranches();
}
}
weightsTuple->buildIndex( "iExpt", "iEvtWithinExpt" );
weightsTuple->addFriendTree(dataFileName, dataTreeName);
weightsTuple->writeOutGenResults();
delete weightsTuple;
}
void LauCPFitModel::savePDFPlots(const TString& label)
{
savePDFPlotsWave(label, 0);
savePDFPlotsWave(label, 1);
savePDFPlotsWave(label, 2);
std::cout << "INFO in LauCPFitModel::savePDFPlots : label = " << label << std::endl;
// ((LauIsobarDynamics*)negSigModel_)->plot();
//Double_t minm13 = negSigModel_->getKinematics()->getm13Min();
Double_t minm13 = 0.0;
Double_t maxm13 = negSigModel_->getKinematics()->getm13Max();
//Double_t minm23 = negSigModel_->getKinematics()->getm23Min();
Double_t minm23 = 0.0;
Double_t maxm23 = negSigModel_->getKinematics()->getm23Max();
Double_t mins13 = minm13*minm13;
Double_t maxs13 = maxm13*maxm13;
Double_t mins23 = minm23*minm23;
Double_t maxs23 = maxm23*maxm23;
Double_t s13, s23, posChPdf, negChPdf;
TString xLabel = "s13";
TString yLabel = "s23";
if (negSigModel_->getDaughters()->gotSymmetricalDP()) { xLabel = "sHigh"; yLabel = "sLow";}
Int_t n13=200, n23=200;
Double_t delta13, delta23;
delta13 = (maxs13 - mins13)/n13;
delta23 = (maxs23 - mins23)/n23;
UInt_t nAmp = negSigModel_->getnCohAmp();
for (UInt_t resID = 0; resID <= nAmp; ++resID)
{
TGraph2D *posDt = new TGraph2D();
TGraph2D *negDt = new TGraph2D();
TGraph2D *acpDt = new TGraph2D();
TString resName = "TotalAmp";
if (resID != nAmp){
TString tStrResID = Form("%d", resID);
const LauIsobarDynamics* model = negSigModel_;
const LauAbsResonance* resonance = model->getResonance(resID);
resName = resonance->getResonanceName();
std::cout << "resName = " << resName << std::endl;
}
resName.ReplaceAll("(", "");
resName.ReplaceAll(")", "");
resName.ReplaceAll("*", "Star");
posDt->SetName(resName+label);
posDt->SetTitle(resName+" ("+label+") Positive");
negDt->SetName(resName+label);
negDt->SetTitle(resName+" ("+label+") Negative");
acpDt->SetName(resName+label);
acpDt->SetTitle(resName+" ("+label+") Asymmetry");
Int_t count=0;
for (Int_t i=0; i<n13; i++) {
s13 = mins13 + i*delta13;
for (Int_t j=0; j<n23; j++) {
s23 = mins23 + j*delta23;
if (negSigModel_->getKinematics()->withinDPLimits2(s23, s13))
{
if (negSigModel_->getDaughters()->gotSymmetricalDP() && (s13>s23) ) continue;
negSigModel_->calcLikelihoodInfo(s13, s23);
posSigModel_->calcLikelihoodInfo(s13, s23);
LauComplex negChAmp = negSigModel_->getEvtDPAmp();
LauComplex posChAmp = posSigModel_->getEvtDPAmp();
if (resID != nAmp){
negChAmp = negSigModel_->getFullAmplitude(resID);
posChAmp = posSigModel_->getFullAmplitude(resID);
}
negChPdf = negChAmp.abs2();
posChPdf = posChAmp.abs2();
negDt->SetPoint(count,s23,s13,negChPdf); // s23=sHigh, s13 = sLow
posDt->SetPoint(count,s23,s13,posChPdf); // s23=sHigh, s13 = sLow
acpDt->SetPoint(count,s23,s13, negChPdf - posChPdf); // s23=sHigh, s13 = sLow
count++;
}
}
}
gStyle->SetPalette(1);
TCanvas *posC = new TCanvas("c"+resName+label + "Positive",resName+" ("+label+") Positive",0,0,600,400);
posDt->GetXaxis()->SetTitle(xLabel);
posDt->GetYaxis()->SetTitle(yLabel);
posDt->Draw("SURF1");
posDt->GetXaxis()->SetTitle(xLabel);
posDt->GetYaxis()->SetTitle(yLabel);
posC->SaveAs("plot_2D_"+resName + "_"+label+"Positive.C");
TCanvas *negC = new TCanvas("c"+resName+label + "Negative",resName+" ("+label+") Negative",0,0,600,400);
negDt->GetXaxis()->SetTitle(xLabel);
negDt->GetYaxis()->SetTitle(yLabel);
negDt->Draw("SURF1");
negDt->GetXaxis()->SetTitle(xLabel);
negDt->GetYaxis()->SetTitle(yLabel);
negC->SaveAs("plot_2D_"+resName + "_"+label+"Negative.C");
TCanvas *acpC = new TCanvas("c"+resName+label + "Asymmetry",resName+" ("+label+") Asymmetry",0,0,600,400);
acpDt->GetXaxis()->SetTitle(xLabel);
acpDt->GetYaxis()->SetTitle(yLabel);
acpDt->Draw("SURF1");
acpDt->GetXaxis()->SetTitle(xLabel);
acpDt->GetYaxis()->SetTitle(yLabel);
acpC->SaveAs("plot_2D_"+resName + "_"+label+"Asymmetry.C");
}
}
void LauCPFitModel::savePDFPlotsWave(const TString& label, const Int_t& spin)
{
std::cout << "label = " << label << ", spin = " << spin << std::endl;
TString tStrResID = "S_Wave";
if (spin == 1) tStrResID = "P_Wave";
if (spin == 2) tStrResID = "D_Wave";
TString xLabel = "s13";
TString yLabel = "s23";
std::cout << "LauCPFitModel::savePDFPlotsWave: "<< tStrResID << std::endl;
Double_t minm13 = 0.0;
Double_t maxm13 = negSigModel_->getKinematics()->getm13Max();
Double_t minm23 = 0.0;
Double_t maxm23 = negSigModel_->getKinematics()->getm23Max();
Double_t mins13 = minm13*minm13;
Double_t maxs13 = maxm13*maxm13;
Double_t mins23 = minm23*minm23;
Double_t maxs23 = maxm23*maxm23;
Double_t s13, s23, posChPdf, negChPdf;
TGraph2D *posDt = new TGraph2D();
TGraph2D *negDt = new TGraph2D();
TGraph2D *acpDt = new TGraph2D();
posDt->SetName(tStrResID+label);
posDt->SetTitle(tStrResID+" ("+label+") Positive");
negDt->SetName(tStrResID+label);
negDt->SetTitle(tStrResID+" ("+label+") Negative");
acpDt->SetName(tStrResID+label);
acpDt->SetTitle(tStrResID+" ("+label+") Asymmetry");
Int_t n13=200, n23=200;
Double_t delta13, delta23;
delta13 = (maxs13 - mins13)/n13;
delta23 = (maxs23 - mins23)/n23;
UInt_t nAmp = negSigModel_->getnCohAmp();
Int_t count=0;
for (Int_t i=0; i<n13; i++)
{
s13 = mins13 + i*delta13;
for (Int_t j=0; j<n23; j++)
{
s23 = mins23 + j*delta23;
if (negSigModel_->getKinematics()->withinDPLimits2(s23, s13))
{
if (negSigModel_->getDaughters()->gotSymmetricalDP() && (s13>s23) ) continue;
LauComplex negChAmp(0,0);
LauComplex posChAmp(0,0);
Bool_t noWaveRes = kTRUE;
negSigModel_->calcLikelihoodInfo(s13, s23);
for (UInt_t resID = 0; resID < nAmp; ++resID)
{
const LauIsobarDynamics* model = negSigModel_;
const LauAbsResonance* resonance = model->getResonance(resID);
Int_t spin_res = resonance->getSpin();
if (spin != spin_res) continue;
noWaveRes = kFALSE;
negChAmp += negSigModel_->getFullAmplitude(resID);
posChAmp += posSigModel_->getFullAmplitude(resID);
}
if (noWaveRes) return;
negChPdf = negChAmp.abs2();
posChPdf = posChAmp.abs2();
negDt->SetPoint(count,s23,s13,negChPdf); // s23=sHigh, s13 = sLow
posDt->SetPoint(count,s23,s13,posChPdf); // s23=sHigh, s13 = sLow
acpDt->SetPoint(count,s23,s13, negChPdf - posChPdf); // s23=sHigh, s13 = sLow
count++;
}
}
}
gStyle->SetPalette(1);
TCanvas *posC = new TCanvas("c"+tStrResID+label + "Positive",tStrResID+" ("+label+") Positive",0,0,600,400);
posDt->GetXaxis()->SetTitle(xLabel);
posDt->GetYaxis()->SetTitle(yLabel);
posDt->Draw("SURF1");
posDt->GetXaxis()->SetTitle(xLabel);
posDt->GetYaxis()->SetTitle(yLabel);
posC->SaveAs("plot_2D_"+tStrResID + "_"+label+"Positive.C");
TCanvas *negC = new TCanvas("c"+tStrResID+label + "Negative",tStrResID+" ("+label+") Negative",0,0,600,400);
negDt->GetXaxis()->SetTitle(xLabel);
negDt->GetYaxis()->SetTitle(yLabel);
negDt->Draw("SURF1");
negDt->GetXaxis()->SetTitle(xLabel);
negDt->GetYaxis()->SetTitle(yLabel);
negC->SaveAs("plot_2D_"+tStrResID + "_"+label+"Negative.C");
TCanvas *acpC = new TCanvas("c"+tStrResID+label + "Asymmetry",tStrResID+" ("+label+") Asymmetry",0,0,600,400);
acpDt->GetXaxis()->SetTitle(xLabel);
acpDt->GetYaxis()->SetTitle(yLabel);
acpDt->Draw("SURF1");
acpDt->GetXaxis()->SetTitle(xLabel);
acpDt->GetYaxis()->SetTitle(yLabel);
acpC->SaveAs("plot_2D_"+tStrResID + "_"+label+"Asymmetry.C");
}
-Double_t LauCPFitModel::getParamFromTree(TTree & tree, TString name)
+Double_t LauCPFitModel::getParamFromTree( TTree& tree, const TString& name )
{
- if ( tree.FindBranch(name) ) {
- Double_t val;
-
- tree.SetBranchAddress(name, &val);
- tree.GetEntry(0);
-
- return val;
- } else {
- std::cout << "ERROR in LauCPFitModel::getParamFromTree : Branch name " + name + " not found in parameter file!" << std::endl;
- }
+ TBranch* branch{tree.FindBranch( name )};
+ if ( branch ) {
+ TLeaf* leaf{branch->GetLeaf( name )};
+ if ( leaf ) {
+ tree.GetEntry(0);
+ return leaf->GetValue();
+ } else {
+ std::cerr << "ERROR in LauCPFitModel::getParamFromTree : Leaf name " + name + " not found in parameter file!" << std::endl;
+ }
+ } else {
+ std::cerr << "ERROR in LauCPFitModel::getParamFromTree : Branch name " + name + " not found in parameter file!" << std::endl;
+ }
- return -1.1;
+ return -1.1;
}
-void LauCPFitModel::fixParam(LauParameter * param, Double_t val, Bool_t fix)
+void LauCPFitModel::fixParam( LauParameter* param, const Double_t val, const Bool_t fix)
{
- std::cout << "INFO in LauCPFitModel::fixParam : Setting " << param->name() << " to " << val << std::endl;
+ std::cout << "INFO in LauCPFitModel::fixParam : Setting " << param->name() << " to " << val << std::endl;
+
+ param->value(val);
+ param->genValue(val);
+ param->initValue(val);
- param->value(val);
- param->genValue(val);
- param->initValue(val);
-
- if (fix) {
- param->fixed(kTRUE);
- } else if (!param->fixed()){
+ if (fix) {
+ param->fixed(kTRUE);
+ } else if (!param->fixed()){
// Add parameter name to list to indicate that this should not be randomised by randomiseInitFitPars
// (otherwise only those that are fixed are not randomised).
- // This is only done to those that are not already fixed (see randomiseInitFitPars).
-
+ // This is only done to those that are not already fixed (see randomiseInitFitPars).
allImportedFreeParams_.insert(param);
- }
+ }
}
-void LauCPFitModel::fixParams(std::vector<LauParameter *> params)
+void LauCPFitModel::fixParams( std::vector<LauParameter*>& params )
{
- Bool_t fix = this->fixParams_;
-
- // TODO: Allow some parameters to be fixed and some to remain floating (but initialised)
-
- if (!this->fixParamFileName_.IsNull()) {
-
- // Take param values from a file
-
- TFile * paramFile = TFile::Open(this->fixParamFileName_, "READ");
- TTree * paramTree = dynamic_cast<TTree *>(paramFile->Get(this->fixParamTreeName_));
+ const Bool_t fix{fixParams_};
- if (!paramTree) {
- std::cout << "ERROR in LauCPFitModel::fixParams : Tree '" + this->fixParamTreeName_ + "' not found in parameter file!" << std::endl;
- return;
- }
+ // TODO: Allow some parameters to be fixed and some to remain floating (but initialised)
- if (!this->fixParamNames_.empty()) {
+ if ( !fixParamFileName_.IsNull() ) {
- // Fix params from file, according to vector of names
-
- std::set<std::string>::iterator itrName;
- std::vector<LauParameter *>::iterator itr;
-
- for(itr = params.begin(); itr != params.end(); itr++) {
- if ( (itrName = this->fixParamNames_.find((*itr)->name().Data())) != this->fixParamNames_.end() ) {
- this->fixParam(*itr, this->getParamFromTree(*paramTree, *itrName), fix);
- }
- }
+ // Take param values from a file
+ TFile * paramFile = TFile::Open(fixParamFileName_, "READ");
+ if (!paramFile) {
+ std::cerr << "ERROR in LauCPFitModel::fixParams : File '" + fixParamFileName_ + "' could not be opened for reading!" << std::endl;
+ return;
+ }
- } else {
+ TTree * paramTree = dynamic_cast<TTree*>(paramFile->Get(fixParamTreeName_));
+ if (!paramTree) {
+ std::cerr << "ERROR in LauCPFitModel::fixParams : Tree '" + fixParamTreeName_ + "' not found in parameter file!" << std::endl;
+ return;
+ }
- // Fix some (unspecified) parameters from file, prioritising the map (if it exists)
+ if ( !fixParamNames_.empty() ) {
- std::vector<LauParameter *>::iterator itr;
+ // Fix params from file, according to vector of names
- for(itr = params.begin(); itr != params.end(); itr++) {
+ for( auto itr = params.begin(); itr != params.end(); ++itr ) {
+ auto itrName = fixParamNames_.find( (*itr)->name() );
+ if ( itrName != fixParamNames_.end() ) {
+ this->fixParam(*itr, this->getParamFromTree(*paramTree, *itrName), fix);
+ }
+ }
- TString name = (*itr)->name();
- std::map<std::string, Double_t>::iterator nameValItr;
+ } else {
- if ( !this->fixParamMap_.empty() && (nameValItr = this->fixParamMap_.find(name.Data())) != this->fixParamMap_.end() ) {
- this->fixParam(*itr, nameValItr->second, fix);
- } else {
- this->fixParam(*itr, this->getParamFromTree(*paramTree, name), fix);
- }
+ // Fix some (unspecified) parameters from file, prioritising the map (if it exists)
- }
+ for( auto itr = params.begin(); itr != params.end(); ++itr) {
- } // Vector of names?
+ const TString& name = (*itr)->name();
- } else {
+ if ( ! fixParamMap_.empty() ) {
+ auto nameValItr = fixParamMap_.find(name);
+ if ( nameValItr != fixParamMap_.end() ) {
+ this->fixParam(*itr, nameValItr->second, fix);
+ }
+ } else {
+ this->fixParam(*itr, this->getParamFromTree(*paramTree, name), fix);
+ }
- // Fix param names fom map, leave others floating
+ }
- std::vector<LauParameter *>::iterator itr;
- std::map<std::string, Double_t>::iterator nameValItr;
+ } // Vector of names?
- for(itr = params.begin(); itr != params.end(); itr++) {
- if ( (nameValItr = this->fixParamMap_.find((*itr)->name().Data())) != this->fixParamMap_.end() ) {
- this->fixParam(*itr, nameValItr->second, fix);
- }
- }
+ } else {
- }
+ // Fix param names from map, leave others floating
+ for( auto itr = params.begin(); itr != params.end(); ++itr ) {
+ auto nameValItr = this->fixParamMap_.find( (*itr)->name() );
+ if ( nameValItr != this->fixParamMap_.end() ) {
+ this->fixParam(*itr, nameValItr->second, fix);
+ }
+ }
+ }
}
diff --git a/src/LauFitter.cc b/src/LauFitter.cc
index d0482bd..7c079b4 100644
--- a/src/LauFitter.cc
+++ b/src/LauFitter.cc
@@ -1,64 +1,64 @@
/*
Copyright 2005 University of Warwick
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
Laura++ package authors:
John Back
Paul Harrison
Thomas Latham
*/
/*! \file LauFitter.cc
\brief File containing implementation of LauFitter methods.
*/
#include <iostream>
#include "LauFitter.hh"
#include "LauAbsFitter.hh"
#include "LauMinuit.hh"
-ClassImp(LauFitter)
-
LauAbsFitter* LauFitter::theInstance_ = 0;
LauFitter::Type LauFitter::fitterType_ = LauFitter::Minuit;
+ClassImp(LauFitter)
+
void LauFitter::setFitterType( Type type )
{
if ( theInstance_ != 0 ) {
std::cerr << "ERROR in LauFitter::setFitterType : The fitter has already been created, cannot change the type now." << std::endl;
return;
}
fitterType_ = type;
}
LauAbsFitter* LauFitter::fitter()
{
// Returns a pointer to a singleton LauAbsFitter object.
// Creates the object the first time it is called.
if ( theInstance_ == 0 ) {
if ( fitterType_ == Minuit ) {
theInstance_ = new LauMinuit();
}
}
return theInstance_;
}
diff --git a/src/LauIsobarDynamics.cc b/src/LauIsobarDynamics.cc
index f594707..78a5810 100644
--- a/src/LauIsobarDynamics.cc
+++ b/src/LauIsobarDynamics.cc
@@ -1,2682 +1,2681 @@
/*
Copyright 2005 University of Warwick
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
Laura++ package authors:
John Back
Paul Harrison
Thomas Latham
*/
/*! \file LauIsobarDynamics.cc
\brief File containing implementation of LauIsobarDynamics class.
*/
#include <iostream>
#include <iomanip>
#include <fstream>
#include <set>
#include <vector>
#include "TFile.h"
#include "TRandom.h"
#include "TSystem.h"
#include "LauAbsEffModel.hh"
#include "LauAbsResonance.hh"
#include "LauAbsIncohRes.hh"
#include "LauBelleNR.hh"
#include "LauBelleSymNR.hh"
#include "LauCacheData.hh"
#include "LauConstants.hh"
#include "LauDaughters.hh"
#include "LauDPPartialIntegralInfo.hh"
#include "LauFitDataTree.hh"
#include "LauIsobarDynamics.hh"
#include "LauKinematics.hh"
#include "LauKMatrixProdPole.hh"
#include "LauKMatrixProdSVP.hh"
#include "LauKMatrixPropagator.hh"
#include "LauKMatrixPropFactory.hh"
#include "LauNRAmplitude.hh"
#include "LauPrint.hh"
#include "LauRandom.hh"
#include "LauResonanceInfo.hh"
#include "LauResonanceMaker.hh"
#include "LauRhoOmegaMix.hh"
ClassImp(LauIsobarDynamics)
// for Kpipi: only one scfFraction 2D histogram is needed
LauIsobarDynamics::LauIsobarDynamics(LauDaughters* daughters, LauAbsEffModel* effModel, LauAbsEffModel* scfFractionModel) :
daughters_(daughters),
kinematics_(daughters_ ? daughters_->getKinematics() : 0),
effModel_(effModel),
nAmp_(0),
nIncohAmp_(0),
DPNorm_(0.0),
DPRate_("DPRate", 0.0, 0.0, 1000.0),
meanDPEff_("meanDPEff", 0.0, 0.0, 1.0),
currentEvent_(0),
symmetricalDP_(kFALSE),
fullySymmetricDP_(kFALSE),
flavConjDP_(kFALSE),
integralsDone_(kFALSE),
normalizationSchemeDone_(kFALSE),
forceSymmetriseIntegration_(kFALSE),
intFileName_("integ.dat"),
m13BinWidth_(0.005),
m23BinWidth_(0.005),
mPrimeBinWidth_(0.001),
thPrimeBinWidth_(0.001),
narrowWidth_(0.020),
binningFactor_(100.0),
m13Sq_(0.0),
m23Sq_(0.0),
mPrime_(0.0),
thPrime_(0.0),
tagCat_(-1),
eff_(1.0),
scfFraction_(0.0),
jacobian_(0.0),
ASq_(0.0),
evtLike_(0.0),
iterationsMax_(100000),
nSigGenLoop_(0),
aSqMaxSet_(1.25),
aSqMaxVar_(0.0),
flipHelicity_(kTRUE),
recalcNormalisation_(kFALSE)
{
if (daughters != 0) {
symmetricalDP_ = daughters->gotSymmetricalDP();
fullySymmetricDP_ = daughters->gotFullySymmetricDP();
flavConjDP_ = daughters->gotFlavourConjugateDP();
typDaug_.push_back(daughters->getTypeDaug1());
typDaug_.push_back(daughters->getTypeDaug2());
typDaug_.push_back(daughters->getTypeDaug3());
}
if (scfFractionModel != 0) {
scfFractionModel_[0] = scfFractionModel;
}
sigResonances_.clear();
sigIncohResonances_.clear();
kMatrixPropagators_.clear();
kMatrixPropSet_.clear();
extraParameters_.clear();
}
// for Kspipi, we need a scfFraction 2D histogram for each tagging category. They are provided by the map.
// Also, we need to know the place that the tagging category of the current event occupies in the data structure inputFitTree
LauIsobarDynamics::LauIsobarDynamics(LauDaughters* daughters, LauAbsEffModel* effModel, LauTagCatScfFractionModelMap scfFractionModel) :
daughters_(daughters),
kinematics_(daughters_ ? daughters_->getKinematics() : 0),
effModel_(effModel),
scfFractionModel_(scfFractionModel),
nAmp_(0),
nIncohAmp_(0),
DPNorm_(0.0),
DPRate_("DPRate", 0.0, 0.0, 1000.0),
meanDPEff_("meanDPEff", 0.0, 0.0, 1.0),
currentEvent_(0),
symmetricalDP_(kFALSE),
fullySymmetricDP_(kFALSE),
flavConjDP_(kFALSE),
integralsDone_(kFALSE),
normalizationSchemeDone_(kFALSE),
forceSymmetriseIntegration_(kFALSE),
intFileName_("integ.dat"),
m13BinWidth_(0.005),
m23BinWidth_(0.005),
mPrimeBinWidth_(0.001),
thPrimeBinWidth_(0.001),
narrowWidth_(0.020),
binningFactor_(100.0),
m13Sq_(0.0),
m23Sq_(0.0),
mPrime_(0.0),
thPrime_(0.0),
tagCat_(-1),
eff_(1.0),
scfFraction_(0.0),
jacobian_(0.0),
ASq_(0.0),
evtLike_(0.0),
iterationsMax_(100000),
nSigGenLoop_(0),
aSqMaxSet_(1.25),
aSqMaxVar_(0.0),
flipHelicity_(kTRUE),
recalcNormalisation_(kFALSE)
{
// Constructor for the isobar signal model
if (daughters != 0) {
symmetricalDP_ = daughters->gotSymmetricalDP();
fullySymmetricDP_ = daughters->gotFullySymmetricDP();
flavConjDP_ = daughters->gotFlavourConjugateDP();
typDaug_.push_back(daughters->getTypeDaug1());
typDaug_.push_back(daughters->getTypeDaug2());
typDaug_.push_back(daughters->getTypeDaug3());
}
sigResonances_.clear();
sigIncohResonances_.clear();
kMatrixPropagators_.clear();
kMatrixPropSet_.clear();
extraParameters_.clear();
}
LauIsobarDynamics::~LauIsobarDynamics()
{
extraParameters_.clear();
for ( std::vector<LauCacheData*>::iterator iter = data_.begin(); iter != data_.end(); ++iter ) {
delete (*iter);
}
data_.clear();
for (std::vector<LauDPPartialIntegralInfo*>::iterator it = dpPartialIntegralInfo_.begin(); it != dpPartialIntegralInfo_.end(); ++it)
{
delete (*it);
}
dpPartialIntegralInfo_.clear();
}
void LauIsobarDynamics::resetNormVectors()
{
for (UInt_t i = 0; i < nAmp_; i++) {
fSqSum_[i] = 0.0;
fSqEffSum_[i] = 0.0;
fNorm_[i] = 0.0;
ff_[i].zero();
for (UInt_t j = 0; j < nAmp_; j++) {
fifjEffSum_[i][j].zero();
fifjSum_[i][j].zero();
}
}
for (UInt_t i = 0; i < nIncohAmp_; i++) {
fSqSum_[i+nAmp_] = 0.0;
fSqEffSum_[i+nAmp_] = 0.0;
fNorm_[i+nAmp_] = 0.0;
incohInten_[i] = 0.0;
}
}
void LauIsobarDynamics::recalculateNormalisation()
{
if ( recalcNormalisation_ == kFALSE ) {
return;
}
// We need to calculate the normalisation constants for the
// Dalitz plot generation/fitting.
integralsDone_ = kFALSE;
this->resetNormVectors();
this->findIntegralsToBeRecalculated();
this->calcDPNormalisation();
integralsDone_ = kTRUE;
}
void LauIsobarDynamics::findIntegralsToBeRecalculated()
{
// Loop through the resonance parameters and see which ones have changed
// For those that have changed mark the corresponding resonance(s) as needing to be re-evaluated
integralsToBeCalculated_.clear();
const UInt_t nResPars = resonancePars_.size();
for ( UInt_t iPar(0); iPar < nResPars; ++iPar ) {
const Double_t newValue = resonancePars_[iPar]->value();
if ( newValue != resonanceParValues_[iPar] ) {
resonanceParValues_[iPar] = newValue;
const std::vector<UInt_t>& indices = resonanceParResIndex_[iPar];
std::vector<UInt_t>::const_iterator indexIter = indices.begin();
const std::vector<UInt_t>::const_iterator indexEnd = indices.end();
for( ; indexIter != indexEnd; ++indexIter) {
integralsToBeCalculated_.insert(*indexIter);
}
}
}
}
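The change-detection pattern above (compare each floating parameter against its cached value, and flag every resonance that uses a changed parameter for re-integration) can be sketched in plain standard C++. Types and names here (`findDirty`, `parToRes`) are illustrative stand-ins, not Laura++ API:

```cpp
#include <cstddef>
#include <set>
#include <vector>

// Compare each parameter's current value against its cached value; any
// parameter that moved updates the cache and marks all resonances that
// depend on it as needing their integrals recalculated.
std::set<unsigned> findDirty(const std::vector<double>& current,
                             std::vector<double>& cached,
                             const std::vector<std::vector<unsigned>>& parToRes)
{
    std::set<unsigned> dirty;
    for (std::size_t i = 0; i < current.size(); ++i) {
        if (current[i] != cached[i]) {
            cached[i] = current[i];  // refresh the cached value
            dirty.insert(parToRes[i].begin(), parToRes[i].end());
        }
    }
    return dirty;
}
```

Caching the values and returning a set (rather than a list) mirrors the real method: a parameter shared by several resonances flags each of them exactly once.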
void LauIsobarDynamics::collateResonanceParameters()
{
// Initialise all resonance models
resonancePars_.clear();
resonanceParValues_.clear();
resonanceParResIndex_.clear();
std::set<LauParameter*> uniqueResPars;
UInt_t resIndex(0);
for ( std::vector<LauAbsResonance*>::iterator resIter = sigResonances_.begin(); resIter != sigResonances_.end(); ++resIter ) {
(*resIter)->initialise();
// Check if this resonance has floating parameters
// Append all unique parameters to our list
const std::vector<LauParameter*>& resPars = (*resIter)->getFloatingParameters();
for ( std::vector<LauParameter*>::const_iterator parIter = resPars.begin(); parIter != resPars.end(); ++parIter ) {
if ( uniqueResPars.insert( *parIter ).second ) {
// This parameter has not already been added to
// the list of unique ones. Add it, its value
// and its associated resonance ID to the
// appropriate lists.
resonancePars_.push_back( *parIter );
resonanceParValues_.push_back( (*parIter)->value() );
std::vector<UInt_t> resIndices( 1, resIndex );
resonanceParResIndex_.push_back( resIndices );
} else {
// This parameter has already been added to the
// list of unique ones. However, we still need
// to indicate that this resonance should be
// associated with it.
std::vector<LauParameter*>::const_iterator uniqueParIter = resonancePars_.begin();
std::vector<std::vector<UInt_t> >::iterator indicesIter = resonanceParResIndex_.begin();
while( (*uniqueParIter) != (*parIter) ) {
++uniqueParIter;
++indicesIter;
}
( *indicesIter ).push_back( resIndex );
}
}
++resIndex;
}
for ( std::vector<LauAbsIncohRes*>::iterator resIter = sigIncohResonances_.begin(); resIter != sigIncohResonances_.end(); ++resIter ) {
(*resIter)->initialise();
// Check if this resonance has floating parameters
// Append all unique parameters to our list
const std::vector<LauParameter*>& resPars = (*resIter)->getFloatingParameters();
for ( std::vector<LauParameter*>::const_iterator parIter = resPars.begin(); parIter != resPars.end(); ++parIter ) {
if ( uniqueResPars.insert( *parIter ).second ) {
// This parameter has not already been added to
// the list of unique ones. Add it, its value
// and its associated resonance ID to the
// appropriate lists.
resonancePars_.push_back( *parIter );
resonanceParValues_.push_back( (*parIter)->value() );
std::vector<UInt_t> resIndices( 1, resIndex );
resonanceParResIndex_.push_back( resIndices );
} else {
// This parameter has already been added to the
// list of unique ones. However, we still need
// to indicate that this resonance should be
// associated with it.
std::vector<LauParameter*>::const_iterator uniqueParIter = resonancePars_.begin();
std::vector<std::vector<UInt_t> >::iterator indicesIter = resonanceParResIndex_.begin();
while( (*uniqueParIter) != (*parIter) ) {
++uniqueParIter;
++indicesIter;
}
( *indicesIter ).push_back( resIndex );
}
}
++resIndex;
}
}
void LauIsobarDynamics::initialise(const std::vector<LauComplex>& coeffs)
{
// Check whether we have a valid set of integration constants for
// the normalisation of the signal likelihood function.
this->initialiseVectors();
// Mark the DP integrals as undetermined
integralsDone_ = kFALSE;
this->collateResonanceParameters();
if ( resonancePars_.empty() ) {
recalcNormalisation_ = kFALSE;
} else {
recalcNormalisation_ = kTRUE;
}
// Print summary of what we have so far to screen
this->initSummary();
if ( nAmp_+nIncohAmp_ == 0 ) {
std::cout << "INFO in LauIsobarDynamics::initialise : No contributions to DP model, not performing normalisation integrals." << std::endl;
} else {
// We need to calculate the normalisation constants for the Dalitz plot generation/fitting.
std::cout<<"INFO in LauIsobarDynamics::initialise : Starting special run to generate the integrals for normalising the PDF..."<<std::endl;
// Since this is the initialisation, we need to calculate everything for every resonance
integralsToBeCalculated_.clear();
for ( UInt_t i(0); i < nAmp_+nIncohAmp_; ++i ) {
integralsToBeCalculated_.insert(i);
}
// Calculate and cache the normalisations of each resonance _dynamic_ amplitude
// (e.g. Breit-Wigner contribution, not from the complex coefficients).
// These are stored in fNorm_[i].
// fSqSum_[i] is the integral of the dynamical amplitude squared for a given resonance, i.
// We require that:
// |fNorm_[i]|^2 * fSqSum_[i] = 1,
// i.e. fNorm_[i] normalises each resonance contribution to give the same number of
// events in the DP, accounting for the total DP area and the dynamics of the resonance.
this->calcDPNormalisation();
// Write the integrals to a file (mainly for debugging purposes)
this->writeIntegralsFile();
}
integralsDone_ = kTRUE;
std::cout << std::setprecision(10);
std::cout<<"INFO in LauIsobarDynamics::initialise : Summary of the integrals:"<<std::endl;
for (UInt_t i = 0; i < nAmp_+nIncohAmp_; i++) {
std::cout<<" fNorm["<<i<<"] = "<<fNorm_[i]<<std::endl;
std::cout<<" fSqSum["<<i<<"] = "<<fSqSum_[i]<<std::endl;
std::cout<<" fSqEffSum["<<i<<"] = "<<fSqEffSum_[i]<<std::endl;
}
for (UInt_t i = 0; i < nAmp_; i++) {
for (UInt_t j = 0; j < nAmp_; j++) {
std::cout<<" fifjEffSum["<<i<<"]["<<j<<"] = "<<fifjEffSum_[i][j];
}
std::cout<<std::endl;
}
for (UInt_t i = 0; i < nAmp_; i++) {
for (UInt_t j = 0; j < nAmp_; j++) {
std::cout<<" fifjSum["<<i<<"]["<<j<<"] = "<<fifjSum_[i][j];
}
std::cout<<std::endl;
}
// Calculate the initial fit fractions (for later comparison with Toy MC, if required)
this->updateCoeffs(coeffs);
this->calcExtraInfo(kTRUE);
for (UInt_t i = 0; i < nAmp_; i++) {
for (UInt_t j = i; j < nAmp_; j++) {
std::cout<<"INFO in LauIsobarDynamics::initialise : Initial fit fraction for amplitude ("<<i<<","<<j<<") = "<<fitFrac_[i][j].genValue()<<std::endl;
}
}
for (UInt_t i = 0; i < nIncohAmp_; i++) {
std::cout<<"INFO in LauIsobarDynamics::initialise : Initial fit fraction for incoherent amplitude ("<<i<<","<<i<<") = "<<fitFrac_[i+nAmp_][i+nAmp_].genValue()<<std::endl;
}
std::cout<<"INFO in LauIsobarDynamics::initialise : Initial efficiency = "<<meanDPEff_.initValue()<<std::endl;
std::cout<<"INFO in LauIsobarDynamics::initialise : Initial DPRate = "<<DPRate_.initValue()<<std::endl;
}
void LauIsobarDynamics::initSummary()
{
UInt_t i(0);
TString nameP = daughters_->getNameParent();
TString name1 = daughters_->getNameDaug1();
TString name2 = daughters_->getNameDaug2();
TString name3 = daughters_->getNameDaug3();
std::cout<<"INFO in LauIsobarDynamics::initSummary : We are going to do a DP with "<<nameP<<" going to "<<name1<<" "<<name2<<" "<<name3<<std::endl;
std::cout<<" : For the following resonance combinations:"<<std::endl;
std::cout<<" : In tracks 2 and 3:"<<std::endl;
for (i = 0; i < nAmp_; i++) {
if (resPairAmp_[i] == 1) {
std::cout<<" : A"<<i<<": "<<resTypAmp_[i]<<" to "<<name2<<", "<< name3<<std::endl;
}
}
for (i = 0; i < nIncohAmp_; i++) {
if (incohResPairAmp_[i] == 1) {
std::cout<<" : A"<<nAmp_+i<<": "<<incohResTypAmp_[i]<<" (incoherent) to "<<name2<<", "<< name3<<std::endl;
}
}
std::cout<<" : In tracks 1 and 3:"<<std::endl;
for (i = 0; i < nAmp_; i++) {
if (resPairAmp_[i] == 2) {
std::cout<<" : A"<<i<<": "<<resTypAmp_[i]<<" to "<<name1<<", "<< name3<<std::endl;
}
}
for (i = 0; i < nIncohAmp_; i++) {
if (incohResPairAmp_[i] == 2) {
std::cout<<" : A"<<nAmp_+i<<": "<<incohResTypAmp_[i]<<" (incoherent) to "<<name1<<", "<< name3<<std::endl;
}
}
std::cout<<" : In tracks 1 and 2:"<<std::endl;
for (i = 0; i < nAmp_; i++) {
if (resPairAmp_[i] == 3) {
std::cout<<" : A"<<i<<": "<<resTypAmp_[i]<<" to "<<name1<<", "<< name2<<std::endl;
}
}
for (i = 0; i < nIncohAmp_; i++) {
if (incohResPairAmp_[i] == 3) {
std::cout<<" : A"<<nAmp_+i<<": "<<incohResTypAmp_[i]<<" (incoherent) to "<<name1<<", "<< name2<<std::endl;
}
}
for (i = 0; i < nAmp_; i++) {
if (resPairAmp_[i] == 0) {
std::cout<<" : and a non-resonant amplitude:"<<std::endl;
std::cout<<" : A"<<i<<std::endl;
}
}
std::cout<<std::endl;
}
void LauIsobarDynamics::initialiseVectors()
{
fSqSum_.clear(); fSqSum_.resize(nAmp_+nIncohAmp_);
fSqEffSum_.clear(); fSqEffSum_.resize(nAmp_+nIncohAmp_);
fifjEffSum_.clear(); fifjEffSum_.resize(nAmp_);
fifjSum_.clear(); fifjSum_.resize(nAmp_);
ff_.clear(); ff_.resize(nAmp_);
incohInten_.clear(); incohInten_.resize(nIncohAmp_);
fNorm_.clear(); fNorm_.resize(nAmp_+nIncohAmp_);
fitFrac_.clear(); fitFrac_.resize(nAmp_+nIncohAmp_);
fitFracEffUnCorr_.clear(); fitFracEffUnCorr_.resize(nAmp_+nIncohAmp_);
LauComplex null(0.0, 0.0);
for (UInt_t i = 0; i < nAmp_; i++) {
fSqSum_[i] = 0.0; fNorm_[i] = 0.0;
ff_[i] = null;
fifjEffSum_[i].resize(nAmp_);
fifjSum_[i].resize(nAmp_);
fitFrac_[i].resize(nAmp_+nIncohAmp_);
fitFracEffUnCorr_[i].resize(nAmp_+nIncohAmp_);
for (UInt_t j = 0; j < nAmp_; j++) {
fifjEffSum_[i][j] = null;
fifjSum_[i][j] = null;
fitFrac_[i][j].valueAndRange(0.0, -100.0, 100.0);
fitFracEffUnCorr_[i][j].valueAndRange(0.0, -100.0, 100.0);
}
for (UInt_t j = nAmp_; j < nAmp_+nIncohAmp_; j++) {
fitFrac_[i][j].valueAndRange(0.0, -100.0, 100.0);
fitFracEffUnCorr_[i][j].valueAndRange(0.0, -100.0, 100.0);
}
}
for (UInt_t i = nAmp_; i < nAmp_+nIncohAmp_; i++) {
fSqSum_[i] = 0.0; fNorm_[i] = 0.0;
incohInten_[i-nAmp_] = 0.0;
fitFrac_[i].resize(nAmp_+nIncohAmp_);
fitFracEffUnCorr_[i].resize(nAmp_+nIncohAmp_);
for (UInt_t j = 0; j < nAmp_+nIncohAmp_; j++) {
fitFrac_[i][j].valueAndRange(0.0, -100.0, 100.0);
fitFracEffUnCorr_[i][j].valueAndRange(0.0, -100.0, 100.0);
}
}
UInt_t nKMatrixProps = kMatrixPropagators_.size();
extraParameters_.clear();
extraParameters_.resize( nKMatrixProps );
for ( UInt_t i = 0; i < nKMatrixProps; ++i ) {
extraParameters_[i].valueAndRange(0.0, -100.0, 100.0);
}
}
void LauIsobarDynamics::writeIntegralsFile()
{
// Routine to finalise the integration calculation for the log-likelihood normalisation.
// This writes out the normalisation integral output into the file named
// by intFileName_.
std::cout<<"INFO in LauIsobarDynamics::writeIntegralsFile : Writing integral output to integrals file "<<intFileName_.Data()<<std::endl;
UInt_t i(0), j(0);
std::ofstream getChar(intFileName_.Data());
getChar << std::setprecision(10);
// Write out daughter types (pi, pi0, K, K0s?)
for (i = 0; i < 3; i++) {
getChar << typDaug_[i] << " ";
}
// Write out number of resonances in the Dalitz plot model
getChar << nAmp_ << std::endl;
// Write out the resonances
for (i = 0; i < nAmp_; i++) {
getChar << resTypAmp_[i] << " ";
}
getChar << std::endl;
// Write out the resonance model types (BW, RelBW etc...)
for (i = 0; i < nAmp_; i++) {
LauAbsResonance* theResonance = sigResonances_[i];
Int_t resModelInt = theResonance->getResonanceModel();
getChar << resModelInt << " ";
}
getChar << std::endl;
// Write out the track pairings for each resonance. This is specified
// by the resPairAmpInt integer in the addResonance function.
for (i = 0; i < nAmp_; i++) {
getChar << resPairAmp_[i] << " ";
}
getChar << std::endl;
// Write out the fSqSum = |ff|^2, where ff = resAmp()
for (i = 0; i < nAmp_; i++) {
getChar << fSqSum_[i] << " ";
}
getChar << std::endl;
// Similar to fSqSum, but with the efficiency term included.
for (i = 0; i < nAmp_; i++) {
getChar << fSqEffSum_[i] << " ";
}
getChar << std::endl;
// Write out the f_i*f_j_conj*eff values = resAmp_i*resAmp_j_conj*eff.
// Note that only the top half of the i*j "matrix" is required, as it
// is symmetric w.r.t i, j.
for (i = 0; i < nAmp_; i++) {
for (j = i; j < nAmp_; j++) {
getChar << fifjEffSum_[i][j] << " ";
}
}
getChar << std::endl;
// Similar to fifjEffSum, but without the efficiency term included.
for (i = 0; i < nAmp_; i++) {
for (j = i; j < nAmp_; j++) {
getChar << fifjSum_[i][j] << " ";
}
}
getChar << std::endl;
// Write out number of incoherent resonances in the Dalitz plot model
getChar << nIncohAmp_ << std::endl;
// Write out the incoherent resonances
for (i = 0; i < nIncohAmp_; i++) {
getChar << incohResTypAmp_[i] << " ";
}
getChar << std::endl;
// Write out the incoherent resonance model types (BW, RelBW etc...)
for (i = 0; i < nIncohAmp_; i++) {
LauAbsResonance* theResonance = sigIncohResonances_[i];
Int_t resModelInt = theResonance->getResonanceModel();
getChar << resModelInt << " ";
}
getChar << std::endl;
// Write out the track pairings for each incoherent resonance. This is specified
// by the resPairAmpInt integer in the addIncohResonance function.
for (i = 0; i < nIncohAmp_; i++) {
getChar << incohResPairAmp_[i] << " ";
}
getChar << std::endl;
// Write out the fSqSum = |ff|^2, where |ff|^2 = incohResAmp()
for (i = nAmp_; i < nAmp_+nIncohAmp_; i++) {
getChar << fSqSum_[i] << " ";
}
getChar << std::endl;
// Similar to fSqSum, but with the efficiency term included.
for (i = nAmp_; i < nAmp_+nIncohAmp_; i++) {
getChar << fSqEffSum_[i] << " ";
}
getChar << std::endl;
}
LauAbsResonance* LauIsobarDynamics::addResonance(const TString& resName, const Int_t resPairAmpInt, const LauAbsResonance::LauResonanceModel resType, const LauBlattWeisskopfFactor::BlattWeisskopfCategory bwCategory)
{
// Function to add a resonance in a Dalitz plot.
// No check is made w.r.t flavour and charge conservation rules, and so
// the user is responsible for checking the internal consistency of
// their function statements with these laws. For example, the program
// will not prevent the user from asking for a rho resonance in a K-pi
// pair or a K* resonance in a pi-pi pair.
// However, to assist the user, a summary of the resonant structure requested
// by the user is printed before the program runs. It is important to check this
// information when you first define your Dalitz plot model before doing
// any fitting/generating.
// Arguments are: resonance name, integer to specify the resonance track pairing
// (1 => m_23, 2 => m_13, 3 => m_12), i.e. the bachelor track number.
// The third argument resType specifies whether the resonance is a Breit-Wigner (BW),
// Relativistic Breit-Wigner (RelBW) or Flatte distribution (Flatte), for example.
if( LauAbsResonance::isIncoherentModel(resType) == true ) {
std::cerr<<"ERROR in LauIsobarDynamics::addResonance : Resonance type \""<<resType<<"\" not allowed for a coherent resonance"<<std::endl;
return 0;
}
LauResonanceMaker& resonanceMaker = LauResonanceMaker::get();
LauAbsResonance *theResonance = resonanceMaker.getResonance(daughters_, resName, resPairAmpInt, resType, bwCategory);
if (theResonance == 0) {
std::cerr<<"ERROR in LauIsobarDynamics::addResonance : Couldn't create the resonance \""<<resName<<"\""<<std::endl;
return 0;
}
// implement the helicity flip here
if (flipHelicity_ && daughters_->getCharge(resPairAmpInt) == 0 && daughters_->getChargeParent() == 0 && daughters_->getTypeParent() > 0 ) {
if ( ( resPairAmpInt == 1 && TMath::Abs(daughters_->getTypeDaug2()) == TMath::Abs(daughters_->getTypeDaug3()) ) ||
( resPairAmpInt == 2 && TMath::Abs(daughters_->getTypeDaug1()) == TMath::Abs(daughters_->getTypeDaug3()) ) ||
( resPairAmpInt == 3 && TMath::Abs(daughters_->getTypeDaug1()) == TMath::Abs(daughters_->getTypeDaug2()) ) ) {
theResonance->flipHelicity(kTRUE);
}
}
// Set the resonance name and what track is the bachelor
TString resonanceName = theResonance->getResonanceName();
resTypAmp_.push_back(resonanceName);
// Always force the non-resonant amplitude pair to have resPairAmp = 0
// in case the user chooses the wrong number.
if ( resType == LauAbsResonance::FlatNR ||
resType == LauAbsResonance::NRModel ) {
std::cout<<"INFO in LauIsobarDynamics::addResonance : Setting resPairAmp to 0 for "<<resonanceName<<" contribution."<<std::endl;
resPairAmp_.push_back(0);
} else {
resPairAmp_.push_back(resPairAmpInt);
}
// Increment the number of resonance amplitudes we have so far
++nAmp_;
// Finally, add the resonance object to the internal array
sigResonances_.push_back(theResonance);
std::cout<<"INFO in LauIsobarDynamics::addResonance : Successfully added resonance. Total number of resonances so far = "<<nAmp_<<std::endl;
return theResonance;
}
LauAbsResonance* LauIsobarDynamics::addIncoherentResonance(const TString& resName, const Int_t resPairAmpInt, const LauAbsResonance::LauResonanceModel resType)
{
// Function to add an incoherent resonance in a Dalitz plot.
// No check is made w.r.t flavour and charge conservation rules, and so
// the user is responsible for checking the internal consistency of
// their function statements with these laws. For example, the program
// will not prevent the user from asking for a rho resonance in a K-pi
// pair or a K* resonance in a pi-pi pair.
// However, to assist the user, a summary of the resonant structure requested
// by the user is printed before the program runs. It is important to check this
// information when you first define your Dalitz plot model before doing
// any fitting/generating.
// Arguments are: resonance name, integer to specify the resonance track pairing
// (1 => m_23, 2 => m_13, 3 => m_12), i.e. the bachelor track number.
// The third argument resType specifies the shape of the resonance
// Gaussian (GaussIncoh), for example.
if( LauAbsResonance::isIncoherentModel(resType) == false ) {
std::cerr<<"ERROR in LauIsobarDynamics::addIncohResonance : Resonance type \""<<resType<<"\" not allowed for an incoherent resonance"<<std::endl;
return 0;
}
LauResonanceMaker& resonanceMaker = LauResonanceMaker::get();
LauAbsIncohRes *theResonance = dynamic_cast<LauAbsIncohRes*>( resonanceMaker.getResonance(daughters_, resName, resPairAmpInt, resType) );
if (theResonance == 0) {
std::cerr<<"ERROR in LauIsobarDynamics::addIncohResonance : Couldn't create the resonance \""<<resName<<"\""<<std::endl;
return 0;
}
// Set the resonance name and what track is the bachelor
TString resonanceName = theResonance->getResonanceName();
incohResTypAmp_.push_back(resonanceName);
incohResPairAmp_.push_back(resPairAmpInt);
// Increment the number of resonance amplitudes we have so far
++nIncohAmp_;
// Finally, add the resonance object to the internal array
sigIncohResonances_.push_back(theResonance);
std::cout<<"INFO in LauIsobarDynamics::addIncohResonance : Successfully added incoherent resonance. Total number of incoherent resonances so far = "<<nIncohAmp_<<std::endl;
return theResonance;
}
void LauIsobarDynamics::defineKMatrixPropagator(const TString& propName, const TString& paramFileName, Int_t resPairAmpInt,
Int_t nChannels, Int_t nPoles, Int_t rowIndex)
{
// Define the K-matrix propagator. The resPairAmpInt integer specifies which mass combination should be used
// for the invariant mass-squared variable "s". The pole masses and coupling constants are defined in the
// paramFileName parameter file.
// The number of channels and poles are defined by the nChannels and nPoles integers, respectively.
// The integer rowIndex specifies which row of the propagator should be used when
// summing over all amplitude channels: S-wave will be the first row, so rowIndex = 1.
// Check that the rowIndex is valid
if (rowIndex < 1 || rowIndex > nChannels) {
std::cerr << "ERROR in LauIsobarDynamics::defineKMatrixPropagator. The rowIndex, which is set to "
<< rowIndex << ", must be between 1 and the number of channels "
<< nChannels << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
TString propagatorName(propName), parameterFile(paramFileName);
LauKMatrixPropagator* thePropagator = LauKMatrixPropFactory::getInstance()->getPropagator(propagatorName, parameterFile,
resPairAmpInt, nChannels,
nPoles, rowIndex);
kMatrixPropagators_[propagatorName] = thePropagator;
}
void LauIsobarDynamics::addKMatrixProdPole(const TString& poleName, const TString& propName, Int_t poleIndex, Bool_t useProdAdler)
{
// Add a K-matrix production pole term, using the K-matrix propagator given by the propName.
// Here, poleIndex is the integer specifying the pole number.
// First, find the K-matrix propagator.
KMPropMap::iterator mapIter = kMatrixPropagators_.find(propName);
if (mapIter != kMatrixPropagators_.end()) {
LauKMatrixPropagator* thePropagator = mapIter->second;
// Make sure the pole index is valid
Int_t nPoles = thePropagator->getNPoles();
if (poleIndex < 1 || poleIndex > nPoles) {
std::cerr<<"ERROR in LauIsobarDynamics::addKMatrixProdPole : The pole index "<<poleIndex
<<" is not between 1 and "<<nPoles<<". Not adding production pole "<<poleName
<<" for K-matrix propagator "<<propName<<std::endl;
return;
}
// Now add the K-matrix production pole amplitude to the vector of LauAbsResonance pointers.
Int_t resPairAmpInt = thePropagator->getResPairAmpInt();
LauAbsResonance* prodPole = new LauKMatrixProdPole(poleName, poleIndex, resPairAmpInt,
thePropagator, daughters_, useProdAdler);
resTypAmp_.push_back(poleName);
resPairAmp_.push_back(resPairAmpInt);
++nAmp_;
sigResonances_.push_back(prodPole);
// Also store the propName-poleName pair for calculating total fit fractions later on
// (avoiding the need to use dynamic casts to check which resonances are of the K-matrix type)
kMatrixPropSet_[poleName] = propName;
std::cout<<"INFO in LauIsobarDynamics::addKMatrixProdPole : Successfully added K-matrix production pole term. Total number of resonances so far = "<<nAmp_<<std::endl;
} else {
std::cerr<<"ERROR in LauIsobarDynamics::addKMatrixProdPole : The propagator of the name "<<propName
<<" could not be found for the production pole "<<poleName<<std::endl;
}
}
void LauIsobarDynamics::addKMatrixProdSVP(const TString& SVPName, const TString& propName, Int_t channelIndex, Bool_t useProdAdler)
{
// Add a K-matrix production "slowly-varying part" (SVP) term, using the K-matrix propagator
// given by the propName. Here, channelIndex is the integer specifying the channel number.
// First, find the K-matrix propagator.
KMPropMap::iterator mapIter = kMatrixPropagators_.find(propName);
if (mapIter != kMatrixPropagators_.end()) {
LauKMatrixPropagator* thePropagator = mapIter->second;
// Make sure the channel index is valid
Int_t nChannels = thePropagator->getNChannels();
if (channelIndex < 1 || channelIndex > nChannels) {
std::cerr<<"ERROR in LauIsobarDynamics::addKMatrixProdSVP : The channel index "<<channelIndex
<<" is not between 1 and "<<nChannels<<". Not adding production slowly-varying part "<<SVPName
<<" for K-matrix propagator "<<propName<<std::endl;
return;
}
// Now add the K-matrix production SVP amplitude to the vector of LauAbsResonance pointers.
Int_t resPairAmpInt = thePropagator->getResPairAmpInt();
LauAbsResonance* prodSVP = new LauKMatrixProdSVP(SVPName, channelIndex, resPairAmpInt,
thePropagator, daughters_, useProdAdler);
resTypAmp_.push_back(SVPName);
resPairAmp_.push_back(resPairAmpInt);
++nAmp_;
sigResonances_.push_back(prodSVP);
// Also store the SVPName-propName pair for calculating total fit fractions later on
// (avoiding the need to use dynamic casts to check which resonances are of the K-matrix type)
kMatrixPropSet_[SVPName] = propName;
std::cout<<"INFO in LauIsobarDynamics::addKMatrixProdSVP : Successfully added K-matrix production slowly-varying (SVP) term. Total number of resonances so far = "<<nAmp_<<std::endl;
} else {
std::cerr<<"ERROR in LauIsobarDynamics::addKMatrixProdSVP : The propagator of the name "<<propName
<<" could not be found for the production slowly-varying part "<<SVPName<<std::endl;
}
}
Int_t LauIsobarDynamics::resonanceIndex(const TString& resName) const
{
Int_t index(0);
const LauAbsResonance* theResonance(0);
for (std::vector<LauAbsResonance*>::const_iterator iter=sigResonances_.begin(); iter!=sigResonances_.end(); ++iter) {
theResonance = (*iter);
if (theResonance != 0) {
const TString& resString = theResonance->getResonanceName();
if (resString == resName) {
return index;
}
}
++index;
}
for (std::vector<LauAbsIncohRes*>::const_iterator iter=sigIncohResonances_.begin(); iter!=sigIncohResonances_.end(); ++iter) {
theResonance = (*iter);
if (theResonance != 0) {
const TString& resString = theResonance->getResonanceName();
if (resString == resName) {
return index;
}
}
++index;
}
return -1;
}
Bool_t LauIsobarDynamics::hasResonance(const TString& resName) const
{
const Int_t index = this->resonanceIndex(resName);
if (index < 0) {
return kFALSE;
} else {
return kTRUE;
}
}
const LauAbsResonance* LauIsobarDynamics::getResonance(const UInt_t resIndex) const
{
if ( resIndex < this->getnCohAmp() ) {
return sigResonances_[resIndex];
} else if ( resIndex < this->getnTotAmp() ) {
return sigIncohResonances_[ resIndex - nAmp_ ];
} else {
std::cerr<<"ERROR in LauIsobarDynamics::getResonance : Couldn't find resonance with index \""<<resIndex<<"\" in the model."<<std::endl;
return 0;
}
}
LauAbsResonance* LauIsobarDynamics::getResonance(const UInt_t resIndex)
{
if ( resIndex < this->getnCohAmp() ) {
return sigResonances_[resIndex];
} else if ( resIndex < this->getnTotAmp() ) {
return sigIncohResonances_[ resIndex - nAmp_ ];
} else {
std::cerr<<"ERROR in LauIsobarDynamics::getResonance : Couldn't find resonance with index \""<<resIndex<<"\" in the model."<<std::endl;
return 0;
}
}
LauAbsResonance* LauIsobarDynamics::findResonance(const TString& resName)
{
const Int_t index = this->resonanceIndex( resName );
if ( index < 0 ) {
std::cerr<<"ERROR in LauIsobarDynamics::findResonance : Couldn't find resonance with name \""<<resName<<"\" in the model."<<std::endl;
return 0;
} else {
return this->getResonance( index );
}
}
const LauAbsResonance* LauIsobarDynamics::findResonance(const TString& resName) const
{
const Int_t index = this->resonanceIndex( resName );
if ( index < 0 ) {
std::cerr<<"ERROR in LauIsobarDynamics::findResonance : Couldn't find resonance with name \""<<resName<<"\" in the model."<<std::endl;
return 0;
} else {
return this->getResonance( index );
}
}
void LauIsobarDynamics::removeCharge(TString& string) const
{
Ssiz_t index = string.Index("+");
if (index != -1) {
string.Remove(index,1);
}
index = string.Index("-");
if (index != -1) {
string.Remove(index,1);
}
}
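`removeCharge` strips the first `+` and the first `-` from a resonance name. The same logic with `std::string` instead of ROOT's `TString` (a sketch, not the Laura++ implementation):

```cpp
#include <string>

// Strip the first '+' and the first '-' from a particle name, e.g.
// "rho+" -> "rho", so charge-conjugate names compare equal.
void removeCharge(std::string& s)
{
    std::string::size_type pos = s.find('+');
    if (pos != std::string::npos) {
        s.erase(pos, 1);
    }
    pos = s.find('-');
    if (pos != std::string::npos) {
        s.erase(pos, 1);
    }
}
```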
void LauIsobarDynamics::calcDPNormalisation()
{
if (!normalizationSchemeDone_) {
this->calcDPNormalisationScheme();
}
for (std::vector<LauDPPartialIntegralInfo*>::iterator it = dpPartialIntegralInfo_.begin(); it != dpPartialIntegralInfo_.end(); ++it)
{
this->calcDPPartialIntegral( *it );
}
for (UInt_t i = 0; i < nAmp_+nIncohAmp_; ++i) {
fNorm_[i] = 0.0;
if (fSqSum_[i] > 0.0) {fNorm_[i] = TMath::Sqrt(1.0/(fSqSum_[i]));}
}
}
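The final loop sets each normalisation constant so that fNorm_[i]^2 * fSqSum_[i] = 1, leaving amplitudes with zero integral unnormalised. A minimal numeric sketch of that step (standard C++ types; `calcNorms` is an illustrative name):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// For each amplitude, norm = sqrt(1/sqSum) when sqSum > 0, else 0, so
// that norm^2 * sqSum == 1 and each contribution is unit-normalised
// over the Dalitz plot.
std::vector<double> calcNorms(const std::vector<double>& sqSums)
{
    std::vector<double> norms(sqSums.size(), 0.0);
    for (std::size_t i = 0; i < sqSums.size(); ++i) {
        if (sqSums[i] > 0.0) {
            norms[i] = std::sqrt(1.0 / sqSums[i]);
        }
    }
    return norms;
}
```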
std::vector< std::pair<Double_t, Double_t> > LauIsobarDynamics::formGapsFromRegions( const std::vector< std::pair<Double_t, Double_t> >& regions, const Double_t min, const Double_t max ) const
{
std::vector< std::pair<Double_t, Double_t> > gaps(regions.size() + 1, std::make_pair(0., 0.));
// Given some narrow resonance regions, find the regions that correspond to the gaps between them
gaps[0].first = min;
for (UInt_t i = 0; i < regions.size(); ++i) {
gaps[i].second = regions[i].first;
gaps[i + 1].first = regions[i].second;
}
gaps[gaps.size() - 1].second = max;
return gaps;
}
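A standalone sketch of the gap-forming logic, using plain `double` in place of ROOT's `Double_t` (it assumes, as the real method does, that the input regions are sorted and disjoint within [min, max]):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Given N sorted, disjoint regions inside [min, max], return the N+1
// complementary gaps between them (some may be empty if regions touch
// the boundaries).
std::vector<std::pair<double, double>>
formGaps(const std::vector<std::pair<double, double>>& regions,
         double min, double max)
{
    std::vector<std::pair<double, double>> gaps(regions.size() + 1);
    gaps.front().first = min;
    for (std::size_t i = 0; i < regions.size(); ++i) {
        gaps[i].second = regions[i].first;      // gap ends where this region starts
        gaps[i + 1].first = regions[i].second;  // next gap starts where it ends
    }
    gaps.back().second = max;
    return gaps;
}
```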
void LauIsobarDynamics::cullNullRegions( std::vector<LauDPPartialIntegralInfo*>& regions ) const
{
LauDPPartialIntegralInfo* tmp(0);
regions.erase( std::remove(regions.begin(), regions.end(), tmp), regions.end() );
}
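`cullNullRegions` is the standard erase-remove idiom: `std::remove` shifts the surviving elements to the front and `erase` trims the leftover tail. A generic sketch with plain pointers:

```cpp
#include <algorithm>
#include <vector>

// Remove all null pointers from a vector in a single pass.
template <typename T>
void cullNulls(std::vector<T*>& v)
{
    v.erase(std::remove(v.begin(), v.end(), static_cast<T*>(nullptr)), v.end());
}
```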
void LauIsobarDynamics::correctDPOverlap( std::vector< std::pair<Double_t, Double_t> >& regions, const std::vector<Double_t>& binnings ) const
{
if (regions.empty()) {
return;
}
// If the regions overlap, ensure that the one with the finest binning takes precedence (i.e., extends its full width)
for (UInt_t i = 0; i < regions.size() - 1; ++i) {
if ( regions[i + 1].first <= regions[i].second ) {
if ((binnings[i] < binnings[i + 1])) {
regions[i + 1] = std::make_pair(regions[i].second, regions[i + 1].second);
} else {
regions[i] = std::make_pair(regions[i].first, regions[i + 1].first);
}
}
}
}
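The overlap-correction rule (the more finely binned of two overlapping regions keeps its full width; the coarser one is truncated at the boundary) can be sketched in isolation like so, with `double` standing in for `Double_t`:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// For each pair of adjacent, overlapping regions, let the region with
// the smaller (finer) bin width keep its full extent and truncate the
// other one at its edge.
void correctOverlap(std::vector<std::pair<double, double>>& regions,
                    const std::vector<double>& binnings)
{
    if (regions.empty()) {
        return;
    }
    for (std::size_t i = 0; i + 1 < regions.size(); ++i) {
        if (regions[i + 1].first <= regions[i].second) {
            if (binnings[i] < binnings[i + 1]) {
                // earlier region is finer: truncate the later one
                regions[i + 1].first = regions[i].second;
            } else {
                // later region is finer (or equal): truncate the earlier one
                regions[i].second = regions[i + 1].first;
            }
        }
    }
}
```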
std::vector<LauDPPartialIntegralInfo*> LauIsobarDynamics::m13IntegrationRegions( const std::vector< std::pair<Double_t,Double_t> >& m13Regions,
const std::vector< std::pair<Double_t,Double_t> >& m23Regions,
const std::vector<Double_t>& m13Binnings,
const Double_t precision,
const Double_t defaultBinning ) const
{
// Create integration regions for all narrow resonances in m13 except for the overlaps with narrow resonances in m23
std::vector<LauDPPartialIntegralInfo*> integrationRegions;
const Double_t m23Min = kinematics_->getm23Min();
const Double_t m23Max = kinematics_->getm23Max();
// Loop over narrow resonances in m13
for (UInt_t m13i = 0; m13i < m13Regions.size(); ++m13i) {
const Double_t m13Binning = m13Binnings[m13i];
const Double_t resMin13 = m13Regions[m13i].first;
const Double_t resMax13 = m13Regions[m13i].second;
// Initialise to the full height of the DP in case there are no narrow resonances in m23
Double_t lastResMax23 = m23Min;
// Loop over narrow resonances in m23
for (UInt_t m23i = 0; m23i < m23Regions.size(); m23i++) {
const Double_t resMin23 = m23Regions[m23i].first;
const Double_t resMax23 = m23Regions[m23i].second;
// For the first entry, add the area between m23 threshold and this first entry
if (m23i == 0) {
integrationRegions.push_back(this->newDPIntegrationRegion(resMin13, resMax13, m23Min, resMin23, m13Binning, defaultBinning, precision, nAmp_, nIncohAmp_));
}
// For all entries except the last one, add the area between this and the next entry
if (m23i != (m23Regions.size() - 1)) {
const Double_t nextResMin23 = m23Regions[m23i + 1].first;
integrationRegions.push_back(this->newDPIntegrationRegion(resMin13, resMax13, resMax23, nextResMin23, m13Binning, defaultBinning, precision, nAmp_, nIncohAmp_));
} else {
lastResMax23 = resMax23;
}
}
// Add the area between the last entry and the maximum m23 (which could be the whole strip if there are no entries in m23Regions)
integrationRegions.push_back(this->newDPIntegrationRegion(resMin13, resMax13, lastResMax23, m23Max, m13Binning, defaultBinning, precision, nAmp_, nIncohAmp_));
}
return integrationRegions;
}
std::vector<LauDPPartialIntegralInfo*> LauIsobarDynamics::m23IntegrationRegions( const std::vector<std::pair<Double_t,Double_t> >& m13Regions,
const std::vector<std::pair<Double_t,Double_t> >& m23Regions,
const std::vector<Double_t>& m13Binnings,
const std::vector<Double_t>& m23Binnings,
const Double_t precision,
const Double_t defaultBinning ) const
{
// Create integration regions for all narrow resonances in m23 (including the overlap regions with m13 narrow resonances)
std::vector<LauDPPartialIntegralInfo *> integrationRegions;
const Double_t m13Min = kinematics_->getm13Min();
const Double_t m13Max = kinematics_->getm13Max();
// Loop over narrow resonances in m23
for (UInt_t m23i = 0; m23i < m23Regions.size(); m23i++) {
const Double_t m23Binning = m23Binnings[m23i];
const Double_t resMin23 = m23Regions[m23i].first;
const Double_t resMax23 = m23Regions[m23i].second;
// Initialise to the full width of the DP in case there are no narrow resonances in m13
Double_t lastResMax13 = m13Min;
// Loop over narrow resonances in m13
for (UInt_t m13i = 0; m13i < m13Regions.size(); m13i++){
const Double_t m13Binning = m13Binnings[m13i];
const Double_t resMin13 = m13Regions[m13i].first;
const Double_t resMax13 = m13Regions[m13i].second;
// Overlap region (only needed in m23)
integrationRegions.push_back(this->newDPIntegrationRegion(resMin13, resMax13, resMin23, resMax23, m13Binning, m23Binning, precision, nAmp_, nIncohAmp_));
// For the first entry, add the area between m13 threshold and this first entry
if (m13i == 0) {
integrationRegions.push_back(this->newDPIntegrationRegion(m13Min, resMin13, resMin23, resMax23, defaultBinning, m23Binning, precision, nAmp_, nIncohAmp_));
}
// For all entries except the last one, add the area between this and the next entry
if (m13i != m13Regions.size() - 1) {
const Double_t nextResMin13 = m13Regions[m13i + 1].first;
integrationRegions.push_back(this->newDPIntegrationRegion(resMax13, nextResMin13, resMin23, resMax23, defaultBinning, m23Binning, precision, nAmp_, nIncohAmp_));
} else {
lastResMax13 = resMax13;
}
}
// Add the area between the last entry and the maximum m13 (which could be the whole strip if there are no entries in m13Regions)
integrationRegions.push_back(this->newDPIntegrationRegion(lastResMax13, m13Max, resMin23, resMax23, defaultBinning, m23Binning, precision, nAmp_, nIncohAmp_));
}
return integrationRegions;
}
LauDPPartialIntegralInfo* LauIsobarDynamics::newDPIntegrationRegion( const Double_t minm13, const Double_t maxm13,
const Double_t minm23, const Double_t maxm23,
const Double_t m13BinWidth, const Double_t m23BinWidth,
const Double_t precision,
const UInt_t nAmp,
const UInt_t nIncohAmp ) const
{
const UInt_t nm13Points = static_cast<UInt_t>((maxm13-minm13)/m13BinWidth);
const UInt_t nm23Points = static_cast<UInt_t>((maxm23-minm23)/m23BinWidth);
// If we would create a region with no interior points, just return a null pointer
if (nm13Points == 0 || nm23Points == 0) {
return 0;
}
return new LauDPPartialIntegralInfo(minm13, maxm13, minm23, maxm23, m13BinWidth, m23BinWidth, precision, nAmp, nIncohAmp);
}
void LauIsobarDynamics::calcDPNormalisationScheme()
{
if ( ! dpPartialIntegralInfo_.empty() ) {
std::cerr << "ERROR in LauIsobarDynamics::calcDPNormalisationScheme : Scheme already stored!" << std::endl;
return;
}
// The precision for the Gauss-Legendre weights
const Double_t precision(1e-6);
// Get the rectangle that encloses the DP
const Double_t minm13 = kinematics_->getm13Min();
const Double_t maxm13 = kinematics_->getm13Max();
const Double_t minm23 = kinematics_->getm23Min();
const Double_t maxm23 = kinematics_->getm23Max();
const Double_t minm12 = kinematics_->getm12Min();
const Double_t maxm12 = kinematics_->getm12Max();
// Find out whether we have narrow resonances in the DP (defined here as width < narrowWidth_, which defaults to 20 MeV).
std::vector< std::pair<Double_t,Double_t> > m13NarrowRes;
std::vector< std::pair<Double_t,Double_t> > m23NarrowRes;
std::vector< std::pair<Double_t,Double_t> > m12NarrowRes;
// Rho-omega mixing models implicitly contain the omega(782) lineshape, but the width reported is that of the rho(770) - handle as a special case
LauResonanceMaker& resonanceMaker = LauResonanceMaker::get();
LauResonanceInfo* omega_info = resonanceMaker.getResInfo("omega(782)");
const Double_t omegaMass = (omega_info!=0) ? omega_info->getMass()->unblindValue() : 0.78265;
const Double_t omegaWidth = (omega_info!=0) ? omega_info->getWidth()->unblindValue() : 0.00849;
for ( std::vector<LauAbsResonance*>::const_iterator iter = sigResonances_.begin(); iter != sigResonances_.end(); ++iter ) {
LauAbsResonance::LauResonanceModel model = (*iter)->getResonanceModel();
const TString& name = (*iter)->getResonanceName();
Int_t pair = (*iter)->getPairInt();
Double_t mass = (*iter)->getMass();
Double_t width = (*iter)->getWidth();
if ( model == LauAbsResonance::RhoOmegaMix_GS ||
model == LauAbsResonance::RhoOmegaMix_GS_1 ||
model == LauAbsResonance::RhoOmegaMix_RBW ||
model == LauAbsResonance::RhoOmegaMix_RBW_1 ) {
mass = omegaMass;
width = omegaWidth;
}
if ( width > narrowWidth_ || width == 0.0 ) {
continue;
}
std::cout << "INFO in LauIsobarDynamics::calcDPNormalisationScheme : Found narrow resonance: " << name << ", mass = " << mass << ", width = " << width << ", pair int = " << pair << std::endl;
if ( pair == 1 ) {
if ( mass < minm23 || mass > maxm23 ){
std::cout << std::string(53, ' ') << ": But its pole is outside the kinematically allowed range, so will not consider it narrow for the purposes of integration" << std::endl;
} else {
m23NarrowRes.push_back( std::make_pair(mass,width) );
if ( fullySymmetricDP_ ) {
m13NarrowRes.push_back( std::make_pair(mass,width) );
m12NarrowRes.push_back( std::make_pair(mass,width) );
} else if ( symmetricalDP_ || ( flavConjDP_ && forceSymmetriseIntegration_ ) ) {
m13NarrowRes.push_back( std::make_pair(mass,width) );
}
}
} else if ( pair == 2 ) {
if ( mass < minm13 || mass > maxm13 ){
std::cout << std::string(53, ' ') << ": But its pole is outside the kinematically allowed range, so will not consider it narrow for the purposes of integration" << std::endl;
} else {
m13NarrowRes.push_back( std::make_pair(mass,width) );
if ( fullySymmetricDP_ ) {
m23NarrowRes.push_back( std::make_pair(mass,width) );
m12NarrowRes.push_back( std::make_pair(mass,width) );
} else if ( symmetricalDP_ || ( flavConjDP_ && forceSymmetriseIntegration_ ) ) {
m23NarrowRes.push_back( std::make_pair(mass,width) );
}
}
} else if ( pair == 3 ) {
if ( mass < minm12 || mass > maxm12 ){
std::cout << std::string(53, ' ') << ": But its pole is outside the kinematically allowed range, so will not consider it narrow for the purposes of integration" << std::endl;
} else {
m12NarrowRes.push_back( std::make_pair(mass,width) );
if ( fullySymmetricDP_ ) {
m13NarrowRes.push_back( std::make_pair(mass,width) );
m23NarrowRes.push_back( std::make_pair(mass,width) );
}
}
} else {
std::cerr << "WARNING in LauIsobarDynamics::calcDPNormalisationScheme : strange pair integer, " << pair << ", for resonance \"" << (*iter)->getResonanceName() << "\"" << std::endl;
}
}
for ( std::vector<LauAbsIncohRes*>::const_iterator iter = sigIncohResonances_.begin(); iter != sigIncohResonances_.end(); ++iter ) {
const TString& name = (*iter)->getResonanceName();
Int_t pair = (*iter)->getPairInt();
Double_t mass = (*iter)->getMass();
Double_t width = (*iter)->getWidth();
if ( width > narrowWidth_ || width == 0.0 ) {
continue;
}
std::cout << "INFO in LauIsobarDynamics::calcDPNormalisationScheme : Found narrow resonance: " << name << ", mass = " << mass << ", width = " << width << ", pair int = " << pair << std::endl;
if ( pair == 1 ) {
if ( mass < minm23 || mass > maxm23 ){
std::cout << std::string(53, ' ') << ": But its pole is outside the kinematically allowed range, so will not consider it narrow for the purposes of integration" << std::endl;
} else {
m23NarrowRes.push_back( std::make_pair(mass,width) );
if ( fullySymmetricDP_ ) {
m13NarrowRes.push_back( std::make_pair(mass,width) );
m12NarrowRes.push_back( std::make_pair(mass,width) );
} else if ( symmetricalDP_ || ( flavConjDP_ && forceSymmetriseIntegration_ ) ) {
m13NarrowRes.push_back( std::make_pair(mass,width) );
}
}
} else if ( pair == 2 ) {
if ( mass < minm13 || mass > maxm13 ){
std::cout << std::string(53, ' ') << ": But its pole is outside the kinematically allowed range, so will not consider it narrow for the purposes of integration" << std::endl;
} else {
m13NarrowRes.push_back( std::make_pair(mass,width) );
if ( fullySymmetricDP_ ) {
m23NarrowRes.push_back( std::make_pair(mass,width) );
m12NarrowRes.push_back( std::make_pair(mass,width) );
} else if ( symmetricalDP_ || ( flavConjDP_ && forceSymmetriseIntegration_ ) ) {
m23NarrowRes.push_back( std::make_pair(mass,width) );
}
}
} else if ( pair == 3 ) {
if ( mass < minm12 || mass > maxm12 ){
std::cout << std::string(53, ' ') << ": But its pole is outside the kinematically allowed range, so will not consider it narrow for the purposes of integration" << std::endl;
} else {
m12NarrowRes.push_back( std::make_pair(mass,width) );
if ( fullySymmetricDP_ ) {
m13NarrowRes.push_back( std::make_pair(mass,width) );
m23NarrowRes.push_back( std::make_pair(mass,width) );
}
}
} else {
std::cerr << "WARNING in LauIsobarDynamics::calcDPNormalisationScheme : strange pair integer, " << pair << ", for resonance \"" << (*iter)->getResonanceName() << "\"" << std::endl;
}
}
// Depending on how many narrow resonances we have and where they are
// we adopt different approaches
if ( ! m12NarrowRes.empty() ) {
// We have at least one narrow resonance in m12
// Switch to using the square DP for the integration
// TODO - for the time being just use a single grid that is reasonably fine by default and tunable
// - can later consider whether there's a need to split up the mPrime axis into regions around particularly narrow resonances in m12
// - but it seems that this isn't really needed since even the default tune gives a good resolution for most narrow resonances such as phi / chi_c0
std::cout<<"INFO in LauIsobarDynamics::calcDPNormalisationScheme : One or more narrow resonances found in m12, integrating over whole square Dalitz plot with bin widths of "<<mPrimeBinWidth_<<" in mPrime and "<<thPrimeBinWidth_<<" in thetaPrime..."<<std::endl;
// Make sure that the kinematics will calculate the square DP co-ordinates
if ( ! kinematics_->squareDP() ) {
std::cerr << "WARNING in LauIsobarDynamics::calcDPNormalisationScheme : forcing kinematics to calculate the required square DP co-ordinates" << std::endl;
kinematics_->squareDP(kTRUE);
}
dpPartialIntegralInfo_.push_back(new LauDPPartialIntegralInfo(0.0, 1.0, 0.0, 1.0, mPrimeBinWidth_, thPrimeBinWidth_, precision, nAmp_, nIncohAmp_, kTRUE, kinematics_));
} else if (m13NarrowRes.empty() && m23NarrowRes.empty()) {
// There are no narrow resonances, so we just do a single grid over the whole DP
std::cout << "INFO in LauIsobarDynamics::calcDPNormalisationScheme : No narrow resonances found, integrating over whole Dalitz plot..." << std::endl;
dpPartialIntegralInfo_.push_back(new LauDPPartialIntegralInfo(minm13, maxm13, minm23, maxm23, m13BinWidth_, m23BinWidth_, precision, nAmp_, nIncohAmp_));
} else {
// Get regions that correspond to narrow resonances in m13 and m23, and correct for overlaps in each dimension (to use the finest binning)
// Sort resonances by ascending mass to calculate regions properly
std::sort(m13NarrowRes.begin(), m13NarrowRes.end());
std::sort(m23NarrowRes.begin(), m23NarrowRes.end());
// For each narrow resonance in m13, determine the corresponding window and its binning
std::vector<std::pair<Double_t, Double_t> > m13Regions;
std::vector<Double_t> m13Binnings;
for ( std::vector<std::pair<Double_t, Double_t> >::const_iterator iter = m13NarrowRes.begin(); iter != m13NarrowRes.end(); ++iter ) {
Double_t mass = iter->first;
Double_t width = iter->second;
Double_t regionBegin = mass - 5.0 * width;
Double_t regionEnd = mass + 5.0 * width;
Double_t binning = width / binningFactor_;
// check if we ought to extend the region to the edge of the phase space (in either direction)
if ( regionBegin < (minm13+50.0*m13BinWidth_) ) {
std::cout << "INFO in LauIsobarDynamics::calcDPNormalisationScheme : Resonance at m13 = " << mass << " is close to threshold, extending integration region" << std::endl;
regionBegin = minm13;
}
if ( regionEnd > (maxm13-50.0*m13BinWidth_) ) {
std::cout << "INFO in LauIsobarDynamics::calcDPNormalisationScheme : Resonance at m13 = " << mass << " is close to upper edge of phase space, extending integration region" << std::endl;
regionEnd = maxm13;
}
m13Regions.push_back(std::make_pair(regionBegin, regionEnd));
m13Binnings.push_back(binning);
}
// For each narrow resonance in m23, determine the corresponding window and its binning
std::vector<std::pair<Double_t, Double_t> > m23Regions;
std::vector<Double_t> m23Binnings;
for ( std::vector<std::pair<Double_t, Double_t> >::const_iterator iter = m23NarrowRes.begin(); iter != m23NarrowRes.end(); ++iter ) {
Double_t mass = iter->first;
Double_t width = iter->second;
Double_t regionBegin = mass - 5.0 * width;
Double_t regionEnd = mass + 5.0 * width;
Double_t binning = width / binningFactor_;
// check if we ought to extend the region to the edge of the phase space (in either direction)
if ( regionBegin < (minm23+50.0*m23BinWidth_) ) {
std::cout << "INFO in LauIsobarDynamics::calcDPNormalisationScheme : Resonance at m23 = " << mass << " is close to threshold, extending integration region" << std::endl;
regionBegin = minm23;
}
if ( regionEnd > (maxm23-50.0*m23BinWidth_) ) {
std::cout << "INFO in LauIsobarDynamics::calcDPNormalisationScheme : Resonance at m23 = " << mass << " is close to upper edge of phase space, extending integration region" << std::endl;
regionEnd = maxm23;
}
m23Regions.push_back(std::make_pair(regionBegin, regionEnd));
m23Binnings.push_back(binning);
}
// Sort out overlaps between regions in the same mass pairing
this->correctDPOverlap(m13Regions, m13Binnings);
this->correctDPOverlap(m23Regions, m23Binnings);
// Get the narrow resonance regions plus any overlap region
std::vector<LauDPPartialIntegralInfo*> fineScheme13 = this->m13IntegrationRegions(m13Regions, m23Regions, m13Binnings, precision, m13BinWidth_);
std::vector<LauDPPartialIntegralInfo*> fineScheme23 = this->m23IntegrationRegions(m13Regions, m23Regions, m13Binnings, m23Binnings, precision, m23BinWidth_);
// Get coarse regions by calculating the gaps between the
// narrow resonances and using the same functions to create
// the integration grid object for each
std::vector< std::pair<Double_t,Double_t> > coarseRegions = this->formGapsFromRegions(m13Regions, minm13, maxm13);
std::vector<Double_t> coarseBinning( fineScheme13.size()+1, m13BinWidth_ );
std::vector<LauDPPartialIntegralInfo*> coarseScheme = this->m13IntegrationRegions(coarseRegions, m23Regions, coarseBinning, precision, m13BinWidth_);
dpPartialIntegralInfo_.insert(dpPartialIntegralInfo_.end(), fineScheme13.begin(), fineScheme13.end());
dpPartialIntegralInfo_.insert(dpPartialIntegralInfo_.end(), fineScheme23.begin(), fineScheme23.end());
dpPartialIntegralInfo_.insert(dpPartialIntegralInfo_.end(), coarseScheme.begin(), coarseScheme.end());
// Remove any null pointer entries in the integral list
// (that are produced when an integration region with no
// interior points is defined)
this->cullNullRegions(dpPartialIntegralInfo_);
}
normalizationSchemeDone_ = kTRUE;
}
void LauIsobarDynamics::setIntegralBinWidths(const Double_t m13BinWidth, const Double_t m23BinWidth,
const Double_t mPrimeBinWidth, const Double_t thPrimeBinWidth)
{
// Set the bin widths for the m13 vs m23 integration grid
m13BinWidth_ = m13BinWidth;
m23BinWidth_ = m23BinWidth;
// Set the bin widths for the m' vs theta' integration grid
mPrimeBinWidth_ = mPrimeBinWidth;
thPrimeBinWidth_ = thPrimeBinWidth;
}
void LauIsobarDynamics::calcDPPartialIntegral(LauDPPartialIntegralInfo* intInfo)
{
// Calculate the integrals for all parts of the amplitude in the given region of the DP
const Bool_t squareDP = intInfo->getSquareDP();
const UInt_t nm13Points = intInfo->getnm13Points();
const UInt_t nm23Points = intInfo->getnm23Points();
//Double_t dpArea(0.0);
for (UInt_t i = 0; i < nm13Points; ++i) {
const Double_t m13 = intInfo->getM13Value(i);
const Double_t m13Sq = m13*m13;
for (UInt_t j = 0; j < nm23Points; ++j) {
const Double_t m23 = intInfo->getM23Value(j);
const Double_t m23Sq = m23*m23;
const Double_t weight = intInfo->getWeight(i,j);
// Calculate the integral contributions for each resonance.
// Only points within the DP area contribute.
// This also calculates the total DP area as a check.
// NB if squareDP is true, m13 and m23 are actually mPrime and thetaPrime
Bool_t withinDP = squareDP ? kinematics_->withinSqDPLimits(m13, m23) : kinematics_->withinDPLimits(m13Sq, m23Sq);
if (withinDP == kTRUE) {
if ( squareDP ) {
// NB m13 and m23 are actually mPrime and thetaPrime
kinematics_->updateSqDPKinematics(m13, m23);
} else {
kinematics_->updateKinematics(m13Sq, m23Sq);
}
this->calculateAmplitudes(intInfo, i, j);
this->addGridPointToIntegrals(weight);
// Increment total DP area
//dpArea += weight;
}
} // j weights loop
} // i weights loop
// Print out DP area to check whether we have a sensible output
//std::cout<<" : dpArea = "<<dpArea<<std::endl;
}
void LauIsobarDynamics::calculateAmplitudes( LauDPPartialIntegralInfo* intInfo, const UInt_t m13Point, const UInt_t m23Point )
{
const std::set<UInt_t>::const_iterator intEnd = integralsToBeCalculated_.end();
for (UInt_t iAmp = 0; iAmp < nAmp_; ++iAmp) {
if ( integralsToBeCalculated_.find(iAmp) != intEnd ) {
// Calculate the dynamics for this resonance
ff_[iAmp] = this->resAmp(iAmp);
// Store the new value in the integration info object
intInfo->storeAmplitude( m13Point, m23Point, iAmp, ff_[iAmp] );
} else {
// Retrieve the cached value of the amplitude
ff_[iAmp] = intInfo->getAmplitude( m13Point, m23Point, iAmp );
}
}
for (UInt_t iAmp = 0; iAmp < nIncohAmp_; ++iAmp) {
if ( integralsToBeCalculated_.find(iAmp+nAmp_) != intEnd ) {
// Calculate the dynamics for this resonance
incohInten_[iAmp] = this->incohResAmp(iAmp);
// Store the new value in the integration info object
intInfo->storeIntensity( m13Point, m23Point, iAmp, incohInten_[iAmp] );
} else {
// Retrieve the cached value of the amplitude
incohInten_[iAmp] = intInfo->getIntensity( m13Point, m23Point, iAmp );
}
}
// If symmetric, do as above with flipped kinematics and add to amplitude
// (No need to retrieve the cache if this was done in the first case)
if ( symmetricalDP_ == kTRUE ) {
kinematics_->flipAndUpdateKinematics();
for (UInt_t iAmp = 0; iAmp < nAmp_; ++iAmp) {
if ( (integralsToBeCalculated_.find(iAmp) != intEnd) && !sigResonances_[iAmp]->preSymmetrised() ) {
// Calculate the dynamics for this resonance
ff_[iAmp] += this->resAmp(iAmp);
// Store the new value in the integration info object
intInfo->storeAmplitude( m13Point, m23Point, iAmp, ff_[iAmp] );
}
}
for (UInt_t iAmp = 0; iAmp < nIncohAmp_; ++iAmp) {
if ( (integralsToBeCalculated_.find(iAmp+nAmp_) != intEnd) && !sigIncohResonances_[iAmp]->preSymmetrised() ) {
// Calculate the dynamics for this resonance
incohInten_[iAmp] += this->incohResAmp(iAmp);
// Store the new value in the integration info object
intInfo->storeIntensity( m13Point, m23Point, iAmp, incohInten_[iAmp] );
}
}
kinematics_->flipAndUpdateKinematics();
}
if (fullySymmetricDP_ == kTRUE) {
// rotate and evaluate
kinematics_->rotateAndUpdateKinematics();
for (UInt_t iAmp = 0; iAmp < nAmp_; ++iAmp) {
if ( (integralsToBeCalculated_.find(iAmp) != intEnd) && !sigResonances_[iAmp]->preSymmetrised() ) {
// Calculate the dynamics for this resonance
ff_[iAmp] += this->resAmp(iAmp);
// Store the new value in the integration info object
intInfo->storeAmplitude( m13Point, m23Point, iAmp, ff_[iAmp] );
}
}
for (UInt_t iAmp = 0; iAmp < nIncohAmp_; ++iAmp) {
if ( (integralsToBeCalculated_.find(iAmp+nAmp_) != intEnd) && !sigIncohResonances_[iAmp]->preSymmetrised() ) {
// Calculate the dynamics for this resonance
incohInten_[iAmp] += this->incohResAmp(iAmp);
// Store the new value in the integration info object
intInfo->storeIntensity( m13Point, m23Point, iAmp, incohInten_[iAmp] );
}
}
// rotate and evaluate
kinematics_->rotateAndUpdateKinematics();
for (UInt_t iAmp = 0; iAmp < nAmp_; ++iAmp) {
if ( (integralsToBeCalculated_.find(iAmp) != intEnd) && !sigResonances_[iAmp]->preSymmetrised() ) {
// Calculate the dynamics for this resonance
ff_[iAmp] += this->resAmp(iAmp);
// Store the new value in the integration info object
intInfo->storeAmplitude( m13Point, m23Point, iAmp, ff_[iAmp] );
}
}
for (UInt_t iAmp = 0; iAmp < nIncohAmp_; ++iAmp) {
if ( (integralsToBeCalculated_.find(iAmp+nAmp_) != intEnd) && !sigIncohResonances_[iAmp]->preSymmetrised() ) {
// Calculate the dynamics for this resonance
incohInten_[iAmp] += this->incohResAmp(iAmp);
// Store the new value in the integration info object
intInfo->storeIntensity( m13Point, m23Point, iAmp, incohInten_[iAmp] );
}
}
// rotate, flip and evaluate
kinematics_->rotateAndUpdateKinematics();
kinematics_->flipAndUpdateKinematics();
for (UInt_t iAmp = 0; iAmp < nAmp_; ++iAmp) {
if ( (integralsToBeCalculated_.find(iAmp) != intEnd) && !sigResonances_[iAmp]->preSymmetrised() ) {
// Calculate the dynamics for this resonance
ff_[iAmp] += this->resAmp(iAmp);
// Store the new value in the integration info object
intInfo->storeAmplitude( m13Point, m23Point, iAmp, ff_[iAmp] );
}
}
for (UInt_t iAmp = 0; iAmp < nIncohAmp_; ++iAmp) {
if ( (integralsToBeCalculated_.find(iAmp+nAmp_) != intEnd) && !sigIncohResonances_[iAmp]->preSymmetrised() ) {
// Calculate the dynamics for this resonance
incohInten_[iAmp] += this->incohResAmp(iAmp);
// Store the new value in the integration info object
intInfo->storeIntensity( m13Point, m23Point, iAmp, incohInten_[iAmp] );
}
}
// rotate and evaluate
kinematics_->rotateAndUpdateKinematics();
for (UInt_t iAmp = 0; iAmp < nAmp_; ++iAmp) {
if ( (integralsToBeCalculated_.find(iAmp) != intEnd) && !sigResonances_[iAmp]->preSymmetrised() ) {
// Calculate the dynamics for this resonance
ff_[iAmp] += this->resAmp(iAmp);
// Store the new value in the integration info object
intInfo->storeAmplitude( m13Point, m23Point, iAmp, ff_[iAmp] );
}
}
for (UInt_t iAmp = 0; iAmp < nIncohAmp_; ++iAmp) {
if ( (integralsToBeCalculated_.find(iAmp+nAmp_) != intEnd) && !sigIncohResonances_[iAmp]->preSymmetrised() ) {
// Calculate the dynamics for this resonance
incohInten_[iAmp] += this->incohResAmp(iAmp);
// Store the new value in the integration info object
intInfo->storeIntensity( m13Point, m23Point, iAmp, incohInten_[iAmp] );
}
}
// rotate and evaluate
kinematics_->rotateAndUpdateKinematics();
for (UInt_t iAmp = 0; iAmp < nAmp_; ++iAmp) {
if ( (integralsToBeCalculated_.find(iAmp) != intEnd) && !sigResonances_[iAmp]->preSymmetrised() ) {
// Calculate the dynamics for this resonance
ff_[iAmp] += this->resAmp(iAmp);
// Store the new value in the integration info object
intInfo->storeAmplitude( m13Point, m23Point, iAmp, ff_[iAmp] );
}
}
for (UInt_t iAmp = 0; iAmp < nIncohAmp_; ++iAmp) {
if ( (integralsToBeCalculated_.find(iAmp+nAmp_) != intEnd) && !sigIncohResonances_[iAmp]->preSymmetrised() ) {
// Calculate the dynamics for this resonance
incohInten_[iAmp] += this->incohResAmp(iAmp);
// Store the new value in the integration info object
intInfo->storeIntensity( m13Point, m23Point, iAmp, incohInten_[iAmp] );
}
}
// rotate and flip to get us back to where we started
kinematics_->rotateAndUpdateKinematics();
kinematics_->flipAndUpdateKinematics();
}
// If we haven't cached the data, then we need to find out the efficiency.
eff_ = this->retrieveEfficiency();
intInfo->storeEfficiency( m13Point, m23Point, eff_ );
}
void LauIsobarDynamics::calculateAmplitudes()
{
std::set<UInt_t>::const_iterator iter = integralsToBeCalculated_.begin();
const std::set<UInt_t>::const_iterator intEnd = integralsToBeCalculated_.end();
for ( iter = integralsToBeCalculated_.begin(); iter != intEnd; ++iter) {
// Calculate the dynamics for this resonance
if(*iter < nAmp_) {
ff_[*iter] = this->resAmp(*iter);
} else {
incohInten_[*iter-nAmp_] = this->incohResAmp(*iter-nAmp_);
}
}
if ( symmetricalDP_ == kTRUE ) {
kinematics_->flipAndUpdateKinematics();
for ( iter = integralsToBeCalculated_.begin(); iter != intEnd; ++iter) {
// Calculate the dynamics for this resonance
if(*iter < nAmp_ && !sigResonances_[*iter]->preSymmetrised() ) {
ff_[*iter] += this->resAmp(*iter);
} else if (*iter >= nAmp_ && !sigIncohResonances_[*iter-nAmp_]->preSymmetrised() ){
incohInten_[*iter-nAmp_] += this->incohResAmp(*iter-nAmp_);
}
}
kinematics_->flipAndUpdateKinematics();
}
if ( fullySymmetricDP_ == kTRUE ) {
// rotate and evaluate
kinematics_->rotateAndUpdateKinematics();
for ( iter = integralsToBeCalculated_.begin(); iter != intEnd; ++iter) {
if(*iter < nAmp_ && !sigResonances_[*iter]->preSymmetrised() ) {
ff_[*iter] += this->resAmp(*iter);
} else if (*iter >= nAmp_ && !sigIncohResonances_[*iter-nAmp_]->preSymmetrised() ){
incohInten_[*iter-nAmp_] += this->incohResAmp(*iter-nAmp_);
}
}
// rotate and evaluate
kinematics_->rotateAndUpdateKinematics();
for ( iter = integralsToBeCalculated_.begin(); iter != intEnd; ++iter) {
if(*iter < nAmp_ && !sigResonances_[*iter]->preSymmetrised() ) {
ff_[*iter] += this->resAmp(*iter);
} else if (*iter >= nAmp_ && !sigIncohResonances_[*iter-nAmp_]->preSymmetrised() ){
incohInten_[*iter-nAmp_] += this->incohResAmp(*iter-nAmp_);
}
}
// rotate, flip and evaluate
kinematics_->rotateAndUpdateKinematics();
kinematics_->flipAndUpdateKinematics();
for ( iter = integralsToBeCalculated_.begin(); iter != intEnd; ++iter) {
if(*iter < nAmp_ && !sigResonances_[*iter]->preSymmetrised() ) {
ff_[*iter] += this->resAmp(*iter);
} else if (*iter >= nAmp_ && !sigIncohResonances_[*iter-nAmp_]->preSymmetrised() ){
incohInten_[*iter-nAmp_] += this->incohResAmp(*iter-nAmp_);
}
}
// rotate and evaluate
kinematics_->rotateAndUpdateKinematics();
for ( iter = integralsToBeCalculated_.begin(); iter != intEnd; ++iter) {
if(*iter < nAmp_ && !sigResonances_[*iter]->preSymmetrised() ) {
ff_[*iter] += this->resAmp(*iter);
} else if (*iter >= nAmp_ && !sigIncohResonances_[*iter-nAmp_]->preSymmetrised() ){
incohInten_[*iter-nAmp_] += this->incohResAmp(*iter-nAmp_);
}
}
// rotate and evaluate
kinematics_->rotateAndUpdateKinematics();
for ( iter = integralsToBeCalculated_.begin(); iter != intEnd; ++iter) {
if(*iter < nAmp_ && !sigResonances_[*iter]->preSymmetrised() ) {
ff_[*iter] += this->resAmp(*iter);
} else if (*iter >= nAmp_ && !sigIncohResonances_[*iter-nAmp_]->preSymmetrised() ){
incohInten_[*iter-nAmp_] += this->incohResAmp(*iter-nAmp_);
}
}
// rotate and flip to get us back to where we started
kinematics_->rotateAndUpdateKinematics();
kinematics_->flipAndUpdateKinematics();
}
// If we haven't cached the data, then we need to find out the efficiency.
eff_ = this->retrieveEfficiency();
}
void LauIsobarDynamics::calcTotalAmp(const Bool_t useEff)
{
// Reset the total amplitude to zero
totAmp_.zero();
// Loop over all signal amplitudes
LauComplex ATerm;
for (UInt_t i = 0; i < nAmp_; ++i) {
// Get the partial complex amplitude - (mag, phase)*(resonance dynamics)
ATerm = Amp_[i]*ff_[i];
// Scale this contribution by its relative normalisation w.r.t. the whole dynamics
ATerm.rescale(fNorm_[i]);
// Add this partial amplitude to the sum
totAmp_ += ATerm;
} // Loop over amplitudes
// |Sum of partial amplitudes|^2
ASq_ = totAmp_.abs2();
for (UInt_t i = 0; i < nIncohAmp_; ++i) {
// Get the partial complex amplitude - (mag, phase)
ATerm = Amp_[i+nAmp_];
// Scale this contribution by its relative normalisation w.r.t. the whole dynamics
ATerm.rescale(fNorm_[i+nAmp_]);
// Add this partial amplitude to the sum
ASq_ += ATerm.abs2()*incohInten_[i];
}
// Apply the efficiency correction for this event.
// Multiply the amplitude squared sum by the DP efficiency
if ( useEff ) {
ASq_ *= eff_;
}
}
void LauIsobarDynamics::addGridPointToIntegrals(const Double_t weight)
{
// Combine the Gauss-Legendre weight with the efficiency
const Double_t effWeight = eff_*weight;
LauComplex fifjEffSumTerm;
LauComplex fifjSumTerm;
// Calculates the half-matrix of amplitude-squared and interference
// terms (dynamical part only)
// Add the values at this point on the integration grid to the sums
// (one weighted only by the integration weights, one also weighted by
// the efficiency)
for (UInt_t i = 0; i < nAmp_; ++i) {
// Add the dynamical amplitude squared for this resonance.
Double_t fSqVal = ff_[i].abs2();
fSqSum_[i] += fSqVal*weight;
fSqEffSum_[i] += fSqVal*effWeight;
for (UInt_t j = i; j < nAmp_; ++j) {
fifjEffSumTerm = fifjSumTerm = ff_[i]*ff_[j].conj();
fifjEffSumTerm.rescale(effWeight);
fifjEffSum_[i][j] += fifjEffSumTerm;
fifjSumTerm.rescale(weight);
fifjSum_[i][j] += fifjSumTerm;
}
}
for (UInt_t i = 0; i < nIncohAmp_; ++i) {
// Add the dynamical amplitude squared for this resonance.
Double_t fSqVal = incohInten_[i];
fSqSum_[i+nAmp_] += fSqVal*weight;
fSqEffSum_[i+nAmp_] += fSqVal*effWeight;
}
}
LauComplex LauIsobarDynamics::resAmp(const UInt_t index)
{
// Routine to calculate the resonance dynamics (amplitude)
// using the appropriate Breit-Wigner/Form Factors.
LauComplex amp = LauComplex(0.0, 0.0);
if ( index >= nAmp_ ) {
std::cerr<<"ERROR in LauIsobarDynamics::resAmp : index = "<<index<<" is not within the range 0 to "<<nAmp_-1<<std::endl;
return amp;
}
// Get the signal resonance from the stored vector
LauAbsResonance* sigResonance = sigResonances_[index];
if (sigResonance == 0) {
std::cerr<<"ERROR in LauIsobarDynamics::resAmp : Couldn't retrieve valid resonance with index = "<<index<<std::endl;
return amp;
}
amp = sigResonance->amplitude(kinematics_);
return amp;
}
Double_t LauIsobarDynamics::incohResAmp(const UInt_t index)
{
// Routine to calculate the incoherent resonance dynamics (intensity)
// using the appropriate Breit-Wigner/Form Factors.
Double_t intensity = 0.;
if ( index >= nIncohAmp_ ) {
std::cerr<<"ERROR in LauIsobarDynamics::incohResAmp : index = "<<index<<" is not within the range 0 to "<<nIncohAmp_-1<<std::endl;
return intensity;
}
// Get the signal resonance from the stored vector
LauAbsIncohRes* sigResonance = sigIncohResonances_[index];
if (sigResonance == 0) {
std::cerr<<"ERROR in LauIsobarDynamics::incohResAmp : Couldn't retrieve valid incoherent resonance with index = "<<index<<std::endl;
return intensity;
}
LauComplex ff = sigResonance->amplitude(kinematics_);
intensity = sigResonance->intensityFactor(kinematics_)*ff.abs2();
return intensity;
}
void LauIsobarDynamics::setFFTerm(const UInt_t index, const Double_t realPart, const Double_t imagPart)
{
// Function to set the internal ff term (normally calculated using resAmp(index)).
if ( index >= nAmp_ ) {
std::cerr<<"ERROR in LauIsobarDynamics::setFFTerm : index = "<<index<<" is not within the range 0 to "<<nAmp_-1<<std::endl;
return;
}
ff_[index].setRealImagPart( realPart, imagPart );
}
void LauIsobarDynamics::setIncohIntenTerm(const UInt_t index, const Double_t value)
{
// Function to set the internal incoherent intensity term (normally calculated using incohResAmp(index)).
if ( index >= nIncohAmp_ ) {
std::cerr<<"ERROR in LauIsobarDynamics::setIncohIntenTerm : index = "<<index<<" is not within the range 0 to "<<nIncohAmp_-1<<std::endl;
return;
}
incohInten_[index] = value;
}
void LauIsobarDynamics::calcExtraInfo(const Bool_t init)
{
// This method calculates the fit fractions, mean efficiency and total DP rate
Double_t fifjEffTot(0.0), fifjTot(0.0);
UInt_t i, j;
for (i = 0; i < nAmp_; i++) {
// Calculate the diagonal terms
TString name = "A"; name += i; name += "Sq_FitFrac";
fitFrac_[i][i].name(name);
name += "EffUnCorr";
fitFracEffUnCorr_[i][i].name(name);
Double_t fifjSumReal = fifjSum_[i][i].re();
Double_t sumTerm = Amp_[i].abs2()*fifjSumReal*fNorm_[i]*fNorm_[i];
fifjTot += sumTerm;
Double_t fifjEffSumReal = fifjEffSum_[i][i].re();
Double_t sumEffTerm = Amp_[i].abs2()*fifjEffSumReal*fNorm_[i]*fNorm_[i];
fifjEffTot += sumEffTerm;
fitFrac_[i][i].value(sumTerm);
fitFracEffUnCorr_[i][i].value(sumEffTerm);
}
for (i = 0; i < nAmp_; i++) {
for (j = i+1; j < nAmp_+nIncohAmp_; j++) {
// Calculate the cross-terms
TString name = "A"; name += i; name += "A"; name += j; name += "_FitFrac";
fitFrac_[i][j].name(name);
name += "EffUnCorr";
fitFracEffUnCorr_[i][j].name(name);
if(j >= nAmp_) {
//Set off-diagonal incoherent terms to zero
fitFrac_[i][j].value(0.);
fitFracEffUnCorr_[i][j].value(0.);
continue;
}
LauComplex AmpjConj = Amp_[j].conj();
LauComplex AmpTerm = Amp_[i]*AmpjConj;
Double_t crossTerm = 2.0*(AmpTerm*fifjSum_[i][j]).re()*fNorm_[i]*fNorm_[j];
fifjTot += crossTerm;
Double_t crossEffTerm = 2.0*(AmpTerm*fifjEffSum_[i][j]).re()*fNorm_[i]*fNorm_[j];
fifjEffTot += crossEffTerm;
fitFrac_[i][j].value(crossTerm);
fitFracEffUnCorr_[i][j].value(crossEffTerm);
}
}
for (i = nAmp_; i < nAmp_+nIncohAmp_; i++) {
// Calculate the incoherent terms
TString name = "A"; name += i; name += "Sq_FitFrac";
fitFrac_[i][i].name(name);
name += "EffUnCorr";
fitFracEffUnCorr_[i][i].name(name);
Double_t sumTerm = Amp_[i].abs2()*fSqSum_[i]*fNorm_[i]*fNorm_[i];
fifjTot += sumTerm;
Double_t sumEffTerm = Amp_[i].abs2()*fSqEffSum_[i]*fNorm_[i]*fNorm_[i];
fifjEffTot += sumEffTerm;
fitFrac_[i][i].value(sumTerm);
fitFracEffUnCorr_[i][i].value(sumEffTerm);
}
for (i = nAmp_; i < nAmp_+nIncohAmp_; i++) {
for (j = i+1; j < nAmp_+nIncohAmp_; j++) {
//Set off-diagonal incoherent terms to zero
TString name = "A"; name += i; name += "A"; name += j; name += "_FitFrac";
fitFrac_[i][j].name(name);
name += "EffUnCorr";
fitFracEffUnCorr_[i][j].name(name);
fitFrac_[i][j].value(0.);
fitFracEffUnCorr_[i][j].value(0.);
}
}
if (TMath::Abs(fifjTot) > 1e-10) {
meanDPEff_.value(fifjEffTot/fifjTot);
if (init) {
meanDPEff_.genValue( meanDPEff_.value() );
meanDPEff_.initValue( meanDPEff_.value() );
}
}
DPRate_.value(fifjTot);
if (init) {
DPRate_.genValue( DPRate_.value() );
DPRate_.initValue( DPRate_.value() );
}
// Now divide the fitFraction sums by the overall integral
for (i = 0; i < nAmp_+nIncohAmp_; i++) {
for (j = i; j < nAmp_+nIncohAmp_; j++) {
// Get the actual fractions by dividing by the total DP rate
Double_t fitFracVal = fitFrac_[i][j].value();
fitFracVal /= fifjTot;
fitFrac_[i][j].value( fitFracVal );
Double_t fitFracEffUnCorrVal = fitFracEffUnCorr_[i][j].value();
fitFracEffUnCorrVal /= fifjEffTot;
fitFracEffUnCorr_[i][j].value( fitFracEffUnCorrVal );
if (init) {
fitFrac_[i][j].genValue( fitFrac_[i][j].value() );
fitFrac_[i][j].initValue( fitFrac_[i][j].value() );
fitFracEffUnCorr_[i][j].genValue( fitFracEffUnCorr_[i][j].value() );
fitFracEffUnCorr_[i][j].initValue( fitFracEffUnCorr_[i][j].value() );
}
}
}
// Work out total fit fraction over all K-matrix components (for each propagator)
KMPropMap::iterator mapIter;
Int_t propInt(0);
for (mapIter = kMatrixPropagators_.begin(); mapIter != kMatrixPropagators_.end(); ++mapIter) {
LauKMatrixPropagator* thePropagator = mapIter->second;
TString propName = thePropagator->getName();
// Now loop over all resonances and find those which are K-matrix components for this propagator
Double_t kMatrixTotFitFrac(0.0);
for (i = 0; i < nAmp_; i++) {
Bool_t gotKMRes1 = this->gotKMatrixMatch(i, propName);
if (gotKMRes1 == kFALSE) {continue;}
Double_t fifjSumReal = fifjSum_[i][i].re();
Double_t sumTerm = Amp_[i].abs2()*fifjSumReal*fNorm_[i]*fNorm_[i];
//Double_t fifjEffSumReal = fifjEffSum_[i][i].re();
//Double_t sumEffTerm = Amp_[i].abs2()*fifjEffSumReal*fNorm_[i]*fNorm_[i];
kMatrixTotFitFrac += sumTerm;
for (j = i+1; j < nAmp_; j++) {
Bool_t gotKMRes2 = this->gotKMatrixMatch(j, propName);
if (gotKMRes2 == kFALSE) {continue;}
LauComplex AmpjConj = Amp_[j].conj();
LauComplex AmpTerm = Amp_[i]*AmpjConj;
Double_t crossTerm = 2.0*(AmpTerm*fifjSum_[i][j]).re()*fNorm_[i]*fNorm_[j];
//Double_t crossEffTerm = 2.0*(AmpTerm*fifjEffSum_[i][j]).re()*fNorm_[i]*fNorm_[j];
kMatrixTotFitFrac += crossTerm;
}
}
kMatrixTotFitFrac /= fifjTot;
TString parName("KMatrixTotFF_"); parName += propInt;
extraParameters_[propInt].name( parName );
extraParameters_[propInt].value(kMatrixTotFitFrac);
if (init) {
extraParameters_[propInt].genValue(kMatrixTotFitFrac);
extraParameters_[propInt].initValue(kMatrixTotFitFrac);
}
std::cout<<"INFO in LauIsobarDynamics::calcExtraInfo : Total K-matrix fit fraction for propagator "<<propName<<" is "<<kMatrixTotFitFrac<<std::endl;
++propInt;
}
// Calculate rho and omega fit fractions from LauRhoOmegaMix
// In principle the presence of LauRhoOmegaMix can be detected; however,
// working out the index of the correct component is hard.
// To prevent unwanted effects, this has to be manually turned on, with the
// assumption that the first two components are rho and rho_COPY in a LauCPFitModel
if (this->calculateRhoOmegaFitFractions_ && !init) {
int omegaID = 0;
int storeID = 1;
// Check which B flavour (and therefore which rho_COPY we are) by whether the FF is non-zero
// Only for CP fit though - for a 'simple' fit this is more complicated
if (fitFrac_[omegaID][omegaID].value() < 1E-4) {
omegaID = 1;
storeID = 0;
}
// Check this is really the correct model
LauRhoOmegaMix * rhomega = dynamic_cast<LauRhoOmegaMix *>(getResonance(omegaID));
if (rhomega != NULL) { // Proceed only if the dynamic_cast succeeded
std::cout << "INFO in LauIsobarDynamics::calcExtraInfo : Calculating omega fit fraction from resonance " << omegaID << std::endl;
std::cout << "INFO in LauIsobarDynamics::calcExtraInfo : Storing omega fit fraction in resonance " << storeID << std::endl;
// Tell the RhoOmegaMix model only to give us the omega amplitude-squared
rhomega->setWhichAmpSq(1);
// Recalculate the integrals for the omega fit-fraction
integralsDone_ = kFALSE;
this->resetNormVectors();
for ( UInt_t k(0); k < nAmp_+nIncohAmp_; ++k ) {
integralsToBeCalculated_.insert(k);
}
this->calcDPNormalisation();
integralsDone_ = kTRUE;
Double_t fifjSumRealOmega = fifjSum_[omegaID][omegaID].re();
// Recalculate the integrals for the rho fit-fraction
rhomega->setWhichAmpSq(2);
integralsDone_ = kFALSE;
this->resetNormVectors();
for ( UInt_t k(0); k < nAmp_+nIncohAmp_; ++k ) {
integralsToBeCalculated_.insert(k);
}
this->calcDPNormalisation();
integralsDone_ = kTRUE;
Double_t fitFracPartRho = Amp_[omegaID].abs2()*fifjSum_[omegaID][omegaID].re();
// Reset the RhoOmegaMix model and the integrals
rhomega->setWhichAmpSq(0);
integralsDone_ = kFALSE;
this->resetNormVectors();
for ( UInt_t k(0); k < nAmp_+nIncohAmp_; ++k ) {
integralsToBeCalculated_.insert(k);
}
this->calcDPNormalisation();
integralsDone_ = kTRUE;
// Store the omega fit-fraction in the rho_COPY location (which is otherwise empty)
// Store the rho fit-fraction in the rho location (overwriting the combined FF)
Double_t omegaFF = fifjSumRealOmega * fitFrac_[omegaID][omegaID].value();
fitFrac_[storeID][storeID].value(omegaFF);
fitFrac_[omegaID][omegaID].value(fitFracPartRho * fNorm_[omegaID] * fNorm_[omegaID] / DPRate_.value());
} else {
std::cout << "INFO in LauIsobarDynamics::calcExtraInfo : calculateRhoOmegaFitFractions is set, but the RhoOmegaMix model isn't in the right place. Ignoring this option." << std::endl;
}
}
}
Bool_t LauIsobarDynamics::gotKMatrixMatch(UInt_t resAmpInt, const TString& propName) const
{
Bool_t gotMatch(kFALSE);
if (resAmpInt >= nAmp_) {return kFALSE;}
const LauAbsResonance* theResonance = sigResonances_[resAmpInt];
if (theResonance == 0) {return kFALSE;}
Int_t resModelInt = theResonance->getResonanceModel();
if (resModelInt == LauAbsResonance::KMatrix) {
TString resName = theResonance->getResonanceName();
KMStringMap::const_iterator kMPropSetIter = kMatrixPropSet_.find(resName);
if (kMPropSetIter != kMatrixPropSet_.end()) {
TString kmPropString = kMPropSetIter->second;
if (kmPropString == propName) {gotMatch = kTRUE;}
}
}
return gotMatch;
}
Double_t LauIsobarDynamics::calcSigDPNorm()
{
// Calculate the normalisation for the log-likelihood function.
DPNorm_ = 0.0;
for (UInt_t i = 0; i < nAmp_; i++) {
// fifjEffSum is the contribution from the term involving the resonance
// dynamics (f_i for resonance i) and the efficiency term.
Double_t fifjEffSumReal = fifjEffSum_[i][i].re();
// We need to normalise this contribution w.r.t. the complete dynamics in the DP.
// Hence we scale by the fNorm_i factor (squared), which is calculated by the
// initialise() function, when the normalisation integrals are calculated and cached.
// We also include the complex amplitude squared to get the total normalisation
// contribution from this resonance.
DPNorm_ += Amp_[i].abs2()*fifjEffSumReal*fNorm_[i]*fNorm_[i];
}
// We now come to the cross-terms (between resonances i and j) in the normalisation.
for (UInt_t i = 0; i < nAmp_; i++) {
for (UInt_t j = i+1; j < nAmp_; j++) {
LauComplex AmpjConj = Amp_[j].conj();
LauComplex AmpTerm = Amp_[i]*AmpjConj;
// Again, fifjEffSum is the contribution from the term involving the resonance
// dynamics (f_i*f_j_conjugate) and the efficiency cross term.
// Also include the relative normalisation between these two resonances w.r.t. the
// total DP dynamical structure (fNorm_i and fNorm_j) and the complex
// amplitude squared (mag,phase) part.
DPNorm_ += 2.0*(AmpTerm*fifjEffSum_[i][j]).re()*fNorm_[i]*fNorm_[j];
}
}
for (UInt_t i = 0; i < nIncohAmp_; i++) {
DPNorm_ += Amp_[i+nAmp_].abs2()*fSqEffSum_[i+nAmp_]*fNorm_[i+nAmp_]*fNorm_[i+nAmp_];
}
return DPNorm_;
}
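Reading the three loops of calcSigDPNorm together (with c_i the complex coefficients Amp_[i], N_i the fNorm_ factors, epsilon the efficiency and f_i the resonance dynamics), the quantity accumulated in DPNorm_ can be summarised as:

```latex
\mathcal{N}_{\mathrm{DP}}
  = \sum_{i} |c_i|^2 N_i^2 \left\langle \varepsilon\, |f_i|^2 \right\rangle
  + 2 \sum_{i<j} \operatorname{Re}\!\left[ c_i c_j^{*}\, N_i N_j \left\langle \varepsilon\, f_i f_j^{*} \right\rangle \right]
  + \sum_{k} |c_k|^2 N_k^2 \left\langle \varepsilon\, |f_k|^2 \right\rangle
```

where the angle brackets denote integrals over the Dalitz plot (the cached fifjEffSum_ and fSqEffSum_ quantities) and the final sum runs over the incoherent amplitudes, which contribute no cross terms.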
Bool_t LauIsobarDynamics::generate()
{
// Routine to generate a signal event according to the Dalitz plot
// model we have defined.
// We need to make sure to calculate everything for every resonance
integralsToBeCalculated_.clear();
for ( UInt_t i(0); i < nAmp_+nIncohAmp_; ++i ) {
integralsToBeCalculated_.insert(i);
}
nSigGenLoop_ = 0;
Bool_t generatedSig(kFALSE);
while (generatedSig == kFALSE && nSigGenLoop_ < iterationsMax_) {
// Generates uniform DP phase-space distribution
Double_t m13Sq(0.0), m23Sq(0.0);
kinematics_->genFlatPhaseSpace(m13Sq, m23Sq);
// If we're in a symmetrical DP then we should only generate events in one half
// TODO - what do we do for fully symmetric?
if ( symmetricalDP_ && !fullySymmetricDP_ && m13Sq > m23Sq ) {
Double_t tmpSq = m13Sq;
m13Sq = m23Sq;
m23Sq = tmpSq;
}
// Calculate the amplitudes and total amplitude for the given DP point
this->calcLikelihoodInfo(m13Sq, m23Sq);
// Throw the random number and check it against the ratio of ASq and the accept/reject ceiling
const Double_t randNo = LauRandom::randomFun()->Rndm();
if (randNo > ASq_/aSqMaxSet_) {
++nSigGenLoop_;
} else {
generatedSig = kTRUE;
nSigGenLoop_ = 0;
// Keep a note of the maximum ASq that we've found
if (ASq_ > aSqMaxVar_) {aSqMaxVar_ = ASq_;}
}
} // while loop
// Check that all is well with the generation
Bool_t sigGenOK(kTRUE);
if (GenOK != this->checkToyMC(kTRUE,kFALSE)) {
sigGenOK = kFALSE;
}
return sigGenOK;
}
LauIsobarDynamics::ToyMCStatus LauIsobarDynamics::checkToyMC(Bool_t printErrorMessages, Bool_t printInfoMessages)
{
// Check whether we have generated the toy MC OK.
ToyMCStatus ok(GenOK);
if (nSigGenLoop_ >= iterationsMax_) {
// Exceeded maximum allowed iterations - the generation is too inefficient
if (printErrorMessages) {
std::cerr<<"WARNING in LauIsobarDynamics::checkToyMC : More than "<<iterationsMax_<<" iterations performed and no event accepted."<<std::endl;
}
if ( aSqMaxSet_ > 1.01 * aSqMaxVar_ ) {
if (printErrorMessages) {
std::cerr<<" : |A|^2 maximum was set to "<<aSqMaxSet_<<" but this appears to be too high."<<std::endl;
std::cerr<<" : Maximum value of |A|^2 found so far = "<<aSqMaxVar_<<std::endl;
std::cerr<<" : The value of |A|^2 maximum will be decreased and the generation restarted."<<std::endl;
}
aSqMaxSet_ = 1.01 * aSqMaxVar_;
std::cout<<"INFO in LauIsobarDynamics::checkToyMC : |A|^2 max reset to "<<aSqMaxSet_<<std::endl;
ok = MaxIterError;
} else {
if (printErrorMessages) {
std::cerr<<" : |A|^2 maximum was set to "<<aSqMaxSet_<<", which seems to be correct for the given model."<<std::endl;
std::cerr<<" : However, the generation is very inefficient - please check your model."<<std::endl;
std::cerr<<" : The maximum number of iterations will be increased and the generation restarted."<<std::endl;
}
iterationsMax_ *= 2;
std::cout<<"INFO in LauIsobarDynamics::checkToyMC : max number of iterations reset to "<<iterationsMax_<<std::endl;
ok = MaxIterError;
}
} else if (aSqMaxVar_ > aSqMaxSet_) {
// Found a value of ASq higher than the accept/reject ceiling - the generation is biased
if (printErrorMessages) {
std::cerr<<"WARNING in LauIsobarDynamics::checkToyMC : |A|^2 maximum was set to "<<aSqMaxSet_<<" but a value exceeding this was found: "<<aSqMaxVar_<<std::endl;
std::cerr<<" : Run was invalid, as any generated MC will be biased, according to the accept/reject method!"<<std::endl;
std::cerr<<"                                          : The value of |A|^2 maximum will be reset to be > "<<aSqMaxVar_<<" and the generation restarted."<<std::endl;
}
aSqMaxSet_ = 1.01 * aSqMaxVar_;
std::cout<<"INFO in LauIsobarDynamics::checkToyMC : |A|^2 max reset to "<<aSqMaxSet_<<std::endl;
ok = ASqMaxError;
} else if (printInfoMessages) {
std::cout<<"INFO in LauIsobarDynamics::checkToyMC : aSqMaxSet = "<<aSqMaxSet_<<" and aSqMaxVar = "<<aSqMaxVar_<<std::endl;
}
return ok;
}
void LauIsobarDynamics::setDataEventNo(UInt_t iEvt)
{
// Retrieve the data for event iEvt
if (data_.size() > iEvt) {
currentEvent_ = data_[iEvt];
} else {
std::cerr<<"ERROR in LauIsobarDynamics::setDataEventNo : Event index too large: "<<iEvt<<" >= "<<data_.size()<<"."<<std::endl;
return;
}
m13Sq_ = currentEvent_->retrievem13Sq();
m23Sq_ = currentEvent_->retrievem23Sq();
mPrime_ = currentEvent_->retrievemPrime();
thPrime_ = currentEvent_->retrievethPrime();
tagCat_ = currentEvent_->retrieveTagCat();
eff_ = currentEvent_->retrieveEff();
scfFraction_ = currentEvent_->retrieveScfFraction(); // These two are necessary, even though the dynamics don't actually use scfFraction_ or jacobian_,
jacobian_ = currentEvent_->retrieveJacobian(); // since this is at the heart of the caching mechanism.
}
void LauIsobarDynamics::calcLikelihoodInfo(const UInt_t iEvt)
{
// Calculate the likelihood and associated info
// for the given event using cached information
evtLike_ = 0.0;
// retrieve the cached dynamics from the tree:
// realAmp, imagAmp for each resonance plus efficiency, scf fraction and jacobian
this->setDataEventNo(iEvt);
// use realAmp and imagAmp to create the resonance amplitudes
const std::vector<Double_t>& realAmp = currentEvent_->retrieveRealAmp();
const std::vector<Double_t>& imagAmp = currentEvent_->retrieveImagAmp();
const std::vector<Double_t>& incohInten = currentEvent_->retrieveIncohIntensities();
for (UInt_t i = 0; i < nAmp_; i++) {
this->setFFTerm(i, realAmp[i], imagAmp[i]);
}
for (UInt_t i = 0; i < nIncohAmp_; i++) {
this->setIncohIntenTerm(i, incohInten[i]);
}
// Update the dynamics - calculates totAmp_ and then ASq_ = totAmp_.abs2() * eff_
// All calculated using cached information on the individual amplitudes and efficiency.
this->calcTotalAmp(kTRUE);
// Calculate the normalised matrix element squared value
if (DPNorm_ > 1e-10) {
evtLike_ = ASq_/DPNorm_;
}
}
void LauIsobarDynamics::calcLikelihoodInfo(const Double_t m13Sq, const Double_t m23Sq)
{
this->calcLikelihoodInfo(m13Sq, m23Sq, -1);
}
void LauIsobarDynamics::calcLikelihoodInfo(const Double_t m13Sq, const Double_t m23Sq, const Int_t tagCat)
{
// Calculate the likelihood and associated info
// for the given point in the Dalitz plot
// Also retrieves the SCF fraction in the bin where the event lies (done
// here to cache it along with the rest of the DP quantities, like eff)
// The jacobian for the square DP is calculated here for the same reason.
evtLike_ = 0.0;
// update the kinematics for the specified DP point
kinematics_->updateKinematics(m13Sq, m23Sq);
// calculate the jacobian and the scfFraction to cache them later
scfFraction_ = this->retrieveScfFraction(tagCat);
if (kinematics_->squareDP() == kTRUE) {
jacobian_ = kinematics_->calcSqDPJacobian();
}
// calculate the ff_ terms and retrieves eff_ from the efficiency model
this->calculateAmplitudes();
// then calculate totAmp_ and finally ASq_ = totAmp_.abs2() * eff_
this->calcTotalAmp(kTRUE);
// Calculate the normalised matrix element squared value
if (DPNorm_ > 1e-10) {
evtLike_ = ASq_/DPNorm_;
}
}
void LauIsobarDynamics::modifyDataTree()
{
if ( recalcNormalisation_ == kFALSE ) {
return;
}
const UInt_t nEvents = data_.size();
std::set<UInt_t>::const_iterator iter = integralsToBeCalculated_.begin();
const std::set<UInt_t>::const_iterator intEnd = integralsToBeCalculated_.end();
for (UInt_t iEvt = 0; iEvt < nEvents; ++iEvt) {
currentEvent_ = data_[iEvt];
std::vector<Double_t>& realAmp = currentEvent_->retrieveRealAmp();
std::vector<Double_t>& imagAmp = currentEvent_->retrieveImagAmp();
std::vector<Double_t>& incohInten = currentEvent_->retrieveIncohIntensities();
const Double_t m13Sq = currentEvent_->retrievem13Sq();
const Double_t m23Sq = currentEvent_->retrievem23Sq();
const Int_t tagCat = currentEvent_->retrieveTagCat();
this->calcLikelihoodInfo(m13Sq, m23Sq, tagCat);
for ( iter = integralsToBeCalculated_.begin(); iter != intEnd; ++iter) {
const UInt_t i = *iter;
if(*iter < nAmp_) {
realAmp[i] = ff_[i].re();
imagAmp[i] = ff_[i].im();
} else {
incohInten[i-nAmp_] = incohInten_[i-nAmp_];
}
}
}
}
void LauIsobarDynamics::fillDataTree(const LauFitDataTree& inputFitTree)
{
// In LauFitDataTree, the first two variables should always be m13^2 and m23^2.
// Other variables follow thus: charge/flavour tag prob, etc.
// Since this is the first caching, we need to make sure to calculate everything for every resonance
integralsToBeCalculated_.clear();
for ( UInt_t i(0); i < nAmp_+nIncohAmp_; ++i ) {
integralsToBeCalculated_.insert(i);
}
UInt_t nBranches = inputFitTree.nBranches();
if (nBranches < 2) {
std::cerr<<"ERROR in LauIsobarDynamics::fillDataTree : Expecting at least 2 variables " <<"in input data tree, but there are "<<nBranches<<"!\n";
std::cerr<<" : Make sure you have the right number of variables in your input data file!"<<std::endl;
gSystem->Exit(EXIT_FAILURE);
}
// Data structure that will cache the variables required to
// calculate the signal likelihood for this experiment
for ( std::vector<LauCacheData*>::iterator iter = data_.begin(); iter != data_.end(); ++iter ) {
delete (*iter);
}
data_.clear();
Double_t m13Sq(0.0), m23Sq(0.0);
Double_t mPrime(0.0), thPrime(0.0);
Int_t tagCat(-1);
std::vector<Double_t> realAmp(nAmp_), imagAmp(nAmp_);
Double_t eff(0.0), scfFraction(0.0), jacobian(0.0);
UInt_t nEvents = inputFitTree.nEvents() + inputFitTree.nFakeEvents();
data_.reserve(nEvents);
for (UInt_t iEvt = 0; iEvt < nEvents; ++iEvt) {
const LauFitData& dataValues = inputFitTree.getData(iEvt);
LauFitData::const_iterator iter = dataValues.find("m13Sq");
m13Sq = iter->second;
iter = dataValues.find("m23Sq");
m23Sq = iter->second;
// is there more than one tagging category?
// if so then we need to know the category from the data
if (scfFractionModel_.size()>1) {
iter = dataValues.find("tagCat");
tagCat = static_cast<Int_t>(iter->second);
}
// calculates the amplitudes and total amplitude for the given DP point
// tagging category not needed by dynamics, but to find out the scfFraction
this->calcLikelihoodInfo(m13Sq, m23Sq, tagCat);
// extract the real and imaginary parts of the ff_ terms for storage
for (UInt_t i = 0; i < nAmp_; i++) {
realAmp[i] = ff_[i].re();
imagAmp[i] = ff_[i].im();
}
if ( kinematics_->squareDP() ) {
mPrime = kinematics_->getmPrime();
thPrime = kinematics_->getThetaPrime();
}
eff = this->getEvtEff();
scfFraction = this->getEvtScfFraction();
jacobian = this->getEvtJacobian();
// store the data for each event in the list
data_.push_back( new LauCacheData() );
data_[iEvt]->storem13Sq(m13Sq);
data_[iEvt]->storem23Sq(m23Sq);
data_[iEvt]->storemPrime(mPrime);
data_[iEvt]->storethPrime(thPrime);
data_[iEvt]->storeTagCat(tagCat);
data_[iEvt]->storeEff(eff);
data_[iEvt]->storeScfFraction(scfFraction);
data_[iEvt]->storeJacobian(jacobian);
data_[iEvt]->storeRealAmp(realAmp);
data_[iEvt]->storeImagAmp(imagAmp);
data_[iEvt]->storeIncohIntensities(incohInten_);
}
}
Bool_t LauIsobarDynamics::gotReweightedEvent()
{
// Select the event (kinematics_) using an accept/reject method based on the
// ratio of the current value of ASq to the maximal value.
Bool_t accepted(kFALSE);
// calculate the ff_ terms and retrieves eff_ from the efficiency model
this->calculateAmplitudes();
// then calculate totAmp_ and finally ASq_ = totAmp_.abs2() (without the efficiency correction!)
this->calcTotalAmp(kFALSE);
// Compare the ASq value with the maximal value (set by the user)
if (LauRandom::randomFun()->Rndm() < ASq_/aSqMaxSet_) {
accepted = kTRUE;
}
if (ASq_ > aSqMaxVar_) {aSqMaxVar_ = ASq_;}
return accepted;
}
Double_t LauIsobarDynamics::getEventWeight()
{
// calculate the ff_ terms and retrieves eff_ from the efficiency model
this->calculateAmplitudes();
// then calculate totAmp_ and finally ASq_ = totAmp_.abs2() (without the efficiency correction!)
this->calcTotalAmp(kFALSE);
// return the event weight = the value of the squared amplitude
return ASq_;
}
void LauIsobarDynamics::updateCoeffs(const std::vector<LauComplex>& coeffs)
{
// Check that the number of coeffs is correct
if (coeffs.size() != this->getnTotAmp()) {
std::cerr << "ERROR in LauIsobarDynamics::updateCoeffs : Expected " << this->getnTotAmp() << " coefficients but got " << coeffs.size() << std::endl;
gSystem->Exit(EXIT_FAILURE);
}
// Now check if the coeffs have changed
Bool_t changed = (Amp_ != coeffs);
if (changed) {
// Copy the coeffs
Amp_ = coeffs;
}
// TODO should perhaps keep track of whether the resonance parameters have changed here and if none of those and none of the coeffs have changed then we don't need to update the norm
// Update the total normalisation for the signal likelihood
this->calcSigDPNorm();
}
TString LauIsobarDynamics::getConjResName(const TString& resName) const
{
- // Get the name of the charge conjugate resonance
- TString conjName(resName);
-
- Ssiz_t index1 = resName.Index("+");
- Ssiz_t index2 = resName.Index("-");
- if (index1 != -1) {
- conjName.Replace(index1, 1, "-");
- } else if (index2 != -1) {
- conjName.Replace(index2, 1, "+");
- }
+ // Get the name of the charge conjugate resonance
+ TString conjName(resName);
- return conjName;
+ Ssiz_t index1 = resName.Index("+");
+ Ssiz_t index2 = resName.Index("-");
+ if (index1 != -1) {
+ conjName.Replace(index1, 1, "-");
+ } else if (index2 != -1) {
+ conjName.Replace(index2, 1, "+");
+ }
+ return conjName;
}
Double_t LauIsobarDynamics::retrieveEfficiency()
{
Double_t eff(1.0);
if (effModel_ != 0) {
eff = effModel_->calcEfficiency(kinematics_);
}
return eff;
}
Double_t LauIsobarDynamics::retrieveScfFraction(Int_t tagCat)
{
Double_t scfFraction(0.0);
// scf model and eff model are exactly the same, functionally
// so we use an instance of LauAbsEffModel, and the method
// calcEfficiency actually calculates the scf fraction
if (tagCat == -1) {
if (!scfFractionModel_.empty()) {
scfFraction = scfFractionModel_[0]->calcEfficiency(kinematics_);
}
} else {
scfFraction = scfFractionModel_[tagCat]->calcEfficiency(kinematics_);
}
return scfFraction;
}
diff --git a/src/LauKMatrixPropFactory.cc b/src/LauKMatrixPropFactory.cc
index 890eafb..be6d06f 100644
--- a/src/LauKMatrixPropFactory.cc
+++ b/src/LauKMatrixPropFactory.cc
@@ -1,90 +1,90 @@
/*
Copyright 2008 University of Warwick
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/
/*
Laura++ package authors:
John Back
Paul Harrison
Thomas Latham
*/
/*! \file LauKMatrixPropFactory.cc
\brief File containing implementation of LauKMatrixPropFactory class.
*/
// Class for storing K-matrix propagator objects
// using the factory method.
#include "LauKMatrixPropFactory.hh"
#include "LauKMatrixPropagator.hh"
#include <iostream>
using std::cout;
using std::endl;
-ClassImp(LauKMatrixPropFactory)
-
// the singleton instance
LauKMatrixPropFactory* LauKMatrixPropFactory::theFactory_ = 0;
+ClassImp(LauKMatrixPropFactory)
+
LauKMatrixPropFactory::LauKMatrixPropFactory()
{
// Constructor
map_.clear();
}
LauKMatrixPropFactory::~LauKMatrixPropFactory()
{
// Destructor
KMatrixPropMap::iterator iter;
for (iter = map_.begin(); iter != map_.end(); ++iter) {
LauKMatrixPropagator* thePropagator = iter->second;
delete thePropagator;
}
map_.clear();
}
LauKMatrixPropFactory* LauKMatrixPropFactory::getInstance()
{
if (theFactory_ == 0) {
theFactory_ = new LauKMatrixPropFactory();
}
return theFactory_;
}
LauKMatrixPropagator* LauKMatrixPropFactory::getPropagator(const TString& name, const TString& paramFileName,
Int_t resPairAmpInt, Int_t nChannels,
Int_t nPoles, Int_t rowIndex)
{
LauKMatrixPropagator* thePropagator(0);
KMatrixPropMap::iterator iter = map_.find(name);
if ( iter != map_.end() ) {
// We have already made this propagator
thePropagator = iter->second;
} else {
// The propagator does not exist. Create it and store it in the map.
thePropagator = new LauKMatrixPropagator(name, paramFileName, resPairAmpInt,
nChannels, nPoles, rowIndex);
map_[name] = thePropagator;
}
return thePropagator;
}
