diff --git a/ChangeLog b/ChangeLog --- a/ChangeLog +++ b/ChangeLog @@ -1,6320 +1,6329 @@ +2019-02-20 James Monk + * src/Tools/RivetHepMC_2.cc: implementation of HepMC helper funcs for HepMC2 + * Add HepMCUtils namespace for helper funcs + * Relatives class spoofs HepMC3::Relatives interface using HepMC2 iterator_ranges + * Replace calls to particles_in() and particles_out() by HepMCUtils::particles + * Fix pyext/setup.py.in for both HepMC2 and HepMC3 + * Configures with either --with-hepmc=/blah or --with-hepmc3=/blah + * Compiles for either HepMC2 or HepMC3 (3.1 or higher) + 2019-02-17 James Monk * Update many build paths to cope with new HepMC3 include and lib paths :( * Use RivetHepMC namespace in place of HepMC:: -- RivetHepMC.hh should take care of it * configure.ac adds appropriate define to CPPFLAGS * wrangle rivet-buildplugin to cope with HepMC3 paths * HepMC::Reader is still a bit fubar :( 2018-12-10 James Monk * Merge from default 2018-12-06 James Monk * RivetHepMC.hh: Much simplified. Use only Const version of GenParticlePtr Only func declarations - two separate implementation files for HepMC2 or 3 * configure.ac: HepMC version dependence for building RivetHepMC_2.cc or RivetHepMC_3.cc * src/Makefile.am: HepMC version dependence * src/Tools/RivetHepMC_3.cc: implementations of funcs for HepMC v3 * bin/rivet-nopy.cc: re-implement using HepMC3 reader interface (may need separate implementation for HepMC2) * Particle.hh, Event.cc, Jet.cc: const GenParticle replaced by ConstGenParticle * Projections Beam, DISLepton, FinalPartons, FinalState, HeavyHadrons, InitialQuarks, MergedFinalState, PrimaryHadrons, UnstableFinalState, VetoedFinalState: Use ConstGenParticlePtr and vector<ConstGenParticlePtr> consistently * pluginATLAS: Start to fix some uses of const GenParticlePtr 2018-01-08 Andy Buckley * Add highlighted source to HTML analysis metadata listings. 2017-12-21 Andy Buckley * Version 2.6.0 release. 2017-12-20 Andy Buckley * Typo fix in TOTEM_2012_I1220862 data -- thanks to Anton Karneyeu. 2017-12-19 Andy Buckley * Adding contributed analyses: 1 ALICE, 6 ATLAS, 1 CMS. * Fix bugged PID codes in MC_PRINTEVENT. 2017-12-13 Andy Buckley * Protect Run methods and rivet script against being told to run from a missing or unreadable file. 2017-12-11 Andy Buckley * Replace manual event count & weight handling with a YODA Counter object. 2017-11-28 Andy Buckley * Providing neater & more YODA-consistent sumW and sumW2 methods on AnalysisHandler and Analysis. * Fix to Python version check for >= 2.7.10 (patch submitted to GNU) 2017-11-17 Andy Buckley * Various improvements to DISKinematics, DISLepton, and the ZEUS 2001 analysis. 2017-11-06 Andy Buckley * Extend AOPath regex to allow dots and underscores in weight names. 2017-10-27 Andy Buckley * Add energy to the list of cuts (both as Cuts::E and Cuts::energy) * Add missing pT (rather than Et) functions to SmearedMET, although they are just copies of the MET functions for now. 2017-10-09 Andy Buckley * Embed zstr and enable transparent reading of gzipped HepMC streams. 2017-10-03 Andy Buckley * Use Lester MT2 bisection header, and expose a few more mT2 function signatures. 2017-09-26 Andy Buckley * Use generic YODA read and write functions -- enables zipped yoda.gz output. * Add ChargedLeptons enum and mode argument to ZFinder and WFinder constructors, to allow control over whether the selected charged leptons are prompt. 
This is mostly cosmetic/for symmetry in the case of ZFinder, since the same can be achieved by passing a PromptFinalState as the fs argument, but for WFinder it's essential since passing a prompt final state screws up the MET calculation. Both are slightly different in the treatment of the lepton dressing, although conventionally this is an area where only prompt photons are used. 2017-09-25 Andy Buckley * Add deltaR2 functions for squared distances. 2017-09-10 Andy Buckley * Add white backgrounds to make-plots main and ratio plot frames. 2017-09-05 Andy Buckley * Add CMS_2016_PAS_TOP_15_006 jet multiplicity in lepton+jets ttbar at 8 TeV analysis. * Add CMS_2017_I1467451 Higgs -> WW -> emu + MET in 8 TeV pp analysis. * Add ATLAS_2017_I1609448 Z->ll + pTmiss analysis. * Add vectorMissingEt/Pt and vectorMET/MPT convenience methods to MissingMomentum. * Add ATLAS_2017_I1598613 J/psi + mu analysis. * Add CMS SUSY 0-lepton search CMS_2017_I1594909 (unofficial implementation, validated vs. published cutflows) 2017-09-04 Andy Buckley * Change license explicitly to GPLv3, cf. MCnet3 agreement. * Add a better jet smearing resolution parametrisation, based on GAMBIT code from Matthias Danninger. 2017-08-16 Andy Buckley * Protect make-plots against NaNs in error band values (patch from Dmitry Kalinkin). 2017-07-20 Andy Buckley * Add sumPt, sumP4, sumP3 utility functions. * Record truth particles as constituents of SmearedParticles output. * Rename UnstableFinalState -> UnstableParticles, and convert ZFinder to be a general ParticleFinder rather than FinalState. 2017-07-19 Andy Buckley * Add implicit cast from FourVector & FourMomentum to Vector3, and tweak mT implementation. * Add rawParticles() to ParticleFinder, and update DressedLeptons, WFinder, ZFinder and VetoedFinalState to cope. * Add isCharged() and isChargedLepton() to Particle. * Add constituents() and rawConstituents() to Particle. * Add support for specifying bin edges as braced initializer lists rather than explicit vector. 2017-07-18 Andy Buckley * Enable methods for booking of Histo2D and Profile2D from Scatter3D reference data. * Remove IsRef annotation from autobooked histogram objects. 2017-07-17 Andy Buckley * Add pair-smearing to SmearedJets. 2017-07-08 Andy Buckley * Add Event::centrality(), for non-HepMC access to the generator value if one has been recorded -- otherwise -1. 2017-06-28 Andy Buckley * Split the smearing functions into separate header files for generic/momentum, Particle, Jet, and experiment-specific smearings & efficiencies. 2017-06-27 Andy Buckley * Add 'JetFinder' alias for JetAlg, by analogy with ParticleFinder. 2017-06-26 Andy Buckley * Convert SmearedParticles to a more general list of combined efficiency+smearing functions, with extra constructors and some variadic template cleverness to allow implicit conversions from single-operation eff and smearing function. Yay for C++11 ;-) This work is based on a macro-based version of combined eff/smear functions by Karl Nordstrom -- thanks! * Add *EffFn, *SmearFn, and *EffSmearFn types to SmearingFunctions.hh. 2017-06-23 Andy Buckley * Add portable OpenMP enabling flags to AM_CXXFLAGS. 2017-06-22 Andy Buckley * Fix the smearing random number seed and make it thread-specific if OpenMP is available (not yet in the build system). * Remove the UNUSED macro and find an alternative solution for the cases where it was used, since there was a risk of macro clashes with embedding codes. * Add a -o output directory option to make-plots. 
* Vector4.hh: Add mT2(vec,vec) functions. 2017-06-21 Andy Buckley * Add a full set of in-range kinematics functors: ptInRange, (abs)etaInRange, (abs)phiInRange, deltaRInRange, deltaPhiInRange, deltaEtaInRange, deltaRapInRange. * Add a convenience JET_BTAG_EFFS functor with several constructors to handle mistag rates. * Add const efficiency functors operating on Particle, Jet, and FourMomentum. * Add const-efficiency constructor variants for SmearedParticles. 2017-06-21 Jon Butterworth * Fix normalisations in CMS_2016_I1454211. * Fix analysis name in ref histo paths for ATLAS_2017_I1591327. 2017-06-18 Andy Buckley * Move all standard plugin files into subdirs of src/Analyses, with some custom make rules driving rivet-buildplugin. 2017-06-18 David Grellscheid * Parallelise rivet-buildplugin, with source-file cat'ing and use of a temporary Makefile. 2017-06-18 Holger Schulz * Version 2.5.4 release! 2017-06-17 Holger Schulz * Fix 8 TeV DY (ATLAS_2016_I1467454), EL/MU bits were missing. * Add 13 TeV DY (ATLAS_2017_I1514251) and mark ATLAS_2015_CONF_2015_041 obsolete * Add missing install statement for ATLAS_2016_I1448301.yoda/plot/info leading to segfault 2017-06-09 Andy Buckley * Slight improvements to Particle constructors. * Improvement to Beam projection: before falling back to barcodes 1 & 2, try a manual search for status=4 particles. Based on a patch from Andrii Verbytskyi. 2017-06-05 Andy Buckley * Add CMS_2016_I1430892: dilepton channel ttbar charge asymmetry analysis. * Add CMS_2016_I1413748: dilepton channel ttbar spin correlations and polarisation analysis. * Add CMS_2017_I1518399: leading jet mass for boosted top quarks at 8 TeV. * Add convenience constructors for ChargedLeptons projection. 2017-06-03 Andy Buckley * Add FinalState and Cut (optional) constructor arguments and usage to DISFinalState. Thanks to Andrii Verbytskyi for the idea and initial patch. 2017-05-23 Andy Buckley * Add ATLAS_2016_I1448301, Z/gamma cross section measurement at 8 TeV. * Add ATLAS_2016_I1426515, WW production at 8 TeV. 2017-05-19 Holger Schulz * Add BELLE measurement of semileptonic B0bar -> D*+ ell nu decays. I took the liberty to correct the data in the sense that I take the bin widths into account in the normalisation. BELLE_2017_I1512299. This is a nice analysis as it probes the hadronic and the leptonic side of the decay so very valuable for model building and of course it is rare as it is an unfolded B measurement. 2017-05-17 Holger Schulz * Add ALEPH measurement of hadronic tau decays, ALEPH_2014_I1267648. * Add ALEPH dimuon invariant mass (OS and SS) analysis, ALEPH_2016_I1492968 * The latter needed the GENKTEE FastJet algorithm so I added that to FastJets * Protection against logspace exception in histobooking of MC_JetAnalysis * Fix compiler complaints about uninitialised variable in OPAL_2004. 2017-05-16 Holger Schulz * Tidy ALEPH_1999 charm fragmentation analysis and normalise to data integral. Added DSTARPLUS and DSTARMINUS to PID. 2017-05-16 Andy Buckley * Add ATLAS_2016_CONF_2016_092, inclusive jet cross sections using early 13 TeV data. * Add ATLAS_2017_I1591327, isolated diphoton + X differential cross-sections. * Add ATLAS_2017_I1589844, ATLAS_2017_I1589844_EL, ATLAS_2017_I1589844_MU: kT splittings in Z events at 8 TeV. * Add ATLAS_2017_I1509919, track-based underlying event at 13 TeV in ATLAS. * Add ATLAS_2016_I1492320_2l2j and ATLAS_2016_I1492320_3l, the WWW cross-section at 8 TeV. 
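As a usage illustration of the in-range kinematics functors from the 2017-06-21 entry above, here is a minimal sketch; the (min, max) signature of ptInRange, the header locations, and the copy-returning behaviour of filter_select are assumptions for illustration, not statements taken from this ChangeLog.

    #include "Rivet/Particle.hh"
    #include "Rivet/Tools/ParticleBaseUtils.hh"
    #include "Rivet/Tools/Utils.hh"
    using namespace Rivet;

    // Keep only particles with 20 GeV < pT < 50 GeV.
    // filter_select is assumed to return a filtered copy (ifilter_select would filter in place).
    Particles selectMidPt(const Particles& parts) {
      return filter_select(parts, ptInRange(20*GeV, 50*GeV));
    }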
2017-05-12 Andy Buckley * Add ATLAS_2016_I1449082, charge asymmetry in top quark pair production in dilepton channel. * Add ATLAS_2015_I1394865, inclusive 4-lepton/ZZ lineshape. 2017-05-11 Andy Buckley * Add ATLAS_2013_I1234228, high-mass Drell-Yan at 7 TeV. 2017-05-10 Andy Buckley * Add CMS_2017_I1519995, search for new physics with dijet angular distributions in proton-proton collisions at sqrt(s) = 13 TeV. * Add CMS_2017_I1511284, inclusive energy spectrum in the very forward direction in proton-proton collisions at 13 TeV. * Add CMS_2016_I1486238, studies of 2 b-jet + 2 jet production in proton-proton collisions at 7 TeV. * Add CMS_2016_I1454211, boosted ttbar in pp collisions at sqrtS = 8 TeV. * Add CMS_2016_I1421646, CMS azimuthal decorrelations at 8 TeV. 2017-05-09 Andy Buckley * Add CMS_2015_I1380605, per-event yield of the highest transverse momentum charged particle and charged-particle jet. * Add CMS_2015_I1370682_PARTON, a partonic-top version of the CMS 7 TeV pseudotop ttbar differential cross-section analysis. * Adding EHS_1988_I265504 from Felix Riehn: charged-particle production in K+ p, pi+ p and pp interactions at 250 GeV/c. * Fix ALICE_2012_I1116147 for pi0 and Lambda feed-down. 2017-05-08 Andy Buckley * Add protection against leptons from QED FSR photon conversions in assigning PartonicTop decay modes. Thanks to Markus Seidel for the report and suggested fix. * Reimplement FastJets methods in terms of new static helper functions. * Add new mkClusterInputs, mkJet and mkJets static methods to FastJets, to help with direct calls to FastJet where particle lookup for constituents and ghost tags are required. * Fix Doxygen config and Makefile target to allow working with out-of-source builds. Thanks to Christian Holm Christensen. * Improve DISLepton for HERA analyses: thanks to Andrii Verbytskyi for the patch! 2017-03-30 Andy Buckley * Replace non-template Analysis::refData functions with C++11 default T=Scatter2D. 2017-03-29 Andy Buckley * Allow yes/no and true/false values for LogX, etc. plot options. * Add --errs as an alias for --mc-errs to rivet-mkhtml and rivet-cmphistos. 2017-03-08 Peter Richardson * Added 6 analyses AMY_1990_I295160, HRS_1986_I18502, JADE_1983_I190818, PLUTO_1980_I154270, TASSO_1989_I277658, TPC_1987_I235694 for charged multiplicity in e+e- at CMS energies below the Z pole * Added 2 analyses for charged multiplicity at the Z pole DELPHI_1991_I301657, OPAL_1992_I321190 * Updated ALEPH_1991_S2435284 to plot the average charged multiplicity * Added analyses OPAL_2004_I631361, OPAL_2004_I631361_qq, OPAL_2004_I648738 for gluon jets in e+e-, most need fictitious e+e- > g g process 2017-03-29 Andy Buckley * Add Cut and functor selection args to HeavyHadrons accessor methods. 2017-03-03 Andy Buckley * bin/rivet-mkanalysis: Add FastJets.hh include by default -- it's almost always used. 2017-03-02 Andy Buckley * src/Analyses/CMS_2016_I1473674.cc: Patch from CMS to use partonic tops. * src/Analyses/CMS_2015_I1370682.cc: Patch to inline jet finding from CMS. 2017-03-01 Andy Buckley * Convert DressedLeptons use of fromDecay to instead veto photons that match fromHadron() || fromHadronicTau() -- meaning that electrons and muons from leptonic taus will now be dressed. * Move Particle and Jet std::function aliases to .fhh files, and replace many uses of templates for functor arguments with ParticleSelector meta-types instead. * Move the canonical implementations of hasAncestorWith, etc. and isLastWith, etc. from ParticleUtils.hh into Particle. 
* Disable the event-to-event beam consistency check if the ignore-beams mode is active. 2017-02-27 Andy Buckley * Add BoolParticleAND, BoolJetOR, etc. functor combiners to Tools/ParticleUtils.hh and Tools/JetUtils.hh. 2017-02-24 Andy Buckley * Mark ATLAS_2016_CONF_2016_078 and CMS_2016_PAS_SUS_16_14 analyses as validated, since their cutflows match the documentation. 2017-02-22 Andy Buckley * Add aggregate signal regions to CMS_2016_PAS_SUS_16_14. 2017-02-18 Andy Buckley * Add getEnvParam function, for neater use of environment variable parameters with a required default. 2017-02-05 Andy Buckley * Add HasBTag and HasCTag jet functors, with lower-case aliases. 2017-01-31 Andy Buckley * Start making analyses HepMC3-compatible, including major tidying in ATLAS_2013_I1243871. * Add Cut-arg variants to HeavyHadrons particle-retrieving methods. * Convert core to be compatible with HepMC3. 2017-01-21 Andy Buckley * Removing lots of long-deprecated functions & methods. 2017-01-18 Andy Buckley * Use std::function in functor-expanded method signatures on JetAlg. 2017-01-16 Andy Buckley * Convert FinalState particles() accessors to use std::function rather than a template arg for sorting, and add filtering functor support -- including a mix of filtering and sorting functors. Yay for C++11! * Add ParticleEffFilter and JetEffFilter constructors from a double (encoding constant efficiency). * Add Vector3::abseta() 2016-12-13 Andy Buckley * Version 2.5.3 release. 2016-12-12 Holger Schulz * Add cut in BZ calculation in OPAL 4 jet analysis. Paper is not clear about treatment of parallel vectors, leads to division by zero and nan-fill and subsequent YODA RangeError (OPAL_2001_S4553896) 2016-12-12 Andy Buckley * Fix bugs in SmearedJets treatment of b & c tagging rates. * Adding ATLAS_2016_I1467454 analysis (high-mass Drell-Yan at 8 TeV) * Tweak to 'convert' call to improve the thumbnail quality from rivet-mkhtml/make-plots. 2016-12-07 Andy Buckley * Require Cython 0.24 or later. 2016-12-02 Andy Buckley * Adding L3_2004_I652683 (LEP 1 & 2 event shapes) and LHCB_2014_I1262703 (Z+jet at 7 TeV). * Adding leading dijet mass plots to MC_JetAnalysis (and all derived classes). Thanks to Chris Gutschow! * Adding CMS_2012_I1298807 (ZZ cross-section at 8 TeV), CMS_2016_I1459051 (inclusive jet cross-sections at 13 TeV) and CMS_PAS_FSQ_12_020 (preliminary 7 TeV leading-track underlying event). * Adding CDF_2015_1388868 (ppbar underlying event at 300, 900, and 1960 GeV) * Adding ATLAS_2016_I1467230 (13 TeV min bias), ATLAS_2016_I1468167 (13 TeV inelastic pp cross-section), and ATLAS_2016_I1479760 (7 TeV pp double-parton scattering with 4 jets). 2016-12-01 Andy Buckley * Adding ALICE_2012_I1116147 (eta and pi0 pTs and ratio) and ATLAS_2011_I929691 (7 TeV jet frag) 2016-11-30 Andy Buckley * Fix bash bugs in rivet-buildplugin, including fixing the --cmd mode. 2016-11-28 Andy Buckley * Add LHC Run 2 BSM analyses ATLAS_2016_CONF_2016_037 (3-lepton and same-sign 2-lepton), ATLAS_2016_CONF_2016_054 (1-lepton + jets), ATLAS_2016_CONF_2016_078 (ICHEP jets + MET), ATLAS_2016_CONF_2016_094 (1-lepton + many jets), CMS_2013_I1223519 (alphaT + b-jets), and CMS_2016_PAS_SUS_16_14 (jets + MET). * Provide convenience reversed-argument versions of apply and declare methods, to allow presentational choice of declare syntax in situations where the projection argument is very long, and reduce requirements on the user's memory since this is one situation in Rivet where there is no 'most natural' ordering choice. 
2016-11-24 Andy Buckley * Adding pTvec() function to 4-vectors and ParticleBase. * Fix --pwd option of the rivet script 2016-11-21 Andy Buckley * Add weights and scaling to Cutflow/s. 2016-11-19 Andy Buckley * Add Et(const ParticleBase&) unbound function. 2016-11-18 Andy Buckley * Fix missing YAML quote mark in rivet-mkanalysis. 2016-11-15 Andy Buckley * Fix constness requirements on ifilter_select() and Particle/JetEffFilter::operator(). * src/Analyses/ATLAS_2016_I1458270.cc: Fix inverted particle efficiency filtering. 2016-10-24 Andy Buckley * Add rough ATLAS and CMS photon reco efficiency functions from Delphes (ATLAS and CMS versions are identical, hmmm) 2016-10-12 Andy Buckley * Tidying/fixing make-plots custom z-ticks code. Thanks to Dmitry Kalinkin. 2016-10-03 Holger Schulz * Fix SpiresID -> InspireID in some analyses (show-analysis pointed to non-existing web page) 2016-09-29 Holger Schulz * Add Luminosity_fb to AnalysisInfo * Added some keywords and Lumi to ATLAS_2016_I1458270 2016-09-28 Andy Buckley * Merge the ATLAS and CMS from-Delphes electron and muon tracking efficiency functions into generic trkeff functions -- this is how it should be. * Fix return type typo in Jet::bTagged(FN) templated method. * Add eta and pT cuts to ATLAS truth b-jet definition. * Use rounding rather than truncation in Cutflow percentage efficiency printing. 2016-09-28 Frank Siegert * make-plots bugfix in y-axis labels for RatioPlotMode=deviation 2016-09-27 Andy Buckley * Add vector and scalar pT (rather than Et) to MissingMomentum. 2016-09-27 Holger Schulz * Analysis keyword machinery * rivet -a @semileptonic * rivet -a @semileptonic@^bdecays -a @semileptonic@^ddecays 2016-09-22 Holger Schulz * Release version 2.5.2 2016-09-21 Andy Buckley * Add a requirement to DressedLeptons that the FinalState passed as 'bareleptons' will be filtered to only contain charged leptons, if that is not already the case. Thanks to Markus Seidel for the suggestion. 2016-09-21 Holger Schulz * Add Simone Amoroso's plugin for hadron spectra (ALEPH_1995_I382179) * Add Simone Amoroso's plugin for hadron spectra (OPAL_1993_I342766) 2016-09-20 Holger Schulz * Add CMS ttbar analysis from contrib, mark validated (CMS_2016_I1473674) * Extend rivet-mkhtml --booklet to also work with pdfmerge 2016-09-20 Andy Buckley * Fix make-plots automatic YMax calculation, which had a typo from code cleaning (mea culpa!). * Fix ChargedLeptons projection, which failed to exclude neutrinos!!! Thanks to Markus Seidel. * Add templated FN filtering arg versions of the Jet::*Tags() and Jet::*Tagged() functions. 2016-09-18 Andy Buckley * Add CMS partonic top analysis (CMS_2015_I1397174) 2016-09-18 Holger Schulz * Add L3 xp analysis of eta mesons, thanks Simone (L3_1992_I336180) * Add D0 1.8 TeV jet shapes analysis, thanks Simone (D0_1995_I398175) 2016-09-17 Andy Buckley * Add has{Ancestor,Parent,Child,Descendant}With functions and HasParticle{Ancestor,Parent,Child,Descendant}With functors. 2016-09-16 Holger Schulz * Add ATLAS 8TeV ttbar analysis from contrib (ATLAS_2015_I1404878) 2016-09-16 Andy Buckley * Add particles(GenParticlePtr) to RivetHepMC.hh * Add hasParent, hasParentWith, and hasAncestorWith to Particle. 
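To illustrate the hasAncestorWith helper from the entries just above, a small hedged sketch follows; the std::function-style selector argument is an assumption, while isHadron() and hasBottom() are Particle methods mentioned elsewhere in this ChangeLog.

    #include "Rivet/Particle.hh"
    using namespace Rivet;

    // True if the particle descends from a b-hadron anywhere in its ancestry.
    bool fromBHadron(const Particle& p) {
      return p.hasAncestorWith([](const Particle& a) { return a.isHadron() && a.hasBottom(); });
    }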
2016-09-15 Holger Schulz * Add ATLAS 8TeV dijet analysis from contrib (ATLAS_2015_I1393758) * Add ATLAS 8TeV 'number of tracks in jets' analysis from contrib (ATLAS_2016_I1419070) * Add ATLAS 8TeV gg->H->WW->enumunu analysis from contrib (ATLAS_2016_I1444991) 2016-09-14 Holger Schulz * Explicit std::toupper and std::tolower to make clang happy 2016-09-14 Andy Buckley * Add ATLAS Run 2 0-lepton SUSY and monojet search papers (ATLAS_2016_I1452559, ATLAS_2016_I1458270) 2016-09-13 Andy Buckley * Add experimental Cutflow and Cutflows objects for BSM cut tracking. * Add 'direct' versions of any, all, none to Utils.hh, with an implicit bool() transforming function. 2016-09-13 Holger Schulz * Add and mark validated B+ to omega analysis (BABAR_2013_I1116411) * Add and mark validated D0 to pi- analysis (BABAR_2015_I1334693) * Add a few more particle names and use PID names in recently added analyses * Add Simone's OPAL b-frag analysis (OPAL_2003_I599181) after some cleanup and heavy usage of new features * Restructured DELPHI_2011_I890503 in the same manner --- picks up a few more B-hadrons now (e.g. 20523 and such) * Clean up and add ATLAS 8TeV MinBias (from contrib ATLAS_2016_I1426695) 2016-09-12 Andy Buckley * Add a static constexpr DBL_NAN to Utils.hh for convenience, and move some utils stuff out of MathHeader.hh 2016-09-12 Holger Schulz * Add count function to Tools/Utils.h * Add and mark validated B0bar and Bminus-decay to pi analysis (BELLE_2013_I1238273) * Add and mark validated B0-decay analysis (BELLE_2011_I878990) * Add and mark validated B to D decay analysis (BELLE_2011_I878990) 2016-09-08 Andy Buckley * Add C-array version of multi-target Analysis::scale() and normalize(), and fix (semantic) constness. * Add == and != operators for cuts applied to integers. * Add missing delta{Phi,Eta,Rap}{Gtr,Less} functors to ParticleBaseUtils.hh 2016-09-07 Andy Buckley * Add templated functor filtering args to the Particle parent/child/descendant methods. 2016-09-06 Andy Buckley * Add ATLAS Run 1 medium and tight electron ID efficiency functions. * Update configure scripts to use newer (Py3-safe) Python testing macros. 2016-09-02 Andy Buckley * Add isFirstWith(out), isLastWith(out) functions, and functor wrappers, using Cut and templated function/functor args. * Add Particle::parent() method. * Add using import/typedef of HepMC *Ptr types (useful step for HepMC 2.07 and 3.00). * Various typo fixes (and canonical renaming) in ParticleBaseUtils functor collection. * Add ATLAS MV2c10 and MV2c20 b-tagging effs to SmearingFunctions.hh collection. 2016-09-01 Andy Buckley * Add a PartonicTops projection. * Add overloaded versions of the Event::allParticles() method with selection Cut or templated selection function arguments. 2016-08-25 Andy Buckley * Add rapidity scheme arg to DeltaR functor constructors. 2016-08-23 Andy Buckley * Provide an Analysis::bookCounter(d,x,y, title) function, for convenience and making the mkanalysis template valid. * Improve container utils functions, and provide combined remove_if+erase filter_* functions for both select- and discard-type selector functions. 2016-08-22 Holger Schulz * Bugfix in rivet-mkhtml (NoneType: ana.spiresID() --> spiresid) * Added include to Rivet/Tools/Utils.h to make gcc6 happy 2016-08-22 Andy Buckley * Add efffilt() functions and Particle/JetEffFilt functors to SmearingFunctions.hh 2016-08-20 Andy Buckley * Adding filterBy methods for Particle and Jet which accept generic boolean functions as well as the Cut specialisation. 
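A small sketch of the 'direct' any/all/none helpers from the 2016-09-13 entry above, contrasted with the earlier predicate forms; the exact template signatures in Utils.hh are assumed here, not quoted.

    #include "Rivet/Tools/Utils.hh"
    #include <vector>

    bool anyAllDemo() {
      std::vector<int> counts = {0, 2, 5};
      const bool anyNonzero = Rivet::any(counts);                               // direct form: elements tested via bool()
      const bool allAboveOne = Rivet::all(counts, [](int n) { return n > 1; }); // predicate form
      return anyNonzero && !allAboveOne;
    }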
2016-08-18 Andy Buckley * Add a Jet::particles(Cut&) method, for inline filtering of jet constituents. * Add 'conjugate' behaviours to container head and tail functions via negative length arg values. 2016-08-15 Andy Buckley * Add convenience headers for including all final-state and smearing projections, to save user typing. 2016-08-12 Andy Buckley * Add standard MET functions for ATLAS R1 (and currently copies for R2 and CMS). * Add lots of vector/container helpers for e.g. container slicing, summing, and min/max calculation. * Adapt SmearedMET to take *two* arguments, since SET is typically used to calculate MET resolution. * Adding functors for computing vector & ParticleBase differences w.r.t. another vector. 2016-08-12 Holger Schulz * Implemented a few more cuts in prompt photon analysis (CDF_1993_S2742446) but to no avail, the rise of the data towards larger costheta values cannot be reproduced --- maybe this is a candidate for more scrutiny and using the boosting machinery such that the c.m. cuts can be done in a non-approximate way 2016-08-11 Holger Schulz * Rename CDF_2009_S8383952 to CDF_2009_I856131 due to invalid Spires entry. * Add InspireID to all analysis known by their Spires key 2016-08-09 Holger Schulz * Release 2.5.1 2016-08-08 Andy Buckley * Add a simple MC_MET analysis for out-of-the-box MET distribution testing. 2016-08-08 Holger Schulz * Add DELPHI_2011_I890503 b-quark fragmentation function measurement, which superseded DELPHI_2002_069_CONF_603. The latter is marked OBSOLETE. 2016-08-05 Holger Schulz * Use Jet mass and energy smearing in CDF_1997_... six-jet analysis, mark validated. * Mark CDF_2001_S4563131 validated * D0_1996_S3214044 --- cut on jet Et rather than pT, fix filling of costheta and theta plots, mark validated. Concerning the jet algorithm, I tried with the implementation of fastjet fastjet/D0RunIConePlugin.hh but that really does not help. * D0_1996_S3324664 --- fix normalisations, sorting jets properly now, cleanup and mark validated. 2016-08-04 Holger Schulz * Use Jet mass and energy smearing in CDF_1996_S310 ... jet properties analysis. Cleanup analysis and mark validated. Added some more run info. The same for CDF_1996_S334... (pretty much the same cuts, different observables). * Minor fixes in SmearedJets projection 2016-08-03 Andy Buckley * Protect SmearedJets against loss of tagging information if a momentum smearing function is used (rather than a dedicated Jet smearing fn) via implicit casts. 2016-08-02 Andy Buckley * Add SmearedMET projection, wrapping MissingMomentum. * Include base truth-level projections in SmearedParticles/Jets compare() methods. 2016-07-29 Andy Buckley * Rename TOTEM_2012_002 to proper TOTEM_2012_I1220862 name. * Remove conditional building of obsolete, preliminary and unvalidated analyses. Now always built, since there are sufficient warnings. 2016-07-28 Holger Schulz * Mark D0_2000... W pT analysis validated * Mark LHCB_2011_S919... phi meson analysis validated 2016-07-25 Andy Buckley * Add unbound accessors for momentum properties of ParticleBase objects. * Add Rivet/Tools/ParticleBaseUtils.hh to collect tools like functors for particle & jet filtering. * Add vector versions of Analysis::scale() and ::normalize(), for batched scaling. * Add Analysis::scale() and Analysis::divide() methods for Counter types. * Utils.hh: add a generic sum() function for containers, and use auto in loop to support arrays. * Set data path as well as lib path in scripts with --pwd option, and use abs path to $PWD. 
* Add setAnalysisDataPaths and addAnalysisDataPath to RivetPaths.hh/cc and Python. * Pass absolutized RIVET_DATA_PATH from rivet-mkhtml to rivet-cmphistos. 2016-07-24 Holger Schulz * Mark CDF_2008_S77... b jet shapes validated * Added protection against low stats yoda exception in finalize for that analysis 2016-07-22 Andy Buckley * Fix newly introduced bug in make-plots which led to data point markers being skipped for all but the last bin. 2016-07-21 Andy Buckley * Add pid, abspid, charge, abscharge, charge3, and abscharge3 Cut enums, handled by Particle cut targets. * Add abscharge() and abscharge3() methods to Particle. * Add optional Cut and duplicate-removal flags to Particle children & descendants methods. * Add unbound versions of Particle is* and from* methods, for easier functor use. * Add Particle::isPrompt() as a member rather than unbound function. * Add protections against -ve mass from numerical precision errors in smearing functions. 2016-07-20 Andy Buckley * Move several internal system headers into the include/Rivet/Tools directory. * Fix median-computing safety logic in ATLAS_2010_S8914702 and tidy this and @todo markers in several similar analyses. * Add to_str/toString and stream functions for Particle, and a bit of Particle util function reorganisation. * Add isStrange/Charm/Bottom PID and Particle functions. * Add RangeError exception throwing from MathUtils.hh stats functions if given empty/mismatched datasets. * Add Rivet/Tools/PrettyPrint.hh, based on https://louisdx.github.io/cxx-prettyprint/ * Allow use of path regex group references in .plot file keyed values. 2016-07-20 Holger Schulz * Fix the --nskip behaviour on the main rivet script. 2016-07-07 Andy Buckley * Release version 2.5.0 2016-07-01 Andy Buckley * Fix pandoc interface flag version detection. 2016-06-28 Andy Buckley * Release version 2.4.3 * Add ATLAS_2016_I1468168 early ttbar fully leptonic fiducial cross-section analysis at 13 TeV. 2016-06-21 Andy Buckley * Add ATLAS_2016_I1457605 inclusive photon analysis at 8 TeV. 2016-06-15 Andy Buckley * Add a --show-bibtex option to the rivet script, for convenient outputting of a BibTeX db for the used analyses. 2016-06-14 Andy Buckley * Add and rename 4-vector boost calculation methods: new methods beta, betaVec, gamma & gammaVec are now preferred to the deprecated boostVector method. 2016-06-13 Andy Buckley * Add and use projection handling methods declare(proj, pname) and apply(evt, pname) rather than the longer and explicitly 'projectiony' addProjection & applyProjection. * Start using the DEFAULT_RIVET_ANALYSIS_CTOR macro (newly created preferred alias to long-present DEFAULT_RIVET_ANA_CONSTRUCTOR) * Add a DEFAULT_RIVET_PROJ_CLONE macro for implementing the clone() method boiler-plate code in projections. 2016-06-10 Andy Buckley * Add a NonPromptFinalState projection, and tweak the PromptFinalState and unbound Particle functions a little in response. May need some more finessing. * Add user-facing aliases to ProjectionApplier add, get, and apply methods... the templated versions of which can now be called without using the word 'projection', which makes the function names a bit shorter and pithier, and reduces semantic repetition. 2016-06-10 Andy Buckley * Adding ATLAS_2015_I1397635 Wt at 8 TeV analysis. * Adding ATLAS_2015_I1390114 tt+b(b) at 8 TeV analysis. 
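For the declare()/apply() projection-handling style introduced in the 2016-06-13 entry above, a minimal sketch of how it reads in an analysis; the analysis name EXAMPLE_DECLARE_APPLY is hypothetical and the surrounding boilerplate is assumed rather than quoted from Rivet.

    #include "Rivet/Analysis.hh"
    #include "Rivet/Projections/FinalState.hh"

    namespace Rivet {

      class EXAMPLE_DECLARE_APPLY : public Analysis {
      public:
        DEFAULT_RIVET_ANALYSIS_CTOR(EXAMPLE_DECLARE_APPLY);

        void init() {
          // Formerly: addProjection(FinalState(Cuts::abseta < 5), "FS");
          declare(FinalState(Cuts::abseta < 5), "FS");
        }

        void analyze(const Event& event) {
          // Formerly: applyProjection<FinalState>(event, "FS").particles();
          const Particles& parts = apply<FinalState>(event, "FS").particles();
          MSG_DEBUG("Final-state multiplicity = " << parts.size());
        }

        void finalize() { }
      };

      DECLARE_RIVET_PLUGIN(EXAMPLE_DECLARE_APPLY);
    }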
2016-06-09 Andy Buckley * Downgrade some non-fatal error messages from ERROR to WARNING status, because *sigh* ATLAS's software treats any appearance of the word 'ERROR' in its log file as a reason to report the job as failed (facepalm). 2016-06-07 Andy Buckley * Adding ATLAS 13 TeV minimum bias analysis, ATLAS_2016_I1419652. 2016-05-30 Andy Buckley * pyext/rivet/util.py: Add pandoc --wrap/--no-wrap CLI detection and batch conversion. * bin/rivet: add -o as a more standard 'output' option flag alias to -H. 2016-05-23 Andy Buckley * Remove the last ref-data bin from table 16 of ATLAS_2010_S8918562, due to data corruption. The corresponding HepData record will be amended by ATLAS. 2016-05-12 Holger Schulz * Mark ATLAS_2012_I1082009 as validated after exhaustive tests with Pythia8 and Sherpa in inclusive QCD mode. 2016-05-11 Andy Buckley * Specialise return error codes from the rivet script. 2016-05-11 Andy Buckley * Add Event::allParticles() to provide neater (but not *helpful*) access to Rivet-wrapped versions of the raw particles in the Event::genEvent() record, and hence reduce HepMC digging. 2016-05-05 Andy Buckley * Version 2.4.2 release! * Update SLD_2002_S4869273 ref data to match publication erratum, now updated in HepData. Thanks to Peter Skands for the report and Mike Whalley / Graeme Watt for the quick fix and heads-up. 2016-04-27 Andy Buckley * Add CMS_2014_I1305624 event shapes analysis, with standalone variable calculation struct embedded in an unnamed namespace. 2016-04-19 Andy Buckley * Various clean-ups and fixes in ATLAS analyses using isolated photons with median pT density correction. 2016-04-18 Andy Buckley * Add transformBy(LT) methods to Particle and Jet. * Add mkObjectTransform and mkFrameTransform factory methods to LorentzTransform. 2016-04-17 Andy Buckley * Add null GenVertex protection in Particle children & descendants methods. 2016-04-15 Andy Buckley * Add ATLAS_2015_I1397637, ATLAS 8 TeV boosted top cross-section vs. pT 2016-04-14 Andy Buckley * Add a --no-histos argument to the rivet script. 2016-04-13 Andy Buckley * Add ATLAS_2015_I1351916 (8 TeV Z FB asymmetry) and ATLAS_2015_I1408516 (8 TeV Z phi* and pT) analyses, and their _EL, _MU variants. 2016-04-12 Andy Buckley * Patch PID utils for ordering issues in baryon decoding. 2016-04-11 Andy Buckley * Actually implement ZEUS_2001_S4815815... only 10 years late! 2016-04-08 Andy Buckley * Add a --guess-prefix flag to rivet-config, cf. fastjet-config. * Add RIVET_DATA_PATH variable and related functions in C++ and Python as a common first-fallback for RIVET_REF_PATH, RIVET_INFO_PATH, and RIVET_PLOT_PATH. * Add --pwd options to rivet-mkhtml and rivet-cmphistos 2016-04-07 Andy Buckley * Remove implicit conventional event rotation for HERA -- this needs to be done explicitly from now. * Add comBoost functions and methods to Beam.hh, and tidy LorentzTransformation. * Restructure Beam projection functions for beam particle and sqrtS extraction, and add asqrtS functions. * Rename and improve PID and Particle Z,A,lambda functions -> nuclZ,nuclA,nuclNlambda. 2016-04-05 Andy Buckley * Improve binIndex function, with an optional argument to allow overflow lookup, and add it to testMath. * Adding setPE, setPM, setPtEtaPhiM, etc. methods and corresponding mk* static methods to FourMomentum, as well as adding more convenience aliases and vector attributes for completeness. Coordinate conversion functions taken from HEPUtils::P4. New attrs also mapped to ParticleBase. 
2016-03-29 Andy Buckley * ALEPH_1996_S3196992.cc, ATLAS_2010_S8914702.cc, ATLAS_2011_I921594.cc, ATLAS_2011_S9120807.cc, ATLAS_2012_I1093738.cc, ATLAS_2012_I1199269.cc, ATLAS_2013_I1217867.cc, ATLAS_2013_I1244522.cc, ATLAS_2013_I1263495.cc, ATLAS_2014_I1307756.cc, ATLAS_2015_I1364361.cc, CDF_2008_S7540469.cc, CMS_2015_I1370682.cc, MC_JetSplittings.cc, STAR_2006_S6870392.cc: Updates for new FastJets interface, and other cleaning. * Deprecate 'standalone' FastJets constructors -- they are misleading. * More improvements around jets, including unbound conversion and filtering routines between collections of Particles, Jets, and PseudoJets. * Place 'Cut' forward declaration in a new Cuts.fhh header. * Adding a Cuts::OPEN extern const (a bit more standard- and constant-looking than Cuts::open()) 2016-03-28 Andy Buckley * Improvements to FastJets constructors, including specification of optional AreaDefinition as a constructor arg, disabling dodgy no-FS constructors which I suspect don't work properly in the brave new world of automatic ghost tagging, using a bit of judicious constructor delegation, and completing/exposing use of shared_ptr for internal memory management. 2016-03-26 Andy Buckley * Remove Rivet/Tools/RivetBoost.hh and Boost references from rivet-config, rivet-buildplugin, and configure.ac. It's gone ;-) * Replace Boost assign usage with C++11 brace initialisers. All Boost use is gone from Rivet! * Replace Boost lexical_cast and string algorithms. 2016-03-25 Andy Buckley * Bug-fix in semi-leptonic top selection of CMS_2015_I1370682. 2016-03-12 Andy Buckley * Allow multi-line major tick labels on make-plots linear x and y axes. Linebreaks are indicated by \n in the .dat file. 2016-03-09 Andy Buckley * Release 2.4.1 2016-03-03 Andy Buckley * Add a --nskip flag to the rivet command-line tool, to allow processing to begin in the middle of an event file (useful for batched processing of large files, in combination with --nevts) 2016-03-03 Holger Schulz * Add ATLAS 7 TeV event shapes in Z+jets analysis (ATLAS_2016_I1424838) 2016-02-29 Andy Buckley * Update make-plots to use multiprocessing rather than threading. * Add FastJets::trimJet method, thanks to James Monk for the suggestion and patch. * Add new preferred name PID::charge3 in place of PID::threeCharge, and also convenience PID::abscharge and PID::abscharge3 functions -- all derived from changes in external HEPUtils. * Add analyze(const GenEvent*) and analysis(string&) methods to AnalysisHandler, plus some docstring improvements. 2016-02-23 Andy Buckley * New ATLAS_2015_I1394679 analysis. * New MC_HHJETS analysis from Andreas Papaefstathiou. * Ref data updates for ATLAS_2013_I1219109, ATLAS_2014_I1312627, and ATLAS_2014_I1319490. * Add automatic output paging to 'rivet --show-analyses' 2016-02-16 Andy Buckley * Apply cross-section unit fixes and plot styling improvements to ATLAS_2013_I1217863 analyses, thanks to Christian Gutschow. * Fix to rivet-cmphistos to avoid overwriting RatioPlotYLabel if already set via e.g. the PLOT pseudo-file. Thanks to Johann Felix v. Soden-Fraunhofen. 2016-02-15 Andy Buckley * Add Analysis::bookCounter and some machinery in rivet-cmphistos to avoid getting tripped up by unplottable (for now) data types. * Add --font and --format options to rivet-mkhtml and make-plots, to replace the individual flags used for that purpose. Not fully cleaned up, but a necessary step. * Add new plot styling options to rivet-cmphistos and rivet-mkhtml. Thanks to Gavin Hesketh. 
* Modify rivet-cmphistos and rivet-mkhtml to apply plot hiding if *any* path component is hidden by an underscore prefix, as implemented in AOPath, plus other tidying using new AOPath methods. * Add pyext/rivet/aopaths.py, containing AOPath object for central & standard decoding of Rivet-standard analysis object path structures. 2016-02-12 Andy Buckley * Update ParticleIdUtils.hh (i.e. PID:: functions) to use the functions from the latest version of MCUtils' PIDUtils.h. 2016-01-15 Andy Buckley * Change rivet-cmphistos path matching logic from match to search (user can add explicit ^ marker if they want match semantics). 2015-12-20 Andy Buckley * Improve linspace (and hence also logspace) precision errors by using multiplication rather than repeated addition to build edge list (thanks to Holger Schulz for the suggestion). 2015-12-15 Andy Buckley * Add cmphistos and make-plots machinery for handling 'suffix' variations on plot paths, currently just by plotting every line, with the variations in a 70% faded tint. * Add Beam::pv() method for finding the beam interaction primary vertex 4-position. * Add a new Particle::setMomentum(E,x,y,z) method, and an origin position member which is automatically populated from the GenParticle, with access methods corresponding to the momentum ones. 2015-12-10 Andy Buckley * make-plots: improve custom tick attribute handling, allowing empty lists. Also, any whitespace now counts as a tick separator -- explicit whitespace in labels should be done via ~ or similar LaTeX markup. 2015-12-04 Andy Buckley * Pro-actively use -m/-M arguments when initially loading histograms in mkhtml, *before* passing them to cmphistos. 2015-12-03 Andy Buckley * Move contains() and has_key() functions on STL containers from std to Rivet namespaces. * Adding IsRef attributes to all YODA refdata files; this will be used to replace the /REF prefix in Rivet v3 onwards. The migration has also removed leading # characters from BEGIN/END blocks, as per YODA format evolution: new YODA versions as required by current Rivet releases are able to read both the old and new formats. 2015-12-02 Andy Buckley * Add handling of a command-line PLOT 'file' argument to rivet-mkhtml, cf. rivet-cmphistos. * Improvements to rivet-mkhtml behaviour re. consistency with rivet-cmphistos in how multi-part histo paths are decomposed into analysis-name + histo name, and removal of 'NONE' strings. 2015-11-30 Andy Buckley * Relax rivet/plotinfo.py pattern matching on .plot file components, to allow leading whitespace and whitespace around = signs, and to make the leading # optional on BEGIN/END blocks. 2015-11-26 Andy Buckley * Write out intermediate histogram files by default, with event interval of 10k. 2015-11-25 Andy Buckley * Protect make-plots against lock-up due to partial pstricks command when there are no data points. 2015-11-17 Andy Buckley * rivet-cmphistos: Use a ratio label that doesn't mention 'data' when plotting MC vs. MC. 2015-11-12 Andy Buckley * Tweak plot and subplot sizing defaults in make-plots so the total canvas is always the same size by default. 2015-11-10 Andy Buckley * Handle 2D histograms better in rivet-cmphistos (since they can't be overlaid) 2015-11-05 Andy Buckley * Allow comma-separated analysis name lists to be passed to a single -a/--analysis/--analyses option. * Convert namespace-global const variables to be static, to suppress compiler warnings. * Use standard MAX_DBL and MAX_INT macros as a source for MAXDOUBLE and MAXINT, to suppress GCC5 warnings. 
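The linspace precision point in the 2015-12-20 entry above is easiest to see in code; this is an illustrative standalone version of the idea, not Rivet's own implementation.

    #include <cstddef>
    #include <vector>

    // Build bin edges as start + i*step: the rounding error stays bounded, whereas
    // repeatedly adding step accumulates error so the last edge can drift off 'end'.
    std::vector<double> linspaceByMult(std::size_t nbins, double start, double end) {
      std::vector<double> edges;
      edges.reserve(nbins + 1);
      const double step = (end - start) / nbins;
      for (std::size_t i = 0; i <= nbins; ++i) edges.push_back(start + i*step);
      edges.back() = end;  // pin the final edge exactly
      return edges;
    }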
2015-11-04 Holger Schulz * Adding LHCB inelastic xsection measurement (LHCB_2015_I1333223) * Adding ATLAS colour flow in ttbar->semileptonic measurement (ATLAS_2015_I1376945) 2015-10-07 Chris Pollard * Release 2.4.0 2015-10-06 Holger Schulz * Adding CMS_2015_I1327224 dijet analysis (Mjj>2 TeV) 2015-10-03 Holger Schulz * Adding CMS_2015_I1346843 Z+gamma 2015-09-30 Andy Buckley * Important improvement in FourVector & FourMomentum: new reverse() method to return a 4-vector in which only the spatial component has been inverted cf. operator- which flips the t/E component as well. 2015-09-28 Holger Schulz * Adding D0_2000_I503361 ZPT at 1800 GeV 2015-09-29 Chris Pollard * Adding ATLAS_2015_CONF_2015_041 2015-09-29 Chris Pollard * Adding ATLAS_2015_I1387176 2015-09-29 Chris Pollard * Adding ATLAS_2014_I1327229 2015-09-28 Chris Pollard * Adding ATLAS_2014_I1326641 2015-09-28 Holger Schulz * Adding CMS_2013_I1122847 FB asymmetry in DY analysis 2015-09-28 Andy Buckley * Adding CMS_2015_I1385107 LHA pp 2.76 TeV track-jet underlying event. 2015-09-27 Andy Buckley * Adding CMS_2015_I1384119 LHC Run 2 minimum bias dN/deta with no B field. 2015-09-25 Andy Buckley * Adding TOTEM_2014_I1328627 forward charged density in eta analysis. 2015-09-23 Andy Buckley * Add CMS_2015_I1310737 Z+jets analysis. * Allow running MC_{W,Z}INC, MC_{W,Z}JETS as separate bare lepton analyses. 2015-09-23 Andy Buckley * FastJets now allows use of FastJet pure ghosts, by excluding them from the constituents of Rivet Jet objects. Thanks to James Monk for raising the issue and providing a patch. 2015-09-15 Andy Buckley * More MissingMomentum changes: add optional 'mass target' argument when retrieving the vector sum as a 4-momentum, with the mass defaulting to 0 rather than sqrt(sum(E)^2 - sum(p)^2). * Require Boost 1.55 for robust compilation, as pointed out by Andrii Verbytskyi. 2015-09-10 Andy Buckley * Allow access to MissingMomentum projection via WFinder. * Adding extra methods to MissingMomentum, to make it more user-friendly. 2015-09-09 Andy Buckley * Fix factor of 2 in LHCB_2013_I1218996 normalisation, thanks to Felix Riehn for the report. 2015-08-20 Frank Siegert * Add function to ZFinder to retrieve all fiducial dressed leptons, e.g. to allow vetoing on a third one (proposed by Christian Gutschow). 2015-08-18 Andy Buckley * Rename xs and counter AOs to start with underscores, and modify rivet-cmphistos to skip AOs whose basenames start with _. 2015-08-17 Andy Buckley * Add writing out of cross-section and total event counter by default. Need to add some name protection to avoid them being plotted. 2015-08-16 Andy Buckley * Add templated versions of Analysis::refData() to use data types other than Scatter2DPtr, and convert the cached ref data store to generic AnalysisObjectPtrs to make it possible. 2015-07-29 Andy Buckley * Add optional Cut arguments to all the Jet tag methods. * Add exception handling and pre-emptive testing for a non-writeable output directory (based on patch from Lukas Heinrich). 2015-07-24 Andy Buckley * Version 2.3.0 release. 2015-07-02 Holger Schulz * Tidy up ATLAS higgs combination analysis. 
* Add ALICE kaon, pion analysis (ALICE_2015_I1357424) * Add ALICE strange baryon analysis (ALICE_2014_I1300380) * Add CDF ZpT measurement in Z->ee events analysis (CDF_2012_I1124333) * Add validated ATLAS W+charm measurement (ATLAS_2014_I1282447) * Add validated CMS jet and dijet analysis (CMS_2013_I1208923) 2015-07-01 Andy Buckley * Define a private virtual operator= on Projection, to block 'sliced' accidental value copies of derived class instances. * Add new muon-in-jet options to FastJets constructors, pass that and invisibles enums correctly to JetAlg, tweak the default strategies, and add a FastJets constructor from a fastjet::JetDefinition (while deprecating the plugin-by-reference constructor). 2015-07-01 Holger Schulz * Add D0 phi* measurement (D0_2015_I1324946). * Remove WUD and MC_PHOTONJETUE analyses * Don't abort ATLAS_2015_I1364361 if there is no stable Higgs; print a warning instead and veto the event 2015-07-01 Andy Buckley * Add all, none, from-decay muon filtering options to JetAlg and FastJets. * Rename NONPROMPT_INVISIBLES to DECAY_INVISIBLES for clarity & extensibility. * Remove FastJets::ySubJet, splitJet, and filterJet methods -- they're BDRS-paper-specific and you can now use the FastJet objects directly to do this and much more. * Adding InvisiblesStrategy to JetAlg, using it rather than a bool in the useInvisibles method, and updating FastJets to use this approach for its particle filtering and to optionally use the enum in the constructor arguments. The new default invisibles-using behaviour is to still exclude _prompt_ invisibles, and the default is still to exclude them all. Only one analysis (src/Analyses/STAR_2006_S6870392.cc) required updating, since it was the only one to be using the FastJets legacy seed_threshold constructor argument. * Adding isVisible method to Particle, taken from VisibleFinalState (which now uses this). 2015-06-30 Andy Buckley * Marking many old & superseded ATLAS analyses as obsolete. * Adding cmpMomByMass and cmpMomByAscMass sorting functors. * Bump version to 2.3.0 and require YODA > 1.4.0 (current head at time of development). 2015-06-08 Andy Buckley * Add handling of -m/-M flags on rivet-cmphistos and rivet-mkhtml, moving current rivet-mkhtml -m/-M to -a/-A (for analysis name pattern matching). Requires YODA head (will be YODA 1.3.2 or 1.4.0). * src/Analyses/ATLAS_2015_I1364361.cc: Now use the built-in prompt photon selecting functions. * Tweak legend positions in MC_JETS .plot file. * Add a bit more debug output from ZFinder and WFinder. 2015-05-24 Holger Schulz * Normalisation discussion concerning ATLAS_2014_I1325553 is resolved. Changed YLabel accordingly. 2015-05-19 Holger Schulz * Add (preliminary) ATLAS combined Higgs analysis (ATLAS_2015_I1364361). Data will be updated and more histos added as soon as paper is published in journal. For now using data taken from public resource https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/HIGG-2014-11/ 2015-05-19 Peter Richardson * Fix ATLAS_2014_I1325553: normalisation of histograms was wrong by a factor of two (|y| vs y problem) 2015-05-01 Andy Buckley * Fix MC_HJETS/HINC/HKTSPLITTINGS analyses to (ab)use the ZFinder with a mass range of 115-135 GeV and a mass target of 125 GeV (was previously 115-125 and mass target of mZ) 2015-04-30 Andy Buckley * Removing uses of boost::assign::list_of, preferring the existing comma-based assign override for now, for C++11 compatibility. * Convert MC_Z* analysis finalize methods to use scale() rather than normalize(). 
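As a sketch of the scale()-rather-than-normalize() convention in the last item above, a finalize() body of the kind used in the MC_Z* analyses; the class name MC_ZINC_LIKE and the histogram pointer _h_zpt are placeholders, and the cross-section-per-sumW factor is the usual assumed pattern rather than a quote from this ChangeLog.

    // Inside a hypothetical analysis class:
    void MC_ZINC_LIKE::finalize() {
      // Convert the histogram to a differential cross-section by scaling with
      // sigma/sumW instead of normalising its area to unity with normalize().
      scale(_h_zpt, crossSection()/sumOfWeights());
    }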
2015-04-01 Holger Schulz * Add CMS 7 TeV rapidity gap analysis (CMS_2015_I1356998). * Remove FinalState Projection. 2015-03-30 Holger Schulz * Add ATLAS 7 TeV photon + jets analysis (ATLAS_2013_I1244522). 2015-03-26 Andy Buckley * Updates for HepMC 2.07 interface constness improvements. 2015-03-25 Holger Schulz * Add ATLAS double parton scattering in W+2j analysis (ATLAS_2013_I1216670). 2015-03-24 Andy Buckley * 2.2.1 release! 2015-03-23 Holger Schulz * Add ATLAS differential Higgs analysis (ATLAS_2014_I1306615). 2015-03-19 Chris Pollard * Add ATLAS V+gamma analyses (ATLAS_2013_I1217863) 2015-03-20 Andy Buckley * Adding ATLAS R-jets analysis i.e. ratios of W+jets and Z+jets observables (ATLAS_2014_I1312627 and _EL, _MU variants) * include/Rivet/Tools/ParticleUtils.hh: Adding same/oppSign and same/opp/diffCharge functions, operating on two Particles. * include/Rivet/Tools/ParticleUtils.hh: Adding HasAbsPID functor and removing optional abs arg from HasPID. 2015-03-19 Andy Buckley * Mark ATLAS_2012_I1083318 as VALIDATED and fix d25-x01-y02 ref data. 2015-03-19 Chris Pollard * Add ATLAS W and Z angular analyses (ATLAS_2011_I928289) 2015-03-19 Andy Buckley * Add LHCb charged particle multiplicities and densities analysis (LHCB_2014_I1281685) * Add LHCb Z y and phi* analysis (LHCB_2012_I1208102) 2015-03-19 Holger Schulz * Add ATLAS dijet analysis (ATLAS_2014_I1325553). * Add ATLAS Z pT analysis (ATLAS_2014_I1300647). * Add ATLAS low-mass Drell-Yan analysis (ATLAS_2014_I1288706). * Add ATLAS gap fractions analysis (ATLAS_2014_I1307243). 2015-03-18 Andy Buckley * Adding CMS_2014_I1298810 and CMS_2014_I1303894 analyses. 2015-03-18 Holger Schulz * Add PDG_TAUS analysis which makes use of the TauFinder. * Add ATLAS 'traditional' Underlying Event in Z->mumu analysis (ATLAS_2014_I1315949). 2015-03-18 Andy Buckley * Change UnstableFinalState duplicate resolution to use the last in a chain rather than the first. 2015-03-17 Holger Schulz * Update TauFinder to use decaytype (can be HADRONIC, LEPTONIC or ANY); in FastJets.cc, set TauFinder mode to hadronic for tau-tagging 2015-03-16 Chris Pollard * Removed fuzzyEquals() from Vector3::angle() 2015-03-16 Andy Buckley * Adding Cuts-based constructor to PrimaryHadrons. * Adding missing compare() method to HeavyHadrons projection. 2015-03-15 Chris Pollard * Adding FinalPartons projection which selects the quarks and gluons immediately before hadronization 2015-03-05 Andy Buckley * Adding Cuts-based constructors and other tidying in UnstableFinalState and HeavyHadrons 2015-03-03 Andy Buckley * Add support for a PLOT meta-file argument to rivet-cmphistos. 2015-02-27 Andy Buckley * Improved time reporting. 2015-02-24 Andy Buckley * Add Particle::fromHadron and Particle::fromPromptTau, and add a boolean 'prompt' argument to Particle::fromTau. * Fix WFinder use-transverse-mass property setting. Thanks to Christian Gutschow. 2015-02-04 Andy Buckley * Add more protection against math domain errors with log axes. * Add some protection against nan-valued points and error bars in make-plots. 2015-02-03 Andy Buckley * Converting 'bitwise' to 'logical' Cuts combinations in all analyses. 2015-02-02 Andy Buckley * Use vector MET rather than scalar VET (doh...) in WFinder cut. Thanks to Ines Ochoa for the bug report. * Updating and tidying analyses with deprecation warnings. * Adding more Cuts/FS constructors for Charged,Neutral,UnstableFinalState. * Add &&, || and ! operators for without-parens-warnings Cut combining. 
Note these don't short-circuit, but this is ok since Cut comparisons don't have side-effects. * Add absetaIn, absrapIn Cut range definitions. * Updating use of sorted particle/jet access methods and cmp functors in projections and analyses. 2014-12-09 Andy Buckley * Adding a --cmd arg to rivet-buildplugin to allow the output paths to be sed'ed (to help deal with naive Grid distribution). For example BUILDROOT=`rivet-config --prefix`; rivet-buildplugin PHOTONS.cc --cmd | sed -e "s:$BUILDROOT:$SITEROOT:g" 2014-11-26 Andy Buckley * Interface improvements in DressedLeptons constructor. * Adding DEPRECATED macro to throw compiler deprecation warnings when using deprecated features. 2014-11-25 Andy Buckley * Adding Cut-based constructors, and various constructors with lists of PDG codes to IdentifiedFinalState. 2014-11-20 Andy Buckley * Analysis updates (ATLAS, CMS, CDF, D0) to apply the changes below. * Adding JetAlg jets(Cut, Sorter) methods, and other interface improvements for cut and sorted ParticleBase retrieval from JetAlg and ParticleFinder projections. Some old many-doubles versions removed, syntactic sugar sorting methods deprecated. * Adding Cuts::Et and Cuts::ptIn, Cuts::etIn, Cuts::massIn. * Moving FastJet includes, conversions, uses etc. into Tools/RivetFastJet.hh 2014-10-07 Andy Buckley * Fix a bug in the isCharmHadron(pid) function and remove isStrange* functions. 2014-09-30 Andy Buckley * 2.2.0 release! * Mark Jet::containsBottom and Jet::containsCharm as deprecated methods: use the new methods. Analyses updated. * Add Jet::bTagged(), Jet::cTagged() and Jet::tauTagged() as ghost-assoc-based replacements for the 'contains' tagging methods. 2014-09-17 Andy Buckley * Adding support for 1D and 3D YODA scatters, and helper methods for calling the efficiency, asymm and 2D histo divide functions. 2014-09-12 Andy Buckley * Adding 5 new ATLAS analyses: ATLAS_2011_I921594: Inclusive isolated prompt photon analysis with full 2010 LHC data ATLAS_2013_I1263495: Inclusive isolated prompt photon analysis with 2011 LHC data ATLAS_2014_I1279489: Measurements of electroweak production of dijets + $Z$ boson, and distributions sensitive to vector boson fusion ATLAS_2014_I1282441: The differential production cross section of the $\phi(1020)$ meson in $\sqrt{s}=7$ TeV $pp$ collisions measured with the ATLAS detector ATLAS_2014_I1298811: Leading jet underlying event at 7 TeV in ATLAS * Adding a median(vector) function and fixing the other stats functions to operate on vector rather than vector. 2014-09-03 Andy Buckley * Fix wrong behaviour of LorentzTransform with a null boost vector -- thanks to Michael Grosse. 2014-08-26 Andy Buckley * Add calc() methods to Hemispheres as requested, to allow it to be used with Jet or FourMomentum inputs outside the normal projection system. 2014-08-17 Andy Buckley * Improvements to the particles methods on ParticleFinder/FinalState, in particular adding the range of cuts arguments cf. JetAlg (and tweaking the sorted jets equivalent) and returning as a copy rather than a reference if cut/sorted to avoid accidentally messing up the cached copy. * Creating ParticleFinder projection base class, and moving Particles-accessing methods from FinalState into it. * Adding basic forms of MC_ELECTRONS, MC_MUONS, and MC_TAUS analyses. 2014-08-15 Andy Buckley * Version bump to 2.2.0beta1 for use at BOOST and MCnet school. 
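A short usage sketch for the ghost-association-based tag accessors from the 2014-09-30 entry above (Jet::bTagged(), cTagged(), tauTagged()); the helper below and its container handling are illustrative assumptions rather than Rivet code.

    #include "Rivet/Jet.hh"
    #include <cstddef>
    using namespace Rivet;

    // Count b-tagged jets, replacing the deprecated containsBottom() checks.
    // In an analysis the jets would come from e.g. apply<FastJets>(event, "Jets").jetsByPt().
    std::size_t countBTags(const Jets& jets) {
      std::size_t n = 0;
      for (const Jet& j : jets) {
        if (j.bTagged()) ++n;   // ghost-association-based tag decision
      }
      return n;
    }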
2014-08-13 Andy Buckley * New analyses: ATLAS_2014_I1268975 (high mass dijet cross-section at 7 TeV) ATLAS_2014_I1304688 (jet multiplicity and pT at 7 TeV) ATLAS_2014_I1307756 (scalar diphoton resonance search at 8 TeV -- no histograms!) CMSTOTEM_2014_I1294140 (charged particle pseudorapidity at 8 TeV) 2014-08-09 Andy Buckley * Adding PromptFinalState, based on code submitted by Alex Grohsjean and Will Bell. Thanks! 2014-08-06 Andy Buckley * Adding MC_HFJETS and MC_JETTAGS analyses. 2014-08-05 Andy Buckley * Update all analyses to use the xMin/Max/Mid, xMean, xWidth, etc. methods on YODA classes rather than the deprecated lowEdge etc. * Merge new HasPID functor from Holger Schulz into Rivet/Tools/ParticleUtils.hh, mainly for use with the any() function in Rivet/Tools/Utils.hh 2014-08-04 Andy Buckley * Add ghost tagging of charms, bottoms and taus to FastJets, and tag info accessors to Jet. * Add constructors from and cast operators to FastJet's PseudoJet object from Particle and Jet. * Convert inRange to not use fuzzy comparisons on closed intervals, providing old version as fuzzyInRange. 2014-07-30 Andy Buckley * Remove classifier functions accepting a Particle from the PID inner namespace. 2014-07-29 Andy Buckley * MC_JetAnalysis.cc: re-enable +- ratios for eta and y, now that YODA divide doesn't throw an exception. * ATLAS_2012_I1093734: fix a loop index error which led to the first bin value being unfilled for half the dphi plots. * Fix accidental passing of a GenParticle pointer as a PID code int in HeavyHadrons.cc. Effect limited to incorrect deductions about excited HF decay chains and should be small. Thanks to Tomasz Przedzinski for finding and reporting the issue during HepMC3 design work! 2014-07-23 Andy Buckley * Fix to logspace: make sure that start and end values are exact, not the result of exp(log(x)). 2014-07-16 Andy Buckley * Fix setting of library paths for doc building: Python can't influence the dynamic loader in its own process by setting an environment variable because the loader only looks at the variable once, when it starts. 2014-07-02 Andy Buckley * rivet-cmphistos now uses the generic yoda.read() function rather than readYODA() -- AIDA files can also be compared and plotted directly now. 2014-06-24 Andy Buckley * Add stupid missing include and std:: prefix in Rivet.hh 2014-06-20 Holger Schulz * bin/make-plots: Automatic generation of minor xtick labels if LogX is requested but data resides e.g. in [200, 700]. Fixes m_12 plots of, e.g. ATLAS_2010_S8817804 2014-06-17 David Grellscheid * pyext/rivet/Makefile.am: 'make distcheck' and out-of-source builds should work now. 2014-06-10 Andy Buckley * Fix use of the install command for bash completion installation on Macs. 2014-06-07 Andy Buckley * Removing direct includes of MathUtils.hh and others from analysis code files. 2014-06-02 Andy Buckley * Rivet 2.1.2 release! 2014-05-30 Andy Buckley * Using Particle absrap(), abseta() and abspid() where automatic conversion was feasible. * Adding a few extra kinematics mappings to ParticleBase. * Adding p3() accessors to the 3-momentum on FourMomentum, Particle, and Jet. * Using Jet and Particle kinematics methods directly (without momentum()) where possible. * More tweaks to make-plots 2D histo parsing behaviour. 2014-05-30 Holger Schulz * Actually fill the XQ 2D histo, .plot decorations. * Have make-plots produce colourmaps using YODA_3D_SCATTER objects. Remove the grid in colourmaps. 
* Some tweaks for the SFM analysis, trying to contact Martin Wunsch who did the unfolding back then. 2014-05-29 Holger Schulz * Re-enable 2D histo in MC_PDFS 2014-05-28 Andy Buckley * Updating analysis and project routines to use Particle::pid() by preference to Particle::pdgId(), and Particle::abspid() by preference to abs(Particle::pdgId()), etc. * Adding interfacing of smart pointer types and booking etc. for YODA 2D histograms and profiles. * Improving ParticleIdUtils and ParticleUtils functions based on merging of improved function collections from MCUtils, and dropping the compiled ParticleIdUtils.cc file. 2014-05-27 Andy Buckley * Adding CMS_2012_I1090423 (dijet angular distributions), CMS_2013_I1256943 (Zbb xsec and angular correlations), CMS_2013_I1261026 (jet and UE properties vs. Nch) and D0_2000_I499943 (bbbar production xsec and angular correlations). 2014-05-26 Andy Buckley * Fixing a bug in plot file handling, and adding a texpand() routine to rivet.util, to be used to expand some 'standard' physics TeX macros. * Adding ATLAS_2012_I1124167 (min bias event shapes), ATLAS_2012_I1203852 (ZZ cross-section), and ATLAS_2013_I1190187 (WW cross-section) analyses. 2014-05-16 Andy Buckley * Adding any(iterable, fn) and all(iterable, fn) template functions for convenience. 2014-05-15 Holger Schulz * Fix some bugs in identified hadron PIDs in OPAL_1998_S3749908. 2014-05-13 Andy Buckley * Writing out [UNVALIDATED], [PRELIMINARY], etc. in the --list-analyses output if analysis is not VALIDATED. 2014-05-12 Andy Buckley * Adding CMS_2013_I1265659 colour coherence analysis. 2014-05-07 Andy Buckley * Bug fixes in CMS_2013_I1209721 from Giulio Lenzi. * Fixing compiler warnings from clang, including one which indicated a misapplied cut bug in CDF_2006_S6653332. 2014-05-05 Andy Buckley * Fix missing abs() in Particle::abspid()!!!! 2014-04-14 Andy Buckley * Adding the namespace protection workaround for Boost described at http://www.boost.org/doc/libs/1_55_0/doc/html/foreach.html 2014-04-13 Andy Buckley * Adding a rivet.pc template file and installation rule for pkg-config to use. * Updating data/refdata/ALEPH_2001_S4656318.yoda to corrected version in HepData. 2014-03-27 Andy Buckley * Flattening PNG output of make-plots (i.e. no transparency) and other tweaks. 2014-03-23 Andy Buckley * Renaming the internal meta-particle class in DressedLeptons (and exposed in the W/ZFinders) from ClusteredLepton to DressedLepton for consistency with the change in name of its containing class. * Removing need for cmake and unportable yaml-cpp trickery by using libtool to build an embedded symbol-mangled copy of yaml-cpp rather than trying to mangle and build direct from the tarball. 2014-03-10 Andy Buckley * Rivet 2.1.1 release. 2014-03-07 Andy Buckley * Adding ATLAS multilepton search (no ref data file), ATLAS_2012_I1204447. 2014-03-05 Andy Buckley * Also renaming Breit-Wigner functions to cdfBW, invcdfBW and bwspace. * Renaming index_between() to the more Rivety binIndex(), since that's the only real use of such a function... plus a bit of SFINAE type relaxation trickery. 2014-03-04 Andy Buckley * Adding programmatic access to final histograms via AnalysisHandler::getData(). * Adding CMS 4 jet correlations analysis, CMS_2013_I1273574. * Adding CMS W + 2 jet double parton scattering analysis, CMS_2013_I1272853. * Adding ATLAS isolated diphoton measurement, ATLAS_2012_I1199269. * Improving the index_between function so the numeric types don't have to exactly match. 
* Adding better momentum comparison functors and sortBy, sortByX functions to use them easily on containers of Particle, Jet, and FourMomentum. 2014-02-10 Andy Buckley * Removing duplicate and unused ParticleBase sorting functors. * Removing unused HT increment and units in ATLAS_2012_I1180197 (unvalidated SUSY). * Fixing photon isolation logic bug in CMS_2013_I1258128 (Z rapidity). * Replacing internal uses of #include Rivet/Rivet.hh with Rivet/Config/RivetCommon.hh, removing the MAXRAPIDITY const, and repurposing Rivet/Rivet.hh as a convenience include for external API users. * Adding isStable, children, allDescendants, stableDescendants, and flightLength functions to Particle. * Replacing Particle and Jet deltaX functions with generic ones on ParticleBase, and adding deltaRap variants. * Adding a Jet.fhh forward declaration header, including fastjet::PseudoJet. * Adding a RivetCommon.hh header to allow Rivet.hh to be used externally. * Fixing HeavyHadrons to apply pT cuts if specified. 2014-02-06 Andy Buckley * 2.1.0 release! 2014-02-05 Andy Buckley * Protect against invalid prefix value if the --prefix configure option is unused. 2014-02-03 Andy Buckley * Adding the ATLAS_2012_I1093734 fwd-bwd / azimuthal minbias correlations analysis. * Adding the LHCB_2013_I1208105 forward energy flow analysis. 2014-01-31 Andy Buckley * Checking the YODA minimum version in the configure script. * Fixing the JADE_OPAL analysis ycut values to the midpoints, thanks to information from Christoph Pahl / Stefan Kluth. 2014-01-29 Andy Buckley * Removing unused/overrestrictive Isolation* headers. 2014-01-27 Andy Buckley * Re-bundling yaml-cpp, now built as a mangled static lib based on the LHAPDF6 experience. * Throw a UserError rather than an assert if AnalysisHandler::init is called more than once. 2014-01-25 David Grellscheid * src/Core/Cuts.cc: New Cuts machinery, already used in FinalState. Old-style "mineta, maxeta, minpt" constructors kept around for ease of transition. Minimal set of convenience functions available, like EtaIn(), should be expanded as needed. 2014-01-22 Andy Buckley * configure.ac: Remove opportunistic C++11 build, until this becomes mandatory (in version 2.2.0?). Anyone who wants C++11 can explicitly set the CXXFLAGS (and DYLDFLAGS for pre-Mavericks Macs) 2014-01-21 Leif Lonnblad * src/Core/Analysis.cc: Fixed bug in Analysis::isCompatible where an 'abs' was left out when checking that beam energies do not differ by more than 1 GeV. * src/Analyses/CMS_2011_S8978280.cc: Fixed checking of beam energy and booking corresponding histograms. 2013-12-19 Andy Buckley * Adding pid() and abspid() methods to Particle. * Adding hasCharm and hasBottom methods to Particle. * Adding a sorting functor arg version of the ZFinder::constituents() method. * Adding pTmin cut accessors to HeavyHadrons. * Tweak to the WFinder constructor to place the target W (trans) mass argument last. 2013-12-18 Andy Buckley * Adding a GenParticle* cast operator to Particle, removing the Particle and Jet copies of the momentum cmp functors, and general tidying/improvement/unification of the momentum properties of jets and particles. 2013-12-17 Andy Buckley * Using SFINAE techniques to improve the math util functions. * Adding isNeutrino to ParticleIdUtils, and isHadron/isMeson/isBaryon/isLepton/isNeutrino methods to Particle. * Adding a FourMomentum cast operator to ParticleBase, so that Particle and Jet objects can be used directly as FourMomentums. 2013-12-16 Andy Buckley * LeptonClusters renamed to DressedLeptons.
* Adding singular particle accessor functions to WFinder and ZFinder. * Removing ClusteredPhotons and converting ATLAS_2010_S8919674. 2013-12-12 Andy Buckley * Fixing a problem with --disable-analyses (thanks to David Hall) * Require FastJet version 3. * Bumped version to 2.1.0a0 * Adding -DNDEBUG to the default build flags, unless in --enable-debug mode. * Adding a special treatment of RIVET_*_PATH variables: if they end in :: the default search paths will not be appended. Used primarily to restrict the doc builds to look only inside the build dirs, but potentially also useful in other special circumstances. * Adding a definition of exec_prefix to rivet-buildplugin. * Adding -DNDEBUG to the default non-debug build flags. 2013-11-27 Andy Buckley * Removing accidentally still-present no-as-needed linker flag from rivet-config. * Lots of analysis clean-up and migration to use new features and W/Z finder APIs. * More momentum method forwarding on ParticleBase and adding abseta(), absrap() etc. functions. * Adding the DEFAULT_RIVET_ANA_CONSTRUCTOR cosmetic macro. * Adding deltaRap() etc. function variations * Adding no-decay photon clustering option to WFinder and ZFinder, and replacing opaque bool args with enums. * Adding an option for ignoring photons from hadron/tau decays in LeptonClusters. 2013-11-22 Andy Buckley * Adding Particle::fromBottom/Charm/Tau() members. LHCb were already mocking this up, so it seemed sensible to add it to the interface as a more popular (and even less dangerous) version of hasAncestor(). * Adding an empty() member to the JetAlg interface. 2013-11-07 Andy Buckley * Adding the GSL lib path to the library path in the env scripts and the rivet-config --ldflags output. 2013-10-25 Andy Buckley * 2.0.0 release!!!!!! 2013-10-24 Andy Buckley * Supporting zsh completion via bash completion compatibility. 2013-10-22 Andy Buckley * Updating the manual to describe YODA rather than AIDA and the new rivet-cmphistos script. * bin/make-plots: Adding paths to error messages in histogram combination. * CDF_2005_S6217184: fixes to low stats errors and final scatter plot binning. 2013-10-21 Andy Buckley * Several small fixes in jet shape analyses, SFM_1984, etc. found in the last H++ validation run. 2013-10-18 Andy Buckley * Updates to configure and the rivetenv scripts to try harder to discover YODA. 2013-09-26 Andy Buckley * Now bundling Cython-generated files in the tarballs, so Cython is not a build requirement for non-developers. 2013-09-24 Andy Buckley * Removing unnecessary uses of a momentum() indirection when accessing kinematic variables. * Clean-up in Jet, Particle, and ParticleBase, in particular splitting PID functions on Particle from those on PID codes, and adding convenience kinematic functions to ParticleBase. 2013-09-23 Andy Buckley * Add the -avoid-version flag to libtool. * Final analysis histogramming issues resolved. 2013-08-16 Andy Buckley * Adding a ConnectBins flag in make-plots, to decide whether to connect adjacent, gapless bins with a vertical line. Enabled by default (good for the step-histo default look of MC lines), but now rivet-cmphistos disables it for the reference data. 2013-08-14 Andy Buckley * Making 2.0.0beta3 -- just a few analysis migration issues remaining, but it's worth making another beta since there were lots of framework fixes/improvements. 2013-08-11 Andy Buckley * ARGUS_1993_S2669951 also fixed using scatter autobooking.
* Fixing remaining issues with booking in BABAR_2007_S7266081 using the feature below (far nicer than hard-coding). * Adding a copy_pts param to some Analysis::bookScatter2D methods: pre-setting the points with x values is sometimes genuinely useful. 2013-07-26 Andy Buckley * Removed the (officially) obsolete CDF 2008 LEADINGJETS and NOTE_9351 underlying event analyses -- superseded by the proper versions of these analyses based on the final combined paper. * Removed the semi-demo Multiplicity projection -- only the EXAMPLE analysis and the trivial ALEPH_1991_S2435284 needed adaptation. 2013-07-24 Andy Buckley * Adding a rejection of histo paths containing /TMP/ from the writeData function. Use this to handle booked temporary histograms... for now. 2013-07-23 Andy Buckley * Make rivet-cmphistos _not_ draw a ratio plot if there is only one line. * Improvements and fixes to HepData lookup with rivet-mkanalysis. 2013-07-22 Andy Buckley * Add -std=c++11 or -std=c++0x to the Rivet compiler flags if supported. * Various fixes to analyses with non-zero numerical diffs. 2013-06-12 Andy Buckley * Adding a new HeavyHadrons projection. * Adding optional extra include_end args to logspace() and linspace(). 2013-06-11 Andy Buckley * Moving Rivet/RivetXXX.hh tools headers into Rivet/Tools/. * Adding PrimaryHadrons projection. * Adding particles_in/out functions on GenParticle to RivetHepMC. * Moved STL extensions from Utils.hh to RivetSTL.hh and tidying. * Tidying, improving, extending, and documenting in RivetSTL.hh. * Adding a #include of Logging.hh into Projection.hh, and removing unnecessary #includes from all Projection headers. 2013-06-10 Andy Buckley * Moving htmlify() and detex() Python functions into rivet.util. * Add HepData URL for Inspire ID lookup to the rivet script. * Fix analyses' info files which accidentally listed the Inspire ID under the SpiresID metadata key. 2013-06-07 Andy Buckley * Updating mk-analysis-html script to produce MathJax output * Adding a version of Analysis::removeAnalysisObject() which takes an AO pointer as its argument. * bin/rivet: Adding pandoc-based conversion of TeX summary and description strings to plain text on the terminal output. * Add MathJax to rivet-mkhtml output, set up so the .info entries should render ok. * Mark the OPAL 1993 analysis as UNVALIDATED: from the H++ benchmark runs it looks nothing like the data, and there are some outstanding ambiguities. 2013-06-06 Andy Buckley * Releasing 2.0.0b2 beta version. 2013-06-05 Andy Buckley * Renaming Analysis::add() etc. to very explicit addAnalysisObject(), sorting out shared_pointer polymorphism issues via the Boost dynamic_pointer_cast, and adding a full set of getHisto1D(), etc. explicitly named and typed accessors, including ones with HepData dataset/axis ID signatures. * Adding histo booking from an explicit reference Scatter2D (and more placeholders for 2D histos / 3D scatters) and rewriting existing autobooking to use this. * Converting inappropriate uses of size_t to unsigned int in Analysis. * Moving Analysis::addPlot to Analysis::add() (or reg()?) and adding get() and remove() (or unreg()?) * Fixing attempted abstraction of import fallbacks in rivet.util.import_ET(). * Removing broken attempt at histoDir() caching which led to all histograms being registered under the same analysis name. 2013-06-04 Andy Buckley * Updating the Cython version requirement to 0.18 * Adding Analysis::integrate() functions and tidying the Analysis.hh file a bit. 
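As a small illustration of the logspace()/linspace() binning helpers and explicit-bin-edge histogram booking mentioned in the entries above, a minimal sketch (Rivet 2.x-style booking assumed; the histogram name, ranges and member name are invented for the example):

    // Fragment of an Analysis subclass (sketch only); "Rivet/Analysis.hh"
    // provides both the booking interface and the MathUtils helpers.
    Histo1DPtr _h_pt;

    void init() {
      // logspace(nbins, start, end) returns nbins+1 logarithmically spaced edges;
      // the optional include_end argument noted above controls whether the end
      // value is included in the returned edges.
      const std::vector<double> edges = logspace(20, 10.0, 1000.0);
      _h_pt = bookHisto1D("pt_example", edges);   // book on explicit bin edges
    }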
2013-06-03 Andy Buckley * Adding explicit protection against using inf/nan scalefactors in ATLAS_2011_S9131140 and H1_2000_S4129130. * Making Analysis::scale noisily complain about invalid scalefactors. 2013-05-31 Andy Buckley * Reducing the TeX main memory to ~500MB. Turns out that it *can* be too large with new versions of TeXLive! 2013-05-30 Andy Buckley * Reverting bookScatter2D behaviour to never look at ref data, and updating a few affected analyses. This should fix some bugs with doubled datapoints introduced by the previous behaviour+addPoint. * Adding a couple of minor Utils.hh and MathUtils.hh features. 2013-05-29 Andy Buckley * Removing Constraints.hh header. * Minor bugfixes and improvements in Scatter2D booking and MC_JetAnalysis. 2013-05-28 Andy Buckley * Removing defunct HistoFormat.hh and HistoHandler.{hh,cc} 2013-05-27 Andy Buckley * Removing includes of Logging.hh, RivetYODA.hh, and ParticleIdUtils.hh from analyses (and adding an include of ParticleIdUtils.hh to Analysis.hh) * Removing now-unused .fhh files. * Removing lots of unnecessary .fhh includes from core classes: everything still compiling ok. A good opportunity to tidy this up before the release. * Moving the rivet-completion script from the data dir to bin (the completion is for scripts in bin, and this makes development easier). * Updating bash completion scripts for YODA format and compare-histos -> rivet-cmphistos. 2013-05-23 Andy Buckley * Adding Doxy comments and a couple of useful convenience functions to Utils.hh. * Final tweaks to ATLAS ttbar jet veto analysis (checked logic with Kiran Joshi). 2013-05-15 Andy Buckley * Many 1.0 -> weight bugfixes in ATLAS_2011_I945498. * yaml-cpp v3 support re-introduced in .info parsing. * Lots of analysis clean-ups for YODA TODO issues. 2013-05-13 Andy Buckley * Analysis histo booking improvements for Scatter2D, placeholders for 2D histos, and general tidying. 2013-05-12 Andy Buckley * Adding configure-time differentiation between yaml-cpp API versions 3 and 5. 2013-05-07 Andy Buckley * Converting info file reading to use the yaml-cpp 0.5.x API. 2013-04-12 Andy Buckley * Tagging as 2.0.0b1 2013-04-04 Andy Buckley * Removing bundling of yaml-cpp: it needs to be installed by the user / bootstrap script from now on. 2013-04-03 Andy Buckley * Removing svn:external m4 directory, and converting Boost detection to use better boost.m4 macros. 2013-03-22 Andy Buckley * Moving PID consts to the PID namespace and corresponding code updates and opportunistic clean-ups. * Adding Particle::fromDecay() method. 2013-03-09 Andy Buckley * Version bump to 2.0.0b1 in anticipation of first beta release. * Adding many more 'popular' particle ID code named-consts and aliases, and updating the RapScheme enum with ETA -> ETARAP, and fixing affected analyses (plus other opportunistic tidying / minor bug-fixing). * Fixing a symbol misnaming in ATLAS_2012_I1119557. 2013-03-07 Andy Buckley * Renaming existing uses of ParticleVector to the new 'Particles' type. * Updating util classes, projections, and analyses to deal with the HepMC return value changes. * Adding new Particle(const GenParticle*) constructor. * Converting Particle::genParticle() to return a const pointer rather than a reference, for the same reason as below (+ consistency within Rivet and with the HepMC pointer-centric coding design). * Converting Event to use a different implementation of original and modified GenParticles, and to manage the memory in a more future-proof way.
Event::genParticle() now returns a const pointer rather than a reference, to signal that the user is leaving the happy pastures of 'normal' Rivet behind. * Adding a Particles typedef by analogy to Jets, and in preference to the cumbersome ParticleVector. * bin/: Lots of tidying/pruning of messy/defunct scripts. * Creating spiresbib, util, and plotinfo rivet python module submodules: this eliminates lighthisto and the standalone spiresbib modules. Util contains convenience functions for Python version testing, clean ElementTree import, and process renaming, for primary use by the rivet-* scripts. * Removing defunct scripts that have been replaced/obsoleted by YODA. 2013-03-06 Andy Buckley * Fixing doc build so that the reference histos and titles are ~correctly documented. We may want to truncate some of the lists! 2013-03-06 Hendrik Hoeth * Added ATLAS_2012_I1125575 analysis * Converted rivet-mkhtml to yoda * Introduced rivet-cmphistos as yoda based replacement for compare-histos 2013-03-05 Andy Buckley * Replacing all AIDA ref data with YODA versions. * Fixing the histograms entries in the documentation to be tolerant to plotinfo loading failures. * Making the findDatafile() function primarily find YODA data files, then fall back to AIDA. The ref data loader will use the appropriate YODA format reader. 2013-02-05 David Grellscheid * include/Rivet/Math/MathUtils.hh: added BWspace bin edge method to give equal-area Breit-Wigner bins 2013-02-01 Andy Buckley * Adding an element to the PhiMapping enum and a new mapAngle(angle, mapping) function. * Fixes to Vector3::azimuthalAngle and Vector3::polarAngle calculation (using the mapAngle functions). 2013-01-25 Frank Siegert * Split MC_*JETS analyses into three separate bits: MC_*INC (inclusive properties) MC_*JETS (jet properties) MC_*KTSPLITTINGS (kT splitting scales). 2013-01-22 Hendrik Hoeth * Fix TeX variable in the rivetenv scripts, especially for csh 2012-12-21 Andy Buckley * Version 1.8.2 release! 2012-12-20 Andy Buckley * Adding ATLAS_2012_I1119557 analysis (from Roman Lysak and Lily Asquith). 2012-12-18 Andy Buckley * Adding TOTEM_2012_002 analysis, from Sercan Sen. 2012-12-18 Hendrik Hoeth * Added CMS_2011_I954992 analysis 2012-12-17 Hendrik Hoeth * Added CMS_2012_I1193338 analysis * Fixed xi cut in ATLAS_2011_I894867 2012-12-17 Andy Buckley * Adding analysis descriptions to the HTML analysis page ToC. 2012-12-14 Hendrik Hoeth * Added CMS_2012_PAS_FWD_11_003 analysis * Added LHCB_2012_I1119400 analysis 2012-12-12 Andy Buckley * Correction to jet acceptance in CMS_2011_S9120041, from Sercan Sen: thanks! 2012-12-12 Hendrik Hoeth * Added CMS_2012_PAS_QCD_11_010 analysis 2012-12-07 Andy Buckley * Version number bump to 1.8.2 -- release approaching. * Rewrite of ALICE_2012_I1181770 analysis to make it a bit more sane and acceptable. * Adding a note on FourVector and FourMomentum that operator- and operator-= invert both the space and time components: use of -= can result in a vector with negative energy. * Adding particlesByRapidity and particlesByAbsRapidity to FinalState. 2012-12-07 Hendrik Hoeth * Added ALICE_2012_I1181770 analysis * Bump version to 1.8.2 2012-12-06 Hendrik Hoeth * Added ATLAS_2012_I1188891 analysis * Added ATLAS_2012_I1118269 analysis * Added CMS_2012_I1184941 analysis * Added LHCB_2010_I867355 analysis * Added TGraphErrors support to root2flat 2012-11-27 Andy Buckley * Converting CMS_2012_I1102908 analysis to use YODA. * Adding XLabel and YLabel setting in histo/profile/scatter booking. 
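Referring to the MathUtils additions a few entries above (the mapAngle(angle, mapping) function with the PhiMapping enum, and the BWspace equal-area Breit-Wigner bin-edge helper), here is a rough sketch; the enum value name and the BWspace argument order shown are assumptions for illustration, not checked against a specific Rivet version:

    #include "Rivet/Math/MathUtils.hh"
    #include <vector>

    void mathUtilsSketch() {
      using namespace Rivet;
      // Map an arbitrary azimuthal angle into a chosen convention
      // (assumed PhiMapping value name)
      const double phi = mapAngle(7.1, MINUSPI_PLUSPI);

      // Equal-area Breit-Wigner bin edges, e.g. around a Z-like peak
      // (assumed argument order: nbins, xmin, xmax, mu, gamma)
      const std::vector<double> edges = BWspace(20, 66.0, 116.0, 91.2, 2.5);

      (void)phi; (void)edges;   // sketch only: values unused here
    }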
2012-11-27 Hendrik Hoeth * Fix make-plots png creation for SL5 2012-11-23 Peter Richardson * Added ATLAS_2012_CONF_2012_153 4-lepton SUSY search 2012-11-17 Andy Buckley * Adding MC_PHOTONS by Steve Lloyd and AB, for testing general unisolated photon properties, especially those associated with charged leptons (e and mu). 2012-11-16 Andy Buckley * Adding MC_PRINTEVENT, a convenient (but verbose!) analysis for printing out event details to stdout. 2012-11-15 Andy Buckley * Removing the long-unused/defunct autopackage system. 2012-11-15 Hendrik Hoeth * Added LHCF_2012_I1115479 analysis * Added ATLAS_2011_I894867 analysis 2012-11-14 Hendrik Hoeth * Added CMS_2012_I1102908 analysis 2012-11-14 Andy Buckley * Converting the argument order of logspace, clarifying the arguments, updating affected code, and removing Analysis::logBinEdges. * Merging updates from the AIDA maintenance branch up to r4002 (latest revision for next merges is r4009). 2012-11-11 Andy Buckley * include/Math/: Various numerical fixes to Vector3::angle and changing the 4 vector mass treatment to permit spacelike virtualities (in some cases even the fuzzy isZero assert check was being violated). The angle check allows a clean-up of some workaround code in MC_VH2BB. 2012-10-15 Hendrik Hoeth * Added CMS_2012_I1107658 analysis 2012-10-11 Hendrik Hoeth * Added CDF_2012_NOTE10874 analysis 2012-10-04 Hendrik Hoeth * Added ATLAS_2012_I1183818 analysis 2012-07-17 Hendrik Hoeth * Cleanup and multiple fixes in CMS_2011_S9120041 * Bugfixes in ALEPH_2004_S5765862 and ATLAS_2010_CONF_2010_049 (thanks to Anil Pratap) 2012-08-09 Andy Buckley * Fixing aida2root command-line help message and converting to TH* rather than TGraph by default. 2012-07-24 Andy Buckley * Improvements/migrations to rivet-mkhtml, rivet-mkanalysis, and rivet-buildplugin. 2012-07-17 Hendrik Hoeth * Add CMS_2012_I1087342 2012-07-12 Andy Buckley * Fix rivet-mkanalysis a bit for YODA compatibility. 2012-07-05 Hendrik Hoeth * Version 1.8.1! 2012-07-05 Holger Schulz * Add ATLAS_2011_I945498 2012-07-03 Hendrik Hoeth * Bugfix for transverse mass (thanks to Gavin Hesketh) 2012-06-29 Hendrik Hoeth * Merge YODA branch into trunk. YODA is alive!!!!!! 2012-06-26 Holger Schulz * Add ATLAS_2012_I1091481 2012-06-20 Hendrik Hoeth * Added D0_2011_I895662: 3-jet mass 2012-04-24 Hendrik Hoeth * Fixed a few bugs in rivet-rmgaps * Added new TOTEM dN/deta analysis 2012-03-19 Andy Buckley * Version 1.8.0! * src/Projections/UnstableFinalState.cc: Fix compiler warning. * Version bump for testing: 1.8.0beta1. * src/Core/AnalysisInfo.cc: Add printout of YAML parser exception error messages to aid debugging. * bin/Makefile.am: Attempt to fix rivet-nopy build on SLC5. * src/Analyses/LHCB_2010_S8758301.cc: Add two missing entries to the PDGID -> lifetime map. * src/Projections/UnstableFinalState.cc: Extend list of vetoed particles to include reggeons. 2012-03-16 Andy Buckley * Version change to 1.8.0beta0 -- nearly ready for long-awaited release! * pyext/setup.py.in: Adding handling for the YAML library: fix for Genser build from Anton Karneyeu. * src/Analyses/LHCB_2011_I917009.cc: Hiding lifetime-lookup error message if the offending particle is not a hadron. * include/Rivet/Math/MathHeader.hh: Using unnamespaced std::isnan and std::isinf as standard. 2012-03-16 Hendrik Hoeth * Improve default plot behaviour for 2D histograms 2012-03-15 Hendrik Hoeth * Make ATLAS_2012_I1084540 less verbose, and general code cleanup of that analysis.
* New-style plugin hook in ATLAS_2011_I926145, ATLAS_2011_I944826 and ATLAS_2012_I1084540 * Fix compiler warnings in ATLAS_2011_I944826 and CMS_2011_S8973270 * CMS_2011_S8941262: Weights are double, not int. * disable inRange() tests in test/testMath.cc until we have a proper fix for the compiler warnings we see on SL5. 2012-03-07 Andy Buckley * Marking ATLAS_2011_I919017 as VALIDATED (this should have happened a long time ago) and adding more references. 2012-02-28 Hendrik Hoeth * lighthisto.py: Caching for re.compile(). This speeds up aida2flat and flat2aida by more than an order of magnitude. 2012-02-27 Andy Buckley * doc/mk-analysis-html: Adding more LaTeX/text -> HTML conversion replacements, including better <,> handling. 2012-02-26 Andy Buckley * Adding CMS_2011_S8973270, CMS_2011_S8941262, CMS_2011_S9215166, CMS_QCD_10_024, from CMS. * Adding LHCB_2011_I917009 analysis, from Alex Grecu. * src/Core/Analysis.cc, include/Rivet/Analysis.hh: Add a numeric-arg version of histoPath(). 2012-02-24 Holger Schulz * Adding ATLAS Ks/Lambda analysis. 2012-02-20 Andy Buckley * src/Analyses/ATLAS_2011_I925932.cc: Using new overflow-aware normalize() in place of counters and scale(..., 1/count) 2012-02-14 Andy Buckley * Splitting MC_GENERIC to put the PDF and PID plotting into MC_PDFS and MC_IDENTIFIED respectively. * Renaming MC_LEADINGJETS to MC_LEADJETUE. 2012-02-14 Hendrik Hoeth * DELPHI_1996_S3430090 and ALEPH_1996_S3486095: fix rapidity vs {Thrust,Sphericity}-axis. 2012-02-14 Andy Buckley * bin/compare-histos: Don't attempt to remove bins from MC histos where they aren't found in the ref file, if the ref file is not expt data, or if the new --no-rmgapbins arg is given. * bin/rivet: Remove the conversion of requested analysis names to upper-case: mixed-case analysis names will now work. 2012-02-14 Frank Siegert * Bugfixes and improvements for MC_TTBAR: - Avoid assert failure with logspace starting at 0.0 - Ignore charged lepton in jet finding (otherwise jet multi is always +1). - Add some dR/deta/dphi distributions as noted in TODO - Change pT plots to logspace as well (to avoid low-stat high pT bins) 2012-02-10 Hendrik Hoeth * rivet-mkhtml -c option now has the semantics of a .plot file. The contents are appended to the dat output by compare-histos. 2012-02-09 David Grellscheid * Fixed broken UnstableFS behaviour 2012-01-25 Frank Siegert * Improvements in make-plots: - Add PlotTickLabels and RatioPlotTickLabels options (cf. make-plots.txt) - Make ErrorBars and ErrorBands non-exclusive (and change their order, such that Bars are on top of Bands) 2012-01-25 Holger Schulz * Add ATLAS diffractive gap analysis 2012-01-23 Andy Buckley * bin/rivet: When using --list-analyses, the analysis summary is now printed out when log level is <= INFO, rather than < INFO. The effect on command line behaviour is that useful identifying info is now printed by default when using --list-analyses, rather than requiring --list-analyses -v. To get the old behaviour, e.g. if using the output of rivet --list-analyses for scripting, now use --list-analyses -q. 2012-01-22 Andy Buckley * Tidying lighthisto, including fixing the order in which +- error values are passed to the Bin constructor in fromFlatHisto. 2012-01-16 Frank Siegert * Bugfix in ATLAS_2012_I1083318: Include non-signal neutrinos in jet clustering. * Add first version of ATLAS_2012_I1083318 (W+jets). Still UNVALIDATED until final happiness with validation plots arises and data is in Hepdata. 
* Bugfix in ATLAS_2010_S8919674: Really use neutrino with highest pT for Etmiss. Doesn't seem to make very much difference, but is more correct in principle. 2012-01-16 Peter Richardson * Fixes to ATLAS_2011_S9225137 to include reference data 2012-01-13 Holger Schulz * Add ATLAS inclusive lepton analysis 2012-01-12 Hendrik Hoeth * Font selection support in rivet-mkhtml 2012-01-11 Peter Richardson * Added pi0 to list of particles. 2012-01-11 Andy Buckley * Removing references to Boost random numbers. 2011-12-30 Andy Buckley * Adding a placeholder rivet-which script (not currently installed). * Tweaking to avoid a very time-consuming debug printout in compare-histos with the -v flag, and modifying the Rivet env vars in rivet-mkhtml before calling compare-histos to eliminate problems induced by relative paths (i.e. "." does not mean the same thing when the directory is changed within the script). 2011-12-12 Andy Buckley * Adding a command line completion function for rivet-mkhtml. 2011-12-12 Frank Siegert * Fix for factor of 2.0 in normalisation of CMS_2011_S9086218 * Add --ignore-missing option to rivet-mkhtml to ignore non-existing AIDA files. 2011-12-06 Andy Buckley * Include underflow and overflow bins in the normalisation when calling Analysis::normalise(h) 2011-11-23 Andy Buckley * Bumping version to 1.8.0alpha0 since the Jet interface changes are quite a major break with backward compatibility (although the vast majority of analyses should be unaffected). * Removing crufty legacy stuff from the Jet class -- there is never any ambiguity about whether Particle or FourMomentum objects are the constituents now, and the jet 4-momentum is set explicitly by the jet alg so that e.g. there is no mismatch if the FastJet pt recombination scheme is used. * Adding default do-nothing implementations of Analysis::init() and Analysis::finalize(), since it is possible for analysis implementations to not need to do anything in these methods, and forcing analysis authors to write do-nothing boilerplate code is not "the Rivet way"! 2011-11-19 Andy Buckley * Adding variant constructors to FastJets with a more natural Plugin* argument, and decrufting the constructor implementations a bit. * bin/rivet: Adding a more helpful error message if the rivet module can't be loaded, grouping the option parser options, removing the -A option (this just doesn't seem useful anymore), and providing a --pwd option as a shortcut to append "." to the search path. 2011-11-18 Andy Buckley * Adding a guide to compiling a new analysis template to the output message of rivet-mkanalysis. 2011-11-11 Andy Buckley * Version 1.7.0 release! * Protecting the OPAL 2004 analysis against NaNs in the hemispheres projection -- I can't track the origin of these and suspect some occasional memory corruption. 2011-11-09 Andy Buckley * Renaming source files for EXAMPLE and PDG_HADRON_MULTIPLICITIES(_RATIOS) analyses to match the analysis names. * Cosmetic fixes in ATLAS_2011_S9212183 SUSY analysis. * Adding new ATLAS W pT analysis from Elena Yatsenko (slightly adapted). 2011-10-20 Frank Siegert * Extend API of W/ZFinder to allow for specification of input final state in which to search for leptons/photons. 2011-10-19 Andy Buckley * Adding new version of LHCB_2010_S8758301, based on submission from Alex Grecu. There is some slightly dodgy-looking GenParticle* fiddling going on, but apparently it's necessary (and hopefully robust).
2011-10-17 Andy Buckley * bin/rivet-nopy linker line tweak to make compilation work with GCC 4.6 (-lHepMC has to be explicitly added for some reason). 2011-10-13 Frank Siegert * Add four CMS QCD analyses kindly provided by CMS. 2011-10-12 Andy Buckley * Adding a separate test program for non-matrix/vector math functions, and adding a new set of int/float literal arg tests for the inRange functions in it. * Adding a jet multiplicity plot for jets with pT > 30 GeV to MC_TTBAR. 2011-10-11 Andy Buckley * Removing SVertex. 2011-10-11 James Monk * root2flat was missing the first bin (plus spurious last bin) * My version of bash does not understand the pipe syntax |& in rivet-buildplugin 2011-09-30 James Monk * Fix bug in ATLAS_2010_S8817804 that misidentified the akt4 jets as akt6 2011-09-29 Andy Buckley * Converting FinalStateHCM to a slightly more general DISFinalState. 2011-09-26 Andy Buckley * Adding a default libname argument to rivet-buildplugin. If the first argument doesn't have a .so library suffix, then use RivetAnalysis.so as the default. 2011-09-19 Hendrik Hoeth * make-plots: Fixing regex for \physicscoor. Adding "FrameColor" option. 2011-09-17 Andy Buckley * Improving interactive metadata printout, by not printing headings for missing info. * Bumping the release number to 1.7.0alpha0, since with these SPIRES/Inspire changes and the MissingMomentum API change we need more than a minor release. * Updating the mkanalysis, BibTeX-grabbing and other places that care about analysis SPIRES IDs to also be able to handle the new Inspire system record IDs. The missing link is getting to HepData from an Inspire code... * Using the .info file rather than an in-code declaration to specify that an analysis needs cross-section information. * Adding Inspire support to the AnalysisInfo and Analysis interfaces. Maybe we can find a way to combine the two, e.g. return the SPIRES code prefixed with an "S" if no Inspire ID is available... 2011-09-17 Hendrik Hoeth * Added ALICE_2011_S8909580 (strange particle production at 900 GeV) * Feed-down correction in ALICE_2011_S8945144 2011-09-16 Andy Buckley * Adding ATLAS track jet analysis, modified from the version provided by Seth Zenz: ATLAS_2011_I919017. Note that this analysis is currently using the Inspire ID rather than the Spires one: we're clearly going to have to update the API to handle Inspire codes, so might as well start now... 2011-09-14 Andy Buckley * Adding the ATLAS Z pT measurement at 7 TeV (ATLAS_2011_S9131140) and an MC analysis for VH->bb events (MC_VH2BB). 2011-09-12 Andy Buckley * Removing uses of getLog, cout, cerr, and endl from all standard analyses and projections, except in a very few special cases. 2011-09-10 Andy Buckley * Changing the behaviour and interface of the MissingMomentum projection to calculate vector ET correctly. This was previously calculated according to the common definition of -E*sin(theta) of the summed visible 4-momentum in the event, but that is incorrect because the timelike term grows monotonically. Instead, transverse 2-vectors of size ET need to be constructed for each visible particle, and vector-summed in the transverse plane. The rewrite of this behaviour made it opportune to make an API improvement: the previous method names scalarET/vectorET() have been renamed to scalar/vectorEt() to better match the Rivet FourMomentum::Et() method, and MissingMomentum::vectorEt() now returns a Vector3 rather than a double so that the transverse missing Et direction is also available. 
Only one data analysis has been affected by this change in behaviour: the D0_2004_S5992206 dijet delta(phi) analysis. It's expected that this change will not be very significant, as it is a *veto* on significant missing ET to reduce non-QCD contributions. MC studies using this analysis ~always run with QCD events only, so these contributions should be small. The analysis efficiency may have been greatly improved, as fewer events will now fail the missing ET veto cut. * Add sorting of the ParticleVector returned by the ChargedLeptons projection. * configure.ac: Adding a check to make sure that no-one tries to install into --prefix=$PWD. 2011-09-04 Andy Buckley * lighthisto fixes from Christian Roehr. 2011-08-26 Andy Buckley * Removing deprecated features: the setBeams(...) method on Analysis, the MaxRapidity constant, the split(...) function, the default init() method from AnalysisHandler and its test, and the deprecated TotalVisibleMomentum and PVertex projections. 2011-08-23 Andy Buckley * Adding a new DECLARE_RIVET_PLUGIN wrapper macro to hide the details of the plugin hook system from analysis authors. Migration of all analyses and the rivet-mkanalysis script to use this as the standard plugin hook syntax. * Also call the --cflags option on root-config when using the --root option with rivet-buildplugin (thanks to Richard Corke for the report) 2011-08-23 Frank Siegert * Added ATLAS_2011_S9126244 * Added ATLAS_2011_S9128077 2011-08-23 Hendrik Hoeth * Added ALICE_2011_S8945144 * Remove obsolete setBeams() from the analyses * Update CMS_2011_S8957746 reference data to the official numbers * Use Inspire rather than Spires. 2011-08-19 Frank Siegert * More NLO parton level generator friendliness: Don't crash or fail when there are no beam particles. * Add --ignore-beams option to skip compatibility check. 2011-08-09 David Mallows * Fix aida2flat to ignore empty dataPointSet 2011-08-07 Andy Buckley * Adding TEXINPUTS and LATEXINPUTS prepend definitions to the variables provided by rivetenv.(c)sh. A manual setting of these variables that didn't include the Rivet TEXMFHOME path was breaking make-plots on lxplus, presumably since the system LaTeX packages are so old there. 2011-08-02 Frank Siegert Version 1.6.0 release! 2011-08-01 Frank Siegert * Overhaul of the WFinder and ZFinder projections, including a change of interface. This solves potential problems with leptons which are not W/Z constituents being excluded from the RemainingFinalState. 2011-07-29 Andy Buckley * Version 1.5.2 release! * New version of aida2root from James Monk. 2011-07-29 Frank Siegert * Fix implementation of --config file option in make-plots. 2011-07-27 David Mallows * Updated MC_TTBAR.plot to reflect updated analysis. 2011-07-25 Andy Buckley * Adding a useTransverseMass flag method and implementation to InvMassFinalState, and using it in the WFinder, after feedback from Gavin Hesketh. This was the neatest way I could do it :S Some other tidying up happened along the way. * Adding transverse mass massT and massT2 methods and functions for FourMomentum. 2011-07-22 Frank Siegert * Added ATLAS_2011_S9120807 * Add two more observables to MC_DIPHOTON and make its isolation cut more LHC-like * Add linear photon pT histo to MC_PHOTONJETS 2011-07-20 Andy Buckley * Making MC_TTBAR work with semileptonic ttbar events and generally tidying the code. 2011-07-19 Andy Buckley * Version bump to 1.5.2.b01 in preparation for a release in the very near future. 2011-07-18 David Mallows * Replaced MC_TTBAR: Added t,tbar reconstruction.
Not yet working. 2011-07-18 Andy Buckley * bin/rivet-buildplugin.in: Pass the AM_CXXFLAGS variable (including the warning flags) to the C++ compiler when building user analysis plugins. * include/LWH/DataPointSet.h: Fix accidental setting of errorMinus = scalefactor * error_Plus_. Thanks to Anton Karneyeu for the bug report! 2011-07-18 Hendrik Hoeth * Added CMS_2011_S8884919 (charged hadron multiplicity in NSD events corrected to pT>=0). * Added CMS_2010_S8656010 and CMS_2010_S8547297 (charged hadron pT and eta in NSD events) * Added CMS_2011_S8968497 (chi_dijet) * Added CMS_2011_S8978280 (strangeness) 2011-07-13 Andy Buckley * Rivet PDF manual updates, to not spread disinformation about bootstrapping a Genser repo. 2011-07-12 Andy Buckley * bin/make-plots: Protect property reading against unstripped \r characters from DOS newlines. * bin/rivet-mkhtml: Add a -M unmatch regex flag (note that these are matching the analysis path rather than individual histos on this script), and speed up the initial analysis identification and selection by avoiding loops of regex comparisons for repeats of strings which have already been analysed. * bin/compare-histos: remove the completely (?) unused histogram list, and add -m and -M regex flags, as for aida2flat and flat2aida. 2011-06-30 Hendrik Hoeth * fix fromFlat() in lighthistos: It would ignore histogram paths before. * flat2aida: preserve histogram order from .dat files 2011-06-27 Andy Buckley * pyext/setup.py.in: Use CXXFLAGS and LDFLAGS safely in the Python extension build, and improve the use of build/src directory arguments. 2011-06-23 Andy Buckley * Adding a tentative rivet-updateanalyses script, based on lhapdf-getdata, which will download new analyses as requested. We could change our analysis-providing behaviour a bit to allow this sort of delivery mechanism to be used as the normal way of getting analysis updates without us having to make a whole new Rivet release. It is nice to be able to identify analyses with releases, though, for tracking whether bugs have been addressed. 2011-06-10 Frank Siegert * Bugfixes in WFinder. 2011-06-10 Andy Buckley * Adding \physicsxcoor and \physicsycoor treatment to make-plots. 2011-06-06 Hendrik Hoeth * Allow for negative cross-sections. NLO tools need this. * make-plots: For RatioPlotMode=deviation also consider the MC uncertainties, not just data. 2011-06-04 Hendrik Hoeth * Add support for goodness-of-fit calculations to make-plots. The results are shown in the legend, and one histogram can be selected to determine the color of the plot margin. See the documentation for more details. 2011-06-04 Andy Buckley * Adding auto conversion of Histogram2D to DataPointSets in the AnalysisHandler _normalizeTree method. 2011-06-03 Andy Buckley * Adding a file-weight feature to the Run object, which will optionally rescale the weights in the provided HepMC files. This should be useful for e.g. running on multiple differently-weighted AlpGen HepMC files/streams. The new functionality is used by the rivet command via an optional weight appended to the filename with a colon delimiter, e.g. "rivet fifo1.hepmc fifo2.hepmc:2.31" 2011-06-01 Hendrik Hoeth * Add BeamThrust projection 2011-05-31 Hendrik Hoeth * Fix LIBS for fastjet-3.0 * Add basic infrastructure for Taylor plots in make-plots * Fix OPAL_2004_S6132243: They are using charged+neutral. * Release 1.5.1 2011-05-22 Andy Buckley * Adding plots of stable and decayed PID multiplicities to MC_GENERIC (useful for sanity-checking generator setups). 
* Removing actually-unused ProjectionApplier.fhh forward declaration header. 2011-05-20 Andy Buckley * Removing import of ipython shell from rivet-rescale, having just seen it throw a multi-coloured warning message on a student's lxplus Rivet session! * Adding support for the compare-histos --no-ratio flag when using rivet-mkhtml. Adding --rel-ratio, --linear, etc. is an exercise for the enthusiast ;-) 2011-05-10 Andy Buckley * Internal minor changes to the ProjectionHandler and ProjectionApplier interfaces, in particular changing the ProjectionHandler::create() function to be called getInstance and to return a reference rather than a pointer. The reference change is to make way for an improved singleton implementation, which cannot yet be used due to a bug in projection memory management. The code of the improved singleton is available, but commented out, in ProjectionManager.hh to allow for easier migration and to avoid branching. 2011-05-08 Andy Buckley * Extending flat2aida to be able to read from and write to stdin/out as for aida2flat, and also eliminating the internal histo parsing representation in favour of the one in lighthisto. lighthisto's fromFlat also needed a bit of an overhaul: it has been extended to parse each histo's chunk of text (including BEGIN and END lines) in fromFlatHisto, and for fromFlat to parse a collection of histos from a file, in keeping with the behaviour of fromDPS/fromAIDA. Merging into Professor is now needed. * Extending aida2flat to have a better usage message, to accept input from stdin for command chaining via pipes, and to be a bit more sensibly internally structured (although it also now has to hold all histos in memory before writing out -- that shouldn't be a problem for anything other than truly huge histo files). 2011-05-04 Andy Buckley * compare-histos: If using --mc-errs style, prefer dotted and dashdotted line styles to dashed, since dashes are often too long to be distinguishable from solid lines. Even better might be to always use a solid line for MC errs style, and to add more colours. * rivet-mkhtml: use a no-mc-errors drawing style by default, to match the behaviour of compare-histos, which it calls. The --no-mc-errs option has been replaced with an --mc-errs option. 2011-05-04 Hendrik Hoeth * Ignore duplicate files in compare-histos. 2011-04-25 Andy Buckley * Adding some hadron-specific N and sumET vs. |eta| plots to MC_GENERIC. * Re-adding an explicit attempt to get the beam particles, since HepMC's IO_HERWIG seems to not always set them even though it's meant to. 2011-04-19 Hendrik Hoeth * Added ATLAS_2011_S9002537 W asymmetry analysis 2011-04-14 Hendrik Hoeth * deltaR, deltaPhi, deltaEta now available in all combinations of FourVector, FourMomentum, Vector3, doubles. They also accept jets and particles as arguments now. 2011-04-13 David Grellscheid * added ATLAS 8983313: 0-lepton BSM 2011-04-01 Andy Buckley * bin/rivet-mkanalysis: Don't try to download SPIRES or HepData info if it's not a standard analysis (i.e. if the SPIRES ID is not known), and make the default .info file validly parseable by YAML, which was an unfortunate gotcha for anyone writing a first analysis. 2011-03-31 Andy Buckley * bin/compare-histos: Write more appropriate ratio plot labels when not comparing to data, and use the default make-plots labels when comparing to data. 
* bin/rivet-mkhtml: Adding a timestamp to the generated pages, and a -t/--title option to allow setting the main HTML page title on the command line: otherwise it becomes impossible to tell these pages apart when you have a lot of them, except by URL! 2011-03-24 Andy Buckley * bin/aida2flat: Adding a -M option to *exclude* histograms whose paths match a regex. Writing a negative lookahead regex with positive matching was far too awkward! 2011-03-18 Leif Lonnblad * src/Core/AnalysisHandler.cc (AnalysisHandler::removeAnalysis): Fixed strange shared pointer assignment that caused seg-fault. 2011-03-13 Hendrik Hoeth * filling of functions works now in a more intuitive way (I hope). 2011-03-09 Andy Buckley * Version 1.5.0 release! 2011-03-08 Andy Buckley * Adding some extra checks for external packages in make-plots. 2011-03-07 Andy Buckley * Changing the accuracy of the beam energy checking to 1%, to make the UI a bit more forgiving. It's still best to specify exactly the right energy of course! 2011-03-01 Andy Buckley * Adding --no-plottitle to compare-histos (+ completion). * Fixing segfaults in UA1_1990_S2044935 and UA5_1982_S875503. * Bump ABI version numbers for 1.5.0 release. * Use AnalysisInfo for storage of the NeedsCrossSection analysis flag. * Allow field setting in AnalysisInfo. 2011-02-27 Hendrik Hoeth * Support LineStyle=dashdotted in make-plots * New command line option --style for compare-histos. Options are "default", "bw" and "talk". * cleaner uninstall 2011-02-26 Andy Buckley * Changing internal storage and return type of Particle::pdgId() to PdgId, and adding Particle::energy(). * Renaming Analysis::energies() as Analysis::requiredEnergies(). * Adding beam energies into beam consistency checking: Analysis::isCompatible methods now also require the beam energies to be provided. * Removing long-deprecated AnalysisHandler::init() constructor and AnalysisHandler::removeIncompatibleAnalyses() methods. 2011-02-25 Andy Buckley * Adding --disable-obsolete, which takes its value from the value of --disable-preliminary by default. * Replacing RivetUnvalidated and RivetPreliminary plugin libraries with optionally-configured analysis contents in the experiment-specific plugin libraries. This avoids issues with making libraries rebuild consistently when sources were reassigned between libraries. 2011-02-24 Andy Buckley * Changing analysis plugin registration to fall back through available paths rather than have RIVET_ANALYSIS_PATH totally override the built-in paths. The first analysis hook of a given name to be found is now the one that's used: any duplicates found will be warned about but unused. getAnalysisLibPaths() now returns *all* the search paths, in keeping with the new search behaviour. 2011-02-22 Andy Buckley * Moving the definition of the MSG_* macros into the Logging.hh header. They can't be used everywhere, though, as they depend on the existence of a this->getLog() method in the call scope. This move makes them available in e.g. AnalysisHandler and other bits of framework other than projections and analyses. * Adding a gentle print-out from the Rivet AnalysisHandler if preliminary analyses are being used, and strengthening the current warning if unvalidated analyses are used. * Adding documentation about the validation "process" and the (un)validated and preliminary analysis statuses. * Adding the new RivetPreliminary analysis library, and the corresponding --disable-preliminary configure flag. 
Analyses in this library are subject to change names, histograms, reference data values, etc. between releases: make sure you check any dependences on these analyses when upgrading Rivet. * Change the Python script ref data search behaviours to use Rivet ref data by default where available, rather than requiring a -R option. Where relevant, -R is still a valid option, to avoid breaking legacy scripts, and there is a new --no-rivet-refs option to turn the default searching *off*. * Add the prepending and appending optional arguments to the path searching functions. This will make it easier to combine the search functions with user-supplied paths in Python scripts. * Make make-plots killable! * Adding Rivet version to top of run printout. * Adding Run::crossSection() and printing out the cross-section in pb at the end of a Rivet run. 2011-02-22 Hendrik Hoeth * Make lighthisto.py aware of 2D histograms * Adding published versions of the CDF_2008 leading jets and DY analyses, and marking the preliminary ones as "OBSOLETE". 2011-02-21 Andy Buckley * Adding PDF documentation for path searching and .info/.plot files, and tidying overfull lines. * Removing unneeded const declarations from various return by value path and internal binning functions. Should not affect ABI compatibility but will force recompilation of external packages using the RivetPaths.hh and Utils.hh headers. * Adding findAnalysis*File(fname) functions, to be used by Rivet scripts and external programs to find files known to Rivet according to Rivet's (newly standard) lookup rule. * Changing search path function behaviour to always return *all* search directories rather than overriding the built-in locations if the environment variables are set. 2011-02-20 Andy Buckley * Adding the ATLAS 2011 transverse jet shapes analysis. 2011-02-18 Hendrik Hoeth * Support for transparency in make-plots 2011-02-18 Frank Siegert * Added ATLAS prompt photon analysis ATLAS_2010_S8914702 2011-02-10 Hendrik Hoeth * Simple NOOP constructor for Thrust projection * Add CMS event shape analysis. Data read off the plots. We will get the final numbers when the paper is accepted by the journal. 2011-02-10 Frank Siegert * Add final version of ATLAS dijet azimuthal decorrelation 2011-02-10 Hendrik Hoeth * remove ATLAS conf note analyses for which we have final data * reshuffle histograms in ATLAS minbias analysis to match Hepdata * small pT-cut fix in ATLAS track based UE analysis 2011-01-31 Andy Buckley * Doc tweaks and adding cmp-by-|p| functions for Jets, to match those added by Hendrik for Particles. * Don't sum photons around muons in the D0 2010 Z pT analysis. 2011-01-27 Andy Buckley * Adding ATLAS 2010 min bias and underlying event analyses and data. 2011-01-23 Andy Buckley * Make make-plots write out PDF rather than PS by default. 2011-01-12 Andy Buckley * Fix several rendering and comparison bugs in rivet-mkhtml. * Allow make-plots to write into an existing directory, at the user's own risk. * Make rivet-mkhtml produce PDF-based output rather than PS by default (most people want PDF these days). Can we do the same change of default for make-plots? * Add getAnalysisPlotPaths() function, and use it in compare-histos * Use proper .info file search path function internally in AnalysisInfo::make. 2011-01-11 Andy Buckley * Clean up ATLAS dijet analysis. 2010-12-30 Andy Buckley * Adding a run timeout option, and small bug-fixes to the event timeout handling, and making first event timeout work nicely with the run timeout. 
Run timeout is intended to be used in conjunction with timed batch token expiry, of the type that likes to make 0 byte AIDA files on LCG when Grid proxies time out. 2010-12-21 Andy Buckley * Fix the cuts in the CDF 1994 colour coherence analysis. 2010-12-19 Andy Buckley * Fixing CDF midpoint cone jet algorithm default construction to have an overlap threshold of 0.5 rather than 0.75. This was recommended by the FastJet manual, and noticed while adding the ATLAS and CMS cones. * Adding ATLAS and CMS old iterative cones as "official" FastJets constructor options (they could always have been used by explicit instantiation and attachment of a Fastjet plugin object). * Removing defunct and unused ClosestJetShape projection. 2010-12-16 Andy Buckley * bin/compare-histos, pyext/lighthisto.py: Take ref paths from rivet module API rather than getting the environment by hand. * pyext/lighthisto.py: Only read .plot info from the first matching file (speeds up compare-histos). 2010-12-14 Andy Buckley * Augmenting the physics vector functionality to make FourMomentum support maths operators with the correct return type (FourMomentum rather than FourVector). 2010-12-11 Andy Buckley * Adding a --event-timeout option to control the event timeout, adding it to the completion script, and making sure that the init time check is turned OFF once successful! * Adding a 3600 second timeout for initialising an event file. If it takes longer than (or anywhere close to) this long, chances are that the event source is inactive for some reason (perhaps accidentally unspecified and stdin is not active, or the event generator has died at the other end of the pipe). The reason for not making it something shorter is that e.g. Herwig++ or Sherpa can have long initialisation times to set up the MPI handler or to run the matrix element integration. A timeout after an hour is still better than a batch job which runs for two days before you realise that you forgot to generate any events! 2010-12-10 Andy Buckley * Fixing unbooked-histo segfault in UA1_1990_S2044935 at 63 GeV. 2010-12-08 Hendrik Hoeth * Fixes in ATLAS_2010_CONF_083, declaring it validated * Added ATLAS_2010_CONF_046, only two plots are implemented. The paper will be out soon, and we don't need the other plots right now. Data is read off the plots in the note. * New option "SmoothLine" for HISTOGRAM sections in make-plots * Changed CustomTicks to CustomMajorTicks and added CustomMinorTicks in make-plots. 2010-12-07 Andy Buckley * Update the documentation to explain this latest bump to path lookup behaviours. * Various improvements to existing path lookups. In particular, the analysis lib path locations are added to the info and ref paths to avoid having to set three variables when you have all three file types in the same personal plugin directory. * Adding setAnalysisLibPaths and addAnalysisLibPath functions. rivet --analysis-path{,-append} now use these and work correctly. Hurrah! * Add --show-analyses as an alias for --show-analysis, following a comment at the ATLAS tutorial. 2010-12-07 Hendrik Hoeth * Change LegendXPos behaviour in make-plots. Now the top left corner of the legend is used as anchor point. 2010-12-03 Andy Buckley * 1.4.0 release. * Add bin-skipping to compare-histos to avoid one use of rivet-rmgaps (it's still needed for non-plotting post-processing like Professor).
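A minimal sketch of the per-particle vector-ET construction described in the 2011-09-10 MissingMomentum entry further up: transverse 2-vectors of magnitude Et are summed, rather than taking -E*sin(theta) of the summed visible 4-momentum. The helper name and container type are illustrative only:

    #include "Rivet/Particle.hh"
    #include "Rivet/Math/Vector3.hh"
    #include <vector>

    // Sketch: sum the Et-weighted transverse directions of the visible particles;
    // the missing-Et vector is then the negative of the returned sum.
    Rivet::Vector3 visibleVectorEt(const std::vector<Rivet::Particle>& visibles) {
      Rivet::Vector3 sum(0.0, 0.0, 0.0);
      for (size_t i = 0; i < visibles.size(); ++i) {
        const Rivet::FourMomentum& p = visibles[i].momentum();
        const double pt = p.pT();
        if (pt <= 0) continue;                  // no transverse direction defined
        const double et = p.Et();
        // Et along this particle's transverse direction (z component stays 0)
        sum += Rivet::Vector3(p.px()/pt * et, p.py()/pt * et, 0.0);
      }
      return sum;
    }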
2010-12-03 Hendrik Hoeth * Fix normalisation issues in UA5 and ALEPH analyses 2010-11-27 Andy Buckley * MathUtils.hh: Adding fuzzyGtrEquals and fuzzyLessEquals, and tidying up the math utils collection a bit. * CDF 1994 colour coherence analysis overhauled and correction/norm factors fixed. Moved to VALIDATED status. * Adding programmable completion for aida2flat and flat2aida. * Improvements to programmable completion using the neat _filedir completion shell function which I just discovered. 2010-11-26 Andy Buckley * Tweak to floating point inRange to use fuzzyEquals for CLOSED interval equality comparisons. * Some BibTeX generation improvements, and fixing the ATLAS dijet BibTeX key. * Resolution upgrade in PNG make-plots output. * CDF_2005_S6217184.cc, CDF_2008_S7782535.cc: Updates to use the new per-jet JetAlg interface (and some other fixes). * JetAlg.cc: Changed the interface on request to return per-jet rather than per-event jet shapes, with an extra jet index argument. * MathUtils.hh: Adding index_between(...) function, which is handy for working out which bin a value falls in, given a set of bin edges. 2010-11-25 Andy Buckley * Cmp.hh: Adding ASC/DESC (and ANTISORTED) as preferred non-EQUIVALENT enum value synonyms over misleading SORTED/UNSORTED. * Change of rapidity scheme enum name to RapScheme * Reworking JetShape a bit further: constructor args now avoid inconsistencies (it was previously possible to define incompatible range-ends and interval). Internal binning implementation also reworked to use a vector of bin edges: the bin details are available via the interface. The general jet pT cuts can be applied via the JetShape constructor. * MathUtils.hh: Adding linspace and logspace utility functions. Useful for defining binnings. * Adding more general cuts on jet pT and (pseudo)rapidity. 2010-11-11 Andy Buckley * Adding special handling of FourMomentum::mass() for computed zero-mass vectors for which mass2 can go (very slightly) negative due to numerical precision. 2010-11-10 Hendrik Hoeth * Adding ATLAS-CONF-2010-083 conference note. Data is read from plots. When I run Pythia 6 the bins close to pi/2 are higher than in the note, so I call this "unvalidated". But then ... the note doesn't specify a tune or even just a version for the generators in the plots. Not even if they used Pythia 6 or Pythia 8. Probably 6, since they mention AGILe. 2010-11-10 Andy Buckley * Adding a JetAlg::useInvisibles(bool) mechanism to allow ATLAS jet studies to include neutrinos. Anyone who chooses to use this mechanism had better be careful to remove hard neutrinos manually in the provided FinalState object. 2010-11-09 Hendrik Hoeth * Adding ATLAS-CONF-2010-049 conference note. Data is read from plots. Fragmentation functions look good, but I can't reproduce the MC lines (or even the relative differences between them) in the jet cross-section plots. So consider those unvalidated for now. Oh, and it seems ATLAS screwed up the error bands in their ratio plots, too. They are upside-down. 2010-11-07 Hendrik Hoeth * Adding ATLAS-CONF-2010-081 conference note. Data is read from plots. 2010-11-06 Andy Buckley * Deprecating the old JetShape projection and renaming to ClosestJetShape: the algorithm has a tenuous relationship with that actually used in the CDF (and ATLAS) jet shape analyses. CDF analyses to be migrated to the new JetShape projection... and some of that projection's features, design elements, etc. 
to be finished off: we may as well take this opportunity to clear up what was one of our nastiest pieces of code. 2010-11-05 Hendrik Hoeth * Adding ATLAS-CONF-2010-031 conference note. Data is read from plots. 2010-10-29 Andy Buckley * Making rivet-buildplugin use the same C++ compiler and CXXFLAGS variable as used for the Rivet system build. * Fixing NeutralFinalState projection to, erm, actually select neutral particles (by Hendrik). * Allow passing a general FinalState reference to the JetShape projection, rather than requiring a VetoedFS. 2010-10-07 Andy Buckley * Adding a --with-root flag to rivet-buildplugin to add root-config --libs flags to the plugin build command. 2010-09-24 Andy Buckley * Releasing as Rivet 1.3.0. * Bundling underscore.sty to fix problems with running make-plots on dat files generated by compare-histos from AIDA files with underscores in their names. 2010-09-16 Andy Buckley * Fix error in N_effective definition for weighted profile errors. 2010-08-18 Andy Buckley * Adding MC_GENERIC analysis. NB. Frank Siegert also added MC_HJETS. 2010-08-03 Andy Buckley * Fixing compare-histos treatment of what is now a ref file, and speeding things up... again. What a mess! 2010-08-02 Andy Buckley * Adding rivet-nopy: a super-simple Rivet C++ command line interface which avoids Python to make profiling and debugging easier. * Adding graceful exception handling to the AnalysisHandler event loop methods. * Changing compare-histos behaviour to always show plots for which there is at least one MC histo. The default behaviour should now be the correct one in 99% of use cases. 2010-07-30 Andy Buckley * Merging in a fix for shared_ptrs not being compared for insertion into a set based on raw pointer value. 2010-07-16 Andy Buckley * Adding an explicit library dependency declaration on libHepMC, and hence removing the -lHepMC from the rivet-config --libs output. 2010-07-14 Andy Buckley * Adding a manual section on use of Rivet (and AGILe) as libraries, and how to use the -config scripts to aid compilation. * FastJets projection now allows setting of a jet area definition, plus a hacky mapping for getting the area-enabled cluster sequence. Requested by Pavel Starovoitov & Paolo Francavilla. * Lots of script updates in last two weeks! 2010-06-30 Andy Buckley * Minimising amount of Log class mapped into SWIG. * Making Python ext build checks fail with error rather than warning if it has been requested (or, rather, not explicitly disabled). 2010-06-28 Andy Buckley * Converting rivet Python module to be a package, with the dlopen flag setting etc. done around the SWIG generated core wrapper module (rivet.rivetwrap). * Requiring Python >= 2.4.0 in rivet scripts (and adding a Python version checker function to rivet module) * Adding --epspng option to make-plots (and converting to use subprocess.Popen). 2010-06-27 Andy Buckley * Converting JADE_OPAL analysis to use the fastjet exclusive_ymerge_*max* function, rather than just exclusive_ymerge: everything looks good now. It seems that fastjet >= 2.4.2 is needed for this to work properly. 2010-06-24 Andy Buckley * Making rivet-buildplugin look in its own bin directory when trying to find rivet-config. 2010-06-23 Andy Buckley * Adding protection and warning about numerical precision issues in jet mass calculation/histogramming to the MC_JetAnalysis analysis. * Numerical precision improvement in calculation of Vector4::mass2. 
* Adding relative scale ratio plot flag to compare-histos * Extended command completion to rivet-config, compare-histos, and make-plots. * Providing protected log messaging macros, MSG_{TRACE,DEBUG,INFO,WARNING,ERROR} cf. Athena. * Adding environment-aware functions for Rivet search path list access. 2010-06-21 Andy Buckley * Using .info file beam ID and energy info in HTML and LaTeX documentation. * Using .info file beam ID and energy info in command-line printout. * Fixing a couple of references to temporary variables in the analysis beam info, which had been introduced during refactoring: have reinstated reference-type returns as the more efficient solution. This should not affect API compatibility. * Making SWIG configure-time check include testing for incompatibilities with the C++ compiler (re. the recurring _const_ char* literals issue). * Various tweaks to scripts: make-plots and compare-histos processes are now renamed (on Linux), rivet-config is avoided when computing the Rivet version, and RIVET_REF_PATH is also set using the rivet --analysis-path* flags. compare-histos now uses multiple ref data paths for .aida file globbing. * Hendrik changed VetoedFinalState comparison to always return UNDEFINED if vetoing on the results of other FS projections is being used. This is the only simple way to avoid problems emanating from the remainingFinalState thing. 2010-06-19 Andy Buckley * Adding --analysis-path and --analysis-path-append command-line flags to the rivet script, as a "persistent" way to set or extend the RIVET_ANALYSIS_PATH variable. * Changing -Q/-V script verbosity arguments to more standard -q/-v, after Hendrik moaned about it ;) * Small fix to TinyXML operator precedence: removes a warning, and I think fixes a small bug. * Adding plotinfo entries for new jet rapidity and jet mass plots in MC_JetAnalysis derivatives. * Moving MC_JetAnalysis base class into a new libRivetAnalysisTools library, with analysis base class and helper headers to be stored in the reinstated Rivet/Analyses include directory. 2010-06-08 Andy Buckley * Removing check for CEDARSTD #define guard, since we no longer compile against AGILe and don't have to be careful about duplication. * Moving crappy closest approach and decay significance functions from Utils into SVertex, which is the only place they have ever been used (and is itself almost entirely pointless). * Overhauling particle ID <-> name system to clear up ambiguities between enums, ints, particles and beams. There are no more enums, although the names are still available as const static ints, and names are now obtained via a singleton class which wraps an STL map for name/ID lookups in both directions. 2010-05-18 Hendrik Hoeth * Fixing factor-of-2 bug in the error calculation when scaling histograms. * Fixing D0_2001_S4674421 analysis. 2010-05-11 Andy Buckley * Replacing TotalVisibleMomentum with MissingMomentum in analyses and WFinder. Using vector ET rather than scalar ET in some places. 2010-05-07 Andy Buckley * Revamping the AnalysisHandler constructor and data writing, with some LWH/AIDA mangling to bypass the stupid AIDA idea of having to specify the sole output file and format when making the data tree. Preferred AnalysisHandler constructor now takes only one arg -- the runname -- and there is a new AH.writeData(outfile) method to replace AH.commitData(). Doing this now to begin migration to more flexible histogramming in the long term. 2010-04-21 Hendrik Hoeth * Fixing LaTeX problems (make-plots) on ancient machines, like lxplus.
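As a rough illustration of the MathUtils helpers mentioned in the 2010-11-25/27 entries further up (linspace for defining binnings, index_between for locating the bin a value falls in, given a set of bin edges), a minimal sketch follows; the real Rivet signatures may differ.

    // Sketch of linspace/index_between-style helpers; not the actual Rivet code.
    #include <cstddef>
    #include <vector>

    // nbins+1 evenly spaced edges spanning [start, end] -- useful for defining binnings.
    std::vector<double> linspaceSketch(size_t nbins, double start, double end) {
      std::vector<double> edges;
      for (size_t i = 0; i <= nbins; ++i)
        edges.push_back(start + i * (end - start) / nbins);
      return edges;
    }

    // Which bin does val fall in, given ascending bin edges? Returns -1 if out of range.
    int indexBetweenSketch(double val, const std::vector<double>& edges) {
      for (size_t i = 0; i + 1 < edges.size(); ++i)
        if (val >= edges[i] && val < edges[i + 1]) return static_cast<int>(i);
      return -1;
    }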
2010-04-29 Andy Buckley * Fixing (I hope!) the treatment of weighted profile bin errors in LWH. 2010-04-21 Andy Buckley * Removing defunct and unused KtJets and Configuration classes. * Hiding some internal details from Doxygen. * Add @brief Doxygen comments to all analyses, projections and core classes which were missing them. 2010-04-21 Hendrik Hoeth * remove obsolete reference to InitialQuarks from DELPHI_2002 * fix normalisation in CDF_2000_S4155203 2010-04-20 Hendrik Hoeth * bin/make-plots: real support for 2-dim histograms plotted as colormaps, updated the documentation accordingly. * fix misleading help comment in configure.ac 2010-04-08 Andy Buckley * bin/root2flat: Adding this little helper script, minimally modified from one which Holger Schulz made for internal use in ATLAS. 2010-04-05 Andy Buckley * Using spiresbib in rivet-mkanalysis: analysis templates made with rivet-mkanalysis will now contain a SPIRES-dumped BibTeX key and entry if possible! * Adding BibKey and BibTeX entries to analysis metadata files, and updating doc build to use them rather than the time-consuming SPIRES screen-scraping. Added SPIRES BibTeX dumps to all analysis metadata using new (uninstalled & unpackaged) doc/get-spires-data script hack. * Updating metadata files to add Energies, Beams and PtCuts entries to all of them. * Adding ToDo, NeedsCrossSection, and better treatment of Beams and Energies entries in metadata files and in AnalysisInfo and Analysis interfaces. 2010-04-03 Andy Buckley * Frank Siegert: Update of rivet-mkhtml to conform to improved compare-histos. * Frank Siegert: LWH output in precision-8 scientific notation, to solve a binning precision problem... the first time we've noticed a problem! * Improved treatment of data/reference datasets and labels in compare-histos. * Rewrite of rivet-mkanalysis in Python to make way for neat additions. * Improving SWIG tests, since once again the user's build system must include SWIG (no test to check that it's a 'good SWIG', since the meaning of that depends on which compiler is being used and we hope that the user system is consistent... evidence from Finkified Macs and bloody SLC5 notwithstanding). 2010-03-23 Andy Buckley * Tag as patch release 1.2.1. 2010-03-22 Andy Buckley * General tidying of return arguments and intentionally unused parameters to keep -Wextra happy (some complaints remain from TinyXML, FastJet, and HepMC). * Some extra bug fixes: in FastJets projection with explicit plugin argument, removing muon veto cut on FoxWolframMoments. * Adding UNUSED macro to help with places where compiler warnings can't be helped. * Turning on -Wextra warnings, and fixing some violations. 2010-03-21 Andy Buckley * Adding MissingMomentum projection, as replacement for ~all uses of now-deprecated TotalVisibleMomentum projection. * Fixing bug with TotalVisibleMomentum projection usage in MC_SUSY analysis. * Frank Siegert fixed major bug in pTmin param passing to FastJets projection. D'oh: requires patch release. 2010-03-02 Andy Buckley * Tagging for 1.2.0 release... at last! 2010-03-01 Andy Buckley * Updates to manual, manual generation scripts, analysis info etc. * Add HepData URL to metadata print-out with rivet --show-analysis * Fix average Et plot in UA1 analysis to only apply to the tracker acceptance (but to include neutral particle contributions, i.e. the region of the calorimeter in the tracker acceptance). * Use Et rather than pT in filling the scalar Et measure in TotalVisibleMomentum.
* Fixes to UA1 normalisation (which is rather funny in the paper). 2010-02-26 Andy Buckley * Update WFinder to not place cuts and other restrictions on the neutrino. 2010-02-11 Andy Buckley * Change analysis loader behaviour to use ONLY RIVET_ANALYSIS_PATH locations if set, otherwise use ONLY the standard Rivet analysis install path. Should only impact users of personal plugin analyses, who should now explicitly set RIVET_ANALYSIS_PATH to load their analysis... and who can now create personal versions of standard analyses without getting an error message about duplicate loading. 2010-01-15 Andy Buckley * Add tests for "stable" heavy flavour hadrons in jets (rather than just testing for c/b hadrons in the ancestor lists of stable jet constituents) 2009-12-23 Hendrik Hoeth * New option "RatioPlotMode=deviation" in make-plots. 2009-12-14 Hendrik Hoeth * New option "MainPlot" in make-plots. For people who only want the ratio plot and nothing else. * New option "ConnectGaps" in make-plots. Set to 1 if you want to connect gaps in histograms with a line when ErrorBars=0. Works both in PLOT and in HISTOGRAM sections. * Eliminated global variables for coordinates in make-plots and enabled multithreading. 2009-12-14 Andy Buckley * AnalysisHandler::execute now calls AnalysisHandler::init(event) if it has not yet been initialised. * Adding more beam configuration features to Beam and AnalysisHandler: the setRunBeams(...) methods on the latter now allow a beam configuration for the run to be specified without using the Run class. 2009-12-11 Andy Buckley * Removing use of PVertex from a few remaining analyses. Still used by SVertex, which is itself hardly used and could maybe be removed... 2009-12-10 Andy Buckley * Updating JADE_OPAL to do the histo booking in init(), since sqrtS() is now available at that stage. * Renaming and slightly re-engineering all MC_*_* analyses to not be collider-specific (now the Analysis::sqrtS()/beams() methods mean that histograms can be dynamically binned). * Creating RivetUnvalidated.so plugin library for unvalidated analyses. Unvalidated analyses now need to be explicitly enabled with a --enable-unvalidated flag on the configure script. * Various min bias analyses updated and validated. 2009-12-10 Hendrik Hoeth * Propagate SPECIAL and HISTOGRAM sections from .plot files through compare-histos * STAR_2006_S6860818: vs particle mass, validate analysis 2009-12-04 Andy Buckley * Use scaling rather than normalising in DELPHI_1996: this is generally desirable, since normalizing to 1 for 1/sig dsig/dx observables isn't correct if any events fall outside the histo bounds. * Many fixes to OPAL_2004. * Improved Hemispheres interface to remove unnecessary consts on returned doubles, and to also return non-squared versions of (scaled) hemisphere masses. * Add "make pyclean" make target at the top level to make it easier for developers to clean their Python module build when the API is extended. * Identify use of unvalidated analyses with a warning message at runtime. * Providing Analysis::sqrtS() and Analysis::beams(), and making sure they're available by the time the init methods are called. 2009-12-02 Andy Buckley * Adding passing of first event sqrt(s) and beams to analysis handler. * Restructuring running to only use one HepMC input file (no-one was using multiple ones, right?) and to break down the Run class to cleanly separate the init and event loop phases. End of file is now neater. 2009-12-01 Andy Buckley * Adding parsing of beam types and pairs of energies from YAML.
2009-12-01 Hendrik Hoeth * Fixing trigger efficiency in CDF_2009_S8233977 2009-11-30 Andy Buckley * Using shared pointers to make I/O object memory management neater and less error-prone. * Adding crossSectionPerEvent() method [== crossSection()/sumOfWeights()] to Analysis. Useful for histogram scaling since numerator of sumW_passed/sumW_total (to calculate pass-cuts xsec) is cancelled by dividing histo by sumW_passed. * Clean-up of Particle class and provision of inline PID:: functions which take a Particle as an argument to avoid having to explicitly call the Particle::pdgId() method. 2009-11-30 Hendrik Hoeth * Fixing division by zero in Profile1D bin errors for bins with just a single entry. 2009-11-24 Hendrik Hoeth * First working version of STAR_2006_S6860818 2009-11-23 Hendrik Hoeth * Adding missing CDF_2001_S4751469 plots to uemerge * New "ShowZero" option in make-plots * Improving lots of plot defaults * Fixing typos / non-existing bins in CDF_2001_S4751469 and CDF_2004_S5839831 reference data 2009-11-19 Hendrik Hoeth * Fixing our compare() for doubles. 2009-11-17 Hendrik Hoeth * Zeroth version of STAR_2006_S6860818 analysis (identified strange particles). Not working yet for unstable particles. 2009-11-11 Andy Buckley * Adding separate jet-oriented and photon-oriented observables to MC PHOTONJETUE analysis. * Bug fix in MC leading jets analysis, and general tidying of leading jet analyses to insert units, etc. (should not affect any current results) 2009-11-10 Hendrik Hoeth * Fixing last issues in STAR_2006_S6500200 and setting it to VALIDATED. * Normalise STAR_2006_S6870392 to cross-section 2009-11-09 Andy Buckley * Overhaul of jet caching and ParticleBase interface. * Adding lists of analyses' histograms (obtained by scanning the plot info files) to the LaTeX documentation. 2009-11-07 Andy Buckley * Adding checking system to ensure that Projections aren't registered before the init phase of analyses. * Now that the ProjHandler isn't full of defunct pointers (which tend to coincidentally point to *new* Projection pointers rather than undefined memory, hence it wasn't noticed until recently!), use of a duplicate projection name is now banned with a helpful message at runtime. * (Huge) overhaul of ProjectionHandler system to use shared_ptr: projections are now deleted much more efficiently, naturally cleaning themselves out of the central repository as they go out of scope. 2009-11-06 Andy Buckley * Adding Cmp specialisation, using fuzzyEquals(). 2009-11-05 Hendrik Hoeth * Fixing histogram division code. 2009-11-04 Hendrik Hoeth * New analysis STAR_2006_S6500200 (pion and proton pT spectra in pp collisions at 200 GeV). It is still unclear if they used a cut in rapidity or pseudorapidity, thus the analysis is declared "UNDER DEVELOPMENT" and "DO NOT USE". * Fixing compare() in NeutralFinalState and MergedFinalState 2009-11-04 Andy Buckley * Adding consistency checking on beam ID and sqrt(s) vs. those from first event. 2009-11-03 Andy Buckley * Adding more assertion checks to linear algebra testing. 2009-11-02 Hendrik Hoeth * Fixing normalisation issue with stacked histograms in make-plots. 2009-10-30 Hendrik Hoeth * CDF_2009_S8233977: Updating data and axes labels to match final paper. Normalise to cross-section instead of data. 2009-10-23 Andy Buckley * Fixing Cheese-3 plot in CDF 2004... at last!
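The crossSectionPerEvent() entry above (2009-11-30) relies on a small piece of arithmetic which the following hedged sketch spells out; the function and variable names here are illustrative only.

    // Illustrative only: why scaling by crossSectionPerEvent() == sigma_total/sumW_total
    // gives a cross-section-normalised histogram without needing sumW_passed.
    // A bin whose fills sum to binSumW (a share of sumW_passed) represents
    //   sigma_total * binSumW / sumW_total,
    // i.e. the sumW_passed factor in the pass-cuts cross-section cancels against
    // the division of the histogram by sumW_passed.
    double binCrossSection(double binSumW,        // sum of weights in one histogram bin
                           double crossSection,   // total generator cross-section
                           double sumOfWeights) { // total sum of event weights in the run
      const double crossSectionPerEvent = crossSection / sumOfWeights;
      return binSumW * crossSectionPerEvent;
    }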
2009-10-23 Hendrik Hoeth * Fix muon veto in CDF_1994_S2952106, CDF_2005_S6217184, CDF_2008_S7782535, and D0_2004_S5992206 2009-10-19 Andy Buckley * Adding analysis info files for MC SUSY and PHOTONJETUE analyses. * Adding MC UE analysis in photon+jet events. 2009-10-19 Hendrik Hoeth * Adding new NeutralFinalState projection. Note that this final state takes E_T instead of p_T as argument (makes more sense for neutral particles). The compare() method does not yet work as expected (E_T comparison still missing). * Adding new MergedFinalState projection. This merges two final states, removing duplicate particles. Duplicates are identified by looking at the genParticle(), so users need to take care of any manually added particles themselves. * Fixing most open issues with the STAR_2009_UE_HELEN analysis. There is only one question left, regarding the away region. * Set the default split-merge value for SISCone in our FastJets projection to the recommended (but not Fastjet-default!) value of 0.75. 2009-10-17 Andy Buckley * Adding parsing of units in cross-sections passed to the "-x" flag, i.e. "-x 101 mb" is parsed internally into 1.01e11 pb. 2009-10-16 Hendrik Hoeth * Disabling DELPHI_2003_WUD_03_11 in the Makefiles, since I don't trust the data. * Getting STAR_2009_UE_HELEN to work. 2009-10-04 Andy Buckley * Adding triggers and other tidying to (still unvalidated) UA1_1990 analysis. * Fixing definition of UA5 trigger to not be intrinsically different for pp and ppbar: this is corrected for (although it takes some reading to work this out) in the 1982 paper, which I think is the only one to compare the two modes. * Moving projection setup and registration into init() method for remaining analyses. * Adding trigger implementations as projections for CDF Runs 0 & 1, and for UA5. 2009-10-01 Andy Buckley * Moving projection setup and registration into init() method for analyses from ALEPH, CDF and the MC_ group. * Adding generic SUSY validation analysis, based on plots used in ATLAS Herwig++ validation. * Adding sorted particle accessors to FinalState (cf. JetAlg). 2009-09-29 Andy Buckley * Adding optional use of args as regex match expressions with -l/--list-analyses. 2009-09-03 Andy Buckley * Passing GSL include path to compiler, since its absence was breaking builds on systems with no GSL installation in a standard location (such as SLC5, for some mysterious reason!) * Removing lib extension passing to compiler from the configure script, because Macs and Linux now both use .so extensions for the plugin analysis modules. 2009-09-02 Andy Buckley * Adding analysis info file path search with RIVET_DATA_PATH variable (and using this to fix doc build.) * Improvements to AnalysisLoader path search. * Moving analysis sources back into single directory, after a proletarian uprising ;) 2009-09-01 Andy Buckley * Adding WFinder and WAnalysis, based on Z proj and analysis, with some tidying of the Z code. * ClusteredPhotons now uses an IdentifiedFS to pick the photons to be looped over, and only clusters photons around *charged* signal particles. 2009-08-31 Andy Buckley * Splitting analyses by directory, to make it easier to disable building of particular analysis group plugin libs. * Removing/merging headers for all analyses except for the special MC_JetAnalysis base class. * Exit with an error message if addProjection is used twice from the same parent with distinct projections.
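The unit handling described for the "-x" flag above (2009-10-17) boils down to a conversion table into picobarns, e.g. 101 mb = 101 * 1e9 pb = 1.01e11 pb; a minimal sketch follows, with the accepted unit set assumed for illustration.

    // Sketch of cross-section unit parsing into picobarns; the unit set is an assumption.
    #include <map>
    #include <stdexcept>
    #include <string>

    double toPicobarns(double value, const std::string& unit) {
      static const std::map<std::string, double> toPb = {
        {"fb", 1e-3}, {"pb", 1.0}, {"nb", 1e3}, {"ub", 1e6}, {"mb", 1e9}, {"b", 1e12}
      };
      const std::map<std::string, double>::const_iterator it = toPb.find(unit);
      if (it == toPb.end()) throw std::runtime_error("Unknown cross-section unit: " + unit);
      return value * it->second;   // e.g. toPicobarns(101, "mb") == 1.01e11
    }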
2009-08-28 Andy Buckley * Changed naming convention for analysis plugin libraries, since the loader has changed so much: they must now *start* with the word "Rivet" (i.e. no lib prefix). * Split standard plugin analyses into several plugin libraries: these will eventually move into separate subdirs for extra build convenience. * Started merging analysis headers into the source files, now that we can (the plugin hooks previously forbade this). * Replacement of analysis loader system with a new one based on ideas from ThePEG, which uses dlopen-time instantiation of templated global variables to reduce boilerplate plugin hooks to one line in analyses. 2009-07-14 Frank Siegert * Replacing in-source histo-booking metadata with .plot files. 2009-07-14 Andy Buckley * Making Python wrapper files copy into place based on bundled versions for each active HepMC interface (2.3, 2.4 & 2.5), using a new HepMC version detector test in configure. * Adding YAML metadata files and parser, removing same metadata from the analysis classes' source headers. 2009-07-07 Andy Buckley * Adding Jet::hadronicEnergy() * Adding VisibleFinalState and automatically using it in JetAlg projections. * Adding YAML parser for new metadata (and eventually ref data) files. 2009-07-02 Andy Buckley * Adding Jet::neutralEnergy() (and Jet::totalEnergy() for convenience/symmetry). 2009-06-25 Andy Buckley * Tidying and small efficiency improvements in CDF_2008_S7541902 W+jets analysis (remove unneeded second stage of jet storing, sorting the jets twice, using foreach, etc.). 2009-06-24 Andy Buckley * Fixing Jet's containsBottom and containsCharm methods, since B hadrons are not necessarily to be found in the final state. Discovered at the same time that HepMC::GenParticle defines a massively unhelpful copy constructor that actually loses the tree information; it would be better to hide it entirely! * Adding RivetHepMC.hh, which defines container-type accessors to HepMC particles and vertices, making it possible to use Boost foreach and hence avoiding the usual huge boilerplate for-loops. 2009-06-11 Andy Buckley * Adding --disable-pdfmanual option, to make the bootstrap a bit more robust. * Re-enabling D0IL in FastJets: adding 10^-10 to the pTmin removes the numerical instability! * Fixing CDF_2004 min/max cone analysis to use calo jets for the leading jet Et binning. Thanks to Markus Warsinsky for (re)discovering this bug: I was sure it had been fixed. I'm optimistic that this will fix the main distributions, although Swiss Cheese "minus 3" is still likely to be broken. Early tests look okay, but it'll take more stats before we can remove the "do not trust" sign. 2009-06-10 Andy Buckley * Providing "calc" methods so that Thrust and Sphericity projections can be used as calculators without having to use the projecting/caching system. 2009-06-09 Andy Buckley * 1.1.3 release! * More doc building and SWIG robustness tweaks. 2009-06-07 Andy Buckley * Make doc build from metadata work even before the library is installed. 2009-06-07 Hendrik Hoeth * Fix phi rotation in CDF_2008_LEADINGJETS. 2009-06-07 Andy Buckley * Disabling D0 IL midpoint cone (using CDF midpoint instead), since there seems to be a crashing bug in FastJet's implementation: we can't release that way, since ~no D0 analyses will run.
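A generic illustration (not Rivet's actual interface) of the dlopen-time registration idea behind the new loader described in the 2009-08-28 entry above: a templated global object's constructor runs when the plugin library is loaded and registers a factory, so the per-analysis boilerplate shrinks to one line.

    // Generic illustration of static-registration plugin hooks; all names are hypothetical.
    #include <functional>
    #include <map>
    #include <string>

    struct AnalysisBase { virtual ~AnalysisBase() {} };

    std::map<std::string, std::function<AnalysisBase*()> >& analysisRegistry() {
      static std::map<std::string, std::function<AnalysisBase*()> > reg;
      return reg;
    }

    template <typename ANA>
    struct RegisterAnalysis {
      explicit RegisterAnalysis(const std::string& name) {
        analysisRegistry()[name] = []() -> AnalysisBase* { return new ANA(); };
      }
    };

    // In a plugin source file (hypothetical analysis class):
    //   struct MY_TEST_ANALYSIS : AnalysisBase {};
    //   static RegisterAnalysis<MY_TEST_ANALYSIS> plugin_MY_TEST_ANALYSIS("MY_TEST_ANALYSIS");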
2009-06-03 Andy Buckley * Putting SWIG-generated source files under SVN control to make life easier for people who we advise to check out the SVN head version, but who don't have a sufficiently modern copy of SWIG to generate the wrappers themselves. * Adding the --disable-analyses option, for people who just want to use Rivet as a framework for their own analyses. * Enabling HepMC cross-section reading, now that HepMC 2.5.0 has been released. 2009-05-23 Hendrik Hoeth * Using gsl-config to locate libgsl * Fix the paths for linking such that our own libraries are found before any system libraries, e.g. for the case that there is an outdated fastjet version installed on the system while we want to use our own up-to-date version. * Change dmerge to ymerge in the e+e- analyses using JADE or DURHAM from fastjet. That's what it is called in fastjet-2.4 now. 2009-05-18 Andy Buckley * Adding use of gsl-config in configure script. 2009-05-16 Andy Buckley * Removing argument to vetoEvent macro, since no weight subtraction is now needed. It's now just an annotated return, with built-in debug log message. * Adding an "open" FinalState, which is only calculated once per event, then used by all other FSes, avoiding the loop over non-status 1 particles. 2009-05-15 Andy Buckley * Removing incorrect setting of DPS x-errs in CDF_2008 jet shape analysis: the DPS autobooking already gets this bit right. * Using Jet rather than FastJet::PseudoJet where possible, as it means that the phi ranges match up nicely between Particle and the Jet object. The FastJet objects are only really needed if you want to do detailed things like look at split/merge scales for e.g. diff jet rates or "y-split" analyses. * Tidying and debugging CDF jet shape analyses and jet shape plugin... ongoing, but I think I've found at least one real bug, plus a lot of stuff that can be done a lot more nicely. * Fully removing deprecated math functions and updating affected analyses. 2009-05-14 Andy Buckley * Removing redundant rotation in DISKinematics... this was a legacy of Peter using theta rather than pi-theta in his rotation. * Adding convenience phi, rho, eta, theta, and perp,perp2 methods to the 3 and 4 vector classes. 2009-05-12 Andy Buckley * Adding event auto-rotation for events with one proton... more complete approach? 2009-05-09 Hendrik Hoeth * Renaming CDF_2008_NOTE_9337 to CDF_2009_S8233977. * Numerous small bug fixes in ALEPH_1996_S3486095. * Adding data for one of the Rick-Field-style STAR UE analyses. 2009-05-08 Andy Buckley * Adding rivet-mkanalysis script, to make generating new analysis source templates easier. 2009-05-07 Andy Buckley * Adding null vector check to Vector3::azimuthalAngle(). * Fixing definition of HCM/Breit frames in DISKinematics, and adding asserts to check that the transformation is doing what it should. 2009-05-05 Andy Buckley * Removing eta and Et cuts from CDF 2000 Z pT analysis, based on our reading of the paper, and converting most of the analysis to a call of the ZFinder projection. 2009-05-05 Hendrik Hoeth * Support non-default seed_threshold in CDF cone jet algorithms. * New analyses STAR_2006_S6870392 and STAR_2008_S7993412. In STAR_2008_S7993412 only the first distribution is filled at the moment. STAR_2006_S6870392 is normalised to data instead of the Monte Carlo cross-section, since we don't have that available in the HepMC stream yet. 2009-05-05 Andy Buckley * Changing Event wrapper to copy whole GenEvents rather than pointers, use std units if supported in HepMC, and run a placeholder function for event auto-orientation.
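A rough sketch of the "annotated return" flavour of vetoEvent described in the 2009-05-16 entry above; the macro name and logging call here are stand-ins, not Rivet's real code.

    // Hypothetical sketch of a vetoEvent-style macro: no weight bookkeeping, just a
    // debug message and a return from the analysis member function.
    #include <iostream>

    #define VETO_EVENT_SKETCH                                                   \
      do {                                                                      \
        std::cerr << "Event vetoed at " << __FILE__ << ":" << __LINE__ << "\n"; \
        return;                                                                 \
      } while (0)

    // Usage inside a void analyze(const Event&)-style method:
    //   if (jets.empty()) VETO_EVENT_SKETCH;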
2009-04-28 Andy Buckley * Removing inclusion of IsolationTools header by analyses that aren't actually using the isolation tools... which is all of them. Leaving the isolation tools in place for now, as there might still be use cases for them and there's quite a lot of code there that deserves a second chance to be used! 2009-04-24 Andy Buckley * Deleting Rivet implementations of TrackJet and D0ILConeJets: the code from these has now been incorporated into FastJet 2.4.0. * Removed all mentions of the FastJet JADE patch and the HAVE_JADE preprocessor macro. * Bug fix to D0_2008_S6879055 to ensure that cuts compare to both electron and positron momenta (was just comparing against electrons, twice, probably thanks to the miracle of cut and paste). * Converting all D0 IL Cone jets to use FastJets. Involved tidying D0_2004 jet azimuthal decorrelation analysis and D0_2008_S6879055 as part of migration away from using the getLorentzJets method, and removing the D0ILConeJets header from quite a few analyses that weren't using it at all. * Updating CDF 2001 to use FastJets in place of TrackJet, and adding axis labels to its plots. * Note that ZEUS_2001_S4815815 uses the wrong jet definition: it should be a cone but currently uses kT. * Fixing CDF_2005_S6217184 to use correct (midpoint, R=0.7) jet definition. That this was using a kT definition with R=1.0 was only made obvious when the default FastJets constructor was removed. * Removing FastJets default constructor: since there are now several good (IRC safe) jet definitions available, there is no obvious safe default and analyses should have to specify which they use. * Moving FastJets constructors into implementation file to reduce recompilation dependencies, and adding new plugins. * Ensuring that axis labels actually get written to the output data file. 2009-04-22 Andy Buckley * Adding explicit FastJet CDF jet alg overlap_threshold constructor param values, since the default value from 2.3.x is now removed in version 2.4.0. * Removing use of HepMC ThreeVector::mag method (in one place only) since this has been removed in version 2.5.0b. * Fix to hepmc.i (included by rivet.i) to ignore new HepMC 2.5.0b GenEvent stream operator. 2009-04-21 Andy Buckley * Dependency on FastJet now requires version 2.4.0 or later. Jade algorithm is now native. * Moving all analysis constructors and Projection headers from the analysis header files into their .cc implementation files, cutting header dependencies. * Removing AIDA headers: now using LWH headers only, with enhancement to use axis labels. This facility is now used by the histo booking routines, and calling the booking function versions which don't specify axis labels will result in a runtime warning. 2009-04-07 Andy Buckley * Adding $(DESTDIR) prefix to call to Python module "setup.py install" * Moving HepMC SWIG mappings into Python Rivet module for now: seems to work-around the SL type-mapping bug. 2009-04-03 Andy Buckley * Adding MC analysis for LHC UE: higher-pT replica of Tevatron 2008 leading jets study. * Adding CDF_1990 pseudorapidity analysis. * Moving CDF_2001 constructor into implementation file. * Cleaning up CDF_2008_LEADINGJETS a bit, e.g. using foreach loops. * Adding function interface for specifying axis labels in histo bookings. Currently has no effect, since AIDA doesn't seem to have a mechanism for axis labels. It really is a piece of crap. 2009-03-18 Andy Buckley * Adding docs "make upload" and other tweaks to make the doc files fit in with the Rivet website.
* Improving LaTeX docs to show email addresses in printable form and to group analyses by collider or other metadata. * Adding doc script to include run info in LaTeX docs, and to make HTML docs. * Removing WZandh projection, which wasn't generator independent and whose sole usage was already replaced by ZFinder. * Improvements to constructors of ZFinder and InvMassFS. * Changing ExampleTree to use real FS-based Z finding. 2009-03-16 Andy Buckley * Allow the -H histo file spec to give a full name if wanted. If it doesn't end in the desired extension, it will be added. * Adding --runname option (and API elements) to choose a run name to be prepended as a "top level directory" in histo paths. An empty value results in no extra TLD. 2009-03-06 Andy Buckley * Adding R=0.2 photon clustering to the electrons in the CDF 2000 Z pT analysis. 2009-03-04 Andy Buckley * Fixing use of fastjet-config to not use the user's PATH variable. * Fixing SWIG type table for HepMC object interchange. 2009-02-20 Andy Buckley * Adding use of new metadata in command line analysis querying with the rivet command, and in building the PDF Rivet manual. * Adding extended metadata methods to the Analysis interface and the Python wrapper. All standard analyses comply with this new interface. 2009-02-19 Andy Buckley * Adding usefully-scoped config headers, a Rivet::version() function which uses them, and installing the generated headers to fix "external" builds against an installed copy of Rivet. The version() function has been added to the Python wrapper. 2009-02-05 Andy Buckley * Removing ROOT dependency and linking. Woo! There's no need for this now, because the front-end accepts no histo format switch and we'll just use aida2root for output conversions. Simpler this way, and it avoids about half of our compilation bug reports from 32/64 bit ROOT build confusions. 2009-02-04 Andy Buckley * Adding automatic generation of LaTeX manual entries for the standard analyses. 2009-01-20 Andy Buckley * Removing RivetGun and TCLAP source files! 2009-01-19 Andy Buckley * Added psyco Python optimiser to rivet, make-plots and compare-histos. * bin/aida2root: Added "-" -> "_" mangling, following requests. 2009-01-17 Andy Buckley * 1.1.2 release. 2009-01-15 Andy Buckley * Converting Python build system to bundle SWIG output in tarball. 2009-01-14 Andy Buckley * Converting AIDA/LWH divide function to return a DPS so that bin width factors don't get all screwed up. Analyses adapted to use the new division operation (a DPS/DPS divide would also be nice... but can wait for YODA). 2009-01-06 Andy Buckley * bin/make-plots: Added --png option for making PNG output files, using 'convert' (after making a PDF --- it's a bit messy) * bin/make-plots: Added --eps option for output filtering through ps2eps. 2009-01-05 Andy Buckley * Python: reworking Python extension build to use distutils and newer m4 Python macros. Probably breaks distcheck but is otherwise more robust and platform independent (i.e. it should now work on Macs). 2008-12-19 Andy Buckley * make-plots: Multi-threaded make-plots and cleaned up the LaTeX building a bit (necessary to remove the implicit single global state). 2008-12-18 Andy Buckley * make-plots: Made LaTeX run in no-stop mode. * compare-histos: Updated to use a nicer labelling syntax on the command line and to successfully build MC-MC plots. 2008-12-16 Andy Buckley * Made LWH bin edge comparisons safe against numerical errors. * Added Particle comparison functions for sorting.
* Removing most bad things from ExampleTree and tidying up. Marked WZandh projection for removal. 2008-12-03 Hendrik Hoeth * Added the two missing observables to the CDF_2008_NOTE_9337 analysis, i.e. track pT and sum(ET). There is a small difference between our MC output and the MC plots of the analysis' author, we're still waiting for the author's comments. 2008-12-02 Andy Buckley * Overloading use of a std::set in the interface, since the version of SWIG on Sci Linux doesn't have a predefined mapping for STL sets. 2008-12-02 Hendrik Hoeth * Fixed uemerge. The output was seriously broken by a single line of debug information in fillAbove(). Also changed uemerge output to exponential notation. * Unified ref and mc histos in compare-histos. Histos with one bin are plotted linear. Option for disabling the ratio plot. Several fixes for labels, legends, output directories, ... * Changed rivetgun's fallback directory for parameter files to $PREFIX/share/AGILe, since that's where the steering files now are. * Running aida2flat in split mode now produces make-plots compatible dat-files for direct plotting. 2008-11-28 Andy Buckley * Replaced binreloc with an upgraded and symbol-independent copy. 2008-11-25 Andy Buckley * Added searching of $RIVET_REF_PATH for AIDA reference data files. 2008-11-24 Andy Buckley * Removing "get"s and other obfuscated syntax from ProjectionApplier (Projection and Analysis) interfaces. 2008-11-21 Andy Buckley * Using new "global" Jet and V4 sorting functors in TrackJet. Looks like there was a sorting direction problem before... * Verbose mode with --list-analyses now shows descriptions as well as analysis names. * Moved data/Rivet to data/refdata and moved data/RivetGun contents to AGILe (since generator steering is no longer a Rivet thing) * Added unchecked ratio plots to D0 Run II jet + photon analysis. * Added D0 inclusive photon analysis. * Added D0 Z rapidity analysis. * Tidied up constructor interface and projection chain implementation of InvMassFinalState. * Added ~complete set of Jet and FourMomentum sorting functors. 2008-11-20 Andy Buckley * Added IdentifiedFinalState. * Moved a lot of TrackJet and Jet code into .cc files. * Fixed a caching bug in Jet: cache flag resets should never be conditional, since they are then sensitive to initialisation errors. * Added quark enum values to ParticleName. * Rationalised JetAlg interfaces somewhat, with "size()" and "jets()" methods in the interface. * Added D0 W charge asymmetry and D0 inclusive jets analyses. 2008-11-18 Andy Buckley * Adding D0 inclusive Z pT shape analysis. * Added D0 Z + jet pT and photon + jet pT spectrum analyses. * Lots of tidying up of particle, event, particle name etc. * Now the first event is used to detect the beam type and remove incompatible analyses. 2008-11-17 Andy Buckley * Added bash completion for rivetgun. * Starting to provide stand-alone call methods on projections so they can be used without the caching infrastructure. This could also be handy for unit testing. * Adding functionality (sorting function and built-in sorting schemes) to the JetAlg interface. 2008-11-10 Hendrik Hoeth * Fix floating point number output format in aida2flat and flat2aida * Added CDF_2002_S4796047: CDF Run-I charged multiplicity distribution * Renamed CDF_2008_MINBIAS to CDF_2008_NOTE_9337, since the note is publicly available now. 2008-11-10 Hendrik Hoeth * Added DELPHI_2003_WUD_03_11: Delphi 4-jet angular distributions. There is still a problem with the analysis, so don't use it yet.
But I need to commit the code because my laptop is broken ... 2008-11-06 Andy Buckley * Code review: lots of tidying up of projections and analyses. * Fixes for compatibility with the LLVM C & C++ compiler. * Change of Particle interface to remove "get"-prefixed method names. 2008-11-05 Andy Buckley * Adding ability to query analysis metadata from the command line. * Example of a plugin analysis now in plugindemo, with a make check test to make sure that the plugin analysis is recognised by the command line "rivet" tool. * GCC 4.3 fix to mat-vec tests. 2008-11-04 Andy Buckley * Adding native logger control from Python interface. 2008-11-03 Andy Buckley * Adding bash_completion for rivet executable. 2008-10-31 Andy Buckley * Clean-up of histo titles and analysis code review. * Added momentum construction functions from FastJet PseudoJets. 2008-10-28 Andy Buckley * Auto-booking of histograms with a name, rather than the HepData ID 3-tuple is now possible. * Fix in CDF 2001 pT spectra to get the normalisations to depend on the pT_lead cutoff. 2008-10-23 Andy Buckley * rivet handles signals neatly, as for rivetgun, so that premature killing of the analysis process will still result in an analysis file. * rivet now accepts cross-section as a command line argument and, if it is missing and is required, will prompt the user for it interactively. 2008-10-22 Andy Buckley * rivet (Python interface) now can list analyses, check when adding analyses that the given names are valid, specify histo file name, and provide sensibly graded event number logging. 2008-10-20 Andy Buckley * Corrections to CDF 2004 analysis based on correspondence with Joey Huston. Min bias dbns now use whole event within |eta| < 0.7, and Cheese plots aren't filled at all if there are insufficient jets (and the correct ETlead is used). 2008-10-08 Andy Buckley * Added AnalysisHandler::commitData() method, to allow the Python interface to write out a histo file without having to know anything about the histogramming API. * Reduced SWIG interface file to just map a subset of Analysis and AnalysisHandler functionality. This will be the basis for a new command line interface. 2008-10-06 Andy Buckley * Converted FastJets plugin to use a Boost shared_pointer to the cached ClusterSequence. The nullness of the pointer is now used to indicate an empty tracks (and hence jets) set. Once FastJet natively supports empty CSeqs, we can rewrite this a bit more neatly and ditch the shared_ptr. 2008-10-02 Andy Buckley * The CDF_2004 (Acosta) data file now includes the full range of pT for the min bias data at both 630 and 1800 GeV. Previously, only the small low-pT insert plot had been entered into HepData. 2008-09-30 Andy Buckley * Lots of updates to CDF_2004 (Acosta) UE analysis, including sorting jets by E rather than Et, and factorising transverse cone code into a function so that it can be called with a random "leading jet" in min bias mode. Min bias histos are now being trial-filled just with tracks in the transverse cones, since the paper is very unclear on this. * Discovered a serious caching problem in FastJets projection when an empty tracks vector is passed from the FinalState. Unfortunately, FastJet provides no API way to solve the problem, so we'll have to report this upstream. For now, we're solving this for CDF_2004 by explicitly vetoing events with no tracks.
* Added Doxygen to the build with target "dox" * Moved detection of whether cross-section information is needed into the AnalysisHandler, with dynamic computation by scanning contained analyses. * Improved robustness of event reading to detect properly when the input file is smaller than expected. 2008-09-29 Hendrik Hoeth * New analysis: CDF_2000_S4155203 2008-09-23 Andy Buckley * rivetgun can now be built and run without AGILe. Based on a patch by Frank Siegert. 2008-09-23 Hendrik Hoeth * Some preliminary numbers for the CDF_2008_LEADINGJETS analysis (only transverse region and not all observables. But all we have now.) 2008-09-17 Andy Buckley * Breaking up the mammoth "generate" function, to make Python mapping easier, among other reasons. * Added if-zero-return-zero checking to angle mapping functions, to avoid problems where 1e-19 gets mapped on to 2 pi and then fails boundary asserts. * Added HistoHandler singleton class, which will be a central repository for holding analyses' histogram objects to be accessed via a user-chosen name. 2008-08-26 Andy Buckley * Allowing rivet-config to return combined flags. 2008-08-14 Andy Buckley * Fixed some g++ 4.3 compilation bugs, including "vector" not being a valid name for a method which returns a physics vector, since it clashes with std::vector (which we globally import). Took the opportunity to rationalise the Jet interface a bit, since "particle" was used to mean "FourMomentum", and "Particle" types required a call to "getFullParticle". I removed the "gets" at the same time, as part of our gradual migration to a coherent naming policy. 2008-08-11 Andy Buckley * Tidying of FastJets and added new data files from HepData. 2008-08-10 James Monk * FastJets now uses user_index property of fastjet::PseudoJet to reconstruct PID information in jet contents. 2008-08-07 Andy Buckley * Reworking of param file and command line parsing. Tab characters are now handled by the parser, in a way equivalent to spaces. 2008-08-06 Andy Buckley * Added extra histos and filling to Acosta analysis - all HepData histos should now be filled, depending on sqrt{s}. Also trialling use of LaTeX math mode in titles. 2008-08-05 Andy Buckley * More data files for CDF analyses (2 x 2008, 1 x 1994), and moved the RivetGun AtlasPythia6.params file to more standard fpythia-atlas.params (and added to the install list). 2008-08-04 Andy Buckley * Reduced size of available options blocks in RivetGun help text by removing "~" negating variants (which are hardly ever used in practice) and restricting beam particles to PROTON, ANTIPROTON, ELECTRON and POSITRON. * Fixed Et sorting in Acosta analysis. 2008-08-01 Andy Buckley * Added AIDA headers to the install list, since external (plugin-type) analyses need them to be present for compilation to succeed. 2008-07-29 Andy Buckley * Fixed missing ROOT compile flags for libRivet. * Added command line repetition to logging. 2008-07-29 Hendrik Hoeth * Included the missing numbers and three more observables in the CDF_2008_NOTE_9351 analysis. 2008-07-29 Andy Buckley * Fixed wrong flags on rivet-config 2008-07-28 Hendrik Hoeth * Renamed CDF_2008_DRELLYAN to CDF_2008_NOTE_9351. Updated numbers and cuts to the final version of this CDF note. 2008-07-28 Andy Buckley * Fixed polar angle calculation to use atan2. * Added "mk" prefixes and x/setX convention to math classes. 2008-07-28 Hendrik Hoeth * Fixed definition of FourMomentum::pT (it had been returning pT2) 2008-07-27 Andy Buckley * Added better tests for Boost headers.
* Added testing for -ansi, -pedantic and -Wall compiler flags. 2008-07-25 Hendrik Hoeth * updated DELPHI_2002_069_CONF_603 according to information from the author 2008-07-17 Andy Buckley * Improvements to aida2flat: now can produce one output file per histo, and there is a -g "gnuplot mode" which comments out the YODA/make_plot headers to make the format readable by gnuplot. * Import boost::assign namespace contents into the Rivet namespace --- provides very useful intuitive collection initialising functions. 2008-07-15 Andy Buckley * Fixed missing namespace in vector/matrix testing. * Removed Boost headers: now a system dependency. * Fixed polarRadius infinite loop. 2008-07-09 Andy Buckley * Fixed definitions of mapAngleMPiToPi, etc. and used them to fix the Jet::getPhi method. * Trialling use of "foreach" loop in CDF_2004: it works! Very nice. 2008-07-08 Andy Buckley * Removed accidental reference to an "FS" projection in FinalStateHCM's compare method. rivetgun -A now works again. * Added TASSO, SLD and D0_2008 reference data. The TASSO and SLD papers aren't installed or included in the tarball since there are currently no plans to implement these analyses. * Added Rivet namespacing to vector, matrix etc. classes. This required some re-writing and the opportunity was taken to move some canonical function definitions inside the classes and to improve the header structure of the Math area. 2008-07-07 Andy Buckley * Added Rivet namespace to Units.hh and Constants.hh. * Added Doxygen "@brief" flags to analyses. * Added "RIVET_" namespacing to all header guards. * Merged Giulio Lenzi's isolation/vetoing/invmass projections and D0 2008 analysis. 2008-06-23 Jon Butterworth * Modified FastJet to fix ysplit and split and filter. * Modified ExampleTree to show how to call them. 2008-06-19 Hendrik Hoeth * Added first version of the CDF_2008_DRELLYAN analysis described on http://www-cdf.fnal.gov/physics/new/qcd/abstracts/UEinDY_08.html There is a small difference between the analysis and this implementation, but it's too small to be visible. The fpythia-cdfdrellyan.params parameter file is for this analysis. * Added first version of the CDF_2008_MINBIAS analysis described on http://www-cdf.fnal.gov/physics/new/qcd/abstracts/minbias_08.html The .aida file is read from the plots on the web and will change. I'm still discussing some open questions about the analysis with the author. 2008-06-18 Jon Butterworth * Added First versions of splitJet and filterJet methods to fastjet.cc. Not yet tested, buyer beware. 2008-06-18 Andy Buckley * Added extra sorted Jets and Pseudojets methods to FastJets, and added ptmin argument to the JetAlg getJets() method, requiring a change to TrackJet. 2008-06-13 Andy Buckley * Fixed processing of "RG" parameters to ensure that invalid iterators are never used. 2008-06-10 Andy Buckley * Updated AIDA reference files, changing "/HepData" root path to "/REF". Still missing a couple of reference files due to upstream problems with the HepData records. 2008-06-09 Andy Buckley * rivetgun now handles termination signals (SIGTERM, SIGINT and SIGHUP) gracefully, finishing the event loop and finalising histograms. This means that histograms will always get written out, even if not all the requested events have been generated. 2008-06-04 Hendrik Hoeth * Added DELPHI_2002_069_CONF_603 analysis 2008-05-30 Hendrik Hoeth * Added InitialQuarks projection * Added OPAL_1998_S3780481 analysis 2008-05-29 Andy Buckley * distcheck compatibility fixes and autotools tweaks. 
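The angle-mapping entries above (the if-zero-return-zero guard of 2008-09-17, and the mapAngleMPiToPi / Jet::getPhi fix of 2008-07-09) can be pictured with a sketch like the following; the exact Rivet signatures and ranges may differ.

    // Sketch of angle-mapping helpers with the literal "if zero, return zero" guard,
    // so a value like 1e-19 can never end up sitting on a 2*pi boundary assert.
    #include <cmath>

    inline double mapAngle0To2PiSketch(double angle) {
      if (angle == 0.0) return 0.0;             // if-zero-return-zero guard
      double a = std::fmod(angle, 2 * M_PI);    // wrap into (-2pi, 2pi)
      if (a < 0) a += 2 * M_PI;                 // shift into [0, 2pi)
      return a;
    }

    inline double mapAngleMPiToPiSketch(double angle) {
      const double a = mapAngle0To2PiSketch(angle);
      return (a > M_PI) ? a - 2 * M_PI : a;     // shift into (-pi, pi]
    }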
2008-05-28 Andy Buckley * Converted FastJet to use Boost smart_ptr for its plugin handling, to solve double-delete errors stemming from the heap cloning of projections. * Added (a subset of) Boost headers, particularly the smart pointers. 2008-05-24 Andy Buckley * Added autopackage spec files. * Merged these changes into the trunk. * Added a registerClonedProjection(...) method to ProjectionHandler: this is needed so that cloned projections will have valid pointer entries in the ProjectionHandler repository. * Added clone() methods to all projections (need to use this, since the templated "new PROJ(proj)" approach to cloning can't handle object polymorphism). 2008-05-19 Andy Buckley * Moved projection-applying functions into ProjectionApplier base class (from which Projection and Analysis both derive). * Added Rivet-specific exceptions in place of std::runtime_error. * Removed unused HepML reference files. * Added error handling for requested analyses with wrong case convention / missing name. 2008-05-15 Hendrik Hoeth * New analysis PDG_Hadron_Multiplicities * flat2aida converter 2008-05-15 Andy Buckley * Removed unused mysterious Perl scripts! * Added RivetGun.HepMC logging of HepMC event details. 2008-05-14 Hendrik Hoeth * New analysis DELPHI_1995_S3137023. This analysis contains the xp spectra of Xi+- and Sigma(1385)+-. 2008-05-13 Andy Buckley * Improved logging interface: log levels are now integers (for cross-library compatibility), and level setting also applies to existing loggers. 2008-05-09 Andy Buckley * Improvements to robustness of ROOT checks. * Added --version flag on config scripts and rivetgun. 2008-05-06 Hendrik Hoeth * New UnstableFinalState projection which selects all hadrons, leptons and real photons including unstable particles. * In the DELPHI_1996_S3430090 analysis the multiplicities for pi+/pi- and pi0 are filled, using the UnstableFinalState projection. 2008-05-06 Andy Buckley * FastJets projection now protects against the case where no particles exist in the final state (where FastJet throws an exception). * AIDA file writing is now separated from the AnalysisHandler::finalize method... API users can choose what to do with the histo objects, be that writing out or further processing. 2008-04-29 Andy Buckley * Increased default tolerances in floating point comparisons as they were overly stringent and valid f.p. precision errors were being treated as significant. * Implemented remainder of Acosta UE analysis. * Added proper getEtSum() to Jet. * Added Et2() member and function to FourMomentum. * Added aida2flat conversion script. * Fixed ambiguity in TrackJet algorithm as to how the iteration continues when tracks are merged into jets in the inner loop. 2008-04-28 Andy Buckley * Merged in major "ProjectionHandler" branch. Projections are now all stored centrally in the singleton ProjectionHandler container, rather than as member pointers in projections and analyses. This also affects the applyProjection mechanism, which is now available as a templated method on Analysis and Projection. Still a few wrinkles need to be worked out. * The branch changes required a comprehensive review of all existing projections and analyses: lots of tidying up of these classes, as well as the auxiliary code like math utils, has taken place. Too much to list and track, unfortunately! 2008-03-28 Andy Buckley * Started second CDF UE analysis ("Acosta"): histograms defined. * Fixed anomalous factor of 2 in LWH conversion from Profile1D to DataPointSet.
* Added pT distribution histos to CDF 2001 UE analysis. 2008-03-26 Andy Buckley * Removed charged+neutral versions of histograms and projections from DELPHI analysis since they just duplicate the more robust charged-only measurements and aren't really of interest for tuning. 2008-03-10 Andy Buckley * Profile histograms now use error computation with proper weighting, as described here: http://en.wikipedia.org/wiki/Weighted_average 2008-02-28 Andy Buckley * Added --enable-jade flag for Professor studies with patched FastJet. * Minor fixes to LCG tag generator and gfilt m4 macros. * Fixed projection slicing issues with Field UE analysis. * Added Analysis::vetoEvent(e) function, which keeps track of the correction to the sum of weights due to event vetoing in analysis classes. 2008-02-26 Andy Buckley * Vector and derived classes now initialise to have zeroed components when the no-arg constructor is used. * Added Analysis::scale() function to scale 1D histograms. Analysis::normalize() uses it internally, and the DELPHI (A)EEC, whose histo weights are not pure event weights, is normalised using scale(h, 1/sumEventWeights). 2008-02-21 Hendrik Hoeth * Added EEC and AEEC to the DELPHI_1996_S3430090 analysis. The normalisation of these histograms is still broken (ticket #163). 2008-02-19 Hendrik Hoeth * Many fixes to the DELPHI_1996_S3430090 analysis: bugfix in the calculation of eigenvalues/eigenvectors in MatrixDiag.hh for the sphericity, rewrite of Thrust/Major/Minor, fixed scaled momentum, hemisphere masses, normalisation in single particle events, final state slicing problems in the projections for Thrust, Sphericity and Hemispheres. 2008-02-08 Andy Buckley * Applied fixes and extensions to DIS classes, based on submissions by Dan Traynor. 2008-02-06 Andy Buckley * Made projection pointers used for cut combining into const pointers. Required some redefinition of the Projection* comparison operator. * Temporarily added FinalState member to ChargedFinalState to stop projection lifetime crash. 2008-02-01 Andy Buckley * Fixed another misplaced factor of bin width in the Analysis::normalize() method. 2008-01-30 Andy Buckley * Fixed the conversion of IHistogram1D to DPS, both via the explicit Analysis::normalize() method and the implicit AnalysisHandler::treeNormalize() route. The root of the problem is the AIDA choice of the word "height" to represent the sum of weights in a bin: i.e. the bin width is not taken into account either in computing bin height or error. 2008-01-22 Andy Buckley * Beam projection now uses HepMC GenEvent::beam_particles() method to get the beam particles. This is more portable and robust for C++ generators, and equivalent to the existing "first two" method for Fortran generators. 2008-01-17 Andy Buckley * Added angle range fix to pseudorapidity function (thanks to Piergiulio Lenzi). 2008-01-10 Andy Buckley * Changed autobooking plot codes to use zero-padding (gets the order right in JAS, file browser, ROOT etc.). Also changed the 'ds' part to 'd' for consistency. HepData's AIDA output has been correspondingly updated, as have the bundled data files. 2008-01-04 Andy Buckley * Tidied up JetShape projection a bit, including making the constructor params const references. This seems to have sorted the runtime segfault in the CDF_2005 analysis.
* Added caching of the analysis bin edges from the AIDA file - each analysis object will now only read its reference file once, which massively speeds up the rivetgun startup time for analyses with large numbers of autobooked histos (e.g. the DELPHI_1996_S3430090 analysis). 2008-01-02 Andy Buckley * CDF_2001_S4751469 now uses the LossyFinalState projection, with an 8% loss rate. * Added LossyFinalState and HadronicFinalState, and fixed a "polarity" bug in the charged final state projection (it was keeping only the *uncharged* particles). * Now using isatty(1) to determine whether or not color escapes can be used. Also removed --color argument, since it can't have an effect (TCLAP doesn't do position-based flag toggling). * Made Python extension build optional (and disabled by default). 2008-01-01 Andy Buckley * Removed some unwanted DEBUG statements, and lowered the level of some infrastructure DEBUGs to TRACE level. * Added bash color escapes to the logger system. 2007-12-21 Leif Lonnblad * include/LWH/ManagedObject.h: Fixed infinite loop in encodeForXML cf. ticket #135. 2007-12-20 Andy Buckley * Removed HepPID, HepPDT and Boost dependencies. * Fixed XML entity encoding in LWH. Updated CDF_2007_S7057202 analysis to not do its own XML encoding of titles. 2007-12-19 Andy Buckley * Changed units header to set GeV = 1 (HepMC convention) and using units in CDF UE analysis. 2007-12-15 Andy Buckley * Introduced analysis metadata methods for all analyses (and made them part of the Analysis interface). 2007-12-11 Andy Buckley * Added JetAlg base projection for TrackJet, FastJet etc. 2007-12-06 Andy Buckley * Added checking for Boost library, and the standard Boost test program for shared_ptr. * Got basic Python interface running - required some tweaking since Python and Rivet's uses of dlopen collide (another RTLD_GLOBAL issue - see http://muttley.hates-software.com/2006/01/25/c37456e6.html ) 2007-12-05 Andy Buckley * Replaced all use of KtJets projection with FastJets projection. KtJets projection disabled but left undeleted for now. CLHEP and KtJet libraries removed from configure searches and Makefile flags. 2007-12-04 Andy Buckley * Param file loading now falls back to the share/RivetGun directory if a local file can't be found and the provided name has no directory separators in it. * Converted TrackJet projection to update the jet centroid with each particle added, using pT weighting in the eta and phi averaging. 2007-12-03 Andy Buckley * Merged all command line handling functions into one large parse function, since only one executable now needs them. This removes a few awkward memory leaks. * Removed rivet executable - HepMC file reading functionality will move into rivetgun. * Now using HepMC IO_GenEvent format (IO_Ascii and IO_ExtendedAscii are deprecated). Now requires HepMC >= 2.3.0. * Added forward declarations of GSL diagonalisation routines, eliminating need for GSL headers to be installed on build machine. 2007-11-27 Andy Buckley * Removed charge differentiation from Multiplicity projection (use CFS proj) and updated ExampleAnalysis to produce more useful numbers. * Introduced binreloc for runtime path determination. * Fixed several bugs in FinalState, ChargedFinalState, TrackJet and Field analysis. * Completed move to new analysis naming scheme. 2007-11-26 Andy Buckley * Removed conditional HAVE_FASTJET bits: FastJet is now compulsory. * Merging appropriate RivetGun parts into Rivet. RivetGun currently broken.
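The weighted profile-bin errors referred to in the 2008-03-10 entry above (and later touched by the N_effective fix of 2010-09-16 further up) follow, in one common convention which is assumed here rather than taken from the Rivet source, from the weighted mean and an effective entry count:

    % Assumed convention for a profile bin with fill values y_i and weights w_i:
    \bar{y} = \frac{\sum_i w_i y_i}{\sum_i w_i}, \qquad
    N_\mathrm{eff} = \frac{\left(\sum_i w_i\right)^2}{\sum_i w_i^2}, \qquad
    \sigma_{\bar{y}} = \frac{1}{\sqrt{N_\mathrm{eff}}}
                       \sqrt{\frac{\sum_i w_i \,(y_i - \bar{y})^2}{\sum_i w_i}}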
2007-11-23 Andy Buckley * Renaming analyses to Spires-ID scheme: currently of form S, to become __. 2007-11-20 Andy Buckley * Merged replacement vectors, matrices and boosts into trunk. 2007-11-15 Leif Lonnblad * src/Analysis.cc, include/Rivet/Analysis.hh: Introduced normalize function. See ticket #126. 2007-10-31 Andy Buckley * Tagging as 1.0b2 for HERA-LHC meeting. 2007-10-25 Andy Buckley * Added AxesDefinition base interface to Sphericity and Thrust, used by Hemispheres. * Exposed BinaryCut class, improved its interface and fixed a few bugs. It's now used by VetoedFinalState for momentum cuts. * Removed extra output from autobooking AIDA reader. * Added automatic DPS booking. 2007-10-12 Andy Buckley * Improved a few features of the build system. 2007-10-09 James Monk * Fixed dylib dlopen on Mac OS X. 2007-10-05 Andy Buckley * Added new reference files. 2007-10-03 Andy Buckley * Fixed bug in configure.ac which led to explicit CXX setting being ignored. * Including Logging.hh in Projection.hh, hence new transitive dependency on Logging.hh being installed. Since this is the normal behaviour, I don't think this is a problem. * Fixed segfaulting bug due to use of addProjection() in locally-scoped contained projections. This isn't a proper fix, since the whole framework should be designed to avoid the possibility of bugs like this. * Added newly built HepML and AIDA reference files for current analyses. 2007-10-02 Andy Buckley * Fixed possible null-pointer dereference in Particle copy constructor and copy assignment: this removes one of two blocker segfaults, the other of which is related to the copy-assignment of the TotalVisMomentum projection in the ExampleTree analysis. 2007-10-01 Andy Buckley * Fixed portable path to Rivet share directory. 2007-09-28 Andy Buckley * Added more functionality to the rivet-config script: now has libdir, includedir, cppflags, ldflags and ldlibs options. 2007-09-26 Andy Buckley * Added the analysis library closer function to the AnalysisHandler finalize() method, and also moved the analysis delete loop into AnalysisHandler::finalize() so as not to try deleting objects whose libraries have already closed. * Replaced the RivetPaths.cc.in method for portable paths with something using -D defines - much simpler! 2007-09-21 Lars Sonnenschein * Added HepEx0505013 analysis and JetShape projection (some fixes by AB). * Added GetLorentzJets member function to D0 RunII cone jet projection. 2007-09-21 Andy Buckley * Fixed lots of bugs and bad practice in HepEx0505013 (to make it compile-able!) * Downclassed the log messages from the Test analysis to DEBUG level. * Added isEmpty() method to final state projection. * Added testing for empty final state and useful debug log messages to sphericity projection. 2007-09-20 Andy Buckley * Added Hemispheres projection, which calculates event hemisphere masses and broadenings. 2007-09-19 Andy Buckley * Added an explicit copy assignment operator to Particle: the absence of one of these was responsible for the double-delete error. * Added a "fuzzy equals" utility function for float/double types to Utils.hh (which already contains a variety of handy little functions). * Removed deprecated Beam::operator(). * Added ChargedFinalState projection and de-pointered the contained FinalState projection in VetoedFinalState. 2007-09-18 Andy Buckley * Major bug fixes to the regularised version of the sphericity projection (and hence the Parisi tensor projection). Don't trust C & D param results from any previous version!
* Added extra methods to thrust and sphericity projections to get the oblateness and the sphericity basis (currently returns dummy axes since I can't yet work out how to get the similarity transform eigenvectors from CLHEP) 2007-09-14 Andy Buckley * Merged in a branch of pluggable analysis mechanisms. 2007-06-25 Jon Butterworth * Fixed some bugs in the ROOT output for DataPoint.h. 2007-06-25 Andy Buckley * include/Rivet/**/Makefile.am: No longer installing headers for "internal" functionality. * include/Rivet/Projections/*.hh: Removed the private restrictions on copy-assignment operators. 2007-06-18 Leif Lonnblad * include/LWH/Tree.h: Fixed minor bug in listObjectNames. * include/LWH/DataPointSet.h: Fixed setCoordinate functions so that they resize the vector of DataPoints if it initially was empty. * include/LWH/DataPoint.h: Added constructor taking a vector of measurements. 2007-06-16 Leif Lonnblad * include/LWH/Tree.h: Implemented the listObjectNames and ls functions. * include/Rivet/Projections/FinalStateHCM.hh, include/Rivet/Projections/VetoedFinalState.hh: removed _theParticles and corresponding access function. Use base class variable instead. * include/Rivet/Projections/FinalState.hh: Made _theParticles protected. 2007-06-13 Leif Lonnblad * src/Projections/FinalStateHCM.cc, src/Projections/DISKinematics.cc: Equality checks using GenParticle::operator== changed to check for pointer equality. * include/Rivet/Analysis/HepEx9506012.hh: Uses modified DISLepton projection. * include/Rivet/Particle.hh: Added member function to check if a GenParticle is associated. * include/Rivet/Projections/DISLepton.hh, src/Projections/DISLepton.cc: Fixed bug in projection. Introduced final state projection to limit searching for scattered lepton. Still not properly tested. 2007-06-08 Leif Lonnblad * include/Rivet/Projections/PVertex.hh, src/Projections/PVertex.cc: Fixed the projection to simply get the signal_process_vertex from the GenEvent. This is the way it should work. If the GenEvent does not have a signal_process_vertex properly set up in this way, the problem is with the class that fills the GenEvent. 2007-06-06 Jon Butterworth * Merged TotalVisibleMomentum and CalMET. * Added pT ranges to Vetoed final state projection. 2007-05-27 Jon Butterworth * Fixed initialization of VetoedFinalStateProjection in ExampleTree. 2007-05-27 Leif Lonnblad * include/Rivet/Projections/KtJets.*: Make sure the KtEvent is deleted properly. 2007-05-26 Jon Butterworth * Added leptons to the ExampleTree. * Added TotalVisibleEnergy projection, and added output to ExampleTree. 2007-05-25 Jon Butterworth * Added a charged lepton projection. 2007-05-23 Andy Buckley * src/Analysis/HepEx0409040.cc: Changed range of the histograms to the "pi" range rather than the "128" range. * src/Analysis/Analysis.cc: Fixed a bug in the AIDA path building. Histogram auto-booking now works. 2007-05-23 Leif Lonnblad * src/Analysis/HepEx9506012.cc: Now uses the histogram booking function in the Analysis class. 2007-05-23 Jon Butterworth * Fixed bug in PRD65092002 (was failing on zero jets). 2007-05-23 Andy Buckley * Added (but haven't properly tested) a VetoedFinalState projection. * Added normalize() method for AIDA 1D histograms. * Added configure checking for Mac OS X version, and setting the development target flag accordingly. 2007-05-22 Andy Buckley * Added an ostream method for AnalysisName enums. * Converted Analyses and Projections to use projection lists, cuts and beam constraints.
* Added beam pair combining to the BeamPair sets of Projections by finding set meta-intersections. * Added methods to Cuts, Analysis and Projection to make Cut definition easier. * Fixed default fall-through in cut handling switch statement and now using -numeric_limits::max() rather than min(). * Added more control of logging presentation via static flag methods on Log. 2007-05-13 Andy Buckley * Added self-consistency checking mechanisms for Cuts and Beam. * Re-implemented the cut-handling part of RivetInfo as a Cuts class. * Changed names of Analysis and Projection name() and handler() methods to getName() and getHandler() to be more consistent with the rest of the public method names in those classes. 2007-05-02 Andy Buckley * Added auto-booking of histogram bins from AIDA XML files. The AIDA files are located via a C++ function which is generated from RivetPaths.cc.in by running configure. 2007-04-18 Andy Buckley * Added a preliminary version of the Rick Field UE analysis, under the name PRD65092002. 2007-04-19 Leif Lonnblad * src/Analysis/HepEx0409040.cc: The reason this did not compile under gcc-4 is that some iterators into a vector were wrongly assumed to be pointers and were initialized to 0 and later compared to 0. I've changed this to initialize to end() of the corresponding vector and to compare with the same end() later. 2007-04-05 Andy Buckley * Lots of name changes in anticipation of the MCNet school. RivetHandler is now AnalysisHandler (since that's what it does!), BeamParticle has become ParticleName, and RivetInfo has been split into Cut and BeamConstraint portions. * Added BeamConstraint mechanism, which can be used to determine if an analysis is compatible with the beams being used in the generator. The ParticleName includes an "ANY" wildcard for this purpose. 2006-03-19 Andy Buckley * Added "rivet" executable which can read in HepMC ASCII dump files and apply Rivet analyses on the events. 2007-02-24 Leif Lonnblad * src/Projections/KtJets.cc: Added comparison of member variables in compare() function. * all: Merged changes from polymorphic-projections branch into trunk. 2007-02-17 Leif Lonnblad * all: projections and analysis handlers: All projections which use other projections now have a pointer rather than a copy of those projections to allow for polymorphism. The constructors have also been changed to require the used projections themselves, rather than the arguments needed to construct them. 2007-02-17 Leif Lonnblad * src/Projections/FinalState.cc, include/Rivet/Projections/FinalState.icc (Rivet), include/Rivet/Projections/FinalState.hh: Added cut in transverse momentum on the particles to be included in the final state. 2007-02-06 Leif Lonnblad * include/LWH/HistogramFactory.h: Fixed divide-by-zero in divide function. Also fixed bug in error calculation in divide function. Introduced checkBin function to make sure two histograms are equal even if they have variable bin widths. * include/LWH/Histogram1D.h: In normalize(double), do not do anything if the sum of the bins is zero, to avoid dividing by zero. 2007-01-20 Leif Lonnblad * src/Test/testLWH.cc: Modified to output files using the Tree. * configure.ac: Removed AC_CONFIG_AUX_DIR([include/Rivet/Config]) since the directory does not exist anymore. 2006-12-21 Andy Buckley * Rivet will now conditionally install the AIDA and LWH headers if it can't find them when configure'ing. * Started integrating Leif's LWH package to fulfill the AIDA duties.
* Replaced multitude of CLHEP wrapper headers with a single RivetCLHEP.h header. 2006-11-20 Andy Buckley * Introduced log4cpp logging. * Added analysis enum, which can be used as input to an analysis factory by Rivet users. 2006-11-02 Andy Buckley * Yet more, almost pointless, administrative moving around of things with the intention of making the structure a bit better-defined: * The RivetInfo and RivetHandler classes have been moved from src/Analysis into src as they are really the main Rivet interface classes. The Rivet.h header has also been moved into the "header root". * The build of a single shared library in lib has been disabled, with the library being built instead in src. 2006-10-14 Andy Buckley * Introduced a minimal subset of the Sherpa math tools, such as Vector{3,4}D, Matrix, etc. The intention is to eventually cut the dependency on CLHEP. 2006-07-28 Andy Buckley * Moving things around: all sources now in directories under src. 2006-06-04 Leif Lonnblad * Analysis/Examples/HZ95108.*: Now uses CentralEtHCM. Also set GeV units on the relevant histograms. * Projections/CentralEtHCM.*: Making a special class just to get out one number - the summed Et in the central rapidity bin - may seem like overkill. But in case someone else might need it... 2006-06-03 Leif Lonnblad * Analysis/Examples/HZ95108.*: Added the hz95108 energy flow analysis from HZtool. * Projections/DISLepton.*: Since many HERA measurements do not care if we have electron or positron beam, it is now possible to specify lepton or anti-lepton. * Projections/Event.*: Added member and access function for the weight of an event (taken from the GenEvent object.weights()[0]). * Analysis/RivetHandler.*: Now depends explicitly on the AIDA interface. An AIDA analysis factory must be specified in the constructor, where a tree and histogram factory is automatically created. Added access functions to the relevant AIDA objects. * Analysis/AnalysisBase.*: Added access to the RivetHandler and its AIDA factories. 2005-12-27 Leif Lonnblad * configure.ac: Added -I$THEPEGPATH/include to AM_CPPFLAGS. * Config/Rivet.h: Added some std includes and a using std:: declaration. * Analysis/RivetInfo.*: Fixed some bugs. The RivetInfo facility now works, although it has not been thoroughly tested. * Analysis/Examples/TestMultiplicity.*: Re-introduced FinalStateHCM for testing purposes but commented it away again. * .: Made a number of changes to implement handling of RivetInfo objects.
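The 2008-03-10 note earlier in this ChangeLog points at the weighted-average formula for profile-histogram errors. The sketch below shows one common convention (weighted mean with an effective-entries error); it is an illustration only and not necessarily the exact formula adopted in the histogramming code of that era. The struct name and accumulator layout are hypothetical.

#include <cmath>

// Hypothetical weighted profile bin: accumulate sumW, sumW2, sumWY, sumWY2
// and report the weighted mean of y with an "effective entries" uncertainty.
struct WeightedProfileBin {
  double sumW = 0, sumW2 = 0, sumWY = 0, sumWY2 = 0;

  void fill(double y, double w) {
    sumW += w; sumW2 += w*w; sumWY += w*y; sumWY2 += w*y*y;
  }

  double mean() const { return sumW != 0 ? sumWY / sumW : 0; }

  double meanErr() const {
    if (sumW == 0 || sumW2 == 0) return 0;
    const double mu   = mean();
    const double var  = sumWY2 / sumW - mu*mu;   // weighted variance of y
    const double nEff = sumW*sumW / sumW2;       // effective number of entries
    return (var > 0 && nEff > 0) ? std::sqrt(var / nEff) : 0;
  }
};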
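The analysis diffs that follow repeatedly apply one migration pattern: direct HepMC2 vertex access (particles_out_size(), particles_out_const_iterator) is replaced by the version-neutral HepMCUtils::particles(vertex, Relatives::CHILDREN) helper over ConstGenVertexPtr/ConstGenParticlePtr. A minimal sketch of the resulting idiom is given here, assuming the Rivet-internal aliases and helpers visible in those hunks; the function itself (countChargedDaughters) is hypothetical and not part of the patch.

#include "Rivet/Analysis.hh"

namespace Rivet {

  // Count stable, charged decay daughters of a vertex using the
  // version-neutral children iteration shown in the diffs below.
  unsigned int countChargedDaughters(ConstGenVertexPtr decV) {
    unsigned int n = 0;
    if (!decV) return n;
    for (ConstGenParticlePtr child : HepMCUtils::particles(decV, Relatives::CHILDREN)) {
      // status 1 = final-state particle; non-zero three-charge = charged
      if (child->status() == 1 && PID::threeCharge(child->pdg_id()) != 0) ++n;
    }
    return n;
  }

}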
diff --git a/analyses/pluginATLAS/ATLAS_2011_I944826.cc b/analyses/pluginATLAS/ATLAS_2011_I944826.cc --- a/analyses/pluginATLAS/ATLAS_2011_I944826.cc +++ b/analyses/pluginATLAS/ATLAS_2011_I944826.cc @@ -1,260 +1,260 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" #include "Rivet/Projections/UnstableFinalState.hh" namespace Rivet { class ATLAS_2011_I944826 : public Analysis { public: /// Constructor ATLAS_2011_I944826() : Analysis("ATLAS_2011_I944826") { _sum_w_ks = 0.0; _sum_w_lambda = 0.0; _sum_w_passed = 0.0; } /// Book histograms and initialise projections before the run void init() { UnstableFinalState ufs(Cuts::pT > 100*MeV); declare(ufs, "UFS"); ChargedFinalState mbts(Cuts::absetaIn(2.09, 3.84)); declare(mbts, "MBTS"); IdentifiedFinalState nstable(Cuts::abseta < 2.5 && Cuts::pT >= 100*MeV); nstable.acceptIdPair(PID::ELECTRON) .acceptIdPair(PID::MUON) .acceptIdPair(PID::PIPLUS) .acceptIdPair(PID::KPLUS) .acceptIdPair(PID::PROTON); declare(nstable, "nstable"); if (fuzzyEquals(sqrtS()/GeV, 7000, 1e-3)) { _hist_Ks_pT = bookHisto1D(1, 1, 1); _hist_Ks_y = bookHisto1D(2, 1, 1); _hist_Ks_mult = bookHisto1D(3, 1, 1); _hist_L_pT = bookHisto1D(7, 1, 1); _hist_L_y = bookHisto1D(8, 1, 1); _hist_L_mult = bookHisto1D(9, 1, 1); _hist_Ratio_v_y = bookScatter2D(13, 1, 1); _hist_Ratio_v_pT = bookScatter2D(14, 1, 1); // _temp_lambda_v_y = Histo1D(10, 0.0, 2.5); _temp_lambdabar_v_y = Histo1D(10, 0.0, 2.5); _temp_lambda_v_pT = Histo1D(18, 0.5, 4.1); _temp_lambdabar_v_pT = Histo1D(18, 0.5, 4.1); } else if (fuzzyEquals(sqrtS()/GeV, 900, 1E-3)) { _hist_Ks_pT = bookHisto1D(4, 1, 1); _hist_Ks_y = bookHisto1D(5, 1, 1); _hist_Ks_mult = bookHisto1D(6, 1, 1); _hist_L_pT = bookHisto1D(10, 1, 1); _hist_L_y = bookHisto1D(11, 1, 1); _hist_L_mult = bookHisto1D(12, 1, 1); _hist_Ratio_v_y = bookScatter2D(15, 1, 1); _hist_Ratio_v_pT = bookScatter2D(16, 1, 1); // _temp_lambda_v_y = Histo1D(5, 0.0, 2.5); _temp_lambdabar_v_y = Histo1D(5, 0.0, 2.5); _temp_lambda_v_pT = Histo1D(8, 0.5, 3.7); _temp_lambdabar_v_pT = Histo1D(8, 0.5, 3.7); } } // This function is required to impose the flight time cuts on Kaons and Lambdas double getPerpFlightDistance(const Rivet::Particle& p) { - HepMC::ConstGenParticlePtr genp = p.genParticle(); - HepMC::ConstGenVertexPtr prodV = genp->production_vertex(); - HepMC::ConstGenVertexPtr decV = genp->end_vertex(); - HepMC::FourVector prodPos = prodV->position(); + ConstGenParticlePtr genp = p.genParticle(); + ConstGenVertexPtr prodV = genp->production_vertex(); + ConstGenVertexPtr decV = genp->end_vertex(); + RivetHepMC::FourVector prodPos = prodV->position(); if (decV) { - const HepMC::FourVector decPos = decV->position(); + const RivetHepMC::FourVector decPos = decV->position(); double dy = prodPos.y() - decPos.y(); double dx = prodPos.x() - decPos.x(); return add_quad(dx, dy); } return numeric_limits::max(); } bool daughtersSurviveCuts(const Rivet::Particle& p) { // We require the Kshort or Lambda to decay into two charged // particles with at least pT = 100 MeV inside acceptance region - HepMC::ConstGenParticlePtr genp = p.genParticle(); - HepMC::ConstGenVertexPtr decV = genp->end_vertex(); + ConstGenParticlePtr genp = p.genParticle(); + ConstGenVertexPtr decV = genp->end_vertex(); bool decision = true; if (!decV) return false; - if (decV->particles_out_size() == 2) { + if (HepMCUtils::particles(decV, Relatives::CHILDREN).size() == 2) { std::vector pTs; std::vector charges; std::vector etas; - 
for(HepMC::ConstGenParticlePtr gp: particles(decV, Relatives::CHILDREN)) { + for(ConstGenParticlePtr gp: HepMCUtils::particles(decV, Relatives::CHILDREN)) { pTs.push_back(gp->momentum().perp()); etas.push_back(fabs(gp->momentum().eta())); charges.push_back( Rivet::PID::threeCharge(gp->pdg_id()) ); // gp->print(); } if ( (pTs[0]/Rivet::GeV < 0.1) || (pTs[1]/Rivet::GeV < 0.1) ) { decision = false; MSG_DEBUG("Failed pT cut: " << pTs[0]/Rivet::GeV << " " << pTs[1]/Rivet::GeV); } if ( etas[0] > 2.5 || etas[1] > 2.5 ) { decision = false; MSG_DEBUG("Failed eta cut: " << etas[0] << " " << etas[1]); } if ( charges[0] * charges[1] >= 0 ) { decision = false; MSG_DEBUG("Failed opposite charge cut: " << charges[0] << " " << charges[1]); } } else { decision = false; - MSG_DEBUG("Failed nDaughters cut: " << decV->particles_out_size()); + MSG_DEBUG("Failed nDaughters cut: " << HepMCUtils::particles(decV, Relatives::CHILDREN).size()); } return decision; } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); // ATLAS MBTS trigger requirement of at least one hit in either hemisphere if (apply(event, "MBTS").size() < 1) { MSG_DEBUG("Failed trigger cut"); vetoEvent; } // Veto event also when we find less than 2 particles in the acceptance region of type 211,2212,11,13,321 if (apply(event, "nstable").size() < 2) { MSG_DEBUG("Failed stable particle cut"); vetoEvent; } _sum_w_passed += weight; // This ufs holds all the Kaons and Lambdas const UnstableFinalState& ufs = apply(event, "UFS"); // Some conters int n_KS0 = 0; int n_LAMBDA = 0; // Particle loop foreach (const Particle& p, ufs.particles()) { // General particle quantities const double pT = p.pT(); const double y = p.rapidity(); const PdgId apid = p.abspid(); double flightd = 0.0; // Look for Kaons, Lambdas switch (apid) { case PID::K0S: flightd = getPerpFlightDistance(p); if (!inRange(flightd/mm, 4., 450.) ) { MSG_DEBUG("Kaon failed flight distance cut:" << flightd); break; } if (daughtersSurviveCuts(p) ) { _hist_Ks_y ->fill(y, weight); _hist_Ks_pT->fill(pT/GeV, weight); _sum_w_ks += weight; n_KS0++; } break; case PID::LAMBDA: if (pT < 0.5*GeV) { // Lambdas have an additional pT cut of 500 MeV MSG_DEBUG("Lambda failed pT cut:" << pT/GeV << " GeV"); break; } flightd = getPerpFlightDistance(p); if (!inRange(flightd/mm, 17., 450.)) { MSG_DEBUG("Lambda failed flight distance cut:" << flightd/mm << " mm"); break; } if ( daughtersSurviveCuts(p) ) { if (p.pid() == PID::LAMBDA) { _temp_lambda_v_y.fill(fabs(y), weight); _temp_lambda_v_pT.fill(pT/GeV, weight); _hist_L_y->fill(y, weight); _hist_L_pT->fill(pT/GeV, weight); _sum_w_lambda += weight; n_LAMBDA++; } else if (p.pid() == -PID::LAMBDA) { _temp_lambdabar_v_y.fill(fabs(y), weight); _temp_lambdabar_v_pT.fill(pT/GeV, weight); } } break; } } // Fill multiplicity histos _hist_Ks_mult->fill(n_KS0, weight); _hist_L_mult->fill(n_LAMBDA, weight); } /// Normalise histograms etc., after the run void finalize() { MSG_DEBUG("# Events that pass the trigger: " << _sum_w_passed); MSG_DEBUG("# Kshort events: " << _sum_w_ks); MSG_DEBUG("# Lambda events: " << _sum_w_lambda); /// @todo Replace with normalize()? scale(_hist_Ks_pT, 1.0/_sum_w_ks); scale(_hist_Ks_y, 1.0/_sum_w_ks); scale(_hist_Ks_mult, 1.0/_sum_w_passed); /// @todo Replace with normalize()? 
scale(_hist_L_pT, 1.0/_sum_w_lambda); scale(_hist_L_y, 1.0/_sum_w_lambda); scale(_hist_L_mult, 1.0/_sum_w_passed); // Division of histograms to obtain lambda_bar/lambda ratios divide(_temp_lambdabar_v_y, _temp_lambda_v_y, _hist_Ratio_v_y); divide(_temp_lambdabar_v_pT, _temp_lambda_v_pT, _hist_Ratio_v_pT); } private: /// Counters double _sum_w_ks, _sum_w_lambda, _sum_w_passed; /// @name Persistent histograms //@{ Histo1DPtr _hist_Ks_pT, _hist_Ks_y, _hist_Ks_mult; Histo1DPtr _hist_L_pT, _hist_L_y, _hist_L_mult; Scatter2DPtr _hist_Ratio_v_pT, _hist_Ratio_v_y; //@} /// @name Temporary histograms //@{ Histo1D _temp_lambda_v_y, _temp_lambdabar_v_y; Histo1D _temp_lambda_v_pT, _temp_lambdabar_v_pT; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ATLAS_2011_I944826); } diff --git a/analyses/pluginATLAS/ATLAS_2011_S9035664.cc b/analyses/pluginATLAS/ATLAS_2011_S9035664.cc --- a/analyses/pluginATLAS/ATLAS_2011_S9035664.cc +++ b/analyses/pluginATLAS/ATLAS_2011_S9035664.cc @@ -1,138 +1,138 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/UnstableFinalState.hh" namespace Rivet { /// @brief J/psi production at ATLAS class ATLAS_2011_S9035664: public Analysis { public: /// Constructor ATLAS_2011_S9035664() : Analysis("ATLAS_2011_S9035664") {} /// @name Analysis methods //@{ void init() { declare(UnstableFinalState(), "UFS"); _nonPrRapHigh = bookHisto1D( 14, 1, 1); _nonPrRapMedHigh = bookHisto1D( 13, 1, 1); _nonPrRapMedLow = bookHisto1D( 12, 1, 1); _nonPrRapLow = bookHisto1D( 11, 1, 1); _PrRapHigh = bookHisto1D( 18, 1, 1); _PrRapMedHigh = bookHisto1D( 17, 1, 1); _PrRapMedLow = bookHisto1D( 16, 1, 1); _PrRapLow = bookHisto1D( 15, 1, 1); _IncRapHigh = bookHisto1D( 20, 1, 1); _IncRapMedHigh = bookHisto1D( 21, 1, 1); _IncRapMedLow = bookHisto1D( 22, 1, 1); _IncRapLow = bookHisto1D( 23, 1, 1); } void analyze(const Event& e) { // Get event weight for histo filling const double weight = e.weight(); // Final state of unstable particles to get particle spectra const UnstableFinalState& ufs = apply(e, "UFS"); foreach (const Particle& p, ufs.particles()) { if (p.abspid() != 443) continue; ConstGenVertexPtr gv = p.genParticle()->production_vertex(); bool nonPrompt = false; if (gv) { - for (ConstGenParticlePtr pi: Rivet::particles(gv, Relatives::ANCESTORS)) { + for (ConstGenParticlePtr pi: HepMCUtils::particles(gv, Relatives::ANCESTORS)) { const PdgId pid2 = pi->pdg_id(); if (PID::isHadron(pid2) && PID::hasBottom(pid2)) { nonPrompt = true; break; } } } double absrap = p.absrap(); double xp = p.perp(); if (absrap<=2.4 and absrap>2.) { if (nonPrompt) _nonPrRapHigh->fill(xp, weight); else if (!nonPrompt) _PrRapHigh->fill(xp, weight); _IncRapHigh->fill(xp, weight); } else if (absrap<=2. 
and absrap>1.5) { if (nonPrompt) _nonPrRapMedHigh->fill(xp, weight); else if (!nonPrompt) _PrRapMedHigh->fill(xp, weight); _IncRapMedHigh->fill(xp, weight); } else if (absrap<=1.5 and absrap>0.75) { if (nonPrompt) _nonPrRapMedLow->fill(xp, weight); else if (!nonPrompt) _PrRapMedLow->fill(xp, weight); _IncRapMedLow->fill(xp, weight); } else if (absrap<=0.75) { if (nonPrompt) _nonPrRapLow->fill(xp, weight); else if (!nonPrompt) _PrRapLow->fill(xp, weight); _IncRapLow->fill(xp, weight); } } } /// Finalize void finalize() { double factor = crossSection()/nanobarn*0.0593; scale(_PrRapHigh , factor/sumOfWeights()); scale(_PrRapMedHigh , factor/sumOfWeights()); scale(_PrRapMedLow , factor/sumOfWeights()); scale(_PrRapLow , factor/sumOfWeights()); scale(_nonPrRapHigh , factor/sumOfWeights()); scale(_nonPrRapMedHigh, factor/sumOfWeights()); scale(_nonPrRapMedLow , factor/sumOfWeights()); scale(_nonPrRapLow , factor/sumOfWeights()); scale(_IncRapHigh , 1000.*factor/sumOfWeights()); scale(_IncRapMedHigh , 1000.*factor/sumOfWeights()); scale(_IncRapMedLow , 1000.*factor/sumOfWeights()); scale(_IncRapLow , 1000.*factor/sumOfWeights()); } //@} private: Histo1DPtr _nonPrRapHigh; Histo1DPtr _nonPrRapMedHigh; Histo1DPtr _nonPrRapMedLow; Histo1DPtr _nonPrRapLow; Histo1DPtr _PrRapHigh; Histo1DPtr _PrRapMedHigh; Histo1DPtr _PrRapMedLow; Histo1DPtr _PrRapLow; Histo1DPtr _IncRapHigh; Histo1DPtr _IncRapMedHigh; Histo1DPtr _IncRapMedLow; Histo1DPtr _IncRapLow; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ATLAS_2011_S9035664); } diff --git a/analyses/pluginATLAS/ATLAS_2012_I1094568.cc b/analyses/pluginATLAS/ATLAS_2012_I1094568.cc --- a/analyses/pluginATLAS/ATLAS_2012_I1094568.cc +++ b/analyses/pluginATLAS/ATLAS_2012_I1094568.cc @@ -1,366 +1,366 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" #include "Rivet/Projections/LeadingParticlesFinalState.hh" #include "Rivet/Projections/UnstableFinalState.hh" #include "Rivet/Projections/HadronicFinalState.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/FastJets.hh" namespace Rivet { struct ATLAS_2012_I1094568_Plots { // Track which veto region this is, to match the autobooked histograms int region_index; // Lower rapidity boundary or veto region double y_low; // Upper rapidity boundary or veto region double y_high; double vetoJetPt_Q0; double vetoJetPt_Qsum; // Histograms to store the veto jet pT and sum(veto jet pT) histograms. Histo1DPtr _h_vetoJetPt_Q0; Histo1DPtr _h_vetoJetPt_Qsum; // Scatter2Ds for the gap fractions Scatter2DPtr _d_gapFraction_Q0; Scatter2DPtr _d_gapFraction_Qsum; }; /// Top pair production with central jet veto class ATLAS_2012_I1094568 : public Analysis { public: /// Constructor ATLAS_2012_I1094568() : Analysis("ATLAS_2012_I1094568") { } /// Book histograms and initialise projections before the run void init() { const FinalState fs(Cuts::abseta < 4.5); declare(fs, "ALL_FS"); /// Get electrons from truth record IdentifiedFinalState elec_fs(Cuts::abseta < 2.47 && Cuts::pT > 25*GeV); elec_fs.acceptIdPair(PID::ELECTRON); declare(elec_fs, "ELEC_FS"); /// Get muons which pass the initial kinematic cuts: IdentifiedFinalState muon_fs(Cuts::abseta < 2.5 && Cuts::pT > 20*GeV); muon_fs.acceptIdPair(PID::MUON); declare(muon_fs, "MUON_FS"); /// Get all neutrinos. These will not be used to form jets. 
/// We'll use the highest 2 pT neutrinos to calculate the MET IdentifiedFinalState neutrino_fs(Cuts::abseta < 4.5); neutrino_fs.acceptNeutrinos(); declare(neutrino_fs, "NEUTRINO_FS"); // Final state used as input for jet-finding. // We include everything except the muons and neutrinos VetoedFinalState jet_input(fs); jet_input.vetoNeutrinos(); jet_input.addVetoPairId(PID::MUON); declare(jet_input, "JET_INPUT"); // Get the jets FastJets jets(jet_input, FastJets::ANTIKT, 0.4); declare(jets, "JETS"); // Initialise weight counter m_total_weight = 0.0; // Init histogramming for the various regions m_plots[0].region_index = 1; m_plots[0].y_low = 0.0; m_plots[0].y_high = 0.8; initializePlots(m_plots[0]); // m_plots[1].region_index = 2; m_plots[1].y_low = 0.8; m_plots[1].y_high = 1.5; initializePlots(m_plots[1]); // m_plots[2].region_index = 3; m_plots[2].y_low = 1.5; m_plots[2].y_high = 2.1; initializePlots(m_plots[2]); // m_plots[3].region_index = 4; m_plots[3].y_low = 0.0; m_plots[3].y_high = 2.1; initializePlots(m_plots[3]); } void initializePlots(ATLAS_2012_I1094568_Plots& plots) { const string vetoPt_Q0_name = "TMP/vetoJetPt_Q0_" + to_str(plots.region_index); plots.vetoJetPt_Q0 = 0.0; plots._h_vetoJetPt_Q0 = bookHisto1D(vetoPt_Q0_name, 200, 0.0, 1000.0); plots._d_gapFraction_Q0 = bookScatter2D(plots.region_index, 1, 1); foreach (Point2D p, refData(plots.region_index, 1, 1).points()) { p.setY(0, 0); plots._d_gapFraction_Q0->addPoint(p); } const string vetoPt_Qsum_name = "TMP/vetoJetPt_Qsum_" + to_str(plots.region_index); plots._h_vetoJetPt_Qsum = bookHisto1D(vetoPt_Qsum_name, 200, 0.0, 1000.0); plots._d_gapFraction_Qsum = bookScatter2D(plots.region_index, 2, 1); plots.vetoJetPt_Qsum = 0.0; foreach (Point2D p, refData(plots.region_index, 2, 1).points()) { p.setY(0, 0); plots._d_gapFraction_Qsum->addPoint(p); } } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); /// Get the various sets of final state particles const Particles& elecFS = apply(event, "ELEC_FS").particlesByPt(); const Particles& muonFS = apply(event, "MUON_FS").particlesByPt(); const Particles& neutrinoFS = apply(event, "NEUTRINO_FS").particlesByPt(); // Get all jets with pT > 25 GeV const Jets& jets = apply(event, "JETS").jetsByPt(25.0*GeV); // Keep any jets that pass the initial rapidity cut vector central_jets; foreach(const Jet& j, jets) { if (j.absrap() < 2.4) central_jets.push_back(&j); } // For each of the jets that pass the rapidity cut, only keep those that are not // too close to any leptons vector good_jets; foreach(const Jet* j, central_jets) { bool goodJet = true; foreach (const Particle& e, elecFS) { double elec_jet_dR = deltaR(e.momentum(), j->momentum()); if (elec_jet_dR < 0.4) { goodJet = false; break; } } if (!goodJet) continue; if (!goodJet) continue; foreach (const Particle& m, muonFS) { double muon_jet_dR = deltaR(m.momentum(), j->momentum()); if (muon_jet_dR < 0.4) { goodJet = false; break; } } if (!goodJet) continue; good_jets.push_back(j); } // Get b hadrons with pT > 5 GeV /// @todo This is a hack -- replace with UnstableFinalState vector B_hadrons; - vector allParticles = particles(event.genEvent()); + vector allParticles = HepMCUtils::particles(event.genEvent()); for (size_t i = 0; i < allParticles.size(); i++) { ConstGenParticlePtr p = allParticles[i]; if (!PID::isHadron(p->pdg_id()) || !PID::hasBottom(p->pdg_id())) continue; if (p->momentum().perp() < 5*GeV) continue; B_hadrons.push_back(p); } // For each of the good jets, check whether any are 
b-jets (via dR matching) vector b_jets; for(const Jet* j: good_jets) { bool isbJet = false; for(ConstGenParticlePtr b: B_hadrons) { if (deltaR(j->momentum(), FourMomentum(b->momentum())) < 0.3) isbJet = true; } if (isbJet) b_jets.push_back(j); } // Check the good jets again and keep track of the "additional jets" // i.e. those which are not either of the 2 highest pT b-jets vector veto_jets; int n_bjets_matched = 0; for(const Jet* j: good_jets) { bool isBJet = false; for(const Jet* b: b_jets) { if (n_bjets_matched == 2) break; if (b == j){isBJet = true; ++ n_bjets_matched;} } if (!isBJet) veto_jets.push_back(j); } // Get the MET by taking the vector sum of all neutrinos /// @todo Use MissingMomentum instead? double MET = 0; FourMomentum p_MET; for(const Particle& p: neutrinoFS) { p_MET = p_MET + p.momentum(); } MET = p_MET.pT(); // Now we have everything we need to start doing the event selections bool passed_ee = false; vector vetoJets_ee; // We want exactly 2 electrons... if (elecFS.size() == 2) { // ... with opposite sign charges. if (charge(elecFS[0]) != charge(elecFS[1])) { // Check the MET if (MET >= 40*GeV) { // Do some dilepton mass cuts const double dilepton_mass = (elecFS[0].momentum() + elecFS[1].momentum()).mass(); if (dilepton_mass >= 15*GeV) { if (fabs(dilepton_mass - 91.0*GeV) >= 10.0*GeV) { // We need at least 2 b-jets if (b_jets.size() > 1) { // This event has passed all the cuts; passed_ee = true; } } } } } } bool passed_mumu = false; // Now do the same checks for the mumu channel vector vetoJets_mumu; // So we now want 2 good muons... if (muonFS.size() == 2) { // ...with opposite sign charges. if (charge(muonFS[0]) != charge(muonFS[1])) { // Check the MET if (MET >= 40*GeV) { // and do some di-muon mass cuts const double dilepton_mass = (muonFS.at(0).momentum() + muonFS.at(1).momentum()).mass(); if (dilepton_mass >= 15*GeV) { if (fabs(dilepton_mass - 91.0*GeV) >= 10.0*GeV) { // Need at least 2 b-jets if (b_jets.size() > 1) { // This event has passed all mumu-channel cuts passed_mumu = true; } } } } } } bool passed_emu = false; // Finally, the same again with the emu channel vector vetoJets_emu; // We want exactly 1 electron and 1 muon if (elecFS.size() == 1 && muonFS.size() == 1) { // With opposite sign charges if (charge(elecFS[0]) != charge(muonFS[0])) { // Calculate HT: scalar sum of the pTs of the leptons and all good jets double HT = 0; HT += elecFS[0].pT(); HT += muonFS[0].pT(); foreach (const Jet* j, good_jets) HT += fabs(j->pT()); // Keep events with HT > 130 GeV if (HT > 130.0*GeV) { // And again we want 2 or more b-jets if (b_jets.size() > 1) { passed_emu = true; } } } } if (passed_ee == true || passed_mumu == true || passed_emu == true) { // If the event passes the selection, we use it for all gap fractions m_total_weight += weight; // Loop over each veto jet foreach (const Jet* j, veto_jets) { const double pt = j->pT(); const double rapidity = fabs(j->rapidity()); // Loop over each region for (size_t i = 0; i < 4; ++i) { // If the jet falls into this region, get its pT and increment sum(pT) if (inRange(rapidity, m_plots[i].y_low, m_plots[i].y_high)) { m_plots[i].vetoJetPt_Qsum += pt; // If we've already got a veto jet, don't replace it if (m_plots[i].vetoJetPt_Q0 == 0.0) m_plots[i].vetoJetPt_Q0 = pt; } } } for (size_t i = 0; i < 4; ++i) { m_plots[i]._h_vetoJetPt_Q0->fill(m_plots[i].vetoJetPt_Q0, weight); m_plots[i]._h_vetoJetPt_Qsum->fill(m_plots[i].vetoJetPt_Qsum, weight); m_plots[i].vetoJetPt_Q0 = 0.0; m_plots[i].vetoJetPt_Qsum = 0.0; } } } /// Normalise 
histograms etc., after the run void finalize() { for (size_t i = 0; i < 4; ++i) { finalizeGapFraction(m_total_weight, m_plots[i]._d_gapFraction_Q0, m_plots[i]._h_vetoJetPt_Q0); finalizeGapFraction(m_total_weight, m_plots[i]._d_gapFraction_Qsum, m_plots[i]._h_vetoJetPt_Qsum); } } /// Convert temporary histos to cumulative efficiency scatters /// @todo Should be possible to replace this with a couple of YODA one-lines for diff -> integral and "efficiency division" void finalizeGapFraction(double total_weight, Scatter2DPtr gapFraction, Histo1DPtr vetoPt) { // Stores the cumulative frequency of the veto jet pT histogram double vetoPtWeightSum = 0.0; // Keep track of which gap fraction point we're currently populating (#final_points != #tmp_bins) size_t fgap_point = 0; for (size_t i = 0; i < vetoPt->numBins(); ++i) { // If we've done the last "final" point, stop if (fgap_point == gapFraction->numPoints()) break; // Increment the cumulative vetoPt counter for this temp histo bin /// @todo Get rid of this and use vetoPt->integral(i+1) when points and bins line up? vetoPtWeightSum += vetoPt->bin(i).sumW(); // If this temp histo bin's upper edge doesn't correspond to the reference point, don't finalise the scatter. // Note that points are ON the bin edges and have no width: they represent the integral up to exactly that point. if ( !fuzzyEquals(vetoPt->bin(i).xMax(), gapFraction->point(fgap_point).x()) ) continue; // Calculate the gap fraction and its uncertainty const double frac = (total_weight != 0.0) ? vetoPtWeightSum/total_weight : 0; const double fracErr = (total_weight != 0.0) ? sqrt(frac*(1-frac)/total_weight) : 0; gapFraction->point(fgap_point).setY(frac, fracErr); ++fgap_point; } } private: // Weight counter double m_total_weight; // Structs containing all the plots, for each event selection ATLAS_2012_I1094568_Plots m_plots[4]; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ATLAS_2012_I1094568); } diff --git a/analyses/pluginATLAS/ATLAS_2012_I1118269.cc b/analyses/pluginATLAS/ATLAS_2012_I1118269.cc --- a/analyses/pluginATLAS/ATLAS_2012_I1118269.cc +++ b/analyses/pluginATLAS/ATLAS_2012_I1118269.cc @@ -1,78 +1,78 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Particle.hh" namespace Rivet { class ATLAS_2012_I1118269 : public Analysis { public: ATLAS_2012_I1118269() : Analysis("ATLAS_2012_I1118269") { } void init() { _h_sigma_vs_pt = bookHisto1D(1, 1, 1); _h_sigma_vs_eta = bookHisto1D(2, 1, 1); } /// Perform the per-event analysis void analyze(const Event& event) { double weight = event.weight(); Particles bhadrons; - for(ConstGenParticlePtr p: particles(event.genEvent())) { + for(ConstGenParticlePtr p: HepMCUtils::particles(event.genEvent())) { if (!( PID::isHadron( p->pdg_id() ) && PID::hasBottom( p->pdg_id() )) ) continue; ConstGenVertexPtr dv = p->end_vertex(); /// @todo In future, convert to use built-in 'last B hadron' function bool hasBdaughter = false; if ( PID::isHadron( p->pdg_id() ) && PID::hasBottom( p->pdg_id() )) { // b-hadron selection if (dv) { /// @todo particles_out_const_iterator is deprecated in HepMC3 - for (HepMC::GenVertex::particles_out_const_iterator pp = dv->particles_out_const_begin() ; pp != dv->particles_out_const_end() ; ++pp) { - if ( PID::isHadron( (*pp)->pdg_id() ) && PID::hasBottom( (*pp)->pdg_id()) ) { + for(ConstGenParticlePtr pp: HepMCUtils::particles(dv, Relatives::CHILDREN)){ + if ( PID::isHadron( pp->pdg_id() ) && PID::hasBottom( pp->pdg_id()) ) { hasBdaughter = true; } } } } if (hasBdaughter) continue; bhadrons += 
Particle(*p); } foreach (const Particle& particle, bhadrons) { double eta = particle.eta(); double pt = particle.pT(); if (!(inRange(eta, -2.5, 2.5))) continue; if (pt < 9.*GeV) continue; _h_sigma_vs_pt->fill(pt, weight); _h_sigma_vs_eta->fill(fabs(eta), weight); } } void finalize() { scale(_h_sigma_vs_pt, crossSection()/nanobarn/sumOfWeights()); scale(_h_sigma_vs_eta, crossSection()/microbarn/sumOfWeights()); } private: Histo1DPtr _h_sigma_vs_pt; Histo1DPtr _h_sigma_vs_eta; }; // Hook for the plugin system DECLARE_RIVET_PLUGIN(ATLAS_2012_I1118269); } diff --git a/analyses/pluginATLAS/ATLAS_2012_I1188891.cc b/analyses/pluginATLAS/ATLAS_2012_I1188891.cc --- a/analyses/pluginATLAS/ATLAS_2012_I1188891.cc +++ b/analyses/pluginATLAS/ATLAS_2012_I1188891.cc @@ -1,146 +1,146 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Tools/BinnedHistogram.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Particle.hh" namespace Rivet { class ATLAS_2012_I1188891 : public Analysis { public: ATLAS_2012_I1188891() : Analysis("ATLAS_2012_I1188891") { } void init() { const FinalState fs; FastJets fj04(fs, FastJets::ANTIKT, 0.4); declare(fj04, "AntiKT04"); string histotitle[7]={"BBfraction","BCfraction","CCfraction","BUfraction","CUfraction","UUfraction","Total"}; for (int i = 0 ; i < 7 ; i++){ _h_temp[i] = bookHisto1D("TMP/"+histotitle[i],refData(1,1,1)); if (i < 6) { _h_results[i] = bookScatter2D(i+1, 1, 1); } } } void analyze(const Event& event) { double weight = event.weight(); double weight100 = event.weight() * 100.; //to get results in % //keeps jets with pt>20 geV and ordered in decreasing pt Jets jetAr = apply(event, "AntiKT04").jetsByPt(20*GeV); int flav[2]={1,1}; vector leadjets; //get b/c-hadrons vector B_hadrons, C_hadrons; - vector allParticles = particles(event.genEvent()); + vector allParticles = HepMCUtils::particles(event.genEvent()); for (size_t i = 0; i < allParticles.size(); i++) { ConstGenParticlePtr p = allParticles.at(i); if(p->momentum().perp()*GeV < 5) continue; if ( (Rivet::PID::isHadron ( p->pdg_id() ) && Rivet::PID::hasBottom( p->pdg_id() ) ) ) { B_hadrons.push_back(p); } if ( (Rivet::PID::isHadron( p->pdg_id() ) && Rivet::PID::hasCharm( p->pdg_id() ) ) ) { C_hadrons.push_back(p); } } //select dijet for(const Jet& jet: jetAr) { const double pT = jet.pT(); const double absy = jet.absrap(); bool isBjet = jet.bTagged(); bool isCjet = jet.cTagged(); int jetflav=1; if (isBjet)jetflav=5; else if (isCjet)jetflav=4; if (absy <= 2.1 && leadjets.size() < 2) { if (pT > 500*GeV) continue; if ((leadjets.empty() && pT < 40*GeV) || pT < 20*GeV) continue; leadjets.push_back(jet.momentum()); if (leadjets.size()==1) flav[0] = jetflav; if (leadjets.size()==2) flav[1] = jetflav; } } if (leadjets.size() < 2) vetoEvent; double pBinsLJ[7] = {40.,60.,80.,120.,160.,250.,500.}; int iPBinLJ = -1; for (int k = 0 ; k < 7 ; k++) { if (leadjets[0].pT() > pBinsLJ[k]*GeV) iPBinLJ=k; else break; } bool c_ljpt = (iPBinLJ != -1); bool c_nljpt = leadjets[1].pT() > 20*GeV; bool c_dphi = fabs( deltaPhi(leadjets[0],leadjets[1]) ) > 2.1; bool isDijet = c_ljpt & c_nljpt & c_dphi; if (!isDijet) vetoEvent; _h_temp[6]->fill(leadjets[0].pT(), weight); if (flav[0]==5 && flav[1]==5) // BB dijet _h_temp[0]->fill(leadjets[0].pT(), weight100); if ((flav[0]==5 && flav[1]==4) || (flav[0]==4 && flav[1]==5)) // BC dijet _h_temp[1]->fill(leadjets[0].pT(), weight100); if (flav[0]==4 && flav[1]==4) // CC dijet _h_temp[2]->fill(leadjets[0].pT(), weight100); if 
((flav[0]==5 && flav[1]==1) || (flav[0]==1 && flav[1]==5)) // B-light dijet _h_temp[3]->fill(leadjets[0].pT(), weight100); if ((flav[0]==4 && flav[1]==1) || (flav[0]==1 && flav[1]==4)) // C-light dijet _h_temp[4]->fill(leadjets[0].pT(), weight100); if (flav[0]==1 && flav[1]==1) // light-light dijet _h_temp[5]->fill(leadjets[0].pT(), weight100); } void finalize() { divide(_h_temp[0], _h_temp[6], _h_results[0]); divide(_h_temp[1], _h_temp[6], _h_results[1]); divide(_h_temp[2], _h_temp[6], _h_results[2]); divide(_h_temp[3], _h_temp[6], _h_results[3]); divide(_h_temp[4], _h_temp[6], _h_results[4]); divide(_h_temp[5], _h_temp[6], _h_results[5]); } private: Histo1DPtr _h_temp[7]; Scatter2DPtr _h_results[6]; }; DECLARE_RIVET_PLUGIN(ATLAS_2012_I1188891); } diff --git a/analyses/pluginATLAS/ATLAS_2012_I1204447.cc b/analyses/pluginATLAS/ATLAS_2012_I1204447.cc --- a/analyses/pluginATLAS/ATLAS_2012_I1204447.cc +++ b/analyses/pluginATLAS/ATLAS_2012_I1204447.cc @@ -1,1062 +1,1060 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/VisibleFinalState.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" #include "Rivet/Projections/UnstableFinalState.hh" #include "Rivet/Projections/FastJets.hh" namespace Rivet { class ATLAS_2012_I1204447 : public Analysis { public: /// Constructor ATLAS_2012_I1204447() : Analysis("ATLAS_2012_I1204447") { } /// Book histograms and initialise projections before the run void init() { // To calculate the acceptance without having the fiducial lepton efficiencies included, this part can be turned off _use_fiducial_lepton_efficiency = true; // Random numbers for simulation of ATLAS detector reconstruction efficiency srand(160385); // Read in all signal regions _signal_regions = getSignalRegions(); // Set number of events per signal region to 0 for (size_t i = 0; i < _signal_regions.size(); i++) _eventCountsPerSR[_signal_regions[i]] = 0.0; // Final state including all charged and neutral particles const FinalState fs(-5.0, 5.0, 1*GeV); declare(fs, "FS"); // Final state including all charged particles declare(ChargedFinalState(Cuts::abseta < 2.5 && Cuts::pT > 1*GeV), "CFS"); // Final state including all visible particles (to calculate MET, Jets etc.) 
declare(VisibleFinalState(Cuts::abseta < 5.0), "VFS"); // Final state including all AntiKt 04 Jets VetoedFinalState vfs; vfs.addVetoPairId(PID::MUON); declare(FastJets(vfs, FastJets::ANTIKT, 0.4), "AntiKtJets04"); // Final state including all unstable particles (including taus) declare(UnstableFinalState(Cuts::abseta < 5.0 && Cuts::pT > 5*GeV), "UFS"); // Final state including all electrons IdentifiedFinalState elecs(Cuts::abseta < 2.47 && Cuts::pT > 10*GeV); elecs.acceptIdPair(PID::ELECTRON); declare(elecs, "elecs"); // Final state including all muons IdentifiedFinalState muons(Cuts::abseta < 2.5 && Cuts::pT > 10*GeV); muons.acceptIdPair(PID::MUON); declare(muons, "muons"); // Book histograms _h_HTlep_all = bookHisto1D("HTlep_all" , 30, 0, 1500); _h_HTjets_all = bookHisto1D("HTjets_all", 30, 0, 1500); _h_MET_all = bookHisto1D("MET_all" , 20, 0, 1000); _h_Meff_all = bookHisto1D("Meff_all" , 30, 0, 3000); _h_e_n = bookHisto1D("e_n" , 10, -0.5, 9.5); _h_mu_n = bookHisto1D("mu_n" , 10, -0.5, 9.5); _h_tau_n = bookHisto1D("tau_n", 10, -0.5, 9.5); _h_pt_1_3l = bookHisto1D("pt_1_3l", 100, 0, 2000); _h_pt_2_3l = bookHisto1D("pt_2_3l", 100, 0, 2000); _h_pt_3_3l = bookHisto1D("pt_3_3l", 100, 0, 2000); _h_pt_1_2ltau = bookHisto1D("pt_1_2ltau", 100, 0, 2000); _h_pt_2_2ltau = bookHisto1D("pt_2_2ltau", 100, 0, 2000); _h_pt_3_2ltau = bookHisto1D("pt_3_2ltau", 100, 0, 2000); _h_excluded = bookHisto1D("excluded", 2, -0.5, 1.5); } /// Perform the per-event analysis void analyze(const Event& event) { // Muons Particles muon_candidates; const Particles charged_tracks = apply(event, "CFS").particles(); const Particles visible_particles = apply(event, "VFS").particles(); foreach (const Particle& mu, apply(event, "muons").particlesByPt()) { // Calculate pTCone30 variable (pT of all tracks within dR<0.3 - pT of muon itself) double pTinCone = -mu.pT(); foreach (const Particle& track, charged_tracks) { if (deltaR(mu.momentum(), track.momentum()) < 0.3) pTinCone += track.pT(); } // Calculate eTCone30 variable (pT of all visible particles within dR<0.3) double eTinCone = 0.; foreach (const Particle& visible_particle, visible_particles) { if (visible_particle.abspid() != PID::MUON && inRange(deltaR(mu.momentum(), visible_particle.momentum()), 0.1, 0.3)) eTinCone += visible_particle.pT(); } // Apply reconstruction efficiency and simulate reco int muon_id = 13; if ( mu.hasAncestor(15) || mu.hasAncestor(-15)) muon_id = 14; const double eff = (_use_fiducial_lepton_efficiency) ? 
apply_reco_eff(muon_id, mu) : 1.0; const bool keep_muon = rand()/static_cast(RAND_MAX) <= eff; // Keep muon if pTCone30/pT < 0.15 and eTCone30/pT < 0.2 and reconstructed if (keep_muon && pTinCone/mu.pT() <= 0.15 && eTinCone/mu.pT() < 0.2) muon_candidates.push_back(mu); } // Electrons Particles electron_candidates; foreach (const Particle& e, apply(event, "elecs").particlesByPt()) { // Neglect electrons in crack regions if (inRange(e.abseta(), 1.37, 1.52)) continue; // Calculate pTCone30 variable (pT of all tracks within dR<0.3 - pT of electron itself) double pTinCone = -e.pT(); foreach (const Particle& track, charged_tracks) { if (deltaR(e.momentum(), track.momentum()) < 0.3) pTinCone += track.pT(); } // Calculate eTCone30 variable (pT of all visible particles (except muons) within dR<0.3) double eTinCone = 0.; foreach (const Particle& visible_particle, visible_particles) { if (visible_particle.abspid() != PID::MUON && inRange(deltaR(e.momentum(), visible_particle.momentum()), 0.1, 0.3)) eTinCone += visible_particle.pT(); } // Apply reconstruction efficiency and simulate reco int elec_id = 11; if (e.hasAncestor(15) || e.hasAncestor(-15)) elec_id = 12; const double eff = (_use_fiducial_lepton_efficiency) ? apply_reco_eff(elec_id, e) : 1.0; const bool keep_elec = rand()/static_cast(RAND_MAX) <= eff; // Keep electron if pTCone30/pT < 0.13 and eTCone30/pT < 0.2 and reconstructed if (keep_elec && pTinCone/e.pT() <= 0.13 && eTinCone/e.pT() < 0.2) electron_candidates.push_back(e); } // Taus /// @todo This could benefit from a tau finder projection Particles tau_candidates; foreach (const Particle& tau, apply(event, "UFS").particlesByPt()) { // Only pick taus out of all unstable particles if (tau.abspid() != PID::TAU) continue; // Check that tau has decayed into daughter particles /// @todo Huh? Unstable taus with no decay vtx? Can use Particle.isStable()? But why in this situation? if (tau.genParticle()->end_vertex() == 0) continue; // Calculate visible tau pT from pT of tau neutrino in tau decay for pT and |eta| cuts FourMomentum daughter_tau_neutrino_momentum = get_tau_neutrino_mom(tau); Particle tau_vis = tau; tau_vis.setMomentum(tau.momentum()-daughter_tau_neutrino_momentum); // keep only taus in certain eta region and above 15 GeV of visible tau pT if ( tau_vis.pT() <= 15.0*GeV || tau_vis.abseta() > 2.5) continue; // Get prong number (number of tracks) in tau decay and check if tau decays leptonically unsigned int nprong = 0; bool lep_decaying_tau = false; get_prong_number(tau.genParticle(), nprong, lep_decaying_tau); // Apply reconstruction efficiency int tau_id = 15; if (nprong == 1) tau_id = 15; else if (nprong == 3) tau_id = 16; // Get fiducial lepton efficiency simulate reco efficiency const double eff = (_use_fiducial_lepton_efficiency) ? 
apply_reco_eff(tau_id, tau_vis) : 1.0; const bool keep_tau = rand()/static_cast(RAND_MAX) <= eff; // Keep tau if nprong = 1, it decays hadronically, and it's reconstructed by the detector if ( !lep_decaying_tau && nprong == 1 && keep_tau) tau_candidates.push_back(tau_vis); } // Jets (all anti-kt R=0.4 jets with pT > 25 GeV and eta < 4.9) Jets jet_candidates; foreach (const Jet& jet, apply(event, "AntiKtJets04").jetsByPt(25*GeV)) { if (jet.abseta() < 4.9) jet_candidates.push_back(jet); } // ETmiss Particles vfs_particles = apply(event, "VFS").particles(); FourMomentum pTmiss; foreach (const Particle& p, vfs_particles) pTmiss -= p.momentum(); double eTmiss = pTmiss.pT()/GeV; //------------------ // Overlap removal // electron - electron Particles electron_candidates_2; for (size_t ie = 0; ie < electron_candidates.size(); ++ie) { const Particle & e = electron_candidates[ie]; bool away = true; // If electron pair within dR < 0.1: remove electron with lower pT for (size_t ie2=0; ie2 < electron_candidates_2.size(); ++ie2) { if ( deltaR( e.momentum(), electron_candidates_2[ie2].momentum()) < 0.1 ) { away = false; break; } } // If isolated keep it if ( away ) electron_candidates_2.push_back( e ); } // jet - electron Jets recon_jets; foreach (const Jet& jet, jet_candidates) { bool away = true; // if jet within dR < 0.2 of electron: remove jet foreach (const Particle& e, electron_candidates_2) { if (deltaR(e.momentum(), jet.momentum()) < 0.2) { away = false; break; } } // jet - tau if (away) { // If jet within dR < 0.2 of tau: remove jet foreach (const Particle& tau, tau_candidates) { if (deltaR(tau.momentum(), jet.momentum()) < 0.2) { away = false; break; } } } // If isolated keep it if ( away ) recon_jets.push_back( jet ); } // electron - jet Particles recon_leptons, recon_e; for (size_t ie = 0; ie < electron_candidates_2.size(); ++ie) { const Particle& e = electron_candidates_2[ie]; // If electron within 0.2 < dR < 0.4 from any jets: remove electron bool away = true; foreach (const Jet& jet, recon_jets) { if (deltaR(e.momentum(), jet.momentum()) < 0.4) { away = false; break; } } // electron - muon // if electron within dR < 0.1 of a muon: remove electron if (away) { foreach (const Particle& mu, muon_candidates) { if (deltaR(mu.momentum(), e.momentum()) < 0.1) { away = false; break; } } } // If isolated keep it if (away) { recon_e += e; recon_leptons += e; } } // tau - electron Particles recon_tau; foreach ( const Particle& tau, tau_candidates ) { bool away = true; // If tau within dR < 0.2 of an electron: remove tau foreach ( const Particle& e, recon_e ) { if (deltaR( tau.momentum(), e.momentum()) < 0.2) { away = false; break; } } // tau - muon // If tau within dR < 0.2 of a muon: remove tau if (away) { foreach (const Particle& mu, muon_candidates) { if (deltaR(tau.momentum(), mu.momentum()) < 0.2) { away = false; break; } } } // If isolated keep it if (away) recon_tau.push_back( tau ); } // Muon - jet isolation Particles recon_mu, trigger_mu; // If muon within dR < 0.4 of a jet, remove muon foreach (const Particle& mu, muon_candidates) { bool away = true; foreach (const Jet& jet, recon_jets) { if ( deltaR( mu.momentum(), jet.momentum()) < 0.4 ) { away = false; break; } } if (away) { recon_mu.push_back( mu ); recon_leptons.push_back( mu ); if (mu.abseta() < 2.4) trigger_mu.push_back( mu ); } } // End overlap removal //------------------ // Jet cleaning if (rand()/static_cast(RAND_MAX) <= 0.42) { foreach (const Jet& jet, recon_jets) { const double eta = jet.rapidity(); const double phi = 
jet.azimuthalAngle(MINUSPI_PLUSPI); if (jet.pT() > 25*GeV && inRange(eta, -0.1, 1.5) && inRange(phi, -0.9, -0.5)) vetoEvent; } } // Post-isolation event cuts // Require at least 3 charged tracks in event if (charged_tracks.size() < 3) vetoEvent; // And at least one e/mu passing trigger if (!( !recon_e .empty() && recon_e[0] .pT() > 25*GeV) && !( !trigger_mu.empty() && trigger_mu[0].pT() > 25*GeV) ) { MSG_DEBUG("Hardest lepton fails trigger"); vetoEvent; } // And only accept events with at least 2 electrons and muons and at least 3 leptons in total if (recon_mu.size() + recon_e.size() + recon_tau.size() < 3 || recon_leptons.size() < 2) vetoEvent; // Now it's worth getting the event weight const double weight = event.weight(); // Sort leptons by decreasing pT sortByPt(recon_leptons); sortByPt(recon_tau); // Calculate HTlep, fill lepton pT histograms & store chosen combination of 3 leptons double HTlep = 0.; Particles chosen_leptons; if ( recon_leptons.size() > 2 ) { _h_pt_1_3l->fill(recon_leptons[0].perp()/GeV, weight); _h_pt_2_3l->fill(recon_leptons[1].perp()/GeV, weight); _h_pt_3_3l->fill(recon_leptons[2].perp()/GeV, weight); HTlep = (recon_leptons[0].pT() + recon_leptons[1].pT() + recon_leptons[2].pT())/GeV; chosen_leptons.push_back( recon_leptons[0] ); chosen_leptons.push_back( recon_leptons[1] ); chosen_leptons.push_back( recon_leptons[2] ); } else { _h_pt_1_2ltau->fill(recon_leptons[0].perp()/GeV, weight); _h_pt_2_2ltau->fill(recon_leptons[1].perp()/GeV, weight); _h_pt_3_2ltau->fill(recon_tau[0].perp()/GeV, weight); HTlep = (recon_leptons[0].pT() + recon_leptons[1].pT() + recon_tau[0].pT())/GeV ; chosen_leptons.push_back( recon_leptons[0] ); chosen_leptons.push_back( recon_leptons[1] ); chosen_leptons.push_back( recon_tau[0] ); } // Number of prompt e/mu and had taus _h_e_n ->fill(recon_e.size() , weight); _h_mu_n ->fill(recon_mu.size() , weight); _h_tau_n->fill(recon_tau.size(), weight); // Calculate HTjets double HTjets = 0.; foreach ( const Jet & jet, recon_jets ) HTjets += jet.perp()/GeV; // Calculate meff double meff = eTmiss + HTjets; Particles all_leptons; foreach ( const Particle & e , recon_e ) { meff += e.perp()/GeV; all_leptons.push_back( e ); } foreach ( const Particle & mu, recon_mu ) { meff += mu.perp()/GeV; all_leptons.push_back( mu ); } foreach ( const Particle & tau, recon_tau ) { meff += tau.perp()/GeV; all_leptons.push_back( tau ); } // Fill histogram of kinematic variables _h_HTlep_all ->fill(HTlep , weight); _h_HTjets_all->fill(HTjets, weight); _h_MET_all ->fill(eTmiss, weight); _h_Meff_all ->fill(meff , weight); // Determine signal region (3l/2ltau, onZ/offZ) string basic_signal_region; if ( recon_mu.size() + recon_e.size() > 2 ) basic_signal_region += "3l_"; else if ( (recon_mu.size() + recon_e.size() == 2) && (recon_tau.size() > 0)) basic_signal_region += "2ltau_"; // Is there an OSSF pair or a three lepton combination with an invariant mass close to the Z mass int onZ = isonZ(chosen_leptons); if (onZ == 1) basic_signal_region += "onZ"; else if (onZ == 0) basic_signal_region += "offZ"; // Check in which signal regions this event falls and adjust event counters fillEventCountsPerSR(basic_signal_region, onZ, HTlep, eTmiss, HTjets, meff, weight); } /// Normalise histograms etc., after the run void finalize() { // Normalize to an integrated luminosity of 1 fb-1 double norm = crossSection()/femtobarn/sumOfWeights(); string best_signal_region = ""; double ratio_best_SR = 0.; // Loop over all signal regions and find signal region with best sensitivity (ratio signal 
events/visible cross-section) for (size_t i = 0; i < _signal_regions.size(); i++) { double signal_events = _eventCountsPerSR[_signal_regions[i]] * norm; // Use expected upper limits to find best signal region double UL95 = getUpperLimit(_signal_regions[i], false); double ratio = signal_events / UL95; if (ratio > ratio_best_SR) { best_signal_region = _signal_regions[i]; ratio_best_SR = ratio; } } double signal_events_best_SR = _eventCountsPerSR[best_signal_region] * norm; double exp_UL_best_SR = getUpperLimit(best_signal_region, false); double obs_UL_best_SR = getUpperLimit(best_signal_region, true); // Print out result cout << "----------------------------------------------------------------------------------------" << endl; cout << "Best signal region: " << best_signal_region << endl; cout << "Normalized number of signal events in this best signal region (per fb-1): " << signal_events_best_SR << endl; cout << "Efficiency*Acceptance: " << _eventCountsPerSR[best_signal_region]/sumOfWeights() << endl; cout << "Cross-section [fb]: " << crossSection()/femtobarn << endl; cout << "Expected visible cross-section (per fb-1): " << exp_UL_best_SR << endl; cout << "Ratio (signal events / expected visible cross-section): " << ratio_best_SR << endl; cout << "Observed visible cross-section (per fb-1): " << obs_UL_best_SR << endl; cout << "Ratio (signal events / observed visible cross-section): " << signal_events_best_SR/obs_UL_best_SR << endl; cout << "----------------------------------------------------------------------------------------" << endl; cout << "Using the EXPECTED limits (visible cross-section) of the analysis: " << endl; if (signal_events_best_SR > exp_UL_best_SR) { cout << "Since the number of signal events > the visible cross-section, this model/grid point is EXCLUDED with 95% CL." << endl; _h_excluded->fill(1); } else { cout << "Since the number of signal events < the visible cross-section, this model/grid point is NOT EXCLUDED." << endl; _h_excluded->fill(0); } cout << "----------------------------------------------------------------------------------------" << endl; cout << "Using the OBSERVED limits (visible cross-section) of the analysis: " << endl; if (signal_events_best_SR > obs_UL_best_SR) { cout << "Since the number of signal events > the visible cross-section, this model/grid point is EXCLUDED with 95% CL." << endl; _h_excluded->fill(1); } else { cout << "Since the number of signal events < the visible cross-section, this model/grid point is NOT EXCLUDED." 
<< endl;
        _h_excluded->fill(0);
      }
      cout << "----------------------------------------------------------------------------------------" << endl;

      // Normalize to cross section
      if (norm != 0) {
        scale(_h_HTlep_all, norm);
        scale(_h_HTjets_all, norm);
        scale(_h_MET_all, norm);
        scale(_h_Meff_all, norm);
        scale(_h_pt_1_3l, norm);
        scale(_h_pt_2_3l, norm);
        scale(_h_pt_3_3l, norm);
        scale(_h_pt_1_2ltau, norm);
        scale(_h_pt_2_2ltau, norm);
        scale(_h_pt_3_2ltau, norm);
        scale(_h_e_n, norm);
        scale(_h_mu_n, norm);
        scale(_h_tau_n, norm);
        scale(_h_excluded, signal_events_best_SR);
      }
    }


    /// Helper functions
    //@{

    /// Function giving a list of all signal regions
    vector<string> getSignalRegions() {
      // List of basic signal regions
      vector<string> basic_signal_regions;
      basic_signal_regions.push_back("3l_offZ");
      basic_signal_regions.push_back("3l_onZ");
      basic_signal_regions.push_back("2ltau_offZ");
      basic_signal_regions.push_back("2ltau_onZ");
      // List of kinematic variables
      vector<string> kinematic_variables;
      kinematic_variables.push_back("HTlep");
      kinematic_variables.push_back("METStrong");
      kinematic_variables.push_back("METWeak");
      kinematic_variables.push_back("Meff");
      kinematic_variables.push_back("MeffStrong");

      vector<string> signal_regions;
      // Loop over all kinematic variables and basic signal regions
      for (size_t i0 = 0; i0 < kinematic_variables.size(); i0++) {
        for (size_t i1 = 0; i1 < basic_signal_regions.size(); i1++) {
          // Is signal region onZ?
          int onZ = (basic_signal_regions[i1].find("onZ") != string::npos) ? 1 : 0;
          // Get cut values for this kinematic variable
          vector<double> cut_values = getCutsPerSignalRegion(kinematic_variables[i0], onZ);
          // Loop over all cut values and push the signal-region name into the vector.
          // The name is keyed by the cut value, so that it matches the keys used in
          // fillEventCountsPerSR() and getUpperLimit().
          for (size_t i2 = 0; i2 < cut_values.size(); i2++) {
            signal_regions.push_back( (kinematic_variables[i0] + "_" + basic_signal_regions[i1] + "_cut_" + toString(cut_values[i2])) );
          }
        }
      }
      return signal_regions;
    }


    /// Function giving all cut values per kinematic variable (taking onZ for MET into account)
    vector<double> getCutsPerSignalRegion(const string& signal_region, int onZ=0) {
      vector<double> cutValues;
      // Cut values for HTlep
      if (signal_region.compare("HTlep") == 0) {
        cutValues.push_back(0);
        cutValues.push_back(100);
        cutValues.push_back(150);
        cutValues.push_back(200);
        cutValues.push_back(300);
      }
      // Cut values for METStrong (HTjets > 100 GeV) and METWeak (HTjets < 100 GeV)
      else if (signal_region.compare("METStrong") == 0 || signal_region.compare("METWeak") == 0) {
        if (onZ == 0) cutValues.push_back(0);
        else if (onZ == 1) cutValues.push_back(20);
        cutValues.push_back(50);
        cutValues.push_back(75);
      }
      // Cut values for Meff and MeffStrong (MET > 75 GeV)
      else if (signal_region.compare("Meff") == 0 || signal_region.compare("MeffStrong") == 0) {
        cutValues.push_back(0);
        cutValues.push_back(150);
        cutValues.push_back(300);
        cutValues.push_back(500);
      }
      return cutValues;
    }


    /// Function filling the map _eventCountsPerSR by looping over all signal regions
    /// and checking whether the event falls into each of them
    void fillEventCountsPerSR(const string& basic_signal_region, int onZ, double HTlep, double eTmiss, double HTjets, double meff, double weight) {
      // Get cut values for HTlep, loop over them and add event if cut is passed
      vector<double> cut_values = getCutsPerSignalRegion("HTlep", onZ);
      for (size_t i = 0; i < cut_values.size(); i++) {
        if (HTlep > cut_values[i])
          _eventCountsPerSR[("HTlep_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight;
      }
      // Get cut values for METStrong, loop over them and add event if cut is passed
      cut_values = getCutsPerSignalRegion("METStrong", onZ);
      for (size_t i = 0; i <
cut_values.size(); i++) { if (eTmiss > cut_values[i] && HTjets > 100.) _eventCountsPerSR[("METStrong_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } // Get cut values for METWeak, loop over them and add event if cut is passed cut_values = getCutsPerSignalRegion("METWeak", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (eTmiss > cut_values[i] && HTjets <= 100.) _eventCountsPerSR[("METWeak_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } // Get cut values for Meff, loop over them and add event if cut is passed cut_values = getCutsPerSignalRegion("Meff", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (meff > cut_values[i]) _eventCountsPerSR[("Meff_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } // Get cut values for MeffStrong, loop over them and add event if cut is passed cut_values = getCutsPerSignalRegion("MeffStrong", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (meff > cut_values[i] && eTmiss > 75.) _eventCountsPerSR[("MeffStrong_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } } /// Function returning 4-vector of daughter-particle if it is a tau neutrino /// @todo Move to TauFinder and make less HepMC-ish FourMomentum get_tau_neutrino_mom(const Particle& p) { assert(p.abspid() == PID::TAU); ConstGenVertexPtr dv = p.genParticle()->end_vertex(); assert(dv != nullptr); - ///@todo particles_out_const_iterator is deprecated in HepMC3 - for (HepMC::GenVertex::particles_out_const_iterator pp = dv->particles_out_const_begin(); pp != dv->particles_out_const_end(); ++pp) { - if (abs((*pp)->pdg_id()) == PID::NU_TAU) return FourMomentum((*pp)->momentum()); + for(ConstGenParticlePtr pp: HepMCUtils::particles(dv, Relatives::CHILDREN)){ + if (abs(pp->pdg_id()) == PID::NU_TAU) return FourMomentum(pp->momentum()); } return FourMomentum(); } /// Function calculating the prong number of taus /// @todo Move to TauFinder and make less HepMC-ish void get_prong_number(ConstGenParticlePtr p, unsigned int& nprong, bool& lep_decaying_tau) { assert(p != nullptr); //const int tau_barcode = p->barcode(); ConstGenVertexPtr dv = p->end_vertex(); assert(dv != nullptr); - ///@todo particles_out_const_iterator is deprecated in HepMC3 - for (HepMC::GenVertex::particles_out_const_iterator pp = dv->particles_out_const_begin(); pp != dv->particles_out_const_end(); ++pp) { + for(ConstGenParticlePtr pp: HepMCUtils::particles(dv, Relatives::CHILDREN)){ // If they have status 1 and are charged they will produce a track and the prong number is +1 - if ((*pp)->status() == 1 ) { - const int id = (*pp)->pdg_id(); + if (pp->status() == 1 ) { + const int id = pp->pdg_id(); if (Rivet::PID::charge(id) != 0 ) ++nprong; // Check if tau decays leptonically // @todo Can a tau decay include a tau in its decay daughters?! if ((abs(id) == PID::ELECTRON || abs(id) == PID::MUON || abs(id) == PID::TAU) && abs(p->pdg_id()) == PID::TAU) lep_decaying_tau = true; } // If the status of the daughter particle is 2 it is unstable and the further decays are checked - else if ((*pp)->status() == 2 ) { - get_prong_number(*pp, nprong, lep_decaying_tau); + else if (pp->status() == 2 ) { + get_prong_number(pp, nprong, lep_decaying_tau); } } } /// Function giving fiducial lepton efficiency double apply_reco_eff(int flavor, const Particle& p) { float pt = p.pT()/GeV; float eta = p.eta(); double eff = 0.; //double err = 0.; if (flavor == 11) { // weight prompt electron -- now including data/MC ID SF in eff. 
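      // Each lepton block combines a pT-dependent parametrisation (p1 - p0/pT, or an erf turn-on for
      // muons) with an eta-binned data/MC scale factor normalised to its average rate; the hadronic-tau
      // blocks use binned pT (and eta) tables directly.
      // Worked example (not from the paper, just this parametrisation evaluated): a prompt electron
      // with pT = 50 GeV and |eta| < 0.1 gives (0.8977 - 7.34/50) * 0.675051 / 0.6867 ~ 0.74.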
//float rho = 0.820; float p0 = 7.34; float p1 = 0.8977; //float ep0= 0.5 ; float ep1= 0.0087; eff = p1 - p0/pt; //double err0 = ep0/pt; // d(eff)/dp0 //double err1 = ep1; // d(eff)/dp1 //err = sqrt(err0*err0 + err1*err1 - 2*rho*err0*err1); double avgrate = 0.6867; float wz_ele_eta[] = {0.588717,0.603674,0.666135,0.747493,0.762202,0.675051,0.751606,0.745569,0.665333,0.610432,0.592693,}; //float ewz_ele_eta[] ={0.00292902,0.002476,0.00241209,0.00182319,0.00194339,0.00299785,0.00197339,0.00182004,0.00241793,0.00245997,0.00290394,}; int ibin = 3; if (eta >= -2.5 && eta < -2.0) ibin = 0; if (eta >= -2.0 && eta < -1.5) ibin = 1; if (eta >= -1.5 && eta < -1.0) ibin = 2; if (eta >= -1.0 && eta < -0.5) ibin = 3; if (eta >= -0.5 && eta < -0.1) ibin = 4; if (eta >= -0.1 && eta < 0.1) ibin = 5; if (eta >= 0.1 && eta < 0.5) ibin = 6; if (eta >= 0.5 && eta < 1.0) ibin = 7; if (eta >= 1.0 && eta < 1.5) ibin = 8; if (eta >= 1.5 && eta < 2.0) ibin = 9; if (eta >= 2.0 && eta < 2.5) ibin = 10; double eff_eta = wz_ele_eta[ibin]; //double err_eta = ewz_ele_eta[ibin]; eff = (eff*eff_eta)/avgrate; } if (flavor == 12) { // weight electron from tau //float rho = 0.884; float p0 = 6.799; float p1 = 0.842; //float ep0= 0.664; float ep1= 0.016; eff = p1 - p0/pt; //double err0 = ep0/pt; // d(eff)/dp0 //double err1 = ep1; // d(eff)/dp1 //err = sqrt(err0*err0 + err1*err1 - 2*rho*err0*err1); double avgrate = 0.5319; float wz_elet_eta[] = {0.468945,0.465953,0.489545,0.58709,0.59669,0.515829,0.59284,0.575828,0.498181,0.463536,0.481738,}; //float ewz_elet_eta[] ={0.00933795,0.00780868,0.00792679,0.00642083,0.00692652,0.0101568,0.00698452,0.00643524,0.0080002,0.00776238,0.0094699,}; int ibin = 3; if (eta >= -2.5 && eta < -2.0) ibin = 0; if (eta >= -2.0 && eta < -1.5) ibin = 1; if (eta >= -1.5 && eta < -1.0) ibin = 2; if (eta >= -1.0 && eta < -0.5) ibin = 3; if (eta >= -0.5 && eta < -0.1) ibin = 4; if (eta >= -0.1 && eta < 0.1) ibin = 5; if (eta >= 0.1 && eta < 0.5) ibin = 6; if (eta >= 0.5 && eta < 1.0) ibin = 7; if (eta >= 1.0 && eta < 1.5) ibin = 8; if (eta >= 1.5 && eta < 2.0) ibin = 9; if (eta >= 2.0 && eta < 2.5) ibin = 10; double eff_eta = wz_elet_eta[ibin]; //double err_eta = ewz_elet_eta[ibin]; eff = (eff*eff_eta)/avgrate; } if (flavor == 13) {// weight prompt muon //if eta>0.1 float p0 = -18.21; float p1 = 14.83; float p2 = 0.9312; //float ep0= 5.06; float ep1= 1.9; float ep2=0.00069; if ( fabs(eta) < 0.1) { p0 = 7.459; p1 = 2.615; p2 = 0.5138; //ep0 = 10.4; ep1 = 4.934; ep2 = 0.0034; } double arg = ( pt-p0 )/( 2.*p1 ) ; eff = 0.5 * p2 * (1.+erf(arg)); //err = 0.1*eff; } if (flavor == 14) {// weight muon from tau if (fabs(eta) < 0.1) { float p0 = -1.756; float p1 = 12.38; float p2 = 0.4441; //float ep0= 10.39; float ep1= 7.9; float ep2=0.022; double arg = ( pt-p0 )/( 2.*p1 ) ; eff = 0.5 * p2 * (1.+erf(arg)); //err = 0.1*eff; } else { float p0 = 2.102; float p1 = 0.8293; //float ep0= 0.271; float ep1= 0.0083; eff = p1 - p0/pt; //double err0 = ep0/pt; // d(eff)/dp0 //double err1 = ep1; // d(eff)/dp1 //err = sqrt(err0*err0 + err1*err1 - 2*rho*err0*err1); } } if (flavor == 15) {// weight hadronic tau 1p float wz_tau1p[] = {0.0249278,0.146978,0.225049,0.229212,0.21519,0.206152,0.201559,0.197917,0.209249,0.228336,0.193548,}; //float ewz_tau1p[] ={0.00178577,0.00425252,0.00535052,0.00592126,0.00484684,0.00612941,0.00792099,0.0083006,0.0138307,0.015568,0.0501751,}; int ibin = 0; if (pt > 15) ibin = 1; if (pt > 20) ibin = 2; if (pt > 25) ibin = 3; if (pt > 30) ibin = 4; if (pt > 40) ibin = 5; if (pt > 50) ibin = 6; if (pt > 
60) ibin = 7; if (pt > 80) ibin = 8; if (pt > 100) ibin = 9; if (pt > 200) ibin = 10; eff = wz_tau1p[ibin]; //err = ewz_tau1p[ibin]; double avgrate = 0.1718; float wz_tau1p_eta[] = {0.162132,0.176393,0.139619,0.178813,0.185144,0.210027,0.203937,0.178688,0.137034,0.164216,0.163713,}; //float ewz_tau1p_eta[] ={0.00706705,0.00617989,0.00506798,0.00525172,0.00581865,0.00865675,0.00599245,0.00529877,0.00506368,0.00617025,0.00726219,}; ibin = 3; if (eta >= -2.5 && eta < -2.0) ibin = 0; if (eta >= -2.0 && eta < -1.5) ibin = 1; if (eta >= -1.5 && eta < -1.0) ibin = 2; if (eta >= -1.0 && eta < -0.5) ibin = 3; if (eta >= -0.5 && eta < -0.1) ibin = 4; if (eta >= -0.1 && eta < 0.1) ibin = 5; if (eta >= 0.1 && eta < 0.5) ibin = 6; if (eta >= 0.5 && eta < 1.0) ibin = 7; if (eta >= 1.0 && eta < 1.5) ibin = 8; if (eta >= 1.5 && eta < 2.0) ibin = 9; if (eta >= 2.0 && eta < 2.5) ibin = 10; double eff_eta = wz_tau1p_eta[ibin]; //double err_eta = ewz_tau1p_eta[ibin]; eff = (eff*eff_eta)/avgrate; } if (flavor == 16) { //weight hadronic tau 3p float wz_tau3p[] = {0.000587199,0.00247181,0.0013031,0.00280112,}; //float ewz_tau3p[] ={0.000415091,0.000617187,0.000582385,0.00197792,}; int ibin = 0; if (pt > 15) ibin = 1; if (pt > 20) ibin = 2; if (pt > 40) ibin = 3; if (pt > 80) ibin = 4; eff = wz_tau3p[ibin]; //err = ewz_tau3p[ibin]; } return eff; } /// Function giving observed upper limit (visible cross-section) double getUpperLimit(const string& signal_region, bool observed) { map upperLimitsObserved; upperLimitsObserved["HTlep_3l_offZ_cut_0"] = 11.; upperLimitsObserved["HTlep_3l_offZ_cut_100"] = 8.7; upperLimitsObserved["HTlep_3l_offZ_cut_150"] = 4.0; upperLimitsObserved["HTlep_3l_offZ_cut_200"] = 4.4; upperLimitsObserved["HTlep_3l_offZ_cut_300"] = 1.6; upperLimitsObserved["HTlep_2ltau_offZ_cut_0"] = 25.; upperLimitsObserved["HTlep_2ltau_offZ_cut_100"] = 14.; upperLimitsObserved["HTlep_2ltau_offZ_cut_150"] = 6.1; upperLimitsObserved["HTlep_2ltau_offZ_cut_200"] = 3.3; upperLimitsObserved["HTlep_2ltau_offZ_cut_300"] = 1.2; upperLimitsObserved["HTlep_3l_onZ_cut_0"] = 48.; upperLimitsObserved["HTlep_3l_onZ_cut_100"] = 38.; upperLimitsObserved["HTlep_3l_onZ_cut_150"] = 14.; upperLimitsObserved["HTlep_3l_onZ_cut_200"] = 7.2; upperLimitsObserved["HTlep_3l_onZ_cut_300"] = 4.5; upperLimitsObserved["HTlep_2ltau_onZ_cut_0"] = 85.; upperLimitsObserved["HTlep_2ltau_onZ_cut_100"] = 53.; upperLimitsObserved["HTlep_2ltau_onZ_cut_150"] = 11.0; upperLimitsObserved["HTlep_2ltau_onZ_cut_200"] = 5.2; upperLimitsObserved["HTlep_2ltau_onZ_cut_300"] = 3.0; upperLimitsObserved["METStrong_3l_offZ_cut_0"] = 2.6; upperLimitsObserved["METStrong_3l_offZ_cut_50"] = 2.1; upperLimitsObserved["METStrong_3l_offZ_cut_75"] = 2.1; upperLimitsObserved["METStrong_2ltau_offZ_cut_0"] = 4.2; upperLimitsObserved["METStrong_2ltau_offZ_cut_50"] = 3.1; upperLimitsObserved["METStrong_2ltau_offZ_cut_75"] = 2.6; upperLimitsObserved["METStrong_3l_onZ_cut_20"] = 11.0; upperLimitsObserved["METStrong_3l_onZ_cut_50"] = 6.4; upperLimitsObserved["METStrong_3l_onZ_cut_75"] = 5.1; upperLimitsObserved["METStrong_2ltau_onZ_cut_20"] = 5.9; upperLimitsObserved["METStrong_2ltau_onZ_cut_50"] = 3.4; upperLimitsObserved["METStrong_2ltau_onZ_cut_75"] = 1.2; upperLimitsObserved["METWeak_3l_offZ_cut_0"] = 11.; upperLimitsObserved["METWeak_3l_offZ_cut_50"] = 5.3; upperLimitsObserved["METWeak_3l_offZ_cut_75"] = 3.1; upperLimitsObserved["METWeak_2ltau_offZ_cut_0"] = 23.; upperLimitsObserved["METWeak_2ltau_offZ_cut_50"] = 4.3; upperLimitsObserved["METWeak_2ltau_offZ_cut_75"] = 3.1; 
upperLimitsObserved["METWeak_3l_onZ_cut_20"] = 41.; upperLimitsObserved["METWeak_3l_onZ_cut_50"] = 16.; upperLimitsObserved["METWeak_3l_onZ_cut_75"] = 8.0; upperLimitsObserved["METWeak_2ltau_onZ_cut_20"] = 80.; upperLimitsObserved["METWeak_2ltau_onZ_cut_50"] = 4.4; upperLimitsObserved["METWeak_2ltau_onZ_cut_75"] = 1.8; upperLimitsObserved["Meff_3l_offZ_cut_0"] = 11.; upperLimitsObserved["Meff_3l_offZ_cut_150"] = 8.1; upperLimitsObserved["Meff_3l_offZ_cut_300"] = 3.1; upperLimitsObserved["Meff_3l_offZ_cut_500"] = 2.1; upperLimitsObserved["Meff_2ltau_offZ_cut_0"] = 25.; upperLimitsObserved["Meff_2ltau_offZ_cut_150"] = 12.; upperLimitsObserved["Meff_2ltau_offZ_cut_300"] = 3.9; upperLimitsObserved["Meff_2ltau_offZ_cut_500"] = 2.2; upperLimitsObserved["Meff_3l_onZ_cut_0"] = 48.; upperLimitsObserved["Meff_3l_onZ_cut_150"] = 37.; upperLimitsObserved["Meff_3l_onZ_cut_300"] = 11.; upperLimitsObserved["Meff_3l_onZ_cut_500"] = 4.8; upperLimitsObserved["Meff_2ltau_onZ_cut_0"] = 85.; upperLimitsObserved["Meff_2ltau_onZ_cut_150"] = 28.; upperLimitsObserved["Meff_2ltau_onZ_cut_300"] = 5.9; upperLimitsObserved["Meff_2ltau_onZ_cut_500"] = 1.9; upperLimitsObserved["MeffStrong_3l_offZ_cut_0"] = 3.8; upperLimitsObserved["MeffStrong_3l_offZ_cut_150"] = 3.8; upperLimitsObserved["MeffStrong_3l_offZ_cut_300"] = 2.8; upperLimitsObserved["MeffStrong_3l_offZ_cut_500"] = 2.1; upperLimitsObserved["MeffStrong_2ltau_offZ_cut_0"] = 3.9; upperLimitsObserved["MeffStrong_2ltau_offZ_cut_150"] = 4.0; upperLimitsObserved["MeffStrong_2ltau_offZ_cut_300"] = 2.9; upperLimitsObserved["MeffStrong_2ltau_offZ_cut_500"] = 1.5; upperLimitsObserved["MeffStrong_3l_onZ_cut_0"] = 10.0; upperLimitsObserved["MeffStrong_3l_onZ_cut_150"] = 10.0; upperLimitsObserved["MeffStrong_3l_onZ_cut_300"] = 6.8; upperLimitsObserved["MeffStrong_3l_onZ_cut_500"] = 3.9; upperLimitsObserved["MeffStrong_2ltau_onZ_cut_0"] = 1.6; upperLimitsObserved["MeffStrong_2ltau_onZ_cut_150"] = 1.4; upperLimitsObserved["MeffStrong_2ltau_onZ_cut_300"] = 1.5; upperLimitsObserved["MeffStrong_2ltau_onZ_cut_500"] = 0.9; // Expected upper limits are also given but not used in this analysis map upperLimitsExpected; upperLimitsExpected["HTlep_3l_offZ_cut_0"] = 11.; upperLimitsExpected["HTlep_3l_offZ_cut_100"] = 8.5; upperLimitsExpected["HTlep_3l_offZ_cut_150"] = 4.6; upperLimitsExpected["HTlep_3l_offZ_cut_200"] = 3.6; upperLimitsExpected["HTlep_3l_offZ_cut_300"] = 1.9; upperLimitsExpected["HTlep_2ltau_offZ_cut_0"] = 23.; upperLimitsExpected["HTlep_2ltau_offZ_cut_100"] = 14.; upperLimitsExpected["HTlep_2ltau_offZ_cut_150"] = 6.4; upperLimitsExpected["HTlep_2ltau_offZ_cut_200"] = 3.6; upperLimitsExpected["HTlep_2ltau_offZ_cut_300"] = 1.5; upperLimitsExpected["HTlep_3l_onZ_cut_0"] = 33.; upperLimitsExpected["HTlep_3l_onZ_cut_100"] = 25.; upperLimitsExpected["HTlep_3l_onZ_cut_150"] = 12.; upperLimitsExpected["HTlep_3l_onZ_cut_200"] = 6.5; upperLimitsExpected["HTlep_3l_onZ_cut_300"] = 3.1; upperLimitsExpected["HTlep_2ltau_onZ_cut_0"] = 94.; upperLimitsExpected["HTlep_2ltau_onZ_cut_100"] = 61.; upperLimitsExpected["HTlep_2ltau_onZ_cut_150"] = 9.9; upperLimitsExpected["HTlep_2ltau_onZ_cut_200"] = 4.5; upperLimitsExpected["HTlep_2ltau_onZ_cut_300"] = 1.9; upperLimitsExpected["METStrong_3l_offZ_cut_0"] = 3.1; upperLimitsExpected["METStrong_3l_offZ_cut_50"] = 2.4; upperLimitsExpected["METStrong_3l_offZ_cut_75"] = 2.3; upperLimitsExpected["METStrong_2ltau_offZ_cut_0"] = 4.8; upperLimitsExpected["METStrong_2ltau_offZ_cut_50"] = 3.3; upperLimitsExpected["METStrong_2ltau_offZ_cut_75"] = 2.1; 
upperLimitsExpected["METStrong_3l_onZ_cut_20"] = 8.7; upperLimitsExpected["METStrong_3l_onZ_cut_50"] = 4.9; upperLimitsExpected["METStrong_3l_onZ_cut_75"] = 3.8; upperLimitsExpected["METStrong_2ltau_onZ_cut_20"] = 7.3; upperLimitsExpected["METStrong_2ltau_onZ_cut_50"] = 2.8; upperLimitsExpected["METStrong_2ltau_onZ_cut_75"] = 1.5; upperLimitsExpected["METWeak_3l_offZ_cut_0"] = 10.; upperLimitsExpected["METWeak_3l_offZ_cut_50"] = 4.7; upperLimitsExpected["METWeak_3l_offZ_cut_75"] = 3.0; upperLimitsExpected["METWeak_2ltau_offZ_cut_0"] = 21.; upperLimitsExpected["METWeak_2ltau_offZ_cut_50"] = 4.0; upperLimitsExpected["METWeak_2ltau_offZ_cut_75"] = 2.6; upperLimitsExpected["METWeak_3l_onZ_cut_20"] = 30.; upperLimitsExpected["METWeak_3l_onZ_cut_50"] = 10.; upperLimitsExpected["METWeak_3l_onZ_cut_75"] = 5.4; upperLimitsExpected["METWeak_2ltau_onZ_cut_20"] = 88.; upperLimitsExpected["METWeak_2ltau_onZ_cut_50"] = 5.5; upperLimitsExpected["METWeak_2ltau_onZ_cut_75"] = 2.2; upperLimitsExpected["Meff_3l_offZ_cut_0"] = 11.; upperLimitsExpected["Meff_3l_offZ_cut_150"] = 8.8; upperLimitsExpected["Meff_3l_offZ_cut_300"] = 3.7; upperLimitsExpected["Meff_3l_offZ_cut_500"] = 2.1; upperLimitsExpected["Meff_2ltau_offZ_cut_0"] = 23.; upperLimitsExpected["Meff_2ltau_offZ_cut_150"] = 13.; upperLimitsExpected["Meff_2ltau_offZ_cut_300"] = 4.9; upperLimitsExpected["Meff_2ltau_offZ_cut_500"] = 2.4; upperLimitsExpected["Meff_3l_onZ_cut_0"] = 33.; upperLimitsExpected["Meff_3l_onZ_cut_150"] = 25.; upperLimitsExpected["Meff_3l_onZ_cut_300"] = 9.; upperLimitsExpected["Meff_3l_onZ_cut_500"] = 3.9; upperLimitsExpected["Meff_2ltau_onZ_cut_0"] = 94.; upperLimitsExpected["Meff_2ltau_onZ_cut_150"] = 35.; upperLimitsExpected["Meff_2ltau_onZ_cut_300"] = 6.8; upperLimitsExpected["Meff_2ltau_onZ_cut_500"] = 2.5; upperLimitsExpected["MeffStrong_3l_offZ_cut_0"] = 3.9; upperLimitsExpected["MeffStrong_3l_offZ_cut_150"] = 3.9; upperLimitsExpected["MeffStrong_3l_offZ_cut_300"] = 3.0; upperLimitsExpected["MeffStrong_3l_offZ_cut_500"] = 2.0; upperLimitsExpected["MeffStrong_2ltau_offZ_cut_0"] = 3.8; upperLimitsExpected["MeffStrong_2ltau_offZ_cut_150"] = 3.9; upperLimitsExpected["MeffStrong_2ltau_offZ_cut_300"] = 3.1; upperLimitsExpected["MeffStrong_2ltau_offZ_cut_500"] = 1.6; upperLimitsExpected["MeffStrong_3l_onZ_cut_0"] = 6.9; upperLimitsExpected["MeffStrong_3l_onZ_cut_150"] = 7.1; upperLimitsExpected["MeffStrong_3l_onZ_cut_300"] = 4.9; upperLimitsExpected["MeffStrong_3l_onZ_cut_500"] = 3.0; upperLimitsExpected["MeffStrong_2ltau_onZ_cut_0"] = 2.4; upperLimitsExpected["MeffStrong_2ltau_onZ_cut_150"] = 2.5; upperLimitsExpected["MeffStrong_2ltau_onZ_cut_300"] = 2.0; upperLimitsExpected["MeffStrong_2ltau_onZ_cut_500"] = 1.1; if (observed) return upperLimitsObserved[signal_region]; else return upperLimitsExpected[signal_region]; } /// Function checking if there is an OSSF lepton pair or a combination of 3 leptons with an invariant mass close to the Z mass /// @todo Should the reference Z mass be 91.2? 
int isonZ (const Particles& particles) { int onZ = 0; double best_mass_2 = 999.; double best_mass_3 = 999.; // Loop over all 2 particle combinations to find invariant mass of OSSF pair closest to Z mass foreach ( const Particle& p1, particles ) { foreach ( const Particle& p2, particles ) { double mass_difference_2_old = fabs(91.0 - best_mass_2); double mass_difference_2_new = fabs(91.0 - (p1.momentum() + p2.momentum()).mass()/GeV); // If particle combination is OSSF pair calculate mass difference to Z mass if ( (p1.pid()*p2.pid() == -121 || p1.pid()*p2.pid() == -169) ) { // Get invariant mass closest to Z mass if (mass_difference_2_new < mass_difference_2_old) best_mass_2 = (p1.momentum() + p2.momentum()).mass()/GeV; // In case there is an OSSF pair take also 3rd lepton into account (e.g. from FSR and photon to electron conversion) foreach ( const Particle & p3 , particles ) { double mass_difference_3_old = fabs(91.0 - best_mass_3); double mass_difference_3_new = fabs(91.0 - (p1.momentum() + p2.momentum() + p3.momentum()).mass()/GeV); if (mass_difference_3_new < mass_difference_3_old) best_mass_3 = (p1.momentum() + p2.momentum() + p3.momentum()).mass()/GeV; } } } } // Pick the minimum invariant mass of the best OSSF pair combination and the best 3 lepton combination // If this mass is in a 20 GeV window around the Z mass, the event is classified as onZ double best_mass = min(best_mass_2, best_mass_3); if (fabs(91.0 - best_mass) < 20) onZ = 1; return onZ; } //@} private: /// Histograms //@{ Histo1DPtr _h_HTlep_all, _h_HTjets_all, _h_MET_all, _h_Meff_all; Histo1DPtr _h_pt_1_3l, _h_pt_2_3l, _h_pt_3_3l, _h_pt_1_2ltau, _h_pt_2_2ltau, _h_pt_3_2ltau; Histo1DPtr _h_e_n, _h_mu_n, _h_tau_n; Histo1DPtr _h_excluded; //@} /// Fiducial efficiencies to model the effects of the ATLAS detector bool _use_fiducial_lepton_efficiency; /// List of signal regions and event counts per signal region vector _signal_regions; map _eventCountsPerSR; }; DECLARE_RIVET_PLUGIN(ATLAS_2012_I1204447); } diff --git a/analyses/pluginATLAS/ATLAS_2014_I1282447.cc b/analyses/pluginATLAS/ATLAS_2014_I1282447.cc --- a/analyses/pluginATLAS/ATLAS_2014_I1282447.cc +++ b/analyses/pluginATLAS/ATLAS_2014_I1282447.cc @@ -1,603 +1,602 @@ // -*- C++ -*- // ATLAS W+c analysis ////////////////////////////////////////////////////////////////////////// /* Description of rivet analysis ATLAS_2014_I1282447 W+c production This rivet routine implements the ATLAS W+c analysis. Apart from those histograms, described and published on HEP Data, here are some helper histograms defined, these are: d02-x01-y01, d02-x01-y02 and d08-x01-y01 are ratios, the nominator ("_plus") and denominator ("_minus") histograms are also given, so that the ratios can be reconstructed if need be (e.g. when running on separate samples). d05 and d06 are ratios over inclusive W production. The routine has to be run on a sample for inclusive W production in order to make sure the denominator ("_winc") is correctly filled. 
The ratios can be constructed using the following sample code (divideWCharm.py):

    import yoda
    hists_wc   = yoda.read("Rivet_Wc.yoda")
    hists_winc = yoda.read("Rivet_Winc.yoda")

    ## Division histograms --> ONLY needed when the plus and minus charges were run separately
    ## (merge first: yodamerge Rivet_plus.yoda Rivet_minus.yoda > Rivet_Wc.yoda)
    d02y01_plus  = hists_wc["/ATLAS_2014_I1282447/d02-x01-y01_plus"]
    d02y01_minus = hists_wc["/ATLAS_2014_I1282447/d02-x01-y01_minus"]
    ratio_d02y01 = d02y01_plus.divide(d02y01_minus)
    ratio_d02y01.path = "/ATLAS_2014_I1282447/d02-x01-y01"
    d02y02_plus  = hists_wc["/ATLAS_2014_I1282447/d02-x01-y02_plus"]
    d02y02_minus = hists_wc["/ATLAS_2014_I1282447/d02-x01-y02_minus"]
    ratio_d02y02 = d02y02_plus.divide(d02y02_minus)
    ratio_d02y02.path = "/ATLAS_2014_I1282447/d02-x01-y02"
    d08y01_plus  = hists_wc["/ATLAS_2014_I1282447/d08-x01-y01_plus"]
    d08y01_minus = hists_wc["/ATLAS_2014_I1282447/d08-x01-y01_minus"]
    ratio_d08y01 = d08y01_plus.divide(d08y01_minus)
    ratio_d08y01.path = "/ATLAS_2014_I1282447/d08-x01-y01"

    # Inclusive cross section
    h_winc  = hists_winc["/ATLAS_2014_I1282447/d05-x01-y01"]
    h_d     = hists_wc["/ATLAS_2014_I1282447/d01-x01-y02"]
    h_dstar = hists_wc["/ATLAS_2014_I1282447/d01-x01-y03"]
    ratio_wd = h_d.divide(h_winc)
    ratio_wd.path = "/ATLAS_2014_I1282447/d05-x01-y02"
    ratio_wdstar = h_dstar.divide(h_winc)
    ratio_wdstar.path = "/ATLAS_2014_I1282447/d05-x01-y03"

    # pT differential
    h_winc_plus    = hists_winc["/ATLAS_2014_I1282447/d06-x01-y01_winc"]
    h_winc_minus   = hists_winc["/ATLAS_2014_I1282447/d06-x01-y02_winc"]
    h_wd_plus      = hists_wc["/ATLAS_2014_I1282447/d06-x01-y01_wplus"]
    h_wd_minus     = hists_wc["/ATLAS_2014_I1282447/d06-x01-y02_wminus"]
    h_wdstar_plus  = hists_wc["/ATLAS_2014_I1282447/d06-x01-y03_wplus"]
    h_wdstar_minus = hists_wc["/ATLAS_2014_I1282447/d06-x01-y04_wminus"]
    ratio_wd_plus  = h_wd_plus.divide(h_winc_plus)
    ratio_wd_plus.path = "/ATLAS_2014_I1282447/d06-x01-y01"
    ratio_wd_minus = h_wd_minus.divide(h_winc_minus)
    ratio_wd_minus.path = "/ATLAS_2014_I1282447/d06-x01-y02"
    ratio_wdstar_plus = h_wdstar_plus.divide(h_winc_plus)
    ratio_wdstar_plus.path = "/ATLAS_2014_I1282447/d06-x01-y03"
    ratio_wdstar_minus = h_wdstar_minus.divide(h_winc_minus)
    ratio_wdstar_minus.path = "/ATLAS_2014_I1282447/d06-x01-y04"

    ## Copy other histograms for plotting
    d01x01y01 = hists_wc["/ATLAS_2014_I1282447/d01-x01-y01"]
    d01x01y01.path = "/ATLAS_2014_I1282447/d01-x01-y01"
    d01x01y02 = hists_wc["/ATLAS_2014_I1282447/d01-x01-y02"]
    d01x01y02.path = "/ATLAS_2014_I1282447/d01-x01-y02"
    d01x01y03 = hists_wc["/ATLAS_2014_I1282447/d01-x01-y03"]
    d01x01y03.path = "/ATLAS_2014_I1282447/d01-x01-y03"
    d03x01y01 = hists_wc["/ATLAS_2014_I1282447/d03-x01-y01"]
    d03x01y01.path = "/ATLAS_2014_I1282447/d03-x01-y01"
    d03x01y02 = hists_wc["/ATLAS_2014_I1282447/d03-x01-y02"]
    d03x01y02.path = "/ATLAS_2014_I1282447/d03-x01-y02"
    d04x01y01 = hists_wc["/ATLAS_2014_I1282447/d04-x01-y01"]
    d04x01y01.path = "/ATLAS_2014_I1282447/d04-x01-y01"
    d04x01y02 = hists_wc["/ATLAS_2014_I1282447/d04-x01-y02"]
    d04x01y02.path = "/ATLAS_2014_I1282447/d04-x01-y02"
    d04x01y03 = hists_wc["/ATLAS_2014_I1282447/d04-x01-y03"]
    d04x01y03.path = "/ATLAS_2014_I1282447/d04-x01-y03"
    d04x01y04 = hists_wc["/ATLAS_2014_I1282447/d04-x01-y04"]
    d04x01y04.path =
"/ATLAS_2014_I1282447/d04-x01-y04" d07x01y01= hists_wc["/ATLAS_2014_I1282447/d07-x01-y01"] d07x01y01.path = "/ATLAS_2014_I1282447/d07-x01-y01" yoda.write([ratio_d02y01,ratio_d02y02,ratio_d08y01, ratio_wd ,ratio_wdstar,ratio_wd_plus,ratio_wd_minus ,ratio_wdstar_plus,ratio_wdstar_minus,d01x01y01,d01x01y02,d01x01y03,d03x01y01,d03x01y02,d04x01y01,d04x01y02,d04x01y03,d04x01y04,d07x01y01],"validation.yoda") */ ////////////////////////////////////////////////////////////////////////// #include "Rivet/Analysis.hh" #include "Rivet/Projections/UnstableFinalState.hh" #include "Rivet/Projections/WFinder.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/VetoedFinalState.hh" namespace Rivet { class ATLAS_2014_I1282447 : public Analysis { public: /// Constructor ATLAS_2014_I1282447() : Analysis("ATLAS_2014_I1282447") { setNeedsCrossSection(true); } /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { /// @todo Initialise and register projections here UnstableFinalState fs; Cut cuts = Cuts::etaIn(-2.5, 2.5) & (Cuts::pT > 20*GeV); /// should use sample WITHOUT QED radiation off the electron WFinder wfinder_born_el(fs, cuts, PID::ELECTRON, 25*GeV, 8000*GeV, 15*GeV, 0.1, WFinder::CLUSTERALL, WFinder::TRACK); declare(wfinder_born_el, "WFinder_born_el"); WFinder wfinder_born_mu(fs, cuts, PID::MUON , 25*GeV, 8000*GeV, 15*GeV, 0.1, WFinder::CLUSTERALL, WFinder::TRACK); declare(wfinder_born_mu, "WFinder_born_mu"); // all hadrons that could be coming from a charm decay -- // -- for safety, use region -3.5 - 3.5 declare(UnstableFinalState(Cuts::abseta <3.5), "hadrons"); // Input for the jets: no neutrinos, no muons, and no electron which passed the electron cuts // also: NO electron, muon or tau (needed due to ATLAS jet truth reconstruction feature) VetoedFinalState veto; veto.addVetoOnThisFinalState(wfinder_born_el); veto.addVetoOnThisFinalState(wfinder_born_mu); veto.addVetoPairId(PID::ELECTRON); veto.addVetoPairId(PID::MUON); veto.addVetoPairId(PID::TAU); FastJets jets(veto, FastJets::ANTIKT, 0.4); declare(jets, "jets"); // Book histograms // charge separated integrated cross sections _hist_wcjet_charge = bookHisto1D("d01-x01-y01"); _hist_wd_charge = bookHisto1D("d01-x01-y02"); _hist_wdstar_charge = bookHisto1D("d01-x01-y03"); // charge integrated total cross sections _hist_wcjet_ratio = bookScatter2D("d02-x01-y01"); _hist_wd_ratio = bookScatter2D("d02-x01-y02"); _hist_wcjet_minus = bookHisto1D("d02-x01-y01_minus"); _hist_wd_minus = bookHisto1D("d02-x01-y02_minus"); _hist_wcjet_plus = bookHisto1D("d02-x01-y01_plus"); _hist_wd_plus = bookHisto1D("d02-x01-y02_plus"); // eta distributions _hist_wplus_wcjet_eta_lep = bookHisto1D("d03-x01-y01"); _hist_wminus_wcjet_eta_lep = bookHisto1D("d03-x01-y02"); _hist_wplus_wdminus_eta_lep = bookHisto1D("d04-x01-y01"); _hist_wminus_wdplus_eta_lep = bookHisto1D("d04-x01-y02"); _hist_wplus_wdstar_eta_lep = bookHisto1D("d04-x01-y03"); _hist_wminus_wdstar_eta_lep = bookHisto1D("d04-x01-y04"); // ratio of cross section (WD over W inclusive) // postprocess! 
_hist_w_inc = bookHisto1D("d05-x01-y01"); _hist_wd_winc_ratio = bookScatter2D("d05-x01-y02"); _hist_wdstar_winc_ratio = bookScatter2D("d05-x01-y03"); // ratio of cross section (WD over W inclusive -- function of pT of D meson) _hist_wplusd_wplusinc_pt_ratio = bookScatter2D("d06-x01-y01"); _hist_wminusd_wminusinc_pt_ratio = bookScatter2D("d06-x01-y02"); _hist_wplusdstar_wplusinc_pt_ratio = bookScatter2D("d06-x01-y03"); _hist_wminusdstar_wminusinc_pt_ratio = bookScatter2D("d06-x01-y04"); // could use for postprocessing! _hist_wplusd_wplusinc_pt = bookHisto1D("d06-x01-y01_wplus"); _hist_wminusd_wminusinc_pt = bookHisto1D("d06-x01-y02_wminus"); _hist_wplusdstar_wplusinc_pt = bookHisto1D("d06-x01-y03_wplus"); _hist_wminusdstar_wminusinc_pt = bookHisto1D("d06-x01-y04_wminus"); _hist_wplus_winc = bookHisto1D("d06-x01-y01_winc"); _hist_wminus_winc = bookHisto1D("d06-x01-y02_winc"); // jet multiplicity of charge integrated W+cjet cross section (+0 or +1 jet in addition to the charm jet) _hist_wcjet_jets = bookHisto1D("d07-x01-y01"); // jet multiplicity of W+cjet cross section ratio (+0 or +1 jet in addition to the charm jet) _hist_wcjet_jets_ratio = bookScatter2D("d08-x01-y01"); _hist_wcjet_jets_plus = bookHisto1D("d08-x01-y01_plus"); _hist_wcjet_jets_minus = bookHisto1D("d08-x01-y01_minus"); } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); double charge_weight = 0; // account for OS/SS events int lepton_charge = 0; double lepton_eta = 0.; /// Find leptons const WFinder& wfinder_born_el = apply(event, "WFinder_born_el"); const WFinder& wfinder_born_mu = apply(event, "WFinder_born_mu"); if (wfinder_born_el.empty() && wfinder_born_mu.empty()) { MSG_DEBUG("No W bosons found"); vetoEvent; } bool keepevent = false; //check electrons if (!wfinder_born_el.empty()) { const FourMomentum nu = wfinder_born_el.constituentNeutrinos()[0]; if (wfinder_born_el.mT() > 40*GeV && nu.pT() > 25*GeV) { keepevent = true; lepton_charge = wfinder_born_el.constituentLeptons()[0].charge(); lepton_eta = fabs(wfinder_born_el.constituentLeptons()[0].pseudorapidity()); } } //check muons if (!wfinder_born_mu.empty()) { const FourMomentum nu = wfinder_born_mu.constituentNeutrinos()[0]; if (wfinder_born_mu.mT() > 40*GeV && nu.pT() > 25*GeV) { keepevent = true; lepton_charge = wfinder_born_mu.constituentLeptons()[0].charge(); lepton_eta = fabs(wfinder_born_mu.constituentLeptons()[0].pseudorapidity()); } } if (!keepevent) { MSG_DEBUG("Event does not pass mT and MET cuts"); vetoEvent; } if (lepton_charge > 0) { _hist_wplus_winc->fill(10., weight); _hist_wplus_winc->fill(16., weight); _hist_wplus_winc->fill(30., weight); _hist_wplus_winc->fill(60., weight); _hist_w_inc->fill(+1, weight); } else if (lepton_charge < 0) { _hist_wminus_winc->fill(10., weight); _hist_wminus_winc->fill(16., weight); _hist_wminus_winc->fill(30., weight); _hist_wminus_winc->fill(60., weight); _hist_w_inc->fill(-1, weight); } // Find hadrons in the event const UnstableFinalState& fs = apply(event, "hadrons"); /// FIND Different channels // 1: wcjet // get jets const Jets& jets = apply(event, "jets").jetsByPt(Cuts::pT>25.0*GeV && Cuts::abseta<2.5); // loop over jets to select jets used to match to charm Jets js; int matched_charmHadron = 0; double charm_charge = 0.; int njets = 0; int nj = 0; bool mat_jet = false; double ptcharm = 0; if (matched_charmHadron > -1) { for (const Jet& j : jets) { mat_jet = false; njets += 1; for (const Particle& p : fs.particles()) { /// @todo Avoid touching HepMC! 
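            // Charm-jet matching: keep only weakly-decaying charm hadrons (no charmed children),
            // not coming from a b decay and with pT > 5 GeV, and match them to the jet within
            // dR < 0.3. The highest-pT matched charm hadron defines charm_charge; opposite-sign
            // lepton-charm combinations then enter with weight +1, same-sign with -1 (OS-SS subtraction).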
ConstGenParticlePtr part = p.genParticle(); if (p.hasCharm()) { //if(isFromBDecay(p)) continue; if (p.fromBottom()) continue; if (p.pT() < 5*GeV ) continue; if (hasCharmedChildren(part)) continue; if (deltaR(p, j) < 0.3) { mat_jet = true; if (p.pT() > ptcharm) { charm_charge = part->pdg_id(); ptcharm = p.pT(); } } } } if (mat_jet) nj++; } if (charm_charge * lepton_charge > 0) charge_weight = -1; else charge_weight = +1; if (nj == 1) { if (lepton_charge > 0) { _hist_wcjet_charge ->fill( 1, weight*charge_weight); _hist_wcjet_plus ->fill( 0, weight*charge_weight); _hist_wplus_wcjet_eta_lep ->fill(lepton_eta, weight*charge_weight); _hist_wcjet_jets_plus ->fill(njets-1 , weight*charge_weight); } else if (lepton_charge < 0) { _hist_wcjet_charge ->fill( -1, weight*charge_weight); _hist_wcjet_minus ->fill( 0, weight*charge_weight); _hist_wminus_wcjet_eta_lep->fill(lepton_eta, weight*charge_weight); _hist_wcjet_jets_minus ->fill(njets-1 , weight*charge_weight); } _hist_wcjet_jets->fill(njets-1, weight*charge_weight); } } // // 1/2: w+d(*) meson for (const Particle& p : fs.particles()) { /// @todo Avoid touching HepMC! ConstGenParticlePtr part = p.genParticle(); if (p.pT() < 8*GeV) continue; if (fabs(p.eta()) > 2.2) continue; // W+D if (abs(part->pdg_id()) == 411) { if (lepton_charge * part->pdg_id() > 0) charge_weight = -1; else charge_weight = +1; // fill histos if (lepton_charge > 0) { _hist_wd_charge ->fill( 1, weight*charge_weight); _hist_wd_plus ->fill( 0, weight*charge_weight); _hist_wplus_wdminus_eta_lep->fill(lepton_eta, weight*charge_weight); _hist_wplusd_wplusinc_pt ->fill( p.pT(), weight*charge_weight); } else if (lepton_charge < 0) { _hist_wd_charge ->fill( -1, weight*charge_weight); _hist_wd_minus ->fill( 0, weight*charge_weight); _hist_wminus_wdplus_eta_lep->fill(lepton_eta, weight*charge_weight); _hist_wminusd_wminusinc_pt ->fill(p.pT() , weight*charge_weight); } } // W+Dstar if ( abs(part->pdg_id()) == 413 ) { if (lepton_charge*part->pdg_id() > 0) charge_weight = -1; else charge_weight = +1; if (lepton_charge > 0) { _hist_wdstar_charge->fill(+1, weight*charge_weight); _hist_wd_plus->fill( 0, weight*charge_weight); _hist_wplus_wdstar_eta_lep->fill( lepton_eta, weight*charge_weight); _hist_wplusdstar_wplusinc_pt->fill( p.pT(), weight*charge_weight); } else if (lepton_charge < 0) { _hist_wdstar_charge->fill(-1, weight*charge_weight); _hist_wd_minus->fill(0, weight*charge_weight); _hist_wminus_wdstar_eta_lep->fill(lepton_eta, weight*charge_weight); _hist_wminusdstar_wminusinc_pt->fill(p.pT(), weight*charge_weight); } } } } /// Normalise histograms etc., after the run void finalize() { const double sf = crossSection() / sumOfWeights(); // norm to cross section // d01 scale(_hist_wcjet_charge, sf); scale(_hist_wd_charge, sf); scale(_hist_wdstar_charge, sf); //d02 scale(_hist_wcjet_plus, sf); scale(_hist_wcjet_minus, sf); scale(_hist_wd_plus, sf); scale(_hist_wd_minus, sf); divide(_hist_wcjet_plus, _hist_wcjet_minus, _hist_wcjet_ratio); divide(_hist_wd_plus, _hist_wd_minus, _hist_wd_ratio ); //d03 scale(_hist_wplus_wcjet_eta_lep, sf); scale(_hist_wminus_wcjet_eta_lep, sf); //d04 scale(_hist_wplus_wdminus_eta_lep, crossSection()/sumOfWeights()); scale(_hist_wminus_wdplus_eta_lep, crossSection()/sumOfWeights()); scale(_hist_wplus_wdstar_eta_lep , crossSection()/sumOfWeights()); scale(_hist_wminus_wdstar_eta_lep, crossSection()/sumOfWeights()); //d05 scale(_hist_w_inc, 0.01 * sf); // in percent --> /100 divide(_hist_wd_charge, _hist_w_inc, _hist_wd_winc_ratio ); divide(_hist_wdstar_charge, 
_hist_w_inc, _hist_wdstar_winc_ratio); //d06, in percentage! scale(_hist_wplusd_wplusinc_pt, sf); scale(_hist_wminusd_wminusinc_pt, sf); scale(_hist_wplusdstar_wplusinc_pt, sf); scale(_hist_wminusdstar_wminusinc_pt, sf); scale(_hist_wplus_winc, 0.01 * sf); // in percent --> /100 scale(_hist_wminus_winc, 0.01 * sf); // in percent --> /100 divide(_hist_wplusd_wplusinc_pt, _hist_wplus_winc , _hist_wplusd_wplusinc_pt_ratio ); divide(_hist_wminusd_wminusinc_pt, _hist_wminus_winc, _hist_wminusd_wminusinc_pt_ratio ); divide(_hist_wplusdstar_wplusinc_pt, _hist_wplus_winc , _hist_wplusdstar_wplusinc_pt_ratio ); divide(_hist_wminusdstar_wminusinc_pt, _hist_wminus_winc, _hist_wminusdstar_wminusinc_pt_ratio); //d07 scale(_hist_wcjet_jets, sf); //d08 scale(_hist_wcjet_jets_minus, sf); scale(_hist_wcjet_jets_plus, sf); divide(_hist_wcjet_jets_plus, _hist_wcjet_jets_minus , _hist_wcjet_jets_ratio); } //@} private: // Data members like post-cuts event weight counters go here // Check whether particle comes from b-decay bool isFromBDecay(const Particle& p) { /// @todo I think we can just replicated the original behaviour with this call /// Note slight difference to Rivet's native Particle::fromBottom method! return p.hasAncestorWith([](const Particle &p)->bool{return p.hasBottom();}); /* bool isfromB = false; if (p.genParticle() == nullptr) return false; ConstGenParticlePtr part = p.genParticle(); ConstGenVertexPtr ivtx = part->production_vertex(); while (ivtx) { if (ivtx->particles_in().size() < 1) { isfromB = false; break; } const HepMC::GenVertex::particles_in_const_iterator iPart_invtx = ivtx->particles_in_const_begin(); part = (*iPart_invtx); if (!part) { isfromB = false; break; } isfromB = PID::hasBottom(part->pdg_id()); if (isfromB == true) break; ivtx = part->production_vertex(); if ( part->pdg_id() == 2212 || !ivtx ) break; // reached beam } return isfromB; */ } // Check whether particle has charmed children /// @todo Use built-in method and avoid HepMC! 
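    // The decay chain is walked recursively: the children of each end vertex are obtained via
    // HepMCUtils::particles(ivtx, Relatives::CHILDREN), which hides the HepMC2/HepMC3 interface
    // difference, and the function recurses into each child until a charmed child is found or
    // the chain terminates.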
bool hasCharmedChildren(ConstGenParticlePtr part) { bool hasCharmedChild = false; if (part == nullptr) return false; ConstGenVertexPtr ivtx = part->end_vertex(); if (ivtx == nullptr) return false; // if (ivtx->particles_out_size() < 2) return false; - HepMC::GenVertex::particles_out_const_iterator iPart_invtx = ivtx->particles_out_const_begin(); - HepMC::GenVertex::particles_out_const_iterator end_invtx = ivtx->particles_out_const_end(); + //HepMC::GenVertex::particles_out_const_iterator iPart_invtx = ivtx->particles_out_const_begin(); + //HepMC::GenVertex::particles_out_const_iterator end_invtx = ivtx->particles_out_const_end(); - for ( ; iPart_invtx != end_invtx; iPart_invtx++ ) { - ConstGenParticlePtr p2 = (*iPart_invtx); + for(ConstGenParticlePtr p2: HepMCUtils::particles(ivtx, Relatives::CHILDREN)){ if (p2 == part) continue; hasCharmedChild = PID::hasCharm(p2->pdg_id()); if (hasCharmedChild == true) break; hasCharmedChild = hasCharmedChildren(p2); if (hasCharmedChild == true) break; } return hasCharmedChild; } private: /// @name Histograms //@{ //d01-x01- Histo1DPtr _hist_wcjet_charge; Histo1DPtr _hist_wd_charge; Histo1DPtr _hist_wdstar_charge; //d02-x01- Scatter2DPtr _hist_wcjet_ratio; Scatter2DPtr _hist_wd_ratio; Histo1DPtr _hist_wcjet_plus; Histo1DPtr _hist_wd_plus; Histo1DPtr _hist_wcjet_minus; Histo1DPtr _hist_wd_minus; //d03-x01- Histo1DPtr _hist_wplus_wcjet_eta_lep; Histo1DPtr _hist_wminus_wcjet_eta_lep; //d04-x01- Histo1DPtr _hist_wplus_wdminus_eta_lep; Histo1DPtr _hist_wminus_wdplus_eta_lep; //d05-x01- Histo1DPtr _hist_wplus_wdstar_eta_lep; Histo1DPtr _hist_wminus_wdstar_eta_lep; // postprocessing histos //d05-x01 Histo1DPtr _hist_w_inc; Scatter2DPtr _hist_wd_winc_ratio; Scatter2DPtr _hist_wdstar_winc_ratio; //d06-x01 Histo1DPtr _hist_wplus_winc; Histo1DPtr _hist_wminus_winc; Scatter2DPtr _hist_wplusd_wplusinc_pt_ratio; Scatter2DPtr _hist_wminusd_wminusinc_pt_ratio; Scatter2DPtr _hist_wplusdstar_wplusinc_pt_ratio; Scatter2DPtr _hist_wminusdstar_wminusinc_pt_ratio; Histo1DPtr _hist_wplusd_wplusinc_pt ; Histo1DPtr _hist_wminusd_wminusinc_pt; Histo1DPtr _hist_wplusdstar_wplusinc_pt; Histo1DPtr _hist_wminusdstar_wminusinc_pt; // d07-x01 Histo1DPtr _hist_wcjet_jets ; //d08-x01 Scatter2DPtr _hist_wcjet_jets_ratio ; Histo1DPtr _hist_wcjet_jets_plus ; Histo1DPtr _hist_wcjet_jets_minus; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ATLAS_2014_I1282447); } diff --git a/analyses/pluginATLAS/ATLAS_2014_I1306615.cc b/analyses/pluginATLAS/ATLAS_2014_I1306615.cc --- a/analyses/pluginATLAS/ATLAS_2014_I1306615.cc +++ b/analyses/pluginATLAS/ATLAS_2014_I1306615.cc @@ -1,487 +1,487 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/DressedLeptons.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/FastJets.hh" namespace Rivet { /// @brief ATLAS H->yy differential cross-sections measurement /// /// @author Michaela Queitsch-Maitland // // arXiv: http://arxiv.org/abs/ARXIV:1407.4222 // HepData: http://hepdata.cedar.ac.uk/view/ins1306615 class ATLAS_2014_I1306615 : public Analysis { public: // Constructor ATLAS_2014_I1306615() : Analysis("ATLAS_2014_I1306615") { } // Book histograms and initialise projections before the run void init() { // Final state // All particles within |eta| < 5.0 const FinalState FS(Cuts::abseta<5.0); declare(FS,"FS"); // Project photons with pT > 25 GeV and |eta| < 2.37 IdentifiedFinalState ph_FS(Cuts::abseta<2.37 && Cuts::pT>25.0*GeV); ph_FS.acceptIdPair(PID::PHOTON); declare(ph_FS, "PH_FS"); // Project 
photons for dressing IdentifiedFinalState ph_dressing_FS(FS); ph_dressing_FS.acceptIdPair(PID::PHOTON); // Project bare electrons IdentifiedFinalState el_bare_FS(FS); el_bare_FS.acceptIdPair(PID::ELECTRON); declare(el_bare_FS,"el_bare_FS"); // Project dressed electrons with pT > 15 GeV and |eta| < 2.47 DressedLeptons el_dressed_FS(ph_dressing_FS, el_bare_FS, 0.1, Cuts::abseta < 2.47 && Cuts::pT > 15*GeV); declare(el_dressed_FS,"EL_DRESSED_FS"); // Project bare muons IdentifiedFinalState mu_bare_FS(FS); mu_bare_FS.acceptIdPair(PID::MUON); // Project dressed muons with pT > 15 GeV and |eta| < 2.47 //DressedLeptons mu_dressed_FS(ph_dressing_FS, mu_bare_FS, 0.1, true, -2.47, 2.47, 15.0*GeV, false); DressedLeptons mu_dressed_FS(ph_dressing_FS, mu_bare_FS, 0.1, Cuts::abseta < 2.47 && Cuts::pT > 15*GeV); declare(mu_dressed_FS,"MU_DRESSED_FS"); // Final state excluding muons and neutrinos (for jet building and photon isolation) VetoedFinalState veto_mu_nu_FS(FS); veto_mu_nu_FS.vetoNeutrinos(); veto_mu_nu_FS.addVetoPairId(PID::MUON); declare(veto_mu_nu_FS, "VETO_MU_NU_FS"); // Build the anti-kT R=0.4 jets, using FinalState particles (vetoing muons and neutrinos) FastJets jets(veto_mu_nu_FS, FastJets::ANTIKT, 0.4); declare(jets, "JETS"); // Book histograms // 1D distributions _h_pT_yy = bookHisto1D(1,1,1); _h_y_yy = bookHisto1D(2,1,1); _h_Njets30 = bookHisto1D(3,1,1); _h_Njets50 = bookHisto1D(4,1,1); _h_pT_j1 = bookHisto1D(5,1,1); _h_y_j1 = bookHisto1D(6,1,1); _h_HT = bookHisto1D(7,1,1); _h_pT_j2 = bookHisto1D(8,1,1); _h_Dy_jj = bookHisto1D(9,1,1); _h_Dphi_yy_jj = bookHisto1D(10,1,1); _h_cosTS_CS = bookHisto1D(11,1,1); _h_cosTS_CS_5bin = bookHisto1D(12,1,1); _h_Dphi_jj = bookHisto1D(13,1,1); _h_pTt_yy = bookHisto1D(14,1,1); _h_Dy_yy = bookHisto1D(15,1,1); _h_tau_jet = bookHisto1D(16,1,1); _h_sum_tau_jet = bookHisto1D(17,1,1); _h_y_j2 = bookHisto1D(18,1,1); _h_pT_j3 = bookHisto1D(19,1,1); _h_m_jj = bookHisto1D(20,1,1); _h_pT_yy_jj = bookHisto1D(21,1,1); // 2D distributions of cosTS_CS x pT_yy _h_cosTS_pTyy_low = bookHisto1D(22,1,1); _h_cosTS_pTyy_high = bookHisto1D(22,1,2); _h_cosTS_pTyy_rest = bookHisto1D(22,1,3); // 2D distributions of Njets x pT_yy _h_pTyy_Njets0 = bookHisto1D(23,1,1); _h_pTyy_Njets1 = bookHisto1D(23,1,2); _h_pTyy_Njets2 = bookHisto1D(23,1,3); _h_pTj1_excl = bookHisto1D(24,1,1); // Fiducial regions _h_fidXSecs = bookHisto1D(29,1,1); } // Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); _weight = weight; // Get final state particles const ParticleVector& FS_ptcls = apply(event, "FS").particles(); const ParticleVector& ptcls_veto_mu_nu = apply(event, "VETO_MU_NU_FS").particles(); const ParticleVector& photons = apply(event, "PH_FS").particlesByPt(); const vector& el_dressed = apply(event, "EL_DRESSED_FS").dressedLeptons(); const vector& mu_dressed = apply(event, "MU_DRESSED_FS").dressedLeptons(); // For isolation calculation float dR_iso = 0.4; float ETcut_iso = 14.0; FourMomentum ET_iso; // Fiducial selection: pT > 25 GeV, |eta| < 2.37 and isolation (in cone deltaR = 0.4) is < 14 GeV vector fid_photons; foreach (const Particle& ph, photons) { // Veto photons from hadron or tau decay if ( fromHadronDecay(ph) ) continue; // Calculate isolation ET_iso = - ph.momentum(); // Loop over fs truth particles (excluding muons and neutrinos) foreach (const Particle& p, ptcls_veto_mu_nu) { // Check if the truth particle is in a cone of 0.4 if ( deltaR(ph.momentum(), p.momentum()) < dR_iso ) ET_iso += p.momentum(); } // Check isolation if ( 
ET_iso.Et() > ETcut_iso ) continue; // Fill vector of photons passing fiducial selection fid_photons.push_back(&ph); } if(fid_photons.size() < 2) vetoEvent; const FourMomentum& y1 = fid_photons[0]->momentum(); const FourMomentum& y2 = fid_photons[1]->momentum(); double m_yy = (y1 + y2).mass(); // Relative pT cuts if ( y1.pT() < 0.35 * m_yy || y2.pT() < 0.25 * m_yy ) vetoEvent; // Mass window cut if ( m_yy < 105 || m_yy > 160 ) vetoEvent; // -------------------------------------------- // // Passed diphoton baseline fiducial selection! // // -------------------------------------------- // // Electron selection vector good_el; foreach(const DressedLepton& els, el_dressed) { const Particle& el = els.constituentLepton(); if ( el.momentum().pT() < 15 ) continue; if ( fabs(el.momentum().eta()) > 2.47 ) continue; if ( deltaR(el.momentum(), y1) < 0.4 ) continue; if ( deltaR(el.momentum(), y2) < 0.4 ) continue; if ( fromHadronDecay(el) ) continue; // Veto electrons from hadron or tau decay good_el.push_back(&el); } // Muon selection vector good_mu; foreach(const DressedLepton& mus, mu_dressed) { const Particle& mu = mus.constituentLepton(); if ( mu.momentum().pT() < 15 ) continue; if ( fabs(mu.momentum().eta()) > 2.47 ) continue; if ( deltaR(mu.momentum(), y1) < 0.4 ) continue; if ( deltaR(mu.momentum(), y2) < 0.4 ) continue; if ( fromHadronDecay(mu) ) continue; // Veto muons from hadron or tau decay good_mu.push_back(&mu); } // Find prompt, invisible particles for missing ET calculation // Based on VisibleFinalState projection FourMomentum invisible(0,0,0,0); foreach (const Particle& p, FS_ptcls) { // Veto non-prompt particles (from hadron or tau decay) if ( fromHadronDecay(p) ) continue; // Charged particles are visible if ( PID::threeCharge( p.pid() ) != 0 ) continue; // Neutral hadrons are visible if ( PID::isHadron( p.pid() ) ) continue; // Photons are visible if ( p.pid() == PID::PHOTON ) continue; // Gluons are visible (for parton level analyses) if ( p.pid() == PID::GLUON ) continue; // Everything else is invisible invisible += p.momentum(); } double MET = invisible.Et(); // Jet selection // Get jets with pT > 25 GeV and |rapidity| < 4.4 //const Jets& jets = apply(event, "JETS").jetsByPt(25.0*GeV, MAXDOUBLE, -4.4, 4.4, RAPIDITY); const Jets& jets = apply(event, "JETS").jetsByPt(Cuts::pT>25.0*GeV && Cuts::absrap <4.4); vector jets_25; vector jets_30; vector jets_50; foreach (const Jet& jet, jets) { bool passOverlap = true; // Overlap with leading photons if ( deltaR(y1, jet.momentum()) < 0.4 ) passOverlap = false; if ( deltaR(y2, jet.momentum()) < 0.4 ) passOverlap = false; // Overlap with good electrons foreach (const Particle* el, good_el) if ( deltaR(el->momentum(), jet.momentum()) < 0.2 ) passOverlap = false; if ( ! 
passOverlap ) continue; if ( fabs(jet.momentum().eta()) < 2.4 || ( fabs(jet.momentum().eta()) > 2.4 && jet.momentum().pT() > 30 ) ) jets_25.push_back(&jet); if ( jet.momentum().pT() > 30 ) jets_30.push_back(&jet); if ( jet.momentum().pT() > 50 ) jets_50.push_back(&jet); } // Fiducial regions _h_fidXSecs->fill(1,_weight); if ( jets_30.size() >= 1 ) _h_fidXSecs->fill(2, _weight); if ( jets_30.size() >= 2 ) _h_fidXSecs->fill(3, _weight); if ( jets_30.size() >= 3 ) _h_fidXSecs->fill(4, _weight); if ( jets_30.size() >= 2 && passVBFCuts(y1 + y2, jets_30.at(0)->momentum(), jets_30.at(1)->momentum()) ) _h_fidXSecs->fill(5, _weight); if ( (good_el.size() + good_mu.size()) > 0 ) _h_fidXSecs->fill(6, _weight); if ( MET > 80 ) _h_fidXSecs->fill(7, _weight); // Fill histograms // Inclusive variables _pT_yy = (y1 + y2).pT(); _y_yy = fabs( (y1 + y2).rapidity() ); _cosTS_CS = cosTS_CS(y1, y2); _pTt_yy = pTt(y1, y2); _Dy_yy = fabs( deltaRap(y1, y2) ); _Njets30 = jets_30.size() > 3 ? 3 : jets_30.size(); _Njets50 = jets_50.size() > 3 ? 3 : jets_50.size(); _h_Njets30->fill(_Njets30, _weight); _h_Njets50->fill(_Njets50, _weight); _pT_j1 = jets_30.size() > 0 ? jets_30.at(0)->momentum().pT() : 0.; _pT_j2 = jets_30.size() > 1 ? jets_30.at(1)->momentum().pT() : 0.; _pT_j3 = jets_30.size() > 2 ? jets_30.at(2)->momentum().pT() : 0.; _HT = 0.0; foreach (const Jet* jet, jets_30) _HT += jet->momentum().pT(); _tau_jet = tau_jet_max(y1 + y2, jets_25); _sum_tau_jet = sum_tau_jet(y1 + y2, jets_25); _h_pT_yy ->fill(_pT_yy ,_weight); _h_y_yy ->fill(_y_yy ,_weight); _h_pT_j1 ->fill(_pT_j1 ,_weight); _h_cosTS_CS ->fill(_cosTS_CS ,_weight); _h_cosTS_CS_5bin->fill(_cosTS_CS ,_weight); _h_HT ->fill(_HT ,_weight); _h_pTt_yy ->fill(_pTt_yy ,_weight); _h_Dy_yy ->fill(_Dy_yy ,_weight); _h_tau_jet ->fill(_tau_jet ,_weight); _h_sum_tau_jet ->fill(_sum_tau_jet,_weight); // >=1 jet variables if ( jets_30.size() >= 1 ) { FourMomentum j1 = jets_30[0]->momentum(); _y_j1 = fabs( j1.rapidity() ); _h_pT_j2->fill(_pT_j2 ,_weight); _h_y_j1 ->fill(_y_j1 ,_weight); } // >=2 jet variables if ( jets_30.size() >= 2 ) { FourMomentum j1 = jets_30[0]->momentum(); FourMomentum j2 = jets_30[1]->momentum(); _Dy_jj = fabs( deltaRap(j1, j2) ); _Dphi_jj = fabs( deltaPhi(j1, j2) ); _Dphi_yy_jj = fabs( deltaPhi(y1 + y2, j1 + j2) ); _m_jj = (j1 + j2).mass(); _pT_yy_jj = (y1 + y2 + j1 + j2).pT(); _y_j2 = fabs( j2.rapidity() ); _h_Dy_jj ->fill(_Dy_jj ,_weight); _h_Dphi_jj ->fill(_Dphi_jj ,_weight); _h_Dphi_yy_jj ->fill(_Dphi_yy_jj,_weight); _h_m_jj ->fill(_m_jj ,_weight); _h_pT_yy_jj ->fill(_pT_yy_jj ,_weight); _h_pT_j3 ->fill(_pT_j3 ,_weight); _h_y_j2 ->fill(_y_j2 ,_weight); } // 2D distributions of cosTS_CS x pT_yy if ( _pT_yy < 80 ) _h_cosTS_pTyy_low->fill(_cosTS_CS, _weight); else if ( _pT_yy > 80 && _pT_yy < 200 ) _h_cosTS_pTyy_high->fill(_cosTS_CS,_weight); else if ( _pT_yy > 200 ) _h_cosTS_pTyy_rest->fill(_cosTS_CS,_weight); // 2D distributions of pT_yy x Njets if ( _Njets30 == 0 ) _h_pTyy_Njets0->fill(_pT_yy, _weight); else if ( _Njets30 == 1 ) _h_pTyy_Njets1->fill(_pT_yy, _weight); else if ( _Njets30 >= 2 ) _h_pTyy_Njets2->fill(_pT_yy, _weight); if ( _Njets30 == 1 ) _h_pTj1_excl->fill(_pT_j1, _weight); } // Normalise histograms after the run void finalize() { const double xs = crossSectionPerEvent()/femtobarn; scale(_h_pT_yy, xs); scale(_h_y_yy, xs); scale(_h_pT_j1, xs); scale(_h_y_j1, xs); scale(_h_HT, xs); scale(_h_pT_j2, xs); scale(_h_Dy_jj, xs); scale(_h_Dphi_yy_jj, xs); scale(_h_cosTS_CS, xs); scale(_h_cosTS_CS_5bin, xs); scale(_h_Dphi_jj, xs); 
scale(_h_pTt_yy, xs); scale(_h_Dy_yy, xs); scale(_h_tau_jet, xs); scale(_h_sum_tau_jet, xs); scale(_h_y_j2, xs); scale(_h_pT_j3, xs); scale(_h_m_jj, xs); scale(_h_pT_yy_jj, xs); scale(_h_cosTS_pTyy_low, xs); scale(_h_cosTS_pTyy_high, xs); scale(_h_cosTS_pTyy_rest, xs); scale(_h_pTyy_Njets0, xs); scale(_h_pTyy_Njets1, xs); scale(_h_pTyy_Njets2, xs); scale(_h_pTj1_excl, xs); scale(_h_Njets30, xs); scale(_h_Njets50, xs); scale(_h_fidXSecs, xs); } // Trace event record to see if particle came from a hadron (or a tau from a hadron decay) // Based on fromDecay() function bool fromHadronDecay(const Particle& p ) { if (p.genParticle() == nullptr) return false; ConstGenVertexPtr prodVtx = p.genParticle()->production_vertex(); if (prodVtx == nullptr) return false; - for(ConstGenParticlePtr ancestor: particles(prodVtx, Relatives::ANCESTORS)) { + for(ConstGenParticlePtr ancestor: HepMCUtils::particles(prodVtx, Relatives::ANCESTORS)) { const PdgId pid = ancestor->pdg_id(); if (ancestor->status() == 2 && PID::isHadron(pid)) return true; if (ancestor->status() == 2 && (abs(pid) == PID::TAU && fromHadronDecay(ancestor))) return true; } return false; } // VBF-enhanced dijet topology selection cuts bool passVBFCuts(const FourMomentum &H, const FourMomentum &j1, const FourMomentum &j2) { return ( fabs(deltaRap(j1, j2)) > 2.8 && (j1 + j2).mass() > 400 && fabs(deltaPhi(H, j1 + j2)) > 2.6 ); } // Cosine of the decay angle in the Collins-Soper frame double cosTS_CS(const FourMomentum &y1, const FourMomentum &y2) { return fabs( ( (y1.E() + y1.pz())* (y2.E() - y2.pz()) - (y1.E() - y1.pz()) * (y2.E() + y2.pz()) ) / ((y1 + y2).mass() * sqrt(pow((y1 + y2).mass(), 2) + pow((y1 + y2).pt(), 2)) ) ); } // Diphoton pT along thrust axis double pTt(const FourMomentum &y1, const FourMomentum &y2) { return fabs(y1.px() * y2.py() - y2.px() * y1.py()) / (y1 - y2).pT()*2; } // Tau of jet (see paper for description) // tau_jet = mT/(2*cosh(y*)), where mT = pT (+) m, and y* = rapidty in Higgs rest frame double tau_jet( const FourMomentum &H, const FourMomentum &jet ) { return sqrt( pow(jet.pT(),2) + pow(jet.mass(),2) ) / (2.0 * cosh( jet.rapidity() - H.rapidity() ) ); } // Maximal (leading) tau_jet (see paper for description) double tau_jet_max( const FourMomentum &H, const vector jets, double tau_jet_cut = 8. ) { double max_tj = 0; for (size_t i=0; i < jets.size(); ++i) { FourMomentum jet = jets[i]->momentum(); if ( tau_jet(H, jet) > tau_jet_cut ) max_tj = max( tau_jet(H, jet), max_tj ); } return max_tj; } // Scalar sum of tau for all jets (see paper for description) double sum_tau_jet( const FourMomentum &H, const vector jets, double tau_jet_cut = 8. 
) { double sum_tj = 0; for (size_t i=0; i < jets.size(); ++i) { FourMomentum jet = jets[i]->momentum(); if ( tau_jet(H, jet) > tau_jet_cut ) sum_tj += tau_jet(H, jet); } return sum_tj; } private: Histo1DPtr _h_pT_yy; Histo1DPtr _h_y_yy; Histo1DPtr _h_Njets30; Histo1DPtr _h_Njets50; Histo1DPtr _h_pT_j1; Histo1DPtr _h_y_j1; Histo1DPtr _h_HT; Histo1DPtr _h_pT_j2; Histo1DPtr _h_Dy_jj; Histo1DPtr _h_Dphi_yy_jj; Histo1DPtr _h_cosTS_CS; Histo1DPtr _h_cosTS_CS_5bin; Histo1DPtr _h_Dphi_jj; Histo1DPtr _h_pTt_yy; Histo1DPtr _h_Dy_yy; Histo1DPtr _h_tau_jet; Histo1DPtr _h_sum_tau_jet; Histo1DPtr _h_y_j2; Histo1DPtr _h_pT_j3; Histo1DPtr _h_m_jj; Histo1DPtr _h_pT_yy_jj; Histo1DPtr _h_cosTS_pTyy_low; Histo1DPtr _h_cosTS_pTyy_high; Histo1DPtr _h_cosTS_pTyy_rest; Histo1DPtr _h_pTyy_Njets0; Histo1DPtr _h_pTyy_Njets1; Histo1DPtr _h_pTyy_Njets2; Histo1DPtr _h_pTj1_excl; Histo1DPtr _h_fidXSecs; double _weight; int _Njets30; int _Njets50; double _pT_yy; double _y_yy; double _cosTS_CS; double _pT_j1; double _m_jj; double _y_j1; double _HT; double _pT_j2; double _y_j2; double _Dphi_yy_jj; double _pT_yy_jj; double _Dphi_jj; double _Dy_jj; double _pT_j3; double _pTt_yy; double _Dy_yy; double _tau_jet; double _sum_tau_jet; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ATLAS_2014_I1306615); } diff --git a/analyses/pluginATLAS/ATLAS_2014_I1327229.cc b/analyses/pluginATLAS/ATLAS_2014_I1327229.cc --- a/analyses/pluginATLAS/ATLAS_2014_I1327229.cc +++ b/analyses/pluginATLAS/ATLAS_2014_I1327229.cc @@ -1,1330 +1,1330 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/VisibleFinalState.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" #include "Rivet/Projections/UnstableFinalState.hh" #include "Rivet/Projections/FastJets.hh" namespace Rivet { class ATLAS_2014_I1327229 : public Analysis { public: /// Constructor ATLAS_2014_I1327229() : Analysis("ATLAS_2014_I1327229") { } /// Book histograms and initialise projections before the run void init() { // To calculate the acceptance without having the fiducial lepton efficiencies included, this part can be turned off _use_fiducial_lepton_efficiency = true; // Random numbers for simulation of ATLAS detector reconstruction efficiency /// @todo Replace with SmearedParticles etc. srand(160385); // Read in all signal regions _signal_regions = getSignalRegions(); // Set number of events per signal region to 0 for (size_t i = 0; i < _signal_regions.size(); i++) _eventCountsPerSR[_signal_regions[i]] = 0.0; // Final state including all charged and neutral particles const FinalState fs(-5.0, 5.0, 1*GeV); declare(fs, "FS"); // Final state including all charged particles declare(ChargedFinalState(-2.5, 2.5, 1*GeV), "CFS"); // Final state including all visible particles (to calculate MET, Jets etc.) 
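      // (The VFS particles are used below for the eTCone30 isolation sums and for the MET calculation.)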
declare(VisibleFinalState(-5.0,5.0),"VFS"); // Final state including all AntiKt 04 Jets VetoedFinalState vfs; vfs.addVetoPairId(PID::MUON); declare(FastJets(vfs, FastJets::ANTIKT, 0.4), "AntiKtJets04"); // Final state including all unstable particles (including taus) declare(UnstableFinalState(Cuts::abseta < 5.0 && Cuts::pT > 5*GeV),"UFS"); // Final state including all electrons IdentifiedFinalState elecs(Cuts::abseta < 2.47 && Cuts::pT > 10*GeV); elecs.acceptIdPair(PID::ELECTRON); declare(elecs, "elecs"); // Final state including all muons IdentifiedFinalState muons(Cuts::abseta < 2.5 && Cuts::pT > 10*GeV); muons.acceptIdPair(PID::MUON); declare(muons, "muons"); /// Book histograms: _h_HTlep_all = bookHisto1D("HTlep_all", 30,0,3000); _h_HTjets_all = bookHisto1D("HTjets_all", 30,0,3000); _h_MET_all = bookHisto1D("MET_all", 30,0,1500); _h_Meff_all = bookHisto1D("Meff_all", 50,0,5000); _h_min_pT_all = bookHisto1D("min_pT_all", 50, 0, 2000); _h_mT_all = bookHisto1D("mT_all", 50, 0, 2000); _h_e_n = bookHisto1D("e_n", 10, -0.5, 9.5); _h_mu_n = bookHisto1D("mu_n", 10, -0.5, 9.5); _h_tau_n = bookHisto1D("tau_n", 10, -0.5, 9.5); _h_pt_1_3l = bookHisto1D("pt_1_3l", 100, 0, 2000); _h_pt_2_3l = bookHisto1D("pt_2_3l", 100, 0, 2000); _h_pt_3_3l = bookHisto1D("pt_3_3l", 100, 0, 2000); _h_pt_1_2ltau = bookHisto1D("pt_1_2ltau", 100, 0, 2000); _h_pt_2_2ltau = bookHisto1D("pt_2_2ltau", 100, 0, 2000); _h_pt_3_2ltau = bookHisto1D("pt_3_2ltau", 100, 0, 2000); _h_excluded = bookHisto1D("excluded", 2, -0.5, 1.5); } /// Perform the per-event analysis void analyze(const Event& event) { // Muons Particles muon_candidates; const Particles charged_tracks = apply(event, "CFS").particles(); const Particles visible_particles = apply(event, "VFS").particles(); for (const Particle& mu : apply(event, "muons").particlesByPt() ) { // Calculate pTCone30 variable (pT of all tracks within dR<0.3 - pT of muon itself) double pTinCone = -mu.pT(); for (const Particle& track : charged_tracks ) { if (deltaR(mu.momentum(),track.momentum()) < 0.3 ) pTinCone += track.pT(); } // Calculate eTCone30 variable (pT of all visible particles within dR<0.3) double eTinCone = 0.; for (const Particle& visible_particle : visible_particles) { if (visible_particle.abspid() != PID::MUON && inRange(deltaR(mu.momentum(),visible_particle.momentum()), 0.1, 0.3)) eTinCone += visible_particle.pT(); } // Apply reconstruction efficiency and simulate reconstruction int muon_id = 13; if (mu.hasAncestor(PID::TAU) || mu.hasAncestor(-PID::TAU)) muon_id = 14; const double eff = (_use_fiducial_lepton_efficiency) ? 
apply_reco_eff(muon_id,mu) : 1.0; const bool keep_muon = rand()/static_cast(RAND_MAX)<=eff; // Keep muon if pTCone30/pT < 0.15 and eTCone30/pT < 0.2 and reconstructed if (keep_muon && pTinCone/mu.pT() <= 0.1 && eTinCone/mu.pT() < 0.1) muon_candidates.push_back(mu); } // Electrons Particles electron_candidates; for (const Particle& e : apply(event, "elecs").particlesByPt() ) { // Neglect electrons in crack regions if (inRange(e.abseta(), 1.37, 1.52)) continue; // Calculate pTCone30 variable (pT of all tracks within dR<0.3 - pT of electron itself) double pTinCone = -e.pT(); for (const Particle& track : charged_tracks) { if (deltaR(e.momentum(), track.momentum()) < 0.3 ) pTinCone += track.pT(); } // Calculate eTCone30 variable (pT of all visible particles (except muons) within dR<0.3) double eTinCone = 0.; for (const Particle& visible_particle : visible_particles) { if (visible_particle.abspid() != PID::MUON && inRange(deltaR(e.momentum(),visible_particle.momentum()), 0.1, 0.3)) eTinCone += visible_particle.pT(); } // Apply reconstruction efficiency and simulate reconstruction int elec_id = 11; if (e.hasAncestor(15) || e.hasAncestor(-15)) elec_id = 12; const double eff = (_use_fiducial_lepton_efficiency) ? apply_reco_eff(elec_id,e) : 1.0; const bool keep_elec = rand()/static_cast(RAND_MAX)<=eff; // Keep electron if pTCone30/pT < 0.13 and eTCone30/pT < 0.2 and reconstructed if (keep_elec && pTinCone/e.pT() <= 0.1 && eTinCone/e.pT() < 0.1) electron_candidates.push_back(e); } // Taus Particles tau_candidates; for (const Particle& tau : apply(event, "UFS").particles() ) { // Only pick taus out of all unstable particles if ( tau.abspid() != PID::TAU) continue; // Check that tau has decayed into daughter particles if (tau.genParticle()->end_vertex() == 0) continue; // Calculate visible tau momentum using the tau neutrino momentum in the tau decay FourMomentum daughter_tau_neutrino_momentum = get_tau_neutrino_momentum(tau); Particle tau_vis = tau; tau_vis.setMomentum(tau.momentum()-daughter_tau_neutrino_momentum); // keep only taus in certain eta region and above 15 GeV of visible tau pT if ( tau_vis.pT()/GeV <= 15.0 || tau_vis.abseta() > 2.5) continue; // Get prong number (number of tracks) in tau decay and check if tau decays leptonically unsigned int nprong = 0; bool lep_decaying_tau = false; get_prong_number(tau.genParticle(),nprong,lep_decaying_tau); // Apply reconstruction efficiency and simulate reconstruction int tau_id = 15; if (nprong == 1) tau_id = 15; else if (nprong == 3) tau_id = 16; const double eff = (_use_fiducial_lepton_efficiency) ? 
apply_reco_eff(tau_id,tau_vis) : 1.0; const bool keep_tau = rand()/static_cast(RAND_MAX)<=eff; // Keep tau if nprong = 1, it decays hadronically and it is reconstructed if ( !lep_decaying_tau && nprong == 1 && keep_tau) tau_candidates.push_back(tau_vis); } // Jets (all anti-kt R=0.4 jets with pT > 30 GeV and eta < 4.9 Jets jet_candidates; for (const Jet& jet : apply(event, "AntiKtJets04").jetsByPt(30.0*GeV) ) { if (jet.abseta() < 4.9 ) jet_candidates.push_back(jet); } // ETmiss Particles vfs_particles = apply(event, "VFS").particles(); FourMomentum pTmiss; for (const Particle& p : vfs_particles) pTmiss -= p.momentum(); double eTmiss = pTmiss.pT()/GeV; // ------------------------- // Overlap removal // electron - electron Particles electron_candidates_2; for(size_t ie = 0; ie < electron_candidates.size(); ++ie) { const Particle& e = electron_candidates[ie]; bool away = true; // If electron pair within dR < 0.1: remove electron with lower pT for(size_t ie2 = 0; ie2 < electron_candidates_2.size(); ++ie2) { if (deltaR(e.momentum(),electron_candidates_2[ie2].momentum()) < 0.1 ) { away = false; break; } } // If isolated keep it if ( away ) electron_candidates_2.push_back( e ); } // jet - electron Jets recon_jets; for (const Jet& jet : jet_candidates) { bool away = true; // If jet within dR < 0.2 of electron: remove jet for (const Particle& e : electron_candidates_2) { if (deltaR(e.momentum(), jet.momentum()) < 0.2 ) { away = false; break; } } // jet - tau if ( away ) { // If jet within dR < 0.2 of tau: remove jet for (const Particle& tau : tau_candidates) { if (deltaR(tau.momentum(), jet.momentum()) < 0.2 ) { away = false; break; } } } // If isolated keep it if ( away ) recon_jets.push_back( jet ); } // electron - jet Particles recon_leptons, recon_e; for (size_t ie = 0; ie < electron_candidates_2.size(); ++ie) { const Particle& e = electron_candidates_2[ie]; // If electron within 0.2 < dR < 0.4 from any jets: remove electron bool away = true; for (const Jet& jet : recon_jets) { if (deltaR(e.momentum(), jet.momentum()) < 0.4 ) { away = false; break; } } // electron - muon // If electron within dR < 0.1 of a muon: remove electron if (away) { for (const Particle& mu : muon_candidates) { if (deltaR(mu.momentum(),e.momentum()) < 0.1) { away = false; break; } } } // If isolated keep it if ( away ) { recon_e.push_back( e ); recon_leptons.push_back( e ); } } // tau - electron Particles recon_tau; for (const Particle& tau : tau_candidates) { bool away = true; // If tau within dR < 0.2 of an electron: remove tau for (const Particle & e : recon_e) { if (deltaR(tau.momentum(),e.momentum()) < 0.2 ) { away = false; break; } } // tau - muon // If tau within dR < 0.2 of a muon: remove tau if (away) { for (const Particle& mu : muon_candidates) { if (deltaR(tau.momentum(), mu.momentum()) < 0.2 ) { away = false; break; } } } // If isolated keep it if (away) recon_tau.push_back( tau ); } // muon - jet Particles recon_mu, trigger_mu; // If muon within dR < 0.4 of a jet: remove muon for (const Particle& mu : muon_candidates ) { bool away = true; for (const Jet& jet : recon_jets) { if (deltaR(mu.momentum(), jet.momentum()) < 0.4 ) { away = false; break; } } if (away) { recon_mu.push_back( mu ); recon_leptons.push_back( mu ); if (mu.abseta() < 2.4) trigger_mu.push_back( mu ); } } // End overlap removal // --------------------- // Jet cleaning if (rand()/static_cast(RAND_MAX) <= 0.42) { for (const Jet& jet : recon_jets ) { const double eta = jet.rapidity(); const double phi = jet.azimuthalAngle(MINUSPI_PLUSPI); 
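      // Illustrative aside (a sketch, not the analysis code): the overlap-removal steps
      // above all follow the same pattern -- drop an object if it lies within some dR of
      // any object in a reference collection. A hypothetical generic helper could look like
      // this, using only Rivet's deltaR() and the Particles container already used above.
      auto isolatedFrom = [](const FourMomentum& mom, const Particles& others, double dRmin) {
        for (const Particle& o : others) {
          if (deltaR(mom, o.momentum()) < dRmin) return false;  // too close: overlaps
        }
        return true;  // no overlap found
      };
      // e.g. keep a tau only if it is at least dR = 0.2 from every surviving electron:
      //   if (isolatedFrom(tau.momentum(), recon_e, 0.2)) recon_tau.push_back(tau);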
if(jet.pT() > 25*GeV && inRange(eta,-0.1,1.5) && inRange(phi,-0.9,-0.5)) vetoEvent; } } // Event selection // Require at least 3 charged tracks in event if (charged_tracks.size() < 3) vetoEvent; // And at least one e/mu passing trigger if( !( !recon_e.empty() && recon_e[0].pT()>26.*GeV) && !( !trigger_mu.empty() && trigger_mu[0].pT()>26.*GeV) ) { MSG_DEBUG("Hardest lepton fails trigger"); vetoEvent; } // And only accept events with at least 2 electrons and muons and at least 3 leptons in total if (recon_mu.size() + recon_e.size() + recon_tau.size() < 3 || recon_leptons.size() < 2) vetoEvent; // Getting the event weight const double weight = event.weight(); // Sort leptons by decreasing pT sortByPt(recon_leptons); sortByPt(recon_tau); // Calculate HTlep, fill lepton pT histograms & store chosen combination of 3 leptons double HTlep = 0.; Particles chosen_leptons; if (recon_leptons.size() > 2) { _h_pt_1_3l->fill(recon_leptons[0].pT()/GeV, weight); _h_pt_2_3l->fill(recon_leptons[1].pT()/GeV, weight); _h_pt_3_3l->fill(recon_leptons[2].pT()/GeV, weight); HTlep = (recon_leptons[0].pT() + recon_leptons[1].pT() + recon_leptons[2].pT())/GeV; chosen_leptons.push_back( recon_leptons[0] ); chosen_leptons.push_back( recon_leptons[1] ); chosen_leptons.push_back( recon_leptons[2] ); } else { _h_pt_1_2ltau->fill(recon_leptons[0].pT()/GeV, weight); _h_pt_2_2ltau->fill(recon_leptons[1].pT()/GeV, weight); _h_pt_3_2ltau->fill(recon_tau[0].pT()/GeV, weight); HTlep = recon_leptons[0].pT()/GeV + recon_leptons[1].pT()/GeV + recon_tau[0].pT()/GeV; chosen_leptons.push_back( recon_leptons[0] ); chosen_leptons.push_back( recon_leptons[1] ); chosen_leptons.push_back( recon_tau[0] ); } // Calculate mT and mTW variable Particles mT_leptons; Particles mTW_leptons; for (size_t i1 = 0; i1 < 3; i1 ++) { for (size_t i2 = i1+1; i2 < 3; i2 ++) { double OSSF_inv_mass = isOSSF_mass(chosen_leptons[i1],chosen_leptons[i2]); if (OSSF_inv_mass != 0.) { for (size_t i3 = 0; i3 < 3 ; i3 ++) { if (i3 != i2 && i3 != i1) { mT_leptons.push_back(chosen_leptons[i3]); if ( fabs(91.0 - OSSF_inv_mass) < 20. ) mTW_leptons.push_back(chosen_leptons[i3]); } } } else { mT_leptons.push_back(chosen_leptons[0]); mTW_leptons.push_back(chosen_leptons[0]); } } } sortByPt(mT_leptons); sortByPt(mTW_leptons); double mT = sqrt(2*pTmiss.pT()/GeV*mT_leptons[0].pT()/GeV*(1-cos(pTmiss.phi()-mT_leptons[0].phi()))); double mTW = sqrt(2*pTmiss.pT()/GeV*mTW_leptons[0].pT()/GeV*(1-cos(pTmiss.phi()-mTW_leptons[0].phi()))); // Calculate Min pT variable double min_pT = chosen_leptons[2].pT()/GeV; // Number of prompt e/mu and had taus _h_e_n->fill(recon_e.size(),weight); _h_mu_n->fill(recon_mu.size(),weight); _h_tau_n->fill(recon_tau.size(),weight); // Calculate HTjets variable double HTjets = 0.; for (const Jet& jet : recon_jets) HTjets += jet.pT()/GeV; // Calculate meff variable double meff = eTmiss + HTjets; Particles all_leptons; for (const Particle& e : recon_e ) { meff += e.pT()/GeV; all_leptons.push_back( e ); } for (const Particle& mu : recon_mu) { meff += mu.pT()/GeV; all_leptons.push_back( mu ); } for (const Particle& tau : recon_tau) { meff += tau.pT()/GeV; all_leptons.push_back( tau ); } // Fill histograms of kinematic variables _h_HTlep_all->fill(HTlep,weight); _h_HTjets_all->fill(HTjets,weight); _h_MET_all->fill(eTmiss,weight); _h_Meff_all->fill(meff,weight); _h_min_pT_all->fill(min_pT,weight); _h_mT_all->fill(mT,weight); // Determine signal region (3l / 2ltau , onZ / offZ OSSF / offZ no-OSSF) // 3l vs. 
2ltau string basic_signal_region; if (recon_mu.size() + recon_e.size() > 2) basic_signal_region += "3l_"; else if ( (recon_mu.size() + recon_e.size() == 2) && (recon_tau.size() > 0)) basic_signal_region += "2ltau_"; // Is there an OSSF pair or a three lepton combination with an invariant mass close to the Z mass int onZ = isonZ(chosen_leptons); if (onZ == 1) basic_signal_region += "onZ"; else if (onZ == 0) { bool OSSF = isOSSF(chosen_leptons); if (OSSF) basic_signal_region += "offZ_OSSF"; else basic_signal_region += "offZ_noOSSF"; } // Check in which signal regions this event falls and adjust event counters // INFO: The b-jet signal regions of the paper are not included in this Rivet implementation fillEventCountsPerSR(basic_signal_region,onZ,HTlep,eTmiss,HTjets,meff,min_pT,mTW,weight); } /// Normalise histograms etc., after the run void finalize() { // Normalize to an integrated luminosity of 1 fb-1 double norm = crossSection()/femtobarn/sumOfWeights(); string best_signal_region = ""; double ratio_best_SR = 0.; // Loop over all signal regions and find signal region with best sensitivity (ratio signal events/visible cross-section) for (size_t i = 0; i < _signal_regions.size(); i++) { double signal_events = _eventCountsPerSR[_signal_regions[i]] * norm; // Use expected upper limits to find best signal region: double UL95 = getUpperLimit(_signal_regions[i],false); double ratio = signal_events / UL95; if (ratio > ratio_best_SR) { best_signal_region = _signal_regions.at(i); ratio_best_SR = ratio; } } double signal_events_best_SR = _eventCountsPerSR[best_signal_region] * norm; double exp_UL_best_SR = getUpperLimit(best_signal_region, false); double obs_UL_best_SR = getUpperLimit(best_signal_region, true); // Print out result cout << "----------------------------------------------------------------------------------------" << endl; cout << "Number of total events: " << sumOfWeights() << endl; cout << "Best signal region: " << best_signal_region << endl; cout << "Normalized number of signal events in this best signal region (per fb-1): " << signal_events_best_SR << endl; cout << "Efficiency*Acceptance: " << _eventCountsPerSR[best_signal_region]/sumOfWeights() << endl; cout << "Cross-section [fb]: " << crossSection()/femtobarn << endl; cout << "Expected visible cross-section (per fb-1): " << exp_UL_best_SR << endl; cout << "Ratio (signal events / expected visible cross-section): " << ratio_best_SR << endl; cout << "Observed visible cross-section (per fb-1): " << obs_UL_best_SR << endl; cout << "Ratio (signal events / observed visible cross-section): " << signal_events_best_SR/obs_UL_best_SR << endl; cout << "----------------------------------------------------------------------------------------" << endl; cout << "Using the EXPECTED limits (visible cross-section) of the analysis: " << endl; if (signal_events_best_SR > exp_UL_best_SR) { cout << "Since the number of signal events > the visible cross-section, this model/grid point is EXCLUDED with 95% C.L." << endl; _h_excluded->fill(1); } else { cout << "Since the number of signal events < the visible cross-section, this model/grid point is NOT EXCLUDED." << endl; _h_excluded->fill(0); } cout << "----------------------------------------------------------------------------------------" << endl; cout << "Using the OBSERVED limits (visible cross-section) of the analysis: " << endl; if (signal_events_best_SR > obs_UL_best_SR) { cout << "Since the number of signal events > the visible cross-section, this model/grid point is EXCLUDED with 95% C.L." 
<< endl; _h_excluded->fill(1); } else { cout << "Since the number of signal events < the visible cross-section, this model/grid point is NOT EXCLUDED." << endl; _h_excluded->fill(0); } cout << "----------------------------------------------------------------------------------------" << endl; cout << "INFO: The b-jet signal regions of the paper are not included in this Rivet implementation." << endl; cout << "----------------------------------------------------------------------------------------" << endl; /// Normalize to cross section if (norm != 0) { scale(_h_HTlep_all, norm); scale(_h_HTjets_all, norm); scale(_h_MET_all, norm); scale(_h_Meff_all, norm); scale(_h_min_pT_all, norm); scale(_h_mT_all, norm); scale(_h_pt_1_3l, norm); scale(_h_pt_2_3l, norm); scale(_h_pt_3_3l, norm); scale(_h_pt_1_2ltau, norm); scale(_h_pt_2_2ltau, norm); scale(_h_pt_3_2ltau, norm); scale(_h_e_n, norm); scale(_h_mu_n, norm); scale(_h_tau_n, norm); scale(_h_excluded, norm); } } /// Helper functions //@{ /// Function giving a list of all signal regions vector getSignalRegions() { // List of basic signal regions vector basic_signal_regions; basic_signal_regions.push_back("3l_offZ_OSSF"); basic_signal_regions.push_back("3l_offZ_noOSSF"); basic_signal_regions.push_back("3l_onZ"); basic_signal_regions.push_back("2ltau_offZ_OSSF"); basic_signal_regions.push_back("2ltau_offZ_noOSSF"); basic_signal_regions.push_back("2ltau_onZ"); // List of kinematic variables vector kinematic_variables; kinematic_variables.push_back("HTlep"); kinematic_variables.push_back("METStrong"); kinematic_variables.push_back("METWeak"); kinematic_variables.push_back("Meff"); kinematic_variables.push_back("MeffStrong"); kinematic_variables.push_back("MeffMt"); kinematic_variables.push_back("MinPt"); vector signal_regions; // Loop over all kinematic variables and basic signal regions for (size_t i0 = 0; i0 < kinematic_variables.size(); i0++) { for (size_t i1 = 0; i1 < basic_signal_regions.size(); i1++) { // Is signal region onZ? int onZ = (basic_signal_regions[i1].find("onZ") != string::npos) ? 
1 : 0; // Get cut values for this kinematic variable vector cut_values = getCutsPerSignalRegion(kinematic_variables[i0], onZ); // Loop over all cut values for (size_t i2 = 0; i2 < cut_values.size(); i2++) { // Push signal region into vector signal_regions.push_back( kinematic_variables[i0] + "_" + basic_signal_regions[i1] + "_cut_" + toString(cut_values[i2]) ); } } } return signal_regions; } /// Function giving all cut values per kinematic variable vector getCutsPerSignalRegion(const string& signal_region, int onZ = 0) { vector cutValues; // Cut values for HTlep if (signal_region.compare("HTlep") == 0) { cutValues.push_back(0); cutValues.push_back(200); cutValues.push_back(500); cutValues.push_back(800); } // Cut values for MinPt else if (signal_region.compare("MinPt") == 0) { cutValues.push_back(0); cutValues.push_back(50); cutValues.push_back(100); cutValues.push_back(150); } // Cut values for METStrong (HTjets > 150 GeV) and METWeak (HTjets < 150 GeV) else if (signal_region.compare("METStrong") == 0 || signal_region.compare("METWeak") == 0) { cutValues.push_back(0); cutValues.push_back(100); cutValues.push_back(200); cutValues.push_back(300); } // Cut values for Meff if (signal_region.compare("Meff") == 0) { cutValues.push_back(0); cutValues.push_back(600); cutValues.push_back(1000); cutValues.push_back(1500); } // Cut values for MeffStrong (MET > 100 GeV) if ((signal_region.compare("MeffStrong") == 0 || signal_region.compare("MeffMt") == 0) && onZ ==1) { cutValues.push_back(0); cutValues.push_back(600); cutValues.push_back(1200); } return cutValues; } /// function fills map _eventCountsPerSR by looping over all signal regions /// and looking if the event falls into this signal region void fillEventCountsPerSR(const string& basic_signal_region, int onZ, double HTlep, double eTmiss, double HTjets, double meff, double min_pT, double mTW, double weight) { // Get cut values for HTlep, loop over them and add event if cut is passed vector cut_values = getCutsPerSignalRegion("HTlep", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (HTlep > cut_values[i]) _eventCountsPerSR[("HTlep_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } // Get cut values for MinPt, loop over them and add event if cut is passed cut_values = getCutsPerSignalRegion("MinPt", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (min_pT > cut_values[i]) _eventCountsPerSR[("MinPt_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } // Get cut values for METStrong, loop over them and add event if cut is passed cut_values = getCutsPerSignalRegion("METStrong", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (eTmiss > cut_values[i] && HTjets > 150.) _eventCountsPerSR[("METStrong_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } // Get cut values for METWeak, loop over them and add event if cut is passed cut_values = getCutsPerSignalRegion("METWeak", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (eTmiss > cut_values[i] && HTjets <= 150.) 
_eventCountsPerSR[("METWeak_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } // Get cut values for Meff, loop over them and add event if cut is passed cut_values = getCutsPerSignalRegion("Meff", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (meff > cut_values[i]) _eventCountsPerSR[("Meff_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } // Get cut values for MeffStrong, loop over them and add event if cut is passed cut_values = getCutsPerSignalRegion("MeffStrong", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (meff > cut_values[i] && eTmiss > 100.) _eventCountsPerSR[("MeffStrong_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } // Get cut values for MeffMt, loop over them and add event if cut is passed cut_values = getCutsPerSignalRegion("MeffMt", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (meff > cut_values[i] && mTW > 100. && onZ == 1) _eventCountsPerSR[("MeffMt_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } } /// Function returning 4-momentum of daughter-particle if it is a tau neutrino FourMomentum get_tau_neutrino_momentum(const Particle& p) { assert(p.abspid() == PID::TAU); ConstGenVertexPtr dv = p.genParticle()->end_vertex(); assert(dv != nullptr); // Loop over all daughter particles - for(ConstGenParticlePtr pp: dv->particles_out()){ + for(ConstGenParticlePtr pp: HepMCUtils::particles(dv, Relatives::CHILDREN)){ if (abs(pp->pdg_id()) == PID::NU_TAU) return FourMomentum(pp->momentum()); } return FourMomentum(); } /// Function calculating the prong number of taus void get_prong_number(ConstGenParticlePtr p, unsigned int& nprong, bool& lep_decaying_tau) { assert(p != nullptr); ConstGenVertexPtr dv = p->end_vertex(); assert(dv != nullptr); - for(ConstGenParticlePtr pp: dv->particles_out()){ + for(ConstGenParticlePtr pp: HepMCUtils::particles(dv, Relatives::CHILDREN)){ // If they have status 1 and are charged they will produce a track and the prong number is +1 if (pp->status() == 1 ) { const int id = pp->pdg_id(); if (Rivet::PID::charge(id) != 0 ) ++nprong; // Check if tau decays leptonically if (( abs(id) == PID::ELECTRON || abs(id) == PID::MUON || abs(id) == PID::TAU ) && abs(p->pdg_id()) == PID::TAU) lep_decaying_tau = true; } // If the status of the daughter particle is 2 it is unstable and the further decays are checked else if (pp->status() == 2 ) { get_prong_number(pp,nprong,lep_decaying_tau); } } } /// Function giving fiducial lepton efficiency double apply_reco_eff(int flavor, const Particle& p) { double pt = p.pT()/GeV; double eta = p.eta(); double eff = 0.; if (flavor == 11) { // weight prompt electron -- now including data/MC ID SF in eff. 
double avgrate = 0.685; const static double wz_ele[] = {0.0256,0.522,0.607,0.654,0.708,0.737,0.761,0.784,0.815,0.835,0.851,0.841,0.898}; // double ewz_ele[] = {0.000257,0.00492,0.00524,0.00519,0.00396,0.00449,0.00538,0.00513,0.00773,0.00753,0.0209,0.0964,0.259}; int ibin = 0; if(pt > 10 && pt < 15) ibin = 0; if(pt > 15 && pt < 20) ibin = 1; if(pt > 20 && pt < 25) ibin = 2; if(pt > 25 && pt < 30) ibin = 3; if(pt > 30 && pt < 40) ibin = 4; if(pt > 40 && pt < 50) ibin = 5; if(pt > 50 && pt < 60) ibin = 6; if(pt > 60 && pt < 80) ibin = 7; if(pt > 80 && pt < 100) ibin = 8; if(pt > 100 && pt < 200) ibin = 9; if(pt > 200 && pt < 400) ibin = 10; if(pt > 400 && pt < 600) ibin = 11; if(pt > 600) ibin = 12; double eff_pt = 0.; eff_pt = wz_ele[ibin]; eta = fabs(eta); const static double wz_ele_eta[] = {0.65,0.714,0.722,0.689,0.635,0.615}; // double ewz_ele_eta[] = {0.00642,0.00355,0.00335,0.004,0.00368,0.00422}; ibin = 0; if(eta > 0 && eta < 0.1) ibin = 0; if(eta > 0.1 && eta < 0.5) ibin = 1; if(eta > 0.5 && eta < 1.0) ibin = 2; if(eta > 1.0 && eta < 1.5) ibin = 3; if(eta > 1.5 && eta < 2.0) ibin = 4; if(eta > 2.0 && eta < 2.5) ibin = 5; double eff_eta = 0.; eff_eta = wz_ele_eta[ibin]; eff = (eff_pt * eff_eta) / avgrate; } if (flavor == 12) { // weight electron from tau double avgrate = 0.476; const static double wz_ele[] = {0.00855,0.409,0.442,0.55,0.632,0.616,0.615,0.642,0.72,0.617}; // double ewz_ele[] = {0.000573,0.0291,0.0366,0.0352,0.0363,0.0474,0.0628,0.0709,0.125,0.109}; int ibin = 0; if(pt > 10 && pt < 15) ibin = 0; if(pt > 15 && pt < 20) ibin = 1; if(pt > 20 && pt < 25) ibin = 2; if(pt > 25 && pt < 30) ibin = 3; if(pt > 30 && pt < 40) ibin = 4; if(pt > 40 && pt < 50) ibin = 5; if(pt > 50 && pt < 60) ibin = 6; if(pt > 60 && pt < 80) ibin = 7; if(pt > 80 && pt < 100) ibin = 8; if(pt > 100) ibin = 9; double eff_pt = 0.; eff_pt = wz_ele[ibin]; eta = fabs(eta); const static double wz_ele_eta[] = {0.546,0.5,0.513,0.421,0.47,0.433}; //double ewz_ele_eta[] = {0.0566,0.0257,0.0263,0.0263,0.0303,0.0321}; ibin = 0; if(eta > 0 && eta < 0.1) ibin = 0; if(eta > 0.1 && eta < 0.5) ibin = 1; if(eta > 0.5 && eta < 1.0) ibin = 2; if(eta > 1.0 && eta < 1.5) ibin = 3; if(eta > 1.5 && eta < 2.0) ibin = 4; if(eta > 2.0 && eta < 2.5) ibin = 5; double eff_eta = 0.; eff_eta = wz_ele_eta[ibin]; eff = (eff_pt * eff_eta) / avgrate; } if (flavor == 13) { // weight prompt muon int ibin = 0; if(pt > 10 && pt < 15) ibin = 0; if(pt > 15 && pt < 20) ibin = 1; if(pt > 20 && pt < 25) ibin = 2; if(pt > 25 && pt < 30) ibin = 3; if(pt > 30 && pt < 40) ibin = 4; if(pt > 40 && pt < 50) ibin = 5; if(pt > 50 && pt < 60) ibin = 6; if(pt > 60 && pt < 80) ibin = 7; if(pt > 80 && pt < 100) ibin = 8; if(pt > 100 && pt < 200) ibin = 9; if(pt > 200 && pt < 400) ibin = 10; if(pt > 400) ibin = 11; if(fabs(eta) < 0.1) { const static double wz_mu[] = {0.00705,0.402,0.478,0.49,0.492,0.499,0.527,0.512,0.53,0.528,0.465,0.465}; //double ewz_mu[] = {0.000298,0.0154,0.017,0.0158,0.0114,0.0123,0.0155,0.0133,0.0196,0.0182,0.0414,0.0414}; double eff_pt = 0.; eff_pt = wz_mu[ibin]; eff = eff_pt; } if(fabs(eta) > 0.1) { const static double wz_mu[] = {0.0224,0.839,0.887,0.91,0.919,0.923,0.925,0.925,0.922,0.918,0.884,0.834}; //double ewz_mu[] = {0.000213,0.00753,0.0074,0.007,0.00496,0.00534,0.00632,0.00583,0.00849,0.00804,0.0224,0.0963}; double eff_pt = 0.; eff_pt = wz_mu[ibin]; eff = eff_pt; } } if (flavor == 14) { // weight muon from tau int ibin = 0; if(pt > 10 && pt < 15) ibin = 0; if(pt > 15 && pt < 20) ibin = 1; if(pt > 20 && pt < 25) ibin = 2; if(pt > 
25 && pt < 30) ibin = 3; if(pt > 30 && pt < 40) ibin = 4; if(pt > 40 && pt < 50) ibin = 5; if(pt > 50 && pt < 60) ibin = 6; if(pt > 60 && pt < 80) ibin = 7; if(pt > 80 && pt < 100) ibin = 8; if(pt > 100) ibin = 9; if(fabs(eta) < 0.1) { const static double wz_mu[] = {0.0,0.664,0.124,0.133,0.527,0.283,0.495,0.25,0.5,0.331}; //double ewz_mu[] = {0.0,0.192,0.0437,0.0343,0.128,0.107,0.202,0.125,0.25,0.191}; double eff_pt = 0.; eff_pt = wz_mu[ibin]; eff = eff_pt; } if(fabs(eta) > 0.1) { const static double wz_mu[] = {0.0,0.617,0.655,0.676,0.705,0.738,0.712,0.783,0.646,0.745}; //double ewz_mu[] = {0.0,0.043,0.0564,0.0448,0.0405,0.0576,0.065,0.0825,0.102,0.132}; double eff_pt = 0.; eff_pt = wz_mu[ibin]; eff = eff_pt; } } if (flavor == 15) { // weight hadronic tau 1p double avgrate = 0.16; const static double wz_tau1p[] = {0.0,0.0311,0.148,0.229,0.217,0.292,0.245,0.307,0.227,0.277}; //double ewz_tau1p[] = {0.0,0.00211,0.0117,0.0179,0.0134,0.0248,0.0264,0.0322,0.0331,0.0427}; int ibin = 0; if(pt > 10 && pt < 15) ibin = 0; if(pt > 15 && pt < 20) ibin = 1; if(pt > 20 && pt < 25) ibin = 2; if(pt > 25 && pt < 30) ibin = 3; if(pt > 30 && pt < 40) ibin = 4; if(pt > 40 && pt < 50) ibin = 5; if(pt > 50 && pt < 60) ibin = 6; if(pt > 60 && pt < 80) ibin = 7; if(pt > 80 && pt < 100) ibin = 8; if(pt > 100) ibin = 9; double eff_pt = 0.; eff_pt = wz_tau1p[ibin]; const static double wz_tau1p_eta[] = {0.166,0.15,0.188,0.175,0.142,0.109}; //double ewz_tau1p_eta[] ={0.0166,0.00853,0.0097,0.00985,0.00949,0.00842}; ibin = 0; if(eta > 0.0 && eta < 0.1) ibin = 0; if(eta > 0.1 && eta < 0.5) ibin = 1; if(eta > 0.5 && eta < 1.0) ibin = 2; if(eta > 1.0 && eta < 1.5) ibin = 3; if(eta > 1.5 && eta < 2.0) ibin = 4; if(eta > 2.0 && eta < 2.5) ibin = 5; double eff_eta = 0.; eff_eta = wz_tau1p_eta[ibin]; eff = (eff_pt * eff_eta) / avgrate; } return eff; } /// Function giving observed and expected upper limits (on the visible cross-section) double getUpperLimit(const string& signal_region, bool observed) { map upperLimitsObserved; map upperLimitsExpected; upperLimitsObserved["HTlep_3l_offZ_OSSF_cut_0"] = 2.435; upperLimitsObserved["HTlep_3l_offZ_OSSF_cut_200"] = 0.704; upperLimitsObserved["HTlep_3l_offZ_OSSF_cut_500"] = 0.182; upperLimitsObserved["HTlep_3l_offZ_OSSF_cut_800"] = 0.147; upperLimitsObserved["HTlep_2ltau_offZ_OSSF_cut_0"] = 13.901; upperLimitsObserved["HTlep_2ltau_offZ_OSSF_cut_200"] = 1.677; upperLimitsObserved["HTlep_2ltau_offZ_OSSF_cut_500"] = 0.141; upperLimitsObserved["HTlep_2ltau_offZ_OSSF_cut_800"] = 0.155; upperLimitsObserved["HTlep_3l_offZ_noOSSF_cut_0"] = 1.054; upperLimitsObserved["HTlep_3l_offZ_noOSSF_cut_200"] = 0.341; upperLimitsObserved["HTlep_3l_offZ_noOSSF_cut_500"] = 0.221; upperLimitsObserved["HTlep_3l_offZ_noOSSF_cut_800"] = 0.140; upperLimitsObserved["HTlep_2ltau_offZ_noOSSF_cut_0"] = 4.276; upperLimitsObserved["HTlep_2ltau_offZ_noOSSF_cut_200"] = 0.413; upperLimitsObserved["HTlep_2ltau_offZ_noOSSF_cut_500"] = 0.138; upperLimitsObserved["HTlep_2ltau_offZ_noOSSF_cut_800"] = 0.150; upperLimitsObserved["HTlep_3l_onZ_cut_0"] = 29.804; upperLimitsObserved["HTlep_3l_onZ_cut_200"] = 3.579; upperLimitsObserved["HTlep_3l_onZ_cut_500"] = 0.466; upperLimitsObserved["HTlep_3l_onZ_cut_800"] = 0.298; upperLimitsObserved["HTlep_2ltau_onZ_cut_0"] = 205.091; upperLimitsObserved["HTlep_2ltau_onZ_cut_200"] = 3.141; upperLimitsObserved["HTlep_2ltau_onZ_cut_500"] = 0.290; upperLimitsObserved["HTlep_2ltau_onZ_cut_800"] = 0.157; upperLimitsObserved["METStrong_3l_offZ_OSSF_cut_0"] = 1.111; 
upperLimitsObserved["METStrong_3l_offZ_OSSF_cut_100"] = 0.354; upperLimitsObserved["METStrong_3l_offZ_OSSF_cut_200"] = 0.236; upperLimitsObserved["METStrong_3l_offZ_OSSF_cut_300"] = 0.150; upperLimitsObserved["METStrong_2ltau_offZ_OSSF_cut_0"] = 1.881; upperLimitsObserved["METStrong_2ltau_offZ_OSSF_cut_100"] = 0.406; upperLimitsObserved["METStrong_2ltau_offZ_OSSF_cut_200"] = 0.194; upperLimitsObserved["METStrong_2ltau_offZ_OSSF_cut_300"] = 0.134; upperLimitsObserved["METStrong_3l_offZ_noOSSF_cut_0"] = 0.770; upperLimitsObserved["METStrong_3l_offZ_noOSSF_cut_100"] = 0.295; upperLimitsObserved["METStrong_3l_offZ_noOSSF_cut_200"] = 0.149; upperLimitsObserved["METStrong_3l_offZ_noOSSF_cut_300"] = 0.140; upperLimitsObserved["METStrong_2ltau_offZ_noOSSF_cut_0"] = 2.003; upperLimitsObserved["METStrong_2ltau_offZ_noOSSF_cut_100"] = 0.806; upperLimitsObserved["METStrong_2ltau_offZ_noOSSF_cut_200"] = 0.227; upperLimitsObserved["METStrong_2ltau_offZ_noOSSF_cut_300"] = 0.138; upperLimitsObserved["METStrong_3l_onZ_cut_0"] = 6.383; upperLimitsObserved["METStrong_3l_onZ_cut_100"] = 0.959; upperLimitsObserved["METStrong_3l_onZ_cut_200"] = 0.549; upperLimitsObserved["METStrong_3l_onZ_cut_300"] = 0.182; upperLimitsObserved["METStrong_2ltau_onZ_cut_0"] = 10.658; upperLimitsObserved["METStrong_2ltau_onZ_cut_100"] = 0.637; upperLimitsObserved["METStrong_2ltau_onZ_cut_200"] = 0.291; upperLimitsObserved["METStrong_2ltau_onZ_cut_300"] = 0.227; upperLimitsObserved["METWeak_3l_offZ_OSSF_cut_0"] = 1.802; upperLimitsObserved["METWeak_3l_offZ_OSSF_cut_100"] = 0.344; upperLimitsObserved["METWeak_3l_offZ_OSSF_cut_200"] = 0.189; upperLimitsObserved["METWeak_3l_offZ_OSSF_cut_300"] = 0.148; upperLimitsObserved["METWeak_2ltau_offZ_OSSF_cut_0"] = 12.321; upperLimitsObserved["METWeak_2ltau_offZ_OSSF_cut_100"] = 0.430; upperLimitsObserved["METWeak_2ltau_offZ_OSSF_cut_200"] = 0.137; upperLimitsObserved["METWeak_2ltau_offZ_OSSF_cut_300"] = 0.134; upperLimitsObserved["METWeak_3l_offZ_noOSSF_cut_0"] = 0.562; upperLimitsObserved["METWeak_3l_offZ_noOSSF_cut_100"] = 0.153; upperLimitsObserved["METWeak_3l_offZ_noOSSF_cut_200"] = 0.154; upperLimitsObserved["METWeak_3l_offZ_noOSSF_cut_300"] = 0.141; upperLimitsObserved["METWeak_2ltau_offZ_noOSSF_cut_0"] = 2.475; upperLimitsObserved["METWeak_2ltau_offZ_noOSSF_cut_100"] = 0.244; upperLimitsObserved["METWeak_2ltau_offZ_noOSSF_cut_200"] = 0.141; upperLimitsObserved["METWeak_2ltau_offZ_noOSSF_cut_300"] = 0.142; upperLimitsObserved["METWeak_3l_onZ_cut_0"] = 24.769; upperLimitsObserved["METWeak_3l_onZ_cut_100"] = 0.690; upperLimitsObserved["METWeak_3l_onZ_cut_200"] = 0.198; upperLimitsObserved["METWeak_3l_onZ_cut_300"] = 0.138; upperLimitsObserved["METWeak_2ltau_onZ_cut_0"] = 194.360; upperLimitsObserved["METWeak_2ltau_onZ_cut_100"] = 0.287; upperLimitsObserved["METWeak_2ltau_onZ_cut_200"] = 0.144; upperLimitsObserved["METWeak_2ltau_onZ_cut_300"] = 0.130; upperLimitsObserved["Meff_3l_offZ_OSSF_cut_0"] = 2.435; upperLimitsObserved["Meff_3l_offZ_OSSF_cut_600"] = 0.487; upperLimitsObserved["Meff_3l_offZ_OSSF_cut_1000"] = 0.156; upperLimitsObserved["Meff_3l_offZ_OSSF_cut_1500"] = 0.140; upperLimitsObserved["Meff_2ltau_offZ_OSSF_cut_0"] = 13.901; upperLimitsObserved["Meff_2ltau_offZ_OSSF_cut_600"] = 0.687; upperLimitsObserved["Meff_2ltau_offZ_OSSF_cut_1000"] = 0.224; upperLimitsObserved["Meff_2ltau_offZ_OSSF_cut_1500"] = 0.155; upperLimitsObserved["Meff_3l_offZ_noOSSF_cut_0"] = 1.054; upperLimitsObserved["Meff_3l_offZ_noOSSF_cut_600"] = 0.249; upperLimitsObserved["Meff_3l_offZ_noOSSF_cut_1000"] = 
0.194; upperLimitsObserved["Meff_3l_offZ_noOSSF_cut_1500"] = 0.145; upperLimitsObserved["Meff_2ltau_offZ_noOSSF_cut_0"] = 4.276; upperLimitsObserved["Meff_2ltau_offZ_noOSSF_cut_600"] = 0.772; upperLimitsObserved["Meff_2ltau_offZ_noOSSF_cut_1000"] = 0.218; upperLimitsObserved["Meff_2ltau_offZ_noOSSF_cut_1500"] = 0.204; upperLimitsObserved["Meff_3l_onZ_cut_0"] = 29.804; upperLimitsObserved["Meff_3l_onZ_cut_600"] = 2.933; upperLimitsObserved["Meff_3l_onZ_cut_1000"] = 0.912; upperLimitsObserved["Meff_3l_onZ_cut_1500"] = 0.225; upperLimitsObserved["Meff_2ltau_onZ_cut_0"] = 205.091; upperLimitsObserved["Meff_2ltau_onZ_cut_600"] = 1.486; upperLimitsObserved["Meff_2ltau_onZ_cut_1000"] = 0.641; upperLimitsObserved["Meff_2ltau_onZ_cut_1500"] = 0.204; upperLimitsObserved["MeffStrong_3l_offZ_OSSF_cut_0"] = 0.479; upperLimitsObserved["MeffStrong_3l_offZ_OSSF_cut_600"] = 0.353; upperLimitsObserved["MeffStrong_3l_offZ_OSSF_cut_1200"] = 0.187; upperLimitsObserved["MeffStrong_2ltau_offZ_OSSF_cut_0"] = 0.617; upperLimitsObserved["MeffStrong_2ltau_offZ_OSSF_cut_600"] = 0.320; upperLimitsObserved["MeffStrong_2ltau_offZ_OSSF_cut_1200"] = 0.281; upperLimitsObserved["MeffStrong_3l_offZ_noOSSF_cut_0"] = 0.408; upperLimitsObserved["MeffStrong_3l_offZ_noOSSF_cut_600"] = 0.240; upperLimitsObserved["MeffStrong_3l_offZ_noOSSF_cut_1200"] = 0.150; upperLimitsObserved["MeffStrong_2ltau_offZ_noOSSF_cut_0"] = 0.774; upperLimitsObserved["MeffStrong_2ltau_offZ_noOSSF_cut_600"] = 0.417; upperLimitsObserved["MeffStrong_2ltau_offZ_noOSSF_cut_1200"] = 0.266; upperLimitsObserved["MeffStrong_3l_onZ_cut_0"] = 1.208; upperLimitsObserved["MeffStrong_3l_onZ_cut_600"] = 0.837; upperLimitsObserved["MeffStrong_3l_onZ_cut_1200"] = 0.269; upperLimitsObserved["MeffStrong_2ltau_onZ_cut_0"] = 0.605; upperLimitsObserved["MeffStrong_2ltau_onZ_cut_600"] = 0.420; upperLimitsObserved["MeffStrong_2ltau_onZ_cut_1200"] = 0.141; upperLimitsObserved["MeffMt_3l_onZ_cut_0"] = 1.832; upperLimitsObserved["MeffMt_3l_onZ_cut_600"] = 0.862; upperLimitsObserved["MeffMt_3l_onZ_cut_1200"] = 0.222; upperLimitsObserved["MeffMt_2ltau_onZ_cut_0"] = 1.309; upperLimitsObserved["MeffMt_2ltau_onZ_cut_600"] = 0.481; upperLimitsObserved["MeffMt_2ltau_onZ_cut_1200"] = 0.146; upperLimitsObserved["MinPt_3l_offZ_OSSF_cut_0"] = 2.435; upperLimitsObserved["MinPt_3l_offZ_OSSF_cut_50"] = 0.500; upperLimitsObserved["MinPt_3l_offZ_OSSF_cut_100"] = 0.203; upperLimitsObserved["MinPt_3l_offZ_OSSF_cut_150"] = 0.128; upperLimitsObserved["MinPt_2ltau_offZ_OSSF_cut_0"] = 13.901; upperLimitsObserved["MinPt_2ltau_offZ_OSSF_cut_50"] = 0.859; upperLimitsObserved["MinPt_2ltau_offZ_OSSF_cut_100"] = 0.158; upperLimitsObserved["MinPt_2ltau_offZ_OSSF_cut_150"] = 0.155; upperLimitsObserved["MinPt_3l_offZ_noOSSF_cut_0"] = 1.054; upperLimitsObserved["MinPt_3l_offZ_noOSSF_cut_50"] = 0.295; upperLimitsObserved["MinPt_3l_offZ_noOSSF_cut_100"] = 0.148; upperLimitsObserved["MinPt_3l_offZ_noOSSF_cut_150"] = 0.137; upperLimitsObserved["MinPt_2ltau_offZ_noOSSF_cut_0"] = 4.276; upperLimitsObserved["MinPt_2ltau_offZ_noOSSF_cut_50"] = 0.314; upperLimitsObserved["MinPt_2ltau_offZ_noOSSF_cut_100"] = 0.134; upperLimitsObserved["MinPt_2ltau_offZ_noOSSF_cut_150"] = 0.140; upperLimitsObserved["MinPt_3l_onZ_cut_0"] = 29.804; upperLimitsObserved["MinPt_3l_onZ_cut_50"] = 1.767; upperLimitsObserved["MinPt_3l_onZ_cut_100"] = 0.690; upperLimitsObserved["MinPt_3l_onZ_cut_150"] = 0.301; upperLimitsObserved["MinPt_2ltau_onZ_cut_0"] = 205.091; upperLimitsObserved["MinPt_2ltau_onZ_cut_50"] = 1.050; 
upperLimitsObserved["MinPt_2ltau_onZ_cut_100"] = 0.155; upperLimitsObserved["MinPt_2ltau_onZ_cut_150"] = 0.146; upperLimitsObserved["nbtag_3l_offZ_OSSF_cut_0"] = 2.435; upperLimitsObserved["nbtag_3l_offZ_OSSF_cut_1"] = 0.865; upperLimitsObserved["nbtag_3l_offZ_OSSF_cut_2"] = 0.474; upperLimitsObserved["nbtag_2ltau_offZ_OSSF_cut_0"] = 13.901; upperLimitsObserved["nbtag_2ltau_offZ_OSSF_cut_1"] = 1.566; upperLimitsObserved["nbtag_2ltau_offZ_OSSF_cut_2"] = 0.426; upperLimitsObserved["nbtag_3l_offZ_noOSSF_cut_0"] = 1.054; upperLimitsObserved["nbtag_3l_offZ_noOSSF_cut_1"] = 0.643; upperLimitsObserved["nbtag_3l_offZ_noOSSF_cut_2"] = 0.321; upperLimitsObserved["nbtag_2ltau_offZ_noOSSF_cut_0"] = 4.276; upperLimitsObserved["nbtag_2ltau_offZ_noOSSF_cut_1"] = 2.435; upperLimitsObserved["nbtag_2ltau_offZ_noOSSF_cut_2"] = 1.073; upperLimitsObserved["nbtag_3l_onZ_cut_0"] = 29.804; upperLimitsObserved["nbtag_3l_onZ_cut_1"] = 3.908; upperLimitsObserved["nbtag_3l_onZ_cut_2"] = 0.704; upperLimitsObserved["nbtag_2ltau_onZ_cut_0"] = 205.091; upperLimitsObserved["nbtag_2ltau_onZ_cut_1"] = 9.377; upperLimitsObserved["nbtag_2ltau_onZ_cut_2"] = 0.657; upperLimitsExpected["HTlep_3l_offZ_OSSF_cut_0"] = 2.893; upperLimitsExpected["HTlep_3l_offZ_OSSF_cut_200"] = 1.175; upperLimitsExpected["HTlep_3l_offZ_OSSF_cut_500"] = 0.265; upperLimitsExpected["HTlep_3l_offZ_OSSF_cut_800"] = 0.155; upperLimitsExpected["HTlep_2ltau_offZ_OSSF_cut_0"] = 14.293; upperLimitsExpected["HTlep_2ltau_offZ_OSSF_cut_200"] = 1.803; upperLimitsExpected["HTlep_2ltau_offZ_OSSF_cut_500"] = 0.159; upperLimitsExpected["HTlep_2ltau_offZ_OSSF_cut_800"] = 0.155; upperLimitsExpected["HTlep_3l_offZ_noOSSF_cut_0"] = 0.836; upperLimitsExpected["HTlep_3l_offZ_noOSSF_cut_200"] = 0.340; upperLimitsExpected["HTlep_3l_offZ_noOSSF_cut_500"] = 0.218; upperLimitsExpected["HTlep_3l_offZ_noOSSF_cut_800"] = 0.140; upperLimitsExpected["HTlep_2ltau_offZ_noOSSF_cut_0"] = 4.132; upperLimitsExpected["HTlep_2ltau_offZ_noOSSF_cut_200"] = 0.599; upperLimitsExpected["HTlep_2ltau_offZ_noOSSF_cut_500"] = 0.146; upperLimitsExpected["HTlep_2ltau_offZ_noOSSF_cut_800"] = 0.148; upperLimitsExpected["HTlep_3l_onZ_cut_0"] = 32.181; upperLimitsExpected["HTlep_3l_onZ_cut_200"] = 4.879; upperLimitsExpected["HTlep_3l_onZ_cut_500"] = 0.473; upperLimitsExpected["HTlep_3l_onZ_cut_800"] = 0.266; upperLimitsExpected["HTlep_2ltau_onZ_cut_0"] = 217.801; upperLimitsExpected["HTlep_2ltau_onZ_cut_200"] = 3.676; upperLimitsExpected["HTlep_2ltau_onZ_cut_500"] = 0.235; upperLimitsExpected["HTlep_2ltau_onZ_cut_800"] = 0.150; upperLimitsExpected["METStrong_3l_offZ_OSSF_cut_0"] = 1.196; upperLimitsExpected["METStrong_3l_offZ_OSSF_cut_100"] = 0.423; upperLimitsExpected["METStrong_3l_offZ_OSSF_cut_200"] = 0.208; upperLimitsExpected["METStrong_3l_offZ_OSSF_cut_300"] = 0.158; upperLimitsExpected["METStrong_2ltau_offZ_OSSF_cut_0"] = 2.158; upperLimitsExpected["METStrong_2ltau_offZ_OSSF_cut_100"] = 0.461; upperLimitsExpected["METStrong_2ltau_offZ_OSSF_cut_200"] = 0.186; upperLimitsExpected["METStrong_2ltau_offZ_OSSF_cut_300"] = 0.138; upperLimitsExpected["METStrong_3l_offZ_noOSSF_cut_0"] = 0.495; upperLimitsExpected["METStrong_3l_offZ_noOSSF_cut_100"] = 0.284; upperLimitsExpected["METStrong_3l_offZ_noOSSF_cut_200"] = 0.150; upperLimitsExpected["METStrong_3l_offZ_noOSSF_cut_300"] = 0.146; upperLimitsExpected["METStrong_2ltau_offZ_noOSSF_cut_0"] = 1.967; upperLimitsExpected["METStrong_2ltau_offZ_noOSSF_cut_100"] = 0.732; upperLimitsExpected["METStrong_2ltau_offZ_noOSSF_cut_200"] = 0.225; 
upperLimitsExpected["METStrong_2ltau_offZ_noOSSF_cut_300"] = 0.147; upperLimitsExpected["METStrong_3l_onZ_cut_0"] = 7.157; upperLimitsExpected["METStrong_3l_onZ_cut_100"] = 1.342; upperLimitsExpected["METStrong_3l_onZ_cut_200"] = 0.508; upperLimitsExpected["METStrong_3l_onZ_cut_300"] = 0.228; upperLimitsExpected["METStrong_2ltau_onZ_cut_0"] = 12.441; upperLimitsExpected["METStrong_2ltau_onZ_cut_100"] = 0.534; upperLimitsExpected["METStrong_2ltau_onZ_cut_200"] = 0.243; upperLimitsExpected["METStrong_2ltau_onZ_cut_300"] = 0.218; upperLimitsExpected["METWeak_3l_offZ_OSSF_cut_0"] = 2.199; upperLimitsExpected["METWeak_3l_offZ_OSSF_cut_100"] = 0.391; upperLimitsExpected["METWeak_3l_offZ_OSSF_cut_200"] = 0.177; upperLimitsExpected["METWeak_3l_offZ_OSSF_cut_300"] = 0.144; upperLimitsExpected["METWeak_2ltau_offZ_OSSF_cut_0"] = 12.431; upperLimitsExpected["METWeak_2ltau_offZ_OSSF_cut_100"] = 0.358; upperLimitsExpected["METWeak_2ltau_offZ_OSSF_cut_200"] = 0.150; upperLimitsExpected["METWeak_2ltau_offZ_OSSF_cut_300"] = 0.135; upperLimitsExpected["METWeak_3l_offZ_noOSSF_cut_0"] = 0.577; upperLimitsExpected["METWeak_3l_offZ_noOSSF_cut_100"] = 0.214; upperLimitsExpected["METWeak_3l_offZ_noOSSF_cut_200"] = 0.155; upperLimitsExpected["METWeak_3l_offZ_noOSSF_cut_300"] = 0.140; upperLimitsExpected["METWeak_2ltau_offZ_noOSSF_cut_0"] = 2.474; upperLimitsExpected["METWeak_2ltau_offZ_noOSSF_cut_100"] = 0.382; upperLimitsExpected["METWeak_2ltau_offZ_noOSSF_cut_200"] = 0.144; upperLimitsExpected["METWeak_2ltau_offZ_noOSSF_cut_300"] = 0.146; upperLimitsExpected["METWeak_3l_onZ_cut_0"] = 26.305; upperLimitsExpected["METWeak_3l_onZ_cut_100"] = 1.227; upperLimitsExpected["METWeak_3l_onZ_cut_200"] = 0.311; upperLimitsExpected["METWeak_3l_onZ_cut_300"] = 0.188; upperLimitsExpected["METWeak_2ltau_onZ_cut_0"] = 205.198; upperLimitsExpected["METWeak_2ltau_onZ_cut_100"] = 0.399; upperLimitsExpected["METWeak_2ltau_onZ_cut_200"] = 0.166; upperLimitsExpected["METWeak_2ltau_onZ_cut_300"] = 0.140; upperLimitsExpected["Meff_3l_offZ_OSSF_cut_0"] = 2.893; upperLimitsExpected["Meff_3l_offZ_OSSF_cut_600"] = 0.649; upperLimitsExpected["Meff_3l_offZ_OSSF_cut_1000"] = 0.252; upperLimitsExpected["Meff_3l_offZ_OSSF_cut_1500"] = 0.150; upperLimitsExpected["Meff_2ltau_offZ_OSSF_cut_0"] = 14.293; upperLimitsExpected["Meff_2ltau_offZ_OSSF_cut_600"] = 0.657; upperLimitsExpected["Meff_2ltau_offZ_OSSF_cut_1000"] = 0.226; upperLimitsExpected["Meff_2ltau_offZ_OSSF_cut_1500"] = 0.154; upperLimitsExpected["Meff_3l_offZ_noOSSF_cut_0"] = 0.836; upperLimitsExpected["Meff_3l_offZ_noOSSF_cut_600"] = 0.265; upperLimitsExpected["Meff_3l_offZ_noOSSF_cut_1000"] = 0.176; upperLimitsExpected["Meff_3l_offZ_noOSSF_cut_1500"] = 0.146; upperLimitsExpected["Meff_2ltau_offZ_noOSSF_cut_0"] = 4.132; upperLimitsExpected["Meff_2ltau_offZ_noOSSF_cut_600"] = 0.678; upperLimitsExpected["Meff_2ltau_offZ_noOSSF_cut_1000"] = 0.243; upperLimitsExpected["Meff_2ltau_offZ_noOSSF_cut_1500"] = 0.184; upperLimitsExpected["Meff_3l_onZ_cut_0"] = 32.181; upperLimitsExpected["Meff_3l_onZ_cut_600"] = 3.219; upperLimitsExpected["Meff_3l_onZ_cut_1000"] = 0.905; upperLimitsExpected["Meff_3l_onZ_cut_1500"] = 0.261; upperLimitsExpected["Meff_2ltau_onZ_cut_0"] = 217.801; upperLimitsExpected["Meff_2ltau_onZ_cut_600"] = 1.680; upperLimitsExpected["Meff_2ltau_onZ_cut_1000"] = 0.375; upperLimitsExpected["Meff_2ltau_onZ_cut_1500"] = 0.178; upperLimitsExpected["MeffStrong_3l_offZ_OSSF_cut_0"] = 0.571; upperLimitsExpected["MeffStrong_3l_offZ_OSSF_cut_600"] = 0.386; 
upperLimitsExpected["MeffStrong_3l_offZ_OSSF_cut_1200"] = 0.177; upperLimitsExpected["MeffStrong_2ltau_offZ_OSSF_cut_0"] = 0.605; upperLimitsExpected["MeffStrong_2ltau_offZ_OSSF_cut_600"] = 0.335; upperLimitsExpected["MeffStrong_2ltau_offZ_OSSF_cut_1200"] = 0.249; upperLimitsExpected["MeffStrong_3l_offZ_noOSSF_cut_0"] = 0.373; upperLimitsExpected["MeffStrong_3l_offZ_noOSSF_cut_600"] = 0.223; upperLimitsExpected["MeffStrong_3l_offZ_noOSSF_cut_1200"] = 0.150; upperLimitsExpected["MeffStrong_2ltau_offZ_noOSSF_cut_0"] = 0.873; upperLimitsExpected["MeffStrong_2ltau_offZ_noOSSF_cut_600"] = 0.428; upperLimitsExpected["MeffStrong_2ltau_offZ_noOSSF_cut_1200"] = 0.210; upperLimitsExpected["MeffStrong_3l_onZ_cut_0"] = 2.034; upperLimitsExpected["MeffStrong_3l_onZ_cut_600"] = 1.093; upperLimitsExpected["MeffStrong_3l_onZ_cut_1200"] = 0.293; upperLimitsExpected["MeffStrong_2ltau_onZ_cut_0"] = 0.690; upperLimitsExpected["MeffStrong_2ltau_onZ_cut_600"] = 0.392; upperLimitsExpected["MeffStrong_2ltau_onZ_cut_1200"] = 0.156; upperLimitsExpected["MeffMt_3l_onZ_cut_0"] = 2.483; upperLimitsExpected["MeffMt_3l_onZ_cut_600"] = 0.845; upperLimitsExpected["MeffMt_3l_onZ_cut_1200"] = 0.255; upperLimitsExpected["MeffMt_2ltau_onZ_cut_0"] = 1.448; upperLimitsExpected["MeffMt_2ltau_onZ_cut_600"] = 0.391; upperLimitsExpected["MeffMt_2ltau_onZ_cut_1200"] = 0.146; upperLimitsExpected["MinPt_3l_offZ_OSSF_cut_0"] = 2.893; upperLimitsExpected["MinPt_3l_offZ_OSSF_cut_50"] = 0.703; upperLimitsExpected["MinPt_3l_offZ_OSSF_cut_100"] = 0.207; upperLimitsExpected["MinPt_3l_offZ_OSSF_cut_150"] = 0.143; upperLimitsExpected["MinPt_2ltau_offZ_OSSF_cut_0"] = 14.293; upperLimitsExpected["MinPt_2ltau_offZ_OSSF_cut_50"] = 0.705; upperLimitsExpected["MinPt_2ltau_offZ_OSSF_cut_100"] = 0.149; upperLimitsExpected["MinPt_2ltau_offZ_OSSF_cut_150"] = 0.155; upperLimitsExpected["MinPt_3l_offZ_noOSSF_cut_0"] = 0.836; upperLimitsExpected["MinPt_3l_offZ_noOSSF_cut_50"] = 0.249; upperLimitsExpected["MinPt_3l_offZ_noOSSF_cut_100"] = 0.135; upperLimitsExpected["MinPt_3l_offZ_noOSSF_cut_150"] = 0.136; upperLimitsExpected["MinPt_2ltau_offZ_noOSSF_cut_0"] = 4.132; upperLimitsExpected["MinPt_2ltau_offZ_noOSSF_cut_50"] = 0.339; upperLimitsExpected["MinPt_2ltau_offZ_noOSSF_cut_100"] = 0.149; upperLimitsExpected["MinPt_2ltau_offZ_noOSSF_cut_150"] = 0.145; upperLimitsExpected["MinPt_3l_onZ_cut_0"] = 32.181; upperLimitsExpected["MinPt_3l_onZ_cut_50"] = 2.260; upperLimitsExpected["MinPt_3l_onZ_cut_100"] = 0.438; upperLimitsExpected["MinPt_3l_onZ_cut_150"] = 0.305; upperLimitsExpected["MinPt_2ltau_onZ_cut_0"] = 217.801; upperLimitsExpected["MinPt_2ltau_onZ_cut_50"] = 1.335; upperLimitsExpected["MinPt_2ltau_onZ_cut_100"] = 0.162; upperLimitsExpected["MinPt_2ltau_onZ_cut_150"] = 0.149; upperLimitsExpected["nbtag_3l_offZ_OSSF_cut_0"] = 2.893; upperLimitsExpected["nbtag_3l_offZ_OSSF_cut_1"] = 0.923; upperLimitsExpected["nbtag_3l_offZ_OSSF_cut_2"] = 0.452; upperLimitsExpected["nbtag_2ltau_offZ_OSSF_cut_0"] = 14.293; upperLimitsExpected["nbtag_2ltau_offZ_OSSF_cut_1"] = 1.774; upperLimitsExpected["nbtag_2ltau_offZ_OSSF_cut_2"] = 0.549; upperLimitsExpected["nbtag_3l_offZ_noOSSF_cut_0"] = 0.836; upperLimitsExpected["nbtag_3l_offZ_noOSSF_cut_1"] = 0.594; upperLimitsExpected["nbtag_3l_offZ_noOSSF_cut_2"] = 0.298; upperLimitsExpected["nbtag_2ltau_offZ_noOSSF_cut_0"] = 4.132; upperLimitsExpected["nbtag_2ltau_offZ_noOSSF_cut_1"] = 2.358; upperLimitsExpected["nbtag_2ltau_offZ_noOSSF_cut_2"] = 0.958; upperLimitsExpected["nbtag_3l_onZ_cut_0"] = 32.181; 
upperLimitsExpected["nbtag_3l_onZ_cut_1"] = 3.868; upperLimitsExpected["nbtag_3l_onZ_cut_2"] = 0.887; upperLimitsExpected["nbtag_2ltau_onZ_cut_0"] = 217.801; upperLimitsExpected["nbtag_2ltau_onZ_cut_1"] = 9.397; upperLimitsExpected["nbtag_2ltau_onZ_cut_2"] = 0.787; if (observed) return upperLimitsObserved[signal_region]; else return upperLimitsExpected[signal_region]; } /// Function checking if there is an OSSF lepton pair or a combination of 3 leptons with an invariant mass close to the Z mass int isonZ (const Particles& particles) { int onZ = 0; double best_mass_2 = 999.; double best_mass_3 = 999.; // Loop over all 2 particle combinations to find invariant mass of OSSF pair closest to Z mass for (const Particle& p1 : particles) { for (const Particle& p2 : particles) { double mass_difference_2_old = fabs(91.0 - best_mass_2); double mass_difference_2_new = fabs(91.0 - (p1.momentum() + p2.momentum()).mass()/GeV); // If particle combination is OSSF pair calculate mass difference to Z mass if ((p1.pid()*p2.pid() == -121 || p1.pid()*p2.pid() == -169)) { // Get invariant mass closest to Z mass if (mass_difference_2_new < mass_difference_2_old) best_mass_2 = (p1.momentum() + p2.momentum()).mass()/GeV; // In case there is an OSSF pair take also 3rd lepton into account (e.g. from FSR and photon to electron conversion) for (const Particle& p3 : particles ) { double mass_difference_3_old = fabs(91.0 - best_mass_3); double mass_difference_3_new = fabs(91.0 - (p1.momentum() + p2.momentum() + p3.momentum()).mass()/GeV); if (mass_difference_3_new < mass_difference_3_old) best_mass_3 = (p1.momentum() + p2.momentum() + p3.momentum()).mass()/GeV; } } } } // Pick the minimum invariant mass of the best OSSF pair combination and the best 3 lepton combination double best_mass = min(best_mass_2,best_mass_3); // if this mass is in a 20 GeV window around the Z mass, the event is classified as onZ if ( fabs(91.0 - best_mass) < 20. ) onZ = 1; return onZ; } /// function checking if two leptons are an OSSF lepton pair and giving out the invariant mass (0 if no OSSF pair) double isOSSF_mass (const Particle& p1, const Particle& p2) { double inv_mass = 0.; // Is particle combination OSSF pair? 
if ((p1.pid()*p2.pid() == -121 || p1.pid()*p2.pid() == -169)) { // Get invariant mass inv_mass = (p1.momentum() + p2.momentum()).mass()/GeV; } return inv_mass; } /// Function checking if there is an OSSF lepton pair bool isOSSF (const Particles& particles) { for (size_t i1=0 ; i1 < 3 ; i1 ++) { for (size_t i2 = i1+1 ; i2 < 3 ; i2 ++) { if ((particles[i1].pid()*particles[i2].pid() == -121 || particles[i1].pid()*particles[i2].pid() == -169)) { return true; } } } return false; } //@} private: /// Histograms //@{ Histo1DPtr _h_HTlep_all, _h_HTjets_all, _h_MET_all, _h_Meff_all, _h_min_pT_all, _h_mT_all; Histo1DPtr _h_pt_1_3l, _h_pt_2_3l, _h_pt_3_3l, _h_pt_1_2ltau, _h_pt_2_2ltau, _h_pt_3_2ltau; Histo1DPtr _h_e_n, _h_mu_n, _h_tau_n; Histo1DPtr _h_excluded; //@} /// Fiducial efficiencies to model the effects of the ATLAS detector bool _use_fiducial_lepton_efficiency; /// List of signal regions and event counts per signal region vector _signal_regions; map _eventCountsPerSR; }; DECLARE_RIVET_PLUGIN(ATLAS_2014_I1327229); } diff --git a/analyses/pluginCDF/CDF_2006_S6653332.cc b/analyses/pluginCDF/CDF_2006_S6653332.cc --- a/analyses/pluginCDF/CDF_2006_S6653332.cc +++ b/analyses/pluginCDF/CDF_2006_S6653332.cc @@ -1,177 +1,177 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/InvMassFinalState.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/ChargedLeptons.hh" namespace Rivet { /// @brief CDF Run II analysis: jet \f$ p_T \f$ and \f$ \eta \f$ /// distributions in Z + (b) jet production /// @author Lars Sonnenschein /// /// This CDF analysis provides \f$ p_T \f$ and \f$ \eta \f$ distributions of /// jets in Z + (b) jet production, before and after tagging. class CDF_2006_S6653332 : public Analysis { public: /// Constructor CDF_2006_S6653332() : Analysis("CDF_2006_S6653332"), _Rjet(0.7), _JetPtCut(20.), _JetEtaCut(1.5), _Lep1PtCut(18.), _Lep2PtCut(10.), _LepEtaCut(1.1), _sumWeightsWithZ(0.0), _sumWeightsWithZJet(0.0) { } /// @name Analysis methods //@{ void init() { const FinalState fs(-3.6, 3.6); declare(fs, "FS"); // Create a final state with any e+e- or mu+mu- pair with // invariant mass 76 -> 106 GeV and ET > 20 (Z decay products) vector > vids; vids.push_back(make_pair(PID::ELECTRON, PID::POSITRON)); vids.push_back(make_pair(PID::MUON, PID::ANTIMUON)); FinalState fs2(-3.6, 3.6); InvMassFinalState invfs(fs2, vids, 66*GeV, 116*GeV); declare(invfs, "INVFS"); // Make a final state without the Z decay products for jet clustering VetoedFinalState vfs(fs); vfs.addVetoOnThisFinalState(invfs); declare(vfs, "VFS"); declare(FastJets(vfs, FastJets::CDFMIDPOINT, 0.7), "Jets"); // Book histograms _sigmaBJet = bookHisto1D(1, 1, 1); _ratioBJetToZ = bookHisto1D(2, 1, 1); _ratioBJetToJet = bookHisto1D(3, 1, 1); } /// Do the analysis void analyze(const Event& event) { // Check we have an l+l- pair that passes the kinematic cuts // Get the Z decay products (mu+mu- or e+e- pair) const InvMassFinalState& invMassFinalState = apply(event, "INVFS"); const Particles& ZDecayProducts = invMassFinalState.particles(); // Make sure we have at least 2 Z decay products (mumu or ee) if (ZDecayProducts.size() < 2) vetoEvent; // double Lep1Pt = ZDecayProducts[0].pT(); double Lep2Pt = ZDecayProducts[1].pT(); double Lep1Eta = ZDecayProducts[0].absrap(); ///< @todo This is y... should be abseta()? 
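      // Illustrative aside (comment-only sketch): the b-quark collection a few lines below
      // is rewritten in this patch to use the version-agnostic range
      // HepMCUtils::particles(event.genEvent()) instead of raw HepMC2 iterators. The same
      // pattern could collect any PID; the tau example here is hypothetical:
      //
      //   Particles taus;
      //   for (ConstGenParticlePtr p : HepMCUtils::particles(event.genEvent())) {
      //     if (std::abs(p->pdg_id()) == PID::TAU) taus.push_back(Particle(*p));
      //   }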
double Lep2Eta = ZDecayProducts[1].absrap(); ///< @todo This is y... should be abseta()? if (Lep1Eta > _LepEtaCut && Lep2Eta > _LepEtaCut) vetoEvent; if (ZDecayProducts[0].abspid()==13 && Lep1Eta > 1. && Lep2Eta > 1.) vetoEvent; if (Lep1Pt < _Lep1PtCut && Lep2Pt < _Lep2PtCut) vetoEvent; _sumWeightsWithZ += event.weight(); /// @todo Write out a warning if there are more than two decay products FourMomentum Zmom = ZDecayProducts[0].momentum() + ZDecayProducts[1].momentum(); // Put all b-quarks in a vector /// @todo Use jet contents rather than accessing quarks directly Particles bquarks; - /// @todo Use nicer looping - for (GenEvent::particle_const_iterator p = event.genEvent()->particles_begin(); p != event.genEvent()->particles_end(); ++p) { - if ( std::abs((*p)->pdg_id()) == PID::BQUARK ) { - bquarks.push_back(Particle(**p)); + /// @todo is this HepMC wrangling necessary? + for(ConstGenParticlePtr p: HepMCUtils::particles(event.genEvent())){ + if ( std::abs(p->pdg_id()) == PID::BQUARK ) { + bquarks.push_back(Particle(*p)); } } // Get jets const FastJets& jetpro = apply(event, "Jets"); MSG_DEBUG("Jet multiplicity before any pT cut = " << jetpro.size()); const PseudoJets& jets = jetpro.pseudoJetsByPt(); MSG_DEBUG("jetlist size = " << jets.size()); int numBJet = 0; int numJet = 0; // for each b-jet plot the ET and the eta of the jet, normalise to the total cross section at the end // for each event plot N jet and pT(Z), normalise to the total cross section at the end for (PseudoJets::const_iterator jt = jets.begin(); jt != jets.end(); ++jt) { // select jets that pass the kinematic cuts if (jt->perp() > _JetPtCut && fabs(jt->rapidity()) <= _JetEtaCut) { ++numJet; // Does the jet contain a b-quark? /// @todo Use jet contents rather than accessing quarks directly bool bjet = false; foreach (const Particle& bquark, bquarks) { if (deltaR(jt->rapidity(), jt->phi(), bquark.rapidity(), bquark.phi()) <= _Rjet) { bjet = true; break; } } // end loop around b-jets if (bjet) { numBJet++; } } } // end loop around jets if (numJet > 0) _sumWeightsWithZJet += event.weight(); if (numBJet > 0) { _sigmaBJet->fill(1960.0,event.weight()); _ratioBJetToZ->fill(1960.0,event.weight()); _ratioBJetToJet->fill(1960.0,event.weight()); } } /// Finalize void finalize() { MSG_DEBUG("Total sum of weights = " << sumOfWeights()); MSG_DEBUG("Sum of weights for Z production in mass range = " << _sumWeightsWithZ); MSG_DEBUG("Sum of weights for Z+jet production in mass range = " << _sumWeightsWithZJet); scale(_sigmaBJet, crossSection()/sumOfWeights()); scale(_ratioBJetToZ, 1.0/_sumWeightsWithZ); scale(_ratioBJetToJet, 1.0/_sumWeightsWithZJet); } //@} private: /// @name Cuts and counters //@{ double _Rjet; double _JetPtCut; double _JetEtaCut; double _Lep1PtCut; double _Lep2PtCut; double _LepEtaCut; double _sumWeightsWithZ; double _sumWeightsWithZJet; //@} /// @name Histograms //@{ Histo1DPtr _sigmaBJet; Histo1DPtr _ratioBJetToZ; Histo1DPtr _ratioBJetToJet; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(CDF_2006_S6653332); } diff --git a/analyses/pluginCDF/CDF_2008_S7540469.cc b/analyses/pluginCDF/CDF_2008_S7540469.cc --- a/analyses/pluginCDF/CDF_2008_S7540469.cc +++ b/analyses/pluginCDF/CDF_2008_S7540469.cc @@ -1,177 +1,177 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/FastJets.hh" namespace Rivet { /// @brief Measurement differential Z/\f$ 
\gamma^* \f$ + jet + \f$ X \f$ cross sections /// @author Frank Siegert class CDF_2008_S7540469 : public Analysis { public: /// Constructor CDF_2008_S7540469() : Analysis("CDF_2008_S7540469") { } /// @name Analysis methods //@{ /// Book histograms void init() { // Full final state FinalState fs(-5.0, 5.0); declare(fs, "FS"); // Leading electrons in tracking acceptance IdentifiedFinalState elfs(Cuts::abseta < 5 && Cuts::pT > 25*GeV); elfs.acceptIdPair(PID::ELECTRON); declare(elfs, "LeadingElectrons"); _h_jet_multiplicity = bookHisto1D(1, 1, 1); _h_jet_pT_cross_section_incl_1jet = bookHisto1D(2, 1, 1); _h_jet_pT_cross_section_incl_2jet = bookHisto1D(3, 1, 1); } /// Do the analysis void analyze(const Event & event) { const double weight = event.weight(); // Skip if the event is empty const FinalState& fs = apply(event, "FS"); if (fs.empty()) { MSG_DEBUG("Skipping event " << numEvents() << " because no final state pair found"); vetoEvent; } // Find the Z candidates const FinalState & electronfs = apply(event, "LeadingElectrons"); std::vector > Z_candidates; Particles all_els=electronfs.particles(); for (size_t i=0; i 116.0) { candidate = false; } double abs_eta_0 = fabs(all_els[i].eta()); double abs_eta_1 = fabs(all_els[j].eta()); if (abs_eta_1 < abs_eta_0) { double tmp = abs_eta_0; abs_eta_0 = abs_eta_1; abs_eta_1 = tmp; } if (abs_eta_0 > 1.0) { candidate = false; } if (!(abs_eta_1 < 1.0 || (inRange(abs_eta_1, 1.2, 2.8)))) { candidate = false; } if (candidate) { Z_candidates.push_back(make_pair(all_els[i], all_els[j])); } } } if (Z_candidates.size() != 1) { MSG_DEBUG("Skipping event " << numEvents() << " because no unique electron pair found "); vetoEvent; } // Now build the jets on a FS without the electrons from the Z (including QED radiation) Particles jetparts; for (const Particle& p : fs.particles()) { bool copy = true; if (p.pid() == PID::PHOTON) { FourMomentum p_e0 = Z_candidates[0].first.momentum(); FourMomentum p_e1 = Z_candidates[0].second.momentum(); FourMomentum p_P = p.momentum(); if (deltaR(p_e0, p_P) < 0.2) copy = false; if (deltaR(p_e1, p_P) < 0.2) copy = false; } else { - if (p.genParticle()->id() == Z_candidates[0].first.genParticle()->id()) copy = false; - if (p.genParticle()->id() == Z_candidates[0].second.genParticle()->id()) copy = false; + if (HepMCUtils::uniqueId(p.genParticle()) == HepMCUtils::uniqueId(Z_candidates[0].first.genParticle())) copy = false; + if (HepMCUtils::uniqueId(p.genParticle()) == HepMCUtils::uniqueId(Z_candidates[0].second.genParticle())) copy = false; } if (copy) jetparts.push_back(p); } // Proceed to lepton dressing const PseudoJets pjs = mkPseudoJets(jetparts); const auto jplugin = make_shared(0.7, 0.5, 1.0); const Jets jets_all = mkJets(fastjet::ClusterSequence(pjs, jplugin.get()).inclusive_jets()); const Jets jets_cut = sortByPt(filterBy(jets_all, Cuts::pT > 30*GeV && Cuts::abseta < 2.1)); // FastJets jetpro(FastJets::CDFMIDPOINT, 0.7); // jetpro.calc(jetparts); // // Take jets with pt > 30, |eta| < 2.1: // const Jets& jets = jetpro.jets(); // Jets jets_cut; // foreach (const Jet& j, jets) { // if (j.pT()/GeV > 30.0 && j.abseta() < 2.1) { // jets_cut.push_back(j); // } // } // // Sort by pT: // sort(jets_cut.begin(), jets_cut.end(), cmpMomByPt); // Return if there are no jets: MSG_DEBUG("Num jets above 30 GeV = " << jets_cut.size()); if (jets_cut.empty()) { MSG_DEBUG("No jets pass cuts "); vetoEvent; } // Cut on Delta R between Z electrons and *all* jets for (const Jet& j : jets_cut) { if (deltaR(Z_candidates[0].first, j) < 0.7) vetoEvent; if 
(deltaR(Z_candidates[0].second, j) < 0.7) vetoEvent; } // Fill histograms for (size_t njet=1; njet<=jets_cut.size(); ++njet) { _h_jet_multiplicity->fill(njet, weight); } for (const Jet& j : jets_cut) { if (jets_cut.size() > 0) { _h_jet_pT_cross_section_incl_1jet->fill(j.pT(), weight); } if (jets_cut.size() > 1) { _h_jet_pT_cross_section_incl_2jet->fill(j.pT(), weight); } } } /// Rescale histos void finalize() { const double invlumi = crossSection()/femtobarn/sumOfWeights(); scale(_h_jet_multiplicity, invlumi); scale(_h_jet_pT_cross_section_incl_1jet, invlumi); scale(_h_jet_pT_cross_section_incl_2jet, invlumi); } //@} private: /// @name Histograms //@{ Histo1DPtr _h_jet_multiplicity; Histo1DPtr _h_jet_pT_cross_section_incl_1jet; Histo1DPtr _h_jet_pT_cross_section_incl_2jet; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(CDF_2008_S7540469); } diff --git a/analyses/pluginCDF/CDF_2008_S8095620.cc b/analyses/pluginCDF/CDF_2008_S8095620.cc --- a/analyses/pluginCDF/CDF_2008_S8095620.cc +++ b/analyses/pluginCDF/CDF_2008_S8095620.cc @@ -1,187 +1,187 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/InvMassFinalState.hh" namespace Rivet { /// @brief CDF Run II Z + b-jet cross-section measurement class CDF_2008_S8095620 : public Analysis { public: /// Constructor. /// jet cuts: |eta| <= 1.5 CDF_2008_S8095620() : Analysis("CDF_2008_S8095620"), _Rjet(0.7), _JetPtCut(20.), _JetEtaCut(1.5), _Lep1PtCut(18.), _Lep2PtCut(10.), _LepEtaCut(3.2), _sumWeightSelected(0.0) { } /// @name Analysis methods //@{ void init() { // Set up projections const FinalState fs(-3.2, 3.2); declare(fs, "FS"); // Create a final state with any e+e- or mu+mu- pair with // invariant mass 76 -> 106 GeV and ET > 18 (Z decay products) vector > vids; vids.push_back(make_pair(PID::ELECTRON, PID::POSITRON)); vids.push_back(make_pair(PID::MUON, PID::ANTIMUON)); FinalState fs2(-3.2, 3.2); InvMassFinalState invfs(fs2, vids, 76*GeV, 106*GeV); declare(invfs, "INVFS"); // Make a final state without the Z decay products for jet clustering VetoedFinalState vfs(fs); vfs.addVetoOnThisFinalState(invfs); declare(vfs, "VFS"); declare(FastJets(vfs, FastJets::CDFMIDPOINT, 0.7), "Jets"); // Book histograms _dStot = bookHisto1D(1, 1, 1); _dSdET = bookHisto1D(2, 1, 1); _dSdETA = bookHisto1D(3, 1, 1); _dSdZpT = bookHisto1D(4, 1, 1); _dSdNJet = bookHisto1D(5, 1, 1); _dSdNbJet = bookHisto1D(6, 1, 1); } // Do the analysis void analyze(const Event& event) { // Check we have an l+l- pair that passes the kinematic cuts // Get the Z decay products (mu+mu- or e+e- pair) const InvMassFinalState& invMassFinalState = apply(event, "INVFS"); const Particles& ZDecayProducts = invMassFinalState.particles(); // make sure we have 2 Z decay products (mumu or ee) if (ZDecayProducts.size() < 2) vetoEvent; //new cuts double Lep1Pt = ZDecayProducts[0].perp(); double Lep2Pt = ZDecayProducts[1].perp(); double Lep1Eta = fabs(ZDecayProducts[0].rapidity()); double Lep2Eta = fabs(ZDecayProducts[1].rapidity()); if (Lep1Eta > _LepEtaCut || Lep2Eta > _LepEtaCut) vetoEvent; if (ZDecayProducts[0].abspid()==13 && ((Lep1Eta > 1.5 || Lep2Eta > 1.5) || (Lep1Eta > 1.0 && Lep2Eta > 1.0))) { vetoEvent; } if (Lep1Pt > Lep2Pt) { if (Lep1Pt < _Lep1PtCut || Lep2Pt < _Lep2PtCut) vetoEvent; } else { if (Lep1Pt < _Lep2PtCut || Lep2Pt < _Lep1PtCut) vetoEvent; } _sumWeightSelected += event.weight(); /// @todo: write out a warning if 
there are more than two decay products FourMomentum Zmom = ZDecayProducts[0].momentum() + ZDecayProducts[1].momentum(); // Put all b-quarks in a vector /// @todo Use a b-hadron search rather than b-quarks for tagging Particles bquarks; - for(ConstGenParticlePtr p: particles(event.genEvent())) { + for(ConstGenParticlePtr p: HepMCUtils::particles(event.genEvent())) { if (std::abs(p->pdg_id()) == PID::BQUARK) { bquarks += Particle(*p); } } // Get jets const FastJets& jetpro = apply(event, "Jets"); MSG_DEBUG("Jet multiplicity before any pT cut = " << jetpro.size()); const PseudoJets& jets = jetpro.pseudoJetsByPt(); MSG_DEBUG("jetlist size = " << jets.size()); int numBJet = 0; int numJet = 0; // for each b-jet plot the ET and the eta of the jet, normalise to the total cross section at the end // for each event plot N jet and pT(Z), normalise to the total cross section at the end for (PseudoJets::const_iterator jt = jets.begin(); jt != jets.end(); ++jt) { // select jets that pass the kinematic cuts if (jt->perp() > _JetPtCut && fabs(jt->rapidity()) <= _JetEtaCut) { numJet++; // does the jet contain a b-quark? bool bjet = false; foreach (const Particle& bquark, bquarks) { if (deltaR(jt->rapidity(), jt->phi(), bquark.rapidity(),bquark.phi()) <= _Rjet) { bjet = true; break; } } // end loop around b-jets if (bjet) { numBJet++; _dSdET->fill(jt->perp(),event.weight()); _dSdETA->fill(fabs(jt->rapidity()),event.weight()); } } } // end loop around jets // wasn't asking for b-jets before!!!! if(numJet > 0 && numBJet > 0) _dSdNJet->fill(numJet,event.weight()); if(numBJet > 0) { _dStot->fill(1960.0,event.weight()); _dSdNbJet->fill(numBJet,event.weight()); _dSdZpT->fill(Zmom.pT(),event.weight()); } } // Finalize void finalize() { // normalise histograms // scale by 1 / the sum-of-weights of events that pass the Z cuts // since the cross sections are normalized to the inclusive // Z cross sections. double Scale = 1.0; if (_sumWeightSelected != 0.0) Scale = 1.0/_sumWeightSelected; scale(_dStot,Scale); scale(_dSdET,Scale); scale(_dSdETA,Scale); scale(_dSdNJet,Scale); scale(_dSdNbJet,Scale); scale(_dSdZpT,Scale); } //@} private: double _Rjet; double _JetPtCut; double _JetEtaCut; double _Lep1PtCut; double _Lep2PtCut; double _LepEtaCut; double _sumWeightSelected; //@{ /// Histograms Histo1DPtr _dStot; Histo1DPtr _dSdET; Histo1DPtr _dSdETA; Histo1DPtr _dSdNJet; Histo1DPtr _dSdNbJet; Histo1DPtr _dSdZpT; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(CDF_2008_S8095620); } diff --git a/analyses/pluginCMS/CMS_2011_S8941262.cc b/analyses/pluginCMS/CMS_2011_S8941262.cc --- a/analyses/pluginCMS/CMS_2011_S8941262.cc +++ b/analyses/pluginCMS/CMS_2011_S8941262.cc @@ -1,76 +1,76 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" #include "Rivet/Particle.hh" namespace Rivet { class CMS_2011_S8941262 : public Analysis { public: /// Constructor CMS_2011_S8941262() : Analysis("CMS_2011_S8941262") { } /// Book histograms and initialise projections before the run void init() { _h_total = bookHisto1D(1, 1, 1); _h_mupt = bookHisto1D(2, 1, 1); _h_mueta = bookHisto1D(3, 1, 1); nbtot=0.; nbmutot=0.; IdentifiedFinalState ifs(Cuts::abseta < 2.1 && Cuts::pT > 6*GeV); ifs.acceptIdPair(PID::MUON); declare(ifs, "IFS"); } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); // a b-quark must have been produced /// @todo Ouch. Use hadron tagging... 
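// A minimal sketch of the hadron-level tagging suggested by the todo above, assuming a
// standard HeavyHadrons projection had been declared in init() as
// declare(HeavyHadrons(), "BHads") (with the corresponding header included); shown as
// an illustration only, not as part of this change:
//   const Particles bhads = apply<HeavyHadrons>(event, "BHads").bHadrons();
//   if (bhads.empty()) vetoEvent;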
int nb = 0; - for(ConstGenParticlePtr p: particles(event.genEvent())) { + for(ConstGenParticlePtr p: HepMCUtils::particles(event.genEvent())) { if (abs(p->pdg_id()) == PID::BQUARK) nb += 1; } if (nb == 0) vetoEvent; nbtot += weight; // Event must contain a muon Particles muons = apply(event, "IFS").particlesByPt(); if (muons.size() < 1) vetoEvent; nbmutot += weight; FourMomentum pmu = muons[0].momentum(); _h_total->fill( 7000/GeV, weight); _h_mupt->fill( pmu.pT()/GeV, weight); _h_mueta->fill( pmu.eta()/GeV, weight); } /// Normalise histograms etc., after the run void finalize() { scale(_h_total, crossSection()/microbarn/sumOfWeights()); scale(_h_mupt, crossSection()/nanobarn/sumOfWeights()); scale(_h_mueta, crossSection()/nanobarn/sumOfWeights()); } private: double nbtot, nbmutot; Histo1DPtr _h_total; Histo1DPtr _h_mupt; Histo1DPtr _h_mueta; }; // Hook for the plugin system DECLARE_RIVET_PLUGIN(CMS_2011_S8941262); } diff --git a/analyses/pluginCMS/CMS_2013_I1256943.cc b/analyses/pluginCMS/CMS_2013_I1256943.cc --- a/analyses/pluginCMS/CMS_2013_I1256943.cc +++ b/analyses/pluginCMS/CMS_2013_I1256943.cc @@ -1,186 +1,186 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/ZFinder.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/UnstableFinalState.hh" namespace Rivet { /// CMS cross-section and angular correlations in Z boson + b-hadrons events at 7 TeV class CMS_2013_I1256943 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(CMS_2013_I1256943); /// Add projections and book histograms void init() { _sumW = 0; _sumW50 = 0; _sumWpT = 0; FinalState fs(Cuts::abseta < 2.4 && Cuts::pT > 20*GeV); declare(fs, "FS"); UnstableFinalState ufs(Cuts::abseta < 2 && Cuts::pT > 15*GeV); declare(ufs, "UFS"); Cut zetacut = Cuts::abseta < 2.4; ZFinder zfindermu(fs, zetacut, PID::MUON, 81.0*GeV, 101.0*GeV, 0.1, ZFinder::NOCLUSTER, ZFinder::TRACK, 91.2*GeV); declare(zfindermu, "ZFinderMu"); ZFinder zfinderel(fs, zetacut, PID::ELECTRON, 81.0*GeV, 101.0*GeV, 0.1, ZFinder::NOCLUSTER, ZFinder::TRACK, 91.2*GeV); declare(zfinderel, "ZFinderEl"); // Histograms in non-boosted region of Z pT _h_dR_BB = bookHisto1D(1, 1, 1); _h_dphi_BB = bookHisto1D(2, 1, 1); _h_min_dR_ZB = bookHisto1D(3, 1, 1); _h_A_ZBB = bookHisto1D(4, 1, 1); // Histograms in boosted region of Z pT (pT > 50 GeV) _h_dR_BB_boost = bookHisto1D(5, 1, 1); _h_dphi_BB_boost = bookHisto1D(6, 1, 1); _h_min_dR_ZB_boost = bookHisto1D(7, 1, 1); _h_A_ZBB_boost = bookHisto1D(8, 1, 1); _h_min_ZpT = bookHisto1D(9,1,1); } /// Do the analysis void analyze(const Event& e) { vector Bmom; const UnstableFinalState& ufs = apply(e, "UFS"); const ZFinder& zfindermu = apply(e, "ZFinderMu"); const ZFinder& zfinderel = apply(e, "ZFinderEl"); // Look for a Z --> mu+ mu- event in the final state if (zfindermu.empty() && zfinderel.empty()) vetoEvent; const Particles& z = !zfindermu.empty() ? 
zfindermu.bosons() : zfinderel.bosons(); const bool is_boosted = ( z[0].pT() > 50*GeV ); // Loop over the unstable particles for (const Particle& p : ufs.particles()) { const PdgId pid = p.pid(); // Look for particles with a bottom quark if (PID::hasBottom(pid)) { bool good_B = false; ConstGenParticlePtr pgen = p.genParticle(); ConstGenVertexPtr vgen = pgen -> end_vertex(); // Loop over the decay products of each unstable particle, looking for a b-hadron pair /// @todo Avoid HepMC API - for (ConstGenParticlePtr it: vgen->particles_out()){ + for (ConstGenParticlePtr it: HepMCUtils::particles(vgen, Relatives::CHILDREN)){ // If the particle produced has a bottom quark do not count it and go to the next loop cycle. if (!( PID::hasBottom( it->pdg_id() ) ) ) { good_B = true; continue; } else { good_B = false; break; } } if (good_B ) Bmom.push_back( p.momentum() ); } else continue; } // If there are more than two B's in the final state veto the event if (Bmom.size() != 2 ) vetoEvent; // Calculate the observables double dphiBB = deltaPhi(Bmom[0], Bmom[1]); double dRBB = deltaR(Bmom[0], Bmom[1]); const FourMomentum& pZ = z[0].momentum(); const bool closest_B = ( deltaR(pZ, Bmom[0]) < deltaR(pZ, Bmom[1]) ); const double mindR_ZB = closest_B ? deltaR(pZ, Bmom[0]) : deltaR(pZ, Bmom[1]); const double maxdR_ZB = closest_B ? deltaR(pZ, Bmom[1]) : deltaR(pZ, Bmom[0]); const double AZBB = ( maxdR_ZB - mindR_ZB ) / ( maxdR_ZB + mindR_ZB ); // Get event weight for histogramming const double weight = e.weight(); // Fill the histograms in the non-boosted region _h_dphi_BB->fill(dphiBB, weight); _h_dR_BB->fill(dRBB, weight); _h_min_dR_ZB->fill(mindR_ZB, weight); _h_A_ZBB->fill(AZBB, weight); _sumW += weight; _sumWpT += weight; // Fill the histograms in the boosted region if (is_boosted) { _sumW50 += weight; _h_dphi_BB_boost->fill(dphiBB, weight); _h_dR_BB_boost->fill(dRBB, weight); _h_min_dR_ZB_boost->fill(mindR_ZB, weight); _h_A_ZBB_boost->fill(AZBB, weight); } // Fill Z pT (cumulative) histogram _h_min_ZpT->fill(0, weight); if (pZ.pT() > 40*GeV ) { _sumWpT += weight; _h_min_ZpT->fill(40, weight); } if (pZ.pT() > 80*GeV ) { _sumWpT += weight; _h_min_ZpT->fill(80, weight); } if (pZ.pT() > 120*GeV ) { _sumWpT += weight; _h_min_ZpT->fill(120, weight); } Bmom.clear(); } /// Finalize void finalize() { // Normalize excluding overflow bins (d'oh) normalize(_h_dR_BB, 0.7*crossSection()*_sumW/sumOfWeights(), false); // d01-x01-y01 normalize(_h_dphi_BB, 0.53*crossSection()*_sumW/sumOfWeights(), false); // d02-x01-y01 normalize(_h_min_dR_ZB, 0.84*crossSection()*_sumW/sumOfWeights(), false); // d03-x01-y01 normalize(_h_A_ZBB, 0.2*crossSection()*_sumW/sumOfWeights(), false); // d04-x01-y01 normalize(_h_dR_BB_boost, 0.84*crossSection()*_sumW50/sumOfWeights(), false); // d05-x01-y01 normalize(_h_dphi_BB_boost, 0.63*crossSection()*_sumW50/sumOfWeights(), false); // d06-x01-y01 normalize(_h_min_dR_ZB_boost, 1*crossSection()*_sumW50/sumOfWeights(), false); // d07-x01-y01 normalize(_h_A_ZBB_boost, 0.25*crossSection()*_sumW50/sumOfWeights(), false); // d08-x01-y01 normalize(_h_min_ZpT, 40*crossSection()*_sumWpT/sumOfWeights(), false); // d09-x01-y01 } private: /// @name Weight counters //@{ double _sumW, _sumW50, _sumWpT; //@} /// @name Histograms //@{ Histo1DPtr _h_dphi_BB, _h_dR_BB, _h_min_dR_ZB, _h_A_ZBB; Histo1DPtr _h_dphi_BB_boost, _h_dR_BB_boost, _h_min_dR_ZB_boost, _h_A_ZBB_boost, _h_min_ZpT; //@} }; // Hook for the plugin system DECLARE_RIVET_PLUGIN(CMS_2013_I1256943); } diff --git 
a/analyses/pluginCMS/CMS_2015_I1370682.cc b/analyses/pluginCMS/CMS_2015_I1370682.cc --- a/analyses/pluginCMS/CMS_2015_I1370682.cc +++ b/analyses/pluginCMS/CMS_2015_I1370682.cc @@ -1,605 +1,605 @@ #include "Rivet/Analysis.hh" #include "Rivet/Math/LorentzTrans.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/FastJets.hh" namespace Rivet { namespace { //< only visible in this compilation unit /// @brief Pseudo top finder /// /// Find top quark in the particle level. /// The definition is based on the agreement at the LHC working group. class PseudoTop : public FinalState { public: /// @name Standard constructors and destructors. //@{ /// The default constructor. May specify the minimum and maximum /// pseudorapidity \f$ \eta \f$ and the min \f$ p_T \f$ (in GeV). PseudoTop(double lepR = 0.1, double lepMinPt = 20, double lepMaxEta = 2.4, double jetR = 0.4, double jetMinPt = 30, double jetMaxEta = 4.7) : FinalState(-MAXDOUBLE, MAXDOUBLE, 0*GeV), _lepR(lepR), _lepMinPt(lepMinPt), _lepMaxEta(lepMaxEta), _jetR(jetR), _jetMinPt(jetMinPt), _jetMaxEta(jetMaxEta) { setName("PseudoTop"); } enum TTbarMode {CH_NONE=-1, CH_FULLHADRON = 0, CH_SEMILEPTON, CH_FULLLEPTON}; enum DecayMode {CH_HADRON = 0, CH_MUON, CH_ELECTRON}; TTbarMode mode() const { if (!_isValid) return CH_NONE; if (_mode1 == CH_HADRON && _mode2 == CH_HADRON) return CH_FULLHADRON; else if ( _mode1 != CH_HADRON && _mode2 != CH_HADRON) return CH_FULLLEPTON; else return CH_SEMILEPTON; } DecayMode mode1() const {return _mode1;} DecayMode mode2() const {return _mode2;} /// Clone on the heap. virtual unique_ptr clone() const { return unique_ptr(new PseudoTop(*this)); } //@} public: Particle t1() const {return _t1;} Particle t2() const {return _t2;} Particle b1() const {return _b1;} Particle b2() const {return _b2;} ParticleVector wDecays1() const {return _wDecays1;} ParticleVector wDecays2() const {return _wDecays2;} Jets jets() const {return _jets;} Jets bjets() const {return _bjets;} Jets ljets() const {return _ljets;} protected: // Apply the projection to the event void project(const Event& e); // override; ///< @todo Re-enable when C++11 allowed void cleanup(std::map >& v, const bool doCrossCleanup=false) const; private: const double _lepR, _lepMinPt, _lepMaxEta; const double _jetR, _jetMinPt, _jetMaxEta; //constexpr ///< @todo Re-enable when C++11 allowed static double _tMass; // = 172.5*GeV; ///< @todo Re-enable when C++11 allowed //constexpr ///< @todo Re-enable when C++11 allowed static double _wMass; // = 80.4*GeV; ///< @todo Re-enable when C++11 allowed private: bool _isValid; DecayMode _mode1, _mode2; Particle _t1, _t2; Particle _b1, _b2; ParticleVector _wDecays1, _wDecays2; Jets _jets, _bjets, _ljets; }; // More implementation below the analysis code } /// Pseudo-top analysis from CMS class CMS_2015_I1370682 : public Analysis { public: CMS_2015_I1370682() : Analysis("CMS_2015_I1370682"), _applyCorrection(true), _doShapeOnly(true) { } void init() { declare(PseudoTop(0.1, 20, 2.4, 0.5, 30, 2.4), "ttbar"); // Lepton + Jet channel _hSL_topPt = bookHisto1D("d15-x01-y01"); // 1/sigma dsigma/dpt(top) _hSL_topPtTtbarSys = bookHisto1D("d16-x01-y01"); // 1/sigma dsigma/dpt*(top) _hSL_topY = bookHisto1D("d17-x01-y01"); // 1/sigma dsigma/dy(top) _hSL_ttbarDelPhi = bookHisto1D("d18-x01-y01"); // 1/sigma dsigma/ddeltaphi(t,tbar) _hSL_topPtLead = bookHisto1D("d19-x01-y01"); // 1/sigma dsigma/dpt(t1) _hSL_topPtSubLead = bookHisto1D("d20-x01-y01"); // 1/sigma dsigma/dpt(t2) _hSL_ttbarPt = bookHisto1D("d21-x01-y01"); // 1/sigma 
dsigma/dpt(ttbar) _hSL_ttbarY = bookHisto1D("d22-x01-y01"); // 1/sigma dsigma/dy(ttbar) _hSL_ttbarMass = bookHisto1D("d23-x01-y01"); // 1/sigma dsigma/dm(ttbar) // Dilepton channel _hDL_topPt = bookHisto1D("d24-x01-y01"); // 1/sigma dsigma/dpt(top) _hDL_topPtTtbarSys = bookHisto1D("d25-x01-y01"); // 1/sigma dsigma/dpt*(top) _hDL_topY = bookHisto1D("d26-x01-y01"); // 1/sigma dsigma/dy(top) _hDL_ttbarDelPhi = bookHisto1D("d27-x01-y01"); // 1/sigma dsigma/ddeltaphi(t,tbar) _hDL_topPtLead = bookHisto1D("d28-x01-y01"); // 1/sigma dsigma/dpt(t1) _hDL_topPtSubLead = bookHisto1D("d29-x01-y01"); // 1/sigma dsigma/dpt(t2) _hDL_ttbarPt = bookHisto1D("d30-x01-y01"); // 1/sigma dsigma/dpt(ttbar) _hDL_ttbarY = bookHisto1D("d31-x01-y01"); // 1/sigma dsigma/dy(ttbar) _hDL_ttbarMass = bookHisto1D("d32-x01-y01"); // 1/sigma dsigma/dm(ttbar) } void analyze(const Event& event) { // Get the ttbar candidate const PseudoTop& ttbar = apply(event, "ttbar"); if ( ttbar.mode() == PseudoTop::CH_NONE ) vetoEvent; const FourMomentum& t1P4 = ttbar.t1().momentum(); const FourMomentum& t2P4 = ttbar.t2().momentum(); const double pt1 = std::max(t1P4.pT(), t2P4.pT()); const double pt2 = std::min(t1P4.pT(), t2P4.pT()); const double dPhi = deltaPhi(t1P4, t2P4); const FourMomentum ttP4 = t1P4 + t2P4; const FourMomentum t1P4AtCM = LorentzTransform::mkFrameTransformFromBeta(ttP4.betaVec()).transform(t1P4); const double weight = event.weight(); if ( ttbar.mode() == PseudoTop::CH_SEMILEPTON ) { const Particle lCand1 = ttbar.wDecays1()[0]; // w1 dau0 is the lepton in the PseudoTop if (lCand1.pT() < 33*GeV || lCand1.abseta() > 2.1) vetoEvent; _hSL_topPt->fill(t1P4.pT(), weight); _hSL_topPt->fill(t2P4.pT(), weight); _hSL_topPtTtbarSys->fill(t1P4AtCM.pT(), weight); _hSL_topY->fill(t1P4.rapidity(), weight); _hSL_topY->fill(t2P4.rapidity(), weight); _hSL_ttbarDelPhi->fill(dPhi, weight); _hSL_topPtLead->fill(pt1, weight); _hSL_topPtSubLead->fill(pt2, weight); _hSL_ttbarPt->fill(ttP4.pT(), weight); _hSL_ttbarY->fill(ttP4.rapidity(), weight); _hSL_ttbarMass->fill(ttP4.mass(), weight); } else if ( ttbar.mode() == PseudoTop::CH_FULLLEPTON ) { const Particle lCand1 = ttbar.wDecays1()[0]; // dau0 are the lepton in the PseudoTop const Particle lCand2 = ttbar.wDecays2()[0]; // dau0 are the lepton in the PseudoTop if (lCand1.pT() < 20*GeV || lCand1.abseta() > 2.4) vetoEvent; if (lCand2.pT() < 20*GeV || lCand2.abseta() > 2.4) vetoEvent; _hDL_topPt->fill(t1P4.pT(), weight); _hDL_topPt->fill(t2P4.pT(), weight); _hDL_topPtTtbarSys->fill(t1P4AtCM.pT(), weight); _hDL_topY->fill(t1P4.rapidity(), weight); _hDL_topY->fill(t2P4.rapidity(), weight); _hDL_ttbarDelPhi->fill(dPhi, weight); _hDL_topPtLead->fill(pt1, weight); _hDL_topPtSubLead->fill(pt2, weight); _hDL_ttbarPt->fill(ttP4.pT(), weight); _hDL_ttbarY->fill(ttP4.rapidity(), weight); _hDL_ttbarMass->fill(ttP4.mass(), weight); } } void finalize() { if ( _applyCorrection ) { // Correction functions for TOP-12-028 paper, (parton bin height)/(pseudotop bin height) const double ch15[] = { 5.473609, 4.941048, 4.173346, 3.391191, 2.785644, 2.371346, 2.194161, 2.197167, }; const double ch16[] = { 5.470905, 4.948201, 4.081982, 3.225532, 2.617519, 2.239217, 2.127878, 2.185918, }; const double ch17[] = { 10.003667, 4.546519, 3.828115, 3.601018, 3.522194, 3.524694, 3.600951, 3.808553, 4.531891, 9.995370, }; const double ch18[] = { 4.406683, 4.054041, 3.885393, 4.213646, }; const double ch19[] = { 6.182537, 5.257703, 4.422280, 3.568402, 2.889408, 2.415878, 2.189974, 2.173210, }; const double ch20[] = { 5.199874, 
4.693318, 3.902882, 3.143785, 2.607877, 2.280189, 2.204124, 2.260829, }; const double ch21[] = { 6.053523, 3.777506, 3.562251, 3.601356, 3.569347, 3.410472, }; const double ch22[] = { 11.932351, 4.803773, 3.782709, 3.390775, 3.226806, 3.218982, 3.382678, 3.773653, 4.788191, 11.905338, }; const double ch23[] = { 7.145255, 5.637595, 4.049882, 3.025917, 2.326430, 1.773824, 1.235329, }; const double ch24[] = { 2.268193, 2.372063, 2.323975, 2.034655, 1.736793, }; const double ch25[] = { 2.231852, 2.383086, 2.341894, 2.031318, 1.729672, 1.486993, }; const double ch26[] = { 3.993526, 2.308249, 2.075136, 2.038297, 2.036302, 2.078270, 2.295817, 4.017713, }; const double ch27[] = { 2.205978, 2.175010, 2.215376, 2.473144, }; const double ch28[] = { 2.321077, 2.371895, 2.338871, 2.057821, 1.755382, }; const double ch29[] = { 2.222707, 2.372591, 2.301688, 1.991162, 1.695343, }; const double ch30[] = { 2.599677, 2.026855, 2.138620, 2.229553, }; const double ch31[] = { 5.791779, 2.636219, 2.103642, 1.967198, 1.962168, 2.096514, 2.641189, 5.780828, }; const double ch32[] = { 2.006685, 2.545525, 2.477745, 2.335747, 2.194226, 2.076500, }; applyCorrection(_hSL_topPt, ch15); applyCorrection(_hSL_topPtTtbarSys, ch16); applyCorrection(_hSL_topY, ch17); applyCorrection(_hSL_ttbarDelPhi, ch18); applyCorrection(_hSL_topPtLead, ch19); applyCorrection(_hSL_topPtSubLead, ch20); applyCorrection(_hSL_ttbarPt, ch21); applyCorrection(_hSL_ttbarY, ch22); applyCorrection(_hSL_ttbarMass, ch23); applyCorrection(_hDL_topPt, ch24); applyCorrection(_hDL_topPtTtbarSys, ch25); applyCorrection(_hDL_topY, ch26); applyCorrection(_hDL_ttbarDelPhi, ch27); applyCorrection(_hDL_topPtLead, ch28); applyCorrection(_hDL_topPtSubLead, ch29); applyCorrection(_hDL_ttbarPt, ch30); applyCorrection(_hDL_ttbarY, ch31); applyCorrection(_hDL_ttbarMass, ch32); } if ( _doShapeOnly ) { normalize(_hSL_topPt ); normalize(_hSL_topPtTtbarSys); normalize(_hSL_topY ); normalize(_hSL_ttbarDelPhi ); normalize(_hSL_topPtLead ); normalize(_hSL_topPtSubLead ); normalize(_hSL_ttbarPt ); normalize(_hSL_ttbarY ); normalize(_hSL_ttbarMass ); normalize(_hDL_topPt ); normalize(_hDL_topPtTtbarSys); normalize(_hDL_topY ); normalize(_hDL_ttbarDelPhi ); normalize(_hDL_topPtLead ); normalize(_hDL_topPtSubLead ); normalize(_hDL_ttbarPt ); normalize(_hDL_ttbarY ); normalize(_hDL_ttbarMass ); } else { const double s = 1./sumOfWeights(); scale(_hSL_topPt , s); scale(_hSL_topPtTtbarSys, s); scale(_hSL_topY , s); scale(_hSL_ttbarDelPhi , s); scale(_hSL_topPtLead , s); scale(_hSL_topPtSubLead , s); scale(_hSL_ttbarPt , s); scale(_hSL_ttbarY , s); scale(_hSL_ttbarMass , s); scale(_hDL_topPt , s); scale(_hDL_topPtTtbarSys, s); scale(_hDL_topY , s); scale(_hDL_ttbarDelPhi , s); scale(_hDL_topPtLead , s); scale(_hDL_topPtSubLead , s); scale(_hDL_ttbarPt , s); scale(_hDL_ttbarY , s); scale(_hDL_ttbarMass , s); } } void applyCorrection(Histo1DPtr h, const double* cf) { vector& bins = h->bins(); for (size_t i=0, n=bins.size(); i >& v, const bool doCrossCleanup) const { vector >::iterator> toErase; set usedLeg1, usedLeg2; if ( !doCrossCleanup ) { /// @todo Reinstate when C++11 allowed: for (auto key = v.begin(); key != v.end(); ++key) { for (map >::iterator key = v.begin(); key != v.end(); ++key) { const size_t leg1 = key->second.first; const size_t leg2 = key->second.second; if (usedLeg1.find(leg1) == usedLeg1.end() and usedLeg2.find(leg2) == usedLeg2.end()) { usedLeg1.insert(leg1); usedLeg2.insert(leg2); } else { toErase.push_back(key); } } } else { /// @todo Reinstate when C++11 
allowed: for (auto key = v.begin(); key != v.end(); ++key) { for (map >::iterator key = v.begin(); key != v.end(); ++key) { const size_t leg1 = key->second.first; const size_t leg2 = key->second.second; if (usedLeg1.find(leg1) == usedLeg1.end() and usedLeg1.find(leg2) == usedLeg1.end()) { usedLeg1.insert(leg1); usedLeg1.insert(leg2); } else { toErase.push_back(key); } } } /// @todo Reinstate when C++11 allowed: for (auto& key : toErase) v.erase(key); for (size_t i = 0; i < toErase.size(); ++i) v.erase(toErase[i]); } void PseudoTop::project(const Event& e) { // Leptons : do the lepton clustering anti-kt R=0.1 using stable photons and leptons not from hadron decay // Neutrinos : neutrinos not from hadron decay // MET : vector sum of all invisible particles in x-y plane // Jets : anti-kt R=0.4 using all particles excluding neutrinos and particles used in lepton clustering // add ghost B hadrons during the jet clustering to identify B jets. // W->lv : dressed lepton and neutrino pairs // W->jj : light flavored dijet // W candidate : select lv or jj pairs which minimise |mW1-80.4|+|mW2-80.4| // lepton-neutrino pair will be selected with higher priority // t->Wb : W candidate + b jet // t candidate : select Wb pairs which minimise |mtop1-172.5|+|mtop2-172.5| _isValid = false; _theParticles.clear(); _wDecays1.clear(); _wDecays2.clear(); _jets.clear(); _bjets.clear(); _ljets.clear(); _mode1 = _mode2 = CH_HADRON; // Collect final state particles Particles pForLep, pForJet; Particles neutrinos; // Prompt neutrinos /// @todo Avoid this unsafe jump into HepMC -- all this can be done properly via VisibleFS and HeavyHadrons projections - for (ConstGenParticlePtr p : Rivet::particles(e.genEvent())) {// + for (ConstGenParticlePtr p : HepMCUtils::particles(e.genEvent())) {// const int status = p->status(); const int pdgId = p->pdg_id(); if (status == 1) { Particle rp = *p; if (!PID::isHadron(pdgId) && !rp.fromHadron()) { // Collect particles not from hadron decay if (rp.isNeutrino()) { // Prompt neutrinos are kept in separate collection neutrinos.push_back(rp); } else if (pdgId == 22 || rp.isLepton()) { // Leptons and photons for the dressing pForLep.push_back(rp); } } else if (!rp.isNeutrino()) { // Use all particles from hadron decay pForJet.push_back(rp); } } else if (PID::isHadron(pdgId) && PID::hasBottom(pdgId)) { // NOTE: Consider B hadrons with pT > 5GeV - not in CMS proposal //if ( p->momentum().perp() < 5 ) continue; // Do unstable particles, to be used in the ghost B clustering // Use last B hadrons only bool isLast = true; - for (ConstGenParticlePtr pp : Rivet::particles(p->end_vertex(), Relatives::CHILDREN)) { + for (ConstGenParticlePtr pp : HepMCUtils::particles(p->end_vertex(), Relatives::CHILDREN)) { if (PID::hasBottom(pp->pdg_id())) { isLast = false; break; } } if (!isLast) continue; Particle ghost(pdgId, FourMomentum(p->momentum())); // Rescale momentum by 10^-20 ghost.setMomentum(ghost.momentum()*1.e-20 / ghost.momentum().rho()); //Particle ghost(pdgId, FourMomentum(p->momentum())*1e-20/p->momentum().rho()); pForJet.push_back(ghost); } }// // Start object building from trivial thing - prompt neutrinos sortByPt(neutrinos); // Proceed to lepton dressing FastJets fjLep(FinalState(), FastJets::ANTIKT, _lepR); fjLep.calc(pForLep); Jets leptons; vector leptonsId; set dressedIdxs; for (const Jet& lep : fjLep.jetsByPt(_lepMinPt)) { if (lep.abseta() > _lepMaxEta) continue; double leadingPt = -1; int leptonId = 0; for (const Particle& p : lep.particles()) { /// @warning Barcodes aren't future-proof 
in HepMC - dressedIdxs.insert(p.genParticle()->id()); + dressedIdxs.insert(HepMCUtils::uniqueId(p.genParticle())); if (p.isLepton() && p.pT() > leadingPt) { leadingPt = p.pT(); leptonId = p.pid(); } } if (leptonId == 0) continue; leptons.push_back(lep); leptonsId.push_back(leptonId); } // Re-use particles not used in lepton dressing for (const Particle& rp : pForLep) { - const int barcode = rp.genParticle()->id(); + const int barcode = HepMCUtils::uniqueId(rp.genParticle()); // Skip if the particle is used in dressing if (dressedIdxs.find(barcode) != dressedIdxs.end()) continue; // Put back to be used in jet clustering pForJet.push_back(rp); } // Then do the jet clustering FastJets fjJet(FinalState(), FastJets::ANTIKT, _jetR); //fjJet.useInvisibles(); // NOTE: CMS proposal to remove neutrinos (AB: wouldn't work anyway, since they were excluded from clustering inputs) fjJet.calc(pForJet); for (const Jet& jet : fjJet.jetsByPt(_jetMinPt)) { if (jet.abseta() > _jetMaxEta) continue; _jets.push_back(jet); bool isBJet = false; for (const Particle& rp : jet.particles()) { if (PID::hasBottom(rp.pdgId())) { isBJet = true; break; } } if ( isBJet ) _bjets.push_back(jet); else _ljets.push_back(jet); } // Every building blocks are ready. Continue to pseudo-W and pseudo-top combination if (_bjets.size() < 2) return; // Ignore single top for now map > wLepCandIdxs; map > wHadCandIdxs; // Collect leptonic-decaying W's for (size_t iLep = 0, nLep = leptons.size(); iLep < nLep; ++iLep) { const Jet& lep = leptons.at(iLep); for (size_t iNu = 0, nNu = neutrinos.size(); iNu < nNu; ++iNu) { const Particle& nu = neutrinos.at(iNu); const double m = (lep.momentum()+nu.momentum()).mass(); const double dm = std::abs(m-_wMass); wLepCandIdxs[dm] = make_pair(iLep, iNu); } } // Continue to hadronic decaying W's for (size_t i = 0, nLjet = _ljets.size(); i < nLjet; ++i) { const Jet& ljet1 = _ljets[i]; for (size_t j = i+1; j < nLjet; ++j) { const Jet& ljet2 = _ljets[j]; const double m = (ljet1.momentum()+ljet2.momentum()).mass(); const double dm = std::abs(m-_wMass); wHadCandIdxs[dm] = make_pair(i, j); } } // Cleanup W candidate, choose pairs with minimum dm if they share decay products cleanup(wLepCandIdxs); cleanup(wHadCandIdxs, true); const size_t nWLepCand = wLepCandIdxs.size(); const size_t nWHadCand = wHadCandIdxs.size(); if (nWLepCand + nWHadCand < 2) return; // We skip single top int w1Q = 1, w2Q = -1; int w1dau1Id = 1, w2dau1Id = -1; FourMomentum w1dau1LVec, w1dau2LVec; FourMomentum w2dau1LVec, w2dau2LVec; if (nWLepCand == 0) { // Full hadronic case const pair& idPair1 = wHadCandIdxs.begin()->second; const pair& idPair2 = (++wHadCandIdxs.begin())->second; ///< @todo Reinstate std::next const Jet& w1dau1 = _ljets[idPair1.first]; const Jet& w1dau2 = _ljets[idPair1.second]; const Jet& w2dau1 = _ljets[idPair2.first]; const Jet& w2dau2 = _ljets[idPair2.second]; w1dau1LVec = w1dau1.momentum(); w1dau2LVec = w1dau2.momentum(); w2dau1LVec = w2dau1.momentum(); w2dau2LVec = w2dau2.momentum(); } else if (nWLepCand == 1) { // Semi-leptonic case const pair& idPair1 = wLepCandIdxs.begin()->second; const pair& idPair2 = wHadCandIdxs.begin()->second; const Jet& w1dau1 = leptons[idPair1.first]; const Particle& w1dau2 = neutrinos[idPair1.second]; const Jet& w2dau1 = _ljets[idPair2.first]; const Jet& w2dau2 = _ljets[idPair2.second]; w1dau1LVec = w1dau1.momentum(); w1dau2LVec = w1dau2.momentum(); w2dau1LVec = w2dau1.momentum(); w2dau2LVec = w2dau2.momentum(); w1dau1Id = leptonsId[idPair1.first]; w1Q = w1dau1Id > 0 ? 
-1 : 1; w2Q = -w1Q; switch (w1dau1Id) { case 13: case -13: _mode1 = CH_MUON; break; case 11: case -11: _mode1 = CH_ELECTRON; break; } } else { // Full leptonic case const pair& idPair1 = wLepCandIdxs.begin()->second; const pair& idPair2 = (++wLepCandIdxs.begin())->second; ///< @todo Reinstate std::next const Jet& w1dau1 = leptons[idPair1.first]; const Particle& w1dau2 = neutrinos[idPair1.second]; const Jet& w2dau1 = leptons[idPair2.first]; const Particle& w2dau2 = neutrinos[idPair2.second]; w1dau1LVec = w1dau1.momentum(); w1dau2LVec = w1dau2.momentum(); w2dau1LVec = w2dau1.momentum(); w2dau2LVec = w2dau2.momentum(); w1dau1Id = leptonsId[idPair1.first]; w2dau1Id = leptonsId[idPair2.first]; w1Q = w1dau1Id > 0 ? -1 : 1; w2Q = w2dau1Id > 0 ? -1 : 1; switch (w1dau1Id) { case 13: case -13: _mode1 = CH_MUON; break; case 11: case -11: _mode1 = CH_ELECTRON; break; } switch (w2dau1Id) { case 13: case -13: _mode2 = CH_MUON; break; case 11: case -11: _mode2 = CH_ELECTRON; break; } } const FourMomentum w1LVec = w1dau1LVec+w1dau2LVec; const FourMomentum w2LVec = w2dau1LVec+w2dau2LVec; // Combine b jets double sumDm = 1e9; FourMomentum b1LVec, b2LVec; for (size_t i = 0, n = _bjets.size(); i < n; ++i) { const Jet& bjet1 = _bjets[i]; const double mtop1 = (w1LVec+bjet1.momentum()).mass(); const double dmtop1 = std::abs(mtop1-_tMass); for (size_t j=0; j= 1e9) return; // Failed to make top, but this should not happen. const FourMomentum t1LVec = w1LVec + b1LVec; const FourMomentum t2LVec = w2LVec + b2LVec; // Put all of them into candidate collection _t1 = Particle(w1Q*6, t1LVec); _b1 = Particle(w1Q*5, b1LVec); _wDecays1.push_back(Particle(w1dau1Id, w1dau1LVec)); _wDecays1.push_back(Particle(-w1dau1Id+w1Q, w1dau2LVec)); _t2 = Particle(w2Q*6, t2LVec); _b2 = Particle(w2Q*5, b2LVec); _wDecays2.push_back(Particle(w2dau1Id, w2dau1LVec)); _wDecays2.push_back(Particle(-w2dau1Id+w2Q, w2dau2LVec)); _isValid = true; } } } diff --git a/analyses/pluginCMS/CMS_2016_I1486238.cc b/analyses/pluginCMS/CMS_2016_I1486238.cc --- a/analyses/pluginCMS/CMS_2016_I1486238.cc +++ b/analyses/pluginCMS/CMS_2016_I1486238.cc @@ -1,124 +1,124 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/InitialQuarks.hh" #include "Rivet/Projections/UnstableFinalState.hh" #include "Rivet/Projections/FastJets.hh" namespace Rivet { /// Studies of 2 b-jet + 2 jet production in proton-proton collisions at 7 TeV class CMS_2016_I1486238 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(CMS_2016_I1486238); /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { FastJets akt(FinalState(), FastJets::ANTIKT, 0.5); addProjection(akt, "antikT"); _h_Deltaphi_newway = bookHisto1D(1,1,1); _h_deltaphiafterlight = bookHisto1D(9,1,1); _h_SumPLight = bookHisto1D(5,1,1); _h_LeadingBJetpt = bookHisto1D(11,1,1); _h_SubleadingBJetpt = bookHisto1D(15,1,1); _h_LeadingLightJetpt = bookHisto1D(13,1,1); _h_SubleadingLightJetpt = bookHisto1D(17,1,1); _h_LeadingBJeteta = bookHisto1D(10,1,1); _h_SubleadingBJeteta = bookHisto1D(14,1,1); _h_LeadingLightJeteta = bookHisto1D(12,1,1); _h_SubleadingLightJeteta = bookHisto1D(16,1,1); } /// Perform the per-event analysis void analyze(const Event& event) { const Jets& jets = apply(event, "antikT").jetsByPt(Cuts::absrap < 4.7 && Cuts::pT > 20*GeV); if (jets.size() < 4) vetoEvent; // Initial quarks /// @note Quark-level tagging... 
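// The loop below relies on the HepMCUtils helpers used throughout this patch in place
// of the HepMC2-specific iterator ranges. Their presumed shape, inferred only from the
// call sites in these diffs (a sketch, not the actual RivetHepMC header):
//   std::vector<ConstGenParticlePtr> HepMCUtils::particles(const GenEvent* ge);
//   std::vector<ConstGenParticlePtr> HepMCUtils::particles(ConstGenVertexPtr gv,
//                                                          const Relatives& rel);
// with Relatives::PARENTS / Relatives::CHILDREN selecting the particles entering or
// leaving a vertex for either HepMC generation.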
Particles bquarks; - for (ConstGenParticlePtr p : particles(event.genEvent())) { + for (ConstGenParticlePtr p : HepMCUtils::particles(event.genEvent())) { if (abs(p->pdg_id()) == PID::BQUARK) bquarks += Particle(p); } Jets bjets, ljets; for (const Jet& j : jets) { const bool btag = any(bquarks, deltaRLess(j, 0.3)); // for (const Particle& b : bquarks) if (deltaR(j, b) < 0.3) btag = true; (btag && j.abseta() < 2.4 ? bjets : ljets).push_back(j); } // Fill histograms const double weight = event.weight(); if (bjets.size() >= 2 && ljets.size() >= 2) { _h_LeadingBJetpt->fill(bjets[0].pT()/GeV, weight); _h_SubleadingBJetpt->fill(bjets[1].pT()/GeV, weight); _h_LeadingLightJetpt->fill(ljets[0].pT()/GeV, weight); _h_SubleadingLightJetpt->fill(ljets[1].pT()/GeV, weight); // _h_LeadingBJeteta->fill(bjets[0].eta(), weight); _h_SubleadingBJeteta->fill(bjets[1].eta(), weight); _h_LeadingLightJeteta->fill(ljets[0].eta(), weight); _h_SubleadingLightJeteta->fill(ljets[1].eta(), weight); const double lightdphi = deltaPhi(ljets[0], ljets[1]); _h_deltaphiafterlight->fill(lightdphi, weight); const double vecsumlightjets = sqrt(sqr(ljets[0].px()+ljets[1].px()) + sqr(ljets[0].py()+ljets[1].py())); //< @todo Just (lj0+lj1).pT()? Or use add_quad const double term2 = vecsumlightjets/(sqrt(sqr(ljets[0].px()) + sqr(ljets[0].py())) + sqrt(sqr(ljets[1].px()) + sqr(ljets[1].py()))); //< @todo lj0.pT() + lj1.pT()? Or add_quad _h_SumPLight->fill(term2, weight); const double pxBsyst2 = bjets[0].px()+bjets[1].px(); // @todo (bj0+bj1).px() const double pyBsyst2 = bjets[0].py()+bjets[1].py(); // @todo (bj0+bj1).py() const double pxJetssyst2 = ljets[0].px()+ljets[1].px(); // @todo (lj0+lj1).px() const double pyJetssyst2 = ljets[0].py()+ljets[1].py(); // @todo (lj0+lj1).py() const double modulusB2 = sqrt(sqr(pxBsyst2)+sqr(pyBsyst2)); //< @todo add_quad const double modulusJets2 = sqrt(sqr(pxJetssyst2)+sqr(pyJetssyst2)); //< @todo add_quad const double cosphiBsyst2 = pxBsyst2/modulusB2; const double cosphiJetssyst2 = pxJetssyst2/modulusJets2; const double phiBsyst2 = ((pyBsyst2 > 0) ? 
1 : -1) * acos(cosphiBsyst2); //< @todo sign(pyBsyst2) const double phiJetssyst2 = sign(pyJetssyst2) * acos(cosphiJetssyst2); const double Dphi2 = deltaPhi(phiBsyst2, phiJetssyst2); _h_Deltaphi_newway->fill(Dphi2,weight); } } /// Normalise histograms etc., after the run void finalize() { const double invlumi = crossSection()/picobarn/sumOfWeights(); normalize({_h_SumPLight, _h_deltaphiafterlight, _h_Deltaphi_newway}); scale({_h_LeadingLightJetpt, _h_SubleadingLightJetpt, _h_LeadingBJetpt, _h_SubleadingBJetpt}, invlumi); scale({_h_LeadingLightJeteta, _h_SubleadingLightJeteta, _h_LeadingBJeteta, _h_SubleadingBJeteta}, invlumi); } //@} private: /// @name Histograms //@{ Histo1DPtr _h_deltaphiafterlight, _h_Deltaphi_newway, _h_SumPLight; Histo1DPtr _h_LeadingBJetpt, _h_SubleadingBJetpt, _h_LeadingLightJetpt, _h_SubleadingLightJetpt; Histo1DPtr _h_LeadingBJeteta, _h_SubleadingBJeteta, _h_LeadingLightJeteta, _h_SubleadingLightJeteta; }; // Hook for the plugin system DECLARE_RIVET_PLUGIN(CMS_2016_I1486238); } diff --git a/analyses/pluginD0/D0_2010_S8570965.cc b/analyses/pluginD0/D0_2010_S8570965.cc --- a/analyses/pluginD0/D0_2010_S8570965.cc +++ b/analyses/pluginD0/D0_2010_S8570965.cc @@ -1,144 +1,144 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" #include "Rivet/Tools/BinnedHistogram.hh" namespace Rivet { /// @brief D0 direct photon pair production class D0_2010_S8570965 : public Analysis { public: D0_2010_S8570965() : Analysis("D0_2010_S8570965") { } void init() { FinalState fs; declare(fs, "FS"); IdentifiedFinalState ifs(Cuts::abseta < 0.9 && Cuts::pT > 20*GeV); ifs.acceptId(PID::PHOTON); declare(ifs, "IFS"); _h_M = bookHisto1D(1, 1, 1); _h_pT = bookHisto1D(2, 1, 1); _h_dPhi = bookHisto1D(3, 1, 1); _h_costheta = bookHisto1D(4, 1, 1); std::pair M_ranges[] = { std::make_pair(30.0, 50.0), std::make_pair(50.0, 80.0), std::make_pair(80.0, 350.0) }; for (size_t i = 0; i < 3; ++i) { _h_pT_M.addHistogram(M_ranges[i].first, M_ranges[i].second, bookHisto1D(5+3*i, 1, 1)); _h_dPhi_M.addHistogram(M_ranges[i].first, M_ranges[i].second, bookHisto1D(6+3*i, 1, 1)); _h_costheta_M.addHistogram(M_ranges[i].first, M_ranges[i].second, bookHisto1D(7+3*i, 1, 1)); } } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); Particles photons = apply(event, "IFS").particlesByPt(); if (photons.size() < 2 || (photons[0].pT() < 21.0*GeV)) { vetoEvent; } // Isolate photons with ET_sum in cone Particles isolated_photons; Particles fs = apply(event, "FS").particles(); foreach (const Particle& photon, photons) { double eta_P = photon.eta(); double phi_P = photon.phi(); double Etsum=0.0; foreach (const Particle& p, fs) { - if (p.genParticle()->id() != photon.genParticle()->id() && + if (HepMCUtils::uniqueId(p.genParticle()) != HepMCUtils::uniqueId(photon.genParticle()) && deltaR(eta_P, phi_P, p.eta(), p.phi()) < 0.4) { Etsum += p.Et(); } } if (Etsum < 2.5*GeV) { isolated_photons.push_back(photon); } } if (isolated_photons.size() != 2) { vetoEvent; } std::sort(isolated_photons.begin(), isolated_photons.end(), cmpMomByPt); FourMomentum y1=isolated_photons[0].momentum(); FourMomentum y2=isolated_photons[1].momentum(); if (deltaR(y1, y2)<0.4) { vetoEvent; } FourMomentum yy=y1+y2; double Myy = yy.mass()/GeV; if (Myy<30.0 || Myy>350.0) { vetoEvent; } double pTyy = yy.pT()/GeV; if (Myyfill(Myy, weight); _h_pT->fill(pTyy, weight); _h_dPhi->fill(dPhiyy, weight); _h_costheta->fill(costhetayy, 
weight); _h_pT_M.fill(Myy, pTyy, weight); _h_dPhi_M.fill(Myy, dPhiyy, weight); _h_costheta_M.fill(Myy, costhetayy, weight); } void finalize() { scale(_h_M, crossSection()/sumOfWeights()); scale(_h_pT, crossSection()/sumOfWeights()); scale(_h_dPhi, crossSection()/sumOfWeights()); scale(_h_costheta, crossSection()/sumOfWeights()); _h_pT_M.scale(crossSection()/sumOfWeights(), this); _h_dPhi_M.scale(crossSection()/sumOfWeights(), this); _h_costheta_M.scale(crossSection()/sumOfWeights(), this); } private: Histo1DPtr _h_M; Histo1DPtr _h_pT; Histo1DPtr _h_dPhi; Histo1DPtr _h_costheta; BinnedHistogram _h_pT_M; BinnedHistogram _h_dPhi_M; BinnedHistogram _h_costheta_M; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(D0_2010_S8570965); } diff --git a/analyses/pluginLEP/ALEPH_2001_S4656318.cc b/analyses/pluginLEP/ALEPH_2001_S4656318.cc --- a/analyses/pluginLEP/ALEPH_2001_S4656318.cc +++ b/analyses/pluginLEP/ALEPH_2001_S4656318.cc @@ -1,129 +1,129 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" namespace Rivet { /// @brief DELPHI b-fragmentation measurement /// @author Hendrik Hoeth class ALEPH_2001_S4656318 : public Analysis { public: /// Constructor ALEPH_2001_S4656318() : Analysis("ALEPH_2001_S4656318") { } /// @name Helper functions /// @note The PID:: namespace functions would be preferable, but don't have exactly the same behaviour. Preserving the original form. //@{ bool isParton(int id) { return abs(id) <= 100 && abs(id) != 22 && (abs(id) < 11 || abs(id) > 18); } // bool isBHadron(int id) { return ((abs(id)/100)%10 == 5) || (abs(id) >= 5000 && abs(id) <= 5999); } //@} /// @name Analysis methods //@{ /// Book projections and histograms void init() { declare(Beam(), "Beams"); declare(ChargedFinalState(), "FS"); _histXbweak = bookHisto1D(1, 1, 1); _histXbprim = bookHisto1D(1, 1, 2); _histMeanXbweak = bookProfile1D(7, 1, 1); _histMeanXbprim = bookProfile1D(7, 1, 2); } void analyze(const Event& e) { const FinalState& fs = apply(e, "FS"); const size_t numParticles = fs.particles().size(); // Even if we only generate hadronic events, we still need a cut on numCharged >= 2. 
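// Outline of what follows: require at least two charged particles, use the mean beam
// momentum as the x_B scale, then classify each bottom hadron from its production
// vertex (a parton parent makes it "primary") and its end vertex (no bottom-hadron
// child means it decayed weakly), filling the corresponding x_B distributions.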
if (numParticles < 2) { MSG_DEBUG("Failed ncharged cut"); vetoEvent; } MSG_DEBUG("Passed ncharged cut"); // Get event weight for histo filling const double weight = e.weight(); // Get beams and average beam momentum const ParticlePair& beams = apply(e, "Beams").beams(); const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0; MSG_DEBUG("Avg beam momentum = " << meanBeamMom); - for(ConstGenParticlePtr p : particles(e.genEvent())) { + for(ConstGenParticlePtr p : HepMCUtils::particles(e.genEvent())) { ConstGenVertexPtr pv = p->production_vertex(); ConstGenVertexPtr dv = p->end_vertex(); if (PID::isBottomHadron(p->pdg_id())) { const double xp = p->momentum().e()/meanBeamMom; // If the B-hadron has a parton as parent, call it primary B-hadron: if (pv) { bool is_primary = false; - for (ConstGenParticlePtr pp: pv->particles_in()){ + for (ConstGenParticlePtr pp: HepMCUtils::particles(pv, Relatives::PARENTS)){ if (isParton(pp->pdg_id())) is_primary = true; } if (is_primary) { _histXbprim->fill(xp, weight); _histMeanXbprim->fill(_histMeanXbprim->bin(0).xMid(), xp, weight); } } // If the B-hadron has no B-hadron as a child, it decayed weakly: if (dv) { bool is_weak = true; - for (ConstGenParticlePtr pp: dv->particles_out()){ + for (ConstGenParticlePtr pp: HepMCUtils::particles(dv, Relatives::CHILDREN)){ if (PID::isBottomHadron(pp->pdg_id())) { is_weak = false; } } if (is_weak) { _histXbweak->fill(xp, weight); _histMeanXbweak->fill(_histMeanXbweak->bin(0).xMid(), xp, weight); } } } } } // Finalize void finalize() { normalize(_histXbprim); normalize(_histXbweak); } private: /// Store the weighted sums of numbers of charged / charged+neutral /// particles - used to calculate average number of particles for the /// inclusive single particle distributions' normalisations. Histo1DPtr _histXbprim; Histo1DPtr _histXbweak; Profile1DPtr _histMeanXbprim; Profile1DPtr _histMeanXbweak; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ALEPH_2001_S4656318); } diff --git a/analyses/pluginLEP/DELPHI_2002_069_CONF_603.cc b/analyses/pluginLEP/DELPHI_2002_069_CONF_603.cc --- a/analyses/pluginLEP/DELPHI_2002_069_CONF_603.cc +++ b/analyses/pluginLEP/DELPHI_2002_069_CONF_603.cc @@ -1,130 +1,130 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" namespace Rivet { /// @brief DELPHI b-fragmentation measurement /// @author Hendrik Hoeth class DELPHI_2002_069_CONF_603 : public Analysis { public: /// Constructor DELPHI_2002_069_CONF_603() : Analysis("DELPHI_2002_069_CONF_603") { } /// @name Helper functions /// @note The PID:: namespace functions would be preferable, but don't have exactly the same behaviour. Preserving the original form. //@{ bool isParton(int id) { return abs(id) <= 100 && abs(id) != 22 && (abs(id) < 11 || abs(id) > 18); } // bool isBHadron(int id) { return ((abs(id)/100)%10 == 5) || (abs(id) >= 5000 && abs(id) <= 5999); } //@} /// @name Analysis methods //@{ /// Book projections and histograms void init() { declare(Beam(), "Beams"); declare(ChargedFinalState(), "FS"); _histXbprim = bookHisto1D(1, 1, 1); _histXbweak = bookHisto1D(2, 1, 1); _histMeanXbprim = bookProfile1D(4, 1, 1); _histMeanXbweak = bookProfile1D(5, 1, 1); } void analyze(const Event& e) { const FinalState& fs = apply(e, "FS"); const size_t numParticles = fs.particles().size(); // Even if we only generate hadronic events, we still need a cut on numCharged >= 2. 
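// Same flow as ALEPH_2001_S4656318 above: ncharged guard, mean-beam-momentum x_B scale,
// then the primary / weakly-decaying B-hadron classification via Relatives::PARENTS and
// Relatives::CHILDREN; only the booked histograms differ.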
if (numParticles < 2) { MSG_DEBUG("Failed ncharged cut"); vetoEvent; } MSG_DEBUG("Passed ncharged cut"); // Get event weight for histo filling const double weight = e.weight(); // Get beams and average beam momentum const ParticlePair& beams = apply(e, "Beams").beams(); const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0; MSG_DEBUG("Avg beam momentum = " << meanBeamMom); - for (ConstGenParticlePtr p : particles(e.genEvent())) { + for (ConstGenParticlePtr p : HepMCUtils::particles(e.genEvent())) { ConstGenVertexPtr pv = p->production_vertex(); ConstGenVertexPtr dv = p->end_vertex(); if (PID::isBottomHadron(p->pdg_id())) { const double xp = p->momentum().e()/meanBeamMom; // If the B-hadron has a parton as parent, call it primary B-hadron: if (pv) { bool is_primary = false; - for (ConstGenParticlePtr pp: pv->particles_in()){ + for (ConstGenParticlePtr pp: HepMCUtils::particles(pv, Relatives::PARENTS)){ if (isParton(pp->pdg_id())) is_primary = true; } if (is_primary) { _histXbprim->fill(xp, weight); _histMeanXbprim->fill(_histMeanXbprim->bin(0).xMid(), xp, weight); } } // If the B-hadron has no B-hadron as a child, it decayed weakly: if (dv) { bool is_weak = true; - for (ConstGenParticlePtr pp: dv->particles_out()){ + for (ConstGenParticlePtr pp: HepMCUtils::particles(dv, Relatives::CHILDREN)){ if (PID::isBottomHadron(pp->pdg_id())) { is_weak = false; } } if (is_weak) { _histXbweak->fill(xp, weight); _histMeanXbweak->fill(_histMeanXbweak->bin(0).xMid(), xp, weight); } } } } } // Finalize void finalize() { normalize(_histXbprim); normalize(_histXbweak); } private: /// Store the weighted sums of numbers of charged / charged+neutral /// particles - used to calculate average number of particles for the /// inclusive single particle distributions' normalisations. 
Histo1DPtr _histXbprim; Histo1DPtr _histXbweak; Profile1DPtr _histMeanXbprim; Profile1DPtr _histMeanXbweak; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(DELPHI_2002_069_CONF_603); } diff --git a/analyses/pluginLEP/OPAL_2004_I631361.cc b/analyses/pluginLEP/OPAL_2004_I631361.cc --- a/analyses/pluginLEP/OPAL_2004_I631361.cc +++ b/analyses/pluginLEP/OPAL_2004_I631361.cc @@ -1,119 +1,119 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" namespace Rivet { class OPAL_2004_I631361 : public Analysis { public: /// Constructor OPAL_2004_I631361() : Analysis("OPAL_2004_I631361"), _sumW(0.0) { } /// @name Analysis methods //@{ void init() { declare(FinalState(), "FS"); declare(ChargedFinalState(), "CFS"); int ih(0), iy(0); if (inRange(0.5*sqrtS()/GeV, 5.0, 5.5)) { ih = 1; iy = 1; } else if (inRange(0.5*sqrtS()/GeV, 5.5, 6.5)) { ih = 1; iy = 2; } else if (inRange(0.5*sqrtS()/GeV, 6.5, 7.5)) { ih = 1; iy = 3; } else if (inRange(0.5*sqrtS()/GeV, 7.5, 9.5)) { ih = 2; iy = 1; } else if (inRange(0.5*sqrtS()/GeV, 9.5, 13.0)) { ih = 2; iy = 2; } else if (inRange(0.5*sqrtS()/GeV, 13.0, 16.0)) { ih = 3; iy = 1; } else if (inRange(0.5*sqrtS()/GeV, 16.0, 20.0)) { ih = 3; iy = 2; } assert(ih>0); _h_chMult = bookHisto1D(ih,1,iy); if(ih==3) _h_chFragFunc = bookHisto1D(5,1,iy); else _h_chFragFunc = nullptr; } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); // find the initial gluons ParticleVector initial; - for (ConstGenParticlePtr p : Rivet::particles(event.genEvent())) { + for (ConstGenParticlePtr p : HepMCUtils::particles(event.genEvent())) { ConstGenVertexPtr pv = p->production_vertex(); const PdgId pid = p->pdg_id(); if(pid!=21) continue; bool passed = false; - for (ConstGenParticlePtr pp : pv->particles_in()) { + for (ConstGenParticlePtr pp : HepMCUtils::particles(pv, Relatives::PARENTS)) { const PdgId ppid = abs(pp->pdg_id()); passed = (ppid == PID::ELECTRON || ppid == PID::HIGGS || ppid == PID::ZBOSON || ppid == PID::GAMMA); if(passed) break; } if(passed) initial.push_back(Particle(*p)); } if(initial.size()!=2) vetoEvent; // use the direction for the event axis Vector3 axis = initial[0].momentum().p3().unit(); // fill histograms const Particles& chps = applyProjection(event, "CFS").particles(); unsigned int nMult[2] = {0,0}; _sumW += 2.*weight; // distribution foreach(const Particle& p, chps) { double xE = 2.*p.E()/sqrtS(); if(_h_chFragFunc) _h_chFragFunc->fill(xE, weight); if(p.momentum().p3().dot(axis)>0.) ++nMult[0]; else ++nMult[1]; } // multiplcities in jet _h_chMult->fill(nMult[0],weight); _h_chMult->fill(nMult[1],weight); } /// Normalise histograms etc., after the run void finalize() { normalize(_h_chMult); if(_h_chFragFunc) scale(_h_chFragFunc, 1./_sumW); } //@} private: double _sumW; /// @name Histograms //@{ Histo1DPtr _h_chMult; Histo1DPtr _h_chFragFunc; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(OPAL_2004_I631361); } diff --git a/analyses/pluginLEP/OPAL_2004_I648738.cc b/analyses/pluginLEP/OPAL_2004_I648738.cc --- a/analyses/pluginLEP/OPAL_2004_I648738.cc +++ b/analyses/pluginLEP/OPAL_2004_I648738.cc @@ -1,116 +1,116 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" namespace Rivet { class OPAL_2004_I648738 : public Analysis { public: /// Constructor OPAL_2004_I648738() : Analysis("OPAL_2004_I648738"), _sumW(3,0.) 
{ } /// @name Analysis methods //@{ void init() { declare(FinalState(), "FS"); declare(ChargedFinalState(), "CFS"); unsigned int ih=0; if (inRange(0.5*sqrtS()/GeV, 4.0, 9.0)) { ih = 1; } else if (inRange(0.5*sqrtS()/GeV, 9.0, 19.0)) { ih = 2; } else if (inRange(0.5*sqrtS()/GeV, 19.0, 30.0)) { ih = 3; } else if (inRange(0.5*sqrtS()/GeV, 45.5, 45.7)) { ih = 5; } else if (inRange(0.5*sqrtS()/GeV, 30.0, 70.0)) { ih = 4; } else if (inRange(0.5*sqrtS()/GeV, 91.5, 104.5)) { ih = 6; } assert(ih>0); // book the histograms _histo_xE.push_back(bookHisto1D(ih+5,1,1)); _histo_xE.push_back(bookHisto1D(ih+5,1,2)); if(ih<5) _histo_xE.push_back(bookHisto1D(ih+5,1,3)); } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); // find the initial quarks/gluons ParticleVector initial; - for (ConstGenParticlePtr p : Rivet::particles(event.genEvent())) { + for (ConstGenParticlePtr p : HepMCUtils::particles(event.genEvent())) { ConstGenVertexPtr pv = p->production_vertex(); const PdgId pid = abs(p->pdg_id()); if(!( (pid>=1&&pid<=5) || pid ==21) ) continue; bool passed = false; - for (ConstGenParticlePtr pp : pv->particles_in()) { + for (ConstGenParticlePtr pp : HepMCUtils::particles(pv, Relatives::PARENTS)) { const PdgId ppid = abs(pp->pdg_id()); passed = (ppid == PID::ELECTRON || ppid == PID::HIGGS || ppid == PID::ZBOSON || ppid == PID::GAMMA); if(passed) break; } if(passed) initial.push_back(Particle(*p)); } if(initial.size()!=2) { vetoEvent; } // type of event unsigned int itype=2; if(initial[0].pdgId()==-initial[1].pdgId()) { PdgId pid = abs(initial[0].pdgId()); if(pid>=1&&pid<=4) itype=0; else itype=1; } assert(itype<_histo_xE.size()); // fill histograms _sumW[itype] += 2.*weight; const Particles& chps = applyProjection(event, "CFS").particles(); foreach(const Particle& p, chps) { double xE = 2.*p.E()/sqrtS(); _histo_xE[itype]->fill(xE, weight); } } /// Normalise histograms etc., after the run void finalize() { for(unsigned int ix=0;ix<_histo_xE.size();++ix) { if(_sumW[ix]>0.) scale(_histo_xE[ix],1./_sumW[ix]); } } //@} private: vector _sumW; /// @name Histograms //@{ vector _histo_xE; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(OPAL_2004_I648738); } diff --git a/analyses/pluginLEP/SLD_2002_S4869273.cc b/analyses/pluginLEP/SLD_2002_S4869273.cc --- a/analyses/pluginLEP/SLD_2002_S4869273.cc +++ b/analyses/pluginLEP/SLD_2002_S4869273.cc @@ -1,109 +1,109 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" namespace Rivet { /// @brief SLD b-fragmentation measurement /// @author Peter Richardson class SLD_2002_S4869273 : public Analysis { public: /// Constructor SLD_2002_S4869273() : Analysis("SLD_2002_S4869273") { } /// @name Helper functions /// @note The PID:: namespace functions would be preferable, but don't have exactly the same behaviour. Preserving the original form. 
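/// (A sketch of the difference alluded to above: as far as we can tell, PID::isParton
/// accepts only quark and gluon codes, whereas the local isParton(id) test of
/// abs(id) <= 100, excluding the photon and the charged leptons/neutrinos, also passes
/// other low-numbered codes such as the electroweak bosons, so the two are not
/// interchangeable.)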
//@{ // bool isParton(int id) { return abs(id) <= 100 && abs(id) != 22 && (abs(id) < 11 || abs(id) > 18); } // bool isBHadron(int id) { return ((abs(id)/100)%10 == 5) || (abs(id) >= 5000 && abs(id) <= 5999); } //@} /// @name Analysis methods //@{ /// Book projections and histograms void init() { declare(Beam(), "Beams"); declare(ChargedFinalState(), "FS"); _histXbweak = bookHisto1D(1, 1, 1); } void analyze(const Event& e) { const FinalState& fs = apply(e, "FS"); const size_t numParticles = fs.particles().size(); // Even if we only generate hadronic events, we still need a cut on numCharged >= 2. if (numParticles < 2) { MSG_DEBUG("Failed ncharged cut"); vetoEvent; } MSG_DEBUG("Passed ncharged cut"); // Get event weight for histo filling const double weight = e.weight(); // Get beams and average beam momentum const ParticlePair& beams = apply(e, "Beams").beams(); const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0; MSG_DEBUG("Avg beam momentum = " << meanBeamMom); - for(ConstGenParticlePtr p : particles(e.genEvent())) { + for(ConstGenParticlePtr p : HepMCUtils::particles(e.genEvent())) { ConstGenVertexPtr dv = p->end_vertex(); if (PID::isBottomHadron(p->pdg_id())) { const double xp = p->momentum().e()/meanBeamMom; // If the B-hadron has no B-hadron as a child, it decayed weakly: if (dv) { bool is_weak = true; - for (ConstGenParticlePtr pp: dv->particles_out()){ + for (ConstGenParticlePtr pp: HepMCUtils::particles(dv, Relatives::CHILDREN)){ if (PID::isBottomHadron(pp->pdg_id())) { is_weak = false; } } if (is_weak) { _histXbweak->fill(xp, weight); } } } } } // Finalize void finalize() { normalize(_histXbweak); } private: /// Store the weighted sums of numbers of charged / charged+neutral /// particles - used to calculate average number of particles for the /// inclusive single particle distributions' normalisations. Histo1DPtr _histXbweak; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(SLD_2002_S4869273); } diff --git a/analyses/pluginLHCb/LHCB_2010_I867355.cc b/analyses/pluginLHCb/LHCB_2010_I867355.cc --- a/analyses/pluginLHCb/LHCB_2010_I867355.cc +++ b/analyses/pluginLHCb/LHCB_2010_I867355.cc @@ -1,90 +1,90 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Particle.hh" namespace Rivet { class LHCB_2010_I867355 : public Analysis { public: LHCB_2010_I867355() : Analysis("LHCB_2010_I867355") { } void init() { //@ Results are presented for two different fragmentation functions, LEP and Tevatron. Therefore, we have two sets of histograms. 
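//@ Both sets are filled with the same generated b-hadron kinematics in analyze();
//@ only the reference data they are booked against (y-index 1 vs 2) differ.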
_h_sigma_vs_eta_lep = bookHisto1D(1, 1, 1); _h_sigma_vs_eta_tvt = bookHisto1D(1, 1, 2); _h_sigma_total_lep = bookHisto1D(2, 1, 1); _h_sigma_total_tvt = bookHisto1D(2, 1, 2); } /// Perform the per-event analysis void analyze(const Event& event) { double weight = event.weight(); Particles bhadrons; - for(ConstGenParticlePtr p: particles(event.genEvent())) { + for(ConstGenParticlePtr p: HepMCUtils::particles(event.genEvent())) { if (!( PID::isHadron( p->pdg_id() ) && PID::hasBottom( p->pdg_id() )) ) continue; ConstGenVertexPtr dv = p->end_vertex(); bool hasBdaughter = false; if ( PID::isHadron( p->pdg_id() ) && PID::hasBottom( p->pdg_id() )) { // selecting b-hadrons if (dv) { - for (ConstGenParticlePtr pp: dv->particles_out()){ + for (ConstGenParticlePtr pp: HepMCUtils::particles(dv, Relatives::CHILDREN)){ if (PID::isHadron(pp->pdg_id() ) && PID::hasBottom(pp->pdg_id() )) { hasBdaughter = true; } } } } if (hasBdaughter) continue; // continue if the daughter is another b-hadron bhadrons += Particle(*p); } for(const Particle& particle: bhadrons) { // take fabs() to use full statistics and then multiply weight by 0.5 because LHCb is single-sided double eta = fabs(particle.eta()); _h_sigma_vs_eta_lep->fill( eta, 0.5*weight ); _h_sigma_vs_eta_tvt->fill( eta, 0.5*weight ); _h_sigma_total_lep->fill( eta, 0.5*weight ); // histogram for full kinematic range _h_sigma_total_tvt->fill( eta, 0.5*weight ); // histogram for full kinematic range } } void finalize() { double norm = crossSection()/microbarn/sumOfWeights(); double binwidth = 4.; // integrated over full rapidity space from 2 to 6. // to get the avergae of b and bbar, we scale with 0.5 scale(_h_sigma_vs_eta_lep, 0.5*norm); scale(_h_sigma_vs_eta_tvt, 0.5*norm); scale(_h_sigma_total_lep, 0.5*norm*binwidth); scale(_h_sigma_total_tvt, 0.5*norm*binwidth); } private: Histo1DPtr _h_sigma_total_lep; Histo1DPtr _h_sigma_total_tvt; Histo1DPtr _h_sigma_vs_eta_lep; Histo1DPtr _h_sigma_vs_eta_tvt; }; // Hook for the plugin system DECLARE_RIVET_PLUGIN(LHCB_2010_I867355); } diff --git a/analyses/pluginLHCb/LHCB_2010_S8758301.cc b/analyses/pluginLHCb/LHCB_2010_S8758301.cc --- a/analyses/pluginLHCb/LHCB_2010_S8758301.cc +++ b/analyses/pluginLHCb/LHCB_2010_S8758301.cc @@ -1,341 +1,337 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/UnstableFinalState.hh" #include "Rivet/Math/Constants.hh" #include "Rivet/Math/Units.hh" -#include "HepMC/GenEvent.h" -#include "HepMC/GenParticle.h" -#include "HepMC/GenVertex.h" -#include "HepMC/SimpleVector.h" namespace Rivet { using namespace std; // Lifetime cut: longest living ancestor ctau < 10^-11 [m] namespace { const double MAX_CTAU = 1.0E-11; // [m] const double MIN_PT = 0.0001; // [GeV/c] } class LHCB_2010_S8758301 : public Analysis { public: /// @name Constructors etc. 
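/// Prompt-selection sketch (illustrative, mirroring the cuts applied in analyze() below):
/// a K0_S is kept only if its longest-lived ancestor has ctau below MAX_CTAU, e.g.
///   double ctau = 0.;
///   ConstGenParticlePtr anc = getLongestLivedAncestor(p, ctau);
///   const bool prompt = anc && (ctau <= MAX_CTAU);
/// with getLongestLivedAncestor() as implemented further down in this class.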
//@{ /// Constructor LHCB_2010_S8758301() : Analysis("LHCB_2010_S8758301"), sumKs0_30(0.0), sumKs0_35(0.0), sumKs0_40(0.0), sumKs0_badnull(0), sumKs0_badlft(0), sumKs0_all(0), sumKs0_outup(0), sumKs0_outdwn(0), sum_low_pt_loss(0), sum_high_pt_loss(0) { } //@} /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { MSG_DEBUG("Initializing analysis!"); fillMap(partLftMap); _h_K0s_pt_30 = bookHisto1D(1,1,1); _h_K0s_pt_35 = bookHisto1D(1,1,2); _h_K0s_pt_40 = bookHisto1D(1,1,3); _h_K0s_pt_y_30 = bookHisto1D(2,1,1); _h_K0s_pt_y_35 = bookHisto1D(2,1,2); _h_K0s_pt_y_40 = bookHisto1D(2,1,3); _h_K0s_pt_y_all = bookHisto1D(3,1,1); declare(UnstableFinalState(), "UFS"); } /// Perform the per-event analysis void analyze(const Event& event) { int id; double y, pT; const double weight = event.weight(); const UnstableFinalState& ufs = apply(event, "UFS"); double ancestor_lftime; foreach (const Particle& p, ufs.particles()) { id = p.pid(); if ((id != 310) && (id != -310)) continue; sumKs0_all ++; ancestor_lftime = 0.; ConstGenParticlePtr long_ancestor = getLongestLivedAncestor(p, ancestor_lftime); if ( !(long_ancestor) ) { sumKs0_badnull ++; continue; } if ( ancestor_lftime > MAX_CTAU ) { sumKs0_badlft ++; MSG_DEBUG("Ancestor " << long_ancestor->pdg_id() << ", ctau: " << ancestor_lftime << " [m]"); continue; } const FourMomentum& qmom = p.momentum(); y = 0.5 * log((qmom.E() + qmom.pz())/(qmom.E() - qmom.pz())); pT = sqrt((qmom.px() * qmom.px()) + (qmom.py() * qmom.py())); if (pT < MIN_PT) { sum_low_pt_loss ++; MSG_DEBUG("Small pT K^0_S: " << pT << " GeV/c."); } if (pT > 1.6) { sum_high_pt_loss ++; } if (y > 2.5 && y < 4.0) { _h_K0s_pt_y_all->fill(pT, weight); if (y > 2.5 && y < 3.0) { _h_K0s_pt_y_30->fill(pT, weight); _h_K0s_pt_30->fill(pT, weight); sumKs0_30 += weight; } else if (y > 3.0 && y < 3.5) { _h_K0s_pt_y_35->fill(pT, weight); _h_K0s_pt_35->fill(pT, weight); sumKs0_35 += weight; } else if (y > 3.5 && y < 4.0) { _h_K0s_pt_y_40->fill(pT, weight); _h_K0s_pt_40->fill(pT, weight); sumKs0_40 += weight; } } else if (y < 2.5) { sumKs0_outdwn ++; } else if (y > 4.0) { sumKs0_outup ++; } } } /// Normalise histograms etc., after the run void finalize() { MSG_DEBUG("Total number Ks0: " << sumKs0_all << endl << "Sum of weights: " << sumOfWeights() << endl << "Weight Ks0 (2.5 < y < 3.0): " << sumKs0_30 << endl << "Weight Ks0 (3.0 < y < 3.5): " << sumKs0_35 << endl << "Weight Ks0 (3.5 < y < 4.0): " << sumKs0_40 << endl << "Nb. unprompt Ks0 [null mother]: " << sumKs0_badnull << endl << "Nb. unprompt Ks0 [mother lifetime exceeded]: " << sumKs0_badlft << endl << "Nb. Ks0 (y > 4.0): " << sumKs0_outup << endl << "Nb. Ks0 (y < 2.5): " << sumKs0_outdwn << endl << "Nb. Ks0 (pT < " << (MIN_PT/MeV) << " MeV/c): " << sum_low_pt_loss << endl << "Nb. Ks0 (pT > 1.6 GeV/c): " << sum_high_pt_loss << endl << "Cross-section [mb]: " << crossSection()/millibarn << endl << "Nb. 
events: " << numEvents()); // Compute cross-section; multiply by bin width for correct scaling // cross-section given by Rivet in pb double xsection_factor = crossSection()/sumOfWeights(); // Multiply bin width for correct scaling, xsection in mub scale(_h_K0s_pt_30, 0.2*xsection_factor/microbarn); scale(_h_K0s_pt_35, 0.2*xsection_factor/microbarn); scale(_h_K0s_pt_40, 0.2*xsection_factor/microbarn); // Divide by dy (rapidity window width), xsection in mb scale(_h_K0s_pt_y_30, xsection_factor/0.5/millibarn); scale(_h_K0s_pt_y_35, xsection_factor/0.5/millibarn); scale(_h_K0s_pt_y_40, xsection_factor/0.5/millibarn); scale(_h_K0s_pt_y_all, xsection_factor/1.5/millibarn); } //@} private: /// Get particle lifetime from hardcoded data double getLifeTime(int pid) { double lft = -1.0; if (pid < 0) pid = - pid; // Correct Pythia6 PIDs for f0(980), f0(1370) mesons if (pid == 10331) pid = 30221; if (pid == 10221) pid = 9010221; map::iterator pPartLft = partLftMap.find(pid); // search stable particle list if (pPartLft == partLftMap.end()) { if (pid <= 100) return 0.0; for (unsigned int i=0; i < sizeof(stablePDGIds)/sizeof(unsigned int); i++ ) { if (pid == stablePDGIds[i]) { lft = 0.0; break; } } } else { lft = (*pPartLft).second; } if (lft < 0.0) MSG_ERROR("Could not determine lifetime for particle with PID " << pid << "... This K_s^0 will be considered unprompt!"); return lft; } ConstGenParticlePtr getLongestLivedAncestor(const Particle& p, double& lifeTime) { ConstGenParticlePtr ret = nullptr; lifeTime = 1.; if (p.genParticle() == nullptr) return nullptr; ConstGenParticlePtr pmother = p.genParticle(); double longest_ctau = 0.; double mother_ctau; int mother_pid; ConstGenVertexPtr ivertex = pmother->production_vertex(); while (ivertex) { - vector inparts = ivertex->particles_in(); + vector inparts = HepMCUtils::particles(ivertex, Relatives::PARENTS); if (inparts.size() < 1) {ret = nullptr; break;} // error: should never happen! pmother = inparts.at(0); // first mother particle mother_pid = pmother->pdg_id(); ivertex = pmother->production_vertex(); // get next vertex if ( (mother_pid == 2212) || (mother_pid <= 100) ) { if (ret == nullptr) ret = pmother; continue; } mother_ctau = getLifeTime(mother_pid); if (mother_ctau < 0.) { ret= nullptr; break; } // error:should never happen! 
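// Keep the ancestor with the largest ctau seen so far; the walk continues up
// the chain of production vertices until no further parent is found.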
if (mother_ctau > longest_ctau) { longest_ctau = mother_ctau; ret = pmother; } } if (ret) lifeTime = longest_ctau * c_light; return ret; } // Fill the PDG Id to Lifetime[seconds] map // Data was extract from LHCb Particle Table using ParticleSvc bool fillMap(map &m) { m[6] = 4.707703E-25; m[11] = 1.E+16; m[12] = 1.E+16; m[13] = 2.197019E-06; m[14] = 1.E+16; m[15] = 2.906E-13; m[16] = 1.E+16; m[22] = 1.E+16; m[23] = 2.637914E-25; m[24] = 3.075758E-25; m[25] = 9.4E-26; m[35] = 9.4E-26; m[36] = 9.4E-26; m[37] = 9.4E-26; m[84] = 3.335641E-13; m[85] = 1.290893E-12; m[111] = 8.4E-17; m[113] = 4.405704E-24; m[115] = 6.151516E-24; m[117] = 4.088275E-24; m[119] = 2.102914E-24; m[130] = 5.116E-08; m[150] = 1.525E-12; m[211] = 2.6033E-08; m[213] = 4.405704E-24; m[215] = 6.151516E-24; m[217] = 4.088275E-24; m[219] = 2.102914E-24; m[221] = 5.063171E-19; m[223] = 7.752794E-23; m[225] = 3.555982E-24; m[227] = 3.91793E-24; m[229] = 2.777267E-24; m[310] = 8.953E-11; m[313] = 1.308573E-23; m[315] = 6.038644E-24; m[317] = 4.139699E-24; m[319] = 3.324304E-24; m[321] = 1.238E-08; m[323] = 1.295693E-23; m[325] = 6.682357E-24; m[327] = 4.139699E-24; m[329] = 3.324304E-24; m[331] = 3.210791E-21; m[333] = 1.545099E-22; m[335] = 9.016605E-24; m[337] = 7.565657E-24; m[350] = 1.407125E-12; m[411] = 1.04E-12; m[413] = 6.856377E-21; m[415] = 1.778952E-23; m[421] = 4.101E-13; m[423] = 1.000003E-19; m[425] = 1.530726E-23; m[431] = 5.E-13; m[433] = 1.000003E-19; m[435] = 3.291061E-23; m[441] = 2.465214E-23; m[443] = 7.062363E-21; m[445] = 3.242425E-22; m[510] = 1.525E-12; m[511] = 1.525E-12; m[513] = 1.000019E-19; m[515] = 1.31E-23; m[521] = 1.638E-12; m[523] = 1.000019E-19; m[525] = 1.31E-23; m[530] = 1.536875E-12; m[531] = 1.472E-12; m[533] = 1.E-19; m[535] = 1.31E-23; m[541] = 4.5E-13; m[553] = 1.218911E-20; m[1112] = 4.539394E-24; m[1114] = 5.578069E-24; m[1116] = 1.994582E-24; m[1118] = 2.269697E-24; m[1212] = 4.539394E-24; m[1214] = 5.723584E-24; m[1216] = 1.994582E-24; m[1218] = 1.316424E-24; m[2112] = 8.857E+02; m[2114] = 5.578069E-24; m[2116] = 4.388081E-24; m[2118] = 2.269697E-24; m[2122] = 4.539394E-24; m[2124] = 5.723584E-24; m[2126] = 1.994582E-24; m[2128] = 1.316424E-24; m[2212] = 1.E+16; m[2214] = 5.578069E-24; m[2216] = 4.388081E-24; m[2218] = 2.269697E-24; m[2222] = 4.539394E-24; m[2224] = 5.578069E-24; m[2226] = 1.994582E-24; m[2228] = 2.269697E-24; m[3112] = 1.479E-10; m[3114] = 1.670589E-23; m[3116] = 5.485102E-24; m[3118] = 3.656734E-24; m[3122] = 2.631E-10; m[3124] = 4.219309E-23; m[3126] = 8.227653E-24; m[3128] = 3.291061E-24; m[3212] = 7.4E-20; m[3214] = 1.828367E-23; m[3216] = 5.485102E-24; m[3218] = 3.656734E-24; m[3222] = 8.018E-11; m[3224] = 1.838582E-23; m[3226] = 5.485102E-24; m[3228] = 3.656734E-24; m[3312] = 1.639E-10; m[3314] = 6.648608E-23; m[3322] = 2.9E-10; m[3324] = 7.233101E-23; m[3334] = 8.21E-11; m[4112] = 2.991874E-22; m[4114] = 4.088274E-23; m[4122] = 2.E-13; m[4132] = 1.12E-13; m[4212] = 3.999999E-22; m[4214] = 3.291061E-22; m[4222] = 2.951624E-22; m[4224] = 4.417531E-23; m[4232] = 4.42E-13; m[4332] = 6.9E-14; m[4412] = 3.335641E-13; m[4422] = 3.335641E-13; m[4432] = 3.335641E-13; m[5112] = 1.E-19; m[5122] = 1.38E-12; m[5132] = 1.42E-12; m[5142] = 1.290893E-12; m[5212] = 1.E-19; m[5222] = 1.E-19; m[5232] = 1.42E-12; m[5242] = 1.290893E-12; m[5312] = 1.E-19; m[5322] = 1.E-19; m[5332] = 1.55E-12; m[5342] = 1.290893E-12; m[5442] = 1.290893E-12; m[5512] = 1.290893E-12; m[5522] = 1.290893E-12; m[5532] = 1.290893E-12; m[5542] = 1.290893E-12; m[10111] = 2.48382E-24; m[10113] = 
4.635297E-24; m[10115] = 2.54136E-24; m[10211] = 2.48382E-24; m[10213] = 4.635297E-24; m[10215] = 2.54136E-24; m[10223] = 1.828367E-24; m[10225] = 3.636531E-24; m[10311] = 2.437823E-24; m[10313] = 7.313469E-24; m[10315] = 3.538775E-24; m[10321] = 2.437823E-24; m[10323] = 7.313469E-24; m[10325] = 3.538775E-24; m[10331] = 4.804469E-24; m[10411] = 4.38E-24; m[10413] = 3.29E-23; m[10421] = 4.38E-24; m[10423] = 3.22653E-23; m[10431] = 6.5821E-22; m[10433] = 6.5821E-22; m[10441] = 6.453061E-23; m[10511] = 4.39E-24; m[10513] = 1.65E-23; m[10521] = 4.39E-24; m[10523] = 1.65E-23; m[10531] = 4.39E-24; m[10533] = 1.65E-23; m[11114] = 2.194041E-24; m[11116] = 1.828367E-24; m[11212] = 1.880606E-24; m[11216] = 1.828367E-24; m[12112] = 2.194041E-24; m[12114] = 2.194041E-24; m[12116] = 5.063171E-24; m[12126] = 1.828367E-24; m[12212] = 2.194041E-24; m[12214] = 2.194041E-24; m[12216] = 5.063171E-24; m[12224] = 2.194041E-24; m[12226] = 1.828367E-24; m[13112] = 6.582122E-24; m[13114] = 1.09702E-23; m[13116] = 5.485102E-24; m[13122] = 1.316424E-23; m[13124] = 1.09702E-23; m[13126] = 6.928549E-24; m[13212] = 6.582122E-24; m[13214] = 1.09702E-23; m[13216] = 5.485102E-24; m[13222] = 6.582122E-24; m[13224] = 1.09702E-23; m[13226] = 5.485102E-24; m[13312] = 4.135667E-22; m[13314] = 2.742551E-23; m[13324] = 2.742551E-23; m[14122] = 1.828367E-22; m[20022] = 1.E+16; m[20113] = 1.567172E-24; m[20213] = 1.567172E-24; m[20223] = 2.708692E-23; m[20313] = 3.782829E-24; m[20315] = 2.384827E-24; m[20323] = 3.782829E-24; m[20325] = 2.384827E-24; m[20333] = 1.198929E-23; m[20413] = 2.63E-24; m[20423] = 2.63E-24; m[20433] = 6.5821E-22; m[20443] = 7.395643E-22; m[20513] = 2.63E-24; m[20523] = 2.63E-24; m[20533] = 2.63E-24; m[21112] = 2.632849E-24; m[21114] = 3.291061E-24; m[21212] = 2.632849E-24; m[21214] = 6.582122E-24; m[22112] = 4.388081E-24; m[22114] = 3.291061E-24; m[22122] = 2.632849E-24; m[22124] = 6.582122E-24; m[22212] = 4.388081E-24; m[22214] = 3.291061E-24; m[22222] = 2.632849E-24; m[22224] = 3.291061E-24; m[23112] = 7.313469E-24; m[23114] = 2.991874E-24; m[23122] = 4.388081E-24; m[23124] = 6.582122E-24; m[23126] = 3.291061E-24; m[23212] = 7.313469E-24; m[23214] = 2.991874E-24; m[23222] = 7.313469E-24; m[23224] = 2.991874E-24; m[30113] = 2.632849E-24; m[30213] = 2.632849E-24; m[30221] = 1.880606E-24; m[30223] = 2.089563E-24; m[30313] = 2.056913E-24; m[30323] = 2.056913E-24; m[30443] = 2.419898E-23; m[31114] = 1.880606E-24; m[31214] = 3.291061E-24; m[32112] = 3.989164E-24; m[32114] = 1.880606E-24; m[32124] = 3.291061E-24; m[32212] = 3.989164E-24; m[32214] = 1.880606E-24; m[32224] = 1.880606E-24; m[33122] = 1.880606E-23; m[42112] = 6.582122E-24; m[42212] = 6.582122E-24; m[43122] = 2.194041E-24; m[53122] = 4.388081E-24; m[100111] = 1.645531E-24; m[100113] = 1.64553E-24; m[100211] = 1.645531E-24; m[100213] = 1.64553E-24; m[100221] = 1.196749E-23; m[100223] = 3.061452E-24; m[100313] = 2.837122E-24; m[100323] = 2.837122E-24; m[100331] = 4.459432E-25; m[100333] = 4.388081E-24; m[100441] = 4.701516E-23; m[100443] = 2.076379E-21; m[100553] = 2.056913E-20; m[200553] = 3.242425E-20; m[300553] = 3.210791E-23; m[9000111] = 8.776163E-24; m[9000211] = 8.776163E-24; m[9000443] = 8.227652E-24; m[9000553] = 5.983747E-24; m[9010111] = 3.164482E-24; m[9010211] = 3.164482E-24; m[9010221] = 9.403031E-24; m[9010443] = 8.438618E-24; m[9010553] = 8.3318E-24; m[9020221] = 8.093281E-23; m[9020443] = 1.061633E-23; m[9030221] = 6.038644E-24; m[9042413] = 2.07634E-21; m[9050225] = 1.394517E-24; m[9060225] = 3.291061E-24; m[9080225] = 4.388081E-24; 
m[9090225] = 2.056913E-24; m[9910445] = 2.07634E-21; m[9920443] = 2.07634E-21; return true; } /// @name Histograms //@{ Histo1DPtr _h_K0s_pt_y_30; // histogram for 2.5 < y < 3.0 (d2sigma) Histo1DPtr _h_K0s_pt_y_35; // histogram for 3.0 < y < 3.5 (d2sigma) Histo1DPtr _h_K0s_pt_y_40; // histogram for 3.5 < y < 4.0 (d2sigma) Histo1DPtr _h_K0s_pt_30; // histogram for 2.5 < y < 3.0 (sigma) Histo1DPtr _h_K0s_pt_35; // histogram for 3.0 < y < 3.5 (sigma) Histo1DPtr _h_K0s_pt_40; // histogram for 3.5 < y < 4.0 (sigma) Histo1DPtr _h_K0s_pt_y_all; // histogram for 2.5 < y < 4.0 (d2sigma) double sumKs0_30; // Sum of weights 2.5 < y < 3.0 double sumKs0_35; // Sum of weights 3.0 < y < 3.5 double sumKs0_40; // Sum of weights 3.5 < y < 4.0 // Various counters mainly for debugging and comparisons between different generators size_t sumKs0_badnull; // Nb of particles for which mother could not be identified size_t sumKs0_badlft; // Nb of mesons with long lived mothers size_t sumKs0_all; // Nb of all Ks0 generated size_t sumKs0_outup; // Nb of mesons with y > 4.0 size_t sumKs0_outdwn; // Nb of mesons with y < 2.5 size_t sum_low_pt_loss; // Nb of mesons with very low pT (indicates when units are mixed-up) size_t sum_high_pt_loss; // Nb of mesons with pT > 1.6 GeV/c // Map between PDG id and particle lifetimes in seconds std::map partLftMap; // Set of PDG Ids for stable particles (PDG Id <= 100 are considered stable) static const int stablePDGIds[205]; //@} }; // Actual initialization according to ISO C++ requirements const int LHCB_2010_S8758301::stablePDGIds[205] = { 311, 543, 545, 551, 555, 557, 1103, 2101, 2103, 2203, 3101, 3103, 3201, 3203, 3303, 4101, 4103, 4124, 4201, 4203, 4301, 4303, 4312, 4314, 4322, 4324, 4334, 4403, 4414, 4424, 4434, 4444, 5101, 5103, 5114, 5201, 5203, 5214, 5224, 5301, 5303, 5314, 5324, 5334, 5401, 5403, 5412, 5414, 5422, 5424, 5432, 5434, 5444, 5503, 5514, 5524, 5534, 5544, 5554, 10022, 10333, 10335, 10443, 10541, 10543, 10551, 10553, 10555, 11112, 12118, 12122, 12218, 12222, 13316, 13326, 20543, 20553, 20555, 23314, 23324, 30343, 30353, 30363, 30553, 33314, 33324, 41214, 42124, 52114, 52214, 100311, 100315, 100321, 100325, 100411, 100413, 100421, 100423, 100551, 100555, 100557, 110551, 110553, 110555, 120553, 120555, 130553, 200551, 200555, 210551, 210553, 220553, 1000001, 1000002, 1000003, 1000004, 1000005, 1000006, 1000011, 1000012, 1000013, 1000014, 1000015, 1000016, 1000021, 1000022, 1000023, 1000024, 1000025, 1000035, 1000037, 1000039, 2000001, 2000002, 2000003, 2000004, 2000005, 2000006, 2000011, 2000012, 2000013, 2000014, 2000015, 2000016, 3000111, 3000113, 3000211, 3000213, 3000221, 3000223, 3000331, 3100021, 3100111, 3100113, 3200111, 3200113, 3300113, 3400113, 4000001, 4000002, 4000011, 4000012, 5000039, 9000221, 9900012, 9900014, 9900016, 9900023, 9900024, 9900041, 9900042}; // Hook for the plugin system DECLARE_RIVET_PLUGIN(LHCB_2010_S8758301); } diff --git a/analyses/pluginLHCb/LHCB_2011_I917009.cc b/analyses/pluginLHCb/LHCB_2011_I917009.cc --- a/analyses/pluginLHCb/LHCB_2011_I917009.cc +++ b/analyses/pluginLHCb/LHCB_2011_I917009.cc @@ -1,325 +1,325 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/UnstableFinalState.hh" namespace Rivet { class LHCB_2011_I917009 : public Analysis { public: /// @name Constructors etc. 
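/// Sketch of the ratio construction performed in finalize() below (illustrative):
///   Scatter2DPtr s = bookScatter2D(dsId, 1, 1);
///   divide(numeratorHisto, denominatorHisto, s);  // YODA-style division into a scatter
/// where the numerator/denominator histograms stand for the temporary YODA::Histo1D
/// objects filled per particle species and pT interval in analyze().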
//@{ /// Constructor LHCB_2011_I917009() : Analysis("LHCB_2011_I917009"), rap_beam(0.0), pt_min(0.0), pt1_edge(0.65), pt2_edge(1.0), pt3_edge(2.5), rap_min(2.), rap_max(0.0), dsShift(0) { } //@} public: /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { int y_nbins = 4; fillMap(partLftMap); if (fuzzyEquals(sqrtS(), 0.9*TeV)) { rap_beam = 6.87; rap_max = 4.; pt_min = 0.25; } else if (fuzzyEquals(sqrtS(), 7*TeV)) { rap_beam = 8.92; rap_max = 4.5; pt_min = 0.15; y_nbins = 5; dsShift = 8; } else { MSG_ERROR("Incompatible beam energy!"); } // Create the sets of temporary histograms that will be used to make the ratios in the finalize() for (size_t i = 0; i < 12; ++i) _tmphistos[i] = YODA::Histo1D(y_nbins, rap_min, rap_max); for (size_t i = 12; i < 15; ++i) _tmphistos[i] = YODA::Histo1D(refData(dsShift+5, 1, 1)); for (size_t i = 15; i < 18; ++i) _tmphistos[i] = YODA::Histo1D(y_nbins, rap_beam - rap_max, rap_beam - rap_min); declare(UnstableFinalState(), "UFS"); } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); const UnstableFinalState& ufs = apply(event, "UFS"); double ancestor_lftsum = 0.0; double y, pT; int id; int partIdx = -1; foreach (const Particle& p, ufs.particles()) { id = p.pid(); // continue if particle not a K0s nor (anti-)Lambda if ( (id == 310) || (id == -310) ) { partIdx = 2; } else if ( id == 3122 ) { partIdx = 1; } else if ( id == -3122 ) { partIdx = 0; } else { continue; } ancestor_lftsum = getMotherLifeTimeSum(p); // Lifetime cut: ctau sum of all particle ancestors < 10^-9 m according to the paper (see eq. 5) const double MAX_CTAU = 1.0E-9; // [m] if ( (ancestor_lftsum < 0.0) || (ancestor_lftsum > MAX_CTAU) ) continue; const FourMomentum& qmom = p.momentum(); y = log((qmom.E() + qmom.pz())/(qmom.E() - qmom.pz()))/2.; // skip this particle if it has too high or too low rapidity (extremely rare cases when E = +- pz) if ( std::isnan(y) || std::isinf(y) ) continue; y = fabs(y); if (!inRange(y, rap_min, rap_max)) continue; pT = sqrt((qmom.px() * qmom.px()) + (qmom.py() * qmom.py())); if (!inRange(pT, pt_min, pt3_edge)) continue; // Filling corresponding temporary histograms for pT intervals if (inRange(pT, pt_min, pt1_edge)) _tmphistos[partIdx*3].fill(y, weight); if (inRange(pT, pt1_edge, pt2_edge)) _tmphistos[partIdx*3+1].fill(y, weight); if (inRange(pT, pt2_edge, pt3_edge)) _tmphistos[partIdx*3+2].fill(y, weight); // Fill histo in rapidity for whole pT interval _tmphistos[partIdx+9].fill(y, weight); // Fill histo in pT for whole rapidity interval _tmphistos[partIdx+12].fill(pT, weight); // Fill histo in rapidity loss for whole pT interval _tmphistos[partIdx+15].fill(rap_beam - y, weight); } } // Generate the ratio histograms void finalize() { int dsId = dsShift + 1; for (size_t j = 0; j < 3; ++j) { /// @todo Compactify to two one-liners Scatter2DPtr s1 = bookScatter2D(dsId, 1, j+1); divide(_tmphistos[j], _tmphistos[3+j], s1); Scatter2DPtr s2 = bookScatter2D(dsId+1, 1, j+1); divide(_tmphistos[j], _tmphistos[6+j], s2); } dsId += 2; for (size_t j = 3; j < 6; ++j) { /// @todo Compactify to two one-liners Scatter2DPtr s1 = bookScatter2D(dsId, 1, 1); divide(_tmphistos[3*j], _tmphistos[3*j+1], s1); dsId += 1; Scatter2DPtr s2 = bookScatter2D(dsId, 1, 1); divide(_tmphistos[3*j], _tmphistos[3*j+2], s2); dsId += 1; } } //@} private: // Get particle lifetime from hardcoded data double getLifeTime(int pid) { double lft = -1.0; if (pid < 0) pid = - pid; // Correct Pythia6 PIDs 
for f0(980), f0(1370) mesons if (pid == 10331) pid = 30221; if (pid == 10221) pid = 9010221; map::iterator pPartLft = partLftMap.find(pid); // search stable particle list if (pPartLft == partLftMap.end()) { if (pid <= 100) return 0.0; for (size_t i=0; i < sizeof(stablePDGIds)/sizeof(unsigned int); i++) { if (pid == stablePDGIds[i]) { lft = 0.0; break; } } } else { lft = (*pPartLft).second; } if (lft < 0.0 && PID::isHadron(pid)) { MSG_ERROR("Could not determine lifetime for particle with PID " << pid << "... This V^0 will be considered unprompt!"); } return lft; } // Data members like post-cuts event weight counters go here const double getMotherLifeTimeSum(const Particle& p) { if (p.genParticle() == nullptr) return -1.; double lftSum = 0.; double plft = 0.; ConstGenParticlePtr part = p.genParticle(); ConstGenVertexPtr ivtx = part->production_vertex(); while (ivtx) { - vector part_in = ivtx->particles_in(); + vector part_in = HepMCUtils::particles(ivtx, Relatives::PARENTS); if (part_in.size() < 1) { lftSum = -1.; break; }; ConstGenParticlePtr part = part_in.at(0);//(*iPart_invtx); if ( !(part) ) { lftSum = -1.; break; }; ivtx = part->production_vertex(); if ( (part->pdg_id() == 2212) || !(ivtx) ) break; //reached beam plft = getLifeTime(part->pdg_id()); if (plft < 0.) { lftSum = -1.; break; }; lftSum += plft; }; return (lftSum * c_light); } /// @name Private variables //@{ // The rapidity of the beam according to the selected beam energy double rap_beam; // The edges of the intervals of transverse momentum double pt_min, pt1_edge, pt2_edge, pt3_edge; // The limits of the rapidity window double rap_min; double rap_max; // Indicates which set of histograms will be output to yoda file (according to beam energy) int dsShift; // Map between PDG id and particle lifetimes in seconds std::map partLftMap; // Set of PDG Ids for stable particles (PDG Id <= 100 are considered stable) static const int stablePDGIds[205]; //@} /// @name Helper histograms //@{ /// Histograms are defined in the following order: anti-Lambda, Lambda and K0s. /// First 3 suites of 3 histograms correspond to each particle in bins of y for the 3 pT intervals. 
(9 histos) /// Next 3 histograms contain the particles in y bins for the whole pT interval (3 histos) /// Next 3 histograms contain the particles in y_loss bins for the whole pT interval (3 histos) /// Last 3 histograms contain the particles in pT bins for the whole rapidity (y) interval (3 histos) YODA::Histo1D _tmphistos[18]; //@} // Fill the PDG Id to Lifetime[seconds] map // Data was extracted from LHCb Particle Table through LHCb::ParticlePropertySvc bool fillMap(map& m) { m[6] = 4.707703E-25; m[11] = 1.E+16; m[12] = 1.E+16; m[13] = 2.197019E-06; m[14] = 1.E+16; m[15] = 2.906E-13; m[16] = 1.E+16; m[22] = 1.E+16; m[23] = 2.637914E-25; m[24] = 3.075758E-25; m[25] = 9.4E-26; m[35] = 9.4E-26; m[36] = 9.4E-26; m[37] = 9.4E-26; m[84] = 3.335641E-13; m[85] = 1.290893E-12; m[111] = 8.4E-17; m[113] = 4.405704E-24; m[115] = 6.151516E-24; m[117] = 4.088275E-24; m[119] = 2.102914E-24; m[130] = 5.116E-08; m[150] = 1.525E-12; m[211] = 2.6033E-08; m[213] = 4.405704E-24; m[215] = 6.151516E-24; m[217] = 4.088275E-24; m[219] = 2.102914E-24; m[221] = 5.063171E-19; m[223] = 7.752794E-23; m[225] = 3.555982E-24; m[227] = 3.91793E-24; m[229] = 2.777267E-24; m[310] = 8.953E-11; m[313] = 1.308573E-23; m[315] = 6.038644E-24; m[317] = 4.139699E-24; m[319] = 3.324304E-24; m[321] = 1.238E-08; m[323] = 1.295693E-23; m[325] = 6.682357E-24; m[327] = 4.139699E-24; m[329] = 3.324304E-24; m[331] = 3.210791E-21; m[333] = 1.545099E-22; m[335] = 9.016605E-24; m[337] = 7.565657E-24; m[350] = 1.407125E-12; m[411] = 1.04E-12; m[413] = 6.856377E-21; m[415] = 1.778952E-23; m[421] = 4.101E-13; m[423] = 1.000003E-19; m[425] = 1.530726E-23; m[431] = 5.E-13; m[433] = 1.000003E-19; m[435] = 3.291061E-23; m[441] = 2.465214E-23; m[443] = 7.062363E-21; m[445] = 3.242425E-22; m[510] = 1.525E-12; m[511] = 1.525E-12; m[513] = 1.000019E-19; m[515] = 1.31E-23; m[521] = 1.638E-12; m[523] = 1.000019E-19; m[525] = 1.31E-23; m[530] = 1.536875E-12; m[531] = 1.472E-12; m[533] = 1.E-19; m[535] = 1.31E-23; m[541] = 4.5E-13; m[553] = 1.218911E-20; m[1112] = 4.539394E-24; m[1114] = 5.578069E-24; m[1116] = 1.994582E-24; m[1118] = 2.269697E-24; m[1212] = 4.539394E-24; m[1214] = 5.723584E-24; m[1216] = 1.994582E-24; m[1218] = 1.316424E-24; m[2112] = 8.857E+02; m[2114] = 5.578069E-24; m[2116] = 4.388081E-24; m[2118] = 2.269697E-24; m[2122] = 4.539394E-24; m[2124] = 5.723584E-24; m[2126] = 1.994582E-24; m[2128] = 1.316424E-24; m[2212] = 1.E+16; m[2214] = 5.578069E-24; m[2216] = 4.388081E-24; m[2218] = 2.269697E-24; m[2222] = 4.539394E-24; m[2224] = 5.578069E-24; m[2226] = 1.994582E-24; m[2228] = 2.269697E-24; m[3112] = 1.479E-10; m[3114] = 1.670589E-23; m[3116] = 5.485102E-24; m[3118] = 3.656734E-24; m[3122] = 2.631E-10; m[3124] = 4.219309E-23; m[3126] = 8.227653E-24; m[3128] = 3.291061E-24; m[3212] = 7.4E-20; m[3214] = 1.828367E-23; m[3216] = 5.485102E-24; m[3218] = 3.656734E-24; m[3222] = 8.018E-11; m[3224] = 1.838582E-23; m[3226] = 5.485102E-24; m[3228] = 3.656734E-24; m[3312] = 1.639E-10; m[3314] = 6.648608E-23; m[3322] = 2.9E-10; m[3324] = 7.233101E-23; m[3334] = 8.21E-11; m[4112] = 2.991874E-22; m[4114] = 4.088274E-23; m[4122] = 2.E-13; m[4132] = 1.12E-13; m[4212] = 3.999999E-22; m[4214] = 3.291061E-22; m[4222] = 2.951624E-22; m[4224] = 4.417531E-23; m[4232] = 4.42E-13; m[4332] = 6.9E-14; m[4412] = 3.335641E-13; m[4422] = 3.335641E-13; m[4432] = 3.335641E-13; m[5112] = 1.E-19; m[5122] = 1.38E-12; m[5132] = 1.42E-12; m[5142] = 1.290893E-12; m[5212] = 1.E-19; m[5222] = 1.E-19; m[5232] = 1.42E-12; m[5242] = 1.290893E-12; m[5312] = 1.E-19; m[5322] = 
1.E-19; m[5332] = 1.55E-12; m[5342] = 1.290893E-12; m[5442] = 1.290893E-12; m[5512] = 1.290893E-12; m[5522] = 1.290893E-12; m[5532] = 1.290893E-12; m[5542] = 1.290893E-12; m[10111] = 2.48382E-24; m[10113] = 4.635297E-24; m[10115] = 2.54136E-24; m[10211] = 2.48382E-24; m[10213] = 4.635297E-24; m[10215] = 2.54136E-24; m[10223] = 1.828367E-24; m[10225] = 3.636531E-24; m[10311] = 2.437823E-24; m[10313] = 7.313469E-24; m[10315] = 3.538775E-24; m[10321] = 2.437823E-24; m[10323] = 7.313469E-24; m[10325] = 3.538775E-24; m[10331] = 4.804469E-24; m[10411] = 4.38E-24; m[10413] = 3.29E-23; m[10421] = 4.38E-24; m[10423] = 3.22653E-23; m[10431] = 6.5821E-22; m[10433] = 6.5821E-22; m[10441] = 6.453061E-23; m[10511] = 4.39E-24; m[10513] = 1.65E-23; m[10521] = 4.39E-24; m[10523] = 1.65E-23; m[10531] = 4.39E-24; m[10533] = 1.65E-23; m[11114] = 2.194041E-24; m[11116] = 1.828367E-24; m[11212] = 1.880606E-24; m[11216] = 1.828367E-24; m[12112] = 2.194041E-24; m[12114] = 2.194041E-24; m[12116] = 5.063171E-24; m[12126] = 1.828367E-24; m[12212] = 2.194041E-24; m[12214] = 2.194041E-24; m[12216] = 5.063171E-24; m[12224] = 2.194041E-24; m[12226] = 1.828367E-24; m[13112] = 6.582122E-24; m[13114] = 1.09702E-23; m[13116] = 5.485102E-24; m[13122] = 1.316424E-23; m[13124] = 1.09702E-23; m[13126] = 6.928549E-24; m[13212] = 6.582122E-24; m[13214] = 1.09702E-23; m[13216] = 5.485102E-24; m[13222] = 6.582122E-24; m[13224] = 1.09702E-23; m[13226] = 5.485102E-24; m[13314] = 2.742551E-23; m[13324] = 2.742551E-23; m[14122] = 1.828367E-22; m[20022] = 1.E+16; m[20113] = 1.567172E-24; m[20213] = 1.567172E-24; m[20223] = 2.708692E-23; m[20313] = 3.782829E-24; m[20315] = 2.384827E-24; m[20323] = 3.782829E-24; m[20325] = 2.384827E-24; m[20333] = 1.198929E-23; m[20413] = 2.63E-24; m[20423] = 2.63E-24; m[20433] = 6.5821E-22; m[20443] = 7.395643E-22; m[20513] = 2.63E-24; m[20523] = 2.63E-24; m[20533] = 2.63E-24; m[21112] = 2.632849E-24; m[21114] = 3.291061E-24; m[21212] = 2.632849E-24; m[21214] = 6.582122E-24; m[22112] = 4.388081E-24; m[22114] = 3.291061E-24; m[22122] = 2.632849E-24; m[22124] = 6.582122E-24; m[22212] = 4.388081E-24; m[22214] = 3.291061E-24; m[22222] = 2.632849E-24; m[22224] = 3.291061E-24; m[23112] = 7.313469E-24; m[23114] = 2.991874E-24; m[23122] = 4.388081E-24; m[23124] = 6.582122E-24; m[23126] = 3.291061E-24; m[23212] = 7.313469E-24; m[23214] = 2.991874E-24; m[23222] = 7.313469E-24; m[23224] = 2.991874E-24; m[30113] = 2.632849E-24; m[30213] = 2.632849E-24; m[30221] = 1.880606E-24; m[30223] = 2.089563E-24; m[30313] = 2.056913E-24; m[30323] = 2.056913E-24; m[30443] = 2.419898E-23; m[31114] = 1.880606E-24; m[31214] = 3.291061E-24; m[32112] = 3.989164E-24; m[32114] = 1.880606E-24; m[32124] = 3.291061E-24; m[32212] = 3.989164E-24; m[32214] = 1.880606E-24; m[32224] = 1.880606E-24; m[33122] = 1.880606E-23; m[42112] = 6.582122E-24; m[42212] = 6.582122E-24; m[43122] = 2.194041E-24; m[53122] = 4.388081E-24; m[100111] = 1.645531E-24; m[100113] = 1.64553E-24; m[100211] = 1.645531E-24; m[100213] = 1.64553E-24; m[100221] = 1.196749E-23; m[100223] = 3.061452E-24; m[100313] = 2.837122E-24; m[100323] = 2.837122E-24; m[100331] = 4.459432E-25; m[100333] = 4.388081E-24; m[100441] = 4.701516E-23; m[100443] = 2.076379E-21; m[100553] = 2.056913E-20; m[200553] = 3.242425E-20; m[300553] = 3.210791E-23; m[9000111] = 8.776163E-24; m[9000211] = 8.776163E-24; m[9000443] = 8.227652E-24; m[9000553] = 5.983747E-24; m[9010111] = 3.164482E-24; m[9010211] = 3.164482E-24; m[9010221] = 9.403031E-24; m[9010443] = 8.438618E-24; m[9010553] = 8.3318E-24; 
m[9020443] = 1.061633E-23; m[9030221] = 6.038644E-24; m[9042413] = 2.07634E-21; m[9050225] = 1.394517E-24; m[9060225] = 3.291061E-24; m[9080225] = 4.388081E-24; m[9090225] = 2.056913E-24; m[9910445] = 2.07634E-21; m[9920443] = 2.07634E-21; return true; } }; const int LHCB_2011_I917009::stablePDGIds[205] = { 311, 543, 545, 551, 555, 557, 1103, 2101, 2103, 2203, 3101, 3103, 3201, 3203, 3303, 4101, 4103, 4124, 4201, 4203, 4301, 4303, 4312, 4314, 4322, 4324, 4334, 4403, 4414, 4424, 4434, 4444, 5101, 5103, 5114, 5201, 5203, 5214, 5224, 5301, 5303, 5314, 5324, 5334, 5401, 5403, 5412, 5414, 5422, 5424, 5432, 5434, 5444, 5503, 5514, 5524, 5534, 5544, 5554, 10022, 10333, 10335, 10443, 10541, 10543, 10551, 10553, 10555, 11112, 12118, 12122, 12218, 12222, 13316, 13326, 20543, 20553, 20555, 23314, 23324, 30343, 30353, 30363, 30553, 33314, 33324, 41214, 42124, 52114, 52214, 100311, 100315, 100321, 100325, 100411, 100413, 100421, 100423, 100551, 100555, 100557, 110551, 110553, 110555, 120553, 120555, 130553, 200551, 200555, 210551, 210553, 220553, 1000001, 1000002, 1000003, 1000004, 1000005, 1000006, 1000011, 1000012, 1000013, 1000014, 1000015, 1000016, 1000021, 1000022, 1000023, 1000024, 1000025, 1000035, 1000037, 1000039, 2000001, 2000002, 2000003, 2000004, 2000005, 2000006, 2000011, 2000012, 2000013, 2000014, 2000015, 2000016, 3000111, 3000113, 3000211, 3000213, 3000221, 3000223, 3000331, 3100021, 3100111, 3100113, 3200111, 3200113, 3300113, 3400113, 4000001, 4000002, 4000011, 4000012, 5000039, 9000221, 9900012, 9900014, 9900016, 9900023, 9900024, 9900041, 9900042 }; // Hook for the plugin system DECLARE_RIVET_PLUGIN(LHCB_2011_I917009); } diff --git a/analyses/pluginLHCb/LHCB_2012_I1119400.cc b/analyses/pluginLHCb/LHCB_2012_I1119400.cc --- a/analyses/pluginLHCb/LHCB_2012_I1119400.cc +++ b/analyses/pluginLHCb/LHCB_2012_I1119400.cc @@ -1,358 +1,358 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/ChargedFinalState.hh" namespace Rivet { class LHCB_2012_I1119400 : public Analysis { public: /// @name Constructors etc. 
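/// Ratio bookkeeping sketch (illustrative): each _ratiotype entry pairs the numerator
/// and denominator PDG ids, with a second id of -1 marking an antiparticle/particle
/// ratio, e.g.
///   _ratiotype["pbarp"] = make_pair(2212, -1);   // pbar / p
///   _ratiotype["ppi"]   = make_pair(2212, 211);  // p / pi
/// exactly as set up in init() below; the corresponding numerator and denominator
/// histograms are divided into Scatter2Ds in finalize().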
//@{ /// Constructor LHCB_2012_I1119400() : Analysis("LHCB_2012_I1119400"), _p_min(5.0), _pt_min(0.0),_pt1_edge(0.8), _pt2_edge(1.2), //_eta_nbins(4), _eta_min(2.5), _eta_max(4.5) { } //@} public: /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { fillMap(_partLftMap); int id_shift = 0; if (fuzzyEquals(sqrtS(), 7*TeV)) id_shift = 1; // define ratios if second pdgid in pair is -1, it means that is a antiparticle/particle ratio _ratiotype["pbarp"] = make_pair(2212, -1); _ratiotype["kminuskplus"] = make_pair(321, -1); _ratiotype["piminuspiplus"] = make_pair(211, -1); _ratiotype["ppi"] = make_pair(2212, 211); _ratiotype["kpi"] = make_pair(321, 211); _ratiotype["pk"] = make_pair(2212, 321); std::map _hepdataid; _hepdataid["pbarp"] = 1 + id_shift; _hepdataid["kminuskplus"] = 3 + id_shift; _hepdataid["piminuspiplus"] = 5 + id_shift; _hepdataid["ppi"] = 7 + id_shift; _hepdataid["kpi"] = 9 + id_shift; _hepdataid["pk"] = 11 + id_shift; std::map >::iterator it; // booking histograms for (it=_ratiotype.begin(); it!=_ratiotype.end(); it++) { _h_ratio_lowpt [it->first] = bookScatter2D(_hepdataid[it->first], 1, 1); _h_ratio_midpt [it->first] = bookScatter2D(_hepdataid[it->first], 1, 2); _h_ratio_highpt[it->first] = bookScatter2D(_hepdataid[it->first], 1, 3); _h_num_lowpt [it->first] = bookHisto1D ("TMP/num_l_"+it->first,refData(_hepdataid[it->first], 1, 1)); _h_num_midpt [it->first] = bookHisto1D ("TMP/num_m_"+it->first,refData(_hepdataid[it->first], 1, 2)); _h_num_highpt [it->first] = bookHisto1D ("TMP/num_h_"+it->first,refData(_hepdataid[it->first], 1, 3)); _h_den_lowpt [it->first] = bookHisto1D ("TMP/den_l_"+it->first,refData(_hepdataid[it->first], 1, 1)); _h_den_midpt [it->first] = bookHisto1D ("TMP/den_m_"+it->first,refData(_hepdataid[it->first], 1, 2)); _h_den_highpt [it->first] = bookHisto1D ("TMP/den_h_"+it->first,refData(_hepdataid[it->first], 1, 3)); } declare(ChargedFinalState(_eta_min, _eta_max, _pt_min*GeV), "CFS"); } // Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); const ChargedFinalState& cfs = apply(event, "CFS"); foreach (const Particle& p, cfs.particles()) { int id = p.pid(); // continue if particle not a proton, a kaon or a pion if ( !( (abs(id) == 211) || (abs(id) == 321) || (abs(id) == 2212))) { continue; } // cut in momentum const FourMomentum& qmom = p.momentum(); if (qmom.p3().mod() < _p_min) continue; // Lifetime cut: ctau sum of all particle ancestors < 10^-9 m according to the paper (see eq. 
5) const double MAX_CTAU = 1.0e-9; // [m] double ancestor_lftsum = getMotherLifeTimeSum(p); if ( (ancestor_lftsum < 0.0) || (ancestor_lftsum > MAX_CTAU) ) continue; double eta = qmom.eta(); double pT = qmom.pT(); std::map >::iterator it; for (it=_ratiotype.begin(); it!=_ratiotype.end(); it++) { // check what type of ratio is if ((it->second.second)==-1) { // check ptbin if (pT < _pt1_edge) { // filling histos for numerator and denominator if (id == -abs(it->second.first)) _h_num_lowpt[it->first]->fill(eta, weight); if (id == abs(it->second.first)) _h_den_lowpt[it->first]->fill(eta, weight); } else if (pT < _pt2_edge) { // filling histos for numerator and denominator if (id == -abs(it->second.first)) _h_num_midpt[it->first]->fill(eta, weight); if (id == abs(it->second.first)) _h_den_midpt[it->first]->fill(eta, weight); } else { // filling histos for numerator and denominator if (id == -abs(it->second.first)) _h_num_highpt[it->first]->fill(eta, weight); if (id == abs(it->second.first)) _h_den_highpt[it->first]->fill(eta, weight); } } else { // check what type of ratio is if (pT < _pt1_edge) { // filling histos for numerator and denominator if (abs(id) == abs(it->second.first)) _h_num_lowpt[it->first]->fill(eta, weight); if (abs(id) == abs(it->second.second)) _h_den_lowpt[it->first]->fill(eta, weight); } else if (pT < _pt2_edge) { // filling histos for numerator and denominator if (abs(id) == abs(it->second.first)) _h_num_midpt[it->first]->fill(eta, weight); if (abs(id) == abs(it->second.second)) _h_den_midpt[it->first]->fill(eta, weight); } else { // filling histos for numerator and denominator if (abs(id) == abs(it->second.first)) _h_num_highpt[it->first]->fill(eta, weight); if (abs(id) == abs(it->second.second)) _h_den_highpt[it->first]->fill(eta, weight); } } } } } // Generate the ratio histograms void finalize() { std::map >::iterator it; // booking histograms for (it=_ratiotype.begin(); it!=_ratiotype.end(); it++) { divide(_h_num_lowpt[it->first], _h_den_lowpt[it->first], _h_ratio_lowpt[it->first]); divide(_h_num_midpt[it->first], _h_den_midpt[it->first], _h_ratio_midpt[it->first]); divide(_h_num_highpt[it->first], _h_den_highpt[it->first], _h_ratio_highpt[it->first]); } } //@} private: // Get particle lifetime from hardcoded data double getLifeTime(int pid) { pid = abs(pid); double lft = -1.0; map::iterator pPartLft = _partLftMap.find(pid); // search stable particle list if (pPartLft == _partLftMap.end()) { if (pid <= 100) return 0.0; for (size_t i=0; i < sizeof(_stablePDGIds)/sizeof(unsigned int); i++) { if (pid == _stablePDGIds[i]) { lft = 0.0; break; } } } else { lft = (*pPartLft).second; } if (lft < 0.0 && PID::isHadron(pid)) { MSG_WARNING("Lifetime map imcomplete --- " << pid << "... assume zero lifetime"); lft = 0.0; } return lft; } // Data members like post-cuts event weight counters go here const double getMotherLifeTimeSum(const Particle& p) { if (p.genParticle() == nullptr) return -1.; double lftSum = 0.; double plft = 0.; ConstGenParticlePtr part = p.genParticle(); ConstGenVertexPtr ivtx = part->production_vertex(); while(ivtx){ - vector part_in = ivtx->particles_in(); + vector part_in = HepMCUtils::particles(ivtx, Relatives::PARENTS); if (part_in.size() < 1) { lftSum = -1.; break; }; part = part_in.at(0); if ( !(part) ) { lftSum = -1.; break; }; ivtx = part->production_vertex(); if ( (part->pdg_id() == 2212) || !(ivtx) ) break; // reached beam plft = getLifeTime(part->pdg_id()); if (plft < 0.) 
{ lftSum = -1.; break; }; lftSum += plft; }; return (lftSum * c_light); } /// @name Private variables // Momentum threshold double _p_min; // The edges of the intervals of transversal momentum double _pt_min; double _pt1_edge; double _pt2_edge; // The limits of the pseudorapidity window //int _eta_nbins; double _eta_min; double _eta_max; // Map between PDG id and particle lifetimes in seconds std::map _partLftMap; // Set of PDG Ids for stable particles (PDG Id <= 100 are considered stable) static const int _stablePDGIds[205]; // Define histograms // ratio std::map _h_ratio_lowpt; std::map _h_ratio_midpt; std::map _h_ratio_highpt; // numerator std::map _h_num_lowpt; std::map _h_num_midpt; std::map _h_num_highpt; // denominator std::map _h_den_lowpt; std::map _h_den_midpt; std::map _h_den_highpt; // Map of ratios and IDs of numerator and denominator std::map > _ratiotype; // Fill the PDG Id to Lifetime[seconds] map // Data was extracted from LHCb Particle Table through LHCb::ParticlePropertySvc bool fillMap(map &m) { m[6] = 4.707703E-25; m[11] = 1.E+16; m[12] = 1.E+16; m[13] = 2.197019E-06; m[14] = 1.E+16; m[15] = 2.906E-13; m[16] = 1.E+16; m[22] = 1.E+16; m[23] = 2.637914E-25; m[24] = 3.075758E-25; m[25] = 9.4E-26; m[35] = 9.4E-26; m[36] = 9.4E-26; m[37] = 9.4E-26; m[84] = 3.335641E-13; m[85] = 1.290893E-12; m[111] = 8.4E-17; m[113] = 4.405704E-24; m[115] = 6.151516E-24; m[117] = 4.088275E-24; m[119] = 2.102914E-24; m[130] = 5.116E-08; m[150] = 1.525E-12; m[211] = 2.6033E-08; m[213] = 4.405704E-24; m[215] = 6.151516E-24; m[217] = 4.088275E-24; m[219] = 2.102914E-24; m[221] = 5.063171E-19; m[223] = 7.752794E-23; m[225] = 3.555982E-24; m[227] = 3.91793E-24; m[229] = 2.777267E-24; m[310] = 8.953E-11; m[313] = 1.308573E-23; m[315] = 6.038644E-24; m[317] = 4.139699E-24; m[319] = 3.324304E-24; m[321] = 1.238E-08; m[323] = 1.295693E-23; m[325] = 6.682357E-24; m[327] = 4.139699E-24; m[329] = 3.324304E-24; m[331] = 3.210791E-21; m[333] = 1.545099E-22; m[335] = 9.016605E-24; m[337] = 7.565657E-24; m[350] = 1.407125E-12; m[411] = 1.04E-12; m[413] = 6.856377E-21; m[415] = 1.778952E-23; m[421] = 4.101E-13; m[423] = 1.000003E-19; m[425] = 1.530726E-23; m[431] = 5.E-13; m[433] = 1.000003E-19; m[435] = 3.291061E-23; m[441] = 2.465214E-23; m[443] = 7.062363E-21; m[445] = 3.242425E-22; m[510] = 1.525E-12; m[511] = 1.525E-12; m[513] = 1.000019E-19; m[515] = 1.31E-23; m[521] = 1.638E-12; m[523] = 1.000019E-19; m[525] = 1.31E-23; m[530] = 1.536875E-12; m[531] = 1.472E-12; m[533] = 1.E-19; m[535] = 1.31E-23; m[541] = 4.5E-13; m[553] = 1.218911E-20; m[1112] = 4.539394E-24; m[1114] = 5.578069E-24; m[1116] = 1.994582E-24; m[1118] = 2.269697E-24; m[1212] = 4.539394E-24; m[1214] = 5.723584E-24; m[1216] = 1.994582E-24; m[1218] = 1.316424E-24; m[2112] = 8.857E+02; m[2114] = 5.578069E-24; m[2116] = 4.388081E-24; m[2118] = 2.269697E-24; m[2122] = 4.539394E-24; m[2124] = 5.723584E-24; m[2126] = 1.994582E-24; m[2128] = 1.316424E-24; m[2212] = 1.E+16; m[2214] = 5.578069E-24; m[2216] = 4.388081E-24; m[2218] = 2.269697E-24; m[2222] = 4.539394E-24; m[2224] = 5.578069E-24; m[2226] = 1.994582E-24; m[2228] = 2.269697E-24; m[3112] = 1.479E-10; m[3114] = 1.670589E-23; m[3116] = 5.485102E-24; m[3118] = 3.656734E-24; m[3122] = 2.631E-10; m[3124] = 4.219309E-23; m[3126] = 8.227653E-24; m[3128] = 3.291061E-24; m[3212] = 7.4E-20; m[3214] = 1.828367E-23; m[3216] = 5.485102E-24; m[3218] = 3.656734E-24; m[3222] = 8.018E-11; m[3224] = 1.838582E-23; m[3226] = 5.485102E-24; m[3228] = 3.656734E-24; m[3312] = 1.639E-10; m[3314] = 6.648608E-23; 
m[3322] = 2.9E-10; m[3324] = 7.233101E-23; m[3334] = 8.21E-11; m[4112] = 2.991874E-22; m[4114] = 4.088274E-23; m[4122] = 2.E-13; m[4132] = 1.12E-13; m[4212] = 3.999999E-22; m[4214] = 3.291061E-22; m[4222] = 2.951624E-22; m[4224] = 4.417531E-23; m[4232] = 4.42E-13; m[4332] = 6.9E-14; m[4412] = 3.335641E-13; m[4422] = 3.335641E-13; m[4432] = 3.335641E-13; m[5112] = 1.E-19; m[5122] = 1.38E-12; m[5132] = 1.42E-12; m[5142] = 1.290893E-12; m[5212] = 1.E-19; m[5222] = 1.E-19; m[5232] = 1.42E-12; m[5242] = 1.290893E-12; m[5312] = 1.E-19; m[5322] = 1.E-19; m[5332] = 1.55E-12; m[5342] = 1.290893E-12; m[5442] = 1.290893E-12; m[5512] = 1.290893E-12; m[5522] = 1.290893E-12; m[5532] = 1.290893E-12; m[5542] = 1.290893E-12; m[10111] = 2.48382E-24; m[10113] = 4.635297E-24; m[10115] = 2.54136E-24; m[10211] = 2.48382E-24; m[10213] = 4.635297E-24; m[10215] = 2.54136E-24; m[10223] = 1.828367E-24; m[10225] = 3.636531E-24; m[10311] = 2.437823E-24; m[10313] = 7.313469E-24; m[10315] = 3.538775E-24; m[10321] = 2.437823E-24; m[10323] = 7.313469E-24; m[10325] = 3.538775E-24; m[10331] = 4.804469E-24; m[10411] = 4.38E-24; m[10413] = 3.29E-23; m[10421] = 4.38E-24; m[10423] = 3.22653E-23; m[10431] = 6.5821E-22; m[10433] = 6.5821E-22; m[10441] = 6.453061E-23; m[10511] = 4.39E-24; m[10513] = 1.65E-23; m[10521] = 4.39E-24; m[10523] = 1.65E-23; m[10531] = 4.39E-24; m[10533] = 1.65E-23; m[11114] = 2.194041E-24; m[11116] = 1.828367E-24; m[11212] = 1.880606E-24; m[11216] = 1.828367E-24; m[12112] = 2.194041E-24; m[12114] = 2.194041E-24; m[12116] = 5.063171E-24; m[12126] = 1.828367E-24; m[12212] = 2.194041E-24; m[12214] = 2.194041E-24; m[12216] = 5.063171E-24; m[12224] = 2.194041E-24; m[12226] = 1.828367E-24; m[13112] = 6.582122E-24; m[13114] = 1.09702E-23; m[13116] = 5.485102E-24; m[13122] = 1.316424E-23; m[13124] = 1.09702E-23; m[13126] = 6.928549E-24; m[13212] = 6.582122E-24; m[13214] = 1.09702E-23; m[13216] = 5.485102E-24; m[13222] = 6.582122E-24; m[13224] = 1.09702E-23; m[13226] = 5.485102E-24; m[13314] = 2.742551E-23; m[13324] = 2.742551E-23; m[14122] = 1.828367E-22; m[20022] = 1.E+16; m[20113] = 1.567172E-24; m[20213] = 1.567172E-24; m[20223] = 2.708692E-23; m[20313] = 3.782829E-24; m[20315] = 2.384827E-24; m[20323] = 3.782829E-24; m[20325] = 2.384827E-24; m[20333] = 1.198929E-23; m[20413] = 2.63E-24; m[20423] = 2.63E-24; m[20433] = 6.5821E-22; m[20443] = 7.395643E-22; m[20513] = 2.63E-24; m[20523] = 2.63E-24; m[20533] = 2.63E-24; m[21112] = 2.632849E-24; m[21114] = 3.291061E-24; m[21212] = 2.632849E-24; m[21214] = 6.582122E-24; m[22112] = 4.388081E-24; m[22114] = 3.291061E-24; m[22122] = 2.632849E-24; m[22124] = 6.582122E-24; m[22212] = 4.388081E-24; m[22214] = 3.291061E-24; m[22222] = 2.632849E-24; m[22224] = 3.291061E-24; m[23112] = 7.313469E-24; m[23114] = 2.991874E-24; m[23122] = 4.388081E-24; m[23124] = 6.582122E-24; m[23126] = 3.291061E-24; m[23212] = 7.313469E-24; m[23214] = 2.991874E-24; m[23222] = 7.313469E-24; m[23224] = 2.991874E-24; m[30113] = 2.632849E-24; m[30213] = 2.632849E-24; m[30221] = 1.880606E-24; m[30223] = 2.089563E-24; m[30313] = 2.056913E-24; m[30323] = 2.056913E-24; m[30443] = 2.419898E-23; m[31114] = 1.880606E-24; m[31214] = 3.291061E-24; m[32112] = 3.989164E-24; m[32114] = 1.880606E-24; m[32124] = 3.291061E-24; m[32212] = 3.989164E-24; m[32214] = 1.880606E-24; m[32224] = 1.880606E-24; m[33122] = 1.880606E-23; m[42112] = 6.582122E-24; m[42212] = 6.582122E-24; m[43122] = 2.194041E-24; m[53122] = 4.388081E-24; m[100111] = 1.645531E-24; m[100113] = 1.64553E-24; m[100211] = 1.645531E-24; m[100213] = 
1.64553E-24; m[100221] = 1.196749E-23; m[100223] = 3.061452E-24; m[100313] = 2.837122E-24; m[100323] = 2.837122E-24; m[100331] = 4.459432E-25; m[100333] = 4.388081E-24; m[100441] = 4.701516E-23; m[100443] = 2.076379E-21; m[100553] = 2.056913E-20; m[200553] = 3.242425E-20; m[300553] = 3.210791E-23; m[9000111] = 8.776163E-24; m[9000211] = 8.776163E-24; m[9000443] = 8.227652E-24; m[9000553] = 5.983747E-24; m[9010111] = 3.164482E-24; m[9010211] = 3.164482E-24; m[9010221] = 9.403031E-24; m[9010443] = 8.438618E-24; m[9010553] = 8.3318E-24; m[9020443] = 1.061633E-23; m[9030221] = 6.038644E-24; m[9042413] = 2.07634E-21; m[9050225] = 1.394517E-24; m[9060225] = 3.291061E-24; m[9080225] = 4.388081E-24; m[9090225] = 2.056913E-24; m[9910445] = 2.07634E-21; m[9920443] = 2.07634E-21; return true; } }; const int LHCB_2012_I1119400::_stablePDGIds[205] = { 311, 543, 545, 551, 555, 557, 1103, 2101, 2103, 2203, 3101, 3103, 3201, 3203, 3303, 4101, 4103, 4124, 4201, 4203, 4301, 4303, 4312, 4314, 4322, 4324, 4334, 4403, 4414, 4424, 4434, 4444, 5101, 5103, 5114, 5201, 5203, 5214, 5224, 5301, 5303, 5314, 5324, 5334, 5401, 5403, 5412, 5414, 5422, 5424, 5432, 5434, 5444, 5503, 5514, 5524, 5534, 5544, 5554, 10022, 10333, 10335, 10443, 10541, 10543, 10551, 10553, 10555, 11112, 12118, 12122, 12218, 12222, 13316, 13326, 20543, 20553, 20555, 23314, 23324, 30343, 30353, 30363, 30553, 33314, 33324, 41214, 42124, 52114, 52214, 100311, 100315, 100321, 100325, 100411, 100413, 100421, 100423, 100551, 100555, 100557, 110551, 110553, 110555, 120553, 120555, 130553, 200551, 200555, 210551, 210553, 220553, 1000001, 1000002, 1000003, 1000004, 1000005, 1000006, 1000011, 1000012, 1000013, 1000014, 1000015, 1000016, 1000021, 1000022, 1000023, 1000024, 1000025, 1000035, 1000037, 1000039, 2000001, 2000002, 2000003, 2000004, 2000005, 2000006, 2000011, 2000012, 2000013, 2000014, 2000015, 2000016, 3000111, 3000113, 3000211, 3000213, 3000221, 3000223, 3000331, 3100021, 3100111, 3100113, 3200111, 3200113, 3300113, 3400113, 4000001, 4000002, 4000011, 4000012, 5000039, 9000221, 9900012, 9900014, 9900016, 9900023, 9900024, 9900041, 9900042 }; // Plugin hook DECLARE_RIVET_PLUGIN(LHCB_2012_I1119400); } diff --git a/analyses/pluginLHCb/LHCB_2014_I1281685.cc b/analyses/pluginLHCb/LHCB_2014_I1281685.cc --- a/analyses/pluginLHCb/LHCB_2014_I1281685.cc +++ b/analyses/pluginLHCb/LHCB_2014_I1281685.cc @@ -1,1178 +1,1178 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/ChargedFinalState.hh" namespace Rivet { /// Charged particle multiplicities and densities in $pp$ collisions at $\sqrt{s} = 7$ TeV class LHCB_2014_I1281685 : public Analysis { public: /// @name Constructors etc. 
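/// Prompt-selection sketch (illustrative, mirroring the cut applied in analyze() below):
/// a charged particle is counted only if the summed PDG lifetimes of its ancestors
/// stay below _maxlft (10 ps), e.g.
///   const double lftSum = getAncestorSumLifetime(p);
///   const bool prompt = (lftSum >= 0.) && (lftSum <= _maxlft);
/// with getAncestorSumLifetime() as implemented further down in this class.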
//@{ /// Constructor LHCB_2014_I1281685() : Analysis("LHCB_2014_I1281685"), _p_min(2.0), _pt_min(0.2), _eta_min(2.0), _eta_max(4.8), _maxlft(1.0e-11) { } //@} /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { fillMap(_partLftMap); // Projections declare(ChargedFinalState(_eta_min, _eta_max, _pt_min*GeV), "CFS"); // Book histograms _h_mult_total = bookHisto1D("d03-x01-y01", 50, 0.5, 50.5); _h_mult_eta[0] = bookHisto1D("d04-x01-y01", 21, -0.5, 20.5); //eta=[2.0,2.5] _h_mult_eta[1] = bookHisto1D("d04-x01-y02", 21, -0.5, 20.5); //eta=[2.5,3.0] _h_mult_eta[2] = bookHisto1D("d04-x01-y03", 21, -0.5, 20.5); //eta=[3.0,3.5] _h_mult_eta[3] = bookHisto1D("d04-x01-y04", 21, -0.5, 20.5); //eta=[3.5,4.0] _h_mult_eta[4] = bookHisto1D("d04-x01-y05", 21, -0.5, 20.5); //eta=[4.0,4.5] _h_mult_pt[0] = bookHisto1D("d05-x01-y01", 21, -0.5, 20.5); //pT=[0.2,0.3]GeV _h_mult_pt[1] = bookHisto1D("d05-x01-y02", 21, -0.5, 20.5); //pT=[0.3,0.4]GeV _h_mult_pt[2] = bookHisto1D("d05-x01-y03", 21, -0.5, 20.5); //pT=[0.4,0.6]GeV _h_mult_pt[3] = bookHisto1D("d05-x01-y04", 21, -0.5, 20.5); //pT=[0.6,1.0]GeV _h_mult_pt[4] = bookHisto1D("d05-x01-y05", 21, -0.5, 20.5); //pT=[1.0,2.0]GeV _h_dndeta = bookHisto1D("d01-x01-y01", 14, 2.0, 4.8); //eta=[2,4.8] _h_dndpt = bookHisto1D("d02-x01-y01", 18, 0.2, 2.0); //pT =[0,2]GeV // Counters _sumW = 0; } /// Perform the per-event analysis void analyze(const Event& event) { // Variable to store multiplicities per event int LHCbcountAll = 0; //count particles fulfiling all requirements int LHCbcountEta[8] = {0,0,0,0,0,0,0,0}; //count per eta-bin int LHCbcountPt[7] = {0,0,0,0,0,0,0}; //count per pT-bin vector val_dNdEta; vector val_dNdPt; val_dNdEta.clear(); val_dNdPt.clear(); const ChargedFinalState& cfs = apply(event, "CFS"); foreach (const Particle& p, cfs.particles()) { int id = p.pdgId(); // continue if particle is not a pion, kaon, proton, muon or electron if ( !( (abs(id) == 211) || (abs(id) == 321) || (abs(id) == 2212) || (abs(id) == 13) || (abs(id) == 11)) ) { continue; } const FourMomentum& qmom = p.momentum(); const double eta = p.momentum().eta(); const double pT = p.momentum().pT(); //minimum momentum if (qmom.p3().mod() < _p_min) continue; //minimum tr. momentum if (pT < _pt_min) continue; //eta range if ((eta < _eta_min) || (eta > _eta_max)) continue; /* Select only prompt particles via lifetime */ //Sum of all mother lifetimes (PDG lifetime) < 10ps double ancestors_sumlft = getAncestorSumLifetime(p); if( (ancestors_sumlft > _maxlft) || (ancestors_sumlft < 0) ) continue; //after all cuts; LHCbcountAll++; //count particles in whole kin. range //in eta bins if( eta >2.0 && eta <= 2.5) LHCbcountEta[0]++; if( eta >2.5 && eta <= 3.0) LHCbcountEta[1]++; if( eta >3.0 && eta <= 3.5) LHCbcountEta[2]++; if( eta >3.5 && eta <= 4.0) LHCbcountEta[3]++; if( eta >4.0 && eta <= 4.5) LHCbcountEta[4]++; if( eta >2.0 && eta <= 4.8) LHCbcountEta[5]++; //cross-check //in pT bins if( pT > 0.2 && pT <= 0.3) LHCbcountPt[0]++; if( pT > 0.3 && pT <= 0.4) LHCbcountPt[1]++; if( pT > 0.4 && pT <= 0.6) LHCbcountPt[2]++; if( pT > 0.6 && pT <= 1.0) LHCbcountPt[3]++; if( pT > 1.0 && pT <= 2.0) LHCbcountPt[4]++; if( pT > 0.2) LHCbcountPt[5]++; //cross-check //particle densities -> need proper normalization (finalize) val_dNdPt.push_back( pT ); val_dNdEta.push_back( eta ); }//end foreach // Fill histograms only, if at least 1 particle pre event was within the // kinematic range of the analysis! 
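// Note that _sumW is incremented only for such events, so the 1/_sumW
// normalisation applied in finalize() runs over selected events only.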
if (LHCbcountAll) { const double weight = event.weight(); _sumW += weight; _h_mult_total->fill(LHCbcountAll, weight); _h_mult_eta[0]->fill(LHCbcountEta[0], weight); _h_mult_eta[1]->fill(LHCbcountEta[1], weight); _h_mult_eta[2]->fill(LHCbcountEta[2], weight); _h_mult_eta[3]->fill(LHCbcountEta[3], weight); _h_mult_eta[4]->fill(LHCbcountEta[4], weight); _h_mult_pt[0]->fill(LHCbcountPt[0], weight); _h_mult_pt[1]->fill(LHCbcountPt[1], weight); _h_mult_pt[2]->fill(LHCbcountPt[2], weight); _h_mult_pt[3]->fill(LHCbcountPt[3], weight); _h_mult_pt[4]->fill(LHCbcountPt[4], weight); for (size_t part = 0; part < val_dNdEta.size(); part++) _h_dndeta->fill(val_dNdEta[part], weight); for (size_t part = 0; part < val_dNdPt.size(); part++) _h_dndpt->fill(val_dNdPt[part], weight); } } /// Normalise histograms etc., after the run void finalize() { const double scalefactor = 1.0/_sumW; // normalize multiplicity histograms by nEvents const double scale1k = 1000.; // to match '10^3' scale in reference histograms scale( _h_dndeta, scalefactor ); scale( _h_dndpt, scalefactor*0.1 ); //additional factor 0.1 for [0.1 GeV/c] scale( _h_mult_total, scalefactor*scale1k); _h_mult_eta[0]->scaleW( scalefactor*scale1k ); _h_mult_eta[1]->scaleW( scalefactor*scale1k ); _h_mult_eta[2]->scaleW( scalefactor*scale1k ); _h_mult_eta[3]->scaleW( scalefactor*scale1k ); _h_mult_eta[4]->scaleW( scalefactor*scale1k ); _h_mult_pt[0]->scaleW( scalefactor*scale1k ); _h_mult_pt[1]->scaleW( scalefactor*scale1k ); _h_mult_pt[2]->scaleW( scalefactor*scale1k ); _h_mult_pt[3]->scaleW( scalefactor*scale1k ); _h_mult_pt[4]->scaleW( scalefactor*scale1k ); } //@} private: // Get mean PDG lifetime for particle with PID double getLifetime(int pid) { double lft = 0.; map::iterator pPartLft = _partLftMap.find(pid); if (pPartLft != _partLftMap.end()) { lft = (*pPartLft).second; } else { // allow identifying missing life times only in debug mode MSG_DEBUG("Could not determine lifetime for particle with PID " << pid << "... Assume non-prompt particle"); lft = -1; } return lft; } // Get sum of all ancestor particles const double getAncestorSumLifetime(const Particle& p) { double lftSum = 0.; double plft = 0.; ConstGenParticlePtr part = p.genParticle(); if ( 0 == part ) return -1; ConstGenVertexPtr ivtx = part->production_vertex(); while(ivtx) { - vector part_in = ivtx->particles_in(); + vector part_in = HepMCUtils::particles(ivtx, Relatives::PARENTS); if (part_in.size() < 1) { lftSum = -1.; break; }; part = part_in.at(0); if ( !(part) ) { lftSum = -1.; break; }; ivtx = part->production_vertex(); if ( (part->pdg_id() == 2212) || !(ivtx) ) break; // reached beam plft = getLifetime(part->pdg_id()); if (plft < 0.) 
{ lftSum = -1.; break; }; lftSum += plft; } return (lftSum); } /// Hard-coded map linking PDG ID with PDG lifetime[s] (converted from ParticleTable.txt) bool fillMap(map& m) { // PDGID = LIFETIME m[22] = 1.000000e+016; m[-11] = 1.000000e+016; m[11] = 1.000000e+016; m[12] = 1.000000e+016; m[-13] = 2.197036e-006; m[13] = 2.197036e-006; m[111] = 8.438618e-017; m[211] = 2.603276e-008; m[-211] = 2.603276e-008; m[130] = 5.174624e-008; m[321] = 1.238405e-008; m[-321] = 1.238405e-008; m[2112] = 885.646128; m[2212] = 1.000000e+016; m[-2212] = 1.000000e+016; m[310] = 8.934603e-011; m[221] = 5.578070e-019; m[3122] = 2.631796e-010; m[3222] = 8.018178e-011; m[3212] = 7.395643e-020; m[3112] = 1.479129e-010; m[3322] = 2.899613e-010; m[3312] = 1.637344e-010; m[3334] = 8.207135e-011; m[-2112] = 885.646128; m[-3122] = 2.631796e-010; m[-3222] = 8.018178e-011; m[-3212] = 7.395643e-020; m[-3112] = 1.479129e-010; m[-3322] = 2.899613e-010; m[-3312] = 1.637344e-010; m[-3334] = 8.207135e-011; m[113] = 4.411610e-024; m[213] = 4.411610e-024; m[-213] = 4.411610e-024; m[223] = 7.798723e-023; m[333] = 1.545099e-022; m[323] = 1.295693e-023; m[-323] = 1.295693e-023; m[313] = 1.298249e-023; m[-313] = 1.298249e-023; m[20213] = 1.500000e-024; m[-20213] = 1.500000e-024; m[450000000] = 1.000000e+015; m[460000000] = 1.000000e+015; m[470000000] = 1.000000e+015; m[480000000] = 1.000000e+015; m[490000000] = 1.000000e+015; m[20022] = 1.000000e+016; m[-15] = 2.906014e-013; m[15] = 2.906014e-013; m[24] = 3.104775e-025; m[-24] = 3.104775e-025; m[23] = 2.637914e-025; m[411] = 1.051457e-012; m[-411] = 1.051457e-012; m[421] = 4.116399e-013; m[-421] = 4.116399e-013; m[431] = 4.904711e-013; m[-431] = 4.904711e-013; m[4122] = 1.994582e-013; m[-4122] = 1.994582e-013; m[443] = 7.565657e-021; m[413] = 6.856377e-021; m[-413] = 6.856377e-021; m[423] = 1.000003e-019; m[-423] = 1.000003e-019; m[433] = 1.000003e-019; m[-433] = 1.000003e-019; m[521] = 1.671000e-012; m[-521] = 1.671000e-012; m[511] = 1.536000e-012; m[-511] = 1.536000e-012; m[531] = 1.461000e-012; m[-531] = 1.461000e-012; m[541] = 4.600000e-013; m[-541] = 4.600000e-013; m[5122] = 1.229000e-012; m[-5122] = 1.229000e-012; m[4112] = 4.388081e-022; m[-4112] = 4.388081e-022; m[4212] = 3.999999e-022; m[-4212] = 3.999999e-022; m[4222] = 3.291060e-022; m[-4222] = 3.291060e-022; m[25] = 9.400000e-026; m[35] = 9.400000e-026; m[36] = 9.400000e-026; m[37] = 9.400000e-026; m[-37] = 9.400000e-026; m[4312] = 9.800002e-014; m[-4312] = 9.800002e-014; m[4322] = 3.500001e-013; m[-4322] = 3.500001e-013; m[4332] = 6.453061e-014; m[-4332] = 6.453061e-014; m[4132] = 9.824063e-014; m[-4132] = 9.824063e-014; m[4232] = 4.417532e-013; m[-4232] = 4.417532e-013; m[5222] = 1.000000e-019; m[-5222] = 1.000000e-019; m[5212] = 1.000000e-019; m[-5212] = 1.000000e-019; m[5112] = 1.000000e-019; m[-5112] = 1.000000e-019; m[5312] = 1.000000e-019; m[-5312] = 1.000000e-019; m[5322] = 1.000000e-019; m[-5322] = 1.000000e-019; m[5332] = 1.550000e-012; m[-5332] = 1.550000e-012; m[5132] = 1.390000e-012; m[-5132] = 1.390000e-012; m[5232] = 1.390000e-012; m[-5232] = 1.390000e-012; m[100443] = 2.194041e-021; m[331] = 3.258476e-021; m[441] = 4.113826e-023; m[10441] = 4.063038e-023; m[20443] = 7.154480e-022; m[445] = 3.164482e-022; m[9000111] = 1.149997e-023; m[9000211] = 1.149997e-023; m[-9000211] = 1.149997e-023; m[20113] = 1.500000e-024; m[115] = 6.151516e-024; m[215] = 6.151516e-024; m[-215] = 6.151516e-024; m[10323] = 7.313469e-024; m[-10323] = 7.313469e-024; m[10313] = 7.313469e-024; m[-10313] = 7.313469e-024; m[20323] = 
3.782829e-024; m[-20323] = 3.782829e-024; m[20313] = 3.782829e-024; m[-20313] = 3.782829e-024; m[10321] = 2.238817e-024; m[-10321] = 2.238817e-024; m[10311] = 2.238817e-024; m[-10311] = 2.238817e-024; m[325] = 6.682357e-024; m[-325] = 6.682357e-024; m[315] = 6.038644e-024; m[-315] = 6.038644e-024; m[10411] = 4.380000e-024; m[20413] = 2.630000e-024; m[10413] = 3.290000e-023; m[-415] = 2.632849e-023; m[-10411] = 4.380000e-024; m[-20413] = 2.630000e-024; m[-10413] = 3.290000e-023; m[415] = 2.632849e-023; m[10421] = 4.380000e-024; m[20423] = 2.630000e-024; m[10423] = 3.482604e-023; m[-425] = 2.861792e-023; m[-10421] = 4.380000e-024; m[-20423] = 2.630000e-024; m[-10423] = 3.482604e-023; m[425] = 2.861792e-023; m[10431] = 6.582100e-022; m[20433] = 6.582100e-022; m[10433] = 6.582100e-022; m[435] = 4.388100e-023; m[-10431] = 6.582100e-022; m[-20433] = 6.582100e-022; m[-10433] = 6.582100e-022; m[-435] = 4.388100e-023; m[2224] = 5.485102e-024; m[2214] = 5.485102e-024; m[2114] = 5.485102e-024; m[1114] = 5.485102e-024; m[-2224] = 5.485102e-024; m[-2214] = 5.485102e-024; m[-2114] = 5.485102e-024; m[-1114] = 5.485102e-024; m[-523] = 1.000019e-019; m[523] = 1.000019e-019; m[513] = 1.000019e-019; m[-513] = 1.000019e-019; m[533] = 1.000000e-019; m[-533] = 1.000000e-019; m[10521] = 4.390000e-024; m[20523] = 2.630000e-024; m[10523] = 1.650000e-023; m[525] = 1.310000e-023; m[-10521] = 4.390000e-024; m[-20523] = 2.630000e-024; m[-10523] = 1.650000e-023; m[-525] = 1.310000e-023; m[10511] = 4.390000e-024; m[20513] = 2.630000e-024; m[10513] = 1.650000e-023; m[515] = 1.310000e-023; m[-10511] = 4.390000e-024; m[-20513] = 2.630000e-024; m[-10513] = 1.650000e-023; m[-515] = 1.310000e-023; m[10531] = 4.390000e-024; m[20533] = 2.630000e-024; m[10533] = 1.650000e-023; m[535] = 1.310000e-023; m[-10531] = 4.390000e-024; m[-20533] = 2.630000e-024; m[-10533] = 1.650000e-023; m[-535] = 1.310000e-023; m[14] = 1.000000e+016; m[-14] = 1.000000e+016; m[-12] = 1.000000e+016; m[1] = 0.000000e+000; m[-1] = 0.000000e+000; m[2] = 0.000000e+000; m[-2] = 0.000000e+000; m[3] = 0.000000e+000; m[-3] = 0.000000e+000; m[4] = 0.000000e+000; m[-4] = 0.000000e+000; m[5] = 0.000000e+000; m[-5] = 0.000000e+000; m[6] = 4.707703e-025; m[-6] = 4.707703e-025; m[7] = 0.000000e+000; m[-7] = 0.000000e+000; m[8] = 0.000000e+000; m[-8] = 0.000000e+000; m[16] = 1.000000e+016; m[-16] = 1.000000e+016; m[17] = 0.000000e+000; m[-17] = 0.000000e+000; m[18] = 0.000000e+000; m[-18] = 0.000000e+000; m[21] = 0.000000e+000; m[32] = 0.000000e+000; m[33] = 0.000000e+000; m[34] = 0.000000e+000; m[-34] = 0.000000e+000; m[39] = 0.000000e+000; m[41] = 0.000000e+000; m[-41] = 0.000000e+000; m[42] = 0.000000e+000; m[-42] = 0.000000e+000; m[43] = 0.000000e+000; m[44] = 0.000000e+000; m[-44] = 0.000000e+000; m[81] = 0.000000e+000; m[82] = 0.000000e+000; m[-82] = 0.000000e+000; m[83] = 0.000000e+000; m[84] = 3.335641e-013; m[-84] = 3.335641e-013; m[85] = 1.290893e-012; m[-85] = 1.290893e-012; m[86] = 0.000000e+000; m[-86] = 0.000000e+000; m[87] = 0.000000e+000; m[-87] = 0.000000e+000; m[88] = 0.000000e+000; m[90] = 0.000000e+000; m[91] = 0.000000e+000; m[92] = 0.000000e+000; m[93] = 0.000000e+000; m[94] = 0.000000e+000; m[95] = 0.000000e+000; m[96] = 0.000000e+000; m[97] = 0.000000e+000; m[98] = 0.000000e+000; m[99] = 0.000000e+000; m[117] = 4.088275e-024; m[119] = 1.828367e-024; m[217] = 4.088275e-024; m[-217] = 4.088275e-024; m[219] = 1.828367e-024; m[-219] = 1.828367e-024; m[225] = 3.555982e-024; m[227] = 3.917930e-024; m[229] = 3.392846e-024; m[311] = 1.000000e+016; 
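// --- Illustrative sketch, not part of the patched file: the lifetime table above and below feeds
// getLifetime(), which getAncestorSumLifetime() uses for the prompt-particle selection in analyze().
// Condensed, and assuming the HepMCUtils::particles(vertex, Relatives::PARENTS) helper used
// elsewhere in this patch ("maxLifetime" stands in for the _maxlft member, 1.0e-11 s, i.e. 10 ps):
//
//   double sumlft = 0.;
//   ConstGenVertexPtr vtx = p.genParticle()->production_vertex();
//   while (vtx) {
//     const vector<ConstGenParticlePtr> parents = HepMCUtils::particles(vtx, Relatives::PARENTS);
//     if (parents.empty()) { sumlft = -1.; break; }       // broken ancestry -> treat as non-prompt
//     ConstGenParticlePtr mother = parents.front();
//     vtx = mother->production_vertex();
//     if (mother->pdg_id() == 2212 || !vtx) break;        // reached the beam proton
//     const double lft = getLifetime(mother->pdg_id());   // PDG lifetime looked up in this table
//     if (lft < 0.) { sumlft = -1.; break; }              // unknown PID -> treat as non-prompt
//     sumlft += lft;
//   }
//   const bool prompt = (sumlft >= 0.) && (sumlft <= maxLifetime);
//
// The lifetime table continues below. ---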
m[-311] = 1.000000e+016; m[317] = 4.139699e-024; m[-317] = 4.139699e-024; m[319] = 3.324304e-024; m[-319] = 3.324304e-024; m[327] = 4.139699e-024; m[-327] = 4.139699e-024; m[329] = 3.324304e-024; m[-329] = 3.324304e-024; m[335] = 8.660687e-024; m[337] = 7.565657e-024; m[543] = 0.000000e+000; m[-543] = 0.000000e+000; m[545] = 0.000000e+000; m[-545] = 0.000000e+000; m[551] = 0.000000e+000; m[553] = 1.253738e-020; m[555] = 1.000000e+016; m[557] = 0.000000e+000; m[-450000000] = 0.000000e+000; m[-490000000] = 0.000000e+000; m[-460000000] = 0.000000e+000; m[-470000000] = 0.000000e+000; m[1103] = 0.000000e+000; m[-1103] = 0.000000e+000; m[1112] = 4.388081e-024; m[-1112] = 4.388081e-024; m[1116] = 1.880606e-024; m[-1116] = 1.880606e-024; m[1118] = 2.194041e-024; m[-1118] = 2.194041e-024; m[1212] = 4.388081e-024; m[-1212] = 4.388081e-024; m[1214] = 5.485102e-024; m[-1214] = 5.485102e-024; m[1216] = 1.880606e-024; m[-1216] = 1.880606e-024; m[1218] = 1.462694e-024; m[-1218] = 1.462694e-024; m[2101] = 0.000000e+000; m[-2101] = 0.000000e+000; m[2103] = 0.000000e+000; m[-2103] = 0.000000e+000; m[2116] = 4.388081e-024; m[-2116] = 4.388081e-024; m[2118] = 2.194041e-024; m[-2118] = 2.194041e-024; m[2122] = 4.388081e-024; m[-2122] = 4.388081e-024; m[2124] = 5.485102e-024; m[-2124] = 5.485102e-024; m[2126] = 1.880606e-024; m[-2126] = 1.880606e-024; m[2128] = 1.462694e-024; m[-2128] = 1.462694e-024; m[2203] = 0.000000e+000; m[-2203] = 0.000000e+000; m[2216] = 4.388081e-024; m[-2216] = 4.388081e-024; m[2218] = 2.194041e-024; m[-2218] = 2.194041e-024; m[2222] = 4.388081e-024; m[-2222] = 4.388081e-024; m[2226] = 1.880606e-024; m[-2226] = 1.880606e-024; m[2228] = 2.194041e-024; m[-2228] = 2.194041e-024; m[3101] = 0.000000e+000; m[-3101] = 0.000000e+000; m[3103] = 0.000000e+000; m[-3103] = 0.000000e+000; m[3114] = 1.670589e-023; m[-3114] = 1.670589e-023; m[3116] = 5.485102e-024; m[-3116] = 5.485102e-024; m[3118] = 3.656734e-024; m[-3118] = 3.656734e-024; m[3124] = 4.219309e-023; m[-3124] = 4.219309e-023; m[3126] = 8.227653e-024; m[-3126] = 8.227653e-024; m[3128] = 3.291061e-024; m[-3128] = 3.291061e-024; m[3201] = 0.000000e+000; m[-3201] = 0.000000e+000; m[3203] = 0.000000e+000; m[-3203] = 0.000000e+000; m[3214] = 1.828367e-023; m[-3214] = 1.828367e-023; m[3216] = 5.485102e-024; m[-3216] = 5.485102e-024; m[3218] = 3.656734e-024; m[-3218] = 3.656734e-024; m[3224] = 1.838582e-023; m[-3224] = 1.838582e-023; m[3226] = 5.485102e-024; m[-3226] = 5.485102e-024; m[3228] = 3.656734e-024; m[-3228] = 3.656734e-024; m[3303] = 0.000000e+000; m[-3303] = 0.000000e+000; m[3314] = 6.648608e-023; m[-3314] = 6.648608e-023; m[3324] = 7.233101e-023; m[-3324] = 7.233101e-023; m[4101] = 0.000000e+000; m[-4101] = 0.000000e+000; m[4103] = 0.000000e+000; m[-4103] = 0.000000e+000; m[4114] = 0.000000e+000; m[-4114] = 0.000000e+000; m[4201] = 0.000000e+000; m[-4201] = 0.000000e+000; m[4203] = 0.000000e+000; m[-4203] = 0.000000e+000; m[4214] = 3.291061e-022; m[-4214] = 3.291061e-022; m[4224] = 0.000000e+000; m[-4224] = 0.000000e+000; m[4301] = 0.000000e+000; m[-4301] = 0.000000e+000; m[4303] = 0.000000e+000; m[-4303] = 0.000000e+000; m[4314] = 0.000000e+000; m[-4314] = 0.000000e+000; m[4324] = 0.000000e+000; m[-4324] = 0.000000e+000; m[4334] = 0.000000e+000; m[-4334] = 0.000000e+000; m[4403] = 0.000000e+000; m[-4403] = 0.000000e+000; m[4412] = 3.335641e-013; m[-4412] = 3.335641e-013; m[4414] = 3.335641e-013; m[-4414] = 3.335641e-013; m[4422] = 3.335641e-013; m[-4422] = 3.335641e-013; m[4424] = 3.335641e-013; m[-4424] = 3.335641e-013; m[4432] = 
3.335641e-013; m[-4432] = 3.335641e-013; m[4434] = 3.335641e-013; m[-4434] = 3.335641e-013; m[4444] = 3.335641e-013; m[-4444] = 3.335641e-013; m[5101] = 0.000000e+000; m[-5101] = 0.000000e+000; m[5103] = 0.000000e+000; m[-5103] = 0.000000e+000; m[5114] = 0.000000e+000; m[-5114] = 0.000000e+000; m[5142] = 1.290893e-012; m[-5142] = 1.290893e-012; m[5201] = 0.000000e+000; m[-5201] = 0.000000e+000; m[5203] = 0.000000e+000; m[-5203] = 0.000000e+000; m[5214] = 0.000000e+000; m[-5214] = 0.000000e+000; m[5224] = 0.000000e+000; m[-5224] = 0.000000e+000; m[5242] = 1.290893e-012; m[-5242] = 1.290893e-012; m[5301] = 0.000000e+000; m[-5301] = 0.000000e+000; m[5303] = 0.000000e+000; m[-5303] = 0.000000e+000; m[5314] = 0.000000e+000; m[-5314] = 0.000000e+000; m[5324] = 0.000000e+000; m[-5324] = 0.000000e+000; m[5334] = 0.000000e+000; m[-5334] = 0.000000e+000; m[5342] = 1.290893e-012; m[-5342] = 1.290893e-012; m[5401] = 0.000000e+000; m[-5401] = 0.000000e+000; m[5403] = 0.000000e+000; m[-5403] = 0.000000e+000; m[5412] = 1.290893e-012; m[-5412] = 1.290893e-012; m[5414] = 1.290893e-012; m[-5414] = 1.290893e-012; m[5422] = 1.290893e-012; m[-5422] = 1.290893e-012; m[5424] = 1.290893e-012; m[-5424] = 1.290893e-012; m[5432] = 1.290893e-012; m[-5432] = 1.290893e-012; m[5434] = 1.290893e-012; m[-5434] = 1.290893e-012; m[5442] = 1.290893e-012; m[-5442] = 1.290893e-012; m[5444] = 1.290893e-012; m[-5444] = 1.290893e-012; m[5503] = 0.000000e+000; m[-5503] = 0.000000e+000; m[5512] = 1.290893e-012; m[-5512] = 1.290893e-012; m[5514] = 1.290893e-012; m[-5514] = 1.290893e-012; m[5522] = 1.290893e-012; m[-5522] = 1.290893e-012; m[5524] = 1.290893e-012; m[-5524] = 1.290893e-012; m[5532] = 1.290893e-012; m[-5532] = 1.290893e-012; m[5534] = 1.290893e-012; m[-5534] = 1.290893e-012; m[5542] = 1.290893e-012; m[-5542] = 1.290893e-012; m[5544] = 1.290893e-012; m[-5544] = 1.290893e-012; m[5554] = 1.290893e-012; m[-5554] = 1.290893e-012; m[10022] = 0.000000e+000; m[10111] = 2.483820e-024; m[10113] = 4.635297e-024; m[10115] = 2.541360e-024; m[10211] = 2.483820e-024; m[-10211] = 2.483820e-024; m[10213] = 4.635297e-024; m[-10213] = 4.635297e-024; m[10215] = 2.541360e-024; m[-10215] = 2.541360e-024; m[9010221] = 1.316424e-023; m[10223] = 1.828367e-024; m[10225] = 0.000000e+000; m[10315] = 3.538775e-024; m[-10315] = 3.538775e-024; m[10325] = 3.538775e-024; m[-10325] = 3.538775e-024; m[10331] = 5.265698e-024; m[10333] = 0.000000e+000; m[10335] = 0.000000e+000; m[10443] = 0.000000e+000; m[10541] = 0.000000e+000; m[-10541] = 0.000000e+000; m[10543] = 0.000000e+000; m[-10543] = 0.000000e+000; m[10551] = 1.000000e+016; m[10553] = 0.000000e+000; m[10555] = 0.000000e+000; m[11112] = 0.000000e+000; m[-11112] = 0.000000e+000; m[11114] = 2.194041e-024; m[-11114] = 2.194041e-024; m[11116] = 1.880606e-024; m[-11116] = 1.880606e-024; m[11212] = 1.880606e-024; m[-11212] = 1.880606e-024; m[11216] = 0.000000e+000; m[-11216] = 0.000000e+000; m[12112] = 1.880606e-024; m[-12112] = 1.880606e-024; m[12114] = 2.194041e-024; m[-12114] = 2.194041e-024; m[12116] = 5.063171e-024; m[-12116] = 5.063171e-024; m[12118] = 0.000000e+000; m[-12118] = 0.000000e+000; m[12122] = 0.000000e+000; m[-12122] = 0.000000e+000; m[12126] = 1.880606e-024; m[-12126] = 1.880606e-024; m[12212] = 1.880606e-024; m[-12212] = 1.880606e-024; m[12214] = 2.194041e-024; m[-12214] = 2.194041e-024; m[12216] = 5.063171e-024; m[-12216] = 5.063171e-024; m[12218] = 0.000000e+000; m[-12218] = 0.000000e+000; m[12222] = 0.000000e+000; m[-12222] = 0.000000e+000; m[12224] = 2.194041e-024; m[-12224] = 
2.194041e-024; m[12226] = 1.880606e-024; m[-12226] = 1.880606e-024; m[13112] = 6.582122e-024; m[-13112] = 6.582122e-024; m[13114] = 1.097020e-023; m[-13114] = 1.097020e-023; m[13116] = 5.485102e-024; m[-13116] = 5.485102e-024; m[13122] = 1.316424e-023; m[-13122] = 1.316424e-023; m[13124] = 1.097020e-023; m[-13124] = 1.097020e-023; m[13126] = 6.928549e-024; m[-13126] = 6.928549e-024; m[13212] = 6.582122e-024; m[-13212] = 6.582122e-024; m[13214] = 1.097020e-023; m[-13214] = 1.097020e-023; m[13216] = 5.485102e-024; m[-13216] = 5.485102e-024; m[13222] = 6.582122e-024; m[-13222] = 6.582122e-024; m[13224] = 1.097020e-023; m[-13224] = 1.097020e-023; m[13226] = 5.485102e-024; m[-13226] = 5.485102e-024; m[13314] = 2.742551e-023; m[-13314] = 2.742551e-023; m[13316] = 0.000000e+000; m[-13316] = 0.000000e+000; m[13324] = 2.742551e-023; m[-13324] = 2.742551e-023; m[13326] = 0.000000e+000; m[-13326] = 0.000000e+000; m[14122] = 1.828367e-022; m[-14122] = 1.828367e-022; m[14124] = 0.000000e+000; m[-14124] = 0.000000e+000; m[10221] = 2.194040e-024; m[20223] = 2.742551e-023; m[20315] = 2.384827e-024; m[-20315] = 2.384827e-024; m[20325] = 2.384827e-024; m[-20325] = 2.384827e-024; m[20333] = 1.185968e-023; m[20543] = 0.000000e+000; m[-20543] = 0.000000e+000; m[20553] = 1.000000e+016; m[20555] = 0.000000e+000; m[21112] = 2.632849e-024; m[-21112] = 2.632849e-024; m[21114] = 3.291061e-024; m[-21114] = 3.291061e-024; m[21212] = 2.632849e-024; m[-21212] = 2.632849e-024; m[21214] = 6.582122e-024; m[-21214] = 6.582122e-024; m[22112] = 4.388081e-024; m[-22112] = 4.388081e-024; m[22114] = 3.291061e-024; m[-22114] = 3.291061e-024; m[22122] = 2.632849e-024; m[-22122] = 2.632849e-024; m[22124] = 6.582122e-024; m[-22124] = 6.582122e-024; m[22212] = 4.388081e-024; m[-22212] = 4.388081e-024; m[22214] = 3.291061e-024; m[-22214] = 3.291061e-024; m[22222] = 2.632849e-024; m[-22222] = 2.632849e-024; m[22224] = 3.291061e-024; m[-22224] = 3.291061e-024; m[23112] = 7.313469e-024; m[-23112] = 7.313469e-024; m[23114] = 2.991874e-024; m[-23114] = 2.991874e-024; m[23122] = 4.388081e-024; m[-23122] = 4.388081e-024; m[23124] = 6.582122e-024; m[-23124] = 6.582122e-024; m[23126] = 3.291061e-024; m[-23126] = 3.291061e-024; m[23212] = 7.313469e-024; m[-23212] = 7.313469e-024; m[23214] = 2.991874e-024; m[-23214] = 2.991874e-024; m[23222] = 7.313469e-024; m[-23222] = 7.313469e-024; m[23224] = 2.991874e-024; m[-23224] = 2.991874e-024; m[23314] = 0.000000e+000; m[-23314] = 0.000000e+000; m[23324] = 0.000000e+000; m[-23324] = 0.000000e+000; m[30113] = 2.742551e-024; m[30213] = 2.742551e-024; m[-30213] = 2.742551e-024; m[30223] = 2.991874e-024; m[30313] = 2.056913e-024; m[-30313] = 2.056913e-024; m[30323] = 2.056913e-024; m[-30323] = 2.056913e-024; m[30343] = 0.000000e+000; m[-30343] = 0.000000e+000; m[30353] = 0.000000e+000; m[-30353] = 0.000000e+000; m[30363] = 0.000000e+000; m[-30363] = 0.000000e+000; m[30411] = 0.000000e+000; m[-30411] = 0.000000e+000; m[30413] = 0.000000e+000; m[-30413] = 0.000000e+000; m[30421] = 0.000000e+000; m[-30421] = 0.000000e+000; m[30423] = 0.000000e+000; m[-30423] = 0.000000e+000; m[30443] = 2.789035e-023; m[30553] = 0.000000e+000; m[31114] = 1.880606e-024; m[-31114] = 1.880606e-024; m[31214] = 4.388081e-024; m[-31214] = 4.388081e-024; m[32112] = 4.388081e-024; m[-32112] = 4.388081e-024; m[32114] = 1.880606e-024; m[-32114] = 1.880606e-024; m[32124] = 4.388081e-024; m[-32124] = 4.388081e-024; m[32212] = 4.388081e-024; m[-32212] = 4.388081e-024; m[32214] = 1.880606e-024; m[-32214] = 1.880606e-024; m[32224] = 
1.880606e-024; m[-32224] = 1.880606e-024; m[33122] = 1.880606e-023; m[-33122] = 1.880606e-023; m[33314] = 0.000000e+000; m[-33314] = 0.000000e+000; m[33324] = 0.000000e+000; m[-33324] = 0.000000e+000; m[41214] = 0.000000e+000; m[-41214] = 0.000000e+000; m[42112] = 6.582122e-024; m[-42112] = 6.582122e-024; m[42124] = 0.000000e+000; m[-42124] = 0.000000e+000; m[42212] = 6.582122e-024; m[-42212] = 6.582122e-024; m[43122] = 2.194041e-024; m[-43122] = 2.194041e-024; m[52114] = 0.000000e+000; m[-52114] = 0.000000e+000; m[52214] = 0.000000e+000; m[-52214] = 0.000000e+000; m[53122] = 4.388081e-024; m[-53122] = 4.388081e-024; m[100111] = 1.645531e-024; m[100113] = 2.123265e-024; m[100211] = 1.645531e-024; m[-100211] = 1.645531e-024; m[100213] = 2.123265e-024; m[-100213] = 2.123265e-024; m[100221] = 1.196749e-023; m[100223] = 3.871836e-024; m[100225] = 0.000000e+000; m[100311] = 0.000000e+000; m[-100311] = 0.000000e+000; m[100313] = 2.837122e-024; m[-100313] = 2.837122e-024; m[100315] = 0.000000e+000; m[-100315] = 0.000000e+000; m[100321] = 0.000000e+000; m[-100321] = 0.000000e+000; m[100323] = 2.837122e-024; m[-100323] = 2.837122e-024; m[100325] = 0.000000e+000; m[-100325] = 0.000000e+000; m[100331] = 0.000000e+000; m[100333] = 4.388081e-024; m[100335] = 3.291061e-024; m[100441] = 0.000000e+000; m[100551] = 0.000000e+000; m[100553] = 1.495937e-020; m[100555] = 1.000000e+016; m[100557] = 0.000000e+000; m[110551] = 1.000000e+016; m[110553] = 0.000000e+000; m[110555] = 0.000000e+000; m[120553] = 1.000000e+016; m[120555] = 0.000000e+000; m[130553] = 0.000000e+000; m[200111] = 3.134344e-024; m[200211] = 3.134344e-024; m[-200211] = 3.134344e-024; m[200551] = 0.000000e+000; m[200553] = 2.502708e-020; m[200555] = 0.000000e+000; m[210551] = 0.000000e+000; m[210553] = 0.000000e+000; m[220553] = 0.000000e+000; m[300553] = 4.701516e-023; m[9000221] = 0.000000e+000; m[9000443] = 1.265793e-023; m[9000553] = 5.983747e-024; m[9010443] = 8.438618e-024; m[9010553] = 8.331800e-024; m[9020221] = 6.038644e-024; m[9020443] = 1.530726e-023; m[9060225] = 4.388081e-024; m[9070225] = 2.056913e-024; m[1000001] = 0.000000e+000; m[-1000001] = 0.000000e+000; m[1000002] = 0.000000e+000; m[-1000002] = 0.000000e+000; m[1000003] = 0.000000e+000; m[-1000003] = 0.000000e+000; m[1000004] = 0.000000e+000; m[-1000004] = 0.000000e+000; m[1000005] = 0.000000e+000; m[-1000005] = 0.000000e+000; m[1000006] = 0.000000e+000; m[-1000006] = 0.000000e+000; m[1000011] = 0.000000e+000; m[-1000011] = 0.000000e+000; m[1000012] = 0.000000e+000; m[-1000012] = 0.000000e+000; m[1000013] = 0.000000e+000; m[-1000013] = 0.000000e+000; m[1000014] = 0.000000e+000; m[-1000014] = 0.000000e+000; m[1000015] = 0.000000e+000; m[-1000015] = 0.000000e+000; m[1000016] = 0.000000e+000; m[-1000016] = 0.000000e+000; m[1000021] = 0.000000e+000; m[1000022] = 0.000000e+000; m[1000023] = 0.000000e+000; m[1000024] = 0.000000e+000; m[-1000024] = 0.000000e+000; m[1000025] = 0.000000e+000; m[1000035] = 0.000000e+000; m[1000037] = 0.000000e+000; m[-1000037] = 0.000000e+000; m[1000039] = 0.000000e+000; m[2000001] = 0.000000e+000; m[-2000001] = 0.000000e+000; m[2000002] = 0.000000e+000; m[-2000002] = 0.000000e+000; m[2000003] = 0.000000e+000; m[-2000003] = 0.000000e+000; m[2000004] = 0.000000e+000; m[-2000004] = 0.000000e+000; m[2000005] = 0.000000e+000; m[-2000005] = 0.000000e+000; m[2000006] = 0.000000e+000; m[-2000006] = 0.000000e+000; m[2000011] = 0.000000e+000; m[-2000011] = 0.000000e+000; m[2000012] = 0.000000e+000; m[-2000012] = 0.000000e+000; m[2000013] = 0.000000e+000; 
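// --- Example lookups against this table (values read off the entries above; the reading of the
// sentinel values is an assumption of this note, not stated in the patch):
//
//   const double tauKp  = getLifetime(321);    // 1.238405e-8 s: charged kaon, far above the 10 ps cut
//   const double tauPi0 = getLifetime(111);    // 8.438618e-17 s: negligible on the 10 ps scale
//   const double tauP   = getLifetime(2212);   // 1.0e+16 s appears to be used as a "stable" sentinel
//   const double tauD   = getLifetime(1);      // 0.0 for partons and other non-propagating entries
//
// Any PID missing from the table makes getLifetime() return -1, which getAncestorSumLifetime()
// interprets as "assume non-prompt". ---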
m[-2000013] = 0.000000e+000; m[2000014] = 0.000000e+000; m[-2000014] = 0.000000e+000; m[2000015] = 0.000000e+000; m[-2000015] = 0.000000e+000; m[2000016] = 0.000000e+000; m[-2000016] = 0.000000e+000; m[3000111] = 0.000000e+000; m[3000113] = 0.000000e+000; m[3000211] = 0.000000e+000; m[-3000211] = 0.000000e+000; m[3000213] = 0.000000e+000; m[-3000213] = 0.000000e+000; m[3000221] = 0.000000e+000; m[3000223] = 0.000000e+000; m[3000331] = 0.000000e+000; m[3100021] = 0.000000e+000; m[3100111] = 0.000000e+000; m[3100113] = 0.000000e+000; m[3200111] = 0.000000e+000; m[3200113] = 0.000000e+000; m[3300113] = 0.000000e+000; m[3400113] = 0.000000e+000; m[4000001] = 0.000000e+000; m[-4000001] = 0.000000e+000; m[4000002] = 0.000000e+000; m[-4000002] = 0.000000e+000; m[4000011] = 0.000000e+000; m[-4000011] = 0.000000e+000; m[4000012] = 0.000000e+000; m[-4000012] = 0.000000e+000; m[5000039] = 0.000000e+000; m[9900012] = 0.000000e+000; m[9900014] = 0.000000e+000; m[9900016] = 0.000000e+000; m[9900023] = 0.000000e+000; m[9900024] = 0.000000e+000; m[-9900024] = 0.000000e+000; m[9900041] = 0.000000e+000; m[-9900041] = 0.000000e+000; m[9900042] = 0.000000e+000; m[-9900042] = 0.000000e+000; m[1027013000] = 0.000000e+000; m[1012006000] = 0.000000e+000; m[1063029000] = 0.000000e+000; m[1014007000] = 0.000000e+000; m[1016008000] = 0.000000e+000; m[1028014000] = 0.000000e+000; m[1065029000] = 0.000000e+000; m[1009004000] = 0.000000e+000; m[1019009000] = 0.000000e+000; m[1056026000] = 0.000000e+000; m[1207082000] = 0.000000e+000; m[1208082000] = 0.000000e+000; m[1029014000] = 0.000000e+000; m[1206082000] = 0.000000e+000; m[1054026000] = 0.000000e+000; m[1018008000] = 0.000000e+000; m[1030014000] = 0.000000e+000; m[1057026000] = 0.000000e+000; m[1204082000] = 0.000000e+000; m[-99000000] = 0.000000e+000; m[1028013000] = 0.000000e+000; m[1040018000] = 0.000000e+000; m[1011005000] = 0.000000e+000; m[1012005000] = 0.000000e+000; m[1013006000] = 0.000000e+000; m[1014006000] = 0.000000e+000; m[1052024000] = 0.000000e+000; m[1024012000] = 0.000000e+000; m[1026012000] = 0.000000e+000; m[1027012000] = 0.000000e+000; m[1015007000] = 0.000000e+000; m[1022010000] = 0.000000e+000; m[1058028000] = 0.000000e+000; m[1060028000] = 0.000000e+000; m[1062028000] = 0.000000e+000; m[1064028000] = 0.000000e+000; m[1007003000] = 0.000000e+000; m[1025012000] = 0.000000e+000; m[1053024000] = 0.000000e+000; m[1055025000] = 0.000000e+000; m[1008004000] = 0.000000e+000; m[1010004000] = 0.000000e+000; m[1010005000] = 0.000000e+000; m[1016007000] = 0.000000e+000; m[1017008000] = 0.000000e+000; m[1019008000] = 0.000000e+000; m[1023010000] = 0.000000e+000; m[1024011000] = 0.000000e+000; m[1031015000] = 0.000000e+000; m[1039017000] = 0.000000e+000; m[1040017000] = 0.000000e+000; m[1036018000] = 0.000000e+000; m[1050024000] = 0.000000e+000; m[1054024000] = 0.000000e+000; m[1059026000] = 0.000000e+000; m[1061028000] = 0.000000e+000; m[1063028000] = 0.000000e+000; m[1092042000] = 0.000000e+000; m[1095042000] = 0.000000e+000; m[1096042000] = 0.000000e+000; m[1097042000] = 0.000000e+000; m[1098042000] = 0.000000e+000; m[1100042000] = 0.000000e+000; m[1108046000] = 0.000000e+000; // Added by hand: m[9902210] = 0.000000e+000; //diffractive p-state -> assume no lifetime return true; } private: /// @name Histograms //@{ Histo1DPtr _h_mult_total; // full kinematic range Histo1DPtr _h_mult_eta[5]; // in eta bins Histo1DPtr _h_mult_pt[5]; // in pT bins Histo1DPtr _h_dndeta; // density dn/deta Histo1DPtr _h_dndpt; // density dn/dpT //@} /// @name Private 
variables double _p_min; double _pt_min; double _eta_min; double _eta_max; double _maxlft; /// Count selected events double _sumW; map _partLftMap; // Map }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(LHCB_2014_I1281685); } diff --git a/analyses/pluginMC/MC_IDENTIFIED.cc b/analyses/pluginMC/MC_IDENTIFIED.cc --- a/analyses/pluginMC/MC_IDENTIFIED.cc +++ b/analyses/pluginMC/MC_IDENTIFIED.cc @@ -1,104 +1,104 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/UnstableFinalState.hh" namespace Rivet { /// Generic analysis looking at various distributions of final state particles /// @todo Rename as MC_HADRONS class MC_IDENTIFIED : public Analysis { public: /// Constructor MC_IDENTIFIED() : Analysis("MC_IDENTIFIED") { } public: /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { // Projections const FinalState cnfs(Cuts::abseta < 5.0 && Cuts::pT > 500*MeV); declare(cnfs, "FS"); declare(UnstableFinalState(Cuts::abseta < 5.0 && Cuts::pT > 500*MeV), "UFS"); // Histograms // @todo Choose E/pT ranged based on input energies... can't do anything about kin. cuts, though _histStablePIDs = bookHisto1D("MultsStablePIDs", 3335, -0.5, 3334.5); _histDecayedPIDs = bookHisto1D("MultsDecayedPIDs", 3335, -0.5, 3334.5); _histAllPIDs = bookHisto1D("MultsAllPIDs", 3335, -0.5, 3334.5); _histEtaPi = bookHisto1D("EtaPi", 25, 0, 5); _histEtaK = bookHisto1D("EtaK", 25, 0, 5); _histEtaLambda = bookHisto1D("EtaLambda", 25, 0, 5); } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); // Unphysical (debug) plotting of all PIDs in the event, physical or otherwise - for(ConstGenParticlePtr gp: particles(event.genEvent())) { + for(ConstGenParticlePtr gp: HepMCUtils::particles(event.genEvent())) { _histAllPIDs->fill(abs(gp->pdg_id()), weight); } // Charged + neutral final state PIDs const FinalState& cnfs = apply(event, "FS"); foreach (const Particle& p, cnfs.particles()) { _histStablePIDs->fill(p.abspid(), weight); } // Unstable PIDs and identified particle eta spectra const UnstableFinalState& ufs = apply(event, "UFS"); foreach (const Particle& p, ufs.particles()) { _histDecayedPIDs->fill(p.pid(), weight); const double eta_abs = p.abseta(); const PdgId pid = p.abspid(); //if (PID::isMeson(pid) && PID::hasStrange()) { if (pid == 211 || pid == 111) _histEtaPi->fill(eta_abs, weight); else if (pid == 321 || pid == 130 || pid == 310) _histEtaK->fill(eta_abs, weight); else if (pid == 3122) _histEtaLambda->fill(eta_abs, weight); } } /// Finalize void finalize() { scale(_histStablePIDs, 1/sumOfWeights()); scale(_histDecayedPIDs, 1/sumOfWeights()); scale(_histAllPIDs, 1/sumOfWeights()); scale(_histEtaPi, 1/sumOfWeights()); scale(_histEtaK, 1/sumOfWeights()); scale(_histEtaLambda, 1/sumOfWeights()); } //@} private: /// @name Histograms //@{ Histo1DPtr _histStablePIDs, _histDecayedPIDs, _histAllPIDs; Histo1DPtr _histEtaPi, _histEtaK, _histEtaLambda; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(MC_IDENTIFIED); } diff --git a/analyses/pluginMC/MC_PDFS.cc b/analyses/pluginMC/MC_PDFS.cc --- a/analyses/pluginMC/MC_PDFS.cc +++ b/analyses/pluginMC/MC_PDFS.cc @@ -1,109 +1,109 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" // #include "Rivet/Projections/ChargedFinalState.hh" namespace Rivet { /// Generic analysis looking at various distributions of final state particles class MC_PDFS : public Analysis { public: /// Constructor MC_PDFS() : 
Analysis("MC_PDFS") { } public: /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { // Projections // declare(ChargedFinalState(-5.0, 5.0, 500*MeV), "CFS"); // Histograms _histPdfX = bookHisto1D("PdfX", logspace(50, 0.000001, 1.0)); _histPdfXmin = bookHisto1D("PdfXmin", logspace(50, 0.000001, 1.0)); _histPdfXmax = bookHisto1D("PdfXmax", logspace(50, 0.000001, 1.0)); _histPdfQ = bookHisto1D("PdfQ", 50, 0.0, 30.0); _histPdfXQ = bookHisto2D("PdfXQ", logspace(50, 0.000001, 1.0), linspace(50, 0.0, 30.0)); // _histPdfTrackptVsX = bookProfile1D("PdfTrackptVsX", logspace(50, 0.000001, 1.0)); // _histPdfTrackptVsQ = bookProfile1D("PdfTrackptVsQ", 50, 0.0, 30.0); } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); // This analysis needs a valid HepMC PDF info object to do anything if (event.genEvent()->pdf_info() == 0) vetoEvent; - HepMC::PdfInfo pdfi = *(event.genEvent()->pdf_info()); + PdfInfo pdfi = *(event.genEvent()->pdf_info()); -#if HEPMC_VERSION_CODE >= 3000000 +#ifdef ENABLE_HEPMC_3 MSG_DEBUG("PDF Q = " << pdfi.scale<< " for (id, x) = " << "(" << pdfi.pdf_id[0] << ", " << pdfi.x[0] << ") " << "(" << pdfi.pdf_id[1] << ", " << pdfi.x[1] << ")"); _histPdfX->fill(pdfi.x[0], weight); _histPdfX->fill(pdfi.x[1], weight); _histPdfXmin->fill(std::min(pdfi.x[0], pdfi.x[1]), weight); _histPdfXmax->fill(std::max(pdfi.x[0], pdfi.x[1]), weight); _histPdfQ->fill(pdfi.scale, weight); // always in GeV? _histPdfXQ->fill(pdfi.x[0], pdfi.scale, weight); // always in GeV? _histPdfXQ->fill(pdfi.x[1], pdfi.scale, weight); // always in GeV? #else MSG_DEBUG("PDF Q = " << pdfi.scalePDF() << " for (id, x) = " << "(" << pdfi.id1() << ", " << pdfi.x1() << ") " << "(" << pdfi.id2() << ", " << pdfi.x2() << ")"); _histPdfX->fill(pdfi.x1(), weight); _histPdfX->fill(pdfi.x2(), weight); _histPdfXmin->fill(std::min(pdfi.x1(), pdfi.x2()), weight); _histPdfXmax->fill(std::max(pdfi.x1(), pdfi.x2()), weight); _histPdfQ->fill(pdfi.scalePDF(), weight); // always in GeV? _histPdfXQ->fill(pdfi.x1(), pdfi.scalePDF(), weight); // always in GeV? _histPdfXQ->fill(pdfi.x2(), pdfi.scalePDF(), weight); // always in GeV? 
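// --- Sketch of the API split handled by the hunk above (kept as a comment so it can sit inside
// the HepMC2 branch; x1, x2 and Q are placeholder names): the same PDF information is read through
// two different PdfInfo interfaces depending on the build,
//
//   #ifdef ENABLE_HEPMC_3
//     const double x1 = pdfi.x[0], x2 = pdfi.x[1], Q = pdfi.scale;       // plus pdfi.pdf_id[0]/[1]
//   #else
//     const double x1 = pdfi.x1(), x2 = pdfi.x2(), Q = pdfi.scalePDF();  // plus pdfi.id1()/id2()
//   #endif
//
// so only the accessor syntax differs; the histogram fills (and the "always in GeV?" caveat) are
// identical in both branches. ---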
#endif // const FinalState& cfs = apply(event, "CFS"); // foreach (const Particle& p, cfs.particles()) { // if (fabs(eta) < 2.5 && p.pT() > 10*GeV) { // _histPdfTrackptVsX->fill(pdfi.x1(), p.pT()/GeV, weight); // _histPdfTrackptVsX->fill(pdfi.x2(), p.pT()/GeV, weight); // _histPdfTrackptVsQ->fill(pdfi.scalePDF(), p.pT()/GeV, weight); // } // } } /// Finalize void finalize() { scale(_histPdfX, 1/sumOfWeights()); scale(_histPdfXmin, 1/sumOfWeights()); scale(_histPdfXmax, 1/sumOfWeights()); scale(_histPdfQ, 1/sumOfWeights()); } //@} private: /// @name Histograms //@{ Histo1DPtr _histPdfX, _histPdfXmin, _histPdfXmax, _histPdfQ; Histo2DPtr _histPdfXQ; // Profile1DPtr _histPdfTrackptVsX, _histPdfTrackptVsQ; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(MC_PDFS); } diff --git a/analyses/pluginMC/MC_PRINTEVENT.cc b/analyses/pluginMC/MC_PRINTEVENT.cc --- a/analyses/pluginMC/MC_PRINTEVENT.cc +++ b/analyses/pluginMC/MC_PRINTEVENT.cc @@ -1,267 +1,267 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" namespace Rivet { /// @author Andy Buckley class MC_PRINTEVENT : public Analysis { public: /// Constructor MC_PRINTEVENT() : Analysis("MC_PRINTEVENT") { } /// @name Analysis methods //@{ void init() { /// Set up particle name map // quarks & gluons _pnames[1] = "d"; _pnames[-1] = "d~"; _pnames[2] = "u"; _pnames[-2] = "u~"; _pnames[3] = "s"; _pnames[-3] = "s~"; _pnames[4] = "c"; _pnames[-4] = "c~"; _pnames[5] = "b"; _pnames[-5] = "b~"; _pnames[6] = "t"; _pnames[-6] = "t~"; // bosons _pnames[21] = "g"; _pnames[22] = "gamma"; _pnames[23] = "Z0"; _pnames[24] = "W+"; _pnames[-24] = "W-"; _pnames[25] = "h0"; _pnames[26] = "h0"; // leptons _pnames[11] = "e-"; _pnames[-11] = "e+"; _pnames[13] = "mu-"; _pnames[-13] = "mu+"; _pnames[15] = "tau-"; _pnames[-15] = "tau+"; _pnames[12] = "nu_e"; _pnames[-12] = "nu_e~"; _pnames[14] = "nu_mu"; _pnames[-14] = "nu_mu~"; _pnames[16] = "nu_tau"; _pnames[-16] = "nu_tau~"; // common hadrons _pnames[111] = "pi0"; _pnames[211] = "pi+"; _pnames[-211] = "pi-"; _pnames[221] = "eta"; _pnames[331] = "eta'"; _pnames[113] = "rho0"; _pnames[213] = "rho+"; _pnames[-213] = "rho-"; _pnames[223] = "omega"; _pnames[333] = "phi"; _pnames[130] = "K0L"; _pnames[310] = "K0S"; _pnames[311] = "K0"; _pnames[-311] = "K0"; _pnames[321] = "K+"; _pnames[-321] = "K-"; _pnames[313] = "K*0"; _pnames[-313] = "K*0~"; _pnames[323] = "K*+"; _pnames[-323] = "K*-"; _pnames[411] = "D+"; _pnames[-411] = "D-"; _pnames[421] = "D0"; _pnames[-421] = "D0~"; _pnames[413] = "D*+"; _pnames[-413] = "D*-"; _pnames[423] = "D*0"; _pnames[-423] = "D*0~"; _pnames[431] = "Ds+"; _pnames[-431] = "Ds-"; _pnames[433] = "Ds*+"; _pnames[-433] = "Ds*-"; _pnames[511] = "B0"; _pnames[-511] = "B0~"; _pnames[521] = "B+"; _pnames[-521] = "B-"; _pnames[513] = "B*0"; _pnames[-513] = "B*0~"; _pnames[523] = "B*+"; _pnames[-523] = "B*-"; _pnames[531] = "B0s"; _pnames[541] = "Bc+"; _pnames[-541] = "Bc-"; _pnames[441] = "eta_c(1S)"; _pnames[443] = "J/psi(1S)"; _pnames[551] = "eta_b(1S)"; _pnames[553] = "Upsilon(1S)"; _pnames[2212] = "p+"; _pnames[-2212] = "p-"; _pnames[2112] = "n"; _pnames[-2112] = "n~"; _pnames[2224] = "Delta++"; _pnames[2214] = "Delta+"; _pnames[2114] = "Delta0"; _pnames[1114] = "Delta-"; _pnames[3122] = "Lambda"; _pnames[-3122] = "Lambda~"; _pnames[3222] = "Sigma+"; _pnames[-3222] = "Sigma+~"; _pnames[3212] = "Sigma0"; _pnames[-3212] = "Sigma0~"; _pnames[3112] = "Sigma-"; _pnames[-3112] = "Sigma-~"; _pnames[4122] = "Lambda_c+"; _pnames[-4122] = "Lambda_c-"; _pnames[5122] = 
"Lambda_b"; // exotic _pnames[32] = "Z'"; _pnames[34] = "W'+"; _pnames[-34] = "W'-"; _pnames[35] = "H0"; _pnames[36] = "A0"; _pnames[37] = "H+"; _pnames[-37] = "H-"; // shower-specific _pnames[91] = "cluster"; _pnames[92] = "string"; _pnames[9922212] = "remn"; _pnames[1103] = "dd"; _pnames[2101] = "ud0"; _pnames[2103] = "ud1"; _pnames[2203] = "uu"; } /// Perform the per-event analysis void analyze(const Event& event) { /// @todo Wouldn't this be nice... if HepMC::IO_AsciiParticles was sane :-/ // printEvent(event.genEvent()); - #if HEPMC_VERSION_CODE >= 3000000 + #ifdef ENABLE_HEPMC_3 /// @todo gonna try this instead of replicating everything below - HepMC::Print::content(*(event.genEvent())); + RivetHepMC::Print::content(*(event.genEvent())); #else const GenEvent* evt = event.genEvent(); cout << string(120, '=') << "\n" << endl; // Weights cout << "Weights(" << evt->weights().size() << ")="; /// @todo Re-enable // foreach (double w, evt->weights()) // cout << w << " "; cout << "\n" << "EventScale " << evt->event_scale() << " [energy] \t alphaQCD=" << evt->alphaQCD() << "\t alphaQED=" << evt->alphaQED() << endl; if (evt->pdf_info()) { cout << "PdfInfo: id1=" << evt->pdf_info()->id1() << " id2=" << evt->pdf_info()->id2() << " x1=" << evt->pdf_info()->x1() << " x2=" << evt->pdf_info()->x2() << " q=" << evt->pdf_info()->scalePDF() << " xpdf1=" << evt->pdf_info()->pdf1() << " xpdf2=" << evt->pdf_info()->pdf2() << endl; } else { cout << "PdfInfo: EMPTY"; } // Print a legend to describe the particle info char particle_legend[120]; sprintf( particle_legend," %9s %8s %-15s %4s %8s %8s (%9s,%9s,%9s,%9s,%9s)", "Barcode","PDG ID","Name","Stat","ProdVtx","DecayVtx","Px","Py","Pz","E ","m"); cout << endl; cout << " GenParticle Legend\n" << particle_legend << "\n"; // if (m_vertexinfo) { // sprintf( particle_legend," %60s (%9s,%9s,%9s,%9s)"," ","Vx","Vy","Vz","Vct "); // cout << particle_legend << endl; // } // cout << string(120, '_') << endl; // Print all particles // const HepPDT::ParticleDataTable* pdt = m_ppsvc->PDT(); for (HepMC::GenEvent::particle_const_iterator p = evt->particles_begin(); p != evt->particles_end(); ++p) { int p_bcode = (*p)->barcode(); int p_pdg_id = (*p)->pdg_id(); double p_px = (*p)->momentum().px(); double p_py = (*p)->momentum().py(); double p_pz = (*p)->momentum().pz(); double p_pe = (*p)->momentum().e(); int p_stat = (*p)->status(); int p_prodvtx = 0; if ((*p)->production_vertex() && (*p)->production_vertex()->barcode() != 0) { p_prodvtx = (*p)->production_vertex()->barcode(); } int p_endvtx = 0; if ((*p)->end_vertex() && (*p)->end_vertex()->barcode() != 0) { p_endvtx=(*p)->end_vertex()->barcode(); } // double v_x = 0; // double v_y = 0; // double v_z = 0; // double v_ct = 0; // if ((*p)->production_vertex()) { // v_x = (*p)->production_vertex()->position().x(); // v_y = (*p)->production_vertex()->position().y(); // v_z = (*p)->production_vertex()->position().z(); // v_ct = (*p)->production_vertex()->position().t(); // } // Mass (prefer generated mass if available) double p_mass = (*p)->generated_mass(); if (p_mass == 0 && !(p_stat == 1 && p_pdg_id == 22)) p_mass = (*p)->momentum().m(); // Particle names string sname = (_pnames.find(p_pdg_id) != _pnames.end()) ? 
_pnames[p_pdg_id] : ""; const char* p_name = sname.c_str() ; char particle_entries[120]; sprintf(particle_entries, " %9i %8i %-15s %4i %8i %8i (%+9.3g,%+9.3g,%+9.3g,%+9.3g,%9.3g)", p_bcode, p_pdg_id, p_name, p_stat, p_prodvtx, p_endvtx, p_px, p_py, p_pz, p_pe, p_mass); cout << particle_entries << "\n"; // if (m_vertexinfo) { // sprintf(particle_entries," %60s (%+9.3g,%+9.3g,%+9.3g,%+9.3g)"," ",v_x, v_y, v_z, v_ct); // cout << particle_entries << "\n"; // } } cout << "\n" << endl; #endif // VERSION_CODE >= 3000000 } /// Normalise histograms etc., after the run void finalize() { } //@} private: map _pnames; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(MC_PRINTEVENT); } diff --git a/analyses/pluginMC/MC_XS.cc b/analyses/pluginMC/MC_XS.cc --- a/analyses/pluginMC/MC_XS.cc +++ b/analyses/pluginMC/MC_XS.cc @@ -1,87 +1,90 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" + +#ifndef ENABLE_HEPMC_3 #include "HepMC/HepMCDefs.h" +#endif namespace Rivet { /// @brief Analysis for the generated cross section class MC_XS : public Analysis { public: /// @name Constructors etc. //@{ /// Constructor MC_XS() : Analysis("MC_XS") { } //@} public: /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { /// @todo Convert to Scatter1D or Counter _h_XS = bookScatter2D("XS"); _h_N = bookHisto1D("N", 1, 0.0, 1.0); _h_pmXS = bookHisto1D("pmXS", 2, -1.0, 1.0); _h_pmN = bookHisto1D("pmN", 2, -1.0, 1.0); _mc_xs = _mc_error = 0.; } /// Perform the per-event analysis void analyze(const Event& event) { _h_N->fill(0.5,1.); _h_pmXS->fill(0.5*(event.weight() > 0 ? 1. : -1), abs(event.weight())); _h_pmN ->fill(0.5*(event.weight() > 0 ? 1. : -1), 1.); - #ifdef HEPMC_HAS_CROSS_SECTION - #if HEPMC_VERSION_CODE >= 3000000 - _mc_xs = event.genEvent()->cross_section()->cross_section; - _mc_error = event.genEvent()->cross_section()->cross_section_error; - #else + #if defined ENABLE_HEPMC_3 + //@todo HepMC3::GenCrossSection methods aren't const accessible :( + RivetHepMC::GenCrossSection gcs = *(event.genEvent()->cross_section()); + _mc_xs = gcs.xsec(); + _mc_error = gcs.xsec_err(); + #elif defined HEPMC_HAS_CROSS_SECTION _mc_xs = event.genEvent()->cross_section()->cross_section(); _mc_error = event.genEvent()->cross_section()->cross_section_error(); #endif // VERSION_CODE >= 3000000 - #endif // HAS_CROSS_SECTION } /// Normalise histograms etc., after the run void finalize() { scale(_h_pmXS, crossSection()/sumOfWeights()); #ifndef HEPMC_HAS_CROSS_SECTION _mc_xs = crossSection(); _mc_error = 0.0; #endif _h_XS->addPoint(0, _mc_xs, 0.5, _mc_error); } //@} private: /// @name Histograms //@{ Scatter2DPtr _h_XS; Histo1DPtr _h_N; Histo1DPtr _h_pmXS; Histo1DPtr _h_pmN; double _mc_xs, _mc_error; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(MC_XS); } diff --git a/analyses/pluginMisc/ARGUS_1993_S2653028.cc b/analyses/pluginMisc/ARGUS_1993_S2653028.cc --- a/analyses/pluginMisc/ARGUS_1993_S2653028.cc +++ b/analyses/pluginMisc/ARGUS_1993_S2653028.cc @@ -1,177 +1,177 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/UnstableFinalState.hh" namespace Rivet { /// @brief BELLE pi+/-, K+/- and proton/antiproton spectrum at Upsilon(4S) /// @author Peter Richardson class ARGUS_1993_S2653028 : public Analysis { public: ARGUS_1993_S2653028() : Analysis("ARGUS_1993_S2653028"), _weightSum(0.) 
{ } void analyze(const Event& e) { const double weight = e.weight(); // Find the upsilons Particles upsilons; // First in unstable final state const UnstableFinalState& ufs = apply(e, "UFS"); for(const Particle& p: ufs.particles()) { if (p.pid() == 300553) upsilons.push_back(p); } // Then in whole event if that failed if (upsilons.empty()) { - for(ConstGenParticlePtr p: particles(e.genEvent())) { + for(ConstGenParticlePtr p: HepMCUtils::particles(e.genEvent())) { if (p->pdg_id() != 300553) continue; ConstGenVertexPtr pv = p->production_vertex(); bool passed = true; if (pv) { - for(ConstGenParticlePtr pp: pv->particles_in()) { + for(ConstGenParticlePtr pp: HepMCUtils::particles(pv, Relatives::PARENTS)){ if ( p->pdg_id() == pp->pdg_id() ) { passed = false; break; } } } if (passed) upsilons.push_back(Particle(*p)); } } // Find an upsilon foreach (const Particle& p, upsilons) { _weightSum += weight; vector pionsA,pionsB,protonsA,protonsB,kaons; // Find the decay products we want findDecayProducts(p.genParticle(), pionsA, pionsB, protonsA, protonsB, kaons); LorentzTransform cms_boost; if (p.p3().mod() > 1*MeV) cms_boost = LorentzTransform::mkFrameTransformFromBeta(p.momentum().betaVec()); for (size_t ix = 0; ix < pionsA.size(); ++ix) { FourMomentum ptemp(pionsA[ix]->momentum()); FourMomentum p2 = cms_boost.transform(ptemp); double pcm = cms_boost.transform(ptemp).vector3().mod(); _histPiA->fill(pcm,weight); } _multPiA->fill(10.58,double(pionsA.size())*weight); for (size_t ix = 0; ix < pionsB.size(); ++ix) { double pcm = cms_boost.transform(FourMomentum(pionsB[ix]->momentum())).vector3().mod(); _histPiB->fill(pcm,weight); } _multPiB->fill(10.58,double(pionsB.size())*weight); for (size_t ix = 0; ix < protonsA.size(); ++ix) { double pcm = cms_boost.transform(FourMomentum(protonsA[ix]->momentum())).vector3().mod(); _histpA->fill(pcm,weight); } _multpA->fill(10.58,double(protonsA.size())*weight); for (size_t ix = 0; ix < protonsB.size(); ++ix) { double pcm = cms_boost.transform(FourMomentum(protonsB[ix]->momentum())).vector3().mod(); _histpB->fill(pcm,weight); } _multpB->fill(10.58,double(protonsB.size())*weight); for (size_t ix = 0 ;ix < kaons.size(); ++ix) { double pcm = cms_boost.transform(FourMomentum(kaons[ix]->momentum())).vector3().mod(); _histKA->fill(pcm,weight); _histKB->fill(pcm,weight); } _multK->fill(10.58,double(kaons.size())*weight); } } void finalize() { if (_weightSum > 0.) 
{ scale(_histPiA, 1./_weightSum); scale(_histPiB, 1./_weightSum); scale(_histKA , 1./_weightSum); scale(_histKB , 1./_weightSum); scale(_histpA , 1./_weightSum); scale(_histpB , 1./_weightSum); scale(_multPiA, 1./_weightSum); scale(_multPiB, 1./_weightSum); scale(_multK , 1./_weightSum); scale(_multpA , 1./_weightSum); scale(_multpB , 1./_weightSum); } } void init() { declare(UnstableFinalState(), "UFS"); // spectra _histPiA = bookHisto1D(1, 1, 1); _histPiB = bookHisto1D(2, 1, 1); _histKA = bookHisto1D(3, 1, 1); _histKB = bookHisto1D(6, 1, 1); _histpA = bookHisto1D(4, 1, 1); _histpB = bookHisto1D(5, 1, 1); // multiplicities _multPiA = bookHisto1D( 7, 1, 1); _multPiB = bookHisto1D( 8, 1, 1); _multK = bookHisto1D( 9, 1, 1); _multpA = bookHisto1D(10, 1, 1); _multpB = bookHisto1D(11, 1, 1); } // init private: //@{ /// Count of weights double _weightSum; /// Spectra Histo1DPtr _histPiA, _histPiB, _histKA, _histKB, _histpA, _histpB; /// Multiplicities Histo1DPtr _multPiA, _multPiB, _multK, _multpA, _multpB; //@} void findDecayProducts(ConstGenParticlePtr p, vector& pionsA, vector& pionsB, vector& protonsA, vector& protonsB, vector& kaons) { int parentId = p->pdg_id(); ConstGenVertexPtr dv = p->end_vertex(); /// @todo Use better looping - for(ConstGenParticlePtr pp: dv->particles_out()){ + for(ConstGenParticlePtr pp: HepMCUtils::particles(dv, Relatives::CHILDREN)){ int id = abs(pp->pdg_id()); if (id == PID::PIPLUS) { if (parentId != PID::LAMBDA && parentId != PID::K0S) { pionsA.push_back(pp); pionsB.push_back(pp); } else pionsB.push_back(pp); } else if (id == PID::PROTON) { if (parentId != PID::LAMBDA && parentId != PID::K0S) { protonsA.push_back(pp); protonsB.push_back(pp); } else protonsB.push_back(pp); } else if (id == PID::KPLUS) { kaons.push_back(pp); } else if (pp->end_vertex()) findDecayProducts(pp, pionsA, pionsB, protonsA, protonsB, kaons); } } }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ARGUS_1993_S2653028); } diff --git a/analyses/pluginMisc/ARGUS_1993_S2669951.cc b/analyses/pluginMisc/ARGUS_1993_S2669951.cc --- a/analyses/pluginMisc/ARGUS_1993_S2669951.cc +++ b/analyses/pluginMisc/ARGUS_1993_S2669951.cc @@ -1,191 +1,191 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/UnstableFinalState.hh" namespace Rivet { /// @brief Production of the $\eta'(958)$ and $f_0(980)$ in $e^+e^-$ annihilation in the Upsilon region /// @author Peter Richardson class ARGUS_1993_S2669951 : public Analysis { public: ARGUS_1993_S2669951() : Analysis("ARGUS_1993_S2669951"), _count_etaPrime_highZ(2, 0.), _count_etaPrime_allZ(3, 0.), _count_f0(3, 0.), _weightSum_cont(0.), _weightSum_Ups1(0.), _weightSum_Ups2(0.) { } void init() { declare(UnstableFinalState(), "UFS"); _hist_cont_f0 = bookHisto1D(2, 1, 1); _hist_Ups1_f0 = bookHisto1D(3, 1, 1); _hist_Ups2_f0 = bookHisto1D(4, 1, 1); } void analyze(const Event& e) { // Find the Upsilons among the unstables const UnstableFinalState& ufs = apply(e, "UFS"); Particles upsilons; // First in unstable final state foreach (const Particle& p, ufs.particles()) if (p.pid() == 553 || p.pid() == 100553) upsilons.push_back(p); // Then in whole event if fails if (upsilons.empty()) { /// @todo Replace HepMC digging with Particle::descendents etc. 
calls - for(ConstGenParticlePtr p: Rivet::particles(e.genEvent())) { + for(ConstGenParticlePtr p: HepMCUtils::particles(e.genEvent())) { if ( p->pdg_id() != 553 && p->pdg_id() != 100553 ) continue; // Discard it if its parent has the same PDG ID code (avoid duplicates) ConstGenVertexPtr pv = p->production_vertex(); bool passed = true; if (pv) { - for(ConstGenParticlePtr pp: pv->particles_in()) { + for(ConstGenParticlePtr pp: HepMCUtils::particles(pv, Relatives::PARENTS)){ if ( p->pdg_id() == pp->pdg_id() ) { passed = false; break; } } } if (passed) upsilons.push_back(Particle(*p)); } } // Finding done, now fill counters const double weight = e.weight(); if (upsilons.empty()) { // Continuum MSG_DEBUG("No Upsilons found => continuum event"); _weightSum_cont += weight; unsigned int nEtaA(0), nEtaB(0), nf0(0); foreach (const Particle& p, ufs.particles()) { const int id = p.abspid(); const double xp = 2.*p.E()/sqrtS(); const double beta = p.p3().mod() / p.E(); if (id == 9010221) { _hist_cont_f0->fill(xp, weight/beta); nf0 += 1; } else if (id == 331) { if (xp > 0.35) nEtaA += 1; nEtaB += 1; } } _count_f0[2] += nf0*weight; _count_etaPrime_highZ[1] += nEtaA*weight; _count_etaPrime_allZ[2] += nEtaB*weight; } else { // Upsilon(s) found MSG_DEBUG("Upsilons found => resonance event"); foreach (const Particle& ups, upsilons) { const int parentId = ups.pid(); ((parentId == 553) ? _weightSum_Ups1 : _weightSum_Ups2) += weight; Particles unstable; // Find the decay products we want findDecayProducts(ups.genParticle(), unstable); LorentzTransform cms_boost; if (ups.p3().mod() > 1*MeV) cms_boost = LorentzTransform::mkFrameTransformFromBeta(ups.momentum().betaVec()); const double mass = ups.mass(); unsigned int nEtaA(0), nEtaB(0), nf0(0); foreach(const Particle& p, unstable) { const int id = p.abspid(); const FourMomentum p2 = cms_boost.transform(p.momentum()); const double xp = 2.*p2.E()/mass; const double beta = p2.p3().mod()/p2.E(); if (id == 9010221) { //< ? ((parentId == 553) ? _hist_Ups1_f0 : _hist_Ups2_f0)->fill(xp, weight/beta); nf0 += 1; } else if (id == 331) { //< ? if (xp > 0.35) nEtaA += 1; nEtaB += 1; } } if (parentId == 553) { _count_f0[0] += nf0*weight; _count_etaPrime_highZ[0] += nEtaA*weight; _count_etaPrime_allZ[0] += nEtaB*weight; } else { _count_f0[1] += nf0*weight; _count_etaPrime_allZ[1] += nEtaB*weight; } } } } void finalize() { // High-Z eta' multiplicity Scatter2DPtr s111 = bookScatter2D(1, 1, 1, true); if (_weightSum_Ups1 > 0) // Point at 9.460 s111->point(0).setY(_count_etaPrime_highZ[0] / _weightSum_Ups1, 0); if (_weightSum_cont > 0) // Point at 9.905 s111->point(1).setY(_count_etaPrime_highZ[1] / _weightSum_cont, 0); // All-Z eta' multiplicity Scatter2DPtr s112 = bookScatter2D(1, 1, 2, true); if (_weightSum_Ups1 > 0) // Point at 9.460 s112->point(0).setY(_count_etaPrime_allZ[0] / _weightSum_Ups1, 0); if (_weightSum_cont > 0) // Point at 9.905 s112->point(1).setY(_count_etaPrime_allZ[2] / _weightSum_cont, 0); if (_weightSum_Ups2 > 0) // Point at 10.02 s112->point(2).setY(_count_etaPrime_allZ[1] / _weightSum_Ups2, 0); // f0 multiplicity Scatter2DPtr s511 = bookScatter2D(5, 1, 1, true); if (_weightSum_Ups1 > 0) // Point at 9.46 s511->point(0).setY(_count_f0[0] / _weightSum_Ups1, 0); if (_weightSum_Ups2 > 0) // Point at 10.02 s511->point(1).setY(_count_f0[1] / _weightSum_Ups2, 0); if (_weightSum_cont > 0) // Point at 10.45 s511->point(2).setY(_count_f0[2] / _weightSum_cont, 0); // Scale histos if (_weightSum_cont > 0.) 
scale(_hist_cont_f0, 1./_weightSum_cont); if (_weightSum_Ups1 > 0.) scale(_hist_Ups1_f0, 1./_weightSum_Ups1); if (_weightSum_Ups2 > 0.) scale(_hist_Ups2_f0, 1./_weightSum_Ups2); } private: /// @name Counters //@{ vector _count_etaPrime_highZ, _count_etaPrime_allZ, _count_f0; double _weightSum_cont,_weightSum_Ups1,_weightSum_Ups2; //@} /// Histos Histo1DPtr _hist_cont_f0, _hist_Ups1_f0, _hist_Ups2_f0; /// Recursively walk the HepMC tree to find decay products of @a p void findDecayProducts(ConstGenParticlePtr p, Particles& unstable) { ConstGenVertexPtr dv = p->end_vertex(); - for (ConstGenParticlePtr pp: dv->particles_out()){ + for (ConstGenParticlePtr pp: HepMCUtils::particles(dv, Relatives::CHILDREN)){ const int id = abs(pp->pdg_id()); if (id == 331 || id == 9010221) unstable.push_back(Particle(pp)); else if (pp->end_vertex()) findDecayProducts(pp, unstable); } } }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ARGUS_1993_S2669951); } diff --git a/analyses/pluginMisc/ARGUS_1993_S2789213.cc b/analyses/pluginMisc/ARGUS_1993_S2789213.cc --- a/analyses/pluginMisc/ARGUS_1993_S2789213.cc +++ b/analyses/pluginMisc/ARGUS_1993_S2789213.cc @@ -1,256 +1,256 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/UnstableFinalState.hh" namespace Rivet { /// @brief ARGUS vector meson production /// @author Peter Richardson class ARGUS_1993_S2789213 : public Analysis { public: ARGUS_1993_S2789213() : Analysis("ARGUS_1993_S2789213"), _weightSum_cont(0.),_weightSum_Ups1(0.),_weightSum_Ups4(0.) { } void init() { declare(UnstableFinalState(), "UFS"); _mult_cont_Omega = bookHisto1D( 1, 1, 1); _mult_cont_Rho0 = bookHisto1D( 1, 1, 2); _mult_cont_KStar0 = bookHisto1D( 1, 1, 3); _mult_cont_KStarPlus = bookHisto1D( 1, 1, 4); _mult_cont_Phi = bookHisto1D( 1, 1, 5); _mult_Ups1_Omega = bookHisto1D( 2, 1, 1); _mult_Ups1_Rho0 = bookHisto1D( 2, 1, 2); _mult_Ups1_KStar0 = bookHisto1D( 2, 1, 3); _mult_Ups1_KStarPlus = bookHisto1D( 2, 1, 4); _mult_Ups1_Phi = bookHisto1D( 2, 1, 5); _mult_Ups4_Omega = bookHisto1D( 3, 1, 1); _mult_Ups4_Rho0 = bookHisto1D( 3, 1, 2); _mult_Ups4_KStar0 = bookHisto1D( 3, 1, 3); _mult_Ups4_KStarPlus = bookHisto1D( 3, 1, 4); _mult_Ups4_Phi = bookHisto1D( 3, 1, 5); _hist_cont_KStarPlus = bookHisto1D( 4, 1, 1); _hist_Ups1_KStarPlus = bookHisto1D( 5, 1, 1); _hist_Ups4_KStarPlus = bookHisto1D( 6, 1, 1); _hist_cont_KStar0 = bookHisto1D( 7, 1, 1); _hist_Ups1_KStar0 = bookHisto1D( 8, 1, 1); _hist_Ups4_KStar0 = bookHisto1D( 9, 1, 1); _hist_cont_Rho0 = bookHisto1D(10, 1, 1); _hist_Ups1_Rho0 = bookHisto1D(11, 1, 1); _hist_Ups4_Rho0 = bookHisto1D(12, 1, 1); _hist_cont_Omega = bookHisto1D(13, 1, 1); _hist_Ups1_Omega = bookHisto1D(14, 1, 1); } void analyze(const Event& e) { const double weight = e.weight(); // Find the upsilons Particles upsilons; // First in unstable final state const UnstableFinalState& ufs = apply(e, "UFS"); foreach (const Particle& p, ufs.particles()) if (p.pid() == 300553 || p.pid() == 553) upsilons.push_back(p); // Then in whole event if that failed if (upsilons.empty()) { - for(ConstGenParticlePtr p: Rivet::particles(e.genEvent())) { + for(ConstGenParticlePtr p: HepMCUtils::particles(e.genEvent())) { if (p->pdg_id() != 300553 && p->pdg_id() != 553) continue; ConstGenVertexPtr pv = p->production_vertex(); bool passed = true; if (pv) { - for(ConstGenParticlePtr pp: pv->particles_in()) { + for(ConstGenParticlePtr pp: HepMCUtils::particles(pv, Relatives::PARENTS)){ if ( p->pdg_id() == pp->pdg_id() ) { passed = false; break; } } } if (passed) 
upsilons.push_back(Particle(*p)); } } if (upsilons.empty()) { // continuum _weightSum_cont += weight; unsigned int nOmega(0), nRho0(0), nKStar0(0), nKStarPlus(0), nPhi(0); foreach (const Particle& p, ufs.particles()) { int id = p.abspid(); double xp = 2.*p.E()/sqrtS(); double beta = p.p3().mod()/p.E(); if (id == 113) { _hist_cont_Rho0->fill(xp, weight/beta); ++nRho0; } else if (id == 313) { _hist_cont_KStar0->fill(xp, weight/beta); ++nKStar0; } else if (id == 223) { _hist_cont_Omega->fill(xp, weight/beta); ++nOmega; } else if (id == 323) { _hist_cont_KStarPlus->fill(xp,weight/beta); ++nKStarPlus; } else if (id == 333) { ++nPhi; } } /// @todo Replace with Counters and fill one-point Scatters at the end _mult_cont_Omega ->fill(10.45, weight*nOmega ); _mult_cont_Rho0 ->fill(10.45, weight*nRho0 ); _mult_cont_KStar0 ->fill(10.45, weight*nKStar0 ); _mult_cont_KStarPlus->fill(10.45, weight*nKStarPlus); _mult_cont_Phi ->fill(10.45, weight*nPhi ); } else { // found an upsilon foreach (const Particle& ups, upsilons) { const int parentId = ups.pid(); (parentId == 553 ? _weightSum_Ups1 : _weightSum_Ups4) += weight; Particles unstable; // Find the decay products we want findDecayProducts(ups.genParticle(),unstable); /// @todo Update to new LT mk* functions LorentzTransform cms_boost; if (ups.p3().mod() > 0.001) cms_boost = LorentzTransform::mkFrameTransformFromBeta(ups.momentum().betaVec()); double mass = ups.mass(); unsigned int nOmega(0),nRho0(0),nKStar0(0),nKStarPlus(0),nPhi(0); foreach(const Particle & p , unstable) { int id = p.abspid(); FourMomentum p2 = cms_boost.transform(p.momentum()); double xp = 2.*p2.E()/mass; double beta = p2.p3().mod()/p2.E(); if (id == 113) { if (parentId == 553) _hist_Ups1_Rho0->fill(xp,weight/beta); else _hist_Ups4_Rho0->fill(xp,weight/beta); ++nRho0; } else if (id == 313) { if (parentId == 553) _hist_Ups1_KStar0->fill(xp,weight/beta); else _hist_Ups4_KStar0->fill(xp,weight/beta); ++nKStar0; } else if (id == 223) { if (parentId == 553) _hist_Ups1_Omega->fill(xp,weight/beta); ++nOmega; } else if (id == 323) { if (parentId == 553) _hist_Ups1_KStarPlus->fill(xp,weight/beta); else _hist_Ups4_KStarPlus->fill(xp,weight/beta); ++nKStarPlus; } else if (id == 333) { ++nPhi; } } if (parentId == 553) { _mult_Ups1_Omega ->fill(9.46,weight*nOmega ); _mult_Ups1_Rho0 ->fill(9.46,weight*nRho0 ); _mult_Ups1_KStar0 ->fill(9.46,weight*nKStar0 ); _mult_Ups1_KStarPlus->fill(9.46,weight*nKStarPlus); _mult_Ups1_Phi ->fill(9.46,weight*nPhi ); } else { _mult_Ups4_Omega ->fill(10.58,weight*nOmega ); _mult_Ups4_Rho0 ->fill(10.58,weight*nRho0 ); _mult_Ups4_KStar0 ->fill(10.58,weight*nKStar0 ); _mult_Ups4_KStarPlus->fill(10.58,weight*nKStarPlus); _mult_Ups4_Phi ->fill(10.58,weight*nPhi ); } } } } void finalize() { if (_weightSum_cont > 0.) { /// @todo Replace with Counters and fill one-point Scatters at the end scale(_mult_cont_Omega , 1./_weightSum_cont); scale(_mult_cont_Rho0 , 1./_weightSum_cont); scale(_mult_cont_KStar0 , 1./_weightSum_cont); scale(_mult_cont_KStarPlus, 1./_weightSum_cont); scale(_mult_cont_Phi , 1./_weightSum_cont); scale(_hist_cont_KStarPlus, 1./_weightSum_cont); scale(_hist_cont_KStar0 , 1./_weightSum_cont); scale(_hist_cont_Rho0 , 1./_weightSum_cont); scale(_hist_cont_Omega , 1./_weightSum_cont); } if (_weightSum_Ups1 > 0.) 
{ /// @todo Replace with Counters and fill one-point Scatters at the end scale(_mult_Ups1_Omega , 1./_weightSum_Ups1); scale(_mult_Ups1_Rho0 , 1./_weightSum_Ups1); scale(_mult_Ups1_KStar0 , 1./_weightSum_Ups1); scale(_mult_Ups1_KStarPlus, 1./_weightSum_Ups1); scale(_mult_Ups1_Phi , 1./_weightSum_Ups1); scale(_hist_Ups1_KStarPlus, 1./_weightSum_Ups1); scale(_hist_Ups1_KStar0 , 1./_weightSum_Ups1); scale(_hist_Ups1_Rho0 , 1./_weightSum_Ups1); scale(_hist_Ups1_Omega , 1./_weightSum_Ups1); } if (_weightSum_Ups4 > 0.) { /// @todo Replace with Counters and fill one-point Scatters at the end scale(_mult_Ups4_Omega , 1./_weightSum_Ups4); scale(_mult_Ups4_Rho0 , 1./_weightSum_Ups4); scale(_mult_Ups4_KStar0 , 1./_weightSum_Ups4); scale(_mult_Ups4_KStarPlus, 1./_weightSum_Ups4); scale(_mult_Ups4_Phi , 1./_weightSum_Ups4); scale(_hist_Ups4_KStarPlus, 1./_weightSum_Ups4); scale(_hist_Ups4_KStar0 , 1./_weightSum_Ups4); scale(_hist_Ups4_Rho0 , 1./_weightSum_Ups4); } } private: //@{ Histo1DPtr _mult_cont_Omega, _mult_cont_Rho0, _mult_cont_KStar0, _mult_cont_KStarPlus, _mult_cont_Phi; Histo1DPtr _mult_Ups1_Omega, _mult_Ups1_Rho0, _mult_Ups1_KStar0, _mult_Ups1_KStarPlus, _mult_Ups1_Phi; Histo1DPtr _mult_Ups4_Omega, _mult_Ups4_Rho0, _mult_Ups4_KStar0, _mult_Ups4_KStarPlus, _mult_Ups4_Phi; Histo1DPtr _hist_cont_KStarPlus, _hist_Ups1_KStarPlus, _hist_Ups4_KStarPlus; Histo1DPtr _hist_cont_KStar0, _hist_Ups1_KStar0, _hist_Ups4_KStar0 ; Histo1DPtr _hist_cont_Rho0, _hist_Ups1_Rho0, _hist_Ups4_Rho0; Histo1DPtr _hist_cont_Omega, _hist_Ups1_Omega; double _weightSum_cont,_weightSum_Ups1,_weightSum_Ups4; //@} void findDecayProducts(ConstGenParticlePtr p, Particles& unstable) { ConstGenVertexPtr dv = p->end_vertex(); /// @todo Use better looping - for (ConstGenParticlePtr pp: dv->particles_out()){ + for (ConstGenParticlePtr pp: HepMCUtils::particles(dv, Relatives::CHILDREN)){ int id = abs(pp->pdg_id()); if (id == 113 || id == 313 || id == 323 || id == 333 || id == 223 ) { unstable.push_back(Particle(pp)); } else if (pp->end_vertex()) findDecayProducts(pp, unstable); } } }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ARGUS_1993_S2789213); } diff --git a/analyses/pluginMisc/BABAR_2003_I593379.cc b/analyses/pluginMisc/BABAR_2003_I593379.cc --- a/analyses/pluginMisc/BABAR_2003_I593379.cc +++ b/analyses/pluginMisc/BABAR_2003_I593379.cc @@ -1,186 +1,186 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/UnstableFinalState.hh" namespace Rivet { /// @brief Babar charmonium spectra /// @author Peter Richardson class BABAR_2003_I593379 : public Analysis { public: BABAR_2003_I593379() : Analysis("BABAR_2003_I593379"), _weightSum(0.) 
{ } void analyze(const Event& e) { const double weight = e.weight(); // Find the charmonia Particles upsilons; // First in unstable final state const UnstableFinalState& ufs = apply(e, "UFS"); foreach (const Particle& p, ufs.particles()) if (p.pid() == 300553) upsilons.push_back(p); // Then in whole event if fails if (upsilons.empty()) { - for(ConstGenParticlePtr p: Rivet::particles(e.genEvent())) { + for(ConstGenParticlePtr p: HepMCUtils::particles(e.genEvent())) { if (p->pdg_id() != 300553) continue; ConstGenVertexPtr pv = p->production_vertex(); bool passed = true; if (pv) { - for(ConstGenParticlePtr pp: pv->particles_in()) { + for(ConstGenParticlePtr pp: HepMCUtils::particles(pv, Relatives::PARENTS)){ if ( p->pdg_id() == pp->pdg_id() ) { passed = false; break; } } } if (passed) upsilons.push_back(Particle(p)); } } // Find upsilons foreach (const Particle& p, upsilons) { _weightSum += weight; // Find the charmonium resonances /// @todo Use Rivet::Particles vector allJpsi, primaryJpsi, Psiprime, all_chi_c1, all_chi_c2, primary_chi_c1, primary_chi_c2; findDecayProducts(p.genParticle(), allJpsi, primaryJpsi, Psiprime, all_chi_c1, all_chi_c2, primary_chi_c1, primary_chi_c2); const LorentzTransform cms_boost = LorentzTransform::mkFrameTransformFromBeta(p.mom().betaVec()); for (size_t i = 0; i < allJpsi.size(); i++) { const double pcm = cms_boost.transform(FourMomentum(allJpsi[i]->momentum())).p(); _hist_all_Jpsi->fill(pcm, weight); } _mult_JPsi->fill(10.58, weight*double(allJpsi.size())); for (size_t i = 0; i < primaryJpsi.size(); i++) { const double pcm = cms_boost.transform(FourMomentum(primaryJpsi[i]->momentum())).p(); _hist_primary_Jpsi->fill(pcm, weight); } _mult_JPsi_direct->fill(10.58, weight*double(primaryJpsi.size())); for (size_t i=0; imomentum())).p(); _hist_Psi_prime->fill(pcm, weight); } _mult_Psi2S->fill(10.58, weight*double(Psiprime.size())); for (size_t i = 0; i < all_chi_c1.size(); i++) { const double pcm = cms_boost.transform(FourMomentum(all_chi_c1[i]->momentum())).p(); _hist_chi_c1->fill(pcm, weight); } _mult_chi_c1->fill(10.58, weight*double(all_chi_c1.size())); _mult_chi_c1_direct->fill(10.58, weight*double(primary_chi_c1.size())); for (size_t i = 0; i < all_chi_c2.size(); i++) { const double pcm = cms_boost.transform(FourMomentum(all_chi_c2[i]->momentum())).p(); _hist_chi_c2->fill(pcm, weight); } _mult_chi_c2->fill(10.58, weight*double(all_chi_c2.size())); _mult_chi_c2_direct->fill(10.58, weight*double(primary_chi_c2.size())); } } // analyze void finalize() { scale(_hist_all_Jpsi , 0.5*0.1/_weightSum); scale(_hist_chi_c1 , 0.5*0.1/_weightSum); scale(_hist_chi_c2 , 0.5*0.1/_weightSum); scale(_hist_Psi_prime , 0.5*0.1/_weightSum); scale(_hist_primary_Jpsi , 0.5*0.1/_weightSum); scale(_mult_JPsi , 0.5*100./_weightSum); scale(_mult_JPsi_direct , 0.5*100./_weightSum); scale(_mult_chi_c1 , 0.5*100./_weightSum); scale(_mult_chi_c1_direct, 0.5*100./_weightSum); scale(_mult_chi_c2 , 0.5*100./_weightSum); scale(_mult_chi_c2_direct, 0.5*100./_weightSum); scale(_mult_Psi2S , 0.5*100./_weightSum); } // finalize void init() { declare(UnstableFinalState(), "UFS"); _mult_JPsi = bookHisto1D(1, 1, 1); _mult_JPsi_direct = bookHisto1D(1, 1, 2); _mult_chi_c1 = bookHisto1D(1, 1, 3); _mult_chi_c1_direct = bookHisto1D(1, 1, 4); _mult_chi_c2 = bookHisto1D(1, 1, 5); _mult_chi_c2_direct = bookHisto1D(1, 1, 6); _mult_Psi2S = bookHisto1D(1, 1, 7); _hist_all_Jpsi = bookHisto1D(6, 1, 1); _hist_chi_c1 = bookHisto1D(7, 1, 1); _hist_chi_c2 = bookHisto1D(7, 1, 2); _hist_Psi_prime = bookHisto1D(8, 1, 1); 
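
The fallback search over the whole GenEvent above only keeps an Upsilon(4S) whose production vertex has no parent with the same PDG code, i.e. the first physical occurrence rather than a generator-internal copy. Factored into a helper, and using only the HepMCUtils interface added in this patch, the check is roughly:

  // Sketch: true if `p` has a parent with the same PDG id, in which case it is
  // a later copy of the same physical particle and should be skipped.
  bool hasSamePidParent(ConstGenParticlePtr p) {
    ConstGenVertexPtr pv = p->production_vertex();
    if (!pv) return false;
    for (ConstGenParticlePtr pp : HepMCUtils::particles(pv, Relatives::PARENTS)) {
      if (pp->pdg_id() == p->pdg_id()) return true;
    }
    return false;
  }
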
_hist_primary_Jpsi = bookHisto1D(10, 1, 1); } // init private: //@{ // count of weights double _weightSum; /// Histograms Histo1DPtr _hist_all_Jpsi; Histo1DPtr _hist_chi_c1; Histo1DPtr _hist_chi_c2; Histo1DPtr _hist_Psi_prime; Histo1DPtr _hist_primary_Jpsi; Histo1DPtr _mult_JPsi; Histo1DPtr _mult_JPsi_direct; Histo1DPtr _mult_chi_c1; Histo1DPtr _mult_chi_c1_direct; Histo1DPtr _mult_chi_c2; Histo1DPtr _mult_chi_c2_direct; Histo1DPtr _mult_Psi2S; //@} void findDecayProducts(ConstGenParticlePtr p, vector& allJpsi, vector& primaryJpsi, vector& Psiprime, vector& all_chi_c1, vector& all_chi_c2, vector& primary_chi_c1, vector& primary_chi_c2) { ConstGenVertexPtr dv = p->end_vertex(); bool isOnium = false; /// @todo Use better looping - for (ConstGenParticlePtr pp: dv->particles_in()){ + for (ConstGenParticlePtr pp: HepMCUtils::particles(dv, Relatives::PARENTS)){ int id = pp->pdg_id(); id = id%1000; id -= id%10; id /= 10; if (id==44) isOnium = true; } /// @todo Use better looping - for (ConstGenParticlePtr pp: dv->particles_out()){ + for (ConstGenParticlePtr pp: HepMCUtils::particles(dv, Relatives::CHILDREN)){ int id = pp->pdg_id(); if (id==100443) { Psiprime.push_back(pp); } else if (id==20443) { all_chi_c1.push_back(pp); if (!isOnium) primary_chi_c1.push_back(pp); } else if (id==445) { all_chi_c2.push_back(pp); if (!isOnium) primary_chi_c2.push_back(pp); } else if (id==443) { allJpsi.push_back(pp); if (!isOnium) primaryJpsi.push_back(pp); } if (pp->end_vertex()) { findDecayProducts(pp, allJpsi, primaryJpsi, Psiprime, all_chi_c1, all_chi_c2, primary_chi_c1, primary_chi_c2); } } } }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(BABAR_2003_I593379); } diff --git a/analyses/pluginMisc/BABAR_2005_S6181155.cc b/analyses/pluginMisc/BABAR_2005_S6181155.cc --- a/analyses/pluginMisc/BABAR_2005_S6181155.cc +++ b/analyses/pluginMisc/BABAR_2005_S6181155.cc @@ -1,145 +1,145 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/UnstableFinalState.hh" namespace Rivet { /// @brief BABAR Xi_c baryons from fragmentation /// @author Peter Richardson class BABAR_2005_S6181155 : public Analysis { public: BABAR_2005_S6181155() : Analysis("BABAR_2005_S6181155") { } void init() { declare(Beam(), "Beams"); declare(UnstableFinalState(), "UFS"); _histOnResonanceA = bookHisto1D(1,1,1); _histOnResonanceB = bookHisto1D(2,1,1); _histOffResonance = bookHisto1D(2,1,2); _sigma = bookHisto1D(3,1,1); _histOnResonanceA_norm = bookHisto1D(4,1,1); _histOnResonanceB_norm = bookHisto1D(5,1,1); _histOffResonance_norm = bookHisto1D(5,1,2); } void analyze(const Event& e) { const double weight = e.weight(); // Loop through unstable FS particles and look for charmed mesons/baryons const UnstableFinalState& ufs = apply(e, "UFS"); const Beam beamproj = apply(e, "Beams"); const ParticlePair& beams = beamproj.beams(); const FourMomentum mom_tot = beams.first.momentum() + beams.second.momentum(); const LorentzTransform cms_boost = LorentzTransform::mkFrameTransformFromBeta(mom_tot.betaVec()); const double s = sqr(beamproj.sqrtS()); const bool onresonance = fuzzyEquals(beamproj.sqrtS()/GeV, 10.58, 2E-3); foreach (const Particle& p, ufs.particles()) { // 3-momentum in CMS frame const double mom = cms_boost.transform(p.momentum()).vector3().mod(); // Only looking at Xi_c^0 if (p.abspid() != 4132 ) continue; if (onresonance) { _histOnResonanceA_norm->fill(mom,weight); _histOnResonanceB_norm->fill(mom,weight); } else { _histOffResonance_norm->fill(mom,s/sqr(10.58)*weight); } 
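
The isOnium test in BABAR_2003_I593379::findDecayProducts() above leans on the PDG numbering scheme: for a meson the quark content sits in the tens and hundreds digits, so stripping everything above 999 and below 10 leaves the q qbar digits, and 44 tags a c cbar (charmonium) parent. As a standalone worked example:

  // Sketch of the digit arithmetic, e.g. for psi(2S) with PDG id 100443:
  bool isCharmonium(int pdgId) {
    int id = abs(pdgId) % 1000;  // 100443 -> 443 (drop the excitation digits)
    id -= id % 10;               //    443 -> 440 (drop the spin digit)
    id /= 10;                    //    440 -> 44  (quark digits: c cbar)
    return id == 44;
  }
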
MSG_DEBUG("mom = " << mom); // off-resonance cross section if (checkDecay(p.genParticle())) { if (onresonance) { _histOnResonanceA->fill(mom,weight); _histOnResonanceB->fill(mom,weight); } else { _histOffResonance->fill(mom,s/sqr(10.58)*weight); _sigma->fill(10.6,weight); } } } } void finalize() { scale(_histOnResonanceA, crossSection()/femtobarn/sumOfWeights()); scale(_histOnResonanceB, crossSection()/femtobarn/sumOfWeights()); scale(_histOffResonance, crossSection()/femtobarn/sumOfWeights()); scale(_sigma , crossSection()/femtobarn/sumOfWeights()); normalize(_histOnResonanceA_norm); normalize(_histOnResonanceB_norm); normalize(_histOffResonance_norm); } private: //@{ /// Histograms Histo1DPtr _histOnResonanceA; Histo1DPtr _histOnResonanceB; Histo1DPtr _histOffResonance; Histo1DPtr _sigma ; Histo1DPtr _histOnResonanceA_norm; Histo1DPtr _histOnResonanceB_norm; Histo1DPtr _histOffResonance_norm; //@} bool checkDecay(ConstGenParticlePtr p) { unsigned int nstable = 0, npip = 0, npim = 0; unsigned int nXim = 0, nXip = 0; findDecayProducts(p, nstable, npip, npim, nXip, nXim); int id = p->pdg_id(); // Xi_c if (id == 4132) { if (nstable == 2 && nXim == 1 && npip == 1) return true; } else if (id == -4132) { if (nstable == 2 && nXip == 1 && npim == 1) return true; } return false; } void findDecayProducts(ConstGenParticlePtr p, unsigned int& nstable, unsigned int& npip, unsigned int& npim, unsigned int& nXip, unsigned int& nXim) { ConstGenVertexPtr dv = p->end_vertex(); /// @todo Use better looping - for (ConstGenParticlePtr pp: dv->particles_out()){ + for (ConstGenParticlePtr pp: HepMCUtils::particles(dv, Relatives::CHILDREN)){ int id = pp->pdg_id(); if (id==3312) { ++nXim; ++nstable; } else if (id == -3312) { ++nXip; ++nstable; } else if(id == 111 || id == 221) { ++nstable; } else if (pp->end_vertex()) { findDecayProducts(pp, nstable, npip, npim, nXip, nXim); } else { if (id != 22) ++nstable; if (id == 211) ++npip; else if(id == -211) ++npim; } } } }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(BABAR_2005_S6181155); } diff --git a/analyses/pluginMisc/BABAR_2007_S7266081.cc b/analyses/pluginMisc/BABAR_2007_S7266081.cc --- a/analyses/pluginMisc/BABAR_2007_S7266081.cc +++ b/analyses/pluginMisc/BABAR_2007_S7266081.cc @@ -1,181 +1,181 @@ // -*- C++ -*- #include #include "Rivet/Analysis.hh" #include "Rivet/Projections/UnstableFinalState.hh" namespace Rivet { /// @brief BABAR tau lepton to three charged hadrons /// @author Peter Richardson class BABAR_2007_S7266081 : public Analysis { public: BABAR_2007_S7266081() : Analysis("BABAR_2007_S7266081"), _weight_total(0), _weight_pipipi(0), _weight_Kpipi(0), _weight_KpiK(0), _weight_KKK(0) { } void init() { declare(UnstableFinalState(), "UFS"); _hist_pipipi_pipipi = bookHisto1D( 1, 1, 1); _hist_pipipi_pipi = bookHisto1D( 2, 1, 1); _hist_Kpipi_Kpipi = bookHisto1D( 3, 1, 1); _hist_Kpipi_Kpi = bookHisto1D( 4, 1, 1); _hist_Kpipi_pipi = bookHisto1D( 5, 1, 1); _hist_KpiK_KpiK = bookHisto1D( 6, 1, 1); _hist_KpiK_KK = bookHisto1D( 7, 1, 1); _hist_KpiK_piK = bookHisto1D( 8, 1, 1); _hist_KKK_KKK = bookHisto1D( 9, 1, 1); _hist_KKK_KK = bookHisto1D(10, 1, 1); } void analyze(const Event& e) { double weight = e.weight(); // Find the taus Particles taus; foreach(const Particle& p, apply(e, "UFS").particles(Cuts::pid==PID::TAU)) { _weight_total += weight; Particles pip, pim, Kp, Km; unsigned int nstable = 0; // Get the boost to the rest frame LorentzTransform cms_boost; if (p.p3().mod() > 1*MeV) cms_boost = 
LorentzTransform::mkFrameTransformFromBeta(p.momentum().betaVec()); // Find the decay products we want findDecayProducts(p.genParticle(), nstable, pip, pim, Kp, Km); if (p.pid() < 0) { swap(pip, pim); swap(Kp, Km ); } if (nstable != 4) continue; // pipipi if (pim.size() == 2 && pip.size() == 1) { _weight_pipipi += weight; _hist_pipipi_pipipi-> fill((pip[0].momentum()+pim[0].momentum()+pim[1].momentum()).mass(), weight); _hist_pipipi_pipi-> fill((pip[0].momentum()+pim[0].momentum()).mass(), weight); _hist_pipipi_pipi-> fill((pip[0].momentum()+pim[1].momentum()).mass(), weight); } else if (pim.size() == 1 && pip.size() == 1 && Km.size() == 1) { _weight_Kpipi += weight; _hist_Kpipi_Kpipi-> fill((pim[0].momentum()+pip[0].momentum()+Km[0].momentum()).mass(), weight); _hist_Kpipi_Kpi-> fill((pip[0].momentum()+Km[0].momentum()).mass(), weight); _hist_Kpipi_pipi-> fill((pim[0].momentum()+pip[0].momentum()).mass(), weight); } else if (Kp.size() == 1 && Km.size() == 1 && pim.size() == 1) { _weight_KpiK += weight; _hist_KpiK_KpiK-> fill((Kp[0].momentum()+Km[0].momentum()+pim[0].momentum()).mass(), weight); _hist_KpiK_KK-> fill((Kp[0].momentum()+Km[0].momentum()).mass(), weight); _hist_KpiK_piK-> fill((Kp[0].momentum()+pim[0].momentum()).mass(), weight); } else if (Kp.size() == 1 && Km.size() == 2) { _weight_KKK += weight; _hist_KKK_KKK-> fill((Kp[0].momentum()+Km[0].momentum()+Km[1].momentum()).mass(), weight); _hist_KKK_KK-> fill((Kp[0].momentum()+Km[0].momentum()).mass(), weight); _hist_KKK_KK-> fill((Kp[0].momentum()+Km[1].momentum()).mass(), weight); } } } void finalize() { if (_weight_pipipi > 0.) { scale(_hist_pipipi_pipipi, 1.0/_weight_pipipi); scale(_hist_pipipi_pipi , 0.5/_weight_pipipi); } if (_weight_Kpipi > 0.) { scale(_hist_Kpipi_Kpipi , 1.0/_weight_Kpipi); scale(_hist_Kpipi_Kpi , 1.0/_weight_Kpipi); scale(_hist_Kpipi_pipi , 1.0/_weight_Kpipi); } if (_weight_KpiK > 0.) { scale(_hist_KpiK_KpiK , 1.0/_weight_KpiK); scale(_hist_KpiK_KK , 1.0/_weight_KpiK); scale(_hist_KpiK_piK , 1.0/_weight_KpiK); } if (_weight_KKK > 0.) 
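
The rest of finalize(), continuing just below, turns the per-mode weight sums into percentage branching fractions with a simple sqrt(w)-based statistical error, while mass distributions that are filled twice per decay (the two pi+pi- or K+K- pairings above) are scaled by 0.5 instead of 1.0. The branching-fraction arithmetic on its own, using the member names from this analysis, is just:

  // Sketch: percentage branching fraction and naive uncertainty for one mode.
  const double bf    = 100. *      _weight_Kpipi  / _weight_total;
  const double bferr = 100. * sqrt(_weight_Kpipi) / _weight_total;
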
{ scale(_hist_KKK_KKK , 1.0/_weight_KKK); scale(_hist_KKK_KK , 0.5/_weight_KKK); } /// @note Using autobooking for these scatters since their x values are not really obtainable from the MC data bookScatter2D(11, 1, 1, true)->point(0).setY(100*_weight_pipipi/_weight_total, 100*sqrt(_weight_pipipi)/_weight_total); bookScatter2D(12, 1, 1, true)->point(0).setY(100*_weight_Kpipi/_weight_total, 100*sqrt(_weight_Kpipi)/_weight_total); bookScatter2D(13, 1, 1, true)->point(0).setY(100*_weight_KpiK/_weight_total, 100*sqrt(_weight_KpiK)/_weight_total); bookScatter2D(14, 1, 1, true)->point(0).setY(100*_weight_KKK/_weight_total, 100*sqrt(_weight_KKK)/_weight_total); } private: //@{ // Histograms Histo1DPtr _hist_pipipi_pipipi, _hist_pipipi_pipi; Histo1DPtr _hist_Kpipi_Kpipi, _hist_Kpipi_Kpi, _hist_Kpipi_pipi; Histo1DPtr _hist_KpiK_KpiK, _hist_KpiK_KK, _hist_KpiK_piK; Histo1DPtr _hist_KKK_KKK, _hist_KKK_KK; // Weights counters double _weight_total, _weight_pipipi, _weight_Kpipi, _weight_KpiK, _weight_KKK; //@} void findDecayProducts(ConstGenParticlePtr p, unsigned int & nstable, Particles& pip, Particles& pim, Particles& Kp, Particles& Km) { ConstGenVertexPtr dv = p->end_vertex(); /// @todo Use better looping - for (ConstGenParticlePtr pp: dv->particles_out()){ + for (ConstGenParticlePtr pp: HepMCUtils::particles(dv, Relatives::CHILDREN)){ int id = pp->pdg_id(); if (id == PID::PI0 ) ++nstable; else if (id == PID::K0S) ++nstable; else if (id == PID::PIPLUS) { pip.push_back(Particle(*pp)); ++nstable; } else if (id == PID::PIMINUS) { pim.push_back(Particle(*pp)); ++nstable; } else if (id == PID::KPLUS) { Kp.push_back(Particle(*pp)); ++nstable; } else if (id == PID::KMINUS) { Km.push_back(Particle(*pp)); ++nstable; } else if (pp->end_vertex()) { findDecayProducts(pp, nstable, pip, pim, Kp, Km); } else ++nstable; } } }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(BABAR_2007_S7266081); } diff --git a/analyses/pluginMisc/BABAR_2013_I1238276.cc b/analyses/pluginMisc/BABAR_2013_I1238276.cc --- a/analyses/pluginMisc/BABAR_2013_I1238276.cc +++ b/analyses/pluginMisc/BABAR_2013_I1238276.cc @@ -1,118 +1,118 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/ChargedFinalState.hh" namespace Rivet { /// @brief BaBar pion, kaon and proton production in the continuum /// @author Peter Richardson class BABAR_2013_I1238276 : public Analysis { public: BABAR_2013_I1238276() : Analysis("BABAR_2013_I1238276") { } void init() { declare(Beam(), "Beams"); declare(ChargedFinalState(), "FS"); _histPion_no_dec = bookHisto1D(1,1,1); _histKaon_no_dec = bookHisto1D(1,1,2); _histProton_no_dec = bookHisto1D(1,1,3); _histPion_dec = bookHisto1D(2,1,1); _histKaon_dec = bookHisto1D(2,1,2); _histProton_dec = bookHisto1D(2,1,3); } void analyze(const Event& e) { const double weight = e.weight(); // Loop through charged FS particles and look for charmed mesons/baryons const ChargedFinalState& fs = apply(e, "FS"); const Beam beamproj = apply(e, "Beams"); const ParticlePair& beams = beamproj.beams(); const FourMomentum mom_tot = beams.first.momentum() + beams.second.momentum(); const LorentzTransform cms_boost = LorentzTransform::mkFrameTransformFromBeta(mom_tot.betaVec()); MSG_DEBUG("CMS Energy sqrt s = " << beamproj.sqrtS()); foreach (const Particle& p, fs.particles()) { // check if prompt or not ConstGenParticlePtr pmother = p.genParticle(); ConstGenVertexPtr ivertex = pmother->production_vertex(); bool prompt = true; while (ivertex) { - vector inparts = 
ivertex->particles_in(); + vector inparts = HepMCUtils::particles(ivertex, Relatives::PARENTS); int n_inparts = inparts.size(); if (n_inparts < 1) break; pmother = inparts[0]; // first mother particle int mother_pid = abs(pmother->pdg_id()); if (mother_pid==PID::K0S || mother_pid==PID::LAMBDA) { prompt = false; break; } else if (mother_pid<6) { break; } ivertex = pmother->production_vertex(); } // momentum in CMS frame const double mom = cms_boost.transform(p.momentum()).vector3().mod(); const int PdgId = p.abspid(); MSG_DEBUG("pdgID = " << PdgId << " Momentum = " << mom); switch (PdgId) { case PID::PIPLUS: if(prompt) _histPion_no_dec->fill(mom,weight); _histPion_dec ->fill(mom,weight); break; case PID::KPLUS: if(prompt) _histKaon_no_dec->fill(mom,weight); _histKaon_dec ->fill(mom,weight); break; case PID::PROTON: if(prompt) _histProton_no_dec->fill(mom,weight); _histProton_dec ->fill(mom,weight); default : break; } } } void finalize() { scale(_histPion_no_dec ,1./sumOfWeights()); scale(_histKaon_no_dec ,1./sumOfWeights()); scale(_histProton_no_dec,1./sumOfWeights()); scale(_histPion_dec ,1./sumOfWeights()); scale(_histKaon_dec ,1./sumOfWeights()); scale(_histProton_dec ,1./sumOfWeights()); } private: //@{ // Histograms for continuum data (sqrt(s) = 10.52 GeV) // no K_S and Lambda decays Histo1DPtr _histPion_no_dec; Histo1DPtr _histKaon_no_dec; Histo1DPtr _histProton_no_dec; // including decays Histo1DPtr _histPion_dec; Histo1DPtr _histKaon_dec; Histo1DPtr _histProton_dec; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(BABAR_2013_I1238276); } diff --git a/analyses/pluginMisc/BELLE_2001_S4598261.cc b/analyses/pluginMisc/BELLE_2001_S4598261.cc --- a/analyses/pluginMisc/BELLE_2001_S4598261.cc +++ b/analyses/pluginMisc/BELLE_2001_S4598261.cc @@ -1,104 +1,104 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/UnstableFinalState.hh" namespace Rivet { /// @brief BELLE pi0 spectrum at Upsilon(4S) /// @author Peter Richardson class BELLE_2001_S4598261 : public Analysis { public: BELLE_2001_S4598261() : Analysis("BELLE_2001_S4598261"), _weightSum(0.) 
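
The promptness test above (repeated verbatim in PLUTO_1980_I154270 further down) walks up the chain of production vertices until it finds either a K0S or Lambda parent, which marks the particle as non-prompt, or a quark, which ends the search. Factored into a helper, again using only the HepMCUtils/Relatives interface from this patch, it is roughly:

  // Sketch: a final-state particle is prompt unless it descends from a K0S or Lambda.
  bool isPromptByAncestry(ConstGenParticlePtr p) {
    ConstGenVertexPtr pv = p->production_vertex();
    while (pv) {
      const std::vector<ConstGenParticlePtr> parents = HepMCUtils::particles(pv, Relatives::PARENTS);
      if (parents.empty()) return true;
      const int mid = abs(parents.front()->pdg_id());  // follow the first listed mother, as the analyses do
      if (mid == PID::K0S || mid == PID::LAMBDA) return false;
      if (mid < 6) return true;                        // reached a quark: stop searching, still prompt
      pv = parents.front()->production_vertex();
    }
    return true;
  }
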
{ } void init() { declare(UnstableFinalState(), "UFS"); _histdSigDp = bookHisto1D(1, 1, 1); // spectrum _histMult = bookHisto1D(2, 1, 1); // multiplicity } void analyze(const Event& e) { const double weight = e.weight(); // Find the upsilons Particles upsilons; // First in unstable final state const UnstableFinalState& ufs = apply(e, "UFS"); foreach (const Particle& p, ufs.particles()) if (p.pid()==300553) upsilons.push_back(p); // Then in whole event if fails if (upsilons.empty()) { - for(ConstGenParticlePtr p: Rivet::particles(e.genEvent())) { + for(ConstGenParticlePtr p: HepMCUtils::particles(e.genEvent())) { if (p->pdg_id() != 300553) continue; ConstGenVertexPtr pv = p->production_vertex(); bool passed = true; if (pv) { - for (ConstGenParticlePtr pp: pv->particles_in()){ + for (ConstGenParticlePtr pp: HepMCUtils::particles(pv, Relatives::PARENTS)){ if ( p->pdg_id() == pp->pdg_id() ) { passed = false; break; } } } if (passed) upsilons.push_back(Particle(p)); } } // Find upsilons foreach (const Particle& p, upsilons) { _weightSum += weight; // Find the neutral pions from the decay vector pions; findDecayProducts(p.genParticle(), pions); const LorentzTransform cms_boost = LorentzTransform::mkFrameTransformFromBeta(p.momentum().betaVec()); for (size_t ix=0; ixmomentum())).p(); _histdSigDp->fill(pcm,weight); } _histMult->fill(0., pions.size()*weight); } } void finalize() { scale(_histdSigDp, 1./_weightSum); scale(_histMult , 1./_weightSum); } private: //@{ // count of weights double _weightSum; /// Histograms Histo1DPtr _histdSigDp; Histo1DPtr _histMult; //@} void findDecayProducts(ConstGenParticlePtr p, vector& pions) { ConstGenVertexPtr dv = p->end_vertex(); - for (ConstGenParticlePtr pp: dv->particles_out()){ + for (ConstGenParticlePtr pp: HepMCUtils::particles(dv, Relatives::CHILDREN)){ const int id = pp->pdg_id(); if (id == 111) { pions.push_back(pp); } else if (pp->end_vertex()) findDecayProducts(pp, pions); } } }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(BELLE_2001_S4598261); } diff --git a/analyses/pluginMisc/BELLE_2008_I786560.cc b/analyses/pluginMisc/BELLE_2008_I786560.cc --- a/analyses/pluginMisc/BELLE_2008_I786560.cc +++ b/analyses/pluginMisc/BELLE_2008_I786560.cc @@ -1,112 +1,112 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/UnstableFinalState.hh" namespace Rivet { /// @brief BELLE tau lepton to pi pi /// @author Peter Richardson class BELLE_2008_I786560 : public Analysis { public: BELLE_2008_I786560() : Analysis("BELLE_2008_I786560"), _weight_total(0), _weight_pipi(0) { } void init() { declare(UnstableFinalState(), "UFS"); _hist_pipi = bookHisto1D( 1, 1, 1); } void analyze(const Event& e) { // Find the taus Particles taus; const UnstableFinalState& ufs = apply(e, "UFS"); foreach (const Particle& p, ufs.particles()) { if (p.abspid() != PID::TAU) continue; _weight_total += 1.; Particles pip, pim, pi0; unsigned int nstable = 0; // get the boost to the rest frame LorentzTransform cms_boost; if (p.p3().mod() > 1*MeV) cms_boost = LorentzTransform::mkFrameTransformFromBeta(p.momentum().betaVec()); // find the decay products we want findDecayProducts(p.genParticle(), nstable, pip, pim, pi0); if (p.pid() < 0) { swap(pip, pim); } if (nstable != 3) continue; // pipi if (pim.size() == 1 && pi0.size() == 1) { _weight_pipi += 1.; _hist_pipi->fill((pi0[0].momentum()+pim[0].momentum()).mass2(),1.); } } } void finalize() { if (_weight_pipi > 0.) 
scale(_hist_pipi, 1./_weight_pipi); } private: //@{ // Histograms Histo1DPtr _hist_pipi; // Weights counters double _weight_total, _weight_pipi; //@} void findDecayProducts(ConstGenParticlePtr p, unsigned int & nstable, Particles& pip, Particles& pim, Particles& pi0) { ConstGenVertexPtr dv = p->end_vertex(); /// @todo Use better looping - for (ConstGenParticlePtr pp: dv->particles_out()){ + for (ConstGenParticlePtr pp: HepMCUtils::particles(dv, Relatives::CHILDREN)){ int id = pp->pdg_id(); if (id == PID::PI0 ) { pi0.push_back(Particle(*pp)); ++nstable; } else if (id == PID::K0S) ++nstable; else if (id == PID::PIPLUS) { pip.push_back(Particle(*pp)); ++nstable; } else if (id == PID::PIMINUS) { pim.push_back(Particle(*pp)); ++nstable; } else if (id == PID::KPLUS) { ++nstable; } else if (id == PID::KMINUS) { ++nstable; } else if (pp->end_vertex()) { findDecayProducts(pp, nstable, pip, pim, pi0); } else ++nstable; } } }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(BELLE_2008_I786560); } diff --git a/analyses/pluginPetra/PLUTO_1980_I154270.cc b/analyses/pluginPetra/PLUTO_1980_I154270.cc --- a/analyses/pluginPetra/PLUTO_1980_I154270.cc +++ b/analyses/pluginPetra/PLUTO_1980_I154270.cc @@ -1,94 +1,94 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" namespace Rivet { /// @brief Add a short analysis description here class PLUTO_1980_I154270 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(PLUTO_1980_I154270); /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { const ChargedFinalState cfs; declare(cfs, "CFS"); if(fuzzyEquals(sqrtS()/GeV,30.75)) { _hist=bookProfile1D(1, 2, 1); } else if (fuzzyEquals(sqrtS()/GeV,9.4 ) || fuzzyEquals(sqrtS()/GeV,12.0) || fuzzyEquals(sqrtS()/GeV,13.0) || fuzzyEquals(sqrtS()/GeV,17.0) || fuzzyEquals(sqrtS()/GeV,22.0) || fuzzyEquals(sqrtS()/GeV,27.6) || fuzzyEquals(sqrtS()/GeV,30.2) || fuzzyEquals(sqrtS()/GeV,30.7) || fuzzyEquals(sqrtS()/GeV,31.3)) { _hist=bookProfile1D(1, 1, 1); } else { MSG_WARNING("CoM energy of events sqrt(s) = " << sqrtS()/GeV << " doesn't match any available analysis energy ."); } } /// Perform the per-event analysis void analyze(const Event& event) { const FinalState& cfs = apply(event, "CFS"); MSG_DEBUG("Total charged multiplicity = " << cfs.size()); unsigned int nPart(0); for(const Particle& p: cfs.particles()) { // check if prompt or not ConstGenParticlePtr pmother = p.genParticle(); ConstGenVertexPtr ivertex = pmother->production_vertex(); bool prompt = true; while (ivertex) { - vector inparts = ivertex->particles_in(); + vector inparts = HepMCUtils::particles(ivertex, Relatives::PARENTS); int n_inparts = inparts.size(); if (n_inparts < 1) break; pmother = inparts[0]; // first mother particle int mother_pid = abs(pmother->pdg_id()); if (mother_pid==PID::K0S || mother_pid==PID::LAMBDA) { prompt = false; break; } else if (mother_pid<6) { break; } ivertex = pmother->production_vertex(); } if(prompt) ++nPart; } _hist->fill(sqrtS(),nPart,event.weight()); } /// Normalise histograms etc., after the run void finalize() { } //@} private: // Histogram Profile1DPtr _hist; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(PLUTO_1980_I154270); } diff --git a/bin/rivet-buildplugin.in b/bin/rivet-buildplugin.in --- a/bin/rivet-buildplugin.in +++ b/bin/rivet-buildplugin.in @@ -1,304 +1,306 @@ #!/usr/bin/env bash ## -*- sh -*- ## @configure_input@ ## Get program name 
PROG=$(basename $0) ## Default value for num_jobs is all available cores let num_jobs=$(getconf _NPROCESSORS_ONLN) function usage { cat <<-EOF $PROG: Compile a Rivet analysis plugin library from one or more sources Usage: $PROG [options] [libname] source1 [...] [compiler_flags] can be a path, provided the filename is of the form 'Rivet*.so' If is not specified, the default name is 'RivetAnalysis.so'. To make special build variations you can add appropriate compiler flags to the arguments and these will be passed directly to the compiler. For example, for a debug build of your plugin library, add '-g', and for a 32 bit build on a 64 bit system add '-m32'. Options: -h | --help: display this help message -j NUM number of parallel compile jobs [$num_jobs] -r | --with-root: add ROOT link options (requires root-config on system) -n | --cmd | --dry-run: just print the generated commands, do not execute -k | --keep: keep intermediate files -v | --verbose: write out debug info EOF } ################## ## Option handling # http://www.bahmanm.com/blogs/command-line-options-how-to-parse-in-bash-using-getopt ## Translate long options to short # https://stackoverflow.com/a/5255468 args=() for arg do case "$arg" in # translate our long options --help ) args+=("-h");; --with-root ) args+=("-r");; --cmd | --dry-run ) args+=("-n");; --keep ) args+=("-k");; --verbose ) args+=("-v");; # pass through anything else * ) args+=("$arg");; esac done ## Reset the translated args set -- "${args[@]}" ## Now we can process with getopt while getopts ":hj:rnkv" opt; do case $opt in h) usage; exit 0 ;; j) num_jobs=$OPTARG ;; r) with_root=yes ;; n) only_show=yes ;; k) keep_tmps=yes ;; v) debug=yes ;; \?) echo "Unknown option -$OPTARG" >&2; exit 1 ;; :) echo "Option -$OPTARG requires an argument" >&2; exit 1 ;; esac done ## Remove options shift $((OPTIND-1)) ## Check num_jobs is a number if [[ ! $num_jobs -eq $num_jobs ]]; then echo "Unknown argument to -j" >&2; exit 1 fi if [[ $num_jobs -lt 1 ]]; then echo "Number of jobs must be positive" >&2; exit 1 fi #echo "$with_root" #echo "$only_show" #echo "$@" ## Need some args left at this point if [[ $# -lt 1 ]]; then usage >&2 exit 1 fi ## Get and check the target library name libname="$1" match1=$(basename "$libname" | egrep '^.*\.so') match2=$(basename "$libname" | egrep '^Rivet.*\.so') if test -n "$match1"; then if test -z "$match2"; then echo "Library name '$libname' does not have the required 'Rivet*.so' name pattern" >&2 exit 1 fi ## If we're using the first arg as the library name, shift it off the positional list shift else if [[ -z $only_show ]]; then echo "Using default library name 'RivetAnalysis.so'" fi libname="RivetAnalysis.so" fi ## Again need some args left at this point if [[ $# -lt 1 ]]; then usage >&2 exit 1 fi ################## ## Now assemble the build flags ## These variables need to exist, may be used in later substitutions ## Note no use of $DESTDIR... 
we ignore it so that destdir can be used ## for temp installs later copied to / prefix="@prefix@" exec_prefix="@exec_prefix@" datarootdir="@datarootdir@" ## Work out shared library build flags by platform if [[ $(uname) == "Darwin" ]]; then ## Mac OS X shared_flags="-undefined dynamic_lookup -bundle" else ## Unix shared_flags="-shared -fPIC" fi ## Get Rivet system C++ compiler (fall back to $CXX and then g++ if needed) mycxx=g++ rivetcxx=$(which $(echo "@RIVETCXX@" | awk '{print $1}') 2> /dev/null) abscxx=$(which "$CXX" 2> /dev/null) if [[ -x "$rivetcxx" ]]; then mycxx="@CXX@" elif [[ -x "$abscxx" ]]; then mycxx="$CXX" fi ## Get Rivet system C++ compiler flags if [[ -n "@AM_CXXFLAGS@" ]]; then mycxxflags="@AM_CXXFLAGS@" fi if [[ -n "@RIVETCXXFLAGS@" ]]; then mycxxflags="$mycxxflags @RIVETCXXFLAGS@" fi ## Get Rivet system C preprocessor flags (duplicating that in rivet-config.in) if [[ -n "$RIVET_BUILDPLUGIN_BEFORE_INSTALL" ]]; then irivet="@top_srcdir@/include" else irivet="@includedir@" fi test -n "$irivet" && mycppflags="$mycppflags -I${irivet}" @ENABLE_HEPMC_3_TRUE@ihepmc_1="@HEPMC3INCPATH@ @CPPFLAGS@" -@ENABLE_HEPMC_3_FALSE@ihepmc_2="@HEPMC3INCPATH@" +@ENABLE_HEPMC_3_FALSE@ihepmc_2="@HEPMCINCPATH@" test -n "$ihepmc_1" && mycppflags="$mycppflags -I${ihepmc_1}" test -n "$ihepmc_2" && mycppflags="$mycppflags -I${ihepmc_2}" iyoda="@YODAINCPATH@" test -n "$iyoda" && mycppflags="$mycppflags -I${iyoda}" ifastjet="@FASTJETINCPATH@" test -n "$ifastjet" && mycppflags="$mycppflags -I${ifastjet}" # igsl="@GSLINCPATH@" # test -n "$igsl" && mycppflags="$mycppflags -I${igsl}" # iboost="@BOOST_CPPFLAGS@" # test -n "$iboost" && mycppflags="$mycppflags ${iboost}" ## Get Rivet system linker flags (duplicating that in rivet-config.in) myldflags="" if [[ -n "$RIVET_BUILDPLUGIN_BEFORE_INSTALL" ]]; then lrivet="@top_builddir@/src/.libs" else lrivet="@libdir@" fi test -n "$lrivet" && myldflags="$myldflags -L${lrivet}" -lhepmc="@HEPMCLIBPATH@" -test -n "$lhepmc" && myldflags="$myldflags -L${lhepmc}" +@ENABLE_HEPMC_3_TRUE@lhepmc_1="@HEPMC3LIBPATH@" +@ENABLE_HEPMC_3_FALSE@lhepmc_2="@HEPMCLIBPATH@" +test -n "$lhepmc_1" && myldflags="$myldflags -L${lhepmc_1}" +test -n "$lhepmc_2" && myldflags="$myldflags -L${lhepmc_2}" lyoda="@YODALIBPATH@" test -n "$lyoda" && myldflags="$myldflags -L${lyoda}" lfastjet="@FASTJETCONFIGLIBADD@" test -n "$lfastjet" && myldflags="$myldflags ${lfastjet}" ## Detect whether the linker accepts the --no-as-needed flag and prepend the linker flag with it if possible if (cd /tmp && echo -e 'int main() { return 0; }' > $$.cc; $mycxx -Wl,--no-as-needed $$.cc -o $$ 2> /dev/null); then myldflags="-Wl,--no-as-needed $myldflags" fi ## Get ROOT flags if needed if [[ -n $with_root ]]; then rootcxxflags=$(root-config --cflags 2> /dev/null) rootldflags=$(root-config --libs 2> /dev/null) fi ################## ## Assemble and run build machinery ## Split sources into buckets, one for each core let idx=1 sources="" for src in "$@" do if [[ -s "$src" ]]; then sources="$sources $src" buckets[$idx]="${buckets[$idx]} $src" let idx=(idx%$num_jobs)+1 else if [[ ${src:0:1} == "-" ]]; then ## Found a user option usercxxflags="$usercxxflags $src" else echo "Warning: $src not found" >&2 fi fi done ## May be less than num_jobs let num_buckets=${#buckets[@]} if [[ $num_buckets -lt 1 ]]; then echo "Error: no source files found" >&2 exit 2 fi ## Loop over buckets for idx in $(seq 1 $num_buckets); do # DO NOT SIMPLIFY, the OS X mktemp can't deal with suffixes directly tmpfile="$(mktemp tmp.XXXXXXXX)" mv "$tmpfile" 
"$tmpfile.cc" tmpfile="$tmpfile.cc" for i in $(echo ${buckets[$idx]}); do #< find the real way to do this! echo "#line 1 \"$i\"" >> "$tmpfile" cat "$i" >> "$tmpfile" done if [[ -s "$tmpfile" ]]; then srcnames[$idx]="$tmpfile" fi done objnames=("${srcnames[@]/.cc/.o}") if [[ -z "$debug" ]]; then silencer="@"; fi tmpmakefile=$(mktemp Makefile.tmp.XXXXXXXXXX) cat > "$tmpmakefile" < [ ...]" << endl; return 1; } foreach (const string& a, Rivet::AnalysisLoader::analysisNames()) cout << a << endl; Rivet::AnalysisHandler ah; for (int i = 2; i < argc; ++i) { ah.addAnalysis(argv[i]); } - std::shared_ptr reader = Rivet::RivetHepMC::deduce_reader(argv[1]); + std::ifstream istr(argv[1], std::ios::in); - while(!reader->failed()){ - - Rivet::RivetHepMC::GenEvent evt; - reader->read_event(evt); - ah.analyze(evt); + std::shared_ptr reader = Rivet::HepMCUtils::makeReader(istr); + + std::shared_ptr evt = make_shared(); + + while(reader && Rivet::HepMCUtils::readEvent(reader, evt)){ + ah.analyze(evt.get()); + evt.reset(new Rivet::RivetHepMC::GenEvent()); } - reader->close(); + istr.close(); ah.setCrossSection(1.0); ah.finalize(); ah.writeData("Rivet.yoda"); return 0; } diff --git a/include/Rivet/Run.hh b/include/Rivet/Run.hh --- a/include/Rivet/Run.hh +++ b/include/Rivet/Run.hh @@ -1,112 +1,111 @@ // -*- C++ -*- #ifndef RIVET_Run_HH #define RIVET_Run_HH #include "Rivet/Tools/RivetSTL.hh" #include "Rivet/Tools/RivetHepMC.hh" namespace Rivet { // Forward declaration class AnalysisHandler; /// @brief Interface to handle a run of events read from a HepMC stream or file. class Run { public: /// @name Standard constructors and destructors. */ //@{ /// The standard constructor. Run(AnalysisHandler& ah); /// The destructor ~Run(); //@} public: /// @name Set run properties //@{ /// Get the cross-section for this run. Run& setCrossSection(const double xs); /// Get the current cross-section from the analysis handler in pb. double crossSection() const; /// Declare whether to list available analyses Run& setListAnalyses(const bool dolist); //@} /// @name File processing stages //@{ /// Set up HepMC file readers (using the appropriate file weight for the first file) bool init(const std::string& evtfile, double weight=1.0); /// Open a HepMC GenEvent file (using the appropriate file weight for the first file) bool openFile(const std::string& evtfile, double weight=1.0); /// Read the next HepMC event bool readEvent(); /// Read the next HepMC event only to skip it //bool skipEvent(); /// Handle next event bool processEvent(); /// Close up HepMC I/O bool finalize(); //@} private: /// AnalysisHandler object AnalysisHandler& _ah; /// @name Run variables obtained from events or command line //@{ /// @brief An extra event weight scaling per event file. /// Useful for e.g. AlpGen n-parton event file combination. double _fileweight; /// Cross-section from command line. 
double _xs; //@} /// Flag to show list of analyses bool _listAnalyses; /// @name HepMC I/O members //@{ /// Current event std::shared_ptr _evt; /// Output stream for HepMC writer - /// @todo reinstate once works with HepMC3 streams - //std::shared_ptr _istr; + std::shared_ptr _istr; /// HepMC reader - std::shared_ptr _hepmcReader; + std::shared_ptr _hepmcReader; //@} }; } #endif diff --git a/include/Rivet/Tools/CentralityBinner.hh b/include/Rivet/Tools/CentralityBinner.hh --- a/include/Rivet/Tools/CentralityBinner.hh +++ b/include/Rivet/Tools/CentralityBinner.hh @@ -1,834 +1,834 @@ // -*- C++ -*- #ifndef RIVET_CENTRALITYBINNER_HH #define RIVET_CENTRALITYBINNER_HH #include #include "Rivet/Config/RivetCommon.hh" #include "Rivet/Tools/RivetYODA.hh" namespace Rivet { /** @brief Base class for projections giving the value of an observable sensitive to the centrality of a collision. @author Leif Lönnblad The centrality of a collision is not really an observable, but the concept is anyway often used in the heavy ion community as if it were just that. This base class can be used to provide a an estimator for the centrality by projecting down to a single number which then can be used by a CentralityBinner object to select a histogram to be filled with another observable depending on centrality percentile. The estimate() should be a non-negative number with large values indicating a higher overlap than small ones. A negative value indicates that the centrality estimate could not be calculated. In the best of all worlds the centrality estimator should be a proper hadron-level observable corrected for detector effects, however, this base class only returns the inverse of the impact_parameter member of the GenHeavyIon object in an GenEvent if present and zero otherwise. */ class CentralityEstimator : public Projection { public: /// Constructor. CentralityEstimator(): _estimate(-1.0) {} /// Clone on the heap. DEFAULT_RIVET_PROJ_CLONE(CentralityEstimator); protected: /// Perform the projection on the Event void project(const Event& e) { _estimate = -1.0; - HepMC::ConstGenHeavyIonPtr hi = e.genEvent()->heavy_ion(); + ConstGenHeavyIonPtr hi = e.genEvent()->heavy_ion(); if(hi){ -#if HEPMC_VERSION_CODE >= 3000000 +#ifdef ENABLE_HEPMC_3 _estimate = hi->impact_parameter > 0.0 ? 1.0/hi->impact_parameter: numeric_limits::max(); #else _estimate = hi->impact_parameter() > 0.0 ? 1.0/hi->impact_parameter(): numeric_limits::max(); #endif } } /// Compare projections int compare(const Projection& p) const { return mkNamedPCmp(p, "CentEst"); } public: /// The value of the centrality estimate. double estimate() const { return _estimate; } protected: /// The value of the centrality estimate. double _estimate; }; /// This is a traits class describing how to handle object handles by /// CentralityBinner. The default implementation basically describes /// what to do with Histo1DPtr. template struct CentralityBinTraits { /// Make a clone of the given object. static T clone(const T & t) { return T(t->newclone()); } /// Add the contents of @a o to @a t. static void add(T & t, const T & o) { *t += *o; } /// Scale the contents of a given object. static void scale(T & t, double f) { t->scaleW(f); } /// Normalize the AnalysisObject to the sum of weights in a /// centrality bin. static void normalize(T & t, double sumw) { if ( t->sumW() > 0.0 ) t->normalize(t->sumW()/sumw); } /// Return the path of an AnalysisObject. 
static string path(T t) { return t->path(); } }; /// The sole purpose of the MergeDistance class is to provide a /// "distance" for a potential merging of two neighboring bins in a /// CentralityBinner. struct MergeDistance { /// This function should return a generalized distance between two /// adjecent centrality bins to be merged. CentralityBinner will /// always try to merge bins with the smallest distance. @a cestLo /// and @cestHi are the lower and upper edges of resulting bin. @a /// weight is the resulting sum of event weights in the bin. @a /// centLo and @a centHi are the lower and upper prcentile limits /// where the two bins currently resides. The two last arguments are /// the total number of events in the two bins and the total number /// of previous mergers repectively. static double dist(double cestLo, double cestHi, double weight, double clo, double chi, double, double) { return (cestHi - cestLo)*weight/(cestHi*(chi - clo)); } }; /** * CentralityBinner contains a series of AnalysisObject of the same * quantity each in a different percentiles of another quantity. For * example, a CentralityBinner may e.g. contain histograms of the * cross section differential in \f$ p_T \f$ in different centrality * regions for heavy ion collisions based on forward energy flow. **/ template class CentralityBinner: public ProjectionApplier { public: /// Create a new empty CentralityBinner. @a maxbins is the maximum /// number of bins used by the binner. Default is 1000, which is /// typically enough. @a wlim is the mximum allowed error allowed /// for the centrality limits before a warning is emitted. CentralityBinner(int maxbins = 200, double wlim = 0.02) : _currentCEst(-1.0), _maxBins(maxbins), _warnlimit(wlim), _weightsum(0.0) { _percentiles.insert(0.0); _percentiles.insert(1.0); } /// Set the centrality projection to be used. Note that this /// projection must have already been declared to Rivet. void setProjection(const CentralityEstimator & p, string pname) { declare(p, pname); _estimator = pname; } /// Return the class name. virtual std::string name() const { return "Rivet::CentralityBinner"; } /// Add an AnalysisObject in the region between @a cmin and @a cmax to /// this set of CentralityBinners. The range represent /// percentiles and must be between 0 and 100. No overlaping bins /// are allowed. /// Note that (cmin=0, cmax=5), means the five percent MOST central /// events although the internal notation is reversed for /// convenience. /// Optionally supply corresponding limits @a cestmin and @a cestmax /// of the centrality extimator. void add(T t, double cmin, double cmax, double cestmin = -1.0, double cestmax = -1.0 ) { _percentiles.insert(max(1.0 - cmax/100.0, 0.0)); _percentiles.insert(min(1.0 - cmin/100.0, 1.0)); if ( _unfilled.empty() && _ready.empty() ) _devnull = CentralityBinTraits::clone(t); if ( cestmin < 0.0 ) _unfilled.push_back(Bin(t, 1.0 - cmax/100.0, 1.0 - cmin/100.0)); else _ready[t] = Bin(t, 1.0 - cmax/100.0, 1.0 - cmin/100.0, cestmin, cestmax); } /// Return one of the AnalysisObjects in the CentralityBinner for /// the given @a event. This version requires that a /// CentralityEstimator object has been assigned that can compute /// the value of the centrality estimator from the @a /// event. Optionally the @a weight of the event is given. This /// should be the weight that will be used to fill the /// AnalysisObject. If the centrality estimate is less than zero, /// the _devnull object will be returned. 
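
Putting the interface documented above together, a typical analysis-side use of CentralityBinner would be along the following lines. The histogram names, binning and the particle loop are hypothetical; only the CentralityBinner calls themselves come from this header:

  // Sketch: bin a pT spectrum in three centrality classes.
  CentralityBinner<Histo1DPtr, MergeDistance> _cbins;   // member of the analysis

  // in init():
  _cbins.setProjection(CentralityEstimator(), "CentEst");
  _cbins.add(bookHisto1D("pt_c0_5",   50, 0., 10.),  0.,  5.);   // 5% most central events
  _cbins.add(bookHisto1D("pt_c5_10",  50, 0., 10.),  5., 10.);
  _cbins.add(bookHisto1D("pt_c10_20", 50, 0., 10.), 10., 20.);

  // in analyze(const Event& event):
  Histo1DPtr h = _cbins.select(event, event.weight());
  for (const Particle& p : chargedParticles) h->fill(p.pT()/GeV, event.weight());

  // in finalize():
  _cbins.finalize();          // fixes the estimator edges and fills the booked histograms
  _cbins.normalizePerEvent(); // optional: normalise to the per-bin sum of event weights
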
T select(const Event & event, double weight = 1.0) { return select(applyProjection (event, _estimator).estimate(), weight); } /// Return one of the AnalysisObjecsts in the Setup the /// CentralityBinner depending on the value of the centrality /// estimator, @a cest. Optionally the @a weight of the event is /// given. This should be the weight that will be used to fill the /// AnalysisObject. If the centrality estimate is less than zero, /// the _devnull object will be returned. T select(double cest, double weight = 1.0); /// At the end of the run, calculate the percentiles and fill the /// AnalysisObjectss provided with the add() function. This is /// typically called from the finalize method in an Analysis, but /// can also be called earlier in which case the the select /// functions can be continued to run as before with the edges /// between the centrality regions now fixed. void finalize(); /// Normalize each AnalysisObjects to the sum of event weights in the /// corresponding centrality bin. void normalizePerEvent() { for ( auto & b : _ready ) b.second.normalizePerEvent(); } /// Return a map bin edges of the centrality extimator indexed by /// the corresponing percentile. map edges() const { map ret; for ( auto & b : _ready ) { ret[1.0 - b.second._centLo] = b.second._cestLo; ret[1.0 - b.second._centHi] = b.second._cestHi; } return ret; } /// Return the current AnalysisObject from the latest call to select(). const T & current() const { return _currenT; } /// Return the value of the centrality estimator set in the latest /// call to select(). double estimator() const { return _currentCEst; } vector allObjects() { vector ret; for ( auto & fb : _flexiBins ) ret.push_back(fb._t); if ( !ret.empty() ) return ret; for ( auto b : _ready ) ret.push_back(b.second._t); return ret; } private: /// A flexible bin struct to be used to store temporary AnalysisObjects. struct FlexiBin { /// Construct with an initial centrality estimate and an event /// weight. FlexiBin(T & t, double cest = 0.0, double weight = 0.0) : _t(t), _cestLo(cest), _cestHi(cest), _weightsum(weight), _n(1), _m(0) {} /// Construct a temporary FlexiBin for finding a bin in a set. FlexiBin(double cest) : _cestLo(cest), _cestHi(cest), _weightsum(0.0), _n(0), _m(0) {} /// Merge in the contents of another FlexiBin into this. void merge(const FlexiBin & fb) { _cestLo = min(_cestLo, fb._cestLo); _cestHi = max(_cestHi, fb._cestHi); _weightsum += fb._weightsum; CentralityBinTraits::add(_t, fb._t); _n += fb._n; _m += fb._m + 1; } /// Comparisons for containers. bool operator< (const FlexiBin & fb) const { return _cestLo < fb._cestLo; } /// Return true if the given centrality estimate is in the range /// of this bin. bool inRange(double cest) const { return cest == _cestLo || ( _cestLo < cest && cest < _cestHi ); } /// The associated AnalysisObject. T _t; /// Current lower and upper edge of the centrality estimator for /// the fills in the associated AnalysiObject. double _cestLo, _cestHi; /// The sum of weights for all events entering the associated /// AnalysisObject. mutable double _weightsum; /// The number of times this bin has been selected. mutable int _n; /// The number of times this bin has been merged. mutable int _m; }; struct Bin { /// Construct a completely empty bin. Bin() : _centLo(-1.0), _centHi(-1.0), _cestLo(-1.0), _cestHi(-1.0), _weightsum(0.0), _underflow(0.0), _overflow(0.0), _ambiguous(0), _ambweight(0.0) {} /// Constructor taking an AnalysisObject and centrality interval /// as argument. 
Optionally the interval in the estimator can be /// given, in which case this bin is considered to be /// "final". Bin(T t, double centLo, double centHi, double cestLo = -1.0, double cestHi = -1.0) : _t(t), _centLo(centLo), _centHi(centHi), _cestLo(cestLo), _cestHi(cestHi), _weightsum(0.0), _underflow(0.0), _overflow(0.0), _ambiguous(0.0), _ambweight(0.0) {} /// Return true if the given centrality estimate is in the range /// of this AnalysisObject. bool inRange(double cest) const { return _cestLo >= 0 && _cestLo <= cest && ( _cestHi < 0.0 || cest <= _cestHi ); } /// Normalise the AnalysisObject to the tital cross section. void normalizePerEvent() { CentralityBinTraits::normalize(_t, _weightsum); } /// The AnalysisObject. T _t; /// The range in centrality. double _centLo, _centHi; /// The corresponding range in the centrality estimator. double _cestLo, _cestHi; /// The sum of event weights for this bin; double _weightsum; /// The weight in a final AnalysisObject that contains events /// below the centrality limit. double _underflow; /// The weight in a final AnalysisObject that contain events above /// the centrality limit. double _overflow; /// Number of ambiguous events in this bin. double _ambiguous; /// Sum of abmiguous weights. double _ambweight; }; protected: /// Convenient typedefs. typedef set FlexiBinSet; /// Find a bin corresponding to a given value of the centrality /// estimator. typename FlexiBinSet::iterator _findBin(double cest) { if ( _flexiBins.empty() ) return _flexiBins.end(); auto it = _flexiBins.lower_bound(FlexiBin(cest)); if ( it->_cestLo == cest ) return it; if ( it != _flexiBins.begin() ) --it; if ( it->_cestLo < cest && cest < it->_cestHi ) return it; return _flexiBins.end(); } /// The name of the CentralityEstimator projection to be used. string _estimator; /// The current temporary AnalysisObject selected for the centrality /// estimator calculated from the event presented in setup(). T _currenT; /// The current value of the centrality estimator. double _currentCEst; /// The oversampling of centrality bins. For each requested /// centrality bin this number of dynamic bins will be used. int _maxBins; /// If the fraction of events in a bin that comes from adjecent /// centrality bins exceeds this, emit a warning. double _warnlimit; /// The unfilled AnalysisObjectss where the esimator edges has not yet /// been determined. vector _unfilled; /// The dynamic bins for ranges of centrality estimators. FlexiBinSet _flexiBins; /// The sum of all event weights so far. double _weightsum; /// The requested percentile limits. set _percentiles; /// The filled AnalysisObjects where the estimator edges has been determined. map _ready; /// A special AnalysisObject which will be filled if the centrality /// estimate is out of range (negative). T _devnull; public: /// Print out the _flexiBins to cerr. void debug(); void fulldebug(); }; /// Traits specialization for Profile histograms. template <> struct CentralityBinTraits { typedef Profile1DPtr T; /// Make a clone of the given object. static T clone(const T & t) { return Profile1DPtr(t->newclone()); } /// Add the contents of @a o to @a t. static void add(T & t, const T & o) { *t += *o; } /// Scale the contents of a given object. static void scale(T & t, double f) { t->scaleW(f); } static void normalize(T & t, double sumw) {} /// Return the path of an AnalysisObject. static string path(T t) { return t->path(); } }; /// Traits specialization for Profile histograms. 
template <> struct CentralityBinTraits { typedef Profile2DPtr T; /// Make a clone of the given object. static T clone(const T & t) { return Profile2DPtr(t->newclone()); } /// Add the contents of @a o to @a t. static void add(T & t, const T & o) { *t += *o; } /// Scale the contents of a given object. static void scale(T & t, double f) { t->scaleW(f); } static void normalize(T & t, double sumw) {} /// Return the name of an AnalysisObject. static string path(T t) { return t->path(); } }; template struct CentralityBinTraits< vector > { /// Make a clone of the given object. static vector clone(const vector & tv) { vector rtv; for ( auto t : tv ) rtv.push_back(CentralityBinTraits::clone(t)); return rtv; } /// Add the contents of @a o to @a t. static void add(vector & tv, const vector & ov) { for ( int i = 0, N = tv.size(); i < N; ++i ) CentralityBinTraits::add(tv[i], ov[i]); } /// Scale the contents of a given object. static void scale(vector & tv, double f) { for ( auto t : tv ) CentralityBinTraits::scale(t, f); } static void normalize(vector & tv, double sumw) { for ( auto t : tv ) CentralityBinTraits::normalize(t, sumw); } /// Return the path of an AnalysisObject. static string path(const vector & tv) { string ret = "(vector:"; for ( auto t : tv ) { ret += " "; ret += CentralityBinTraits::path(t); } ret += ")"; return ret; } }; template struct TupleCentralityBinTraitsHelper { typedef tuple Tuple; typedef typename tuple_element::type T; static void clone(Tuple & ret, const Tuple & tup) { get(ret) = CentralityBinTraits::clone(get(tup)); TupleCentralityBinTraitsHelper::clone(ret, tup); } static void add(Tuple & tup, const Tuple & otup) { CentralityBinTraits::add(get(tup),get(otup)); TupleCentralityBinTraitsHelper::add(tup, otup); } static void scale(Tuple & tup, double f) { CentralityBinTraits::scale(get(tup), f); TupleCentralityBinTraitsHelper::scale(tup, f); } static void normalize(Tuple & tup, double sumw) { CentralityBinTraits::normalize(get(tup), sumw); TupleCentralityBinTraitsHelper::normalize(tup, sumw); } static string path(const Tuple & tup) { return " " + CentralityBinTraits::path(get(tup)) + TupleCentralityBinTraitsHelper::path(tup); } }; template struct TupleCentralityBinTraitsHelper<0,Types...> { typedef tuple Tuple; static void clone(Tuple &, const Tuple &) {} static void add(Tuple & tup, const Tuple & otup) {} static void scale(Tuple & tup, double f) {} static void normalize(Tuple & tup, double sumw) {} static string path(const Tuple & tup) {return "";} }; template struct CentralityBinTraits< tuple > { typedef tuple Tuple; static const size_t N = tuple_size::value; /// Make a clone of the given object. static Tuple clone(const Tuple & tup) { Tuple ret; TupleCentralityBinTraitsHelper::clone(ret, tup); return ret; } /// Add the contents of @a o to @a t. static void add(Tuple & tup, const Tuple & otup) { TupleCentralityBinTraitsHelper::add(tup, otup); } /// Scale the contents of a given object. static void scale(Tuple & tup, double f) { TupleCentralityBinTraitsHelper::scale(tup, f); } static void normalize(Tuple & tup, double sumw) { TupleCentralityBinTraitsHelper::normalize(tup, sumw); } /// Return the path of an AnalysisObject. static string path(const Tuple & tup) { string ret = "(tuple:"; ret += TupleCentralityBinTraitsHelper::path(tup); ret += ")"; return ret; } }; template T CentralityBinner::select(double cest, double weight) { _currenT = _devnull; _currentCEst = cest; _weightsum += weight; // If estimator is negative, something has gone wrong. 
if ( _currentCEst < 0.0 ) return _currenT; // If we already have finalized the limits on the centrality // estimator, we just add the weights to their bins and return the // corresponding AnalysisObject. if ( _unfilled.empty() ) { for ( auto & b : _ready ) if ( b.second.inRange(_currentCEst) ) { b.second._weightsum += weight; return b.second._t; } return _currenT; } auto it = _findBin(cest); if ( it == _flexiBins.end() ) { _currenT = CentralityBinTraits::clone(_unfilled.begin()->_t); it = _flexiBins.insert(FlexiBin(_currenT, _currentCEst, weight)).first; } else { it->_weightsum += weight; ++(it->_n); _currenT = it->_t; } if ( (int)_flexiBins.size() <= _maxBins ) return _currenT; set::iterator citn = _percentiles.begin(); set::iterator cit0 = citn++; auto selectit = _flexiBins.end(); double mindist = -1.0; double acc = 0.0; auto next = _flexiBins.begin(); auto prev = next++; for ( ; next != _flexiBins.end(); prev = next++ ) { acc += prev->_weightsum/_weightsum; if ( acc > *citn ) { cit0 = citn++; continue; } if ( acc + next->_weightsum/_weightsum > *citn ) continue; double dist = MDist::dist(prev->_cestLo, next->_cestHi, next->_weightsum + prev->_weightsum, *cit0, *citn, next->_n + prev->_n, next->_m + prev->_m); if ( mindist < 0.0 || dist < mindist ) { selectit = prev; mindist = dist; } } if ( selectit == _flexiBins.end() ) return _currenT; auto mergeit = selectit++; FlexiBin merged = *mergeit; merged.merge(*selectit); if ( merged.inRange(cest) || selectit->inRange(cest) ) _currenT = merged._t; _flexiBins.erase(mergeit); _flexiBins.erase(selectit); _flexiBins.insert(merged); return _currenT; } template void CentralityBinner::finalize() { // Take the contents of the dynamical binning and fill the original // AnalysisObjects. double clo = 0.0; for ( const FlexiBin & fb : _flexiBins ) { double chi = min(clo + fb._weightsum/_weightsum, 1.0); for ( Bin & bin : _unfilled ) { double olo = bin._centLo; double ohi = bin._centHi; if ( clo > ohi || chi <= olo ) continue; // If we only have partial overlap we need to scale double lo = max(olo, clo); double hi = min(ohi, chi); T t = CentralityBinTraits::clone(fb._t); double frac = (hi - lo)/(chi - clo); CentralityBinTraits::scale(t, frac); CentralityBinTraits::add(bin._t, t); bin._weightsum += fb._weightsum*frac; if ( clo <= olo ) bin._cestLo = fb._cestLo + (fb._cestHi - fb._cestLo)*(olo - clo)/(chi - clo); if ( clo < olo ) { bin._underflow = clo; bin._ambiguous += fb._n*frac; bin._ambweight += fb._weightsum*frac*(1.0 - frac); } if ( chi > ohi ) { bin._cestHi = fb._cestLo + (fb._cestHi - fb._cestLo)*(ohi - clo)/(chi - clo); bin._overflow = chi; bin._ambiguous += fb._n*frac; bin._ambweight += fb._weightsum*frac*(1.0 - frac); } } clo = chi; } _flexiBins.clear(); for ( Bin & bin : _unfilled ) { if ( bin._overflow == 0.0 ) bin._overflow = 1.0; _ready[bin._t] = bin; if ( bin._ambweight/bin._weightsum >_warnlimit ) MSG_WARNING("Analysis object \"" << CentralityBinTraits::path(bin._t) << "\", contains events with centralities between " << bin._underflow*100.0 << " and " << bin._overflow*100.0 << "% (" << int(bin._ambiguous + 0.5) << " ambiguous events with effectively " << 100.0*bin._ambweight/bin._weightsum << "% of the weights)." 
<< "Consider increasing the number of bins."); } _unfilled.clear(); } template void CentralityBinner::fulldebug() { cerr << endl; double acc = 0.0; set::iterator citn = _percentiles.begin(); set::iterator cit0 = citn++; int i = 0; for ( auto it = _flexiBins.begin(); it != _flexiBins.end(); ) { ++i; auto curr = it++; double w = curr->_weightsum/_weightsum; acc += w; if ( curr == _flexiBins.begin() || it == _flexiBins.end() || acc > *citn ) cerr << "*"; else cerr << " "; if ( acc > *citn ) cit0 = citn++; cerr << setw(6) << i << setw(12) << acc - w << setw(12) << acc << setw(8) << curr->_n << setw(8) << curr->_m << setw(12) << curr->_cestLo << setw(12) << curr->_cestHi << endl; } cerr << "Number of sampler bins: " << _flexiBins.size() << endl; } template void CentralityBinner::debug() { cerr << endl; double acc = 0.0; int i = 0; set::iterator citn = _percentiles.begin(); set::iterator cit0 = citn++; for ( auto it = _flexiBins.begin(); it != _flexiBins.end(); ) { auto curr = it++; ++i; double w = curr->_weightsum/_weightsum; acc += w; if ( curr == _flexiBins.begin() || it == _flexiBins.end() || acc > *citn ) { if ( acc > *citn ) cit0 = citn++; cerr << setw(6) << i << setw(12) << acc - w << setw(12) << acc << setw(8) << curr->_n << setw(8) << curr->_m << setw(12) << curr->_cestLo << setw(12) << curr->_cestHi << endl; } } cerr << "Number of sampler bins: " << _flexiBins.size() << endl; } /// Example of CentralityEstimator projection that the generated /// centrality as given in the GenHeavyIon object in HepMC3. class GeneratedCentrality: public CentralityEstimator { public: /// Constructor. GeneratedCentrality() {} /// Clone on the heap. DEFAULT_RIVET_PROJ_CLONE(GeneratedCentrality); protected: /// Perform the projection on the Event void project(const Event& e) { _estimate = -1.0; -#if HEPMC_VERSION_CODE >= 3000000 - HepMC::ConstGenHeavyIonPtr hi = e.genEvent()->heavy_ion(); +#ifdef ENABLE_HEPMC_3 + RivetHepMC::ConstGenHeavyIonPtr hi = e.genEvent()->heavy_ion(); if ( hi ) _estimate = 100.0 - hi->centrality; // @TODO We don't really know how to interpret this number! 
#endif } /// Compare projections int compare(const Projection& p) const { return mkNamedPCmp(p, "GeneratedCentrality"); } }; } #endif diff --git a/include/Rivet/Tools/RivetHepMC.hh b/include/Rivet/Tools/RivetHepMC.hh --- a/include/Rivet/Tools/RivetHepMC.hh +++ b/include/Rivet/Tools/RivetHepMC.hh @@ -1,324 +1,335 @@ // -*- C++ -*- #ifndef RIVET_RivetHepMC_HH #define RIVET_RivetHepMC_HH #ifdef ENABLE_HEPMC_3 #include "HepMC3/HepMC3.h" #include "HepMC3/Relatives.h" -#include "HepMC3/ReaderFactory.h" +#include "HepMC3/Reader.h" namespace Rivet{ namespace RivetHepMC = HepMC3; using RivetHepMC::ConstGenParticlePtr; using RivetHepMC::ConstGenVertexPtr; using RivetHepMC::Relatives; + using RivetHepMC::ConstGenHeavyIonPtr; + + using HepMC_IO_type = RivetHepMC::Reader; + + using PdfInfo = RivetHepMC::GenPdfInfo; } #else #include "HepMC/GenEvent.h" #include "HepMC/GenParticle.h" #include "HepMC/GenVertex.h" #include "HepMC/Version.h" #include "HepMC/GenRanges.h" +#include "HepMC/IO_GenEvent.h" namespace Rivet{ namespace RivetHepMC = HepMC; // HepMC 2.07 provides its own #defines -#define ConstGenParticlePtr const HepMC::GenParticle* -#define ConstGenVertexPtr const HepMC::GenVertex* -#define Relatives HepMC::IteratorRange + #define ConstGenParticlePtr const HepMC::GenParticle* + #define ConstGenVertexPtr const HepMC::GenVertex* + #define ConstGenHeavyIonPtr const HepMC::HeavyIon* + + /// @brief Replicated the HepMC3 Relatives syntax using HepMC2 IteratorRanges + /// This is necessary mainly because of capitalisation differences + class Relatives{ + + public: + + constexpr Relatives(HepMC::IteratorRange relo): _internal(relo){} + + constexpr HepMC::IteratorRange operator()() const {return _internal;} + operator HepMC::IteratorRange() const {return _internal;} + + const static Relatives PARENTS; + const static Relatives CHILDREN; + const static Relatives ANCESTORS; + const static Relatives DESCENDANTS; + + private: + const HepMC::IteratorRange _internal; + + }; + + using HepMC_IO_type = HepMC::IO_GenEvent; + using PdfInfo = RivetHepMC::PdfInfo; + } #endif #include "Rivet/Tools/RivetSTL.hh" #include "Rivet/Tools/Exceptions.hh" namespace Rivet { using RivetHepMC::GenEvent; using ConstGenEventPtr = std::shared_ptr; /// @todo Use mcutils? 
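(Illustration, not part of the patch.) The Relatives wrapper above and the HepMCUtils helpers declared in the hunk that follows are what analysis code is expected to call instead of HepMC2- or HepMC3-specific APIs. A minimal sketch of that call pattern, assuming only the names declared in this header plus the existing PID utilities; the helper names hasBottomAncestor and isSameGenParticle are invented for the example:

#include "Rivet/Tools/RivetHepMC.hh"
#include "Rivet/Tools/ParticleIdUtils.hh"

namespace Rivet {

  // True if any ancestor of gp carries bottom flavour. The same call compiles
  // against HepMC2 (Relatives spoofing an IteratorRange) and HepMC3
  // (HepMC3::Relatives proper).
  inline bool hasBottomAncestor(ConstGenParticlePtr gp) {
    if (!gp) return false;
    for (ConstGenParticlePtr a : HepMCUtils::particles(gp, Relatives::ANCESTORS)) {
      if (PID::hasBottom(a->pdg_id())) return true;
    }
    return false;
  }

  // Identity comparison without touching HepMC2 barcodes or HepMC3 ids directly.
  inline bool isSameGenParticle(ConstGenParticlePtr a, ConstGenParticlePtr b) {
    return a && b && HepMCUtils::uniqueId(a) == HepMCUtils::uniqueId(b);
  }

}

Writing traversals this way is what lets the version-specific #if blocks be dropped from Particle.cc and Jet.cc later in this patch.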
- std::vector particles(ConstGenEventPtr ge); - std::vector particles(const GenEvent *ge); - std::vector vertices(ConstGenEventPtr ge); - std::vector vertices(const GenEvent *ge); - std::vector particles(ConstGenVertexPtr gv, const Relatives &relo); - std::vector particles(ConstGenParticlePtr gp, const Relatives &relo); - int uniqueId(ConstGenParticlePtr gp); - std::vector beams(const GenEvent *ge); + namespace HepMCUtils{ + std::vector particles(ConstGenEventPtr ge); + std::vector particles(const GenEvent *ge); + std::vector vertices(ConstGenEventPtr ge); + std::vector vertices(const GenEvent *ge); + std::vector particles(ConstGenVertexPtr gv, const Relatives &relo); + std::vector particles(ConstGenParticlePtr gp, const Relatives &relo); + int uniqueId(ConstGenParticlePtr gp); + int particles_size(ConstGenEventPtr ge); + int particles_size(const GenEvent *ge); + std::vector beams(const GenEvent *ge); + std::shared_ptr makeReader(std::istream &istr); + bool readEvent(std::shared_ptr io, std::shared_ptr evt); + }; + /* - #if HEPMC_VERSION_CODE >= 3000000 - - /// @name Accessors from GenEvent - //@{ -/* - inline std::vector particles(const GenEvent* ge) { - assert(ge); - return ge->particles(); - // std::vector rtn; - // for (const GenParticlePtr p : ge->particles()) rtn.push_back(p); - // return rtn; - } - - inline std::vector vertices(const GenEvent* ge) { - assert(ge); - return ge->vertices(); - // std::vector rtn; - // for (const GenVertexPtr v : ge->vertices()) rtn.push_back(v); - // return rtn; - } -*/ //@} /// @name Accessors from GenVertex //@{ /// @todo are these really necessary? Why not call GenVertex::particles_[in, out] directly? //inline const vector& particles_in(const GenVertexPtr& gv) { return gv->particles_in(); } // inline vector& particles_in(GenVertexPtr& gv) { return gv->particles_in(); } //inline const vector& particles_out(const GenVertexPtr& gv) { return gv->particles_out(); } // inline vector& particles_out(GenVertexPtr& gv) { return gv->particles_out(); } -/* - inline std::vector particles(const GenVertexPtr& gv, HepMC::IteratorRange range) { - return HepMC::FindParticles(gv, range).results(); - } -*/ // /// Get the direct parents or all-ancestors of GenParticle @a gp // inline std::vector particles_in(GenParticlePtr gp, HepMC::IteratorRange range=HepMC::ancestors) { // assert(gp); // if (range != HepMC::parents && range != HepMC::ancestors) // throw UserError("Requested particles_in(GenParticlePtr) with a non-'in' iterator range"); // return particles(gp->production_vertex(), range); // } // /// Get the direct children or all-descendents of GenParticle @a gp // inline std::vector particles_out(GenParticlePtr gp, HepMC::IteratorRange range=HepMC::ancestors) { // assert(gp); // if (range != HepMC::children && range != HepMC::descendants) // throw UserError("Requested particles_out(GenParticlePtr) with a non-'out' iterator range"); // return particles(gp->production_vertex(), range); // } //@} /// @name Accessors from GenParticle //@{ /// Get any relatives of GenParticle @a gp - /* + inline std::vector particles(GenParticlePtr gp, HepMC::IteratorRange range) { return HepMC::FindParticles(gp, range).results(); - }*/ + } //@} -/* #else /// @name Accessors from GenEvent //@{ inline std::vector particles(const GenEvent* ge) { assert(ge); std::vector rtn; #if HEPMC_VERSION_CODE >= 2007000 for (const GenParticlePtr p : ge->particles()) rtn.push_back(p); #else for (GenEvent::particle_const_iterator pi = ge->particles_begin(); pi != ge->particles_end(); ++pi) 
rtn.push_back(*pi); #endif return rtn; } inline std::vector particles(GenEvent* ge) { assert(ge); std::vector rtn; #if HEPMC_VERSION_CODE >= 2007000 for (GenParticlePtr p : ge->particles()) rtn.push_back(p); #else for (GenEvent::particle_iterator pi = ge->particles_begin(); pi != ge->particles_end(); ++pi) rtn.push_back(*pi); #endif return rtn; } inline std::vector vertices(const GenEvent* ge) { assert(ge); std::vector rtn; #if HEPMC_VERSION_CODE >= 2007000 for (const GenVertexPtr v : ge->vertices()) rtn.push_back(v); #else for (GenEvent::vertex_const_iterator vi = ge->vertices_begin(); vi != ge->vertices_end(); ++vi) rtn.push_back(*vi); #endif return rtn; } inline std::vector vertices(GenEvent* ge) { assert(ge); std::vector rtn; #if HEPMC_VERSION_CODE >= 2007000 for (const GenVertexPtr v : ge->vertices()) rtn.push_back(v); #else for (GenEvent::vertex_iterator vi = ge->vertices_begin(); vi != ge->vertices_end(); ++vi) rtn.push_back(*vi); #endif return rtn; } //@} /// @name Accessors from GenVertex //@{ inline std::vector particles(const GenVertexPtr gv, HepMC::IteratorRange range=HepMC::relatives) { std::vector rtn; /// @todo A particle_const_iterator on GenVertex would be nice... // Before HepMC 2.7.0 there were no GV::particles_const_iterators and constness consistency was all screwed up :-/ #if HEPMC_VERSION_CODE >= 2007000 for (const GenParticlePtr p : gv->particles(range)) rtn.push_back(p); #else GenVertexPtr gv2 = const_cast(gv); for (GenVertex::particle_iterator pi = gv2->particles_begin(range); pi != gv2->particles_end(range); ++pi) rtn.push_back(const_cast(*pi)); #endif return rtn; } inline std::vector particles(GenVertexPtr gv, HepMC::IteratorRange range=HepMC::relatives) { std::vector rtn; for (GenVertex::particle_iterator pi = gv->particles_begin(range); pi != gv->particles_end(range); ++pi) rtn.push_back(*pi); return rtn; } // Get iterator ranges as wrapped begin/end pairs /// @note GenVertex _in and _out iterators are actually, secretly the same types *sigh* struct GenVertexIterRangeC { typedef vector::const_iterator genvertex_particles_const_iterator; GenVertexIterRangeC(const genvertex_particles_const_iterator& begin, const genvertex_particles_const_iterator& end) : _begin(begin), _end(end) { } const genvertex_particles_const_iterator& begin() { return _begin; } const genvertex_particles_const_iterator& end() { return _end; } private: const genvertex_particles_const_iterator _begin, _end; }; inline GenVertexIterRangeC particles_in(const GenVertexPtr gv) { return GenVertexIterRangeC(gv->particles_in_const_begin(), gv->particles_in_const_end()); } inline GenVertexIterRangeC particles_out(const GenVertexPtr gv) { return GenVertexIterRangeC(gv->particles_out_const_begin(), gv->particles_out_const_end()); } #if HEPMC_VERSION_CODE >= 2007000 // Get iterator ranges as wrapped begin/end pairs /// @note GenVertex _in and _out iterators are actually, secretly the same types *sigh* struct GenVertexIterRange { typedef vector::iterator genvertex_particles_iterator; GenVertexIterRange(const genvertex_particles_iterator& begin, const genvertex_particles_iterator& end) : _begin(begin), _end(end) { } const genvertex_particles_iterator& begin() { return _begin; } const genvertex_particles_iterator& end() { return _end; } private: const genvertex_particles_iterator _begin, _end; }; inline GenVertexIterRange particles_in(GenVertexPtr gv) { return GenVertexIterRange(gv->particles_in_begin(), gv->particles_in_end()); } inline GenVertexIterRange particles_out(GenVertexPtr gv) { return 
GenVertexIterRange(gv->particles_out_begin(), gv->particles_out_end()); } #endif /// @name Accessors from GenParticle //@{ /// Get the direct parents or all-ancestors of GenParticle @a gp inline std::vector _particles_in(const GenParticlePtr gp, HepMC::IteratorRange range=HepMC::ancestors) { assert(gp); if (range != HepMC::parents && range != HepMC::ancestors) throw UserError("Requested particles_in(GenParticlePtr) with a non-'in' iterator range"); if (!gp->production_vertex()) return std::vector(); #if HEPMC_VERSION_CODE >= 2007000 return particles(gp->production_vertex(), range); #else // Before HepMC 2.7.0 the constness consistency of methods and their return types was all screwed up :-/ std::vector rtn; for (GenParticlePtr gp2 : particles(gp->production_vertex(), range)) rtn.push_back( const_cast(gp2) ); return rtn; #endif } /// Get the direct parents or all-ancestors of GenParticle @a gp inline std::vector _particles_in(GenParticlePtr gp, HepMC::IteratorRange range=HepMC::ancestors) { assert(gp); if (range != HepMC::parents && range != HepMC::ancestors) throw UserError("Requested particles_in(GenParticlePtr) with a non-'in' iterator range"); return (gp->production_vertex()) ? particles(gp->production_vertex(), range) : std::vector(); } /// Get the direct children or all-descendents of GenParticle @a gp inline std::vector _particles_out(const GenParticlePtr gp, HepMC::IteratorRange range=HepMC::descendants) { assert(gp); if (range != HepMC::children && range != HepMC::descendants) throw UserError("Requested particles_out(GenParticlePtr) with a non-'out' iterator range"); if (!gp->end_vertex()) return std::vector(); #if HEPMC_VERSION_CODE >= 2007000 return particles(gp->end_vertex(), range); #else // Before HepMC 2.7.0 the constness consistency of methods and their return types was all screwed up :-/ std::vector rtn; foreach (GenParticlePtr gp2, particles(gp->end_vertex(), range)) rtn.push_back( const_cast(gp2) ); return rtn; #endif } /// Get the direct children or all-descendents of GenParticle @a gp inline std::vector _particles_out(GenParticlePtr gp, HepMC::IteratorRange range=HepMC::descendants) { assert(gp); if (range != HepMC::children && range != HepMC::descendants) throw UserError("Requested particles_out(GenParticlePtr) with a non-'out' iterator range"); return (gp->end_vertex()) ? particles(gp->end_vertex(), range) : std::vector(); } /// Get any relatives of GenParticle @a gp inline std::vector particles(const GenParticlePtr gp, HepMC::IteratorRange range) { if (range == HepMC::parents || range == HepMC::ancestors) return _particles_in(gp, range); if (range == HepMC::children || range == HepMC::descendants) return _particles_out(gp, range); throw UserError("Requested particles(const GenParticlePtr) with an unsupported iterator range"); } /// Get any relatives of GenParticle @a gp inline std::vector particles(GenParticlePtr gp, HepMC::IteratorRange range) { if (range == HepMC::parents || range == HepMC::ancestors) return _particles_in(gp, range); if (range == HepMC::children || range == HepMC::descendants) return _particles_out(gp, range); throw UserError("Requested particles(GenParticlePtr) with an unsupported iterator range"); } #endif */ } #endif diff --git a/pyext/setup.py.in b/pyext/setup.py.in --- a/pyext/setup.py.in +++ b/pyext/setup.py.in @@ -1,65 +1,63 @@ #! 
/usr/bin/env python from distutils.core import setup from distutils.extension import Extension from glob import glob ## Extension definition import os.path incdir1 = os.path.abspath("@abs_top_srcdir@/include") incdir2 = os.path.abspath("@abs_top_builddir@/include") incdir3 = os.path.abspath("@abs_srcdir@/rivet") incdir4 = os.path.abspath("@abs_builddir@/rivet") srcdir = os.path.abspath("@abs_top_srcdir@/src") libdir = os.path.abspath("@abs_top_builddir@/src/.libs") ## Assemble the library search dirs -lookupdirs = [ - libdir, - "@HEPMC3LIBPATH@", - "@FASTJETLIBPATH@", - "@YODALIBPATH@" ] +@ENABLE_HEPMC_3_TRUE@lookupdirs = [libdir, "@HEPMC3LIBPATH@", "@FASTJETLIBPATH@", "@YODALIBPATH@" ] +@ENABLE_HEPMC_3_FALSE@lookupdirs = [libdir, "@HEPMCLIBPATH@", "@FASTJETLIBPATH@", "@YODALIBPATH@" ] if "RIVET_LOCAL" in os.environ: BASE_LINK_ARGS = ["-L@abs_top_builddir@/src/.libs"] else: BASE_LINK_ARGS = ["-L@prefix@/lib"] ## Be careful with extracting the GSL path from the flags string # import re # re_libdirflag = re.compile(r".*-L\s*(\S+).*") # re_match = re_libdirflag.search("@GSL_LDFLAGS@") # if re_match: # lookupdirs.append( re_match.group(1) ) ## A helper function def ext(name, depends=[], statics=[]): fullname = '@abs_builddir@/rivet/%s.cpp' % name if not os.path.isfile(fullname): # distcheck has it in srcdir fullname = os.path.relpath("@abs_srcdir@/rivet/%s.cpp" % name) return Extension( "rivet.%s" % name, [fullname] + statics, language="c++", # depends=depends, include_dirs=[incdir1, incdir2, incdir3, incdir4], # extra_compile_args="-I@prefix@/include @PYEXT_CXXFLAGS@ @HEPMCCPPFLAGS@ @FASTJETCPPFLAGS@ @YODACPPFLAGS@ @GSLCPPFLAGS@".split(), extra_compile_args="-I@prefix@/include @PYEXT_CXXFLAGS@ @HEPMCCPPFLAGS@ @HEPMC3CPPFLAGS@ @CPPFLAGS@ @FASTJETCPPFLAGS@ @YODACPPFLAGS@".split(), extra_link_args=BASE_LINK_ARGS, library_dirs=lookupdirs, runtime_library_dirs=lookupdirs[1:], - libraries=["HepMC3", "fastjet", "YODA", "Rivet"]) + @ENABLE_HEPMC_3_TRUE@libraries=["HepMC3", "fastjet", "YODA", "Rivet"]) + @ENABLE_HEPMC_3_FALSE@libraries=["HepMC", "fastjet", "YODA", "Rivet"]) # libraries=["gsl", "HepMC", "fastjet", "YODA", "Rivet"]) #header_files = glob("../include/Rivet/*.h") + glob("../include/Rivet/Utils/*.h") extns = [ext("core")]#, header_files)] setup(name = "rivet", version="@PACKAGE_VERSION@", ext_modules = extns, packages = ["rivet"]) diff --git a/src/Core/AnalysisHandler.cc b/src/Core/AnalysisHandler.cc --- a/src/Core/AnalysisHandler.cc +++ b/src/Core/AnalysisHandler.cc @@ -1,362 +1,365 @@ // -*- C++ -*- #include "Rivet/Config/RivetCommon.hh" #include "Rivet/AnalysisHandler.hh" #include "Rivet/Analysis.hh" #include "Rivet/Tools/ParticleName.hh" #include "Rivet/Tools/BeamConstraint.hh" #include "Rivet/Tools/Logging.hh" #include "Rivet/Projections/Beam.hh" #include "YODA/IO.h" namespace Rivet { AnalysisHandler::AnalysisHandler(const string& runname) : _runname(runname), _eventcounter("/_EVTCOUNT"), _xs(NAN), _xserr(NAN), _initialised(false), _ignoreBeams(false) { } AnalysisHandler::~AnalysisHandler() { } Log& AnalysisHandler::getLog() const { return Log::getLog("Rivet.Analysis.Handler"); } void AnalysisHandler::init(const GenEvent& ge) { if (_initialised) throw UserError("AnalysisHandler::init has already been called: cannot re-initialize!"); setRunBeams(Rivet::beams(ge)); MSG_DEBUG("Initialising the analysis handler"); _eventcounter.reset(); // Check that analyses are beam-compatible, and remove those that aren't const size_t num_anas_requested = analysisNames().size(); vector anamestodelete; 
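(Aside on the pyext/setup.py.in hunk above, not part of the patch.) The @ENABLE_HEPMC_3_TRUE@/@ENABLE_HEPMC_3_FALSE@ prefixes are Automake-style conditional substitutions: configure expands one of them to nothing and the other to a '#', so whichever line does not apply is commented out and a single template yields either the HepMC3 or the HepMC2 library list. On the C++ side the same configure-time choice shows up as the ENABLE_HEPMC_3 define used throughout this patch; a trivial standalone check of which backend a build was configured against might look like:

#include <iostream>
#include "Rivet/Tools/RivetHepMC.hh"

int main() {
  // ENABLE_HEPMC_3 is the configure-time define used throughout this patch
#ifdef ENABLE_HEPMC_3
  std::cout << "Rivet built against HepMC3" << std::endl;
#else
  std::cout << "Rivet built against HepMC2" << std::endl;
#endif
  return 0;
}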
for (const AnaHandle a : _analyses) { if (!_ignoreBeams && !a->isCompatible(beams())) { //MSG_DEBUG(a->name() << " requires beams " << a->requiredBeams() << " @ " << a->requiredEnergies() << " GeV"); anamestodelete.push_back(a->name()); } } for (const string& aname : anamestodelete) { MSG_WARNING("Analysis '" << aname << "' is incompatible with the provided beams: removing"); removeAnalysis(aname); } if (num_anas_requested > 0 && analysisNames().empty()) { cerr << "All analyses were incompatible with the first event's beams\n" << "Exiting, since this probably wasn't intentional!" << endl; exit(1); } // Warn if any analysis' status is not unblemished for (const AnaHandle a : analyses()) { if (toUpper(a->status()) == "PRELIMINARY") { MSG_WARNING("Analysis '" << a->name() << "' is preliminary: be careful, it may change and/or be renamed!"); } else if (toUpper(a->status()) == "OBSOLETE") { MSG_WARNING("Analysis '" << a->name() << "' is obsolete: please update!"); } else if (toUpper(a->status()).find("UNVALIDATED") != string::npos) { MSG_WARNING("Analysis '" << a->name() << "' is unvalidated: be careful, it may be broken!"); } } // Initialize the remaining analyses for (AnaHandle a : _analyses) { MSG_DEBUG("Initialising analysis: " << a->name()); try { // Allow projection registration in the init phase onwards a->_allowProjReg = true; a->init(); //MSG_DEBUG("Checking consistency of analysis: " << a->name()); //a->checkConsistency(); } catch (const Error& err) { cerr << "Error in " << a->name() << "::init method: " << err.what() << endl; exit(1); } MSG_DEBUG("Done initialising analysis: " << a->name()); } _initialised = true; MSG_DEBUG("Analysis handler initialised"); } void AnalysisHandler::analyze(const GenEvent& ge) { // Call init with event as template if not already initialised if (!_initialised) init(ge); assert(_initialised); // Ensure that beam details match those from the first event (if we're checking beams) if ( !_ignoreBeams ) { const PdgIdPair beams = Rivet::beamIds(ge); const double sqrts = Rivet::sqrtS(ge); if (!compatible(beams, _beams) || !fuzzyEquals(sqrts, sqrtS())) { cerr << "Event beams mismatch: " << PID::toBeamsString(beams) << " @ " << sqrts/GeV << " GeV" << " vs. 
first beams " << this->beams() << " @ " << this->sqrtS()/GeV << " GeV" << endl; exit(1); } } // Create the Rivet event wrapper /// @todo Filter/normalize the event here Event event(ge); // Weights /// @todo Drop this / just report first weight when we support multiweight events _eventcounter.fill(event.weight()); MSG_DEBUG("Event #" << _eventcounter.numEntries() << " weight = " << event.weight()); // Cross-section - #ifdef HEPMC_HAS_CROSS_SECTION + + #if defined ENABLE_HEPMC_3 if (ge.cross_section()) { - #if HEPMC_VERSION_CODE >= 3000000 - _xs = ge.cross_section()->cross_section; - _xserr = ge.cross_section()->cross_section_error; - #else + //@todo HepMC3::GenCrossSection methods aren't const accessible :( + RivetHepMC::GenCrossSection gcs = *(event.genEvent()->cross_section()); + _xs = gcs.xsec(); + _xserr = gcs.xsec_err(); + } + #elif defined HEPMC_HAS_CROSS_SECTION + if (ge.cross_section()) { _xs = ge.cross_section()->cross_section(); _xserr = ge.cross_section()->cross_section_error(); - #endif } #endif // Run the analyses for (AnaHandle a : _analyses) { MSG_TRACE("About to run analysis " << a->name()); try { a->analyze(event); } catch (const Error& err) { cerr << "Error in " << a->name() << "::analyze method: " << err.what() << endl; exit(1); } MSG_TRACE("Finished running analysis " << a->name()); } } void AnalysisHandler::analyze(const GenEvent* ge) { if (ge == nullptr) { MSG_ERROR("AnalysisHandler received null pointer to GenEvent"); //throw Error("AnalysisHandler received null pointer to GenEvent"); } analyze(*ge); } void AnalysisHandler::finalize() { if (!_initialised) return; MSG_INFO("Finalising analyses"); for (AnaHandle a : _analyses) { a->setCrossSection(_xs); try { a->finalize(); } catch (const Error& err) { cerr << "Error in " << a->name() << "::finalize method: " << err.what() << endl; exit(1); } } // Print out number of events processed const int nevts = _eventcounter.numEntries(); MSG_INFO("Processed " << nevts << " event" << (nevts != 1 ? "s" : "")); // // Delete analyses // MSG_DEBUG("Deleting analyses"); // _analyses.clear(); // Print out MCnet boilerplate cout << endl; cout << "The MCnet usage guidelines apply to Rivet: see http://www.montecarlonet.org/GUIDELINES" << endl; cout << "Please acknowledge plots made with Rivet analyses, and cite arXiv:1003.0694 (http://arxiv.org/abs/1003.0694)" << endl; } AnalysisHandler& AnalysisHandler::addAnalysis(const string& analysisname) { // Check for a duplicate analysis /// @todo Might we want to be able to run an analysis twice, with different params? /// Requires avoiding histo tree clashes, i.e. storing the histos on the analysis objects. for (const AnaHandle& a : _analyses) { if (a->name() == analysisname) { MSG_WARNING("Analysis '" << analysisname << "' already registered: skipping duplicate"); return *this; } } AnaHandle analysis( AnalysisLoader::getAnalysis(analysisname) ); if (analysis.get() != 0) { // < Check for null analysis. 
MSG_DEBUG("Adding analysis '" << analysisname << "'"); analysis->_analysishandler = this; _analyses.insert(analysis); } else { MSG_WARNING("Analysis '" << analysisname << "' not found."); } // MSG_WARNING(_analyses.size()); // for (const AnaHandle& a : _analyses) MSG_WARNING(a->name()); return *this; } AnalysisHandler& AnalysisHandler::removeAnalysis(const string& analysisname) { std::shared_ptr toremove; for (const AnaHandle a : _analyses) { if (a->name() == analysisname) { toremove = a; break; } } if (toremove.get() != 0) { MSG_DEBUG("Removing analysis '" << analysisname << "'"); _analyses.erase(toremove); } return *this; } ///////////////////////////// void AnalysisHandler::addData(const std::vector& aos) { for (const AnalysisObjectPtr ao : aos) { const string path = ao->path(); if (path.size() > 1) { // path > "/" try { const string ananame = split(path, "/")[0]; AnaHandle a = analysis(ananame); a->addAnalysisObject(ao); /// @todo Need to statistically merge... } catch (const Error& e) { MSG_WARNING(e.what()); } } } } void AnalysisHandler::readData(const string& filename) { vector aos; try { /// @todo Use new YODA SFINAE to fill the smart ptr vector directly vector aos_raw; YODA::read(filename, aos_raw); for (AnalysisObject* aor : aos_raw) aos.push_back(AnalysisObjectPtr(aor)); } catch (...) { //< YODA::ReadError& throw UserError("Unexpected error in reading file: " + filename); } if (!aos.empty()) addData(aos); } vector AnalysisHandler::getData() const { vector rtn; // Event counter rtn.push_back( make_shared(_eventcounter) ); // Cross-section + err as scatter YODA::Scatter1D::Points pts; pts.insert(YODA::Point1D(_xs, _xserr)); rtn.push_back( make_shared(pts, "/_XSEC") ); // Analysis histograms for (const AnaHandle a : analyses()) { vector aos = a->analysisObjects(); // MSG_WARNING(a->name() << " " << aos.size()); for (const AnalysisObjectPtr ao : aos) { // Exclude paths from final write-out if they contain a "TMP" layer (i.e. matching "/TMP/") /// @todo This needs to be much more nuanced for re-entrant histogramming if (ao->path().find("/TMP/") != string::npos) continue; rtn.push_back(ao); } } // Sort histograms alphanumerically by path before write-out sort(rtn.begin(), rtn.end(), [](AnalysisObjectPtr a, AnalysisObjectPtr b) {return a->path() < b->path();}); return rtn; } void AnalysisHandler::writeData(const string& filename) const { const vector aos = getData(); try { YODA::write(filename, aos.begin(), aos.end()); } catch (...) 
{ //< YODA::WriteError& throw UserError("Unexpected error in writing file: " + filename); } } std::vector AnalysisHandler::analysisNames() const { std::vector rtn; for (AnaHandle a : _analyses) { rtn.push_back(a->name()); } return rtn; } const AnaHandle AnalysisHandler::analysis(const std::string& analysisname) const { for (const AnaHandle a : analyses()) if (a->name() == analysisname) return a; throw Error("No analysis named '" + analysisname + "' registered in AnalysisHandler"); } AnalysisHandler& AnalysisHandler::addAnalyses(const std::vector& analysisnames) { for (const string& aname : analysisnames) { //MSG_DEBUG("Adding analysis '" << aname << "'"); addAnalysis(aname); } return *this; } AnalysisHandler& AnalysisHandler::removeAnalyses(const std::vector& analysisnames) { for (const string& aname : analysisnames) { removeAnalysis(aname); } return *this; } bool AnalysisHandler::needCrossSection() const { bool rtn = false; for (const AnaHandle a : _analyses) { if (!rtn) rtn = a->needsCrossSection(); if (rtn) break; } return rtn; } AnalysisHandler& AnalysisHandler::setCrossSection(double xs) { _xs = xs; return *this; } bool AnalysisHandler::hasCrossSection() const { return (!std::isnan(crossSection())); } AnalysisHandler& AnalysisHandler::addAnalysis(Analysis* analysis) { analysis->_analysishandler = this; _analyses.insert(AnaHandle(analysis)); return *this; } PdgIdPair AnalysisHandler::beamIds() const { return Rivet::beamIds(beams()); } double AnalysisHandler::sqrtS() const { return Rivet::sqrtS(beams()); } void AnalysisHandler::setIgnoreBeams(bool ignore) { _ignoreBeams=ignore; } } diff --git a/src/Core/Event.cc b/src/Core/Event.cc --- a/src/Core/Event.cc +++ b/src/Core/Event.cc @@ -1,49 +1,38 @@ // -*- C++ -*- #include "Rivet/Event.hh" #include "Rivet/Tools/BeamConstraint.hh" #include "Rivet/Projections/Beam.hh" namespace Rivet { double Event::weight() const { return genEvent()->weights().empty() ? 1.0 : _genevent.weights()[0]; } - /* - double Event::centrality() const { - /// @todo Use direct "centrality" property if using HepMC3 - - #if HEPMC_VERSION_CODE >= 3000000 - return genEvent()->heavy_ion() ? genEvent()->heavy_ion()->centrality: -1; - #else - return genEvent()->heavy_ion() ? 
genEvent()->heavy_ion()->impact_parameter() : -1; - #endif - } -*/ ParticlePair Event::beams() const { return Rivet::beams(*this); } double Event::sqrtS() const { return Rivet::sqrtS(beams()); } double Event::asqrtS() const { return Rivet::asqrtS(beams()); } void Event::_init(const GenEvent& ge) { // Use Rivet's preferred units if possible #ifdef HEPMC_HAS_UNITS _genevent.use_units(HepMC::Units::GEV, HepMC::Units::MM); #endif } const Particles& Event::allParticles() const { if (_particles.empty()) { //< assume that empty means no attempt yet made - for (ConstGenParticlePtr gp : particles(genEvent())) { + for (ConstGenParticlePtr gp : HepMCUtils::particles(genEvent())) { _particles += Particle(gp); } } return _particles; } } diff --git a/src/Core/Jet.cc b/src/Core/Jet.cc --- a/src/Core/Jet.cc +++ b/src/Core/Jet.cc @@ -1,201 +1,194 @@ #include "Rivet/Jet.hh" #include "Rivet/Tools/Cuts.hh" #include "Rivet/Tools/ParticleName.hh" #include "Rivet/Tools/Logging.hh" #include "Rivet/Tools/ParticleIdUtils.hh" namespace Rivet { Jet& Jet::clear() { _momentum = FourMomentum(); _pseudojet.reset(0,0,0,0); _particles.clear(); return *this; } Jet& Jet::setState(const FourMomentum& mom, const Particles& particles, const Particles& tags) { clear(); _momentum = mom; _pseudojet = fastjet::PseudoJet(mom.px(), mom.py(), mom.pz(), mom.E()); _particles = particles; _tags = tags; return *this; } Jet& Jet::setState(const fastjet::PseudoJet& pj, const Particles& particles, const Particles& tags) { clear(); _pseudojet = pj; _momentum = FourMomentum(pj.e(), pj.px(), pj.py(), pj.pz()); _particles = particles; _tags = tags; // if (_particles.empty()) { // for (const fastjet::PseudoJet pjc : _pseudojet.constituents()) { // // If there is no attached user info, we can't create a meaningful particle, so skip // if (!pjc.has_user_info()) continue; // const RivetFJInfo& fjinfo = pjc.user_info(); // // Don't add ghosts to the particles list // if (fjinfo.isGhost) continue; // // Otherwise construct a Particle from the PseudoJet, preferably from an associated GenParticle // ?if (fjinfo.genParticle != NULL) { // _particles.push_back(Particle(fjinfo.genParticle)); // } else { // if (fjinfo.pid == 0) continue; // skip if there is a null PID entry in the FJ info // const FourMomentum pjcmom(pjc.e(), pjc.px(), pjc.py(), pjc.pz()); // _particles.push_back(Particle(fjinfo.pid, pjcmom)); // } // } // } return *this; } Jet& Jet::setParticles(const Particles& particles) { _particles = particles; return *this; } bool Jet::containsParticle(const Particle& particle) const { - #if HEPMC_VERSION_CODE >= 3000000 - const int idcode = particle.genParticle()->id(); + const int barcode = HepMCUtils::uniqueId(particle.genParticle()); for (const Particle& p : particles()) { - if (p.genParticle()->id() == idcode) return true; + if (HepMCUtils::uniqueId(p.genParticle()) == barcode) return true; } - #else - const int barcode = uniqueId(particle.genParticle()); - for (const Particle& p : particles()) { - if (uniqueId(p.genParticle()) == barcode) return true; - } - #endif return false; } bool Jet::containsParticleId(PdgId pid) const { for (const Particle& p : particles()) { if (p.pid() == pid) return true; } return false; } bool Jet::containsParticleId(const vector& pids) const { for (const Particle& p : particles()) { for (PdgId pid : pids) { if (p.pid() == pid) return true; } } return false; } /// @todo Jet::containsMatch(Matcher m) { ... if m(pid) return true; ... 
} Jet& Jet::transformBy(const LorentzTransform& lt) { _momentum = lt.transform(_momentum); for (Particle& p : _particles) p.transformBy(lt); for (Particle& t : _tags) t.transformBy(lt); _pseudojet.reset(_momentum.px(), _momentum.py(), _momentum.pz(), _momentum.E()); //< lose ClusterSeq etc. return *this; } double Jet::neutralEnergy() const { double e_neutral = 0.0; for (const Particle& p : particles()) { if (p.charge3() == 0) e_neutral += p.E(); } return e_neutral; } double Jet::hadronicEnergy() const { double e_hadr = 0.0; for (const Particle& p : particles()) { if (isHadron(p)) e_hadr += p.E(); } return e_hadr; } bool Jet::containsCharm(bool include_decay_products) const { for (const Particle& p : particles()) { if (p.abspid() == PID::CQUARK) return true; if (isHadron(p) && hasCharm(p)) return true; if (include_decay_products) { ConstGenVertexPtr gv = p.genParticle()->production_vertex(); if (gv) { - for (ConstGenParticlePtr pi : Rivet::particles(gv, Relatives::ANCESTORS)) { + for (ConstGenParticlePtr pi : HepMCUtils::particles(gv, Relatives::ANCESTORS)) { const PdgId pid2 = pi->pdg_id(); if (PID::isHadron(pid2) && PID::hasCharm(pid2)) return true; } } } } return false; } bool Jet::containsBottom(bool include_decay_products) const { for (const Particle& p : particles()) { if (p.abspid() == PID::BQUARK) return true; if (isHadron(p) && hasBottom(p)) return true; if (include_decay_products) { ConstGenVertexPtr gv = p.genParticle()->production_vertex(); if (gv) { - for (ConstGenParticlePtr pi : Rivet::particles(gv, Relatives::ANCESTORS)) { + for (ConstGenParticlePtr pi : HepMCUtils::particles(gv, Relatives::ANCESTORS)) { const PdgId pid2 = pi->pdg_id(); if (PID::isHadron(pid2) && PID::hasBottom(pid2)) return true; } } } } return false; } Particles Jet::tags(const Cut& c) const { return filter_select(tags(), c); } Particles Jet::bTags(const Cut& c) const { Particles rtn; for (const Particle& tp : tags()) { if (hasBottom(tp) && c->accept(tp)) rtn.push_back(tp); } return rtn; } Particles Jet::cTags(const Cut& c) const { Particles rtn; for (const Particle& tp : tags()) { /// @todo Is making b and c tags exclusive the right thing to do? if (hasCharm(tp) && !hasBottom(tp) && c->accept(tp)) rtn.push_back(tp); } return rtn; } Particles Jet::tauTags(const Cut& c) const { Particles rtn; for (const Particle& tp : tags()) { if (isTau(tp) && c->accept(tp)) rtn.push_back(tp); } return rtn; } /// Allow a Jet to be passed to an ostream. 
std::ostream& operator << (std::ostream& os, const Jet& j) { os << "Jet<" << j.mom()/GeV << " GeV; Nparticles=" << j.size() << "; "; os << "bTag=" << boolalpha << j.bTagged() << ", "; os << "cTag=" << boolalpha << j.cTagged() << ", "; os << "tauTag=" << boolalpha << j.tauTagged() << ">"; return os; } } diff --git a/src/Core/Particle.cc b/src/Core/Particle.cc --- a/src/Core/Particle.cc +++ b/src/Core/Particle.cc @@ -1,329 +1,299 @@ #include "Rivet/Particle.hh" #include "Rivet/Tools/Cuts.hh" #include "Rivet/Tools/ParticleIdUtils.hh" #include "Rivet/Tools/ParticleUtils.hh" namespace Rivet { void Particle::setConstituents(const vector& cs, bool setmom) { _constituents = cs; if (setmom) _momentum = sum(cs, p4, FourMomentum()); } void Particle::addConstituent(const Particle& c, bool addmom) { _constituents += c; if (addmom) _momentum += c; } void Particle::addConstituents(const vector& cs, bool addmom) { _constituents += cs; if (addmom) for (const Particle& c : cs) _momentum += c; } vector Particle::rawConstituents() const { if (!isComposite()) return Particles{*this}; vector rtn; for (const Particle& p : constituents()) rtn += p.rawConstituents(); return rtn; } Particle& Particle::transformBy(const LorentzTransform& lt) { _momentum = lt.transform(_momentum); return *this; } bool Particle::isVisible() const { // Charged particles are visible if ( PID::threeCharge(pid()) != 0 ) return true; // Neutral hadrons are visible if ( PID::isHadron(pid()) ) return true; // Photons are visible if ( pid() == PID::PHOTON ) return true; // Gluons are visible (for parton level analyses) if ( pid() == PID::GLUON ) return true; // Everything else is invisible return false; } bool Particle::isStable() const { return genParticle() != NULL && genParticle()->status() == 1 && genParticle()->end_vertex() == NULL; } vector Particle::ancestors(const Cut& c, bool physical_only) const { /// @todo: use HepMC FindParticles when it actually works properly for const objects //HepMC::FindParticles searcher(genParticle(), HepMC::ANCESTORS, HepMC::STATUS==1); vector rtn; // this case needed protecting against (at least for the latest Herwig... 
not sure why // it didn't show up earlier if (genParticle() == nullptr) return rtn; ConstGenVertexPtr gv = genParticle()->production_vertex(); if (gv == nullptr) return rtn; - vector ancestors = Rivet::particles(genParticle(), Relatives::ANCESTORS); + vector ancestors = HepMCUtils::particles(genParticle(), Relatives::ANCESTORS); for(const auto &a: ancestors){ if(physical_only && a->status() != 1 && a->status() != 2) continue; const Particle p(a); if(c != Cuts::OPEN && !c->accept(p)) continue; rtn += p; } return rtn; } vector Particle::parents(const Cut& c) const { vector rtn; - #if HEPMC_VERSION_CODE >= 3000000 - for (ConstGenParticlePtr gp : genParticle()->parents()) { - const Particle p(gp); - if (c != Cuts::OPEN && !c->accept(p)) continue; - rtn += p; - } - #else ConstGenVertexPtr gv = genParticle()->production_vertex(); if (gv == nullptr) return rtn; - for(ConstGenParticlePtr it: particles(gv, RivetHepMC::Relatives::PARENTS)){ + for(ConstGenParticlePtr it: HepMCUtils::particles(gv, Relatives::PARENTS)){ const Particle p(it); if (c != Cuts::OPEN && !c->accept(p)) continue; rtn += p; } - #endif return rtn; } vector Particle::children(const Cut& c) const { vector rtn; if (isStable()) return rtn; - #if HEPMC_VERSION_CODE >= 3000000 - for (ConstGenParticlePtr gp : genParticle()->children()) { - const Particle p(gp); - if (c != Cuts::OPEN && !c->accept(p)) continue; - rtn += p; - } - #else ConstGenVertexPtr gv = genParticle()->end_vertex(); if (gv == nullptr) return rtn; /// @todo Would like to do this, but the range objects are broken // foreach (const GenParticlePtr gp, gv->particles(HepMC::children)) // rtn += Particle(gp); - for(ConstGenParticlePtr it: particles(gv, RivetHepMC::Relatives::CHILDREN)){ + for(ConstGenParticlePtr it: HepMCUtils::particles(gv, Relatives::CHILDREN)){ const Particle p(it); if (c != Cuts::OPEN && !c->accept(p)) continue; rtn += p; } - #endif return rtn; } /// @todo Insist that the current particle is post-hadronization, otherwise throw an exception? /// @todo Use recursion through replica-avoiding functions to avoid bookkeeping duplicates vector Particle::allDescendants(const Cut& c, bool remove_duplicates) const { vector rtn; if (isStable()) return rtn; - #if HEPMC_VERSION_CODE >= 3000000 - for (ConstGenParticlePtr gp : Rivet::particles(genParticle(), Relatives::DESCENDANTS)){ - const Particle p(gp); - if (c != Cuts::OPEN && !c->accept(p)) continue; - if (remove_duplicates) { - bool dup = false; - for (ConstGenParticlePtr gp2 : gp->children()) { - if (gp->pid() == gp2->pid()) { dup = true; break; } - } - if (dup) continue; - } - rtn += p; - } - #else + ConstGenVertexPtr gv = genParticle()->end_vertex(); if (gv == nullptr) return rtn; /// @todo Would like to do this, but the range objects are broken // foreach (const GenParticlePtr gp, gv->particles(HepMC::descendants)) - for(ConstGenParticlePtr it: particles(gv, RivetHepMC::Relatives::DESCENDANTS)){ + for(ConstGenParticlePtr it: HepMCUtils::particles(gv, Relatives::DESCENDANTS)){ const Particle p(it); if (c != Cuts::OPEN && !c->accept(p)) continue; if (remove_duplicates && it->end_vertex() != NULL) { // size_t n = 0; ///< @todo Only remove 1-to-1 duplicates? 
bool dup = false; /// @todo Yuck, HepMC - for(ConstGenParticlePtr it2: particles(it->end_vertex(), RivetHepMC::Relatives::CHILDREN)){ + for(ConstGenParticlePtr it2: HepMCUtils::particles(it->end_vertex(), Relatives::CHILDREN)){ // n += 1; if (n > 1) break; if (it->pdg_id() == it2->pdg_id()) { dup = true; break; } } if (dup) continue; } rtn += p; } return rtn; } /// @todo Insist that the current particle is post-hadronization, otherwise throw an exception? vector Particle::stableDescendants(const Cut& c) const { vector rtn; if (isStable()) return rtn; ConstGenVertexPtr gv = genParticle()->end_vertex(); if (gv == nullptr) return rtn; /// @todo Would like to do this, but the range objects are broken // foreach (const GenParticlePtr gp, gv->particles(HepMC::descendants)) - for(ConstGenParticlePtr it: particles(gv, RivetHepMC::Relatives::DESCENDANTS)){ + for(ConstGenParticlePtr it: HepMCUtils::particles(gv, Relatives::DESCENDANTS)){ //for (GenVertex::particle_iterator it = gv->particles_begin(HepMC::descendants); it != gv->particles_end(HepMC::descendants); ++it) { // if ((*it)->status() != 1 || (*it)->end_vertex() != NULL) continue; const Particle p(*it); if (!p.isStable()) continue; if (c != Cuts::OPEN && !c->accept(p)) continue; rtn += p; } - #endif return rtn; } double Particle::flightLength() const { if (isStable()) return -1; if (genParticle() == NULL) return 0; if (genParticle()->production_vertex() == NULL) return 0; const RivetHepMC::FourVector v1 = genParticle()->production_vertex()->position(); const RivetHepMC::FourVector v2 = genParticle()->end_vertex()->position(); return sqrt(sqr(v2.x()-v1.x()) + sqr(v2.y()-v1.y()) + sqr(v2.z()-v1.z())); } bool Particle::hasParent(PdgId pid) const { return hasParentWith(hasPID(pid)); } bool Particle::hasParentWith(const Cut& c) const { return hasParentWith([&](const Particle& p){return c->accept(p);}); } bool Particle::hasAncestor(PdgId pid, bool only_physical) const { return hasAncestorWith(hasPID(pid), only_physical); } bool Particle::hasAncestorWith(const Cut& c, bool only_physical) const { return hasAncestorWith([&](const Particle& p){return c->accept(p);}, only_physical); } bool Particle::hasChildWith(const Cut& c) const { return hasChildWith([&](const Particle& p){return c->accept(p);}); } bool Particle::hasDescendantWith(const Cut& c, bool remove_duplicates) const { return hasDescendantWith([&](const Particle& p){return c->accept(p);}, remove_duplicates); } bool Particle::hasStableDescendantWith(const Cut& c) const { return hasStableDescendantWith([&](const Particle& p){return c->accept(p);}); } bool Particle::fromBottom() const { return hasAncestorWith([](const Particle& p){ return p.genParticle()->status() == 2 && p.isHadron() && p.hasBottom(); }); } bool Particle::fromCharm() const { return hasAncestorWith([](const Particle& p){ return p.genParticle()->status() == 2 && p.isHadron() && p.hasCharm(); }); } bool Particle::fromHadron() const { return hasAncestorWith([](const Particle& p){ return p.genParticle()->status() == 2 && p.isHadron(); }); } bool Particle::fromTau(bool prompt_taus_only) const { if (prompt_taus_only && fromHadron()) return false; return hasAncestorWith([](const Particle& p){ return p.genParticle()->status() == 2 && isTau(p); }); } bool Particle::fromHadronicTau(bool prompt_taus_only) const { return hasAncestorWith([&](const Particle& p){ return p.genParticle()->status() == 2 && isTau(p) && (!prompt_taus_only || p.isPrompt()) && hasHadronicDecay(p); }); } bool Particle::isDirect(bool allow_from_direct_tau, bool 
allow_from_direct_mu) const { if (genParticle() == nullptr) return false; // no HepMC connection, give up! Throw UserError exception? ConstGenVertexPtr prodVtx = genParticle()->production_vertex(); if (prodVtx == nullptr) return false; // orphaned particle, has to be assume false /// @todo Would be nicer to be able to write this recursively up the chain, exiting as soon as a parton or string/cluster is seen - for (ConstGenParticlePtr ancestor : Rivet::particles(prodVtx, Relatives::ANCESTORS)) { + for (ConstGenParticlePtr ancestor : HepMCUtils::particles(prodVtx, Relatives::ANCESTORS)) { const PdgId pid = ancestor->pdg_id(); if (ancestor->status() != 2) continue; // no non-standard statuses or beams to be used in decision making bool isBeam = false; - for(ConstGenParticlePtr b: Rivet::beams(prodVtx->parent_event())){ + for(ConstGenParticlePtr b: HepMCUtils::beams(prodVtx->parent_event())){ if(ancestor == b){ isBeam = true; break; } } if(isBeam) continue; // PYTHIA6 uses status 2 for beams, I think... (sigh) if (PID::isParton(pid)) continue; // PYTHIA6 also uses status 2 for some partons, I think... (sigh) if (PID::isHadron(pid)) return false; // direct particles can't be from hadron decays if (abs(pid) == PID::TAU && abspid() != PID::TAU && !allow_from_direct_tau) return false; // allow or ban particles from tau decays (permitting tau copies) if (abs(pid) == PID::MUON && abspid() != PID::MUON && !allow_from_direct_mu) return false; // allow or ban particles from muon decays (permitting muon copies) } return true; } /////////////////////// /// Allow a Particle to be passed to an ostream. std::ostream& operator << (std::ostream& os, const Particle& p) { string pname; try { pname = PID::toParticleName(p.pid()); } catch (...) { pname = "PID=" + to_str(p.pid()); } os << "Particle<" << pname << " @ " << p.momentum()/GeV << " GeV>"; return os; } /// Allow ParticlePair to be passed to an ostream. std::ostream& operator << (std::ostream& os, const ParticlePair& pp) { os << "[" << pp.first << ", " << pp.second << "]"; return os; } } diff --git a/src/Core/Run.cc b/src/Core/Run.cc --- a/src/Core/Run.cc +++ b/src/Core/Run.cc @@ -1,180 +1,173 @@ // -*- C++ -*- #include "Rivet/Run.hh" #include "Rivet/AnalysisHandler.hh" -//#include "HepMC/IO_GenEvent.h" #include "Rivet/Math/MathUtils.hh" #include "Rivet/Tools/RivetPaths.hh" /// @todo reinstate zlib once HepMC3 stream reading is ok -//#include "zstr/zstr.hpp" +#include "zstr/zstr.hpp" #include namespace Rivet { Run::Run(AnalysisHandler& ah) : _ah(ah), _fileweight(1.0), _xs(NAN) { } Run::~Run() { } Run& Run::setCrossSection(const double xs) { _xs = xs; return *this; } double Run::crossSection() const { return _ah.crossSection(); } Run& Run::setListAnalyses(const bool dolist) { _listAnalyses = dolist; return *this; } // Fill event and check for a bad read state bool Run::readEvent() { /// @todo Clear rather than new the GenEvent object per-event? _evt.reset(new GenEvent()); - _hepmcReader->read_event(*_evt); - if(_hepmcReader->failed()){ + if(!HepMCUtils::readEvent(_hepmcReader, _evt)){ Log::getLog("Rivet.Run") << Log::DEBUG << "Read failed. End of file?" 
<< endl; return false; } // Rescale event weights by file-level weight, if scaling is non-trivial if (!fuzzyEquals(_fileweight, 1.0)) { for (size_t i = 0; i < (size_t) _evt->weights().size(); ++i) { _evt->weights()[i] *= _fileweight; } } return true; } /* // Fill event and check for a bad read state --- to skip, maybe HEPMC3 will have a better way bool Run::skipEvent() { if (_io->rdstate() != 0 || !_io->fill_next_event(_evt.get()) ) { Log::getLog("Rivet.Run") << Log::DEBUG << "Read failed. End of file?" << endl; return false; } return true; } */ bool Run::openFile(const std::string& evtfile, double weight) { // Set current weight-scaling member _fileweight = weight; // Set up HepMC input reader objects if (evtfile == "-") { - /// @todo No way of knowing with stdin whether the stream is HepMC2 or HepMC3. Assume HepMC2 for now? - _hepmcReader = std::make_shared(std::cin); + _hepmcReader = HepMCUtils::makeReader(std::cin); } else { - _hepmcReader = RivetHepMC::ReaderFactory::make_reader(evtfile); - /* if (!fileexists(evtfile)) throw Error("Event file '" + evtfile + "' not found"); #ifdef HAVE_LIBZ // NB. zstr auto-detects if file is deflated or plain-text _istr.reset(new zstr::ifstream(evtfile.c_str())); #else _istr.reset(new std::fstream(evtfile.c_str(), std::ios::in)); #endif - _io.reset(new HepMC::IO_GenEvent(*_istr)); - */ + _hepmcReader = HepMCUtils::makeReader(*_istr); + } - if (_hepmcReader->failed()) { + if (_hepmcReader == nullptr) { Log::getLog("Rivet.Run") << Log::ERROR << "Read error on file " << evtfile << endl; return false; } return true; } bool Run::init(const std::string& evtfile, double weight) { if (!openFile(evtfile, weight)) return false; // Read first event to define run conditions bool ok = readEvent(); if (!ok) return false; - if(particles(_evt).size() == 0){ - - /* - #if HEPMC_VERSION_CODE >= 3000000 - if (_evt->particles().empty()) { - #else - if (_evt->particles_size() == 0) { - #endif */ + if(HepMCUtils::particles(_evt).size() == 0){ Log::getLog("Rivet.Run") << Log::ERROR << "Empty first event." << endl; return false; } // Initialise AnalysisHandler with beam information from first event _ah.init(*_evt); // Set cross-section from command line if (!std::isnan(_xs)) { Log::getLog("Rivet.Run") << Log::DEBUG << "Setting user cross-section = " << _xs << " pb" << endl; _ah.setCrossSection(_xs); } // List the chosen & compatible analyses if requested if (_listAnalyses) { for (const std::string& ana : _ah.analysisNames()) { cout << ana << endl; } } return true; } bool Run::processEvent() { // Set cross-section if found in event and not from command line - #ifdef HEPMC_HAS_CROSS_SECTION + + #if defined ENABLE_HEPMC_3 if (std::isnan(_xs) && _evt->cross_section()) { - #if HEPMC_VERSION_CODE >= 3000000 - const double xs = _evt->cross_section()->cross_section; ///< in pb - #else + const double xs = _evt->cross_section()->xsec(); ///< in pb + Log::getLog("Rivet.Run") + << Log::DEBUG << "Setting cross-section = " << xs << " pb" << endl; + _ah.setCrossSection(xs); + } + #elif defined HEPMC_HAS_CROSS_SECTION + if (std::isnan(_xs) && _evt->cross_section()) { const double xs = _evt->cross_section()->cross_section(); ///< in pb - #endif Log::getLog("Rivet.Run") - << Log::DEBUG << "Setting cross-section = " << xs << " pb" << endl; + << Log::DEBUG << "Setting cross-section = " << xs << " pb" << endl; _ah.setCrossSection(xs); } #endif + // Complain about absence of cross-section if required! 
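(Illustration, not part of the Run.cc hunk.) The new HepMCUtils::makeReader/readEvent wrapper plus the two cross-section branches above are enough for a small version-independent event loop. A minimal standalone sketch, assuming only the helpers declared in RivetHepMC.hh and the cross-section calls that appear in the hunks above; the program name and its single file argument are invented for the example:

#include "Rivet/Tools/RivetHepMC.hh"
#include <cmath>
#include <fstream>
#include <iostream>
#include <memory>

int main(int argc, char** argv) {
  if (argc < 2) { std::cerr << "Usage: readtest <hepmc-file>" << std::endl; return 1; }
  std::ifstream istr(argv[1]);
  // Picks whichever HepMC backend was chosen at build time
  auto reader = Rivet::HepMCUtils::makeReader(istr);
  if (!reader) { std::cerr << "Could not make a reader for " << argv[1] << std::endl; return 1; }

  size_t nevt = 0;
  double xs = NAN;
  while (true) {
    auto evt = std::make_shared<Rivet::GenEvent>();
    if (!Rivet::HepMCUtils::readEvent(reader, evt)) break;   // read failure or end of file
    ++nevt;
    // Cross-section, if the generator attached one (accessors differ between versions)
    #if defined ENABLE_HEPMC_3
    if (evt->cross_section()) {
      Rivet::RivetHepMC::GenCrossSection gcs = *(evt->cross_section());  // copy: accessors aren't const
      xs = gcs.xsec();
    }
    #elif defined HEPMC_HAS_CROSS_SECTION
    if (evt->cross_section()) xs = evt->cross_section()->cross_section();
    #endif
  }
  std::cout << "Read " << nevt << " events; last cross-section " << xs << " pb" << std::endl;
  return 0;
}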
if (_ah.needCrossSection() && !_ah.hasCrossSection()) { Log::getLog("Rivet.Run") << Log::ERROR << "Total cross-section needed for at least one of the analyses. " << "Please set it (on the command line with '-x' if using the 'rivet' program)" << endl; return false; } // Analyze event _ah.analyze(*_evt); return true; } bool Run::finalize() { _evt.reset(); /// @todo reinstate for HepMC3 //_istr.reset(); if (!std::isnan(_xs)) _ah.setCrossSection(_xs); _ah.finalize(); return true; } } diff --git a/src/Projections/Beam.cc b/src/Projections/Beam.cc --- a/src/Projections/Beam.cc +++ b/src/Projections/Beam.cc @@ -1,166 +1,147 @@ // -*- C++ -*- #include "Rivet/Config/RivetCommon.hh" #include "Rivet/Tools/Logging.hh" #include "Rivet/Projections/Beam.hh" namespace Rivet { ParticlePair beams(const Event& e) { - #ifdef ENABLE_HEPMC_3 // First try the official way: ask the GenEvent for the beam pointers - assert(e.genEvent()->particles().size() >= 2); - vector beams = e.genEvent()->beams(); + assert(HepMCUtils::particles_size(e.genEvent()) >= 2); + vector beams = HepMCUtils::beams(e.genEvent()); if (beams.size() == 2 && beams[0] && beams[1]) { return ParticlePair{beams[0], beams[1]}; } // Ok, that failed: let's find the status = 4 particles by hand const vector pstat4s = e.allParticles([](const Particle& p){ return p.genParticle()->status() == 4; }); if (pstat4s.size() >= 2) { return ParticlePair{pstat4s[0], pstat4s[1]}; } /// There are no barcodes in HepMC3 /// @todo implement some other fallback rubric? - /* - // Hmm, this sucks. Last guess is that barcodes 1 and 2 are the beams - if (e.genEvent()->barcode_to_particle(1) && e.genEvent()->barcode_to_particle(2)) { - return ParticlePair{e.genEvent()->barcode_to_particle(1), e.genEvent()->barcode_to_particle(2)}; - } - */ - #else - - // First try the official way: ask the GenEvent for the beam pointers - assert(e.genEvent()->particles_size() >= 2); - if (e.genEvent()->valid_beam_particles()) { - vector beams = e.genEvent()->beam_particles(); - assert(beams.size()==2 && beams.at(0) && beams.at(1)); - return ParticlePair{beams.at(0), beams.at(1)}; - } - // Ok, that failed: let's find the status = 4 particles by hand - const vector pstat4s = e.allParticles([](const Particle& p){ return p.genParticle()->status() == 4; }); - if (pstat4s.size() >= 2) { - return ParticlePair{pstat4s[0], pstat4s[1]}; - } + #ifndef ENABLE_HEPMC_3 + // Hmm, this sucks. 
Last guess is that barcodes 1 and 2 are the beams if (e.genEvent()->barcode_to_particle(1) && e.genEvent()->barcode_to_particle(2)) { return ParticlePair{e.genEvent()->barcode_to_particle(1), e.genEvent()->barcode_to_particle(2)}; } #endif // Give up: return null beams return ParticlePair{Particle(), Particle()}; } double sqrtS(const FourMomentum& pa, const FourMomentum& pb) { const double mom1 = pa.pz(); const double e1 = pa.E(); const double mom2 = pb.pz(); const double e2 = pb.E(); double sqrts = sqrt( sqr(e1+e2) - sqr(mom1+mom2) ); return sqrts; } double asqrtS(const FourMomentum& pa, const FourMomentum& pb) { const static double MNUCLEON = 939*MeV; //< nominal nucleon mass return sqrtS(pa/(pa.mass()/MNUCLEON), pb/(pb.mass()/MNUCLEON)); } double asqrtS(const ParticlePair& beams) { return sqrtS(beams.first.mom()/nuclA(beams.first), beams.second.mom()/nuclA(beams.second)); } FourMomentum acmsBoostVec(const FourMomentum& pa, const FourMomentum& pb) { const static double MNUCLEON = 939*MeV; //< nominal nucleon mass const double na = pa.mass()/MNUCLEON, nb = pb.mass()/MNUCLEON; return cmsBoostVec(pa/na, pb/nb); } FourMomentum acmsBoostVec(const ParticlePair& beams) { return cmsBoostVec(beams.first.mom()/nuclA(beams.first), beams.second.mom()/nuclA(beams.second)); } Vector3 cmsBetaVec(const FourMomentum& pa, const FourMomentum& pb) { // const Vector3 rtn = (pa.p3() + pb.p3()) / (pa.E() + pb.E()); const Vector3 rtn = (pa + pb).betaVec(); return rtn; } Vector3 acmsBetaVec(const FourMomentum& pa, const FourMomentum& pb) { const static double MNUCLEON = 939*MeV; //< nominal nucleon mass const Vector3 rtn = cmsBetaVec(pa/(pa.mass()/MNUCLEON), pb/(pb.mass()/MNUCLEON)); return rtn; } Vector3 acmsBetaVec(const ParticlePair& beams) { const Vector3 rtn = cmsBetaVec(beams.first.mom()/nuclA(beams.first), beams.second.mom()/nuclA(beams.second)); return rtn; } Vector3 cmsGammaVec(const FourMomentum& pa, const FourMomentum& pb) { // const Vector3 rtn = (pa + pb).gammaVec(); const double gamma = (pa.E() + pb.E()) / sqrt( sqr(pa.mass()) + sqr(pb.mass()) + 2*(pa.E()*pb.E() - dot(pa.p3(), pb.p3())) ); const Vector3 rtn = gamma * (pa.p3() + pb.p3()).unit(); return rtn; } Vector3 acmsGammaVec(const FourMomentum& pa, const FourMomentum& pb) { const static double MNUCLEON = 939*MeV; //< nominal nucleon mass Vector3 rtn = cmsGammaVec(pa/(pa.mass()/MNUCLEON), pb/(pb.mass()/MNUCLEON)); return rtn; } Vector3 acmsGammaVec(const ParticlePair& beams) { Vector3 rtn = cmsGammaVec(beams.first.mom()/nuclA(beams.first), beams.second.mom()/nuclA(beams.second)); return rtn; } LorentzTransform cmsTransform(const FourMomentum& pa, const FourMomentum& pb) { /// @todo Automatically choose to construct from beta or gamma according to which is more precise? return LorentzTransform::mkFrameTransformFromGamma(cmsGammaVec(pa, pb)); } LorentzTransform acmsTransform(const FourMomentum& pa, const FourMomentum& pb) { /// @todo Automatically choose to construct from beta or gamma according to which is more precise? 
    return LorentzTransform::mkFrameTransformFromGamma(acmsGammaVec(pa, pb));
  }

  LorentzTransform acmsTransform(const ParticlePair& beams) {
    return LorentzTransform::mkFrameTransformFromGamma(acmsGammaVec(beams));
  }


  /////////////////////////////////////////////


  void Beam::project(const Event& e) {
    _theBeams = Rivet::beams(e);
    MSG_DEBUG("Beam particles = " << _theBeams << " => sqrt(s) = " << sqrtS()/GeV << " GeV");
  }

  FourVector Beam::pv() const {
    RivetHepMC::FourVector v1, v2;
    const ParticlePair bpair = beams();
    if (bpair.first.genParticle() && bpair.first.genParticle()->end_vertex())
      v1 = bpair.first.genParticle()->end_vertex()->position();
    if (bpair.second.genParticle() && bpair.second.genParticle()->end_vertex())
      v2 = bpair.second.genParticle()->end_vertex()->position();
    const FourVector rtn = (v1 == v2) ? FourVector(v1.t(), v1.x(), v1.y(), v1.z()) : FourVector();
    MSG_DEBUG("Beam PV 4-position = " << rtn);
    return rtn;
  }

}
diff --git a/src/Projections/FinalPartons.cc b/src/Projections/FinalPartons.cc
--- a/src/Projections/FinalPartons.cc
+++ b/src/Projections/FinalPartons.cc
@@ -1,42 +1,42 @@
// -*- C++ -*-

#include "Rivet/Projections/FinalPartons.hh"
#include "Rivet/Tools/RivetHepMC.hh"

namespace Rivet {

  bool FinalPartons::accept(const Particle& p) const {

    // Reject if *not* a parton
    if (!isParton(p)) return false;

    // Accept partons if they end on a standard hadronization vertex
    if (p.genParticle()->end_vertex() != nullptr && p.genParticle()->end_vertex()->id() == 5)
      return true;

    // Reject if p has a parton child.
    for (const Particle& c : p.children())
      if (isParton(c)) return false;

    // Reject if from a hadron or tau decay.
    if (p.fromDecay()) return false;

    return _cuts->accept(p);
  }


  void FinalPartons::project(const Event& e) {
    _theParticles.clear();

-    for (ConstGenParticlePtr gp : Rivet::particles(e.genEvent())) {
+    for (ConstGenParticlePtr gp : HepMCUtils::particles(e.genEvent())) {
      if (!gp) continue;
      const Particle p(gp);
      if (accept(p)) _theParticles.push_back(p);
    }
  }

}
diff --git a/src/Projections/FinalState.cc b/src/Projections/FinalState.cc
--- a/src/Projections/FinalState.cc
+++ b/src/Projections/FinalState.cc
@@ -1,82 +1,82 @@
// -*- C++ -*-

#include "Rivet/Projections/FinalState.hh"

namespace Rivet {

  FinalState::FinalState(const Cut& c)
    : ParticleFinder(c)
  {
    setName("FinalState");
    const bool open = (c == Cuts::open());
    MSG_TRACE("Check for open FS conditions: " << std::boolalpha << open);
    if (!open) addProjection(FinalState(), "OpenFS");
  }


  /// @deprecated, keep for backwards compatibility for now.
  FinalState::FinalState(double mineta, double maxeta, double minpt) {
    setName("FinalState");
    const bool openpt = isZero(minpt);
    const bool openeta = (mineta <= -MAXDOUBLE && maxeta >= MAXDOUBLE);
    MSG_TRACE("Check for open FS conditions:" << std::boolalpha << " eta=" << openeta << ", pt=" << openpt);
    if (openpt && openeta) {
      _cuts = Cuts::open();
    } else {
      addProjection(FinalState(), "OpenFS");
      if (openeta) _cuts = (Cuts::pT >= minpt);
      else if ( openpt ) _cuts = Cuts::etaIn(mineta, maxeta);
      else _cuts = (Cuts::etaIn(mineta, maxeta) && Cuts::pT >= minpt);
    }
  }


  int FinalState::compare(const Projection& p) const {
    const FinalState& other = dynamic_cast<const FinalState&>(p);
    return _cuts == other._cuts ? EQUIVALENT : UNDEFINED;
  }


  void FinalState::project(const Event& e) {
    _theParticles.clear();

    // Handle "open FS" special case
    if (_cuts == Cuts::OPEN) {
      MSG_TRACE("Open FS processing: should only see this once per event (" << e.genEvent()->event_number() << ")");
-      for (ConstGenParticlePtr p : Rivet::particles(e.genEvent())) {
+      for (ConstGenParticlePtr p : HepMCUtils::particles(e.genEvent())) {
        if (p->status() == 1) {
          MSG_TRACE("FS GV = " << p->production_vertex()->position());
          _theParticles.push_back(Particle(p));
        }
      }
      return;
    }

    // If this is not itself the "open" FS, base the calculations on the open FS' results
    /// @todo In general, we'd like to calculate a restrictive FS based on the most restricted superset FS.
    const Particles allstable = applyProjection<FinalState>(e, "OpenFS").particles();
    for (const Particle& p : allstable) {
      const bool passed = accept(p);
      MSG_TRACE("Choosing: ID = " << p.pid()
                << ", pT = " << p.pT()/GeV << " GeV"
                << ", eta = " << p.eta()
                << ": result = " << std::boolalpha << passed);
      if (passed) _theParticles.push_back(p);
    }
    MSG_TRACE("Number of final-state particles = " << _theParticles.size());
  }


  /// Decide if a particle is to be accepted or not.
  bool FinalState::accept(const Particle& p) const {
    // Not having status == 1 should never happen!
    assert(p.genParticle() == NULL || p.genParticle()->status() == 1);
    return _cuts->accept(p);
  }

}
diff --git a/src/Projections/HeavyHadrons.cc b/src/Projections/HeavyHadrons.cc
--- a/src/Projections/HeavyHadrons.cc
+++ b/src/Projections/HeavyHadrons.cc
@@ -1,66 +1,66 @@
// -*- C++ -*-

#include "Rivet/Projections/HeavyHadrons.hh"

namespace Rivet {

  void HeavyHadrons::project(const Event& e) {
    _theParticles.clear();
    _theBs.clear();
    _theCs.clear();

    /// @todo Allow user to choose whether primary or final HF hadrons are to be returned
    const Particles& unstables = applyProjection<UnstableFinalState>(e, "UFS").particles();
    for (const Particle& p : unstables) {
      // Exclude non-b/c-hadrons
      if (!isHadron(p)) continue;
      if (!hasCharm(p) && !hasBottom(p)) continue;
      MSG_DEBUG("Found a heavy (b or c) unstable hadron: " << p.pid());

      // An unbound, or undecayed status 2 hadron: this is weird, but I guess is allowed...
      if (!p.genParticle() || !p.genParticle()->end_vertex()) {
        MSG_DEBUG("Heavy hadron " << p.pid() << " with no GenParticle or decay found");
        _theParticles.push_back(p);
        if (hasBottom(p)) _theBs.push_back(p);
        else _theCs.push_back(p);
        continue;
      }

      // There are descendants -- check them for b or c content
      /// @todo What about charm hadrons coming from bottom hadron decays?
-      vector<ConstGenParticlePtr> children = Rivet::particles(p.genParticle(), Relatives::CHILDREN);
+      vector<ConstGenParticlePtr> children = HepMCUtils::particles(p.genParticle(), Relatives::CHILDREN);
      if (hasBottom(p)) {
        bool has_b_child = false;
        for (ConstGenParticlePtr p2 : children) {
          if (PID::hasBottom(p2->pdg_id())) {
            has_b_child = true;
            break;
          }
        }
        if (!has_b_child) {
          _theParticles.push_back(p);
          _theBs.push_back(p);
        }
      } else if (hasCharm(p)) {
        bool has_c_child = false;
        for (ConstGenParticlePtr p2 : children) {
          if (PID::hasCharm(p2->pdg_id())) {
            has_c_child = true;
            break;
          }
        }
        if (!has_c_child) {
          _theParticles.push_back(p);
          _theCs.push_back(p);
        }
      }
    }

    MSG_DEBUG("Num b hadrons = " << _theBs.size()
              << ", num c hadrons = " << _theCs.size()
              << ", total = " << _theParticles.size());
  }

}
diff --git a/src/Projections/InitialQuarks.cc b/src/Projections/InitialQuarks.cc
--- a/src/Projections/InitialQuarks.cc
+++ b/src/Projections/InitialQuarks.cc
@@ -1,61 +1,61 @@
// -*- C++ -*-

#include "Rivet/Projections/InitialQuarks.hh"

namespace Rivet {

  int InitialQuarks::compare(const Projection& p) const {
    return EQUIVALENT;
  }


  void InitialQuarks::project(const Event& e) {
    _theParticles.clear();

-    for (ConstGenParticlePtr p : Rivet::particles(e.genEvent())) {
+    for (ConstGenParticlePtr p : HepMCUtils::particles(e.genEvent())) {
      ConstGenVertexPtr pv = p->production_vertex();
      ConstGenVertexPtr dv = p->end_vertex();
      const PdgId pid = abs(p->pdg_id());
      bool passed = inRange((long)pid, 1, 6);
      if (passed) {
        if (pv != 0) {
-          for (ConstGenParticlePtr pp : ::Rivet::particles(pv, Relatives::PARENTS)){
+          for (ConstGenParticlePtr pp : HepMCUtils::particles(pv, Relatives::PARENTS)){
            // Only accept if parent is electron or Z0
            const PdgId pid = abs(pp->pdg_id());
            passed = (pid == PID::ELECTRON || abs(pp->pdg_id()) == PID::ZBOSON || abs(pp->pdg_id()) == PID::GAMMA);
          }
        } else {
          passed = false;
        }
      }

      if (getLog().isActive(Log::TRACE)) {
        const int st = p->status();
        const double pT = p->momentum().perp();
        const double eta = p->momentum().eta();
        MSG_TRACE(std::boolalpha
                  << "ID = " << p->pdg_id() << ", status = " << st << ", pT = " << pT
                  << ", eta = " << eta << ": result = " << passed);
        if (pv != 0) {
-          for (ConstGenParticlePtr pp : ::Rivet::particles(pv, Relatives::PARENTS)) {
+          for (ConstGenParticlePtr pp : HepMCUtils::particles(pv, Relatives::PARENTS)) {
            MSG_TRACE(std::boolalpha << " parent ID = " << pp->pdg_id());
          }
        }
        if (dv != 0) {
-          for (ConstGenParticlePtr pp : ::Rivet::particles(dv, Relatives::CHILDREN)) {
+          for (ConstGenParticlePtr pp : HepMCUtils::particles(dv, Relatives::CHILDREN)) {
            MSG_TRACE(std::boolalpha << " child ID = " << pp->pdg_id());
          }
        }
      }

      if (passed) _theParticles.push_back(Particle(p));
    }
    MSG_DEBUG("Number of initial quarks = " << _theParticles.size());
    if (!_theParticles.empty()) {
      for (size_t i = 0; i < _theParticles.size(); i++) {
        MSG_DEBUG("Initial quark[" << i << "] = " << _theParticles[i].pid());
      }
    }
  }

}
diff --git a/src/Projections/InvMassFinalState.cc b/src/Projections/InvMassFinalState.cc
--- a/src/Projections/InvMassFinalState.cc
+++ b/src/Projections/InvMassFinalState.cc
@@ -1,185 +1,185 @@
// -*- C++ -*-

#include "Rivet/Projections/InvMassFinalState.hh"

namespace Rivet {

  InvMassFinalState::InvMassFinalState(const FinalState& fsp,
                                       const pair<PdgId, PdgId>& idpair, // pair of decay products
                                       double minmass, // min inv mass
                                       double maxmass, // max inv mass
                                       double masstarget)
    : _minmass(minmass), _maxmass(maxmass), _masstarget(masstarget), _useTransverseMass(false)
  {
    setName("InvMassFinalState");
    addProjection(fsp, "FS");
    _decayids.push_back(idpair);
  }

  InvMassFinalState::InvMassFinalState(const FinalState& fsp,
                                       const vector<pair<PdgId, PdgId> >& idpairs, // vector of pairs of decay products
                                       double minmass, // min inv mass
                                       double maxmass, // max inv mass
                                       double masstarget)
    : _decayids(idpairs), _minmass(minmass), _maxmass(maxmass), _masstarget(masstarget), _useTransverseMass(false)
  {
    setName("InvMassFinalState");
    addProjection(fsp, "FS");
  }


  InvMassFinalState::InvMassFinalState(const pair<PdgId, PdgId>& idpair, // pair of decay products
                                       double minmass, // min inv mass
                                       double maxmass, // max inv mass
                                       double masstarget)
    : _minmass(minmass), _maxmass(maxmass), _masstarget(masstarget), _useTransverseMass(false)
  {
    setName("InvMassFinalState");
    _decayids.push_back(idpair);
  }


  InvMassFinalState::InvMassFinalState(const vector<pair<PdgId, PdgId> >& idpairs, // vector of pairs of decay products
                                       double minmass, // min inv mass
                                       double maxmass, // max inv mass
                                       double masstarget)
    : _decayids(idpairs), _minmass(minmass), _maxmass(maxmass), _masstarget(masstarget), _useTransverseMass(false)
  {
    setName("InvMassFinalState");
  }


  int InvMassFinalState::compare(const Projection& p) const {
    // First compare the final states we are running on
    int fscmp = mkNamedPCmp(p, "FS");
    if (fscmp != EQUIVALENT) return fscmp;

    // Then compare the two as final states
    const InvMassFinalState& other = dynamic_cast<const InvMassFinalState&>(p);
    fscmp = FinalState::compare(other);
    if (fscmp != EQUIVALENT) return fscmp;

    // Compare the mass limits
    int masstypecmp = cmp(_useTransverseMass, other._useTransverseMass);
    if (masstypecmp != EQUIVALENT) return masstypecmp;
    int massllimcmp = cmp(_minmass, other._minmass);
    if (massllimcmp != EQUIVALENT) return massllimcmp;
    int masshlimcmp = cmp(_maxmass, other._maxmass);
    if (masshlimcmp != EQUIVALENT) return masshlimcmp;

    // Compare the decay species
    int decaycmp = cmp(_decayids, other._decayids);
    if (decaycmp != EQUIVALENT) return decaycmp;

    // Finally compare them as final states
    return FinalState::compare(other);
  }


  void InvMassFinalState::project(const Event& e) {
    const FinalState& fs = applyProjection<FinalState>(e, "FS");
    calc(fs.particles());
  }


  void InvMassFinalState::calc(const Particles& inparticles) {
    _theParticles.clear();
    _particlePairs.clear();

    // Containers for the particles of type specified in the pair
    vector<const Particle*> type1, type2;

    // Get all the particles of the type specified in the pair from the particle list
    for (const Particle& ipart : inparticles) {
      // Loop around possible particle pairs
      for (const PdgIdPair& ipair : _decayids) {
        if (ipart.pid() == ipair.first) {
          if (accept(ipart)) type1 += &ipart;
        } else if (ipart.pid() == ipair.second) {
          if (accept(ipart)) type2 += &ipart;
        }
      }
    }
    if (type1.empty() || type2.empty()) return;

    // Temporary container of selected particles iterators
    // Useful to compare iterators and avoid double occurrences of the same
    // particle in case it matches with more than another particle
    vector<const Particle*> tmp;

    // Now calculate the inv mass
    pair<double, pair<Particle, Particle> > closestPair;
    closestPair.first = 1e30;
    for (const Particle* i1 : type1) {
      for (const Particle* i2 : type2) {
        // Check this is actually a pair
        // (if more than one pair in vector particles can be unrelated)
        bool found = false;
        for (const PdgIdPair& ipair : _decayids) {
          if (i1->pid() == ipair.first && i2->pid() == ipair.second) {
            found = true;
            break;
          }
        }
        if (!found) continue;

        FourMomentum v4 = i1->momentum() + i2->momentum();
        if (v4.mass2() < 0) {
          MSG_DEBUG("Constructed negative inv mass2: skipping!");
          continue;
        }
        bool passedMassCut = false;
        if (_useTransverseMass) {
          passedMassCut = inRange(mT(i1->momentum(), i2->momentum()), _minmass, _maxmass);
        } else {
          passedMassCut = inRange(v4.mass(), _minmass, _maxmass);
        }
        if (passedMassCut) {
          MSG_DEBUG("Selecting particles with IDs " << i1->pid() << " & " << i2->pid()
                    << " and mass = " << v4.mass()/GeV << " GeV");
          // Store accepted particles, avoiding duplicates
          if (find(tmp.begin(), tmp.end(), i1) == tmp.end()) {
            tmp.push_back(i1);
            _theParticles += *i1;
          }
          if (find(tmp.begin(), tmp.end(), i2) == tmp.end()) {
            tmp.push_back(i2);
            _theParticles += *i2;
          }
          // Store accepted particle pairs
          _particlePairs += make_pair(*i1, *i2);
          if (_masstarget > 0.0) {
            double diff = fabs(v4.mass() - _masstarget);
            if (diff < closestPair.first) {
              closestPair.first = diff;
              closestPair.second = make_pair(*i1, *i2);
            }
          }
        }
      }
    }
    if (_masstarget > 0.0 && closestPair.first < 1e30) {
      _theParticles.clear();
      _particlePairs.clear();
      _theParticles += closestPair.second.first;
      _theParticles += closestPair.second.second;
      _particlePairs += closestPair.second;
    }

    MSG_DEBUG("Selected " << _theParticles.size() << " particles " << "(" << _particlePairs.size() << " pairs)");
    if (getLog().isActive(Log::TRACE)) {
      for (const Particle& p : _theParticles) {
-        MSG_TRACE("PDG ID: " << p.pid() << ", ID: " << p.genParticle()->id());
+        MSG_TRACE("PDG ID: " << p.pid() << ", ID: " << HepMCUtils::uniqueId(p.genParticle()));
      }
    }
  }


  /// Constituent pairs
  const std::vector<std::pair<Particle, Particle> >& InvMassFinalState::particlePairs() const {
    return _particlePairs;
  }

}
diff --git a/src/Projections/PrimaryHadrons.cc b/src/Projections/PrimaryHadrons.cc
--- a/src/Projections/PrimaryHadrons.cc
+++ b/src/Projections/PrimaryHadrons.cc
@@ -1,40 +1,40 @@
// -*- C++ -*-

#include "Rivet/Projections/PrimaryHadrons.hh"

namespace Rivet {

  void PrimaryHadrons::project(const Event& e) {
    _theParticles.clear();

    const Particles& unstables = applyProjection<UnstableFinalState>(e, "UFS").particles();
    for (const Particle& p : unstables) {
      // Exclude taus etc.
      if (!isHadron(p)) continue;

      // A spontaneously appearing hadron: this is weird, but I guess is allowed... and is primary
      if (!p.genParticle() || !p.genParticle()->production_vertex()) {
        MSG_DEBUG("Hadron " << p.pid() << " with no GenParticle or parent found: treating as primary");
        _theParticles.push_back(p);
        continue;
      }

      // There are ancestors -- check them for status=2 hadronic content
-      vector<ConstGenParticlePtr> ancestors = Rivet::particles(p.genParticle(), Relatives::ANCESTORS);
+      vector<ConstGenParticlePtr> ancestors = HepMCUtils::particles(p.genParticle(), Relatives::ANCESTORS);
      bool has_hadron_parent = false;
      for (ConstGenParticlePtr pa : ancestors) {
        if (pa->status() != 2) continue;
        /// @todo Are hadrons from tau decays "primary hadrons"? I guess not
        if (PID::isHadron(pa->pdg_id()) || abs(pa->pdg_id()) == PID::TAU) {
          has_hadron_parent = true;
          break;
        }
      }

      // If the particle seems to be a primary hadron, add it to the list
      if (!has_hadron_parent) _theParticles.push_back(p);
    }

    MSG_DEBUG("Number of primary hadrons = " << _theParticles.size());
  }

}
diff --git a/src/Projections/UnstableFinalState.cc b/src/Projections/UnstableFinalState.cc
--- a/src/Projections/UnstableFinalState.cc
+++ b/src/Projections/UnstableFinalState.cc
@@ -1,64 +1,64 @@
// -*- C++ -*-

#include "Rivet/Projections/UnstableFinalState.hh"

namespace Rivet {

  /// @todo Add a FIRST/LAST/ANY enum to specify the mode for uniquifying replica chains (default = LAST)

  void UnstableFinalState::project(const Event& e) {
    _theParticles.clear();

    /// @todo Replace PID veto list with PID:: functions?
    vector<PdgId> vetoIds;
    vetoIds += 22; // status 2 photons don't count!
    vetoIds += 110;
    vetoIds += 990;
    vetoIds += 9990; // Reggeons
    //vetoIds += 9902210; // something weird from PYTHIA6
-    for (ConstGenParticlePtr p : Rivet::particles(e.genEvent())) {
+    for (ConstGenParticlePtr p : HepMCUtils::particles(e.genEvent())) {
      const int st = p->status();
      bool passed =
        (st == 1 || (st == 2 && !contains(vetoIds, abs(p->pdg_id())))) &&
        !PID::isParton(p->pdg_id()) && ///< Always veto partons
        p->status() != 4 && // Filter beam particles
        _cuts->accept(p->momentum());

      // Avoid double counting by re-marking as unpassed if ID == (any) parent ID
      ConstGenVertexPtr pv = p->production_vertex();
      // Avoid double counting by re-marking as unpassed if ID == any child ID
      ConstGenVertexPtr dv = p->end_vertex();
      if (passed && dv) {
-        for (ConstGenParticlePtr pp : ::Rivet::particles(dv, Relatives::CHILDREN)) {
+        for (ConstGenParticlePtr pp : HepMCUtils::particles(dv, Relatives::CHILDREN)) {
          if (p->pdg_id() == pp->pdg_id() && pp->status() == 2) {
            passed = false;
            break;
          }
        }
      }

      // Add to output particles collection
      if (passed) _theParticles.push_back(Particle(p));

      // Log parents and children
      if (getLog().isActive(Log::TRACE)) {
        MSG_TRACE("ID = " << p->pdg_id()
                  << ", status = " << st
                  << ", pT = " << p->momentum().perp()
                  << ", eta = " << p->momentum().eta()
                  << ": result = " << std::boolalpha << passed);
        if (pv) {
-          for (ConstGenParticlePtr pp : ::Rivet::particles(pv, Relatives::PARENTS))
+          for (ConstGenParticlePtr pp : HepMCUtils::particles(pv, Relatives::PARENTS))
            MSG_TRACE("  parent ID = " << pp->pdg_id());
        }
        if (dv) {
-          for (ConstGenParticlePtr pp : Rivet::particles(dv, Relatives::CHILDREN))
+          for (ConstGenParticlePtr pp : HepMCUtils::particles(dv, Relatives::CHILDREN))
            MSG_TRACE("  child ID = " << pp->pdg_id());
        }
      }
    }
    MSG_DEBUG("Number of unstable final-state particles = " << _theParticles.size());
  }

}
diff --git a/src/Projections/VetoedFinalState.cc b/src/Projections/VetoedFinalState.cc
--- a/src/Projections/VetoedFinalState.cc
+++ b/src/Projections/VetoedFinalState.cc
@@ -1,144 +1,144 @@
// -*- C++ -*-

#include "Rivet/Projections/VetoedFinalState.hh"

namespace Rivet {

  int VetoedFinalState::compare(const Projection& p) const {
    const PCmp fscmp = mkNamedPCmp(p, "FS");
    if (fscmp != EQUIVALENT) return fscmp;
    /// @todo We can do better than this...
    if (_vetofsnames.size() != 0) return UNDEFINED;
    const VetoedFinalState& other = dynamic_cast<const VetoedFinalState&>(p);
    return \
      cmp(_vetoCodes, other._vetoCodes) ||
      cmp(_compositeVetoes, other._compositeVetoes) ||
      cmp(_parentVetoes, other._parentVetoes);
  }


  void VetoedFinalState::project(const Event& e) {
    const FinalState& fs = applyProjection<FinalState>(e, "FS");
    _theParticles.clear();
    _theParticles.reserve(fs.particles().size());

    // Veto by PID code
    if (getLog().isActive(Log::TRACE)) {
      /// @todo Should be PdgId, but _vetoCodes is currently a long
      vector<long> codes;
      for (auto& code : _vetoCodes) codes += code.first;
      MSG_TRACE("Veto codes = " << codes << " (" << codes.size() << ")");
    }
    if (_vetoCodes.empty()) {
      _theParticles = fs.particles();
    } else {
      // Test every particle against the codes
      for (const Particle& p : fs.particles()) {
        VetoDetails::iterator iter = _vetoCodes.find(p.pid());
        if (iter == _vetoCodes.end()) {
          // MSG_TRACE("Storing with PDG code = " << p.pid() << ", pT = " << p.pT());
          _theParticles.push_back(p);
        } else {
          // This particle code is listed as a possible veto... check pT.
          // Make sure that the pT range is sensible:
          BinaryCut ptrange = iter->second;
          assert(ptrange.first <= ptrange.second);
          stringstream rangess;
          if (ptrange.first < numeric_limits<double>::max()) rangess << ptrange.first;
          rangess << " - ";
          if (ptrange.second < numeric_limits<double>::max()) rangess << ptrange.second;
          MSG_TRACE("ID = " << p.pid() << ", pT range = " << rangess.str());
          stringstream debugline;
          debugline << "with PDG code = " << p.pid() << " pT = " << p.pT();
          if (p.pT() < ptrange.first || p.pT() > ptrange.second) {
            MSG_TRACE("Storing " << debugline.str());
            _theParticles.push_back(p);
          } else {
            MSG_TRACE("Vetoing " << debugline.str());
          }
        }
      }
    }

    /// @todo What is this block? Mass vetoing?
    set<Particles::iterator> toErase;
    for (set<int>::iterator nIt = _nCompositeDecays.begin();
         nIt != _nCompositeDecays.end() && !_theParticles.empty(); ++nIt) {
      map<set<Particles::iterator>, FourMomentum> oldMasses;
      map<set<Particles::iterator>, FourMomentum> newMasses;
      set<Particles::iterator> start;
      start.insert(_theParticles.begin());
      oldMasses.insert(pair<set<Particles::iterator>, FourMomentum>(start, _theParticles.begin()->momentum()));
      for (int nParts = 1; nParts != *nIt; ++nParts) {
        for (map<set<Particles::iterator>, FourMomentum>::iterator mIt = oldMasses.begin(); mIt != oldMasses.end(); ++mIt) {
          Particles::iterator pStart = *(mIt->first.rbegin());
          for (Particles::iterator pIt = pStart + 1; pIt != _theParticles.end(); ++pIt) {
            FourMomentum cMom = mIt->second + pIt->momentum();
            set<Particles::iterator> pList(mIt->first);
            pList.insert(pIt);
            newMasses[pList] = cMom;
          }
        }
        oldMasses = newMasses;
        newMasses.clear();
      }
      for (map<set<Particles::iterator>, FourMomentum>::iterator mIt = oldMasses.begin(); mIt != oldMasses.end(); ++mIt) {
        double mass2 = mIt->second.mass2();
        if (mass2 >= 0.0) {
          double mass = sqrt(mass2);
          for (CompositeVeto::iterator cIt = _compositeVetoes.lower_bound(*nIt); cIt != _compositeVetoes.upper_bound(*nIt); ++cIt) {
            BinaryCut massRange = cIt->second;
            if (mass < massRange.second && mass > massRange.first) {
              for (set<Particles::iterator>::iterator lIt = mIt->first.begin(); lIt != mIt->first.end(); ++lIt) {
                toErase.insert(*lIt);
              }
            }
          }
        }
      }
    }
    for (set<Particles::iterator>::reverse_iterator p = toErase.rbegin(); p != toErase.rend(); ++p) {
      _theParticles.erase(*p);
    }

    // Remove particles whose parents match entries in the parent veto PDG ID codes list
    /// @todo There must be a nice way to do this -- an STL algorithm (or we provide a nicer wrapper)
    for (PdgId vetoid : _parentVetoes) {
      for (Particles::iterator ip = _theParticles.begin(); ip != _theParticles.end(); ++ip) {
        ConstGenVertexPtr startVtx = ip->genParticle()->production_vertex();
        if (startVtx == NULL) continue;
        // Loop over parents and test their IDs
        /// @todo Could use any() here?
-        for (ConstGenParticlePtr parent : Rivet::particles(startVtx, Relatives::ANCESTORS)) {
+        for (ConstGenParticlePtr parent : HepMCUtils::particles(startVtx, Relatives::ANCESTORS)) {
          if (vetoid == parent->pdg_id()) {
            ip = _theParticles.erase(ip);
            --ip; //< Erase this _theParticles entry
            break;
          }
        }
      }
    }

    // Finally veto on the registered FSes
    for (const string& ifs : _vetofsnames) {
      const ParticleFinder& vfs = applyProjection<ParticleFinder>(e, ifs);
      const Particles& pvetos = vfs.rawParticles();
      ifilter_discard(_theParticles, [&](const Particle& pcheck) {
        if (pcheck.genParticle() == nullptr) return false;
        for (const Particle& pveto : pvetos) {
          if (pveto.genParticle() == nullptr) continue;
          if (pveto.genParticle() == pcheck.genParticle()) {
            MSG_TRACE("Vetoing: " << pcheck);
            return true;
          }
        }
        return false;
      });
    }

    MSG_DEBUG("FS vetoing from #particles = " << fs.size() << " -> " << _theParticles.size());
  }

}
diff --git a/src/Tools/RivetHepMC_2.cc b/src/Tools/RivetHepMC_2.cc
new file mode 100644
--- /dev/null
+++ b/src/Tools/RivetHepMC_2.cc
@@ -0,0 +1,116 @@
+// -*- C++ -*-
+
+#include "Rivet/Tools/RivetHepMC.hh"
+
+namespace Rivet{
+
+  const Relatives Relatives::PARENTS = HepMC::parents;
+  const Relatives Relatives::CHILDREN = HepMC::children;
+  const Relatives Relatives::ANCESTORS = HepMC::ancestors;
+  const Relatives Relatives::DESCENDANTS = HepMC::descendants;
+
+  namespace HepMCUtils{
+
+    std::vector<ConstGenParticlePtr> particles(ConstGenEventPtr ge){
+      std::vector<ConstGenParticlePtr> result;
+      for(GenEvent::particle_const_iterator pi = ge->particles_begin(); pi != ge->particles_end(); ++pi){
+        result.push_back(*pi);
+      }
+      return result;
+    }
+
+    std::vector<ConstGenParticlePtr> particles(const GenEvent *ge){
+      std::vector<ConstGenParticlePtr> result;
+      for(GenEvent::particle_const_iterator pi = ge->particles_begin(); pi != ge->particles_end(); ++pi){
+        result.push_back(*pi);
+      }
+      return result;
+    }
+
+    std::vector<ConstGenVertexPtr> vertices(ConstGenEventPtr ge){
+      std::vector<ConstGenVertexPtr> result;
+      for(GenEvent::vertex_const_iterator vi = ge->vertices_begin(); vi != ge->vertices_end(); ++vi){
+        result.push_back(*vi);
+      }
+      return result;
+    }
+
+    std::vector<ConstGenVertexPtr> vertices(const GenEvent *ge){
+      std::vector<ConstGenVertexPtr> result;
+      for(GenEvent::vertex_const_iterator vi = ge->vertices_begin(); vi != ge->vertices_end(); ++vi){
+        result.push_back(*vi);
+      }
+      return result;
+    }
+
+    std::vector<ConstGenParticlePtr> particles(ConstGenVertexPtr gv, const Relatives &relo){
+      std::vector<ConstGenParticlePtr> result;
+      /// @todo A particle_const_iterator on GenVertex would be nice...
+      // Before HepMC 2.7.0 there were no GV::particles_const_iterators and constness consistency was all screwed up :-/
+#if HEPMC_VERSION_CODE >= 2007000
+      for (HepMC::GenVertex::particle_iterator pi = gv->particles_begin(relo); pi != gv->particles_end(relo); ++pi)
+        result.push_back(*pi);
+#else
+      HepMC::GenVertex* gv2 = const_cast<HepMC::GenVertex*>(gv);
+      for (HepMC::GenVertex::particle_iterator pi = gv2->particles_begin(relo); pi != gv2->particles_end(relo); ++pi)
+        result.push_back(const_cast<ConstGenParticlePtr>(*pi));
+#endif
+      return result;
+    }
+
+    std::vector<ConstGenParticlePtr> particles(ConstGenParticlePtr gp, const Relatives &relo){
+      ConstGenVertexPtr vtx;
+
+      switch(relo){
+      case HepMC::parents:
+
+      case HepMC::ancestors:
+        vtx = gp->production_vertex();
+        break;
+
+      case HepMC::children:
+
+      case HepMC::descendants:
+        vtx = gp->end_vertex();
+        break;
+
+      default:
+
+        throw std::runtime_error("Not implemented!");
+        break;
+      }
+
+      return particles(vtx, relo);
+    }
+
+
+
+    int uniqueId(ConstGenParticlePtr gp){
+      return gp->barcode();
+    }
+
+    int particles_size(ConstGenEventPtr ge){
+      return ge->particles_size();
+    }
+
+    int particles_size(const GenEvent *ge){
+      return ge->particles_size();
+    }
+
+    std::vector<ConstGenParticlePtr> beams(const GenEvent *ge){
+      pair<HepMC::GenParticle*, HepMC::GenParticle*> beams = ge->beam_particles();
+      return std::vector<ConstGenParticlePtr>{beams.first, beams.second};
+    }
+
+    std::shared_ptr<RivetHepMC::Reader> makeReader(std::istream &istr){
+      return make_shared<RivetHepMC::Reader>(istr);
+    }
+
+    bool readEvent(std::shared_ptr<RivetHepMC::Reader> io, std::shared_ptr<GenEvent> evt){
+      if(io->rdstate() != 0) return false;
+      if(!io->fill_next_event(evt.get())) return false;
+      return true;
+    }
+
+  }
+}
diff --git a/src/Tools/RivetHepMC_3.cc b/src/Tools/RivetHepMC_3.cc
--- a/src/Tools/RivetHepMC_3.cc
+++ b/src/Tools/RivetHepMC_3.cc
@@ -1,41 +1,83 @@
// -*- C++ -*-

#include "Rivet/Tools/RivetHepMC.hh"
+#include "HepMC3/ReaderAscii.h"
+#include "HepMC3/ReaderAsciiHepMC2.h"

namespace Rivet{

-  std::vector<ConstGenParticlePtr> particles(ConstGenEventPtr ge){
-    return ge->particles();
+  namespace HepMCUtils{
+
+    std::vector<ConstGenParticlePtr> particles(ConstGenEventPtr ge){
+      return ge->particles();
+    }
+
+    std::vector<ConstGenParticlePtr> particles(const GenEvent *ge){
+      assert(ge);
+      return ge->particles();
+    }
+
+    std::vector<ConstGenVertexPtr> vertices(ConstGenEventPtr ge){
+      return ge->vertices();
+    }
+
+    std::vector<ConstGenVertexPtr> vertices(const GenEvent *ge){
+      assert(ge);
+      return ge->vertices();
+    }
+
+    std::vector<ConstGenParticlePtr> particles(ConstGenVertexPtr gv, const Relatives &relo){
+      return relo(gv);
+    }
+
+    std::vector<ConstGenParticlePtr> particles(ConstGenParticlePtr gp, const Relatives &relo){
+      return relo(gp);
+    }
+
+    int uniqueId(ConstGenParticlePtr gp){
+      return gp->id();
+    }
+
+    std::vector<ConstGenParticlePtr> beams(const GenEvent *ge){
+      return ge->beams();
+    }
+
+    bool readEvent(std::shared_ptr<RivetHepMC::Reader> io, std::shared_ptr<GenEvent> evt){
+      io->read_event(*evt);
+      return !io->failed();
+    }
+
+    shared_ptr<RivetHepMC::Reader> makeReader(std::istream &istr){
+      if(&istr == &std::cin) return make_shared<RivetHepMC::ReaderAscii>(istr);
+
+      istr.seekg(istr.beg);
+      std::string line1, line2;
+
+      while(line1.empty()){
+        std::getline(istr, line1);
+      }
+
+      while(line2.empty()){
+        std::getline(istr, line2);
+      }
+
+      istr.seekg(istr.beg);
+
+      // if this is absent it doesn't appear to be a HepMC file :(
+      if(line1.find("HepMC::Version") == std::string::npos) return nullptr;
+
+      shared_ptr<RivetHepMC::Reader> result;
+
+      // Looks like the new HepMC 3 format!
+      if(line2.find("HepMC::Asciiv3") != std::string::npos){
+        result = make_shared<RivetHepMC::ReaderAscii>(istr);
+      }else{
+        // assume old HepMC 2 format from here
+        result = make_shared<RivetHepMC::ReaderAsciiHepMC2>(istr);
+      }
+
+      if(result->failed()) result.reset((RivetHepMC::Reader*)nullptr);
+      return result;
+    }
  }
-
-  std::vector<ConstGenParticlePtr> particles(const GenEvent *ge){
-    assert(ge);
-    return ge->particles();
-  }
-
-  std::vector<ConstGenVertexPtr> vertices(ConstGenEventPtr ge){
-    return ge->vertices();
-  }
-
-  std::vector<ConstGenVertexPtr> vertices(const GenEvent *ge){
-    assert(ge);
-    return ge->vertices();
-  }
-
-  std::vector<ConstGenParticlePtr> particles(ConstGenVertexPtr gv, const Relatives &relo){
-    return relo(gv);
-  }
-
-  std::vector<ConstGenParticlePtr> particles(ConstGenParticlePtr gp, const Relatives &relo){
-    return relo(gp);
-  }
-
-  int uniqueId(ConstGenParticlePtr gp){
-    return gp->id();
-  }
-
-  std::vector<ConstGenParticlePtr> beams(const GenEvent *ge){
-    return ge->beams();
-  }
-}
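
Minimal usage sketch (not part of the patch above): it shows how the HepMCUtils helpers and Relatives constants implemented in RivetHepMC_2.cc / RivetHepMC_3.cc are meant to be driven from an event loop. Only makeReader(), readEvent(), particles(), uniqueId() and Relatives::CHILDREN come from the diff; the main() scaffolding, the RivetHepMC::GenEvent spelling and the use of std::cin are illustrative assumptions.

    // Usage sketch only -- assumes the declarations in Rivet/Tools/RivetHepMC.hh
    // match the implementations added in this changeset.
    #include "Rivet/Tools/RivetHepMC.hh"
    #include <iostream>
    #include <memory>

    int main() {
      using namespace Rivet;

      // makeReader() hands back the HepMC2 IO wrapper or a HepMC3 Reader,
      // depending on which back-end Rivet was configured with.
      std::shared_ptr<RivetHepMC::Reader> reader = HepMCUtils::makeReader(std::cin);
      auto evt = std::make_shared<RivetHepMC::GenEvent>();  // assumed GenEvent alias

      while (reader && HepMCUtils::readEvent(reader, evt)) {
        // Version-agnostic particle loop instead of HepMC2 iterators
        // or HepMC3 member functions.
        for (ConstGenParticlePtr p : HepMCUtils::particles(evt.get())) {
          if (!p->end_vertex()) continue;  // only decayed particles have children
          // Relatives::CHILDREN is an iterator_range under HepMC2 and a
          // HepMC3::Relatives under HepMC3; the helper hides the difference.
          for (ConstGenParticlePtr c : HepMCUtils::particles(p, Relatives::CHILDREN)) {
            std::cout << HepMCUtils::uniqueId(p) << " -> " << HepMCUtils::uniqueId(c) << "\n";
          }
        }
      }
      return 0;
    }

Keeping generator-level access behind HepMCUtils is what allows the projection changes earlier in this patch to be one-line substitutions of Rivet::particles() by HepMCUtils::particles().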