diff --git a/ChangeLog b/ChangeLog
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,7010 +1,7026 @@
+2019-06-09 Christian Gutschow
+
+ * Removing all annotations from histos when booked via ref data, apart from Path
+
+2019-06-08 Christian Gutschow
+
+ * Commenting out unused function in ChargedFinalState.cc
+ * removing unused member variable in Spherocity.hh
+ * Clean up logic around CmpState
+ * Implement skipWeights flag
+ * Make rivet-mkanalysis more intuitive
+ * Interface change: remove title/axis-label options from book methods
+ * fix Py2/Py3 encoding issue in bin/rivet
+ * making fjcontrib a dependency of Rivet
+ * remove regex includes and re-write in terms of stringstream
+
2019-06-05 Peter Richardson

 * L3_2004_I652683: add missing plots and sync yoda; still missing n charged as function of CMS energy, and jet plots
 * PLUTO_1980_I154270: clean up to prepare for yoda sync
 * CDF_2015_I1388868: changes to allow yoda sync; histograms renumbered and systematic error added to y values
 * sync yoda for CDF analyses where y errors changed due to better calc in HEPData

2019-06-05 Przemyslaw Karczmarczyk

 * Adding more heavy-ion beam particles: DEUTERON, ALUMINIUM, COPPER, XENON, URANIUM

2019-06-04 Peter Richardson

 * sync yoda for BELLE_2015_I1397632, two hists change label + final bin (changeset: 7157:2482c97f5ef8)
 * update BABAR_2007_S6895344.yoda with improved error calc
 * sync yoda for H1_1994_S2919893.yoda, only difference is handling of non-existent bins, better in new version
 * sync yoda for OPAL_2000_S4418603.yoda, changes only in histos not used by rivet
 * sync yoda for OPAL_1997_S3608263, missing histos in rivet copy
 * sync yoda for OPAL_1997_S3396100, trivial difference: some asymmetric x bins in new output but bin limits the same
 * remove null bins from yoda to allow sync with hepdata in CLEO_2004_S5809304.yoda
 * add OPAL_1997_I421977 for sigma +/- production

2019-06-03 Przemyslaw Karczmarczyk

 * Adding new heavy-ion beam particles: GOLD, LEAD.
 * Switching already existing heavy-ion analyses to use the new beam types

2019-06-03 Peter Richardson

 * sync ALEPH_1996_S3196992.yoda with HEPdata, rounding issues in old version
 * sync ALEPH_2001_S4656318.yoda, differences in histos not used in analysis
 * sync DELPHI_2011_I890503.yoda, fixes y errors but still a zero-width bin to deal with
 * switch to using counters in OPAL_2002_S5361494, errors now calculated, and allows yoda sync with HEPdata
 * add DELPHI_2003_I620250 for event shapes below the Z pole
 * switch to using counters in DELPHI_2000_S4328825, errors now calculated, and allows partial yoda sync with HEPdata
 * update PLUTO_1980_I154270.yoda for new hepdata error calc
 * update JADE_1983_I190818 to handle zero bin widths and update the yoda from HEPData

2019-06-02 Peter Richardson

 * add DELPHI_2000_I524694 for sigma- and lambda(1520) production at LEP
 * clean up code in OPAL_1997_S3396100, fixed normalisation of sigma-(1385) distributions

2019-06-01 Peter Richardson

 * update yoda for ARGUS_1993_S2653028, improved y errors, still zero-bin-width issues
 * update yoda for SND_2016_I1471515, ALEPH_1995_I382179, ALEPH_1999_S4193598
 * add analysis CLEO_1991_I29927, B* production in e+e-

2019-05-31 Christian Gutschow

 * Skip beam particles when trying to determine promptness (thanks to Marek Schoenherr for pointing this out)
 * Apple's CLANG compiler really doesn't like uninitialised variables... sorry Peter!

2019-05-29 Andy Buckley

 * Use integer rather than overloaded Projection comparison at the apply stage. Improves Smeared* projection reproducibility and may speed up the apply step. Thanks to Xiangyang Ju.
2019-05-29 Peter Richardson

 * Move BABAR and BELLE analyses to a separate plugin and add new, mainly radiative-return analyses
 * new pluginNovosibirsk with analyses from SND, CMD, etc.
 * new pluginOrsay with analyses from DM1 and DM2
 * new PETRA analyses, mainly for R
 * move existing Tristan analyses to a separate plugin and add new, mainly R analyses
 * new pluginFrascati with analyses from ADONE and DAPHNE
 * move older SLAC e+e- analyses to a separate plugin and add new analyses
 * move DORIS analyses to a separate plugin and add new ones
 * move CLEO analyses to new CESR plugin and add more analyses
 * new plugin with analyses from the BES experiment
 * new MC analyses for hadron/tau decays based on the Herwig internal ones
 * sync BABAR_2007_I729388, BABAR_2013_I1238276, BABAR_2005_S6181155 and all those with no difference in yodadiff; similarly for BELLE
 * fix number-of-tracks plots in ATLAS_2016_I1424838: normalise to 100, not 1, agreed by Chris G
 * merge two versions of OPAL_2004_I631361, now using the new options feature instead

2019-05-29 Leif Lönnblad

 * The GenEvents in Rivet can now be stripped of (most) light quarks and gluons to speed up searching.
 * Status lines in info files can now contain words like NOTREENTRY, NOHEPDATA, SINGLEWEIGHT, and UNPHYSICAL.
 * Can now be configured to use HepMC3 (3.1.1 or later).

2019-05-23 Andy Buckley

 * Fix Unicode and argparse bugs in bin/rivet. Thanks to Frank Siegert.

2019-05-21 Andy Buckley

 * Release version 2.7.2
 * Fixes to CMS_2016_I1487288 jet selection and normalisation. Thanks to Peter Richardson for highlighting the issues.

2019-05-20 Andy Buckley

 * Improve Vector3::azimuthalAngle() (also used by FourMomentum and FourVector) to use an exact rather than fuzzy is-zero check, to only check the perpendicular components, and to note that IEEE floating-point implementations of atan2 should already be 'safe', unless we decide that this function should throw or return NaN in the case of null or along-z vectors.
   Thanks to Louis Moureaux from CMS for the report and diagnosis.

2019-05-17 Andy Buckley

 * Tools/Utils.h: Add isum() functions, and mark other container functions as wanting a conversion to use std::function.

2019-05-16 Andy Buckley

 * Add super-generic (i)discardIfAny(particlebases, particlebases, bool(pb,pb)) functions.

2019-05-16 Christian Gutschow

 * Introduce TTMODE options for MC_TTBAR to pick the decay mode

2019-05-09 Christian Gutschow

 * Release version 2.7.1

2019-05-09 Christian Gutschow

 * add correlation information for those analyses that are compatible with HEPData
 * prevent Variations metadata from being copied
 * comment out redundant code in MeasureDefinition.cc to suppress a compiler warning

2019-05-08 Andy Buckley

 * Use a std::map rather than std::set to store analyses in AnalysisHandler, hopefully thereby fixing the analysis evaluation order and making multi-analysis runs with random numbers repeatable.

2019-05-07 Andy Buckley

 * Modify LorentzTransform::setBetaVec to behave better for boosts along the x, y, and z axes.
 * Attach bare-lepton GenParticle pointers to the output of DressedLeptons, to allow ancestry/decay navigation (request by Markus Seidel).
 * Add Particle::setGenParticle method and manual Particle constructors with the option to pass a GenParticle*.
 * Improve the DressedLeptons constructor to deconstruct incoming particles into bare and dressing components (report by Markus Seidel).
 * Remove GSL includes and the single remaining method (for now).

2019-05-06 Andy Buckley

 * First set of script conversions from optparse to argparse for Python3.
2019-05-06 Peter Richardson

 * Fix normalisation in ATLAS_2016_I1444991 after yoda update from HepData

2019-05-04 Christian Gutschow

 * default entry point of ATLAS_2014_I1319490 should be the average of the muon and electron channels
 * fix units in ATLAS_2014_I1319490 and apply Peter Richardson's plot updates

2019-05-04 Peter Richardson

 * Fix plot selection and labels in ATLAS_2016_I1419652 (due to yoda update from HepData)
 * Fix to mode switch and plot labels in ATLAS_2014_I1319490

2019-05-03 Peter Richardson

 * Fix info text and plot labels in ATLAS_2013_I1217863
 * Fix ZFinder test in ATLAS_2014_I1312627: size is now the number of Z's, not leptons

2019-05-02 Peter Richardson

 * Fix unicode in ATLAS_2014_I1312627, ATLAS_2016_I1424838, ATLAS_2016_I1426523

2019-05-02 Peter Richardson

 * Fix make-plots CustomMajorTicks and CustomMinorTicks

2019-04-30 Peter Richardson

 * Fix make-plots rendering when the last point is NaN-valued.

2019-04-30 Christian Gutschow

 * patch sign behaviour in deltaEta (MathUtils.hh)
 * add signed option for deltaRap

2019-04-29 Andy Buckley

 * Add tau mistag-efficiency functions for jet smearing.
 * Make RIVET_RANDOM_SEED have effect outside OpenMP builds, too.

2019-04-29 Christian Gutschow

 * remove IsRef from output Histo1D
 * fix HEPData IDs in hist booking for ATLAS_2014_I1315949

2019-04-26 Christian Gutschow

 * use jet size rather than 4-momentum size (?) to fill in CDF_1996_S3108457

2019-04-25 Christian Gutschow

 * fix mapping to ref data in STAR_2006_S6500200, put bin width manually into ref data file

2019-04-24 Christian Gutschow

 * Patch weird behaviour in ATLAS_2017_I1589844 and ATLAS_2017_I1609448 when a Cut argument is passed to the VetoedFinalState constructor. In version 2.6.2 or prior, this would select the particles passing the cut; after that, they are vetoed.

2019-04-24 Andy Buckley

 * Add CMS_2016_I1487288 (CMS WZ differential cross-sections at 8 TeV) from Shehu AbdusSalam, refined and extended to jet distributions by AB.
2019-04-24 Jon Butterworth

 * Fix logic of the call to VetoedFinalState in ATLAS_2017_I1609448, which was giving a wrong MET calculation.

2019-04-23 Chris Gutschow

 * Apply suggested corrections from Peter Richardson to ATLAS_2014_I1282441 (normalisation of the second plot is in millibarn, not microbarn), ATLAS_2015_I1387176 and ATLAS_2015_I1397635 (Unicode issues in the .info), ATLAS_2016_I1419070 (need more than 1 entry for a variance, not just 1), CMS_2010_PAS_QCD_10_024 and CMS_2012_PAS_FSQ_12_020 (class names need to be consistent with file names to find the .yoda ref data), CMS_2013_I1261026 (need to ensure more than 1 entry in histos to avoid LowStats issues from YODA), CMS_2014_I1298810, CMS_2016_I1421646, CMS_2016_I1454211 (the polymarker command in the yoda caused make-plots to fail if there is more than 1 MC line), CMS_2017_I1467451 (cross sections in fb), CMS_2017_I1605749 (data fixed in HepData so no need to divide by bin width anymore), ALEPH_1996_S3486095 (delete a lot of unused variables), ALEPH_2014_I1267648 (only worked if the final decay products were direct children of the tau, which is not the case in Herwig or Pythia -- rewritten to search down the decay tree), DELPHI_1996_S3430090, DELPHI_2000_S4328825, OPAL_1998_S374990, ARGUS_1993_S2669951, ALEPH_1996_S3486095 (switch ' to ^\prime in plots as it wasn't working as it was), ALEPH_2004_S5765862 (fix typo in plot label), BABAR_2003_I593379 and BELLE_2008_I786560 (switch to Rivet Particle from GenParticle); and add the 2D case to the Sphericity projection so transSphericity works.

2019-04-23 Andy Buckley

 * Add RIVET_CACHE_PROJECTIONS environment variable, for runtime disabling of the caching mechanism (for debugging and cross-checking).
 * Add optional only_physical and remove_duplicates args (passed through to the Particle methods) to hasParticleAncestorWith and hasParticleDescendantWith functors.
2019-04-08 Przemyslaw Karczmarczyk

 * Restored behaviour of getData function to return finalized plots by default

2019-04-03 Andy Buckley

 * Remove/protect against the last Unicode .encode() calls that broke Python3 compatibility.
 * Remove the last assert (for mod2() >= 0) from the Vector classes.

2019-04-01 Andy Buckley

 * Move inline projections *inside* analysis classes, since our aggregated build mechanism means there's no longer a unique unnamed namespace for each analysis .cc file.
 * Adding analyses ATLAS_2018_I1677498 (WWbb), ATLAS_2018_I1711114 (g -> bb), and ATLAS_2019_I1720442 (4-lepton lineshape).

2019-03-19 Andy Buckley

 * Extend more deltaPhi(x,y) functions with an optional bool for signed dPhi.
 * Add Particle::isSame and isSame(Particle, Particle) functions in lieu of an implicit (and wrong) Particle::operator==.
 * Reinstate Particle -> GenParticle as an explicit cast option.

2019-03-15 Leif Lönnblad

 * src/Projections/Sphericity.cc: Skip zero-momentum particles.
 * [Reader,Writer]CompressedAscii.[cc,hh]: Introduced the possibility to read and write HepMC (v3) files in a compressed and stripped format. Also included a simple program, bin/rivet-hepmz, to convert between this new format and other HepMC formats.

2019-03-15 Andy Buckley

 * Add CMS_2018_I1686000.cc single-top plus photon analysis (with an info-file warning about a fiducial-definition oddity)
 * Remove the Particle -> GenParticle implicit cast.
 * analyses/pluginCMS/CMS_2018_I1682495.cc: Fix normalisation of final plots (patch from Sal Rappoccio, spotted by Deepak Kar)

2019-02-27 Andy Buckley

 * bin/make-plots: Fix a few Py2/3 incompatibilities in make-plots. Thanks to Leif Gellersen for the tip-off.
 * analyses/Makefile.am: Adopt a more make-friendly plugin building rule. Thanks to Dima Konstantinov.

2019-02-24 Jon Butterworth

 * Fix multiple bugs in ATLAS_2017_I1514251 (Z+jets), including a problem with REF data having zero bin widths

2019-02-23 Jon Butterworth

 * Added first version of ZEUS_2012_I1116258 (dijet photoproduction).
   Currently works but unvalidated; need to check the exact recombination scheme with ZEUS contacts. Note ZEUS_2001_S4815815 also needs the recombination scheme checking. (Now done; changed to Et scheme after checking original code, 27/2/19.)

2019-02-20 Andy Buckley

 * Move UnstableParticles to a consistently-named header, with UnstableFinalState.hh retained for backward compatibility.
 * Improve/fix the UnstableParticles projection's Cut constructor argument to apply the cut on a Rivet::Particle rather than a HepMC::FourVector, meaning that PID cuts can now be used.

2019-02-17 Andy Buckley

 * Convert ATLAS_2013_I1217863 analysis variants to use the LMODE analysis option (from Jon Butterworth).

2019-02-20 James Monk

 * src/Tools/RivetHepMC_2.cc: implementation of HepMC helper funcs for HepMC2
 * Add HepMCUtils namespace for helper funcs
 * Relatives class spoofs the HepMC3::Relatives interface using HepMC2 iterator_ranges
 * Replace calls to particles_in() and particles_out() with HepMCUtils::particles
 * Fix pyext/setup.py.in for both HepMC2 and HepMC3
 * Configures with either --with-hepmc=/blah or --with-hepmc3=/blah
 * Compiles for either HepMC2 or HepMC3 (3.1 or higher)

2019-02-17 James Monk

 * Update many build paths to cope with new HepMC3 include and lib paths :(
 * Use the RivetHepMC namespace in place of HepMC:: -- RivetHepMC.hh should take care of it
 * configure.ac adds the appropriate define to CPPFLAGS
 * wrangle rivet-buildplugin to cope with HepMC3 paths
 * HepMC::Reader is still a bit fubar :(

2018-12-10 James Monk

 * Merge from default

2018-12-06 James Monk

 * RivetHepMC.hh: Much simplified.
   Use only the Const version of GenParticlePtr; only function declarations -- two separate implementation files for HepMC2 or 3
 * configure.ac: HepMC version dependence for building RivetHepMC_2.cc or RivetHepMC_3.cc
 * src/Makefile.am: HepMC version dependence
 * src/Tools/RivetHepMC_3.cc: implementations of funcs for HepMC v3
 * bin/rivet-nopy.cc: re-implement using the HepMC3 reader interface (may need a separate implementation for HepMC2)
 * Particle.hh, Event.cc, Jet.cc: const GenParticle replaced by ConstGenParticle
 * Projections Beam, DISLepton, FinalPartons, FinalState, HeavyHadrons, InitialQuarks, MergedFinalState, PrimaryHadrons, UnstableFinalState, VetoedFinalState: use ConstGenParticlePtr and vector consistently
 * pluginATLAS: Start to fix some uses of const GenParticlePtr

2019-02-15 Leif Lönnblad

 * Release 2.7.0

2019-02-12 Christian Bierlich

 * Introduced CentralityProjection, allowing an analysis to cut on percentiles of single-event quantities, preloaded from a user-generated or supplied (by experiment) histogram. Notably used for the centrality definition in heavy-ion analyses. The user specifies the centrality definition as a special analysis option called cent, e.g. "MyAnalysis:cent=GEN". Example usage: calibration analysis MC_Cent_pPb_Calib; analysis using that calibration: MC_Cent_pPb_Eta.
 * Introduced EventMixingFinalState to provide simple event-mixing functionality. Currently only works with unit event weights. Example usage: ALICE_2016_I1507157.
 * Introduced Correlators, a framework for calculating single-event correlators based on the generic framework (arXiv:1010.0233 and arXiv:1312.3572), and performing all event averages giving flow coefficients. Implemented as a new analysis base class. Example usage: ALICE_2016_I1419244.
 * Introduced a PrimaryParticle projection, replicating experimental definitions of stable particles through decay chains. Recommended for analyses which would otherwise have to require stable particles at generator level.
 * Introduced AliceCommon and AtlasCommon convenience tools, defining several triggers, primary-particle definitions and acceptances.
 * Contributed, validated analyses using the above features:
   ALICE_2010_I880049: Multiplicity at mid-rapidity, PbPb @ 2.76 TeV/nn.
   ALICE_2012_I1127497: Nuclear modification factor, PbPb @ 2.76 TeV/nn.
   ALICE_2012_I930312: Di-hadron correlations, PbPb @ 2.76 TeV/nn.
 * Contributed, unvalidated analyses using the above features:
   BRAHMS_2004_I647076: pi, K, p spectra as a function of rapidity, AuAu @ 200 GeV/nn.
   ALICE_2012_I1126966: pi, K, p spectra, PbPb @ 2.76 TeV/nn.
   ALICE_2013_I1225979: Charged multiplicity, PbPb @ 2.76 TeV/nn.
   ALICE_2014_I1243865: Multi-strange baryons, PbPb @ 2.76 TeV/nn.
   ALICE_2014_I1244523: Multi-strange baryons, pPb @ 5.02 TeV/nn.
   ALICE_2015_PBPBCentrality: Centrality calibration for PbPb. Note that the included 5.02 TeV/nn data is not well defined at particle level, and cannot be compared to experiment without full detector simulation.
   ALICE_2016_I1394676: Charged multiplicity, PbPb @ 2.76 TeV/nn.
   ALICE_2016_I1419244: Multiparticle correlations (flow) using the generic framework, PbPb @ 5.02 TeV/nn.
   ALICE_2016_I1471838: Multi-strange baryons, pp @ 7 TeV.
   ALICE_2016_I1507090: Charged multiplicity, PbPb @ 5.02 TeV/nn.
   ALICE_2016_I1507157: Angular correlations, pp @ 7 TeV.
   ATLAS_2015_I1386475: Charged multiplicity, pPb @ 5.02 TeV/nn.
   ATLAS_PBPB_CENTRALITY: Forward energy flow + centrality calibration, data not unfolded but well defined at particle level, PbPb @ 2.76 TeV/nn.
   ATLAS_2015_I1360290: Charged multiplicity + spectra, PbPb @ 2.76 TeV/nn.
   ATLAS_pPb_Calib: Forward energy flow + centrality calibration, data not unfolded but well defined at particle level, pPb @ 5.02 TeV/nn.
   STAR_2016_I1414638: Di-hadron correlations, AuAu @ 200 GeV/nn.
   CMS_2017_I1471287: Multiparticle correlations (flow) using the generic framework, pp @ 7 TeV.
 * Contributed analyses without data:
   ALICE_2015_PPCentrality: ALICE pp centrality (multiplicity classes) calibration.
   BRAHMS_2004_CENTRALITY: BRAHMS centrality calibration.
   STAR_BES_CALIB: STAR centrality calibration.
   MC_Cent_pPb_Calib: Example analysis, centrality calibration.
   MC_Cent_pPb_Eta: Example analysis, centrality usage.

2019-01-29 Andy Buckley

 * Add real CMS Run 1 and Run 2 MET-resolution smearing functions, based on the 8 TeV paper and 13 TeV PAS.

2019-01-07 Leif Lönnblad

 * Reintroduced the PXCONE option in FastJets, using a local version of the Fortran-based pxcone algorithm converted to C++ with f2c and slightly hacked to avoid a dependency on Fortran runtime libraries.
 * Introduced rivet-merge for statistically correct merging of YODA files produced by Rivet. Only works on analyses with reentrant finalize.
 * Introduced a --dump flag to the rivet script to periodically run finalize and write out the YODA file, for analyses with reentrant finalize.
 * Introduced reentrant finalize. Rivet now produces YODA files where all analysis objects are stored in two versions: one is prefixed by "/RAW" and gives the state of the object before finalize was run; the other is the properly finalized object. Analyses must be flagged "Reentrant: True" in the .info file to properly use this feature.
 * Added an option system. Analyses can now be added to Rivet with options. Adding e.g. "MyAnalysis:Opt1=val1:Opt2=val2" will create and add a MyAnalysis object, making the options available through the Analysis::getOption() function. Several objects of MyAnalysis with different options can be added in the same run. Allowed options must be specified in the MyAnalysis.info file.
 * Added several utilities for heavy ions.

2019-01-03 Andy Buckley

 * Add setting of cross-section error in AnalysisHandler and Run.

2018-12-21 Andy Buckley

 * Add hasNoTag jet-classification functor, to complement hasBTag and hasCTag.
2018-12-20 Andy Buckley

 * Rework VetoedFinalState to be based on Cuts, and to be constructible from Cut arguments.
 * Pre-emptively exclude all hadrons and partons from returning true in isDirect computations.
 * Cache the results of isDirect calculations on Particle (a bit awkwardly... roll on C++17).
 * Add a default-FinalState version of the DressedLeptons constructor.

2018-12-19 Andy Buckley

 * Add a FIRST/LAST enum switch for PartonicTops, to choose which top-quark clone to use.

2018-12-14 Andy Buckley

 * Add a FastJet clustering mode for DressedLeptons.

2018-12-10 Andy Buckley

 * Release 2.6.2
 * Info-file bugfixes for LHCF_2016_I1385877, from Eugenio Berti.
 * Update references in three CMS analysis .info files.

2018-12-05 Andy Buckley

 * Rework doc directory no-build-by-default to be compatible with 'make dist' packaging.
 * Add fjcontrib RecursiveTools to the Rivet/Tools/fjcontrib set.

2018-11-21 Andy Buckley

 * Add CMS_2018_I1653948, CMS_2018_I1682495, and CMS_2018_I1690148 analyses.
 * Add FastJet EnergyCorrelator and rejig the internal fjcontrib bundle a little.

2018-11-15 Andy Buckley

 * Merge ATLAS_2017_I1517194_MU and ATLAS_2018_I1656578.
 * Add signed-calculation optional bool argument on all deltaPhi functions.

2018-11-12 Andy Buckley

 * Fix CMS_2012_I1102908 efficiency calculation. Thanks to Anton Karneyeu!

2018-11-09 Andy Buckley

 * Remove doc dir from the default top-level make

2018-09-20 Andy Buckley

 * Use updated ATLAS Run 2 muon efficiencies.
 * Use proper ATLAS photon-efficiency functions for Runs 1 and 2, from arXiv:1606.01813 and ATL-PHYS-PUB-2016-014.

2018-08-31 Andy Buckley

 * Update embedded yaml-cpp to v0.6.0.

2018-08-29 Andy Buckley

 * Change default plugin-library installation to $prefix/lib/Rivet, since they're non-linkable.

2018-08-14 Andy Buckley

 * Version 2.6.1 release.

2018-08-08 Andy Buckley

 * Add a RIVET_RANDOM_SEED variable to fix the smearing random-seed engine for validation comparisons.
2018-07-19 Andy Buckley

 * Merge in ATLAS_2017_I1604029 (ttbar+gamma), ATLAS_2017_I1626105 (dileptonic ttbar), ATLAS_2017_I1644367 (triphotons), and ATLAS_2017_I1645627 (photon + jets).
 * Postpone Particles enhancement for now, since the required C++11 isn't supported on lxplus7 = CentOS7.
 * Add MC_DILEPTON analysis.

2018-07-10 Andy Buckley

 * Fix HepData tarball download handling: StringIO is *not* safe anymore

2018-07-08 Andy Buckley

 * Add LorentzTransform factory functions direct from FourMomentum, and operator()s

2018-06-20 Andy Buckley

 * Add FinalState(fs, cut) augmenting constructor, and PrevFS projection machinery. Validated for an abscharge > 0 cut.
 * Add hasProjection() methods to ProjectionHandler and ProjectionApplier.
 * Clone MC_GENERIC as MC_FSPARTICLES and deprecate the badly-named original.
 * Fix Spires -> Inspire ID for CMS_2017_I1518399.

2018-06-04 Andy Buckley

 * Fix installation of (In)DirectFinalState.hh

2018-05-31 Andy Buckley

 * Add init-time setting of a single weight-vector index from the RIVET_WEIGHT_INDEX environment variable. To be removed in v3, but really we should have done this years ago... and we don't know how long the handover will be.

2018-05-22 Neil Warrack

 * Include 'unphysical' photon parents in PartonicTops' veto of prompt leptons from photon conversions.

2018-05-20 Andy Buckley

 * Make Particles and Jets into actual specialisations of std::vector rather than typedefs, and update surrounding classes to use them. The specialisations can implicitly cast to vectors of FourMomentum (and maybe PseudoJet).

2018-05-18 Andy Buckley

 * Make CmpAnaHandle::operator() const, for GCC 8 (thanks to CMS)

2018-05-07 Andy Buckley

 * CMS_2016_I1421646.cc: Add patch from CMS to veto if the leading jets are outside |y| < 2.5, rather than only considering jets in that acceptance. Thanks to CMS and Markus Seidel.

2018-04-27 Andy Buckley

 * Tidy keywords and luminosity entries, and add both to BSM search .info files.
 * Add Luminosity_fb and Keywords placeholders in mkanalysis output.

2018-04-26 Andy Buckley

 * Add pairMass and pairPt functions.
 * Add (i)discardIfAnyDeltaRLess and (i)discardIfAnyDeltaPhiLess functions.
 * Add normalize() methods to Cutflow and Cutflows.
 * Add DirectFinalState and IndirectFinalState alias headers, for forward compatibility. 'Prompt' is confusing.

2018-04-24 Andy Buckley

 * Add initializer_list overload for binIndex. Needed for other util functions operating on vectors.
 * Fix function-signature bug in the isMT2 overload.
 * Add isSameSign, isOppSign, isSameFlav, isOppFlav, and isOSSF etc. functions on PIDs and Particles.

2018-03-27 Andy Buckley

 * Add RatioPlotLogY key to make-plots. Thanks to Antonin Maire.

2018-02-22 Andy Buckley

 * Adding boolean-operator syntactic sugar for composition of bool functors.
 * Fix copy & paste errors in the implementation of BoolJetAND, OR, NOT.

2018-02-01 Andy Buckley

 * Make the project() and compare() methods of projections public.
 * Fix a serious bug in the SmearedParticles and SmearedJets compare methods.
 * Add string representations and streamability to the Cut objects, for debugging.

2018-01-08 Andy Buckley

 * Add highlighted source to HTML analysis-metadata listings.

2017-12-21 Andy Buckley

 * Version 2.6.0 release.

2017-12-20 Andy Buckley

 * Typo fix in TOTEM_2012_I1220862 data -- thanks to Anton Karneyeu.

2017-12-19 Andy Buckley

 * Adding contributed analyses: 1 ALICE, 6 ATLAS, 1 CMS.
 * Fix bugged PID codes in MC_PRINTEVENT.

2017-12-13 Andy Buckley

 * Protect Run methods and the rivet script against being told to run from a missing or unreadable file.

2017-12-11 Andy Buckley

 * Replace manual event count & weight handling with a YODA Counter object.

2017-11-28 Andy Buckley

 * Providing neater & more YODA-consistent sumW and sumW2 methods on AnalysisHandler and Analysis.
 * Fix to Python version check for >= 2.7.10 (patch submitted to GNU)

2017-11-17 Andy Buckley

 * Various improvements to DISKinematics, DISLepton, and the ZEUS 2001 analysis.

2017-11-06 Andy Buckley

 * Extend AOPath regex to allow dots and underscores in weight names.

2017-10-27 Andy Buckley

 * Add energy to the list of cuts (both as Cuts::E and Cuts::energy)
 * Add missing pT (rather than Et) functions to SmearedMET, although they are just copies of the MET functions for now.

2017-10-09 Andy Buckley

 * Embed zstr and enable transparent reading of gzipped HepMC streams.

2017-10-03 Andy Buckley

 * Use the Lester MT2 bisection header, and expose a few more mT2 function signatures.

2017-09-26 Andy Buckley

 * Use generic YODA read and write functions -- enables zipped yoda.gz output.
 * Add ChargedLeptons enum and mode argument to the ZFinder and WFinder constructors, to allow control over whether the selected charged leptons are prompt. This is mostly cosmetic/for symmetry in the case of ZFinder, since the same can be achieved by passing a PromptFinalState as the fs argument, but for WFinder it's essential, since passing a prompt final state screws up the MET calculation. Both are slightly different in the treatment of the lepton dressing, although conventionally this is an area where only prompt photons are used.

2017-09-25 Andy Buckley

 * Add deltaR2 functions for squared distances.

2017-09-10 Andy Buckley

 * Add white backgrounds to the make-plots main and ratio plot frames.

2017-09-05 Andy Buckley

 * Add CMS_2016_PAS_TOP_15_006, jet multiplicity in lepton+jets ttbar at 8 TeV.
 * Add CMS_2017_I1467451, Higgs -> WW -> emu + MET in 8 TeV pp.
 * Add ATLAS_2017_I1609448, Z->ll + pTmiss analysis.
 * Add vectorMissingEt/Pt and vectorMET/MPT convenience methods to MissingMomentum.
 * Add ATLAS_2017_I1598613, J/psi + mu analysis.
 * Add CMS SUSY 0-lepton search CMS_2017_I1594909 (unofficial implementation, validated vs.
   published cutflows)

2017-09-04 Andy Buckley

 * Change license explicitly to GPLv3, cf. the MCnet3 agreement.
 * Add a better jet smearing-resolution parametrisation, based on GAMBIT code from Matthias Danninger.

2017-08-16 Andy Buckley

 * Protect make-plots against NaNs in error-band values (patch from Dmitry Kalinkin).

2017-07-20 Andy Buckley

 * Add sumPt, sumP4, sumP3 utility functions.
 * Record truth particles as constituents of SmearedParticles output.
 * Rename UnstableFinalState -> UnstableParticles, and convert ZFinder to be a general ParticleFinder rather than a FinalState.

2017-07-19 Andy Buckley

 * Add implicit cast from FourVector & FourMomentum to Vector3, and tweak the mT implementation.
 * Add rawParticles() to ParticleFinder, and update DressedLeptons, WFinder, ZFinder and VetoedFinalState to cope.
 * Add isCharged() and isChargedLepton() to Particle.
 * Add constituents() and rawConstituents() to Particle.
 * Add support for specifying bin edges as braced initializer lists rather than an explicit vector.

2017-07-18 Andy Buckley

 * Enable methods for booking of Histo2D and Profile2D from Scatter3D reference data.
 * Remove the IsRef annotation from autobooked histogram objects.

2017-07-17 Andy Buckley

 * Add pair-smearing to SmearedJets.

2017-07-08 Andy Buckley

 * Add Event::centrality(), for non-HepMC access to the generator value if one has been recorded -- otherwise -1.

2017-06-28 Andy Buckley

 * Split the smearing functions into separate header files for generic/momentum, Particle, Jet, and experiment-specific smearings & efficiencies.

2017-06-27 Andy Buckley

 * Add 'JetFinder' alias for JetAlg, by analogy with ParticleFinder.

2017-06-26 Andy Buckley

 * Convert SmearedParticles to a more general list of combined efficiency+smearing functions, with extra constructors and some variadic template cleverness to allow implicit conversions from single-operation eff and smearing functions.
   Yay for C++11 ;-) This work is based on a macro-based version of combined eff/smear functions by Karl Nordstrom -- thanks!
 * Add *EffFn, *SmearFn, and *EffSmearFn types to SmearingFunctions.hh.

2017-06-23 Andy Buckley

 * Add portable OpenMP enabling flags to AM_CXXFLAGS.

2017-06-22 Andy Buckley

 * Fix the smearing random-number seed and make it thread-specific if OpenMP is available (not yet in the build system).
 * Remove the UNUSED macro and find an alternative solution for the cases where it was used, since there was a risk of macro clashes with embedding codes.
 * Add a -o output-directory option to make-plots.
 * Vector4.hh: Add mT2(vec,vec) functions.

2017-06-21 Andy Buckley

 * Add a full set of in-range kinematics functors: ptInRange, (abs)etaInRange, (abs)phiInRange, deltaRInRange, deltaPhiInRange, deltaEtaInRange, deltaRapInRange.
 * Add a convenience JET_BTAG_EFFS functor with several constructors to handle mistag rates.
 * Add const efficiency functors operating on Particle, Jet, and FourMomentum.
 * Add const-efficiency constructor variants for SmearedParticles.

2017-06-21 Jon Butterworth

 * Fix normalisations in CMS_2016_I1454211.
 * Fix analysis name in ref histo paths for ATLAS_2017_I1591327.

2017-06-18 Andy Buckley

 * Move all standard plugin files into subdirs of src/Analyses, with some custom make rules driving rivet-buildplugin.

2017-06-18 David Grellscheid

 * Parallelise rivet-buildplugin, with source-file cat'ing and use of a temporary Makefile.

2017-06-18 Holger Schulz

 * Version 2.5.4 release!

2017-06-17 Holger Schulz

 * Fix 8 TeV DY (ATLAS_2016_I1467454): EL/MU bits were missing.
 * Add 13 TeV DY (ATLAS_2017_I1514251) and mark ATLAS_2015_CONF_2015_041 obsolete
 * Add missing install statement for ATLAS_2016_I1448301 yoda/plot/info, leading to a segfault

2017-06-09 Andy Buckley

 * Slight improvements to Particle constructors.
 * Improvement to the Beam projection: before falling back to barcodes 1 & 2, try a manual search for status=4 particles.
   Based on a patch from Andrii Verbytskyi.

2017-06-05 Andy Buckley

 * Add CMS_2016_I1430892: dilepton-channel ttbar charge-asymmetry analysis.
 * Add CMS_2016_I1413748: dilepton-channel ttbar spin correlations and polarisation analysis.
 * Add CMS_2017_I1518399: leading-jet mass for boosted top quarks at 8 TeV.
 * Add convenience constructors for the ChargedLeptons projection.

2017-06-03 Andy Buckley

 * Add FinalState and Cut (optional) constructor arguments and usage to DISFinalState. Thanks to Andrii Verbytskyi for the idea and initial patch.

2017-05-23 Andy Buckley

 * Add ATLAS_2016_I1448301, Z/gamma cross-section measurement at 8 TeV.
 * Add ATLAS_2016_I1426515, WW production at 8 TeV.

2017-05-19 Holger Schulz

 * Add BELLE measurement of semileptonic B0bar -> D*+ ell nu decays, BELLE_2017_I1512299. I took the liberty of correcting the data, in the sense that I take the bin widths into account in the normalisation. This is a nice analysis, as it probes the hadronic and the leptonic side of the decay, so it is very valuable for model building, and of course it is rare as it is an unfolded B measurement.

2017-05-17 Holger Schulz

 * Add ALEPH measurement of hadronic tau decays, ALEPH_2014_I1267648.
 * Add ALEPH dimuon invariant-mass (OS and SS) analysis, ALEPH_2016_I1492968
 * The latter needed the GENKTEE FastJet algorithm, so I added that to FastJets
 * Protection against logspace exception in histo booking of MC_JetAnalysis
 * Fix compiler complaints about uninitialised variable in OPAL_2004.

2017-05-16 Holger Schulz

 * Tidy ALEPH_1999 charm-fragmentation analysis and normalise to the data integral. Added DSTARPLUS and DSTARMINUS to PID.

2017-05-16 Andy Buckley

 * Add ATLAS_2016_CONF_2016_092, inclusive jet cross sections using early 13 TeV data.
 * Add ATLAS_2017_I1591327, isolated diphoton + X differential cross-sections.
 * Add ATLAS_2017_I1589844, ATLAS_2017_I1589844_EL, ATLAS_2017_I1589844_MU: kT splittings in Z events at 8 TeV.
 * Add ATLAS_2017_I1509919, track-based underlying event at 13 TeV in ATLAS.
* Add ATLAS_2016_I1492320_2l2j and ATLAS_2016_I1492320_3l, the WWW cross-section at 8 TeV. 2017-05-12 Andy Buckley * Add ATLAS_2016_I1449082, charge asymmetry in top quark pair production in the dilepton channel. * Add ATLAS_2015_I1394865, inclusive 4-lepton/ZZ lineshape. 2017-05-11 Andy Buckley * Add ATLAS_2013_I1234228, high-mass Drell-Yan at 7 TeV. 2017-05-10 Andy Buckley * Add CMS_2017_I1519995, search for new physics with dijet angular distributions in proton-proton collisions at sqrt(s) = 13 TeV. * Add CMS_2017_I1511284, inclusive energy spectrum in the very forward direction in proton-proton collisions at 13 TeV. * Add CMS_2016_I1486238, studies of 2 b-jet + 2 jet production in proton-proton collisions at 7 TeV. * Add CMS_2016_I1454211, boosted ttbar in pp collisions at sqrt(s) = 8 TeV. * Add CMS_2016_I1421646, CMS azimuthal decorrelations at 8 TeV. 2017-05-09 Andy Buckley * Add CMS_2015_I1380605, per-event yield of the highest transverse momentum charged particle and charged-particle jet. * Add CMS_2015_I1370682_PARTON, a partonic-top version of the CMS 7 TeV pseudotop ttbar differential cross-section analysis. * Adding EHS_1988_I265504 from Felix Riehn: charged-particle production in K+ p, pi+ p and pp interactions at 250 GeV/c. * Fix ALICE_2012_I1116147 for pi0 and Lambda feed-down. 2017-05-08 Andy Buckley * Add protection against leptons from QED FSR photon conversions in assigning PartonicTop decay modes. Thanks to Markus Seidel for the report and suggested fix. * Reimplement FastJets methods in terms of new static helper functions. * Add new mkClusterInputs, mkJet and mkJets static methods to FastJets, to help with direct calls to FastJet where particle lookup for constituents and ghost tags are required. * Fix Doxygen config and Makefile target to allow working with out-of-source builds. Thanks to Christian Holm Christensen. * Improve DISLepton for HERA analyses: thanks to Andrii Verbytskyi for the patch!
2017-03-30 Andy Buckley * Replace non-template Analysis::refData functions with C++11 default T=Scatter2D. 2017-03-29 Andy Buckley * Allow yes/no and true/false values for LogX, etc. plot options. * Add --errs as an alias for --mc-errs to rivet-mkhtml and rivet-cmphistos. 2017-03-08 Peter Richardson * Added 6 analyses AMY_1990_I295160, HRS_1986_I18502, JADE_1983_I190818, PLUTO_1980_I154270, TASSO_1989_I277658, TPC_1987_I235694 for charged multiplicity in e+e- at CMS energies below the Z pole * Added 2 analyses for charged multiplicity at the Z pole DELPHI_1991_I301657, OPAL_1992_I321190 * Updated ALEPH_1991_S2435284 to plot the average charged multiplicity * Added analyses OPAL_2004_I631361, OPAL_2004_I631361_qq, OPAL_2004_I648738 for gluon jets in e+e-; most need a fictitious e+e- -> g g process 2017-03-29 Andy Buckley * Add Cut and functor selection args to HeavyHadrons accessor methods. 2017-03-03 Andy Buckley * bin/rivet-mkanalysis: Add FastJets.hh include by default -- it's almost always used. 2017-03-02 Andy Buckley * src/Analyses/CMS_2016_I1473674.cc: Patch from CMS to use partonic tops. * src/Analyses/CMS_2015_I1370682.cc: Patch to inline jet finding from CMS. 2017-03-01 Andy Buckley * Convert DressedLeptons use of fromDecay to instead veto photons that match fromHadron() || fromHadronicTau() -- meaning that electrons and muons from leptonic taus will now be dressed. * Move Particle and Jet std::function aliases to .fhh files, and replace many uses of templates for functor arguments with ParticleSelector meta-types instead. * Move the canonical implementations of hasAncestorWith, etc. and isLastWith, etc. from ParticleUtils.hh into Particle. * Disable the event-to-event beam consistency check if the ignore-beams mode is active. 2017-02-27 Andy Buckley * Add BoolParticleAND, BoolJetOR, etc. functor combiners to Tools/ParticleUtils.hh and Tools/JetUtils.hh.
2017-02-24 Andy Buckley * Mark ATLAS_2016_CONF_2016_078 and CMS_2016_PAS_SUS_16_14 analyses as validated, since their cutflows match the documentation. 2017-02-22 Andy Buckley * Add aggregate signal regions to CMS_2016_PAS_SUS_16_14. 2017-02-18 Andy Buckley * Add getEnvParam function, for neater use of environment variable parameters with a required default. 2017-02-05 Andy Buckley * Add HasBTag and HasCTag jet functors, with lower-case aliases. 2017-01-31 Andy Buckley * Start making analyses HepMC3-compatible, including major tidying in ATLAS_2013_I1243871. * Add Cut-arg variants to HeavyHadrons particle-retrieving methods. * Convert core to be compatible with HepMC3. 2017-01-21 Andy Buckley * Removing lots of long-deprecated functions & methods. 2017-01-18 Andy Buckley * Use std::function in functor-expanded method signatures on JetAlg. 2017-01-16 Andy Buckley * Convert FinalState particles() accessors to use std::function rather than a template arg for sorting, and add filtering functor support -- including a mix of filtering and sorting functors. Yay for C++11! * Add ParticleEffFilter and JetEffFilter constructors from a double (encoding constant efficiency). * Add Vector3::abseta(). 2016-12-13 Andy Buckley * Version 2.5.3 release. 2016-12-12 Holger Schulz * Add cut in BZ calculation in OPAL 4 jet analysis. The paper is not clear about the treatment of parallel vectors, which leads to division by zero, nan-fill, and a subsequent YODA RangeError (OPAL_2001_S4553896). 2016-12-12 Andy Buckley * Fix bugs in SmearedJets treatment of b & c tagging rates. * Adding ATLAS_2016_I1467454 analysis (high-mass Drell-Yan at 8 TeV). * Tweak to 'convert' call to improve the thumbnail quality from rivet-mkhtml/make-plots. 2016-12-07 Andy Buckley * Require Cython 0.24 or later. 2016-12-02 Andy Buckley * Adding L3_2004_I652683 (LEP 1 & 2 event shapes) and LHCB_2014_I1262703 (Z+jet at 7 TeV). * Adding leading dijet mass plots to MC_JetAnalysis (and all derived classes). Thanks to Chris Gutschow!
* Adding CMS_2012_I1298807 (ZZ cross-section at 8 TeV), CMS_2016_I1459051 (inclusive jet cross-sections at 13 TeV) and CMS_PAS_FSQ_12_020 (preliminary 7 TeV leading-track underlying event). * Adding CDF_2015_I1388868 (ppbar underlying event at 300, 900, and 1960 GeV). * Adding ATLAS_2016_I1467230 (13 TeV min bias), ATLAS_2016_I1468167 (13 TeV inelastic pp cross-section), and ATLAS_2016_I1479760 (7 TeV pp double-parton scattering with 4 jets). 2016-12-01 Andy Buckley * Adding ALICE_2012_I1116147 (eta and pi0 pTs and ratio) and ATLAS_2011_I929691 (7 TeV jet frag). 2016-11-30 Andy Buckley * Fix bash bugs in rivet-buildplugin, including fixing the --cmd mode. 2016-11-28 Andy Buckley * Add LHC Run 2 BSM analyses ATLAS_2016_CONF_2016_037 (3-lepton and same-sign 2-lepton), ATLAS_2016_CONF_2016_054 (1-lepton + jets), ATLAS_2016_CONF_2016_078 (ICHEP jets + MET), ATLAS_2016_CONF_2016_094 (1-lepton + many jets), CMS_2013_I1223519 (alphaT + b-jets), and CMS_2016_PAS_SUS_16_14 (jets + MET). * Provide convenience reversed-argument versions of apply and declare methods, to allow presentational choice of declare syntax in situations where the projection argument is very long, and to reduce requirements on the user's memory, since this is one situation in Rivet where there is no 'most natural' ordering choice. 2016-11-24 Andy Buckley * Adding pTvec() function to 4-vectors and ParticleBase. * Fix --pwd option of the rivet script. 2016-11-21 Andy Buckley * Add weights and scaling to Cutflow/s. 2016-11-19 Andy Buckley * Add Et(const ParticleBase&) unbound function. 2016-11-18 Andy Buckley * Fix missing YAML quote mark in rivet-mkanalysis. 2016-11-15 Andy Buckley * Fix constness requirements on ifilter_select() and Particle/JetEffFilter::operator(). * src/Analyses/ATLAS_2016_I1458270.cc: Fix inverted particle efficiency filtering.
2016-10-24 Andy Buckley * Add rough ATLAS and CMS photon reco efficiency functions from Delphes (ATLAS and CMS versions are identical, hmmm). 2016-10-12 Andy Buckley * Tidying/fixing make-plots custom z-ticks code. Thanks to Dmitry Kalinkin. 2016-10-03 Holger Schulz * Fix SpiresID -> InspireID in some analyses (show-analysis pointed to a non-existent web page). 2016-09-29 Holger Schulz * Add Luminosity_fb to AnalysisInfo. * Added some keywords and Lumi to ATLAS_2016_I1458270. 2016-09-28 Andy Buckley * Merge the ATLAS and CMS from-Delphes electron and muon tracking efficiency functions into generic trkeff functions -- this is how it should be. * Fix return type typo in Jet::bTagged(FN) templated method. * Add eta and pT cuts to ATLAS truth b-jet definition. * Use rounding rather than truncation in Cutflow percentage efficiency printing. 2016-09-28 Frank Siegert * make-plots bugfix in y-axis labels for RatioPlotMode=deviation. 2016-09-27 Andy Buckley * Add vector and scalar pT (rather than Et) to MissingMomentum. 2016-09-27 Holger Schulz * Analysis keyword machinery, e.g. rivet -a @semileptonic, or rivet -a @semileptonic@^bdecays -a @semileptonic@^ddecays. 2016-09-22 Holger Schulz * Release version 2.5.2. 2016-09-21 Andy Buckley * Add a requirement to DressedLeptons that the FinalState passed as 'bareleptons' will be filtered to only contain charged leptons, if that is not already the case. Thanks to Markus Seidel for the suggestion. 2016-09-21 Holger Schulz * Add Simone Amoroso's plugin for hadron spectra (ALEPH_1995_I382179). * Add Simone Amoroso's plugin for hadron spectra (OPAL_1993_I342766). 2016-09-20 Holger Schulz * Add CMS ttbar analysis from contrib, mark validated (CMS_2016_I1473674). * Extend rivet-mkhtml --booklet to also work with pdfmerge. 2016-09-20 Andy Buckley * Fix make-plots automatic YMax calculation, which had a typo from code cleaning (mea culpa!). * Fix ChargedLeptons projection, which failed to exclude neutrinos!!! Thanks to Markus Seidel.
* Add templated FN filtering arg versions of the Jet::*Tags() and Jet::*Tagged() functions. 2016-09-18 Andy Buckley * Add CMS partonic top analysis (CMS_2015_I1397174). 2016-09-18 Holger Schulz * Add L3 xp analysis of eta mesons, thanks Simone (L3_1992_I336180). * Add D0 1.8 TeV jet shapes analysis, thanks Simone (D0_1995_I398175). 2016-09-17 Andy Buckley * Add has{Ancestor,Parent,Child,Descendant}With functions and HasParticle{Ancestor,Parent,Child,Descendant}With functors. 2016-09-16 Holger Schulz * Add ATLAS 8 TeV ttbar analysis from contrib (ATLAS_2015_I1404878). 2016-09-16 Andy Buckley * Add particles(GenParticlePtr) to RivetHepMC.hh. * Add hasParent, hasParentWith, and hasAncestorWith to Particle. 2016-09-15 Holger Schulz * Add ATLAS 8 TeV dijet analysis from contrib (ATLAS_2015_I1393758). * Add ATLAS 8 TeV 'number of tracks in jets' analysis from contrib (ATLAS_2016_I1419070). * Add ATLAS 8 TeV g->H->WW->enumunu analysis from contrib (ATLAS_2016_I1444991). 2016-09-14 Holger Schulz * Explicit std::toupper and std::tolower to make clang happy. 2016-09-14 Andy Buckley * Add ATLAS Run 2 0-lepton SUSY and monojet search papers (ATLAS_2016_I1452559, ATLAS_2016_I1458270). 2016-09-13 Andy Buckley * Add experimental Cutflow and Cutflows objects for BSM cut tracking. * Add 'direct' versions of any, all, none to Utils.hh, with an implicit bool() transforming function. 2016-09-13 Holger Schulz * Add and mark validated B+ to omega analysis (BABAR_2013_I1116411). * Add and mark validated D0 to pi- analysis (BABAR_2015_I1334693). * Add a few more particle names and use PID names in recently added analyses. * Add Simone's OPAL b-frag analysis (OPAL_2003_I599181) after some cleanup and heavy usage of new features. * Restructured DELPHI_2011_I890503 in the same manner --- picks up a few more B-hadrons now (e.g.
20523 and such). * Clean up and add ATLAS 8 TeV MinBias (from contrib ATLAS_2016_I1426695). 2016-09-12 Andy Buckley * Add a static constexpr DBL_NAN to Utils.hh for convenience, and move some utils stuff out of MathHeader.hh. 2016-09-12 Holger Schulz * Add count function to Tools/Utils.hh. * Add and mark validated B0bar and Bminus-decay to pi analysis (BELLE_2013_I1238273). * Add and mark validated B0-decay analysis (BELLE_2011_I878990). * Add and mark validated B to D decay analysis (BELLE_2011_I878990). 2016-09-08 Andy Buckley * Add C-array version of multi-target Analysis::scale() and normalize(), and fix (semantic) constness. * Add == and != operators for cuts applied to integers. * Add missing delta{Phi,Eta,Rap}{Gtr,Less} functors to ParticleBaseUtils.hh. 2016-09-07 Andy Buckley * Add templated functor filtering args to the Particle parent/child/descendant methods. 2016-09-06 Andy Buckley * Add ATLAS Run 1 medium and tight electron ID efficiency functions. * Update configure scripts to use newer (Py3-safe) Python testing macros. 2016-09-02 Andy Buckley * Add isFirstWith(out), isLastWith(out) functions, and functor wrappers, using Cut and templated function/functor args. * Add Particle::parent() method. * Add using import/typedef of HepMC *Ptr types (useful step for HepMC 2.07 and 3.00). * Various typo fixes (and canonical renaming) in ParticleBaseUtils functor collection. * Add ATLAS MV2c10 and MV2c20 b-tagging effs to SmearingFunctions.hh collection. 2016-09-01 Andy Buckley * Add a PartonicTops projection. * Add overloaded versions of the Event::allParticles() method with selection Cut or templated selection function arguments. 2016-08-25 Andy Buckley * Add rapidity scheme arg to DeltaR functor constructors. 2016-08-23 Andy Buckley * Provide an Analysis::bookCounter(d, x, y, title) function, for convenience and making the mkanalysis template valid.
* Improve container utils functions, and provide combined remove_if+erase filter_* functions for both select- and discard-type selector functions. 2016-08-22 Holger Schulz * Bugfix in rivet-mkhtml (NoneType: ana.spiresID() --> spiresid). * Added include to Rivet/Tools/Utils.hh to make gcc6 happy. 2016-08-22 Andy Buckley * Add efffilt() functions and Particle/JetEffFilt functors to SmearingFunctions.hh. 2016-08-20 Andy Buckley * Adding filterBy methods for Particle and Jet which accept generic boolean functions as well as the Cut specialisation. 2016-08-18 Andy Buckley * Add a Jet::particles(Cut&) method, for inline filtering of jet constituents. * Add 'conjugate' behaviours to container head and tail functions via negative length arg values. 2016-08-15 Andy Buckley * Add convenience headers for including all final-state and smearing projections, to save user typing. 2016-08-12 Andy Buckley * Add standard MET functions for ATLAS R1 (and currently copies for R2 and CMS). * Add lots of vector/container helpers for e.g. container slicing, summing, and min/max calculation. * Adapt SmearedMET to take *two* arguments, since SET is typically used to calculate MET resolution. * Adding functors for computing vector & ParticleBase differences w.r.t. another vector. 2016-08-12 Holger Schulz * Implemented a few more cuts in the prompt photon analysis (CDF_1993_S2742446), but to no avail: the rise of the data towards larger costheta values cannot be reproduced --- maybe this is a candidate for more scrutiny, using the boosting machinery so that the c.m. cuts can be done in a non-approximate way. 2016-08-11 Holger Schulz * Rename CDF_2009_S8383952 to CDF_2009_I856131 due to invalid Spires entry. * Add InspireID to all analyses known by their Spires key. 2016-08-09 Holger Schulz * Release 2.5.1. 2016-08-08 Andy Buckley * Add a simple MC_MET analysis for out-of-the-box MET distribution testing.
2016-08-08 Holger Schulz * Add DELPHI_2011_I890503 b-quark fragmentation function measurement, which supersedes DELPHI_2002_069_CONF_603. The latter is marked OBSOLETE. 2016-08-05 Holger Schulz * Use Jet mass and energy smearing in CDF_1997_... six-jet analysis, mark validated. * Mark CDF_2001_S4563131 validated. * D0_1996_S3214044 --- cut on jet Et rather than pT, fix filling of costheta and theta plots, mark validated. Concerning the jet algorithm, I tried with the fastjet implementation fastjet/D0RunIConePlugin.hh, but that really does not help. * D0_1996_S3324664 --- fix normalisations, sorting jets properly now, cleanup and mark validated. 2016-08-04 Holger Schulz * Use Jet mass and energy smearing in CDF_1996_S310 ... jet properties analysis. Cleanup analysis and mark validated. Added some more run info. The same for CDF_1996_S334... (pretty much the same cuts, different observables). * Minor fixes in SmearedJets projection. 2016-08-03 Andy Buckley * Protect SmearedJets against loss of tagging information if a momentum smearing function is used (rather than a dedicated Jet smearing fn) via implicit casts. 2016-08-02 Andy Buckley * Add SmearedMET projection, wrapping MissingMomentum. * Include base truth-level projections in SmearedParticles/Jets compare() methods. 2016-07-29 Andy Buckley * Rename TOTEM_2012_002 to proper TOTEM_2012_I1220862 name. * Remove conditional building of obsolete, preliminary and unvalidated analyses. Now always built, since there are sufficient warnings. 2016-07-28 Holger Schulz * Mark D0_2000... W pT analysis validated. * Mark LHCB_2011_S919... phi meson analysis validated. 2016-07-25 Andy Buckley * Add unbound accessors for momentum properties of ParticleBase objects. * Add Rivet/Tools/ParticleBaseUtils.hh to collect tools like functors for particle & jet filtering. * Add vector versions of Analysis::scale() and ::normalize(), for batched scaling. * Add Analysis::scale() and Analysis::divide() methods for Counter types.
* Utils.hh: add a generic sum() function for containers, and use auto in loop to support arrays. * Set data path as well as lib path in scripts with --pwd option, and use abs path to $PWD. * Add setAnalysisDataPaths and addAnalysisDataPath to RivetPaths.hh/cc and Python. * Pass absolutized RIVET_DATA_PATH from rivet-mkhtml to rivet-cmphistos. 2016-07-24 Holger Schulz * Mark CDF_2008_S77... b jet shapes validated * Added protection against low stats yoda exception in finalize for that analysis 2016-07-22 Andy Buckley * Fix newly introduced bug in make-plots which led to data point markers being skipped for all but the last bin. 2016-07-21 Andy Buckley * Add pid, abspid, charge, abscharge, charge3, and abscharge3 Cut enums, handled by Particle cut targets. * Add abscharge() and abscharge3() methods to Particle. * Add optional Cut and duplicate-removal flags to Particle children & descendants methods. * Add unbound versions of Particle is* and from* methods, for easier functor use. * Add Particle::isPrompt() as a member rather than unbound function. * Add protections against -ve mass from numerical precision errors in smearing functions. 2016-07-20 Andy Buckley * Move several internal system headers into the include/Rivet/Tools directory. * Fix median-computing safety logic in ATLAS_2010_S8914702 and tidy this and @todo markers in several similar analyses. * Add to_str/toString and stream functions for Particle, and a bit of Particle util function reorganisation. * Add isStrange/Charm/Bottom PID and Particle functions. * Add RangeError exception throwing from MathUtils.hh stats functions if given empty/mismatched datasets. * Add Rivet/Tools/PrettyPrint.hh, based on https://louisdx.github.io/cxx-prettyprint/ * Allow use of path regex group references in .plot file keyed values. 2016-07-20 Holger Schulz * Fix the --nskip behaviour on the main rivet script. 
2016-07-07 Andy Buckley * Release version 2.5.0. 2016-07-01 Andy Buckley * Fix pandoc interface flag version detection. 2016-06-28 Andy Buckley * Release version 2.4.3. * Add ATLAS_2016_I1468168 early ttbar fully leptonic fiducial cross-section analysis at 13 TeV. 2016-06-21 Andy Buckley * Add ATLAS_2016_I1457605 inclusive photon analysis at 8 TeV. 2016-06-15 Andy Buckley * Add a --show-bibtex option to the rivet script, for convenient outputting of a BibTeX db for the used analyses. 2016-06-14 Andy Buckley * Add and rename 4-vector boost calculation methods: new methods beta, betaVec, gamma & gammaVec are now preferred to the deprecated boostVector method. 2016-06-13 Andy Buckley * Add and use projection handling methods declare(proj, pname) and apply(evt, pname) rather than the longer and more explicitly 'projection-y' addProjection & applyProjection. * Start using the DEFAULT_RIVET_ANALYSIS_CTOR macro (newly created preferred alias to the long-present DEFAULT_RIVET_ANA_CONSTRUCTOR). * Add a DEFAULT_RIVET_PROJ_CLONE macro for implementing the clone() method boiler-plate code in projections. 2016-06-10 Andy Buckley * Add a NonPromptFinalState projection, and tweak the PromptFinalState and unbound Particle functions a little in response. May need some more finessing. * Add user-facing aliases to ProjectionApplier add, get, and apply methods... the templated versions of which can now be called without using the word 'projection', which makes the function names a bit shorter and pithier, and reduces semantic repetition. 2016-06-10 Andy Buckley * Adding ATLAS_2015_I1397635 Wt at 8 TeV analysis. * Adding ATLAS_2015_I1390114 tt+b(b) at 8 TeV analysis. 2016-06-09 Andy Buckley * Downgrade some non-fatal error messages from ERROR to WARNING status, because *sigh* ATLAS's software treats any appearance of the word 'ERROR' in its log file as a reason to report the job as failed (facepalm). 2016-06-07 Andy Buckley * Adding ATLAS 13 TeV minimum bias analysis, ATLAS_2016_I1419652.
2016-05-30 Andy Buckley * pyext/rivet/util.py: Add pandoc --wrap/--no-wrap CLI detection and batch conversion. * bin/rivet: add -o as a more standard 'output' option flag alias to -H. 2016-05-23 Andy Buckley * Remove the last ref-data bin from table 16 of ATLAS_2010_S8918562, due to data corruption. The corresponding HepData record will be amended by ATLAS. 2016-05-12 Holger Schulz * Mark ATLAS_2012_I1082009 as validated after exhaustive tests with Pythia8 and Sherpa in inclusive QCD mode. 2016-05-11 Andy Buckley * Specialise return error codes from the rivet script. 2016-05-11 Andy Buckley * Add Event::allParticles() to provide neater (but not *helpful*) access to Rivet-wrapped versions of the raw particles in the Event::genEvent() record, and hence reduce HepMC digging. 2016-05-05 Andy Buckley * Version 2.4.2 release! * Update SLD_2002_S4869273 ref data to match publication erratum, now updated in HepData. Thanks to Peter Skands for the report and Mike Whalley / Graeme Watt for the quick fix and heads-up. 2016-04-27 Andy Buckley * Add CMS_2014_I1305624 event shapes analysis, with standalone variable calculation struct embedded in an unnamed namespace. 2016-04-19 Andy Buckley * Various clean-ups and fixes in ATLAS analyses using isolated photons with median pT density correction. 2016-04-18 Andy Buckley * Add transformBy(LT) methods to Particle and Jet. * Add mkObjectTransform and mkFrameTransform factory methods to LorentzTransform. 2016-04-17 Andy Buckley * Add null GenVertex protection in Particle children & descendants methods. 2016-04-15 Andy Buckley * Add ATLAS_2015_I1397637, ATLAS 8 TeV boosted top cross-section vs. pT 2016-04-14 Andy Buckley * Add a --no-histos argument to the rivet script. 2016-04-13 Andy Buckley * Add ATLAS_2015_I1351916 (8 TeV Z FB asymmetry) and ATLAS_2015_I1408516 (8 TeV Z phi* and pT) analyses, and their _EL, _MU variants. 2016-04-12 Andy Buckley * Patch PID utils for ordering issues in baryon decoding. 
2016-04-11 Andy Buckley * Actually implement ZEUS_2001_S4815815... only 10 years late! 2016-04-08 Andy Buckley * Add a --guess-prefix flag to rivet-config, cf. fastjet-config. * Add RIVET_DATA_PATH variable and related functions in C++ and Python as a common first-fallback for RIVET_REF_PATH, RIVET_INFO_PATH, and RIVET_PLOT_PATH. * Add --pwd options to rivet-mkhtml and rivet-cmphistos 2016-04-07 Andy Buckley * Remove implicit conventional event rotation for HERA -- this needs to be done explicitly from now. * Add comBoost functions and methods to Beam.hh, and tidy LorentzTransformation. * Restructure Beam projection functions for beam particle and sqrtS extraction, and add asqrtS functions. * Rename and improve PID and Particle Z,A,lambda functions -> nuclZ,nuclA,nuclNlambda. 2016-04-05 Andy Buckley * Improve binIndex function, with an optional argument to allow overflow lookup, and add it to testMath. * Adding setPE, setPM, setPtEtaPhiM, etc. methods and corresponding mk* static methods to FourMomentum, as well as adding more convenience aliases and vector attributes for completeness. Coordinate conversion functions taken from HEPUtils::P4. New attrs also mapped to ParticleBase. 2016-03-29 Andy Buckley * ALEPH_1996_S3196992.cc, ATLAS_2010_S8914702.cc, ATLAS_2011_I921594.cc, ATLAS_2011_S9120807.cc, ATLAS_2012_I1093738.cc, ATLAS_2012_I1199269.cc, ATLAS_2013_I1217867.cc, ATLAS_2013_I1244522.cc, ATLAS_2013_I1263495.cc, ATLAS_2014_I1307756.cc, ATLAS_2015_I1364361.cc, CDF_2008_S7540469.cc, CMS_2015_I1370682.cc, MC_JetSplittings.cc, STAR_2006_S6870392.cc: Updates for new FastJets interface, and other cleaning. * Deprecate 'standalone' FastJets constructors -- they are misleading. * More improvements around jets, including unbound conversion and filtering routines between collections of Particles, Jets, and PseudoJets. * Place 'Cut' forward declaration in a new Cuts.fhh header. 
* Adding a Cuts::OPEN extern const (a bit more standard- and constant-looking than Cuts::open()) 2016-03-28 Andy Buckley * Improvements to FastJets constructors, including specification of optional AreaDefinition as a constructor arg, disabling dodgy no-FS constructors which I suspect don't work properly in the brave new world of automatic ghost tagging, using a bit of judicious constructor delegation, and completing/exposing use of shared_ptr for internal memory management. 2016-03-26 Andy Buckley * Remove Rivet/Tools/RivetBoost.hh and Boost references from rivet-config, rivet-buildplugin, and configure.ac. It's gone ;-) * Replace Boost assign usage with C++11 brace initialisers. All Boost use is gone from Rivet! * Replace Boost lexical_cast and string algorithms. 2016-03-25 Andy Buckley * Bug-fix in semi-leptonic top selection of CMS_2015_I1370682. 2016-03-12 Andy Buckley * Allow multi-line major tick labels on make-plots linear x and y axes. Linebreaks are indicated by \n in the .dat file. 2016-03-09 Andy Buckley * Release 2.4.1 2016-03-03 Andy Buckley * Add a --nskip flag to the rivet command-line tool, to allow processing to begin in the middle of an event file (useful for batched processing of large files, in combination with --nevts) 2016-03-03 Holger Schulz * Add ATLAS 7 TeV event shapes in Z+jets analysis (ATLAS_2016_I1424838) 2016-02-29 Andy Buckley * Update make-plots to use multiprocessing rather than threading. * Add FastJets::trimJet method, thanks to James Monk for the suggestion and patch. * Add new preferred name PID::charge3 in place of PID::threeCharge, and also convenience PID::abscharge and PID::abscharge3 functions -- all derived from changes in external HEPUtils. * Add analyze(const GenEvent*) and analysis(string&) methods to AnalysisHandler, plus some docstring improvements. 2016-02-23 Andy Buckley * New ATLAS_2015_I1394679 analysis. * New MC_HHJETS analysis from Andreas Papaefstathiou. 
* Ref data updates for ATLAS_2013_I1219109, ATLAS_2014_I1312627, and ATLAS_2014_I1319490. * Add automatic output paging to 'rivet --show-analyses' 2016-02-16 Andy Buckley * Apply cross-section unit fixes and plot styling improvements to ATLAS_2013_I1217863 analyses, thanks to Christian Gutschow. * Fix to rivet-cmphistos to avoid overwriting RatioPlotYLabel if already set via e.g. the PLOT pseudo-file. Thanks to Johann Felix v. Soden-Fraunhofen. 2016-02-15 Andy Buckley * Add Analysis::bookCounter and some machinery in rivet-cmphistos to avoid getting tripped up by unplottable (for now) data types. * Add --font and --format options to rivet-mkhtml and make-plots, to replace the individual flags used for that purpose. Not fully cleaned up, but a necessary step. * Add new plot styling options to rivet-cmphistos and rivet-mkhtml. Thanks to Gavin Hesketh. * Modify rivet-cmphistos and rivet-mkhtml to apply plot hiding if *any* path component is hidden by an underscore prefix, as implemented in AOPath, plus other tidying using new AOPath methods. * Add pyext/rivet/aopaths.py, containing AOPath object for central & standard decoding of Rivet-standard analysis object path structures. 2016-02-12 Andy Buckley * Update ParticleIdUtils.hh (i.e. PID:: functions) to use the functions from the latest version of MCUtils' PIDUtils.h. 2016-01-15 Andy Buckley * Change rivet-cmphistos path matching logic from match to search (user can add explicit ^ marker if they want match semantics). 2015-12-20 Andy Buckley * Improve linspace (and hence also logspace) precision errors by using multiplication rather than repeated addition to build edge list (thanks to Holger Schulz for the suggestion). 2015-12-15 Andy Buckley * Add cmphistos and make-plots machinery for handling 'suffix' variations on plot paths, currently just by plotting every line, with the variations in a 70% faded tint. * Add Beam::pv() method for finding the beam interaction primary vertex 4-position. 
* Add a new Particle::setMomentum(E,x,y,z) method, and an origin position member which is automatically populated from the GenParticle, with access methods corresponding to the momentum ones. 2015-12-10 Andy Buckley * make-plots: improve custom tick attribute handling, allowing empty lists. Also, any whitespace now counts as a tick separator -- explicit whitespace in labels should be done via ~ or similar LaTeX markup. 2015-12-04 Andy Buckley * Pro-actively use -m/-M arguments when initially loading histograms in mkhtml, *before* passing them to cmphistos. 2015-12-03 Andy Buckley * Move contains() and has_key() functions on STL containers from std to Rivet namespaces. * Adding IsRef attributes to all YODA refdata files; this will be used to replace the /REF prefix in Rivet v3 onwards. The migration has also removed leading # characters from BEGIN/END blocks, as per YODA format evolution: new YODA versions as required by current Rivet releases are able to read both the old and new formats. 2015-12-02 Andy Buckley * Add handling of a command-line PLOT 'file' argument to rivet-mkhtml, cf. rivet-cmphistos. * Improvements to rivet-mkhtml behaviour re. consistency with rivet-cmphistos in how multi-part histo paths are decomposed into analysis-name + histo name, and removal of 'NONE' strings. 2015-11-30 Andy Buckley * Relax rivet/plotinfo.py pattern matching on .plot file components, to allow leading whitespace and whitespace around = signs, and to make the leading # optional on BEGIN/END blocks. 2015-11-26 Andy Buckley * Write out intermediate histogram files by default, with an event interval of 10k. 2015-11-25 Andy Buckley * Protect make-plots against lock-up due to a partial pstricks command when there are no data points. 2015-11-17 Andy Buckley * rivet-cmphistos: Use a ratio label that doesn't mention 'data' when plotting MC vs. MC. 2015-11-12 Andy Buckley * Tweak plot and subplot sizing defaults in make-plots so the total canvas is always the same size by default.
2015-11-10 Andy Buckley * Handle 2D histograms better in rivet-cmphistos (since they can't be overlaid). 2015-11-05 Andy Buckley * Allow comma-separated analysis name lists to be passed to a single -a/--analysis/--analyses option. * Convert namespace-global const variables to be static, to suppress compiler warnings. * Use standard MAX_DBL and MAX_INT macros as a source for MAXDOUBLE and MAXINT, to suppress GCC5 warnings. 2015-11-04 Holger Schulz * Adding LHCB inelastic xsection measurement (LHCB_2015_I1333223). * Adding ATLAS colour flow in ttbar->semileptonic measurement (ATLAS_2015_I1376945). 2015-10-07 Chris Pollard * Release 2.4.0. 2015-10-06 Holger Schulz * Adding CMS_2015_I1327224 dijet analysis (Mjj > 2 TeV). 2015-10-03 Holger Schulz * Adding CMS_2015_I1346843 Z+gamma. 2015-09-30 Andy Buckley * Important improvement in FourVector & FourMomentum: new reverse() method to return a 4-vector in which only the spatial component has been inverted, cf. operator-, which flips the t/E component as well. 2015-09-28 Holger Schulz * Adding D0_2000_I503361 ZPT at 1800 GeV. 2015-09-29 Chris Pollard * Adding ATLAS_2015_CONF_2015_041. 2015-09-29 Chris Pollard * Adding ATLAS_2015_I1387176. 2015-09-29 Chris Pollard * Adding ATLAS_2014_I1327229. 2015-09-28 Chris Pollard * Adding ATLAS_2014_I1326641. 2015-09-28 Holger Schulz * Adding CMS_2013_I1122847 FB asymmetry in DY analysis. 2015-09-28 Andy Buckley * Adding CMS_2015_I1385107 LHA pp 2.76 TeV track-jet underlying event. 2015-09-27 Andy Buckley * Adding CMS_2015_I1384119 LHC Run 2 minimum bias dN/deta with no B field. 2015-09-25 Andy Buckley * Adding TOTEM_2014_I1328627 forward charged density in eta analysis. 2015-09-23 Andy Buckley * Add CMS_2015_I1310737 Z+jets analysis. * Allow running MC_{W,Z}INC, MC_{W,Z}JETS as separate bare lepton analyses. 2015-09-23 Andy Buckley * FastJets now allows use of FastJet pure ghosts, by excluding them from the constituents of Rivet Jet objects.
   Thanks to James Monk for raising the issue and providing a patch.

2015-09-15 Andy Buckley

 * More MissingMomentum changes: add optional 'mass target' argument when retrieving the vector sum as a 4-momentum, with the mass defaulting to 0 rather than sqrt(sum(E)^2 - sum(p)^2).
 * Require Boost 1.55 for robust compilation, as pointed out by Andrii Verbytskyi.

2015-09-10 Andy Buckley

 * Allow access to MissingMomentum projection via WFinder.
 * Adding extra methods to MissingMomentum, to make it more user-friendly.

2015-09-09 Andy Buckley

 * Fix factor of 2 in LHCB_2013_I1218996 normalisation, thanks to Felix Riehn for the report.

2015-08-20 Frank Siegert

 * Add function to ZFinder to retrieve all fiducial dressed leptons, e.g. to allow vetoing on a third one (proposed by Christian Gutschow).

2015-08-18 Andy Buckley

 * Rename xs and counter AOs to start with underscores, and modify rivet-cmphistos to skip AOs whose basenames start with _.

2015-08-17 Andy Buckley

 * Add writing out of cross-section and total event counter by default. Need to add some name protection to avoid them being plotted.

2015-08-16 Andy Buckley

 * Add templated versions of Analysis::refData() to use data types other than Scatter2DPtr, and convert the cached ref data store to generic AnalysisObjectPtrs to make it possible.

2015-07-29 Andy Buckley

 * Add optional Cut arguments to all the Jet tag methods.
 * Add exception handling and pre-emptive testing for a non-writeable output directory (based on patch from Lukas Heinrich).

2015-07-24 Andy Buckley

 * Version 2.3.0 release.

2015-07-02 Holger Schulz

 * Tidy up ATLAS Higgs combination analysis.
 * Add ALICE kaon, pion analysis (ALICE_2015_I1357424)
 * Add ALICE strange baryon analysis (ALICE_2014_I1300380)
 * Add CDF ZpT measurement in Z->ee events analysis (CDF_2012_I1124333)
 * Add validated ATLAS W+charm measurement (ATLAS_2014_I1282447)
 * Add validated CMS jet and dijet analysis (CMS_2013_I1208923)

2015-07-01 Andy Buckley

 * Define a private virtual operator= on Projection, to block 'sliced' accidental value copies of derived class instances.
 * Add new muon-in-jet options to FastJet constructors, pass that and invisibles enums correctly to JetAlg, tweak the default strategies, and add a FastJets constructor from a fastjet::JetDefinition (while deprecating the plugin-by-reference constructor).

2015-07-01 Holger Schulz

 * Add D0 phi* measurement (D0_2015_I1324946).
 * Remove WUD and MC_PHOTONJETUE analyses.
 * Don't abort ATLAS_2015_I1364361 if there is no stable Higgs: print a warning instead and veto the event.

2015-07-01 Andy Buckley

 * Add all, none, from-decay muon filtering options to JetAlg and FastJets.
 * Rename NONPROMPT_INVISIBLES to DECAY_INVISIBLES for clarity & extensibility.
 * Remove FastJets::ySubJet, splitJet, and filterJet methods -- they're BDRS-paper-specific and you can now use the FastJet objects directly to do this and much more.
 * Adding InvisiblesStrategy to JetAlg, using it rather than a bool in the useInvisibles method, and updating FastJets to use this approach for its particle filtering and to optionally use the enum in the constructor arguments. The new default invisibles-using behaviour is to still exclude _prompt_ invisibles, and the default is still to exclude them all. Only one analysis (src/Analyses/STAR_2006_S6870392.cc) required updating, since it was the only one to be using the FastJets legacy seed_threshold constructor argument.
 * Adding isVisible method to Particle, taken from VisibleFinalState (which now uses this).

2015-06-30 Andy Buckley

 * Marking many old & superseded ATLAS analyses as obsolete.
 * Adding cmpMomByMass and cmpMomByAscMass sorting functors.
 * Bump version to 2.3.0 and require YODA > 1.4.0 (current head at time of development).

2015-06-08 Andy Buckley

 * Add handling of -m/-M flags on rivet-cmphistos and rivet-mkhtml, moving current rivet-mkhtml -m/-M to -a/-A (for analysis name pattern matching). Requires YODA head (will be YODA 1.3.2 or 1.4.0).
 * src/Analyses/ATLAS_2015_I1364361.cc: Now use the built-in prompt photon selecting functions.
 * Tweak legend positions in MC_JETS .plot file.
 * Add a bit more debug output from ZFinder and WFinder.

2015-05-24 Holger Schulz

 * Normalisation discussion concerning ATLAS_2014_I1325553 is resolved. Changed YLabel accordingly.

2015-05-19 Holger Schulz

 * Add (preliminary) ATLAS combined Higgs analysis (ATLAS_2015_I1364361). Data will be updated and more histos added as soon as the paper is published in a journal. For now using data taken from the public resource https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/HIGG-2014-11/

2015-05-19 Peter Richardson

 * Fix ATLAS_2014_I1325553: normalisation of histograms was wrong by a factor of two (|y| vs. y problem).

2015-05-01 Andy Buckley

 * Fix MC_HJETS/HINC/HKTSPLITTINGS analyses to (ab)use the ZFinder with a mass range of 115-135 GeV and a mass target of 125 GeV (was previously 115-125 and mass target of mZ).

2015-04-30 Andy Buckley

 * Removing uses of boost::assign::list_of, preferring the existing comma-based assign override for now, for C++11 compatibility.
 * Convert MC_Z* analysis finalize methods to use scale() rather than normalize().

2015-04-01 Holger Schulz

 * Add CMS 7 TeV rapidity gap analysis (CMS_2015_I1356998).
 * Remove FinalState Projection.

2015-03-30 Holger Schulz

 * Add ATLAS 7 TeV photon + jets analysis (ATLAS_2013_I1244522).

2015-03-26 Andy Buckley

 * Updates for HepMC 2.07 interface constness improvements.

2015-03-25 Holger Schulz

 * Add ATLAS double parton scattering in W+2j analysis (ATLAS_2013_I1216670).

2015-03-24 Andy Buckley

 * 2.2.1 release!
2015-03-23 Holger Schulz

 * Add ATLAS differential Higgs analysis (ATLAS_2014_I1306615).

2015-03-19 Chris Pollard

 * Add ATLAS V+gamma analyses (ATLAS_2013_I1217863)

2015-03-20 Andy Buckley

 * Adding ATLAS R-jets analysis, i.e. ratios of W+jets and Z+jets observables (ATLAS_2014_I1312627 and _EL, _MU variants)
 * include/Rivet/Tools/ParticleUtils.hh: Adding same/oppSign and same/opp/diffCharge functions, operating on two Particles.
 * include/Rivet/Tools/ParticleUtils.hh: Adding HasAbsPID functor and removing optional abs arg from HasPID.

2015-03-19 Andy Buckley

 * Mark ATLAS_2012_I1083318 as VALIDATED and fix d25-x01-y02 ref data.

2015-03-19 Chris Pollard

 * Add ATLAS W and Z angular analyses (ATLAS_2011_I928289)

2015-03-19 Andy Buckley

 * Add LHCb charged particle multiplicities and densities analysis (LHCB_2014_I1281685)
 * Add LHCb Z y and phi* analysis (LHCB_2012_I1208102)

2015-03-19 Holger Schulz

 * Add ATLAS dijet analysis (ATLAS_2014_I1325553).
 * Add ATLAS Z pT analysis (ATLAS_2014_I1300647).
 * Add ATLAS low-mass Drell-Yan analysis (ATLAS_2014_I1288706).
 * Add ATLAS gap fractions analysis (ATLAS_2014_I1307243).

2015-03-18 Andy Buckley

 * Adding CMS_2014_I1298810 and CMS_2014_I1303894 analyses.

2015-03-18 Holger Schulz

 * Add PDG_TAUS analysis which makes use of the TauFinder.
 * Add ATLAS 'traditional' Underlying Event in Z->mumu analysis (ATLAS_2014_I1315949).

2015-03-18 Andy Buckley

 * Change UnstableFinalState duplicate resolution to use the last in a chain rather than the first.

2015-03-17 Holger Schulz

 * Update TauFinder to use decay type (can be HADRONIC, LEPTONIC or ANY); in FastJets.cc, set TauFinder mode to hadronic for tau-tagging.

2015-03-16 Chris Pollard

 * Removed fuzzyEquals() from Vector3::angle()

2015-03-16 Andy Buckley

 * Adding Cuts-based constructor to PrimaryHadrons.
 * Adding missing compare() method to HeavyHadrons projection.
2015-03-15 Chris Pollard

 * Adding FinalPartons projection which selects the quarks and gluons immediately before hadronization

2015-03-05 Andy Buckley

 * Adding Cuts-based constructors and other tidying in UnstableFinalState and HeavyHadrons

2015-03-03 Andy Buckley

 * Add support for a PLOT meta-file argument to rivet-cmphistos.

2015-02-27 Andy Buckley

 * Improved time reporting.

2015-02-24 Andy Buckley

 * Add Particle::fromHadron and Particle::fromPromptTau, and add a boolean 'prompt' argument to Particle::fromTau.
 * Fix WFinder use-transverse-mass property setting. Thanks to Christian Gutschow.

2015-02-04 Andy Buckley

 * Add more protection against math domain errors with log axes.
 * Add some protection against nan-valued points and error bars in make-plots.

2015-02-03 Andy Buckley

 * Converting 'bitwise' to 'logical' Cuts combinations in all analyses.

2015-02-02 Andy Buckley

 * Use vector MET rather than scalar MET (doh...) in WFinder cut. Thanks to Ines Ochoa for the bug report.
 * Updating and tidying analyses with deprecation warnings.
 * Adding more Cuts/FS constructors for Charged,Neutral,UnstableFinalState.
 * Add &&, || and ! operators for without-parens-warnings Cut combining. Note these don't short-circuit, but this is ok since Cut comparisons don't have side-effects.
 * Add absetaIn, absrapIn Cut range definitions.
 * Updating use of sorted particle/jet access methods and cmp functors in projections and analyses.

2014-12-09 Andy Buckley

 * Adding a --cmd arg to rivet-buildplugin to allow the output paths to be sed'ed (to help deal with naive Grid distribution). For example:
   BUILDROOT=`rivet-config --prefix`; rivet-buildplugin PHOTONS.cc --cmd | sed -e "s:$BUILDROOT:$SITEROOT:g"

2014-11-26 Andy Buckley

 * Interface improvements in DressedLeptons constructor.
 * Adding DEPRECATED macro to throw compiler deprecation warnings when using deprecated features.
2014-11-25 Andy Buckley

 * Adding Cut-based constructors, and various constructors with lists of PDG codes to IdentifiedFinalState.

2014-11-20 Andy Buckley

 * Analysis updates (ATLAS, CMS, CDF, D0) to apply the changes below.
 * Adding JetAlg jets(Cut, Sorter) methods, and other interface improvements for cut and sorted ParticleBase retrieval from JetAlg and ParticleFinder projections. Some old many-doubles versions removed, syntactic sugar sorting methods deprecated.
 * Adding Cuts::Et and Cuts::ptIn, Cuts::etIn, Cuts::massIn.
 * Moving FastJet includes, conversions, uses etc. into Tools/RivetFastJet.hh

2014-10-07 Andy Buckley

 * Fix a bug in the isCharmHadron(pid) function and remove isStrange* functions.

2014-09-30 Andy Buckley

 * 2.2.0 release!
 * Mark Jet::containsBottom and Jet::containsCharm as deprecated methods: use the new methods. Analyses updated.
 * Add Jet::bTagged(), Jet::cTagged() and Jet::tauTagged() as ghost-assoc-based replacements for the 'contains' tagging methods.

2014-09-17 Andy Buckley

 * Adding support for 1D and 3D YODA scatters, and helper methods for calling the efficiency, asymm and 2D histo divide functions.

2014-09-12 Andy Buckley

 * Adding 5 new ATLAS analyses:
    ATLAS_2011_I921594: Inclusive isolated prompt photon analysis with full 2010 LHC data
    ATLAS_2013_I1263495: Inclusive isolated prompt photon analysis with 2011 LHC data
    ATLAS_2014_I1279489: Measurements of electroweak production of dijets + $Z$ boson, and distributions sensitive to vector boson fusion
    ATLAS_2014_I1282441: The differential production cross section of the $\phi(1020)$ meson in $\sqrt{s}=7$ TeV $pp$ collisions measured with the ATLAS detector
    ATLAS_2014_I1298811: Leading jet underlying event at 7 TeV in ATLAS
 * Adding a median(vector<NUM>) function and fixing the other stats functions to operate on vector<NUM> rather than vector<int>.

2014-09-03 Andy Buckley

 * Fix wrong behaviour of LorentzTransform with a null boost vector -- thanks to Michael Grosse.
2014-08-26 Andy Buckley

 * Add calc() methods to Hemispheres as requested, to allow it to be used with Jet or FourMomentum inputs outside the normal projection system.

2014-08-17 Andy Buckley

 * Improvements to the particles methods on ParticleFinder/FinalState, in particular adding the range of cuts arguments cf. JetAlg (and tweaking the sorted jets equivalent) and returning as a copy rather than a reference if cut/sorted, to avoid accidentally messing up the cached copy.
 * Creating ParticleFinder projection base class, and moving Particles-accessing methods from FinalState into it.
 * Adding basic forms of MC_ELECTRONS, MC_MUONS, and MC_TAUS analyses.

2014-08-15 Andy Buckley

 * Version bump to 2.2.0beta1 for use at BOOST and MCnet school.

2014-08-13 Andy Buckley

 * New analyses:
    ATLAS_2014_I1268975 (high mass dijet cross-section at 7 TeV)
    ATLAS_2014_I1304688 (jet multiplicity and pT at 7 TeV)
    ATLAS_2014_I1307756 (scalar diphoton resonance search at 8 TeV -- no histograms!)
    CMSTOTEM_2014_I1294140 (charged particle pseudorapidity at 8 TeV)

2014-08-09 Andy Buckley

 * Adding PromptFinalState, based on code submitted by Alex Grohsjean and Will Bell. Thanks!

2014-08-06 Andy Buckley

 * Adding MC_HFJETS and MC_JETTAGS analyses.

2014-08-05 Andy Buckley

 * Update all analyses to use the xMin/Max/Mid, xMean, xWidth, etc. methods on YODA classes rather than the deprecated lowEdge etc.
 * Merge new HasPID functor from Holger Schulz into Rivet/Tools/ParticleUtils.hh, mainly for use with the any() function in Rivet/Tools/Utils.hh

2014-08-04 Andy Buckley

 * Add ghost tagging of charms, bottoms and taus to FastJets, and tag info accessors to Jet.
 * Add constructors from and cast operators to FastJet's PseudoJet object from Particle and Jet.
 * Convert inRange to not use fuzzy comparisons on closed intervals, providing the old version as fuzzyInRange.

2014-07-30 Andy Buckley

 * Remove classifier functions accepting a Particle from the PID inner namespace.
2014-07-29 Andy Buckley

 * MC_JetAnalysis.cc: re-enable +- ratios for eta and y, now that YODA divide doesn't throw an exception.
 * ATLAS_2012_I1093734: fix a loop index error which led to the first bin value being unfilled for half the dphi plots.
 * Fix accidental passing of a GenParticle pointer as a PID code int in HeavyHadrons.cc. Effect limited to incorrect deductions about excited HF decay chains and should be small. Thanks to Tomasz Przedzinski for finding and reporting the issue during HepMC3 design work!

2014-07-23 Andy Buckley

 * Fix to logspace: make sure that start and end values are exact, not the result of exp(log(x)).

2014-07-16 Andy Buckley

 * Fix setting of library paths for doc building: Python can't influence the dynamic loader in its own process by setting an environment variable, because the loader only looks at the variable once, when it starts.

2014-07-02 Andy Buckley

 * rivet-cmphistos now uses the generic yoda.read() function rather than readYODA() -- AIDA files can also be compared and plotted directly now.

2014-06-24 Andy Buckley

 * Add stupid missing include and std:: prefix in Rivet.hh

2014-06-20 Holger Schulz

 * bin/make-plots: Automatic generation of minor xtick labels if LogX is requested but data resides e.g. in [200, 700]. Fixes m_12 plots of e.g. ATLAS_2010_S8817804.

2014-06-17 David Grellscheid

 * pyext/rivet/Makefile.am: 'make distcheck' and out-of-source builds should work now.

2014-06-10 Andy Buckley

 * Fix use of the install command for bash completion installation on Macs.

2014-06-07 Andy Buckley

 * Removing direct includes of MathUtils.hh and others from analysis code files.

2014-06-02 Andy Buckley

 * Rivet 2.1.2 release!

2014-05-30 Andy Buckley

 * Using Particle absrap(), abseta() and abspid() where automatic conversion was feasible.
 * Adding a few extra kinematics mappings to ParticleBase.
 * Adding p3() accessors to the 3-momentum on FourMomentum, Particle, and Jet.
 * Using Jet and Particle kinematics methods directly (without momentum()) where possible.
 * More tweaks to make-plots 2D histo parsing behaviour.

2014-05-30 Holger Schulz

 * Actually fill the XQ 2D histo; .plot decorations.
 * Have make-plots produce colourmaps using YODA_3D_SCATTER objects. Remove the grid in colourmaps.
 * Some tweaks for the SFM analysis, trying to contact Martin Wunsch who did the unfolding back then.

2014-05-29 Holger Schulz

 * Re-enable 2D histo in MC_PDFS

2014-05-28 Andy Buckley

 * Updating analysis and projection routines to use Particle::pid() by preference to Particle::pdgId(), and Particle::abspid() by preference to abs(Particle::pdgId()), etc.
 * Adding interfacing of smart pointer types and booking etc. for YODA 2D histograms and profiles.
 * Improving ParticleIdUtils and ParticleUtils functions based on merging of improved function collections from MCUtils, and dropping the compiled ParticleIdUtils.cc file.

2014-05-27 Andy Buckley

 * Adding CMS_2012_I1090423 (dijet angular distributions), CMS_2013_I1256943 (Zbb xsec and angular correlations), CMS_2013_I1261026 (jet and UE properties vs. Nch) and D0_2000_I499943 (bbbar production xsec and angular correlations).

2014-05-26 Andy Buckley

 * Fixing a bug in plot file handling, and adding a texpand() routine to rivet.util, to be used to expand some 'standard' physics TeX macros.
 * Adding ATLAS_2012_I1124167 (min bias event shapes), ATLAS_2012_I1203852 (ZZ cross-section), and ATLAS_2013_I1190187 (WW cross-section) analyses.

2014-05-16 Andy Buckley

 * Adding any(iterable, fn) and all(iterable, fn) template functions for convenience.

2014-05-15 Holger Schulz

 * Fix some bugs in identified hadron PIDs in OPAL_1998_S3749908.

2014-05-13 Andy Buckley

 * Writing out [UNVALIDATED], [PRELIMINARY], etc. in the --list-analyses output if analysis is not VALIDATED.

2014-05-12 Andy Buckley

 * Adding CMS_2013_I1265659 colour coherence analysis.

2014-05-07 Andy Buckley

 * Bug fixes in CMS_2013_I1209721 from Giulio Lenzi.
 * Fixing compiler warnings from clang, including one which indicated a misapplied cut bug in CDF_2006_S6653332.

2014-05-05 Andy Buckley

 * Fix missing abs() in Particle::abspid()!!!!

2014-04-14 Andy Buckley

 * Adding the namespace protection workaround for Boost described at http://www.boost.org/doc/libs/1_55_0/doc/html/foreach.html

2014-04-13 Andy Buckley

 * Adding a rivet.pc template file and installation rule for pkg-config to use.
 * Updating data/refdata/ALEPH_2001_S4656318.yoda to corrected version in HepData.

2014-03-27 Andy Buckley

 * Flattening PNG output of make-plots (i.e. no transparency) and other tweaks.

2014-03-23 Andy Buckley

 * Renaming the internal meta-particle class in DressedLeptons (and exposed in the W/ZFinders) from ClusteredLepton to DressedLepton, for consistency with the change in name of its containing class.
 * Removing need for cmake and unportable yaml-cpp trickery by using libtool to build an embedded symbol-mangled copy of yaml-cpp rather than trying to mangle and build direct from the tarball.

2014-03-10 Andy Buckley

 * Rivet 2.1.1 release.

2014-03-07 Andy Buckley

 * Adding ATLAS multilepton search (no ref data file), ATLAS_2012_I1204447.

2014-03-05 Andy Buckley

 * Also renaming Breit-Wigner functions to cdfBW, invcdfBW and bwspace.
 * Renaming index_between() to the more Rivety binIndex(), since that's the only real use of such a function... plus a bit of SFINAE type relaxation trickery.

2014-03-04 Andy Buckley

 * Adding programmatic access to final histograms via AnalysisHandler::getData().
 * Adding CMS 4 jet correlations analysis, CMS_2013_I1273574.
 * Adding CMS W + 2 jet double parton scattering analysis, CMS_2013_I1272853.
 * Adding ATLAS isolated diphoton measurement, ATLAS_2012_I1199269.
 * Improving the index_between function so the numeric types don't have to exactly match.
 * Adding better momentum comparison functors and sortBy, sortByX functions to use them easily on containers of Particle, Jet, and FourMomentum.
2014-02-10 Andy Buckley

 * Removing duplicate and unused ParticleBase sorting functors.
 * Removing unused HT increment and units in ATLAS_2012_I1180197 (unvalidated SUSY).
 * Fixing photon isolation logic bug in CMS_2013_I1258128 (Z rapidity).
 * Replacing internal uses of #include "Rivet/Rivet.hh" with "Rivet/Config/RivetCommon.hh", removing the MAXRAPIDITY const, and repurposing Rivet/Rivet.hh as a convenience include for external API users.
 * Adding isStable, children, allDescendants, stableDescendants, and flightLength functions to Particle.
 * Replacing Particle and Jet deltaX functions with generic ones on ParticleBase, and adding deltaRap variants.
 * Adding a Jet.fhh forward declaration header, including fastjet::PseudoJet.
 * Adding a RivetCommon.hh header to allow Rivet.hh to be used externally.
 * Fixing HeavyHadrons to apply pT cuts if specified.

2014-02-06 Andy Buckley

 * 2.1.0 release!

2014-02-05 Andy Buckley

 * Protect against invalid prefix value if the --prefix configure option is unused.

2014-02-03 Andy Buckley

 * Adding the ATLAS_2012_I1093734 fwd-bwd / azimuthal minbias correlations analysis.
 * Adding the LHCB_2013_I1208105 forward energy flow analysis.

2014-01-31 Andy Buckley

 * Checking the YODA minimum version in the configure script.
 * Fixing the JADE_OPAL analysis ycut values to the midpoints, thanks to information from Christoph Pahl / Stefan Kluth.

2014-01-29 Andy Buckley

 * Removing unused/overrestrictive Isolation* headers.

2014-01-27 Andy Buckley

 * Re-bundling yaml-cpp, now built as a mangled static lib based on the LHAPDF6 experience.
 * Throw a UserError rather than an assert if AnalysisHandler::init is called more than once.

2014-01-25 David Grellscheid

 * src/Core/Cuts.cc: New Cuts machinery, already used in FinalState. Old-style "mineta, maxeta, minpt" constructors kept around for ease of transition. Minimal set of convenience functions available, like EtaIn(), should be expanded as needed.
2014-01-22 Andy Buckley

 * configure.ac: Remove opportunistic C++11 build, until this becomes mandatory (in version 2.2.0?). Anyone who wants C++11 can explicitly set the CXXFLAGS (and DYLDFLAGS for pre-Mavericks Macs).

2014-01-21 Leif Lonnblad

 * src/Core/Analysis.cc: Fixed bug in Analysis::isCompatible where an 'abs' was left out when checking that beam energies do not differ by more than 1 GeV.
 * src/Analyses/CMS_2011_S8978280.cc: Fixed checking of beam energy and booking corresponding histograms.

2013-12-19 Andy Buckley

 * Adding pid() and abspid() methods to Particle.
 * Adding hasCharm and hasBottom methods to Particle.
 * Adding a sorting functor arg version of the ZFinder::constituents() method.
 * Adding pTmin cut accessors to HeavyHadrons.
 * Tweak to the WFinder constructor to place the target W (trans) mass argument last.

2013-12-18 Andy Buckley

 * Adding a GenParticle* cast operator to Particle, removing the Particle and Jet copies of the momentum cmp functors, and general tidying/improvement/unification of the momentum properties of jets and particles.

2013-12-17 Andy Buckley

 * Using SFINAE techniques to improve the math util functions.
 * Adding isNeutrino to ParticleIdUtils, and isHadron/isMeson/isBaryon/isLepton/isNeutrino methods to Particle.
 * Adding a FourMomentum cast operator to ParticleBase, so that Particle and Jet objects can be used directly as FourMomentums.

2013-12-16 Andy Buckley

 * LeptonClusters renamed to DressedLeptons.
 * Adding singular particle accessor functions to WFinder and ZFinder.
 * Removing ClusteredPhotons and converting ATLAS_2010_S8919674.

2013-12-12 Andy Buckley

 * Fixing a problem with --disable-analyses (thanks to David Hall)
 * Require FastJet version 3.
 * Bumped version to 2.1.0a0
 * Adding -DNDEBUG to the default build flags, unless in --enable-debug mode.
 * Adding a special treatment of RIVET_*_PATH variables: if they end in :: the default search paths will not be appended.
   Used primarily to restrict the doc builds to look only inside the build dirs, but potentially also useful in other special circumstances.
 * Adding a definition of exec_prefix to rivet-buildplugin.
 * Adding -DNDEBUG to the default non-debug build flags.

2013-11-27 Andy Buckley

 * Removing accidentally still-present no-as-needed linker flag from rivet-config.
 * Lots of analysis clean-up and migration to use new features and W/Z finder APIs.
 * More momentum method forwarding on ParticleBase and adding abseta(), absrap() etc. functions.
 * Adding the DEFAULT_RIVET_ANA_CONSTRUCTOR cosmetic macro.
 * Adding deltaRap() etc. function variations.
 * Adding no-decay photon clustering option to WFinder and ZFinder, and replacing opaque bool args with enums.
 * Adding an option for ignoring photons from hadron/tau decays in LeptonClusters.

2013-11-22 Andy Buckley

 * Adding Particle::fromBottom/Charm/Tau() members. LHCb were already mocking this up, so it seemed sensible to add it to the interface as a more popular (and even less dangerous) version of hasAncestor().
 * Adding an empty() member to the JetAlg interface.

2013-11-07 Andy Buckley

 * Adding the GSL lib path to the library path in the env scripts and the rivet-config --ldflags output.

2013-10-25 Andy Buckley

 * 2.0.0 release!!!!!!

2013-10-24 Andy Buckley

 * Supporting zsh completion via bash completion compatibility.

2013-10-22 Andy Buckley

 * Updating the manual to describe YODA rather than AIDA and the new rivet-cmphistos script.
 * bin/make-plots: Adding paths to error messages in histogram combination.
 * CDF_2005_S6217184: fixes to low stats errors and final scatter plot binning.

2013-10-21 Andy Buckley

 * Several small fixes in jet shape analyses, SFM_1984, etc. found in the last H++ validation run.

2013-10-18 Andy Buckley

 * Updates to configure and the rivetenv scripts to try harder to discover YODA.
2013-09-26 Andy Buckley

 * Now bundling Cython-generated files in the tarballs, so Cython is not a build requirement for non-developers.

2013-09-24 Andy Buckley

 * Removing unnecessary uses of a momentum() indirection when accessing kinematic variables.
 * Clean-up in Jet, Particle, and ParticleBase, in particular splitting PID functions on Particle from those on PID codes, and adding convenience kinematic functions to ParticleBase.

2013-09-23 Andy Buckley

 * Add the -avoid-version flag to libtool.
 * Final analysis histogramming issues resolved.

2013-08-16 Andy Buckley

 * Adding a ConnectBins flag in make-plots, to decide whether to connect adjacent, gapless bins with a vertical line. Enabled by default (good for the step-histo default look of MC lines), but now rivet-cmphistos disables it for the reference data.

2013-08-14 Andy Buckley

 * Making 2.0.0beta3 -- just a few analysis migration issues remaining, but it's worth making another beta since there were lots of framework fixes/improvements.

2013-08-11 Andy Buckley

 * ARGUS_1993_S2669951 also fixed using scatter autobooking.
 * Fixing remaining issues with booking in BABAR_2007_S7266081 using the feature below (far nicer than hard-coding).
 * Adding a copy_pts param to some Analysis::bookScatter2D methods: pre-setting the points with x values is sometimes genuinely useful.

2013-07-26 Andy Buckley

 * Removed the (officially) obsolete CDF 2008 LEADINGJETS and NOTE_9351 underlying event analyses -- superseded by the proper versions of these analyses based on the final combined paper.
 * Removed the semi-demo Multiplicity projection -- only the EXAMPLE analysis and the trivial ALEPH_1991_S2435284 needed adaptation.

2013-07-24 Andy Buckley

 * Adding a rejection of histo paths containing /TMP/ from the writeData function. Use this to handle booked temporary histograms... for now.

2013-07-23 Andy Buckley

 * Make rivet-cmphistos _not_ draw a ratio plot if there is only one line.
 * Improvements and fixes to HepData lookup with rivet-mkanalysis.

2013-07-22 Andy Buckley

 * Add -std=c++11 or -std=c++0x to the Rivet compiler flags if supported.
 * Various fixes to analyses with non-zero numerical diffs.

2013-06-12 Andy Buckley

 * Adding a new HeavyHadrons projection.
 * Adding optional extra include_end args to logspace() and linspace().

2013-06-11 Andy Buckley

 * Moving Rivet/RivetXXX.hh tools headers into Rivet/Tools/.
 * Adding PrimaryHadrons projection.
 * Adding particles_in/out functions on GenParticle to RivetHepMC.
 * Moved STL extensions from Utils.hh to RivetSTL.hh and tidying.
 * Tidying, improving, extending, and documenting in RivetSTL.hh.
 * Adding a #include of Logging.hh into Projection.hh, and removing unnecessary #includes from all Projection headers.

2013-06-10 Andy Buckley

 * Moving htmlify() and detex() Python functions into rivet.util.
 * Add HepData URL for Inspire ID lookup to the rivet script.
 * Fix analyses' info files which accidentally listed the Inspire ID under the SpiresID metadata key.

2013-06-07 Andy Buckley

 * Updating mk-analysis-html script to produce MathJax output.
 * Adding a version of Analysis::removeAnalysisObject() which takes an AO pointer as its argument.
 * bin/rivet: Adding pandoc-based conversion of TeX summary and description strings to plain text on the terminal output.
 * Add MathJax to rivet-mkhtml output, set up so the .info entries should render ok.
 * Mark the OPAL 1993 analysis as UNVALIDATED: from the H++ benchmark runs it looks nothing like the data, and there are some outstanding ambiguities.

2013-06-06 Andy Buckley

 * Releasing 2.0.0b2 beta version.

2013-06-05 Andy Buckley

 * Renaming Analysis::add() etc. to very explicit addAnalysisObject(), sorting out shared_pointer polymorphism issues via the Boost dynamic_pointer_cast, and adding a full set of getHisto1D(), etc. explicitly named and typed accessors, including ones with HepData dataset/axis ID signatures.
 * Adding histo booking from an explicit reference Scatter2D (and more placeholders for 2D histos / 3D scatters) and rewriting existing autobooking to use this.
 * Converting inappropriate uses of size_t to unsigned int in Analysis.
 * Moving Analysis::addPlot to Analysis::add() (or reg()?) and adding get() and remove() (or unreg()?).
 * Fixing attempted abstraction of import fallbacks in rivet.util.import_ET().
 * Removing broken attempt at histoDir() caching which led to all histograms being registered under the same analysis name.

2013-06-04 Andy Buckley

 * Updating the Cython version requirement to 0.18.
 * Adding Analysis::integrate() functions and tidying the Analysis.hh file a bit.

2013-06-03 Andy Buckley

 * Adding explicit protection against using inf/nan scalefactors in ATLAS_2011_S9131140 and H1_2000_S4129130.
 * Making Analysis::scale noisily complain about invalid scalefactors.

2013-05-31 Andy Buckley

 * Reducing the TeX main memory to ~500MB. Turns out that it *can* be too large with new versions of TeXLive!

2013-05-30 Andy Buckley

 * Reverting bookScatter2D behaviour to never look at ref data, and updating a few affected analyses. This should fix some bugs with doubled datapoints introduced by the previous behaviour+addPoint.
 * Adding a couple of minor Utils.hh and MathUtils.hh features.

2013-05-29 Andy Buckley

 * Removing Constraints.hh header.
 * Minor bugfixes and improvements in Scatter2D booking and MC_JetAnalysis.

2013-05-28 Andy Buckley

 * Removing defunct HistoFormat.hh and HistoHandler.{hh,cc}

2013-05-27 Andy Buckley

 * Removing includes of Logging.hh, RivetYODA.hh, and ParticleIdUtils.hh from analyses (and adding an include of ParticleIdUtils.hh to Analysis.hh).
 * Removing now-unused .fhh files.
 * Removing lots of unnecessary .fhh includes from core classes: everything still compiling ok. A good opportunity to tidy this up before the release.
 * Moving the rivet-completion script from the data dir to bin (the completion is for scripts in bin, and this makes development easier).
 * Updating bash completion scripts for YODA format and compare-histos -> rivet-cmphistos.

2013-05-23 Andy Buckley

 * Adding Doxy comments and a couple of useful convenience functions to Utils.hh.
 * Final tweaks to ATLAS ttbar jet veto analysis (checked logic with Kiran Joshi).

2013-05-15 Andy Buckley

 * Many 1.0 -> weight bugfixes in ATLAS_2011_I945498.
 * yaml-cpp v3 support re-introduced in .info parsing.
 * Lots of analysis clean-ups for YODA TODO issues.

2013-05-13 Andy Buckley

 * Analysis histo booking improvements for Scatter2D, placeholders for 2D histos, and general tidying.

2013-05-12 Andy Buckley

 * Adding configure-time differentiation between yaml-cpp API versions 3 and 5.

2013-05-07 Andy Buckley

 * Converting info file reading to use the yaml-cpp 0.5.x API.

2013-04-12 Andy Buckley

 * Tagging as 2.0.0b1

2013-04-04 Andy Buckley

 * Removing bundling of yaml-cpp: it needs to be installed by the user / bootstrap script from now on.

2013-04-03 Andy Buckley

 * Removing svn:external m4 directory, and converting Boost detection to use better boost.m4 macros.

2013-03-22 Andy Buckley

 * Moving PID consts to the PID namespace and corresponding code updates and opportunistic clean-ups.
 * Adding Particle::fromDecay() method.

2013-03-09 Andy Buckley

 * Version bump to 2.0.0b1 in anticipation of first beta release.
 * Adding many more 'popular' particle ID code named-consts and aliases, and updating the RapScheme enum with ETA -> ETARAP, and fixing affected analyses (plus other opportunistic tidying / minor bug-fixing).
 * Fixing a symbol misnaming in ATLAS_2012_I1119557.

2013-03-07 Andy Buckley

 * Renaming existing uses of ParticleVector to the new 'Particles' type.
 * Updating util classes, projections, and analyses to deal with the HepMC return value changes.
 * Adding new Particle(const GenParticle*) constructor.
 * Converting Particle::genParticle() to return a const pointer rather than a reference, for the same reason as below (+ consistency within Rivet and with the HepMC pointer-centric coding design).
 * Converting Event to use a different implementation of original and modified GenParticles, and to manage the memory in a more future-proof way. Event::genParticle() now returns a const pointer rather than a reference, to signal that the user is leaving the happy pastures of 'normal' Rivet behind.
 * Adding a Particles typedef by analogy to Jets, and in preference to the cumbersome ParticleVector.
 * bin/: Lots of tidying/pruning of messy/defunct scripts.
 * Creating spiresbib, util, and plotinfo rivet python module submodules: this eliminates lighthisto and the standalone spiresbib modules. Util contains convenience functions for Python version testing, clean ElementTree import, and process renaming, for primary use by the rivet-* scripts.
 * Removing defunct scripts that have been replaced/obsoleted by YODA.

2013-03-06 Andy Buckley

 * Fixing doc build so that the reference histos and titles are ~correctly documented. We may want to truncate some of the lists!

2013-03-06 Hendrik Hoeth

 * Added ATLAS_2012_I1125575 analysis
 * Converted rivet-mkhtml to yoda
 * Introduced rivet-cmphistos as yoda based replacement for compare-histos

2013-03-05 Andy Buckley

 * Replacing all AIDA ref data with YODA versions.
 * Fixing the histograms entries in the documentation to be tolerant to plotinfo loading failures.
 * Making the findDatafile() function primarily find YODA data files, then fall back to AIDA. The ref data loader will use the appropriate YODA format reader.

2013-02-05 David Grellscheid

 * include/Rivet/Math/MathUtils.hh: added BWspace bin edge method to give equal-area Breit-Wigner bins

2013-02-01 Andy Buckley

 * Adding an element to the PhiMapping enum and a new mapAngle(angle, mapping) function.
 * Fixes to Vector3::azimuthalAngle and Vector3::polarAngle calculation (using the mapAngle functions).

2013-01-25 Frank Siegert

 * Split MC_*JETS analyses into three separate bits: MC_*INC (inclusive properties), MC_*JETS (jet properties), MC_*KTSPLITTINGS (kT splitting scales).

2013-01-22 Hendrik Hoeth

 * Fix TeX variable in the rivetenv scripts, especially for csh

2012-12-21 Andy Buckley

 * Version 1.8.2 release!

2012-12-20 Andy Buckley

 * Adding ATLAS_2012_I1119557 analysis (from Roman Lysak and Lily Asquith).

2012-12-18 Andy Buckley

 * Adding TOTEM_2012_002 analysis, from Sercan Sen.

2012-12-18 Hendrik Hoeth

 * Added CMS_2011_I954992 analysis

2012-12-17 Hendrik Hoeth

 * Added CMS_2012_I1193338 analysis
 * Fixed xi cut in ATLAS_2011_I894867

2012-12-17 Andy Buckley

 * Adding analysis descriptions to the HTML analysis page ToC.

2012-12-14 Hendrik Hoeth

 * Added CMS_2012_PAS_FWD_11_003 analysis
 * Added LHCB_2012_I1119400 analysis

2012-12-12 Andy Buckley

 * Correction to jet acceptance in CMS_2011_S9120041, from Sercan Sen: thanks!

2012-12-12 Hendrik Hoeth

 * Added CMS_2012_PAS_QCD_11_010 analysis

2012-12-07 Andy Buckley

 * Version number bump to 1.8.2 -- release approaching.
 * Rewrite of ALICE_2012_I1181770 analysis to make it a bit more sane and acceptable.
 * Adding a note on FourVector and FourMomentum that operator- and operator-= invert both the space and time components: use of -= can result in a vector with negative energy.
 * Adding particlesByRapidity and particlesByAbsRapidity to FinalState.

2012-12-07 Hendrik Hoeth

 * Added ALICE_2012_I1181770 analysis
 * Bump version to 1.8.2

2012-12-06 Hendrik Hoeth

 * Added ATLAS_2012_I1188891 analysis
 * Added ATLAS_2012_I1118269 analysis
 * Added CMS_2012_I1184941 analysis
 * Added LHCB_2010_I867355 analysis
 * Added TGraphErrors support to root2flat

2012-11-27 Andy Buckley

 * Converting CMS_2012_I1102908 analysis to use YODA.
 * Adding XLabel and YLabel setting in histo/profile/scatter booking.
2012-11-27 Hendrik Hoeth

 * Fix make-plots png creation for SL5

2012-11-23 Peter Richardson

 * Added ATLAS_2012_CONF_2012_153 4-lepton SUSY search

2012-11-17 Andy Buckley

 * Adding MC_PHOTONS by Steve Lloyd and AB, for testing general unisolated photon properties, especially those associated with charged leptons (e and mu).

2012-11-16 Andy Buckley

 * Adding MC_PRINTEVENT, a convenient (but verbose!) analysis for printing out event details to stdout.

2012-11-15 Andy Buckley

 * Removing the long-unused/defunct autopackage system.

2012-11-15 Hendrik Hoeth

 * Added LHCF_2012_I1115479 analysis
 * Added ATLAS_2011_I894867 analysis

2012-11-14 Hendrik Hoeth

 * Added CMS_2012_I1102908 analysis

2012-11-14 Andy Buckley

 * Converting the argument order of logspace, clarifying the arguments, updating affected code, and removing Analysis::logBinEdges.
 * Merging updates from the AIDA maintenance branch up to r4002 (latest revision for next merges is r4009).

2012-11-11 Andy Buckley

 * include/Math/: Various numerical fixes to Vector3::angle and changing the 4-vector mass treatment to permit spacelike virtualities (in some cases even the fuzzy isZero assert check was being violated). The angle check allows a clean-up of some workaround code in MC_VH2BB.

2012-10-15 Hendrik Hoeth

 * Added CMS_2012_I1107658 analysis

2012-10-11 Hendrik Hoeth

 * Added CDF_2012_NOTE10874 analysis

2012-10-04 Hendrik Hoeth

 * Added ATLAS_2012_I1183818 analysis

2012-07-17 Hendrik Hoeth

 * Cleanup and multiple fixes in CMS_2011_S9120041
 * Bugfixes in ALEPH_2004_S5765862 and ATLAS_2010_CONF_2010_049 (thanks to Anil Pratap)

2012-08-09 Andy Buckley

 * Fixing aida2root command-line help message and converting to TH* rather than TGraph by default.

2012-07-24 Andy Buckley

 * Improvements/migrations to rivet-mkhtml, rivet-mkanalysis, and rivet-buildplugin.

2012-07-17 Hendrik Hoeth

 * Add CMS_2012_I1087342

2012-07-12 Andy Buckley

 * Fix rivet-mkanalysis a bit for YODA compatibility.

2012-07-05 Hendrik Hoeth

 * Version 1.8.1!
2012-07-05 Holger Schulz

 * Add ATLAS_2011_I945498

2012-07-03 Hendrik Hoeth

 * Bugfix for transverse mass (thanks to Gavin Hesketh)

2012-06-29 Hendrik Hoeth

 * Merge YODA branch into trunk. YODA is alive!!!!!!

2012-06-26 Holger Schulz

 * Add ATLAS_2012_I1091481

2012-06-20 Hendrik Hoeth

 * Added D0_2011_I895662: 3-jet mass

2012-04-24 Hendrik Hoeth

 * fixed a few bugs in rivet-rmgaps
 * Added new TOTEM dN/deta analysis

2012-03-19 Andy Buckley

 * Version 1.8.0!
 * src/Projections/UnstableFinalState.cc: Fix compiler warning.
 * Version bump for testing: 1.8.0beta1.
 * src/Core/AnalysisInfo.cc: Add printout of YAML parser exception error messages to aid debugging.
 * bin/Makefile.am: Attempt to fix rivet-nopy build on SLC5.
 * src/Analyses/LHCB_2010_S8758301.cc: Add two missing entries to the PDGID -> lifetime map.
 * src/Projections/UnstableFinalState.cc: Extend list of vetoed particles to include reggeons.

2012-03-16 Andy Buckley

 * Version change to 1.8.0beta0 -- nearly ready for long-awaited release!
 * pyext/setup.py.in: Adding handling for the YAML library: fix for Genser build from Anton Karneyeu.
 * src/Analyses/LHCB_2011_I917009.cc: Hiding lifetime-lookup error message if the offending particle is not a hadron.
 * include/Rivet/Math/MathHeader.hh: Using unnamespaced std::isnan and std::isinf as standard.

2012-03-16 Hendrik Hoeth

 * Improve default plot behaviour for 2D histograms

2012-03-15 Hendrik Hoeth

 * Make ATLAS_2012_I1084540 less verbose, and general code cleanup of that analysis.
 * New-style plugin hook in ATLAS_2011_I926145, ATLAS_2011_I944826 and ATLAS_2012_I1084540
 * Fix compiler warnings in ATLAS_2011_I944826 and CMS_2011_S8973270
 * CMS_2011_S8941262: Weights are double, not int.
 * disable inRange() tests in test/testMath.cc until we have a proper fix for the compiler warnings we see on SL5.

2012-03-07 Andy Buckley

 * Marking ATLAS_2011_I919017 as VALIDATED (this should have happened a long time ago) and adding more references.
2012-02-28 Hendrik Hoeth

 * lighthisto.py: Caching for re.compile(). This speeds up aida2flat and flat2aida by more than an order of magnitude.

2012-02-27 Andy Buckley

 * doc/mk-analysis-html: Adding more LaTeX/text -> HTML conversion replacements, including better <,> handling.

2012-02-26 Andy Buckley

 * Adding CMS_2011_S8973270, CMS_2011_S8941262, CMS_2011_S9215166, CMS_QCD_10_024, from CMS.
 * Adding LHCB_2011_I917009 analysis, from Alex Grecu.
 * src/Core/Analysis.cc, include/Rivet/Analysis.hh: Add a numeric-arg version of histoPath().

2012-02-24 Holger Schulz

 * Adding ATLAS Ks/Lambda analysis.

2012-02-20 Andy Buckley

 * src/Analyses/ATLAS_2011_I925932.cc: Using new overflow-aware normalize() in place of counters and scale(..., 1/count)

2012-02-14 Andy Buckley

 * Splitting MC_GENERIC to put the PDF and PID plotting into MC_PDFS and MC_IDENTIFIED respectively.
 * Renaming MC_LEADINGJETS to MC_LEADJETUE.

2012-02-14 Hendrik Hoeth

 * DELPHI_1996_S3430090 and ALEPH_1996_S3486095: fix rapidity vs {Thrust,Sphericity}-axis.

2012-02-14 Andy Buckley

 * bin/compare-histos: Don't attempt to remove bins from MC histos where they aren't found in the ref file, if the ref file is not expt data, or if the new --no-rmgapbins arg is given.
 * bin/rivet: Remove the conversion of requested analysis names to upper-case: mixed-case analysis names will now work.

2012-02-14 Frank Siegert

 * Bugfixes and improvements for MC_TTBAR:
   - Avoid assert failure with logspace starting at 0.0
   - Ignore charged lepton in jet finding (otherwise jet multi is always +1).
   - Add some dR/deta/dphi distributions as noted in TODO
   - Change pT plots to logspace as well (to avoid low-stat high pT bins)

2012-02-10 Hendrik Hoeth

 * rivet-mkhtml -c option now has the semantics of a .plot file. The contents are appended to the dat output by compare-histos.
2012-02-09 David Grellscheid

 * Fixed broken UnstableFS behaviour

2012-01-25 Frank Siegert

 * Improvements in make-plots:
   - Add PlotTickLabels and RatioPlotTickLabels options (cf. make-plots.txt)
   - Make ErrorBars and ErrorBands non-exclusive (and change their order, such that Bars are on top of Bands)

2012-01-25 Holger Schulz

 * Add ATLAS diffractive gap analysis

2012-01-23 Andy Buckley

 * bin/rivet: When using --list-analyses, the analysis summary is now printed out when log level is <= INFO, rather than < INFO. The effect on command line behaviour is that useful identifying info is now printed by default when using --list-analyses, rather than requiring --list-analyses -v. To get the old behaviour, e.g. if using the output of rivet --list-analyses for scripting, now use --list-analyses -q.

2012-01-22 Andy Buckley

 * Tidying lighthisto, including fixing the order in which +- error values are passed to the Bin constructor in fromFlatHisto.

2012-01-16 Frank Siegert

 * Bugfix in ATLAS_2012_I1083318: Include non-signal neutrinos in jet clustering.
 * Add first version of ATLAS_2012_I1083318 (W+jets). Still UNVALIDATED until final happiness with validation plots arises and data is in Hepdata.
 * Bugfix in ATLAS_2010_S8919674: Really use neutrino with highest pT for Etmiss. Doesn't seem to make very much difference, but is more correct in principle.

2012-01-16 Peter Richardson

 * Fixes to ATLAS_2011_S9225137 to include reference data

2012-01-13 Holger Schulz

 * Add ATLAS inclusive lepton analysis

2012-01-12 Hendrik Hoeth

 * Font selection support in rivet-mkhtml

2012-01-11 Peter Richardson

 * Added pi0 to list of particles.

2012-01-11 Andy Buckley

 * Removing references to Boost random numbers.

2011-12-30 Andy Buckley

 * Adding a placeholder rivet-which script (not currently installed).
 * Tweaking to avoid a very time-consuming debug printout in compare-histos with the -v flag, and modifying the Rivet env vars in rivet-mkhtml before calling compare-histos to eliminate problems induced by relative paths (i.e. "." does not mean the same thing when the directory is changed within the script).

2011-12-12 Andy Buckley

 * Adding a command line completion function for rivet-mkhtml.

2011-12-12 Frank Siegert

 * Fix for factor of 2.0 in normalisation of CMS_2011_S9086218
 * Add --ignore-missing option to rivet-mkhtml to ignore non-existing AIDA files.

2011-12-06 Andy Buckley

 * Include underflow and overflow bins in the normalisation when calling Analysis::normalise(h)

2011-11-23 Andy Buckley

 * Bumping version to 1.8.0alpha0 since the Jet interface changes are quite a major break with backward compatibility (although the vast majority of analyses should be unaffected).
 * Removing crufty legacy stuff from the Jet class -- there is never any ambiguity between whether Particle or FourMomentum objects are the constituents now, and the jet 4-momentum is set explicitly by the jet alg so that e.g. there is no mismatch if the FastJet pt recombination scheme is used.
 * Adding default do-nothing implementations of Analysis::init() and Analysis::finalize(), since it is possible for analysis implementations to not need to do anything in these methods, and forcing analysis authors to write do-nothing boilerplate code is not "the Rivet way"!

2011-11-19 Andy Buckley

 * Adding variant constructors to FastJets with a more natural Plugin* argument, and decrufting the constructor implementations a bit.
 * bin/rivet: Adding a more helpful error message if the rivet module can't be loaded, grouping the option parser options, removing the -A option (this just doesn't seem useful anymore), and providing a --pwd option as a shortcut to append "." to the search path.

2011-11-18 Andy Buckley

 * Adding a guide to compiling a new analysis template to the output message of rivet-mkanalysis.
2011-11-11 Andy Buckley

 * Version 1.7.0 release!
 * Protecting the OPAL 2004 analysis against NaNs in the hemispheres projection -- I can't track the origin of these and suspect some occasional memory corruption.

2011-11-09 Andy Buckley

 * Renaming source files for EXAMPLE and PDG_HADRON_MULTIPLICITIES(_RATIOS) analyses to match the analysis names.
 * Cosmetic fixes in ATLAS_2011_S9212183 SUSY analysis.
 * Adding new ATLAS W pT analysis from Elena Yatsenko (slightly adapted).

2011-10-20 Frank Siegert

 * Extend API of W/ZFinder to allow for specification of input final state in which to search for leptons/photons.

2011-10-19 Andy Buckley

 * Adding new version of LHCB_2010_S8758301, based on submission from Alex Grecu. There is some slightly dodgy-looking GenParticle* fiddling going on, but apparently it's necessary (and hopefully robust).

2011-10-17 Andy Buckley

 * bin/rivet-nopy linker line tweak to make compilation work with GCC 4.6 (-lHepMC has to be explicitly added for some reason).

2011-10-13 Frank Siegert

 * Add four CMS QCD analyses kindly provided by CMS.

2011-10-12 Andy Buckley

 * Adding a separate test program for non-matrix/vector math functions, and adding a new set of int/float literal arg tests for the inRange functions in it.
 * Adding a jet multiplicity plot for jets with pT > 30 GeV to MC_TTBAR.

2011-10-11 Andy Buckley

 * Removing SVertex.

2011-10-11 James Monk

 * root2flat was missing the first bin (plus spurious last bin)
 * My version of bash does not understand the pipe syntax |& in rivet-buildplugin

2011-09-30 James Monk

 * Fix bug in ATLAS_2010_S8817804 that misidentified the akt4 jets as akt6

2011-09-29 Andy Buckley

 * Converting FinalStateHCM to a slightly more general DISFinalState.

2011-09-26 Andy Buckley

 * Adding a default libname argument to rivet-buildplugin. If the first argument doesn't have a .so library suffix, then use RivetAnalysis.so as the default.

2011-09-19 Hendrik Hoeth

 * make-plots: Fixing regex for \physicscoor.
   Adding "FrameColor" option.

2011-09-17 Andy Buckley

 * Improving interactive metadata printout, by not printing headings for missing info.
 * Bumping the release number to 1.7.0alpha0, since with these SPIRES/Inspire changes and the MissingMomentum API change we need more than a minor release.
 * Updating the mkanalysis, BibTeX-grabbing and other places that care about analysis SPIRES IDs to also be able to handle the new Inspire system record IDs. The missing link is getting to HepData from an Inspire code...
 * Using the .info file rather than an in-code declaration to specify that an analysis needs cross-section information.
 * Adding Inspire support to the AnalysisInfo and Analysis interfaces. Maybe we can find a way to combine the two, e.g. return the SPIRES code prefixed with an "S" if no Inspire ID is available...

2011-09-17 Hendrik Hoeth

 * Added ALICE_2011_S8909580 (strange particle production at 900 GeV)
 * Feed-down correction in ALICE_2011_S8945144

2011-09-16 Andy Buckley

 * Adding ATLAS track jet analysis, modified from the version provided by Seth Zenz: ATLAS_2011_I919017. Note that this analysis is currently using the Inspire ID rather than the Spires one: we're clearly going to have to update the API to handle Inspire codes, so might as well start now...

2011-09-14 Andy Buckley

 * Adding the ATLAS Z pT measurement at 7 TeV (ATLAS_2011_S9131140) and an MC analysis for VH->bb events (MC_VH2BB).

2011-09-12 Andy Buckley

 * Removing uses of getLog, cout, cerr, and endl from all standard analyses and projections, except in a very few special cases.

2011-09-10 Andy Buckley

 * Changing the behaviour and interface of the MissingMomentum projection to calculate vector ET correctly. This was previously calculated according to the common definition of -E*sin(theta) of the summed visible 4-momentum in the event, but that is incorrect because the timelike term grows monotonically.
   Instead, transverse 2-vectors of size ET need to be constructed for each visible particle, and vector-summed in the transverse plane.
   The rewrite of this behaviour made it opportune to make an API improvement: the previous method names scalarET/vectorET() have been renamed to scalar/vectorEt() to better match the Rivet FourMomentum::Et() method, and MissingMomentum::vectorEt() now returns a Vector3 rather than a double so that the transverse missing Et direction is also available.
   Only one data analysis has been affected by this change in behaviour: the D0_2004_S5992206 dijet delta(phi) analysis. It's expected that this change will not be very significant, as it is a *veto* on significant missing ET to reduce non-QCD contributions. MC studies using this analysis ~always run with QCD events only, so these contributions should be small. The analysis efficiency may have been greatly improved, as fewer events will now fail the missing ET veto cut.
 * Add sorting of the ParticleVector returned by the ChargedLeptons projection.
 * configure.ac: Adding a check to make sure that no-one tries to install into --prefix=$PWD.

2011-09-04 Andy Buckley

 * lighthisto fixes from Christian Roehr.

2011-08-26 Andy Buckley

 * Removing deprecated features: the setBeams(...) method on Analysis, the MaxRapidity constant, the split(...) function, the default init() method from AnalysisHandler and its test, and the deprecated TotalVisibleMomentum and PVertex projections.

2011-08-23 Andy Buckley

 * Adding a new DECLARE_RIVET_PLUGIN wrapper macro to hide the details of the plugin hook system from analysis authors. Migration of all analyses and the rivet-mkanalysis script to use this as the standard plugin hook syntax.
 * Also call the --cflags option on root-config when using the --root option with rivet-buildplugin (thanks to Richard Corke for the report)

2011-08-23 Frank Siegert

 * Added ATLAS_2011_S9126244
 * Added ATLAS_2011_S9128077

2011-08-23 Hendrik Hoeth

 * Added ALICE_2011_S8945144
 * Remove obsolete setBeams() from the analyses
 * Update CMS_2011_S8957746 reference data to the official numbers
 * Use Inspire rather than Spires.

2011-08-19 Frank Siegert

 * More NLO parton level generator friendliness: Don't crash or fail when there are no beam particles.
 * Add --ignore-beams option to skip compatibility check.

2011-08-09 David Mallows

 * Fix aida2flat to ignore empty dataPointSet

2011-08-07 Andy Buckley

 * Adding TEXINPUTS and LATEXINPUTS prepend definitions to the variables provided by rivetenv.(c)sh. A manual setting of these variables that didn't include the Rivet TEXMFHOME path was breaking make-plots on lxplus, presumably since the system LaTeX packages are so old there.

2011-08-02 Frank Siegert

 * Version 1.6.0 release!

2011-08-01 Frank Siegert

 * Overhaul of the WFinder and ZFinder projections, including a change of interface. This solves potential problems with leptons which are not W/Z constituents being excluded from the RemainingFinalState.

2011-07-29 Andy Buckley

 * Version 1.5.2 release!
 * New version of aida2root from James Monk.

2011-07-29 Frank Siegert

 * Fix implementation of --config file option in make-plots.

2011-07-27 David Mallows

 * Updated MC_TTBAR.plot to reflect updated analysis.

2011-07-25 Andy Buckley

 * Adding a useTransverseMass flag method and implementation to InvMassFinalState, and using it in the WFinder, after feedback from Gavin Hesketh. This was the neatest way I could do it :S Some other tidying up happened along the way.
 * Adding transverse mass massT and massT2 methods and functions for FourMomentum.
2011-07-22 Frank Siegert

 * Added ATLAS_2011_S9120807
 * Add two more observables to MC_DIPHOTON and make its isolation cut more LHC-like
 * Add linear photon pT histo to MC_PHOTONJETS

2011-07-20 Andy Buckley

 * Making MC_TTBAR work with semileptonic ttbar events and generally tidying the code.

2011-07-19 Andy Buckley

 * Version bump to 1.5.2.b01 in preparation for a release in the very near future.

2011-07-18 David Mallows

 * Replaced MC_TTBAR: Added t,tbar reconstruction. Not yet working.

2011-07-18 Andy Buckley

 * bin/rivet-buildplugin.in: Pass the AM_CXXFLAGS variable (including the warning flags) to the C++ compiler when building user analysis plugins.
 * include/LWH/DataPointSet.h: Fix accidental setting of errorMinus = scalefactor * errorPlus. Thanks to Anton Karneyeu for the bug report!

2011-07-18 Hendrik Hoeth

 * Added CMS_2011_S8884919 (charged hadron multiplicity in NSD events corrected to pT>=0).
 * Added CMS_2010_S8656010 and CMS_2010_S8547297 (charged hadron pT and eta in NSD events)
 * Added CMS_2011_S8968497 (chi_dijet)
 * Added CMS_2011_S8978280 (strangeness)

2011-07-13 Andy Buckley

 * Rivet PDF manual updates, to not spread disinformation about bootstrapping a Genser repo.

2011-07-12 Andy Buckley

 * bin/make-plots: Protect property reading against unstripped \r characters from DOS newlines.
 * bin/rivet-mkhtml: Add a -M unmatch regex flag (note that these are matching the analysis path rather than individual histos on this script), and speed up the initial analysis identification and selection by avoiding loops of regex comparisons for repeats of strings which have already been analysed.
 * bin/compare-histos: remove the completely (?) unused histogram list, and add -m and -M regex flags, as for aida2flat and flat2aida.

2011-06-30 Hendrik Hoeth

 * fix fromFlat() in lighthisto: It would ignore histogram paths before.
 * flat2aida: preserve histogram order from .dat files

2011-06-27 Andy Buckley

 * pyext/setup.py.in: Use CXXFLAGS and LDFLAGS safely in the Python extension build, and improve the use of build/src directory arguments.

2011-06-23 Andy Buckley

 * Adding a tentative rivet-updateanalyses script, based on lhapdf-getdata, which will download new analyses as requested. We could change our analysis-providing behaviour a bit to allow this sort of delivery mechanism to be used as the normal way of getting analysis updates without us having to make a whole new Rivet release. It is nice to be able to identify analyses with releases, though, for tracking whether bugs have been addressed.

2011-06-10 Frank Siegert

 * Bugfixes in WFinder.

2011-06-10 Andy Buckley

 * Adding \physicsxcoor and \physicsycoor treatment to make-plots.

2011-06-06 Hendrik Hoeth

 * Allow for negative cross-sections. NLO tools need this.
 * make-plots: For RatioPlotMode=deviation also consider the MC uncertainties, not just data.

2011-06-04 Hendrik Hoeth

 * Add support for goodness-of-fit calculations to make-plots. The results are shown in the legend, and one histogram can be selected to determine the color of the plot margin. See the documentation for more details.

2011-06-04 Andy Buckley

 * Adding auto conversion of Histogram2D to DataPointSets in the AnalysisHandler _normalizeTree method.

2011-06-03 Andy Buckley

 * Adding a file-weight feature to the Run object, which will optionally rescale the weights in the provided HepMC files. This should be useful for e.g. running on multiple differently-weighted AlpGen HepMC files/streams. The new functionality is used by the rivet command via an optional weight appended to the filename with a colon delimiter, e.g. "rivet fifo1.hepmc fifo2.hepmc:2.31"

2011-06-01 Hendrik Hoeth

 * Add BeamThrust projection

2011-05-31 Hendrik Hoeth

 * Fix LIBS for fastjet-3.0
 * Add basic infrastructure for Taylor plots in make-plots
 * Fix OPAL_2004_S6132243: They are using charged+neutral.
 * Release 1.5.1

2011-05-22 Andy Buckley

 * Adding plots of stable and decayed PID multiplicities to MC_GENERIC (useful for sanity-checking generator setups).
 * Removing actually-unused ProjectionApplier.fhh forward declaration header.

2011-05-20 Andy Buckley

 * Removing import of ipython shell from rivet-rescale, having just seen it throw a multi-coloured warning message on a student's lxplus Rivet session!
 * Adding support for the compare-histos --no-ratio flag when using rivet-mkhtml. Adding --rel-ratio, --linear, etc. is an exercise for the enthusiast ;-)

2011-05-10 Andy Buckley

 * Internal minor changes to the ProjectionHandler and ProjectionApplier interfaces, in particular changing the ProjectionHandler::create() function to be called getInstance and to return a reference rather than a pointer. The reference change is to make way for an improved singleton implementation, which cannot yet be used due to a bug in projection memory management. The code of the improved singleton is available, but commented out, in ProjectionManager.hh to allow for easier migration and to avoid branching.

2011-05-08 Andy Buckley

 * Extending flat2aida to be able to read from and write to stdin/out as for aida2flat, and also eliminating the internal histo parsing representation in favour of the one in lighthisto. lighthisto's fromFlat also needed a bit of an overhaul: it has been extended to parse each histo's chunk of text (including BEGIN and END lines) in fromFlatHisto, and for fromFlat to parse a collection of histos from a file, in keeping with the behaviour of fromDPS/fromAIDA. Merging into Professor is now needed.
 * Extending aida2flat to have a better usage message, to accept input from stdin for command chaining via pipes, and to be a bit more sensibly internally structured (although it also now has to hold all histos in memory before writing out -- that shouldn't be a problem for anything other than truly huge histo files).
2011-05-04 Andy Buckley

 * compare-histos: If using --mc-errs style, prefer dotted and dashdotted line styles to dashed, since dashes are often too long to be distinguishable from solid lines. Even better might be to always use a solid line for MC errs style, and to add more colours.
 * rivet-mkhtml: use a no-mc-errors drawing style by default, to match the behaviour of compare-histos, which it calls. The --no-mc-errs option has been replaced with an --mc-errs option.

2011-05-04 Hendrik Hoeth

 * Ignore duplicate files in compare-histos.

2011-04-25 Andy Buckley

 * Adding some hadron-specific N and sumET vs. |eta| plots to MC_GENERIC.
 * Re-adding an explicit attempt to get the beam particles, since HepMC's IO_HERWIG seems to not always set them even though it's meant to.

2011-04-19 Hendrik Hoeth

 * Added ATLAS_2011_S9002537 W asymmetry analysis

2011-04-14 Hendrik Hoeth

 * deltaR, deltaPhi, deltaEta now available in all combinations of FourVector, FourMomentum, Vector3, doubles. They also accept jets and particles as arguments now.

2011-04-13 David Grellscheid

 * added ATLAS 8983313: 0-lepton BSM

2011-04-01 Andy Buckley

 * bin/rivet-mkanalysis: Don't try to download SPIRES or HepData info if it's not a standard analysis (i.e. if the SPIRES ID is not known), and make the default .info file validly parseable by YAML, which was an unfortunate gotcha for anyone writing a first analysis.

2011-03-31 Andy Buckley

 * bin/compare-histos: Write more appropriate ratio plot labels when not comparing to data, and use the default make-plots labels when comparing to data.
 * bin/rivet-mkhtml: Adding a timestamp to the generated pages, and a -t/--title option to allow setting the main HTML page title on the command line: otherwise it becomes impossible to tell these pages apart when you have a lot of them, except by URL!

2011-03-24 Andy Buckley

 * bin/aida2flat: Adding a -M option to *exclude* histograms whose paths match a regex.
   Writing a negative lookahead regex with positive matching was far too awkward!

2011-03-18 Leif Lonnblad

 * src/Core/AnalysisHandler.cc (AnalysisHandler::removeAnalysis): Fixed strange shared pointer assignment that caused seg-fault.

2011-03-13 Hendrik Hoeth

 * filling of functions works now in a more intuitive way (I hope).

2011-03-09 Andy Buckley

 * Version 1.5.0 release!

2011-03-08 Andy Buckley

 * Adding some extra checks for external packages in make-plots.

2011-03-07 Andy Buckley

 * Changing the accuracy of the beam energy checking to 1%, to make the UI a bit more forgiving. It's still best to specify exactly the right energy of course!

2011-03-01 Andy Buckley

 * Adding --no-plottitle to compare-histos (+ completion).
 * Fixing segfaults in UA1_1990_S2044935 and UA5_1982_S875503.
 * Bump ABI version numbers for 1.5.0 release.
 * Use AnalysisInfo for storage of the NeedsCrossSection analysis flag.
 * Allow field setting in AnalysisInfo.

2011-02-27 Hendrik Hoeth

 * Support LineStyle=dashdotted in make-plots
 * New command line option --style for compare-histos. Options are "default", "bw" and "talk".
 * cleaner uninstall

2011-02-26 Andy Buckley

 * Changing internal storage and return type of Particle::pdgId() to PdgId, and adding Particle::energy().
 * Renaming Analysis::energies() as Analysis::requiredEnergies().
 * Adding beam energies into beam consistency checking: Analysis::isCompatible methods now also require the beam energies to be provided.
 * Removing long-deprecated AnalysisHandler::init() constructor and AnalysisHandler::removeIncompatibleAnalyses() methods.

2011-02-25 Andy Buckley

 * Adding --disable-obsolete, which takes its value from the value of --disable-preliminary by default.
 * Replacing RivetUnvalidated and RivetPreliminary plugin libraries with optionally-configured analysis contents in the experiment-specific plugin libraries. This avoids issues with making libraries rebuild consistently when sources were reassigned between libraries.
2011-02-24 Andy Buckley

 * Changing analysis plugin registration to fall back through available paths rather than have RIVET_ANALYSIS_PATH totally override the built-in paths. The first analysis hook of a given name to be found is now the one that's used: any duplicates found will be warned about but unused. getAnalysisLibPaths() now returns *all* the search paths, in keeping with the new search behaviour.

2011-02-22 Andy Buckley

 * Moving the definition of the MSG_* macros into the Logging.hh header. They can't be used everywhere, though, as they depend on the existence of a this->getLog() method in the call scope. This move makes them available in e.g. AnalysisHandler and other bits of framework other than projections and analyses.
 * Adding a gentle print-out from the Rivet AnalysisHandler if preliminary analyses are being used, and strengthening the current warning if unvalidated analyses are used.
 * Adding documentation about the validation "process" and the (un)validated and preliminary analysis statuses.
 * Adding the new RivetPreliminary analysis library, and the corresponding --disable-preliminary configure flag. Analyses in this library are subject to change names, histograms, reference data values, etc. between releases: make sure you check any dependences on these analyses when upgrading Rivet.
 * Change the Python script ref data search behaviours to use Rivet ref data by default where available, rather than requiring a -R option. Where relevant, -R is still a valid option, to avoid breaking legacy scripts, and there is a new --no-rivet-refs option to turn the default searching *off*.
 * Add the prepending and appending optional arguments to the path searching functions. This will make it easier to combine the search functions with user-supplied paths in Python scripts.
 * Make make-plots killable!
 * Adding Rivet version to top of run printout.
 * Adding Run::crossSection() and printing out the cross-section in pb at the end of a Rivet run.
2011-02-22 Hendrik Hoeth

 * Make lighthisto.py aware of 2D histograms
 * Adding published versions of the CDF_2008 leading jets and DY analyses, and marking the preliminary ones as "OBSOLETE".

2011-02-21 Andy Buckley

 * Adding PDF documentation for path searching and .info/.plot files, and tidying overfull lines.
 * Removing unneeded const declarations from various return by value path and internal binning functions. Should not affect ABI compatibility but will force recompilation of external packages using the RivetPaths.hh and Utils.hh headers.
 * Adding findAnalysis*File(fname) functions, to be used by Rivet scripts and external programs to find files known to Rivet according to Rivet's (newly standard) lookup rule.
 * Changing search path function behaviour to always return *all* search directories rather than overriding the built-in locations if the environment variables are set.

2011-02-20 Andy Buckley

 * Adding the ATLAS 2011 transverse jet shapes analysis.

2011-02-18 Hendrik Hoeth

 * Support for transparency in make-plots

2011-02-18 Frank Siegert

 * Added ATLAS prompt photon analysis ATLAS_2010_S8914702

2011-02-10 Hendrik Hoeth

 * Simple NOOP constructor for Thrust projection
 * Add CMS event shape analysis. Data read off the plots. We will get the final numbers when the paper is accepted by the journal.

2011-02-10 Frank Siegert

 * Add final version of ATLAS dijet azimuthal decorrelation

2011-02-10 Hendrik Hoeth

 * remove ATLAS conf note analyses for which we have final data
 * reshuffle histograms in ATLAS minbias analysis to match Hepdata
 * small pT-cut fix in ATLAS track based UE analysis

2011-01-31 Andy Buckley

 * Doc tweaks and adding cmp-by-|p| functions for Jets, to match those added by Hendrik for Particles.
 * Don't sum photons around muons in the D0 2010 Z pT analysis.

2011-01-27 Andy Buckley

 * Adding ATLAS 2010 min bias and underlying event analyses and data.

2011-01-23 Andy Buckley

 * Make make-plots write out PDF rather than PS by default.
2011-01-12 Andy Buckley

 * Fix several rendering and comparison bugs in rivet-mkhtml.
 * Allow make-plots to write into an existing directory, at the user's own risk.
 * Make rivet-mkhtml produce PDF-based output rather than PS by default (most people want PDF these days). Can we do the same change of default for make-plots?
 * Add getAnalysisPlotPaths() function, and use it in compare-histos
 * Use proper .info file search path function internally in AnalysisInfo::make.

2011-01-11 Andy Buckley

 * Clean up ATLAS dijet analysis.

2010-12-30 Andy Buckley

 * Adding a run timeout option, and small bug-fixes to the event timeout handling, and making first event timeout work nicely with the run timeout. Run timeout is intended to be used in conjunction with timed batch token expiry, of the type that likes to make 0 byte AIDA files on LCG when Grid proxies time out.

2010-12-21 Andy Buckley

 * Fix the cuts in the CDF 1994 colour coherence analysis.

2010-12-19 Andy Buckley

 * Fixing CDF midpoint cone jet algorithm default construction to have an overlap threshold of 0.5 rather than 0.75. This was recommended by the FastJet manual, and noticed while adding the ATLAS and CMS cones.
 * Adding ATLAS and CMS old iterative cones as "official" FastJets constructor options (they could always have been used by explicit instantiation and attachment of a Fastjet plugin object).
 * Removing defunct and unused ClosestJetShape projection.

2010-12-16 Andy Buckley

 * bin/compare-histos, pyext/lighthisto.py: Take ref paths from rivet module API rather than getting the environment by hand.
 * pyext/lighthisto.py: Only read .plot info from the first matching file (speed-up compare-histos).

2010-12-14 Andy Buckley

 * Augmenting the physics vector functionality to make FourMomentum support maths operators with the correct return type (FourMomentum rather than FourVector).
2010-12-11 Andy Buckley

 * Adding a --event-timeout option to control the event timeout, adding it to the completion script, and making sure that the init time check is turned OFF once successful!
 * Adding a 3600 second timeout for initialising an event file. If it takes longer than (or anywhere close to) this long, chances are that the event source is inactive for some reason (perhaps accidentally unspecified and stdin is not active, or the event generator has died at the other end of the pipe). The reason for not making it something shorter is that e.g. Herwig++ or Sherpa can have long initialisation times to set up the MPI handler or to run the matrix element integration. A timeout after an hour is still better than a batch job which runs for two days before you realise that you forgot to generate any events!

2010-12-10 Andy Buckley

 * Fixing unbooked-histo segfault in UA1_1990_S2044935 at 63 GeV.

2010-12-08 Hendrik Hoeth

 * Fixes in ATLAS_2010_CONF_083, declaring it validated
 * Added ATLAS_2010_CONF_046, only two plots are implemented. The paper will be out soon, and we don't need the other plots right now. Data is read off the plots in the note.
 * New option "SmoothLine" for HISTOGRAM sections in make-plots
 * Changed CustomTicks to CustomMajorTicks and added CustomMinorTicks in make-plots.

2010-12-07 Andy Buckley

 * Update the documentation to explain this latest bump to path lookup behaviours.
 * Various improvements to existing path lookups. In particular, the analysis lib path locations are added to the info and ref paths to avoid having to set three variables when you have all three file types in the same personal plugin directory.
 * Adding setAnalysisLibPaths and addAnalysisLibPath functions. rivet --analysis-path{,-append} now use these and work correctly. Hurrah!
 * Add --show-analyses as an alias for --show-analysis, following a comment at the ATLAS tutorial.

2010-12-07 Hendrik Hoeth

 * Change LegendXPos behaviour in make-plots.
Now the top left corner of the legend is used as anchor point.

2010-12-03 Andy Buckley

 * 1.4.0 release.
 * Add bin-skipping to compare-histos to avoid one use of rivet-rmgaps (it's still needed for non-plotting post-processing like Professor).

2010-12-03 Hendrik Hoeth

 * Fix normalisation issues in UA5 and ALEPH analyses

2010-11-27 Andy Buckley

 * MathUtils.hh: Adding fuzzyGtrEquals and fuzzyLessEquals, and tidying up the math utils collection a bit.
 * CDF 1994 colour coherence analysis overhauled and correction/norm factors fixed. Moved to VALIDATED status.
 * Adding programmable completion for aida2flat and flat2aida.
 * Improvements to programmable completion using the neat _filedir completion shell function which I just discovered.

2010-11-26 Andy Buckley

 * Tweak to floating point inRange to use fuzzyEquals for CLOSED interval equality comparisons.
 * Some BibTeX generation improvements, and fixing the ATLAS dijet BibTeX key.
 * Resolution upgrade in PNG make-plots output.
 * CDF_2005_S6217184.cc, CDF_2008_S7782535.cc: Updates to use the new per-jet JetAlg interface (and some other fixes).
 * JetAlg.cc: Changed the interface on request to return per-jet rather than per-event jet shapes, with an extra jet index argument.
 * MathUtils.hh: Adding index_between(...) function, which is handy for working out which bin a value falls in, given a set of bin edges.

2010-11-25 Andy Buckley

 * Cmp.hh: Adding ASC/DESC (and ANTISORTED) as preferred non-EQUIVALENT enum value synonyms over misleading SORTED/UNSORTED.
 * Change of rapidity scheme enum name to RapScheme
 * Reworking JetShape a bit further: constructor args now avoid inconsistencies (it was previously possible to define incompatible range-ends and interval). Internal binning implementation also reworked to use a vector of bin edges: the bin details are available via the interface. The general jet pT cuts can be applied via the JetShape constructor.
 * MathUtils.hh: Adding linspace and logspace utility functions.
Useful for defining binnings.
 * Adding more general cuts on jet pT and (pseudo)rapidity.

2010-11-11 Andy Buckley

 * Adding special handling of FourMomentum::mass() for computed zero-mass vectors for which mass2 can go (very slightly) negative due to numerical precision.

2010-11-10 Hendrik Hoeth

 * Adding ATLAS-CONF-2010-083 conference note. Data is read from plots. When I run Pythia 6 the bins close to pi/2 are higher than in the note, so I call this "unvalidated". But then ... the note doesn't specify a tune or even just a version for the generators in the plots. Not even if they used Pythia 6 or Pythia 8. Probably 6, since they mention AGILe.

2010-11-10 Andy Buckley

 * Adding a JetAlg::useInvisibles(bool) mechanism to allow ATLAS jet studies to include neutrinos. Anyone who chooses to use this mechanism had better be careful to remove hard neutrinos manually in the provided FinalState object.

2010-11-09 Hendrik Hoeth

 * Adding ATLAS-CONF-2010-049 conference note. Data is read from plots. Fragmentation functions look good, but I can't reproduce the MC lines (or even the relative differences between them) in the jet cross-section plots. So consider those unvalidated for now. Oh, and it seems ATLAS screwed up the error bands in their ratio plots, too. They are upside-down.

2010-11-07 Hendrik Hoeth

 * Adding ATLAS-CONF-2010-081 conference note. Data is read from plots.

2010-11-06 Andy Buckley

 * Deprecating the old JetShape projection and renaming to ClosestJetShape: the algorithm has a tenuous relationship with that actually used in the CDF (and ATLAS) jet shape analyses. CDF analyses to be migrated to the new JetShape projection... and some of that projection's features, design elements, etc. to be finished off: we may as well take this opportunity to clear up what was one of our nastiest pieces of code.

2010-11-05 Hendrik Hoeth

 * Adding ATLAS-CONF-2010-031 conference note. Data is read from plots.
2010-10-29 Andy Buckley

 * Making rivet-buildplugin use the same C++ compiler and CXXFLAGS variable as used for the Rivet system build.
 * Fixing NeutralFinalState projection to, erm, actually select neutral particles (by Hendrik).
 * Allow passing a general FinalState reference to the JetShape projection, rather than requiring a VetoedFS.

2010-10-07 Andy Buckley

 * Adding a --with-root flag to rivet-buildplugin to add root-config --libs flags to the plugin build command.

2010-09-24 Andy Buckley

 * Releasing as Rivet 1.3.0.
 * Bundling underscore.sty to fix problems with running make-plots on dat files generated by compare-histos from AIDA files with underscores in their names.

2010-09-16 Andy Buckley

 * Fix error in N_effective definition for weighted profile errors.

2010-08-18 Andy Buckley

 * Adding MC_GENERIC analysis. NB. Frank Siegert also added MC_HJETS.

2010-08-03 Andy Buckley

 * Fixing compare-histos treatment of what is now a ref file, and speeding things up... again. What a mess!

2010-08-02 Andy Buckley

 * Adding rivet-nopy: a super-simple Rivet C++ command line interface which avoids Python to make profiling and debugging easier.
 * Adding graceful exception handling to the AnalysisHandler event loop methods.
 * Changing compare-histos behaviour to always show plots for which there is at least one MC histo. The default behaviour should now be the correct one in 99% of use cases.

2010-07-30 Andy Buckley

 * Merging in a fix for shared_ptrs not being compared for insertion into a set based on raw pointer value.

2010-07-16 Andy Buckley

 * Adding an explicit library dependency declaration on libHepMC, and hence removing the -lHepMC from the rivet-config --libs output.

2010-07-14 Andy Buckley

 * Adding a manual section on use of Rivet (and AGILe) as libraries, and how to use the -config scripts to aid compilation.
 * FastJets projection now allows setting of a jet area definition, plus a hacky mapping for getting the area-enabled cluster sequence.
Requested by Pavel Starovoitov & Paolo Francavilla.
 * Lots of script updates in last two weeks!

2010-06-30 Andy Buckley

 * Minimising amount of Log class mapped into SWIG.
 * Making Python ext build checks fail with error rather than warning if it has been requested (or, rather, not explicitly disabled).

2010-06-28 Andy Buckley

 * Converting rivet Python module to be a package, with the dlopen flag setting etc. done around the SWIG generated core wrapper module (rivet.rivetwrap).
 * Requiring Python >= 2.4.0 in rivet scripts (and adding a Python version checker function to rivet module)
 * Adding --epspng option to make-plots (and converting to use subprocess.Popen).

2010-06-27 Andy Buckley

 * Converting JADE_OPAL analysis to use the fastjet exclusive_ymerge_*max* function, rather than just exclusive_ymerge: everything looks good now. It seems that fastjet >= 2.4.2 is needed for this to work properly.

2010-06-24 Andy Buckley

 * Making rivet-buildplugin look in its own bin directory when trying to find rivet-config.

2010-06-23 Andy Buckley

 * Adding protection and warning about numerical precision issues in jet mass calculation/histogramming to the MC_JetAnalysis analysis.
 * Numerical precision improvement in calculation of Vector4::mass2.
 * Adding relative scale ratio plot flag to compare-histos
 * Extended command completion to rivet-config, compare-histos, and make-plots.
 * Providing protected log messaging macros, MSG_{TRACE,DEBUG,INFO,WARNING,ERROR} cf. Athena.
 * Adding environment-aware functions for Rivet search path list access.

2010-06-21 Andy Buckley

 * Using .info file beam ID and energy info in HTML and LaTeX documentation.
 * Using .info file beam ID and energy info in command-line printout.
 * Fixing a couple of references to temporary variables in the analysis beam info, which had been introduced during refactoring: have reinstated reference-type returns as the more efficient solution. This should not affect API compatibility.
 * Making SWIG configure-time check include testing for incompatibilities with the C++ compiler (re. the recurring _const_ char* literals issue).
 * Various tweaks to scripts: make-plots and compare-histos processes are now renamed (on Linux), rivet-config is avoided when computing the Rivet version, and RIVET_REF_PATH is also set using the rivet --analysis-path* flags. compare-histos now uses multiple ref data paths for .aida file globbing.
 * Hendrik changed VetoedFinalState comparison to always return UNDEFINED if vetoing on the results of other FS projections is being used. This is the only simple way to avoid problems emanating from the remainingFinalState thing.

2010-06-19 Andy Buckley

 * Adding --analysis-path and --analysis-path-append command-line flags to the rivet script, as a "persistent" way to set or extend the RIVET_ANALYSIS_PATH variable.
 * Changing -Q/-V script verbosity arguments to more standard -q/-v, after Hendrik moaned about it ;)
 * Small fix to TinyXML operator precedence: removes a warning, and I think fixes a small bug.
 * Adding plotinfo entries for new jet rapidity and jet mass plots in MC_JetAnalysis derivatives.
 * Moving MC_JetAnalysis base class into a new libRivetAnalysisTools library, with analysis base class and helper headers to be stored in the reinstated Rivet/Analyses include directory.

2010-06-08 Andy Buckley

 * Removing check for CEDARSTD #define guard, since we no longer compile against AGILe and don't have to be careful about duplication.
 * Moving crappy closest approach and decay significance functions from Utils into SVertex, which is the only place they have ever been used (and is itself almost entirely pointless).
 * Overhauling particle ID <-> name system to clear up ambiguities between enums, ints, particles and beams. There are no more enums, although the names are still available as const static ints, and names are now obtained via a singleton class which wraps an STL map for name/ID lookups in both directions.
2010-05-18 Hendrik Hoeth

 * Fixing factor-of-2 bug in the error calculation when scaling histograms.
 * Fixing D0_2001_S4674421 analysis.

2010-05-11 Andy Buckley

 * Replacing TotalVisibleMomentum with MissingMomentum in analyses and WFinder. Using vector ET rather than scalar ET in some places.

2010-05-07 Andy Buckley

 * Revamping the AnalysisHandler constructor and data writing, with some LWH/AIDA mangling to bypass the stupid AIDA idea of having to specify the sole output file and format when making the data tree. Preferred AnalysisHandler constructor now takes only one arg -- the runname -- and there is a new AH.writeData(outfile) method to replace AH.commitData(). Doing this now to begin migration to more flexible histogramming in the long term.

2010-04-21 Hendrik Hoeth

 * Fixing LaTeX problems (make-plots) on ancient machines, like lxplus.

2010-04-29 Andy Buckley

 * Fixing (I hope!) the treatment of weighted profile bin errors in LWH.

2010-04-21 Andy Buckley

 * Removing defunct and unused KtJets and Configuration classes.
 * Hiding some internal details from Doxygen.
 * Add @brief Doxygen comments to all analyses, projections and core classes which were missing them.

2010-04-21 Hendrik Hoeth

 * remove obsolete reference to InitialQuarks from DELPHI_2002
 * fix normalisation in CDF_2000_S4155203

2010-04-20 Hendrik Hoeth

 * bin/make-plots: real support for 2-dim histograms plotted as colormaps, updated the documentation accordingly.
 * fix misleading help comment in configure.ac

2010-04-08 Andy Buckley

 * bin/root2flat: Adding this little helper script, minimally modified from one which Holger Schulz made for internal use in ATLAS.

2010-04-05 Andy Buckley

 * Using spiresbib in rivet-mkanalysis: analysis templates made with rivet-mkanalysis will now contain a SPIRES-dumped BibTeX key and entry if possible!
 * Adding BibKey and BibTeX entries to analysis metadata files, and updating doc build to use them rather than the time-consuming SPIRES screen-scraping.
Added SPIRES BibTeX dumps to all analysis metadata using new (uninstalled & unpackaged) doc/get-spires-data script hack.
 * Updating metadata files to add Energies, Beams and PtCuts entries to all of them.
 * Adding ToDo, NeedsCrossSection, and better treatment of Beams and Energies entries in metadata files and in AnalysisInfo and Analysis interfaces.

2010-04-03 Andy Buckley

 * Frank Siegert: Update of rivet-mkhtml to conform to improved compare-histos.
 * Frank Siegert: LWH output in precision-8 scientific notation, to solve a binning precision problem... the first time we've noticed a problem!
 * Improved treatment of data/reference datasets and labels in compare-histos.
 * Rewrite of rivet-mkanalysis in Python to make way for neat additions.
 * Improving SWIG tests, since once again the user's build system must include SWIG (no test to check that it's a 'good SWIG', since the meaning of that depends on which compiler is being used and we hope that the user system is consistent... evidence from Finkified Macs and bloody SLC5 notwithstanding).

2010-03-23 Andy Buckley

 * Tag as patch release 1.2.1.

2010-03-22 Andy Buckley

 * General tidying of return arguments and intentionally unused parameters to keep -Wextra happy (some complaints remain from TinyXML, FastJet, and HepMC).
 * Some extra bug fixes: in FastJets projection with explicit plugin argument, removing muon veto cut on FoxWolframMoments.
 * Adding UNUSED macro to help with places where compiler warnings can't be helped.
 * Turning on -Wextra warnings, and fixing some violations.

2010-03-21 Andy Buckley

 * Adding MissingMomentum projection, as replacement for ~all uses of now-deprecated TotalVisibleMomentum projection.
 * Fixing bug with TotalVisibleMomentum projection usage in MC_SUSY analysis.
 * Frank Siegert fixed major bug in pTmin param passing to FastJets projection. D'oh: requires patch release.

2010-03-02 Andy Buckley

 * Tagging for 1.2.0 release... at last!
2010-03-01 Andy Buckley

 * Updates to manual, manual generation scripts, analysis info etc.
 * Add HepData URL to metadata print-out with rivet --show-analysis
 * Fix average Et plot in UA1 analysis to only apply to the tracker acceptance (but to include neutral particle contributions, i.e. the region of the calorimeter in the tracker acceptance).
 * Use Et rather than pT in filling the scalar Et measure in TotalVisibleMomentum.
 * Fixes to UA1 normalisation (which is rather funny in the paper).

2010-02-26 Andy Buckley

 * Update WFinder to not place cuts and other restrictions on the neutrino.

2010-02-11 Andy Buckley

 * Change analysis loader behaviour to use ONLY RIVET_ANALYSIS_PATH locations if set, otherwise use ONLY the standard Rivet analysis install path. Should only impact users of personal plugin analyses, who should now explicitly set RIVET_ANALYSIS_PATH to load their analysis... and who can now create personal versions of standard analyses without getting an error message about duplicate loading.

2010-01-15 Andy Buckley

 * Add tests for "stable" heavy flavour hadrons in jets (rather than just testing for c/b hadrons in the ancestor lists of stable jet constituents)

2009-12-23 Hendrik Hoeth

 * New option "RatioPlotMode=deviation" in make-plots.

2009-12-14 Hendrik Hoeth

 * New option "MainPlot" in make-plots. For people who only want the ratio plot and nothing else.
 * New option "ConnectGaps" in make-plots. Set to 1 if you want to connect gaps in histograms with a line when ErrorBars=0. Works both in PLOT and in HISTOGRAM sections.
 * Eliminated global variables for coordinates in make-plots and enabled multithreading.

2009-12-14 Andy Buckley

 * AnalysisHandler::execute now calls AnalysisHandler::init(event) if it has not yet been initialised.
 * Adding more beam configuration features to Beam and AnalysisHandler: the setRunBeams(...) methods on the latter now allows a beam configuration for the run to be specified without using the Run class.
2009-12-11 Andy Buckley

 * Removing use of PVertex from few remaining analyses. Still used by SVertex, which is itself hardly used and could maybe be removed...

2009-12-10 Andy Buckley

 * Updating JADE_OPAL to do the histo booking in init(), since sqrtS() is now available at that stage.
 * Renaming and slightly re-engineering all MC_*_* analyses to not be collider-specific, now that the Analysis::sqrtS()/beams() methods mean that histograms can be dynamically binned.
 * Creating RivetUnvalidated.so plugin library for unvalidated analyses. Unvalidated analyses now need to be explicitly enabled with a --enable-unvalidated flag on the configure script.
 * Various min bias analyses updated and validated.

2009-12-10 Hendrik Hoeth

 * Propagate SPECIAL and HISTOGRAM sections from .plot files through compare-histos
 * STAR_2006_S6860818: vs particle mass, validate analysis

2009-12-04 Andy Buckley

 * Use scaling rather than normalising in DELPHI_1996: this is generally desirable, since normalizing to 1 for 1/sig dsig/dx observables isn't correct if any events fall outside the histo bounds.
 * Many fixes to OPAL_2004.
 * Improved Hemispheres interface to remove unnecessary consts on returned doubles, and to also return non-squared versions of (scaled) hemisphere masses.
 * Add "make pyclean" make target at the top level to make it easier for developers to clean their Python module build when the API is extended.
 * Identify use of unvalidated analyses with a warning message at runtime.
 * Providing Analysis::sqrtS() and Analysis::beams(), and making sure they're available by the time the init methods are called.

2009-12-02 Andy Buckley

 * Adding passing of first event sqrt(s) and beams to analysis handler.
 * Restructuring running to only use one HepMC input file (no-one was using multiple ones, right?) and to break down the Run class to cleanly separate the init and event loop phases. End of file is now neater.
2009-12-01 Andy Buckley

 * Adding parsing of beam types and pairs of energies from YAML.

2009-12-01 Hendrik Hoeth

 * Fixing trigger efficiency in CDF_2009_S8233977

2009-11-30 Andy Buckley

 * Using shared pointers to make I/O object memory management neater and less error-prone.
 * Adding crossSectionPerEvent() method [== crossSection()/sumOfWeights()] to Analysis. Useful for histogram scaling since numerator of sumW_passed/sumW_total (to calculate pass-cuts xsec) is cancelled by dividing histo by sumW_passed.
 * Clean-up of Particle class and provision of inline PID:: functions which take a Particle as an argument to avoid having to explicitly call the Particle::pdgId() method.

2009-11-30 Hendrik Hoeth

 * Fixing division by zero in Profile1D bin errors for bins with just a single entry.

2009-11-24 Hendrik Hoeth

 * First working version of STAR_2006_S6860818

2009-11-23 Hendrik Hoeth

 * Adding missing CDF_2001_S4751469 plots to uemerge
 * New "ShowZero" option in make-plots
 * Improving lots of plot defaults
 * Fixing typos / non-existing bins in CDF_2001_S4751469 and CDF_2004_S5839831 reference data

2009-11-19 Hendrik Hoeth

 * Fixing our compare() for doubles.

2009-11-17 Hendrik Hoeth

 * Zeroth version of STAR_2006_S6860818 analysis (identified strange particles). Not working yet for unstable particles.

2009-11-11 Andy Buckley

 * Adding separate jet-oriented and photon-oriented observables to MC PHOTONJETUE analysis.
 * Bug fix in MC leading jets analysis, and general tidying of leading jet analyses to insert units, etc. (should not affect any current results)

2009-11-10 Hendrik Hoeth

 * Fixing last issues in STAR_2006_S6500200 and setting it to VALIDATED.
 * Normalise STAR_2006_S6870392 to cross-section

2009-11-09 Andy Buckley

 * Overhaul of jet caching and ParticleBase interface.
 * Adding lists of analyses' histograms (obtained by scanning the plot info files) to the LaTeX documentation.
2009-11-07 Andy Buckley

 * Adding checking system to ensure that Projections aren't registered before the init phase of analyses.
 * Now that the ProjHandler isn't full of defunct pointers (which tend to coincidentally point to *new* Projection pointers rather than undefined memory, hence it wasn't noticed until recently!), use of a duplicate projection name is now banned with a helpful message at runtime.
 * (Huge) overhaul of ProjectionHandler system to use shared_ptr: projections are now deleted much more efficiently, naturally cleaning themselves out of the central repository as they go out of scope.

2009-11-06 Andy Buckley

 * Adding Cmp specialisation, using fuzzyEquals().

2009-11-05 Hendrik Hoeth

 * Fixing histogram division code.

2009-11-04 Hendrik Hoeth

 * New analysis STAR_2006_S6500200 (pion and proton pT spectra in pp collisions at 200 GeV). It is still unclear if they used a cut in rapidity or pseudorapidity, thus the analysis is declared "UNDER DEVELOPMENT" and "DO NOT USE".
 * Fixing compare() in NeutralFinalState and MergedFinalState

2009-11-04 Andy Buckley

 * Adding consistency checking on beam ID and sqrt(s) vs. those from first event.

2009-11-03 Andy Buckley

 * Adding more assertion checks to linear algebra testing.

2009-11-02 Hendrik Hoeth

 * Fixing normalisation issue with stacked histograms in make-plots.

2009-10-30 Hendrik Hoeth

 * CDF_2009_S8233977: Updating data and axes labels to match final paper. Normalise to cross-section instead of data.

2009-10-23 Andy Buckley

 * Fixing Cheese-3 plot in CDF 2004... at last!

2009-10-23 Hendrik Hoeth

 * Fix muon veto in CDF_1994_S2952106, CDF_2005_S6217184, CDF_2008_S7782535, and D0_2004_S5992206

2009-10-19 Andy Buckley

 * Adding analysis info files for MC SUSY and PHOTONJETUE analyses.
 * Adding MC UE analysis in photon+jet events.

2009-10-19 Hendrik Hoeth

 * Adding new NeutralFinalState projection. Note that this final state takes E_T instead of p_T as argument (makes more sense for neutral particles).
The compare() method does not yet work as expected (E_T comparison still missing).
 * Adding new MergedFinalState projection. This merges two final states, removing duplicate particles. Duplicates are identified by looking at the genParticle(), so users need to take care of any manually added particles themselves.
 * Fixing most open issues with the STAR_2009_UE_HELEN analysis. There is only one question left, regarding the away region.
 * Set the default split-merge value for SISCone in our FastJets projection to the recommended (but not Fastjet-default!) value of 0.75.

2009-10-17 Andy Buckley

 * Adding parsing of units in cross-sections passed to the "-x" flag, i.e. "-x 101 mb" is parsed internally into 1.01e11 pb.

2009-10-16 Hendrik Hoeth

 * Disabling DELPHI_2003_WUD_03_11 in the Makefiles, since I don't trust the data.
 * Getting STAR_2009_UE_HELEN to work.

2009-10-04 Andy Buckley

 * Adding triggers and other tidying to (still unvalidated) UA1_1990 analysis.
 * Fixing definition of UA5 trigger to not be intrinsically different for pp and ppbar: this is corrected for (although it takes some reading to work this out) in the 1982 paper, which I think is the only one to compare the two modes.
 * Moving projection setup and registration into init() method for remaining analyses.
 * Adding trigger implementations as projections for CDF Runs 0 & 1, and for UA5.

2009-10-01 Andy Buckley

 * Moving projection setup and registration into init() method for analyses from ALEPH, CDF and the MC_ group.
 * Adding generic SUSY validation analysis, based on plots used in ATLAS Herwig++ validation.
 * Adding sorted particle accessors to FinalState (cf. JetAlg).

2009-09-29 Andy Buckley

 * Adding optional use of args as regex match expressions with -l/--list-analyses.

2009-09-03 Andy Buckley

 * Passing GSL include path to compiler, since its absence was breaking builds on systems with no GSL installation in a standard location (such as SLC5, for some mysterious reason!)
 * Removing lib extension passing to compiler from the configure script, because Macs and Linux now both use .so extensions for the plugin analysis modules.

2009-09-02 Andy Buckley

 * Adding analysis info file path search with RIVET_DATA_PATH variable (and using this to fix doc build.)
 * Improvements to AnalysisLoader path search.
 * Moving analysis sources back into single directory, after a proletarian uprising ;)

2009-09-01 Andy Buckley

 * Adding WFinder and WAnalysis, based on Z proj and analysis, with some tidying of the Z code.
 * ClusteredPhotons now uses an IdentifiedFS to pick the photons to be looped over, and only clusters photons around *charged* signal particles.

2009-08-31 Andy Buckley

 * Splitting analyses by directory, to make it easier to disable building of particular analysis group plugin libs.
 * Removing/merging headers for all analyses except for the special MC_JetAnalysis base class.
 * Exit with an error message if addProjection is used twice from the same parent with distinct projections.

2009-08-28 Andy Buckley

 * Changed naming convention for analysis plugin libraries, since the loader has changed so much: they must now *start* with the word "Rivet" (i.e. no lib prefix).
 * Split standard plugin analyses into several plugin libraries: these will eventually move into separate subdirs for extra build convenience.
 * Started merging analysis headers into the source files, now that we can (the plugin hooks previously forbade this).
 * Replacement of analysis loader system with a new one based on ideas from ThePEG, which uses dlopen-time instantiation of templated global variables to reduce boilerplate plugin hooks to one line in analyses.

2009-07-14 Frank Siegert

 * Replacing in-source histo-booking metadata with .plot files.

2009-07-14 Andy Buckley

 * Making Python wrapper files copy into place based on bundled versions for each active HepMC interface (2.3, 2.4 & 2.5), using a new HepMC version detector test in configure.
 * Adding YAML metadata files and parser, removing same metadata from the analysis classes' source headers.

2009-07-07 Andy Buckley

 * Adding Jet::hadronicEnergy()
 * Adding VisibleFinalState and automatically using it in JetAlg projections.
 * Adding YAML parser for new metadata (and eventually ref data) files.

2009-07-02 Andy Buckley

 * Adding Jet::neutralEnergy() (and Jet::totalEnergy() for convenience/symmetry).

2009-06-25 Andy Buckley

 * Tidying and small efficiency improvements in CDF_2008_S7541902 W+jets analysis (remove unneeded second stage of jet storing, sorting the jets twice, using foreach, etc.).

2009-06-24 Andy Buckley

 * Fixing Jet's containsBottom and containsCharm methods, since B hadrons are not necessarily to be found in the final state. Discovered at the same time that HepMC::GenParticle defines a massively unhelpful copy constructor that actually loses the tree information; it would be better to hide it entirely!
 * Adding RivetHepMC.hh, which defines container-type accessors to HepMC particles and vertices, making it possible to use Boost foreach and hence avoiding the usual huge boilerplate for-loops.

2009-06-11 Andy Buckley

 * Adding --disable-pdfmanual option, to make the bootstrap a bit more robust.
 * Re-enabling D0IL in FastJets: adding 10^-10 to the pTmin removes the numerical instability!
 * Fixing CDF_2004 min/max cone analysis to use calo jets for the leading jet Et binning. Thanks to Markus Warsinsky for (re)discovering this bug: I was sure it had been fixed. I'm optimistic that this will fix the main distributions, although Swiss Cheese "minus 3" is still likely to be broken. Early tests look okay, but it'll take more stats before we can remove the "do not trust" sign.

2009-06-10 Andy Buckley

 * Providing "calc" methods so that Thrust and Sphericity projections can be used as calculators without having to use the projecting/caching system.

2009-06-09 Andy Buckley

 * 1.1.3 release!
 * More doc building and SWIG robustness tweaks.
2009-06-07 Andy Buckley * Make doc build from metadata work even before the library is installed. 2009-06-07 Hendrik Hoeth * Fix phi rotation in CDF_2008_LEADINGJETS. 2009-06-07 Andy Buckley * Disabling D0 IL midpoint cone (using CDF midpoint instead), since there seems to be a crashing bug in FastJet's implementation: we can't release that way, since ~no D0 analyses will run. 2009-06-03 Andy Buckley * Putting SWIG-generated source files under SVN control to make life easier for people who we advise to check out the SVN head version, but who don't have a sufficiently modern copy of SWIG to regenerate them. * Adding the --disable-analyses option, for people who just want to use Rivet as a framework for their own analyses. * Enabling HepMC cross-section reading, now that HepMC 2.5.0 has been released. 2009-05-23 Hendrik Hoeth * Using gsl-config to locate libgsl * Fix the paths for linking such that our own libraries are found before any system libraries, e.g. for the case that there is an outdated fastjet version installed on the system while we want to use our own up-to-date version. * Change dmerge to ymerge in the e+e- analyses using JADE or DURHAM from fastjet. That's what it is called in fastjet-2.4 now. 2009-05-18 Andy Buckley * Adding use of gsl-config in configure script. 2009-05-16 Andy Buckley * Removing argument to vetoEvent macro, since no weight subtraction is now needed. It's now just an annotated return, with built-in debug log message. * Adding an "open" FinalState, which is only calculated once per event, then used by all other FSes, avoiding the loop over non-status 1 particles. 2009-05-15 Andy Buckley * Removing incorrect setting of DPS x-errs in CDF_2008 jet shape analysis: the DPS autobooking already gets this bit right. * Using Jet rather than FastJet::PseudoJet where possible, as it means that the phi ranges match up nicely between Particle and the Jet object.
The FastJet objects are only really needed if you want to do detailed things like look at split/merge scales for e.g. diff jet rates or "y-split" analyses. * Tidying and debugging CDF jet shape analyses and jet shape plugin... ongoing, but I think I've found at least one real bug, plus a lot of stuff that can be done a lot more nicely. * Fully removing deprecated math functions and updating affected analyses. 2009-05-14 Andy Buckley * Removing redundant rotation in DISKinematics... this was a legacy of Peter using theta rather than pi-theta in his rotation. * Adding convenience phi, rho, eta, theta, and perp,perp2 methods to the 3 and 4 vector classes. 2009-05-12 Andy Buckley * Adding event auto-rotation for events with one proton... more complete approach? 2009-05-09 Hendrik Hoeth * Renaming CDF_2008_NOTE_9337 to CDF_2009_S8233977. * Numerous small bug fixes in ALEPH_1996_S3486095. * Adding data for one of the Rick-Field-style STAR UE analyses. 2009-05-08 Andy Buckley * Adding rivet-mkanalysis script, to make generating new analysis source templates easier. 2009-05-07 Andy Buckley * Adding null vector check to Vector3::azimuthalAngle(). * Fixing definition of HCM/Breit frames in DISKinematics, and adding asserts to check that the transformation is doing what it should. 2009-05-05 Andy Buckley * Removing eta and Et cuts from CDF 2000 Z pT analysis, based on our reading of the paper, and converting most of the analysis to a call of the ZFinder projection. 2009-05-05 Hendrik Hoeth * Support non-default seed_threshold in CDF cone jet algorithms. * New analyses STAR_2006_S6870392 and STAR_2008_S7993412. In STAR_2008_S7993412 only the first distribution is filled at the moment. STAR_2006_S6870392 is normalised to data instead of the Monte Carlo cross-section, since we don't have that available in the HepMC stream yet. 
2009-05-05 Andy Buckley * Changing Event wrapper to copy whole GenEvents rather than pointers, use std units if supported in HepMC, and run a placeholder function for event auto-orientation. 2009-04-28 Andy Buckley * Removing inclusion of IsolationTools header by analyses that aren't actually using the isolation tools... which is all of them. Leaving the isolation tools in place for now, as there might still be use cases for them and there's quite a lot of code there that deserves a second chance to be used! 2009-04-24 Andy Buckley * Deleting Rivet implementations of TrackJet and D0ILConeJets: the code from these has now been incorporated into FastJet 2.4.0. * Removed all mentions of the FastJet JADE patch and the HAVE_JADE preprocessor macro. * Bug fix to D0_2008_S6879055 to ensure that cuts compare to both electron and positron momenta (was just comparing against electrons, twice, probably thanks to the miracle of cut and paste). * Converting all D0 IL Cone jets to use FastJets. Involved tidying D0_2004 jet azimuthal decorrelation analysis and D0_2008_S6879055 as part of migration away from using the getLorentzJets method, and removing the D0ILConeJets header from quite a few analyses that weren't using it at all. * Updating CDF 2001 to use FastJets in place of TrackJet, and adding axis labels to its plots. * Note that ZEUS_2001_S4815815 uses the wrong jet definition: it should be a cone but currently uses kT. * Fixing CDF_2005_S6217184 to use correct (midpoint, R=0.7) jet definition. That this was using a kT definition with R=1.0 was only made obvious when the default FastJets constructor was removed. * Removing FastJets default constructor: since there are now several good (IRC safe) jet definitions available, there is no obvious safe default and analyses should have to specify which they use. * Moving FastJets constructors into implementation file to reduce recompilation dependencies, and adding new plugins.
* Ensuring that axis labels actually get written to the output data file. 2009-04-22 Andy Buckley * Adding explicit FastJet CDF jet alg overlap_threshold constructor param values, since the default value from 2.3.x is now removed in version 2.4.0. * Removing use of HepMC ThreeVector::mag method (in one place only) since this has been removed in version 2.5.0b. * Fix to hepmc.i (included by rivet.i) to ignore new HepMC 2.5.0b GenEvent stream operator. 2009-04-21 Andy Buckley * Dependency on FastJet now requires version 2.4.0 or later. Jade algorithm is now native. * Moving all analysis constructors and Projection headers from the analysis header files into their .cc implementation files, cutting header dependencies. * Removing AIDA headers: now using LWH headers only, with enhancement to use axis labels. This facility is now used by the histo booking routines, and calling the booking function versions which don't specify axis labels will result in a runtime warning. 2009-04-07 Andy Buckley * Adding $(DESTDIR) prefix to call to Python module "setup.py install" * Moving HepMC SWIG mappings into Python Rivet module for now: seems to work-around the SL type-mapping bug. 2009-04-03 Andy Buckley * Adding MC analysis for LHC UE: higher-pT replica of Tevatron 2008 leading jets study. * Adding CDF_1990 pseudorapidity analysis. * Moving CDF_2001 constructor into implementation file. * Cleaning up CDF_2008_LEADINGJETS a bit, e.g. using foreach loops. * Adding function interface for specifying axis labels in histo bookings. Currently has no effect, since AIDA doesn't seem to have a mechanism for axis labels. It really is a piece of crap. 2009-03-18 Andy Buckley * Adding docs "make upload" and other tweaks to make the doc files fit in with the Rivet website. * Improving LaTeX docs to show email addresses in printable form and to group analyses by collider or other metadata. * Adding doc script to include run info in LaTeX docs, and to make HTML docs.
* Removing WZandh projection, which wasn't generator independent and whose sole usage was already replaced by ZFinder. * Improvements to constructors of ZFinder and InvMassFS. * Changing ExampleTree to use real FS-based Z finding. 2009-03-16 Andy Buckley * Allow the -H histo file spec to give a full name if wanted. If it doesn't end in the desired extension, it will be added. * Adding --runname option (and API elements) to choose a run name to be prepended as a "top level directory" in histo paths. An empty value results in no extra TLD. 2009-03-06 Andy Buckley * Adding R=0.2 photon clustering to the electrons in the CDF 2000 Z pT analysis. 2009-03-04 Andy Buckley * Fixing use of fastjet-config to not use the user's PATH variable. * Fixing SWIG type table for HepMC object interchange. 2009-02-20 Andy Buckley * Adding use of new metadata in command line analysis querying with the rivet command, and in building the PDF Rivet manual. * Adding extended metadata methods to the Analysis interface and the Python wrapper. All standard analyses comply with this new interface. 2009-02-19 Andy Buckley * Adding usefully-scoped config headers, a Rivet::version() function which uses them, and installing the generated headers to fix "external" builds against an installed copy of Rivet. The version() function has been added to the Python wrapper. 2009-02-05 Andy Buckley * Removing ROOT dependency and linking. Woo! There's no need for this now, because the front-end accepts no histo format switch and we'll just use aida2root for output conversions. Simpler this way, and it avoids about half of our compilation bug reports from 32/64 bit ROOT build confusions. 2009-02-04 Andy Buckley * Adding automatic generation of LaTeX manual entries for the standard analyses. 2009-01-20 Andy Buckley * Removing RivetGun and TCLAP source files! 2009-01-19 Andy Buckley * Added psyco Python optimiser to rivet, make-plots and compare-histos. 
* bin/aida2root: Added "-" -> "_" mangling, following requests. 2009-01-17 Andy Buckley * 1.1.2 release. 2009-01-15 Andy Buckley * Converting Python build system to bundle SWIG output in tarball. 2009-01-14 Andy Buckley * Converting AIDA/LWH divide function to return a DPS so that bin width factors don't get all screwed up. Analyses adapted to use the new division operation (a DPS/DPS divide would also be nice... but can wait for YODA). 2009-01-06 Andy Buckley * bin/make-plots: Added --png option for making PNG output files, using 'convert' (after making a PDF --- it's a bit messy) * bin/make-plots: Added --eps option for output filtering through ps2eps. 2009-01-05 Andy Buckley * Python: reworking Python extension build to use distutils and newer m4 Python macros. Probably breaks distcheck but is otherwise more robust and platform independent (i.e. it should now work on Macs). 2008-12-19 Andy Buckley * make-plots: Multi-threaded make-plots and cleaned up the LaTeX building a bit (necessary to remove the implicit single global state). 2008-12-18 Andy Buckley * make-plots: Made LaTeX run in no-stop mode. * compare-histos: Updated to use a nicer labelling syntax on the command line and to successfully build MC-MC plots. 2008-12-16 Andy Buckley * Made LWH bin edge comparisons safe against numerical errors. * Added Particle comparison functions for sorting. * Removing most bad things from ExampleTree and tidying up. Marked WZandh projection for removal. 2008-12-03 Hendrik Hoeth * Added the two missing observables to the CDF_2008_NOTE_9337 analysis, i.e. track pT and sum(ET). There is a small difference between our MC output and the MC plots of the analysis' author, we're still waiting for the author's comments. 2008-12-02 Andy Buckley * Overloading use of a std::set in the interface, since the version of SWIG on Sci Linux doesn't have a predefined mapping for STL sets. 2008-12-02 Hendrik Hoeth * Fixed uemerge. 
The output was seriously broken by a single line of debug information in fillAbove(). Also changed uemerge output to exponential notation. * Unified ref and mc histos in compare-histos. Histos with one bin are plotted linear. Option for disabling the ratio plot. Several fixes for labels, legends, output directories, ... * Changed rivetgun's fallback directory for parameter files to $PREFIX/share/AGILe, since that's where the steering files now are. * Running aida2flat in split mode now produces make-plots compatible dat-files for direct plotting. 2008-11-28 Andy Buckley * Replaced binreloc with an upgraded and symbol-independent copy. 2008-11-25 Andy Buckley * Added searching of $RIVET_REF_PATH for AIDA reference data files. 2008-11-24 Andy Buckley * Removing "get"s and other obfuscated syntax from ProjectionApplier (Projection and Analysis) interfaces. 2008-11-21 Andy Buckley * Using new "global" Jet and V4 sorting functors in TrackJet. Looks like there was a sorting direction problem before... * Verbose mode with --list-analyses now shows descriptions as well as analysis names. * Moved data/Rivet to data/refdata and moved data/RivetGun contents to AGILe (since generator steering is no longer a Rivet thing) * Added unchecked ratio plots to D0 Run II jet + photon analysis. * Added D0 inclusive photon analysis. * Added D0 Z rapidity analysis. * Tidied up constructor interface and projection chain implementation of InvMassFinalState. * Added ~complete set of Jet and FourMomentum sorting functors. 2008-11-20 Andy Buckley * Added IdentifiedFinalState. * Moved a lot of TrackJet and Jet code into .cc files. * Fixed a caching bug in Jet: cache flag resets should never be conditional, since they are then sensitive to initialisation errors. * Added quark enum values to ParticleName. * Rationalised JetAlg interfaces somewhat, with "size()" and "jets()" methods in the interface. * Added D0 W charge asymmetry and D0 inclusive jets analyses.
2008-11-18 Andy Buckley * Adding D0 inclusive Z pT shape analysis. * Added D0 Z + jet pT and photon + jet pT spectrum analyses. * Lots of tidying up of particle, event, particle name etc. * Now the first event is used to detect the beam type and remove incompatible analyses. 2008-11-17 Andy Buckley * Added bash completion for rivetgun. * Starting to provide stand-alone call methods on projections so they can be used without the caching infrastructure. This could also be handy for unit testing. * Adding functionality (sorting function and built-in sorting schemes) to the JetAlg interface. 2008-11-10 Hendrik Hoeth * Fix floating point number output format in aida2flat and flat2aida * Added CDF_2002_S4796047: CDF Run-I charged multiplicity distribution * Renamed CDF_2008_MINBIAS to CDF_2008_NOTE_9337, since the note is publicly available now. 2008-11-10 Hendrik Hoeth * Added DELPHI_2003_WUD_03_11: Delphi 4-jet angular distributions. There is still a problem with the analysis, so don't use it yet. But I need to commit the code because my laptop is broken ... 2008-11-06 Andy Buckley * Code review: lots of tidying up of projections and analyses. * Fixes for compatibility with the LLVM C & C++ compiler. * Change of Particle interface to remove "get"-prefixed method names. 2008-11-05 Andy Buckley * Adding ability to query analysis metadata from the command line. * Example of a plugin analysis now in plugindemo, with a make check test to make sure that the plugin analysis is recognised by the command line "rivet" tool. * GCC 4.3 fix to mat-vec tests. 2008-11-04 Andy Buckley * Adding native logger control from Python interface. 2008-11-03 Andy Buckley * Adding bash_completion for rivet executable. 2008-10-31 Andy Buckley * Clean-up of histo titles and analysis code review. * Added momentum construction functions from FastJet PseudoJets. 2008-10-28 Andy Buckley * Auto-booking of histograms with a name, rather than the HepData ID 3-tuple, is now possible.
* Fix in CDF 2001 pT spectra to get the normalisations to depend on the pT_lead cutoff. 2008-10-23 Andy Buckley * rivet handles signals neatly, as for rivetgun, so that premature killing of the analysis process will still result in an analysis file. * rivet now accepts cross-section as a command line argument and, if it is missing and is required, will prompt the user for it interactively. 2008-10-22 Andy Buckley * rivet (Python interface) now can list analyses, check when adding analyses that the given names are valid, specify histo file name, and provide sensibly graded event number logging. 2008-10-20 Andy Buckley * Corrections to CDF 2004 analysis based on correspondence with Joey Huston. Min bias dbns now use whole event within |eta| < 0.7, and Cheese plots aren't filled at all if there are insufficient jets (and the correct ETlead is used). 2008-10-08 Andy Buckley * Added AnalysisHandler::commitData() method, to allow the Python interface to write out a histo file without having to know anything about the histogramming API. * Reduced SWIG interface file to just map a subset of Analysis and AnalysisHandler functionality. This will be the basis for a new command line interface. 2008-10-06 Andy Buckley * Converted FastJets plugin to use a Boost shared_pointer to the cached ClusterSequence. The nullness of the pointer is now used to indicate an empty tracks (and hence jets) set. Once FastJet natively supports empty CSeqs, we can rewrite this a bit more neatly and ditch the shared_ptr. 2008-10-02 Andy Buckley * The CDF_2004 (Acosta) data file now includes the full range of pT for the min bias data at both 630 and 1800 GeV. Previously, only the small low-pT insert plot had been entered into HepData. 2008-09-30 Andy Buckley * Lots of updates to CDF_2004 (Acosta) UE analysis, including sorting jets by E rather than Et, and factorising transverse cone code into a function so that it can be called with a random "leading jet" in min bias mode.
Min bias histos are now being trial-filled just with tracks in the transverse cones, since the paper is very unclear on this. * Discovered a serious caching problem in FastJets projection when an empty tracks vector is passed from the FinalState. Unfortunately, FastJet provides no API way to solve the problem, so we'll have to report this upstream. For now, we're solving this for CDF_2004 by explicitly vetoing events with no tracks. * Added Doxygen to the build with target "dox" * Moved detection of whether cross-section information is needed into the AnalysisHandler, with dynamic computation by scanning contained analyses. * Improved robustness of event reading to detect properly when the input file is smaller than expected. 2008-09-29 Hendrik Hoeth * New analysis: CDF_2000_S4155203 2008-09-23 Andy Buckley * rivetgun can now be built and run without AGILe. Based on a patch by Frank Siegert. 2008-09-23 Hendrik Hoeth * Some preliminary numbers for the CDF_2008_LEADINGJETS analysis (only transverse region and not all observables. But all we have now.) 2008-09-17 Andy Buckley * Breaking up the mammoth "generate" function, to make Python mapping easier, among other reasons. * Added if-zero-return-zero checking to angle mapping functions, to avoid problems where 1e-19 gets mapped on to 2 pi and then fails boundary asserts. * Added HistoHandler singleton class, which will be a central repository for holding analyses' histogram objects to be accessed via a user-chosen name. 2008-08-26 Andy Buckley * Allowing rivet-config to return combined flags. 2008-08-14 Andy Buckley * Fixed some g++ 4.3 compilation bugs, including "vector" not being a valid name for a method which returns a physics vector, since it clashes with std::vector (which we globally import). Took the opportunity to rationalise the Jet interface a bit, since "particle" was used to mean "FourMomentum", and "Particle" types required a call to "getFullParticle". 
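The if-zero-return-zero guard added to the angle-mapping functions in the 2008-09-17 entry can be sketched like this. `mapAngle0To2Pi` here is an illustrative reimplementation, not Rivet's exact code: without the guards, a tiny input such as -1e-19 wraps to exactly 2*pi after the fmod-and-shift step because of floating-point rounding, which then trips boundary asserts expecting [0, 2*pi).

```cpp
#include <cmath>

// Illustrative sketch of an angle-mapping function with the guards
// described above (not Rivet's actual implementation).
inline double mapAngle0To2Pi(double angle) {
  if (angle == 0.0) return 0.0;          // if-zero-return-zero guard
  const double twopi = 2.0 * std::acos(-1.0);
  double rtn = std::fmod(angle, twopi);  // now in (-2*pi, 2*pi)
  if (rtn < 0) rtn += twopi;             // shift into [0, 2*pi]...
  if (rtn == twopi) rtn = 0.0;           // ...and collapse the 2*pi edge case
  return rtn;                            // guaranteed in [0, 2*pi)
}
```

For an input like -1e-19, `fmod` returns -1e-19 and the shift produces twopi - 1e-19, which rounds to exactly twopi in double precision; the final guard maps that back onto 0.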
I removed the "gets" at the same time, as part of our gradual migration to a coherent naming policy. 2008-08-11 Andy Buckley * Tidying of FastJets and added new data files from HepData. 2008-08-10 James Monk * FastJets now uses user_index property of fastjet::PseudoJet to reconstruct PID information in jet contents. 2008-08-07 Andy Buckley * Reworking of param file and command line parsing. Tab characters are now handled by the parser, in a way equivalent to spaces. 2008-08-06 Andy Buckley * Added extra histos and filling to Acosta analysis - all HepData histos should now be filled, depending on sqrt{s}. Also trialling use of LaTeX math mode in titles. 2008-08-05 Andy Buckley * More data files for CDF analyses (2 x 2008, 1 x 1994), and moved the RivetGun AtlasPythia6.params file to more standard fpythia-atlas.params (and added to the install list). 2008-08-04 Andy Buckley * Reduced size of available options blocks in RivetGun help text by removing "~" negating variants (which are hardly ever used in practice) and restricting beam particles to PROTON, ANTIPROTON, ELECTRON and POSITRON. * Fixed Et sorting in Acosta analysis. 2008-08-01 Andy Buckley * Added AIDA headers to the install list, since external (plugin-type) analyses need them to be present for compilation to succeed. 2008-07-29 Andy Buckley * Fixed missing ROOT compile flags for libRivet. * Added command line repetition to logging. 2008-07-29 Hendrik Hoeth * Included the missing numbers and three more observables in the CDF_2008_NOTE_9351 analysis. 2008-07-29 Andy Buckley * Fixed wrong flags on rivet-config 2008-07-28 Hendrik Hoeth * Renamed CDF_2008_DRELLYAN to CDF_2008_NOTE_9351. Updated numbers and cuts to the final version of this CDF note. 2008-07-28 Andy Buckley * Fixed polar angle calculation to use atan2. * Added "mk" prefixes and x/setX convention to math classes.
2008-07-28 Hendrik Hoeth * Fixed definition of FourMomentum::pT (it had been returning pT2) 2008-07-27 Andy Buckley * Added better tests for Boost headers. * Added testing for -ansi, -pedantic and -Wall compiler flags. 2008-07-25 Hendrik Hoeth * Updated DELPHI_2002_069_CONF_603 according to information from the author 2008-07-17 Andy Buckley * Improvements to aida2flat: now can produce one output file per histo, and there is a -g "gnuplot mode" which comments out the YODA/make_plot headers to make the format readable by gnuplot. * Import boost::assign namespace contents into the Rivet namespace --- provides very useful intuitive collection initialising functions. 2008-07-15 Andy Buckley * Fixed missing namespace in vector/matrix testing. * Removed Boost headers: now a system dependency. * Fixed polarRadius infinite loop. 2008-07-09 Andy Buckley * Fixed definitions of mapAngleMPiToPi, etc. and used them to fix the Jet::getPhi method. * Trialling use of "foreach" loop in CDF_2004: it works! Very nice. 2008-07-08 Andy Buckley * Removed accidental reference to an "FS" projection in FinalStateHCM's compare method. rivetgun -A now works again. * Added TASSO, SLD and D0_2008 reference data. The TASSO and SLD papers aren't installed or included in the tarball since there are currently no plans to implement these analyses. * Added Rivet namespacing to vector, matrix etc. classes. This required some re-writing and the opportunity was taken to move some canonical function definitions inside the classes and to improve the header structure of the Math area. 2008-07-07 Andy Buckley * Added Rivet namespace to Units.hh and Constants.hh. * Added Doxygen "@brief" flags to analyses. * Added "RIVET_" namespacing to all header guards. * Merged Giulio Lenzi's isolation/vetoing/invmass projections and D0 2008 analysis. 2008-06-23 Jon Butterworth * Modified FastJet to fix ysplit and split and filter. * Modified ExampleTree to show how to call them.
2008-06-19 Hendrik Hoeth * Added first version of the CDF_2008_DRELLYAN analysis described on http://www-cdf.fnal.gov/physics/new/qcd/abstracts/UEinDY_08.html There is a small difference between the analysis and this implementation, but it's too small to be visible. The fpythia-cdfdrellyan.params parameter file is for this analysis. * Added first version of the CDF_2008_MINBIAS analysis described on http://www-cdf.fnal.gov/physics/new/qcd/abstracts/minbias_08.html The .aida file is read from the plots on the web and will change. I'm still discussing some open questions about the analysis with the author. 2008-06-18 Jon Butterworth * Added first versions of splitJet and filterJet methods to fastjet.cc. Not yet tested, buyer beware. 2008-06-18 Andy Buckley * Added extra sorted Jets and Pseudojets methods to FastJets, and added ptmin argument to the JetAlg getJets() method, requiring a change to TrackJet. 2008-06-13 Andy Buckley * Fixed processing of "RG" parameters to ensure that invalid iterators are never used. 2008-06-10 Andy Buckley * Updated AIDA reference files, changing "/HepData" root path to "/REF". Still missing a couple of reference files due to upstream problems with the HepData records. 2008-06-09 Andy Buckley * rivetgun now handles termination signals (SIGTERM, SIGINT and SIGHUP) gracefully, finishing the event loop and finalising histograms. This means that histograms will always get written out, even if not all the requested events have been generated. 2008-06-04 Hendrik Hoeth * Added DELPHI_2002_069_CONF_603 analysis 2008-05-30 Hendrik Hoeth * Added InitialQuarks projection * Added OPAL_1998_S3780481 analysis 2008-05-29 Andy Buckley * distcheck compatibility fixes and autotools tweaks. 2008-05-28 Andy Buckley * Converted FastJet to use Boost smart_ptr for its plugin handling, to solve double-delete errors stemming from the heap cloning of projections. * Added (a subset of) Boost headers, particularly the smart pointers.
2008-05-24 Andy Buckley * Added autopackage spec files. * Merged these changes into the trunk. * Added a registerClonedProjection(...) method to ProjectionHandler: this is needed so that cloned projections will have valid pointer entries in the ProjectionHandler repository. * Added clone() methods to all projections (need to use this, since the templated "new PROJ(proj)" approach to cloning can't handle object polymorphism). 2008-05-19 Andy Buckley * Moved projection-applying functions into ProjectionApplier base class (from which Projection and Analysis both derive). * Added Rivet-specific exceptions in place of std::runtime_error. * Removed unused HepML reference files. * Added error handling for requested analyses with wrong case convention / missing name. 2008-05-15 Hendrik Hoeth * New analysis PDG_Hadron_Multiplicities * flat2aida converter 2008-05-15 Andy Buckley * Removed unused mysterious Perl scripts! * Added RivetGun.HepMC logging of HepMC event details. 2008-05-14 Hendrik Hoeth * New analysis DELPHI_1995_S3137023. This analysis contains the xp spectra of Xi+- and Sigma(1385)+-. 2008-05-13 Andy Buckley * Improved logging interface: log levels are now integers (for cross-library compatibility) and level setting also applies to existing loggers. 2008-05-09 Andy Buckley * Improvements to robustness of ROOT checks. * Added --version flag on config scripts and rivetgun. 2008-05-06 Hendrik Hoeth * New UnstableFinalState projection which selects all hadrons, leptons and real photons including unstable particles. * In the DELPHI_1996_S3430090 analysis the multiplicities for pi+/pi- and pi0 are filled, using the UnstableFinalState projection. 2008-05-06 Andy Buckley * FastJets projection now protects against the case where no particles exist in the final state (where FastJet throws an exception). * AIDA file writing is now separated from the AnalysisHandler::finalize method...
API users can choose what to do with the histo objects, be that writing out or further processing. 2008-04-29 Andy Buckley * Increased default tolerances in floating point comparisons as they were overly stringent and valid f.p. precision errors were being treated as significant. * Implemented remainder of Acosta UE analysis. * Added proper getEtSum() to Jet. * Added Et2() member and function to FourMomentum. * Added aida2flat conversion script. * Fixed ambiguity in TrackJet algorithm as to how the iteration continues when tracks are merged into jets in the inner loop. 2008-04-28 Andy Buckley * Merged in major "ProjectionHandler" branch. Projections are now all stored centrally in the singleton ProjectionHandler container, rather than as member pointers in projections and analyses. This also affects the applyProjection mechanism, which is now available as a templated method on Analysis and Projection. Still a few wrinkles need to be worked out. * The branch changes required a comprehensive review of all existing projections and analyses: lots of tidying up of these classes, as well as the auxiliary code like math utils, has taken place. Too much to list and track, unfortunately! 2008-03-28 Andy Buckley * Started second CDF UE analysis ("Acosta"): histograms defined. * Fixed anomalous factor of 2 in LWH conversion from Profile1D to DataPointSet. * Added pT distribution histos to CDF 2001 UE analysis. 2008-03-26 Andy Buckley * Removed charged+neutral versions of histograms and projections from DELPHI analysis since they just duplicate the more robust charged-only measurements and aren't really of interest for tuning. 2008-03-10 Andy Buckley * Profile histograms now use error computation with proper weighting, as described here: http://en.wikipedia.org/wiki/Weighted_average 2008-02-28 Andy Buckley * Added --enable-jade flag for Professor studies with patched FastJet. * Minor fixes to LCG tag generator and gfilt m4 macros. 
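The weighted profile-bin error computation referenced in the 2008-03-10 entry can be sketched as below. This uses the common "effective entries" convention for the error on a weighted mean; the struct and method names are hypothetical and the exact formula Rivet adopted is an assumption here.

```cpp
#include <algorithm>
#include <cmath>

// Accumulator for one profile-histogram bin: weighted mean of y and a
// standard error on that mean via the effective-entries convention.
struct ProfileBin {
  double sumW = 0, sumW2 = 0, sumWY = 0, sumWY2 = 0;
  void fill(double y, double w = 1.0) {
    sumW += w; sumW2 += w * w; sumWY += w * y; sumWY2 += w * y * y;
  }
  double mean() const { return sumWY / sumW; }
  double stdErr() const {
    const double var = sumWY2 / sumW - mean() * mean();  // weighted variance of y
    const double nEff = sumW * sumW / sumW2;             // effective entry count
    return std::sqrt(std::max(var, 0.0) / nEff);
  }
};
```

With unit weights this reduces to the familiar sigma/sqrt(N), which is why equal-weight fills reproduce the unweighted answer.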
* Fixed projection slicing issues with Field UE analysis. * Added Analysis::vetoEvent(e) function, which keeps track of the correction to the sum of weights due to event vetoing in analysis classes. 2008-02-26 Andy Buckley * Vector and derived classes now initialise to have zeroed components when the no-arg constructor is used. * Added Analysis::scale() function to scale 1D histograms. Analysis::normalize() uses it internally, and the DELPHI (A)EEC, whose histo weights are not pure event weights, is normalised using scale(h, 1/sumEventWeights). 2008-02-21 Hendrik Hoeth * Added EEC and AEEC to the DELPHI_1996_S3430090 analysis. The normalisation of these histograms is still broken (ticket #163). 2008-02-19 Hendrik Hoeth * Many fixes to the DELPHI_1996_S3430090 analysis: bugfix in the calculation of eigenvalues/eigenvectors in MatrixDiag.hh for the sphericity, rewrite of Thrust/Major/Minor, fixed scaled momentum, hemisphere masses, normalisation in single particle events, final state slicing problems in the projections for Thrust, Sphericity and Hemispheres. 2008-02-08 Andy Buckley * Applied fixes and extensions to DIS classes, based on submissions by Dan Traynor. 2008-02-06 Andy Buckley * Made projection pointers used for cut combining into const pointers. Required some redefinition of the Projection* comparison operator. * Temporarily added FinalState member to ChargedFinalState to stop projection lifetime crash. 2008-02-01 Andy Buckley * Fixed another misplaced factor of bin width in the Analysis::normalize() method. 2008-01-30 Andy Buckley * Fixed the conversion of IHistogram1D to DPS, both via the explicit Analysis::normalize() method and the implicit AnalysisHandler::treeNormalize() route. The root of the problem is the AIDA choice of the word "height" to represent the sum of weights in a bin: i.e. the bin width is not taken into account either in computing bin height or error.
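The bin-height problem described in the 2008-01-30 entry boils down to dividing the in-bin sum of weights (and its error) by the bin width when converting a histogram bin to a data point. A minimal sketch, with hypothetical names:

```cpp
#include <cmath>

// A data point derived from one histogram bin.
struct BinPoint {
  double value;  // differential value: sum of weights / bin width
  double error;  // corresponding error, also divided by the bin width
};

// AIDA's "height" is the raw sum of weights; the bin width must be
// divided out explicitly, for both the value and its error.
inline BinPoint binToPoint(double sumW, double sumW2, double xlow, double xhigh) {
  const double width = xhigh - xlow;
  return { sumW / width, std::sqrt(sumW2) / width };
}
```

Forgetting the width factor in either place is exactly the "misplaced factor of bin width" class of bug mentioned in the 2008-02-01 entry.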
2008-01-22 Andy Buckley * Beam projection now uses HepMC GenEvent::beam_particles() method to get the beam particles. This is more portable and robust for C++ generators, and equivalent to the existing "first two" method for Fortran generators. 2008-01-17 Andy Buckley * Added angle range fix to pseudorapidity function (thanks to Piergiulio Lenzi). 2008-01-10 Andy Buckley * Changed autobooking plot codes to use zero-padding (gets the order right in JAS, file browser, ROOT etc.). Also changed the 'ds' part to 'd' for consistency. HepData's AIDA output has been correspondingly updated, as have the bundled data files. 2008-01-04 Andy Buckley * Tidied up JetShape projection a bit, including making the constructor params const references. This seems to have sorted the runtime segfault in the CDF_2005 analysis. * Added caching of the analysis bin edges from the AIDA file - each analysis object will now only read its reference file once, which massively speeds up the rivetgun startup time for analyses with large numbers of autobooked histos (e.g. the DELPHI_1996_S3430090 analysis). 2008-01-02 Andy Buckley * CDF_2001_S4751469 now uses the LossyFinalState projection, with an 8% loss rate. * Added LossyFinalState and HadronicFinalState, and fixed a "polarity" bug in the charged final state projection (it was keeping only the *uncharged* particles). * Now using isatty(1) to determine whether or not color escapes can be used. Also removed --color argument, since it can't have an effect (TCLAP doesn't do position-based flag toggling). * Made Python extension build optional (and disabled by default). 2008-01-01 Andy Buckley * Removed some unwanted DEBUG statements, and lowered the level of some infrastructure DEBUGs to TRACE level. * Added bash color escapes to the logger system. 2007-12-21 Leif Lonnblad * include/LWH/ManagedObject.h: Fixed infinite loop in encodeForXML cf. ticket #135. 2007-12-20 Andy Buckley * Removed HepPID, HepPDT and Boost dependencies.
* Fixed XML entity encoding in LWH. Updated CDF_2007_S7057202 analysis to not do its own XML encoding of titles. 2007-12-19 Andy Buckley * Changed units header to set GeV = 1 (HepMC convention) and using units in CDF UE analysis. 2007-12-15 Andy Buckley * Introduced analysis metadata methods for all analyses (and made them part of the Analysis interface). 2007-12-11 Andy Buckley * Added JetAlg base projection for TrackJet, FastJet etc. 2007-12-06 Andy Buckley * Added checking for Boost library, and the standard Boost test program for shared_ptr. * Got basic Python interface running - required some tweaking since Python and Rivet's uses of dlopen collide (another RTLD_GLOBAL issue - see http://muttley.hates-software.com/2006/01/25/c37456e6.html ) 2007-12-05 Andy Buckley * Replaced all use of KtJets projection with FastJets projection. KtJets projection disabled but left undeleted for now. CLHEP and KtJet libraries removed from configure searches and Makefile flags. 2007-12-04 Andy Buckley * Param file loading now falls back to the share/RivetGun directory if a local file can't be found and the provided name has no directory separators in it. * Converted TrackJet projection to update the jet centroid with each particle added, using pT weighting in the eta and phi averaging. 2007-12-03 Andy Buckley * Merged all command line handling functions into one large parse function, since only one executable now needs them. This removes a few awkward memory leaks. * Removed rivet executable - HepMC file reading functionality will move into rivetgun. * Now using HepMC IO_GenEvent format (IO_Ascii and IO_ExtendedAscii are deprecated). Now requires HepMC >= 2.3.0. * Added forward declarations of GSL diagonalisation routines, eliminating need for GSL headers to be installed on build machine. 2007-11-27 Andy Buckley * Removed charge differentiation from Multiplicity projection (use CFS proj) and updated ExampleAnalysis to produce more useful numbers. 
* Introduced binreloc for runtime path determination. * Fixed several bugs in FinalState, ChargedFinalState, TrackJet and Field analysis. * Completed move to new analysis naming scheme. 2007-11-26 Andy Buckley * Removed conditional HAVE_FASTJET bits: FastJet is now compulsory. * Merging appropriate RivetGun parts into Rivet. RivetGun currently broken. 2007-11-23 Andy Buckley * Renaming analyses to Spires-ID scheme: currently of form S<ID>, to become <EXPT>_<YEAR>_S<ID>. 2007-11-20 Andy Buckley * Merged replacement vectors, matrices and boosts into trunk. 2007-11-15 Leif Lonnblad * src/Analysis.cc, include/Rivet/Analysis.hh: Introduced normalize function. See ticket #126. 2007-10-31 Andy Buckley * Tagging as 1.0b2 for HERA-LHC meeting. 2007-10-25 Andy Buckley * Added AxesDefinition base interface to Sphericity and Thrust, used by Hemispheres. * Exposed BinaryCut class, improved its interface and fixed a few bugs. It's now used by VetoedFinalState for momentum cuts. * Removed extra output from autobooking AIDA reader. * Added automatic DPS booking. 2007-10-12 Andy Buckley * Improved a few features of the build system. 2007-10-09 James Monk * Fixed dylib dlopen on Mac OS X. 2007-10-05 Andy Buckley * Added new reference files. 2007-10-03 Andy Buckley * Fixed bug in configure.ac which led to explicit CXX setting being ignored. * Including Logging.hh in Projection.hh, hence new transitive dependency on Logging.hh being installed. Since this is the normal behaviour, I don't think this is a problem. * Fixed segfaulting bug due to use of addProjection() in locally-scoped contained projections. This isn't a proper fix, since the whole framework should be designed to avoid the possibility of bugs like this. * Added newly built HepML and AIDA reference files for current analyses.
2007-10-02 Andy Buckley * Fixed possible null-pointer dereference in Particle copy constructor and copy assignment: this removes one of two blocker segfaults, the other of which is related to the copy-assignment of the TotalVisMomentum projection in the ExampleTree analysis. 2007-10-01 Andy Buckley * Fixed portable path to Rivet share directory. 2007-09-28 Andy Buckley * Added more functionality to the rivet-config script: now has libdir, includedir, cppflags, ldflags and ldlibs options. 2007-09-26 Andy Buckley * Added the analysis library closer function to the AnalysisHandler finalize() method, and also moved the analysis delete loop into AnalysisHandler::finalize() so as not to try deleting objects whose libraries have already closed. * Replaced the RivetPaths.cc.in method for portable paths with something using -D defines - much simpler! 2007-09-21 Lars Sonnenschein * Added HepEx0505013 analysis and JetShape projection (some fixes by AB.) * Added GetLorentzJets member function to D0 RunII cone jet projection. 2007-09-21 Andy Buckley * Fixed lots of bugs and bad practice in HepEx0505013 (to make it compile-able!) * Downclassed the log messages from the Test analysis to DEBUG level. * Added isEmpty() method to final state projection. * Added testing for empty final state and useful debug log messages to sphericity projection. 2007-09-20 Andy Buckley * Added Hemispheres projection, which calculates event hemisphere masses and broadenings. 2007-09-19 Andy Buckley * Added an explicit copy assignment operator to Particle: the absence of one of these was responsible for the double-delete error. * Added a "fuzzy equals" utility function for float/double types to Utils.hh (which already contains a variety of handy little functions). * Removed deprecated Beam::operator(). * Added ChargedFinalState projection and de-pointered the contained FinalState projection in VetoedFinalState.
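A "fuzzy equals" helper of the kind mentioned in the 2007-09-19 entry can be sketched as below; this is a hypothetical minimal version, not the actual Utils.hh implementation, and the real signature and tolerance may differ.

```cpp
#include <cassert>
#include <cmath>

// Sketch of a floating-point "fuzzy equals": two numbers compare equal if
// they differ by less than a relative tolerance (with exact equality,
// including both-zero, handled up front to avoid a 0/0 comparison).
inline bool fuzzyEquals(double a, double b, double tolerance = 1e-5) {
  if (a == b) return true;
  const double absavg = (std::abs(a) + std::abs(b)) / 2.0;
  return std::abs(a - b) < tolerance * absavg;  // relative comparison
}
```

Such a helper avoids spurious inequality from rounding: `fuzzyEquals(1.0, 1.0 + 1e-9)` holds while `fuzzyEquals(1.0, 1.1)` does not.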
2007-09-18 Andy Buckley * Major bug fixes to the regularised version of the sphericity projection (and hence the Parisi tensor projection). Don't trust C & D param results from any previous version! * Added extra methods to thrust and sphericity projections to get the oblateness and the sphericity basis (currently returns dummy axes since I can't yet work out how to get the similarity transform eigenvectors from CLHEP) 2007-09-14 Andy Buckley * Merged in a branch of pluggable analysis mechanisms. 2007-06-25 Jon Butterworth * Fixed some bugs in the root output for DataPoint.h 2007-06-25 Andy Buckley * include/Rivet/**/Makefile.am: No longer installing headers for "internal" functionality. * include/Rivet/Projections/*.hh: Removed the private restrictions on copy-assignment operators. 2007-06-18 Leif Lonnblad * include/LWH/Tree.h: Fixed minor bug in listObjectNames. * include/LWH/DataPointSet.h: Fixed setCoordinate functions so that they resize the vector of DataPoints if it initially was empty. * include/LWH/DataPoint.h: Added constructor taking a vector of measurements. 2007-06-16 Leif Lonnblad * include/LWH/Tree.h: Implemented the listObjectNames and ls functions. * include/Rivet/Projections/FinalStateHCM.hh, include/Rivet/Projections/VetoedFinalState.hh: removed _theParticles and corresponding access function. Use base class variable instead. * include/Rivet/Projections/FinalState.hh: Made _theParticles protected. 2007-06-13 Leif Lonnblad * src/Projections/FinalStateHCM.cc, src/Projections/DISKinematics.cc: Equality checks using GenParticle::operator== changed to check for pointer equality. * include/Rivet/Analysis/HepEx9506012.hh: Uses modified DISLepton projection. * include/Rivet/Particle.hh: Added member function to check if a GenParticle is associated. * include/Rivet/Projections/DISLepton.hh, src/Projections/DISLepton.cc: Fixed bug in projection. Introduced final state projection to limit searching for scattered lepton. Still not properly tested.
2007-06-08 Leif Lonnblad * include/Rivet/Projections/PVertex.hh, src/Projections/PVertex.cc: Fixed the projection to simply get the signal_process_vertex from the GenEvent. This is the way it should work. If the GenEvent does not have a signal_process_vertex properly set up in this way, the problem is with the class that fills the GenEvent. 2007-06-06 Jon Butterworth * Merged TotalVisibleMomentum and CalMET * Added pT ranges to Vetoed final state projection 2007-05-27 Jon Butterworth * Fixed initialization of VetoedFinalStateProjection in ExampleTree 2007-05-27 Leif Lonnblad * include/Rivet/Projections/KtJets.*: Make sure the KtEvent is deleted properly. 2007-05-26 Jon Butterworth * Added leptons to the ExampleTree. * Added TotalVisibleEnergy projection, and added output to ExampleTree. 2007-05-25 Jon Butterworth * Added a charged lepton projection 2007-05-23 Andy Buckley * src/Analysis/HepEx0409040.cc: Changed range of the histograms to the "pi" range rather than the "128" range. * src/Analysis/Analysis.cc: Fixed a bug in the AIDA path building. Histogram auto-booking now works. 2007-05-23 Leif Lonnblad * src/Analysis/HepEx9506012.cc: Now uses the histogram booking function in the Analysis class. 2007-05-23 Jon Butterworth * Fixed bug in PRD65092002 (was failing on zero jets) 2007-05-23 Andy Buckley * Added (but haven't properly tested) a VetoedFinalState projection. * Added normalize() method for AIDA 1D histograms. * Added configure checking for Mac OS X version, and setting the development target flag accordingly. 2007-05-22 Andy Buckley * Added an ostream method for AnalysisName enums. * Converted Analyses and Projections to use projection lists, cuts and beam constraints. * Added beam pair combining to the BeamPair sets of Projections by finding set meta-intersections. * Added methods to Cuts, Analysis and Projection to make Cut definition easier. 
* Fixed default fall-through in cut handling switch statement and now using -numeric_limits<double>::max() rather than min() * Added more control of logging presentation via static flag methods on Log. 2007-05-13 Andy Buckley * Added self-consistency checking mechanisms for Cuts and Beam * Re-implemented the cut-handling part of RivetInfo as a Cuts class. * Changed names of Analysis and Projection name() and handler() methods to getName() and getHandler() to be more consistent with the rest of the public method names in those classes. 2007-05-02 Andy Buckley * Added auto-booking of histogram bins from AIDA XML files. The AIDA files are located via a C++ function which is generated from RivetPaths.cc.in by running configure. 2007-04-18 Andy Buckley * Added a preliminary version of the Rick Field UE analysis, under the name PRD65092002. 2007-04-19 Leif Lonnblad * src/Analysis/HepEx0409040.cc: The reason this did not compile under gcc-4 is that some iterators into a vector were wrongly assumed to be pointers and were initialized to 0 and later compared to 0. I've changed this to initialize to end() of the corresponding vector and to compare with the same end() later. 2007-04-05 Andy Buckley * Lots of name changes in anticipation of the MCNet school. RivetHandler is now AnalysisHandler (since that's what it does!), BeamParticle has become ParticleName, and RivetInfo has been split into Cut and BeamConstraint portions. * Added BeamConstraint mechanism, which can be used to determine if an analysis is compatible with the beams being used in the generator. The ParticleName includes an "ANY" wildcard for this purpose. 2006-03-19 Andy Buckley * Added "rivet" executable which can read in HepMC ASCII dump files and apply Rivet analyses on the events.
2007-02-24 Leif Lonnblad * src/Projections/KtJets.cc: Added comparison of member variables in compare() function * all: Merged changes from polymorphic-projections branch into trunk 2007-02-17 Leif Lonnblad * all: projections and analysis handlers: All projections which use other projections now have a pointer rather than a copy of those projections to allow for polymorphism. The constructors have also been changed to require the used projections themselves, rather than the arguments needed to construct them. 2007-02-17 Leif Lonnblad * src/Projections/FinalState.cc, include/Rivet/Projections/FinalState.icc (Rivet), include/Rivet/Projections/FinalState.hh: Added cut in transverse momentum on the particles to be included in the final state. 2007-02-06 Leif Lonnblad * include/LWH/HistogramFactory.h: Fixed divide-by-zero in divide function. Also fixed bug in error calculation in divide function. Introduced checkBin function to make sure two histograms are equal even if they have variable bin widths. * include/LWH/Histogram1D.h: In normalize(double), do not do anything if the sum of the bins is zero to avoid dividing by zero. 2007-01-20 Leif Lonnblad * src/Test/testLWH.cc: Modified to output files using the Tree. * configure.ac: Removed AC_CONFIG_AUX_DIR([include/Rivet/Config]) since the directory does not exist anymore. 2006-12-21 Andy Buckley * Rivet will now conditionally install the AIDA and LWH headers if it can't find them when configure'ing. * Started integrating Leif's LWH package to fulfill the AIDA duties. * Replaced multitude of CLHEP wrapper headers with a single RivetCLHEP.h header. 2006-11-20 Andy Buckley * Introduced log4cpp logging. * Added analysis enum, which can be used as input to an analysis factory by Rivet users.
2006-11-02 Andy Buckley * Yet more, almost pointless, administrative moving around of things with the intention of making the structure a bit better-defined: * The RivetInfo and RivetHandler classes have been moved from src/Analysis into src as they are really the main Rivet interface classes. The Rivet.h header has also been moved into the "header root". * The build of a single shared library in lib has been disabled, with the library being built instead in src. 2006-10-14 Andy Buckley * Introduced a minimal subset of the Sherpa math tools, such as Vector{3,4}D, Matrix, etc. The intention is to eventually cut the dependency on CLHEP. 2006-07-28 Andy Buckley * Moving things around: all sources now in directories under src 2006-06-04 Leif Lonnblad * Analysis/Examples/HZ95108.*: Now uses CentralEtHCM. Also set GeV units on the relevant histograms. * Projections/CentralEtHCM.*: Making a special class just to get out one number - the summed Et in the central rapidity bin - may seem like overkill. But in case someone else might need it... 2006-06-03 Leif Lonnblad * Analysis/Examples/HZ95108.*: Added the hz95108 energy flow analysis from HZtool. * Projections/DISLepton.*: Since many HERA measurements do not care if we have electron or positron beam, it is now possible to specify lepton or anti-lepton. * Projections/Event.*: Added member and access function for the weight of an event (taken from the GenEvent object's weights()[0]). * Analysis/RivetHandler.*: Now depends explicitly on the AIDA interface. An AIDA analysis factory must be specified in the constructor, where a tree and histogram factory is automatically created. Added access functions to the relevant AIDA objects. * Analysis/AnalysisBase.*: Added access to the RivetHandler and its AIDA factories. 2005-12-27 Leif Lonnblad * configure.ac: Added -I$THEPEGPATH/include to AM_CPPFLAGS. * Config/Rivet.h: Added some std includes and using std:: declarations. * Analysis/RivetInfo.*: Fixed some bugs.
The RivetInfo facility now works, although it has not been thoroughly tested. * Analysis/Examples/TestMultiplicity.*: Re-introduced FinalStateHCM for testing purposes but commented it away again. * .: Made a number of changes to implement handling of RivetInfo objects. diff --git a/analyses/pluginALICE/ALICE_2012_I1127497.cc b/analyses/pluginALICE/ALICE_2012_I1127497.cc --- a/analyses/pluginALICE/ALICE_2012_I1127497.cc +++ b/analyses/pluginALICE/ALICE_2012_I1127497.cc @@ -1,215 +1,212 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Tools/Cuts.hh" #include "Rivet/Projections/SingleValueProjection.hh" #include "Rivet/Tools/AliceCommon.hh" #include "Rivet/Projections/AliceCommon.hh" #include "Rivet/Projections/HepMCHeavyIon.hh" namespace Rivet { /// @brief ALICE PbPb at 2.76 TeV R_AA analysis. class ALICE_2012_I1127497 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(ALICE_2012_I1127497); /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { // Access the HepMC heavy ion info declare(HepMCHeavyIon(), "HepMC"); // Declare centrality projection declareCentrality(ALICE::V0MMultiplicity(), "ALICE_2015_PBPBCentrality", "V0M", "V0M"); // Charged, primary particles with |eta| < 0.5 and pT > 150 MeV declare(ALICE::PrimaryParticles(Cuts::abseta < 0.5 && Cuts::pT > 150*MeV && Cuts::abscharge > 0), "APRIM"); // Loop over all histograms for (size_t ihist = 0; ihist < NHISTOS; ++ihist) { // Initialize PbPb objects book(_histNch[PBPB][ihist], ihist+1, 1, 1); std::string nameCounterPbPb = "counter.pbpb." + std::to_string(ihist); - book(_counterSOW[PBPB][ihist], nameCounterPbPb, - "Sum of weights counter for PbPb"); + book(_counterSOW[PBPB][ihist], nameCounterPbPb); // Sum of weights counter for PbPb std::string nameCounterNcoll = "counter.ncoll." 
+ std::to_string(ihist); - book(_counterNcoll[ihist], nameCounterNcoll, - "Ncoll counter for PbPb"); + book(_counterNcoll[ihist], nameCounterNcoll); // Ncoll counter for PbPb // Initialize pp objects. In principle, only one pp histogram would be // needed since centrality does not make any difference here. However, // in some cases in this analysis the binnings differ from each other, // so this is an easy-to-implement way to account for that. std::string namePP = _histNch[PBPB][ihist]->name() + "-pp"; // The binning is taken from the reference data book(_histNch[PP][ihist], namePP, refData(ihist+1, 1, 1)); std::string nameCounterpp = "counter.pp." + std::to_string(ihist); - book(_counterSOW[PP][ihist], nameCounterpp, - "Sum of weights counter for pp"); + book(_counterSOW[PP][ihist], nameCounterpp); // Sum of weights counter for pp // Book ratios, to be used in finalize book(_histRAA[ihist], ihist+16, 1, 1); } // Centrality regions keeping boundaries for a certain region. // Note that some regions overlap with other regions. _centrRegions.clear(); _centrRegions = {{0., 5.}, {5., 10.}, {10., 20.}, {20., 30.}, {30., 40.}, {40., 50.}, {50., 60.}, {60., 70.}, {70., 80.}, {0., 10.}, {0., 20.}, {20., 40.}, {40., 60.}, {40., 80.}, {60., 80.}}; // Find out the beam type, also specified from option. string beamOpt = getOption("beam","NONE"); if (beamOpt != "NONE") { MSG_WARNING("You are using a specified beam type, instead of using what " "is provided by the generator. 
" "Only do this if you are completely sure what you are doing."); if (beamOpt=="PP") isHI = false; else if (beamOpt=="HI") isHI = true; else { MSG_ERROR("Beam error (option)!"); return; } } else { const ParticlePair& beam = beams(); if (beam.first.pid() == PID::PROTON && beam.second.pid() == PID::PROTON) isHI = false; else if (beam.first.pid() == PID::LEAD && beam.second.pid() == PID::LEAD) isHI = true; else { MSG_ERROR("Beam error (found)!"); return; } } } /// Perform the per-event analysis void analyze(const Event& event) { // Charged, primary particles with at least pT = 150 MeV // in eta range of |eta| < 0.5 Particles chargedParticles = apply(event,"APRIM").particlesByPt(); // Check type of event. if ( isHI ) { const HepMCHeavyIon & hi = apply(event, "HepMC"); // Prepare centrality projection and value const CentralityProjection& centrProj = apply(event, "V0M"); double centr = centrProj(); // Veto event for too large centralities since those are not used // in the analysis at all if ((centr < 0.) || (centr > 80.)) vetoEvent; // Fill PbPb histograms and add weights based on centrality value for (size_t ihist = 0; ihist < NHISTOS; ++ihist) { if (inRange(centr, _centrRegions[ihist].first, _centrRegions[ihist].second)) { _counterSOW[PBPB][ihist]->fill(); _counterNcoll[ihist]->fill(hi.Ncoll()); for (const Particle& p : chargedParticles) { double pT = p.pT()/GeV; if (pT < 50.) { const double pTAtBinCenter = _histNch[PBPB][ihist]->binAt(pT).xMid(); _histNch[PBPB][ihist]->fill(pT, 1/pTAtBinCenter); } } } } } else { // Fill all pp histograms and add weights for (size_t ihist = 0; ihist < NHISTOS; ++ihist) { _counterSOW[PP][ihist]->fill(); for (const Particle& p : chargedParticles) { double pT = p.pT()/GeV; if (pT < 50.) 
{ const double pTAtBinCenter = _histNch[PP][ihist]->binAt(pT).xMid(); _histNch[PP][ihist]->fill(pT, 1/pTAtBinCenter); } } } } } /// Normalise histograms etc., after the run void finalize() { // Proper scaling of the histograms with their individual weights. for (size_t itype = 0; itype < EVENT_TYPES; ++itype ) { for (size_t ihist = 0; ihist < NHISTOS; ++ihist) { if (_counterSOW[itype][ihist]->sumW() > 0.) { scale(_histNch[itype][ihist], (1. / _counterSOW[itype][ihist]->sumW() / 2. / M_PI)); } } } // Postprocessing of the histograms for (size_t ihist = 0; ihist < NHISTOS; ++ihist) { // If there are entries in histograms for both beam types if (_histNch[PP][ihist]->numEntries() > 0 && _histNch[PBPB][ihist]->numEntries() > 0) { // Initialize and fill R_AA histograms divide(_histNch[PBPB][ihist], _histNch[PP][ihist], _histRAA[ihist]); // Scale by Ncoll. Unfortunately some generators do not provide // Ncoll value (e.g. JEWEL), so the following scaling will be done // only if there are entries in the counters double ncoll = _counterNcoll[ihist]->sumW(); double sow = _counterSOW[PBPB][ihist]->sumW(); if (ncoll > 1e-6 && sow > 1e-6) _histRAA[ihist]->scaleY(1. 
/ (ncoll / sow)); } } } //@} private: bool isHI; static const int NHISTOS = 15; static const int EVENT_TYPES = 2; static const int PP = 0; static const int PBPB = 1; /// @name Histograms //@{ Histo1DPtr _histNch[EVENT_TYPES][NHISTOS]; CounterPtr _counterSOW[EVENT_TYPES][NHISTOS]; CounterPtr _counterNcoll[NHISTOS]; Scatter2DPtr _histRAA[NHISTOS]; //@} std::vector<std::pair<double, double>> _centrRegions; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ALICE_2012_I1127497); } diff --git a/analyses/pluginALICE/ALICE_2012_I930312.cc b/analyses/pluginALICE/ALICE_2012_I930312.cc --- a/analyses/pluginALICE/ALICE_2012_I930312.cc +++ b/analyses/pluginALICE/ALICE_2012_I930312.cc @@ -1,342 +1,338 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/SingleValueProjection.hh" #include "Rivet/Tools/AliceCommon.hh" #include "Rivet/Projections/AliceCommon.hh" namespace Rivet { /// @brief ALICE PbPb at 2.76 TeV azimuthal di-hadron correlations class ALICE_2012_I930312 : public Analysis { public: // Constructor DEFAULT_RIVET_ANALYSIS_CTOR(ALICE_2012_I930312); /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { // Declare centrality projection - declareCentrality(ALICE::V0MMultiplicity(), - "ALICE_2015_PBPBCentrality", "V0M", "V0M"); + declareCentrality(ALICE::V0MMultiplicity(), "ALICE_2015_PBPBCentrality", "V0M", "V0M"); // Projection for trigger particles: charged, primary particles // with |eta| < 1.0 and 8 < pT < 15 GeV/c declare(ALICE::PrimaryParticles(Cuts::abseta < 1.0 && Cuts::abscharge > 0 && Cuts::ptIn(8.*GeV, 15.*GeV)), "APRIMTrig"); // pT bin edges vector<double> ptBins = { 3., 4., 6., 8., 10. 
}; // Projections for associated particles: charged, primary particles // with |eta| < 1.0 and different pT bins for (int ipt = 0; ipt < PT_BINS; ++ipt) { Cut cut = Cuts::abseta < 1.0 && Cuts::abscharge > 0 && Cuts::ptIn(ptBins[ipt]*GeV, ptBins[ipt+1]*GeV); declare(ALICE::PrimaryParticles(cut), "APRIMAssoc" + toString(ipt)); } // Create event strings vector<string> evString = { "pp", "central", "peripheral" }; // Initialize trigger counters and yield histograms string title = "Per trigger particle yield"; string xtitle = "$\\Delta\\eta$ (rad)"; string ytitle = "$1 / N_{trig} {\\rm d}N_{assoc} / {\\rm d}\\Delta\\eta$ (rad$^{-1}$)"; for (int itype = 0; itype < EVENT_TYPES; ++itype) { book(_counterTrigger[itype], "counter." + toString(itype)); for (int ipt = 0; ipt < PT_BINS; ++ipt) { string name = "yield." + evString[itype] + ".pt" + toString(ipt); - book(_histYield[itype][ipt], name, 36, - -0.5*M_PI, 1.5*M_PI, title, xtitle, ytitle); + book(_histYield[itype][ipt], name, 36, -0.5*M_PI, 1.5*M_PI); } } // Find out the beam type, also specified from option. string beamOpt = getOption("beam","NONE"); if (beamOpt != "NONE") { MSG_WARNING("You are using a specified beam type, instead of using what " "is provided by the generator. 
" "Only do this if you are completely sure what you are doing."); if (beamOpt=="PP") isHI = false; else if (beamOpt=="HI") isHI = true; else { MSG_ERROR("Beam error (option)!"); return; } } else { const ParticlePair& beam = beams(); if (beam.first.pid() == PID::PROTON && beam.second.pid() == PID::PROTON) isHI = false; else if (beam.first.pid() == PID::LEAD && beam.second.pid() == PID::LEAD) isHI = true; else { MSG_ERROR("Beam error (found)!"); return; } } // Initialize IAA and ICP histograms book(_histIAA[0], 1, 1, 1); book(_histIAA[1], 2, 1, 1); book(_histIAA[2], 5, 1, 1); book(_histIAA[3], 3, 1, 1); book(_histIAA[4], 4, 1, 1); book(_histIAA[5], 6, 1, 1); // Initialize background-subtracted yield histograms for (int itype = 0; itype < EVENT_TYPES; ++itype) { for (int ipt = 0; ipt < PT_BINS; ++ipt) { string newname = _histYield[itype][ipt]->name() + ".nobkg"; string newtitle = _histYield[itype][ipt]->title() + ", background subtracted"; - book(_histYieldNoBkg[itype][ipt], newname, 36, -0.5*M_PI, 1.5*M_PI, newtitle, - _histYield[itype][ipt]->annotation("XLabel"), - _histYield[itype][ipt]->annotation("YLabel")); + book(_histYieldNoBkg[itype][ipt], newname, 36, -0.5*M_PI, 1.5*M_PI); } } } /// Perform the per-event analysis void analyze(const Event& event) { // Trigger particles Particles trigParticles = applyProjection(event,"APRIMTrig").particles(); // Associated particles Particles assocParticles[PT_BINS]; for (int ipt = 0; ipt < PT_BINS; ++ipt) { string pname = "APRIMAssoc" + toString(ipt); assocParticles[ipt] = applyProjection(event,pname).particles(); } // Check type of event. This may not be a perfect way to check for the // type of event as there might be some weird conditions hidden inside. // For example some HepMC versions check if number of hard collisions // is equal to 0 and assign 'false' in that case, which is usually wrong. 
// This might be changed in the future int ev_type = 0; // pp if ( isHI ) { // Prepare centrality projection and value const CentralityProjection& centrProj = apply<CentralityProjection>(event, "V0M"); double centr = centrProj(); // Set the flag for the type of the event if (centr > 0.0 && centr < 5.0) ev_type = 1; // PbPb, central else if (centr > 60.0 && centr < 90.0) ev_type = 2; // PbPb, peripheral else vetoEvent; // PbPb, other, this is not used in the analysis at all } // Fill trigger histogram for a proper event type _counterTrigger[ev_type]->fill(trigParticles.size()); // Loop over trigger particles for (const Particle& trigParticle : trigParticles) { // For each pt bin for (int ipt = 0; ipt < PT_BINS; ++ipt) { // Loop over associated particles for (const Particle& assocParticle : assocParticles[ipt]) { // If associated and trigger particle are not the same particles. if (!isSame(trigParticle, assocParticle)) { // Test trigger particle. if (trigParticle.pt() > assocParticle.pt()) { // Calculate delta phi in range (-0.5*PI, 1.5*PI). double dPhi = deltaPhi(trigParticle, assocParticle, true); if (dPhi < -0.5 * M_PI) dPhi += 2 * M_PI; // Fill yield histogram for calculated delta phi _histYield[ev_type][ipt]->fill(dPhi); } } } } } } /// Normalise histograms etc., after the run void finalize() { // Check for the reentrant finalize bool pp_available = false, PbPb_available = false; for (int itype = 0; itype < EVENT_TYPES; ++itype) { for (int ipt = 0; ipt < PT_BINS; ++ipt) { if (_histYield[itype][ipt]->numEntries() > 0) itype == 0 ? 
pp_available = true : PbPb_available = true; } } // Skip postprocessing if pp or PbPb histograms are not available if (!(pp_available && PbPb_available)) return; // Variable for near and away side peak integral calculation double integral[EVENT_TYPES][PT_BINS][2] = { { {0.0} } }; // Variables for background calculation double bkg = 0.0; double bkgErr[EVENT_TYPES][PT_BINS] = { {0.0} }; // Variables for integration error calculation double norm[EVENT_TYPES] = {0.0}; double numEntries[EVENT_TYPES][PT_BINS][2] = { { {0.0} } }; int numBins[EVENT_TYPES][PT_BINS][2] = { { {0} } }; // For each event type for (int itype = 0; itype < EVENT_TYPES; ++itype) { // Get counter CounterPtr counter = _counterTrigger[itype]; // For each pT range for (int ipt = 0; ipt < PT_BINS; ++ipt) { // Get yield histograms Histo1DPtr hYield = _histYield[itype][ipt]; Histo1DPtr hYieldNoBkg = _histYieldNoBkg[itype][ipt]; // Check if histograms are fine if (counter->sumW() == 0 || hYield->numEntries() == 0) { MSG_WARNING("There are no entries in one of the histograms"); continue; } // Scale yield histogram norm[itype] = 1. 
/ counter->sumW(); scale(hYield, norm[itype]); // Calculate background double sum = 0.0; int nbins = 0; for (size_t ibin = 0; ibin < hYield->numBins(); ++ibin) { double xmid = hYield->bin(ibin).xMid(); if (inRange(xmid, -0.5 * M_PI, -0.5 * M_PI + 0.4) || inRange(xmid, 0.5 * M_PI - 0.4, 0.5 * M_PI + 0.4) || inRange(xmid, 1.5 * M_PI - 0.4, 1.5 * M_PI)) { sum += hYield->bin(ibin).sumW(); nbins += 1; } } if (nbins == 0) { MSG_WARNING("Failed to estimate background!"); continue; } bkg = sum / nbins; // Calculate background error sum = 0.0; nbins = 0; for (size_t ibin = 0; ibin < hYield->numBins(); ++ibin) { double xmid = hYield->bin(ibin).xMid(); if (inRange(xmid, 0.5 * M_PI - 0.4, 0.5 * M_PI + 0.4)) { sum += (hYield->bin(ibin).sumW() - bkg) * (hYield->bin(ibin).sumW() - bkg); nbins++; } } if (nbins < 2) { MSG_WARNING("Failed to estimate background error!"); continue; } bkgErr[itype][ipt] = sqrt(sum / (nbins - 1)); // Fill histograms with removed background for (size_t ibin = 0; ibin < hYield->numBins(); ++ibin) { hYieldNoBkg->fillBin(ibin, hYield->bin(ibin).sumW() - bkg); } // Integrate near-side yield size_t lowerBin = hYield->binIndexAt(-0.7 + 0.02); size_t upperBin = hYield->binIndexAt( 0.7 - 0.02) + 1; nbins = upperBin - lowerBin; numBins[itype][ipt][NEAR] = nbins; integral[itype][ipt][NEAR] = hYield->integralRange(lowerBin, upperBin) - nbins * bkg; numEntries[itype][ipt][NEAR] = hYield->integralRange(lowerBin, upperBin) * counter->sumW(); // Integrate away-side yield lowerBin = hYield->binIndexAt(M_PI - 0.7 + 0.02); upperBin = hYield->binIndexAt(M_PI + 0.7 - 0.02) + 1; nbins = upperBin - lowerBin; numBins[itype][ipt][AWAY] = nbins; integral[itype][ipt][AWAY] = hYield->integralRange(lowerBin, upperBin) - nbins * bkg; numEntries[itype][ipt][AWAY] = hYield->integralRange(lowerBin, upperBin) * counter->sumW(); } } // Variables for IAA/ICP plots double yval[2] = { 0.0, 0.0 }; double yerr[2] = { 0.0, 0.0 }; double xval[PT_BINS] = { 3.5, 5.0, 7.0, 9.0 }; double 
xerr[PT_BINS] = { 0.5, 1.0, 1.0, 1.0 }; int types1[3] = {1, 2, 1}; int types2[3] = {0, 0, 2}; // Fill IAA/ICP plots for near and away side peak for (int ihist = 0; ihist < 3; ++ihist) { int type1 = types1[ihist]; int type2 = types2[ihist]; double norm1 = norm[type1]; double norm2 = norm[type2]; for (int ipt = 0; ipt < PT_BINS; ++ipt) { double bkgErr1 = bkgErr[type1][ipt]; double bkgErr2 = bkgErr[type2][ipt]; for (int ina = 0; ina < 2; ++ina) { double integ1 = integral[type1][ipt][ina]; double integ2 = integral[type2][ipt][ina]; double numEntries1 = numEntries[type1][ipt][ina]; double numEntries2 = numEntries[type2][ipt][ina]; double numBins1 = numBins[type1][ipt][ina]; double numBins2 = numBins[type2][ipt][ina]; yval[ina] = integ1 / integ2; yerr[ina] = norm1 * norm1 * numEntries1 + norm2 * norm2 * numEntries2 * integ1 * integ1 / (integ2 * integ2) + numBins1 * numBins1 * bkgErr1 * bkgErr1 + numBins2 * numBins2 * bkgErr2 * bkgErr2 * integ1 * integ1 / (integ2 * integ2); yerr[ina] = sqrt(yerr[ina])/integ2; } _histIAA[ihist]->addPoint(xval[ipt], yval[NEAR], xerr[ipt], yerr[NEAR]); _histIAA[ihist + 3]->addPoint(xval[ipt], yval[AWAY], xerr[ipt], yerr[AWAY]); } } } //@} private: bool isHI; static const int PT_BINS = 4; static const int EVENT_TYPES = 3; static const int NEAR = 0; static const int AWAY = 1; /// @name Histograms //@{ Histo1DPtr _histYield[EVENT_TYPES][PT_BINS]; Histo1DPtr _histYieldNoBkg[EVENT_TYPES][PT_BINS]; CounterPtr _counterTrigger[EVENT_TYPES]; Scatter2DPtr _histIAA[6]; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ALICE_2012_I930312); } diff --git a/analyses/pluginALICE/ALICE_2015_PBPBCentrality.cc b/analyses/pluginALICE/ALICE_2015_PBPBCentrality.cc --- a/analyses/pluginALICE/ALICE_2015_PBPBCentrality.cc +++ b/analyses/pluginALICE/ALICE_2015_PBPBCentrality.cc @@ -1,66 +1,66 @@ #include <Rivet/Analysis.hh> #include <Rivet/Projections/AliceCommon.hh> #include <Rivet/Projections/HepMCHeavyIon.hh> namespace Rivet { /// Dummy analysis for centrality calibration in Pb-Pb at 5.02TeV /// /// @author Christian Holm Christensen class 
ALICE_2015_PBPBCentrality : public Analysis { public: /// Constructor ALICE_2015_PBPBCentrality() : Analysis("ALICE_2015_PBPBCentrality") { } /// Initialize this analysis. void init() { ALICE::V0AndTrigger v0and; declare(v0and,"V0-AND"); ALICE::V0MMultiplicity v0m; declare(v0m,"V0M"); // Access the HepMC heavy ion info declare(HepMCHeavyIon(), "HepMC"); - book(_v0m, "V0M","Forward multiplicity","V0M","Events"); - book(_imp, "V0M_IMP",100,0,20, "Impact parameter","b (fm)","Events"); + book(_v0m, "V0M"); + book(_imp, "V0M_IMP",100,0,20); } /// Analyse a single event. void analyze(const Event& event) { // Get and fill in the impact parameter value if the information is valid. _imp->fill(apply(event, "HepMC").impact_parameter()); // Check if we have any hit in either V0-A or -C. If not, the // event is not selected and we get out. if (!apply(event,"V0-AND")()) return; // Fill in the V0 multiplicity for this event _v0m->fill(apply(event,"V0M")()); } /// Finalize this analysis void finalize() { _v0m->normalize(); _imp->normalize(); } /// The distribution of V0M multiplicity Histo1DPtr _v0m; /// The distribution of impact parameters Histo1DPtr _imp; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ALICE_2015_PBPBCentrality); } diff --git a/analyses/pluginALICE/ALICE_2016_I1507157.cc b/analyses/pluginALICE/ALICE_2016_I1507157.cc --- a/analyses/pluginALICE/ALICE_2016_I1507157.cc +++ b/analyses/pluginALICE/ALICE_2016_I1507157.cc @@ -1,186 +1,186 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/AliceCommon.hh" #include "Rivet/Projections/PrimaryParticles.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/EventMixingFinalState.hh" namespace Rivet { /// @brief Correlations of identified particles in pp. /// Also showcasing use of EventMixingFinalState. 
class ALICE_2016_I1507157 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(ALICE_2016_I1507157); /// @name Analysis methods //@{ /// @brief Calculate angular distance between particles. double phaseDif(double a1, double a2){ double dif = a1 - a2; while (dif < -M_PI/2) dif += 2*M_PI; while (dif > 3*M_PI/2) dif -= 2*M_PI; return dif; } /// Book histograms and initialise projections before the run void init() { double etamax = 0.8; double pTmin = 0.5; // GeV // Trigger declare(ALICE::V0AndTrigger(), "V0-AND"); // Charged tracks used to manage the mixing observable. ChargedFinalState cfsMult(Cuts::abseta < etamax); declare(cfsMult, "CFSMult"); // Primary particles. PrimaryParticles pp({Rivet::PID::PIPLUS, Rivet::PID::KPLUS, Rivet::PID::K0S, Rivet::PID::K0L, Rivet::PID::PROTON, Rivet::PID::NEUTRON, Rivet::PID::LAMBDA, Rivet::PID::SIGMAMINUS, Rivet::PID::SIGMAPLUS, Rivet::PID::XIMINUS, Rivet::PID::XI0, Rivet::PID::OMEGAMINUS},Cuts::abseta < etamax && Cuts::pT > pTmin*GeV); declare(pp,"APRIM"); // The event mixing projection declare(EventMixingFinalState(cfsMult, pp, 5, 0, 100, 10),"EVM"); // The particle pairs. pid = {{211, -211}, {321, -321}, {2212, -2212}, {3122, -3122}, {211, 211}, {321, 321}, {2212, 2212}, {3122, 3122}, {2212, 3122}, {2212, -3122}}; // The associated histograms in the data file. vector refdata = {"d04-x01-y01","d04-x01-y02","d04-x01-y03", "d06-x01-y02","d05-x01-y01","d05-x01-y02","d05-x01-y03","d06-x01-y01", "d01-x01-y02","d02-x01-y02"}; ratio.resize(refdata.size()); signal.resize(refdata.size()); background.resize(refdata.size()); for (int i = 0, N = refdata.size(); i < N; ++i) { // The ratio plots. book(ratio[i], refdata[i], true); // Signal and mixed background. 
- book(signal[i], "/TMP/" + refdata[i] + "-s", *ratio[i], refdata[i] + "-s"); - book(background[i], "/TMP/" + refdata[i] + "-b", *ratio[i], refdata[i] + "-b"); + book(signal[i], "/TMP/" + refdata[i] + "-s"); + book(background[i], "/TMP/" + refdata[i] + "-b"); // Number of signal and mixed pairs. nsp.push_back(0.); nmp.push_back(0.); } } /// Perform the per-event analysis void analyze(const Event& event) { // Triggering if (!apply(event, "V0-AND")()) return; // The projections const PrimaryParticles& pp = applyProjection(event,"APRIM"); const EventMixingFinalState& evm = applyProjection(event, "EVM"); // Test if we have enough mixing events available to continue. if (!evm.hasMixingEvents()) return; for(const Particle& p1 : pp.particles()) { // Start by doing the signal distributions for(const Particle& p2 : pp.particles()) { if(isSame(p1,p2)) continue; double dEta = abs(p1.eta() - p2.eta()); double dPhi = phaseDif(p1.phi(), p2.phi()); if(dEta < 1.3) { for (int i = 0, N = pid.size(); i < N; ++i) { int pid1 = pid[i].first; int pid2 = pid[i].second; bool samesign = (pid1 * pid2 > 0); if (samesign && ((pid1 == p1.pid() && pid2 == p2.pid()) || (pid1 == -p1.pid() && pid2 == -p2.pid()))) { signal[i]->fill(dPhi); nsp[i] += 1.0; } if (!samesign && abs(pid1) == abs(pid2) && pid1 == p1.pid() && pid2 == p2.pid()) { signal[i]->fill(dPhi); nsp[i] += 1.0; } if (!samesign && abs(pid1) != abs(pid2) && ( (pid1 == p1.pid() && pid2 == p2.pid()) || (pid2 == p1.pid() && pid1 == p2.pid()) ) ) { signal[i]->fill(dPhi); nsp[i] += 1.0; } } } } // Then do the background distribution for(const Particle& pMix : evm.particles()){ double dEta = abs(p1.eta() - pMix.eta()); double dPhi = phaseDif(p1.phi(), pMix.phi()); if(dEta < 1.3) { for (int i = 0, N = pid.size(); i < N; ++i) { int pid1 = pid[i].first; int pid2 = pid[i].second; bool samesign = (pid1 * pid2 > 0); if (samesign && ((pid1 == p1.pid() && pid2 == pMix.pid()) || (pid1 == -p1.pid() && pid2 == -pMix.pid()))) { background[i]->fill(dPhi); 
nmp[i] += 1.0; } if (!samesign && abs(pid1) == abs(pid2) && pid1 == p1.pid() && pid2 == pMix.pid()) { background[i]->fill(dPhi); nmp[i] += 1.0; } if (!samesign && abs(pid1) != abs(pid2) && ( (pid1 == p1.pid() && pid2 == pMix.pid()) || (pid2 == p1.pid() && pid1 == pMix.pid()) ) ) { background[i]->fill(dPhi); nmp[i] += 1.0; } } } } } /// Normalise histograms etc., after the run void finalize() { for (int i = 0, N = pid.size(); i < N; ++i) { double sc = nmp[i] / nsp[i]; signal[i]->scaleW(sc); divide(signal[i],background[i],ratio[i]); } } //@} /// @name Histograms //@{ vector<pair<int,int> > pid; vector<Histo1DPtr> signal; vector<Histo1DPtr> background; vector<Scatter2DPtr> ratio; vector<double> nsp; vector<double> nmp; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ALICE_2016_I1507157); } diff --git a/analyses/pluginATLAS/ATLAS_2011_I945498.cc b/analyses/pluginATLAS/ATLAS_2011_I945498.cc --- a/analyses/pluginATLAS/ATLAS_2011_I945498.cc +++ b/analyses/pluginATLAS/ATLAS_2011_I945498.cc @@ -1,303 +1,304 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/ZFinder.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" #include "Rivet/Projections/LeadingParticlesFinalState.hh" namespace Rivet { /// ATLAS Z+jets in pp at 7 TeV class ATLAS_2011_I945498 : public Analysis { public: /// Constructor ATLAS_2011_I945498() : Analysis("ATLAS_2011_I945498") { } /// Book histograms and initialise projections before the run void init() { // Variable initialisation _isZeeSample = false; _isZmmSample = false; for (size_t chn = 0; chn < 3; ++chn) { book(weights_nj0[chn], "weights_nj0_" + to_str(chn)); book(weights_nj1[chn], "weights_nj1_" + to_str(chn)); book(weights_nj2[chn], "weights_nj2_" + to_str(chn)); book(weights_nj3[chn], "weights_nj3_" + to_str(chn)); book(weights_nj4[chn], "weights_nj4_" + to_str(chn)); } // Set up projections FinalState fs; ZFinder zfinder_mu(fs, 
Cuts::abseta < 2.4 && Cuts::pT > 20*GeV, PID::MUON, 66*GeV, 116*GeV, 0.1, ZFinder::ClusterPhotons::NODECAY); declare(zfinder_mu, "ZFinder_mu"); Cut cuts = (Cuts::abseta < 1.37 || Cuts::absetaIn(1.52, 2.47)) && Cuts::pT > 20*GeV; ZFinder zfinder_el(fs, cuts, PID::ELECTRON, 66*GeV, 116*GeV, 0.1, ZFinder::ClusterPhotons::NODECAY); declare(zfinder_el, "ZFinder_el"); Cut cuts25_20 = Cuts::abseta < 2.5 && Cuts::pT > 20*GeV; // For combined cross-sections (combined phase space + dressed level) ZFinder zfinder_comb_mu(fs, cuts25_20, PID::MUON, 66.0*GeV, 116.0*GeV, 0.1, ZFinder::ClusterPhotons::NODECAY); declare(zfinder_comb_mu, "ZFinder_comb_mu"); ZFinder zfinder_comb_el(fs, cuts25_20, PID::ELECTRON, 66.0*GeV, 116.0*GeV, 0.1, ZFinder::ClusterPhotons::NODECAY); declare(zfinder_comb_el, "ZFinder_comb_el"); // Define veto FS in order to prevent Z-decay products entering the jet algorithm VetoedFinalState remfs; remfs.addVetoOnThisFinalState(zfinder_el); remfs.addVetoOnThisFinalState(zfinder_mu); VetoedFinalState remfs_comb; remfs_comb.addVetoOnThisFinalState(zfinder_comb_el); remfs_comb.addVetoOnThisFinalState(zfinder_comb_mu); FastJets jets(remfs, FastJets::ANTIKT, 0.4); jets.useInvisibles(); declare(jets, "jets"); FastJets jets_comb(remfs_comb, FastJets::ANTIKT, 0.4); jets_comb.useInvisibles(); declare(jets_comb, "jets_comb"); // 0=el, 1=mu, 2=comb for (size_t chn = 0; chn < 3; ++chn) { book(_h_njet_incl[chn] ,1, 1, chn+1); book(_h_njet_ratio[chn] ,2, 1, chn+1); book(_h_ptjet[chn] ,3, 1, chn+1); book(_h_ptlead[chn] ,4, 1, chn+1); book(_h_ptseclead[chn] ,5, 1, chn+1); book(_h_yjet[chn] ,6, 1, chn+1); book(_h_ylead[chn] ,7, 1, chn+1); book(_h_yseclead[chn] ,8, 1, chn+1); book(_h_mass[chn] ,9, 1, chn+1); book(_h_deltay[chn] ,10, 1, chn+1); book(_h_deltaphi[chn] ,11, 1, chn+1); book(_h_deltaR[chn] ,12, 1, chn+1); } } // Jet selection criteria universal for electron and muon channel /// @todo Replace with a Cut passed to jetsByPt Jets selectJets(const ZFinder* zf, const 
FastJets* allJets) { const FourMomentum l1 = zf->constituents()[0].momentum(); const FourMomentum l2 = zf->constituents()[1].momentum(); Jets jets; for (const Jet& jet : allJets->jetsByPt(30*GeV)) { const FourMomentum jmom = jet.momentum(); if (jmom.absrap() < 4.4 && deltaR(l1, jmom) > 0.5 && deltaR(l2, jmom) > 0.5) { jets.push_back(jet); } } return jets; } /// Perform the per-event analysis void analyze(const Event& event) { vector zfs; zfs.push_back(& (apply(event, "ZFinder_el"))); zfs.push_back(& (apply(event, "ZFinder_mu"))); zfs.push_back(& (apply(event, "ZFinder_comb_el"))); zfs.push_back(& (apply(event, "ZFinder_comb_mu"))); vector fjs; fjs.push_back(& (apply(event, "jets"))); fjs.push_back(& (apply(event, "jets_comb"))); // Determine what kind of MC sample this is const bool isZee = (zfs[0]->bosons().size() == 1) || (zfs[2]->bosons().size() == 1); const bool isZmm = (zfs[1]->bosons().size() == 1) || (zfs[3]->bosons().size() == 1); if (isZee) _isZeeSample = true; if (isZmm) _isZmmSample = true; // Require exactly one electronic or muonic Z-decay in the event bool isZeemm = ( (zfs[0]->bosons().size() == 1 && zfs[1]->bosons().size() != 1) || (zfs[1]->bosons().size() == 1 && zfs[0]->bosons().size() != 1) ); bool isZcomb = ( (zfs[2]->bosons().size() == 1 && zfs[3]->bosons().size() != 1) || (zfs[3]->bosons().size() == 1 && zfs[2]->bosons().size() != 1) ); if (!isZeemm && !isZcomb) vetoEvent; vector zfIDs; vector fjIDs; if (isZeemm) { int chn = zfs[0]->bosons().size() == 1 ? 0 : 1; zfIDs.push_back(chn); fjIDs.push_back(0); } if (isZcomb) { int chn = zfs[2]->bosons().size() == 1 ? 
2 : 3; zfIDs.push_back(chn); fjIDs.push_back(1); } for (size_t izf = 0; izf < zfIDs.size(); ++izf) { int zfID = zfIDs[izf]; int fjID = fjIDs[izf]; int chn = zfID; if (zfID == 2 || zfID == 3) chn = 2; Jets jets = selectJets(zfs[zfID], fjs[fjID]); switch (jets.size()) { case 0: weights_nj0[chn]->fill(); break; case 1: weights_nj0[chn]->fill(); weights_nj1[chn]->fill(); break; case 2: weights_nj0[chn]->fill(); weights_nj1[chn]->fill(); weights_nj2[chn]->fill(); break; case 3: weights_nj0[chn]->fill(); weights_nj1[chn]->fill(); weights_nj2[chn]->fill(); weights_nj3[chn]->fill(); break; default: // >= 4 weights_nj0[chn]->fill(); weights_nj1[chn]->fill(); weights_nj2[chn]->fill(); weights_nj3[chn]->fill(); weights_nj4[chn]->fill(); } // Require at least one jet if (jets.empty()) continue; // Fill jet multiplicities for (size_t ijet = 1; ijet <= jets.size(); ++ijet) { _h_njet_incl[chn]->fill(ijet); } // Loop over selected jets, fill inclusive jet distributions for (size_t ijet = 0; ijet < jets.size(); ++ijet) { _h_ptjet[chn]->fill(jets[ijet].pT()/GeV); _h_yjet [chn]->fill(fabs(jets[ijet].rapidity())); } // Leading jet histos const double ptlead = jets[0].pT()/GeV; const double yabslead = fabs(jets[0].rapidity()); _h_ptlead[chn]->fill(ptlead); _h_ylead [chn]->fill(yabslead); if (jets.size() >= 2) { // Second jet histos const double pt2ndlead = jets[1].pT()/GeV; const double yabs2ndlead = fabs(jets[1].rapidity()); _h_ptseclead[chn] ->fill(pt2ndlead); _h_yseclead [chn] ->fill(yabs2ndlead); // Dijet histos const double deltaphi = fabs(deltaPhi(jets[1], jets[0])); const double deltarap = fabs(jets[0].rapidity() - jets[1].rapidity()) ; const double deltar = fabs(deltaR(jets[0], jets[1], RAPIDITY)); const double mass = (jets[0].momentum() + jets[1].momentum()).mass(); _h_mass [chn] ->fill(mass/GeV); _h_deltay [chn] ->fill(deltarap); _h_deltaphi[chn] ->fill(deltaphi); _h_deltaR [chn] ->fill(deltar); } } } /// @name Ratio calculator util functions //@{ /// Calculate the ratio, 
being careful about div-by-zero double ratio(double a, double b) { return (b != 0) ? a/b : 0; } /// Calculate the ratio error, being careful about div-by-zero double ratio_err(double a, double b) { return (b != 0) ? sqrt(a/sqr(b) + sqr(a)/(b*b*b)) : 0; } //@} void finalize() { // Fill ratio histograms for (size_t chn = 0; chn < 3; ++chn) { _h_njet_ratio[chn]->addPoint(1, ratio(weights_nj1[chn]->val(), weights_nj0[chn]->val()), 0.5, ratio_err(weights_nj1[chn]->val(), weights_nj0[chn]->val())); _h_njet_ratio[chn]->addPoint(2, ratio(weights_nj2[chn]->val(), weights_nj1[chn]->val()), 0.5, ratio_err(weights_nj2[chn]->val(), weights_nj1[chn]->val())); _h_njet_ratio[chn]->addPoint(3, ratio(weights_nj3[chn]->val(), weights_nj2[chn]->val()), 0.5, ratio_err(weights_nj3[chn]->val(), weights_nj2[chn]->val())); _h_njet_ratio[chn]->addPoint(4, ratio(weights_nj4[chn]->val(), weights_nj3[chn]->val()), 0.5, ratio_err(weights_nj4[chn]->val(), weights_nj3[chn]->val())); } // Scale other histos for (size_t chn = 0; chn < 3; ++chn) { // For ee and mumu channels: normalize to Njet inclusive cross-section - double xs = (chn == 2) ? crossSectionPerEvent()/picobarn : 1 / weights_nj0[chn]->val(); + double xs = crossSectionPerEvent()/picobarn; + if (chn != 2 && weights_nj0[chn]->val() != 0.) xs = 1.0 / weights_nj0[chn]->val(); // For inclusive MC sample(ee/mmu channels together) we want the single-lepton-flavor xsec - if (_isZeeSample && _isZmmSample) xs /= 2; + if (_isZeeSample && _isZmmSample) xs *= 0.5; // Special case histogram: always not normalized scale(_h_njet_incl[chn], (chn < 2) ? 
crossSectionPerEvent()/picobarn : xs); scale(_h_ptjet[chn] , xs); scale(_h_ptlead[chn] , xs); scale(_h_ptseclead[chn], xs); scale(_h_yjet[chn] , xs); scale(_h_ylead[chn] , xs); scale(_h_yseclead[chn] , xs); scale(_h_deltaphi[chn] , xs); scale(_h_deltay[chn] , xs); scale(_h_deltaR[chn] , xs); scale(_h_mass[chn] , xs); } } //@} private: bool _isZeeSample; bool _isZmmSample; CounterPtr weights_nj0[3]; CounterPtr weights_nj1[3]; CounterPtr weights_nj2[3]; CounterPtr weights_nj3[3]; CounterPtr weights_nj4[3]; Scatter2DPtr _h_njet_ratio[3]; Histo1DPtr _h_njet_incl[3]; Histo1DPtr _h_ptjet[3]; Histo1DPtr _h_ptlead[3]; Histo1DPtr _h_ptseclead[3]; Histo1DPtr _h_yjet[3]; Histo1DPtr _h_ylead[3]; Histo1DPtr _h_yseclead[3]; Histo1DPtr _h_deltaphi[3]; Histo1DPtr _h_deltay[3]; Histo1DPtr _h_deltaR[3]; Histo1DPtr _h_mass[3]; }; DECLARE_RIVET_PLUGIN(ATLAS_2011_I945498); } diff --git a/analyses/pluginATLAS/ATLAS_2014_I1306294.cc b/analyses/pluginATLAS/ATLAS_2014_I1306294.cc --- a/analyses/pluginATLAS/ATLAS_2014_I1306294.cc +++ b/analyses/pluginATLAS/ATLAS_2014_I1306294.cc @@ -1,189 +1,189 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ZFinder.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/HeavyHadrons.hh" #include "Rivet/Projections/VetoedFinalState.hh" namespace Rivet { /// Electroweak Wjj production at 8 TeV class ATLAS_2014_I1306294 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(ATLAS_2014_I1306294); /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { // Get options from the new option system _mode = 1; if ( getOption("LMODE") == "EL" ) _mode = 1; if ( getOption("LMODE") == "MU" ) _mode = 2; FinalState fs; Cut cuts = Cuts::abseta < 2.5 && Cuts::pT > 20*GeV; ZFinder zfinder(fs, cuts, _mode==1? 
PID::ELECTRON : PID::MUON, 76.0*GeV, 106.0*GeV, 0.1, - ZFinder::ChargedLeptons::PROMPT, ZFinder::ClusterPhotons::NODECAY, ZFinder::AddPhotons::NO); + ZFinder::ChargedLeptons::ALL, ZFinder::ClusterPhotons::NODECAY, ZFinder::AddPhotons::NO); declare(zfinder, "ZFinder"); VetoedFinalState jet_fs(fs); jet_fs.addVetoOnThisFinalState(getProjection("ZFinder")); - FastJets jetpro1(jet_fs, FastJets::ANTIKT, 0.4, JetAlg::Muons::ALL, JetAlg::Invisibles::NONE); + FastJets jetpro1(jet_fs, FastJets::ANTIKT, 0.4, JetAlg::Muons::ALL, JetAlg::Invisibles::ALL); declare(jetpro1, "AntiKtJets04"); declare(HeavyHadrons(), "BHadrons"); // Histograms with data binning book(_h_bjet_Pt , 3, 1, 1); book(_h_bjet_Y , 5, 1, 1); book(_h_bjet_Yboost , 7, 1, 1); book(_h_bjet_DY20 , 9, 1, 1); book(_h_bjet_ZdPhi20 ,11, 1, 1); book(_h_bjet_ZdR20 ,13, 1, 1); book(_h_bjet_ZPt ,15, 1, 1); book(_h_bjet_ZY ,17, 1, 1); book(_h_2bjet_dR ,21, 1, 1); book(_h_2bjet_Mbb ,23, 1, 1); book(_h_2bjet_ZPt ,25, 1, 1); book(_h_2bjet_ZY ,27, 1, 1); } /// Perform the per-event analysis void analyze(const Event& e) { // Check we have a Z: const ZFinder& zfinder = apply(e, "ZFinder"); if (zfinder.bosons().size() != 1) vetoEvent; const Particles boson_s = zfinder.bosons(); const Particle boson_f = boson_s[0]; const Particles zleps = zfinder.constituents(); // Stop processing the event if no true b-partons or hadrons are found const Particles allBs = apply(e, "BHadrons").bHadrons(5.0*GeV); Particles stableBs = filter_select(allBs, Cuts::abseta < 2.5); if (stableBs.empty()) vetoEvent; // Get the b-jets const Jets& jets = apply(e, "AntiKtJets04").jetsByPt(Cuts::pT >20.0*GeV && Cuts::abseta <2.4); Jets b_jets; for (const Jet& jet : jets) { //veto overlaps with Z leptons: bool veto = false; for (const Particle& zlep : zleps) { if (deltaR(jet, zlep) < 0.5) veto = true; } if (veto) continue; for (const Particle& bhadron : stableBs) { if (deltaR(jet, bhadron) <= 0.3) { b_jets.push_back(jet); break; // match } } } // Make sure we 
have at least 1 if (b_jets.empty()) vetoEvent; // Fill the plots const double ZpT = boson_f.pT()/GeV; const double ZY = boson_f.absrap(); _h_bjet_ZPt->fill(ZpT); _h_bjet_ZY ->fill(ZY); for (const Jet& jet : b_jets) { _h_bjet_Pt->fill(jet.pT()/GeV); _h_bjet_Y ->fill(jet.absrap()); const double Yboost = 0.5 * fabs(boson_f.rapidity() + jet.rapidity()); _h_bjet_Yboost->fill(Yboost); if(ZpT > 20.) { const double ZBDY = fabs( boson_f.rapidity() - jet.rapidity() ); const double ZBDPHI = fabs( deltaPhi(jet.phi(), boson_f.phi()) ); const double ZBDR = deltaR(jet, boson_f, RAPIDITY); _h_bjet_DY20->fill( ZBDY); _h_bjet_ZdPhi20->fill(ZBDPHI); _h_bjet_ZdR20->fill( ZBDR); } } //loop over b-jets if (b_jets.size() < 2) return; _h_2bjet_ZPt->fill(ZpT); _h_2bjet_ZY ->fill(ZY); const double BBDR = deltaR(b_jets[0], b_jets[1], RAPIDITY); const double Mbb = (b_jets[0].momentum() + b_jets[1].momentum()).mass(); _h_2bjet_dR ->fill(BBDR); _h_2bjet_Mbb->fill(Mbb); } // end of analysis loop /// Normalise histograms etc., after the run void finalize() { const double normfac = crossSection() / sumOfWeights(); scale( _h_bjet_Pt, normfac); scale( _h_bjet_Y, normfac); scale( _h_bjet_Yboost, normfac); scale( _h_bjet_DY20, normfac); scale( _h_bjet_ZdPhi20, normfac); scale( _h_bjet_ZdR20, normfac); scale( _h_bjet_ZPt, normfac); scale( _h_bjet_ZY, normfac); scale( _h_2bjet_dR, normfac); scale( _h_2bjet_Mbb, normfac); scale( _h_2bjet_ZPt, normfac); scale( _h_2bjet_ZY, normfac); } //@} protected: // Data members like post-cuts event weight counters go here size_t _mode; private: Histo1DPtr _h_bjet_Pt; Histo1DPtr _h_bjet_Y; Histo1DPtr _h_bjet_Yboost; Histo1DPtr _h_bjet_DY20; Histo1DPtr _h_bjet_ZdPhi20; Histo1DPtr _h_bjet_ZdR20; Histo1DPtr _h_bjet_ZPt; Histo1DPtr _h_bjet_ZY; Histo1DPtr _h_2bjet_dR; Histo1DPtr _h_2bjet_Mbb; Histo1DPtr _h_2bjet_ZPt; Histo1DPtr _h_2bjet_ZY; }; DECLARE_RIVET_PLUGIN(ATLAS_2014_I1306294); } diff --git a/analyses/pluginATLAS/ATLAS_2016_I1444991.cc 
b/analyses/pluginATLAS/ATLAS_2016_I1444991.cc --- a/analyses/pluginATLAS/ATLAS_2016_I1444991.cc +++ b/analyses/pluginATLAS/ATLAS_2016_I1444991.cc @@ -1,195 +1,195 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/DressedLeptons.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" #include "Rivet/Projections/PromptFinalState.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/VisibleFinalState.hh" namespace Rivet { class ATLAS_2016_I1444991 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(ATLAS_2016_I1444991); public: /// Book histograms and initialise projections before the run void init() { // All particles within |eta| < 5.0 const FinalState FS(Cuts::abseta < 5.0); // Project photons for dressing IdentifiedFinalState photon_id(FS); photon_id.acceptIdPair(PID::PHOTON); // Project dressed electrons with pT > 15 GeV and |eta| < 2.47 IdentifiedFinalState el_id(FS); el_id.acceptIdPair(PID::ELECTRON); PromptFinalState el_bare(el_id); Cut cuts = (Cuts::abseta < 2.47) && ( (Cuts::abseta <= 1.37) || (Cuts::abseta >= 1.52) ) && (Cuts::pT > 15*GeV); DressedLeptons el_dressed_FS(photon_id, el_bare, 0.1, cuts, true); declare(el_dressed_FS,"EL_DRESSED_FS"); // Project dressed muons with pT > 15 GeV and |eta| < 2.5 IdentifiedFinalState mu_id(FS); mu_id.acceptIdPair(PID::MUON); PromptFinalState mu_bare(mu_id); DressedLeptons mu_dressed_FS(photon_id, mu_bare, 0.1, Cuts::abseta < 2.5 && Cuts::pT > 15*GeV, true); declare(mu_dressed_FS,"MU_DRESSED_FS"); // get MET from generic invisibles VetoedFinalState inv_fs(FS); inv_fs.addVetoOnThisFinalState(VisibleFinalState(FS)); declare(inv_fs, "InvisibleFS"); // Project jets FastJets jets(FS, FastJets::ANTIKT, 0.4); jets.useInvisibles(JetAlg::Invisibles::NONE); jets.useMuons(JetAlg::Muons::NONE); declare(jets, "jets"); // Book histograms book(_h_Njets , 2,1,1); book(_h_PtllMET , 
3,1,1); book(_h_Yll , 4,1,1); book(_h_PtLead , 5,1,1); book(_h_Njets_norm , 6,1,1); book(_h_PtllMET_norm , 7,1,1); book(_h_Yll_norm , 8,1,1); book(_h_PtLead_norm , 9,1,1); book(_h_JetVeto , 10, 1, 1, true); //histos for jetveto std::vector ptlead25_bins = { 0., 25., 300. }; std::vector ptlead40_bins = { 0., 40., 300. }; - book(_h_pTj1_sel25 , "pTj1_sel25", ptlead25_bins, "", "", "" ); - book(_h_pTj1_sel40 , "pTj1_sel40", ptlead40_bins, "", "", "" ); + book(_h_pTj1_sel25 , "pTj1_sel25", ptlead25_bins); + book(_h_pTj1_sel40 , "pTj1_sel40", ptlead40_bins); } /// Perform the per-event analysis void analyze(const Event& event) { // Get final state particles const FinalState& ifs = applyProjection(event, "InvisibleFS"); const vector& good_mu = applyProjection(event, "MU_DRESSED_FS").dressedLeptons(); const vector& el_dressed = applyProjection(event, "EL_DRESSED_FS").dressedLeptons(); const Jets& jets = applyProjection(event, "jets").jetsByPt(Cuts::pT>25*GeV && Cuts::abseta < 4.5); //find good electrons vector good_el; for (const DressedLepton& el : el_dressed){ bool keep = true; for (const DressedLepton& mu : good_mu) { keep &= deltaR(el, mu) >= 0.1; } if (keep) good_el += el; } // select only emu events if ((good_el.size() != 1) || good_mu.size() != 1) vetoEvent; //built dilepton FourMomentum dilep = good_el[0].momentum() + good_mu[0].momentum(); double Mll = dilep.mass(); double Yll = dilep.rapidity(); double DPhill = fabs(deltaPhi(good_el[0], good_mu[0])); double pTl1 = (good_el[0].pT() > good_mu[0].pT())? good_el[0].pT() : good_mu[0].pT(); //get MET FourMomentum met; for (const Particle& p : ifs.particles()) met += p.momentum(); // do a few cuts before looking at jets if (pTl1 <= 22. || DPhill >= 1.8 || met.pT() <= 20.) vetoEvent; if (Mll <= 10. || Mll >= 55.) 
vetoEvent; Jets jets_selected; for (const Jet &j : jets) { if( j.abseta() > 2.4 && j.pT()<=30*GeV ) continue; bool keep = true; for (const DressedLepton& el : good_el) { keep &= deltaR(j, el) >= 0.3; } if (keep) jets_selected += j; } double PtllMET = (met + good_el[0].momentum() + good_mu[0].momentum()).pT(); double Njets = jets_selected.size() > 2 ? 2 : jets_selected.size(); double pTj1 = jets_selected.size()? jets_selected[0].pT() : 0.1; // Fill histograms _h_Njets->fill(Njets); _h_PtllMET->fill(PtllMET); _h_Yll->fill(fabs(Yll)); _h_PtLead->fill(pTj1); _h_Njets_norm->fill(Njets); _h_PtllMET_norm->fill(PtllMET); _h_Yll_norm->fill(fabs(Yll)); _h_PtLead_norm->fill(pTj1); _h_pTj1_sel25->fill(pTj1); _h_pTj1_sel40->fill(pTj1); } /// Normalise histograms etc., after the run void finalize() { const double xs = crossSectionPerEvent()/femtobarn; scale(_h_Njets, xs); scale(_h_PtllMET, xs); scale(_h_Yll, xs); scale(_h_PtLead, xs); normalize(_h_Njets_norm); normalize(_h_PtllMET_norm); normalize(_h_Yll_norm); normalize(_h_PtLead_norm); scale(_h_pTj1_sel25, xs); scale(_h_pTj1_sel40, xs); normalize(_h_pTj1_sel25); normalize(_h_pTj1_sel40); // fill jet veto efficiency histogram _h_JetVeto->point(0).setY(_h_pTj1_sel25->bin(0).sumW(), sqrt(_h_pTj1_sel25->bin(0).sumW2())); _h_JetVeto->point(1).setY(_h_PtLead_norm->bin(0).sumW(), sqrt(_h_PtLead_norm->bin(0).sumW2())); _h_JetVeto->point(2).setY(_h_pTj1_sel40->bin(0).sumW(), sqrt(_h_pTj1_sel40->bin(0).sumW2())); scale(_h_PtLead_norm , 1000.); // curveball unit change in HepData, just for this one scale(_h_PtllMET_norm, 1000.); // curveball unit change in HepData, and this one } private: /// @name Histograms //@{ Histo1DPtr _h_Njets; Histo1DPtr _h_PtllMET; Histo1DPtr _h_Yll; Histo1DPtr _h_PtLead; Histo1DPtr _h_Njets_norm; Histo1DPtr _h_PtllMET_norm; Histo1DPtr _h_Yll_norm; Histo1DPtr _h_PtLead_norm; Scatter2DPtr _h_JetVeto; Histo1DPtr _h_pTj1_sel25; Histo1DPtr _h_pTj1_sel40; }; 
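The finalize() step in the analysis above derives a jet-veto efficiency from the first bin of the normalised two-bin leading-jet-pT histograms (edges like {0, 25, 300} GeV). As a rough standalone illustration of that quantity (plain C++, no Rivet; the helper name is invented here, not part of any API):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical helper: fraction of events whose leading-jet pT falls below
// the veto threshold, i.e. the first-bin content of a normalised two-bin
// histogram such as pTj1_sel25 above. Note that in the analysis, events
// with no selected jet enter with a sentinel pT of 0.1 GeV and therefore
// always pass the veto.
double jetVetoEfficiency(const std::vector<double>& leadJetPt, double threshold) {
  if (leadJetPt.empty()) return 0.0;
  const long nPass = std::count_if(leadJetPt.begin(), leadJetPt.end(),
                                   [threshold](double pt) { return pt < threshold; });
  return static_cast<double>(nPass) / static_cast<double>(leadJetPt.size());
}
```

For example, with leading-jet pTs of {10, 30, 50, 20} GeV, a 25 GeV veto keeps 2 of 4 events, while a 40 GeV veto keeps 3 of 4.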
// The hook for the plugin system DECLARE_RIVET_PLUGIN(ATLAS_2016_I1444991); } diff --git a/analyses/pluginATLAS/ATLAS_2017_I1614149.cc b/analyses/pluginATLAS/ATLAS_2017_I1614149.cc --- a/analyses/pluginATLAS/ATLAS_2017_I1614149.cc +++ b/analyses/pluginATLAS/ATLAS_2017_I1614149.cc @@ -1,358 +1,359 @@ // -*- C++ -* #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" #include "Rivet/Projections/PromptFinalState.hh" #include "Rivet/Projections/DressedLeptons.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/MissingMomentum.hh" -#include "Rivet/Tools/fjcontrib/Njettiness.hh" -#include "Rivet/Tools/fjcontrib/Nsubjettiness.hh" -#include "Rivet/Tools/fjcontrib/NjettinessPlugin.hh" + +#include "fastjet/contrib/Njettiness.hh" +#include "fastjet/contrib/Nsubjettiness.hh" +#include "fastjet/contrib/NjettinessPlugin.hh" namespace Rivet { class ATLAS_2017_I1614149 : public Analysis { public: /// Constructor ///@brief: Resolved and boosted ttbar l+jets cross sections at 13 TeV DEFAULT_RIVET_ANALYSIS_CTOR(ATLAS_2017_I1614149); void init() { // Eta ranges Cut eta_full = (Cuts::abseta < 5.0); Cut lep_cuts = (Cuts::abseta < 2.5) && (Cuts::pT > 25*GeV); // All final state particles FinalState fs(eta_full); IdentifiedFinalState all_photons(fs); all_photons.acceptIdPair(PID::PHOTON); // Get photons to dress leptons IdentifiedFinalState ph_id(fs); ph_id.acceptIdPair(PID::PHOTON); // Projection to find the electrons IdentifiedFinalState el_id(fs); el_id.acceptIdPair(PID::ELECTRON); PromptFinalState photons(ph_id); photons.acceptTauDecays(true); declare(photons, "photons"); PromptFinalState electrons(el_id); electrons.acceptTauDecays(true); DressedLeptons dressedelectrons(photons, electrons, 0.1, lep_cuts); declare(dressedelectrons, "elecs"); DressedLeptons ewdressedelectrons(all_photons, electrons, 0.1, eta_full); // Projection to find the muons 
IdentifiedFinalState mu_id(fs); mu_id.acceptIdPair(PID::MUON); PromptFinalState muons(mu_id); muons.acceptTauDecays(true); DressedLeptons dressedmuons(photons, muons, 0.1, lep_cuts); declare(dressedmuons, "muons"); DressedLeptons ewdressedmuons(all_photons, muons, 0.1, eta_full); // Projection to find MET declare(MissingMomentum(fs), "MET"); // remove prompt neutrinos from jet clustering IdentifiedFinalState nu_id(fs); nu_id.acceptNeutrinos(); PromptFinalState neutrinos(nu_id); neutrinos.acceptTauDecays(true); // Jet clustering. VetoedFinalState vfs(fs); vfs.addVetoOnThisFinalState(ewdressedelectrons); vfs.addVetoOnThisFinalState(ewdressedmuons); vfs.addVetoOnThisFinalState(neutrinos); FastJets jets(vfs, FastJets::ANTIKT, 0.4, JetAlg::Muons::ALL, JetAlg::Invisibles::ALL); declare(jets, "jets"); // Addition of the large-R jets VetoedFinalState vfs1(fs); vfs1.addVetoOnThisFinalState(neutrinos); FastJets fjets(vfs1, FastJets::ANTIKT, 1.); fjets.useInvisibles(JetAlg::Invisibles::NONE); fjets.useMuons(JetAlg::Muons::NONE); declare(fjets, "fjets"); bookHists("top_pt_res", 15); bookHists("top_absrap_res", 17); bookHists("ttbar_pt_res", 19); bookHists("ttbar_absrap_res", 21); bookHists("ttbar_m_res", 23); bookHists("top_pt_boost", 25); bookHists("top_absrap_boost", 27); } void analyze(const Event& event) { // Get the selected objects, using the projections. 
vector electrons = apply(event, "elecs").dressedLeptons(); vector muons = apply(event, "muons").dressedLeptons(); const Jets& jets = apply(event, "jets").jetsByPt(Cuts::pT > 25*GeV && Cuts::abseta < 2.5); const PseudoJets& all_fjets = apply(event, "fjets").pseudoJetsByPt(); // get MET const Vector3 met = apply(event, "MET").vectorMPT(); Jets bjets, lightjets; for (const Jet& jet : jets) { bool b_tagged = jet.bTags(Cuts::pT > 5*GeV).size(); if ( b_tagged && bjets.size() < 2) bjets +=jet; else lightjets += jet; } // Implementing large-R jets definition // trim the jets PseudoJets trimmed_fatJets; float Rfilt = 0.2; float pt_fraction_min = 0.05; fastjet::Filter trimmer(fastjet::JetDefinition(fastjet::kt_algorithm, Rfilt), fastjet::SelectorPtFractionMin(pt_fraction_min)); for (PseudoJet pjet : all_fjets) trimmed_fatJets += trimmer(pjet); trimmed_fatJets = fastjet::sorted_by_pt(trimmed_fatJets); PseudoJets trimmed_jets; for (unsigned int i = 0; i < trimmed_fatJets.size(); ++i) { FourMomentum tj_mom = momentum(trimmed_fatJets[i]); if (tj_mom.pt() <= 300*GeV) continue; if (tj_mom.abseta() >= 2.0) continue; trimmed_jets.push_back(trimmed_fatJets[i]); } bool single_electron = (electrons.size() == 1) && (muons.empty()); bool single_muon = (muons.size() == 1) && (electrons.empty()); DressedLepton *lepton = NULL; if (single_electron) lepton = &electrons[0]; else if (single_muon) lepton = &muons[0]; if (!single_electron && !single_muon) vetoEvent; bool pass_resolved = true; bool num_b_tagged_jets = (bjets.size() == 2); if (!num_b_tagged_jets) pass_resolved = false; if (jets.size() < 4) pass_resolved = false; bool pass_boosted = true; int fatJetIndex = -1; bool passTopTag = false; bool passDphi = false; bool passAddJet = false; bool goodLepJet = false; bool lepbtag = false; bool hadbtag=false; vector lepJetIndex; vector jet_farFromHadTopJetCandidate; if (met.mod() < 20*GeV) pass_boosted = false; if (pass_boosted) { double transmass = _mT(lepton->momentum(), met); if (transmass + 
met.mod() < 60*GeV) pass_boosted = false; } if (pass_boosted) { if (trimmed_jets.size() >= 1) { for (unsigned int j = 0; j 100*GeV && momentum(trimmed_jets.at(j)).pt() > 300*GeV && momentum(trimmed_jets.at(j)).pt() < 1500*GeV && fabs(momentum(trimmed_jets.at(j)).eta()) < 2.) { passTopTag = true; fatJetIndex = j; break; } } } } if(!passTopTag && fatJetIndex == -1) pass_boosted = false; if (pass_boosted) { double dPhi_fatjet = deltaPhi(lepton->phi(), momentum(trimmed_jets.at(fatJetIndex)).phi()); double dPhi_fatjet_lep_cut = 1.0; //2.3 if (dPhi_fatjet > dPhi_fatjet_lep_cut ) { passDphi = true; } } if (!passDphi) pass_boosted = false; if (bjets.empty()) pass_boosted = false; if (pass_boosted) { for (unsigned int sj = 0; sj < jets.size(); ++sj) { double dR = deltaR(jets.at(sj).momentum(), momentum(trimmed_jets.at(fatJetIndex))); if(dR > 1.5) { passAddJet = true; jet_farFromHadTopJetCandidate.push_back(sj); } } } if (!passAddJet) pass_boosted = false; if (pass_boosted) { for (int ltj : jet_farFromHadTopJetCandidate) { double dR_jet_lep = deltaR(jets.at(ltj).momentum(), lepton->momentum()); double dR_jet_lep_cut = 2.0;//1.5 if (dR_jet_lep < dR_jet_lep_cut) { lepJetIndex.push_back(ltj); goodLepJet = true; } } } if(!goodLepJet) pass_boosted = false; if (pass_boosted) { for (int lepj : lepJetIndex) { lepbtag = jets.at(lepj).bTags(Cuts::pT > 5*GeV).size(); if (lepbtag) break; } } double dR_fatBjet_cut = 1.0; if (pass_boosted) { for (const Jet& bjet : bjets) { hadbtag |= deltaR(momentum(trimmed_jets.at(fatJetIndex)), bjet) < dR_fatBjet_cut; } } if (!(lepbtag || hadbtag)) pass_boosted = false; FourMomentum pbjet1; //Momentum of bjet1 FourMomentum pbjet2; //Momentum of bjet int Wj1index = -1, Wj2index = -1; if (pass_resolved) { if ( deltaR(bjets[0], *lepton) <= deltaR(bjets[1], *lepton) ) { pbjet1 = bjets[0].momentum(); pbjet2 = bjets[1].momentum(); } else { pbjet1 = bjets[1].momentum(); pbjet2 = bjets[0].momentum(); } double bestWmass = 1000.0*TeV; double mWPDG = 80.399*GeV; 
  for (unsigned int i = 0; i < (lightjets.size() - 1); ++i) {
    for (unsigned int j = i + 1; j < lightjets.size(); ++j) {
      double wmass = (lightjets[i].momentum() + lightjets[j].momentum()).mass();
      if (fabs(wmass - mWPDG) < fabs(bestWmass - mWPDG)) {
        bestWmass = wmass;
        Wj1index = i;
        Wj2index = j;
      }
    }
  }
  FourMomentum pjet1 = lightjets[Wj1index].momentum();
  FourMomentum pjet2 = lightjets[Wj2index].momentum();

  // Compute the hadronic W boson
  FourMomentum pWhadron = pjet1 + pjet2;
  double pz = computeneutrinoz(lepton->momentum(), met);
  FourMomentum ppseudoneutrino(sqrt(sqr(met.x()) + sqr(met.y()) + sqr(pz)), met.x(), met.y(), pz);

  // Compute the leptonic, hadronic and combined pseudo-tops
  FourMomentum ppseudotoplepton = lepton->momentum() + ppseudoneutrino + pbjet1;
  FourMomentum ppseudotophadron = pbjet2 + pWhadron;
  FourMomentum pttbar = ppseudotoplepton + ppseudotophadron;

  fillHists("top_pt_res",      ppseudotophadron.pt()/GeV);
  fillHists("top_absrap_res",  ppseudotophadron.absrap());
  fillHists("ttbar_pt_res",    pttbar.pt()/GeV);
  fillHists("ttbar_absrap_res", pttbar.absrap());
  fillHists("ttbar_m_res",     pttbar.mass()/GeV);
}

if (pass_boosted) { // Boosted selection
  double hadtop_pt     = momentum(trimmed_jets.at(fatJetIndex)).pt() / GeV;
  double hadtop_absrap = momentum(trimmed_jets.at(fatJetIndex)).absrap();
  fillHists("top_pt_boost",     hadtop_pt);
  fillHists("top_absrap_boost", hadtop_absrap);
}
}


void finalize() {
  // Normalise to cross-section
  const double sf = (crossSection() / sumOfWeights());
  for (HistoMap::value_type& hist : _h) {
    scale(hist.second, sf);
    if (hist.first.find("_norm") != string::npos)  normalize(hist.second);
  }
}

void bookHists(std::string name, unsigned int index) {
  book(_h[name], index, 1, 1);
  book(_h[name + "_norm"], index + 1, 1, 1);
}

void fillHists(std::string name, double value) {
  _h[name]->fill(value);
  _h[name + "_norm"]->fill(value);
}

double _mT(const FourMomentum& l, const Vector3& met) const {
  return sqrt(2.0 * l.pT() * met.mod() * (1 - cos(deltaPhi(l, met))));
}

double
tau32(const fastjet::PseudoJet& jet, double jet_rad) const {
  double alpha = 1.0;
-  fjcontrib::Nsubjettiness::NormalizedCutoffMeasure normalized_measure(alpha, jet_rad, 1000000);
+  fjcontrib::NormalizedCutoffMeasure normalized_measure(alpha, jet_rad, 1000000);
  // WTA definition
  // Nsubjettiness::OnePass_WTA_KT_Axes wta_kt_axes;
  // as in JetSubStructure recommendations
-  fjcontrib::Nsubjettiness::KT_Axes kt_axes;
+  fjcontrib::KT_Axes kt_axes;
  /// NsubjettinessRatio uses the results from Nsubjettiness to calculate the ratio
  /// tau_N/tau_M, where N and M are specified by the user. The ratio of different tau values
  /// is often used in analyses, so this class is helpful to streamline code.
-  fjcontrib::Nsubjettiness::NsubjettinessRatio tau32_kt(3, 2, kt_axes, normalized_measure);
+  fjcontrib::NsubjettinessRatio tau32_kt(3, 2, kt_axes, normalized_measure);
  double tau32 = tau32_kt.result(jet);
  return tau32;
}

double computeneutrinoz(const FourMomentum& lepton, const Vector3& met) const {
  // Compute the z-component of the neutrino momentum, given the lepton and MET
  double pzneutrino;
  double m_W = 80.399; // in GeV, given in the paper
  double k = ((sqr(m_W) - sqr(lepton.mass())) / 2) + (lepton.px() * met.x() + lepton.py() * met.y());
  double a = sqr(lepton.E()) - sqr(lepton.pz());
  double b = -2 * k * lepton.pz();
  double c = sqr(lepton.E()) * sqr(met.mod()) - sqr(k);
  double discriminant = sqr(b) - 4 * a * c;
  // The two possible quadratic solutions
  double quad[2] = { (-b - sqrt(discriminant)) / (2 * a), (-b + sqrt(discriminant)) / (2 * a) };
  if (discriminant < 0) {
    // If the discriminant is negative, take the real part
    pzneutrino = -b / (2 * a);
  } else {
    // If the discriminant is greater than or equal to zero,
    // take the solution with the smallest absolute value
    double absquad[2];
    for (int n = 0; n < 2; ++n)  absquad[n] = fabs(quad[n]);
    pzneutrino = (absquad[0] < absquad[1]) ? quad[0] : quad[1];
  }
  return pzneutrino;
}


private:

/// @name Objects that are used by the event selection decisions
typedef map<string, Histo1DPtr>
HistoMap;
HistoMap _h;

};


DECLARE_RIVET_PLUGIN(ATLAS_2017_I1614149);

}
diff --git a/analyses/pluginCDF/CDF_1996_S3108457.cc b/analyses/pluginCDF/CDF_1996_S3108457.cc
--- a/analyses/pluginCDF/CDF_1996_S3108457.cc
+++ b/analyses/pluginCDF/CDF_1996_S3108457.cc
@@ -1,108 +1,108 @@
// -*- C++ -*-
#include "Rivet/Analysis.hh"
#include "Rivet/Projections/FinalState.hh"
#include "Rivet/Projections/FastJets.hh"
#include "Rivet/Projections/SmearedJets.hh"

namespace Rivet {

/// @brief CDF properties of high-mass multi-jet events
class CDF_1996_S3108457 : public Analysis {
public:

  /// Constructor
  DEFAULT_RIVET_ANALYSIS_CTOR(CDF_1996_S3108457);

  /// @name Analysis methods
  //@{

  /// Book histograms and initialise projections before the run
  void init() {
    /// Initialise and register projections here
    const FinalState fs(Cuts::abseta < 4.2);
    FastJets fj(fs, FastJets::CDFJETCLU, 0.7);
    // Smear energy and mass with the 10% uncertainty quoted in the paper
    SmearedJets sj_E(fj, [](const Jet& jet){
      return P4_SMEAR_MASS_GAUSS(P4_SMEAR_E_GAUSS(jet, 0.1*jet.E()), 0.1*jet.mass());
    });
    declare(sj_E, "SmearedJets_E");

    /// Book histograms here, e.g.:
    for (size_t i = 0; i < 5; ++i) {
      book(_h_m[i], 1+i, 1, 1);
      book(_h_costheta[i], 10+i, 1, 1);
      book(_h_pT[i], 15+i, 1, 1);
    }
    /// @todo Ratios of mass histograms left out: binning doesn't work out
  }

  /// Perform the per-event analysis
  void analyze(const Event& event) {
    // Get the smeared jets
    const Jets SJets = apply<JetAlg>(event, "SmearedJets_E").jets(Cuts::Et > 20.0*GeV, cmpMomByEt);
    if (SJets.size() < 2 || SJets.size() > 6)  vetoEvent;

    // Calculate sum Et and the total jet four-momentum
    double sumEt(0), sumE(0);
    FourMomentum JS(0,0,0,0);
    for (const Jet& jet : SJets) {
      sumEt += jet.Et()/GeV;
      sumE  += jet.E()/GeV;
      JS += jet.momentum();
    }
    if (sumEt < 420. || sumE > 2000.)
vetoEvent; double mass = JS.mass()/GeV; LorentzTransform cms_boost = LorentzTransform::mkFrameTransformFromBeta(JS.betaVec()); FourMomentum jet0boosted(cms_boost.transform(SJets[0].momentum())); double costheta0 = fabs(cos(jet0boosted.theta())); if (costheta0 < 2.0/3.0) _h_m[SJets.size()-2]->fill(mass); - if (mass > 600.0*GeV) _h_costheta[JS.size()-2]->fill(costheta0); - if (costheta0 < 2.0/3.0 && mass > 600.0*GeV) { + if (mass > 600.) _h_costheta[SJets.size()-2]->fill(costheta0); + if (costheta0 < 2.0/3.0 && mass > 600.) { for (const Jet& jet : SJets) _h_pT[SJets.size()-2]->fill(jet.pT()); } } /// Normalise histograms etc., after the run void finalize() { /// Normalise, scale and otherwise manipulate histograms here for (size_t i=0; i<5; ++i) { normalize(_h_m[i], 40.0); normalize(_h_costheta[i], 2.0); normalize(_h_pT[i], 20.0); } } //@} private: /// @name Histograms //@{ Histo1DPtr _h_m[5]; Histo1DPtr _h_costheta[5]; Histo1DPtr _h_pT[5]; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(CDF_1996_S3108457); } diff --git a/analyses/pluginCMS/CMS_2016_PAS_TOP_15_006.cc b/analyses/pluginCMS/CMS_2016_PAS_TOP_15_006.cc --- a/analyses/pluginCMS/CMS_2016_PAS_TOP_15_006.cc +++ b/analyses/pluginCMS/CMS_2016_PAS_TOP_15_006.cc @@ -1,176 +1,174 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/DressedLeptons.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" #include "Rivet/Projections/VetoedFinalState.hh" namespace Rivet { /// Jet multiplicity in lepton+jets ttbar at 8 TeV class CMS_2016_PAS_TOP_15_006 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(CMS_2016_PAS_TOP_15_006); /// @name Analysis methods //@{ /// Set up projections and book histograms void init() { // Complete final state FinalState fs; Cut superLooseLeptonCuts = Cuts::pt > 5*GeV; SpecialDressedLeptons dressedleptons(fs, superLooseLeptonCuts); 
declare(dressedleptons, "DressedLeptons"); // Projection for jets VetoedFinalState fsForJets(fs); fsForJets.addVetoOnThisFinalState(dressedleptons); declare(FastJets(fsForJets, FastJets::ANTIKT, 0.5), "Jets"); // Booking of histograms - book(_normedElectronMuonHisto, "normedElectronMuonHisto", 7, 3.5, 10.5, - "Normalized differential cross section in lepton+jets channel", "Jet multiplicity", "Normed units"); - book(_absXSElectronMuonHisto , "absXSElectronMuonHisto", 7, 3.5, 10.5, - "Differential cross section in lepton+jets channel", "Jet multiplicity", "pb"); + book(_normedElectronMuonHisto, "normedElectronMuonHisto", 7, 3.5, 10.5); + book(_absXSElectronMuonHisto , "absXSElectronMuonHisto", 7, 3.5, 10.5); } /// Per-event analysis void analyze(const Event& event) { // Select ttbar -> lepton+jets const SpecialDressedLeptons& dressedleptons = applyProjection(event, "DressedLeptons"); vector selleptons; for (const DressedLepton& dressedlepton : dressedleptons.dressedLeptons()) { // Select good leptons if (dressedlepton.pT() > 30*GeV && dressedlepton.abseta() < 2.4) selleptons += dressedlepton.mom(); // Veto loose leptons else if (dressedlepton.pT() > 15*GeV && dressedlepton.abseta() < 2.5) vetoEvent; } if (selleptons.size() != 1) vetoEvent; // Identify hardest tight lepton const FourMomentum lepton = selleptons[0]; // Jets const FastJets& jets = applyProjection(event, "Jets"); const Jets jets30 = jets.jetsByPt(30*GeV); int nJets = 0, nBJets = 0; for (const Jet& jet : jets30) { if (jet.abseta() > 2.5) continue; if (deltaR(jet.momentum(), lepton) < 0.5) continue; nJets += 1; if (jet.bTagged(Cuts::pT > 5*GeV)) nBJets += 1; } // Require >= 4 resolved jets, of which two must be b-tagged if (nJets < 4 || nBJets < 2) vetoEvent; // Fill histograms _normedElectronMuonHisto->fill(min(nJets, 10)); _absXSElectronMuonHisto ->fill(min(nJets, 10)); } void finalize() { const double ttbarXS = !std::isnan(crossSectionPerEvent()) ? 
crossSection() : 252.89*picobarn; if (std::isnan(crossSectionPerEvent())) MSG_INFO("No valid cross-section given, using NNLO (arXiv:1303.6254; sqrt(s)=8 TeV, m_t=172.5 GeV): " << ttbarXS/picobarn << " pb"); const double xsPerWeight = ttbarXS/picobarn / sumOfWeights(); scale(_absXSElectronMuonHisto, xsPerWeight); normalize(_normedElectronMuonHisto); } //@} /// @brief Special dressed lepton finder /// /// Find dressed leptons by clustering all leptons and photons class SpecialDressedLeptons : public FinalState { public: /// Constructor SpecialDressedLeptons(const FinalState& fs, const Cut& cut) : FinalState(cut) { setName("SpecialDressedLeptons"); IdentifiedFinalState ifs(fs); ifs.acceptIdPair(PID::PHOTON); ifs.acceptIdPair(PID::ELECTRON); ifs.acceptIdPair(PID::MUON); declare(ifs, "IFS"); declare(FastJets(ifs, FastJets::ANTIKT, 0.1), "LeptonJets"); } /// Clone on the heap virtual unique_ptr clone() const { return unique_ptr(new SpecialDressedLeptons(*this)); } /// Retrieve the dressed leptons const vector& dressedLeptons() const { return _clusteredLeptons; } /// Perform the calculation void project(const Event& e) { _theParticles.clear(); _clusteredLeptons.clear(); vector allClusteredLeptons; const Jets jets = applyProjection(e, "LeptonJets").jetsByPt(5*GeV); for (const Jet& jet : jets) { Particle lepCand; for (const Particle& cand : jet.particles()) { const int absPdgId = cand.abspid(); if (absPdgId == PID::ELECTRON || absPdgId == PID::MUON) { if (cand.pt() > lepCand.pt()) lepCand = cand; } } // Central lepton must be the major component if ((lepCand.pt() < jet.pt()/2.) 
|| (lepCand.pid() == 0)) continue; DressedLepton lepton = DressedLepton(lepCand); for (const Particle& cand : jet.particles()) { if (isSame(cand, lepCand)) continue; lepton.addConstituent(cand, true); } allClusteredLeptons.push_back(lepton); } for (const DressedLepton& lepton : allClusteredLeptons) { if (accept(lepton)) { _clusteredLeptons.push_back(lepton); _theParticles.push_back(lepton.constituentLepton()); _theParticles += lepton.constituentPhotons(); } } } private: /// Container which stores the clustered lepton objects vector _clusteredLeptons; }; private: /// Histograms Histo1DPtr _normedElectronMuonHisto, _absXSElectronMuonHisto; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(CMS_2016_PAS_TOP_15_006); } diff --git a/analyses/pluginCMS/CMS_2018_I1682495.cc b/analyses/pluginCMS/CMS_2018_I1682495.cc --- a/analyses/pluginCMS/CMS_2018_I1682495.cc +++ b/analyses/pluginCMS/CMS_2018_I1682495.cc @@ -1,156 +1,157 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/FastJets.hh" -#include "Rivet/Tools/fjcontrib/SoftDrop.hh" + +#include "fastjet/contrib/SoftDrop.hh" namespace Rivet { // Soft-drop and ungroomed jet mass measurement class CMS_2018_I1682495 : public Analysis { public: /// @name Constructors etc. 
//@{ /// Constructor CMS_2018_I1682495() : Analysis("CMS_2018_I1682495"), _softdrop(fjcontrib::SoftDrop(0, 0.1, 0.8) ) // parameters are beta, zcut, R0 { } //@} /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { // define a projection that keeps all the particles up to |eta|=5 const FinalState fs(Cuts::abseta < 5.); // use FastJet, anti-kt(R=0.8) to do the clustering declare(FastJets(fs, FastJets::ANTIKT, 0.8), "JetsAK8"); // Histograms for (size_t i = 0; i < N_PT_BINS_dj; ++i ) { book(_h_ungroomedJetMass_dj[i][0], i+1+0*N_PT_BINS_dj, 1, 1); // Ungroomed mass, absolute book(_h_sdJetMass_dj[i][0], i+1+1*N_PT_BINS_dj, 1, 1); // Groomed mass, absolute book(_h_ungroomedJetMass_dj[i][1], i+1+2*N_PT_BINS_dj, 1, 1); // Ungroomed mass, normalized book(_h_sdJetMass_dj[i][1], i+1+3*N_PT_BINS_dj, 1, 1); // Groomed mass, normalized } } // Find the pT histogram bin index for value pt (in GeV), to hack a 2D histogram equivalent /// @todo Use a YODA axis/finder alg when available size_t findPtBin(double ptJ) { for (size_t ibin = 0; ibin < N_PT_BINS_dj; ++ibin) { if (inRange(ptJ, ptBins_dj[ibin], ptBins_dj[ibin+1])) return ibin; } return N_PT_BINS_dj; } /// Perform the per-event analysis void analyze(const Event& event) { // Look at events with >= 2 jets auto jetsAK8 = applyProjection(event, "JetsAK8").jetsByPt(Cuts::pT > 200*GeV and Cuts::abseta < 2.4); if (jetsAK8.size() < 2) vetoEvent; // Get the leading two jets const fastjet::PseudoJet& j0 = jetsAK8[0].pseudojet(); const fastjet::PseudoJet& j1 = jetsAK8[1].pseudojet(); // Calculate delta phi and the pt asymmetry double deltaPhi = Rivet::deltaPhi( j0.phi(), j1.phi() ); double ptasym = (j0.pt() - j1.pt()) / (j0.pt() + j1.pt()); if (deltaPhi < 2.0 ) vetoEvent; if (ptasym > 0.3) vetoEvent; // Find the appropriate pT bins and fill the histogram const size_t njetBin0 = findPtBin(j0.pt()/GeV); const size_t njetBin1 = findPtBin(j1.pt()/GeV); if (njetBin0 < N_PT_BINS_dj && 
njetBin1 < N_PT_BINS_dj) { for ( size_t jbin = 0; jbin < N_CATEGORIES; jbin++ ){ _h_ungroomedJetMass_dj[njetBin0][jbin]->fill(j0.m()/GeV); _h_ungroomedJetMass_dj[njetBin1][jbin]->fill(j1.m()/GeV); } } // Now run the substructure algs... fastjet::PseudoJet sd0 = _softdrop(j0); fastjet::PseudoJet sd1 = _softdrop(j1); // ... and repeat if (njetBin0 < N_PT_BINS_dj && njetBin1 < N_PT_BINS_dj) { for ( size_t jbin = 0; jbin < N_CATEGORIES; jbin++ ){ _h_sdJetMass_dj[njetBin0][jbin]->fill(sd0.m()/GeV); _h_sdJetMass_dj[njetBin1][jbin]->fill(sd1.m()/GeV); } } } /// Normalise histograms etc., after the run void finalize() { // Normalize the normalized cross section histograms to unity, for (size_t i = 0; i < N_PT_BINS_dj; ++i) { normalize(_h_ungroomedJetMass_dj[i][1]); normalize(_h_sdJetMass_dj[i][1]); } // Normalize the absolute cross section histograms to xs * lumi. for (size_t i = 0; i < N_PT_BINS_dj; ++i) { scale(_h_ungroomedJetMass_dj[i][0], crossSection()/picobarn / sumOfWeights() / (ptBins_dj[i+1]-ptBins_dj[i]) ); scale(_h_sdJetMass_dj[i][0], crossSection()/picobarn / sumOfWeights() / (ptBins_dj[i+1]-ptBins_dj[i]) ); } } //@} private: /// @name FastJet grooming tools (configured in constructor init list) //@{ const fjcontrib::SoftDrop _softdrop; //@} /// @name Histograms //@{ enum { PT_200_260_dj=0, PT_260_350_dj, PT_350_460_dj, PT_460_550_dj, PT_550_650_dj, PT_650_760_dj, PT_760_900_dj, PT_900_1000_dj, PT_1000_1100_dj, PT_1100_1200_dj, PT_1200_1300_dj, PT_1300_Inf_dj, N_PT_BINS_dj }; static const int N_CATEGORIES=2; const double ptBins_dj[N_PT_BINS_dj+1]= { 200., 260., 350., 460., 550., 650., 760., 900., 1000., 1100., 1200., 1300., 13000.}; Histo1DPtr _h_ungroomedJet0pt, _h_ungroomedJet1pt; Histo1DPtr _h_sdJet0pt, _h_sdJet1pt; // Here, store both the absolute (index 0) and normalized (index 1) cross sections. 
Histo1DPtr _h_ungroomedJetMass_dj[N_PT_BINS_dj][2]; Histo1DPtr _h_sdJetMass_dj[N_PT_BINS_dj][2]; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(CMS_2018_I1682495); } diff --git a/analyses/pluginCMS/CMS_2018_I1690148.cc b/analyses/pluginCMS/CMS_2018_I1690148.cc --- a/analyses/pluginCMS/CMS_2018_I1690148.cc +++ b/analyses/pluginCMS/CMS_2018_I1690148.cc @@ -1,452 +1,460 @@ #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/ChargedLeptons.hh" #include "Rivet/Projections/DressedLeptons.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" #include "Rivet/Projections/PromptFinalState.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/InvMassFinalState.hh" #include "Rivet/Projections/MissingMomentum.hh" #include "Rivet/Math/MatrixN.hh" -#include "Rivet/Tools/fjcontrib/Nsubjettiness.hh" -#include "Rivet/Tools/fjcontrib/EnergyCorrelator.hh" + +#include "fastjet/contrib/Nsubjettiness.hh" +#include "fastjet/contrib/EnergyCorrelator.hh" namespace Rivet { /// Measurement of jet substructure observables in $t\bar{t}$ events from $pp$ collisions at 13~TeV class CMS_2018_I1690148 : public Analysis { public: enum Reconstruction { CHARGED=0, ALL=1 }; - enum Observable { MULT=0, PTDS=1, GA_LHA=2, GA_WIDTH=3, GA_THRUST=4, ECC=5, ZG=6, ZGDR=7, NSD=8, TAU21=9, TAU32=10, TAU43=11, C1_00=12, C1_02=13, C1_05=14, C1_10=15, C1_20=16, C2_00=17, C2_02=18, C2_05=19, C2_10=20, C2_20=21, C3_00=22, C3_02=23, C3_05=24, C3_10=25, C3_20=26, M2_B1=27, N2_B1=28, N3_B1=29, M2_B2=30, N2_B2=31, N3_B2=32 }; + enum Observable { MULT=0, PTDS=1, GA_LHA=2, GA_WIDTH=3, + GA_THRUST=4, ECC=5, ZG=6, ZGDR=7, NSD=8, + TAU21=9, TAU32=10, TAU43=11, C1_00=12, + C1_02=13, C1_05=14, C1_10=15, C1_20=16, + C2_00=17, C2_02=18, C2_05=19, C2_10=20, + C2_20=21, C3_00=22, C3_02=23, C3_05=24, + C3_10=25, C3_20=26, M2_B1=27, N2_B1=28, + N3_B1=29, M2_B2=30, N2_B2=31, N3_B2=32 }; enum Flavor { 
INCL=0, BOTTOM=1, QUARK=2, GLUON=3 }; /// Minimal constructor DEFAULT_RIVET_ANALYSIS_CTOR(CMS_2018_I1690148); /// @name Analysis methods //@{ /// Set up projections and book histograms void init() { // Cuts particle_cut = (Cuts::abseta < 5.0 && Cuts::pT > 0.*GeV); lepton_cut = (Cuts::abseta < 2.4 && Cuts::pT > 15.*GeV); jet_cut = (Cuts::abseta < 2.4 && Cuts::pT > 30.*GeV); // Generic final state FinalState fs(particle_cut); // Dressed leptons ChargedLeptons charged_leptons(fs); IdentifiedFinalState photons(fs); photons.acceptIdPair(PID::PHOTON); PromptFinalState prompt_leptons(charged_leptons); prompt_leptons.acceptMuonDecays(true); prompt_leptons.acceptTauDecays(true); PromptFinalState prompt_photons(photons); prompt_photons.acceptMuonDecays(true); prompt_photons.acceptTauDecays(true); // NB. useDecayPhotons=true allows for photons with tau ancestor; photons from hadrons are vetoed by the PromptFinalState; DressedLeptons dressed_leptons(prompt_photons, prompt_leptons, 0.1, lepton_cut, true); declare(dressed_leptons, "DressedLeptons"); // Projection for jets VetoedFinalState fsForJets(fs); fsForJets.addVetoOnThisFinalState(dressed_leptons); declare(FastJets(fsForJets, FastJets::ANTIKT, 0.4, JetAlg::Muons::ALL, JetAlg::Invisibles::NONE), "Jets"); // Booking of histograms int d = 0; for (int r = 0; r < 2; ++r) { // reconstruction (charged, all) for (int o = 0; o < 33; ++o) { // observable d += 1; for (int f = 0; f < 4; ++f) { // flavor char buffer [20]; sprintf(buffer, "d%02d-x01-y%02d", d, f+1); book(_h[r][o][f], buffer); } } } } void analyze(const Event& event) { // select ttbar -> lepton+jets const vector& leptons = applyProjection(event, "DressedLeptons").dressedLeptons(); int nsel_leptons = 0; for (const DressedLepton& lepton : leptons) { if (lepton.pt() > 26.) 
nsel_leptons += 1; else vetoEvent; // found veto lepton } if (nsel_leptons != 1) vetoEvent; const Jets all_jets = applyProjection(event, "Jets").jetsByPt(jet_cut); if (all_jets.size() < 4) vetoEvent; // categorize jets int nsel_bjets = 0; int nsel_wjets = 0; Jets jets[4]; for (const Jet& jet : all_jets) { // check for jet-lepton overlap -> do not consider for selection if (deltaR(jet, leptons[0]) < 0.4) continue; bool overlap = false; bool w_jet = false; for (const Jet& jet2 : all_jets) { if (jet.momentum() == jet2.momentum()) continue; // check for jet-jet overlap -> do not consider for analysis if (deltaR(jet, jet2) < 0.8) overlap = true; // check for W candidate if (jet.bTagged() or jet2.bTagged()) continue; FourMomentum w_cand = jet.momentum() + jet2.momentum(); if (abs(w_cand.mass() - 80.4) < 15.) w_jet = true; } // count jets for event selection if (jet.bTagged()) nsel_bjets += 1; if (w_jet) nsel_wjets += 1; // jets for analysis if (jet.abseta() > 2. or overlap) continue; if (jet.bTagged()) { jets[BOTTOM].push_back(jet); } else if (w_jet) { jets[QUARK].push_back(jet); } else { jets[GLUON].push_back(jet); } } if (nsel_bjets != 2) vetoEvent; if (nsel_wjets < 2) vetoEvent; // substructure analysis // no loop over incl jets -> more loc but faster for (int f = 1; f < 4; ++f) { for (const Jet& jet : jets[f]) { // apply cuts on constituents vector particles[2]; for (const Particle& p : jet.particles(Cuts::pT > 1.*GeV)) { particles[ALL].push_back( PseudoJet(p.px(), p.py(), p.pz(), p.energy()) ); if (p.charge3() != 0) particles[CHARGED].push_back( PseudoJet(p.px(), p.py(), p.pz(), p.energy()) ); } if (particles[CHARGED].size() == 0) continue; // recluster with C/A and anti-kt+WTA PseudoJet ca_jet[2]; JetDefinition ca_def(fastjet::cambridge_algorithm, fastjet::JetDefinition::max_allowable_R); ClusterSequence ca_charged(particles[CHARGED], ca_def); ClusterSequence ca_all(particles[ALL], ca_def); ca_jet[CHARGED] = ca_charged.exclusive_jets(1)[0]; ca_jet[ALL] = 
ca_all.exclusive_jets(1)[0]; PseudoJet akwta_jet[2]; JetDefinition akwta_def(fastjet::antikt_algorithm, fastjet::JetDefinition::max_allowable_R, fastjet::RecombinationScheme::WTA_pt_scheme); ClusterSequence akwta_charged(particles[CHARGED], akwta_def); ClusterSequence akwta_all(particles[ALL], akwta_def); akwta_jet[CHARGED] = akwta_charged.exclusive_jets(1)[0]; akwta_jet[ALL] = akwta_all.exclusive_jets(1)[0]; // calculate observables for (int r = 0; r < 2; ++r) { int mult = akwta_jet[r].constituents().size(); // generalized angularities _h[r][MULT][INCL]->fill(mult); _h[r][MULT][f]->fill(mult); if (mult > 1) { double ptds = getPtDs(akwta_jet[r]); double ga_lha = calcGA(0.5, 1., akwta_jet[r]); double ga_width = calcGA(1., 1., akwta_jet[r]); double ga_thrust = calcGA(2., 1., akwta_jet[r]); _h[r][PTDS][INCL]->fill(ptds); _h[r][PTDS][f]->fill(ptds); _h[r][GA_LHA][INCL]->fill(ga_lha); _h[r][GA_LHA][f]->fill(ga_lha); _h[r][GA_WIDTH][INCL]->fill(ga_width); _h[r][GA_WIDTH][f]->fill(ga_width); _h[r][GA_THRUST][INCL]->fill(ga_thrust); _h[r][GA_THRUST][f]->fill(ga_thrust); } // eccentricity if (mult > 3) { double ecc = getEcc(akwta_jet[r]); _h[r][ECC][INCL]->fill(ecc); _h[r][ECC][f]->fill(ecc); } // N-subjettiness if (mult > 2) { double tau21 = getTau(2, 1, ca_jet[r]); _h[r][TAU21][INCL]->fill(tau21); _h[r][TAU21][f]->fill(tau21); } if (mult > 3) { double tau32 = getTau(3, 2, ca_jet[r]); _h[r][TAU32][INCL]->fill(tau32); _h[r][TAU32][f]->fill(tau32); } if (mult > 4) { double tau43 = getTau(4, 3, ca_jet[r]); _h[r][TAU43][INCL]->fill(tau43); _h[r][TAU43][f]->fill(tau43); } // soft drop if (mult > 1) { vector sd_results = getZg(ca_jet[r]); if (sd_results[0] > 0.) 
{ _h[r][ZG][INCL]->fill(sd_results[0]); _h[r][ZG][f]->fill(sd_results[0]); _h[r][ZGDR][INCL]->fill(sd_results[1]); _h[r][ZGDR][f]->fill(sd_results[1]); } } int nsd = getNSD(0.007, -1., ca_jet[r]); _h[r][NSD][INCL]->fill(nsd); _h[r][NSD][f]->fill(nsd); // C-series energy correlation ratios if (mult > 1) { double cn_00 = getC(1, 0.0, ca_jet[r]); double cn_02 = getC(1, 0.2, ca_jet[r]); double cn_05 = getC(1, 0.5, ca_jet[r]); double cn_10 = getC(1, 1.0, ca_jet[r]); double cn_20 = getC(1, 2.0, ca_jet[r]); _h[r][C1_00][INCL]->fill(cn_00); _h[r][C1_02][INCL]->fill(cn_02); _h[r][C1_05][INCL]->fill(cn_05); _h[r][C1_10][INCL]->fill(cn_10); _h[r][C1_20][INCL]->fill(cn_20); _h[r][C1_00][f]->fill(cn_00); _h[r][C1_02][f]->fill(cn_02); _h[r][C1_05][f]->fill(cn_05); _h[r][C1_10][f]->fill(cn_10); _h[r][C1_20][f]->fill(cn_20); } if (mult > 2) { double cn_00 = getC(2, 0.0, ca_jet[r]); double cn_02 = getC(2, 0.2, ca_jet[r]); double cn_05 = getC(2, 0.5, ca_jet[r]); double cn_10 = getC(2, 1.0, ca_jet[r]); double cn_20 = getC(2, 2.0, ca_jet[r]); _h[r][C2_00][INCL]->fill(cn_00); _h[r][C2_02][INCL]->fill(cn_02); _h[r][C2_05][INCL]->fill(cn_05); _h[r][C2_10][INCL]->fill(cn_10); _h[r][C2_20][INCL]->fill(cn_20); _h[r][C2_00][f]->fill(cn_00); _h[r][C2_02][f]->fill(cn_02); _h[r][C2_05][f]->fill(cn_05); _h[r][C2_10][f]->fill(cn_10); _h[r][C2_20][f]->fill(cn_20); } if (mult > 3) { double cn_00 = getC(3, 0.0, ca_jet[r]); double cn_02 = getC(3, 0.2, ca_jet[r]); double cn_05 = getC(3, 0.5, ca_jet[r]); double cn_10 = getC(3, 1.0, ca_jet[r]); double cn_20 = getC(3, 2.0, ca_jet[r]); _h[r][C3_00][INCL]->fill(cn_00); _h[r][C3_02][INCL]->fill(cn_02); _h[r][C3_05][INCL]->fill(cn_05); _h[r][C3_10][INCL]->fill(cn_10); _h[r][C3_20][INCL]->fill(cn_20); _h[r][C3_00][f]->fill(cn_00); _h[r][C3_02][f]->fill(cn_02); _h[r][C3_05][f]->fill(cn_05); _h[r][C3_10][f]->fill(cn_10); _h[r][C3_20][f]->fill(cn_20); } // M/N-series energy correlation ratios if (mult > 2) { double m2_b1 = getM(2, 1., ca_jet[r]); double m2_b2 = 
getM(2, 2., ca_jet[r]); double n2_b1 = getN(2, 1., ca_jet[r]); double n2_b2 = getN(2, 2., ca_jet[r]); _h[r][M2_B1][INCL]->fill(m2_b1); _h[r][M2_B2][INCL]->fill(m2_b2); _h[r][N2_B1][INCL]->fill(n2_b1); _h[r][N2_B2][INCL]->fill(n2_b2); _h[r][M2_B1][f]->fill(m2_b1); _h[r][M2_B2][f]->fill(m2_b2); _h[r][N2_B1][f]->fill(n2_b1); _h[r][N2_B2][f]->fill(n2_b2); } if (mult > 3) { double n3_b1 = getN(3, 1., ca_jet[r]); double n3_b2 = getN(3, 2., ca_jet[r]); _h[r][N3_B1][INCL]->fill(n3_b1); _h[r][N3_B2][INCL]->fill(n3_b2); _h[r][N3_B1][f]->fill(n3_b1); _h[r][N3_B2][f]->fill(n3_b2); } } } } } void finalize() { for (int r = 0; r < 2; ++r) { // reconstruction (charged, all) for (int o = 0; o < 33; ++o) { // observable for (int f = 0; f < 4; ++f) { // flavor normalize(_h[r][o][f], 1.0, false); } } } } //@} private: double deltaR(PseudoJet j1, PseudoJet j2) { double deta = j1.eta() - j2.eta(); double dphi = j1.delta_phi_to(j2); return sqrt(deta*deta + dphi*dphi); } double getPtDs(PseudoJet jet) { double mult = jet.constituents().size(); double sumpt = 0.; // would be jet.pt() in WTA scheme but better keep it generic double sumpt2 = 0.; for (auto p : jet.constituents()) { sumpt += p.pt(); sumpt2 += pow(p.pt(), 2); } double ptd = sumpt2/pow(sumpt,2); return max(0., sqrt((ptd-1./mult) * mult/(mult-1.))); } double calcGA(double beta, double kappa, PseudoJet jet) { double sumpt = 0.; for (const auto& p : jet.constituents()) { sumpt += p.pt(); } double ga = 0.; for (auto p : jet.constituents()) { ga += pow(p.pt()/sumpt, kappa) * pow(deltaR(jet, p)/0.4, beta); } return ga; } double getEcc(PseudoJet jet) { // Covariance matrix Matrix<2> M; for (const auto& p : jet.constituents()) { Matrix<2> MPart; MPart.set(0, 0, (p.eta() - jet.eta()) * (p.eta() - jet.eta())); MPart.set(0, 1, (p.eta() - jet.eta()) * mapAngleMPiToPi(p.phi() - jet.phi())); MPart.set(1, 0, mapAngleMPiToPi(p.phi() - jet.phi()) * (p.eta() - jet.eta())); MPart.set(1, 1, mapAngleMPiToPi(p.phi() - jet.phi()) * 
mapAngleMPiToPi(p.phi() - jet.phi()));
    M += MPart * p.e();
  }

  // Calculate eccentricity from eigenvalues
  // Check that the matrix is symmetric
  const bool isSymm = M.isSymm();
  if (!isSymm) {
    MSG_ERROR("Error: energy tensor not symmetric:");
    MSG_ERROR("[0,1] vs. [1,0]: " << M.get(0,1) << ", " << M.get(1,0));
  }
  // If not symmetric, something's wrong (we made sure the error msg appeared first)
  assert(isSymm);

  const double a = M.get(0,0);
  const double b = M.get(1,1);
  const double c = M.get(1,0);
  const double l1 = 0.5*(a + b + sqrt((a-b)*(a-b) + 4*c*c));
  const double l2 = 0.5*(a + b - sqrt((a-b)*(a-b) + 4*c*c));
  return 1. - l2/l1;
}

double getTau(int N, int M, PseudoJet jet) {
-  fjcontrib::Nsubjettiness::NsubjettinessRatio tau_ratio(N, M, fjcontrib::Nsubjettiness::OnePass_WTA_KT_Axes(),
-                                                         fjcontrib::Nsubjettiness::NormalizedMeasure(1.0, 0.4));
+  fjcontrib::NsubjettinessRatio tau_ratio(N, M, fjcontrib::OnePass_WTA_KT_Axes(),
+                                          fjcontrib::NormalizedMeasure(1.0, 0.4));
  return tau_ratio(jet);
}

vector<double> getZg(PseudoJet jet) {
  PseudoJet jet0 = jet;
  PseudoJet jet1, jet2;
  double zg = 0.;
  while (zg < 0.1 && jet0.has_parents(jet1, jet2)) {
    zg = jet2.pt()/jet0.pt();
    jet0 = jet1;
  }
  if (zg < 0.1)  return {-1., -1.};
  return {zg, jet1.delta_R(jet2)};
}

int getNSD(double zcut, double beta, PseudoJet jet) {
  PseudoJet jet0 = jet;
  PseudoJet jet1, jet2;
  int nsd = 0;
  double zg = 0.;
  while (jet0.has_parents(jet1, jet2)) {
    zg = jet2.pt()/jet0.pt();
    if (zg > zcut * pow(jet1.delta_R(jet2)/0.4, beta))  nsd += 1;
    jet0 = jet1;
  }
  return nsd;
}

double getC(int N, double beta, PseudoJet jet) {
  fjcontrib::EnergyCorrelatorDoubleRatio C(N, beta);
  return C(jet);
}

double getM(int N, double beta, PseudoJet jet) {
  fjcontrib::EnergyCorrelatorMseries CM(N, beta);
  return CM(jet);
}

double getN(int N, double beta, PseudoJet jet) {
  fjcontrib::EnergyCorrelatorNseries CN(N, beta);
  return CN(jet);
}


private:

// @name Histogram data members
//@{
Cut particle_cut, lepton_cut, jet_cut;
Histo1DPtr _h[2][33][4];
//@}

};


//
The hook for the plugin system
DECLARE_RIVET_PLUGIN(CMS_2018_I1690148);

}
diff --git a/analyses/pluginHERA/H1_2000_S4129130.cc b/analyses/pluginHERA/H1_2000_S4129130.cc
--- a/analyses/pluginHERA/H1_2000_S4129130.cc
+++ b/analyses/pluginHERA/H1_2000_S4129130.cc
@@ -1,258 +1,257 @@
// -*- C++ -*-
#include "Rivet/Analysis.hh"
#include "Rivet/Math/Constants.hh"
#include "Rivet/Projections/FinalState.hh"
#include "Rivet/Projections/DISKinematics.hh"

namespace Rivet {

/// @brief H1 energy flow and charged particle spectra
///
/// @author Peter Richardson
///
/// Based on the HZTOOL analysis HZ99091
class H1_2000_S4129130 : public Analysis {
public:

  /// Constructor
  H1_2000_S4129130() : Analysis("H1_2000_S4129130") { }

  /// @name Analysis methods
  //@{

  /// Initialise projections and histograms
  void init() {
    // Projections
    declare(DISLepton(), "Lepton");
    declare(DISKinematics(), "Kinematics");
    declare(FinalState(), "FS");

-    // Histos
-    Histo1DPtr h;
-
    // Histograms and weight vectors for low Q^2 a
+    _histETLowQa.resize(17);
    for (size_t ix = 0; ix < 17; ++ix) {
-      book(h, ix+1, 1, 1);
-      _histETLowQa.push_back(h);
-      book(_weightETLowQa[ix], "TMP/ETLowQa");
+      book(_histETLowQa[ix], ix+1, 1, 1);
+      book(_weightETLowQa[ix], "TMP/ETLowQa" + to_string(ix));
    }

    // Histograms and weight vectors for high Q^2 a
+    _histETHighQa.resize(7);
    for (size_t ix = 0; ix < 7; ++ix) {
-      book(h, ix+18, 1, 1);
-      _histETHighQa.push_back(h);
-      book(_weightETHighQa[ix], "TMP/ETHighQa");
+      book(_histETHighQa[ix], ix+18, 1, 1);
+      book(_weightETHighQa[ix], "TMP/ETHighQa" + to_string(ix));
    }

    // Histograms and weight vectors for low Q^2 b
+    _histETLowQb.resize(5);
    for (size_t ix = 0; ix < 5; ++ix) {
-      book(h, ix+25, 1, 1);
-      _histETLowQb.push_back(h);
-      book(_weightETLowQb[ix], "TMP/ETLowQb");
+      book(_histETLowQb[ix], ix+25, 1, 1);
+      book(_weightETLowQb[ix], "TMP/ETLowQb" + to_string(ix));
    }

    // Histograms and weight vectors for high Q^2 b
+    _histETHighQb.resize(3);
    for (size_t ix = 0; ix < 3; ++ix) {
-      book(h
,30+ix, 1, 1);
-      _histETHighQb.push_back(h);
-      book(_weightETHighQb[ix], "TMP/ETHighQb");
+      book(_histETHighQb[ix], 30+ix, 1, 1);
+      book(_weightETHighQb[ix], "TMP/ETHighQb" + to_string(ix));
    }

    // Histograms for the averages
    book(_histAverETCentral, 33, 1, 1);
    book(_histAverETFrag, 34, 1, 1);
  }

  /// Analyze each event
  void analyze(const Event& event) {
    // DIS kinematics
    const DISKinematics& dk = apply<DISKinematics>(event, "Kinematics");
+    if ( dk.failed() ) vetoEvent;
    double q2 = dk.Q2();
    double x = dk.x();
    double y = dk.y();
    double w2 = dk.W2();

    // Kinematics of the scattered lepton
    const DISLepton& dl = apply<DISLepton>(event, "Lepton");
+    if ( dl.failed() ) vetoEvent;
    const FourMomentum leptonMom = dl.out();
    const double enel = leptonMom.E();
    const double thel = 180 - leptonMom.angle(dl.in().mom())/degree;

    // Extract the particles other than the lepton
    const FinalState& fs = apply<FinalState>(event, "FS");
    Particles particles;
    particles.reserve(fs.size());
    ConstGenParticlePtr dislepGP = dl.out().genParticle();
    ///< @todo Is the GenParticle stuff necessary? (Not included in Particle::==?)
    for (const Particle& p : fs.particles()) {
      ConstGenParticlePtr loopGP = p.genParticle();
      if (loopGP == dislepGP)  continue;
      particles.push_back(p);
    }

    // Cut on the forward energy
    double efwd = 0.;
    for (const Particle& p : particles) {
      const double th = 180 - p.angle(dl.in())/degree;
      if (inRange(th, 4.4, 15.0))  efwd += p.E();
    }

    // There are four possible selections for events
    bool evcut[4];
    // Low Q2 selection a
    evcut[0] = enel/GeV > 12. && w2 >= 4400.*GeV2 && efwd/GeV > 0.5 && inRange(thel, 157., 176.);
    // Low Q2 selection b
    evcut[1] = enel/GeV > 12. && inRange(y, 0.3, 0.5);
    // High Q2 selection a
    evcut[2] = inRange(thel, 12., 150.) && inRange(y, 0.05, 0.6) && w2 >= 4400.*GeV2 && efwd > 0.5;
    // High Q2 selection b
    evcut[3] = inRange(thel, 12., 150.) && inRange(y, 0.05, 0.6) && inRange(w2, 27110.*GeV2, 45182.*GeV2);

    // Veto if fails all cuts
    /// @todo Can we use all()?
    if (!
(evcut[0] || evcut[1] || evcut[2] || evcut[3]) ) vetoEvent; // Find the bins int bin[4] = {-1,-1,-1,-1}; // For the low Q2 selection a) if (q2 > 2.5*GeV2 && q2 <= 5.*GeV2) { if (x > 0.00005 && x <= 0.0001 ) bin[0] = 0; if (x > 0.0001 && x <= 0.0002 ) bin[0] = 1; if (x > 0.0002 && x <= 0.00035) bin[0] = 2; if (x > 0.00035 && x <= 0.0010 ) bin[0] = 3; } else if (q2 > 5.*GeV2 && q2 <= 10.*GeV2) { if (x > 0.0001 && x <= 0.0002 ) bin[0] = 4; if (x > 0.0002 && x <= 0.00035) bin[0] = 5; if (x > 0.00035 && x <= 0.0007 ) bin[0] = 6; if (x > 0.0007 && x <= 0.0020 ) bin[0] = 7; } else if (q2 > 10.*GeV2 && q2 <= 20.*GeV2) { if (x > 0.0002 && x <= 0.0005) bin[0] = 8; if (x > 0.0005 && x <= 0.0008) bin[0] = 9; if (x > 0.0008 && x <= 0.0015) bin[0] = 10; if (x > 0.0015 && x <= 0.040 ) bin[0] = 11; } else if (q2 > 20.*GeV2 && q2 <= 50.*GeV2) { if (x > 0.0005 && x <= 0.0014) bin[0] = 12; if (x > 0.0014 && x <= 0.0030) bin[0] = 13; if (x > 0.0030 && x <= 0.0100) bin[0] = 14; } else if (q2 > 50.*GeV2 && q2 <= 100.*GeV2) { if (x >0.0008 && x <= 0.0030) bin[0] = 15; if (x >0.0030 && x <= 0.0200) bin[0] = 16; } // check in one of the bins evcut[0] &= bin[0] >= 0; // For the low Q2 selection b) if (q2 > 2.5*GeV2 && q2 <= 5. *GeV2) bin[1] = 0; if (q2 > 5. *GeV2 && q2 <= 10. *GeV2) bin[1] = 1; if (q2 > 10.*GeV2 && q2 <= 20. *GeV2) bin[1] = 2; if (q2 > 20.*GeV2 && q2 <= 50. *GeV2) bin[1] = 3; if (q2 > 50.*GeV2 && q2 <= 100.*GeV2) bin[1] = 4; // check in one of the bins evcut[1] &= bin[1] >= 0; // for the high Q2 selection a) if (q2 > 100.*GeV2 && q2 <= 400.*GeV2) { if (x > 0.00251 && x <= 0.00631) bin[2] = 0; if (x > 0.00631 && x <= 0.0158 ) bin[2] = 1; if (x > 0.0158 && x <= 0.0398 ) bin[2] = 2; } else if (q2 > 400.*GeV2 && q2 <= 1100.*GeV2) { if (x > 0.00631 && x <= 0.0158 ) bin[2] = 3; if (x > 0.0158 && x <= 0.0398 ) bin[2] = 4; if (x > 0.0398 && x <= 1. ) bin[2] = 5; } else if (q2 > 1100.*GeV2 && q2 <= 100000.*GeV2) { if (x > 0. && x <= 1.) 
bin[2] = 6; } // check in one of the bins evcut[2] &= bin[2] >= 0; // for the high Q2 selection b) if (q2 > 100.*GeV2 && q2 <= 220.*GeV2) bin[3] = 0; else if (q2 > 220.*GeV2 && q2 <= 400.*GeV2) bin[3] = 1; else if (q2 > 400.*GeV2) bin[3] = 2; // check in one of the bins evcut[3] &= bin[3] >= 0; // Veto if fails all cuts after bin selection /// @todo Can we use all()? if (! (evcut[0] || evcut[1] || evcut[2] || evcut[3])) vetoEvent; // Increment the count for normalisation if (evcut[0]) _weightETLowQa [bin[0]]->fill(); if (evcut[1]) _weightETLowQb [bin[1]]->fill(); if (evcut[2]) _weightETHighQa[bin[2]]->fill(); if (evcut[3]) _weightETHighQb[bin[3]]->fill(); // Boost to hadronic CoM const LorentzTransform hcmboost = dk.boostHCM(); // Loop over the particles double etcent = 0; double etfrag = 0; for (const Particle& p : particles) { // Boost momentum to CMS const FourMomentum hcmMom = hcmboost.transform(p.momentum()); double et = fabs(hcmMom.Et()); double eta = hcmMom.eta(); // Averages in central and forward region if (fabs(eta) < .5 ) etcent += et; if (eta > 2 && eta <= 3.) etfrag += et; // Histograms of Et flow if (evcut[0]) _histETLowQa [bin[0]]->fill(eta, et); if (evcut[1]) _histETLowQb [bin[1]]->fill(eta, et); if (evcut[2]) _histETHighQa[bin[2]]->fill(eta, et); if (evcut[3]) _histETHighQb[bin[3]]->fill(eta, et); } // Fill histograms for the average quantities if (evcut[1] || evcut[3]) { _histAverETCentral->fill(q2, etcent); _histAverETFrag ->fill(q2, etfrag); } } // Finalize void finalize() { // Normalization of the Et distributions /// @todo Simplify by using normalize() instead? Are all these being normalized to area=1?
for (size_t ix = 0; ix < 17; ++ix) if (_weightETLowQa[ix]->val() != 0) scale(_histETLowQa[ix], 1/ *_weightETLowQa[ix]); for (size_t ix = 0; ix < 7; ++ix) if (_weightETHighQa[ix]->val() != 0) scale(_histETHighQa[ix], 1/ *_weightETHighQa[ix]); for (size_t ix = 0; ix < 5; ++ix) if (_weightETLowQb[ix]->val() != 0) scale(_histETLowQb[ix], 1/ *_weightETLowQb[ix]); for (size_t ix = 0; ix < 3; ++ix) if (_weightETHighQb[ix]->val() != 0) scale(_histETHighQb[ix], 1/ *_weightETHighQb[ix]); } //@} private: /// @name Histograms //@{ vector<Histo1DPtr> _histETLowQa; vector<Histo1DPtr> _histETHighQa; vector<Histo1DPtr> _histETLowQb; vector<Histo1DPtr> _histETHighQb; Profile1DPtr _histAverETCentral; Profile1DPtr _histAverETFrag; //@} /// @name storage of weights for normalisation //@{ array<CounterPtr,17> _weightETLowQa; array<CounterPtr,7> _weightETHighQa; array<CounterPtr,5> _weightETLowQb; array<CounterPtr,3> _weightETHighQb; //@} }; DECLARE_RIVET_PLUGIN(H1_2000_S4129130); } diff --git a/analyses/pluginLEP/ALEPH_1996_S3486095.cc b/analyses/pluginLEP/ALEPH_1996_S3486095.cc --- a/analyses/pluginLEP/ALEPH_1996_S3486095.cc +++ b/analyses/pluginLEP/ALEPH_1996_S3486095.cc @@ -1,478 +1,479 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/Sphericity.hh" #include "Rivet/Projections/Thrust.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/ParisiTensor.hh" #include "Rivet/Projections/Hemispheres.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief ALEPH QCD study with event shapes and identified particles /// @author Holger Schulz class ALEPH_1996_S3486095 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(ALEPH_1996_S3486095); /// @name Analysis methods //@{ void init() { // Set up projections declare(Beam(), "Beams"); const ChargedFinalState cfs; declare(cfs, "FS"); declare(UnstableParticles(), "UFS"); declare(FastJets(cfs, FastJets::DURHAM, 0.7), "DurhamJets");
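The H1_2000_S4129130 hunks above append `to_string(ix)` to each temporary counter's path, because analysis objects booked under one shared path would alias a single underlying object. Rivet's real `book()` is not reproduced here; this standalone sketch (hypothetical `book`, `Counter`, and `Registry` names) mimics a path-keyed booking registry to illustrate the collision the patch avoids:

```cpp
#include <map>
#include <memory>
#include <string>

// Minimal stand-in for a path-keyed booking registry (invented names, not
// Rivet's real API): booking a path that is already taken hands back the
// existing object instead of creating a fresh one.
using Counter  = std::shared_ptr<int>;
using Registry = std::map<std::string, Counter>;

Counter book(Registry& reg, const std::string& path) {
  auto it = reg.find(path);
  if (it != reg.end()) return it->second;  // same path -> same counter
  Counter c = std::make_shared<int>(0);
  reg[path] = c;
  return c;
}
```

Booking `"TMP/ETLowQa"` seventeen times would therefore yield seventeen handles to one counter; suffixing the loop index gives each Q²/x bin its own normalisation counter.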
declare(Sphericity(cfs), "Sphericity"); declare(ParisiTensor(cfs), "Parisi"); const Thrust thrust(cfs); declare(thrust, "Thrust"); declare(Hemispheres(thrust), "Hemispheres"); // Book histograms book(_histSphericity ,1, 1, 1); book(_histAplanarity ,2, 1, 1); book(_hist1MinusT ,3, 1, 1); book(_histTMinor ,4, 1, 1); book(_histY3 ,5, 1, 1); book(_histHeavyJetMass ,6, 1, 1); book(_histCParam ,7, 1, 1); book(_histOblateness ,8, 1, 1); book(_histScaledMom ,9, 1, 1); book(_histRapidityT ,10, 1, 1); book(_histPtSIn ,11, 1, 1); book(_histPtSOut ,12, 1, 1); book(_histLogScaledMom ,17, 1, 1); book(_histChMult ,18, 1, 1); book(_histMeanChMult ,19, 1, 1); book(_histMeanChMultRapt05,20, 1, 1); book(_histMeanChMultRapt10,21, 1, 1); book(_histMeanChMultRapt15,22, 1, 1); book(_histMeanChMultRapt20,23, 1, 1); // Particle spectra book(_histMultiPiPlus ,25, 1, 1); book(_histMultiKPlus ,26, 1, 1); book(_histMultiP ,27, 1, 1); book(_histMultiPhoton ,28, 1, 1); book(_histMultiPi0 ,29, 1, 1); book(_histMultiEta ,30, 1, 1); book(_histMultiEtaPrime ,31, 1, 1); book(_histMultiK0 ,32, 1, 1); book(_histMultiLambda0 ,33, 1, 1); book(_histMultiXiMinus ,34, 1, 1); book(_histMultiSigma1385Plus ,35, 1, 1); book(_histMultiXi1530_0 ,36, 1, 1); book(_histMultiRho ,37, 1, 1); book(_histMultiOmega782 ,38, 1, 1); book(_histMultiKStar892_0 ,39, 1, 1); book(_histMultiPhi ,40, 1, 1); book(_histMultiKStar892Plus ,43, 1, 1); // Mean multiplicities book(_histMeanMultiPi0 ,44, 1, 2); book(_histMeanMultiEta ,44, 1, 3); book(_histMeanMultiEtaPrime ,44, 1, 4); book(_histMeanMultiK0 ,44, 1, 5); book(_histMeanMultiRho ,44, 1, 6); book(_histMeanMultiOmega782 ,44, 1, 7); book(_histMeanMultiPhi ,44, 1, 8); book(_histMeanMultiKStar892Plus ,44, 1, 9); book(_histMeanMultiKStar892_0 ,44, 1, 10); book(_histMeanMultiLambda0 ,44, 1, 11); book(_histMeanMultiSigma0 ,44, 1, 12); book(_histMeanMultiXiMinus ,44, 1, 13); book(_histMeanMultiSigma1385Plus ,44, 1, 14); book(_histMeanMultiXi1530_0 ,44, 1, 15); 
book(_histMeanMultiOmegaOmegaBar ,44, 1, 16); - book(_weightedTotalPartNum, "/TMP/TotalPartNum"); + book(_weightedTotalPartNum, "/TMP/weightedTotalPartNum"); } void analyze(const Event& e) { // First, veto on leptonic events by requiring at least 4 charged FS particles const FinalState& fs = apply<FinalState>(e, "FS"); const size_t numParticles = fs.particles().size(); // Even if we only generate hadronic events, we still need a cut on numCharged >= 2. if (numParticles < 2) { MSG_DEBUG("Failed leptonic event cut"); vetoEvent; } MSG_DEBUG("Passed leptonic event cut"); _weightedTotalPartNum->fill(numParticles); // Get beams and average beam momentum const ParticlePair& beams = apply<Beam>(e, "Beams").beams(); const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0; MSG_DEBUG("Avg beam momentum = " << meanBeamMom); // Thrusts MSG_DEBUG("Calculating thrust"); const Thrust& thrust = apply<Thrust>(e, "Thrust"); _hist1MinusT->fill(1 - thrust.thrust()); _histTMinor->fill(thrust.thrustMinor()); _histOblateness->fill(thrust.oblateness()); // Jets MSG_DEBUG("Calculating differential jet rate plots:"); const FastJets& durjet = apply<FastJets>(e, "DurhamJets"); if (durjet.clusterSeq()) { double y3 = durjet.clusterSeq()->exclusive_ymerge_max(2); if (y3>0.0) _histY3->fill(-1. * std::log(y3)); } // Sphericities MSG_DEBUG("Calculating sphericity"); const Sphericity& sphericity = apply<Sphericity>(e, "Sphericity"); _histSphericity->fill(sphericity.sphericity()); _histAplanarity->fill(sphericity.aplanarity()); // C param MSG_DEBUG("Calculating Parisi params"); const ParisiTensor& parisi = apply<ParisiTensor>(e, "Parisi"); _histCParam->fill(parisi.C()); // Hemispheres MSG_DEBUG("Calculating hemisphere variables"); const Hemispheres& hemi = apply<Hemispheres>(e, "Hemispheres"); _histHeavyJetMass->fill(hemi.scaledM2high()); // Iterate over all the charged final state particles.
double Evis = 0.0; double rapt05 = 0.; double rapt10 = 0.; double rapt15 = 0.; double rapt20 = 0.; MSG_DEBUG("About to iterate over charged FS particles"); for (const Particle& p : fs.particles()) { // Get momentum and energy of each particle. const Vector3 mom3 = p.p3(); const double energy = p.E(); Evis += energy; // Scaled momenta. const double mom = mom3.mod(); const double scaledMom = mom/meanBeamMom; const double logInvScaledMom = -std::log(scaledMom); _histLogScaledMom->fill(logInvScaledMom); _histScaledMom->fill(scaledMom); // Get momenta components w.r.t. thrust and sphericity. const double momT = dot(thrust.thrustAxis(), mom3); const double pTinS = dot(mom3, sphericity.sphericityMajorAxis()); const double pToutS = dot(mom3, sphericity.sphericityMinorAxis()); _histPtSIn->fill(fabs(pTinS/GeV)); _histPtSOut->fill(fabs(pToutS/GeV)); // Calculate rapidities w.r.t. thrust. const double rapidityT = 0.5 * std::log((energy + momT) / (energy - momT)); _histRapidityT->fill(fabs(rapidityT)); if (std::fabs(rapidityT) <= 0.5) { rapt05 += 1.0; } if (std::fabs(rapidityT) <= 1.0) { rapt10 += 1.0; } if (std::fabs(rapidityT) <= 1.5) { rapt15 += 1.0; } if (std::fabs(rapidityT) <= 2.0) { rapt20 += 1.0; } } _histChMult->fill(numParticles); _histMeanChMultRapt05->fill(_histMeanChMultRapt05->bin(0).xMid(), rapt05); _histMeanChMultRapt10->fill(_histMeanChMultRapt10->bin(0).xMid(), rapt10); _histMeanChMultRapt15->fill(_histMeanChMultRapt15->bin(0).xMid(), rapt15); _histMeanChMultRapt20->fill(_histMeanChMultRapt20->bin(0).xMid(), rapt20); _histMeanChMult->fill(_histMeanChMult->bin(0).xMid(), numParticles); //// Final state of unstable particles to get particle spectra const UnstableParticles& ufs = apply(e, "UFS"); for (Particles::const_iterator p = ufs.particles().begin(); p != ufs.particles().end(); ++p) { const Vector3 mom3 = p->momentum().p3(); int id = abs(p->pid()); const double mom = mom3.mod(); const double energy = p->momentum().E(); const double scaledMom = 
mom/meanBeamMom; const double scaledEnergy = energy/meanBeamMom; // meanBeamMom is approximately beam energy switch (id) { case 22: _histMultiPhoton->fill(-1.*std::log(scaledMom)); break; case -321: case 321: _histMultiKPlus->fill(scaledMom); break; case 211: case -211: _histMultiPiPlus->fill(scaledMom); break; case 2212: case -2212: _histMultiP->fill(scaledMom); break; case 111: _histMultiPi0->fill(scaledMom); _histMeanMultiPi0->fill(_histMeanMultiPi0->bin(0).xMid()); break; case 221: if (scaledMom >= 0.1) { _histMultiEta->fill(scaledEnergy); _histMeanMultiEta->fill(_histMeanMultiEta->bin(0).xMid()); } break; case 331: if (scaledMom >= 0.1) { _histMultiEtaPrime->fill(scaledEnergy); _histMeanMultiEtaPrime->fill(_histMeanMultiEtaPrime->bin(0).xMid()); } break; case 130: //klong case 310: //kshort _histMultiK0->fill(scaledMom); _histMeanMultiK0->fill(_histMeanMultiK0->bin(0).xMid()); break; case 113: _histMultiRho->fill(scaledMom); _histMeanMultiRho->fill(_histMeanMultiRho->bin(0).xMid()); break; case 223: _histMultiOmega782->fill(scaledMom); _histMeanMultiOmega782->fill(_histMeanMultiOmega782->bin(0).xMid()); break; case 333: _histMultiPhi->fill(scaledMom); _histMeanMultiPhi->fill(_histMeanMultiPhi->bin(0).xMid()); break; case 313: case -313: _histMultiKStar892_0->fill(scaledMom); _histMeanMultiKStar892_0->fill(_histMeanMultiKStar892_0->bin(0).xMid()); break; case 323: case -323: _histMultiKStar892Plus->fill(scaledEnergy); _histMeanMultiKStar892Plus->fill(_histMeanMultiKStar892Plus->bin(0).xMid()); break; case 3122: case -3122: _histMultiLambda0->fill(scaledMom); _histMeanMultiLambda0->fill(_histMeanMultiLambda0->bin(0).xMid()); break; case 3212: case -3212: _histMeanMultiSigma0->fill(_histMeanMultiSigma0->bin(0).xMid()); break; case 3312: case -3312: _histMultiXiMinus->fill(scaledEnergy); _histMeanMultiXiMinus->fill(_histMeanMultiXiMinus->bin(0).xMid()); break; case 3114: case -3114: case 3224: case -3224: _histMultiSigma1385Plus->fill(scaledEnergy); 
_histMeanMultiSigma1385Plus->fill(_histMeanMultiSigma1385Plus->bin(0).xMid()); break; case 3324: case -3324: _histMultiXi1530_0->fill(scaledEnergy); _histMeanMultiXi1530_0->fill(_histMeanMultiXi1530_0->bin(0).xMid()); break; case 3334: _histMeanMultiOmegaOmegaBar->fill(_histMeanMultiOmegaOmegaBar->bin(0).xMid()); break; } } } /// Finalize void finalize() { // Normalize inclusive single particle distributions to the average number // of charged particles per event. const double avgNumParts = _weightedTotalPartNum->sumW() / sumOfWeights(); normalize(_histPtSIn, avgNumParts); normalize(_histPtSOut, avgNumParts); normalize(_histRapidityT, avgNumParts); normalize(_histY3); normalize(_histLogScaledMom, avgNumParts); normalize(_histScaledMom, avgNumParts); // particle spectra scale(_histMultiPiPlus ,1./sumOfWeights()); scale(_histMultiKPlus ,1./sumOfWeights()); scale(_histMultiP ,1./sumOfWeights()); scale(_histMultiPhoton ,1./sumOfWeights()); scale(_histMultiPi0 ,1./sumOfWeights()); scale(_histMultiEta ,1./sumOfWeights()); scale(_histMultiEtaPrime ,1./sumOfWeights()); scale(_histMultiK0 ,1./sumOfWeights()); scale(_histMultiLambda0 ,1./sumOfWeights()); scale(_histMultiXiMinus ,1./sumOfWeights()); scale(_histMultiSigma1385Plus ,1./sumOfWeights()); scale(_histMultiXi1530_0 ,1./sumOfWeights()); scale(_histMultiRho ,1./sumOfWeights()); scale(_histMultiOmega782 ,1./sumOfWeights()); scale(_histMultiKStar892_0 ,1./sumOfWeights()); scale(_histMultiPhi ,1./sumOfWeights()); scale(_histMultiKStar892Plus ,1./sumOfWeights()); // event shape normalize(_hist1MinusT); normalize(_histTMinor); normalize(_histOblateness); normalize(_histSphericity); normalize(_histAplanarity); normalize(_histHeavyJetMass); normalize(_histCParam); // mean multiplicities scale(_histChMult , 2.0/sumOfWeights()); // taking into account the binwidth of 2 scale(_histMeanChMult , 1.0/sumOfWeights()); scale(_histMeanChMultRapt05 , 1.0/sumOfWeights()); scale(_histMeanChMultRapt10 , 1.0/sumOfWeights()); 
scale(_histMeanChMultRapt15 , 1.0/sumOfWeights()); scale(_histMeanChMultRapt20 , 1.0/sumOfWeights()); scale(_histMeanMultiPi0 , 1.0/sumOfWeights()); scale(_histMeanMultiEta , 1.0/sumOfWeights()); scale(_histMeanMultiEtaPrime , 1.0/sumOfWeights()); scale(_histMeanMultiK0 , 1.0/sumOfWeights()); scale(_histMeanMultiRho , 1.0/sumOfWeights()); scale(_histMeanMultiOmega782 , 1.0/sumOfWeights()); scale(_histMeanMultiPhi , 1.0/sumOfWeights()); scale(_histMeanMultiKStar892Plus , 1.0/sumOfWeights()); scale(_histMeanMultiKStar892_0 , 1.0/sumOfWeights()); scale(_histMeanMultiLambda0 , 1.0/sumOfWeights()); scale(_histMeanMultiSigma0 , 1.0/sumOfWeights()); scale(_histMeanMultiXiMinus , 1.0/sumOfWeights()); scale(_histMeanMultiSigma1385Plus, 1.0/sumOfWeights()); scale(_histMeanMultiXi1530_0 , 1.0/sumOfWeights()); scale(_histMeanMultiOmegaOmegaBar, 1.0/sumOfWeights()); } //@} private: /// Store the weighted sums of numbers of charged / charged+neutral /// particles - used to calculate average number of particles for the /// inclusive single particle distributions' normalisations. 
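The `finalize()` above computes the average charged multiplicity as `_weightedTotalPartNum->sumW() / sumOfWeights()` and uses it to normalise the inclusive spectra. A minimal standalone sketch of that arithmetic (hypothetical helper name, no Rivet types): the weighted sum of per-event particle counts divided by the sum of event weights.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper mirroring the finalize() arithmetic above: the
// weighted sum of per-event particle counts, divided by the sum of event
// weights, gives the average multiplicity used to normalise the spectra.
double averageMultiplicity(const std::vector<double>& weights,
                           const std::vector<int>& counts) {
  double sumW = 0.0, weightedN = 0.0;
  for (std::size_t i = 0; i < weights.size() && i < counts.size(); ++i) {
    sumW      += weights[i];              // plays the role of sumOfWeights()
    weightedN += weights[i] * counts[i];  // plays the role of the counter's sumW()
  }
  return sumW > 0.0 ? weightedN / sumW : 0.0;
}
```

With unit weights this reduces to the plain mean; non-uniform event weights shift the average towards the more heavily weighted events.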
CounterPtr _weightedTotalPartNum; /// @name Histograms //@{ Histo1DPtr _histSphericity; Histo1DPtr _histAplanarity; Histo1DPtr _hist1MinusT; Histo1DPtr _histTMinor; Histo1DPtr _histY3; Histo1DPtr _histHeavyJetMass; Histo1DPtr _histCParam; Histo1DPtr _histOblateness; Histo1DPtr _histScaledMom; Histo1DPtr _histRapidityT; Histo1DPtr _histPtSIn; Histo1DPtr _histPtSOut; Histo1DPtr _histJetRate2Durham; Histo1DPtr _histJetRate3Durham; Histo1DPtr _histJetRate4Durham; Histo1DPtr _histJetRate5Durham; Histo1DPtr _histLogScaledMom; Histo1DPtr _histChMult; Histo1DPtr _histMultiPiPlus; Histo1DPtr _histMultiKPlus; Histo1DPtr _histMultiP; Histo1DPtr _histMultiPhoton; Histo1DPtr _histMultiPi0; Histo1DPtr _histMultiEta; Histo1DPtr _histMultiEtaPrime; Histo1DPtr _histMultiK0; Histo1DPtr _histMultiLambda0; Histo1DPtr _histMultiXiMinus; Histo1DPtr _histMultiSigma1385Plus; Histo1DPtr _histMultiXi1530_0; Histo1DPtr _histMultiRho; Histo1DPtr _histMultiOmega782; Histo1DPtr _histMultiKStar892_0; Histo1DPtr _histMultiPhi; Histo1DPtr _histMultiKStar892Plus; // mean multiplicities Histo1DPtr _histMeanChMult; Histo1DPtr _histMeanChMultRapt05; Histo1DPtr _histMeanChMultRapt10; Histo1DPtr _histMeanChMultRapt15; Histo1DPtr _histMeanChMultRapt20; Histo1DPtr _histMeanMultiPi0; Histo1DPtr _histMeanMultiEta; Histo1DPtr _histMeanMultiEtaPrime; Histo1DPtr _histMeanMultiK0; Histo1DPtr _histMeanMultiRho; Histo1DPtr _histMeanMultiOmega782; Histo1DPtr _histMeanMultiPhi; Histo1DPtr _histMeanMultiKStar892Plus; Histo1DPtr _histMeanMultiKStar892_0; Histo1DPtr _histMeanMultiLambda0; Histo1DPtr _histMeanMultiSigma0; Histo1DPtr _histMeanMultiXiMinus; Histo1DPtr _histMeanMultiSigma1385Plus; Histo1DPtr _histMeanMultiXi1530_0; Histo1DPtr _histMeanMultiOmegaOmegaBar; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ALEPH_1996_S3486095); } diff --git a/analyses/pluginLEP/ALEPH_2004_S5765862.cc b/analyses/pluginLEP/ALEPH_2004_S5765862.cc --- a/analyses/pluginLEP/ALEPH_2004_S5765862.cc +++ 
b/analyses/pluginLEP/ALEPH_2004_S5765862.cc @@ -1,331 +1,331 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/Thrust.hh" #include "Rivet/Projections/Sphericity.hh" #include "Rivet/Projections/ParisiTensor.hh" #include "Rivet/Projections/Hemispheres.hh" #include "Rivet/Projections/Beam.hh" namespace Rivet { /// @brief ALEPH jet rates and event shapes at LEP 1 and 2 class ALEPH_2004_S5765862 : public Analysis { public: ALEPH_2004_S5765862() : Analysis("ALEPH_2004_S5765862") , _initialisedJets(false), _initialisedSpectra(false) { } public: void init() { _initialisedJets = true; _initialisedSpectra = true; // TODO: According to the paper they seem to discard neutral particles // between 1 and 2 GeV. That correction is included in the systematic // uncertainties and overly complicated to program, so we ignore it. const FinalState fs; declare(fs, "FS"); FastJets durhamjets(fs, FastJets::DURHAM, 0.7, JetAlg::Muons::ALL, JetAlg::Invisibles::ALL); declare(durhamjets, "DurhamJets"); const Thrust thrust(fs); declare(thrust, "Thrust"); declare(Sphericity(fs), "Sphericity"); declare(ParisiTensor(fs), "Parisi"); declare(Hemispheres(thrust), "Hemispheres"); const ChargedFinalState cfs; declare(Beam(), "Beams"); declare(cfs, "CFS"); // Histos // offset for the event shapes and jets int offset = 0; switch (int(sqrtS()/GeV + 0.5)) { case 91: offset = 0; break; case 133: offset = 1; break; case 161: offset = 2; break; case 172: offset = 3; break; case 183: offset = 4; break; case 189: offset = 5; break; case 200: offset = 6; break; case 206: offset = 7; break; default: _initialisedJets = false; } // event shapes if(_initialisedJets) { book(_h_thrust ,offset+54, 1, 1); book(_h_heavyjetmass ,offset+62, 1, 1); book(_h_totaljetbroadening ,offset+70, 1, 1); book(_h_widejetbroadening ,offset+78, 1, 1); book(_h_cparameter ,offset+86, 
1, 1); book(_h_thrustmajor ,offset+94, 1, 1); book(_h_thrustminor ,offset+102, 1, 1); book(_h_jetmassdifference ,offset+110, 1, 1); book(_h_aplanarity ,offset+118, 1, 1); if ( offset != 0 ) book(_h_planarity, offset+125, 1, 1); book(_h_oblateness ,offset+133, 1, 1); book(_h_sphericity ,offset+141, 1, 1); // Durham n->m jet resolutions book(_h_y_Durham[0] ,offset+149, 1, 1); // y12 d149 ... d156 book(_h_y_Durham[1] ,offset+157, 1, 1); // y23 d157 ... d164 if (offset<6) { // there is no y34, y45 and y56 for 200 gev book(_h_y_Durham[2] ,offset+165, 1, 1); // y34 d165 ... d172, but not 171 book(_h_y_Durham[3] ,offset+173, 1, 1); // y45 d173 ... d179 book(_h_y_Durham[4] ,offset+180, 1, 1); // y56 d180 ... d186 } else if (offset==6) { _h_y_Durham[2] = Histo1DPtr(); _h_y_Durham[3] = Histo1DPtr(); _h_y_Durham[4] = Histo1DPtr(); } else if (offset==7) { book(_h_y_Durham[2] ,172, 1, 1); book(_h_y_Durham[3] ,179, 1, 1); book(_h_y_Durham[4] ,186, 1, 1); } // Durham n-jet fractions book(_h_R_Durham[0] ,offset+187, 1, 1); // R1 d187 ... d194 book(_h_R_Durham[1] ,offset+195, 1, 1); // R2 d195 ... d202 book(_h_R_Durham[2] ,offset+203, 1, 1); // R3 d203 ... d210 book(_h_R_Durham[3] ,offset+211, 1, 1); // R4 d211 ... d218 book(_h_R_Durham[4] ,offset+219, 1, 1); // R5 d219 ... d226 book(_h_R_Durham[5] ,offset+227, 1, 1); // R>=6 d227 ... 
d234 } // offset for the charged particle distributions offset = 0; switch (int(sqrtS()/GeV + 0.5)) { case 133: offset = 0; break; case 161: offset = 1; break; case 172: offset = 2; break; case 183: offset = 3; break; case 189: offset = 4; break; case 196: offset = 5; break; case 200: offset = 6; break; case 206: offset = 7; break; default: _initialisedSpectra=false; } if (_initialisedSpectra) { book(_h_xp , 2+offset, 1, 1); book(_h_xi ,11+offset, 1, 1); book(_h_xe ,19+offset, 1, 1); book(_h_pTin ,27+offset, 1, 1); if (offset == 7) book(_h_pTout, 35, 1, 1); book(_h_rapidityT ,36+offset, 1, 1); book(_h_rapidityS ,44+offset, 1, 1); } - book(_weightedTotalChargedPartNum, "weightedTotalChargedPartNum"); + book(_weightedTotalChargedPartNum, "_weightedTotalChargedPartNum"); if (!_initialisedSpectra && !_initialisedJets) { MSG_WARNING("CoM energy of events sqrt(s) = " << sqrtS()/GeV << " doesn't match any available analysis energy ."); } book(mult, 1, 1, 1); } void analyze(const Event& e) { const Thrust& thrust = apply(e, "Thrust"); const Sphericity& sphericity = apply(e, "Sphericity"); if(_initialisedJets) { bool LEP1 = fuzzyEquals(sqrtS(),91.2*GeV,0.01); // event shapes double thr = LEP1 ? 
thrust.thrust() : 1.0 - thrust.thrust(); _h_thrust->fill(thr); _h_thrustmajor->fill(thrust.thrustMajor()); if(LEP1) _h_thrustminor->fill(log(thrust.thrustMinor())); else _h_thrustminor->fill(thrust.thrustMinor()); _h_oblateness->fill(thrust.oblateness()); const Hemispheres& hemi = apply<Hemispheres>(e, "Hemispheres"); _h_heavyjetmass->fill(hemi.scaledM2high()); _h_jetmassdifference->fill(hemi.scaledM2diff()); _h_totaljetbroadening->fill(hemi.Bsum()); _h_widejetbroadening->fill(hemi.Bmax()); const ParisiTensor& parisi = apply<ParisiTensor>(e, "Parisi"); _h_cparameter->fill(parisi.C()); _h_aplanarity->fill(sphericity.aplanarity()); if(_h_planarity) _h_planarity->fill(sphericity.planarity()); _h_sphericity->fill(sphericity.sphericity()); // Jet rates const FastJets& durjet = apply<FastJets>(e, "DurhamJets"); double log10e = log10(exp(1.)); if (durjet.clusterSeq()) { double logynm1=0.; double logyn; for (size_t i=0; i<5; ++i) { double yn = durjet.clusterSeq()->exclusive_ymerge_max(i+1); if (yn<=0.0) continue; logyn = -log(yn); if (_h_y_Durham[i]) { _h_y_Durham[i]->fill(logyn); } if(!LEP1) logyn *= log10e; for (size_t j = 0; j < _h_R_Durham[i]->numBins(); ++j) { double val = _h_R_Durham[i]->bin(j).xMin(); double width = _h_R_Durham[i]->bin(j).xWidth(); if(-val<=logynm1) break; if(-val<logyn) { _h_R_Durham[i]->fill(val+0.5*width, width); } } logynm1 = logyn; } for (size_t j = 0; j < _h_R_Durham[5]->numBins(); ++j) { double val = _h_R_Durham[5]->bin(j).xMin(); double width = _h_R_Durham[5]->bin(j).xWidth(); if(-val<=logynm1) break; _h_R_Durham[5]->fill(val+0.5*width, width); } } if( !_initialisedSpectra) { const ChargedFinalState& cfs = apply<ChargedFinalState>(e, "CFS"); const size_t numParticles = cfs.particles().size(); _weightedTotalChargedPartNum->fill(numParticles); } } // charged particle distributions if(_initialisedSpectra) { const ChargedFinalState& cfs = apply<ChargedFinalState>(e, "CFS"); const size_t numParticles = cfs.particles().size(); _weightedTotalChargedPartNum->fill(numParticles); const ParticlePair& beams = apply<Beam>(e, "Beams").beams(); const double
meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0; for (const Particle& p : cfs.particles()) { const double xp = p.p3().mod()/meanBeamMom; _h_xp->fill(xp ); const double logxp = -std::log(xp); _h_xi->fill(logxp); const double xe = p.E()/meanBeamMom; _h_xe->fill(xe ); const double pTinT = dot(p.p3(), thrust.thrustMajorAxis()); const double pToutT = dot(p.p3(), thrust.thrustMinorAxis()); _h_pTin->fill(fabs(pTinT/GeV)); if(_h_pTout) _h_pTout->fill(fabs(pToutT/GeV)); const double momT = dot(thrust.thrustAxis() ,p.p3()); const double rapidityT = 0.5 * std::log((p.E() + momT) / (p.E() - momT)); _h_rapidityT->fill(fabs(rapidityT)); const double momS = dot(sphericity.sphericityAxis(),p.p3()); const double rapidityS = 0.5 * std::log((p.E() + momS) / (p.E() - momS)); _h_rapidityS->fill(fabs(rapidityS)); } } } void finalize() { if(!_initialisedJets && !_initialisedSpectra) return; if (_initialisedJets) { normalize(_h_thrust); normalize(_h_heavyjetmass); normalize(_h_totaljetbroadening); normalize(_h_widejetbroadening); normalize(_h_cparameter); normalize(_h_thrustmajor); normalize(_h_thrustminor); normalize(_h_jetmassdifference); normalize(_h_aplanarity); if(_h_planarity) normalize(_h_planarity); normalize(_h_oblateness); normalize(_h_sphericity); for (size_t n=0; n<6; ++n) { scale(_h_R_Durham[n], 1./sumOfWeights()); } for (size_t n = 0; n < 5; ++n) { if (_h_y_Durham[n]) { scale(_h_y_Durham[n], 1.0/sumOfWeights()); } } } Histo1D temphisto(refData(1, 1, 1)); const double avgNumParts = dbl(*_weightedTotalChargedPartNum) / sumOfWeights(); for (size_t b = 0; b < temphisto.numBins(); b++) { const double x = temphisto.bin(b).xMid(); const double ex = temphisto.bin(b).xWidth()/2.; if (inRange(sqrtS()/GeV, x-ex, x+ex)) { mult->addPoint(x, avgNumParts, ex, 0.); } } if (_initialisedSpectra) { normalize(_h_xp, avgNumParts); normalize(_h_xi, avgNumParts); normalize(_h_xe, avgNumParts); normalize(_h_pTin , avgNumParts); if (_h_pTout) normalize(_h_pTout, 
avgNumParts); normalize(_h_rapidityT, avgNumParts); normalize(_h_rapidityS, avgNumParts); } } private: bool _initialisedJets; bool _initialisedSpectra; Scatter2DPtr mult; Histo1DPtr _h_xp; Histo1DPtr _h_xi; Histo1DPtr _h_xe; Histo1DPtr _h_pTin; Histo1DPtr _h_pTout; Histo1DPtr _h_rapidityT; Histo1DPtr _h_rapidityS; Histo1DPtr _h_thrust; Histo1DPtr _h_heavyjetmass; Histo1DPtr _h_totaljetbroadening; Histo1DPtr _h_widejetbroadening; Histo1DPtr _h_cparameter; Histo1DPtr _h_thrustmajor; Histo1DPtr _h_thrustminor; Histo1DPtr _h_jetmassdifference; Histo1DPtr _h_aplanarity; Histo1DPtr _h_planarity; Histo1DPtr _h_oblateness; Histo1DPtr _h_sphericity; Histo1DPtr _h_R_Durham[6]; Histo1DPtr _h_y_Durham[5]; CounterPtr _weightedTotalChargedPartNum; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ALEPH_2004_S5765862); } diff --git a/analyses/pluginLEP/DELPHI_1995_S3137023.cc b/analyses/pluginLEP/DELPHI_1995_S3137023.cc --- a/analyses/pluginLEP/DELPHI_1995_S3137023.cc +++ b/analyses/pluginLEP/DELPHI_1995_S3137023.cc @@ -1,104 +1,104 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief DELPHI strange baryon paper /// @author Hendrik Hoeth class DELPHI_1995_S3137023 : public Analysis { public: /// Constructor DELPHI_1995_S3137023() : Analysis("DELPHI_1995_S3137023") {} /// @name Analysis methods //@{ void init() { declare(Beam(), "Beams"); declare(ChargedFinalState(), "FS"); declare(UnstableParticles(), "UFS"); book(_histXpXiMinus ,2, 1, 1); book(_histXpSigma1385Plus ,3, 1, 1); - book(_weightedTotalNumXiMinus, "weightedTotalNumXiMinus"); - book(_weightedTotalNumSigma1385Plus, "weightedTotalNumSigma1385Plus"); + book(_weightedTotalNumXiMinus, "_weightedTotalNumXiMinus"); + book(_weightedTotalNumSigma1385Plus, "_weightedTotalNumSigma1385Plus"); } void analyze(const Event& e) { 
// First, veto on leptonic events by requiring at least 4 charged FS particles const FinalState& fs = apply<FinalState>(e, "FS"); const size_t numParticles = fs.particles().size(); // Even if we only generate hadronic events, we still need a cut on numCharged >= 2. if (numParticles < 2) { MSG_DEBUG("Failed leptonic event cut"); vetoEvent; } MSG_DEBUG("Passed leptonic event cut"); // Get beams and average beam momentum const ParticlePair& beams = apply<Beam>(e, "Beams").beams(); const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0; MSG_DEBUG("Avg beam momentum = " << meanBeamMom); // Final state of unstable particles to get particle spectra const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS"); for (const Particle& p : ufs.particles()) { const int id = p.abspid(); switch (id) { case 3312: _histXpXiMinus->fill(p.p3().mod()/meanBeamMom); _weightedTotalNumXiMinus->fill(); break; case 3114: case 3224: _histXpSigma1385Plus->fill(p.p3().mod()/meanBeamMom); _weightedTotalNumSigma1385Plus->fill(); break; } } } /// Finalize void finalize() { normalize(_histXpXiMinus , dbl(*_weightedTotalNumXiMinus)/sumOfWeights()); normalize(_histXpSigma1385Plus , dbl(*_weightedTotalNumSigma1385Plus)/sumOfWeights()); } //@} private: /// Store the weighted sums of numbers of charged / charged+neutral /// particles - used to calculate average number of particles for the /// inclusive single particle distributions' normalisations.
CounterPtr _weightedTotalNumXiMinus; CounterPtr _weightedTotalNumSigma1385Plus; Histo1DPtr _histXpXiMinus; Histo1DPtr _histXpSigma1385Plus; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(DELPHI_1995_S3137023); } diff --git a/analyses/pluginLEP/DELPHI_1996_S3430090.cc b/analyses/pluginLEP/DELPHI_1996_S3430090.cc --- a/analyses/pluginLEP/DELPHI_1996_S3430090.cc +++ b/analyses/pluginLEP/DELPHI_1996_S3430090.cc @@ -1,553 +1,553 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/Sphericity.hh" #include "Rivet/Projections/Thrust.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/ParisiTensor.hh" #include "Rivet/Projections/Hemispheres.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /** * @brief DELPHI event shapes and identified particle spectra * @author Andy Buckley * @author Hendrik Hoeth * * This is the paper which was used for the original PROFESSOR MC tuning * study. It studies a wide range of e+ e- event shape variables, differential * jet rates in the Durham and JADE schemes, and incorporates identified * particle spectra, from other LEP analyses. * * @par Run conditions * * @arg LEP1 beam energy: \f$ \sqrt{s} = \f$ 91.2 GeV * @arg Run with generic QCD events. * @arg No \f$ p_\perp^\text{min} \f$ cutoff is required */ class DELPHI_1996_S3430090 : public Analysis { public: /// Constructor DELPHI_1996_S3430090() : Analysis("DELPHI_1996_S3430090") { } /// @name Analysis methods //@{ void init() { declare(Beam(), "Beams"); // Don't try to introduce a pT or eta cut here. It's all corrected // back. (See Section 2 of the paper.) 
const ChargedFinalState cfs; declare(cfs, "FS"); declare(UnstableParticles(), "UFS"); declare(FastJets(cfs, FastJets::JADE, 0.7), "JadeJets"); declare(FastJets(cfs, FastJets::DURHAM, 0.7), "DurhamJets"); declare(Sphericity(cfs), "Sphericity"); declare(ParisiTensor(cfs), "Parisi"); const Thrust thrust(cfs); declare(thrust, "Thrust"); declare(Hemispheres(thrust), "Hemispheres"); book(_histPtTIn, 1, 1, 1); book(_histPtTOut,2, 1, 1); book(_histPtSIn, 3, 1, 1); book(_histPtSOut,4, 1, 1); book(_histRapidityT, 5, 1, 1); book(_histRapidityS, 6, 1, 1); book(_histScaledMom, 7, 1, 1); book(_histLogScaledMom, 8, 1, 1); book(_histPtTOutVsXp ,9, 1, 1); book(_histPtVsXp ,10, 1, 1); book(_hist1MinusT, 11, 1, 1); book(_histTMajor, 12, 1, 1); book(_histTMinor, 13, 1, 1); book(_histOblateness, 14, 1, 1); book(_histSphericity, 15, 1, 1); book(_histAplanarity, 16, 1, 1); book(_histPlanarity, 17, 1, 1); book(_histCParam, 18, 1, 1); book(_histDParam, 19, 1, 1); book(_histHemiMassH, 20, 1, 1); book(_histHemiMassL, 21, 1, 1); book(_histHemiMassD, 22, 1, 1); book(_histHemiBroadW, 23, 1, 1); book(_histHemiBroadN, 24, 1, 1); book(_histHemiBroadT, 25, 1, 1); book(_histHemiBroadD, 26, 1, 1); // Binned in y_cut book(_histDiffRate2Durham, 27, 1, 1); book(_histDiffRate2Jade, 28, 1, 1); book(_histDiffRate3Durham, 29, 1, 1); book(_histDiffRate3Jade, 30, 1, 1); book(_histDiffRate4Durham, 31, 1, 1); book(_histDiffRate4Jade, 32, 1, 1); // Binned in cos(chi) book(_histEEC, 33, 1, 1); book(_histAEEC, 34, 1, 1); book(_histMultiCharged, 35, 1, 1); book(_histMultiPiPlus, 36, 1, 1); book(_histMultiPi0, 36, 1, 2); book(_histMultiKPlus, 36, 1, 3); book(_histMultiK0, 36, 1, 4); book(_histMultiEta, 36, 1, 5); book(_histMultiEtaPrime, 36, 1, 6); book(_histMultiDPlus, 36, 1, 7); book(_histMultiD0, 36, 1, 8); book(_histMultiBPlus0, 36, 1, 9); book(_histMultiF0, 37, 1, 1); book(_histMultiRho, 38, 1, 1); book(_histMultiKStar892Plus, 38, 1, 2); book(_histMultiKStar892_0, 38, 1, 3); book(_histMultiPhi, 38, 1, 4); 
book(_histMultiDStar2010Plus, 38, 1, 5); book(_histMultiF2, 39, 1, 1); book(_histMultiK2Star1430_0, 39, 1, 2); book(_histMultiP, 40, 1, 1); book(_histMultiLambda0, 40, 1, 2); book(_histMultiXiMinus, 40, 1, 3); book(_histMultiOmegaMinus, 40, 1, 4); book(_histMultiDeltaPlusPlus, 40, 1, 5); book(_histMultiSigma1385Plus, 40, 1, 6); book(_histMultiXi1530_0, 40, 1, 7); book(_histMultiLambdaB0, 40, 1, 8); - book(_weightedTotalPartNum,"TotalPartNum"); - book(_passedCutWeightSum, "passedCutWeightSum"); - book(_passedCut3WeightSum, "passedCut3WeightSum"); - book(_passedCut4WeightSum, "passedCut4WeightSum"); - book(_passedCut5WeightSum, "passedCut5WeightSum"); + book(_weightedTotalPartNum,"_TotalPartNum"); + book(_passedCutWeightSum, "_passedCutWeightSum"); + book(_passedCut3WeightSum, "_passedCut3WeightSum"); + book(_passedCut4WeightSum, "_passedCut4WeightSum"); + book(_passedCut5WeightSum, "_passedCut5WeightSum"); } void analyze(const Event& e) { // First, veto on leptonic events by requiring at least 4 charged FS particles const FinalState& fs = apply(e, "FS"); const size_t numParticles = fs.particles().size(); // Even if we only generate hadronic events, we still need a cut on numCharged >= 2. 
if (numParticles < 2) { MSG_DEBUG("Failed leptonic event cut"); vetoEvent; } MSG_DEBUG("Passed leptonic event cut"); _passedCutWeightSum->fill(); _weightedTotalPartNum->fill(numParticles); // Get beams and average beam momentum const ParticlePair& beams = apply(e, "Beams").beams(); const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0; MSG_DEBUG("Avg beam momentum = " << meanBeamMom); // Thrusts MSG_DEBUG("Calculating thrust"); const Thrust& thrust = apply(e, "Thrust"); _hist1MinusT->fill(1 - thrust.thrust()); _histTMajor->fill(thrust.thrustMajor()); _histTMinor->fill(thrust.thrustMinor()); _histOblateness->fill(thrust.oblateness()); // Jets const FastJets& durjet = apply(e, "DurhamJets"); const FastJets& jadejet = apply(e, "JadeJets"); if (numParticles >= 3) { _passedCut3WeightSum->fill(); if (durjet.clusterSeq()) _histDiffRate2Durham->fill(durjet.clusterSeq()->exclusive_ymerge_max(2)); if (jadejet.clusterSeq()) _histDiffRate2Jade->fill(jadejet.clusterSeq()->exclusive_ymerge_max(2)); } if (numParticles >= 4) { _passedCut4WeightSum->fill(); if (durjet.clusterSeq()) _histDiffRate3Durham->fill(durjet.clusterSeq()->exclusive_ymerge_max(3)); if (jadejet.clusterSeq()) _histDiffRate3Jade->fill(jadejet.clusterSeq()->exclusive_ymerge_max(3)); } if (numParticles >= 5) { _passedCut5WeightSum->fill(); if (durjet.clusterSeq()) _histDiffRate4Durham->fill(durjet.clusterSeq()->exclusive_ymerge_max(4)); if (jadejet.clusterSeq()) _histDiffRate4Jade->fill(jadejet.clusterSeq()->exclusive_ymerge_max(4)); } // Sphericities MSG_DEBUG("Calculating sphericity"); const Sphericity& sphericity = apply(e, "Sphericity"); _histSphericity->fill(sphericity.sphericity()); _histAplanarity->fill(sphericity.aplanarity()); _histPlanarity->fill(sphericity.planarity()); // C & D params MSG_DEBUG("Calculating Parisi params"); const ParisiTensor& parisi = apply(e, "Parisi"); _histCParam->fill(parisi.C()); _histDParam->fill(parisi.D()); // Hemispheres MSG_DEBUG("Calculating 
hemisphere variables"); const Hemispheres& hemi = apply(e, "Hemispheres"); _histHemiMassH->fill(hemi.scaledM2high()); _histHemiMassL->fill(hemi.scaledM2low()); _histHemiMassD->fill(hemi.scaledM2diff()); _histHemiBroadW->fill(hemi.Bmax()); _histHemiBroadN->fill(hemi.Bmin()); _histHemiBroadT->fill(hemi.Bsum()); _histHemiBroadD->fill(hemi.Bdiff()); // Iterate over all the charged final state particles. double Evis = 0.0; double Evis2 = 0.0; MSG_DEBUG("About to iterate over charged FS particles"); for (const Particle& p : fs.particles()) { // Get momentum and energy of each particle. const Vector3 mom3 = p.p3(); const double energy = p.E(); Evis += energy; // Scaled momenta. const double mom = mom3.mod(); const double scaledMom = mom/meanBeamMom; const double logInvScaledMom = -std::log(scaledMom); _histLogScaledMom->fill(logInvScaledMom); _histScaledMom->fill(scaledMom); // Get momenta components w.r.t. thrust and sphericity. const double momT = dot(thrust.thrustAxis(), mom3); const double momS = dot(sphericity.sphericityAxis(), mom3); const double pTinT = dot(mom3, thrust.thrustMajorAxis()); const double pToutT = dot(mom3, thrust.thrustMinorAxis()); const double pTinS = dot(mom3, sphericity.sphericityMajorAxis()); const double pToutS = dot(mom3, sphericity.sphericityMinorAxis()); const double pT = sqrt(pow(pTinT, 2) + pow(pToutT, 2)); _histPtTIn->fill(fabs(pTinT/GeV)); _histPtTOut->fill(fabs(pToutT/GeV)); _histPtSIn->fill(fabs(pTinS/GeV)); _histPtSOut->fill(fabs(pToutS/GeV)); _histPtVsXp->fill(scaledMom, fabs(pT/GeV)); _histPtTOutVsXp->fill(scaledMom, fabs(pToutT/GeV)); // Calculate rapidities w.r.t. thrust and sphericity. 
const double rapidityT = 0.5 * std::log((energy + momT) / (energy - momT)); const double rapidityS = 0.5 * std::log((energy + momS) / (energy - momS)); _histRapidityT->fill(fabs(rapidityT)); _histRapidityS->fill(fabs(rapidityS)); MSG_TRACE(fabs(rapidityT) << " " << scaledMom/GeV); } Evis2 = Evis*Evis; // (A)EEC // Need iterators since second loop starts at current outer loop iterator, i.e. no "for" here! for (Particles::const_iterator p_i = fs.particles().begin(); p_i != fs.particles().end(); ++p_i) { for (Particles::const_iterator p_j = p_i; p_j != fs.particles().end(); ++p_j) { if (p_i == p_j) continue; const Vector3 mom3_i = p_i->momentum().p3(); const Vector3 mom3_j = p_j->momentum().p3(); const double energy_i = p_i->momentum().E(); const double energy_j = p_j->momentum().E(); const double cosij = dot(mom3_i.unit(), mom3_j.unit()); const double eec = (energy_i*energy_j) / Evis2; _histEEC->fill(cosij, eec); if (cosij < 0) _histAEEC->fill( cosij, eec); else _histAEEC->fill(-cosij, -eec); } } _histMultiCharged->fill(_histMultiCharged->bin(0).xMid(), numParticles); // Final state of unstable particles to get particle spectra const UnstableParticles& ufs = apply(e, "UFS"); for (const Particle& p : ufs.particles()) { int id = p.abspid(); switch (id) { case 211: _histMultiPiPlus->fill(_histMultiPiPlus->bin(0).xMid()); break; case 111: _histMultiPi0->fill(_histMultiPi0->bin(0).xMid()); break; case 321: _histMultiKPlus->fill(_histMultiKPlus->bin(0).xMid()); break; case 130: case 310: _histMultiK0->fill(_histMultiK0->bin(0).xMid()); break; case 221: _histMultiEta->fill(_histMultiEta->bin(0).xMid()); break; case 331: _histMultiEtaPrime->fill(_histMultiEtaPrime->bin(0).xMid()); break; case 411: _histMultiDPlus->fill(_histMultiDPlus->bin(0).xMid()); break; case 421: _histMultiD0->fill(_histMultiD0->bin(0).xMid()); break; case 511: case 521: case 531: _histMultiBPlus0->fill(_histMultiBPlus0->bin(0).xMid()); break; case 9010221: 
_histMultiF0->fill(_histMultiF0->bin(0).xMid()); break; case 113: _histMultiRho->fill(_histMultiRho->bin(0).xMid()); break; case 323: _histMultiKStar892Plus->fill(_histMultiKStar892Plus->bin(0).xMid()); break; case 313: _histMultiKStar892_0->fill(_histMultiKStar892_0->bin(0).xMid()); break; case 333: _histMultiPhi->fill(_histMultiPhi->bin(0).xMid()); break; case 413: _histMultiDStar2010Plus->fill(_histMultiDStar2010Plus->bin(0).xMid()); break; case 225: _histMultiF2->fill(_histMultiF2->bin(0).xMid()); break; case 315: _histMultiK2Star1430_0->fill(_histMultiK2Star1430_0->bin(0).xMid()); break; case 2212: _histMultiP->fill(_histMultiP->bin(0).xMid()); break; case 3122: _histMultiLambda0->fill(_histMultiLambda0->bin(0).xMid()); break; case 3312: _histMultiXiMinus->fill(_histMultiXiMinus->bin(0).xMid()); break; case 3334: _histMultiOmegaMinus->fill(_histMultiOmegaMinus->bin(0).xMid()); break; case 2224: _histMultiDeltaPlusPlus->fill(_histMultiDeltaPlusPlus->bin(0).xMid()); break; case 3114: _histMultiSigma1385Plus->fill(_histMultiSigma1385Plus->bin(0).xMid()); break; case 3324: _histMultiXi1530_0->fill(_histMultiXi1530_0->bin(0).xMid()); break; case 5122: _histMultiLambdaB0->fill(_histMultiLambdaB0->bin(0).xMid()); break; } } } // Finalize void finalize() { // Normalize inclusive single particle distributions to the average number // of charged particles per event. 
const double avgNumParts = dbl(*_weightedTotalPartNum / *_passedCutWeightSum); normalize(_histPtTIn, avgNumParts); normalize(_histPtTOut, avgNumParts); normalize(_histPtSIn, avgNumParts); normalize(_histPtSOut, avgNumParts); normalize(_histRapidityT, avgNumParts); normalize(_histRapidityS, avgNumParts); normalize(_histLogScaledMom, avgNumParts); normalize(_histScaledMom, avgNumParts); scale(_histEEC, 1.0 / *_passedCutWeightSum); scale(_histAEEC, 1.0 / *_passedCutWeightSum); scale(_histMultiCharged, 1.0 / *_passedCutWeightSum); scale(_histMultiPiPlus, 1.0 / *_passedCutWeightSum); scale(_histMultiPi0, 1.0 / *_passedCutWeightSum); scale(_histMultiKPlus, 1.0 / *_passedCutWeightSum); scale(_histMultiK0, 1.0 / *_passedCutWeightSum); scale(_histMultiEta, 1.0 / *_passedCutWeightSum); scale(_histMultiEtaPrime, 1.0 / *_passedCutWeightSum); scale(_histMultiDPlus, 1.0 / *_passedCutWeightSum); scale(_histMultiD0, 1.0 / *_passedCutWeightSum); scale(_histMultiBPlus0, 1.0 / *_passedCutWeightSum); scale(_histMultiF0, 1.0 / *_passedCutWeightSum); scale(_histMultiRho, 1.0 / *_passedCutWeightSum); scale(_histMultiKStar892Plus, 1.0 / *_passedCutWeightSum); scale(_histMultiKStar892_0, 1.0 / *_passedCutWeightSum); scale(_histMultiPhi, 1.0 / *_passedCutWeightSum); scale(_histMultiDStar2010Plus, 1.0 / *_passedCutWeightSum); scale(_histMultiF2, 1.0 / *_passedCutWeightSum); scale(_histMultiK2Star1430_0, 1.0 / *_passedCutWeightSum); scale(_histMultiP, 1.0 / *_passedCutWeightSum); scale(_histMultiLambda0, 1.0 / *_passedCutWeightSum); scale(_histMultiXiMinus, 1.0 / *_passedCutWeightSum); scale(_histMultiOmegaMinus, 1.0 / *_passedCutWeightSum); scale(_histMultiDeltaPlusPlus, 1.0 / *_passedCutWeightSum); scale(_histMultiSigma1385Plus, 1.0 / *_passedCutWeightSum); scale(_histMultiXi1530_0, 1.0 / *_passedCutWeightSum); scale(_histMultiLambdaB0, 1.0 / *_passedCutWeightSum); scale(_hist1MinusT, 1.0 / *_passedCutWeightSum); scale(_histTMajor, 1.0 / *_passedCutWeightSum); scale(_histTMinor, 1.0 / 
*_passedCutWeightSum); scale(_histOblateness, 1.0 / *_passedCutWeightSum); scale(_histSphericity, 1.0 / *_passedCutWeightSum); scale(_histAplanarity, 1.0 / *_passedCutWeightSum); scale(_histPlanarity, 1.0 / *_passedCutWeightSum); scale(_histHemiMassD, 1.0 / *_passedCutWeightSum); scale(_histHemiMassH, 1.0 / *_passedCutWeightSum); scale(_histHemiMassL, 1.0 / *_passedCutWeightSum); scale(_histHemiBroadW, 1.0 / *_passedCutWeightSum); scale(_histHemiBroadN, 1.0 / *_passedCutWeightSum); scale(_histHemiBroadT, 1.0 / *_passedCutWeightSum); scale(_histHemiBroadD, 1.0 / *_passedCutWeightSum); scale(_histCParam, 1.0 / *_passedCutWeightSum); scale(_histDParam, 1.0 / *_passedCutWeightSum); scale(_histDiffRate2Durham, 1.0 / *_passedCut3WeightSum); scale(_histDiffRate2Jade, 1.0 / *_passedCut3WeightSum); scale(_histDiffRate3Durham, 1.0 / *_passedCut4WeightSum); scale(_histDiffRate3Jade, 1.0 / *_passedCut4WeightSum); scale(_histDiffRate4Durham, 1.0 / *_passedCut5WeightSum); scale(_histDiffRate4Jade, 1.0 / *_passedCut5WeightSum); } //@} private: /// Store the weighted sums of numbers of charged / charged+neutral /// particles - used to calculate average number of particles for the /// inclusive single particle distributions' normalisations. 
CounterPtr _weightedTotalPartNum; /// @name Sums of weights past various cuts //@{ CounterPtr _passedCutWeightSum; CounterPtr _passedCut3WeightSum; CounterPtr _passedCut4WeightSum; CounterPtr _passedCut5WeightSum; //@} /// @name Histograms //@{ Histo1DPtr _histPtTIn; Histo1DPtr _histPtTOut; Histo1DPtr _histPtSIn; Histo1DPtr _histPtSOut; Histo1DPtr _histRapidityT; Histo1DPtr _histRapidityS; Histo1DPtr _histScaledMom, _histLogScaledMom; Profile1DPtr _histPtTOutVsXp, _histPtVsXp; Histo1DPtr _hist1MinusT; Histo1DPtr _histTMajor; Histo1DPtr _histTMinor; Histo1DPtr _histOblateness; Histo1DPtr _histSphericity; Histo1DPtr _histAplanarity; Histo1DPtr _histPlanarity; Histo1DPtr _histCParam; Histo1DPtr _histDParam; Histo1DPtr _histHemiMassD; Histo1DPtr _histHemiMassH; Histo1DPtr _histHemiMassL; Histo1DPtr _histHemiBroadW; Histo1DPtr _histHemiBroadN; Histo1DPtr _histHemiBroadT; Histo1DPtr _histHemiBroadD; Histo1DPtr _histDiffRate2Durham; Histo1DPtr _histDiffRate2Jade; Histo1DPtr _histDiffRate3Durham; Histo1DPtr _histDiffRate3Jade; Histo1DPtr _histDiffRate4Durham; Histo1DPtr _histDiffRate4Jade; Histo1DPtr _histEEC, _histAEEC; Histo1DPtr _histMultiCharged; Histo1DPtr _histMultiPiPlus; Histo1DPtr _histMultiPi0; Histo1DPtr _histMultiKPlus; Histo1DPtr _histMultiK0; Histo1DPtr _histMultiEta; Histo1DPtr _histMultiEtaPrime; Histo1DPtr _histMultiDPlus; Histo1DPtr _histMultiD0; Histo1DPtr _histMultiBPlus0; Histo1DPtr _histMultiF0; Histo1DPtr _histMultiRho; Histo1DPtr _histMultiKStar892Plus; Histo1DPtr _histMultiKStar892_0; Histo1DPtr _histMultiPhi; Histo1DPtr _histMultiDStar2010Plus; Histo1DPtr _histMultiF2; Histo1DPtr _histMultiK2Star1430_0; Histo1DPtr _histMultiP; Histo1DPtr _histMultiLambda0; Histo1DPtr _histMultiXiMinus; Histo1DPtr _histMultiOmegaMinus; Histo1DPtr _histMultiDeltaPlusPlus; Histo1DPtr _histMultiSigma1385Plus; Histo1DPtr _histMultiXi1530_0; Histo1DPtr _histMultiLambdaB0; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(DELPHI_1996_S3430090); } diff --git 
a/analyses/pluginLEP/L3_2004_I652683.cc b/analyses/pluginLEP/L3_2004_I652683.cc --- a/analyses/pluginLEP/L3_2004_I652683.cc +++ b/analyses/pluginLEP/L3_2004_I652683.cc @@ -1,404 +1,404 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/InitialQuarks.hh" #include "Rivet/Projections/Thrust.hh" #include "Rivet/Projections/ParisiTensor.hh" #include "Rivet/Projections/Hemispheres.hh" namespace Rivet { /// Jet rates and event shapes at LEP I+II class L3_2004_I652683 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(L3_2004_I652683); /// Book histograms and initialise projections before the run void init() { // Projections to use const FinalState FS; declare(FS, "FS"); declare(Beam(), "beams"); const ChargedFinalState CFS; declare(CFS, "CFS"); const Thrust thrust(FS); declare(thrust, "thrust"); declare(ParisiTensor(FS), "Parisi"); declare(Hemispheres(thrust), "Hemispheres"); declare(InitialQuarks(), "initialquarks"); // Book the histograms if(fuzzyEquals(sqrtS()/GeV, 91.2, 1e-3)) { // z pole book(_h_Thrust_udsc , 47, 1, 1); book(_h_Thrust_bottom , 47, 1, 2); book(_h_heavyJetmass_udsc , 48, 1, 1); book(_h_heavyJetmass_bottom , 48, 1, 2); book(_h_totalJetbroad_udsc , 49, 1, 1); book(_h_totalJetbroad_bottom , 49, 1, 2); book(_h_wideJetbroad_udsc , 50, 1, 1); book(_h_wideJetbroad_bottom , 50, 1, 2); book(_h_Cparameter_udsc , 51, 1, 1); book(_h_Cparameter_bottom , 51, 1, 2); book(_h_Dparameter_udsc , 52, 1, 1); book(_h_Dparameter_bottom , 52, 1, 2); book(_h_Ncharged , "/TMP/NCHARGED" , 28, 1, 57); book(_h_Ncharged_udsc , "/TMP/NCHARGED_UDSC", 28, 1, 57); book(_h_Ncharged_bottom , "/TMP/NCHARGED_B" , 27, 3, 57); book(_h_scaledMomentum , 65, 1, 1); book(_h_scaledMomentum_udsc , 65, 1, 2); book(_h_scaledMomentum_bottom , 65, 1, 3); } else if(sqrtS()/GeV<90) { int i1(-1),i2(-1); if(fuzzyEquals(sqrtS()/GeV, 
41.4, 1e-2)) { i1=0; i2=1; } else if(fuzzyEquals(sqrtS()/GeV, 55.3, 1e-2)) { i1=0; i2=2; } else if(fuzzyEquals(sqrtS()/GeV, 65.4, 1e-2)) { i1=0; i2=3; } else if(fuzzyEquals(sqrtS()/GeV, 75.7, 1e-2)) { i1=1; i2=1; } else if(fuzzyEquals(sqrtS()/GeV, 82.3, 1e-2)) { i1=1; i2=2; } else if(fuzzyEquals(sqrtS()/GeV, 85.1, 1e-2)) { i1=1; i2=3; } else MSG_ERROR("Beam energy not supported!"); book(_h_thrust , 21+i1,1,i2); book(_h_rho , 26+i1,1,i2); book(_h_B_T , 31+i1,1,i2); book(_h_B_W , 36+i1,1,i2); } else if(sqrtS()/GeV>120) { int i1(-1),i2(-1); if(fuzzyEquals(sqrtS()/GeV, 130.1, 1e-2)) { i1=0; i2=1; } else if(fuzzyEquals(sqrtS()/GeV, 136.1, 1e-2)) { i1=0; i2=2; } else if(fuzzyEquals(sqrtS()/GeV, 161.3, 1e-2)) { i1=0; i2=3; } else if(fuzzyEquals(sqrtS()/GeV, 172.3, 1e-2)) { i1=1; i2=1; } else if(fuzzyEquals(sqrtS()/GeV, 182.8, 1e-2)) { i1=1; i2=2; } else if(fuzzyEquals(sqrtS()/GeV, 188.6, 1e-2)) { i1=1; i2=3; } else if(fuzzyEquals(sqrtS()/GeV, 194.4, 1e-2)) { i1=2; i2=1; } else if(fuzzyEquals(sqrtS()/GeV, 200.2, 1e-2)) { i1=2; i2=2; } else if(fuzzyEquals(sqrtS()/GeV, 206.2, 1e-2)) { i1=2; i2=3; } else MSG_ERROR("Beam energy not supported!"); book(_h_thrust , 23+i1,1,i2); book(_h_rho , 28+i1,1,i2); book(_h_B_T , 33+i1,1,i2); book(_h_B_W , 38+i1,1,i2); book(_h_C , 41+i1,1,i2); book(_h_D , 44+i1,1,i2); book(_h_N , "/TMP/NCHARGED", 22, 9, 53); book(_h_xi , 66+i1,1,i2); // todo add the jets // int i3 = 3*i1+i2; // _h_y_2_JADE = bookHisto1D( i3,1,1); // _h_y_3_JADE = bookHisto1D( i3,1,2); // _h_y_4_JADE = bookHisto1D( i3,1,3); // _h_y_5_JADE = bookHisto1D( i3,1,4); // _h_y_2_Durham = bookHisto1D( 9+i3,1,1); // _h_y_3_Durham = bookHisto1D( 9+i3,1,2); // _h_y_4_Durham = bookHisto1D( 9+i3,1,3); // _h_y_5_Durham = bookHisto1D( 9+i3,1,4); // if(i3==8||i3==9) { // _h_y_2_Cambridge = bookHisto1D(10+i3,1,1); // _h_y_3_Cambridge = bookHisto1D(10+i3,1,2); // _h_y_4_Cambridge = bookHisto1D(10+i3,1,3); // _h_y_5_Cambridge = bookHisto1D(10+i3,1,4); // } } - book(_sumW_udsc, "sumW_udsc"); - 
book(_sumW_b, "sumW_b"); - book(_sumW_ch, "sumW_ch"); - book(_sumW_ch_udsc, "sumW_ch_udsc"); - book(_sumW_ch_b, "sumW_ch_b"); + book(_sumW_udsc, "_sumW_udsc"); + book(_sumW_b, "_sumW_b"); + book(_sumW_ch, "_sumW_ch"); + book(_sumW_ch_udsc, "_sumW_ch_udsc"); + book(_sumW_ch_b, "_sumW_ch_b"); } /// Perform the per-event analysis void analyze(const Event& event) { // Get beam average momentum const ParticlePair& beams = apply(event, "beams").beams(); const double beamMomentum = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0; // InitialQuarks projection to have udsc events separated from b events /// @todo Yuck!!! Eliminate when possible... int iflav = 0; // only need the flavour at Z pole if(_h_Thrust_udsc) { int flavour = 0; const InitialQuarks& iqf = apply(event, "initialquarks"); Particles quarks; if ( iqf.particles().size() == 2 ) { flavour = iqf.particles().front().abspid(); quarks = iqf.particles(); } else { map quarkmap; for (const Particle& p : iqf.particles()) { if (quarkmap.find(p.pid()) == quarkmap.end()) quarkmap[p.pid()] = p; else if (quarkmap[p.pid()].E() < p.E()) quarkmap[p.pid()] = p; } double max_energy = 0.; for (int i = 1; i <= 5; ++i) { double energy = 0.; if (quarkmap.find(i) != quarkmap.end()) energy += quarkmap[ i].E(); if (quarkmap.find(-i) != quarkmap.end()) energy += quarkmap[-i].E(); if (energy > max_energy) flavour = i; } if (quarkmap.find(flavour) != quarkmap.end()) quarks.push_back(quarkmap[flavour]); if (quarkmap.find(-flavour) != quarkmap.end()) quarks.push_back(quarkmap[-flavour]); } // Flavour label /// @todo Change to a bool? iflav = (flavour == PID::DQUARK || flavour == PID::UQUARK || flavour == PID::SQUARK || flavour == PID::CQUARK) ? 1 : (flavour == PID::BQUARK) ? 
5 : 0; } // Update weight sums if (iflav == 1) { _sumW_udsc->fill(); } else if (iflav == 5) { _sumW_b->fill(); } _sumW_ch->fill(); // Charged multiplicity const FinalState& cfs = applyProjection(event, "CFS"); if(_h_Ncharged) _h_Ncharged->fill(cfs.size()); if (iflav == 1) { _sumW_ch_udsc->fill(); _h_Ncharged_udsc->fill(cfs.size()); } else if (iflav == 5) { _sumW_ch_b->fill(); _h_Ncharged_bottom->fill(cfs.size()); } else if(_h_N) { _h_N->fill(cfs.size()); } // Scaled momentum const Particles& chparticles = cfs.particlesByPt(); for (const Particle& p : chparticles) { const Vector3 momentum3 = p.p3(); const double mom = momentum3.mod(); const double scaledMom = mom/beamMomentum; const double logScaledMom = std::log(scaledMom); if(_h_scaledMomentum) _h_scaledMomentum->fill(-logScaledMom); if (iflav == 1) { _h_scaledMomentum_udsc->fill(-logScaledMom); } else if (iflav == 5) { _h_scaledMomentum_bottom->fill(-logScaledMom); } else if(_h_xi) { _h_xi->fill(-logScaledMom); } } // Thrust const Thrust& thrust = applyProjection(event, "thrust"); if (iflav == 1) { _h_Thrust_udsc->fill(thrust.thrust()); } else if (iflav == 5) { _h_Thrust_bottom->fill(thrust.thrust()); } else if(_h_thrust) { _h_thrust->fill(1.-thrust.thrust()); } // C and D Parisi parameters const ParisiTensor& parisi = applyProjection(event, "Parisi"); if (iflav == 1) { _h_Cparameter_udsc->fill(parisi.C()); _h_Dparameter_udsc->fill(parisi.D()); } else if (iflav == 5) { _h_Cparameter_bottom->fill(parisi.C()); _h_Dparameter_bottom->fill(parisi.D()); } else if(_h_C) { _h_C->fill(parisi.C()); _h_D->fill(parisi.D()); } // The hemisphere variables const Hemispheres& hemisphere = applyProjection(event, "Hemispheres"); if (iflav == 1) { _h_heavyJetmass_udsc->fill(hemisphere.scaledM2high()); _h_totalJetbroad_udsc->fill(hemisphere.Bsum()); _h_wideJetbroad_udsc->fill(hemisphere.Bmax()); } else if (iflav == 5) { _h_heavyJetmass_bottom->fill(hemisphere.scaledM2high()); _h_totalJetbroad_bottom->fill(hemisphere.Bsum()); 
_h_wideJetbroad_bottom->fill(hemisphere.Bmax());
} else if (_h_rho) {
  _h_rho->fill(hemisphere.scaledM2high());
  _h_B_T->fill(hemisphere.Bsum());
  _h_B_W->fill(hemisphere.Bmax());
}
}

Scatter2DPtr convertHisto(unsigned int ix, unsigned int iy, unsigned int iz, Histo1DPtr histo) {
  Scatter2D temphisto(refData(ix, iy, iz));
  Scatter2DPtr mult;
  book(mult, ix, iy, iz);
  for (size_t b = 0; b < temphisto.numPoints(); b++) {
    const double x = temphisto.point(b).x();
    pair<double, double> ex = temphisto.point(b).xErrs();
    double y = histo->bins()[b].area();
    double yerr = histo->bins()[b].areaErr();
    mult->addPoint(x, y, ex, make_pair(yerr, yerr));
  }
  return mult;
}

/// Normalise histograms etc., after the run
void finalize() {
  // Z pole plots
  if (_h_Thrust_udsc) {
    scale({_h_Thrust_udsc, _h_heavyJetmass_udsc, _h_totalJetbroad_udsc, _h_wideJetbroad_udsc, _h_Cparameter_udsc, _h_Dparameter_udsc}, 1./_sumW_udsc->sumW());
    scale({_h_Thrust_bottom, _h_heavyJetmass_bottom, _h_totalJetbroad_bottom, _h_wideJetbroad_bottom, _h_Cparameter_bottom, _h_Dparameter_bottom}, 1./_sumW_b->sumW());
    scale(_h_Ncharged, 1./_sumW_ch->sumW());
    scale(_h_Ncharged_udsc, 1./_sumW_ch_udsc->sumW());
    scale(_h_Ncharged_bottom, 1./_sumW_ch_b->sumW());
    convertHisto(59, 1, 1, _h_Ncharged);
    convertHisto(59, 1, 2, _h_Ncharged_udsc);
    convertHisto(59, 1, 3, _h_Ncharged_bottom);
    scale(_h_scaledMomentum, 1./_sumW_ch->sumW());
    scale(_h_scaledMomentum_udsc, 1./_sumW_ch_udsc->sumW());
    scale(_h_scaledMomentum_bottom, 1./_sumW_ch_b->sumW());
  } else {
    if (_h_thrust) normalize(_h_thrust);
    if (_h_rho) normalize(_h_rho);
    if (_h_B_T) normalize(_h_B_T);
    if (_h_B_W) normalize(_h_B_W);
    if (_h_C) normalize(_h_C);
    if (_h_D) normalize(_h_D);
    if (_h_N) normalize(_h_N);
    if (_h_xi) scale(_h_xi, 1./sumOfWeights());
    Scatter2DPtr mult;
    if (_h_N) {
      if (fuzzyEquals(sqrtS()/GeV, 130.1, 1e-2)) { convertHisto(60, 1, 1, _h_N); }
      else if (fuzzyEquals(sqrtS()/GeV, 136.1, 1e-2)) { convertHisto(60, 1, 2, _h_N); }
      else if (fuzzyEquals(sqrtS()/GeV, 161.3, 1e-2)) { convertHisto(60, 1, 3, _h_N); }
      else
if (fuzzyEquals(sqrtS()/GeV, 172.3, 1e-2)) { convertHisto(61, 1, 1, _h_N); }
      else if (fuzzyEquals(sqrtS()/GeV, 182.8, 1e-2)) { convertHisto(61, 1, 2, _h_N); }
      else if (fuzzyEquals(sqrtS()/GeV, 188.6, 1e-2)) { convertHisto(61, 1, 3, _h_N); }
      else if (fuzzyEquals(sqrtS()/GeV, 194.4, 1e-2)) { convertHisto(62, 1, 1, _h_N); }
      else if (fuzzyEquals(sqrtS()/GeV, 200.2, 1e-2)) { convertHisto(62, 1, 2, _h_N); }
      else if (fuzzyEquals(sqrtS()/GeV, 206.2, 1e-2)) { convertHisto(62, 1, 3, _h_N); }
    }
    // todo add the jets
    // Histo1DPtr _h_y_2_JADE,_h_y_3_JADE,_h_y_4_JADE,_h_y_5_JADE;
    // Histo1DPtr _h_y_2_Durham,_h_y_3_Durham,_h_y_4_Durham,_h_y_5_Durham;
    // Histo1DPtr _h_y_2_Cambridge,_h_y_3_Cambridge,_h_y_4_Cambridge,_h_y_5_Cambridge;
  }
}

/// Weight counters
CounterPtr _sumW_udsc, _sumW_b, _sumW_ch, _sumW_ch_udsc, _sumW_ch_b;

/// @name Histograms
//@{
// at the Z pole
Histo1DPtr _h_Thrust_udsc, _h_Thrust_bottom;
Histo1DPtr _h_heavyJetmass_udsc, _h_heavyJetmass_bottom;
Histo1DPtr _h_totalJetbroad_udsc, _h_totalJetbroad_bottom;
Histo1DPtr _h_wideJetbroad_udsc, _h_wideJetbroad_bottom;
Histo1DPtr _h_Cparameter_udsc, _h_Cparameter_bottom;
Histo1DPtr _h_Dparameter_udsc, _h_Dparameter_bottom;
Histo1DPtr _h_Ncharged, _h_Ncharged_udsc, _h_Ncharged_bottom;
Histo1DPtr _h_scaledMomentum, _h_scaledMomentum_udsc, _h_scaledMomentum_bottom;
// at other energies
Histo1DPtr _h_thrust, _h_rho, _h_B_T, _h_B_W, _h_C, _h_D, _h_N, _h_xi;
// todo add the jets
// Histo1DPtr _h_y_2_JADE,_h_y_3_JADE,_h_y_4_JADE,_h_y_5_JADE;
// Histo1DPtr _h_y_2_Durham,_h_y_3_Durham,_h_y_4_Durham,_h_y_5_Durham;
// Histo1DPtr _h_y_2_Cambridge,_h_y_3_Cambridge,_h_y_4_Cambridge,_h_y_5_Cambridge;
//@}

};

// The hook for the plugin system
DECLARE_RIVET_PLUGIN(L3_2004_I652683);

}

diff --git a/analyses/pluginLEP/OPAL_1996_S3257789.cc b/analyses/pluginLEP/OPAL_1996_S3257789.cc
--- a/analyses/pluginLEP/OPAL_1996_S3257789.cc
+++ b/analyses/pluginLEP/OPAL_1996_S3257789.cc
@@ -1,96 +1,96 @@
// -*- C++ -*-
#include "Rivet/Analysis.hh"
#include
"Rivet/Projections/Beam.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief OPAL J/Psi fragmentation function paper /// @author Peter Richardson class OPAL_1996_S3257789 : public Analysis { public: /// Constructor OPAL_1996_S3257789() : Analysis("OPAL_1996_S3257789") {} /// @name Analysis methods //@{ void init() { declare(Beam(), "Beams"); declare(ChargedFinalState(), "FS"); declare(UnstableParticles(), "UFS"); book(_histXpJPsi , 1, 1, 1); book(_multJPsi , 2, 1, 1); book(_multPsiPrime , 2, 1, 2); - book(_weightSum,"weightSum"); + book(_weightSum,"_weightSum"); } void analyze(const Event& e) { // First, veto on leptonic events by requiring at least 4 charged FS particles const FinalState& fs = apply(e, "FS"); const size_t numParticles = fs.particles().size(); // Even if we only generate hadronic events, we still need a cut on numCharged >= 2. if (numParticles < 2) { MSG_DEBUG("Failed leptonic event cut"); vetoEvent; } MSG_DEBUG("Passed leptonic event cut"); // Get beams and average beam momentum const ParticlePair& beams = apply(e, "Beams").beams(); const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0; MSG_DEBUG("Avg beam momentum = " << meanBeamMom); // Final state of unstable particles to get particle spectra const UnstableParticles& ufs = apply(e, "UFS"); for (const Particle& p : ufs.particles()) { if(p.abspid()==443) { double xp = p.p3().mod()/meanBeamMom; _histXpJPsi->fill(xp); _multJPsi->fill(91.2); _weightSum->fill(); } else if(p.abspid()==100443) { _multPsiPrime->fill(91.2); } } } /// Finalize void finalize() { if(_weightSum->val()>0.) 
scale(_histXpJPsi , 0.1/ *_weightSum); scale(_multJPsi , 1./sumOfWeights()); scale(_multPsiPrime, 1./sumOfWeights()); } //@} private: CounterPtr _weightSum; Histo1DPtr _histXpJPsi; Histo1DPtr _multJPsi; Histo1DPtr _multPsiPrime; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(OPAL_1996_S3257789); } diff --git a/analyses/pluginLEP/OPAL_1998_S3780481.cc b/analyses/pluginLEP/OPAL_1998_S3780481.cc --- a/analyses/pluginLEP/OPAL_1998_S3780481.cc +++ b/analyses/pluginLEP/OPAL_1998_S3780481.cc @@ -1,195 +1,195 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" #define I_KNOW_THE_INITIAL_QUARKS_PROJECTION_IS_DODGY_BUT_NEED_TO_USE_IT #include "Rivet/Projections/InitialQuarks.hh" namespace Rivet { /// @brief OPAL flavour-dependent fragmentation paper /// @author Hendrik Hoeth class OPAL_1998_S3780481 : public Analysis { public: /// Constructor OPAL_1998_S3780481() : Analysis("OPAL_1998_S3780481") { } /// @name Analysis methods //@{ void analyze(const Event& e) { // First, veto on leptonic events by requiring at least 4 charged FS particles const FinalState& fs = apply(e, "FS"); const size_t numParticles = fs.particles().size(); // Even if we only generate hadronic events, we still need a cut on numCharged >= 2. if (numParticles < 2) { MSG_DEBUG("Failed ncharged cut"); vetoEvent; } MSG_DEBUG("Passed ncharged cut"); _weightedTotalPartNum->fill(numParticles); // Get beams and average beam momentum const ParticlePair& beams = apply(e, "Beams").beams(); const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0; MSG_DEBUG("Avg beam momentum = " << meanBeamMom); int flavour = 0; const InitialQuarks& iqf = apply(e, "IQF"); // If we only have two quarks (qqbar), just take the flavour. // If we have more than two quarks, look for the highest energetic q-qbar pair. /// @todo Yuck... 
does this *really* have to be quark-based?!?
if (iqf.particles().size() == 2) {
  flavour = iqf.particles().front().abspid();
} else {
  map<int, double> quarkmap;
  for (const Particle& p : iqf.particles()) {
    if (quarkmap[p.pid()] < p.E()) { quarkmap[p.pid()] = p.E(); }
  }
  double maxenergy = 0.;
  for (int i = 1; i <= 5; ++i) {
    if (quarkmap[i]+quarkmap[-i] > maxenergy) {
      maxenergy = quarkmap[i]+quarkmap[-i];
      flavour = i;
    }
  }
}
switch (flavour) {
case 1: case 2: case 3:
  _SumOfudsWeights->fill(); break;
case 4:
  _SumOfcWeights->fill(); break;
case 5:
  _SumOfbWeights->fill(); break;
}
for (const Particle& p : fs.particles()) {
  const double xp = p.p3().mod()/meanBeamMom;
  const double logxp = -std::log(xp);
  _histXpall->fill(xp);
  _histLogXpall->fill(logxp);
  _histMultiChargedall->fill(_histMultiChargedall->bin(0).xMid());
  switch (flavour) {
  /// @todo Use PDG code enums
  case PID::DQUARK: case PID::UQUARK: case PID::SQUARK:
    _histXpuds->fill(xp);
    _histLogXpuds->fill(logxp);
    _histMultiChargeduds->fill(_histMultiChargeduds->bin(0).xMid());
    break;
  case PID::CQUARK:
    _histXpc->fill(xp);
    _histLogXpc->fill(logxp);
    _histMultiChargedc->fill(_histMultiChargedc->bin(0).xMid());
    break;
  case PID::BQUARK:
    _histXpb->fill(xp);
    _histLogXpb->fill(logxp);
    _histMultiChargedb->fill(_histMultiChargedb->bin(0).xMid());
    break;
  }
}
}

void init() {
  // Projections
  declare(Beam(), "Beams");
  declare(ChargedFinalState(), "FS");
  declare(InitialQuarks(), "IQF");

  // Book histos
  book(_histXpuds, 1, 1, 1);
  book(_histXpc, 2, 1, 1);
  book(_histXpb, 3, 1, 1);
  book(_histXpall, 4, 1, 1);
  book(_histLogXpuds, 5, 1, 1);
  book(_histLogXpc, 6, 1, 1);
  book(_histLogXpb, 7, 1, 1);
  book(_histLogXpall, 8, 1, 1);
  book(_histMultiChargeduds, 9, 1, 1);
  book(_histMultiChargedc, 9, 1, 2);
  book(_histMultiChargedb, 9, 1, 3);
  book(_histMultiChargedall, 9, 1, 4);

  // Counters
- book(_weightedTotalPartNum, "TotalPartNum");
- book(_SumOfudsWeights, "udsWeights");
- book(_SumOfcWeights, "cWeights");
- book(_SumOfbWeights, "bWeights");
+ book(_weightedTotalPartNum, "_TotalPartNum");
+
book(_SumOfudsWeights, "_udsWeights"); + book(_SumOfcWeights, "_cWeights"); + book(_SumOfbWeights, "_bWeights"); } /// Finalize void finalize() { const double avgNumParts = dbl(*_weightedTotalPartNum) / sumOfWeights(); normalize(_histXpuds , avgNumParts); normalize(_histXpc , avgNumParts); normalize(_histXpb , avgNumParts); normalize(_histXpall , avgNumParts); normalize(_histLogXpuds , avgNumParts); normalize(_histLogXpc , avgNumParts); normalize(_histLogXpb , avgNumParts); normalize(_histLogXpall , avgNumParts); scale(_histMultiChargeduds, 1.0/ *_SumOfudsWeights); scale(_histMultiChargedc , 1.0/ *_SumOfcWeights); scale(_histMultiChargedb , 1.0/ *_SumOfbWeights); scale(_histMultiChargedall, 1.0/sumOfWeights()); } //@} private: /// Store the weighted sums of numbers of charged / charged+neutral /// particles - used to calculate average number of particles for the /// inclusive single particle distributions' normalisations. CounterPtr _weightedTotalPartNum; CounterPtr _SumOfudsWeights; CounterPtr _SumOfcWeights; CounterPtr _SumOfbWeights; Histo1DPtr _histXpuds; Histo1DPtr _histXpc; Histo1DPtr _histXpb; Histo1DPtr _histXpall; Histo1DPtr _histLogXpuds; Histo1DPtr _histLogXpc; Histo1DPtr _histLogXpb; Histo1DPtr _histLogXpall; Histo1DPtr _histMultiChargeduds; Histo1DPtr _histMultiChargedc; Histo1DPtr _histMultiChargedb; Histo1DPtr _histMultiChargedall; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(OPAL_1998_S3780481); } diff --git a/analyses/pluginLEP/OPAL_2004_I631361.cc b/analyses/pluginLEP/OPAL_2004_I631361.cc --- a/analyses/pluginLEP/OPAL_2004_I631361.cc +++ b/analyses/pluginLEP/OPAL_2004_I631361.cc @@ -1,362 +1,352 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/HadronicFinalState.hh" #include "Rivet/Tools/BinnedHistogram.hh" #include "fastjet/JetDefinition.hh" namespace fastjet { class 
P_scheme : public JetDefinition::Recombiner { public: std::string description() const {return "";} void recombine(const PseudoJet & pa, const PseudoJet & pb, PseudoJet & pab) const { PseudoJet tmp = pa + pb; double E = sqrt(tmp.px()*tmp.px() + tmp.py()*tmp.py() + tmp.pz()*tmp.pz()); pab.reset_momentum(tmp.px(), tmp.py(), tmp.pz(), E); } void preprocess(PseudoJet & p) const { double E = sqrt(p.px()*p.px() + p.py()*p.py() + p.pz()*p.pz()); p.reset_momentum(p.px(), p.py(), p.pz(), E); } ~P_scheme() { } }; } namespace Rivet { class OPAL_2004_I631361 : public Analysis { public: /// Constructor - OPAL_2004_I631361() - : Analysis("OPAL_2004_I631361"), _sumW(0.0) - { } - + DEFAULT_RIVET_ANALYSIS_CTOR(OPAL_2004_I631361); /// @name Analysis methods //@{ void init() { // Get options from the new option system _mode = 0; if ( getOption("PROCESS") == "GG" ) _mode = 0; if ( getOption("PROCESS") == "QQ" ) _mode = 1; // projections we need for both cases const FinalState fs; declare(fs, "FS"); const ChargedFinalState cfs; declare(cfs, "CFS"); // additional projections for q qbar if(_mode==1) { - declare(HadronicFinalState(fs), "HFS"); - declare(HadronicFinalState(cfs), "HCFS"); + declare(HadronicFinalState(fs), "HFS"); + declare(HadronicFinalState(cfs), "HCFS"); } // book the histograms if(_mode==0) { int ih(0), iy(0); if (inRange(0.5*sqrtS()/GeV, 5.0, 5.5)) { ih = 1; - iy = 1; + iy = 1; } else if (inRange(0.5*sqrtS()/GeV, 5.5, 6.5)) { ih = 1; - iy = 2; + iy = 2; } else if (inRange(0.5*sqrtS()/GeV, 6.5, 7.5)) { ih = 1; - iy = 3; + iy = 3; } else if (inRange(0.5*sqrtS()/GeV, 7.5, 9.5)) { ih = 2; - iy = 1; + iy = 1; } else if (inRange(0.5*sqrtS()/GeV, 9.5, 13.0)) { ih = 2; - iy = 2; + iy = 2; } else if (inRange(0.5*sqrtS()/GeV, 13.0, 16.0)) { ih = 3; - iy = 1; + iy = 1; } else if (inRange(0.5*sqrtS()/GeV, 16.0, 20.0)) { ih = 3; - iy = 2; + iy = 2; } + if (!ih) MSG_WARNING("Option \"PROCESS=GG\" not compatible with this beam energy!"); assert(ih>0); book(_h_chMult_gg, ih,1,iy); - 
if(ih==3) - book(_h_chFragFunc_gg, 5,1,iy); - else - _h_chFragFunc_gg = nullptr; + if(ih==3) book(_h_chFragFunc_gg, 5,1,iy); + else _h_chFragFunc_gg = nullptr; } else { Histo1DPtr dummy; - book(dummy, 1,1,1); - _h_chMult_qq.add(5.0, 5.5, dummy); - book(dummy, 1,1,2); - _h_chMult_qq.add(5.5, 6.5, dummy); - book(dummy, 1,1,3); - _h_chMult_qq.add(6.5, 7.5, dummy); - book(dummy, 2,1,1); - _h_chMult_qq.add(7.5, 9.5, dummy); - book(dummy, 2,1,2); - _h_chMult_qq.add(9.5, 13.0, dummy); - book(dummy, 3,1,1); - _h_chMult_qq.add(13.0, 16.0, dummy); - book(dummy, 3,1,2); - _h_chMult_qq.add(16.0, 20.0, dummy); + _h_chMult_qq.add( 5.0, 5.5, book(dummy, 1,1,1)); + _h_chMult_qq.add( 5.5, 6.5, book(dummy, 1,1,2)); + _h_chMult_qq.add( 6.5, 7.5, book(dummy, 1,1,3)); + _h_chMult_qq.add( 7.5, 9.5, book(dummy, 2,1,1)); + _h_chMult_qq.add( 9.5, 13.0, book(dummy, 2,1,2)); + _h_chMult_qq.add(13.0, 16.0, book(dummy, 3,1,1)); + _h_chMult_qq.add(16.0, 20.0, book(dummy, 3,1,2)); - book(dummy, 5,1,1); - _h_chFragFunc_qq.add(13.0, 16.0, dummy); - book(dummy, 5,1,2); - _h_chFragFunc_qq.add(16.0, 20.0, dummy); + _h_chFragFunc_qq.add(13.0, 16.0, book(dummy, 5,1,1)); + _h_chFragFunc_qq.add(16.0, 20.0, book(dummy, 5,1,2)); + + _sumWEbin.resize(7); + for (size_t i = 0; i < 7; ++i) { + book(_sumWEbin[i], "/TMP/sumWEbin" + to_string(i)); + } } } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = 1.0; // gg mode if(_mode==0) { - // find the initial gluons - Particles initial; - for (ConstGenParticlePtr p : HepMCUtils::particles(event.genEvent())) { - ConstGenVertexPtr pv = p->production_vertex(); - const PdgId pid = p->pdg_id(); - if(pid!=21) continue; - bool passed = false; - for (ConstGenParticlePtr pp : HepMCUtils::particles(pv, Relatives::PARENTS)) { - const PdgId ppid = abs(pp->pdg_id()); - passed = (ppid == PID::ELECTRON || ppid == PID::HIGGS || - ppid == PID::ZBOSON || ppid == PID::GAMMA); - if(passed) break; - } - if(passed) 
initial.push_back(Particle(*p)); - } - if(initial.size()!=2) vetoEvent; - // use the direction for the event axis - Vector3 axis = initial[0].momentum().p3().unit(); - // fill histograms - const Particles& chps = applyProjection(event, "CFS").particles(); - unsigned int nMult[2] = {0,0}; - _sumW += 2.*weight; - // distribution - for (const Particle& p : chps) { - double xE = 2.*p.E()/sqrtS(); - if(_h_chFragFunc_gg) _h_chFragFunc_gg->fill(xE, weight); - if(p.momentum().p3().dot(axis)>0.) - ++nMult[0]; - else - ++nMult[1]; - } - // multiplicities in jet - _h_chMult_gg->fill(nMult[0],weight); - _h_chMult_gg->fill(nMult[1],weight); + // find the initial gluons + Particles initial; + for (ConstGenParticlePtr p : HepMCUtils::particles(event.genEvent())) { + ConstGenVertexPtr pv = p->production_vertex(); + const PdgId pid = p->pdg_id(); + if(pid!=21) continue; + bool passed = false; + for (ConstGenParticlePtr pp : HepMCUtils::particles(pv, Relatives::PARENTS)) { + const PdgId ppid = abs(pp->pdg_id()); + passed = (ppid == PID::ELECTRON || ppid == PID::HIGGS || + ppid == PID::ZBOSON || ppid == PID::GAMMA); + if(passed) break; + } + if(passed) initial.push_back(Particle(*p)); + } + if(initial.size()!=2) vetoEvent; + // use the direction for the event axis + Vector3 axis = initial[0].momentum().p3().unit(); + // fill histograms + const Particles& chps = applyProjection(event, "CFS").particles(); + unsigned int nMult[2] = {0,0}; + // distribution + for (const Particle& p : chps) { + double xE = 2.*p.E()/sqrtS(); + if(_h_chFragFunc_gg) _h_chFragFunc_gg->fill(xE, weight); + if(p.momentum().p3().dot(axis)>0.) 
+ ++nMult[0]; + else + ++nMult[1]; + } + // multiplicities in jet + _h_chMult_gg->fill(nMult[0],weight); + _h_chMult_gg->fill(nMult[1],weight); } // qqbar mode else { - // cut on the number of charged particles - const Particles& chParticles = applyProjection(event, "CFS").particles(); - if(chParticles.size() < 5) vetoEvent; - // cluster the jets - const Particles& particles = applyProjection(event, "FS").particles(); - fastjet::JetDefinition ee_kt_def(fastjet::ee_kt_algorithm, &p_scheme); - PseudoJets pParticles; - for (Particle p : particles) { - PseudoJet temp = p.pseudojet(); - if(p.fromBottom()) { - temp.set_user_index(5); - } - pParticles.push_back(temp); - } - fastjet::ClusterSequence cluster(pParticles, ee_kt_def); - // rescale energys to just keep the directions of the jets - // and keep track of b tags - PseudoJets pJets = sorted_by_E(cluster.exclusive_jets_up_to(3)); - if(pJets.size() < 3) vetoEvent; - array dirs; - for(int i=0; i<3; i++) { - dirs[i] = Vector3(pJets[i].px(),pJets[i].py(),pJets[i].pz()).unit(); - } - array bTagged; - Jets jets; - for(int i=0; i<3; i++) { - double Ejet = sqrtS()*sin(angle(dirs[(i+1)%3],dirs[(i+2)%3])) / - (sin(angle(dirs[i],dirs[(i+1)%3])) + sin(angle(dirs[i],dirs[(i+2)%3])) + sin(angle(dirs[(i+1)%3],dirs[(i+2)%3]))); - jets.push_back(FourMomentum(Ejet,Ejet*dirs[i].x(),Ejet*dirs[i].y(),Ejet*dirs[i].z())); - bTagged[i] = false; - for (PseudoJet particle : pJets[i].constituents()) { - if(particle.user_index() > 1 and !bTagged[i]) { - bTagged[i] = true; - } - } - } - - int QUARK1 = 0, QUARK2 = 1, GLUON = 2; - - if(jets[QUARK2].E() > jets[QUARK1].E()) swap(QUARK1, QUARK2); - if(jets[GLUON].E() > jets[QUARK1].E()) swap(QUARK1, GLUON); - if(!bTagged[QUARK2]) { - if(!bTagged[GLUON]) vetoEvent; - else swap(QUARK2, GLUON); - } - if(bTagged[GLUON]) vetoEvent; - - // exclude collinear or soft jets - double k1 = jets[QUARK1].E()*min(angle(jets[QUARK1].momentum(),jets[QUARK2].momentum()), - 
angle(jets[QUARK1].momentum(),jets[GLUON].momentum())); - double k2 = jets[QUARK2].E()*min(angle(jets[QUARK2].momentum(),jets[QUARK1].momentum()), - angle(jets[QUARK2].momentum(),jets[GLUON].momentum())); - if(k1<8.0*GeV || k2<8.0*GeV) vetoEvent; - - double sqg = (jets[QUARK1].momentum()+jets[GLUON].momentum()).mass2(); - double sgq = (jets[QUARK2].momentum()+jets[GLUON].momentum()).mass2(); - double s = (jets[QUARK1].momentum()+jets[QUARK2].momentum()+jets[GLUON].momentum()).mass2(); - - double Eg = 0.5*sqrt(sqg*sgq/s); - - if(Eg < 5.0 || Eg > 20.0) { vetoEvent; } - else if(Eg > 9.5) { - //requirements for experimental reconstructability raise as energy raises - if(!bTagged[QUARK1]) { - vetoEvent; - } - } - - // all cuts applied, increment sum of weights - _sumWEbin[getEbin(Eg)] += weight; - - - // transform to frame with event in y-z and glue jet in z direction - Matrix3 glueTOz(jets[GLUON].momentum().vector3(), Vector3(0,0,1)); - Vector3 transQuark = glueTOz*jets[QUARK2].momentum().vector3(); - Matrix3 quarksTOyz(Vector3(transQuark.x(), transQuark.y(), 0), Vector3(0,1,0)); - - // work out transformation to symmetric frame - array x_cm; - array x_cm_y; - array x_cm_z; - array x_pr; - for(int i=0; i<3; i++) { - x_cm[i] = 2*jets[i].E()/sqrt(s); - Vector3 p_transf = quarksTOyz*glueTOz*jets[i].p3(); - x_cm_y[i] = 2*p_transf.y()/sqrt(s); - x_cm_z[i] = 2*p_transf.z()/sqrt(s); - } - x_pr[GLUON] = sqrt(4*(1-x_cm[QUARK1])*(1-x_cm[QUARK2])/(3+x_cm[GLUON])); - x_pr[QUARK1] = x_pr[GLUON]/(1-x_cm[QUARK1]); - x_pr[QUARK2] = x_pr[GLUON]/(1-x_cm[QUARK2]); - double gamma = (x_pr[QUARK1] + x_pr[GLUON] + x_pr[QUARK2])/2; - double beta_z = x_pr[GLUON]/(gamma*x_cm[GLUON]) - 1; - double beta_y = (x_pr[QUARK2]/gamma - x_cm[QUARK2] - beta_z*x_cm_z[QUARK2])/x_cm_y[QUARK2]; - - LorentzTransform toSymmetric = LorentzTransform::mkObjTransformFromBeta(Vector3(0.,beta_y,beta_z)). 
- postMult(quarksTOyz*glueTOz); - - FourMomentum transGlue = toSymmetric.transform(jets[GLUON].momentum()); - double cutAngle = angle(toSymmetric.transform(jets[QUARK2].momentum()), transGlue)/2; - - int nCh = 0; - for (const Particle& chP : chParticles ) { - FourMomentum pSymmFrame = toSymmetric.transform(FourMomentum(chP.p3().mod(), chP.px(), chP.py(), chP.pz())); - if(angle(pSymmFrame, transGlue) < cutAngle) { - _h_chFragFunc_qq.fill(Eg, pSymmFrame.E()*sin(cutAngle)/Eg, weight); - nCh++; - } - } - _h_chMult_qq.fill(Eg, nCh, weight); + // cut on the number of charged particles + const Particles& chParticles = applyProjection(event, "CFS").particles(); + if(chParticles.size() < 5) vetoEvent; + // cluster the jets + const Particles& particles = applyProjection(event, "FS").particles(); + fastjet::JetDefinition ee_kt_def(fastjet::ee_kt_algorithm, &p_scheme); + PseudoJets pParticles; + for (Particle p : particles) { + PseudoJet temp = p.pseudojet(); + if(p.fromBottom()) { + temp.set_user_index(5); + } + pParticles.push_back(temp); + } + fastjet::ClusterSequence cluster(pParticles, ee_kt_def); + // rescale energys to just keep the directions of the jets + // and keep track of b tags + PseudoJets pJets = sorted_by_E(cluster.exclusive_jets_up_to(3)); + if(pJets.size() < 3) vetoEvent; + array dirs; + for(int i=0; i<3; i++) { + dirs[i] = Vector3(pJets[i].px(),pJets[i].py(),pJets[i].pz()).unit(); + } + array bTagged; + Jets jets; + for(int i=0; i<3; i++) { + double Ejet = sqrtS()*sin(angle(dirs[(i+1)%3],dirs[(i+2)%3])) / + (sin(angle(dirs[i],dirs[(i+1)%3])) + sin(angle(dirs[i],dirs[(i+2)%3])) + sin(angle(dirs[(i+1)%3],dirs[(i+2)%3]))); + jets.push_back(FourMomentum(Ejet,Ejet*dirs[i].x(),Ejet*dirs[i].y(),Ejet*dirs[i].z())); + bTagged[i] = false; + for (PseudoJet particle : pJets[i].constituents()) { + if(particle.user_index() > 1 and !bTagged[i]) { + bTagged[i] = true; + } + } + } + + int QUARK1 = 0, QUARK2 = 1, GLUON = 2; + + if(jets[QUARK2].E() > jets[QUARK1].E()) 
swap(QUARK1, QUARK2); + if(jets[GLUON].E() > jets[QUARK1].E()) swap(QUARK1, GLUON); + if(!bTagged[QUARK2]) { + if(!bTagged[GLUON]) vetoEvent; + else swap(QUARK2, GLUON); + } + if(bTagged[GLUON]) vetoEvent; + + // exclude collinear or soft jets + double k1 = jets[QUARK1].E()*min(angle(jets[QUARK1].momentum(),jets[QUARK2].momentum()), + angle(jets[QUARK1].momentum(),jets[GLUON].momentum())); + double k2 = jets[QUARK2].E()*min(angle(jets[QUARK2].momentum(),jets[QUARK1].momentum()), + angle(jets[QUARK2].momentum(),jets[GLUON].momentum())); + if(k1<8.0*GeV || k2<8.0*GeV) vetoEvent; + + double sqg = (jets[QUARK1].momentum()+jets[GLUON].momentum()).mass2(); + double sgq = (jets[QUARK2].momentum()+jets[GLUON].momentum()).mass2(); + double s = (jets[QUARK1].momentum()+jets[QUARK2].momentum()+jets[GLUON].momentum()).mass2(); + + double Eg = 0.5*sqrt(sqg*sgq/s); + + if(Eg < 5.0 || Eg > 46.) { vetoEvent; } + else if(Eg > 9.5) { + //requirements for experimental reconstructability raise as energy raises + if(!bTagged[QUARK1]) { + vetoEvent; + } + } + + // all cuts applied, increment sum of weights + _sumWEbin[getEbin(Eg)]->fill(); + + + // transform to frame with event in y-z and glue jet in z direction + Matrix3 glueTOz(jets[GLUON].momentum().vector3(), Vector3(0,0,1)); + Vector3 transQuark = glueTOz*jets[QUARK2].momentum().vector3(); + Matrix3 quarksTOyz(Vector3(transQuark.x(), transQuark.y(), 0), Vector3(0,1,0)); + + // work out transformation to symmetric frame + array x_cm; + array x_cm_y; + array x_cm_z; + array x_pr; + for(int i=0; i<3; i++) { + x_cm[i] = 2*jets[i].E()/sqrt(s); + Vector3 p_transf = quarksTOyz*glueTOz*jets[i].p3(); + x_cm_y[i] = 2*p_transf.y()/sqrt(s); + x_cm_z[i] = 2*p_transf.z()/sqrt(s); + } + x_pr[GLUON] = sqrt(4*(1-x_cm[QUARK1])*(1-x_cm[QUARK2])/(3+x_cm[GLUON])); + x_pr[QUARK1] = x_pr[GLUON]/(1-x_cm[QUARK1]); + x_pr[QUARK2] = x_pr[GLUON]/(1-x_cm[QUARK2]); + double gamma = (x_pr[QUARK1] + x_pr[GLUON] + x_pr[QUARK2])/2; + double beta_z = 
x_pr[GLUON]/(gamma*x_cm[GLUON]) - 1; + double beta_y = (x_pr[QUARK2]/gamma - x_cm[QUARK2] - beta_z*x_cm_z[QUARK2])/x_cm_y[QUARK2]; + + LorentzTransform toSymmetric = LorentzTransform::mkObjTransformFromBeta(Vector3(0.,beta_y,beta_z)). + postMult(quarksTOyz*glueTOz); + + FourMomentum transGlue = toSymmetric.transform(jets[GLUON].momentum()); + double cutAngle = angle(toSymmetric.transform(jets[QUARK2].momentum()), transGlue)/2; + + int nCh = 0; + for (const Particle& chP : chParticles ) { + FourMomentum pSymmFrame = toSymmetric.transform(FourMomentum(chP.p3().mod(), chP.px(), chP.py(), chP.pz())); + if(angle(pSymmFrame, transGlue) < cutAngle) { + _h_chFragFunc_qq.fill(Eg, pSymmFrame.E()*sin(cutAngle)/Eg, weight); + nCh++; + } + } + _h_chMult_qq.fill(Eg, nCh, weight); } } /// Normalise histograms etc., after the run void finalize() { if(_mode==0) { - normalize(_h_chMult_gg); - if(_h_chFragFunc_gg) scale(_h_chFragFunc_gg, 1./_sumW); + normalize(_h_chMult_gg); + if(_h_chFragFunc_gg) normalize(_h_chFragFunc_gg, 1.0, true); } else { - for (Histo1DPtr hist : _h_chMult_qq.histos()) { - normalize(hist); - } - for(int i=0; i<2; i++) { - if(!isZero(_sumWEbin[i+5])) { - scale(_h_chFragFunc_qq.histos()[i], 1./_sumWEbin[i+5]); - } - } + for (Histo1DPtr hist : _h_chMult_qq.histos()) { + normalize(hist); + } + for(int i=0; i<2; i++) { + if(!isZero(_sumWEbin[i+5]->val())) { + scale(_h_chFragFunc_qq.histos()[i], 1./(*_sumWEbin[i+5])); + } + } } } //@} private: int getEbin(double E_glue) { int ih = -1; if (inRange(E_glue/GeV, 5.0, 5.5)) { ih = 0; } else if (inRange(E_glue/GeV, 5.5, 6.5)) { ih = 1; } else if (inRange(E_glue/GeV, 6.5, 7.5)) { ih = 2; } else if (inRange(E_glue/GeV, 7.5, 9.5)) { ih = 3; } else if (inRange(E_glue/GeV, 9.5, 13.0)) { ih = 4; } else if (inRange(E_glue/GeV, 13.0, 16.0)) { ih = 5; } else if (inRange(E_glue/GeV, 16.0, 20.0)) { ih = 6; } assert(ih >= 0); return ih; } class PScheme : public JetDefinition::Recombiner { public: std::string description() const 
{return "";} void recombine(const PseudoJet & pa, const PseudoJet & pb, PseudoJet & pab) const { PseudoJet tmp = pa + pb; double E = sqrt(tmp.px()*tmp.px() + tmp.py()*tmp.py() + tmp.pz()*tmp.pz()); pab.reset_momentum(tmp.px(), tmp.py(), tmp.pz(), E); } void preprocess(PseudoJet & p) const { double E = sqrt(p.px()*p.px() + p.py()*p.py() + p.pz()*p.pz()); p.reset_momentum(p.px(), p.py(), p.pz(), E); } ~PScheme() { } }; private: // The mode unsigned int _mode; /// @todo IMPROVEMENT NEEDED? - double _sumW; - vector<double> _sumWEbin; + vector<CounterPtr> _sumWEbin; // p scheme jet definition fastjet::P_scheme p_scheme; /// @name Histograms //@{ Histo1DPtr _h_chMult_gg; Histo1DPtr _h_chFragFunc_gg; BinnedHistogram _h_chMult_qq; BinnedHistogram _h_chFragFunc_qq; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(OPAL_2004_I631361); } diff --git a/analyses/pluginLEP/OPAL_2004_I631361.info b/analyses/pluginLEP/OPAL_2004_I631361.info --- a/analyses/pluginLEP/OPAL_2004_I631361.info +++ b/analyses/pluginLEP/OPAL_2004_I631361.info @@ -1,55 +1,55 @@ Name: OPAL_2004_I631361 Year: 2004 Summary: Gluon jet charged multiplicities and fragmentation functions Experiment: OPAL Collider: LEP InspireID: 631361 Status: VALIDATED Authors: - Daniel Reichelt References: - Phys. Rev. D69, 032002, 2004 - hep-ex/0310048 RunInfo: The fictional $e^+e^-\to gg$ process NumEvents: 100000 NeedCrossSection: no Beams: [e+, e- ] -Energies: [10.50,11.96,13.96,16.86,21.84,28.48,35.44,91,2] +Energies: [10.50,11.96,13.96,16.86,21.84,28.48,35.44,91.2] Options: - PROCESS=GG,QQ NeedCrossSection: False Description: ' Measurement of gluon jet properties using the jet boost algorithm, a technique to select unbiased samples of gluon jets in $e^+e^-$ annihilation, i.e. gluon jets free of biases introduced by event selection or jet finding criteria. Two modes are provided. The preferred option, PROCESS=GG, is to generate the fictional $e^+e^-\to gg$ process, which should be used because of the corrections applied to the data.
The original analysis technique to extract gluon jets from hadronic $e^+e^-$ events using $e^+e^-\to q\bar{q}$ events, PROCESS=QQ, is also provided, but it cannot be used for tuning as the data have been corrected for impurities. It is nevertheless still useful qualitatively, as a check of the properties of gluon jets measured in the original way in which they were measured, rather than via a fictitious process.' BibKey: Abbiendi:2004gh BibTeX: '@article{Abbiendi:2004gh, author = "Abbiendi, G. and others", title = "{Experimental studies of unbiased gluon jets from $e^{+} e^{-}$ annihilations using the jet boost algorithm}", collaboration = "OPAL", journal = "Phys. Rev.", volume = "D69", year = "2004", pages = "032002", doi = "10.1103/PhysRevD.69.032002", eprint = "hep-ex/0310048", archivePrefix = "arXiv", primaryClass = "hep-ex", reportNumber = "CERN-EP-2004-067", SLACcitation = "%%CITATION = HEP-EX/0310048;%%" }' Validation: - $A LEP-10.5-gg - $A-2 LEP-11.96-gg - $A-3 LEP-13.96-gg - $A-4 LEP-16.86-gg - $A-5 LEP-21.84-gg - $A-6 LEP-28.48-gg - $A-7 LEP-35.44-gg - $A-8 LEP-91; rivet $< -a $A:PROCESS=QQ -o $@ - $A-9 LEP-91 :PROCESS=QQ diff --git a/analyses/pluginLEP/OPAL_2004_I648738.cc b/analyses/pluginLEP/OPAL_2004_I648738.cc --- a/analyses/pluginLEP/OPAL_2004_I648738.cc +++ b/analyses/pluginLEP/OPAL_2004_I648738.cc @@ -1,118 +1,118 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" namespace Rivet { class OPAL_2004_I648738 : public Analysis { public: /// Constructor OPAL_2004_I648738() : Analysis("OPAL_2004_I648738"), _sumW(3), _histo_xE(3) { } /// @name Analysis methods //@{ void init() { declare(FinalState(), "FS"); declare(ChargedFinalState(), "CFS"); unsigned int ih=0; if (inRange(0.5*sqrtS()/GeV, 4.0, 9.0)) { ih = 1; } else if (inRange(0.5*sqrtS()/GeV, 9.0, 19.0)) { ih = 2; } else if (inRange(0.5*sqrtS()/GeV, 19.0, 30.0)) { ih = 3; } else if (inRange(0.5*sqrtS()/GeV, 45.5, 45.7)) { ih =
5; } else if (inRange(0.5*sqrtS()/GeV, 30.0, 70.0)) { ih = 4; } else if (inRange(0.5*sqrtS()/GeV, 91.5, 104.5)) { ih = 6; } assert(ih>0); // book the histograms book(_histo_xE[0], ih+5,1,1); book(_histo_xE[1], ih+5,1,2); if(ih<5) book(_histo_xE[2] ,ih+5,1,3); - book(_sumW[0], "sumW_0"); - book(_sumW[1], "sumW_1"); - book(_sumW[2], "sumW_2"); + book(_sumW[0], "_sumW_0"); + book(_sumW[1], "_sumW_1"); + book(_sumW[2], "_sumW_2"); } /// Perform the per-event analysis void analyze(const Event& event) { // find the initial quarks/gluons Particles initial; for (ConstGenParticlePtr p : HepMCUtils::particles(event.genEvent())) { ConstGenVertexPtr pv = p->production_vertex(); const PdgId pid = abs(p->pdg_id()); if(!( (pid>=1&&pid<=5) || pid ==21) ) continue; bool passed = false; for (ConstGenParticlePtr pp : HepMCUtils::particles(pv, Relatives::PARENTS)) { const PdgId ppid = abs(pp->pdg_id()); passed = (ppid == PID::ELECTRON || ppid == PID::HIGGS || ppid == PID::ZBOSON || ppid == PID::GAMMA); if(passed) break; } if(passed) initial.push_back(Particle(*p)); } if(initial.size()!=2) { vetoEvent; } // type of event unsigned int itype=2; if(initial[0].pid()==-initial[1].pid()) { PdgId pid = abs(initial[0].pid()); if(pid>=1&&pid<=4) itype=0; else itype=1; } assert(itype<_histo_xE.size()); // fill histograms _sumW[itype]->fill(2.); const Particles& chps = applyProjection<ChargedFinalState>(event, "CFS").particles(); for(const Particle& p : chps) { double xE = 2.*p.E()/sqrtS(); _histo_xE[itype]->fill(xE); } } /// Normalise histograms etc., after the run void finalize() { for(unsigned int ix=0;ix<_histo_xE.size();++ix) { if(_sumW[ix]->val()>0.)
scale(_histo_xE[ix],1./ *_sumW[ix]); } } //@} private: vector<CounterPtr> _sumW; /// @name Histograms //@{ vector<Histo1DPtr> _histo_xE; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(OPAL_2004_I648738); } diff --git a/analyses/pluginLEP/OPAL_2004_S6132243.cc b/analyses/pluginLEP/OPAL_2004_S6132243.cc --- a/analyses/pluginLEP/OPAL_2004_S6132243.cc +++ b/analyses/pluginLEP/OPAL_2004_S6132243.cc @@ -1,274 +1,274 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/Sphericity.hh" #include "Rivet/Projections/Thrust.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/ParisiTensor.hh" #include "Rivet/Projections/Hemispheres.hh" #include <cmath> namespace Rivet { /// @brief OPAL event shapes and moments at 91, 133, 177, and 197 GeV /// @author Andy Buckley class OPAL_2004_S6132243 : public Analysis { public: /// Constructor OPAL_2004_S6132243() : Analysis("OPAL_2004_S6132243"), _isqrts(-1) { // } /// @name Analysis methods //@{ /// Energies: 91, 133, 177 (161-183), 197 (189-209) => index 0..3 int getHistIndex(double sqrts) { int ih = -1; if (inRange(sqrts/GeV, 89.9, 91.5)) { ih = 0; } else if (fuzzyEquals(sqrts/GeV, 133)) { ih = 1; } else if (fuzzyEquals(sqrts/GeV, 177)) { // (161-183) ih = 2; } else if (fuzzyEquals(sqrts/GeV, 197)) { // (189-209) ih = 3; } else { stringstream ss; ss << "Invalid energy for OPAL_2004 analysis: " << sqrts/GeV << " GeV != 91, 133, 177, or 197 GeV"; throw Error(ss.str()); } assert(ih >= 0); return ih; } void init() { // Projections declare(Beam(), "Beams"); const FinalState fs; declare(fs, "FS"); const ChargedFinalState cfs; declare(cfs, "CFS"); declare(FastJets(fs, FastJets::DURHAM, 0.7), "DurhamJets"); declare(Sphericity(fs), "Sphericity"); declare(ParisiTensor(fs), "Parisi"); const Thrust thrust(fs); declare(thrust, "Thrust"); declare(Hemispheres(thrust), "Hemispheres"); // Get beam
energy index _isqrts = getHistIndex(sqrtS()); // Book histograms book(_hist1MinusT[_isqrts] ,1, 1, _isqrts+1); book(_histHemiMassH[_isqrts] ,2, 1, _isqrts+1); book(_histCParam[_isqrts] ,3, 1, _isqrts+1); book(_histHemiBroadT[_isqrts] ,4, 1, _isqrts+1); book(_histHemiBroadW[_isqrts] ,5, 1, _isqrts+1); book(_histY23Durham[_isqrts] ,6, 1, _isqrts+1); book(_histTMajor[_isqrts] ,7, 1, _isqrts+1); book(_histTMinor[_isqrts] ,8, 1, _isqrts+1); book(_histAplanarity[_isqrts] ,9, 1, _isqrts+1); book(_histSphericity[_isqrts] ,10, 1, _isqrts+1); book(_histOblateness[_isqrts] ,11, 1, _isqrts+1); book(_histHemiMassL[_isqrts] ,12, 1, _isqrts+1); book(_histHemiBroadN[_isqrts] ,13, 1, _isqrts+1); book(_histDParam[_isqrts] ,14, 1, _isqrts+1); // book(_hist1MinusTMom[_isqrts] ,15, 1, _isqrts+1); book(_histHemiMassHMom[_isqrts] ,16, 1, _isqrts+1); book(_histCParamMom[_isqrts] ,17, 1, _isqrts+1); book(_histHemiBroadTMom[_isqrts] ,18, 1, _isqrts+1); book(_histHemiBroadWMom[_isqrts] ,19, 1, _isqrts+1); book(_histY23DurhamMom[_isqrts] ,20, 1, _isqrts+1); book(_histTMajorMom[_isqrts] ,21, 1, _isqrts+1); book(_histTMinorMom[_isqrts] ,22, 1, _isqrts+1); book(_histSphericityMom[_isqrts] ,23, 1, _isqrts+1); book(_histOblatenessMom[_isqrts] ,24, 1, _isqrts+1); book(_histHemiMassLMom[_isqrts] ,25, 1, _isqrts+1); book(_histHemiBroadNMom[_isqrts] ,26, 1, _isqrts+1); - book(_sumWTrack2, "sumWTrack2"); - book(_sumWJet3, "sumWJet3"); + book(_sumWTrack2, "_sumWTrack2"); + book(_sumWJet3, "_sumWJet3"); } void analyze(const Event& event) { // Even if we only generate hadronic events, we still need a cut on numCharged >= 2. 
const FinalState& cfs = apply<FinalState>(event, "CFS"); if (cfs.size() < 2) vetoEvent; _sumWTrack2->fill(); // Thrusts const Thrust& thrust = apply<Thrust>(event, "Thrust"); _hist1MinusT[_isqrts]->fill(1-thrust.thrust()); _histTMajor[_isqrts]->fill(thrust.thrustMajor()); _histTMinor[_isqrts]->fill(thrust.thrustMinor()); _histOblateness[_isqrts]->fill(thrust.oblateness()); for (int n = 1; n <= 5; ++n) { _hist1MinusTMom[_isqrts]->fill(n, pow(1-thrust.thrust(), n)); _histTMajorMom[_isqrts]->fill(n, pow(thrust.thrustMajor(), n)); _histTMinorMom[_isqrts]->fill(n, pow(thrust.thrustMinor(), n)); _histOblatenessMom[_isqrts]->fill(n, pow(thrust.oblateness(), n)); } // Jets const FastJets& durjet = apply<FastJets>(event, "DurhamJets"); if (durjet.clusterSeq()) { _sumWJet3->fill(); const double y23 = durjet.clusterSeq()->exclusive_ymerge_max(2); if (y23>0.0) { _histY23Durham[_isqrts]->fill(y23); for (int n = 1; n <= 5; ++n) { _histY23DurhamMom[_isqrts]->fill(n, pow(y23, n)); } } } // Sphericities const Sphericity& sphericity = apply<Sphericity>(event, "Sphericity"); const double sph = sphericity.sphericity(); const double apl = sphericity.aplanarity(); _histSphericity[_isqrts]->fill(sph); _histAplanarity[_isqrts]->fill(apl); for (int n = 1; n <= 5; ++n) { _histSphericityMom[_isqrts]->fill(n, pow(sph, n)); } // C & D params const ParisiTensor& parisi = apply<ParisiTensor>(event, "Parisi"); const double cparam = parisi.C(); const double dparam = parisi.D(); _histCParam[_isqrts]->fill(cparam); _histDParam[_isqrts]->fill(dparam); for (int n = 1; n <= 5; ++n) { _histCParamMom[_isqrts]->fill(n, pow(cparam, n)); } // Hemispheres const Hemispheres& hemi = apply<Hemispheres>(event, "Hemispheres"); // The paper says that M_H/L are scaled by sqrt(s), but scaling by E_vis is the way that fits the data... const double hemi_mh = hemi.scaledMhigh(); const double hemi_ml = hemi.scaledMlow(); /// @todo This shouldn't be necessary... what's going on? Memory corruption suspected :( // if (std::isnan(hemi_ml)) { // MSG_ERROR("NaN in HemiL!
Event = " << numEvents()); // MSG_ERROR(hemi.M2low() << ", " << hemi.E2vis()); // } if (!std::isnan(hemi_mh) && !std::isnan(hemi_ml)) { const double hemi_bmax = hemi.Bmax(); const double hemi_bmin = hemi.Bmin(); const double hemi_bsum = hemi.Bsum(); _histHemiMassH[_isqrts]->fill(hemi_mh); _histHemiMassL[_isqrts]->fill(hemi_ml); _histHemiBroadW[_isqrts]->fill(hemi_bmax); _histHemiBroadN[_isqrts]->fill(hemi_bmin); _histHemiBroadT[_isqrts]->fill(hemi_bsum); for (int n = 1; n <= 5; ++n) { // if (std::isnan(pow(hemi_ml, n))) MSG_ERROR("NaN in HemiL moment! Event = " << numEvents()); _histHemiMassHMom[_isqrts]->fill(n, pow(hemi_mh, n)); _histHemiMassLMom[_isqrts]->fill(n, pow(hemi_ml, n)); _histHemiBroadWMom[_isqrts]->fill(n, pow(hemi_bmax, n)); _histHemiBroadNMom[_isqrts]->fill(n, pow(hemi_bmin, n)); _histHemiBroadTMom[_isqrts]->fill(n, pow(hemi_bsum, n)); } } } void finalize() { scale(_hist1MinusT[_isqrts], 1.0 / *_sumWTrack2); scale(_histTMajor[_isqrts], 1.0 / *_sumWTrack2); scale(_histTMinor[_isqrts], 1.0 / *_sumWTrack2); scale(_histOblateness[_isqrts], 1.0 / *_sumWTrack2); scale(_histSphericity[_isqrts], 1.0 / *_sumWTrack2); scale(_histAplanarity[_isqrts], 1.0 / *_sumWTrack2); scale(_histHemiMassH[_isqrts], 1.0 / *_sumWTrack2); scale(_histHemiMassL[_isqrts], 1.0 / *_sumWTrack2); scale(_histHemiBroadW[_isqrts], 1.0 / *_sumWTrack2); scale(_histHemiBroadN[_isqrts], 1.0 / *_sumWTrack2); scale(_histHemiBroadT[_isqrts], 1.0 / *_sumWTrack2); scale(_histCParam[_isqrts], 1.0 / *_sumWTrack2); scale(_histDParam[_isqrts], 1.0 / *_sumWTrack2); scale(_histY23Durham[_isqrts], 1.0 / *_sumWJet3); // scale(_hist1MinusTMom[_isqrts], 1.0 / *_sumWTrack2); scale(_histTMajorMom[_isqrts], 1.0 / *_sumWTrack2); scale(_histTMinorMom[_isqrts], 1.0 / *_sumWTrack2); scale(_histOblatenessMom[_isqrts], 1.0 / *_sumWTrack2); scale(_histSphericityMom[_isqrts], 1.0 / *_sumWTrack2); scale(_histHemiMassHMom[_isqrts], 1.0 / *_sumWTrack2); scale(_histHemiMassLMom[_isqrts], 1.0 / *_sumWTrack2); 
scale(_histHemiBroadWMom[_isqrts], 1.0 / *_sumWTrack2); scale(_histHemiBroadNMom[_isqrts], 1.0 / *_sumWTrack2); scale(_histHemiBroadTMom[_isqrts], 1.0 / *_sumWTrack2); scale(_histCParamMom[_isqrts], 1.0 / *_sumWTrack2); scale(_histY23DurhamMom[_isqrts], 1.0 / *_sumWJet3); } //@} private: /// Beam energy index for histograms int _isqrts; /// @name Counters of event weights passing the cuts //@{ CounterPtr _sumWTrack2, _sumWJet3; //@} /// @name Event shape histos at 4 energies //@{ Histo1DPtr _hist1MinusT[4]; Histo1DPtr _histHemiMassH[4]; Histo1DPtr _histCParam[4]; Histo1DPtr _histHemiBroadT[4]; Histo1DPtr _histHemiBroadW[4]; Histo1DPtr _histY23Durham[4]; Histo1DPtr _histTMajor[4]; Histo1DPtr _histTMinor[4]; Histo1DPtr _histAplanarity[4]; Histo1DPtr _histSphericity[4]; Histo1DPtr _histOblateness[4]; Histo1DPtr _histHemiMassL[4]; Histo1DPtr _histHemiBroadN[4]; Histo1DPtr _histDParam[4]; //@} /// @name Event shape moment histos at 4 energies //@{ Histo1DPtr _hist1MinusTMom[4]; Histo1DPtr _histHemiMassHMom[4]; Histo1DPtr _histCParamMom[4]; Histo1DPtr _histHemiBroadTMom[4]; Histo1DPtr _histHemiBroadWMom[4]; Histo1DPtr _histY23DurhamMom[4]; Histo1DPtr _histTMajorMom[4]; Histo1DPtr _histTMinorMom[4]; Histo1DPtr _histSphericityMom[4]; Histo1DPtr _histOblatenessMom[4]; Histo1DPtr _histHemiMassLMom[4]; Histo1DPtr _histHemiBroadNMom[4]; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(OPAL_2004_S6132243); } diff --git a/analyses/pluginLEP/SLD_1996_S3398250.cc b/analyses/pluginLEP/SLD_1996_S3398250.cc --- a/analyses/pluginLEP/SLD_1996_S3398250.cc +++ b/analyses/pluginLEP/SLD_1996_S3398250.cc @@ -1,141 +1,141 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/Sphericity.hh" #include "Rivet/Projections/Thrust.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/ParisiTensor.hh" #include 
"Rivet/Projections/Hemispheres.hh" #include #define I_KNOW_THE_INITIAL_QUARKS_PROJECTION_IS_DODGY_BUT_NEED_TO_USE_IT #include "Rivet/Projections/InitialQuarks.hh" namespace Rivet { /// @brief SLD multiplicities at mZ /// @author Peter Richardson class SLD_1996_S3398250 : public Analysis { public: /// Constructor SLD_1996_S3398250() : Analysis("SLD_1996_S3398250") {} /// @name Analysis methods //@{ void init() { // Projections declare(Beam(), "Beams"); declare(ChargedFinalState(), "CFS"); declare(InitialQuarks(), "IQF"); book(_h_bottom ,1, 1, 1); book(_h_charm ,2, 1, 1); book(_h_light ,3, 1, 1); - book(_weightLight, "weightLight"); - book(_weightCharm, "weightCharm"); - book(_weightBottom, "weightBottom"); + book(_weightLight, "_weightLight"); + book(_weightCharm, "_weightCharm"); + book(_weightBottom, "_weightBottom"); book(scatter_c, 4,1,1); book(scatter_b, 5,1,1); } void analyze(const Event& event) { // Even if we only generate hadronic events, we still need a cut on numCharged >= 2. const FinalState& cfs = apply<FinalState>(event, "CFS"); if (cfs.size() < 2) vetoEvent; int flavour = 0; const InitialQuarks& iqf = apply<InitialQuarks>(event, "IQF"); // If we only have two quarks (qqbar), just take the flavour. // If we have more than two quarks, look for the most energetic q-qbar pair.
if (iqf.particles().size() == 2) { flavour = iqf.particles().front().abspid(); } else { map<int, double> quarkmap; for (const Particle& p : iqf.particles()) { if (quarkmap[p.pid()] < p.E()) { quarkmap[p.pid()] = p.E(); } } double maxenergy = 0.; for (int i = 1; i <= 5; ++i) { if (quarkmap[i]+quarkmap[-i] > maxenergy) { maxenergy = quarkmap[i]+quarkmap[-i]; flavour = i; } } } const size_t numParticles = cfs.particles().size(); switch (flavour) { case 1: case 2: case 3: _weightLight ->fill(); _h_light->fillBin(0, numParticles); break; case 4: _weightCharm ->fill(); _h_charm->fillBin(0, numParticles); break; case 5: _weightBottom->fill(); _h_bottom->fillBin(0, numParticles); break; } } void multiplicity_subtract(const Histo1DPtr first, const Histo1DPtr second, Scatter2DPtr & scatter) { const double x = first->bin(0).xMid(); const double ex = first->bin(0).xWidth()/2.; const double y = first->bin(0).area() - second->bin(0).area(); const double ey = sqrt(sqr(first->bin(0).areaErr()) + sqr(second->bin(0).areaErr())); scatter->addPoint(x, y, ex, ey); } void finalize() { if (_weightBottom->val() != 0) scale(_h_bottom, 1./ *_weightBottom); if (_weightCharm->val() != 0) scale(_h_charm, 1./ *_weightCharm ); if (_weightLight->val() != 0) scale(_h_light, 1./ *_weightLight ); multiplicity_subtract(_h_charm, _h_light, scatter_c); multiplicity_subtract(_h_bottom, _h_light, scatter_b); } //@} private: Scatter2DPtr scatter_c, scatter_b; /// @name Weights //@{ CounterPtr _weightLight; CounterPtr _weightCharm; CounterPtr _weightBottom; //@} Histo1DPtr _h_bottom; Histo1DPtr _h_charm; Histo1DPtr _h_light; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(SLD_1996_S3398250); } diff --git a/analyses/pluginLEP/SLD_1999_S3743934.cc b/analyses/pluginLEP/SLD_1999_S3743934.cc --- a/analyses/pluginLEP/SLD_1999_S3743934.cc +++ b/analyses/pluginLEP/SLD_1999_S3743934.cc @@ -1,738 +1,738 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/FinalState.hh" #include
"Rivet/Projections/UnstableParticles.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/Thrust.hh" #define I_KNOW_THE_INITIAL_QUARKS_PROJECTION_IS_DODGY_BUT_NEED_TO_USE_IT #include "Rivet/Projections/InitialQuarks.hh" namespace Rivet { /// @brief SLD flavour-dependent fragmentation paper /// @author Peter Richardson class SLD_1999_S3743934 : public Analysis { public: /// Constructor SLD_1999_S3743934() : Analysis("SLD_1999_S3743934"), _multPiPlus(4),_multKPlus(4),_multK0(4), _multKStar0(4),_multPhi(4), _multProton(4),_multLambda(4) { } /// @name Analysis methods //@{ void analyze(const Event& e) { // First, veto on leptonic events by requiring at least 4 charged FS particles const FinalState& fs = apply<FinalState>(e, "FS"); const size_t numParticles = fs.particles().size(); // Even if we only generate hadronic events, we still need a cut on numCharged >= 2. if (numParticles < 2) { MSG_DEBUG("Failed ncharged cut"); vetoEvent; } MSG_DEBUG("Passed ncharged cut"); // Get beams and average beam momentum const ParticlePair& beams = apply<Beam>(e, "Beams").beams(); const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0; MSG_DEBUG("Avg beam momentum = " << meanBeamMom); int flavour = 0; const InitialQuarks& iqf = apply<InitialQuarks>(e, "IQF"); // If we only have two quarks (qqbar), just take the flavour. // If we have more than two quarks, look for the most energetic q-qbar pair. /// @todo Can we make this based on hadron flavour instead?
Particles quarks; if (iqf.particles().size() == 2) { flavour = iqf.particles().front().abspid(); quarks = iqf.particles(); } else { map<int, Particle> quarkmap; for (const Particle& p : iqf.particles()) { if (quarkmap.find(p.pid()) == quarkmap.end()) quarkmap[p.pid()] = p; else if (quarkmap[p.pid()].E() < p.E()) quarkmap[p.pid()] = p; } double maxenergy = 0.; for (int i = 1; i <= 5; ++i) { double energy(0.); if (quarkmap.find( i) != quarkmap.end()) energy += quarkmap[ i].E(); if (quarkmap.find(-i) != quarkmap.end()) energy += quarkmap[-i].E(); if (energy > maxenergy) { maxenergy = energy; flavour = i; } } if (quarkmap.find(flavour) != quarkmap.end()) quarks.push_back(quarkmap[flavour]); if (quarkmap.find(-flavour) != quarkmap.end()) quarks.push_back(quarkmap[-flavour]); } switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _SumOfudsWeights->fill(); break; case PID::CQUARK: _SumOfcWeights->fill(); break; case PID::BQUARK: _SumOfbWeights->fill(); break; } // thrust axis for projections Vector3 axis = apply<Thrust>(e, "Thrust").thrustAxis(); double dot(0.); if (!quarks.empty()) { dot = quarks[0].p3().dot(axis); if (quarks[0].pid() < 0) dot *= -1; } for (const Particle& p : fs.particles()) { const double xp = p.p3().mod()/meanBeamMom; // if in quark or antiquark hemisphere bool quark = p.p3().dot(axis)*dot > 0.; _h_XpChargedN->fill(xp); _temp_XpChargedN1->fill(xp); _temp_XpChargedN2->fill(xp); _temp_XpChargedN3->fill(xp); int id = p.abspid(); // charged pions if (id == PID::PIPLUS) { _h_XpPiPlusN->fill(xp); _multPiPlus[0]->fill(); switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _multPiPlus[1]->fill(); _h_XpPiPlusLight->fill(xp); if( ( quark && p.pid()>0 ) || ( !quark && p.pid()<0 )) _h_RPiPlus->fill(xp); else _h_RPiMinus->fill(xp); break; case PID::CQUARK: _multPiPlus[2]->fill(); _h_XpPiPlusCharm->fill(xp); break; case PID::BQUARK: _multPiPlus[3]->fill(); _h_XpPiPlusBottom->fill(xp); break; } } else if (id == PID::KPLUS) { _h_XpKPlusN->fill(xp); _multKPlus[0]->fill();
switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _multKPlus[1]->fill(); _temp_XpKPlusLight->fill(xp); _h_XpKPlusLight->fill(xp); if( ( quark && p.pid()>0 ) || ( !quark && p.pid()<0 )) _h_RKPlus->fill(xp); else _h_RKMinus->fill(xp); break; case PID::CQUARK: _multKPlus[2]->fill(); _h_XpKPlusCharm->fill(xp); _temp_XpKPlusCharm->fill(xp); break; case PID::BQUARK: _multKPlus[3]->fill(); _h_XpKPlusBottom->fill(xp); break; } } else if (id == PID::PROTON) { _h_XpProtonN->fill(xp); _multProton[0]->fill(); switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _multProton[1]->fill(); _temp_XpProtonLight->fill(xp); _h_XpProtonLight->fill(xp); if( ( quark && p.pid()>0 ) || ( !quark && p.pid()<0 )) _h_RProton->fill(xp); else _h_RPBar ->fill(xp); break; case PID::CQUARK: _multProton[2]->fill(); _temp_XpProtonCharm->fill(xp); _h_XpProtonCharm->fill(xp); break; case PID::BQUARK: _multProton[3]->fill(); _h_XpProtonBottom->fill(xp); break; } } } const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS"); for (const Particle& p : ufs.particles()) { const double xp = p.p3().mod()/meanBeamMom; // if in quark or antiquark hemisphere bool quark = p.p3().dot(axis)*dot>0.; int id = p.abspid(); if (id == PID::LAMBDA) { _multLambda[0]->fill(); _h_XpLambdaN->fill(xp); switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _multLambda[1]->fill(); _h_XpLambdaLight->fill(xp); if( ( quark && p.pid()>0 ) || ( !quark && p.pid()<0 )) _h_RLambda->fill(xp); else _h_RLBar ->fill(xp); break; case PID::CQUARK: _multLambda[2]->fill(); _h_XpLambdaCharm->fill(xp); break; case PID::BQUARK: _multLambda[3]->fill(); _h_XpLambdaBottom->fill(xp); break; } } else if (id == 313) { _multKStar0[0]->fill(); _h_XpKStar0N->fill(xp); switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _multKStar0[1]->fill(); _temp_XpKStar0Light->fill(xp); _h_XpKStar0Light->fill(xp); if ( ( quark && p.pid()>0 ) || ( !quark && p.pid()<0 )) _h_RKS0
->fill(xp); else _h_RKSBar0->fill(xp); break; case PID::CQUARK: _multKStar0[2]->fill(); _temp_XpKStar0Charm->fill(xp); _h_XpKStar0Charm->fill(xp); break; case PID::BQUARK: _multKStar0[3]->fill(); _h_XpKStar0Bottom->fill(xp); break; } } else if (id == 333) { _multPhi[0]->fill(); _h_XpPhiN->fill(xp); switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _multPhi[1]->fill(); _h_XpPhiLight->fill(xp); break; case PID::CQUARK: _multPhi[2]->fill(); _h_XpPhiCharm->fill(xp); break; case PID::BQUARK: _multPhi[3]->fill(); _h_XpPhiBottom->fill(xp); break; } } else if (id == PID::K0S || id == PID::K0L) { _multK0[0]->fill(); _h_XpK0N->fill(xp); switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _multK0[1]->fill(); _h_XpK0Light->fill(xp); break; case PID::CQUARK: _multK0[2]->fill(); _h_XpK0Charm->fill(xp); break; case PID::BQUARK: _multK0[3]->fill(); _h_XpK0Bottom->fill(xp); break; } } } } void init() { // Projections declare(Beam(), "Beams"); declare(ChargedFinalState(), "FS"); declare(UnstableParticles(), "UFS"); declare(InitialQuarks(), "IQF"); declare(Thrust(FinalState()), "Thrust"); book(_temp_XpChargedN1 ,"TMP/XpChargedN1", refData( 1, 1, 1)); book(_temp_XpChargedN2 ,"TMP/XpChargedN2", refData( 2, 1, 1)); book(_temp_XpChargedN3 ,"TMP/XpChargedN3", refData( 3, 1, 1)); book(_h_XpPiPlusN , 1, 1, 2); book(_h_XpKPlusN , 2, 1, 2); book(_h_XpProtonN , 3, 1, 2); book(_h_XpChargedN , 4, 1, 1); book(_h_XpK0N , 5, 1, 1); book(_h_XpLambdaN , 7, 1, 1); book(_h_XpKStar0N , 8, 1, 1); book(_h_XpPhiN , 9, 1, 1); book(_h_XpPiPlusLight ,10, 1, 1); book(_h_XpPiPlusCharm ,10, 1, 2); book(_h_XpPiPlusBottom ,10, 1, 3); book(_h_XpKPlusLight ,12, 1, 1); book(_h_XpKPlusCharm ,12, 1, 2); book(_h_XpKPlusBottom ,12, 1, 3); book(_h_XpKStar0Light ,14, 1, 1); book(_h_XpKStar0Charm ,14, 1, 2); book(_h_XpKStar0Bottom ,14, 1, 3); book(_h_XpProtonLight ,16, 1, 1); book(_h_XpProtonCharm ,16, 1, 2); book(_h_XpProtonBottom ,16, 1, 3); book(_h_XpLambdaLight ,18, 1,
1); book(_h_XpLambdaCharm ,18, 1, 2); book(_h_XpLambdaBottom ,18, 1, 3); book(_h_XpK0Light ,20, 1, 1); book(_h_XpK0Charm ,20, 1, 2); book(_h_XpK0Bottom ,20, 1, 3); book(_h_XpPhiLight ,22, 1, 1); book(_h_XpPhiCharm ,22, 1, 2); book(_h_XpPhiBottom ,22, 1, 3); book(_temp_XpKPlusCharm ,"TMP/XpKPlusCharm", refData(13, 1, 1)); book(_temp_XpKPlusLight ,"TMP/XpKPlusLight", refData(13, 1, 1)); book(_temp_XpKStar0Charm ,"TMP/XpKStar0Charm", refData(15, 1, 1)); book(_temp_XpKStar0Light ,"TMP/XpKStar0Light", refData(15, 1, 1)); book(_temp_XpProtonCharm ,"TMP/XpProtonCharm", refData(17, 1, 1)); book(_temp_XpProtonLight ,"TMP/XpProtonLight", refData(17, 1, 1)); book(_h_RPiPlus , 26, 1, 1); book(_h_RPiMinus , 26, 1, 2); book(_h_RKS0 , 28, 1, 1); book(_h_RKSBar0 , 28, 1, 2); book(_h_RKPlus , 30, 1, 1); book(_h_RKMinus , 30, 1, 2); book(_h_RProton , 32, 1, 1); book(_h_RPBar , 32, 1, 2); book(_h_RLambda , 34, 1, 1); book(_h_RLBar , 34, 1, 2); book(_s_Xp_PiPl_Ch , 1, 1, 1); book(_s_Xp_KPl_Ch , 2, 1, 1); book(_s_Xp_Pr_Ch , 3, 1, 1); book(_s_Xp_PiPlCh_PiPlLi, 11, 1, 1); book(_s_Xp_PiPlBo_PiPlLi, 11, 1, 2); book(_s_Xp_KPlCh_KPlLi , 13, 1, 1); book(_s_Xp_KPlBo_KPlLi , 13, 1, 2); book(_s_Xp_KS0Ch_KS0Li , 15, 1, 1); book(_s_Xp_KS0Bo_KS0Li , 15, 1, 2); book(_s_Xp_PrCh_PrLi , 17, 1, 1); book(_s_Xp_PrBo_PrLi , 17, 1, 2); book(_s_Xp_LaCh_LaLi , 19, 1, 1); book(_s_Xp_LaBo_LaLi , 19, 1, 2); book(_s_Xp_K0Ch_K0Li , 21, 1, 1); book(_s_Xp_K0Bo_K0Li , 21, 1, 2); book(_s_Xp_PhiCh_PhiLi , 23, 1, 1); book(_s_Xp_PhiBo_PhiLi , 23, 1, 2); book(_s_PiM_PiP , 27, 1, 1); book(_s_KSBar0_KS0, 29, 1, 1); book(_s_KM_KP , 31, 1, 1); book(_s_Pr_PBar , 33, 1, 1); book(_s_Lam_LBar , 35, 1, 1); - book(_SumOfudsWeights, "SumOfudsWeights"); - book(_SumOfcWeights, "SumOfcWeights"); - book(_SumOfbWeights, "SumOfbWeights"); + book(_SumOfudsWeights, "_SumOfudsWeights"); + book(_SumOfcWeights, "_SumOfcWeights"); + book(_SumOfbWeights, "_SumOfbWeights"); for ( size_t i=0; i<4; ++i) { - book(_multPiPlus[i], 
"multPiPlus_"+to_str(i)); - book(_multKPlus[i], "multKPlus_"+to_str(i)); - book(_multK0[i], "multK0_"+to_str(i)); - book(_multKStar0[i], "multKStar0_"+to_str(i)); - book(_multPhi[i], "multPhi_"+to_str(i)); - book(_multProton[i], "multProton_"+to_str(i)); - book(_multLambda[i], "multLambda_"+to_str(i)); + book(_multPiPlus[i], "_multPiPlus_"+to_str(i)); + book(_multKPlus[i], "_multKPlus_"+to_str(i)); + book(_multK0[i], "_multK0_"+to_str(i)); + book(_multKStar0[i], "_multKStar0_"+to_str(i)); + book(_multPhi[i], "_multPhi_"+to_str(i)); + book(_multProton[i], "_multProton_"+to_str(i)); + book(_multLambda[i], "_multLambda_"+to_str(i)); } book(tmp1, 24, 1, 1, true); book(tmp2, 24, 1, 2, true); book(tmp3, 24, 1, 3, true); book(tmp4, 24, 1, 4, true); book(tmp5, 25, 1, 1, true); book(tmp6, 25, 1, 2, true); book(tmp7, 24, 2, 1, true); book(tmp8, 24, 2, 2, true); book(tmp9, 24, 2, 3, true); book(tmp10, 24, 2, 4, true); book(tmp11, 25, 2, 1, true); book(tmp12, 25, 2, 2, true); book(tmp13, 24, 3, 1, true); book(tmp14, 24, 3, 2, true); book(tmp15, 24, 3, 3, true); book(tmp16, 24, 3, 4, true); book(tmp17, 25, 3, 1, true); book(tmp18, 25, 3, 2, true); book(tmp19, 24, 4, 1, true); book(tmp20, 24, 4, 2, true); book(tmp21, 24, 4, 3, true); book(tmp22, 24, 4, 4, true); book(tmp23, 25, 4, 1, true); book(tmp24, 25, 4, 2, true); book(tmp25, 24, 5, 1, true); book(tmp26, 24, 5, 2, true); book(tmp27, 24, 5, 3, true); book(tmp28, 24, 5, 4, true); book(tmp29, 25, 5, 1, true); book(tmp30, 25, 5, 2, true); book(tmp31, 24, 6, 1, true); book(tmp32, 24, 6, 2, true); book(tmp33, 24, 6, 3, true); book(tmp34, 24, 6, 4, true); book(tmp35, 25, 6, 1, true); book(tmp36, 25, 6, 2, true); book(tmp37, 24, 7, 1, true); book(tmp38, 24, 7, 2, true); book(tmp39, 24, 7, 3, true); book(tmp40, 24, 7, 4, true); book(tmp41, 25, 7, 1, true); book(tmp42, 25, 7, 2, true); } /// Finalize void finalize() { // Get the ratio plots sorted out first divide(_h_XpPiPlusN, _temp_XpChargedN1, _s_Xp_PiPl_Ch); divide(_h_XpKPlusN, 
_temp_XpChargedN2, _s_Xp_KPl_Ch); divide(_h_XpProtonN, _temp_XpChargedN3, _s_Xp_Pr_Ch); divide(_h_XpPiPlusCharm, _h_XpPiPlusLight, _s_Xp_PiPlCh_PiPlLi); _s_Xp_PiPlCh_PiPlLi->scale(1., dbl(*_SumOfudsWeights / *_SumOfcWeights)); divide(_h_XpPiPlusBottom, _h_XpPiPlusLight, _s_Xp_PiPlBo_PiPlLi); _s_Xp_PiPlBo_PiPlLi->scale(1., dbl(*_SumOfudsWeights / *_SumOfbWeights)); divide(_temp_XpKPlusCharm , _temp_XpKPlusLight, _s_Xp_KPlCh_KPlLi); _s_Xp_KPlCh_KPlLi->scale(1., dbl(*_SumOfudsWeights / *_SumOfcWeights)); divide(_h_XpKPlusBottom, _h_XpKPlusLight, _s_Xp_KPlBo_KPlLi); _s_Xp_KPlBo_KPlLi->scale(1., dbl(*_SumOfudsWeights / *_SumOfbWeights)); divide(_temp_XpKStar0Charm, _temp_XpKStar0Light, _s_Xp_KS0Ch_KS0Li); _s_Xp_KS0Ch_KS0Li->scale(1., dbl(*_SumOfudsWeights / *_SumOfcWeights)); divide(_h_XpKStar0Bottom, _h_XpKStar0Light, _s_Xp_KS0Bo_KS0Li); _s_Xp_KS0Bo_KS0Li->scale(1., dbl(*_SumOfudsWeights / *_SumOfbWeights)); divide(_temp_XpProtonCharm, _temp_XpProtonLight, _s_Xp_PrCh_PrLi); _s_Xp_PrCh_PrLi->scale(1., dbl(*_SumOfudsWeights / *_SumOfcWeights)); divide(_h_XpProtonBottom, _h_XpProtonLight, _s_Xp_PrBo_PrLi); _s_Xp_PrBo_PrLi->scale(1., dbl(*_SumOfudsWeights / *_SumOfbWeights)); divide(_h_XpLambdaCharm, _h_XpLambdaLight, _s_Xp_LaCh_LaLi); _s_Xp_LaCh_LaLi->scale(1., dbl(*_SumOfudsWeights / *_SumOfcWeights)); divide(_h_XpLambdaBottom, _h_XpLambdaLight, _s_Xp_LaBo_LaLi); _s_Xp_LaBo_LaLi->scale(1., dbl(*_SumOfudsWeights / *_SumOfbWeights)); divide(_h_XpK0Charm, _h_XpK0Light, _s_Xp_K0Ch_K0Li); _s_Xp_K0Ch_K0Li->scale(1., dbl(*_SumOfudsWeights / *_SumOfcWeights)); divide(_h_XpK0Bottom, _h_XpK0Light, _s_Xp_K0Bo_K0Li); _s_Xp_K0Bo_K0Li->scale(1., dbl(*_SumOfudsWeights / *_SumOfbWeights)); divide(_h_XpPhiCharm, _h_XpPhiLight, _s_Xp_PhiCh_PhiLi); _s_Xp_PhiCh_PhiLi->scale(1., dbl(*_SumOfudsWeights / *_SumOfcWeights)); divide(_h_XpPhiBottom, _h_XpPhiLight, _s_Xp_PhiBo_PhiLi); _s_Xp_PhiBo_PhiLi->scale(1., dbl(*_SumOfudsWeights / *_SumOfbWeights)); // Then the leading particles 
divide(*_h_RPiMinus - *_h_RPiPlus, *_h_RPiMinus + *_h_RPiPlus, _s_PiM_PiP); divide(*_h_RKSBar0 - *_h_RKS0, *_h_RKSBar0 + *_h_RKS0, _s_KSBar0_KS0); divide(*_h_RKMinus - *_h_RKPlus, *_h_RKMinus + *_h_RKPlus, _s_KM_KP); divide(*_h_RProton - *_h_RPBar, *_h_RProton + *_h_RPBar, _s_Pr_PBar); divide(*_h_RLambda - *_h_RLBar, *_h_RLambda + *_h_RLBar, _s_Lam_LBar); // Then the rest scale(_h_XpPiPlusN, 1/sumOfWeights()); scale(_h_XpKPlusN, 1/sumOfWeights()); scale(_h_XpProtonN, 1/sumOfWeights()); scale(_h_XpChargedN, 1/sumOfWeights()); scale(_h_XpK0N, 1/sumOfWeights()); scale(_h_XpLambdaN, 1/sumOfWeights()); scale(_h_XpKStar0N, 1/sumOfWeights()); scale(_h_XpPhiN, 1/sumOfWeights()); scale(_h_XpPiPlusLight, 1 / *_SumOfudsWeights); scale(_h_XpPiPlusCharm, 1 / *_SumOfcWeights); scale(_h_XpPiPlusBottom, 1 / *_SumOfbWeights); scale(_h_XpKPlusLight, 1 / *_SumOfudsWeights); scale(_h_XpKPlusCharm, 1 / *_SumOfcWeights); scale(_h_XpKPlusBottom, 1 / *_SumOfbWeights); scale(_h_XpKStar0Light, 1 / *_SumOfudsWeights); scale(_h_XpKStar0Charm, 1 / *_SumOfcWeights); scale(_h_XpKStar0Bottom, 1 / *_SumOfbWeights); scale(_h_XpProtonLight, 1 / *_SumOfudsWeights); scale(_h_XpProtonCharm, 1 / *_SumOfcWeights); scale(_h_XpProtonBottom, 1 / *_SumOfbWeights); scale(_h_XpLambdaLight, 1 / *_SumOfudsWeights); scale(_h_XpLambdaCharm, 1 / *_SumOfcWeights); scale(_h_XpLambdaBottom, 1 / *_SumOfbWeights); scale(_h_XpK0Light, 1 / *_SumOfudsWeights); scale(_h_XpK0Charm, 1 / *_SumOfcWeights); scale(_h_XpK0Bottom, 1 / *_SumOfbWeights); scale(_h_XpPhiLight, 1 / *_SumOfudsWeights); scale(_h_XpPhiCharm , 1 / *_SumOfcWeights); scale(_h_XpPhiBottom, 1 / *_SumOfbWeights); scale(_h_RPiPlus, 1 / *_SumOfudsWeights); scale(_h_RPiMinus, 1 / *_SumOfudsWeights); scale(_h_RKS0, 1 / *_SumOfudsWeights); scale(_h_RKSBar0, 1 / *_SumOfudsWeights); scale(_h_RKPlus, 1 / *_SumOfudsWeights); scale(_h_RKMinus, 1 / *_SumOfudsWeights); scale(_h_RProton, 1 / *_SumOfudsWeights); scale(_h_RPBar, 1 / *_SumOfudsWeights); scale(_h_RLambda, 1 / 
*_SumOfudsWeights); scale(_h_RLBar, 1 / *_SumOfudsWeights); // Multiplicities double avgNumPartsAll, avgNumPartsLight,avgNumPartsCharm, avgNumPartsBottom; // pi+/- // all avgNumPartsAll = dbl(*_multPiPlus[0])/sumOfWeights(); tmp1->point(0).setY(avgNumPartsAll); // light avgNumPartsLight = dbl(*_multPiPlus[1] / *_SumOfudsWeights); tmp2->point(0).setY(avgNumPartsLight); // charm avgNumPartsCharm = dbl(*_multPiPlus[2] / *_SumOfcWeights); tmp3->point(0).setY(avgNumPartsCharm); // bottom avgNumPartsBottom = dbl(*_multPiPlus[3] / *_SumOfbWeights); tmp4->point(0).setY(avgNumPartsBottom); // charm-light tmp5->point(0).setY(avgNumPartsCharm - avgNumPartsLight); // bottom-light tmp6->point(0).setY(avgNumPartsBottom - avgNumPartsLight); // K+/- // all avgNumPartsAll = dbl(*_multKPlus[0])/sumOfWeights(); tmp7->point(0).setY(avgNumPartsAll); // light avgNumPartsLight = dbl(*_multKPlus[1] / *_SumOfudsWeights); tmp8->point(0).setY(avgNumPartsLight); // charm avgNumPartsCharm = dbl(*_multKPlus[2] / *_SumOfcWeights); tmp9->point(0).setY(avgNumPartsCharm); // bottom avgNumPartsBottom = dbl(*_multKPlus[3] / *_SumOfbWeights); tmp10->point(0).setY(avgNumPartsBottom); // charm-light tmp11->point(0).setY(avgNumPartsCharm - avgNumPartsLight); // bottom-light tmp12->point(0).setY(avgNumPartsBottom - avgNumPartsLight); // K0 // all avgNumPartsAll = dbl(*_multK0[0])/sumOfWeights(); tmp13->point(0).setY(avgNumPartsAll); // light avgNumPartsLight = dbl(*_multK0[1] / *_SumOfudsWeights); tmp14->point(0).setY(avgNumPartsLight); // charm avgNumPartsCharm = dbl(*_multK0[2] / *_SumOfcWeights); tmp15->point(0).setY(avgNumPartsCharm); // bottom avgNumPartsBottom = dbl(*_multK0[3] / *_SumOfbWeights); tmp16->point(0).setY(avgNumPartsBottom); // charm-light tmp17->point(0).setY(avgNumPartsCharm - avgNumPartsLight); // bottom-light tmp18->point(0).setY(avgNumPartsBottom - avgNumPartsLight); // K*0 // all avgNumPartsAll = dbl(*_multKStar0[0])/sumOfWeights(); tmp19->point(0).setY(avgNumPartsAll); // light 
avgNumPartsLight = dbl(*_multKStar0[1] / *_SumOfudsWeights); tmp20->point(0).setY(avgNumPartsLight); // charm avgNumPartsCharm = dbl(*_multKStar0[2] / *_SumOfcWeights); tmp21->point(0).setY(avgNumPartsCharm); // bottom avgNumPartsBottom = dbl(*_multKStar0[3] / *_SumOfbWeights); tmp22->point(0).setY(avgNumPartsBottom); // charm-light tmp23->point(0).setY(avgNumPartsCharm - avgNumPartsLight); // bottom-light tmp24->point(0).setY(avgNumPartsBottom - avgNumPartsLight); // phi // all avgNumPartsAll = dbl(*_multPhi[0])/sumOfWeights(); tmp25->point(0).setY(avgNumPartsAll); // light avgNumPartsLight = dbl(*_multPhi[1] / *_SumOfudsWeights); tmp26->point(0).setY(avgNumPartsLight); // charm avgNumPartsCharm = dbl(*_multPhi[2] / *_SumOfcWeights); tmp27->point(0).setY(avgNumPartsCharm); // bottom avgNumPartsBottom = dbl(*_multPhi[3] / *_SumOfbWeights); tmp28->point(0).setY(avgNumPartsBottom); // charm-light tmp29->point(0).setY(avgNumPartsCharm - avgNumPartsLight); // bottom-light tmp30->point(0).setY(avgNumPartsBottom - avgNumPartsLight); // p // all avgNumPartsAll = dbl(*_multProton[0])/sumOfWeights(); tmp31->point(0).setY(avgNumPartsAll); // light avgNumPartsLight = dbl(*_multProton[1] / *_SumOfudsWeights); tmp32->point(0).setY(avgNumPartsLight); // charm avgNumPartsCharm = dbl(*_multProton[2] / *_SumOfcWeights); tmp33->point(0).setY(avgNumPartsCharm); // bottom avgNumPartsBottom = dbl(*_multProton[3] / *_SumOfbWeights); tmp34->point(0).setY(avgNumPartsBottom); // charm-light tmp35->point(0).setY(avgNumPartsCharm - avgNumPartsLight); // bottom-light tmp36->point(0).setY(avgNumPartsBottom - avgNumPartsLight); // Lambda // all avgNumPartsAll = dbl(*_multLambda[0])/sumOfWeights(); tmp37->point(0).setY(avgNumPartsAll); // light avgNumPartsLight = dbl(*_multLambda[1] / *_SumOfudsWeights); tmp38->point(0).setY(avgNumPartsLight); // charm avgNumPartsCharm = dbl(*_multLambda[2] / *_SumOfcWeights); tmp39->point(0).setY(avgNumPartsCharm); // bottom avgNumPartsBottom = 
dbl(*_multLambda[3] / *_SumOfbWeights); tmp40->point(0).setY(avgNumPartsBottom); // charm-light tmp41->point(0).setY(avgNumPartsCharm - avgNumPartsLight); // bottom-light tmp42->point(0).setY(avgNumPartsBottom - avgNumPartsLight); } //@} private: /// Store the weighted sums of numbers of charged / charged+neutral /// particles. Used to calculate average number of particles for the /// inclusive single particle distributions' normalisations. CounterPtr _SumOfudsWeights, _SumOfcWeights, _SumOfbWeights; vector<CounterPtr> _multPiPlus, _multKPlus, _multK0, _multKStar0, _multPhi, _multProton, _multLambda; Histo1DPtr _h_XpPiPlusSig, _h_XpPiPlusN; Histo1DPtr _h_XpKPlusSig, _h_XpKPlusN; Histo1DPtr _h_XpProtonSig, _h_XpProtonN; Histo1DPtr _h_XpChargedN; Histo1DPtr _h_XpK0N, _h_XpLambdaN; Histo1DPtr _h_XpKStar0N, _h_XpPhiN; Histo1DPtr _h_XpPiPlusLight, _h_XpPiPlusCharm, _h_XpPiPlusBottom; Histo1DPtr _h_XpKPlusLight, _h_XpKPlusCharm, _h_XpKPlusBottom; Histo1DPtr _h_XpKStar0Light, _h_XpKStar0Charm, _h_XpKStar0Bottom; Histo1DPtr _h_XpProtonLight, _h_XpProtonCharm, _h_XpProtonBottom; Histo1DPtr _h_XpLambdaLight, _h_XpLambdaCharm, _h_XpLambdaBottom; Histo1DPtr _h_XpK0Light, _h_XpK0Charm, _h_XpK0Bottom; Histo1DPtr _h_XpPhiLight, _h_XpPhiCharm, _h_XpPhiBottom; Histo1DPtr _temp_XpChargedN1, _temp_XpChargedN2, _temp_XpChargedN3; Histo1DPtr _temp_XpKPlusCharm , _temp_XpKPlusLight; Histo1DPtr _temp_XpKStar0Charm, _temp_XpKStar0Light; Histo1DPtr _temp_XpProtonCharm, _temp_XpProtonLight; Histo1DPtr _h_RPiPlus, _h_RPiMinus; Histo1DPtr _h_RKS0, _h_RKSBar0; Histo1DPtr _h_RKPlus, _h_RKMinus; Histo1DPtr _h_RProton, _h_RPBar; Histo1DPtr _h_RLambda, _h_RLBar; Scatter2DPtr _s_Xp_PiPl_Ch, _s_Xp_KPl_Ch, _s_Xp_Pr_Ch; Scatter2DPtr _s_Xp_PiPlCh_PiPlLi, _s_Xp_PiPlBo_PiPlLi; Scatter2DPtr _s_Xp_KPlCh_KPlLi, _s_Xp_KPlBo_KPlLi; Scatter2DPtr _s_Xp_KS0Ch_KS0Li, _s_Xp_KS0Bo_KS0Li; Scatter2DPtr _s_Xp_PrCh_PrLi, _s_Xp_PrBo_PrLi; Scatter2DPtr _s_Xp_LaCh_LaLi, _s_Xp_LaBo_LaLi; Scatter2DPtr _s_Xp_K0Ch_K0Li, _s_Xp_K0Bo_K0Li;
Scatter2DPtr _s_Xp_PhiCh_PhiLi, _s_Xp_PhiBo_PhiLi; Scatter2DPtr _s_PiM_PiP, _s_KSBar0_KS0, _s_KM_KP, _s_Pr_PBar, _s_Lam_LBar; //@} Scatter2DPtr tmp1; Scatter2DPtr tmp2; Scatter2DPtr tmp3; Scatter2DPtr tmp4; Scatter2DPtr tmp5; Scatter2DPtr tmp6; Scatter2DPtr tmp7; Scatter2DPtr tmp8; Scatter2DPtr tmp9; Scatter2DPtr tmp10; Scatter2DPtr tmp11; Scatter2DPtr tmp12; Scatter2DPtr tmp13; Scatter2DPtr tmp14; Scatter2DPtr tmp15; Scatter2DPtr tmp16; Scatter2DPtr tmp17; Scatter2DPtr tmp18; Scatter2DPtr tmp19; Scatter2DPtr tmp20; Scatter2DPtr tmp21; Scatter2DPtr tmp22; Scatter2DPtr tmp23; Scatter2DPtr tmp24; Scatter2DPtr tmp25; Scatter2DPtr tmp26; Scatter2DPtr tmp27; Scatter2DPtr tmp28; Scatter2DPtr tmp29; Scatter2DPtr tmp30; Scatter2DPtr tmp31; Scatter2DPtr tmp32; Scatter2DPtr tmp33; Scatter2DPtr tmp34; Scatter2DPtr tmp35; Scatter2DPtr tmp36; Scatter2DPtr tmp37; Scatter2DPtr tmp38; Scatter2DPtr tmp39; Scatter2DPtr tmp40; Scatter2DPtr tmp41; Scatter2DPtr tmp42; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(SLD_1999_S3743934); } diff --git a/analyses/pluginLEP/SLD_2004_S5693039.cc b/analyses/pluginLEP/SLD_2004_S5693039.cc --- a/analyses/pluginLEP/SLD_2004_S5693039.cc +++ b/analyses/pluginLEP/SLD_2004_S5693039.cc @@ -1,379 +1,379 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/Thrust.hh" #define I_KNOW_THE_INITIAL_QUARKS_PROJECTION_IS_DODGY_BUT_NEED_TO_USE_IT #include "Rivet/Projections/InitialQuarks.hh" namespace Rivet { /// @brief SLD flavour-dependent fragmentation paper /// @author Peter Richardson class SLD_2004_S5693039 : public Analysis { public: /// Constructor SLD_2004_S5693039() : Analysis("SLD_2004_S5693039") {} /// @name Analysis methods //@{ void analyze(const Event& e) { // First, veto on leptonic events by requiring at least 2 charged FS particles const FinalState& fs = apply<FinalState>(e, "FS"); const size_t
numParticles = fs.particles().size(); // Even if we only generate hadronic events, we still need a cut on numCharged >= 2. if (numParticles < 2) { MSG_DEBUG("Failed ncharged cut"); vetoEvent; } MSG_DEBUG("Passed ncharged cut"); // Get beams and average beam momentum const ParticlePair& beams = apply<Beam>(e, "Beams").beams(); const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0; MSG_DEBUG("Avg beam momentum = " << meanBeamMom); int flavour = 0; const InitialQuarks& iqf = apply<InitialQuarks>(e, "IQF"); // If we only have two quarks (qqbar), just take the flavour. // If we have more than two quarks, look for the most energetic q-qbar pair. Particles quarks; if (iqf.particles().size() == 2) { flavour = iqf.particles().front().abspid(); quarks = iqf.particles(); } else { map<int, Particle> quarkmap; for (const Particle& p : iqf.particles()) { if (quarkmap.find(p.pid())==quarkmap.end()) quarkmap[p.pid()] = p; else if (quarkmap[p.pid()].E() < p.E()) quarkmap[p.pid()] = p; } double maxenergy = 0.; for (int i = 1; i <= 5; ++i) { double energy(0.); if(quarkmap.find( i)!=quarkmap.end()) energy += quarkmap[ i].E(); if(quarkmap.find(-i)!=quarkmap.end()) energy += quarkmap[-i].E(); if (energy > maxenergy) { maxenergy = energy; flavour = i; } } if(quarkmap.find( flavour)!=quarkmap.end()) quarks.push_back(quarkmap[ flavour]); if(quarkmap.find(-flavour)!=quarkmap.end()) quarks.push_back(quarkmap[-flavour]); } // total multiplicities switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _weightLight ->fill(); _weightedTotalChargedPartNumLight ->fill(numParticles); break; case PID::CQUARK: _weightCharm ->fill(); _weightedTotalChargedPartNumCharm ->fill(numParticles); break; case PID::BQUARK: _weightBottom ->fill(); _weightedTotalChargedPartNumBottom ->fill(numParticles); break; } // thrust axis for projections Vector3 axis = apply<Thrust>(e, "Thrust").thrustAxis(); double dot(0.); if(!quarks.empty()) { dot = quarks[0].p3().dot(axis); if(quarks[0].pid()<0) dot *= -1.; } // spectra and individual
multiplicities for (const Particle& p : fs.particles()) { double pcm = p.p3().mod(); const double xp = pcm/meanBeamMom; // if in quark or antiquark hemisphere bool quark = p.p3().dot(axis)*dot>0.; _h_PCharged ->fill(pcm ); // all charged switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _h_XpChargedL->fill(xp); break; case PID::CQUARK: _h_XpChargedC->fill(xp); break; case PID::BQUARK: _h_XpChargedB->fill(xp); break; } int id = p.abspid(); // charged pions if (id == PID::PIPLUS) { _h_XpPiPlus->fill(xp); _h_XpPiPlusTotal->fill(xp); switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _h_XpPiPlusL->fill(xp); _h_NPiPlusL->fill(sqrtS()); if( ( quark && p.pid()>0 ) || ( !quark && p.pid()<0 )) _h_RPiPlus->fill(xp); else _h_RPiMinus->fill(xp); break; case PID::CQUARK: _h_XpPiPlusC->fill(xp); _h_NPiPlusC->fill(sqrtS()); break; case PID::BQUARK: _h_XpPiPlusB->fill(xp); _h_NPiPlusB->fill(sqrtS()); break; } } else if (id == PID::KPLUS) { _h_XpKPlus->fill(xp); _h_XpKPlusTotal->fill(xp); switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _h_XpKPlusL->fill(xp); _h_NKPlusL->fill(sqrtS()); if( ( quark && p.pid()>0 ) || ( !quark && p.pid()<0 )) _h_RKPlus->fill(xp); else _h_RKMinus->fill(xp); break; case PID::CQUARK: _h_XpKPlusC->fill(xp); _h_NKPlusC->fill(sqrtS()); break; case PID::BQUARK: _h_XpKPlusB->fill(xp); _h_NKPlusB->fill(sqrtS()); break; } } else if (id == PID::PROTON) { _h_XpProton->fill(xp); _h_XpProtonTotal->fill(xp); switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _h_XpProtonL->fill(xp); _h_NProtonL->fill(sqrtS()); if( ( quark && p.pid()>0 ) || ( !quark && p.pid()<0 )) _h_RProton->fill(xp); else _h_RPBar ->fill(xp); break; case PID::CQUARK: _h_XpProtonC->fill(xp); _h_NProtonC->fill(sqrtS()); break; case PID::BQUARK: _h_XpProtonB->fill(xp); _h_NProtonB->fill(sqrtS()); break; } } } } void init() { // Projections declare(Beam(), "Beams"); declare(ChargedFinalState(), "FS"); 
declare(InitialQuarks(), "IQF"); declare(Thrust(FinalState()), "Thrust"); // Book histograms book(_h_PCharged , 1, 1, 1); book(_h_XpPiPlus , 2, 1, 2); book(_h_XpKPlus , 3, 1, 2); book(_h_XpProton , 4, 1, 2); book(_h_XpPiPlusTotal , 2, 2, 2); book(_h_XpKPlusTotal , 3, 2, 2); book(_h_XpProtonTotal , 4, 2, 2); book(_h_XpPiPlusL , 5, 1, 1); book(_h_XpPiPlusC , 5, 1, 2); book(_h_XpPiPlusB , 5, 1, 3); book(_h_XpKPlusL , 6, 1, 1); book(_h_XpKPlusC , 6, 1, 2); book(_h_XpKPlusB , 6, 1, 3); book(_h_XpProtonL , 7, 1, 1); book(_h_XpProtonC , 7, 1, 2); book(_h_XpProtonB , 7, 1, 3); book(_h_XpChargedL , 8, 1, 1); book(_h_XpChargedC , 8, 1, 2); book(_h_XpChargedB , 8, 1, 3); book(_h_NPiPlusL , 5, 2, 1); book(_h_NPiPlusC , 5, 2, 2); book(_h_NPiPlusB , 5, 2, 3); book(_h_NKPlusL , 6, 2, 1); book(_h_NKPlusC , 6, 2, 2); book(_h_NKPlusB , 6, 2, 3); book(_h_NProtonL , 7, 2, 1); book(_h_NProtonC , 7, 2, 2); book(_h_NProtonB , 7, 2, 3); book(_h_RPiPlus , 9, 1, 1); book(_h_RPiMinus , 9, 1, 2); book(_h_RKPlus ,10, 1, 1); book(_h_RKMinus ,10, 1, 2); book(_h_RProton ,11, 1, 1); book(_h_RPBar ,11, 1, 2); // Ratios: used as target of divide() later book(_s_PiM_PiP, 9, 1, 3); book(_s_KM_KP , 10, 1, 3); book(_s_Pr_PBar, 11, 1, 3); - book(_weightedTotalChargedPartNumLight, "weightedTotalChargedPartNumLight"); - book(_weightedTotalChargedPartNumCharm, "weightedTotalChargedPartNumCharm"); - book(_weightedTotalChargedPartNumBottom, "weightedTotalChargedPartNumBottom"); - book(_weightLight, "weightLight"); - book(_weightCharm, "weightCharm"); - book(_weightBottom, "weightBottom"); + book(_weightedTotalChargedPartNumLight, "_weightedTotalChargedPartNumLight"); + book(_weightedTotalChargedPartNumCharm, "_weightedTotalChargedPartNumCharm"); + book(_weightedTotalChargedPartNumBottom, "_weightedTotalChargedPartNumBottom"); + book(_weightLight, "_weightLight"); + book(_weightCharm, "_weightCharm"); + book(_weightBottom, "_weightBottom"); book(tmp1, 8, 2, 1, true); book(tmp2, 8, 2, 2, true); book(tmp3, 8, 2, 
3, true); book(tmp4, 8, 3, 2, true); book(tmp5, 8, 3, 3, true); } /// Finalize void finalize() { // Multiplicities /// @todo Include errors const double avgNumPartsLight = _weightedTotalChargedPartNumLight->val() / _weightLight->val(); const double avgNumPartsCharm = _weightedTotalChargedPartNumCharm->val() / _weightCharm->val(); const double avgNumPartsBottom = _weightedTotalChargedPartNumBottom->val() / _weightBottom->val(); tmp1->point(0).setY(avgNumPartsLight); tmp2->point(0).setY(avgNumPartsCharm); tmp3->point(0).setY(avgNumPartsBottom); tmp4->point(0).setY(avgNumPartsCharm - avgNumPartsLight); tmp5->point(0).setY(avgNumPartsBottom - avgNumPartsLight); // Do divisions divide(*_h_RPiMinus - *_h_RPiPlus, *_h_RPiMinus + *_h_RPiPlus, _s_PiM_PiP); divide(*_h_RKMinus - *_h_RKPlus, *_h_RKMinus + *_h_RKPlus, _s_KM_KP); divide(*_h_RProton - *_h_RPBar, *_h_RProton + *_h_RPBar, _s_Pr_PBar); // Scale histograms scale(_h_PCharged, 1./sumOfWeights()); scale(_h_XpPiPlus, 1./sumOfWeights()); scale(_h_XpKPlus, 1./sumOfWeights()); scale(_h_XpProton, 1./sumOfWeights()); scale(_h_XpPiPlusTotal, 1./sumOfWeights()); scale(_h_XpKPlusTotal, 1./sumOfWeights()); scale(_h_XpProtonTotal, 1./sumOfWeights()); scale(_h_XpPiPlusL, 1. / *_weightLight); scale(_h_XpPiPlusC, 1. / *_weightCharm); scale(_h_XpPiPlusB, 1. / *_weightBottom); scale(_h_XpKPlusL, 1. / *_weightLight); scale(_h_XpKPlusC, 1. / *_weightCharm); scale(_h_XpKPlusB, 1. / *_weightBottom); scale(_h_XpProtonL, 1. / *_weightLight); scale(_h_XpProtonC, 1. / *_weightCharm); scale(_h_XpProtonB, 1. / *_weightBottom); scale(_h_XpChargedL, 1. / *_weightLight); scale(_h_XpChargedC, 1. / *_weightCharm); scale(_h_XpChargedB, 1. / *_weightBottom); scale(_h_NPiPlusL, 1. / *_weightLight); scale(_h_NPiPlusC, 1. / *_weightCharm); scale(_h_NPiPlusB, 1. / *_weightBottom); scale(_h_NKPlusL, 1. / *_weightLight); scale(_h_NKPlusC, 1. / *_weightCharm); scale(_h_NKPlusB, 1. / *_weightBottom); scale(_h_NProtonL, 1. 
/ *_weightLight); scale(_h_NProtonC, 1. / *_weightCharm); scale(_h_NProtonB, 1. / *_weightBottom); // Paper suggests this should be 0.5/weight but it has to be 1.0 to get normalisations right... scale(_h_RPiPlus, 1. / *_weightLight); scale(_h_RPiMinus, 1. / *_weightLight); scale(_h_RKPlus, 1. / *_weightLight); scale(_h_RKMinus, 1. / *_weightLight); scale(_h_RProton, 1. / *_weightLight); scale(_h_RPBar, 1. / *_weightLight); // convert ratio to % _s_PiM_PiP->scale(1.,100.); _s_KM_KP ->scale(1.,100.); _s_Pr_PBar->scale(1.,100.); } //@} private: Scatter2DPtr tmp1; Scatter2DPtr tmp2; Scatter2DPtr tmp3; Scatter2DPtr tmp4; Scatter2DPtr tmp5; /// @name Multiplicities //@{ CounterPtr _weightedTotalChargedPartNumLight; CounterPtr _weightedTotalChargedPartNumCharm; CounterPtr _weightedTotalChargedPartNumBottom; //@} /// @name Weights //@{ CounterPtr _weightLight, _weightCharm, _weightBottom; //@} // Histograms //@{ Histo1DPtr _h_PCharged; Histo1DPtr _h_XpPiPlus, _h_XpKPlus, _h_XpProton; Histo1DPtr _h_XpPiPlusTotal, _h_XpKPlusTotal, _h_XpProtonTotal; Histo1DPtr _h_XpPiPlusL, _h_XpPiPlusC, _h_XpPiPlusB; Histo1DPtr _h_XpKPlusL, _h_XpKPlusC, _h_XpKPlusB; Histo1DPtr _h_XpProtonL, _h_XpProtonC, _h_XpProtonB; Histo1DPtr _h_XpChargedL, _h_XpChargedC, _h_XpChargedB; Histo1DPtr _h_NPiPlusL, _h_NPiPlusC, _h_NPiPlusB; Histo1DPtr _h_NKPlusL, _h_NKPlusC, _h_NKPlusB; Histo1DPtr _h_NProtonL, _h_NProtonC, _h_NProtonB; Histo1DPtr _h_RPiPlus, _h_RPiMinus, _h_RKPlus; Histo1DPtr _h_RKMinus, _h_RProton, _h_RPBar; Scatter2DPtr _s_PiM_PiP, _s_KM_KP, _s_Pr_PBar; //@} }; // Hook for the plugin system DECLARE_RIVET_PLUGIN(SLD_2004_S5693039); } diff --git a/analyses/pluginMC/MC_VH2BB.cc b/analyses/pluginMC/MC_VH2BB.cc --- a/analyses/pluginMC/MC_VH2BB.cc +++ b/analyses/pluginMC/MC_VH2BB.cc @@ -1,262 +1,262 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ZFinder.hh" #include "Rivet/Projections/WFinder.hh" #include 
"Rivet/Projections/UnstableParticles.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Math/LorentzTrans.hh" namespace Rivet { class MC_VH2BB : public Analysis { public: /// @name Constructors etc. //@{ /// Constructor MC_VH2BB() : Analysis("MC_VH2BB") { } //@} /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { FinalState fs; Cut cut = Cuts::abseta < 3.5 && Cuts::pT > 25*GeV; ZFinder zeefinder(fs, cut, PID::ELECTRON, 65*GeV, 115*GeV, 0.2); declare(zeefinder, "ZeeFinder"); ZFinder zmmfinder(fs, cut, PID::MUON, 65*GeV, 115*GeV, 0.2); declare(zmmfinder, "ZmmFinder"); WFinder wefinder(fs, cut, PID::ELECTRON, 60*GeV, 100*GeV, 25*GeV, 0.2); declare(wefinder, "WeFinder"); WFinder wmfinder(fs, cut, PID::MUON, 60*GeV, 100*GeV, 25*GeV, 0.2); declare(wmfinder, "WmFinder"); declare(fs, "FinalState"); declare(FastJets(fs, FastJets::ANTIKT, 0.4), "AntiKT04"); declare(FastJets(fs, FastJets::ANTIKT, 0.5), "AntiKT05"); declare(FastJets(fs, FastJets::ANTIKT, 0.6), "AntiKT06"); /// Book histograms book(_h_jet_bb_Delta_eta ,"jet_bb_Delta_eta", 50, 0, 4); book(_h_jet_bb_Delta_phi ,"jet_bb_Delta_phi", 50, 0, M_PI); book(_h_jet_bb_Delta_pT ,"jet_bb_Delta_pT", 50,0, 500); book(_h_jet_bb_Delta_R ,"jet_bb_Delta_R", 50, 0, 5); book(_h_jet_b_jet_eta ,"jet_b_jet_eta", 50, -4, 4); book(_h_jet_b_jet_multiplicity ,"jet_b_jet_multiplicity", 11, -0.5, 10.5); book(_h_jet_b_jet_phi ,"jet_b_jet_phi", 50, 0, 2.*M_PI); book(_h_jet_b_jet_pT ,"jet_b_jet_pT", 50, 0, 500); book(_h_jet_H_eta_using_bb ,"jet_H_eta_using_bb", 50, -4, 4); book(_h_jet_H_mass_using_bb ,"jet_H_mass_using_bb", 50, 50, 200); book(_h_jet_H_phi_using_bb ,"jet_H_phi_using_bb", 50, 0, 2.*M_PI); book(_h_jet_H_pT_using_bb ,"jet_H_pT_using_bb", 50, 0, 500); book(_h_jet_eta ,"jet_eta", 50, -4, 4); book(_h_jet_multiplicity ,"jet_multiplicity", 11, -0.5, 10.5); book(_h_jet_phi ,"jet_phi", 50, 0, 2.*M_PI); book(_h_jet_pT ,"jet_pT", 50, 0, 500); book(_h_jet_VBbb_Delta_eta 
,"jet_VBbb_Delta_eta", 50, 0, 4); book(_h_jet_VBbb_Delta_phi ,"jet_VBbb_Delta_phi", 50, 0, M_PI); book(_h_jet_VBbb_Delta_pT ,"jet_VBbb_Delta_pT", 50, 0, 500); book(_h_jet_VBbb_Delta_R ,"jet_VBbb_Delta_R", 50, 0, 8); book(_h_VB_eta ,"VB_eta", 50, -4, 4); book(_h_VB_mass ,"VB_mass", 50, 60, 110); book(_h_Z_multiplicity ,"Z_multiplicity", 11, -0.5, 10.5); book(_h_W_multiplicity ,"W_multiplicity", 11, -0.5, 10.5); book(_h_VB_phi ,"VB_phi", 50, 0, 2.*M_PI); book(_h_VB_pT ,"VB_pT", 50, 0, 500); book(_h_jet_bVB_angle_Hframe ,"jet_bVB_angle_Hframe", 50, 0, M_PI); book(_h_jet_bVB_cosangle_Hframe ,"jet_bVB_cosangle_Hframe", 50, -1, 1); book(_h_jet_bb_angle_Hframe ,"jet_bb_angle_Hframe", 50, 0, M_PI); book(_h_jet_bb_cosangle_Hframe ,"jet_bb_cosangle_Hframe", 50, -1, 1); } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = 1.0; const double JETPTCUT = 30*GeV; const ZFinder& zeefinder = apply<ZFinder>(event, "ZeeFinder"); const ZFinder& zmmfinder = apply<ZFinder>(event, "ZmmFinder"); const WFinder& wefinder = apply<WFinder>(event, "WeFinder"); const WFinder& wmfinder = apply<WFinder>(event, "WmFinder"); const Particles vectorBosons = zeefinder.bosons() + zmmfinder.bosons() + wefinder.bosons() + wmfinder.bosons(); _h_Z_multiplicity->fill(zeefinder.bosons().size() + zmmfinder.bosons().size(), weight); _h_W_multiplicity->fill(wefinder.bosons().size() + wmfinder.bosons().size(), weight); const Jets jets = apply<FastJets>(event, "AntiKT04").jetsByPt(JETPTCUT); _h_jet_multiplicity->fill(jets.size(), weight); // Identify the b-jets Jets bjets; for (const Jet& jet : jets) { const double jetEta = jet.eta(); const double jetPhi = jet.phi(); const double jetPt = jet.pT(); _h_jet_eta->fill(jetEta, weight); _h_jet_phi->fill(jetPhi, weight); _h_jet_pT->fill(jetPt/GeV, weight); if (jet.bTagged() && jet.pT() > JETPTCUT) { bjets.push_back(jet); _h_jet_b_jet_eta->fill( jetEta , weight ); _h_jet_b_jet_phi->fill( jetPhi , weight ); _h_jet_b_jet_pT->fill( jetPt , weight ); } }
_h_jet_b_jet_multiplicity->fill(bjets.size(), weight); // Plot vector boson properties for (const Particle& v : vectorBosons) { _h_VB_phi->fill(v.phi(), weight); _h_VB_pT->fill(v.pT(), weight); _h_VB_eta->fill(v.eta(), weight); _h_VB_mass->fill(v.mass(), weight); } // The rest of the analysis requires at least one b-jet if (bjets.empty()) vetoEvent; // Construct Higgs candidates from pairs of b-jets for (size_t i = 0; i < bjets.size()-1; ++i) { for (size_t j = i+1; j < bjets.size(); ++j) { const Jet& jet1 = bjets[i]; const Jet& jet2 = bjets[j]; const double deltaEtaJJ = fabs(jet1.eta() - jet2.eta()); const double deltaPhiJJ = deltaPhi(jet1.momentum(), jet2.momentum()); const double deltaRJJ = deltaR(jet1.momentum(), jet2.momentum()); const double deltaPtJJ = fabs(jet1.pT() - jet2.pT()); _h_jet_bb_Delta_eta->fill(deltaEtaJJ, weight); _h_jet_bb_Delta_phi->fill(deltaPhiJJ, weight); _h_jet_bb_Delta_pT->fill(deltaPtJJ, weight); _h_jet_bb_Delta_R->fill(deltaRJJ, weight); const FourMomentum phiggs = jet1.momentum() + jet2.momentum(); _h_jet_H_eta_using_bb->fill(phiggs.eta(), weight); _h_jet_H_mass_using_bb->fill(phiggs.mass(), weight); _h_jet_H_phi_using_bb->fill(phiggs.phi(), weight); _h_jet_H_pT_using_bb->fill(phiggs.pT(), weight); for (const Particle& v : vectorBosons) { const double deltaEtaVH = fabs(phiggs.eta() - v.eta()); const double deltaPhiVH = deltaPhi(phiggs, v.momentum()); const double deltaRVH = deltaR(phiggs, v.momentum()); const double deltaPtVH = fabs(phiggs.pT() - v.pT()); _h_jet_VBbb_Delta_eta->fill(deltaEtaVH, weight); _h_jet_VBbb_Delta_phi->fill(deltaPhiVH, weight); _h_jet_VBbb_Delta_pT->fill(deltaPtVH, weight); _h_jet_VBbb_Delta_R->fill(deltaRVH, weight); // Calculate boost angles const vector<double> angles = boostAngles(jet1.momentum(), jet2.momentum(), v.momentum()); _h_jet_bVB_angle_Hframe->fill(angles[0], weight); _h_jet_bb_angle_Hframe->fill(angles[1], weight); _h_jet_bVB_cosangle_Hframe->fill(cos(angles[0]), weight);
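The pair loops above fill Δη, Δφ and ΔR for each b-jet pair and each (Higgs candidate, V) pair. As a standalone numerical sketch of those separations (hypothetical helper names, not Rivet's API):

```python
import math

def delta_phi(phi1, phi2):
    # Fold the azimuthal difference into [0, pi], as Rivet's deltaPhi does
    dphi = abs(phi1 - phi2) % (2 * math.pi)
    return 2 * math.pi - dphi if dphi > math.pi else dphi

def delta_r(eta1, phi1, eta2, phi2):
    # Quadrature sum of the pseudorapidity and azimuthal separations
    return math.hypot(eta1 - eta2, delta_phi(phi1, phi2))
```

For example, two jets at the same eta but back-to-back in phi give delta_r equal to pi.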
_h_jet_bb_cosangle_Hframe->fill(cos(angles[1]), weight); } } } } /// Normalise histograms etc., after the run void finalize() { scale(_h_jet_bb_Delta_eta, crossSection()/sumOfWeights()); scale(_h_jet_bb_Delta_phi, crossSection()/sumOfWeights()); scale(_h_jet_bb_Delta_pT, crossSection()/sumOfWeights()); scale(_h_jet_bb_Delta_R, crossSection()/sumOfWeights()); scale(_h_jet_b_jet_eta, crossSection()/sumOfWeights()); scale(_h_jet_b_jet_multiplicity, crossSection()/sumOfWeights()); scale(_h_jet_b_jet_phi, crossSection()/sumOfWeights()); scale(_h_jet_b_jet_pT, crossSection()/sumOfWeights()); scale(_h_jet_H_eta_using_bb, crossSection()/sumOfWeights()); scale(_h_jet_H_mass_using_bb, crossSection()/sumOfWeights()); scale(_h_jet_H_phi_using_bb, crossSection()/sumOfWeights()); scale(_h_jet_H_pT_using_bb, crossSection()/sumOfWeights()); scale(_h_jet_eta, crossSection()/sumOfWeights()); scale(_h_jet_multiplicity, crossSection()/sumOfWeights()); scale(_h_jet_phi, crossSection()/sumOfWeights()); scale(_h_jet_pT, crossSection()/sumOfWeights()); scale(_h_jet_VBbb_Delta_eta, crossSection()/sumOfWeights()); scale(_h_jet_VBbb_Delta_phi, crossSection()/sumOfWeights()); scale(_h_jet_VBbb_Delta_pT, crossSection()/sumOfWeights()); scale(_h_jet_VBbb_Delta_R, crossSection()/sumOfWeights()); scale(_h_VB_eta, crossSection()/sumOfWeights()); scale(_h_VB_mass, crossSection()/sumOfWeights()); scale(_h_Z_multiplicity, crossSection()/sumOfWeights()); scale(_h_W_multiplicity, crossSection()/sumOfWeights()); scale(_h_VB_phi, crossSection()/sumOfWeights()); scale(_h_VB_pT, crossSection()/sumOfWeights()); scale(_h_jet_bVB_angle_Hframe, crossSection()/sumOfWeights()); scale(_h_jet_bb_angle_Hframe, crossSection()/sumOfWeights()); scale(_h_jet_bVB_cosangle_Hframe, crossSection()/sumOfWeights()); scale(_h_jet_bb_cosangle_Hframe, crossSection()/sumOfWeights()); } /// This should take in the four-momenta of two b's (jets/hadrons) and a vector boson, for the process VB*->VBH with H->bb /// It should return 
the smallest angle between the virtual vector boson and one of the b's, in the rest frame of the Higgs boson. /// It should also return (as the second element of the vector) the angle between the b's, in the rest frame of the Higgs boson. vector<double> boostAngles(const FourMomentum& b1, const FourMomentum& b2, const FourMomentum& vb) { const FourMomentum higgsMomentum = b1 + b2; const FourMomentum virtualVBMomentum = higgsMomentum + vb; const LorentzTransform lt = LorentzTransform::mkFrameTransformFromBeta(higgsMomentum.betaVec()); const FourMomentum virtualVBMomentumBOOSTED = lt.transform(virtualVBMomentum); const FourMomentum b1BOOSTED = lt.transform(b1); const FourMomentum b2BOOSTED = lt.transform(b2); const double angle1 = b1BOOSTED.angle(virtualVBMomentumBOOSTED); const double angle2 = b2BOOSTED.angle(virtualVBMomentumBOOSTED); - const double anglebb = b1BOOSTED.angle(b2BOOSTED); + const double anglebb = mapAngle0ToPi(b1BOOSTED.angle(b2BOOSTED)); vector<double> rtn; rtn.push_back(angle1 < angle2 ?
angle1 : angle2); rtn.push_back(anglebb); return rtn; } //@} private: /// @name Histograms //@{ Histo1DPtr _h_Z_multiplicity, _h_W_multiplicity; Histo1DPtr _h_jet_bb_Delta_eta, _h_jet_bb_Delta_phi, _h_jet_bb_Delta_pT, _h_jet_bb_Delta_R; Histo1DPtr _h_jet_b_jet_eta, _h_jet_b_jet_multiplicity, _h_jet_b_jet_phi, _h_jet_b_jet_pT; Histo1DPtr _h_jet_H_eta_using_bb, _h_jet_H_mass_using_bb, _h_jet_H_phi_using_bb, _h_jet_H_pT_using_bb; Histo1DPtr _h_jet_eta, _h_jet_multiplicity, _h_jet_phi, _h_jet_pT; Histo1DPtr _h_jet_VBbb_Delta_eta, _h_jet_VBbb_Delta_phi, _h_jet_VBbb_Delta_pT, _h_jet_VBbb_Delta_R; Histo1DPtr _h_VB_eta, _h_VB_mass, _h_VB_phi, _h_VB_pT; Histo1DPtr _h_jet_bVB_angle_Hframe, _h_jet_bb_angle_Hframe, _h_jet_bVB_cosangle_Hframe, _h_jet_bb_cosangle_Hframe; //Histo1DPtr _h_jet_cuts_bb_deltaR_v_HpT; //@} }; // This global object acts as a hook for the plugin system DECLARE_RIVET_PLUGIN(MC_VH2BB); } diff --git a/analyses/pluginMC/MC_WJETS.cc b/analyses/pluginMC/MC_WJETS.cc --- a/analyses/pluginMC/MC_WJETS.cc +++ b/analyses/pluginMC/MC_WJETS.cc @@ -1,130 +1,129 @@ // -*- C++ -*- #include "Rivet/Analyses/MC_JetAnalysis.hh" #include "Rivet/Projections/WFinder.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Analysis.hh" namespace Rivet { /// @brief MC validation analysis for W + jets events class MC_WJETS : public MC_JetAnalysis { public: /// Default constructor MC_WJETS(string name="MC_WJETS") : MC_JetAnalysis(name, 4, "Jets") { _dR=0.2; _lepton=PID::ELECTRON; } /// @name Analysis methods //@{ /// Book histograms void init() { FinalState fs; WFinder wfinder(fs, Cuts::abseta < 3.5 && Cuts::pT > 25*GeV, _lepton, 60.0*GeV, 100.0*GeV, 25.0*GeV, _dR); declare(wfinder, "WFinder"); FastJets jetpro(wfinder.remainingFinalState(), FastJets::ANTIKT, 0.4); declare(jetpro, "Jets"); book(_h_W_jet1_deta ,"W_jet1_deta", 50, -5.0, 5.0); book(_h_W_jet1_dR ,"W_jet1_dR", 25, 0.5, 7.0); MC_JetAnalysis::init(); } /// Do the analysis void analyze(const Event & e) { - const double 
weight = 1.0; const WFinder& wfinder = apply<WFinder>(e, "WFinder"); if (wfinder.bosons().size() != 1) { vetoEvent; } FourMomentum wmom(wfinder.bosons().front().momentum()); const Jets& jets = apply<FastJets>(e, "Jets").jetsByPt(_jetptcut); if (jets.size() > 0) { - _h_W_jet1_deta->fill(wmom.eta()-jets[0].eta(), weight); - _h_W_jet1_dR->fill(deltaR(wmom, jets[0].momentum()), weight); + _h_W_jet1_deta->fill(wmom.eta()-jets[0].eta()); + _h_W_jet1_dR->fill(deltaR(wmom, jets[0].momentum())); } MC_JetAnalysis::analyze(e); } /// Finalize void finalize() { scale(_h_W_jet1_deta, crossSection()/picobarn/sumOfWeights()); scale(_h_W_jet1_dR, crossSection()/picobarn/sumOfWeights()); MC_JetAnalysis::finalize(); } //@} protected: /// @name Parameters for specialised e/mu and dressed/bare subclassing //@{ double _dR; PdgId _lepton; //@} private: /// @name Histograms //@{ Histo1DPtr _h_W_jet1_deta; Histo1DPtr _h_W_jet1_dR; //@} }; struct MC_WJETS_EL : public MC_WJETS { MC_WJETS_EL() : MC_WJETS("MC_WJETS_EL") { _dR = 0.2; _lepton = PID::ELECTRON; } }; struct MC_WJETS_EL_BARE : public MC_WJETS { MC_WJETS_EL_BARE() : MC_WJETS("MC_WJETS_EL_BARE") { _dR = 0; _lepton = PID::ELECTRON; } }; struct MC_WJETS_MU : public MC_WJETS { MC_WJETS_MU() : MC_WJETS("MC_WJETS_MU") { _dR = 0.2; _lepton = PID::MUON; } }; struct MC_WJETS_MU_BARE : public MC_WJETS { MC_WJETS_MU_BARE() : MC_WJETS("MC_WJETS_MU_BARE") { _dR = 0; _lepton = PID::MUON; } }; // The hooks for the plugin system DECLARE_RIVET_PLUGIN(MC_WJETS); DECLARE_RIVET_PLUGIN(MC_WJETS_EL); DECLARE_RIVET_PLUGIN(MC_WJETS_EL_BARE); DECLARE_RIVET_PLUGIN(MC_WJETS_MU); DECLARE_RIVET_PLUGIN(MC_WJETS_MU_BARE); } diff --git a/bin/make-pgfplots b/bin/make-pgfplots --- a/bin/make-pgfplots +++ b/bin/make-pgfplots @@ -1,2819 +1,2819 @@ #! /usr/bin/env python """\ %(prog)s [options] file.dat [file2.dat ...] TODO * Optimise output for e.g.
lots of same-height bins in a row * Add a RatioFullRange directive to show the full range of error bars + MC envelope in the ratio * Tidy LaTeX-writing code -- faster to compile one doc only, then split it? * Handle boolean values flexibly (yes, no, true, false, etc. as well as 1, 0) """ from __future__ import print_function ## ## This program is copyright by Christian Gutschow and ## the Rivet team https://rivet.hepforge.org. It may be used ## for scientific and private purposes. Patches are welcome, but please don't ## redistribute changed versions yourself. ## ## Check the Python version import sys if sys.version_info[:3] < (2,6,0): print("make-plots requires Python version >= 2.6.0... exiting") sys.exit(1) ## Try to rename the process on Linux try: import ctypes libc = ctypes.cdll.LoadLibrary('libc.so.6') libc.prctl(15, 'make-plots', 0, 0, 0) except Exception as e: pass import os, logging, re import tempfile import getopt import string import copy from math import * ## Regex patterns pat_begin_block = re.compile(r'^#+\s*BEGIN ([A-Z0-9_]+) ?(\S+)?') pat_end_block = re.compile('^#+\s*END ([A-Z0-9_]+)') pat_comment = re.compile('^#|^\s*$') pat_property = re.compile('^(\w+?)=(.*)$') pat_property_opt = re.compile('^ReplaceOption\[(\w+=\w+)\]=(.*)$') pat_path_property = re.compile('^(\S+?)::(\w+?)=(.*)$') pat_options = re.compile(r"((?::\w+=[^:/]+)+)") def fuzzyeq(a, b, tolerance=1e-6): "Fuzzy equality comparison function for floats, with given fractional tolerance" # if type(a) is not float or type(a) is not float: # print(a, b) if (a == 0 and abs(b) < 1e-12) or (b == 0 and abs(a) < 1e-12): return True return 2.0*abs(a-b) < tolerance * abs(a+b) def inrange(x, a, b): return x >= a and x < b def floatify(x): if type(x) is str: x = x.split() if not hasattr(x, "__len__"): x = [x] x = [float(a) for a in x] return x[0] if len(x) == 1 else x def floatpair(x): if type(x) is str: x = x.split() if hasattr(x, "__len__"): assert len(x) == 2 return [float(a) for a in x] return 
[float(x), float(x)] def is_end_marker(line, blockname): m = pat_end_block.match(line) return m and m.group(1) == blockname def is_comment(line): return pat_comment.match(line) is not None def checkColor(line): if '[RGB]' in line: # e.g. '{[RGB]{1,2,3}}' if line[0] == '{' and line[-1] == '}': line = line[1:-1] # i.e. '[RGB]{1,2,3}' composition = line.split('{')[1][:-1] # e.g. '1,2,3' line = '{rgb,255:red,%s;green,%s;blue,%s}' % tuple(composition.split(',')) return line class Described(object): "Inherited functionality for objects holding a 'props' dictionary" def __init__(self): pass def has_attr(self, key): return key in self.props def set_attr(self, key, val): self.props[key] = val def attr(self, key, default=None): return self.props.get(key, default) def attr_bool(self, key, default=None): x = self.attr(key, default) if str(x).lower() in ["1", "true", "yes", "on"]: return True if str(x).lower() in ["0", "false", "no", "off"]: return False return None def attr_int(self, key, default=None): x = self.attr(key, default) try: x = int(x) except: x = None return x def attr_float(self, key, default=None): x = self.attr(key, default) try: x = float(x) except: x = None return x #def doGrid(self, pre = ''): # grid_major = self.attr_bool(pre + 'Grid', False) or self.attr_bool(pre + 'GridMajor', False) # grid_minor = self.attr_bool(pre + 'GridMinor', False) # if grid_major and grid_minor: return 'both' # elif grid_major: return 'major' # elif grid_minor: return 'minor' def ratio_names(self, skipFirst = False): offset = 1 if skipFirst else 0 return [ ('RatioPlot%s' % (str(i) if i else ''), i) for i in range(skipFirst, 9) ] def legend_names(self, skipFirst = False): offset = 1 if skipFirst else 0 return [ ('Legend%s' % (str(i) if i else ''), i) for i in range(skipFirst, 9) ] class InputData(Described): def __init__(self, filename): self.filename=filename if not self.filename.endswith(".dat"): self.filename += ".dat" self.props = {} self.histos = {} self.ratios = {} 
self.special = {} self.functions = {} # not sure what this is good for ... yet self.histomangler = {} self.normalised = False self.props['_OptSubs'] = { } self.props['is2dim'] = False # analyse input dat file f = open(self.filename) for line in f: m = pat_begin_block.match(line) if m: name, path = m.group(1,2) if path is None and name != 'PLOT': raise Exception('BEGIN sections need a path name.') if name == 'PLOT': self.read_input(f); elif name == 'SPECIAL': self.special[path] = Special(f) elif name == 'HISTOGRAM' or name == 'HISTOGRAM2D': self.histos[path] = Histogram(f, p=path) self.props['is2dim'] = self.histos[path].is2dim self.histos[path].zlog = self.attr_bool('LogZ') if self.attr_bool('Is3D', 0): self.histos[path].zlog = False if not self.histos[path].getName() == '': newname = self.histos[path].getName() self.histos[newname] = copy.deepcopy(self.histos[path]) del self.histos[path] elif name == 'HISTO1D': self.histos[path] = Histo1D(f, p=path) if not self.histos[path].getName() == '': newname = self.histos[path].getName() self.histos[newname] = copy.deepcopy(self.histos[path]) del self.histos[path] elif name == 'HISTO2D': self.histos[path] = Histo2D(f, p=path) self.props['is2dim'] = True self.histos[path].zlog = self.attr_bool('LogZ') if self.attr_bool('Is3D', 0): self.histos[path].zlog = False if not self.histos[path].getName() == '': newname = self.histos[path].getName() self.histos[newname] = copy.deepcopy(self.histos[path]) del self.histos[path] elif name == 'HISTOGRAMMANGLER': self.histomangler[path] = PlotFunction(f) elif name == 'COUNTER': self.histos[path] = Counter(f, p=path) elif name == 'VALUE': self.histos[path] = Value(f, p=path) elif name == 'FUNCTION': self.functions[path] = Function(f) f.close() self.apply_config_files(args.CONFIGFILES) self.props.setdefault('PlotSizeX', 10.) self.props.setdefault('PlotSizeY', 6.) if self.props['is2dim']: self.props['PlotSizeX'] -= 1.7 self.props['PlotSizeY'] = 10. 
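The parsing loop above dispatches on '# BEGIN NAME path' markers via pat_begin_block. A minimal, self-contained sketch of that scan (same regex as the script; the helper name is mine):

```python
import re

# Same pattern as defined near the top of the script
pat_begin_block = re.compile(r'^#+\s*BEGIN ([A-Z0-9_]+) ?(\S+)?')

def scan_begin_blocks(lines):
    # Return (block name, optional path) for each BEGIN marker,
    # mirroring the dispatch in InputData.__init__
    return [m.group(1, 2) for m in map(pat_begin_block.match, lines) if m]
```

For example, scan_begin_blocks(["# BEGIN PLOT", "## BEGIN HISTO1D /MC_WJETS/W_jet1_dR"]) yields [('PLOT', None), ('HISTO1D', '/MC_WJETS/W_jet1_dR')], which is why a missing path is an error for every section type except PLOT.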
self.props['RatioPlot'] = '0' if self.props.get('PlotSize', '') != '': plotsizes = self.props['PlotSize'].split(',') self.props['PlotSizeX'] = float(plotsizes[0]) self.props['PlotSizeY'] = float(plotsizes[1]) if len(plotsizes) == 3: self.props['RatioPlotSizeY'] = float(plotsizes[2]) del self.props['PlotSize'] self.props['RatioPlotSizeY'] = 0. # default is no ratio if self.attr('MainPlot') == '0': ## has RatioPlot, but no MainPlot self.props['PlotSizeY'] = 0. # size of MainPlot self.props['RatioPlot'] = '1' #< don't allow both to be zero! if self.attr_bool('RatioPlot'): if self.has_attr('RatioPlotYSize') and self.attr('RatioPlotYSize') != '': self.props['RatioPlotSizeY'] = self.attr_float('RatioPlotYSize') else: self.props['RatioPlotSizeY'] = 6. if self.attr_bool('MainPlot') else 3. if self.props['is2dim']: self.props['RatioPlotSizeY'] *= 2. for rname, _ in self.ratio_names(True): if self.attr_bool(rname, False): if self.props.get(rname+'YSize') != '': self.props[rname+'SizeY'] = self.attr_float(rname+'YSize') else: self.props[rname+'SizeY'] = 3. if self.attr('MainPlot') == '0' else 6. if self.props['is2dim']: self.props[rname+'SizeY'] *= 2. 
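The size bookkeeping above resolves a comma-separated 'PlotSize=X,Y[,RatioY]' override against the defaults. A simplified sketch of that precedence (hypothetical function; it ignores the 2-dim and MainPlot special cases):

```python
def resolve_plot_size(props):
    # Defaults first, then an explicit PlotSize override wins,
    # roughly as in InputData.__init__
    props.setdefault('PlotSizeX', 10.)
    props.setdefault('PlotSizeY', 6.)
    props.setdefault('RatioPlotSizeY', 0.)  # default is no ratio panel
    if props.get('PlotSize', ''):
        sizes = [float(x) for x in props.pop('PlotSize').split(',')]
        props['PlotSizeX'], props['PlotSizeY'] = sizes[0], sizes[1]
        if len(sizes) == 3:
            props['RatioPlotSizeY'] = sizes[2]
    return props
```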
## Ensure numbers, not strings self.props['PlotSizeX'] = float(self.props['PlotSizeX']) self.props['PlotSizeY'] = float(self.props['PlotSizeY']) self.props['RatioPlotSizeY'] = float(self.props['RatioPlotSizeY']) # self.props['TopMargin'] = float(self.props['TopMargin']) # self.props['BottomMargin'] = float(self.props['BottomMargin']) self.props['LogX'] = self.attr_bool('LogX', 0) self.props['LogY'] = self.attr_bool('LogY', 0) self.props['LogZ'] = self.attr_bool('LogZ', 0) for rname, _ in self.ratio_names(True): self.props[rname+'LogY'] = self.attr_bool(rname+'LogY', 0) self.props[rname+'LogZ'] = self.attr_bool(rname+'LogZ', 0) if self.has_attr('Rebin'): for key in self.histos: self.histos[key].props['Rebin'] = self.props['Rebin'] if self.has_attr('ConnectBins'): for key in self.histos: self.histos[key].props['ConnectBins'] = self.props['ConnectBins'] self.histo_sorting('DrawOnly') for curves, _ in self.ratio_names(): if self.has_attr(curves+'DrawOnly'): self.histo_sorting(curves+'DrawOnly') else: self.props[curves+'DrawOnly'] = self.props['DrawOnly'] ## Inherit various values from histograms if not explicitly set #for k in ['LogX', 'LogY', 'LogZ', # 'XLabel', 'YLabel', 'ZLabel', # 'XCustomMajorTicks', 'YCustomMajorTicks', 'ZCustomMajorTicks']: # self.inherit_from_histos(k) @property def is2dim(self): return self.attr_bool("is2dim", False) @is2dim.setter def is2dim(self, val): self.set_attr("is2dim", val) @property def drawonly(self): x = self.attr("DrawOnly") if type(x) is str: self.drawonly = x #< use setter to listify return x if x else [] @drawonly.setter def drawonly(self, val): if type(val) is str: val = val.strip().split() self.set_attr("DrawOnly", val) @property def stacklist(self): x = self.attr("Stack") if type(x) is str: self.stacklist = x #< use setter to listify return x if x else [] @stacklist.setter def stacklist(self, val): if type(val) is str: val = val.strip().split() self.set_attr("Stack", val) @property def plotorder(self): x = 
self.attr("PlotOrder") if type(x) is str: self.plotorder = x #< use setter to listify return x if x else [] @plotorder.setter def plotorder(self, val): if type(val) is str: val = val.strip().split() self.set_attr("PlotOrder", val) @property def plotsizex(self): return self.attr_float("PlotSizeX") @plotsizex.setter def plotsizex(self, val): self.set_attr("PlotSizeX", val) @property def plotsizey(self): return self.attr_float("PlotSizeY") @plotsizey.setter def plotsizey(self, val): self.set_attr("PlotSizeY", val) @property def plotsize(self): return [self.plotsizex, self.plotsizey] @plotsize.setter def plotsize(self, val): if type(val) is str: val = [float(x) for x in val.split(",")] assert len(val) == 2 self.plotsizex = val[0] self.plotsizey = val[1] @property def ratiosizey(self): return self.attr_float("RatioPlotSizeY") @ratiosizey.setter def ratiosizey(self, val): self.set_attr("RatioPlotSizeY", val) @property def scale(self): return self.attr_float("Scale") @scale.setter def scale(self, val): self.set_attr("Scale", val) @property def xmin(self): return self.attr_float("XMin") @xmin.setter def xmin(self, val): self.set_attr("XMin", val) @property def xmax(self): return self.attr_float("XMax") @xmax.setter def xmax(self, val): self.set_attr("XMax", val) @property def xrange(self): return [self.xmin, self.xmax] @xrange.setter def xrange(self, val): if type(val) is str: val = [float(x) for x in val.split(",")] assert len(val) == 2 self.xmin = val[0] self.xmax = val[1] @property def ymin(self): return self.attr_float("YMin") @ymin.setter def ymin(self, val): self.set_attr("YMin", val) @property def ymax(self): return self.attr_float("YMax") @ymax.setter def ymax(self, val): self.set_attr("YMax", val) @property def yrange(self): return [self.ymin, self.ymax] @yrange.setter def yrange(self, val): if type(val) is str: val = [float(y) for y in val.split(",")] assert len(val) == 2 self.ymin = val[0] self.ymax = val[1] # TODO: add more rw properties for plotsize(x,y), 
ratiosize(y), # show_mainplot, show_ratioplot, show_legend, log(x,y,z), rebin, # drawonly, legendonly, plotorder, stack, # label(x,y,z), majorticks(x,y,z), minorticks(x,y,z), # min(x,y,z), max(x,y,z), range(x,y,z) def getLegendPos(self, prefix = ''): xpos = self.attr_float(prefix+'LegendXPos', 0.95 if self.getLegendAlign() == 'right' else 0.53) ypos = self.attr_float(prefix+'LegendYPos', 0.93) return (xpos, ypos) def getLegendAlign(self, prefix = ''): la = self.attr(prefix+'LegendAlign', 'left') if la == 'l': return 'left' elif la == 'c': return 'center' elif la == 'r': return 'right' else: return la #def inherit_from_histos(self, k): # """Note: this will inherit the key from a random histogram: # only use if you're sure all histograms have this key!""" # if k not in self.props: # h = list(self.histos.values())[0] # if k in h.props: # self.props[k] = h.props[k] def read_input(self, f): for line in f: if is_end_marker(line, 'PLOT'): break elif is_comment(line): continue m = pat_property.match(line) m_opt = pat_property_opt.match(line) if m_opt: opt_old, opt_new = m_opt.group(1,2) self.props['_OptSubs'][opt_old.strip()] = opt_new.strip() elif m: prop, value = m.group(1,2) prop = prop.strip() value = value.strip() if prop in self.props: logging.debug("Overwriting property %s = %s -> %s" % (prop, self.props[prop], value)) ## Use strip here to deal with DOS newlines containing \r self.props[prop.strip()] = value.strip() def apply_config_files(self, conffiles): """Use config file to overwrite cosmetic properties.""" if conffiles is not None: for filename in conffiles: cf = open(filename, 'r') lines = cf.readlines() for i in range(len(lines)): ## First evaluate PLOT sections m = pat_begin_block.match(lines[i]) if m and m.group(1) == 'PLOT' and re.match(m.group(2),self.filename): while i < len(lines)-1: i += 1 if is_end_marker(lines[i], 'PLOT'): break elif is_comment(lines[i]): continue m = pat_property.match(lines[i]) if m: prop, value = m.group(1,2) if prop in self.props: logging.debug("Overwriting property %s = %s -> %s" % (prop, self.props[prop], value)) ## Use strip here to deal with DOS newlines containing \r self.props[prop.strip()] = value.strip() elif is_comment(lines[i]): continue else: ## Then
evaluate path-based settings, e.g. for HISTOGRAMs m = pat_path_property.match(lines[i]) if m: regex, prop, value = m.group(1,2,3) for obj_dict in [self.special, self.histos, self.functions]: for path, obj in obj_dict.items(): if re.match(regex, path): ## Use strip here to deal with DOS newlines containing \r obj.props.update({prop.strip() : value.strip()}) cf.close() def histo_sorting(self, curves): """Determine in what order to draw curves.""" histoordermap = {} histolist = self.histos.keys() if self.has_attr(curves): histolist = filter(self.histos.keys().count, self.attr(curves).strip().split()) for histo in histolist: order = 0 if self.histos[histo].has_attr('PlotOrder'): order = int(self.histos[histo].attr['PlotOrder']) if not order in histoordermap: histoordermap[order] = [] histoordermap[order].append(histo) sortedhistolist = [] for i in sorted(histoordermap.keys()): sortedhistolist.extend(histoordermap[i]) self.props[curves] = sortedhistolist class Plot(object): def __init__(self): #, inputdata): self.customCols = {} def panel_header(self, **kwargs): out = '' out += ('\\begin{axis}[\n') out += ('at={(0,%4.3fcm)},\n' % kwargs['PanelOffset']) out += ('xmode=%s,\n' % kwargs['Xmode']) out += ('ymode=%s,\n' % kwargs['Ymode']) #if kwargs['Zmode']: out += ('zmode=log,\n') out += ('scale only axis=true,\n') out += ('scaled ticks=false,\n') out += ('clip marker paths=true,\n') out += ('axis on top,\n') out += ('axis line style={line width=0.3pt},\n') out += ('height=%scm,\n' % kwargs['PanelHeight']) out += ('width=%scm,\n' % kwargs['PanelWidth']) out += ('xmin=%s,\n' % kwargs['Xmin']) out += ('xmax=%s,\n' % kwargs['Xmax']) out += ('ymin=%s,\n' % kwargs['Ymin']) out += ('ymax=%s,\n' % kwargs['Ymax']) if kwargs['is2D']: out += ('zmin=%s,\n' % kwargs['Zmin']) out += ('zmax=%s,\n' % kwargs['Zmax']) #out += ('legend style={\n') #out += (' draw=none, fill=none, anchor = north west,\n') #out += (' at={(%4.3f,%4.3f)},\n' % kwargs['LegendPos']) #out += ('},\n') #out += 
('legend cell align=%s,\n' % kwargs['LegendAlign']) #out += ('legend image post style={sharp plot, -},\n') if kwargs['is2D']: if kwargs['is3D']: hrotate = 45 + kwargs['HRotate'] vrotate = 30 + kwargs['VRotate'] out += ('view={%i}{%s}, zticklabel pos=right,\n' % (hrotate, vrotate)) else: out += ('view={0}{90}, colorbar,\n') out += ('colormap/%s,\n' % kwargs['ColorMap']) if not kwargs['is3D'] and kwargs['Zmode']: out += ('colorbar style={yticklabel=$\\,10^{\\pgfmathprintnumber{\\tick}}$},\n') #if kwargs['Grid']: # out += ('grid=%s,\n' % kwargs['Grid']) for axis, label in kwargs['Labels'].iteritems(): out += ('%s={%s},\n' % (axis.lower(), label)) if kwargs['XLabelSep'] != None: if not kwargs['is3D']: out += ('xlabel style={at={(1,0)},below left,yshift={-%4.3fcm}},\n' % kwargs['XLabelSep']) out += ('xticklabel shift=%4.3fcm,\n' % kwargs['XTickShift']) else: out += ('xticklabels={,,},\n') if kwargs['YLabelSep'] != None: if not kwargs['is3D']: out += ('ylabel style={at={(0,1)},left,yshift={%4.3fcm}},\n' % kwargs['YLabelSep']) out += ('yticklabel shift=%4.3fcm,\n' % kwargs['YTickShift']) out += ('major tick length={%4.3fcm},\n' % kwargs['MajorTickLength']) out += ('minor tick length={%4.3fcm},\n' % kwargs['MinorTickLength']) # check if 'number of minor tick divisions' is specified for axis, nticks in kwargs['MinorTicks'].iteritems(): if nticks: out += ('minor %s tick num=%i,\n' % (axis.lower(), nticks)) # check if actual major/minor tick divisions have been specified out += ('max space between ticks=20,\n') for axis, tickinfo in kwargs['CustomTicks'].iteritems(): majorlabels, majorticks, minorticks = tickinfo if len(minorticks): out += ('minor %stick={%s},\n' % (axis.lower(), ','.join(minorticks))) if len(majorticks): if float(majorticks[0]) > float(kwargs['%smin' % axis]): majorticks = [ str(2 * float(majorticks[0]) - float(majorticks[1])) ] + majorticks if len(majorlabels): majorlabels = [ '.' 
] + majorlabels # dummy label if float(majorticks[-1]) < float(kwargs['%smax' % axis]): majorticks.append(str(2 * float(majorticks[-1]) - float(majorticks[-2]))) if len(majorlabels): majorlabels.append('.') # dummy label out += ('%stick={%s},\n' % (axis.lower(), ','.join(majorticks))) if kwargs['NeedsXLabels'] and len(majorlabels): out += ('%sticklabels={{%s}},\n' % (axis.lower(), '},{'.join(majorlabels))) out += ('every %s tick/.style={black},\n' % axis.lower()) out += (']\n') return out def panel_footer(self): out = '' out += ('\\end{axis}\n') return out def set_normalisation(self, inputdata): if inputdata.normalised: return for method in ['NormalizeToIntegral', 'NormalizeToSum']: if inputdata.has_attr(method): for key in inputdata.props['DrawOnly']: if not inputdata.histos[key].has_attr(method): inputdata.histos[key].props[method] = inputdata.props[method] if inputdata.has_attr('Scale'): for key in inputdata.props['DrawOnly']: inputdata.histos[key].props['Scale'] = inputdata.attr_float('Scale') for key in inputdata.histos.keys(): inputdata.histos[key].mangle_input() inputdata.normalised = True def stack_histograms(self, inputdata): if inputdata.has_attr('Stack'): stackhists = [h for h in inputdata.attr('Stack').strip().split() if h in inputdata.histos] previous = '' for key in stackhists: if previous != '': inputdata.histos[key].add(inputdata.histos[previous]) previous = key def set_histo_options(self, inputdata): if inputdata.has_attr('ConnectGaps'): for key in inputdata.histos.keys(): if not inputdata.histos[key].has_attr('ConnectGaps'): inputdata.histos[key].props['ConnectGaps'] = inputdata.props['ConnectGaps'] # Counter and Value only have dummy x-axis, ticks wouldn't make sense here, so suppress them: if 'Value object' in str(inputdata.histos) or 'Counter object' in str(inputdata.histos): inputdata.props['XCustomMajorTicks'] = '' inputdata.props['XCustomMinorTicks'] = '' def set_borders(self, inputdata): self.set_xmax(inputdata) self.set_xmin(inputdata)
self.set_ymax(inputdata) self.set_ymin(inputdata) self.set_zmax(inputdata) self.set_zmin(inputdata) inputdata.props['Borders'] = (self.xmin, self.xmax, self.ymin, self.ymax, self.zmin, self.zmax) def set_xmin(self, inputdata): self.xmin = inputdata.xmin if self.xmin is None: xmins = [inputdata.histos[h].getXMin() for h in inputdata.props['DrawOnly']] self.xmin = min(xmins) if xmins else 0.0 def set_xmax(self,inputdata): self.xmax = inputdata.xmax if self.xmax is None: xmaxs = [inputdata.histos[h].getXMax() for h in inputdata.props['DrawOnly']] self.xmax = max(xmaxs) if xmaxs else 1.0 def set_ymin(self,inputdata): if inputdata.ymin is not None: self.ymin = inputdata.ymin else: ymins = [inputdata.histos[i].getYMin(self.xmin, self.xmax, inputdata.props['LogY']) for i in inputdata.attr('DrawOnly')] minymin = min(ymins) if ymins else 0.0 if inputdata.props['is2dim']: self.ymin = minymin else: showzero = inputdata.attr_bool("ShowZero", True) if showzero: self.ymin = 0. if minymin > -1e-4 else 1.1*minymin else: self.ymin = 1.1*minymin if minymin < -1e-4 else 0 if minymin < 1e-4 else 0.9*minymin if inputdata.props['LogY']: ymins = [ymin for ymin in ymins if ymin > 0.0] if not ymins: if self.ymax == 0: self.ymax = 1 ymins.append(2e-7*self.ymax) minymin = min(ymins) fullrange = args.FULL_RANGE if inputdata.has_attr('FullRange'): fullrange = inputdata.attr_bool('FullRange') self.ymin = minymin/1.7 if fullrange else max(minymin/1.7, 2e-7*self.ymax) if self.ymin == self.ymax: self.ymin -= 1 self.ymax += 1 def set_ymax(self,inputdata): if inputdata.has_attr('YMax'): self.ymax = inputdata.attr_float('YMax') else: ymaxs = [inputdata.histos[h].getYMax(self.xmin, self.xmax) for h in inputdata.attr('DrawOnly')] self.ymax = max(ymaxs) if ymaxs else 1.0 if not inputdata.is2dim: self.ymax *= (1.7 if inputdata.attr_bool('LogY') else 1.1) def set_zmin(self,inputdata): if inputdata.has_attr('ZMin'): self.zmin = inputdata.attr_float('ZMin') else: zmins =
[inputdata.histos[i].getZMin(self.xmin, self.xmax, self.ymin, self.ymax) for i in inputdata.attr('DrawOnly')] minzmin = min(zmins) if zmins else 0.0 self.zmin = minzmin if zmins: showzero = inputdata.attr_bool('ShowZero', True) if showzero: self.zmin = 0 if minzmin > -1e-4 else 1.1*minzmin else: self.zmin = 1.1*minzmin if minzmin < -1e-4 else 0. if minzmin < 1e-4 else 0.9*minzmin if inputdata.attr_bool('LogZ', False): zmins = [zmin for zmin in zmins if zmin > 0] if not zmins: if self.zmax == 0: self.zmax = 1 zmins.append(2e-7*self.zmax) minzmin = min(zmins) fullrange = inputdata.attr_bool("FullRange", args.FULL_RANGE) self.zmin = minzmin/1.7 if fullrange else max(minzmin/1.7, 2e-7*self.zmax) if self.zmin == self.zmax: self.zmin -= 1 self.zmax += 1 def set_zmax(self,inputdata): self.zmax = inputdata.attr_float('ZMax') if self.zmax is None: zmaxs = [inputdata.histos[h].getZMax(self.xmin, self.xmax, self.ymin, self.ymax) for h in inputdata.attr('DrawOnly')] self.zmax = max(zmaxs) if zmaxs else 1.0 def getTicks(self, inputdata, axis): majorticks = []; majorlabels = [] ticktype = '%sCustomMajorTicks' % axis if inputdata.attr(ticktype): ticks = inputdata.attr(ticktype).strip().split() if not len(ticks) % 2: for i in range(0,len(ticks),2): majorticks.append(ticks[i]) majorlabels.append(ticks[i+1]) minorticks = [] ticktype = '%sCustomMinorTicks' % axis if inputdata.attr(ticktype): ticks = inputdata.attr(ticktype).strip().split() for val in ticks: minorticks.append(val) return (majorlabels, majorticks, minorticks) def draw(self): pass def write_header(self,inputdata): inputdata.props.setdefault('TopMargin', 0.8) inputdata.props.setdefault('LeftMargin', 1.4) inputdata.props.setdefault('BottomMargin', 0.75) inputdata.props.setdefault('RightMargin', 0.35) if inputdata.attr('is2dim'): inputdata.props['RightMargin'] += 1.8 papersizex = inputdata.attr_float('PlotSizeX') + 0.1 papersizex += inputdata.attr_float('LeftMargin') + inputdata.attr_float('RightMargin') papersizey = 
inputdata.attr_float('PlotSizeY') + 0.2 papersizey += inputdata.attr_float('TopMargin') + inputdata.attr_float('BottomMargin') for rname, _ in inputdata.ratio_names(): if inputdata.has_attr(rname+'SizeY'): papersizey += inputdata.attr_float(rname+'SizeY') # out = "" out += '\\documentclass{article}\n' if args.OUTPUT_FONT == "MINION": out += ('\\usepackage{minion}\n') elif args.OUTPUT_FONT == "PALATINO_OSF": out += ('\\usepackage[osf,sc]{mathpazo}\n') elif args.OUTPUT_FONT == "PALATINO": out += ('\\usepackage{mathpazo}\n') elif args.OUTPUT_FONT == "TIMES": out += ('\\usepackage{mathptmx}\n') elif args.OUTPUT_FONT == "HELVETICA": out += ('\\renewcommand{\\familydefault}{\\sfdefault}\n') out += ('\\usepackage{sfmath}\n') out += ('\\usepackage{helvet}\n') out += ('\\usepackage[symbolgreek]{mathastext}\n') for pkg in args.LATEXPKGS: out += ('\\usepackage{%s}\n' % pkg) out += ('\\usepackage[dvipsnames]{xcolor}\n') out += ('\\selectcolormodel{rgb}\n') out += ('\\definecolor{red}{HTML}{EE3311}\n') # (Google uses 'DC3912') out += ('\\definecolor{blue}{HTML}{3366FF}\n') out += ('\\definecolor{green}{HTML}{109618}\n') out += ('\\definecolor{orange}{HTML}{FF9900}\n') out += ('\\definecolor{lilac}{HTML}{990099}\n') out += ('\\usepackage{amsmath}\n') out += ('\\usepackage{amssymb}\n') out += ('\\usepackage{relsize}\n') out += ('\\usepackage{graphicx}\n') out += ('\\usepackage[dvips,\n') out += (' left=%4.3fcm,\n' % (inputdata.attr_float('LeftMargin')-0.45)) out += (' right=0cm,\n') out += (' top=%4.3fcm,\n' % (inputdata.attr_float('TopMargin')-0.70)) out += (' bottom=0cm,\n') out += (' paperwidth=%scm,paperheight=%scm\n' % (papersizex, papersizey)) out += (']{geometry}\n') if inputdata.has_attr('DefineColor'): out += ('% user defined colours\n') for color in inputdata.attr('DefineColor').split('\t'): out += ('%s\n' % color) col_count = 0 for obj in inputdata.histos: for col in inputdata.histos[obj].customCols: if col in self.customCols: # already seen, look up name 
inputdata.histos[obj].customCols[col] = self.customCols[col] elif ']{' in col: colname = 'MyColour%i' % col_count # assign custom name inputdata.histos[obj].customCols[col] = colname self.customCols[col] = colname col_count += 1 # remove outer {...} if present while col[0] == '{' and col[-1] == '}': col = col[1:-1] model, specs = tuple(col[1:-1].split(']{')) out += ('\\definecolor{%s}{%s}{%s}\n' % (colname, model, specs)) out += ('\\usepackage{pgfplots}\n') out += ('\\usepgfplotslibrary{fillbetween}\n') #out += ('\\usetikzlibrary{positioning,shapes.geometric,patterns}\n') out += ('\\usetikzlibrary{patterns}\n') out += ('\\pgfplotsset{ compat=1.16,\n') out += (' title style={at={(0,1)},right,yshift={0.10cm}},\n') out += ('}\n') out += ('\\begin{document}\n') out += ('\\pagestyle{empty}\n') out += ('\\begin{tikzpicture}[\n') out += (' inner sep=0,\n') out += (' trim axis left = %4.3f,\n' % (inputdata.attr_float('LeftMargin') + 0.1)) out += (' trim axis right,\n') out += (' baseline,') out += (' hatch distance/.store in=\\hatchdistance,\n') out += (' hatch distance=8pt,\n') out += (' hatch thickness/.store in=\\hatchthickness,\n') out += (' hatch thickness=1pt,\n') out += (']\n') out += ('\\makeatletter\n') out += ('\\pgfdeclarepatternformonly[\hatchdistance,\hatchthickness]{diagonal hatch}\n') out += ('{\\pgfqpoint{0pt}{0pt}}\n') out += ('{\\pgfqpoint{\\hatchdistance}{\\hatchdistance}}\n') out += ('{\\pgfpoint{\\hatchdistance-1pt}{\\hatchdistance-1pt}}%\n') out += ('{\n') out += (' \\pgfsetcolor{\\tikz@pattern@color}\n') out += (' \\pgfsetlinewidth{\\hatchthickness}\n') out += (' \\pgfpathmoveto{\\pgfqpoint{0pt}{0pt}}\n') out += (' \\pgfpathlineto{\\pgfqpoint{\\hatchdistance}{\\hatchdistance}}\n') out += (' \\pgfusepath{stroke}\n') out += ('}\n') out += ('\\pgfdeclarepatternformonly[\hatchdistance,\hatchthickness]{antidiagonal hatch}\n') out += ('{\\pgfqpoint{0pt}{0pt}}\n') out += ('{\\pgfqpoint{\\hatchdistance}{\\hatchdistance}}\n') out += 
('{\\pgfpoint{\\hatchdistance-1pt}{\\hatchdistance-1pt}}%\n') out += ('{\n') out += (' \\pgfsetcolor{\\tikz@pattern@color}\n') out += (' \\pgfsetlinewidth{\\hatchthickness}\n') out += (' \\pgfpathmoveto{\\pgfqpoint{0pt}{\\hatchdistance}}\n') out += (' \\pgfpathlineto{\\pgfqpoint{\\hatchdistance}{0pt}}\n') out += (' \\pgfusepath{stroke}\n') out += ('}\n') out += ('\\pgfdeclarepatternformonly[\hatchdistance,\hatchthickness]{cross hatch}\n') out += ('{\\pgfqpoint{0pt}{0pt}}\n') out += ('{\\pgfqpoint{\\hatchdistance}{\\hatchdistance}}\n') out += ('{\\pgfpoint{\\hatchdistance-1pt}{\\hatchdistance-1pt}}%\n') out += ('{\n') out += (' \\pgfsetcolor{\\tikz@pattern@color}\n') out += (' \\pgfsetlinewidth{\\hatchthickness}\n') out += (' \\pgfpathmoveto{\\pgfqpoint{0pt}{0pt}}\n') out += (' \\pgfpathlineto{\\pgfqpoint{\\hatchdistance}{\\hatchdistance}}\n') out += (' \\pgfusepath{stroke}\n') out += (' \\pgfsetcolor{\\tikz@pattern@color}\n') out += (' \\pgfsetlinewidth{\\hatchthickness}\n') out += (' \\pgfpathmoveto{\\pgfqpoint{0pt}{\\hatchdistance}}\n') out += (' \\pgfpathlineto{\\pgfqpoint{\\hatchdistance}{0pt}}\n') out += (' \\pgfusepath{stroke}\n') out += ('}\n') out += ('\makeatother\n') if inputdata.attr_bool('is2dim'): colorseries = '{hsb}{grad}[rgb]{0,0,1}{-.700,0,0}' if inputdata.attr('ColorSeries', ''): colorseries = inputdata.attr('ColorSeries') out += ('\\definecolorseries{gradientcolors}%s\n' % colorseries) out += ('\\resetcolorseries[130]{gradientcolors}\n') return out def write_footer(self): out = "" out += ('\\end{tikzpicture}\n') out += ('\\end{document}\n') return out class MainPlot(Plot): def __init__(self, inputdata): self.name = 'MainPlot' inputdata.props['PlotStage'] = 'MainPlot' self.set_normalisation(inputdata) self.stack_histograms(inputdata) do_gof = inputdata.props.get('GofLegend', '0') == '1' or inputdata.props.get('GofFrame', '') != '' do_taylor = inputdata.props.get('TaylorPlot', '0') == '1' if do_gof and not do_taylor: self.calculate_gof(inputdata) 
self.set_histo_options(inputdata) self.set_borders(inputdata) self.yoffset = inputdata.props['PlotSizeY'] def draw(self, inputdata): out = "" out += ('\n%\n% MainPlot\n%\n') offset = 0. for rname, i in inputdata.ratio_names(): if inputdata.has_attr(rname+'SizeY'): offset += inputdata.attr_float(rname+'SizeY') labels = self.getLabels(inputdata) out += self.panel_header( PanelOffset = offset, Xmode = 'log' if inputdata.attr_bool('LogX') else 'normal', Ymode = 'log' if inputdata.attr_bool('LogY') else 'normal', Zmode = inputdata.attr_bool('LogZ'), PanelHeight = inputdata.props['PlotSizeY'], PanelWidth = inputdata.props['PlotSizeX'], Xmin = self.xmin, Xmax = self.xmax, Ymin = self.ymin, Ymax = self.ymax, Zmin = self.zmin, Zmax = self.zmax, Labels = { l : inputdata.attr(l) for l in labels if inputdata.has_attr(l) }, XLabelSep = inputdata.attr_float('XLabelSep', 0.7) if 'XLabel' in labels else None, YLabelSep = inputdata.attr_float('YLabelSep', 1.2) if 'YLabel' in labels else None, XTickShift = inputdata.attr_float('XTickShift', 0.1) if 'XLabel' in labels else None, YTickShift = inputdata.attr_float('YTickShift', 0.1) if 'YLabel' in labels else None, MajorTickLength = inputdata.attr_float('MajorTickLength', 0.30), MinorTickLength = inputdata.attr_float('MinorTickLength', 0.15), MinorTicks = { axis : inputdata.attr_int('%sMinorTickMarks' % axis, 4) for axis in ['X', 'Y', 'Z'] }, CustomTicks = { axis : self.getTicks(inputdata, axis) for axis in ['X', 'Y', 'Z'] }, NeedsXLabels = self.needsXLabel(inputdata), #Grid = inputdata.doGrid(), is2D = inputdata.is2dim, is3D = inputdata.attr_bool('Is3D', 0), HRotate = inputdata.attr_int('HRotate', 0), VRotate = inputdata.attr_int('VRotate', 0), ColorMap = inputdata.attr('ColorMap', 'jet'), #LegendAlign = inputdata.getLegendAlign(), #LegendPos = inputdata.getLegendPos(), ) out += self.plot_object(inputdata) out += self.panel_footer() return out def plot_object(self, inputdata): out = "" if inputdata.attr_bool('DrawSpecialFirst', 
False): for s in inputdata.special.values(): out += s.draw(inputdata) if inputdata.attr_bool('DrawFunctionFirst', False): for f in inputdata.functions.values(): out += f.draw(inputdata, inputdata.props['Borders'][0], inputdata.props['Borders'][1]) for key in inputdata.props['DrawOnly']: #add_legend = inputdata.attr_bool('Legend') out += inputdata.histos[key].draw() #add_legend) if not inputdata.attr_bool('DrawSpecialFirst', False): for s in inputdata.special.values(): out += s.draw(inputdata) if not inputdata.attr_bool('DrawFunctionFirst', False): for f in inputdata.functions.values(): out += f.draw(inputdata, inputdata.props['Borders'][0], inputdata.props['Borders'][1]) for lname, i in inputdata.legend_names(): if inputdata.attr_bool(lname, False): legend = Legend(inputdata.props,inputdata.histos,inputdata.functions, lname, i) out += legend.draw() return out def needsXLabel(self, inputdata): if inputdata.attr('PlotTickLabels') == '0': return False # only draw the x-axis label if there are no ratio panels drawlabels = not any([ inputdata.attr_bool(rname) for rname, _ in inputdata.ratio_names() ]) return drawlabels def getLabels(self, inputdata): labels = ['Title', 'YLabel'] if self.needsXLabel(inputdata): labels.append('XLabel') if inputdata.props['is2dim']: labels.append('ZLabel') return labels def calculate_gof(self, inputdata): refdata = inputdata.props.get('GofReference') if refdata is None: refdata = inputdata.props.get('RatioPlotReference') if refdata is None: inputdata.props['GofLegend'] = '0' inputdata.props['GofFrame'] = '' return def pickcolor(gof): color = None colordefs = {} for i in inputdata.props.setdefault('GofFrameColor', '0:green 3:yellow 6:red!70').strip().split(): foo = i.split(':') if len(foo) != 2: continue colordefs[float(foo[0])] = foo[1] for col in sorted(colordefs.keys()): if gof >= col: color=colordefs[col] return color inputdata.props.setdefault('GofLegend', '0') inputdata.props.setdefault('GofFrame', '') inputdata.props.setdefault('FrameColor', None) for
key in inputdata.props['DrawOnly']: if key == refdata: continue if inputdata.props['GofLegend'] != '1' and key != inputdata.props['GofFrame']: continue if inputdata.props.get('GofType', 'chi2') != 'chi2': return gof = inputdata.histos[key].getChi2(inputdata.histos[refdata]) if key == inputdata.props['GofFrame'] and inputdata.props['FrameColor'] is None: inputdata.props['FrameColor'] = pickcolor(gof) if inputdata.histos[key].props.setdefault('Title', '') != '': inputdata.histos[key].props['Title'] += ', ' inputdata.histos[key].props['Title'] += '$\\chi^2/n=%1.2f$' % gof class TaylorPlot(Plot): def __init__(self, inputdata): self.refdata = inputdata.props['TaylorPlotReference'] self.calculate_taylorcoordinates(inputdata) def calculate_taylorcoordinates(self,inputdata): foo = inputdata.props['DrawOnly'].pop(inputdata.props['DrawOnly'].index(self.refdata)) inputdata.props['DrawOnly'].append(foo) for i in inputdata.props['DrawOnly']: print(i) print('meanbinval = ', inputdata.histos[i].getMeanBinValue()) print('sigmabinval = ', inputdata.histos[i].getSigmaBinValue()) print('chi2/nbins = ', inputdata.histos[i].getChi2(inputdata.histos[self.refdata])) print('correlation = ', inputdata.histos[i].getCorrelation(inputdata.histos[self.refdata])) print('distance = ', inputdata.histos[i].getRMSdistance(inputdata.histos[self.refdata])) class RatioPlot(Plot): def __init__(self, inputdata, i): self.number = i self.name='RatioPlot%s' % (str(i) if i else '') # initialise histograms even when no main plot self.set_normalisation(inputdata) self.refdata = inputdata.props[self.name+'Reference'] if self.refdata not in inputdata.histos: print('ERROR: %sReference=%s not found in:' % (self.name,self.refdata)) for i in inputdata.histos.keys(): print(' ', i) sys.exit(1) if not inputdata.has_attr('RatioPlotYOffset'): inputdata.props['RatioPlotYOffset'] = inputdata.props['PlotSizeY'] if not inputdata.has_attr(self.name + 'SameStyle'): inputdata.props[self.name+'SameStyle'] = '1'
self.yoffset = inputdata.props['RatioPlotYOffset'] + inputdata.props[self.name+'SizeY'] inputdata.props['PlotStage'] = self.name inputdata.props['RatioPlotYOffset'] = self.yoffset inputdata.props['PlotSizeY'] = inputdata.props[self.name+'SizeY'] inputdata.props['LogY'] = inputdata.props.get(self.name+"LogY", False) # TODO: It'd be nice if this wasn't so MC-specific rpmode = inputdata.props.get(self.name+'Mode', "mcdata") if rpmode=='deviation': inputdata.props['YLabel']='$(\\text{MC}-\\text{data})$' inputdata.props['YMin']=-2.99 inputdata.props['YMax']=2.99 elif rpmode=='delta': inputdata.props['YLabel']='\\delta' inputdata.props['YMin']=-0.5 inputdata.props['YMax']=0.5 elif rpmode=='deltapercent': inputdata.props['YLabel']='\\delta\\;[\\%]' inputdata.props['YMin']=-50. inputdata.props['YMax']=50. elif rpmode=='datamc': inputdata.props['YLabel']='Data/MC' inputdata.props['YMin']=0.5 inputdata.props['YMax']=1.5 else: inputdata.props['YLabel'] = 'MC/Data' inputdata.props['YMin'] = 0.5 inputdata.props['YMax'] = 1.5 if inputdata.has_attr(self.name+'YLabel'): inputdata.props['YLabel'] = inputdata.props[self.name+'YLabel'] if inputdata.has_attr(self.name+'YMin'): inputdata.props['YMin'] = inputdata.props[self.name+'YMin'] if inputdata.has_attr(self.name+'YMax'): inputdata.props['YMax'] = inputdata.props[self.name+'YMax'] if inputdata.has_attr(self.name+'YLabelSep'): inputdata.props['YLabelSep'] = inputdata.props[self.name+'YLabelSep'] if not inputdata.has_attr(self.name+'ErrorBandColor'): inputdata.props[self.name+'ErrorBandColor'] = 'yellow' if inputdata.props[self.name+'SameStyle']=='0': inputdata.histos[self.refdata].props['ErrorBandColor'] = inputdata.props[self.name+'ErrorBandColor'] inputdata.histos[self.refdata].props['ErrorBandOpacity'] = inputdata.props[self.name+'ErrorBandOpacity'] inputdata.histos[self.refdata].props['ErrorBands'] = '1' inputdata.histos[self.refdata].props['ErrorBars'] = '0' inputdata.histos[self.refdata].props['ErrorTubes'] = '0'
inputdata.histos[self.refdata].props['LineStyle'] = 'solid' inputdata.histos[self.refdata].props['LineColor'] = 'black' inputdata.histos[self.refdata].props['LineWidth'] = '0.3pt' inputdata.histos[self.refdata].props['MarkerStyle'] = '' inputdata.histos[self.refdata].props['ConnectGaps'] = '1' self.calculate_ratios(inputdata) self.set_borders(inputdata) def draw(self, inputdata): out = '' out += ('\n%\n% RatioPlot\n%\n') offset = 0. for rname, i in inputdata.ratio_names(): if i > self.number and inputdata.has_attr(rname+'SizeY'): offset += inputdata.attr_float(rname+'SizeY') labels = self.getLabels(inputdata) out += self.panel_header( PanelOffset = offset, Xmode = 'log' if inputdata.attr_bool('LogX') else 'normal', Ymode = 'log' if inputdata.attr_bool('LogY') else 'normal', Zmode = inputdata.attr_bool('LogZ'), PanelHeight = inputdata.props['PlotSizeY'], PanelWidth = inputdata.props['PlotSizeX'], Xmin = self.xmin, Xmax = self.xmax, Ymin = self.ymin, Ymax = self.ymax, Zmin = self.zmin, Zmax = self.zmax, Labels = { l : inputdata.attr(l) for l in labels if inputdata.has_attr(l) }, XLabelSep = inputdata.attr_float('XLabelSep', 0.7) if 'XLabel' in labels else None, YLabelSep = inputdata.attr_float('YLabelSep', 1.2) if 'YLabel' in labels else None, XTickShift = inputdata.attr_float('XTickShift', 0.1) if 'XLabel' in labels else None, YTickShift = inputdata.attr_float('YTickShift', 0.1) if 'YLabel' in labels else None, MajorTickLength = inputdata.attr_float('MajorTickLength', 0.30), MinorTickLength = inputdata.attr_float('MinorTickLength', 0.15), MinorTicks = { axis : self.getMinorTickMarks(inputdata, axis) for axis in ['X', 'Y', 'Z'] }, CustomTicks = { axis : self.getTicks(inputdata, axis) for axis in ['X', 'Y', 'Z'] }, NeedsXLabels = self.needsXLabel(inputdata), #Grid = inputdata.doGrid(self.name), is2D = inputdata.is2dim, is3D = inputdata.attr_bool('Is3D', 0), HRotate = inputdata.attr_int('HRotate', 0), VRotate = inputdata.attr_int('VRotate', 0), ColorMap = 
inputdata.attr('ColorMap', 'jet'), #LegendAlign = inputdata.getLegendAlign(self.name), #LegendPos = inputdata.getLegendPos(self.name), ) out += self.add_object(inputdata) for lname, i in inputdata.legend_names(): if inputdata.attr_bool(self.name + lname, False): legend = Legend(inputdata.props,inputdata.histos,inputdata.functions, self.name + lname, i) out += legend.draw() out += self.panel_footer() return out def calculate_ratios(self, inputdata): inputdata.ratios = {} inputdata.ratios = copy.deepcopy(inputdata.histos) name = inputdata.attr(self.name+'DrawOnly').pop(inputdata.attr(self.name+'DrawOnly').index(self.refdata)) reffirst = inputdata.attr(self.name+'DrawReferenceFirst') != '0' if reffirst and inputdata.histos[self.refdata].attr_bool('ErrorBands'): inputdata.props[self.name+'DrawOnly'].insert(0, name) else: inputdata.props[self.name+'DrawOnly'].append(name) rpmode = inputdata.props.get(self.name+'Mode', 'mcdata') for i in inputdata.props[self.name+'DrawOnly']: # + [ self.refdata ]: if i != self.refdata: if rpmode == 'deviation': inputdata.ratios[i].deviation(inputdata.ratios[self.refdata]) elif rpmode == 'delta': inputdata.ratios[i].delta(inputdata.ratios[self.refdata]) elif rpmode == 'deltapercent': inputdata.ratios[i].deltapercent(inputdata.ratios[self.refdata]) elif rpmode == 'datamc': inputdata.ratios[i].dividereverse(inputdata.ratios[self.refdata]) inputdata.ratios[i].props['ErrorBars'] = '1' else: inputdata.ratios[i].divide(inputdata.ratios[self.refdata]) if rpmode == 'deviation': inputdata.ratios[self.refdata].deviation(inputdata.ratios[self.refdata]) elif rpmode == 'delta': inputdata.ratios[self.refdata].delta(inputdata.ratios[self.refdata]) elif rpmode == 'deltapercent': inputdata.ratios[self.refdata].deltapercent(inputdata.ratios[self.refdata]) elif rpmode == 'datamc': inputdata.ratios[self.refdata].dividereverse(inputdata.ratios[self.refdata]) else: inputdata.ratios[self.refdata].divide(inputdata.ratios[self.refdata]) def add_object(self, 
inputdata): out = "" if inputdata.attr_bool('DrawSpecialFirst', False): for s in inputdata.special.values(): out += s.draw(inputdata) if inputdata.attr_bool('DrawFunctionFirst'): for i in inputdata.functions.keys(): out += inputdata.functions[i].draw(inputdata, inputdata.props['Borders'][0], inputdata.props['Borders'][1]) for key in inputdata.props[self.name+'DrawOnly']: if inputdata.has_attr(self.name+'Mode') and inputdata.attr(self.name+'Mode') == 'datamc': if key == self.refdata: continue #add_legend = inputdata.attr_bool(self.name+'Legend') out += inputdata.ratios[key].draw() #add_legend) if not inputdata.attr_bool('DrawFunctionFirst'): for i in inputdata.functions.keys(): out += inputdata.functions[i].draw(inputdata, inputdata.props['Borders'][0], inputdata.props['Borders'][1]) if not inputdata.attr_bool('DrawSpecialFirst', False): for s in inputdata.special.values(): out += s.draw(inputdata) return out def getMinorTickMarks(self, inputdata, axis): tag = '%sMinorTickMarks' % axis if inputdata.has_attr(self.name + tag): return inputdata.attr_int(self.name + tag) return inputdata.attr_int(tag, 4) def needsXLabel(self, inputdata): # only plot x label if it's the last ratio panel ratios = [ i for rname, i in inputdata.ratio_names() if inputdata.attr_bool(rname, True) and inputdata.attr(rname + 'Reference', False) ] return ratios[-1] == self.number def getLabels(self, inputdata): labels = ['YLabel'] drawtitle = inputdata.has_attr('MainPlot') and not inputdata.attr_bool('MainPlot') if drawtitle and not any([inputdata.attr_bool(rname) for rname, i in inputdata.ratio_names() if i < self.number]): labels.append('Title') if self.needsXLabel(inputdata): labels.append('XLabel') return labels class Legend(Described): def __init__(self, props, histos, functions, name, number): self.name = name self.number = number self.histos = histos self.functions = functions self.props = props def draw(self): legendordermap = {} legendlist = self.props['DrawOnly'] + list(self.functions.keys()) if self.name
+ 'Only' in self.props: legendlist = [] for legend in self.props[self.name+'Only'].strip().split(): if legend in self.histos or legend in self.functions: legendlist.append(legend) for legend in legendlist: order = 0 if legend in self.histos and 'LegendOrder' in self.histos[legend].props: order = int(self.histos[legend].props['LegendOrder']) if legend in self.functions and 'LegendOrder' in self.functions[legend].props: order = int(self.functions[legend].props['LegendOrder']) if order not in legendordermap: legendordermap[order] = [] legendordermap[order].append(legend) orderedlegendlist=[] for i in sorted(legendordermap.keys()): orderedlegendlist.extend(legendordermap[i]) if self.props['is2dim']: return self.draw_2dlegend(orderedlegendlist) out = "" out += '\n%\n% Legend\n%\n' talign = 'right' if self.getLegendAlign() == 'left' else 'left' posalign = 'left' if talign == 'right' else 'right' legx = float(self.getLegendXPos()); legy = float(self.getLegendYPos()) ypos = legy -0.05*6/self.props['PlotSizeY'] if self.name+'Title' in self.props: for i in self.props[self.name+'Title'].strip().split('\\\\'): out += ('\\node[black, inner sep=0, align=%s, %s,\n' % (posalign, talign)) out += ('] at (rel axis cs: %4.3f,%4.3f) {%s};\n' % (legx, ypos, i)) ypos -= 0.075*6/self.props['PlotSizeY'] offset = self.attr_float(self.name+'EntryOffset', 0.) separation = self.attr_float(self.name+'EntrySeparation', 0.)
hline = True; vline = True if self.name+'HorizontalLine' in self.props: hline = self.props[self.name+'HorizontalLine'] != '0' if self.name+'VerticalLine' in self.props: vline = self.props[self.name+'VerticalLine'] != '0' rel_xpos_sign = 1.0 if self.getLegendAlign() == 'right': rel_xpos_sign = -1.0 xwidth = self.getLegendIconWidth() xpos1 = legx -0.02*rel_xpos_sign-0.08*xwidth*rel_xpos_sign xpos2 = legx -0.02*rel_xpos_sign xposc = legx -0.02*rel_xpos_sign-0.04*xwidth*rel_xpos_sign xpostext = 0.1*rel_xpos_sign for i in orderedlegendlist: if i in self.histos: drawobject=self.histos[i] elif i in self.functions: drawobject=self.functions[i] else: continue title = drawobject.getTitle() mopts = pat_options.search(drawobject.path) if mopts and not self.props.get("RemoveOptions", 0): opts = list(mopts.groups())[0].lstrip(':').split(":") for opt in opts: if opt in self.props['_OptSubs']: title += ' %s' % self.props['_OptSubs'][opt] else: title += ' [%s]' % opt if title == '': continue else: titlelines=[] for i in title.strip().split('\\\\'): titlelines.append(i) ypos -= 0.075*6/self.props['PlotSizeY']*separation boxtop = 0.045*(6./self.props['PlotSizeY']) boxbottom = 0.
lineheight = 0.5*(boxtop-boxbottom) xico = xpostext + xposc xhi = xpostext + xpos2 xlo = xpostext + xpos1 yhi = ypos + lineheight ylo = ypos - lineheight xleg = legx + xpostext; # options set -> lineopts setup = ('%s,\n' % drawobject.getLineColor()) linewidth = drawobject.getLineWidth() try: float(linewidth) linewidth += 'cm' except ValueError: pass setup += ('draw opacity=%s,\n' % drawobject.getLineOpacity()) if drawobject.getErrorBands(): out += ('\\fill[\n') out += (' fill=none, fill opacity=%s,\n' % drawobject.getErrorBandOpacity()) if drawobject.getPatternFill(): out += ('fill=%s,\n' % drawobject.getErrorBandFillColor()) out += ('] (rel axis cs: %4.3f, %4.3f) rectangle (rel axis cs: %4.3f, %4.3f);\n' % (xlo,ylo,xhi,yhi)) if drawobject.getPattern() != '': out += ('\\fill[\n') out += ('pattern = %s,\n' % drawobject.getPattern()) if drawobject.getErroBandHatchDistance() != "": out += ('hatch distance = %s,\n' % drawobject.getErroBandHatchDistance()) if drawobject.getPatternColor() != '': out += ('pattern color = %s,\n' % drawobject.getPatternColor()) out += ('] (rel axis cs: %4.3f, %4.3f) rectangle (rel axis cs: %4.3f, %4.3f);\n' % (xlo,ylo,xhi,yhi)) if drawobject.getFillBorder(): out += ('\\draw[line style=solid, thin, %s] (rel axis cs: %4.3f,%4.3f)' % (setup, xlo, yhi)) out += ('-- (rel axis cs: %4.3f, %4.3f);\n' % (xhi, yhi)) out += ('\\draw[line style=solid, thin, %s] (rel axis cs: %4.3f,%4.3f)' % (setup, xlo, ylo)) out += ('-- (rel axis cs: %4.3f, %4.3f);\n' % (xhi, ylo)) setup += ('line width={%s},\n' % linewidth) setup += ('style={%s},\n' % drawobject.getLineStyle()) if drawobject.getLineDash(): setup += ('dash pattern=%s,\n' % drawobject.getLineDash()) if drawobject.getErrorBars() and vline: out += ('\\draw[%s] (rel axis cs: %4.3f,%4.3f)' % (setup, xico, ylo)) out += (' -- (rel axis cs: %4.3f, %4.3f);\n' % (xico, yhi)) if hline: out += ('\\draw[%s] (rel axis cs: %4.3f,%4.3f)' % (setup, xlo, ypos)) out += ('-- (rel axis cs: %4.3f, %4.3f);\n' % (xhi, ypos)) 
if drawobject.getMarkerStyle() != 'none': setup += ('mark options={\n') setup += (' %s, fill color=%s,\n' % (drawobject.getMarkerColor(), drawobject.getMarkerColor())) setup += (' mark size={%s}, scale=%s,\n' % (drawobject.getMarkerSize(), drawobject.getMarkerScale())) setup += ('},\n') out += ('\\draw[mark=*, %s] plot coordinates {\n' % setup) out += ('(rel axis cs: %4.3f,%4.3f)};\n' % (xico, ypos)) ypos -= 0.075*6/self.props['PlotSizeY']*offset for i in titlelines: out += ('\\node[black, inner sep=0, align=%s, %s,\n' % (posalign, talign)) out += ('] at (rel axis cs: %4.3f,%4.3f) {%s};\n' % (xleg, ypos, i)) ypos -= 0.075*6/self.props['PlotSizeY'] if 'CustomLegend' in self.props: for i in self.props['CustomLegend'].strip().split('\\\\'): out += ('\\node[black, inner sep=0, align=%s, %s,\n' % (posalign, talign)) out += ('] at (rel axis cs: %4.3f, %4.3f) {%s};\n' % (xleg, ypos, i)) ypos -= 0.075*6/self.props['PlotSizeY'] return out def draw_2dlegend(self,orderedlegendlist): histos = "" out = "" for i in range(0,len(orderedlegendlist)): if orderedlegendlist[i] in self.histos: drawobject=self.histos[orderedlegendlist[i]] elif orderedlegendlist[i] in self.functions: drawobject=self.functions[orderedlegendlist[i]] else: continue title = drawobject.getTitle() if title == '': continue else: histos += title.strip().split('\\\\')[0] if i != len(orderedlegendlist)-1: histos += ', ' #out = '\\rput(1,1){\\rput[rB](0, 1.7\\labelsep){\\normalsize '+histos+'}}\n' out += ('\\node[black, inner sep=0, align=left]\n') out += ('at (rel axis cs: 1,1) {\\normalsize{%s}};\n' % histos) return out def getLegendXPos(self): return self.props.get(self.name+'XPos', '0.95' if self.getLegendAlign() == 'right' else '0.53') def getLegendYPos(self): return self.props.get(self.name+'YPos', '0.93') def getLegendAlign(self): la = self.props.get(self.name+'Align', 'left') if la == 'l': return 'left' elif la == 'c': return 'center' elif la == 'r': return 'right' else: return la def
getLegendIconWidth(self): return float(self.props.get(self.name+'IconWidth', '1.0')) class PlotFunction(object): def __init__(self, f): self.props = {} self.read_input(f) def read_input(self, f): self.code='def histomangler(x):\n' iscode=False for line in f: if is_end_marker(line, 'HISTOGRAMMANGLER'): break elif is_comment(line): continue else: m = pat_property.match(line) if iscode: self.code+=' '+line elif m: prop, value = m.group(1,2) if prop=='Code': iscode=True else: self.props[prop] = value if not iscode: print('++++++++++ ERROR: No code in function') else: foo = compile(self.code, '', 'exec') ns = {} exec(foo, ns) self.histomangler = ns['histomangler'] def transform(self, x): return self.histomangler(x) class Special(Described): def __init__(self, f): self.props = {} self.data = [] self.read_input(f) if 'Location' not in self.props: self.props['Location']='MainPlot' self.props['Location']=self.props['Location'].split('\t') if 'Coordinates' not in self.props: self.props['Coordinates'] = 'Relative' def read_input(self, f): for line in f: if is_end_marker(line, 'SPECIAL'): break elif is_comment(line): continue else: line = line.rstrip() m = pat_property.match(line) if m: prop, value = m.group(1,2) self.props[prop] = value else: self.data.append(line) def draw(self, inputdata): drawme = False for i in self.props['Location']: if i in inputdata.props['PlotStage']: drawme = True break if not drawme: return "" out = "" out += ('\n%\n% Special\n%\n') out += ('\\pgfplotsset{\n') out += (' after end axis/.append code={\n') for l in self.data: cs = 'axis cs:' if self.props['Coordinates'].lower() == 'relative': cs = 'rel ' + cs atpos = l.index('at') cspos = l[atpos:].index('(') + 1 l = l[:atpos+cspos] + cs + l[atpos+cspos:] out += ' %s%s\n' % (l, ';' if l[-1:] != ';' else '') out += (' }\n}\n') return out class DrawableObject(Described): def __init__(self, f): pass def getName(self): return self.props.get('Name', '') def getTitle(self): return self.props.get('Title', '') def 
getLineStyle(self): if 'LineStyle' in self.props: ## I normally like there to be "only one way to do it", but providing ## this dashdotted/dotdashed synonym just seems humane ;-) if self.props['LineStyle'] in ('dashdotted', 'dotdashed'): self.props['LineStyle']='dashed' self.props['LineDash']='3pt 3pt .8pt 3pt' return self.props['LineStyle'] else: return 'solid' def getLineDash(self): pattern = self.props.get('LineDash', '') if pattern: # converting this into pgfplots syntax # "3pt 3pt .8pt 3pt" becomes # "on 3pt off 3pt on .8pt off 3pt" for i, val in enumerate(pattern.split(' ')): if i == 0: pattern = 'on %s' % val else: pattern += ' %s %s' % ('on' if i % 2 == 0 else 'off', val) return pattern def getLineWidth(self): return self.props.get("LineWidth", "0.8pt") def getColor(self, col): if col in self.customCols: return self.customCols[col] return col def getLineColor(self): return self.getColor(self.props.get("LineColor", "black")) def getLineOpacity(self): return self.props.get("LineOpacity", "1.0") def getFillOpacity(self): return self.props.get("FillOpacity", "1.0") def getFillBorder(self): return self.attr_bool("FillBorder", "1") def getHatchColor(self): return self.getColor(self.props.get("HatchColor", self.getErrorBandColor())) def getMarkerStyle(self): return self.props.get("MarkerStyle", "*" if self.getErrorBars() else "none") def getMarkerSize(self): return self.props.get("MarkerSize", "1.5pt") def getMarkerScale(self): return self.props.get("MarkerScale", "1") def getMarkerColor(self): return self.getColor(self.props.get("MarkerColor", "black")) def getErrorMarkStyle(self): return self.props.get("ErrorMarkStyle", "none") def getErrorBars(self): return bool(int(self.props.get("ErrorBars", "0"))) def getErrorBands(self): return bool(int(self.props.get("ErrorBands", "0"))) def getErrorBandColor(self): return self.getColor(self.props.get("ErrorBandColor", self.getLineColor())) def getErrorBandStyle(self): return self.props.get("ErrorBandStyle", "solid") def 
getPattern(self): return self.props.get("ErrorBandPattern", "") def getPatternColor(self): return self.getColor(self.props.get("ErrorBandPatternColor", "")) def getPatternFill(self): return bool(int(self.props.get("ErrorBandFill", self.getPattern() == ""))) def getErrorBandFillColor(self): return self.getColor(self.props.get("ErrorBandFillColor", self.getErrorBandColor())) def getErroBandHatchDistance(self): return self.props.get("ErrorBandHatchDistance", "") def getErrorBandOpacity(self): return self.props.get("ErrorBandOpacity", "1.0") def removeXerrors(self): return bool(int(self.props.get("RemoveXerrors", "0"))) def getSmoothLine(self): return bool(int(self.props.get("SmoothLine", "0"))) def getShader(self): return self.props.get('Shader', 'flat') def makecurve(self, setup, metadata, data = None): #, legendName = None): out = '' out += ('\\addplot%s+[\n' % ('3' if self.is2dim else '')) out += setup out += (']\n') out += metadata out += ('{\n') if data: out += data out += ('};\n') #if legendName: # out += ('\\addlegendentry[\n') # out += (' image style={yellow, mark=*},\n') # out += (']{%s};\n' % legendName) return out def addcurve(self, points): #, legendName): setup = ''; #setup += ('color=%s,\n' % self.getLineColor()) setup += ('%s,\n' % self.getLineColor()) linewidth = self.getLineWidth() try: float(linewidth) linewidth += 'cm' except ValueError: pass setup += ('line width={%s},\n' % linewidth) setup += ('style={%s},\n' % self.getLineStyle()) setup += ('mark=%s,\n' % self.getMarkerStyle()) if self.getLineDash(): setup += ('dash pattern=%s,\n' % self.getLineDash()) if self.getSmoothLine(): setup += ('smooth,\n') #if not legendName: # setup += ('forget plot,\n') if self.getMarkerStyle() != 'none': setup += ('mark options={\n') setup += (' %s, fill color=%s,\n' % (self.getMarkerColor(), self.getMarkerColor())) setup += (' mark size={%s}, scale=%s,\n' % (self.getMarkerSize(), self.getMarkerScale())) setup += ('},\n') if self.getErrorBars(): setup += ('only 
marks,\n') setup += ('error bars/.cd,\n') setup += ('x dir=both,x explicit,\n') setup += ('y dir=both,y explicit,\n') setup += ('error bar style={%s, line width={%s}},\n' % (self.getLineColor(), linewidth)) setup += ('error mark=%s,\n' % self.getErrorMarkStyle()) if self.getErrorMarkStyle() != 'none': setup += ('error mark options={line width=%s},\n' % linewidth) metadata = 'coordinates\n' if self.getErrorBars(): metadata = 'table [x error plus=ex+, x error minus=ex-, y error plus=ey+, y error minus=ey-]\n' return self.makecurve(setup, metadata, points) #, legendName) def makeband(self, points): setup = '' setup += ('%s,\n' % self.getErrorBandColor()) setup += ('fill opacity=%s,\n' % self.getErrorBandOpacity()) setup += ('forget plot,\n') setup += ('solid,\n') setup += ('draw = %s,\n' % self.getErrorBandColor()) if self.getPatternFill(): setup += ('fill=%s,\n' % self.getErrorBandFillColor()) if self.getPattern() != '': if self.getPatternFill(): setup += ('postaction={\n') setup += ('pattern = %s,\n' % self.getPattern()) if self.getErroBandHatchDistance() != "": setup += ('hatch distance = %s,\n' % self.getErroBandHatchDistance()) if self.getPatternColor() != '': setup += ('pattern color = %s,\n' % self.getPatternColor()) if self.getPatternFill(): setup += ('fill opacity=1,\n') setup += ('},\n') aux = 'draw=none, no markers, forget plot, name path=%s\n' env = 'table [x=x, y=%s]\n' out = '' out += self.makecurve(aux % 'pluserr', env % 'y+', points) out += self.makecurve(aux % 'minuserr', env % 'y-', points) out += self.makecurve(setup, 'fill between [of=pluserr and minuserr]\n') return out def make2dee(self, points, zlog, zmin, zmax): setup = 'mark=none, surf, shader=%s,\n' % self.getShader() metadata = 'table [x index=0, y index=1, z index=2]\n' setup += 'restrict z to domain=%s:%s,\n' % (log10(zmin) if zlog else zmin, log10(zmax) if zlog else zmax) if zlog: metadata = 'table [x index=0, y index=1, z expr=log10(\\thisrowno{2})]\n' return self.makecurve(setup, 
metadata, points) class Function(DrawableObject): def __init__(self, f): self.props = {} self.read_input(f) if 'Location' not in self.props: self.props['Location']='MainPlot' self.props['Location']=self.props['Location'].split('\t') def read_input(self, f): self.code='def plotfunction(x):\n' iscode=False for line in f: if is_end_marker(line, 'FUNCTION'): break elif is_comment(line): continue else: m = pat_property.match(line) if iscode: self.code+=' '+line elif m: prop, value = m.group(1,2) if prop=='Code': iscode=True else: self.props[prop] = value if not iscode: print('++++++++++ ERROR: No code in function') else: foo = compile(self.code, '', 'exec') ns = {} exec(foo, ns) self.plotfunction = ns['plotfunction'] def draw(self, inputdata, xmin, xmax): drawme = False for key in self.attr('Location'): if key in inputdata.attr('PlotStage'): drawme = True break if not drawme: return '' if self.has_attr('XMin') and self.attr('XMin'): xmin = self.attr_float('XMin') if self.has_attr('FunctionXMin') and self.attr('FunctionXMin'): xmin = max(xmin, self.attr_float('FunctionXMin')) if self.has_attr('XMax') and self.attr('XMax'): xmax = self.attr_float('XMax') if self.has_attr('FunctionXMax') and self.attr('FunctionXMax'): xmax = min(xmax, self.attr_float('FunctionXMax')) xmin = min(xmin, xmax) xmax = max(xmin, xmax) # TODO: Space sample points logarithmically if LogX=1 points = '' xsteps = 500. 
if self.has_attr('XSteps') and self.attr('XSteps'): xsteps = self.attr_float('XSteps') dx = (xmax - xmin) / xsteps x = xmin - dx while x < (xmax+2*dx): y = self.plotfunction(x) points += ('(%s,%s)\n' % (x, y)) x += dx setup = ''; setup += ('%s,\n' % self.getLineColor()) linewidth = self.getLineWidth() try: float(linewidth) linewidth += 'cm' except ValueError: pass setup += ('line width={%s},\n' % linewidth) setup += ('style={%s},\n' % self.getLineStyle()) setup += ('smooth, mark=none,\n') if self.getLineDash(): setup += ('dash pattern=%s,\n' % self.getLineDash()) metadata = 'coordinates\n' return self.makecurve(setup, metadata, points) class BinData(object): """\ Store bin edge and value+error(s) data for a 1D or 2D bin. TODO: generalise/alias the attr names to avoid mention of x and y """ def __init__(self, low, high, val, err): #print("@", low, high, val, err) self.low = floatify(low) self.high = floatify(high) self.val = float(val) self.err = floatpair(err) @property def is2D(self): return hasattr(self.low, "__len__") and hasattr(self.high, "__len__") @property def isValid(self): invalid_val = (isnan(self.val) or isnan(self.err[0]) or isnan(self.err[1])) if invalid_val: return False if self.is2D: invalid_low = any(isnan(x) for x in self.low) invalid_high = any(isnan(x) for x in self.high) else: invalid_low, invalid_high = isnan(self.low), isnan(self.high) return not (invalid_low or invalid_high) @property def xmin(self): return self.low @xmin.setter def xmin(self,x): self.low = x @property def xmax(self): return self.high @xmax.setter def xmax(self,x): self.high = x @property def xmid(self): # TODO: Generalise to 2D return 0.5 * (self.xmin + self.xmax) @property def xwidth(self): # TODO: Generalise to 2D assert self.xmin <= self.xmax return self.xmax - self.xmin @property def y(self): return self.val @y.setter def y(self, x): self.val = x @property def ey(self): return self.err @ey.setter def ey(self, x): self.err = x @property def ymin(self): return self.y - 
self.ey[0] @property def ymax(self): return self.y + self.ey[1] def __getitem__(self, key): "dict-like access for backward compatibility" if key == "LowEdge": return self.xmin elif key in ("UpEdge", "HighEdge"): return self.xmax elif key == "Content": return self.y elif key == "Errors": return self.ey class Histogram(DrawableObject, Described): def __init__(self, f, p=None): self.props = {} self.customCols = {} self.is2dim = False self.zlog = False self.data = [] self.read_input_data(f) self.sigmabinvalue = None self.meanbinvalue = None self.path = p def read_input_data(self, f): for line in f: if is_end_marker(line, 'HISTOGRAM'): break elif is_comment(line): continue else: line = line.rstrip() m = pat_property.match(line) if m: prop, value = m.group(1,2) self.props[prop] = value if 'Color' in prop and '{' in value: self.customCols[value] = value else: ## Detect symm errs linearray = line.split() if len(linearray) == 4: self.data.append(BinData(*linearray)) ## Detect asymm errs elif len(linearray) == 5: self.data.append(BinData(linearray[0], linearray[1], linearray[2], [linearray[3],linearray[4]])) ## Detect two-dimensionality elif len(linearray) in [6,7]: self.is2dim = True # If asymm z error, use the max or average of +- error err = float(linearray[5]) if len(linearray) == 7: if self.props.get("ShowMaxZErr", 1): err = max(err, float(linearray[6])) else: err = 0.5 * (err + float(linearray[6])) self.data.append(BinData([linearray[0], linearray[2]], [linearray[1], linearray[3]], linearray[4], err)) ## Unknown histo format else: raise RuntimeError("Unknown HISTOGRAM data line format with %d entries" % len(linearray)) def mangle_input(self): norm2int = self.attr_bool("NormalizeToIntegral", False) norm2sum = self.attr_bool("NormalizeToSum", False) if norm2int or norm2sum: if norm2int and norm2sum: print("Cannot normalise to integral and to sum at the same time. 
Will normalise to the integral.") foo = 0.0 for point in self.data: if norm2int: foo += point.val*point.xwidth else: foo += point.val if foo != 0: for point in self.data: point.val /= foo point.err[0] /= foo point.err[1] /= foo scale = self.attr_float('Scale', 1.0) if scale != 1.0: # TODO: change to "in self.data"? for point in self.data: point.val *= scale point.err[0] *= scale point.err[1] *= scale if self.attr_float("ScaleError", 0.0): scale = self.attr_float("ScaleError") for point in self.data: point.err[0] *= scale point.err[1] *= scale if self.attr_float('Shift', 0.0): shift = self.attr_float("Shift") for point in self.data: point.val += shift if self.has_attr('Rebin') and self.attr('Rebin') != '': rawrebins = self.attr('Rebin').strip().split('\t') rebins = [] maxindex = len(self.data)-1 if len(rawrebins) % 2 == 1: rebins.append({'Start': self.data[0].xmin, 'Rebin': int(rawrebins[0])}) rawrebins.pop(0) for i in range(0,len(rawrebins),2): if float(rawrebins[i]) < self.data[maxindex].xmax: rebins.append({'Start': float(rawrebins[i]), 'Rebin': int(rawrebins[i+1])}) if (rebins[0]['Start'] > self.data[0].xmin): rebins.insert(0,{'Start': self.data[0].xmin, 'Rebin': 1}) errortype = self.attr("ErrorType", "stat") newdata = [] lower = self.getBin(rebins[0]['Start']) for k in range(0,len(rebins),1): rebin = rebins[k]['Rebin'] upper = maxindex end = 1 if (k < len(rebins)-1): upper = self.getBin(rebins[k+1]['Start']) end = 0 for i in range(lower,upper+end,rebin): foo = 0. barl = 0. baru = 0. for j in range(rebin): if i+j > maxindex: break binwidth = self.data[i+j].xwidth foo += self.data[i+j].val * binwidth if errortype=="stat": barl += (binwidth * self.data[i+j].err[0])**2 baru += (binwidth * self.data[i+j].err[1])**2 elif errortype == "env": barl += self.data[i+j].ymin * binwidth baru += self.data[i+j].ymax * binwidth else: logging.error("Rebinning for ErrorType not implemented.") sys.exit(1) upedge = min(i+rebin-1,maxindex) newbinwidth=self.data[upedge].xmax-self.data[i].xmin newcentral=foo/newbinwidth if errortype=="stat": newerror=[sqrt(barl)/newbinwidth,sqrt(baru)/newbinwidth] elif errortype=="env": newerror=[(foo-barl)/newbinwidth,(baru-foo)/newbinwidth] newdata.append(BinData(self.data[i].xmin, self.data[i+rebin-1].xmax, newcentral, newerror)) lower = 
(upper/rebin)*rebin+(upper%rebin) self.data=newdata def add(self, name): if len(self.data) != len(name.data): print('+++ Error in Histogram.add() for %s: different numbers of bins' % self.path) for i, b in enumerate(self.data): if fuzzyeq(b.xmin, name.data[i].xmin) and fuzzyeq(b.xmax, name.data[i].xmax): b.val += name.data[i].val b.err[0] = sqrt(b.err[0]**2 + name.data[i].err[0]**2) b.err[1] = sqrt(b.err[1]**2 + name.data[i].err[1]**2) else: print('+++ Error in Histogram.add() for %s: binning of histograms differs' % self.path) def divide(self, name): #print(name.path, self.path) if len(self.data) != len(name.data): print('+++ Error in Histogram.divide() for %s: different numbers of bins' % self.path) for i,b in enumerate(self.data): if fuzzyeq(b.xmin, name.data[i].xmin) and fuzzyeq(b.xmax, name.data[i].xmax): try: b.err[0] /= name.data[i].val except ZeroDivisionError: b.err[0] = 0. try: b.err[1] /= name.data[i].val except ZeroDivisionError: b.err[1] = 0. try: b.val /= name.data[i].val except ZeroDivisionError: b.val = 1. # b.err[0] = sqrt(b.err[0]**2 + name.data[i].err[0]**2) # b.err[1] = sqrt(b.err[1]**2 + name.data[i].err[1]**2) else: print('+++ Error in Histogram.divide() for %s: binning of histograms differs' % self.path) def dividereverse(self, name): if len(self.data) != len(name.data): print('+++ Error in Histogram.dividereverse() for %s: different numbers of bins' % self.path) for i, b in enumerate(self.data): if fuzzyeq(b.xmin, name.data[i].xmin) and fuzzyeq(b.xmax, name.data[i].xmax): try: b.err[0] = name.data[i].err[0]/b.val except ZeroDivisionError: b.err[0] = 0. try: b.err[1] = name.data[i].err[1]/b.val except ZeroDivisionError: b.err[1] = 0. try: b.val = name.data[i].val/b.val except ZeroDivisionError: b.val = 1. 
else: print('+++ Error in Histogram.dividereverse(): binning of histograms differs') def deviation(self, name): if len(self.data) != len(name.data): print('+++ Error in Histogram.deviation() for %s: different numbers of bins' % self.path) for i, b in enumerate(self.data): if fuzzyeq(b.xmin, name.data[i].xmin) and fuzzyeq(b.xmax, name.data[i].xmax): b.val -= name.data[i].val try: b.val /= 0.5*sqrt((name.data[i].err[0] + name.data[i].err[1])**2 + (b.err[0] + b.err[1])**2) except ZeroDivisionError: b.val = 0.0 try: b.err[0] /= name.data[i].err[0] except ZeroDivisionError: b.err[0] = 0.0 try: b.err[1] /= name.data[i].err[1] except ZeroDivisionError: b.err[1] = 0.0 else: print('+++ Error in Histogram.deviation() for %s: binning of histograms differs' % self.path) def delta(self,name): self.divide(name) for point in self.data: point.val -= 1. def deltapercent(self,name): self.delta(name) for point in self.data: point.val *= 100. point.err[0] *= 100. point.err[1] *= 100. def getBin(self,x): if x < self.data[0].xmin or x > self.data[len(self.data)-1].xmax: print('+++ Error in Histogram.getBin(): x out of range') return float('nan') for i in range(1,len(self.data)-1,1): if x < self.data[i].xmin: return i-1 return len(self.data)-1 def getYMin(self, xmin, xmax, logy): if not self.data: return 0 elif self.is2dim: return min(b.low[1] for b in self.data) else: yvalues = [] for b in self.data: if (b.xmax > xmin or b.xmin >= xmin) and (b.xmin < xmax or b.xmax <= xmax): foo = b.val if self.getErrorBars() or self.getErrorBands(): foo -= b.err[0] if not isnan(foo) and (not logy or foo > 0): yvalues.append(foo) return min(yvalues) if yvalues else self.data[0].val def getYMax(self, xmin, xmax): if not self.data: return 1 elif self.is2dim: return max(b.high[1] for b in self.data) else: yvalues = [] for b in self.data: if (b.xmax > xmin or b.xmin >= xmin) and (b.xmin < xmax or b.xmax <= xmax): foo = b.val if self.getErrorBars() or self.getErrorBands(): foo += b.err[1] if not isnan(foo): # and (not logy or foo > 0): yvalues.append(foo) return max(yvalues) if yvalues else self.data[0].val def getZMin(self, xmin, xmax, ymin, ymax): if not self.is2dim: return 0 zvalues = [] for b in self.data: if (b.xmax[0] > xmin and b.xmin[0] < xmax) and 
(b.xmax[1] > ymin and b.xmin[1] < ymax): zvalues.append(b.val) return min(zvalues) def getZMax(self, xmin, xmax, ymin, ymax): if not self.is2dim: return 0 zvalues = [] for b in self.data: if (b.xmax[0] > xmin and b.xmin[0] < xmax) and (b.xmax[1] > ymin and b.xmin[1] < ymax): zvalues.append(b.val) return max(zvalues) class Value(Histogram): def read_input_data(self, f): for line in f: if is_end_marker(line, 'VALUE'): break elif is_comment(line): continue else: line = line.rstrip() m = pat_property.match(line) if m: prop, value = m.group(1,2) self.props[prop] = value else: linearray = line.split() if len(linearray) == 3: self.data.append(BinData(0.0, 1.0, linearray[0], [ linearray[1], linearray[2] ])) # dummy x-values else: raise Exception('Value does not have the expected number of columns. ' + line) # TODO: specialise draw() here class Counter(Histogram): def read_input_data(self, f): for line in f: if is_end_marker(line, 'COUNTER'): break elif is_comment(line): continue else: line = line.rstrip() m = pat_property.match(line) if m: prop, value = m.group(1,2) self.props[prop] = value else: linearray = line.split() if len(linearray) == 2: self.data.append(BinData(0.0, 1.0, linearray[0], [ linearray[1], linearray[1] ])) # dummy x-values else: raise Exception('Counter does not have the expected number of columns. ' + line) # TODO: specialise draw() here class Histo1D(Histogram): def read_input_data(self, f): for line in f: if is_end_marker(line, 'HISTO1D'): break elif is_comment(line): continue else: line = line.rstrip() m = pat_property.match(line) if m: prop, value = m.group(1,2) self.props[prop] = value if 'Color' in prop and '{' in value: self.customCols[value] = value else: linearray = line.split() ## Detect symm errs # TODO: Not sure what the 8-param version is for... auto-compatibility with YODA format? 
if len(linearray) in [4,8]: self.data.append(BinData(linearray[0], linearray[1], linearray[2], linearray[3])) ## Detect asymm errs elif len(linearray) == 5: self.data.append(BinData(linearray[0], linearray[1], linearray[2], [linearray[3],linearray[4]])) else: raise Exception('Histo1D does not have the expected number of columns. ' + line) # TODO: specialise draw() here class Histo2D(Histogram): def read_input_data(self, f): self.is2dim = True #< Should really be done in a constructor, but this is easier for now... for line in f: if is_end_marker(line, 'HISTO2D'): break elif is_comment(line): continue else: line = line.rstrip() m = pat_property.match(line) if m: prop, value = m.group(1,2) self.props[prop] = value if 'Color' in prop and '{' in value: self.customCols[value] = value else: linearray = line.split() if len(linearray) in [6,7]: # If asymm z error, use the max or average of +- error err = float(linearray[5]) if len(linearray) == 7: if self.props.get("ShowMaxZErr", 1): err = max(err, float(linearray[6])) else: err = 0.5 * (err + float(linearray[6])) self.data.append(BinData([linearray[0], linearray[2]], [linearray[1], linearray[3]], float(linearray[4]), err)) else: raise Exception('Histo2D does not have the expected number of columns. 
'+line) # TODO: specialise draw() here #################### def try_cmd(args): "Run the given command + args and return True/False if it succeeds or not" import subprocess try: subprocess.check_output(args, stderr=subprocess.STDOUT) return True except: return False def have_cmd(cmd): return try_cmd(["which", cmd]) import shutil, subprocess def process_datfile(datfile): global opts if not os.access(datfile, os.R_OK): raise Exception("Could not read data file '%s'" % datfile) datpath = os.path.abspath(datfile) datfile = os.path.basename(datpath) datdir = os.path.dirname(datpath) outdir = args.OUTPUT_DIR if args.OUTPUT_DIR else datdir filename = datfile.replace('.dat','') ## Create a temporary directory # cwd = os.getcwd() tempdir = tempfile.mkdtemp('.make-plots') tempdatpath = os.path.join(tempdir, datfile) shutil.copy(datpath, tempdir) if args.NO_CLEANUP: logging.info('Keeping temp-files in %s' % tempdir) ## Make TeX file inputdata = InputData(datpath) if inputdata.attr_bool('IgnorePlot', False): return texpath = os.path.join(tempdir, '%s.tex' % filename) texfile = open(texpath, 'w') p = Plot() texfile.write(p.write_header(inputdata)) if inputdata.attr_bool("MainPlot", True): mp = MainPlot(inputdata) texfile.write(mp.draw(inputdata)) if not inputdata.attr_bool("is2dim", False): for rname, i in inputdata.ratio_names(): if inputdata.attr_bool(rname, True) and inputdata.attr(rname + 'Reference', False): rp = RatioPlot(inputdata, i) texfile.write(rp.draw(inputdata)) #for s in inputdata.special.values(): # texfile.write(p.write_special(inputdata)) texfile.write(p.write_footer()) texfile.close() if args.OUTPUT_FORMAT != ["TEX"]: ## Check for the required programs latexavailable = have_cmd("latex") dvipsavailable = have_cmd("dvips") convertavailable = have_cmd("convert") ps2pnmavailable = have_cmd("ps2pnm") pnm2pngavailable = have_cmd("pnm2png") # TODO: It'd be nice to be able to control the size of the PNG between thumb and full-size... 
# currently defaults (and is used below) to a size suitable for thumbnails def mkpngcmd(infile, outfile, outsize=450, density=300): if convertavailable: pngcmd = ["convert", "-flatten", "-density", str(density), infile, "-quality", "100", "-resize", "{size:d}x{size:d}>".format(size=outsize), #"-sharpen", "0x1.0", outfile] #logging.debug(" ".join(pngcmd)) #pngproc = subprocess.Popen(pngcmd, stdout=subprocess.PIPE, cwd=tempdir) #pngproc.wait() return pngcmd else: raise Exception("Required PNG maker program (convert) not found") # elif ps2pnmavailable and pnm2pngavailable: # pstopnm = "pstopnm -stdout -xsize=461 -ysize=422 -xborder=0.01 -yborder=0.01 -portrait " + infile # p1 = subprocess.Popen(pstopnm.split(), stdout=subprocess.PIPE, stderr=open("/dev/null", "w"), cwd=tempdir) # p2 = subprocess.Popen(["pnmtopng"], stdin=p1.stdout, stdout=open("%s/%s.png" % (tempdir, outfile), "w"), stderr=open("/dev/null", "w"), cwd=tempdir) # p2.wait() # else: # raise Exception("Required PNG maker programs (convert, or ps2pnm and pnm2png) not found") ## Run LaTeX (in no-stop mode) logging.debug(os.listdir(tempdir)) texcmd = ["latex", "\scrollmode\input", texpath] logging.debug("TeX command: " + " ".join(texcmd)) texproc = subprocess.Popen(texcmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, cwd=tempdir) logging.debug(texproc.communicate()[0]) logging.debug(os.listdir(tempdir)) ## Run dvips dvcmd = ["dvips", filename] if not logging.getLogger().isEnabledFor(logging.DEBUG): dvcmd.append("-q") ## Handle Minion Font if args.OUTPUT_FONT == "MINION": dvcmd.append('-Pminion') ## Choose format # TODO: Rationalise... this is a mess! Maybe we can use tex2pix? 
if "PS" in args.OUTPUT_FORMAT: dvcmd += ["-o", "%s.ps" % filename] logging.debug(" ".join(dvcmd)) dvproc = subprocess.Popen(dvcmd, stdout=subprocess.PIPE, cwd=tempdir) dvproc.wait() if "PDF" in args.OUTPUT_FORMAT: dvcmd.append("-f") logging.debug(" ".join(dvcmd)) dvproc = subprocess.Popen(dvcmd, stdout=subprocess.PIPE, cwd=tempdir) - cnvproc = subprocess.Popen(["ps2pdf", "-"], stdin=dvproc.stdout, stdout=subprocess.PIPE, cwd=tempdir) + cnvproc = subprocess.Popen(["ps2pdf", "-dNOSAFER", "-"], stdin=dvproc.stdout, stdout=subprocess.PIPE, cwd=tempdir) f = open(os.path.join(tempdir, "%s.pdf" % filename), "wb") f.write(cnvproc.communicate()[0]) f.close() if "EPS" in args.OUTPUT_FORMAT: dvcmd.append("-f") logging.debug(" ".join(dvcmd)) dvproc = subprocess.Popen(dvcmd, stdout=subprocess.PIPE, cwd=tempdir) cnvproc = subprocess.Popen(["ps2eps"], stdin=dvproc.stdout, stderr=subprocess.PIPE, stdout=subprocess.PIPE, cwd=tempdir) f = open(os.path.join(tempdir, "%s.eps" % filename), "wb") f.write(cnvproc.communicate()[0]) f.close() if "PNG" in args.OUTPUT_FORMAT: dvcmd.append("-f") logging.debug(" ".join(dvcmd)) dvproc = subprocess.Popen(dvcmd, stdout=subprocess.PIPE, cwd=tempdir) #pngcmd = ["convert", "-flatten", "-density", "110", "-", "-quality", "100", "-sharpen", "0x1.0", "%s.png" % filename] pngcmd = mkpngcmd("-", "%s.png" % filename) logging.debug(" ".join(pngcmd)) pngproc = subprocess.Popen(pngcmd, stdin=dvproc.stdout, stdout=subprocess.PIPE, cwd=tempdir) pngproc.wait() logging.debug(os.listdir(tempdir)) ## Copy results back to main dir for fmt in args.OUTPUT_FORMAT: outname = "%s.%s" % (filename, fmt.lower()) outpath = os.path.join(tempdir, outname) if os.path.exists(outpath): shutil.copy(outpath, outdir) else: logging.error("No output file '%s' from processing %s" % (outname, datfile)) ## Clean up if not args.NO_CLEANUP: shutil.rmtree(tempdir, ignore_errors=True) #################### if __name__ == '__main__': ## Try to rename the process on Linux try: import ctypes 
libc = ctypes.cdll.LoadLibrary('libc.so.6') libc.prctl(15, 'make-plots', 0, 0, 0) except Exception: pass ## Try to use Psyco optimiser try: import psyco psyco.full() except ImportError: pass ## Find number of (virtual) processing units import multiprocessing try: numcores = multiprocessing.cpu_count() except: numcores = 1 ## Parse command line options import argparse parser = argparse.ArgumentParser(usage=__doc__) parser.add_argument("DATFILES", nargs="+", help=".dat files to plot") parser.add_argument("-j", "-n", "--num-threads", dest="NUM_THREADS", type=int, default=numcores, help="max number of threads to be used [%s]" % numcores) parser.add_argument("-o", "--outdir", dest="OUTPUT_DIR", default=None, help="choose the output directory (default = .dat dir)") parser.add_argument("--font", dest="OUTPUT_FONT", choices="palatino,cm,times,helvetica,minion".split(","), default="palatino", help="choose the font to be used in the plots") parser.add_argument("-f", "--format", dest="OUTPUT_FORMAT", default="PDF", help="choose plot format, perhaps multiple comma-separated formats e.g. 'pdf' or 'tex,pdf,png' (default = PDF).") parser.add_argument("--no-cleanup", dest="NO_CLEANUP", action="store_true", default=False, help="keep temporary directory and print its filename.") parser.add_argument("--no-subproc", dest="NO_SUBPROC", action="store_true", default=False, help="don't use subprocesses to render the plots in parallel -- useful for debugging.") parser.add_argument("--full-range", dest="FULL_RANGE", action="store_true", default=False, help="plot full y range in log-y plots.") parser.add_argument("-c", "--config", dest="CONFIGFILES", action="append", default=None, help="plot config file to be used. 
Overrides internal config blocks.") verbgroup = parser.add_argument_group("Verbosity control") verbgroup.add_argument("-v", "--verbose", action="store_const", const=logging.DEBUG, dest="LOGLEVEL", default=logging.INFO, help="print debug (very verbose) messages") verbgroup.add_argument("-q", "--quiet", action="store_const", const=logging.WARNING, dest="LOGLEVEL", default=logging.INFO, help="be very quiet") args = parser.parse_args() ## Tweak the opts output logging.basicConfig(level=args.LOGLEVEL, format="%(message)s") args.OUTPUT_FONT = args.OUTPUT_FONT.upper() args.OUTPUT_FORMAT = args.OUTPUT_FORMAT.upper().split(",") if args.NUM_THREADS == 1: args.NO_SUBPROC = True ## Check for no args if len(args.DATFILES) == 0: logging.error(parser.format_usage()) sys.exit(2) ## Check that the files exist for f in args.DATFILES: if not os.access(f, os.R_OK): print("Error: cannot read from %s" % f) sys.exit(1) ## Test for external programs (kpsewhich, latex, dvips, ps2pdf/ps2eps, and convert) args.LATEXPKGS = [] if args.OUTPUT_FORMAT != ["TEX"]: try: ## latex if not have_cmd("latex"): logging.error("ERROR: required program 'latex' could not be found. Exiting...") sys.exit(1) ## dvips if not have_cmd("dvips"): logging.error("ERROR: required program 'dvips' could not be found. Exiting...") sys.exit(1) ## ps2pdf / ps2eps if "PDF" in args.OUTPUT_FORMAT: if not have_cmd("ps2pdf"): logging.error("ERROR: required program 'ps2pdf' (for PDF output) could not be found. Exiting...") sys.exit(1) elif "EPS" in args.OUTPUT_FORMAT: if not have_cmd("ps2eps"): logging.error("ERROR: required program 'ps2eps' (for EPS output) could not be found. Exiting...") sys.exit(1) ## PNG output converter if "PNG" in args.OUTPUT_FORMAT: if not have_cmd("convert"): logging.error("ERROR: required program 'convert' (for PNG output) could not be found. 
Exiting...") sys.exit(1) ## kpsewhich: required for LaTeX package testing if not have_cmd("kpsewhich"): logging.warning("WARNING: required program 'kpsewhich' (for LaTeX package checks) could not be found") else: ## Check minion font if args.OUTPUT_FONT == "MINION": p = subprocess.Popen(["kpsewhich", "minion.sty"], stdout=subprocess.PIPE) p.wait() if p.returncode != 0: logging.warning('Warning: Using "--minion" requires minion.sty to be installed. Ignoring it.') args.OUTPUT_FONT = "PALATINO" ## Check for HEP LaTeX packages # TODO: remove HEP-specifics/non-standards? for pkg in ["hepnames", "hepunits", "underscore"]: p = subprocess.Popen(["kpsewhich", "%s.sty" % pkg], stdout=subprocess.PIPE) p.wait() if p.returncode == 0: args.LATEXPKGS.append(pkg) ## Check for Palatino old style figures and small caps if args.OUTPUT_FONT == "PALATINO": p = subprocess.Popen(["kpsewhich", "ot1pplx.fd"], stdout=subprocess.PIPE) p.wait() if p.returncode == 0: args.OUTPUT_FONT = "PALATINO_OSF" except Exception as e: logging.warning("Problem while testing for external packages. 
I'm going to try and continue without testing, but don't hold your breath...") def init_worker(): import signal signal.signal(signal.SIGINT, signal.SIG_IGN) ## Run rendering jobs datfiles = args.DATFILES plotword = "plots" if len(datfiles) > 1 else "plot" logging.info("Making %d %s" % (len(datfiles), plotword)) if args.NO_SUBPROC: init_worker() for i, df in enumerate(datfiles): logging.info("Plotting %s (%d/%d remaining)" % (df, len(datfiles)-i, len(datfiles))) process_datfile(df) else: pool = multiprocessing.Pool(args.NUM_THREADS, init_worker) try: for i, _ in enumerate(pool.imap(process_datfile, datfiles)): logging.info("Plotting %s (%d/%d remaining)" % (datfiles[i], len(datfiles)-i, len(datfiles))) pool.close() except KeyboardInterrupt: print("Caught KeyboardInterrupt, terminating workers") pool.terminate() except ValueError as e: print(e) print("Perhaps your .dat file is corrupt?") pool.terminate() pool.join() diff --git a/bin/make-plots b/bin/make-plots --- a/bin/make-plots +++ b/bin/make-plots @@ -1,3540 +1,3540 @@ #! /usr/bin/env python """\ %(prog)s [options] file.dat [file2.dat ...] TODO * Optimise output for e.g. lots of same-height bins in a row * Add a RatioFullRange directive to show the full range of error bars + MC envelope in the ratio * Tidy LaTeX-writing code -- faster to compile one doc only, then split it? * Handle boolean values flexibly (yes, no, true, false, etc. as well as 1, 0) """ from __future__ import print_function ## ## This program is copyright by Hendrik Hoeth and ## the Rivet team https://rivet.hepforge.org. It may be used ## for scientific and private purposes. Patches are welcome, but please don't ## redistribute changed versions yourself. ## ## Check the Python version import sys if sys.version_info[:3] < (2,6,0): print("make-plots requires Python version >= 2.6.0... 
exiting") sys.exit(1) ## Try to rename the process on Linux try: import ctypes libc = ctypes.cdll.LoadLibrary('libc.so.6') libc.prctl(15, 'make-plots', 0, 0, 0) except Exception as e: pass import os, logging, re import tempfile import getopt import string import copy from math import * ## Regex patterns pat_begin_block = re.compile(r'^#+\s*BEGIN ([A-Z0-9_]+) ?(\S+)?') pat_end_block = re.compile(r'^#+\s*END ([A-Z0-9_]+)') pat_comment = re.compile(r'^#|^\s*$') pat_property = re.compile(r'^(\w+?)=(.*)$') pat_property_opt = re.compile(r'^ReplaceOption\[(\w+=\w+)\]=(.*)$') pat_path_property = re.compile(r'^(\S+?)::(\w+?)=(.*)$') pat_options = re.compile(r"((?::\w+=[^:/]+)+)") def fuzzyeq(a, b, tolerance=1e-6): "Fuzzy equality comparison function for floats, with given fractional tolerance" # if type(a) is not float or type(b) is not float: # print(a, b) if (a == 0 and abs(b) < 1e-12) or (b == 0 and abs(a) < 1e-12): return True return 2.0*abs(a-b)/abs(a+b) < tolerance def inrange(x, a, b): return x >= a and x < b def floatify(x): if type(x) is str: x = x.split() if not hasattr(x, "__len__"): x = [x] x = [float(a) for a in x] return x[0] if len(x) == 1 else x def floatpair(x): if type(x) is str: x = x.split() if hasattr(x, "__len__"): assert len(x) == 2 return [float(a) for a in x] return [float(x), float(x)] def is_end_marker(line, blockname): m = pat_end_block.match(line) return m and m.group(1) == blockname def is_comment(line): return pat_comment.match(line) is not None class Described(object): "Inherited functionality for objects holding a 'description' dictionary" def __init__(self): pass def has_attr(self, key): return key in self.description def set_attr(self, key, val): self.description[key] = val def attr(self, key, default=None): return self.description.get(key, default) def attr_bool(self, key, default=None): x = self.attr(key, default) if x is None: return None if str(x).lower() in ["1", "true", "yes", "on"]: return True if str(x).lower() in ["0", "false", "no",
"off"]: return False return None def attr_int(self, key, default=None): x = self.attr(key, default) try: x = int(x) except Exception: x = None return x def attr_float(self, key, default=None): x = self.attr(key, default) try: x = float(x) except Exception: x = None return x class InputData(Described): def __init__(self, filename): self.filename=filename if not self.filename.endswith(".dat"): self.filename += ".dat" self.normalized=False self.histos = {} self.ratiohistos = {} self.histomangler = {} self.special = {} self.functions = {} self.description = {} self.pathdescriptions = [] self.description['_OptSubs'] = { } self.description['is2dim'] = False f = open(self.filename) for line in f: m = pat_begin_block.match(line) if m: name, path = m.group(1,2) if path is None and name != 'PLOT': raise Exception('BEGIN sections need a path name.') if name == 'PLOT': self.read_input(f) elif name == 'SPECIAL': self.special[path] = Special(f) elif name == 'HISTOGRAM' or name == 'HISTOGRAM2D': self.histos[path] = Histogram(f, p=path) self.description['is2dim'] = self.histos[path].is2dim if not self.histos[path].getName() == '': newname = self.histos[path].getName() self.histos[newname] = copy.deepcopy(self.histos[path]) del self.histos[path] elif name == 'HISTO1D': self.histos[path] = Histo1D(f, p=path) if not self.histos[path].getName() == '': newname = self.histos[path].getName() self.histos[newname] = copy.deepcopy(self.histos[path]) del self.histos[path] elif name == 'HISTO2D': self.histos[path] = Histo2D(f, p=path) self.description['is2dim'] = True if not self.histos[path].getName() == '': newname = self.histos[path].getName() self.histos[newname] = copy.deepcopy(self.histos[path]) del self.histos[path] elif name == 'HISTOGRAMMANGLER': self.histomangler[path] = PlotFunction(f) elif name == 'COUNTER': self.histos[path] = Counter(f, p=path) elif name == 'VALUE': self.histos[path] = Value(f, p=path) elif name == 'FUNCTION': self.functions[path] = Function(f) # elif is_comment(line): # continue #
else: # self.read_path_based_input(line) f.close() self.apply_config_files(args.CONFIGFILES) self.description.setdefault('PlotSizeX', 10.) self.description.setdefault('PlotSizeY', 6.) if self.description['is2dim']: self.description['PlotSizeX'] -= 1.7 self.description['PlotSizeY'] = 10. self.description['MainPlot'] = '1' self.description['RatioPlot'] = '0' if self.description.get('PlotSize', '') != '': plotsizes = self.description['PlotSize'].split(',') self.description['PlotSizeX'] = float(plotsizes[0]) self.description['PlotSizeY'] = float(plotsizes[1]) if len(plotsizes) == 3: self.description['RatioPlotSizeY'] = float(plotsizes[2]) del self.description['PlotSize'] self.description['RatioPlotSizeY'] = 0. if self.description.get('MainPlot') == '0': ## Ratio, no main self.description['RatioPlot'] = '1' #< don't allow both to be zero! self.description['PlotSizeY'] = 0. #if 'RatioPlot' in self.description and self.description['RatioPlot']=='1': if self.attr_bool('RatioPlot'): if self.has_attr('RatioPlotYSize') and self.attr('RatioPlotYSize') != '': self.description['RatioPlotSizeY'] = self.attr_float('RatioPlotYSize') else: if self.attr_bool('MainPlot'): self.description['RatioPlotSizeY'] = 6. else: self.description['RatioPlotSizeY'] = 3. if self.description['is2dim']: self.description['RatioPlotSizeY'] *= 2. for i in range(1,9): if self.description.get('RatioPlot'+str(i), '0') == '1': if self.description.get('RatioPlot'+str(i)+'YSize') != '': self.description['RatioPlot'+str(i)+'SizeY'] = float(self.description['RatioPlot'+str(i)+'YSize']) else: if self.description.get('MainPlot') == '0': self.description['RatioPlot'+str(i)+'SizeY'] = 6. else: self.description['RatioPlot'+str(i)+'SizeY'] = 3. if self.description['is2dim']: self.description['RatioPlot'+str(i)+'SizeY'] *= 2. 
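The size-flag logic above leans on the loose boolean strings accepted throughout plot .dat files (handled by Described.attr_bool). As a standalone sketch of that same parsing, separate from the script itself:

```python
def attr_bool(value, default=None):
    """Parse the loose boolean strings used in plot .dat files.

    Standalone sketch of the Described.attr_bool logic: returns None for
    unrecognised tokens so callers can tell 'unset' apart from False.
    """
    # Fall back to the default when the key was unset
    if value is None:
        value = default
    if value is None:
        return None
    s = str(value).lower()
    if s in ("1", "true", "yes", "on"):
        return True
    if s in ("0", "false", "no", "off"):
        return False
    return None  # unrecognised token
```

This mirrors the docstring's TODO about handling boolean values flexibly: `"Yes"`, `"on"` and `"1"` are all truthy spellings.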
## Ensure numbers, not strings self.description['PlotSizeX'] = float(self.description['PlotSizeX']) self.description['PlotSizeY'] = float(self.description['PlotSizeY']) self.description['RatioPlotSizeY'] = float(self.description['RatioPlotSizeY']) # self.description['TopMargin'] = float(self.description['TopMargin']) # self.description['BottomMargin'] = float(self.description['BottomMargin']) self.description['LogX'] = str(self.description.get('LogX', 0)) in ["1", "yes", "true"] self.description['LogY'] = str(self.description.get('LogY', 0)) in ["1", "yes", "true"] self.description['LogZ'] = str(self.description.get('LogZ', 0)) in ["1", "yes", "true"] self.description['RatioPlotLogY'] = str(self.description.get('RatioPlotLogY', 0)) in ["1", "yes", "true"] self.description['RatioPlotLogZ'] = str(self.description.get('RatioPlotLogZ', 0)) in ["1", "yes", "true"] for i in range(1,9): self.description['RatioPlot'+str(i)+'LogY'] = str(self.description.get('RatioPlot'+str(i)+'LogY', 0)) in ["1", "yes", "true"] self.description['RatioPlot'+str(i)+'LogZ'] = str(self.description.get('RatioPlot'+str(i)+'LogZ', 0)) in ["1", "yes", "true"] if 'Rebin' in self.description: for i in self.histos: self.histos[i].description['Rebin'] = self.description['Rebin'] if 'ConnectBins' in self.description: for i in self.histos: self.histos[i].description['ConnectBins'] = self.description['ConnectBins'] histoordermap = {} histolist = list(self.histos.keys()) if 'DrawOnly' in self.description: histolist = filter(histolist.count, self.description['DrawOnly'].strip().split()) for histo in histolist: order = 0 if 'PlotOrder' in self.histos[histo].description: order = int(self.histos[histo].description['PlotOrder']) if not order in histoordermap: histoordermap[order] = [] histoordermap[order].append(histo) sortedhistolist = [] for i in sorted(histoordermap.keys()): sortedhistolist.extend(histoordermap[i]) self.description['DrawOnly']=sortedhistolist refhistoordermap = {} refhistolist = 
list(self.histos.keys()) if 'RatioPlotDrawOnly' in self.description: refhistolist = filter(refhistolist.count, self.description['RatioPlotDrawOnly'].strip().split()) for histo in refhistolist: order = 0 if 'PlotOrder' in self.histos[histo].description: order = int(self.histos[histo].description['PlotOrder']) if not order in refhistoordermap: refhistoordermap[order] = [] refhistoordermap[order].append(histo) sortedrefhistolist = [] for i in sorted(refhistoordermap.keys()): sortedrefhistolist.extend(refhistoordermap[i]) self.description['RatioPlotDrawOnly']=sortedrefhistolist else: self.description['RatioPlotDrawOnly']=self.description['DrawOnly'] for i in range(1,9): refhistoordermap = {} refhistolist = list(self.histos.keys()) if ('RatioPlot'+str(i)+'DrawOnly') in self.description: refhistolist = filter(refhistolist.count, self.description['RatioPlot'+str(i)+'DrawOnly'].strip().split()) for histo in refhistolist: order = 0 if 'PlotOrder' in self.histos[histo].description: order = int(self.histos[histo].description['PlotOrder']) if not order in refhistoordermap: refhistoordermap[order] = [] refhistoordermap[order].append(histo) sortedrefhistolist = [] for key in sorted(refhistoordermap.keys()): sortedrefhistolist.extend(refhistoordermap[key]) self.description['RatioPlot'+str(i)+'DrawOnly']=sortedrefhistolist else: self.description['RatioPlot'+str(i)+'DrawOnly']=self.description['DrawOnly'] ## Inherit various values from histograms if not explicitly set for k in ['LogX', 'LogY', 'LogZ', 'XLabel', 'YLabel', 'ZLabel', 'XCustomMajorTicks', 'YCustomMajorTicks', 'ZCustomMajorTicks']: self.inherit_from_histos(k) @property def is2dim(self): return self.attr_bool("is2dim", False) @is2dim.setter def is2dim(self, val): self.set_attr("is2dim", val) @property def drawonly(self): x = self.attr("DrawOnly") if type(x) is str: self.drawonly = x #< use setter to listify return x if x else [] @drawonly.setter def drawonly(self, val): if type(val) is str: val = val.strip().split() 
self.set_attr("DrawOnly", val) @property def stacklist(self): x = self.attr("Stack") if type(x) is str: self.stacklist = x #< use setter to listify return x if x else [] @stacklist.setter def stacklist(self, val): if type(val) is str: val = val.strip().split() self.set_attr("Stack", val) @property def plotorder(self): x = self.attr("PlotOrder") if type(x) is str: self.plotorder = x #< use setter to listify return x if x else [] @plotorder.setter def plotorder(self, val): if type(val) is str: val = val.strip().split() self.set_attr("PlotOrder", val) @property def plotsizex(self): return self.attr_float("PlotSizeX") @plotsizex.setter def plotsizex(self, val): self.set_attr("PlotSizeX", val) @property def plotsizey(self): return self.attr_float("PlotSizeY") @plotsizey.setter def plotsizey(self, val): self.set_attr("PlotSizeY", val) @property def plotsize(self): return [self.plotsizex, self.plotsizey] @plotsize.setter def plotsize(self, val): if type(val) is str: val = [float(x) for x in val.split(",")] assert len(val) == 2 self.plotsizex = val[0] self.plotsizey = val[1] @property def ratiosizey(self): return self.attr_float("RatioPlotSizeY") @ratiosizey.setter def ratiosizey(self, val): self.set_attr("RatioPlotSizeY", val) @property def scale(self): return self.attr_float("Scale") @scale.setter def scale(self, val): self.set_attr("Scale", val) @property def xmin(self): return self.attr_float("XMin") @xmin.setter def xmin(self, val): self.set_attr("XMin", val) @property def xmax(self): return self.attr_float("XMax") @xmax.setter def xmax(self, val): self.set_attr("XMax", val) @property def xrange(self): return [self.xmin, self.xmax] @xrange.setter def xrange(self, val): if type(val) is str: val = [float(x) for x in val.split(",")] assert len(val) == 2 self.xmin = val[0] self.xmax = val[1] @property def ymin(self): return self.attr_float("YMin") @ymin.setter def ymin(self, val): self.set_attr("YMin", val) @property def ymax(self): return self.attr_float("YMax") 
@ymax.setter def ymax(self, val): self.set_attr("YMax", val) @property def yrange(self): return [self.ymin, self.ymax] @yrange.setter def yrange(self, val): if type(val) is str: val = [float(y) for y in val.split(",")] assert len(val) == 2 self.ymin = val[0] self.ymax = val[1] # TODO: add more rw properties for plotsize(x,y), ratiosize(y), # show_mainplot, show_ratioplot, show_legend, log(x,y,z), rebin, # drawonly, legendonly, plotorder, stack, # label(x,y,z), majorticks(x,y,z), minorticks(x,y,z), # min(x,y,z), max(x,y,z), range(x,y,z) def inherit_from_histos(self, k): """Note: this will inherit the key from a random histogram: only use if you're sure all histograms have this key!""" if k not in self.description: h = list(self.histos.values())[0] if k in h.description: self.description[k] = h.description[k] def read_input(self, f): for line in f: if is_end_marker(line, 'PLOT'): break elif is_comment(line): continue m = pat_property.match(line) m_opt = pat_property_opt.match(line) if m_opt: opt_old, opt_new = m_opt.group(1,2) self.description['_OptSubs'][opt_old.strip()] = opt_new.strip() elif m: prop, value = m.group(1,2) prop = prop.strip() value = value.strip() if prop in self.description: logging.debug("Overwriting property %s = %s -> %s" % (prop, self.description[prop], value)) ## Use strip here to deal with DOS newlines containing \r self.description[prop.strip()] = value.strip() def apply_config_files(self, conffiles): if conffiles is not None: for filename in conffiles: cf = open(filename,'r') lines = cf.readlines() for i in range(0, len(lines)): ## First evaluate PLOT sections m = pat_begin_block.match(lines[i]) if m and m.group(1) == 'PLOT' and re.match(m.group(2),self.filename): while i < len(lines)-1: i = i+1 if is_end_marker(lines[i], 'PLOT'): break elif is_comment(lines[i]): continue m = pat_property.match(lines[i]) if m: prop, value = m.group(1,2) if prop in self.description: logging.debug("Overwriting property %s = %s -> %s" % (prop, self.description[prop], value)) ## Use strip here to deal with DOS newlines containing \r self.description[prop.strip()] = value.strip() elif is_comment(lines[i]): continue else: ## Then evaluate path-based settings, e.g.
for HISTOGRAMs m = pat_path_property.match(lines[i]) if m: regex, prop, value = m.group(1,2,3) for obj_dict in [self.special, self.histos, self.functions]: for path, obj in obj_dict.items(): if re.match(regex, path): ## Use strip here to deal with DOS newlines containing \r obj.description.update({prop.strip() : value.strip()}) cf.close() class Plot(object): def __init__(self, inputdata): pass def set_normalization(self,inputdata): if inputdata.normalized==True: return for method in ['NormalizeToIntegral', 'NormalizeToSum']: if method in inputdata.description: for i in inputdata.description['DrawOnly']: if method not in inputdata.histos[i].description: inputdata.histos[i].description[method] = inputdata.description[method] if 'Scale' in inputdata.description: for i in inputdata.description['DrawOnly']: inputdata.histos[i].description['Scale'] = float(inputdata.description['Scale']) for i in inputdata.histos.keys(): inputdata.histos[i].mangle_input() inputdata.normalized = True def stack_histograms(self,inputdata): if 'Stack' in inputdata.description: stackhists = [h for h in inputdata.attr('Stack').strip().split() if h in inputdata.histos] previous = '' for i in stackhists: if previous != '': inputdata.histos[i].add(inputdata.histos[previous]) previous = i def set_histo_options(self,inputdata): if 'ConnectGaps' in inputdata.description: for i in inputdata.histos.keys(): if 'ConnectGaps' not in inputdata.histos[i].description: inputdata.histos[i].description['ConnectGaps'] = inputdata.description['ConnectGaps'] # Counter and Value only have dummy x-axis, ticks wouldn't make sense here, so suppress them: if 'Value object' in str(inputdata.histos) or 'Counter object' in str(inputdata.histos): inputdata.description['XCustomMajorTicks'] = '' inputdata.description['XCustomMinorTicks'] = '' def set_borders(self, inputdata): self.set_xmax(inputdata) self.set_xmin(inputdata) self.set_ymax(inputdata) self.set_ymin(inputdata) self.set_zmax(inputdata) self.set_zmin(inputdata) 
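The path-based config syntax handled just above in apply_config_files (lines of the form `PATHREGEX::Key=Value`, matched against each histogram path) can be sketched standalone; the helper name here is hypothetical, only the regex is taken from the script:

```python
import re

# Same pattern as pat_path_property in make-plots
PAT_PATH_PROPERTY = re.compile(r'^(\S+?)::(\w+?)=(.*)$')

def apply_path_settings(lines, descriptions):
    """Apply 'PATHREGEX::Key=Value' lines to all matching paths.

    descriptions maps histogram paths to their description dicts.
    """
    for line in lines:
        m = PAT_PATH_PROPERTY.match(line)
        if not m:
            continue
        regex, prop, value = m.group(1, 2, 3)
        for path, desc in descriptions.items():
            # re.match anchors at the start of the path, like the original
            if re.match(regex, path):
                # strip() guards against DOS newlines containing \r
                desc[prop.strip()] = value.strip()

descs = {"/MC/d01": {}, "/REF/d01": {}}
apply_path_settings(["/MC/.*::LineColor=red"], descs)
```

Here the `/MC/.*` regex updates only the MC histogram's description, leaving the reference one untouched.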
inputdata.description['Borders'] = (self.xmin, self.xmax, self.ymin, self.ymax, self.zmin, self.zmax) def set_xmin(self, inputdata): self.xmin = inputdata.xmin if self.xmin is None: xmins = [inputdata.histos[h].getXMin() for h in inputdata.description['DrawOnly']] self.xmin = min(xmins) if xmins else 0.0 def set_xmax(self,inputdata): self.xmax = inputdata.xmax if self.xmax is None: xmaxs = [inputdata.histos[h].getXMax() for h in inputdata.description['DrawOnly']] self.xmax = min(xmaxs) if xmaxs else 1.0 def set_ymin(self,inputdata): if inputdata.ymin is not None: self.ymin = inputdata.ymin else: ymins = [inputdata.histos[i].getYMin(self.xmin, self.xmax, inputdata.description['LogY']) for i in inputdata.attr('DrawOnly')] minymin = min(ymins) if ymins else 0.0 if inputdata.description['is2dim']: self.ymin = minymin else: showzero = inputdata.attr_bool("ShowZero", True) if showzero: self.ymin = 0. if minymin > -1e-4 else 1.1*minymin else: self.ymin = 1.1*minymin if minymin < -1e-4 else 0 if minymin < 1e-4 else 0.9*minymin if inputdata.description['LogY']: ymins = [ymin for ymin in ymins if ymin > 0.0] if not ymins: if self.ymax == 0: self.ymax = 1 ymins.append(2e-7*self.ymax) minymin = min(ymins) fullrange = args.FULL_RANGE if inputdata.has_attr('FullRange'): fullrange = inputdata.attr_bool('FullRange') self.ymin = minymin/1.7 if fullrange else max(minymin/1.7, 2e-7*self.ymax) if self.ymin == self.ymax: self.ymin -= 1 self.ymax += 1 def set_ymax(self,inputdata): if inputdata.has_attr('YMax'): self.ymax = inputdata.attr_float('YMax') else: ymaxs = [inputdata.histos[h].getYMax(self.xmin, self.xmax) for h in inputdata.attr('DrawOnly')] self.ymax = max(ymaxs) if ymaxs else 1.0 if not inputdata.is2dim: self.ymax *= (1.7 if inputdata.attr_bool('LogY') else 1.1) def set_zmin(self,inputdata): if inputdata.has_attr('ZMin'): self.zmin = inputdata.attr_float('ZMin') else: zmins = [inputdata.histos[i].getZMin(self.xmin, self.xmax, self.ymin, self.ymax) for i in 
inputdata.attr('DrawOnly')] minzmin = min(zmins) if zmins else 0.0 self.zmin = minzmin if zmins: showzero = inputdata.attr_bool('ShowZero', True) if showzero: self.zmin = 0 if minzmin > -1e-4 else 1.1*minzmin else: self.zmin = 1.1*minzmin if minzmin < -1e-4 else 0. if minzmin < 1e-4 else 0.9*minzmin if inputdata.attr_bool('LogZ', False): zmins = [zmin for zmin in zmins if zmin > 0] if not zmins: if self.zmax == 0: self.zmax = 1 zmins.append(2e-7*self.zmax) minzmin = min(zmins) fullrange = inputdata.attr_bool("FullRange", args.FULL_RANGE) self.zmin = minzmin/1.7 if fullrange else max(minzmin/1.7, 2e-7*self.zmax) if self.zmin == self.zmax: self.zmin -= 1 self.zmax += 1 def set_zmax(self,inputdata): self.zmax = inputdata.attr_float('ZMax') if self.zmax is None: zmaxs = [inputdata.histos[h].getZMax(self.xmin, self.xmax, self.ymin, self.ymax) for h in inputdata.attr('DrawOnly')] self.zmax = max(zmaxs) if zmaxs else 1.0 def draw(self): pass def write_header(self,inputdata): if inputdata.description.get('LeftMargin', '') != '': inputdata.description['LeftMargin'] = float(inputdata.description['LeftMargin']) else: inputdata.description['LeftMargin'] = 1.4 if inputdata.description.get('RightMargin', '') != '': inputdata.description['RightMargin'] = float(inputdata.description['RightMargin']) else: inputdata.description['RightMargin'] = 0.35 if inputdata.description.get('TopMargin', '') != '': inputdata.description['TopMargin'] = float(inputdata.description['TopMargin']) else: inputdata.description['TopMargin'] = 0.65 if inputdata.description.get('BottomMargin', '') != '': inputdata.description['BottomMargin'] = float(inputdata.description['BottomMargin']) else: inputdata.description['BottomMargin'] = 0.95 if inputdata.description['is2dim']: inputdata.description['RightMargin'] += 1.8 papersizex = inputdata.description['PlotSizeX'] + 0.1 + \ inputdata.description['LeftMargin'] + inputdata.description['RightMargin'] papersizey = inputdata.description['PlotSizeY'] + 0.1 + \ 
inputdata.description['TopMargin'] + inputdata.description['BottomMargin'] papersizey += inputdata.description['RatioPlotSizeY'] for i in range(1,9): if ('RatioPlot'+str(i)+'SizeY') in inputdata.description: papersizey += inputdata.description['RatioPlot'+str(i)+'SizeY'] # out = "" out += '\\documentclass{article}\n' if args.OUTPUT_FONT == "MINION": out += ('\\usepackage{minion}\n') elif args.OUTPUT_FONT == "PALATINO_OSF": out += ('\\usepackage[osf,sc]{mathpazo}\n') elif args.OUTPUT_FONT == "PALATINO": out += ('\\usepackage{mathpazo}\n') elif args.OUTPUT_FONT == "TIMES": out += ('\\usepackage{mathptmx}\n') elif args.OUTPUT_FONT == "HELVETICA": out += ('\\renewcommand{\\familydefault}{\\sfdefault}\n') out += ('\\usepackage{sfmath}\n') out += ('\\usepackage{helvet}\n') out += ('\\usepackage[symbolgreek]{mathastext}\n') for pkg in args.LATEXPKGS: out += ('\\usepackage{%s}\n' % pkg) out += ('\\usepackage[dvipsnames]{xcolor}\n') out += ('\\usepackage{pst-all}\n') out += ('\\selectcolormodel{rgb}\n') out += ('\\definecolor{red}{HTML}{EE3311}\n') # (Google uses 'DC3912') out += ('\\definecolor{blue}{HTML}{3366FF}\n') out += ('\\definecolor{green}{HTML}{109618}\n') out += ('\\definecolor{orange}{HTML}{FF9900}\n') out += ('\\definecolor{lilac}{HTML}{990099}\n') out += ('\\usepackage{amsmath}\n') out += ('\\usepackage{amssymb}\n') out += ('\\usepackage{relsize}\n') out += ('\\usepackage{graphicx}\n') out += ('\\usepackage[dvips,\n') out += (' left=%4.3fcm, right=0cm,\n' % (inputdata.description['LeftMargin']-0.45,)) out += (' top=%4.3fcm, bottom=0cm,\n' % (inputdata.description['TopMargin']-0.30,)) out += (' paperwidth=%scm,paperheight=%scm\n' % (papersizex,papersizey)) out += (']{geometry}\n') if 'DefineColor' in inputdata.description: out += ('% user defined colours\n') for color in inputdata.description['DefineColor'].split('\t'): out += ('%s\n' %color) if 'UseExtendedPSTricks' in inputdata.description and inputdata.description['UseExtendedPSTricks']=='1': out +=
self.write_extended_pstricks() out += ('\\begin{document}\n') out += ('\\pagestyle{empty}\n') out += ('\\SpecialCoor\n') out += ('\\begin{pspicture}(0,0)(0,0)\n') out += ('\\psset{xunit=%scm}\n' %(inputdata.description['PlotSizeX'])) if inputdata.description['is2dim']: if inputdata.description.get('ColorSeries', '') != '': colorseries = inputdata.description['ColorSeries'] else: colorseries = '{hsb}{grad}[rgb]{0,0,1}{-.700,0,0}' out += ('\\definecolorseries{gradientcolors}%s\n' % colorseries) out += ('\\resetcolorseries[130]{gradientcolors}\n') return out def write_extended_pstricks(self): out = '' out += ('\\usepackage{pstricks-add}\n') out += ('\\makeatletter\n') out += ('\\def\\pshs@solid{0 setlinecap }\n') out += ('\\def\\pshs@dashed{[ \\psk@dash ] 0 setdash }\n') out += ('\\def\\pshs@dotted{[ 0 \\psk@dotsep CLW add ] 0 setdash 1 setlinecap }\n') out += ('\\def\\psset@hatchstyle#1{%\n') out += ('\\@ifundefined{pshs@#1}%\n') out += ('{\\@pstrickserr{Hatch style `#1\' not defined}\\@eha}%\n') out += ('{\\edef\\pshatchstyle{#1}}}\n') out += ('\\psset@hatchstyle{solid}\n') out += ('\\def\\pst@linefill{%\n') out += ('\\@nameuse{pshs@\\pshatchstyle}\n') out += ('\\psk@hatchangle rotate\n') out += ('\\psk@hatchwidth SLW\n') out += ('\\pst@usecolor\\pshatchcolor\n') out += ('\\psk@hatchsep\n') out += ('\\tx@LineFill}\n') out += ('\\pst@def{LineFill}<{%\n') out += ('gsave\n') out += (' abs CLW add\n') out += (' /a ED\n') out += (' a 0 dtransform\n') out += (' round exch round exch 2 copy idtransform\n') out += (' exch Atan rotate idtransform\n') out += (' pop\n') out += (' /a ED\n') out += (' .25 .25 itransform\n') out += (' pathbbox\n') out += (' /y2 ED\n') out += (' a Div ceiling cvi\n') out += (' /x2 ED\n') out += (' /y1 ED\n') out += (' a Div cvi\n') out += (' /x1 ED\n') out += (' /y2 y2 y1 sub def\n') out += (' clip\n') out += (' newpath\n') out += (' systemdict\n') out += (' /setstrokeadjust\n') out += (' known { true setstrokeadjust } if\n') out += (' x2 x1 sub 1 
add\n') out += (' { x1 a mul y1 moveto\n') out += (' 0 y2 rlineto\n') out += (' stroke\n') out += (' /x1 x1 1 add def } repeat\n') out += (' grestore\n') out += ('pop pop}>\n') out += ('\\makeatother\n') return out def write_footer(self): out = "" out += ('\\end{pspicture}\n') out += ('\\end{document}\n') return out class MainPlot(Plot): def __init__(self, inputdata): self.name = 'MainPlot' inputdata.description['PlotStage'] = 'MainPlot' self.set_normalization(inputdata) self.stack_histograms(inputdata) do_gof = inputdata.description.get('GofLegend', '0') == '1' or inputdata.description.get('GofFrame', '') != '' do_taylor = inputdata.description.get('TaylorPlot', '0') == '1' if do_gof and not do_taylor: self.calculate_gof(inputdata) self.set_histo_options(inputdata) self.set_borders(inputdata) self.yoffset = inputdata.description['PlotSizeY'] self.coors = Coordinates(inputdata) def draw(self, inputdata): out = "" out += ('\n%\n% MainPlot\n%\n') out += ('\\psset{yunit=%scm}\n' %(self.yoffset)) out += ('\\rput(0,-1){%\n') out += ('\\psset{yunit=%scm}\n' %(inputdata.description['PlotSizeY'])) out += self._draw(inputdata) out += ('}\n') return out def _draw(self, inputdata): out = "" frame = Frame(inputdata.description,self.name) out += frame.drawZeroLine(self.coors.phys2frameY(0)) out += frame.drawUnitLine(self.coors.phys2frameY(1)) if inputdata.attr_bool('DrawSpecialFirst', False): for s in inputdata.special.values(): out += s.draw(self.coors,inputdata) if inputdata.attr_bool('DrawFunctionFirst', False): for f in inputdata.functions.values(): out += f.draw(self.coors,inputdata) for i in inputdata.description['DrawOnly']: out += inputdata.histos[i].draw(self.coors) if not inputdata.attr_bool('DrawSpecialFirst', False): for s in inputdata.special.values(): out += s.draw(self.coors,inputdata) if not inputdata.attr_bool('DrawFunctionFirst', False): for f in inputdata.functions.values(): out += f.draw(self.coors,inputdata) if inputdata.attr_bool('Legend', False): legend =
Legend(inputdata.description,inputdata.histos,inputdata.functions,'Legend',1) out += legend.draw() for i in range(1,10): if inputdata.attr_bool('Legend'+str(i), False): legend = Legend(inputdata.description,inputdata.histos,inputdata.functions,'Legend'+str(i),i) out += legend.draw() if inputdata.description['is2dim']: colorscale = ColorScale(inputdata.description, self.coors) out += colorscale.draw() out += frame.draw() xcustommajortickmarks = inputdata.attr_int('XMajorTickMarks', -1) xcustomminortickmarks = inputdata.attr_int('XMinorTickMarks', -1) xcustommajorticks = [] xcustomminorticks = [] xbreaks=[] if inputdata.attr('XCustomMajorTicks'): x_label_pairs = inputdata.attr('XCustomMajorTicks').strip().split() #'\t') if len(x_label_pairs) % 2 == 0: for i in range(0, len(x_label_pairs), 2): xcustommajorticks.append({'Value': float(x_label_pairs[i]), 'Label': x_label_pairs[i+1]}) else: print("Warning: XCustomMajorTicks requires an even number of alternating pos/label entries") if inputdata.attr('XCustomMinorTicks'): xs = inputdata.attr('XCustomMinorTicks').strip().split() #'\t') xcustomminorticks = [{'Value': float(x)} for x in xs] if 'XBreaks' in inputdata.description and inputdata.description['XBreaks']!='': FOO=inputdata.description['XBreaks'].strip().split('\t') xbreaks = [{'Value': float(FOO[i])} for i in range(len(FOO))] xticks = XTicks(inputdata.description, self.coors) drawlabels = True if 'PlotTickLabels' in inputdata.description and inputdata.description['PlotTickLabels']=='0': drawlabels=False if 'RatioPlot' in inputdata.description and inputdata.description['RatioPlot']=='1': drawlabels=False for i in range(1,9): if ('RatioPlot'+str(i)) in inputdata.description and inputdata.description['RatioPlot'+str(i)]=='1': drawlabels=False out += xticks.draw(custommajortickmarks=xcustommajortickmarks, customminortickmarks=xcustomminortickmarks, custommajorticks=xcustommajorticks, customminorticks=xcustomminorticks, drawlabels=drawlabels, breaks=xbreaks) 
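The XCustomMajorTicks handling above consumes a whitespace-separated list of alternating position/label tokens and warns when the count is odd. A standalone sketch of that parsing (the helper name is hypothetical):

```python
def parse_custom_major_ticks(spec):
    """Split a 'pos label pos label ...' string into tick dicts.

    Standalone sketch of the XCustomMajorTicks parsing in _draw; raises
    instead of printing a warning, to keep the example self-contained.
    """
    tokens = spec.strip().split()
    if len(tokens) % 2 != 0:
        raise ValueError("XCustomMajorTicks requires an even number of "
                         "alternating pos/label entries")
    # Pair up (position, label) tokens two at a time
    return [{'Value': float(tokens[i]), 'Label': tokens[i+1]}
            for i in range(0, len(tokens), 2)]

ticks = parse_custom_major_ticks("0 low 0.5 mid 1 high")
```

The same shape is used for YCustomMajorTicks; minor ticks carry only a 'Value' key.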
ycustommajortickmarks = inputdata.attr_int('YMajorTickMarks', -1) ycustomminortickmarks = inputdata.attr_int('YMinorTickMarks', -1) ycustommajorticks = [] ycustomminorticks = [] ybreaks=[] if 'YCustomMajorTicks' in inputdata.description: y_label_pairs = inputdata.description['YCustomMajorTicks'].strip().split() #'\t') if len(y_label_pairs) % 2 == 0: for i in range(0, len(y_label_pairs), 2): ycustommajorticks.append({'Value': float(y_label_pairs[i]), 'Label': y_label_pairs[i+1]}) else: print("Warning: YCustomMajorTicks requires an even number of alternating pos/label entries") if inputdata.has_attr('YCustomMinorTicks'): ys = inputdata.attr('YCustomMinorTicks').strip().split() #'\t') ycustomminorticks = [{'Value': float(y)} for y in ys] yticks = YTicks(inputdata.description, self.coors) drawylabels = inputdata.attr_bool('PlotYTickLabels', True) if inputdata.description.get('YBreaks', '') != '': FOO=inputdata.description['YBreaks'].strip().split('\t') for i in range(len(FOO)): ybreaks.append({'Value': float(FOO[i])}) out += yticks.draw(custommajortickmarks=ycustommajortickmarks, customminortickmarks=ycustomminortickmarks, custommajorticks=ycustommajorticks, customminorticks=ycustomminorticks, drawlabels=drawylabels, breaks=ybreaks) labels = Labels(inputdata.description) if drawlabels: if inputdata.description['is2dim']: out += labels.draw(['Title','XLabel','YLabel','ZLabel']) else: out += labels.draw(['Title','XLabel','YLabel']) else: out += labels.draw(['Title','YLabel']) #if inputdata.attr_bool('RatioPlot', False): # out += labels.draw(['Title','YLabel']) #elif drawlabels: # if not inputdata.description['is2dim']: # out += labels.draw(['Title','XLabel','YLabel']) # else: # out += labels.draw(['Title','XLabel','YLabel','ZLabel']) return out def calculate_gof(self, inputdata): refdata = inputdata.description.get('GofReference') if refdata is None: refdata = inputdata.description.get('RatioPlotReference') if refdata is None: inputdata.description['GofLegend'] = '0' 
inputdata.description['GofFrame'] = '' return def pickcolor(gof): color = None colordefs = {} for i in inputdata.description.setdefault('GofFrameColor', '0:green 3:yellow 6:red!70').strip().split(): foo = i.split(':') if len(foo) != 2: continue colordefs[float(foo[0])] = foo[1] for i in sorted(colordefs.keys()): if gof>=i: color=colordefs[i] return color inputdata.description.setdefault('GofLegend', '0') inputdata.description.setdefault('GofFrame', '') inputdata.description.setdefault('FrameColor', None) for i in inputdata.description['DrawOnly']: if i == refdata: continue if inputdata.description['GofLegend']!='1' and i!=inputdata.description['GofFrame']: continue if inputdata.description.get('GofType', "chi2") != 'chi2': return gof = inputdata.histos[i].getChi2(inputdata.histos[refdata]) if i == inputdata.description['GofFrame'] and inputdata.description['FrameColor'] is None: inputdata.description['FrameColor'] = pickcolor(gof) if inputdata.histos[i].description.setdefault('Title', '') != '': inputdata.histos[i].description['Title'] += ', ' inputdata.histos[i].description['Title'] += '$\\chi^2/n=%1.2f$' % gof class TaylorPlot(Plot): def __init__(self, inputdata): self.refdata = inputdata.description['TaylorPlotReference'] self.calculate_taylorcoordinates(inputdata) def calculate_taylorcoordinates(self,inputdata): foo = inputdata.description['DrawOnly'].pop(inputdata.description['DrawOnly'].index(self.refdata)) inputdata.description['DrawOnly'].append(foo) for i in inputdata.description['DrawOnly']: print(i) print('meanbinval = ', inputdata.histos[i].getMeanBinValue()) print('sigmabinval = ', inputdata.histos[i].getSigmaBinValue()) print('chi2/nbins = ', inputdata.histos[i].getChi2(inputdata.histos[self.refdata])) print('correlation = ', inputdata.histos[i].getCorrelation(inputdata.histos[self.refdata])) print('distance = ', inputdata.histos[i].getRMSdistance(inputdata.histos[self.refdata])) class RatioPlot(Plot): def __init__(self, inputdata, i): self.number=i
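The pickcolor closure in calculate_gof walks a "threshold:colour" spec in ascending threshold order, keeping the colour of the largest threshold not exceeding the χ²/n value. A standalone sketch of the same lookup (the function name here is hypothetical):

```python
def pick_gof_color(gof, spec="0:green 3:yellow 6:red!70"):
    """Return the colour of the largest threshold <= gof, as pickcolor does.

    spec uses the GofFrameColor format; returns None if gof is below all
    thresholds or the spec is empty/malformed.
    """
    color = None
    thresholds = {}
    for item in spec.strip().split():
        parts = item.split(':')
        if len(parts) != 2:
            continue  # silently skip malformed entries, like the original
        thresholds[float(parts[0])] = parts[1]
    # Ascending walk: later (larger) satisfied thresholds overwrite earlier ones
    for t in sorted(thresholds):
        if gof >= t:
            color = thresholds[t]
    return color
```

With the default spec, a χ²/n of 4.2 lands in the "yellow" band, while anything at or above 6 turns the frame "red!70".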
        self.name = 'RatioPlot' + str(i)
        if i == 0:
            self.name = 'RatioPlot'
        # initialise histograms even when no main plot
        self.set_normalization(inputdata)
        self.refdata = inputdata.description[self.name + 'Reference']
        if self.refdata not in inputdata.histos:
            print('ERROR: %sReference=%s not found in:' % (self.name, self.refdata))
            for i in inputdata.histos.keys():
                print('  ', i)
            sys.exit(1)
        inputdata.description.setdefault('RatioPlotYOffset', inputdata.description['PlotSizeY'])
        inputdata.description.setdefault(self.name + 'SameStyle', '1')
        self.yoffset = inputdata.description['RatioPlotYOffset'] + inputdata.description[self.name + 'SizeY']
        inputdata.description['PlotStage'] = self.name
        inputdata.description['RatioPlotYOffset'] = self.yoffset
        inputdata.description['PlotSizeY'] = inputdata.description[self.name + 'SizeY']
        inputdata.description['LogY'] = inputdata.description.get(self.name + "LogY", False)

        # TODO: It'd be nice if this wasn't so MC-specific
        rpmode = inputdata.description.get(self.name + 'Mode', "mcdata")
        if rpmode == 'deviation':
            inputdata.description['YLabel'] = '$(\\text{MC}-\\text{data})$'
            inputdata.description['YMin'] = -2.99
            inputdata.description['YMax'] = 2.99
        elif rpmode == 'delta':
            inputdata.description['YLabel'] = '$\\delta$'
            inputdata.description['YMin'] = -0.5
            inputdata.description['YMax'] = 0.5
        elif rpmode == 'deltapercent':
            inputdata.description['YLabel'] = '$\\delta\\;[\\%]$'
            inputdata.description['YMin'] = -50.
            inputdata.description['YMax'] = 50.
        elif rpmode == 'datamc':
            inputdata.description['YLabel'] = 'Data/MC'
            inputdata.description['YMin'] = 0.5
            inputdata.description['YMax'] = 1.5
        else:
            inputdata.description['YLabel'] = 'MC/Data'
            inputdata.description['YMin'] = 0.5
            inputdata.description['YMax'] = 1.5
        if (self.name + 'YLabel') in inputdata.description:
            inputdata.description['YLabel'] = inputdata.description[self.name + 'YLabel']
        inputdata.description['YLabel'] = '\\rput(-%s,0){%s}' % (0.5*inputdata.description['PlotSizeY']/inputdata.description['PlotSizeX'], inputdata.description['YLabel'])
        if (self.name + 'YMin') in inputdata.description:
            inputdata.description['YMin'] = inputdata.description[self.name + 'YMin']
        if (self.name + 'YMax') in inputdata.description:
            inputdata.description['YMax'] = inputdata.description[self.name + 'YMax']
        if (self.name + 'YLabelSep') in inputdata.description:
            inputdata.description['YLabelSep'] = inputdata.description[self.name + 'YLabelSep']
        if (self.name + 'ErrorBandColor') not in inputdata.description:
            inputdata.description[self.name + 'ErrorBandColor'] = 'yellow'
        if inputdata.description.get(self.name + 'SameStyle', '0') == '0':
            inputdata.histos[self.refdata].description['ErrorBandColor'] = inputdata.description[self.name + 'ErrorBandColor']
            inputdata.histos[self.refdata].description['ErrorBands'] = '1'
            inputdata.histos[self.refdata].description['ErrorBars'] = '0'
            inputdata.histos[self.refdata].description['ErrorTubes'] = '0'
            inputdata.histos[self.refdata].description['LineStyle'] = 'solid'
            inputdata.histos[self.refdata].description['LineColor'] = 'black'
            inputdata.histos[self.refdata].description['LineWidth'] = '0.3pt'
            inputdata.histos[self.refdata].description['PolyMarker'] = ''
            inputdata.histos[self.refdata].description['ConnectGaps'] = '1'
        self.calculate_ratios(inputdata)
        self.set_borders(inputdata)
        self.coors = Coordinates(inputdata)

    def draw(self, inputdata):
        out = ""
        out += ('\n%\n% RatioPlot\n%\n')
        out += ('\\psset{yunit=%scm}\n' % (self.yoffset))
        out += ('\\rput(0,-1){%\n')
        out += ('\\psset{yunit=%scm}\n' % (inputdata.description['PlotSizeY']))
        out += self._draw(inputdata)
        out += ('}\n')
        return out

    def calculate_ratios(self, inputdata):
        inputdata.ratiohistos = {}
        inputdata.ratiohistos = copy.deepcopy(inputdata.histos)
        foo = inputdata.description[self.name+'DrawOnly'].pop(inputdata.description[self.name+'DrawOnly'].index(self.refdata))
        if inputdata.description.get(self.name+'DrawReferenceFirst', '') != '0':
            if inputdata.histos[self.refdata].description.get('ErrorBands', '') == '1':
                inputdata.description[self.name+'DrawOnly'].insert(0, foo)
            else:
                inputdata.description[self.name+'DrawOnly'].append(foo)
        else:
            inputdata.description[self.name+'DrawOnly'].append(foo)
        rpmode = inputdata.description.get(self.name+'Mode', 'mcdata')
        for i in inputdata.description[self.name+'DrawOnly']:
            if i != self.refdata:
                if rpmode == 'deviation':
                    inputdata.ratiohistos[i].deviation(inputdata.ratiohistos[self.refdata])
                elif rpmode == 'delta':
                    inputdata.ratiohistos[i].delta(inputdata.ratiohistos[self.refdata])
                elif rpmode == 'deltapercent':
                    inputdata.ratiohistos[i].deltapercent(inputdata.ratiohistos[self.refdata])
                elif rpmode == 'datamc':
                    inputdata.ratiohistos[i].dividereverse(inputdata.ratiohistos[self.refdata])
                    inputdata.ratiohistos[i].description['ErrorBars'] = '1'
                else:
                    inputdata.ratiohistos[i].divide(inputdata.ratiohistos[self.refdata])
        if rpmode == 'deviation':
            inputdata.ratiohistos[self.refdata].deviation(inputdata.ratiohistos[self.refdata])
        elif rpmode == 'delta':
            inputdata.ratiohistos[self.refdata].delta(inputdata.ratiohistos[self.refdata])
        elif rpmode == 'deltapercent':
            inputdata.ratiohistos[self.refdata].deltapercent(inputdata.ratiohistos[self.refdata])
        elif rpmode == 'datamc':
            inputdata.ratiohistos[self.refdata].dividereverse(inputdata.ratiohistos[self.refdata])
        else:
            inputdata.ratiohistos[self.refdata].divide(inputdata.ratiohistos[self.refdata])

    def _draw(self, inputdata):
        out = ""
        frame = Frame(inputdata.description, self.name)
        out += frame.drawZeroLine(self.coors.phys2frameY(0))
        out += frame.drawUnitLine(self.coors.phys2frameY(1))
        if inputdata.description.get('DrawSpecialFirst', '') == '1':
            for i in inputdata.special.keys():
                out += inputdata.special[i].draw(self.coors, inputdata)
        if inputdata.description.get('DrawFunctionFirst', '') == '1':
            for i in inputdata.functions.keys():
                out += inputdata.functions[i].draw(self.coors, inputdata)
        for i in inputdata.description[self.name+'DrawOnly']:
            if inputdata.description.get(self.name+'Mode', '') == 'datamc':
                if i != self.refdata:
                    out += inputdata.ratiohistos[i].draw(self.coors)
            else:
                out += inputdata.ratiohistos[i].draw(self.coors)
        if inputdata.description.get('DrawFunctionFirst', '0') == '0':
            for i in inputdata.functions.keys():
                out += inputdata.functions[i].draw(self.coors, inputdata)
        if inputdata.description.get('DrawSpecialFirst', '0') == '0':
            for i in inputdata.special.keys():
                out += inputdata.special[i].draw(self.coors, inputdata)
        out += frame.draw()

        # TODO: so much duplication with MainPlot... yuck!
        if inputdata.description.get('XMajorTickMarks', '') != '':
            xcustommajortickmarks = int(inputdata.description['XMajorTickMarks'])
        else:
            xcustommajortickmarks = -1
        if inputdata.description.get('XMinorTickMarks', '') != '':
            xcustomminortickmarks = int(inputdata.description['XMinorTickMarks'])
        else:
            xcustomminortickmarks = -1
        xcustommajorticks = []
        xcustomminorticks = []
        if inputdata.description.get('XCustomMajorTicks', '') != '':
            FOO = inputdata.description['XCustomMajorTicks'].strip().split()  #'\t'
            if not len(FOO) % 2:
                for i in range(0, len(FOO), 2):
                    xcustommajorticks.append({'Value': float(FOO[i]), 'Label': FOO[i+1]})
        if inputdata.description.get('XCustomMinorTicks', '') != '':
            FOO = inputdata.description['XCustomMinorTicks'].strip().split()  #'\t'
            for i in range(len(FOO)):
                xcustomminorticks.append({'Value': float(FOO[i])})
        xticks = XTicks(inputdata.description, self.coors)

        # find out whether to draw title (only if no MainPlot and top RatioPlot)
        drawtitle = False
        if inputdata.description.get('MainPlot', '') == '0':
            drawtitle = True
            for i in range(0, self.number):
                if i == 0:
                    if inputdata.description.get('RatioPlot', '') == '1':
                        drawtitle = False
                else:
                    if inputdata.description.get('RatioPlot'+str(i), '') == '1':
                        drawtitle = False
        # find out whether to draw xlabels (only if lowest RatioPlot)
        drawlabels = True
        if inputdata.description.get(self.name+'TickLabels', '') == '0':
            drawlabels = False
        for i in range(self.number+1, 10):
            if inputdata.description.get('RatioPlot'+str(i), '') == '1':
                drawlabels = False
        out += xticks.draw(custommajortickmarks=xcustommajortickmarks,
                           customminortickmarks=xcustomminortickmarks,
                           custommajorticks=xcustommajorticks,
                           customminorticks=xcustomminorticks,
                           drawlabels=drawlabels)

        ycustommajortickmarks = inputdata.attr(self.name + 'YMajorTickMarks', '')
        ycustommajortickmarks = int(ycustommajortickmarks) if ycustommajortickmarks else -1
        ycustomminortickmarks = inputdata.attr(self.name + 'YMinorTickMarks', '')
        ycustomminortickmarks = int(ycustomminortickmarks) if ycustomminortickmarks else -1
        ycustommajorticks = []
        if self.name + 'YCustomMajorTicks' in inputdata.description:
            tickstr = inputdata.description[self.name + 'YCustomMajorTicks'].strip().split()  #'\t'
            if not len(tickstr) % 2:
                for i in range(0, len(tickstr), 2):
                    ycustommajorticks.append({'Value': float(tickstr[i]), 'Label': tickstr[i+1]})
        ycustomminorticks = []
        if self.name + 'YCustomMinorTicks' in inputdata.description:
            tickstr = inputdata.description[self.name + 'YCustomMinorTicks'].strip().split()  #'\t'
            for i in range(len(tickstr)):
                ycustomminorticks.append({'Value': float(tickstr[i])})
        yticks = YTicks(inputdata.description, self.coors)
        out += yticks.draw(custommajortickmarks=ycustommajortickmarks,
                           customminortickmarks=ycustomminortickmarks,
                           custommajorticks=ycustommajorticks,
                           customminorticks=ycustomminorticks)

        if inputdata.attr_bool(self.name+'Legend', False):
            legend = Legend(inputdata.description, inputdata.histos, inputdata.functions, self.name+'Legend', 1)
            out += legend.draw()
        for i in range(1, 10):
            if inputdata.attr_bool(self.name+'Legend'+str(i), False):
                legend = Legend(inputdata.description, inputdata.histos, inputdata.functions, self.name+'Legend'+str(i), i)
                out += legend.draw()
        labels = Labels(inputdata.description)
        if drawtitle:
            if drawlabels:
                out += labels.draw(['Title', 'XLabel', 'YLabel'])
            else:
                out += labels.draw(['Title', 'YLabel'])
        else:
            if drawlabels:
                out += labels.draw(['XLabel', 'YLabel'])
            else:
                out += labels.draw(['YLabel'])
        return out


class Legend(Described):

    def __init__(self, description, histos, functions, name, number):
        self.name = name
        self.number = number
        self.histos = histos
        self.functions = functions
        self.description = description

    def draw(self):
        legendordermap = {}
        legendlist = self.description['DrawOnly'] + list(self.functions.keys())
        if self.name + 'Only' in self.description:
            legendlist = []
            for legend in self.description[self.name+'Only'].strip().split():
                if legend in self.histos or legend in self.functions:
                    legendlist.append(legend)
        for legend in legendlist:
            order = 0
            if legend in self.histos and 'LegendOrder' in self.histos[legend].description:
                order = int(self.histos[legend].description['LegendOrder'])
            if legend in self.functions and 'LegendOrder' in self.functions[legend].description:
                order = int(self.functions[legend].description['LegendOrder'])
            if not order in legendordermap:
                legendordermap[order] = []
            legendordermap[order].append(legend)
        orderedlegendlist = []
        for i in sorted(legendordermap.keys()):
            orderedlegendlist.extend(legendordermap[i])
        if self.description['is2dim']:
            return self.draw_2dlegend(orderedlegendlist)

        out = ""
        out += '\n%\n% Legend\n%\n'
        out += '\\rput[tr](%s,%s){%%\n' % (self.getLegendXPos(), self.getLegendYPos())
        ypos = -0.05*6/self.description['PlotSizeY']
        if (self.name+'Title') in self.description:
            for i in self.description[self.name+'Title'].strip().split('\\\\'):
                out += '\\rput[Bl](0.,' + str(ypos) + '){' + i + '}\n'
                ypos -= 0.075*6/self.description['PlotSizeY']
        offset = float(self.description.get(self.name+'EntryOffset', 0))
        separation = float(self.description.get(self.name+'EntrySeparation', 0))
        hline = self.description.get(self.name+'HorizontalLine', '1') != '0'
        vline = self.description.get(self.name+'VerticalLine', '1') != '0'
        rel_xpos_sign = 1.0
        if self.getLegendAlign() == 'r':
            rel_xpos_sign = -1.0
        xwidth = self.getLegendIconWidth()
        xpos1 = -0.02*rel_xpos_sign - 0.08*xwidth*rel_xpos_sign
        xpos2 = -0.02*rel_xpos_sign
        xposc = -0.02*rel_xpos_sign - 0.04*xwidth*rel_xpos_sign
        xpostext = 0.1*rel_xpos_sign
        for i in orderedlegendlist:
            if i in self.histos:
                drawobject = self.histos[i]
            elif i in self.functions:
                drawobject = self.functions[i]
            else:
                continue
            title = drawobject.getTitle()
            mopts = pat_options.search(drawobject.path)
            if mopts and not self.description.get("RemoveOptions", 0):
                opts = list(mopts.groups())[0].lstrip(':').split(":")
                for opt in opts:
                    if opt in self.description['_OptSubs']:
                        title += ' %s' % self.description['_OptSubs'][opt]
                    else:
                        title += ' [%s]' % opt
            if title == '':
                continue
            else:
                titlelines = []
                for i in title.strip().split('\\\\'):
                    titlelines.append(i)
                ypos -= 0.075*6/self.description['PlotSizeY']*separation
                boxtop = 0.045*(6./self.description['PlotSizeY'])
                boxbottom = 0.
                lineheight = 0.5*(boxtop - boxbottom)
                out += ('\\rput[B%s](%s,%s){%s\n' % (self.getLegendAlign(), rel_xpos_sign*0.1, ypos, '%'))
                if drawobject.getErrorBands():
                    out += ('\\psframe[linewidth=0pt,linestyle=none,fillstyle=%s,fillcolor=%s,opacity=%s,hatchcolor=%s]'
                            % (drawobject.getErrorBandStyle(), drawobject.getErrorBandColor(), drawobject.getErrorBandOpacity(), drawobject.getHatchColor()))
                    out += ('(%s, %s)(%s, %s)\n' % (xpos1, boxtop, xpos2, boxbottom))
                # set psline options for all lines to be drawn next
                lineopts = ('linestyle=' + drawobject.getLineStyle()
                            + ', linecolor=' + drawobject.getLineColor()
                            + ', linewidth=' + drawobject.getLineWidth()
                            + ', strokeopacity=' + drawobject.getLineOpacity()
                            + ', opacity=' + drawobject.getFillOpacity())
                if drawobject.getLineDash() != '':
                    lineopts += (', dash=' + drawobject.getLineDash())
                if drawobject.getFillStyle() != 'none':
                    lineopts += (', fillstyle=' + drawobject.getFillStyle()
                                 + ', fillcolor=' + drawobject.getFillColor()
                                 + ', hatchcolor=' + drawobject.getHatchColor())
                # options set -> lineopts
                if drawobject.getErrorBars() and vline:
                    out += (' \\psline[' + lineopts + '](%s, %s)(%s, %s)\n') % (xposc, boxtop, xposc, boxbottom)
                if drawobject.getErrorTubes():
                    tubeopts = ('linestyle=' + drawobject.getErrorTubeLineStyle()
                                + ', linecolor=' + drawobject.getErrorTubeLineColor()
                                + ', linewidth=' + drawobject.getErrorTubeLineWidth()
                                + ', strokeopacity=' + drawobject.getErrorTubeLineOpacity()
                                + ', opacity=' + drawobject.getFillOpacity())
                    out += (' \\psline[' + tubeopts + '](%s, %s)(%s, %s)\n') % (xpos1, boxtop, xpos2, boxtop)
                    out += (' \\psline[' + tubeopts + '](%s, %s)(%s, %s)\n') % (xpos1, boxbottom, xpos2, boxbottom)
                if hline:
                    out += (' \\psline[' + lineopts)
                    if drawobject.getFillStyle() != 'none':
                        out += (']{C-C}(%s, %s)(%s, %s)(%s, %s)(%s, %s)(%s, %s)\n'
                                % (xpos1, boxtop, xpos2, boxtop, xpos2, boxbottom, xpos1, boxbottom, xpos1, boxtop))
                    else:
                        out += ('](%s, %s)(%s, %s)\n' % (xpos1, lineheight, xpos2, lineheight))
                if drawobject.getPolyMarker() != '':
                    out += (' \\psdot[dotstyle=' + drawobject.getPolyMarker()
                            + ', dotsize=' + drawobject.getDotSize()
                            + ', dotscale=' + drawobject.getDotScale()
                            + ', linecolor=' + drawobject.getLineColor()
                            + ', linewidth=' + drawobject.getLineWidth()
                            + ', linestyle=' + drawobject.getLineStyle()
                            + ', fillstyle=' + drawobject.getFillStyle()
                            + ', fillcolor=' + drawobject.getFillColor()
                            + ', strokeopacity=' + drawobject.getLineOpacity()
                            + ', opacity=' + drawobject.getFillOpacity()
                            + ', hatchcolor=' + drawobject.getHatchColor())
                    if drawobject.getFillStyle() != 'none':
                        out += ('](%s, %s)\n' % (xposc, 0.95*boxtop))
                    else:
                        out += ('](%s, %s)\n' % (xposc, lineheight))
                out += ('}\n')
                ypos -= 0.075*6/self.description['PlotSizeY']*offset
                for i in titlelines:
                    out += ('\\rput[B%s](%s,%s){%s}\n' % (self.getLegendAlign(), xpostext, ypos, i))
                    ypos -= 0.075*6/self.description['PlotSizeY']
        if 'CustomLegend' in self.description:
            for i in self.description['CustomLegend'].strip().split('\\\\'):
                out += ('\\rput[B%s](%s,%s){%s}\n' % (self.getLegendAlign(), xpostext, ypos, i))
                ypos -= 0.075*6/self.description['PlotSizeY']
        out += ('}\n')
        return out

    def draw_2dlegend(self, orderedlegendlist):
        histos = ""
        for i in range(0, len(orderedlegendlist)):
            if orderedlegendlist[i] in self.histos:
                drawobject = self.histos[orderedlegendlist[i]]
            elif orderedlegendlist[i] in self.functions:
                drawobject = self.functions[orderedlegendlist[i]]
            else:
                continue
            title = drawobject.getTitle()
            if title == '':
                continue
            else:
                histos += title.strip().split('\\\\')[0]
                if not i == len(orderedlegendlist)-1:
                    histos += ', '
        out = '\\rput(1,1){\\rput[rB](0, 1.7\\labelsep){\\normalsize ' + histos + '}}\n'
        return out

    def getLegendXPos(self):
        return self.description.get(self.name+'XPos', '0.95' if self.getLegendAlign() == 'r' else '0.53')

    def getLegendYPos(self):
        return self.description.get(self.name+'YPos', '0.93')

    def getLegendAlign(self):
        return self.description.get(self.name+'Align', 'l')

    def getLegendIconWidth(self):
        return float(self.description.get(self.name+'IconWidth', '1.0'))


class PlotFunction:

    def __init__(self, f):
        self.description = {}
        self.read_input(f)

    def read_input(self, f):
        self.code = 'def histomangler(x):\n'
        iscode = False
        for line in f:
            if is_end_marker(line, 'HISTOGRAMMANGLER'):
                break
            elif is_comment(line):
                continue
            else:
                m = pat_property.match(line)
                if iscode:
                    self.code += '    ' + line
                elif m:
                    prop, value = m.group(1, 2)
                    if prop == 'Code':
                        iscode = True
                    else:
                        self.description[prop] = value
        if not iscode:
            print('++++++++++ ERROR: No code in function')
        else:
            foo = compile(self.code, '', 'exec')
            # exec into an explicit namespace so the generated function
            # can be retrieved reliably under Python 3
            ns = {}
            exec(foo, ns)
            self.histomangler = ns['histomangler']

    def transform(self, x):
        return self.histomangler(x)


class ColorScale(Described):

    def __init__(self, description, coors):
        self.description = description
        self.coors = coors

    def draw(self):
        out = ''
        out += '\n%\n% ColorScale\n%\n'
        out += '\\rput(1,0){\n'
        out += '  \\psset{xunit=4mm}\n'
        out += '  \\rput(0.5,0){\n'
        out += '    \\psset{yunit=0.0076923, linestyle=none, fillstyle=solid}\n'
        out += '    \\multido{\\ic=0+1,\\id=1+1}{130}{\n'
        out += '      \\psframe[fillcolor={gradientcolors!![\\ic]},dimen=inner,linewidth=0.1pt](0, \\ic)(1, \\id)\n'
        out += '    }\n'
        out += '  }\n'
        out += '  \\rput(0.5,0){\n'
        out += '    \\psframe[linewidth=0.3pt,dimen=middle](0,0)(1,1)\n'
        zcustommajortickmarks = self.attr_int('ZMajorTickMarks', -1)
        zcustomminortickmarks = self.attr_int('ZMinorTickMarks', -1)
        zcustommajorticks = []
        zcustomminorticks = []
        if self.attr('ZCustomMajorTicks'):
            zcustommajorticks = []
            z_label_pairs = self.attr('ZCustomMajorTicks').strip().split()  #'\t'
            if len(z_label_pairs) % 2 == 0:
                for i in range(0, len(z_label_pairs), 2):
                    zcustommajorticks.append({'Value': float(z_label_pairs[i]), 'Label': z_label_pairs[i+1]})
            else:
                print("Warning: ZCustomMajorTicks requires an even number of alternating pos/label entries")
        if self.attr('ZCustomMinorTicks'):
            zs = self.attr('ZCustomMinorTicks').strip().split()  #'\t'
            zcustomminorticks = [{'Value': float(z)} for z in zs]
        drawzlabels = self.attr_bool('PlotZTickLabels', True)
        zticks = ZTicks(self.description, self.coors)
        out += zticks.draw(custommajortickmarks=zcustommajortickmarks,
                           customminortickmarks=zcustomminortickmarks,
                           custommajorticks=zcustommajorticks,
                           customminorticks=zcustomminorticks,
                           drawlabels=drawzlabels)
        out += '  }\n'
        out += '}\n'
        return out


class Labels(Described):

    def __init__(self, description):
        self.description = description

    def draw(self, axis=[]):
        out = ""
        out += ('\n%\n% Labels\n%\n')
        if 'Title' in self.description and (axis.count('Title') or axis == []):
            out += ('\\rput(0,1){\\rput[lB](0, 1.7\\labelsep){\\normalsize ' + self.description['Title'] + '}}\n')
        if 'XLabel' in self.description and (axis.count('XLabel') or axis == []):
            xlabelsep = 4.7
            if 'XLabelSep' in self.description:
                xlabelsep = float(self.description['XLabelSep'])
            out += ('\\rput(1,0){\\rput[rB](0,-%4.3f\\labelsep){\\normalsize ' % (xlabelsep) + self.description['XLabel'] + '}}\n')
        if 'YLabel' in self.description and (axis.count('YLabel') or axis == []):
            ylabelsep = 6.5
            if 'YLabelSep' in self.description:
                ylabelsep = float(self.description['YLabelSep'])
            out += ('\\rput(0,1){\\rput[rB]{90}(-%4.3f\\labelsep,0){\\normalsize ' % (ylabelsep) + self.description['YLabel'] + '}}\n')
        if 'ZLabel' in self.description and (axis.count('ZLabel') or axis == []):
            zlabelsep = 6.5
            if 'ZLabelSep' in self.description:
                zlabelsep = float(self.description['ZLabelSep'])
            out += ('\\rput(1,1){\\rput(%4.3f\\labelsep,0){\\psset{xunit=4mm}\\rput[lB]{270}(1.5,0){\\normalsize ' % (zlabelsep) + self.description['ZLabel'] + '}}}\n')
        return out


class Special(Described):

    def __init__(self, f):
        self.description = {}
        self.data = []
        self.read_input(f)
        if 'Location' not in self.description:
            self.description['Location'] = 'MainPlot'
        self.description['Location'] = self.description['Location'].split('\t')

    def read_input(self, f):
        for line in f:
            if is_end_marker(line, 'SPECIAL'):
                break
            elif is_comment(line):
                continue
            else:
                line = line.rstrip()
                m = pat_property.match(line)
                if m:
                    prop, value = m.group(1, 2)
                    self.description[prop] = value
                else:
                    self.data.append(line)

    def draw(self, coors, inputdata):
        drawme = False
        for i in self.description['Location']:
            if i in inputdata.description['PlotStage']:
                drawme = True
                break
        if not drawme:
            return ""
        out = ""
        out += ('\n%\n% Special\n%\n')
        import re
        regex = re.compile(r'^(.*?)(\\physics[xy]?coor)\(\s?([0-9\.eE+-]+)\s?,\s?([0-9\.eE+-]+)\s?\)(.*)')
        # TODO: More precise number string matching, something like this:
        # num = r"-?[0-9]*(?:\.[0-9]*)?(?:[eE][+-]?\d+)?"
        # regex = re.compile(r'^(.*?)(\\physics[xy]?coor)\(\s?(' + num + r')\s?,\s?(' + num + r')\s?\)(.*)')
        for l in self.data:
            while regex.search(l):
                match = regex.search(l)
                xcoor, ycoor = float(match.group(3)), float(match.group(4))
                if match.group(2)[1:] in ["physicscoor", "physicsxcoor"]:
                    xcoor = coors.phys2frameX(xcoor)
                if match.group(2)[1:] in ["physicscoor", "physicsycoor"]:
                    ycoor = coors.phys2frameY(ycoor)
                line = "%s(%f, %f)%s" % (match.group(1), xcoor, ycoor, match.group(5))
                l = line
            out += l + "\n"
        return out


class DrawableObject(Described):

    def __init__(self, f):
        pass

    def getName(self):
        return self.description.get("Name", "")

    def getTitle(self):
        return self.description.get("Title", "")

    def getLineStyle(self):
        if 'LineStyle' in self.description:
            ## I normally like there to be "only one way to do it", but providing
            ## this dashdotted/dotdashed synonym just seems humane ;-)
            if self.description['LineStyle'] in ('dashdotted', 'dotdashed'):
                self.description['LineStyle'] = 'dashed'
                self.description['LineDash'] = '3pt 3pt .8pt 3pt'
            return self.description['LineStyle']
        else:
            return 'solid'

    def getLineDash(self):
        if 'LineDash' in self.description:
            # Check if LineStyle=='dashdotted' before returning something
            self.getLineStyle()
            return self.description['LineDash']
        else:
            return ''

    def getLineWidth(self):
        return self.description.get("LineWidth", "0.8pt")

    def getLineColor(self):
        return self.description.get("LineColor", "black")

    def getLineOpacity(self):
        return self.description.get("LineOpacity", "1.0")

    def getFillColor(self):
        return self.description.get("FillColor", "white")

    def getFillOpacity(self):
        return self.description.get("FillOpacity", "1.0")

    def getHatchColor(self):
        return self.description.get("HatchColor", self.getErrorBandColor())

    def getFillStyle(self):
        return self.description.get("FillStyle", "none")

    def getPolyMarker(self):
        return self.description.get("PolyMarker", "")

    def getDotSize(self):
        return self.description.get("DotSize", "2pt 2")

    def getDotScale(self):
        return self.description.get("DotScale", "1")

    def getErrorBars(self):
        return bool(int(self.description.get("ErrorBars", "0")))

    def getErrorBands(self):
        return bool(int(self.description.get("ErrorBands", "0")))

    def getErrorBandColor(self):
        return self.description.get("ErrorBandColor", "yellow")

    def getErrorBandStyle(self):
        return self.description.get("ErrorBandStyle", "solid")

    def getErrorBandOpacity(self):
        return self.description.get("ErrorBandOpacity", "1.0")

    def getErrorTubes(self):
        if 'ErrorTubes' in self.description:
            return bool(int(self.description['ErrorTubes']))
        else:
            return False

    def getErrorTubeLineStyle(self):
        if 'ErrorTubeLineStyle' in self.description:
            if self.description['ErrorTubeLineStyle'] in ('dashdotted', 'dotdashed'):
                self.description['ErrorTubeLineStyle'] = 'dashed'
                self.description['ErrorTubeLineDash'] = '3pt 3pt .8pt 3pt'
            return self.description['ErrorTubeLineStyle']
        else:
            return self.getLineStyle()

    def getErrorTubeLineColor(self):
        return self.description.get('ErrorTubeLineColor', self.getLineColor())

    def getErrorTubeLineDash(self):
        return self.description.get('ErrorTubeLineDash', self.getLineDash())

    def getErrorTubeLineWidth(self):
        return self.description.get('ErrorTubeLineWidth', '0.3pt')

    def getErrorTubeLineOpacity(self):
        return self.description.get('ErrorTubeLineOpacity', self.getLineOpacity())

    def getSmoothLine(self):
        return bool(int(self.description.get("SmoothLine", "0")))

    def startclip(self):
        return '\\psclip{\\psframe[linewidth=0, linestyle=none](0,0)(1,1)}\n'

    def stopclip(self):
        return '\\endpsclip\n'

    def startpsset(self):
        out = ""
        out += ('\\psset{linecolor=' + self.getLineColor() + '}\n')
        out += ('\\psset{linewidth=' + self.getLineWidth() + '}\n')
        out += ('\\psset{linestyle=' + self.getLineStyle() + '}\n')
        out += ('\\psset{fillstyle=' + self.getFillStyle() + '}\n')
        out += ('\\psset{fillcolor=' + self.getFillColor() + '}\n')
        out += ('\\psset{hatchcolor=' + self.getHatchColor() + '}\n')
        out += ('\\psset{strokeopacity=' + self.getLineOpacity() + '}\n')
        out += ('\\psset{opacity=' + self.getFillOpacity() + '}\n')
        if self.getLineDash() != '':
            out += ('\\psset{dash=' + self.getLineDash() + '}\n')
        return out

    def stoppsset(self):
        out = ""
        out += ('\\psset{linecolor=black}\n')
        out += ('\\psset{linewidth=0.8pt}\n')
        out += ('\\psset{linestyle=solid}\n')
        out += ('\\psset{fillstyle=none}\n')
        out += ('\\psset{fillcolor=white}\n')
        out += ('\\psset{hatchcolor=black}\n')
        out += ('\\psset{strokeopacity=1.0}\n')
        out += ('\\psset{opacity=1.0}\n')
        return out


class Function(DrawableObject):

    def __init__(self, f):
        self.description = {}
        self.read_input(f)
        if 'Location' not in self.description:
            self.description['Location'] = 'MainPlot'
        self.description['Location'] = self.description['Location'].split('\t')

    def read_input(self, f):
        self.code = 'def plotfunction(x):\n'
        iscode = False
        for line in f:
            if is_end_marker(line, 'FUNCTION'):
                break
            elif is_comment(line):
                continue
            else:
                m = pat_property.match(line)
                if iscode:
                    self.code += '    ' + line
                elif m:
                    prop, value = m.group(1, 2)
                    if prop == 'Code':
                        iscode = True
                    else:
                        self.description[prop] = value
        if not iscode:
            print('++++++++++ ERROR: No code in function')
        else:
            foo = compile(self.code, '', 'exec')
            # exec into an explicit namespace so the generated function
            # can be retrieved reliably under Python 3
            ns = {}
            exec(foo, ns)
            self.plotfunction = ns['plotfunction']

    def draw(self, coors, inputdata):
        drawme = False
        for i in self.description['Location']:
            if i in inputdata.description['PlotStage']:
                drawme = True
                break
        if not drawme:
            return ""
        out = ""
        out += self.startclip()
        out += self.startpsset()
        xmin = coors.xmin()
        if self.description.get('XMin'):
            xmin = float(self.description['XMin'])
        if self.description.get('FunctionXMin'):
            xmin = max(xmin, float(self.description['FunctionXMin']))
        xmax = coors.xmax()
        if self.description.get('XMax'):
            xmax = float(self.description['XMax'])
        if self.description.get('FunctionXMax'):
            xmax = min(xmax, float(self.description['FunctionXMax']))
        xmin = min(xmin, xmax)
        xmax = max(xmin, xmax)
        # TODO: Space sample points logarithmically if LogX=1
        xsteps = 500.
        if self.description.get('XSteps'):
            xsteps = float(self.description['XSteps'])
        dx = (xmax - xmin)/xsteps
        x = xmin - dx
        funcstyle = '\\pscurve'
        if self.description.get('FunctionStyle'):
            if self.description['FunctionStyle'] == 'curve':
                funcstyle = '\\pscurve'
            elif self.description['FunctionStyle'] == 'line':
                funcstyle = '\\psline'
        out += funcstyle
        if 'FillStyle' in self.description and self.description['FillStyle'] != 'none':
            out += '(%s,%s)\n' % (coors.strphys2frameX(xmin), coors.strphys2frameY(coors.ymin()))
        while x < (xmax + 2*dx):
            y = self.plotfunction(x)
            out += ('(%s,%s)\n' % (coors.strphys2frameX(x), coors.strphys2frameY(y)))
            x += dx
        if 'FillStyle' in self.description and self.description['FillStyle'] != 'none':
            out += '(%s,%s)\n' % (coors.strphys2frameX(xmax), coors.strphys2frameY(coors.ymin()))
        out += self.stoppsset()
        out += self.stopclip()
        return out


class BinData(object):
    """\
    Store bin edge and value+error(s) data for a 1D or 2D bin.

    TODO: generalise/alias the attr names to avoid mention of x and y
    """

    def __init__(self, low, high, val, err):
        #print("@", low, high, val, err)
        self.low = floatify(low)
        self.high = floatify(high)
        self.val = float(val)
        self.err = floatpair(err)

    @property
    def is2D(self):
        return hasattr(self.low, "__len__") and hasattr(self.high, "__len__")

    @property
    def isValid(self):
        invalid_val = (isnan(self.val) or isnan(self.err[0]) or isnan(self.err[1]))
        if invalid_val:
            return False
        if self.is2D:
            invalid_low = any(isnan(x) for x in self.low)
            invalid_high = any(isnan(x) for x in self.high)
        else:
            invalid_low, invalid_high = isnan(self.low), isnan(self.high)
        return not (invalid_low or invalid_high)

    @property
    def xmin(self):
        return self.low
    @xmin.setter
    def xmin(self, x):
        self.low = x

    @property
    def xmax(self):
        return self.high
    @xmax.setter
    def xmax(self, x):
        self.high = x

    @property
    def xmid(self):
        # TODO: Generalise to 2D
        return (self.xmin + self.xmax) / 2.0

    @property
    def xwidth(self):
        # TODO: Generalise to 2D
        assert self.xmin <= self.xmax
        return self.xmax - self.xmin

    @property
    def y(self):
        return self.val
    @y.setter
    def y(self, x):
        self.val = x

    @property
    def ey(self):
        return self.err
    @ey.setter
    def ey(self, x):
        self.err = x

    @property
    def ymin(self):
        return self.y - self.ey[0]

    @property
    def ymax(self):
        return self.y + self.ey[1]

    def __getitem__(self, key):
        "dict-like access for backward compatibility"
        if key == "LowEdge":
            return self.xmin
        elif key in ("UpEdge", "HighEdge"):
            return self.xmax
        elif key == "Content":
            return self.y
        elif key == "Errors":
            return self.ey


class Histogram(DrawableObject, Described):

    def __init__(self, f, p=None):
        self.description = {}
        self.is2dim = False
        self.data = []
        self.read_input_data(f)
        self.sigmabinvalue = None
        self.meanbinvalue = None
        self.path = p

    def read_input_data(self, f):
        for line in f:
            if is_end_marker(line, 'HISTOGRAM'):
                break
            elif is_comment(line):
                continue
            else:
                line = line.rstrip()
                m = pat_property.match(line)
                if m:
                    prop, value = m.group(1, 2)
                    self.description[prop] = value
                else:
                    ## Detect symm errs
                    linearray = line.split()
                    if len(linearray) == 4:
                        self.data.append(BinData(*linearray))
                    ## Detect asymm errs
                    elif len(linearray) == 5:
                        self.data.append(BinData(linearray[0], linearray[1], linearray[2], [linearray[3], linearray[4]]))
                    ## Detect two-dimensionality
                    elif len(linearray) in [6, 7]:
                        self.is2dim = True
                        # If asymm z error, use the max or average of +- error
                        err = float(linearray[5])
                        if len(linearray) == 7:
                            if self.description.get("ShowMaxZErr", 1):
                                err = max(err, float(linearray[6]))
                            else:
                                err = 0.5 * (err + float(linearray[6]))
                        self.data.append(BinData([linearray[0], linearray[2]], [linearray[1], linearray[3]], linearray[4], err))
                    ## Unknown histo format
                    else:
                        raise RuntimeError("Unknown HISTOGRAM data line format with %d entries" % len(linearray))

    def mangle_input(self):
        norm2int = self.attr_bool("NormalizeToIntegral", False)
        norm2sum = self.attr_bool("NormalizeToSum", False)
        if norm2int or norm2sum:
            if norm2int and norm2sum:
                print("Can't normalize to Integral and to Sum at the same time. Will normalize to the sum.")
            foo = 0.0
            # TODO: change to "in self.data"?
            for i in range(len(self.data)):
                if norm2sum:
                    foo += self.data[i].val
                else:
                    foo += self.data[i].val*(self.data[i].xmax - self.data[i].xmin)
            # TODO: change to "in self.data"?
            if foo != 0:
                for i in range(len(self.data)):
                    self.data[i].val /= foo
                    self.data[i].err[0] /= foo
                    self.data[i].err[1] /= foo
        scale = self.attr_float('Scale', 1.0)
        if scale != 1.0:
            # TODO: change to "in self.data"?
            for i in range(len(self.data)):
                self.data[i].val *= scale
                self.data[i].err[0] *= scale
                self.data[i].err[1] *= scale
        if self.attr_float("ScaleError", 0.0):
            scale = float(self.attr_float("ScaleError"))
            for i in range(len(self.data)):
                self.data[i].err[0] *= scale
                self.data[i].err[1] *= scale
        if self.attr_float('Shift', 0.0):
            shift = float(self.attr_float("Shift"))
            for i in range(len(self.data)):
                self.data[i].val += shift
        #if self.description.get('Rebin'):
        if self.has_attr('Rebin') and self.attr('Rebin') != '':
            rawrebins = self.attr('Rebin').strip().split('\t')
            rebins = []
            maxindex = len(self.data) - 1
            if len(rawrebins) % 2 == 1:
                rebins.append({'Start': self.data[0].xmin, 'Rebin': int(rawrebins[0])})
                rawrebins.pop(0)
            for i in range(0, len(rawrebins), 2):
                if float(rawrebins[i]) < self.data[maxindex].xmax:
                    rebins.append({'Start': float(rawrebins[i]), 'Rebin': int(rawrebins[i+1])})
            if len(rebins) == 0 or (rebins[0]['Start'] > self.data[0].xmin):
                rebins.insert(0, {'Start': self.data[0].xmin, 'Rebin': 1})
            errortype = self.attr("ErrorType", "stat")
            newdata = []
            lower = self.getBin(rebins[0]['Start'])
            for k in range(0, len(rebins), 1):
                rebin = rebins[k]['Rebin']
                upper = maxindex
                end = 1
                if k < len(rebins) - 1:
                    upper = self.getBin(rebins[k+1]['Start']) - 1
                    end = 0
                for i in range(lower, upper+1, rebin):
                    foo = 0.0
                    barl = 0.0
                    baru = 0.0
                    for j in range(rebin):
                        if i+j > maxindex:
                            break
                        binwidth = self.data[i+j].xwidth
                        foo += self.data[i+j].val * binwidth
                        if errortype == "stat":
                            barl += (binwidth * self.data[i+j].err[0])**2
                            baru += (binwidth * self.data[i+j].err[1])**2
                        elif errortype == "env":
                            barl += self.data[i+j].ymin * binwidth
                            baru += self.data[i+j].ymax * binwidth
                        else:
                            logging.error("Rebinning for ErrorType not implemented.")
                            sys.exit(1)
                    upedge = min(i+rebin-1, maxindex)
                    newbinwidth = self.data[upedge].xmax - self.data[i].xmin
                    newcentral = foo/newbinwidth
                    if errortype == "stat":
                        newerror = [sqrt(barl)/newbinwidth, sqrt(baru)/newbinwidth]
                    elif errortype == "env":
                        newerror = [(foo-barl)/newbinwidth, (baru-foo)/newbinwidth]
                    newdata.append(BinData(self.data[i].xmin, self.data[i+rebin-1].xmax, newcentral, newerror))
                lower = (upper/rebin)*rebin + (upper % rebin)
            self.data = newdata

    def add(self, name):
        if len(self.data) != len(name.data):
            print('+++ Error in Histogram.add() for %s: different numbers of bins' % self.path)
        for i in range(len(self.data)):
            if fuzzyeq(self.data[i].xmin, name.data[i].xmin) and \
               fuzzyeq(self.data[i].xmax, name.data[i].xmax):
                self.data[i].val += name.data[i].val
                self.data[i].err[0] = sqrt(self.data[i].err[0]**2 + name.data[i].err[0]**2)
                self.data[i].err[1] = sqrt(self.data[i].err[1]**2 + name.data[i].err[1]**2)
            else:
                print('+++ Error in Histogram.add() for %s: binning of histograms differs' % self.path)

    def divide(self, name):
        #print(name.path, self.path)
        if len(self.data) != len(name.data):
            print('+++ Error in Histogram.divide() for %s: different numbers of bins' % self.path)
        for i in range(len(self.data)):
            if fuzzyeq(self.data[i].xmin, name.data[i].xmin) and \
               fuzzyeq(self.data[i].xmax, name.data[i].xmax):
                try:
                    self.data[i].err[0] /= name.data[i].val
                except ZeroDivisionError:
                    self.data[i].err[0] = 0.
                try:
                    self.data[i].err[1] /= name.data[i].val
                except ZeroDivisionError:
                    self.data[i].err[1] = 0.
                try:
                    self.data[i].val /= name.data[i].val
                except ZeroDivisionError:
                    self.data[i].val = 1.
                # self.data[i].err[0] = sqrt(self.data[i].err[0]**2 + name.data[i].err[0]**2)
                # self.data[i].err[1] = sqrt(self.data[i].err[1]**2 + name.data[i].err[1]**2)
            else:
                print('+++ Error in Histogram.divide() for %s: binning of histograms differs' % self.path)

    def dividereverse(self, name):
        if len(self.data) != len(name.data):
            print('+++ Error in Histogram.dividereverse() for %s: different numbers of bins' % self.path)
        for i in range(len(self.data)):
            if fuzzyeq(self.data[i].xmin, name.data[i].xmin) and \
               fuzzyeq(self.data[i].xmax, name.data[i].xmax):
                try:
                    self.data[i].err[0] = name.data[i].err[0]/self.data[i].val
                except ZeroDivisionError:
                    self.data[i].err[0] = 0.
                try:
                    self.data[i].err[1] = name.data[i].err[1]/self.data[i].val
                except ZeroDivisionError:
                    self.data[i].err[1] = 0.
                try:
                    self.data[i].val = name.data[i].val/self.data[i].val
                except ZeroDivisionError:
                    self.data[i].val = 1.
            else:
                print('+++ Error in Histogram.dividereverse(): binning of histograms differs')

    def deviation(self, name):
        if len(self.data) != len(name.data):
            print('+++ Error in Histogram.deviation() for %s: different numbers of bins' % self.path)
        for i in range(len(self.data)):
            if fuzzyeq(self.data[i].xmin, name.data[i].xmin) and \
               fuzzyeq(self.data[i].xmax, name.data[i].xmax):
                self.data[i].val -= name.data[i].val
                try:
                    self.data[i].val /= 0.5*sqrt((name.data[i].err[0] + name.data[i].err[1])**2 + \
                                                 (self.data[i].err[0] + self.data[i].err[1])**2)
                except ZeroDivisionError:
                    self.data[i].val = 0.0
                try:
                    self.data[i].err[0] /= name.data[i].err[0]
                except ZeroDivisionError:
                    self.data[i].err[0] = 0.0
                try:
                    self.data[i].err[1] /= name.data[i].err[1]
                except ZeroDivisionError:
                    self.data[i].err[1] = 0.0
            else:
                print('+++ Error in Histogram.deviation() for %s: binning of histograms differs' % self.path)

    def delta(self, name):
        self.divide(name)
        for i in range(len(self.data)):
            # Use the BinData attribute API consistently (dict-style
            # self.data[i]['Content'] access would fail on BinData objects)
            self.data[i].val -= 1.

    def deltapercent(self, name):
        self.delta(name)
        for i in range(len(self.data)):
            self.data[i].val *= 100.
            self.data[i].err[0] *= 100.
            self.data[i].err[1] *= 100.
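    # The bin-combination rule used by Histogram.add() above -- values add
    # linearly, asymmetric errors add in quadrature -- can be checked in
    # isolation. This is a standalone sketch, not part of this script; the
    # helper name combine_bins is hypothetical:

```python
from math import sqrt

def combine_bins(val1, err1, val2, err2):
    """Combine two matching histogram bins: values add, errors add in quadrature."""
    val = val1 + val2
    # err[0]/err[1] are the downward/upward errors, combined independently
    err = [sqrt(err1[0]**2 + err2[0]**2),
           sqrt(err1[1]**2 + err2[1]**2)]
    return val, err

val, err = combine_bins(1.0, [3.0, 0.3], 2.0, [4.0, 0.4])
print(val, err[0])  # 3.0 5.0  (sqrt(3^2 + 4^2) == 5)
```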
    def getBin(self, x):
        if x < self.data[0].xmin or x > self.data[len(self.data)-1].xmax:
            print('+++ Error in Histogram.getBin(): x out of range')
            return float('nan')
        for i in range(1, len(self.data)-1, 1):
            # NOTE: the remainder of this method and the opening of draw()
            # were lost in transit; this body and the draw() preamble below
            # are an assumed reconstruction
            if x < self.data[i].xmin:
                return i-1
        return len(self.data)-1

    def draw(self, coors):
        seen_nan = False
        out = ''
        out += self.startclip()
        out += self.startpsset()
        if self.data:
            if self.is2dim:
                for b in self.data:
                    out += ('\\psframe')
                    # Map the z value onto the 0..129 gradient colour index
                    color = int(129*coors.phys2frameZ(b.val))
                    if b.val > coors.zmax():
                        color = 129
                    if b.val < coors.zmin():
                        color = 0
                    if b.val <= coors.zmin():
                        out += ('[linewidth=0pt, linestyle=none, fillstyle=solid, fillcolor=white]')
                    else:
                        out += ('[linewidth=0pt, linestyle=none, fillstyle=solid, fillcolor={gradientcolors!!['+str(color)+']}]')
                    out += ('(' + coors.strphys2frameX(b.low[0]) + ', ' \
                                + coors.strphys2frameY(b.low[1]) + ')(' \
                                + coors.strphys2frameX(b.high[0]) + ', ' \
                                + coors.strphys2frameY(b.high[1]) + ')\n')
            else:
                if self.getErrorBands():
                    self.description['SmoothLine'] = 0
                    for b in self.data:
                        if isnan(b.val) or isnan(b.err[0]) or isnan(b.err[1]):
                            seen_nan = True
                            continue
                        out += ('\\psframe[dimen=inner,linewidth=0pt,linestyle=none,fillstyle=%s,fillcolor=%s,opacity=%s,hatchcolor=%s]'
                                % (self.getErrorBandStyle(), self.getErrorBandColor(), self.getErrorBandOpacity(), self.getHatchColor()))
                        out += ('(' + coors.strphys2frameX(b.xmin) + ', ' \
                                    + coors.strphys2frameY(b.val - b.err[0]) + ')(' \
                                    + coors.strphys2frameX(b.xmax) + ', ' \
                                    + coors.strphys2frameY(b.val + b.err[1]) + ')\n')
                if self.getErrorBars():
                    for b in self.data:
                        if isnan(b.val) or isnan(b.err[0]) or isnan(b.err[1]):
                            seen_nan = True
                            continue
                        if b.val == 0. and b.err == [0., 0.]:
                            continue
                        out += ('\\psline')
                        out += ('(' + coors.strphys2frameX(b.xmin) + ', ' \
                                    + coors.strphys2frameY(b.val) + ')(' \
                                    + coors.strphys2frameX(b.xmax) + ', ' \
                                    + coors.strphys2frameY(b.val) + ')\n')
                        out += ('\\psline')
                        bincenter = coors.strphys2frameX(.5*(b.xmin+b.xmax))
                        out += ('(' + bincenter + ', ' \
                                    + coors.strphys2frameY(b.val-b.err[0]) + ')(' \
                                    + bincenter + ', ' \
                                    + coors.strphys2frameY(b.val+b.err[1]) + ')\n')
                if self.getErrorTubes():
                    for i in range(len(self.data)):
                        if self.data[i].val == 0. \
                                and self.data[i].err == [0., 0.]:
                            continue
                        tubeopts = ('linestyle=' + self.getErrorTubeLineStyle() \
                                    + ', linecolor=' + self.getErrorTubeLineColor() \
                                    + ', linewidth=' + self.getErrorTubeLineWidth() \
                                    + ', strokeopacity=' + self.getErrorTubeLineOpacity() \
                                    + ', opacity=' + self.getFillOpacity())
                        out += ('\\psline['+tubeopts+']')
                        out += ('(' + coors.strphys2frameX(self.data[i].xmin) + ', ' \
                                    + coors.strphys2frameY(self.data[i].val-self.data[i].err[0]) + ')(' \
                                    + coors.strphys2frameX(self.data[i].xmax) + ', ' \
                                    + coors.strphys2frameY(self.data[i].val-self.data[i].err[0]) + ')\n')
                        out += ('\\psline['+tubeopts+']')
                        out += ('(' + coors.strphys2frameX(self.data[i].xmin) + ', ' \
                                    + coors.strphys2frameY(self.data[i].val+self.data[i].err[1]) + ')(' \
                                    + coors.strphys2frameX(self.data[i].xmax) + ', ' \
                                    + coors.strphys2frameY(self.data[i].val+self.data[i].err[1]) + ')\n')
                        if self.description.get('ConnectBins', '1') == '1':
                            if i > 0:
                                out += ('\\psline['+tubeopts+']')
                                out += ('(' + coors.strphys2frameX(self.data[i].xmin) + ', ' \
                                            + coors.strphys2frameY(self.data[i].val-self.data[i].err[0]) + ')(' \
                                            + coors.strphys2frameX(self.data[i].xmin) + ', ' \
                                            + coors.strphys2frameY(self.data[i-1].val-self.data[i-1].err[0]) + ')\n')
                                out += ('\\psline['+tubeopts+']')
                                out += ('(' + coors.strphys2frameX(self.data[i].xmin) + ', ' \
                                            + coors.strphys2frameY(self.data[i].val+self.data[i].err[1]) + ')(' \
                                            + coors.strphys2frameX(self.data[i].xmin) + ', ' \
                                            + coors.strphys2frameY(self.data[i-1].val+self.data[i-1].err[1]) + ')\n')
                if self.getSmoothLine():
                    out += '\\psbezier'
                else:
                    out += '\\psline'
                if self.getFillStyle() != 'none':
                    # make sure that filled areas go all the way down to the x-axis
                    if coors.phys2frameX(self.data[0].xmin) > 1e-4:
                        out += '(' + coors.strphys2frameX(self.data[0].xmin) + ', -0.1)\n'
                    else:
                        out += '(-0.1, -0.1)\n'
                start = True
                for i, b in enumerate(self.data):
                    if isnan(b.val):
                        seen_nan = True
                        continue
                    ## Join/separate data points, with vertical/diagonal lines
                    if (not start and not
                            self.getSmoothLine()):
                        if self.description.get('ConnectBins', '1') != '1':
                            out += ('\\psline')
                        else:
                            ## If bins are joined, but there is a gap in binning, choose whether to fill the gap
                            if (abs(coors.phys2frameX(self.data[i-1].xmax) - coors.phys2frameX(b.xmin)) > 1e-4):
                                if self.description.get('ConnectGaps', '0') != '1':
                                    out += ('\\psline')
                                    # TODO: Perhaps use a new dashed line to fill the gap?
                    start = False
                    if self.getSmoothLine():
                        out += ('(' + coors.strphys2frameX(0.5*(b.xmin+b.xmax)) + ', ' \
                                    + coors.strphys2frameY(b.val) + ')\n')
                    else:
                        out += ('(' + coors.strphys2frameX(b.xmin) + ', ' \
                                    + coors.strphys2frameY(b.val) + ')(' \
                                    + coors.strphys2frameX(b.xmax) + ', ' \
                                    + coors.strphys2frameY(b.val) + ')\n')
                if self.getFillStyle() != 'none':
                    # make sure that filled areas go all the way down to the x-axis
                    if (coors.phys2frameX(self.data[-1].xmax) < 1-1e-4):
                        out += '(' + coors.strphys2frameX(self.data[-1].xmax) + ', -0.1)\n'
                    else:
                        out += '(1.1, -0.1)\n'
                #
                if self.getPolyMarker() != '':
                    for b in self.data:
                        if isnan(b.val):
                            seen_nan = True
                            continue
                        if b.val == 0. \
                                and b.err == [0., 0.]:
                            continue
                        out += ('\\psdot[dotstyle=%s,dotsize=%s,dotscale=%s]('
                                % (self.getPolyMarker(), self.getDotSize(), self.getDotScale()) \
                                + coors.strphys2frameX(.5*(b.xmin+b.xmax)) + ', ' \
                                + coors.strphys2frameY(b.val) + ')\n')
            out += "% END DATA\n"
        else:
            print("WARNING: No valid bin value/errors/edges to plot!")
            out += "% NO DATA!\n"
        out += self.stoppsset()
        out += self.stopclip()
        if seen_nan:
            print("WARNING: NaN-valued value or error bar!")
        return out

    # def is2dimensional(self):
    #     return self.is2dim

    def getXMin(self):
        if not self.data:
            return 0
        elif self.is2dim:
            return min(b.low[0] for b in self.data)
        else:
            return min(b.xmin for b in self.data)

    def getXMax(self):
        if not self.data:
            return 1
        elif self.is2dim:
            return max(b.high[0] for b in self.data)
        else:
            return max(b.xmax for b in self.data)

    def getYMin(self, xmin, xmax, logy):
        if not self.data:
            return 0
        elif self.is2dim:
            return min(b.low[1] for b in self.data)
        else:
            yvalues = []
            for b in self.data:
                if (b.xmax > xmin or b.xmin >= xmin) and (b.xmin < xmax or b.xmax <= xmax):
                    foo = b.val
                    if self.getErrorBars() or self.getErrorBands():
                        foo -= b.err[0]
                    if not isnan(foo) and (not logy or foo > 0):
                        yvalues.append(foo)
            return min(yvalues) if yvalues else self.data[0].val

    def getYMax(self, xmin, xmax):
        if not self.data:
            return 1
        elif self.is2dim:
            return max(b.high[1] for b in self.data)
        else:
            yvalues = []
            for b in self.data:
                if (b.xmax > xmin or b.xmin >= xmin) and (b.xmin < xmax or b.xmax <= xmax):
                    foo = b.val
                    if self.getErrorBars() or self.getErrorBands():
                        foo += b.err[1]
                    if not isnan(foo):  # and (not logy or foo > 0):
                        yvalues.append(foo)
            return max(yvalues) if yvalues else self.data[0].val

    def getZMin(self, xmin, xmax, ymin, ymax):
        if not self.is2dim:
            return 0
        zvalues = []
        for b in self.data:
            if (b.xmax[0] > xmin and b.xmin[0] < xmax) and (b.xmax[1] > ymin and b.xmin[1] < ymax):
                zvalues.append(b.val)
        return min(zvalues)

    def getZMax(self, xmin, xmax, ymin, ymax):
        if not self.is2dim:
            return 0
        zvalues = []
        for \
                b in self.data:
            if (b.xmax[0] > xmin and b.xmin[0] < xmax) and (b.xmax[1] > ymin and b.xmin[1] < ymax):
                zvalues.append(b.val)
        return max(zvalues)


class Value(Histogram):
    def read_input_data(self, f):
        for line in f:
            if is_end_marker(line, 'VALUE'):
                break
            elif is_comment(line):
                continue
            else:
                line = line.rstrip()
                m = pat_property.match(line)
                if m:
                    prop, value = m.group(1, 2)
                    self.description[prop] = value
                else:
                    linearray = line.split()
                    if len(linearray) == 3:
                        # dummy x-values
                        self.data.append(BinData(0.0, 1.0, linearray[0], [linearray[1], linearray[2]]))
                    else:
                        raise Exception('Value does not have the expected number of columns. ' + line)
    # TODO: specialise draw() here


class Counter(Histogram):
    def read_input_data(self, f):
        for line in f:
            if is_end_marker(line, 'COUNTER'):
                break
            elif is_comment(line):
                continue
            else:
                line = line.rstrip()
                m = pat_property.match(line)
                if m:
                    prop, value = m.group(1, 2)
                    self.description[prop] = value
                else:
                    linearray = line.split()
                    if len(linearray) == 2:
                        # dummy x-values, symmetric error
                        self.data.append(BinData(0.0, 1.0, linearray[0], [linearray[1], linearray[1]]))
                    else:
                        raise Exception('Counter does not have the expected number of columns. ' + line)
    # TODO: specialise draw() here


class Histo1D(Histogram):
    def read_input_data(self, f):
        for line in f:
            if is_end_marker(line, 'HISTO1D'):
                break
            elif is_comment(line):
                continue
            else:
                line = line.rstrip()
                m = pat_property.match(line)
                if m:
                    prop, value = m.group(1, 2)
                    self.description[prop] = value
                else:
                    linearray = line.split()
                    ## Detect symm errs
                    # TODO: Not sure what the 8-param version is for... auto-compatibility with YODA format?
                    if len(linearray) in [4, 8]:
                        self.data.append(BinData(linearray[0], linearray[1], linearray[2], linearray[3]))
                    ## Detect asymm errs
                    elif len(linearray) == 5:
                        self.data.append(BinData(linearray[0], linearray[1], linearray[2], [linearray[3], linearray[4]]))
                    else:
                        raise Exception('Histo1D does not have the expected number of columns. '
                                        + line)
    # TODO: specialise draw() here


class Histo2D(Histogram):
    def read_input_data(self, f):
        self.is2dim = True  #< Should really be done in a constructor, but this is easier for now...
        for line in f:
            if is_end_marker(line, 'HISTO2D'):
                break
            elif is_comment(line):
                continue
            else:
                line = line.rstrip()
                m = pat_property.match(line)
                if m:
                    prop, value = m.group(1, 2)
                    self.description[prop] = value
                else:
                    linearray = line.split()
                    if len(linearray) in [6, 7]:
                        # If asymm z error, use the max or average of +- error
                        err = float(linearray[5])
                        if len(linearray) == 7:
                            if self.description.get("ShowMaxZErr", 1):
                                err = max(err, float(linearray[6]))
                            else:
                                err = 0.5 * (err + float(linearray[6]))
                        self.data.append(BinData([linearray[0], linearray[2]], [linearray[1], linearray[3]], float(linearray[4]), err))
                    else:
                        raise Exception('Histo2D does not have the expected number of columns. ' + line)
    # TODO: specialise draw() here


#############################


class Frame(object):

    def __init__(self, description, name):
        self.description = description
        self.name = name
        self.framelinewidth = '0.3pt'
        self.framelinecolor = "black"
        self.zeroline = self.description.get(self.name+'ZeroLine', '') == '1'
        self.unitline = self.description.get(self.name+'UnitLine', '') == '1'

    def drawZeroLine(self, coor):
        out = ('')
        if self.zeroline:
            out += ('\n%\n% ZeroLine\n%\n')
            out += ('\\psline[linewidth=%s,linecolor=%s](0,%s)(1,%s)\n'
                    % (self.framelinewidth, self.framelinecolor, coor, coor))
        return out

    def drawUnitLine(self, coor):
        out = ('')
        if self.unitline:
            out += ('\n%\n% UnitLine\n%\n')
            out += ('\\psline[linewidth=%s,linecolor=%s](0,%s)(1,%s)\n'
                    % (self.framelinewidth, self.framelinecolor, coor, coor))
        return out

    def draw(self):
        out = ('\n%\n% Frame\n%\n')
        if self.description.get('FrameColor') is not None:
            color = inputdata.description['FrameColor']
            # We want to draw this frame only once, so set it to False for next time:
            inputdata.description['FrameColor'] = None
            # Calculate how high and wide the overall plot is
            height = [0, 0]
            width = inputdata.attr('PlotSizeX')
            if inputdata.attr_bool('RatioPlot', True):
                height[1] = -inputdata.description['RatioPlotSizeY']
            if self.description.get('MainPlot', '0') != '0':
                height[0] = inputdata.description['PlotSizeY']
            else:
                height[0] = -height[1]
                height[1] = 0
            # Get the margin widths
            left = inputdata.description['LeftMargin'] + 0.1
            right = inputdata.description['RightMargin'] + 0.1
            top = inputdata.description['TopMargin'] + 0.1
            bottom = inputdata.description['BottomMargin'] + 0.1
            # out += ('\\rput(0,1){\\psline[linewidth=%scm,linecolor=%s](%scm,%scm)(%scm,%scm)}\n'
            #         % (top, color, -left, top/2, width+right, top/2))
            out += ('\\rput(0,%scm){\\psline[linewidth=%scm,linecolor=%s](%scm,%scm)(%scm,%scm)}\n'
                    % (height[1], bottom, color, -left, -bottom/2, width+right, -bottom/2))
            out += ('\\rput(0,0){\\psline[linewidth=%scm,linecolor=%s](%scm,%scm)(%scm,%scm)}\n'
                    % (left, color, -left/2, height[1]-0.05, -left/2, height[0]+0.05))
            out += ('\\rput(1,0){\\psline[linewidth=%scm,linecolor=%s](%scm,%scm)(%scm,%scm)}\n'
                    % (right, color, right/2, height[1]-0.05, right/2, height[0]+0.05))
        if 'FrameLineWidth' in self.description:
            self.framelinewidth = self.description['FrameLineWidth']
        if 'FrameLineColor' in self.description:
            self.framelinecolor = self.description['FrameLineColor']
        out += ('\\psframe[linewidth='+self.framelinewidth+',linecolor='+self.framelinecolor+',dimen=middle](0,0)(1,1)\n')
        return out


class Ticks(object):

    def __init__(self, description, coors):
        self.description = description
        self.majorticklinewidth = self.getMajorTickLineWidth()
        self.minorticklinewidth = self.getMinorTickLineWidth()
        self.majorticklinecolor = self.getMajorTickLineColor()
        self.minorticklinecolor = self.getMinorTickLineColor()
        self.majorticklength = self.getMajorTickLength()
        self.minorticklength = self.getMinorTickLength()
        self.majorticklabelcolor = self.getMajorTickLabelColor()
        self.coors = coors
        self.decimal = 0
        self.framelinewidth = '0.3pt'
        self.framelinecolor = 'black'

    def draw_ticks(self, vmin, vmax,
                   plotlog=False,
                   custommajorticks=[], customminorticks=[],
                   custommajortickmarks=-1, customminortickmarks=-1,
                   twosided=False, drawlabels=True):
        if vmax <= vmin:
            raise Exception("Cannot place tick marks. Inconsistent min=%s and max=%s" % (vmin, vmax))
        out = ""
        if plotlog:
            if vmin <= 0 or vmax <= 0:
                raise Exception("Cannot place log axis min or max tick <= 0")
            if (custommajorticks != [] or customminorticks != []):
                for i in range(len(custommajorticks)):
                    value = custommajorticks[i]['Value']
                    label = custommajorticks[i]['Label']
                    if value >= vmin and value <= vmax:
                        out += self.draw_majortick(value, twosided)
                        if drawlabels:
                            out += self.draw_majorticklabel(value, label=label)
                for i in range(len(customminorticks)):
                    value = customminorticks[i]['Value']
                    if value >= vmin and value <= vmax:
                        out += self.draw_minortick(value, twosided)
            else:
                x = int(log10(vmin))
                n_labels = 0
                while x < log10(vmax) + 1:
                    if 10**x >= vmin:
                        ticklabel = 10**x
                        if ticklabel > vmin and ticklabel < vmax:
                            out += self.draw_majortick(ticklabel, twosided)
                            if drawlabels:
                                out += self.draw_majorticklabel(ticklabel)
                                n_labels += 1
                        if ticklabel == vmin or ticklabel == vmax:
                            if drawlabels:
                                out += self.draw_majorticklabel(ticklabel)
                                n_labels += 1
                        for i in range(2, 10):
                            ticklabel = i*10**(x-1)
                            if ticklabel > vmin and ticklabel < vmax:
                                out += self.draw_minortick(ticklabel, twosided)
                                if drawlabels and n_labels == 0:
                                    # some special care for the last minor tick
                                    if (i+1)*10**(x-1) < vmax:
                                        out += self.draw_minorticklabel(ticklabel)
                                    else:
                                        out += self.draw_minorticklabel(ticklabel, last=True)
                    x += 1
        elif custommajorticks != [] or customminorticks != []:
            for i in range(len(custommajorticks)):
                value = custommajorticks[i]['Value']
                label = custommajorticks[i]['Label']
                if value >= vmin and value <= vmax:
                    out += self.draw_majortick(value, twosided)
                    if drawlabels:
                        out += self.draw_majorticklabel(value, label=label)
            for i in range(len(customminorticks)):
                value = customminorticks[i]['Value']
                if value >= vmin and value <= vmax:
                    out += \
                        self.draw_minortick(value, twosided)
        else:
            vrange = vmax - vmin
            if isnan(vrange):
                vrange, vmin, vmax = 1, 1, 2
            digits = int(log10(vrange)) + 1
            if vrange <= 1:
                digits -= 1
            foo = int(vrange/(10**(digits-1)))
            if foo/9. > 0.5:
                tickmarks = 10
            elif foo/9. > 0.2:
                tickmarks = 5
            elif foo/9. > 0.1:
                tickmarks = 2
            if custommajortickmarks > -1:
                if custommajortickmarks not in [1, 2, 5, 10, 20]:
                    print('+++ Error in Ticks.draw_ticks(): MajorTickMarks must be in [1, 2, 5, 10, 20]')
                else:
                    tickmarks = custommajortickmarks
            if tickmarks == 2 or tickmarks == 20:
                minortickmarks = 3
            else:
                minortickmarks = 4
            if customminortickmarks > -1:
                minortickmarks = customminortickmarks
            #
            x = 0
            while x > vmin*10**digits:
                x -= tickmarks*100**(digits-1)
            while x <= vmax*10**digits:
                if x >= vmin*10**digits - tickmarks*100**(digits-1):
                    ticklabel = 1.*x/10**digits
                    if int(ticklabel) == ticklabel:
                        ticklabel = int(ticklabel)
                    if float(ticklabel-vmin)/vrange >= -1e-5:
                        if abs(ticklabel-vmin)/vrange > 1e-5 and abs(ticklabel-vmax)/vrange > 1e-5:
                            out += self.draw_majortick(ticklabel, twosided)
                            if drawlabels:
                                out += self.draw_majorticklabel(ticklabel)
                    xminor = x
                    for i in range(minortickmarks):
                        xminor += 1.*tickmarks*100**(digits-1)/(minortickmarks+1)
                        ticklabel = 1.*xminor/10**digits
                        if ticklabel > vmin and ticklabel < vmax:
                            if abs(ticklabel-vmin)/vrange > 1e-5 and abs(ticklabel-vmax)/vrange > 1e-5:
                                out += self.draw_minortick(ticklabel, twosided)
                x += tickmarks*100**(digits-1)
        return out

    def draw_breaks(self, vmin, vmax, breaks=[], twosided=False):
        out = ""
        for b in breaks:
            value = b['Value']
            if value >= vmin and value <= vmax:
                out += self.draw_break(value, twosided)
        return out

    def add_definitions(self):
        pass

    def read_parameters(self):
        if 'FrameLineWidth' in self.description:
            self.framelinewidth = self.description['FrameLineWidth']
        if 'FrameLineColor' in self.description:
            self.framelinecolor = self.description['FrameLineColor']

    def draw(self):
        pass

    def draw_minortick(self, ticklabel, twosided):
        pass

    def draw_majortick(self,
                       ticklabel, twosided):
        pass

    def draw_majorticklabel(self, ticklabel):
        pass

    def draw_minorticklabel(self, value, label='', last=False):
        return ''

    def draw_break(self, pos, twosided):
        pass

    def get_ticklabel(self, value, plotlog=False, minor=False, lastminor=False):
        label = ''
        prefix = ''
        if plotlog:
            bar = int(log10(value))
            if bar < 0:
                sign = '-'
            else:
                sign = '\\,'
            if minor:
                # The power of ten is only to be added to the last minor tick label
                if lastminor:
                    label = str(int(value/(10**bar))) + "\\cdot" + '10$^{'+sign+'\\text{'+str(abs(bar))+'}}$'
                else:
                    label = str(int(value/(10**bar)))  # The naked prefactor
            else:
                if bar == 0:
                    label = '1'
                else:
                    label = '10$^{'+sign+'\\text{%s}}$' % str(abs(bar))
        else:
            if fabs(value) < 1e-10:
                value = 0
            label = '%.3g' % value
            if "e" in label:
                a, b = label.split("e")
                astr = "%2.1f" % float(a)
                bstr = str(int(b))
                label = "\\smaller{%s $\\!\\cdot 10^{%s} $}" % (astr, bstr)
        return label

    def getMajorTickLineWidth(self):
        pass

    def getMinorTickLineWidth(self):
        pass

    def getMajorTickLineColor(self):
        pass

    def getMinorTickLineColor(self):
        pass

    def getMajorTickLabelColor(self):
        pass

    def getMinorTickLength(self):
        pass

    def getMajorTickLength(self):
        pass


class XTicks(Ticks):

    def draw(self, custommajorticks=[], customminorticks=[],
             custommajortickmarks=-1, customminortickmarks=-1,
             drawlabels=True, breaks=[]):
        twosided = bool(int(self.description.get('XTwosidedTicks', '1')))
        out = ""
        out += ('\n%\n% X-Ticks\n%\n')
        out += self.add_definitions()
        uselog = self.description['LogX'] and (self.coors.xmin() > 0 and self.coors.xmax() > 0)
        out += self.draw_ticks(self.coors.xmin(), self.coors.xmax(),
                               plotlog=uselog,
                               custommajorticks=custommajorticks,
                               customminorticks=customminorticks,
                               custommajortickmarks=custommajortickmarks,
                               customminortickmarks=customminortickmarks,
                               twosided=twosided,
                               drawlabels=drawlabels)
        # Break positions on the x-axis are x values, so test them against
        # the x range (the original compared against the y range here)
        out += self.draw_breaks(self.coors.xmin(), self.coors.xmax(),
                                breaks, twosided)
        return out

    def add_definitions(self):
        self.read_parameters()
        out = ''
        out += \
            ('\\def\\majortickmarkx{\\psline[linewidth='+self.majorticklinewidth+',linecolor='+self.majorticklinecolor+'](0,0)(0,'+self.majorticklength+')}%\n')
        out += ('\\def\\minortickmarkx{\\psline[linewidth='+self.minorticklinewidth+',linecolor='+self.minorticklinecolor+'](0,0)(0,'+self.minorticklength+')}%\n')
        out += ('\\def\\breakx{%\n \\rput{270}(0,0){\\psline[linewidth='+self.framelinewidth+',linecolor=white](0,-1pt)(0,1pt)}\n \\rput{180}(0,0){%\n \\rput{20}(0,1pt){%\n \\psline[linewidth='+self.framelinewidth+',linecolor='+self.framelinecolor+'](-5pt,0)(5pt,0)\n }\n \\rput{20}(0,-1pt){%\n \\psline[linewidth='+self.framelinewidth+',linecolor='+self.framelinecolor+'](-5pt,0)(5pt,0)\n }\n }\n }\n')
        return out

    def draw_minortick(self, ticklabel, twosided):
        out = ''
        out += '\\rput('+self.coors.strphys2frameX(ticklabel)+', 0){\\minortickmarkx}\n'
        if twosided:
            out += '\\rput{180}('+self.coors.strphys2frameX(ticklabel)+', 1){\\minortickmarkx}\n'
        return out

    def draw_minorticklabel(self, value, label='', last=False):
        if not label:
            label = self.get_ticklabel(value, int(self.description['LogX']), minor=True, lastminor=last)
        if last:
            # Some more indentation for the last minor label
            return ('\\rput('+self.coors.strphys2frameX(value)+', 0){\\rput[B](1.9\\labelsep,-2.3\\labelsep){\\strut{}'+label+'}}\n')
        else:
            return ('\\rput('+self.coors.strphys2frameX(value)+', 0){\\rput[B](0,-2.3\\labelsep){\\strut{}'+label+'}}\n')

    def draw_majortick(self, ticklabel, twosided):
        out = ''
        out += '\\rput('+self.coors.strphys2frameX(ticklabel)+', 0){\\majortickmarkx}\n'
        if twosided:
            out += '\\rput{180}('+self.coors.strphys2frameX(ticklabel)+', 1){\\majortickmarkx}\n'
        return out

    def draw_majorticklabel(self, value, label=''):
        if not label:
            label = self.get_ticklabel(value, int(self.description['LogX']) and self.coors.xmin() > 0 and self.coors.xmax() > 0)
        labelparts = label.split("\\n")
        labelcode = label if len(labelparts) == 1 else ("\\shortstack{" + "\\\\ ".join(labelparts) + "}")
        rtn = "\\rput(" + \
              self.coors.strphys2frameX(value) + ", 0){\\rput[t](0,-\\labelsep){\\textcolor{" + self.majorticklabelcolor + "}{" + labelcode + "}}}\n"
        return rtn

    def draw_break(self, pos, twosided):
        out = ''
        # Draw the break at x=pos on the axis (the original emitted a stray
        # "(0, " prefix here, giving a malformed three-part coordinate)
        out += '\\rput('+self.coors.strphys2frameX(pos)+', 0){\\breakx}\n'
        if twosided:
            out += '\\rput('+self.coors.strphys2frameX(pos)+', 1){\\breakx}\n'
        return out

    def getMajorTickLength(self):
        return self.description.get('XMajorTickLength', '9pt')

    def getMinorTickLength(self):
        return self.description.get('XMinorTickLength', '4pt')

    def getMajorTickLineWidth(self):
        return self.description.get('XMajorTickLineWidth', '0.3pt')

    def getMinorTickLineWidth(self):
        return self.description.get('XMinorTickLineWidth', '0.3pt')

    def getMajorTickLineColor(self):
        # default must be a colour, not a length ('0.3pt' in the original was a typo)
        return self.description.get('XMajorTickLineColor', 'black')

    def getMinorTickLineColor(self):
        return self.description.get('XMinorTickLineColor', 'black')

    def getMajorTickLabelColor(self):
        return self.description.get('XMajorTickLabelColor', 'black')


class YTicks(Ticks):

    def draw(self, custommajorticks=[], customminorticks=[],
             custommajortickmarks=-1, customminortickmarks=-1,
             drawlabels=True, breaks=[]):
        twosided = bool(int(self.description.get('YTwosidedTicks', '1')))
        out = ""
        out += ('\n%\n% Y-Ticks\n%\n')
        out += self.add_definitions()
        uselog = self.description['LogY'] and self.coors.ymin() > 0 and self.coors.ymax() > 0
        out += self.draw_ticks(self.coors.ymin(), self.coors.ymax(),
                               plotlog=uselog,
                               custommajorticks=custommajorticks,
                               customminorticks=customminorticks,
                               custommajortickmarks=custommajortickmarks,
                               customminortickmarks=customminortickmarks,
                               twosided=twosided,
                               drawlabels=drawlabels)
        if self.description.get('ShortenLargeNumbers') and 'RatioPlot' not in self.description.get('PlotStage', ''):
            bar = int(self.decimal)
            if bar < 0:
                sign = '-'
            else:
                sign = '\\,'
            if bar == 0:
                pass
            else:
                pos = self.coors.strphys2frameY(self.coors.ymax())
                label = ('\\times 10$^{'+sign+'\\text{'+str(abs(self.decimal))+'}}$')
                out += ('\\uput[180]{0}(0, '
                        + pos + '){\\strut{}'+label+'}\n')
        out += self.draw_breaks(self.coors.ymin(), self.coors.ymax(),
                                breaks, twosided)
        return out

    def add_definitions(self):
        self.read_parameters()
        out = ''
        out += ('\\def\\majortickmarky{\\psline[linewidth='+self.majorticklinewidth+',linecolor='+self.majorticklinecolor+'](0,0)('+self.majorticklength+',0)}%\n')
        out += ('\\def\\minortickmarky{\\psline[linewidth='+self.minorticklinewidth+',linecolor='+self.minorticklinecolor+'](0,0)('+self.minorticklength+',0)}%\n')
        out += ('\\def\\breaky{%\n \\rput{180}(0,0){\\psline[linewidth='+self.framelinewidth+',linecolor=white](0,-1pt)(0,1pt)}\n \\rput{180}(0,0){%\n \\rput{20}(0,1pt){%\n \\psline[linewidth='+self.framelinewidth+',linecolor='+self.framelinecolor+'](-5pt,0)(5pt,0)\n }\n \\rput{20}(0,-1pt){%\n \\psline[linewidth='+self.framelinewidth+',linecolor='+self.framelinecolor+'](-5pt,0)(5pt,0)\n }\n }\n }\n')
        return out

    def draw_minortick(self, ticklabel, twosided):
        out = ''
        out += '\\rput(0, '+self.coors.strphys2frameY(ticklabel)+'){\\minortickmarky}\n'
        if twosided:
            out += '\\rput{180}(1, '+self.coors.strphys2frameY(ticklabel)+'){\\minortickmarky}\n'
        return out

    def draw_majortick(self, ticklabel, twosided):
        out = ''
        out += '\\rput(0, '+self.coors.strphys2frameY(ticklabel)+'){\\majortickmarky}\n'
        if twosided:
            out += '\\rput{180}(1, '+self.coors.strphys2frameY(ticklabel)+'){\\majortickmarky}\n'
        return out

    def draw_majorticklabel(self, value, label=''):
        if not label:
            label = self.get_ticklabel(value, int(self.description['LogY']) and self.coors.ymin() > 0 and self.coors.ymax() > 0)
        if self.description.get('RatioPlotMode', 'mcdata') == 'deviation' and 'RatioPlot' in self.description.get('PlotStage', ''):
            rtn = '\\uput[180]{0}(0, '+self.coors.strphys2frameY(value)+'){\\strut{}\\textcolor{'+self.majorticklabelcolor+'}{'+label+'\\,$\\sigma$}}\n'
        else:
            labelparts = label.split("\\n")
            labelcode = label if len(labelparts) == 1 else ("\\shortstack{" + "\\\\ ".join(labelparts) + "}")
            rtn = "\\rput(0, " + \
                  self.coors.strphys2frameY(value) + "){\\rput[r](-\\labelsep,0){\\textcolor{"+self.majorticklabelcolor+"}{" + labelcode + "}}}\n"
        return rtn

    def draw_break(self, pos, twosided):
        out = ''
        out += '\\rput(0, '+self.coors.strphys2frameY(pos)+'){\\breaky}\n'
        if twosided:
            out += '\\rput(1, '+self.coors.strphys2frameY(pos)+'){\\breaky}\n'
        return out

    def getMajorTickLength(self):
        return self.description.get('YMajorTickLength', '9pt')

    def getMinorTickLength(self):
        return self.description.get('YMinorTickLength', '4pt')

    def getMajorTickLineWidth(self):
        return self.description.get('YMajorTickLineWidth', '0.3pt')

    def getMinorTickLineWidth(self):
        return self.description.get('YMinorTickLineWidth', '0.3pt')

    def getMajorTickLineColor(self):
        return self.description.get('YMajorTickLineColor', 'black')

    def getMinorTickLineColor(self):
        return self.description.get('YMinorTickLineColor', 'black')

    def getMajorTickLabelColor(self):
        return self.description.get('YMajorTickLabelColor', 'black')


class ZTicks(Ticks):

    def __init__(self, description, coors):
        self.description = description
        self.majorticklinewidth = self.getMajorTickLineWidth()
        self.minorticklinewidth = self.getMinorTickLineWidth()
        self.majorticklinecolor = self.getMajorTickLineColor()
        self.minorticklinecolor = self.getMinorTickLineColor()
        self.majorticklength = '6pt'
        self.minorticklength = '2.6pt'
        self.majorticklabelcolor = self.getMajorTickLabelColor()
        self.coors = coors
        self.decimal = 0
        self.framelinewidth = '0.3pt'
        self.framelinecolor = 'black'

    def draw(self, custommajorticks=[], customminorticks=[],
             custommajortickmarks=-1, customminortickmarks=-1,
             drawlabels=True):
        out = ""
        out += ('\n%\n% Z-Ticks\n%\n')
        out += ('\\def\\majortickmarkz{\\psline[linewidth='+self.majorticklinewidth+',linecolor='+self.majorticklinecolor+'](0,0)('+self.majorticklength+',0)}%\n')
        out += ('\\def\\minortickmarkz{\\psline[linewidth='+self.minorticklinewidth+',linecolor='+self.minorticklinecolor+'](0,0)('+self.minorticklength+',0)}%\n')
        out += \
               self.draw_ticks(self.coors.zmin(), self.coors.zmax(),
                               plotlog=self.description['LogZ'],
                               custommajorticks=custommajorticks,
                               customminorticks=customminorticks,
                               custommajortickmarks=custommajortickmarks,
                               customminortickmarks=customminortickmarks,
                               twosided=False,
                               drawlabels=drawlabels)
        return out

    def draw_minortick(self, ticklabel, twosided):
        return '\\rput{180}(1, '+self.coors.strphys2frameZ(ticklabel)+'){\\minortickmarkz}\n'

    def draw_majortick(self, ticklabel, twosided):
        return '\\rput{180}(1, '+self.coors.strphys2frameZ(ticklabel)+'){\\majortickmarkz}\n'

    def draw_majorticklabel(self, value, label=''):
        if label == '':
            label = self.get_ticklabel(value, int(self.description['LogZ']))
        if self.description.get('RatioPlotMode', "mcdata") == 'deviation' \
           and 'RatioPlot' in self.description.get('PlotStage', ''):
            return ('\\uput[0]{0}(1, '+self.coors.strphys2frameZ(value)+'){\\strut{}'+label+'\\,$\\sigma$}\n')
        else:
            return ('\\uput[0]{0}(1, '+self.coors.strphys2frameZ(value)+'){\\strut{}'+label+'}\n')

    def getMajorTickLength(self):
        return self.description.get('ZMajorTickLength', '9pt')

    def getMinorTickLength(self):
        return self.description.get('ZMinorTickLength', '4pt')

    def getMajorTickLineWidth(self):
        return self.description.get('ZMajorTickLineWidth', '0.3pt')

    def getMinorTickLineWidth(self):
        return self.description.get('ZMinorTickLineWidth', '0.3pt')

    def getMajorTickLabelColor(self):
        return self.description.get('ZMajorTickLabelColor', 'black')

    def getMajorTickLineColor(self):
        return self.description.get('ZMajorTickLineColor', 'black')

    def getMinorTickLineColor(self):
        return self.description.get('ZMinorTickLineColor', 'black')


class Coordinates(object):

    def __init__(self, inputdata):
        self.description = inputdata.description

    def phys2frameX(self, x):
        if self.description['LogX']:
            if x > 0:
                result = 1.*(log10(x)-log10(self.xmin()))/(log10(self.xmax())-log10(self.xmin()))
            else:
                return -10
        else:
            result = 1.*(x-self.xmin())/(self.xmax()-self.xmin())
        if (fabs(result) < 1e-4):
            return 0
        else:
            return min(max(result, -10), 10)

    def phys2frameY(self, y):
        if self.description['LogY']:
            if y > 0 and self.ymin() > 0 and self.ymax() > 0:
                result = 1.*(log10(y)-log10(self.ymin()))/(log10(self.ymax())-log10(self.ymin()))
            else:
                return -10
        else:
            result = 1.*(y-self.ymin())/(self.ymax()-self.ymin())
        if (fabs(result) < 1e-4):
            return 0
        else:
            return min(max(result, -10), 10)

    def phys2frameZ(self, z):
        if self.description['LogZ']:
            if z > 0:
                result = 1.*(log10(z)-log10(self.zmin()))/(log10(self.zmax())-log10(self.zmin()))
            else:
                return -10
        else:
            result = 1.*(z-self.zmin())/(self.zmax()-self.zmin())
        if (fabs(result) < 1e-4):
            return 0
        else:
            return min(max(result, -10), 10)

    # TODO: Add frame2phys functions (to allow linear function sampling in
    # the frame space rather than the physical space)

    def strphys2frameX(self, x):
        return str(self.phys2frameX(x))

    def strphys2frameY(self, y):
        return str(self.phys2frameY(y))

    def strphys2frameZ(self, z):
        return str(self.phys2frameZ(z))

    def xmin(self):
        return self.description['Borders'][0]

    def xmax(self):
        return self.description['Borders'][1]

    def ymin(self):
        return self.description['Borders'][2]

    def ymax(self):
        return self.description['Borders'][3]

    def zmin(self):
        return self.description['Borders'][4]

    def zmax(self):
        return self.description['Borders'][5]


####################


def try_cmd(args):
    "Run the given command + args and return True/False if it succeeds or not"
    import subprocess
    try:
        subprocess.check_output(args, stderr=subprocess.STDOUT)
        return True
    except Exception:
        return False

def have_cmd(cmd):
    return try_cmd(["which", cmd])


import shutil, subprocess

def process_datfile(datfile):
    global opts
    if not os.access(datfile, os.R_OK):
        raise Exception("Could not read data file '%s'" % datfile)
    datpath = os.path.abspath(datfile)
    datfile = os.path.basename(datpath)
    datdir = os.path.dirname(datpath)
    outdir = args.OUTPUT_DIR if args.OUTPUT_DIR else datdir
    filename = datfile.replace('.dat', '')

    ## Create a temporary directory
    # cwd = os.getcwd()
tempdir = tempfile.mkdtemp('.make-plots') tempdatpath = os.path.join(tempdir, datfile) shutil.copy(datpath, tempdir) if args.NO_CLEANUP: logging.info('Keeping temp-files in %s' % tempdir) ## Make TeX file inputdata = InputData(datpath) if inputdata.attr_bool('IgnorePlot', False): return texpath = os.path.join(tempdir, '%s.tex' % filename) texfile = open(texpath, 'w') p = Plot(inputdata) texfile.write(p.write_header(inputdata)) if inputdata.attr_bool("MainPlot", True): mp = MainPlot(inputdata) texfile.write(mp.draw(inputdata)) if not inputdata.attr_bool("is2dim", False): if inputdata.attr_bool("RatioPlot", True) and inputdata.attr("RatioPlotReference"): # is not None: rp = RatioPlot(inputdata,0) texfile.write(rp.draw(inputdata)) for i in range(1,9): if inputdata.attr_bool('RatioPlot'+str(i), False): # and inputdata.attr('RatioPlot'+str(i)+'Reference'): rp = RatioPlot(inputdata,i) texfile.write(rp.draw(inputdata)) texfile.write(p.write_footer()) texfile.close() if args.OUTPUT_FORMAT != ["TEX"]: ## Check for the required programs latexavailable = have_cmd("latex") dvipsavailable = have_cmd("dvips") convertavailable = have_cmd("convert") ps2pnmavailable = have_cmd("ps2pnm") pnm2pngavailable = have_cmd("pnm2png") # TODO: It'd be nice to be able to control the size of the PNG between thumb and full-size... 
# currently defaults (and is used below) to a size suitable for thumbnails def mkpngcmd(infile, outfile, outsize=450, density=300): if convertavailable: pngcmd = ["convert", "-flatten", "-density", str(density), infile, "-quality", "100", "-resize", "{size:d}x{size:d}>".format(size=outsize), #"-sharpen", "0x1.0", outfile] #logging.debug(" ".join(pngcmd)) #pngproc = subprocess.Popen(pngcmd, stdout=subprocess.PIPE, cwd=tempdir) #pngproc.wait() return pngcmd else: raise Exception("Required PNG maker program (convert) not found") # elif ps2pnmavailable and pnm2pngavailable: # pstopnm = "pstopnm -stdout -xsize=461 -ysize=422 -xborder=0.01 -yborder=0.01 -portrait " + infile # p1 = subprocess.Popen(pstopnm.split(), stdout=subprocess.PIPE, stderr=open("/dev/null", "w"), cwd=tempdir) # p2 = subprocess.Popen(["pnmtopng"], stdin=p1.stdout, stdout=open("%s/%s.png" % (tempdir, outfile), "w"), stderr=open("/dev/null", "w"), cwd=tempdir) # p2.wait() # else: # raise Exception("Required PNG maker programs (convert, or ps2pnm and pnm2png) not found") ## Run LaTeX (in no-stop mode) logging.debug(os.listdir(tempdir)) texcmd = ["latex", "\\scrollmode\\input", texpath] logging.debug("TeX command: " + " ".join(texcmd)) texproc = subprocess.Popen(texcmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, cwd=tempdir) logging.debug(texproc.communicate()[0]) logging.debug(os.listdir(tempdir)) ## Run dvips dvcmd = ["dvips", filename] if not logging.getLogger().isEnabledFor(logging.DEBUG): dvcmd.append("-q") ## Handle Minion Font if args.OUTPUT_FONT == "MINION": dvcmd.append('-Pminion') ## Choose format # TODO: Rationalise... this is a mess! Maybe we can use tex2pix?
if "PS" in args.OUTPUT_FORMAT: dvcmd += ["-o", "%s.ps" % filename] logging.debug(" ".join(dvcmd)) dvproc = subprocess.Popen(dvcmd, stdout=subprocess.PIPE, cwd=tempdir) dvproc.wait() if "PDF" in args.OUTPUT_FORMAT: dvcmd.append("-f") logging.debug(" ".join(dvcmd)) dvproc = subprocess.Popen(dvcmd, stdout=subprocess.PIPE, cwd=tempdir) - cnvproc = subprocess.Popen(["ps2pdf", "-"], stdin=dvproc.stdout, stdout=subprocess.PIPE, cwd=tempdir) + cnvproc = subprocess.Popen(["ps2pdf", "-dNOSAFER", "-"], stdin=dvproc.stdout, stdout=subprocess.PIPE, cwd=tempdir) f = open(os.path.join(tempdir, "%s.pdf" % filename), "wb") f.write(cnvproc.communicate()[0]) f.close() if "EPS" in args.OUTPUT_FORMAT: dvcmd.append("-f") logging.debug(" ".join(dvcmd)) dvproc = subprocess.Popen(dvcmd, stdout=subprocess.PIPE, cwd=tempdir) cnvproc = subprocess.Popen(["ps2eps"], stdin=dvproc.stdout, stderr=subprocess.PIPE, stdout=subprocess.PIPE, cwd=tempdir) f = open(os.path.join(tempdir, "%s.eps" % filename), "wb") f.write(cnvproc.communicate()[0]) f.close() if "PNG" in args.OUTPUT_FORMAT: dvcmd.append("-f") logging.debug(" ".join(dvcmd)) dvproc = subprocess.Popen(dvcmd, stdout=subprocess.PIPE, cwd=tempdir) #pngcmd = ["convert", "-flatten", "-density", "110", "-", "-quality", "100", "-sharpen", "0x1.0", "%s.png" % filename] pngcmd = mkpngcmd("-", "%s.png" % filename) logging.debug(" ".join(pngcmd)) pngproc = subprocess.Popen(pngcmd, stdin=dvproc.stdout, stdout=subprocess.PIPE, cwd=tempdir) pngproc.wait() logging.debug(os.listdir(tempdir)) ## Copy results back to main dir for fmt in args.OUTPUT_FORMAT: outname = "%s.%s" % (filename, fmt.lower()) outpath = os.path.join(tempdir, outname) if os.path.exists(outpath): shutil.copy(outpath, outdir) else: logging.error("No output file '%s' from processing %s" % (outname, datfile)) ## Clean up if not args.NO_CLEANUP: shutil.rmtree(tempdir, ignore_errors=True) #################### if __name__ == '__main__': ## Try to rename the process on Linux try: import ctypes 
libc = ctypes.cdll.LoadLibrary('libc.so.6') libc.prctl(15, 'make-plots', 0, 0, 0) except Exception: pass ## Try to use Psyco optimiser try: import psyco psyco.full() except ImportError: pass ## Find number of (virtual) processing units import multiprocessing try: numcores = multiprocessing.cpu_count() except: numcores = 1 ## Parse command line options import argparse parser = argparse.ArgumentParser(usage=__doc__) parser.add_argument("DATFILES", nargs="+", help=".dat files to plot") parser.add_argument("-j", "-n", "--num-threads", dest="NUM_THREADS", type=int, default=numcores, help="max number of threads to be used [%s]" % numcores) parser.add_argument("-o", "--outdir", dest="OUTPUT_DIR", default=None, help="choose the output directory (default = .dat dir)") parser.add_argument("--font", dest="OUTPUT_FONT", choices="palatino,cm,times,helvetica,minion".split(","), default="palatino", help="choose the font to be used in the plots") parser.add_argument("--palatino", dest="OUTPUT_FONT", action="store_const", const="palatino", default="palatino", help="use Palatino as font (default). DEPRECATED: Use --font") parser.add_argument("--cm", dest="OUTPUT_FONT", action="store_const", const="cm", default="palatino", help="use Computer Modern as font. DEPRECATED: Use --font") parser.add_argument("--times", dest="OUTPUT_FONT", action="store_const", const="times", default="palatino", help="use Times as font. DEPRECATED: Use --font") parser.add_argument("--minion", dest="OUTPUT_FONT", action="store_const", const="minion", default="palatino", help="use Adobe Minion Pro as font. Note: You need to set TEXMFHOME first. DEPRECATED: Use --font") parser.add_argument("--helvetica", dest="OUTPUT_FONT", action="store_const", const="helvetica", default="palatino", help="use Helvetica as font. DEPRECATED: Use --font") parser.add_argument("-f", "--format", dest="OUTPUT_FORMAT", default="PDF", help="choose plot format, perhaps multiple comma-separated formats e.g. 
'pdf' or 'tex,pdf,png' (default = PDF).") parser.add_argument("--ps", dest="OUTPUT_FORMAT", action="store_const", const="PS", default="PDF", help="create PostScript output (default). DEPRECATED") parser.add_argument("--pdf", dest="OUTPUT_FORMAT", action="store_const", const="PDF", default="PDF", help="create PDF output. DEPRECATED") parser.add_argument("--eps", dest="OUTPUT_FORMAT", action="store_const", const="EPS", default="PDF", help="create Encapsulated PostScript output. DEPRECATED") parser.add_argument("--png", dest="OUTPUT_FORMAT", action="store_const", const="PNG", default="PDF", help="create PNG output. DEPRECATED") parser.add_argument("--pspng", dest="OUTPUT_FORMAT", action="store_const", const="PS,PNG", default="PDF", help="create PS and PNG output. DEPRECATED") parser.add_argument("--pdfpng", dest="OUTPUT_FORMAT", action="store_const", const="PDF,PNG", default="PDF", help="create PDF and PNG output. DEPRECATED") parser.add_argument("--epspng", dest="OUTPUT_FORMAT", action="store_const", const="EPS,PNG", default="PDF", help="create EPS and PNG output. DEPRECATED") parser.add_argument("--tex", dest="OUTPUT_FORMAT", action="store_const", const="TEX", default="PDF", help="create TeX/LaTeX output.") parser.add_argument("--no-cleanup", dest="NO_CLEANUP", action="store_true", default=False, help="keep temporary directory and print its filename.") parser.add_argument("--no-subproc", dest="NO_SUBPROC", action="store_true", default=False, help="don't use subprocesses to render the plots in parallel -- useful for debugging.") parser.add_argument("--full-range", dest="FULL_RANGE", action="store_true", default=False, help="plot full y range in log-y plots.") parser.add_argument("-c", "--config", dest="CONFIGFILES", action="append", default=None, help="plot config file to be used. 
Overrides internal config blocks.") verbgroup = parser.add_argument_group("Verbosity control") verbgroup.add_argument("-v", "--verbose", action="store_const", const=logging.DEBUG, dest="LOGLEVEL", default=logging.INFO, help="print debug (very verbose) messages") verbgroup.add_argument("-q", "--quiet", action="store_const", const=logging.WARNING, dest="LOGLEVEL", default=logging.INFO, help="be very quiet") args = parser.parse_args() ## Tweak the parsed args logging.basicConfig(level=args.LOGLEVEL, format="%(message)s") args.OUTPUT_FONT = args.OUTPUT_FONT.upper() args.OUTPUT_FORMAT = args.OUTPUT_FORMAT.upper().split(",") if args.NUM_THREADS == 1: args.NO_SUBPROC = True ## Check for no args if len(args.DATFILES) == 0: logging.error(parser.format_usage()) sys.exit(2) ## Check that the files exist for f in args.DATFILES: if not os.access(f, os.R_OK): print("Error: cannot read from %s" % f) sys.exit(1) ## Test for external programs (kpsewhich, latex, dvips, ps2pdf/ps2eps, and convert) args.LATEXPKGS = [] if args.OUTPUT_FORMAT != ["TEX"]: try: ## latex if not have_cmd("latex"): logging.error("ERROR: required program 'latex' could not be found. Exiting...") sys.exit(1) ## dvips if not have_cmd("dvips"): logging.error("ERROR: required program 'dvips' could not be found. Exiting...") sys.exit(1) ## ps2pdf / ps2eps if "PDF" in args.OUTPUT_FORMAT: if not have_cmd("ps2pdf"): logging.error("ERROR: required program 'ps2pdf' (for PDF output) could not be found. Exiting...") sys.exit(1) elif "EPS" in args.OUTPUT_FORMAT: if not have_cmd("ps2eps"): logging.error("ERROR: required program 'ps2eps' (for EPS output) could not be found. Exiting...") sys.exit(1) ## PNG output converter if "PNG" in args.OUTPUT_FORMAT: if not have_cmd("convert"): logging.error("ERROR: required program 'convert' (for PNG output) could not be found.
Exiting...") sys.exit(1) ## kpsewhich: required for LaTeX package testing if not have_cmd("kpsewhich"): logging.warning("WARNING: required program 'kpsewhich' (for LaTeX package checks) could not be found") else: ## Check minion font if args.OUTPUT_FONT == "MINION": p = subprocess.Popen(["kpsewhich", "minion.sty"], stdout=subprocess.PIPE) p.wait() if p.returncode != 0: logging.warning('Warning: Using "--minion" requires minion.sty to be installed. Ignoring it.') args.OUTPUT_FONT = "PALATINO" ## Check for HEP LaTeX packages # TODO: remove HEP-specifics/non-standards? for pkg in ["hepnames", "hepunits", "underscore"]: p = subprocess.Popen(["kpsewhich", "%s.sty" % pkg], stdout=subprocess.PIPE) p.wait() if p.returncode == 0: args.LATEXPKGS.append(pkg) ## Check for Palatino old style figures and small caps if args.OUTPUT_FONT == "PALATINO": p = subprocess.Popen(["kpsewhich", "ot1pplx.fd"], stdout=subprocess.PIPE) p.wait() if p.returncode == 0: args.OUTPUT_FONT = "PALATINO_OSF" except Exception as e: logging.warning("Problem while testing for external packages. 
I'm going to try and continue without testing, but don't hold your breath...") def init_worker(): import signal signal.signal(signal.SIGINT, signal.SIG_IGN) ## Run rendering jobs datfiles = args.DATFILES plotword = "plots" if len(datfiles) > 1 else "plot" logging.info("Making %d %s" % (len(datfiles), plotword)) if args.NO_SUBPROC: init_worker() for i, df in enumerate(datfiles): logging.info("Plotting %s (%d/%d remaining)" % (df, len(datfiles)-i, len(datfiles))) process_datfile(df) else: pool = multiprocessing.Pool(args.NUM_THREADS, init_worker) try: for i, _ in enumerate(pool.imap(process_datfile, datfiles)): logging.info("Plotting %s (%d/%d remaining)" % (datfiles[i], len(datfiles)-i, len(datfiles))) pool.close() except KeyboardInterrupt: print("Caught KeyboardInterrupt, terminating workers") pool.terminate() except ValueError as e: print(e) print("Perhaps your .dat file is corrupt?") pool.terminate() pool.join() diff --git a/bin/rivet b/bin/rivet --- a/bin/rivet +++ b/bin/rivet @@ -1,687 +1,688 @@ #! /usr/bin/env python """\ Run Rivet analyses on HepMC events read from a file or Unix pipe Examples: %(prog)s [options] [ ...] or mkfifo fifo.hepmc my_generator -o fifo.hepmc & %(prog)s [options] fifo.hepmc ENVIRONMENT: * RIVET_ANALYSIS_PATH: list of paths to be searched for plugin analysis libraries at runtime * RIVET_DATA_PATH: list of paths to be searched for data files (defaults to use analysis path) * RIVET_WEIGHT_INDEX: the numerical weight-vector index to use in this run (default = 0; -1 = ignore weights) * See the documentation for more environment variables. """ from __future__ import print_function import os, sys ## Load the rivet module try: import rivet except: ## If rivet loading failed, try to bootstrap the Python path! try: # TODO: Is this a good idea? Maybe just notify the user that their PYTHONPATH is wrong? 
import subprocess modname = sys.modules[__name__].__file__ binpath = os.path.dirname(modname) rivetconfigpath = os.path.join(binpath, "rivet-config") rivetpypath = subprocess.check_output([rivetconfigpath, "--pythonpath"]).decode().strip() sys.path.append(rivetpypath) import rivet except: sys.stderr.write("The rivet Python module could not be loaded: is your PYTHONPATH set correctly?\n") sys.exit(5) rivet.util.check_python_version() rivet.util.set_process_name("rivet") import time, datetime, logging, signal ## Parse command line options import argparse parser = argparse.ArgumentParser(description=__doc__) parser.add_argument("ARGS", nargs="*") parser.add_argument("--version", dest="SHOW_VERSION", action="store_true", default=False, help="show Rivet version") anagroup = parser.add_argument_group("Analysis handling") anagroup.add_argument("-a", "--analysis", "--analyses", dest="ANALYSES", action="append", default=[], metavar="ANA", help="add an analysis (or comma-separated list of analyses) to the processing list.") anagroup.add_argument("--list-analyses", "--list", dest="LIST_ANALYSES", action="store_true", default=False, help="show the list of available analyses' names.
With -v, it shows the descriptions, too") anagroup.add_argument("--list-keywords", "--keywords", dest="LIST_KEYWORDS", action="store_true", default=False, help="show the list of available keywords.") anagroup.add_argument("--list-used-analyses", action="store_true", dest="LIST_USED_ANALYSES", default=False, help="list the analyses used by this command (after subtraction of inappropriate ones)") anagroup.add_argument("--show-analysis", "--show-analyses", "--show", dest="SHOW_ANALYSES", action="append", default=[], help="show the details of an analysis") anagroup.add_argument("--show-bibtex", dest="SHOW_BIBTEX", action="store_true", default=False, help="show BibTeX entries for all used analyses") anagroup.add_argument("--analysis-path", dest="ANALYSIS_PATH", metavar="PATH", default=None, help="specify the analysis search path (cf. $RIVET_ANALYSIS_PATH).") # TODO: remove/deprecate the append? anagroup.add_argument("--analysis-path-append", dest="ANALYSIS_PATH_APPEND", metavar="PATH", default=None, help="append to the analysis search path (cf. $RIVET_ANALYSIS_PATH).") anagroup.add_argument("--pwd", dest="ANALYSIS_PATH_PWD", action="store_true", default=False, help="append the current directory (pwd) to the analysis/data search paths (cf. 
$RIVET_ANALYSIS_PATH).") extragroup = parser.add_argument_group("Extra run settings") extragroup.add_argument("-o", "-H", "--histo-file", dest="HISTOFILE", default="Rivet.yoda", help="specify the output histo file path (default = %(default)s)") extragroup.add_argument("-p", "--preload-file", dest="PRELOADFILE", default=None, help="specify an old yoda file to initialize from (default = %(default)s)") extragroup.add_argument("--no-histo-file", dest="WRITE_DATA", action="store_false", default=True, help="don't write out any histogram file at the end of the run (default = write)") extragroup.add_argument("-x", "--cross-section", dest="CROSS_SECTION", default=None, metavar="XS", help="specify the signal process cross-section in pb") extragroup.add_argument("-n", "--nevts", dest="MAXEVTNUM", type=int, default=None, metavar="NUM", help="restrict the max number of events to process") extragroup.add_argument("--nskip", dest="EVTSKIPNUM", type=int, default=0, metavar="NUM", help="skip NUM events read from input before beginning processing") +extragroup.add_argument("--skip-weights", dest="SKIP_WEIGHTS", action="store_true", + default=False, help="only run on the nominal weight") extragroup.add_argument("--runname", dest="RUN_NAME", default=None, metavar="NAME", help="give an optional run name, to be prepended as a 'top level directory' in histo paths") extragroup.add_argument("--ignore-beams", dest="IGNORE_BEAMS", action="store_true", default=False, help="ignore input event beams when checking analysis compatibility. " "WARNING: analyses may not work correctly, or at all, with inappropriate beams") extragroup.add_argument("-d", "--dump", "--histo-interval", dest="DUMP_PERIOD", type=int, default=1000, metavar="NUM", help="specify the number of events between histogram file updates, " "default = %(default)s. Set to 0 to only write out at the end of the run. 
" "Note that intermediate histograms will be those from the analyze step " "only, except for analyses explicitly declared Reentrant for which the " "finalize function is executed first.") timinggroup = parser.add_argument_group("Timeouts and periodic operations") timinggroup.add_argument("--event-timeout", dest="EVENT_TIMEOUT", type=int, default=21600, metavar="NSECS", help="max time in whole seconds to wait for an event to be generated from the specified source (default = %(default)s)") timinggroup.add_argument("--run-timeout", dest="RUN_TIMEOUT", type=int, default=None, metavar="NSECS", help="max time in whole seconds to wait for the run to finish. This can be useful on batch systems such " "as the LCG Grid where tokens expire on a fixed wall-clock and can render long Rivet runs unable to write " "out the final histogram file (default = unlimited)") verbgroup = parser.add_argument_group("Verbosity control") parser.add_argument("-l", dest="NATIVE_LOG_STRS", action="append", default=[], help="set a log level in the Rivet library") verbgroup.add_argument("-v", "--verbose", action="store_const", const=logging.DEBUG, dest="LOGLEVEL", default=logging.INFO, help="print debug (very verbose) messages") verbgroup.add_argument("-q", "--quiet", action="store_const", const=logging.WARNING, dest="LOGLEVEL", default=logging.INFO, help="be very quiet") args = parser.parse_args() ## Print the version and exit if args.SHOW_VERSION: print("rivet v%s" % rivet.version()) sys.exit(0) ## Override/modify analysis search path if args.ANALYSIS_PATH: rivet.setAnalysisLibPaths(args.ANALYSIS_PATH.split(":")) rivet.setAnalysisDataPaths(args.ANALYSIS_PATH.split(":")) if args.ANALYSIS_PATH_APPEND: for ap in args.ANALYSIS_PATH_APPEND.split(":"): rivet.addAnalysisLibPath(ap) rivet.addAnalysisDataPath(ap) if args.ANALYSIS_PATH_PWD: rivet.addAnalysisLibPath(os.path.abspath(".")) rivet.addAnalysisDataPath(os.path.abspath(".")) ## Configure logging logging.basicConfig(level=args.LOGLEVEL, 
format="%(message)s") for l in args.NATIVE_LOG_STRS: name, level = None, None try: name, level = l.split("=") except: name = "Rivet" level = l ## Fix name if name != "Rivet" and not name.startswith("Rivet."): name = "Rivet." + name try: ## Get right error type level = rivet.LEVELS.get(level.upper(), None) logging.debug("Setting log level: %s %d" % (name, level)) rivet.setLogLevel(name, level) except: logging.warning("Couldn't process logging string '%s'" % l) ############################ ## Listing available analyses/keywords def getAnalysesByKeyword(alist, kstring): add, veto, ret = [], [], [] bits = [i for i in kstring.replace("^@", "@^").split("@") if len(i) > 0] for b in bits: if b.startswith("^"): veto.append(b.strip("^")) else: add.append(b) add = set(add) veto = set(veto) for a in alist: kwds = set([i.lower() for i in rivet.AnalysisLoader.getAnalysis(a).keywords()]) if not kwds.intersection(veto) and len(kwds.intersection(add)) == len(add): ret.append(a) return ret ## List of analyses all_analyses = rivet.AnalysisLoader.analysisNames() if args.LIST_ANALYSES: ## Treat args as case-insensitive regexes if present regexes = None if args.ARGS: import re regexes = [re.compile(arg, re.I) for arg in args.ARGS] # import tempfile, subprocess # tf, tfpath = tempfile.mkstemp(prefix="rivet-list.") names = [] msg = [] for aname in all_analyses: if not regexes: toshow = True else: toshow = False for regex in regexes: if regex.search(aname): toshow = True break if toshow: names.append(aname) msg.append('') if args.LOGLEVEL <= logging.INFO: a = rivet.AnalysisLoader.getAnalysis(aname) st = "" if a.status() == "VALIDATED" else ("[%s] " % a.status()) # detex will very likely introduce some non-ASCII chars from # greek names in analysis titles. 
# The u"" prefix and explicit print encoding are necessary for # py2 to handle this properly msg[-1] = u"%s%s" % (st, a.summary()) if args.LOGLEVEL < logging.INFO: if a.keywords(): msg[-1] += u" [%s]" % " ".join(a.keywords()) if a.luminosityfb(): msg[-1] += u" [ \int L = %s fb^{-1} ]" % a.luminosityfb() msg = rivet.util.detex(msg) retlist = '\n'.join([ u"%-25s %s" % (a,m) for a,m in zip(names,msg) ]) - if type(u'') is not str: - retlist = retlist.encode('utf-8') + retlist = retlist.encode('utf-8') print(retlist) sys.exit(0) def getKeywords(alist): all_keywords = [] for a in alist: all_keywords.extend(rivet.AnalysisLoader.getAnalysis(a).keywords()) all_keywords = [i.lower() for i in all_keywords] return sorted(list(set(all_keywords))) ## List keywords if args.LIST_KEYWORDS: # a = rivet.AnalysisLoader.getAnalysis(aname) for k in getKeywords(all_analyses): print(k) sys.exit(0) ## Show analyses' details if len(args.SHOW_ANALYSES) > 0: toshow = [] for i, a in enumerate(args.SHOW_ANALYSES): a_up = a.upper() if a_up in all_analyses and a_up not in toshow: toshow.append(a_up) else: ## Treat as a case-insensitive regex import re regex = re.compile(a, re.I) for ana in all_analyses: if regex.search(ana) and a_up not in toshow: toshow.append(ana) msgs = [] for i, name in enumerate(sorted(toshow)): import textwrap ana = rivet.AnalysisLoader.getAnalysis(name) msg = "" msg += name + "\n" msg += (len(name) * "=") + "\n\n" msg += rivet.util.detex(ana.summary()) + "\n\n" msg += "Status: " + ana.status() + "\n\n" # TODO: reduce to only show Inspire in v3 if ana.inspireId(): msg += "Inspire ID: " + ana.inspireId() + "\n" msg += "Inspire URL: http://inspire-hep.net/record/" + ana.inspireId() + "\n" msg += "HepData URL: http://hepdata.net/record/ins" + ana.inspireId() + "\n" elif ana.spiresId(): msg += "Spires ID: " + ana.spiresId() + "\n" msg += "Inspire URL: http://inspire-hep.net/search?p=find+key+" + ana.spiresId() + "\n" msg += "HepData URL: http://hepdata.cedar.ac.uk/view/irn" + 
ana.spiresId() + "\n" if ana.year(): msg += "Year of publication: " + ana.year() + "\n" if ana.bibKey(): msg += "BibTeX key: " + ana.bibKey() + "\n" msg += "Authors:\n" for a in ana.authors(): msg += " " + a + "\n" msg += "\n" msg += "Description:\n" twrap = textwrap.TextWrapper(width=75, initial_indent=2*" ", subsequent_indent=2*" ") msg += twrap.fill(rivet.util.detex(ana.description())) + "\n\n" if ana.experiment(): msg += "Experiment: " + ana.experiment() if ana.collider(): msg += "(%s)" % ana.collider() msg += "\n" # TODO: move this formatting into Analysis or a helper function? if ana.requiredBeams(): def pid_to_str(pid): if pid == 11: return "e-" elif pid == -11: return "e+" elif pid == 2212: return "p+" elif pid == -2212: return "p-" elif pid == 1000010020: return "d" elif pid == 1000130270: return "Al" elif pid == 1000290630: return "Cu" elif pid == 1000541290: return "Xe" elif pid == 1000791970: return "Au" elif pid == 1000822080: return "Pb" elif pid == 1000922380: return "U" elif pid == 10000: return "*" else: return str(pid) beamstrs = [] for bp in ana.requiredBeams(): beamstrs.append(pid_to_str(bp[0]) + " " + pid_to_str(bp[1])) msg += "Beams: " + ", ".join(beamstrs) + "\n" if ana.requiredEnergies(): msg += "Beam energies: " + "; ".join(["(%0.1f, %0.1f) GeV\n" % (epair[0], epair[1]) for epair in ana.requiredEnergies()]) else: msg += "Beam energies: ANY\n" if ana.runInfo(): msg += "Run details:\n" twrap = textwrap.TextWrapper(width=75, initial_indent=2*" ", subsequent_indent=4*" ") for l in ana.runInfo().split("\n"): msg += twrap.fill(l) + "\n" if ana.luminosityfb(): msg+= "\nIntegrated data luminosity = %s inverse fb.\n"%ana.luminosityfb() if ana.keywords(): msg += "\nAnalysis keywords:" for k in ana.keywords(): msg += " %s"%k msg+= "\n\n" if ana.references(): msg += "\n" + "References:\n" for r in ana.references(): url = None if r.startswith("arXiv:"): code = r.split()[0].replace("arXiv:", "") url = "http://arxiv.org/abs/" + code elif 
r.startswith("doi:"): code = r.replace("doi:", "") url = "http://dx.doi.org/" + code if url is not None: r += " - " + url msg += " " + r + "\n" ## Add to the output msgs.append(msg) ## Write the combined messages to a temporary file and page it if msgs: try: import tempfile, subprocess tffd, tfpath = tempfile.mkstemp(prefix="rivet-show.") msgsum = u"\n\n".join(msgs) - if type(u'') is not str: - msgsum = msgsum.encode('utf-8') + msgsum = msgsum.encode('utf-8') os.write(tffd, msgsum) if sys.stdout.isatty(): pager = subprocess.Popen(["less", "-FX", tfpath]) #, stdin=subprocess.PIPE) pager.communicate() else: f = open(tfpath) print(f.read()) f.close() finally: os.unlink(tfpath) #< always clean up sys.exit(0) ############################ ## Actual analysis runs ## We allow comma-separated lists of analysis names -- normalise the list here newanas = [] for a in args.ANALYSES: if "," in a: newanas += a.split(",") elif "@" in a: #< NB. this bans combination of ana lists and keywords in a single arg temp = getAnalysesByKeyword(all_analyses, a) for i in temp: newanas.append(i) else: newanas.append(a) args.ANALYSES = newanas ## Parse supplied cross-section if args.CROSS_SECTION is not None: xsstr = args.CROSS_SECTION try: args.CROSS_SECTION = float(xsstr) except: import re suffmatch = re.search(r"[^\d.]", xsstr) if not suffmatch: raise ValueError("Bad cross-section string: %s" % xsstr) factor = base = None suffstart = suffmatch.start() if suffstart != -1: base = xsstr[:suffstart] suffix = xsstr[suffstart:].lower() if suffix == "mb": factor = 1e+9 elif suffix == "mub": factor = 1e+6 elif suffix == "nb": factor = 1e+3 elif suffix == "pb": factor = 1 elif suffix == "fb": factor = 1e-3 elif suffix == "ab": factor = 1e-6 if factor is None or base is None: raise ValueError("Bad cross-section string: %s" % xsstr) xs = float(base) * factor args.CROSS_SECTION = xs ## Print the available CLI options! 
#if args.LIST_OPTIONS: # for o in parser.option_list: # print(o.get_opt_string()) # sys.exit(0) ## Set up signal handling RECVD_KILL_SIGNAL = None def handleKillSignal(signum, frame): "Declare us as having been signalled, and return to default handling behaviour" global RECVD_KILL_SIGNAL logging.critical("Signal handler called with signal " + str(signum)) RECVD_KILL_SIGNAL = signum signal.signal(signum, signal.SIG_DFL) ## Signals to handle signal.signal(signal.SIGTERM, handleKillSignal); signal.signal(signal.SIGHUP, handleKillSignal); signal.signal(signal.SIGINT, handleKillSignal); signal.signal(signal.SIGUSR1, handleKillSignal); signal.signal(signal.SIGUSR2, handleKillSignal); try: signal.signal(signal.SIGXCPU, handleKillSignal); except: pass ## Identify HepMC files/streams ## TODO: check readability, deal with stdin if len(args.ARGS) > 0: HEPMCFILES = args.ARGS else: HEPMCFILES = ["-"] ## Event number logging def logNEvt(n, starttime, maxevtnum): if n % 10000 == 0: nevtloglevel = logging.CRITICAL elif n % 1000 == 0: nevtloglevel = logging.WARNING elif n % 100 == 0: nevtloglevel = logging.INFO else: nevtloglevel = logging.DEBUG currenttime = datetime.datetime.now().replace(microsecond=0) elapsedtime = currenttime - starttime logging.log(nevtloglevel, "Event %d (%s elapsed)" % (n, str(elapsedtime))) # if maxevtnum is None: # logging.log(nevtloglevel, "Event %d (%s elapsed)" % (n, str(elapsedtime))) # else: # remainingtime = (maxevtnum-n) * elapsedtime.total_seconds() / float(n) # eta = time.strftime("%a %b %d %H:%M", datetime.localtime(currenttime + remainingtime)) # logging.log(nevtloglevel, "Event %d (%d s elapsed / %d s left) -> ETA: %s" % # (n, elapsedtime, remainingtime, eta)) ## Do some checks on output histo file, before we start the event loop histo_parentdir = os.path.dirname(os.path.abspath(args.HISTOFILE)) if not os.path.exists(histo_parentdir): logging.error('Parent path of output histogram file does not exist: %s\nExiting.'
% histo_parentdir) sys.exit(4) if not os.access(histo_parentdir,os.W_OK): logging.error('Insufficient permissions to write output histogram file to directory %s\nExiting.' % histo_parentdir) sys.exit(4) ## Set up analysis handler RUNNAME = args.RUN_NAME or "" ah = rivet.AnalysisHandler(RUNNAME) ah.setIgnoreBeams(args.IGNORE_BEAMS) +ah.skipMultiWeights(args.SKIP_WEIGHTS) for a in args.ANALYSES: ## Print warning message and exit if not a valid analysis name if rivet.stripOptions(a) not in all_analyses: logging.warning("'%s' is not a known Rivet analysis! Do you need to set RIVET_ANALYSIS_PATH or use the --pwd switch?\n" % a) # TODO: lay out more neatly, or even try for a "did you mean XXXX?" heuristic? logging.warning("There are %d currently available analyses:\n" % len(all_analyses) + ", ".join(all_analyses)) sys.exit(1) logging.debug("Adding analysis '%s'" % a) ah.addAnalysis(a) if args.PRELOADFILE is not None: ah.readData(args.PRELOADFILE) if args.DUMP_PERIOD: ah.dump(args.HISTOFILE, args.DUMP_PERIOD) if args.SHOW_BIBTEX: bibs = [] for aname in sorted(ah.analysisNames()): ana = rivet.AnalysisLoader.getAnalysis(aname) bibs.append("% " + aname + "\n" + ana.bibTeX()) if bibs: print("\nBibTeX for used Rivet analyses:\n") print("% --------------------------\n") print("\n\n".join(bibs) + "\n") print("% --------------------------\n") ## Read and process events run = rivet.Run(ah) if args.CROSS_SECTION is not None: logging.info("User-supplied cross-section = %e pb" % args.CROSS_SECTION) run.setCrossSection(args.CROSS_SECTION) if args.LIST_USED_ANALYSES: run.setListAnalyses(args.LIST_USED_ANALYSES) ## Print platform type import platform starttime = datetime.datetime.now().replace(microsecond=0) logging.info("Rivet %s running on machine %s (%s) at %s" % \ (rivet.version(), platform.node(), platform.machine(), str(starttime))) def min_nonnull(a, b): "A version of min which considers None to always be greater than a real number" if a is None: return b if b is
None: return a return min(a, b) ## Set up an event timeout handler class TimeoutException(Exception): pass if args.EVENT_TIMEOUT or args.RUN_TIMEOUT: def evttimeouthandler(signum, frame): logging.warn("It has taken more than %d secs to get an event! Is the input event stream working?" % min_nonnull(args.EVENT_TIMEOUT, args.RUN_TIMEOUT)) raise TimeoutException("Event timeout") signal.signal(signal.SIGALRM, evttimeouthandler) ## Init run based on one event hepmcfile = HEPMCFILES[0] ## Apply a file-level weight derived from the filename hepmcfileweight = 1.0 if ":" in hepmcfile: hepmcfile, hepmcfileweight = hepmcfile.rsplit(":", 1) hepmcfileweight = float(hepmcfileweight) try: if args.EVENT_TIMEOUT or args.RUN_TIMEOUT: signal.alarm(min_nonnull(args.EVENT_TIMEOUT, args.RUN_TIMEOUT)) init_ok = run.init(hepmcfile, hepmcfileweight) signal.alarm(0) if not init_ok: logging.error("Failed to initialise using event file '%s'... exiting" % hepmcfile) sys.exit(2) except TimeoutException as te: logging.error("Timeout in initialisation from event file '%s'... exiting" % hepmcfile) sys.exit(3) except Exception as ex: logging.warning("Could not read from '%s' (error=%s)" % (hepmcfile, str(ex))) sys.exit(3) ## Event loop evtnum = 0 for fileidx, hepmcfile in enumerate(HEPMCFILES): ## Apply a file-level weight derived from the filename hepmcfileweight = 1.0 if ":" in hepmcfile: hepmcfile, hepmcfileweight = hepmcfile.rsplit(":", 1) hepmcfileweight = float(hepmcfileweight) ## Open next HepMC file (NB. 
this doesn't apply to the first file: it was already used for the run init) if fileidx > 0: try: run.openFile(hepmcfile, hepmcfileweight) except Exception as ex: logging.warning("Could not read from '%s' (error=%s)" % (hepmcfile, ex)) continue if not run.readEvent(): logging.warning("Could not read events from '%s'" % hepmcfile) continue ## Announce new file msg = "Reading events from '%s'" % hepmcfile if hepmcfileweight != 1.0: msg += " (file weight = %e)" % hepmcfileweight logging.info(msg) ## The event loop while args.MAXEVTNUM is None or evtnum-args.EVTSKIPNUM < args.MAXEVTNUM: evtnum += 1 ## Optional event skipping if evtnum <= args.EVTSKIPNUM: logging.debug("Skipping event #%i" % evtnum) run.skipEvent(); continue ## Only log the event number once we're actually processing logNEvt(evtnum, starttime, args.MAXEVTNUM) ## Process this event processed_ok = run.processEvent() if not processed_ok: logging.warning("Event processing failed for evt #%i!" % evtnum) break ## Set flag to exit event loop if run timeout exceeded if args.RUN_TIMEOUT and (datetime.datetime.now() - starttime).total_seconds() > args.RUN_TIMEOUT: logging.warning("Run timeout of %d secs exceeded... exiting gracefully" % args.RUN_TIMEOUT) RECVD_KILL_SIGNAL = True ## Exit the loop if signalled if RECVD_KILL_SIGNAL is not None: break ## Read next event (with timeout handling if requested) try: if args.EVENT_TIMEOUT: signal.alarm(args.EVENT_TIMEOUT) read_ok = run.readEvent() signal.alarm(0) if not read_ok: break except TimeoutException as te: logging.error("Timeout in reading event from '%s'...
exiting" % hepmcfile) sys.exit(3) ## Print end-of-loop messages loopendtime = datetime.datetime.now().replace(microsecond=0) logging.info("Finished event loop at %s" % str(loopendtime)) # logging.info("Cross-section = %e pb" % ah.crossSection()) print() ## Finalize and write out data file run.finalize() if args.WRITE_DATA: ah.writeData(args.HISTOFILE) print() endtime = datetime.datetime.now().replace(microsecond=0) logging.info("Rivet run completed at %s, time elapsed = %s" % (str(endtime), str(endtime-starttime))) print() logging.info("Histograms written to %s" % os.path.abspath(args.HISTOFILE)) diff --git a/bin/rivet-mkanalysis b/bin/rivet-mkanalysis --- a/bin/rivet-mkanalysis +++ b/bin/rivet-mkanalysis @@ -1,322 +1,375 @@ #! /usr/bin/env python """\ %(prog)s: make templates of analysis source files for Rivet Usage: %(prog)s [--help|-h] [--srcroot=] Without the --srcroot flag, the analysis files will be created in the current directory. """ import rivet, sys, os rivet.util.check_python_version() rivet.util.set_process_name(os.path.basename(__file__)) import logging ## Handle command line import argparse parser = argparse.ArgumentParser(usage=__doc__) parser.add_argument("ANANAMES", nargs="+", help="names of analyses to make") parser.add_argument("--srcroot", metavar="DIR", dest="SRCROOT", default=None, help="install the templates into the Rivet source tree (rooted " + "at directory DIR) rather than just creating all in the current dir") parser.add_argument("-q", "--quiet", dest="LOGLEVEL", default=logging.INFO, action="store_const", const=logging.WARNING, help="only write out warning and error messages") parser.add_argument("-v", "--verbose", dest="LOGLEVEL", default=logging.INFO, action="store_const", const=logging.DEBUG, help="provide extra debugging messages") parser.add_argument("-i", "--inline-info", dest="INLINE", action="store_true", default=False, help="Put analysis info into source file instead of separate data file.") args = parser.parse_args() 
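The SIGALRM watchdog used by the bin/rivet event loop above can be sketched in isolation. This is a minimal sketch: `min_nonnull`, `TimeoutException` and the handler mirror the script, while `read_with_timeout` is a hypothetical helper standing in for the inline `signal.alarm()` / `run.readEvent()` sequence (POSIX-only, since `SIGALRM` does not exist on Windows).

```python
import signal

def min_nonnull(a, b):
    """A min() variant that treats None as larger than any real number."""
    if a is None: return b
    if b is None: return a
    return min(a, b)

class TimeoutException(Exception):
    pass

def timeouthandler(signum, frame):
    # raised out of the blocking read when the alarm fires
    raise TimeoutException("Event timeout")

def read_with_timeout(read_fn, timeout_secs):
    """Run read_fn(), aborting with TimeoutException after timeout_secs."""
    signal.signal(signal.SIGALRM, timeouthandler)
    signal.alarm(timeout_secs)   # arm the watchdog
    try:
        result = read_fn()
    finally:
        signal.alarm(0)          # always disarm, even on exceptions
    return result
```

Disarming in a `finally` clause mirrors the script's `signal.alarm(0)` after each successful read, so a slow-but-eventually-successful read never leaves a stale alarm pending.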
logging.basicConfig(format="%(msg)s", level=args.LOGLEVEL) ANANAMES = args.ANANAMES ## Work out installation paths ANAROOT = os.path.abspath(args.SRCROOT or os.getcwd()) if not os.access(ANAROOT, os.W_OK): logging.error("Can't write to source root directory %s" % ANAROOT) sys.exit(1) ANASRCDIR = os.getcwd() ANAINFODIR = os.getcwd() ANAPLOTDIR = os.getcwd() if args.SRCROOT: ANASRCDIR = os.path.join(ANAROOT, "src/Analyses") ANAINFODIR = os.path.join(ANAROOT, "data/anainfo") ANAPLOTDIR = os.path.join(ANAROOT, "data/plotinfo") if not (os.path.exists(ANASRCDIR) and os.path.exists(ANAINFODIR) and os.path.exists(ANAPLOTDIR)): logging.error("Rivet analysis dirs do not exist under %s" % ANAROOT) sys.exit(1) if not (os.access(ANASRCDIR, os.W_OK) and os.access(ANAINFODIR, os.W_OK) and os.access(ANAPLOTDIR, os.W_OK)): logging.error("Can't write to Rivet analysis dirs under %s" % ANAROOT) sys.exit(1) ## Check for disallowed characters in analysis names import string allowedchars = string.ascii_letters + string.digits + "_" all_ok = True for ananame in ANANAMES: for c in ananame: if c not in allowedchars: logging.error("Analysis name '%s' contains disallowed character '%s'!" % (ananame, c)) all_ok = False break if not all_ok: logging.error("Exiting... 
please ensure that all analysis names are valid") sys.exit(1) ## Now make each analysis for ANANAME in ANANAMES: logging.info("Writing templates for %s to %s" % (ANANAME, ANAROOT)) ## Extract some metadata from the name if it matches the standard pattern import re re_stdana = re.compile(r"^(\w+)_(\d{4})_(I|S)(\d+)$") match = re_stdana.match(ANANAME) STDANA = False ANAEXPT = "" ANACOLLIDER = "" ANAYEAR = "" INSPIRE_SPIRES = 'I' ANAINSPIREID = "" if match: STDANA = True ANAEXPT = match.group(1) if ANAEXPT.upper() in ("ALICE", "ATLAS", "CMS", "LHCB"): ANACOLLIDER = "LHC" elif ANAEXPT.upper() in ("CDF", "D0"): ANACOLLIDER = "Tevatron" elif ANAEXPT.upper() == "BABAR": ANACOLLIDER = "PEP-II" elif ANAEXPT.upper() == "BELLE": ANACOLLIDER = "KEKB" ANAYEAR = match.group(2) INSPIRE_SPIRES = match.group(3) ANAINSPIREID = match.group(4) if INSPIRE_SPIRES == "I": ANAREFREPO = "Inspire" else: ANAREFREPO = "Spires" KEYWORDS = { "ANANAME" : ANANAME, "ANAEXPT" : ANAEXPT, "ANACOLLIDER" : ANACOLLIDER, "ANAYEAR" : ANAYEAR, "ANAREFREPO" : ANAREFREPO, "ANAINSPIREID" : ANAINSPIREID } ## Try to get bib info from SPIRES ANABIBKEY = "" ANABIBTEX = "" bibkey, bibtex = None, None if STDANA: try: logging.info("Getting Inspire/SPIRES biblio data for '%s'" % ANANAME) bibkey, bibtex = rivet.spiresbib.get_bibtex_from_repo(INSPIRE_SPIRES, ANAINSPIREID) except Exception as e: logging.error("Inspire/SPIRES oops: %s" % e) if bibkey and bibtex: ANABIBKEY = bibkey ANABIBTEX = bibtex KEYWORDS["ANABIBKEY"] = ANABIBKEY KEYWORDS["ANABIBTEX"] = ANABIBTEX ## Try to download YODA data file from HepData if STDANA: try: try: from urllib.request import urlretrieve except: from urllib import urlretrieve # hdurl = None if INSPIRE_SPIRES == "I": hdurl = "http://www.hepdata.net/record/ins%s?format=yoda&rivet=%s" % (ANAINSPIREID, ANANAME) if not hdurl: raise Exception("Couldn't identify a URL for getting reference data from HepData") logging.info("Getting data file from HepData at %s" % hdurl) tmpfile = 
urlretrieve(hdurl)[0] # import tarfile, shutil tar = tarfile.open(tmpfile, mode="r") fnames = tar.getnames() if len(fnames) > 1: logging.warning("Found more than one file in downloaded archive. Treating first as canonical") tar.extractall() shutil.move(fnames[0], "%s.yoda" % ANANAME) except Exception as e: logging.warning("Problem encountered retrieving from HepData: %s" % hdurl) logging.warning("No reference data file written") logging.debug("HepData oops: %s: %s" % (str(type(e)), e)) INLINEMETHODS = "" if args.INLINE: KEYWORDS["ANAREFREPO_LOWER"] = KEYWORDS["ANAREFREPO"].lower() INLINEMETHODS = """ public: string experiment() const { return "%(ANAEXPT)s"; } string year() const { return "%(ANAYEAR)s"; } string %(ANAREFREPO_LOWER)sId() const { return "%(ANAINSPIREID)s"; } string collider() const { return ""; } string summary() const { return ""; } string description() const { return ""; } string runInfo() const { return ""; } string bibKey() const { return "%(ANABIBKEY)s"; } string bibTeX() const { return "%(ANABIBTEX)s"; } string status() const { return "UNVALIDATED"; } vector<string> authors() const { return vector<string>(); } vector<string> references() const { return vector<string>(); } vector<string> todos() const { return vector<string>(); } """ % KEYWORDS del KEYWORDS["ANAREFREPO_LOWER"] KEYWORDS["INLINEMETHODS"] = INLINEMETHODS if ANANAME.startswith("MC_"): HISTOBOOKING = """\ - _h_XXXX = bookHisto1D("myh1", 20, 0.0, 100.0); - _h_YYYY = bookHisto1D("myh2", logspace(20, 1e-2, 1e3)); - _h_ZZZZ = bookHisto1D("myh3", {0.0, 1.0, 2.0, 4.0, 8.0, 16.0}); - _p_AAAA = bookProfile1D("myp", 20, 0.0, 100.0); - _c_BBBB = bookCounter("myc");""" % KEYWORDS + book(_h["XXXX"], "myh1", 20, 0.0, 100.0); + book(_h["YYYY"], "myh2", logspace(20, 1e-2, 1e3)); + book(_h["ZZZZ"], "myh3", {0.0, 1.0, 2.0, 4.0, 8.0, 16.0}); + book(_p["AAAA"], "myp", 20, 0.0, 100.0); + book(_c["BBBB"], "myc");""" % KEYWORDS else: HISTOBOOKING = """\ - _h_XXXX = bookHisto1D(1, 1, 1); - _p_AAAA = bookProfile1D(2, 1, 1); - _c_BBBB = bookCounter(3, 1,
1);""" % KEYWORDS + // specify custom binning + book(_h["XXXX"], "myh1", 20, 0.0, 100.0); + book(_h["YYYY"], "myh2", logspace(20, 1e-2, 1e3)); + book(_h["ZZZZ"], "myh3", {0.0, 1.0, 2.0, 4.0, 8.0, 16.0}); + // take binning from reference data using HEPData ID (digits in "d01-x01-y01" etc.) + book(_h["AAAA"], 1, 1, 1); + book(_p["BBBB"], 2, 1, 1); + book(_c["CCCC"], 3, 1, 1);""" % KEYWORDS KEYWORDS["HISTOBOOKING"] = HISTOBOOKING ANASRCFILE = os.path.join(ANASRCDIR, ANANAME+".cc") logging.debug("Writing implementation template to %s" % ANASRCFILE) f = open(ANASRCFILE, "w") src = '''\ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/FastJets.hh" +#include "Rivet/Projections/DressedLeptons.hh" +#include "Rivet/Projections/MissingMomentum.hh" +#include "Rivet/Projections/PromptFinalState.hh" namespace Rivet { /// @brief Add a short analysis description here class %(ANANAME)s : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(%(ANANAME)s); /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { // Initialise and register projections - declare(FinalState(Cuts::abseta < 5 && Cuts::pT > 100*MeV), "FS"); + + // the basic final-state projection: + // all final-state particles within + // the given eta acceptance + const FinalState fs(Cuts::abseta < 4.9); + + // the final-state particles declared above are clustered using FastJet with + // the anti-kT algorithm and a jet-radius parameter 0.4 + // muons and neutrinos are excluded from the clustering + FastJets jetfs(fs, FastJets::ANTIKT, 0.4, JetAlg::Muons::NONE, JetAlg::Invisibles::NONE); + declare(jetfs, "jets"); + + // FinalState of prompt photons and bare muons and electrons in the event + PromptFinalState photons(Cuts::abspid == PID::PHOTON); + PromptFinalState bare_leps(Cuts::abspid == PID::MUON || Cuts::abspid == PID::ELECTRON); + + // dress the prompt bare leptons with prompt photons 
within dR < 0.1 + // apply some fiducial cuts on the dressed leptons + Cut lepton_cuts = Cuts::abseta < 2.5 && Cuts::pT > 20*GeV; + DressedLeptons dressed_leps(photons, bare_leps, 0.1, lepton_cuts); + declare(dressed_leps, "leptons"); + + // missing momentum + declare(MissingMomentum(fs), "MET"); // Book histograms %(HISTOBOOKING)s } /// Perform the per-event analysis void analyze(const Event& event) { /// @todo Do the event by event analysis here + // retrieve dressed leptons, sorted by pT + vector<DressedLepton> leptons = apply<DressedLeptons>(event, "leptons").dressedLeptons(); + + // retrieve clustered jets, sorted by pT, with a minimum pT cut + Jets jets = apply<FastJets>(event, "jets").jetsByPt(Cuts::pT > 30*GeV); + + // remove all jets within dR < 0.2 of a dressed lepton + idiscardIfAnyDeltaRLess(jets, leptons, 0.2); + + // select jets ghost-associated to B-hadrons with a certain fiducial selection + Jets bjets = filter_select(jets, [](const Jet& jet) { + return jet.bTagged(Cuts::pT > 5*GeV && Cuts::abseta < 2.5); + }); + + // veto event if there are no b-jets + if (bjets.empty()) vetoEvent; + + // apply a missing-momentum cut + if (apply<MissingMomentum>(event, "MET").missingPt() < 30*GeV) vetoEvent; + + // fill histogram with leading b-jet pT + _h["XXXX"]->fill(bjets[0].pT()/GeV); + } /// Normalise histograms etc., after the run void finalize() { - normalize(_h_YYYY); // normalize to unity - scale(_h_ZZZZ, crossSection()/picobarn/sumOfWeights()); // norm to cross section + normalize(_h["YYYY"]); // normalize to unity + scale(_h["ZZZZ"], crossSection()/picobarn/sumOfWeights()); // norm to cross section } //@} /// @name Histograms //@{ - Histo1DPtr _h_XXXX, _h_YYYY, _h_ZZZZ; - Profile1DPtr _p_AAAA; - CounterPtr _c_BBBB; + map<string, Histo1DPtr> _h; + map<string, Profile1DPtr> _p; + map<string, CounterPtr> _c; //@} %(INLINEMETHODS)s }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(%(ANANAME)s); } ''' % KEYWORDS f.write(src) f.close() ANAPLOTFILE = os.path.join(ANAPLOTDIR, ANANAME+".plot") logging.debug("Writing plot template to %s" % ANAPLOTFILE) f =
open(ANAPLOTFILE, "w") src = '''\ BEGIN PLOT /%(ANANAME)s/d01-x01-y01 Title=[Insert title for histogram d01-x01-y01 here] XLabel=[Insert $x$-axis label for histogram d01-x01-y01 here] YLabel=[Insert $y$-axis label for histogram d01-x01-y01 here] # + any additional plot settings you might like, see make-plots documentation END PLOT # ... add more histograms as you need them ... ''' % KEYWORDS f.write(src) f.close() if args.INLINE: sys.exit(0) ANAINFOFILE = os.path.join(ANAINFODIR, ANANAME+".info") logging.debug("Writing info template to %s" % ANAINFOFILE) f = open(ANAINFOFILE, "w") src = """\ Name: %(ANANAME)s Year: %(ANAYEAR)s Summary: Experiment: %(ANAEXPT)s Collider: %(ANACOLLIDER)s %(ANAREFREPO)sID: %(ANAINSPIREID)s -Status: UNVALIDATED NOTREENTRY NOHEPDATA SINGLEWEIGHT UNPHYSICAL +Status: UNVALIDATED Authors: - Your Name #References: #- '' #- '' #- '' RunInfo: -NeedCrossSection: no #Beams: #Energies: #Luminosity_fb: Description: '<Insert a description of the analysis here, using LaTeX for maths such as $\pT > 50\;\GeV$.>' Keywords: [] BibKey: %(ANABIBKEY)s BibTeX: '%(ANABIBTEX)s' ToDo: - Implement the analysis, test it, remove this ToDo, and mark as VALIDATED :-) """ % KEYWORDS f.write(src) f.close() logging.info("Use e.g. 'rivet-buildplugin Rivet%s.so %s.cc' to compile the plugin" % (ANANAME, ANANAME)) diff --git a/configure.ac b/configure.ac --- a/configure.ac +++ b/configure.ac @@ -1,401 +1,398 @@ ## Process this file with autoconf to produce a configure script.
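The EXPT_YEAR_IdNNN analysis-name convention parsed by rivet-mkanalysis above can be exercised standalone. The regex and the Inspire/SPIRES distinction are copied from the script; `parse_ananame` and the example names are illustrative only.

```python
import re

# EXPT_YEAR_<I|S><record-number>, as matched by rivet-mkanalysis
re_stdana = re.compile(r"^(\w+)_(\d{4})_(I|S)(\d+)$")

def parse_ananame(ananame):
    """Split a standard Rivet analysis name into its metadata fields,
    or return None if the name doesn't follow the convention."""
    m = re_stdana.match(ananame)
    if not m:
        return None
    expt, year, repo, recid = m.groups()
    return {
        "expt": expt,
        "year": year,
        # 'I' -> Inspire ID, 'S' -> the older SPIRES ID
        "refrepo": "Inspire" if repo == "I" else "Spires",
        "id": recid,
    }
```

Note that `\w+` is greedy but backtracks, so multi-part experiment names like `ATLAS` still split cleanly from the four-digit year.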
AC_PREREQ(2.59) AC_INIT([Rivet],[3.1.0-pre],[rivet@projects.hepforge.org],[Rivet]) ## Check and block installation into the src/build dir if test "$prefix" = "$PWD"; then AC_MSG_ERROR([Installation into the build directory is not supported: use a different --prefix argument]) fi ## Force default prefix to have a path value rather than NONE if test "$prefix" = "NONE"; then prefix=/usr/local fi AC_CONFIG_SRCDIR([src/Core/Analysis.cc]) AC_CONFIG_HEADERS([include/Rivet/Config/DummyConfig.hh include/Rivet/Config/RivetConfig.hh]) AM_INIT_AUTOMAKE([dist-bzip2 -Wall 1.10]) m4_ifdef([AM_SILENT_RULES], [AM_SILENT_RULES([yes])]) m4_ifdef([AM_PROG_AR], [AM_PROG_AR]) AC_CONFIG_MACRO_DIR([m4]) AC_SUBST(LT_OBJDIR) ## Package-specific #defines AC_DEFINE_UNQUOTED(RIVET_VERSION, "$PACKAGE_VERSION", "Rivet version string") AC_DEFINE_UNQUOTED(RIVET_NAME, "$PACKAGE_NAME", "Rivet name string") AC_DEFINE_UNQUOTED(RIVET_STRING, "$PACKAGE_STRING", "Rivet name and version string") AC_DEFINE_UNQUOTED(RIVET_TARNAME, "$PACKAGE_TARNAME", "Rivet short name string") AC_DEFINE_UNQUOTED(RIVET_BUGREPORT, "$PACKAGE_BUGREPORT", "Rivet contact email address") ## OS X AC_CEDAR_OSX ## Work out the LCG platform tag AC_LCG_TAG ## Set default compiler flags if test "x$CXXFLAGS" == "x"; then CXXFLAGS="-O2"; fi ## Compiler setup AC_LANG(C++) AC_PROG_CXX AX_CXX_COMPILE_STDCXX([11], [noext], [mandatory]) ## Store and propagate the compiler identity and flags RIVETCXX="$CXX" AC_SUBST(RIVETCXX) RIVETCXXFLAGS="$CXXFLAGS" AC_SUBST(RIVETCXXFLAGS) ## Checks for programs. 
AC_PROG_INSTALL AC_PROG_LN_S AC_DISABLE_STATIC AC_LIBTOOL_DLOPEN AC_PROG_LIBTOOL AX_EXECINFO AC_FUNC_STRERROR_R ## YODA histogramming library AC_CEDAR_LIBRARYANDHEADERS([YODA], , , [AC_MSG_ERROR([YODA is required])]) YODABINPATH=$YODALIBPATH/../bin AC_SUBST(YODABINPATH) AC_PATH_PROG(YODACONFIG, yoda-config, [], [$YODALIBPATH/../bin:$PATH]) YODA_PYTHONPATH="" if test -f "$YODACONFIG"; then AC_MSG_CHECKING([YODA version using yoda-config]) YODA_VERSION=`$YODACONFIG --version` AC_MSG_RESULT([$YODA_VERSION]) YODA_VERSION1=[`echo $YODA_VERSION | cut -d. -f1 | sed -e 's/\([0-9]*\).*/\1/g'`] YODA_VERSION2=[`echo $YODA_VERSION | cut -d. -f2 | sed -e 's/\([0-9]*\).*/\1/g'`] YODA_VERSION3=[`echo $YODA_VERSION | cut -d. -f3 | sed -e 's/\([0-9]*\).*/\1/g'`] let YODA_VERSION_INT=YODA_VERSION1*10000+YODA_VERSION2*100+YODA_VERSION3 if test $YODA_VERSION_INT -lt 10704; then AC_MSG_ERROR([YODA version isn't sufficient: at least version 1.7.4 required]) fi AC_MSG_CHECKING([YODA Python path using yoda-config]) YODA_PYTHONPATH=`$YODACONFIG --pythonpath` AC_MSG_RESULT([$YODA_PYTHONPATH]) fi AC_SUBST(YODA_PYTHONPATH) ## HepMC event record library if test "${with_hepmc+set}" = set; then if test "${with_hepmc3+set}" = set; then AC_MSG_ERROR([HepMC2 *OR* HepMC3 is required. 
Both were specified!]) fi fi if test "${with_hepmc3+set}" = set; then AC_CEDAR_LIBRARYANDHEADERS([HepMC3], , [use_hepmc3=yes], [use_hepmc3=no]) AM_CONDITIONAL(WITH_HEPMC,false) AM_CONDITIONAL(WITH_HEPMCINC,false) AM_CONDITIONAL(WITH_HEPMCLIB,false) AM_CONDITIONAL(WITHOUT_HEPMC,true) AM_CONDITIONAL(WITHOUT_HEPMCINC,true) AM_CONDITIONAL(WITHOUT_HEPMCLIB,true) else AC_CEDAR_LIBRARYANDHEADERS([HepMC], , [use_hepmc2=yes], [use_hepmc2=no]) AM_CONDITIONAL(WITH_HEPMC3,false) AM_CONDITIONAL(WITH_HEPMC3INC,false) AM_CONDITIONAL(WITH_HEPMC3LIB,false) AM_CONDITIONAL(WITHOUT_HEPMC3,true) AM_CONDITIONAL(WITHOUT_HEPMC3INC,true) AM_CONDITIONAL(WITHOUT_HEPMC3LIB,true) fi if test x$use_hepmc2 = xno && test x$use_hepmc3 = xno ; then AC_MSG_ERROR([HepMC2 or HepMC3 is required]) fi if test x$use_hepmc2 = xyes; then oldCPPFLAGS=$CPPFLAGS CPPFLAGS="$CPPFLAGS -I$HEPMCINCPATH" if test -e "$HEPMCINCPATH/HepMC/HepMCDefs.h"; then AC_LANG_CONFTEST([AC_LANG_SOURCE([#include <iostream> #include "HepMC/HepMCDefs.h" int main() { std::cout << HEPMC_VERSION << std::endl; return 0; }])]) else AC_LANG_CONFTEST([AC_LANG_SOURCE([#include <iostream> #include "HepMC/defs.h" int main() { std::cout << VERSION << std::endl; return 0; }])]) fi if test -f conftest.cc; then $CXX $CPPFLAGS conftest.cc -o conftest 2>&1 1>&5 elif test -f conftest.C; then $CXX $CPPFLAGS conftest.C -o conftest 2>&1 1>&5 else $CXX $CPPFLAGS conftest.cpp -o conftest 2>&1 1>&5 fi hepmc_version=`./conftest` if test x$hepmc_version != x; then let hepmc_major=`echo "$hepmc_version" | cut -d. -f1` let hepmc_minor=`echo "$hepmc_version" | cut -d.
-f2` fi rm -f conftest conftest.cpp conftest.cc conftest.C HEPMC_VERSION=$hepmc_major$hepmc_minor AC_MSG_NOTICE([HepMC version is $hepmc_version -> $HEPMC_VERSION]) AC_SUBST(HEPMC_VERSION) CPPFLAGS=$oldCPPFLAGS else oldCPPFLAGS=$CPPFLAGS CPPFLAGS="$CPPFLAGS -I$HEPMC3INCPATH" AC_LANG_CONFTEST([AC_LANG_SOURCE([#include <iostream> #include "HepMC3/Version.h" int main() { std::cout << HepMC3::version() << std::endl; return 0; }])]) if test -f conftest.cc; then $CXX $CPPFLAGS conftest.cc -o conftest 2>&1 1>&5 elif test -f conftest.C; then $CXX $CPPFLAGS conftest.C -o conftest 2>&1 1>&5 else $CXX $CPPFLAGS conftest.cpp -o conftest 2>&1 1>&5 fi hepmc_version=`./conftest` if test x$hepmc_version != x; then let hepmc_major=`echo "$hepmc_version" | cut -d. -f1` let hepmc_minor=`echo "$hepmc_version" | cut -d. -f2` let hepmc_third=`echo "$hepmc_version" | cut -d. -f3` fi rm -f conftest conftest.cpp conftest.cc conftest.C HEPMC_VERSION=$hepmc_major$hepmc_minor$hepmc_third AC_MSG_NOTICE([HepMC version is $hepmc_version -> $HEPMC_VERSION]) if test $HEPMC_VERSION -le 310; then AC_MSG_ERROR([HepMC3 version 3.1.1 or later is required.]) fi AC_SUBST(HEPMC_VERSION) CPPFLAGS=$oldCPPFLAGS fi AM_CONDITIONAL([ENABLE_HEPMC_3], [test x$hepmc_major = x3]) AM_COND_IF([ENABLE_HEPMC_3],[CPPFLAGS="$CPPFLAGS -DENABLE_HEPMC_3=true"]) ## FastJet clustering library AC_CEDAR_LIBRARYANDHEADERS([fastjet], , , [AC_MSG_ERROR([FastJet is required])]) AC_PATH_PROG(FJCONFIG, fastjet-config, [], $FASTJETPATH/bin:$PATH) if test -f "$FJCONFIG"; then AC_MSG_CHECKING([FastJet version using fastjet-config]) fjversion=`$FJCONFIG --version` AC_MSG_RESULT([$fjversion]) fjmajor=$(echo $fjversion | cut -f1 -d.) fjminor=$(echo $fjversion | cut -f2 -d.) fjmicro=$(echo $fjversion | cut -f3 -d.)
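The dotted-version comparisons in configure.ac use two encodings: the YODA check packs major/minor/micro as major*10000 + minor*100 + micro, while the HepMC checks simply concatenate the digits (so 3.1.0 becomes 310). A small sketch of the packed form, mirroring the YODA_VERSION_INT logic; `version_int` is a name invented for this example.

```python
import re

def version_int(version, width=100):
    """Pack a dotted 'major.minor.micro' string into one comparable
    integer, like the YODA_VERSION_INT computation in configure.ac."""
    # keep only the leading digits of each component (e.g. "0-pre" -> 0)
    parts = [int(re.match(r"\d+", p).group()) for p in version.split(".")[:3]]
    while len(parts) < 3:        # pad short versions like "3.1"
        parts.append(0)
    major, minor, micro = parts
    return (major * width + minor) * width + micro
```

The padding scheme keeps ordering correct across components of different digit counts (1.10.0 > 1.9.9), which plain digit concatenation does not.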
if test "$fjmajor" -lt 3; then AC_MSG_ERROR([FastJet version 3.0.0 or later is required]) fi - FASTJETCONFIGLIBADD="$($FJCONFIG --plugins --shared --libs)" + FASTJETCONFIGLIBADD="$($FJCONFIG --plugins --shared --libs) -lfastjetcontribfragile" else FASTJETCONFIGLIBADD="-L$FASTJETLIBPATH -l$FASTJETLIBNAME" FASTJETCONFIGLIBADD="$FASTJETCONFIGLIBADD -lSISConePlugin -lsiscone -lsiscone_spherical" FASTJETCONFIGLIBADD="$FASTJETCONFIGLIBADD -lCDFConesPlugin -lD0RunIIConePlugin -lNestedDefsPlugin" FASTJETCONFIGLIBADD="$FASTJETCONFIGLIBADD -lTrackJetPlugin -lATLASConePlugin -lCMSIterativeConePlugin" FASTJETCONFIGLIBADD="$FASTJETCONFIGLIBADD -lEECambridgePlugin -lJadePlugin" + FASTJETCONFIGLIBADD="$FASTJETCONFIGLIBADD -lfastjetcontribfragile" fi; AC_MSG_NOTICE([FastJet LIBADD = $FASTJETCONFIGLIBADD]) AC_SUBST(FASTJETCONFIGLIBADD) # Check for FastJet headers that require the --enable-all(cxx)plugins option FASTJET_ERRMSG="Required FastJet plugin headers were not found: did you build FastJet with the --enable-allcxxplugins option?" oldCPPFLAGS=$CPPFLAGS CPPFLAGS="$CPPFLAGS -I$FASTJETINCPATH" AC_CHECK_HEADER([fastjet/D0RunIIConePlugin.hh], [], [AC_MSG_ERROR([$FASTJET_ERRMSG])]) AC_CHECK_HEADER([fastjet/TrackJetPlugin.hh], [], [AC_MSG_ERROR([$FASTJET_ERRMSG])]) CPPFLAGS=$oldCPPFLAGS # ## GNU Scientific Library # AC_SEARCH_GSL # AC_CEDAR_HEADERS([gsl], , , [AC_MSG_ERROR([GSL (GNU Scientific Library) is required])]) # oldCPPFLAGS=$CPPFLAGS # CPPFLAGS="$CPPFLAGS -I$GSLINCPATH" # AC_CHECK_HEADER([gsl/gsl_vector.h], [], [AC_MSG_ERROR([GSL vectors not found.])]) # CPPFLAGS=$oldCPPFLAGS ## Disable build/install of standard analyses AC_ARG_ENABLE([analyses], [AC_HELP_STRING(--disable-analyses, [don't try to build or install standard analyses])], [], [enable_analyses=yes]) if test x$enable_analyses != xyes; then AC_MSG_WARN([Not building standard Rivet analyses, by request]) fi AM_CONDITIONAL(ENABLE_ANALYSES, [test x$enable_analyses = xyes]) ## Build LaTeX docs if possible... 
AC_PATH_PROG(PDFLATEX, pdflatex) AM_CONDITIONAL(WITH_PDFLATEX, [test x$PDFLATEX != x]) ## ... unless told otherwise! AC_ARG_ENABLE([pdfmanual], [AC_HELP_STRING(--enable-pdfmanual, [build and install the manual])], [], [enable_pdfmanual=no]) if test x$enable_pdfmanual = xyes; then AC_MSG_WARN([Building Rivet manual, by request]) fi AM_CONDITIONAL(ENABLE_PDFMANUAL, [test x$enable_pdfmanual = xyes]) ## Build Doxygen documentation if possible AC_ARG_ENABLE([doxygen], [AC_HELP_STRING(--disable-doxygen, [don't try to make Doxygen documentation])], [], [enable_doxygen=yes]) if test x$enable_doxygen = xyes; then AC_PATH_PROG(DOXYGEN, doxygen) fi AM_CONDITIONAL(WITH_DOXYGEN, [test x$DOXYGEN != x]) ## Build asciidoc docs if possible AC_PATH_PROG(ASCIIDOC, asciidoc) AM_CONDITIONAL(WITH_ASCIIDOC, [test x$ASCIIDOC != x]) ## Python extension AC_ARG_ENABLE(pyext, [AC_HELP_STRING(--disable-pyext, [don't build Python module (default=build)])], [], [enable_pyext=yes]) ## Basic Python checks if test x$enable_pyext == xyes; then AX_PYTHON_DEVEL([>= '2.7.3']) AC_SUBST(PYTHON_VERSION) RIVET_PYTHONPATH=`$PYTHON -c "from __future__ import print_function; import distutils.sysconfig; print(distutils.sysconfig.get_python_lib(prefix='$prefix', plat_specific=True));"` AC_SUBST(RIVET_PYTHONPATH) if test -z "$PYTHON"; then AC_MSG_ERROR([Can't build Python extension since python can't be found]) enable_pyext=no fi if test -z "$PYTHON_CPPFLAGS"; then AC_MSG_ERROR([Can't build Python extension since Python.h header file cannot be found]) enable_pyext=no fi fi AM_CONDITIONAL(ENABLE_PYEXT, [test x$enable_pyext == xyes]) dnl dnl setup.py puts its build artifacts into a labelled path dnl this helps the test scripts to find them locally instead of dnl having to install first dnl RIVET_SETUP_PY_PATH=$(${PYTHON} -c 'from __future__ import print_function; import distutils.util as u, sys; vi=sys.version_info; print("lib.%s-%s.%s" % (u.get_platform(),vi.major, vi.minor))') AC_SUBST(RIVET_SETUP_PY_PATH) ## 
Cython checks if test x$enable_pyext == xyes; then AM_CHECK_CYTHON([0.24.0], [:], [:]) if test x$CYTHON_FOUND = xyes; then AC_MSG_NOTICE([Cython >= 0.24 found: Python extension source can be rebuilt (for developers)]) fi AC_CHECK_FILE([pyext/rivet/core.cpp], [], [if test "x$CYTHON_FOUND" != "xyes"; then AC_MSG_ERROR([Cython is required for --enable-pyext, no pre-built core.cpp was found.]) fi]) cython_compiler=$CXX ## Set extra Python extension build flags (to cope with Cython output code oddities) PYEXT_CXXFLAGS="$CXXFLAGS" AC_CEDAR_CHECKCXXFLAG([-Wno-unused-but-set-variable], [PYEXT_CXXFLAGS="$PYEXT_CXXFLAGS -Wno-unused-but-set-variable"]) AC_CEDAR_CHECKCXXFLAG([-Wno-sign-compare], [PYEXT_CXXFLAGS="$PYEXT_CXXFLAGS -Wno-sign-compare"]) AC_SUBST(PYEXT_CXXFLAGS) AC_MSG_NOTICE([All Python build checks successful: 'rivet' Python extension will be built]) fi AM_CONDITIONAL(WITH_CYTHON, [test x$CYTHON_FOUND = xyes]) ## Set default build flags AM_CPPFLAGS="-I\$(top_srcdir)/include -I\$(top_builddir)/include" #AM_CPPFLAGS="$AM_CPPFLAGS -I\$(top_srcdir)/include/eigen3" #AM_CPPFLAGS="$AM_CPPFLAGS \$(GSL_CPPFLAGS)" dnl AM_CPPFLAGS="$AM_CPPFLAGS \$(BOOST_CPPFLAGS)" AM_CPPFLAGS="$AM_CPPFLAGS -I\$(YODAINCPATH)" if test x$use_hepmc2 = xyes ; then AM_CPPFLAGS="$AM_CPPFLAGS -I\$(HEPMCINCPATH)" else AM_CPPFLAGS="$AM_CPPFLAGS -I\$(HEPMC3INCPATH)" fi AM_CPPFLAGS="$AM_CPPFLAGS -I\$(FASTJETINCPATH)" AC_CEDAR_CHECKCXXFLAG([-pedantic], [AM_CXXFLAGS="$AM_CXXFLAGS -pedantic"]) AC_CEDAR_CHECKCXXFLAG([-Wall], [AM_CXXFLAGS="$AM_CXXFLAGS -Wall"]) AC_CEDAR_CHECKCXXFLAG([-Wno-long-long], [AM_CXXFLAGS="$AM_CXXFLAGS -Wno-long-long"]) AC_CEDAR_CHECKCXXFLAG([-Wno-format], [AM_CXXFLAGS="$AM_CXXFLAGS -Wno-format"]) dnl AC_CEDAR_CHECKCXXFLAG([-Wno-unused-variable], [AM_CXXFLAGS="$AM_CXXFLAGS -Wno-unused-variable"]) AC_CEDAR_CHECKCXXFLAG([-Werror=uninitialized], [AM_CXXFLAGS="$AM_CXXFLAGS -Werror=uninitialized"]) AC_CEDAR_CHECKCXXFLAG([-Werror=delete-non-virtual-dtor], [AM_CXXFLAGS="$AM_CXXFLAGS 
-Werror=delete-non-virtual-dtor"]) ## Add OpenMP-enabling flags if possible AX_OPENMP([AM_CXXFLAGS="$AM_CXXFLAGS $OPENMP_CXXFLAGS"]) ## Optional zlib support for gzip-compressed data streams/files AX_CHECK_ZLIB ## Debug flag (default=-DNDEBUG, enabled=-g) AC_ARG_ENABLE([debug], [AC_HELP_STRING(--enable-debug, [build with debugging symbols @<:@default=no@:>@])], [], [enable_debug=no]) if test x$enable_debug == xyes; then AM_CXXFLAGS="$AM_CXXFLAGS -g" fi ## Extra warnings flag (default=none) AC_ARG_ENABLE([extra-warnings], [AC_HELP_STRING(--enable-extra-warnings, [build with extra compiler warnings (recommended for developers) @<:@default=no@:>@])], [], [enable_extra_warnings=no]) if test x$enable_extra_warnings == xyes; then AC_CEDAR_CHECKCXXFLAG([-Wextra], [AM_CXXFLAGS="$AM_CXXFLAGS -Wextra "]) fi AC_SUBST(AM_CPPFLAGS) AC_SUBST(AM_CXXFLAGS) AC_EMPTY_SUBST AC_CONFIG_FILES(Makefile Doxyfile) AC_CONFIG_FILES(include/Makefile include/Rivet/Makefile) AC_CONFIG_FILES(src/Makefile) AC_CONFIG_FILES(src/Core/Makefile src/Core/yamlcpp/Makefile) AC_CONFIG_FILES(src/Tools/Makefile) -AC_CONFIG_FILES(src/Tools/fjcontrib/Makefile) -AC_CONFIG_FILES(src/Tools/fjcontrib/EnergyCorrelator/Makefile) -AC_CONFIG_FILES(src/Tools/fjcontrib/Nsubjettiness/Makefile) -AC_CONFIG_FILES(src/Tools/fjcontrib/RecursiveTools/Makefile) AC_CONFIG_FILES(src/Projections/Makefile) AC_CONFIG_FILES(src/AnalysisTools/Makefile) AC_CONFIG_FILES(analyses/Makefile) AC_CONFIG_FILES(test/Makefile) AC_CONFIG_FILES(pyext/Makefile pyext/rivet/Makefile pyext/setup.py) AC_CONFIG_FILES(data/Makefile data/texmf/Makefile) AC_CONFIG_FILES(doc/Makefile) AC_CONFIG_FILES(doc/rivetversion.sty) AC_CONFIG_FILES(bin/Makefile bin/rivet-config bin/rivet-buildplugin) AC_CONFIG_FILES(rivetenv.sh rivetenv.csh rivet.pc) AC_OUTPUT if test x$enable_pyrivet == xyes; then cat < /// @def vetoEvent /// Preprocessor define for vetoing events, including the log message and return. 
#define vetoEvent \ do { MSG_DEBUG("Vetoing event on line " << __LINE__ << " of " << __FILE__); return; } while(0) namespace Rivet { // Convenience for analysis writers using std::cout; using std::cerr; using std::endl; using std::tuple; using std::stringstream; using std::swap; using std::numeric_limits; // Forward declaration class AnalysisHandler; /// @brief This is the base class of all analysis classes in Rivet. /// /// There are /// three virtual functions which should be implemented in base classes: /// /// void init() is called by Rivet before a run is started. Here the /// analysis class should book necessary histograms. The needed /// projections should probably rather be constructed in the /// constructor. /// /// void analyze(const Event&) is called once for each event. Here the /// analysis class should apply the necessary Projections and fill the /// histograms. /// /// void finalize() is called after a run is finished. Here the analysis /// class should do whatever manipulations are necessary on the /// histograms. Writing the histograms to a file is, however, done by /// the Rivet class. class Analysis : public ProjectionApplier { /// The AnalysisHandler is a friend. friend class AnalysisHandler; public: /// @name Standard constructors and destructors. //@{ // /// The default constructor. // Analysis(); /// Constructor Analysis(const std::string& name); /// The destructor. virtual ~Analysis() {} //@} public: /// @name Main analysis methods //@{ /// Initialize this analysis object. A concrete class should here /// book all necessary histograms. An overridden function must make /// sure it first calls the base class function. virtual void init() { } /// Analyze one event. A concrete class should here apply the /// necessary projections on the \a event and fill the relevant /// histograms. An overridden function must make sure it first calls /// the base class function. virtual void analyze(const Event& event) = 0; /// Finalize this analysis object. 
A concrete class should here make /// all necessary operations on the histograms. Writing the /// histograms to a file is, however, done by the Rivet class. An /// overridden function must make sure it first calls the base class /// function. virtual void finalize() { } //@} public: /// @name Metadata /// Metadata is used for querying from the command line and also for /// building web pages and the analysis pages in the Rivet manual. //@{ /// Get the actual AnalysisInfo object in which all this metadata is stored. const AnalysisInfo& info() const { assert(_info && "No AnalysisInfo object :O"); return *_info; } /// @brief Get the name of the analysis. /// /// By default this is computed by combining the results of the /// experiment, year and Spires ID metadata methods and you should /// only override it if there's a good reason why those won't /// work. If options has been set for this instance, a /// corresponding string is appended at the end. virtual std::string name() const { return ( (info().name().empty()) ? _defaultname : info().name() ) + _optstring; } /// Get name of reference data file, which could be different from plugin name virtual std::string getRefDataName() const { return (info().getRefDataName().empty()) ? _defaultname : info().getRefDataName(); } /// Set name of reference data file, which could be different from plugin name virtual void setRefDataName(const std::string& ref_data="") { info().setRefDataName(!ref_data.empty() ? ref_data : name()); } /// Get the Inspire ID code for this analysis. virtual std::string inspireId() const { return info().inspireId(); } /// Get the SPIRES ID code for this analysis (~deprecated). virtual std::string spiresId() const { return info().spiresId(); } /// @brief Names & emails of paper/analysis authors. /// /// Names and email of authors in 'NAME \' format. The first /// name in the list should be the primary contact person. 
virtual std::vector<std::string> authors() const {
      return info().authors();
    }

    /// @brief Get a short description of the analysis.
    ///
    /// Short (one sentence) description used as an index entry.
    /// Use @a description() to provide full descriptive paragraphs
    /// of analysis details.
    virtual std::string summary() const {
      return info().summary();
    }

    /// @brief Get a full description of the analysis.
    ///
    /// Full textual description of this analysis, what it is useful for,
    /// what experimental techniques are applied, etc. Should be treated
    /// as a chunk of restructuredText (http://docutils.sourceforge.net/rst.html),
    /// with equations to be rendered as LaTeX with amsmath operators.
    virtual std::string description() const {
      return info().description();
    }

    /// @brief Information about the events needed as input for this analysis.
    ///
    /// Event types, energies, kinematic cuts, particles to be considered
    /// stable, etc. Should be treated as a restructuredText bullet list
    /// (http://docutils.sourceforge.net/rst.html)
    virtual std::string runInfo() const {
      return info().runInfo();
    }

    /// Experiment which performed and published this analysis.
    virtual std::string experiment() const {
      return info().experiment();
    }

    /// Collider on which the experiment ran.
    virtual std::string collider() const {
      return info().collider();
    }

    /// When the original experimental analysis was published.
    virtual std::string year() const {
      return info().year();
    }

    /// The integrated luminosity in inverse femtobarn
    virtual std::string luminosityfb() const {
      return info().luminosityfb();
    }

    /// Journal and preprint references.
    virtual std::vector<std::string> references() const {
      return info().references();
    }

    /// BibTeX citation key for this article.
    virtual std::string bibKey() const {
      return info().bibKey();
    }

    /// BibTeX citation entry for this article.
    virtual std::string bibTeX() const {
      return info().bibTeX();
    }

    /// Whether this analysis is trusted (in any way!)
virtual std::string status() const {
      return (info().status().empty()) ? "UNVALIDATED" : info().status();
    }

    /// Any work to be done on this analysis.
    virtual std::vector<std::string> todos() const {
      return info().todos();
    }

    /// make-style commands for validating this analysis.
    virtual std::vector<std::string> validation() const {
      return info().validation();
    }

    /// Return the allowed pairs of incoming beams required by this analysis.
    virtual const std::vector<PdgIdPair>& requiredBeams() const {
      return info().beams();
    }

    /// Declare the allowed pairs of incoming beams required by this analysis.
    virtual Analysis& setRequiredBeams(const std::vector<PdgIdPair>& requiredBeams) {
      info().setBeams(requiredBeams);
      return *this;
    }

    /// Sets of valid beam energy pairs, in GeV
    virtual const std::vector<std::pair<double, double>>& requiredEnergies() const {
      return info().energies();
    }

    /// Get the vector of analysis keywords
    virtual const std::vector<std::string>& keywords() const {
      return info().keywords();
    }

    /// Declare the list of valid beam energy pairs, in GeV
    virtual Analysis& setRequiredEnergies(const std::vector<std::pair<double, double>>& requiredEnergies) {
      info().setEnergies(requiredEnergies);
      return *this;
    }

    //@}

    /// @name Internal metadata modifying methods
    //@{

    /// Get the actual AnalysisInfo object in which all this metadata is stored (non-const).
    AnalysisInfo& info() {
      assert(_info && "No AnalysisInfo object :O");
      return *_info;
    }

    //@}

    /// @name Run conditions
    //@{

    /// Incoming beams for this run
    const ParticlePair& beams() const;

    /// Incoming beam IDs for this run
    const PdgIdPair beamIds() const;

    /// Centre-of-mass energy for this run
    double sqrtS() const;

    /// Check if we are running rivet-merge.
    bool merging() const {
      return sqrtS() <= 0.0;
    }

    //@}

    /// @name Analysis / beam compatibility testing
    /// @todo Replace with beamsCompatible() with no args (calling beams() function internally)
    /// @todo Add beamsMatch() methods with same (shared-code?)
tolerance as in beamsCompatible()
    //@{

    /// Check if analysis is compatible with the provided beam particle IDs and energies
    bool isCompatible(const ParticlePair& beams) const;

    /// Check if analysis is compatible with the provided beam particle IDs and energies
    bool isCompatible(PdgId beam1, PdgId beam2, double e1, double e2) const;

    /// Check if analysis is compatible with the provided beam particle IDs and energies
    bool isCompatible(const PdgIdPair& beams, const std::pair<double, double>& energies) const;

    //@}

    /// Access the controlling AnalysisHandler object.
    AnalysisHandler& handler() const { return *_analysishandler; }

  protected:

    /// Get a Log object based on the name() property of the calling analysis object.
    Log& getLog() const;

    /// Get the process cross-section in pb. Throws if this hasn't been set.
    double crossSection() const;

    /// Get the process cross-section per generated event in pb. Throws if this
    /// hasn't been set.
    double crossSectionPerEvent() const;

    /// @brief Get the number of events seen (via the analysis handler).
    ///
    /// @note Use in the finalize phase only.
    size_t numEvents() const;

    /// @brief Get the sum of event weights seen (via the analysis handler).
    ///
    /// @note Use in the finalize phase only.
    double sumW() const;

    /// Alias
    double sumOfWeights() const { return sumW(); }

    /// @brief Get the sum of squared event weights seen (via the analysis handler).
    ///
    /// @note Use in the finalize phase only.
    double sumW2() const;

  protected:

    /// @name Histogram paths
    //@{

    /// Get the canonical histogram "directory" path for this analysis.
    const std::string histoDir() const;

    /// Get the canonical histogram path for the named histogram in this analysis.
    const std::string histoPath(const std::string& hname) const;

    /// Get the canonical histogram path for the numbered histogram in this analysis.
    const std::string histoPath(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) const;

    /// Get the internal histogram name for given d, x and y (cf.
HepData)
    const std::string mkAxisCode(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) const;

    //@}

    /// @name Histogram reference data
    //@{

    /// Get reference data for a named histo
    /// @todo SFINAE to ensure that the type inherits from YODA::AnalysisObject?
    template <typename T=YODA::Scatter2D>
    const T& refData(const string& hname) const {
      _cacheRefData();
      MSG_TRACE("Using histo bin edges for " << name() << ":" << hname);
      if (!_refdata[hname]) {
        MSG_ERROR("Can't find reference histogram " << hname);
        throw Exception("Reference data " + hname + " not found.");
      }
      return dynamic_cast<T&>(*_refdata[hname]);
    }

    /// Get reference data for a numbered histo
    /// @todo SFINAE to ensure that the type inherits from YODA::AnalysisObject?
    template <typename T=YODA::Scatter2D>
    const T& refData(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) const {
      const string hname = mkAxisCode(datasetId, xAxisId, yAxisId);
      return refData<T>(hname);
    }

    //@}

    /// @name Counter booking
    //@{

    /// Book a counter.
-    CounterPtr & book(CounterPtr &, const std::string& name,
-                      const std::string& title="");
-    // const std::string& valtitle=""
+    CounterPtr & book(CounterPtr &, const std::string& name);

    /// Book a counter, using a path generated from the dataset and axis ID codes
    ///
    /// The paper, dataset and x/y-axis IDs will be used to build the histo name in the HepData standard way.
-    CounterPtr & book(CounterPtr &, unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId,
-                      const std::string& title="");
-    // const std::string& valtitle=""
+    CounterPtr & book(CounterPtr &, unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId);

    //@}

    /// @name 1D histogram booking
    //@{

    /// Book a 1D histogram with @a nbins uniformly distributed across the range @a lower - @a upper .
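    ///
    /// For example, a minimal booking sketch in an analysis init() (the member
    /// and histogram names here are hypothetical, not from this codebase):
    /// @code
    /// // Histo1DPtr _h_njets;  // analysis member
    /// book(_h_njets, "njets", 10, -0.5, 9.5);
    /// @endcode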
-    Histo1DPtr & book(Histo1DPtr&, const std::string& name,
-                      size_t nbins, double lower, double upper,
-                      const std::string& title="",
-                      const std::string& xtitle="",
-                      const std::string& ytitle="");
+    Histo1DPtr & book(Histo1DPtr&, const std::string& name, size_t nbins, double lower, double upper);

    /// Book a 1D histogram with non-uniform bins defined by the vector of bin edges @a binedges .
-    Histo1DPtr & book(Histo1DPtr&, const std::string& name,
-                      const std::vector<double>& binedges,
-                      const std::string& title="",
-                      const std::string& xtitle="",
-                      const std::string& ytitle="");
+    Histo1DPtr & book(Histo1DPtr&, const std::string& name, const std::vector<double>& binedges);

    /// Book a 1D histogram with non-uniform bins defined by the initializer list of bin edges @a binedges .
-    Histo1DPtr & book(Histo1DPtr&, const std::string& name,
-                      const std::initializer_list<double>& binedges,
-                      const std::string& title="",
-                      const std::string& xtitle="",
-                      const std::string& ytitle="");
+    Histo1DPtr & book(Histo1DPtr&, const std::string& name, const std::initializer_list<double>& binedges);

    /// Book a 1D histogram with binning from a reference scatter.
-    Histo1DPtr & book(Histo1DPtr&, const std::string& name,
-                      const Scatter2D& refscatter,
-                      const std::string& title="",
-                      const std::string& xtitle="",
-                      const std::string& ytitle="");
+    Histo1DPtr & book(Histo1DPtr&, const std::string& name, const Scatter2D& refscatter);

    /// Book a 1D histogram, using the binning in the reference data histogram.
-    Histo1DPtr & book(Histo1DPtr&, const std::string& name,
-                      const std::string& title="",
-                      const std::string& xtitle="",
-                      const std::string& ytitle="");
+    Histo1DPtr & book(Histo1DPtr&, const std::string& name);

    /// Book a 1D histogram, using the binning in the reference data histogram.
    ///
    /// The paper, dataset and x/y-axis IDs will be used to build the histo name in the HepData standard way.
-    Histo1DPtr & book(Histo1DPtr&, unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId,
-                      const std::string& title="",
-                      const std::string& xtitle="",
-                      const std::string& ytitle="");
+    Histo1DPtr & book(Histo1DPtr&, unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId);

    //@}

    /// @name 2D histogram booking
    //@{

    /// Book a 2D histogram with @a nxbins and @a nybins uniformly
    /// distributed across the ranges @a xlower - @a xupper and @a
    /// ylower - @a yupper respectively along the x- and y-axis.
    Histo2DPtr & book(Histo2DPtr&, const std::string& name,
                      size_t nxbins, double xlower, double xupper,
-                     size_t nybins, double ylower, double yupper,
-                     const std::string& title="",
-                     const std::string& xtitle="",
-                     const std::string& ytitle="",
-                     const std::string& ztitle="");
+                     size_t nybins, double ylower, double yupper);

    /// Book a 2D histogram with non-uniform bins defined by the
    /// vectors of bin edges @a xbinedges and @a ybinedges.
    Histo2DPtr & book(Histo2DPtr&, const std::string& name,
                      const std::vector<double>& xbinedges,
-                     const std::vector<double>& ybinedges,
-                     const std::string& title="",
-                     const std::string& xtitle="",
-                     const std::string& ytitle="",
-                     const std::string& ztitle="");
+                     const std::vector<double>& ybinedges);

    /// Book a 2D histogram with non-uniform bins defined by the
    /// initializer lists of bin edges @a xbinedges and @a ybinedges.
    Histo2DPtr & book(Histo2DPtr&, const std::string& name,
                      const std::initializer_list<double>& xbinedges,
-                     const std::initializer_list<double>& ybinedges,
-                     const std::string& title="",
-                     const std::string& xtitle="",
-                     const std::string& ytitle="",
-                     const std::string& ztitle="");
+                     const std::initializer_list<double>& ybinedges);

    /// Book a 2D histogram with binning from a reference scatter.
    Histo2DPtr & book(Histo2DPtr&, const std::string& name,
-                     const Scatter3D& refscatter,
-                     const std::string& title="",
-                     const std::string& xtitle="",
-                     const std::string& ytitle="",
-                     const std::string& ztitle="");
+                     const Scatter3D& refscatter);

    /// Book a 2D histogram, using the binning in the reference data histogram.
-    Histo2DPtr & book(Histo2DPtr&, const std::string& name,
-                      const std::string& title="",
-                      const std::string& xtitle="",
-                      const std::string& ytitle="",
-                      const std::string& ztitle="");
+    Histo2DPtr & book(Histo2DPtr&, const std::string& name);

    /// Book a 2D histogram, using the binning in the reference data histogram.
    ///
    /// The paper, dataset and x/y-axis IDs will be used to build the histo name in the HepData standard way.
-    Histo2DPtr & book(Histo2DPtr&, unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId,
-                      const std::string& title="",
-                      const std::string& xtitle="",
-                      const std::string& ytitle="",
-                      const std::string& ztitle="");
+    Histo2DPtr & book(Histo2DPtr&, unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId);

    //@}

    /// @name 1D profile histogram booking
    //@{

    /// Book a 1D profile histogram with @a nbins uniformly distributed across the range @a lower - @a upper .
-    Profile1DPtr & book(Profile1DPtr&, const std::string& name,
-                        size_t nbins, double lower, double upper,
-                        const std::string& title="",
-                        const std::string& xtitle="",
-                        const std::string& ytitle="");
+    Profile1DPtr & book(Profile1DPtr&, const std::string& name, size_t nbins, double lower, double upper);

    /// Book a 1D profile histogram with non-uniform bins defined by the vector of bin edges @a binedges .
-    Profile1DPtr & book(Profile1DPtr&, const std::string& name,
-                        const std::vector<double>& binedges,
-                        const std::string& title="",
-                        const std::string& xtitle="",
-                        const std::string& ytitle="");
+    Profile1DPtr & book(Profile1DPtr&, const std::string& name, const std::vector<double>& binedges);

    /// Book a 1D profile histogram with non-uniform bins defined by the initializer list of bin edges @a binedges .
-    Profile1DPtr & book(Profile1DPtr&, const std::string& name,
-                        const std::initializer_list<double>& binedges,
-                        const std::string& title="",
-                        const std::string& xtitle="",
-                        const std::string& ytitle="");
+    Profile1DPtr & book(Profile1DPtr&, const std::string& name, const std::initializer_list<double>& binedges);

    /// Book a 1D profile histogram with binning from a reference scatter.
-    Profile1DPtr & book(Profile1DPtr&, const std::string& name,
-                        const Scatter2D& refscatter,
-                        const std::string& title="",
-                        const std::string& xtitle="",
-                        const std::string& ytitle="");
+    Profile1DPtr & book(Profile1DPtr&, const std::string& name, const Scatter2D& refscatter);

    /// Book a 1D profile histogram, using the binning in the reference data histogram.
-    Profile1DPtr & book(Profile1DPtr&, const std::string& name,
-                        const std::string& title="",
-                        const std::string& xtitle="",
-                        const std::string& ytitle="");
+    Profile1DPtr & book(Profile1DPtr&, const std::string& name);

    /// Book a 1D profile histogram, using the binning in the reference data histogram.
    ///
    /// The paper, dataset and x/y-axis IDs will be used to build the histo name in the HepData standard way.
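    ///
    /// For instance, IDs (d, x, y) = (2, 1, 3) select the reference binning of
    /// the HepData object "d02-x01-y03" (illustrative sketch; the member name
    /// is hypothetical, not from this codebase):
    /// @code
    /// // Profile1DPtr _p_meanpt;  // analysis member
    /// book(_p_meanpt, 2, 1, 3);
    /// @endcode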
-    Profile1DPtr & book(Profile1DPtr&, unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId,
-                        const std::string& title="",
-                        const std::string& xtitle="",
-                        const std::string& ytitle="");
+    Profile1DPtr & book(Profile1DPtr&, unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId);

    //@}

    /// @name 2D profile histogram booking
    //@{

    /// Book a 2D profile histogram with @a nxbins and @a nybins uniformly
    /// distributed across the ranges @a xlower - @a xupper and @a ylower - @a
    /// yupper respectively along the x- and y-axis.
    Profile2DPtr & book(Profile2DPtr&, const std::string& name,
                        size_t nxbins, double xlower, double xupper,
-                       size_t nybins, double ylower, double yupper,
-                       const std::string& title="",
-                       const std::string& xtitle="",
-                       const std::string& ytitle="",
-                       const std::string& ztitle="");
+                       size_t nybins, double ylower, double yupper);

    /// Book a 2D profile histogram with non-uniform bins defined by the vectors
    /// of bin edges @a xbinedges and @a ybinedges.
    Profile2DPtr & book(Profile2DPtr&, const std::string& name,
                        const std::vector<double>& xbinedges,
-                       const std::vector<double>& ybinedges,
-                       const std::string& title="",
-                       const std::string& xtitle="",
-                       const std::string& ytitle="",
-                       const std::string& ztitle="");
+                       const std::vector<double>& ybinedges);

    /// Book a 2D profile histogram with non-uniform bins defined by the
    /// initializer lists of bin edges @a xbinedges and @a ybinedges.
    Profile2DPtr & book(Profile2DPtr&, const std::string& name,
                        const std::initializer_list<double>& xbinedges,
-                       const std::initializer_list<double>& ybinedges,
-                       const std::string& title="",
-                       const std::string& xtitle="",
-                       const std::string& ytitle="",
-                       const std::string& ztitle="");
+                       const std::initializer_list<double>& ybinedges);

    /// @todo REINSTATE
    // /// Book a 2D profile histogram with binning from a reference scatter.
// Profile2DPtr & book(const Profile2DPtr &, const std::string& name, - // const Scatter3D& refscatter, - // const std::string& title="", - // const std::string& xtitle="", - // const std::string& ytitle="", - // const std::string& ztitle=""); + // const Scatter3D& refscatter); // /// Book a 2D profile histogram, using the binnings in the reference data histogram. - // Profile2DPtr & book(const Profile2DPtr &, const std::string& name, - // const std::string& title="", - // const std::string& xtitle="", - // const std::string& ytitle="", - // const std::string& ztitle=""); + // Profile2DPtr & book(const Profile2DPtr &, const std::string& name); // /// Book a 2D profile histogram, using the binnings in the reference data histogram. // /// // /// The paper, dataset and x/y-axis IDs will be used to build the histo name in the HepData standard way. - // Profile2DPtr & book(const Profile2DPtr &, unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId, - // const std::string& title="", - // const std::string& xtitle="", - // const std::string& ytitle="", - // const std::string& ztitle=""); + // Profile2DPtr & book(const Profile2DPtr &, unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId); //@} /// @name 2D scatter booking //@{ /// @brief Book a 2-dimensional data point set with the given name. /// /// @note Unlike histogram booking, scatter booking by default makes no /// attempt to use reference data to pre-fill the data object. If you want /// this, which is sometimes useful e.g. when the x-position is not really /// meaningful and can't be extracted from the data, then set the @a /// copy_pts parameter to true. This creates points to match the reference /// data's x values and errors, but with the y values and errors zeroed... /// assuming that there is a reference histo with the same name: if there /// isn't, an exception will be thrown. 
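    ///
    /// A sketch of both modes (member names hypothetical, not from this codebase):
    /// @code
    /// book(_s_raw, "d05-x01-y01");        // empty scatter, no ref pre-fill
    /// book(_s_ref, "d06-x01-y01", true);  // x points/errors copied from ref data
    /// @endcode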
- Scatter2DPtr & book(Scatter2DPtr & s2d, const string& hname, - bool copy_pts=false, - const std::string& title="", - const std::string& xtitle="", - const std::string& ytitle=""); + Scatter2DPtr & book(Scatter2DPtr & s2d, const string& hname, bool copy_pts = false); /// @brief Book a 2-dimensional data point set, using the binnings in the reference data histogram. /// /// The paper, dataset and x/y-axis IDs will be used to build the histo name in the HepData standard way. /// /// @note Unlike histogram booking, scatter booking by default makes no /// attempt to use reference data to pre-fill the data object. If you want /// this, which is sometimes useful e.g. when the x-position is not really /// meaningful and can't be extracted from the data, then set the @a /// copy_pts parameter to true. This creates points to match the reference /// data's x values and errors, but with the y values and errors zeroed. - Scatter2DPtr & book(Scatter2DPtr & s2d, unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId, - bool copy_pts=false, - const std::string& title="", - const std::string& xtitle="", - const std::string& ytitle=""); + Scatter2DPtr & book(Scatter2DPtr & s2d, unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId, bool copy_pts = false); /// @brief Book a 2-dimensional data point set with equally spaced x-points in a range. /// /// The y values and errors will be set to 0. - Scatter2DPtr & book(Scatter2DPtr & s2d, const string& hname, - size_t npts, double lower, double upper, - const std::string& title="", - const std::string& xtitle="", - const std::string& ytitle=""); + Scatter2DPtr & book(Scatter2DPtr & s2d, const string& hname, size_t npts, double lower, double upper); /// @brief Book a 2-dimensional data point set based on provided contiguous "bin edges". /// /// The y values and errors will be set to 0. 
-    Scatter2DPtr & book(Scatter2DPtr& s2d, const string& hname,
-                        const std::vector<double>& binedges,
-                        const std::string& title,
-                        const std::string& xtitle,
-                        const std::string& ytitle);
+    Scatter2DPtr & book(Scatter2DPtr& s2d, const string& hname, const std::vector<double>& binedges);

    /// Book a 2-dimensional data point set with x-points from an existing scatter and a new path.
-    Scatter2DPtr book(Scatter2DPtr& s2d, const Scatter2DPtr& scPtr,
-                      const std::string& path,
-                      const std::string& title="",
-                      const std::string& xtitle="",
-                      const std::string& ytitle="");
+    Scatter2DPtr book(Scatter2DPtr& s2d, const Scatter2DPtr& scPtr, const std::string& path);

    //@}

  public:

    /// @name Accessing options for this Analysis instance.
    //@{

    /// Return the map of all options given to this analysis.
    const std::map<std::string, std::string>& options() {
      return _options;
    }

    /// Get an option for this analysis instance as a string.
    std::string getOption(std::string optname) {
      if ( _options.find(optname) != _options.end() )
        return _options.find(optname)->second;
      return "";
    }

    /// Get an option for this analysis instance converted to a
    /// specific type (given by the specified @a def value).
    template <typename T>
    T getOption(std::string optname, T def) {
      if (_options.find(optname) == _options.end()) return def;
      std::stringstream ss;
      ss << _options.find(optname)->second;
      T ret;
      ss >> ret;
      return ret;
    }

    //@}

    /// @brief Book a CentralityProjection
    ///
    /// Using a SingleValueProjection, @a proj, giving the value of an
    /// experimental observable to be used as a centrality estimator,
    /// book a CentralityProjection based on the experimentally
    /// measured percentiles of this observable (as given by the
    /// reference data for the @a calHistName histogram in the @a
    /// calAnaName analysis). If a preloaded file with the output of a
    /// run using the @a calAnaName analysis contains a valid
    /// generated @a calHistName histogram, it will be used as an
    /// optional percentile binning.
Also if this preloaded file
    /// contains a histogram with the name @a calHistName with
    /// "_IMP" appended, this histogram will be used to add an optional
    /// centrality percentile based on the generated impact
    /// parameter. If @a increasing is true, a low (high) value of @a proj
    /// is assumed to correspond to a more peripheral (central) event.
    const CentralityProjection& declareCentrality(const SingleValueProjection& proj,
                                                  string calAnaName, string calHistName,
                                                  const string projName,
                                                  bool increasing = false);

    /// @brief Book a Percentile wrapper around AnalysisObjects.
    ///
    /// Based on a previously registered CentralityProjection named @a
    /// projName book one AnalysisObject for each @a centralityBin and
    /// name them according to the corresponding code in the @a ref
    /// vector.
    ///
    /// @todo Convert to just be called book() cf. others
    template <class T>
    Percentile<T> bookPercentile(string projName,
                                 vector<pair<float, float>> centralityBins,
                                 vector<tuple<int, int, int>> ref) {
      typedef typename ReferenceTraits<T>::RefT RefT;
      typedef rivet_shared_ptr<Wrapper<T>> WrapT;
      Percentile<T> pctl(this, projName);
      const int nCent = centralityBins.size();
      for (int iCent = 0; iCent < nCent; ++iCent) {
        const string axisCode = mkAxisCode(std::get<0>(ref[iCent]),
                                           std::get<1>(ref[iCent]),
                                           std::get<2>(ref[iCent]));
        const RefT& refscatter = refData<RefT>(axisCode);
        WrapT wtf(_weightNames(), T(refscatter, histoPath(axisCode)));
        wtf = addAnalysisObject(wtf);
        CounterPtr cnt(_weightNames(), Counter(histoPath("TMP/COUNTER/" + axisCode)));
        cnt = addAnalysisObject(cnt);
        pctl.add(wtf, cnt, centralityBins[iCent]);
      }
      return pctl;
    }

    // /// @brief Book Percentile wrappers around AnalysisObjects.
    // ///
    // /// Based on a previously registered CentralityProjection named @a
    // /// projName book one (or several) AnalysisObject(s) named
    // /// according to @a ref where the x-axis will be filled according
    // /// to the percentile output(s) of the @a projName.
    // ///
    // /// @todo Convert to just be called book() cf.
others
    // template <class T>
    // PercentileXaxis<T> bookPercentileXaxis(string projName,
    //                                        tuple<int, int, int> ref) {
    //   typedef typename ReferenceTraits<T>::RefT RefT;
    //   typedef rivet_shared_ptr<Wrapper<T>> WrapT;
    //   PercentileXaxis<T> pctl(this, projName);
    //   const string axisCode = mkAxisCode(std::get<0>(ref),
    //                                      std::get<1>(ref),
    //                                      std::get<2>(ref));
    //   const RefT& refscatter = refData<RefT>(axisCode);
    //   WrapT wtf(_weightNames(), T(refscatter, histoPath(axisCode)));
    //   wtf = addAnalysisObject(wtf);
    //   CounterPtr cnt(_weightNames(), Counter());
    //   cnt = addAnalysisObject(cnt);
    //   pctl.add(wtf, cnt);
    //   return pctl;
    // }

  private:

    // Functions that have to be defined in the .cc file to avoid circular #includes

    /// Get the list of weight names from the handler
    vector<string> _weightNames() const;

    /// Get a preloaded AnalysisObject from the handler
    YODA::AnalysisObjectPtr _getPreload(string name) const;

    /// Get the default/nominal weight index
    size_t _defaultWeightIndex() const;

    /// Get an AO from another analysis
    MultiweightAOPtr _getOtherAnalysisObject(const std::string& ananame, const std::string& name);

    /// Check that analysis objects aren't being booked/registered outside the init stage
    void _checkBookInit() const;

    /// Check if we are in the init stage.
    bool inInit() const;

    /// Check if we are in the finalize stage.
bool inFinalize() const;

  private:

    /// To be used in finalize context only:
    class CounterAdapter {
    public:

      CounterAdapter(double x) : x_(x) {}

      CounterAdapter(const YODA::Counter& c) : x_(c.val()) {}
      // CounterAdapter(CounterPtr cp) : x_(cp->val()) {}

      CounterAdapter(const YODA::Scatter1D& s) : x_(s.points()[0].x()) {
        assert( s.numPoints() == 1 && "Can only scale by a single value." );
      }
      // CounterAdapter(Scatter1DPtr sp) : x_(sp->points()[0].x()) {
      //   assert( sp->numPoints() == 1 && "Can only scale by a single value." );
      // }

      operator double() const { return x_; }

    private:

      double x_;

    };

  public:

    double dbl(double x) { return x; }
    double dbl(const YODA::Counter& c) { return c.val(); }
    double dbl(const YODA::Scatter1D& s) {
      assert( s.numPoints() == 1 );
      return s.points()[0].x();
    }

    /// @name Analysis object manipulation
    /// @todo Should really be protected: only public to keep BinnedHistogram happy for now...
    //@{

    /// Multiplicatively scale the given counter, @a cnt, by factor @a factor.
    void scale(CounterPtr cnt, CounterAdapter factor);

    /// Multiplicatively scale the given counters, @a cnts, by factor @a factor.
    /// @note Constness intentional, if weird, to allow passing rvalue refs of smart ptrs (argh)
    /// @todo Use SFINAE for a generic iterable of CounterPtrs
    void scale(const std::vector<CounterPtr>& cnts, CounterAdapter factor) {
      for (auto& c : cnts) scale(c, factor);
    }

    /// @todo YUCK!
    template <size_t array_size>
    void scale(const CounterPtr (&cnts)[array_size], CounterAdapter factor) {
      // for (size_t i = 0; i < std::extent<decltype(cnts)>::value; ++i) scale(cnts[i], factor);
      for (auto& c : cnts) scale(c, factor);
    }

    /// Normalize the given histogram, @a histo, to area = @a norm.
    void normalize(Histo1DPtr histo, CounterAdapter norm=1.0, bool includeoverflows=true);

    /// Normalize the given histograms, @a histos, to area = @a norm.
    /// @note Constness intentional, if weird, to allow passing rvalue refs of smart ptrs (argh)
    /// @todo Use SFINAE for a generic iterable of Histo1DPtrs
    void normalize(const std::vector<Histo1DPtr>& histos, CounterAdapter norm=1.0, bool includeoverflows=true) {
      for (auto& h : histos) normalize(h, norm, includeoverflows);
    }

    /// @todo YUCK!
    template <size_t array_size>
    void normalize(const Histo1DPtr (&histos)[array_size], CounterAdapter norm=1.0, bool includeoverflows=true) {
      for (auto& h : histos) normalize(h, norm, includeoverflows);
    }

    /// Multiplicatively scale the given histogram, @a histo, by factor @a factor.
    void scale(Histo1DPtr histo, CounterAdapter factor);

    /// Multiplicatively scale the given histograms, @a histos, by factor @a factor.
    /// @note Constness intentional, if weird, to allow passing rvalue refs of smart ptrs (argh)
    /// @todo Use SFINAE for a generic iterable of Histo1DPtrs
    void scale(const std::vector<Histo1DPtr>& histos, CounterAdapter factor) {
      for (auto& h : histos) scale(h, factor);
    }

    /// @todo YUCK!
    template <size_t array_size>
    void scale(const Histo1DPtr (&histos)[array_size], CounterAdapter factor) {
      for (auto& h : histos) scale(h, factor);
    }

    /// Normalize the given histogram, @a histo, to area = @a norm.
    void normalize(Histo2DPtr histo, CounterAdapter norm=1.0, bool includeoverflows=true);

    /// Normalize the given histograms, @a histos, to area = @a norm.
    /// @note Constness intentional, if weird, to allow passing rvalue refs of smart ptrs (argh)
    /// @todo Use SFINAE for a generic iterable of Histo2DPtrs
    void normalize(const std::vector<Histo2DPtr>& histos, CounterAdapter norm=1.0, bool includeoverflows=true) {
      for (auto& h : histos) normalize(h, norm, includeoverflows);
    }

    /// @todo YUCK!
    template <size_t array_size>
    void normalize(const Histo2DPtr (&histos)[array_size], CounterAdapter norm=1.0, bool includeoverflows=true) {
      for (auto& h : histos) normalize(h, norm, includeoverflows);
    }

    /// Multiplicatively scale the given histogram, @a histo, by factor @a factor.
    void scale(Histo2DPtr histo, CounterAdapter factor);

    /// Multiplicatively scale the given histograms, @a histos, by factor @a factor.
    /// @note Constness intentional, if weird, to allow passing rvalue refs of smart ptrs (argh)
    /// @todo Use SFINAE for a generic iterable of Histo2DPtrs
    void scale(const std::vector<Histo2DPtr>& histos, CounterAdapter factor) {
      for (auto& h : histos) scale(h, factor);
    }

    /// @todo YUCK!
    template <size_t array_size>
    void scale(const Histo2DPtr (&histos)[array_size], CounterAdapter factor) {
      for (auto& h : histos) scale(h, factor);
    }

    /// Helper for counter division.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void divide(CounterPtr c1, CounterPtr c2, Scatter1DPtr s) const;

    /// Helper for counter division with raw YODA objects.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void divide(const YODA::Counter& c1, const YODA::Counter& c2, Scatter1DPtr s) const;

    /// Helper for histogram division.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void divide(Histo1DPtr h1, Histo1DPtr h2, Scatter2DPtr s) const;

    /// Helper for histogram division with raw YODA objects.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void divide(const YODA::Histo1D& h1, const YODA::Histo1D& h2, Scatter2DPtr s) const;

    /// Helper for profile histogram division.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void divide(Profile1DPtr p1, Profile1DPtr p2, Scatter2DPtr s) const;

    /// Helper for profile histogram division with raw YODA objects.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
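    ///
    /// For example, a ratio formed in finalize() with the histogram-division
    /// overload (member names hypothetical; the output scatter must already be booked):
    /// @code
    /// divide(_h_passed, _h_total, _s_ratio);
    /// @endcode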
void divide(const YODA::Profile1D& p1, const YODA::Profile1D& p2, Scatter2DPtr s) const; /// Helper for 2D histogram division. /// /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target. void divide(Histo2DPtr h1, Histo2DPtr h2, Scatter3DPtr s) const; /// Helper for 2D histogram division with raw YODA objects. /// /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target. void divide(const YODA::Histo2D& h1, const YODA::Histo2D& h2, Scatter3DPtr s) const; /// Helper for 2D profile histogram division. /// /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target. void divide(Profile2DPtr p1, Profile2DPtr p2, Scatter3DPtr s) const; /// Helper for 2D profile histogram division with raw YODA objects /// /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target. void divide(const YODA::Profile2D& p1, const YODA::Profile2D& p2, Scatter3DPtr s) const; /// Helper for histogram efficiency calculation. /// /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target. void efficiency(Histo1DPtr h1, Histo1DPtr h2, Scatter2DPtr s) const; /// Helper for histogram efficiency calculation. /// /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target. void efficiency(const YODA::Histo1D& h1, const YODA::Histo1D& h2, Scatter2DPtr s) const; /// Helper for histogram asymmetry calculation. /// /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target. void asymm(Histo1DPtr h1, Histo1DPtr h2, Scatter2DPtr s) const; /// Helper for histogram asymmetry calculation. /// /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target. 
void asymm(const YODA::Histo1D& h1, const YODA::Histo1D& h2, Scatter2DPtr s) const;

    /// Helper for converting a differential histo to an integral one.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void integrate(Histo1DPtr h, Scatter2DPtr s) const;

    /// Helper for converting a differential histo to an integral one.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void integrate(const Histo1D& h, Scatter2DPtr s) const;

    //@}


  public:

    /// List of registered analysis data objects
    const vector<MultiweightAOPtr>& analysisObjects() const {
      return _analysisobjects;
    }


  protected:

    /// @name Data object registration, retrieval, and removal
    //@{

    /// Get a preloaded YODA object.
    template <typename YODAT>
    shared_ptr<YODAT> getPreload(string path) const {
      return dynamic_pointer_cast<YODAT>(_getPreload(path));
    }

    /// Register a new data object, optionally read in preloaded data.
    template <typename YODAT>
    rivet_shared_ptr< Wrapper<YODAT> > registerAO(const YODAT& yao) {
      typedef Wrapper<YODAT> WrapperT;
      typedef shared_ptr<YODAT> YODAPtrT;
      typedef rivet_shared_ptr<WrapperT> RAOT;

      if ( !inInit() && !inFinalize() ) {
        MSG_ERROR("Can't book objects outside of init()");
        throw UserError(name() + ": Can't book objects outside of init() or finalize().");
      }

      // First check that we haven't booked this before. This is
      // allowed when booking in finalize.
      for (auto& waold : analysisObjects()) {
        if ( yao.path() == waold.get()->basePath() ) {
          if ( !inInit() )
            MSG_WARNING("Found double-booking of " << yao.path() << " in "
                        << name() << ". Keeping previous booking");
          return RAOT(dynamic_pointer_cast<WrapperT>(waold.get()));
        }
      }

      shared_ptr<WrapperT> wao = make_shared<WrapperT>();
      wao->_basePath = yao.path();
      YODAPtrT yaop = make_shared<YODAT>(yao);

      for (const string& weightname : _weightNames()) {
        // Create two YODA objects for each weight. Copy from
        // preloaded YODAs if present.
        // First the finalized yoda:
        string finalpath = yao.path();
        if ( weightname != "" ) finalpath += "[" + weightname + "]";
        YODAPtrT preload = getPreload<YODAT>(finalpath);
        if ( preload ) {
          if ( !bookingCompatible(preload, yaop) ) {
            MSG_WARNING("Found incompatible pre-existing data object with same base path "
                        << finalpath << " for " << name());
            preload = nullptr;
          } else {
            MSG_TRACE("Using preloaded " << finalpath << " in " << name());
            wao->_final.push_back(make_shared<YODAT>(*preload));
          }
        }
        if ( !preload ) {
          wao->_final.push_back(make_shared<YODAT>(yao));
          wao->_final.back()->setPath(finalpath);
        }

        // Then the raw filling yodas.
        string rawpath = "/RAW" + finalpath;
        preload = getPreload<YODAT>(rawpath);
        if ( preload ) {
          if ( !bookingCompatible(preload, yaop) ) {
            MSG_WARNING("Found incompatible pre-existing data object with same base path "
                        << rawpath << " for " << name());
            preload = nullptr;
          } else {
            MSG_TRACE("Using preloaded " << rawpath << " in " << name());
            wao->_persistent.push_back(make_shared<YODAT>(*preload));
          }
        }
        if ( !preload ) {
          wao->_persistent.push_back(make_shared<YODAT>(yao));
          wao->_persistent.back()->setPath(rawpath);
        }
      }

      rivet_shared_ptr<WrapperT> ret(wao);
      ret.get()->unsetActiveWeight();
      if ( inFinalize() ) {
        // If booked in finalize() we assume it is the first time
        // finalize is run.
        ret.get()->pushToFinal();
        ret.get()->setActiveFinalWeightIdx(0);
      }
      _analysisobjects.push_back(ret);
      return ret;
    }

    /// Register a data object in the histogram system
    template <typename AO>
    AO addAnalysisObject(const AO& aonew) {
      _checkBookInit();

      for (const MultiweightAOPtr& ao : analysisObjects()) {

        // Check AO base-name first
        ao.get()->setActiveWeightIdx(_defaultWeightIndex());
        aonew.get()->setActiveWeightIdx(_defaultWeightIndex());
        if (ao->path() != aonew->path()) continue;

        // If base-name matches, check compatibility
        // NB.
        // This evil is because dynamic_ptr_cast can't work on rivet_shared_ptr directly
        AO aoold = AO(dynamic_pointer_cast(ao.get())); //< OMG
        if ( !aoold || !bookingCompatible(aonew, aoold) ) {
          MSG_WARNING("Found incompatible pre-existing data object with same base path "
                      << aonew->path() << " for " << name());
          throw LookupError("Found incompatible pre-existing data object with same base path during AO booking");
        }

        // Finally, check all weight variations
        for (size_t weightIdx = 0; weightIdx < _weightNames().size(); ++weightIdx) {
          aoold.get()->setActiveWeightIdx(weightIdx);
          aonew.get()->setActiveWeightIdx(weightIdx);
          if (aoold->path() != aonew->path()) {
            MSG_WARNING("Found incompatible pre-existing data object with different weight-path "
                        << aonew->path() << " for " << name());
            throw LookupError("Found incompatible pre-existing data object with same weight-path during AO booking");
          }
        }

        // They're fully compatible: bind and return
        aoold.get()->unsetActiveWeight();
        MSG_TRACE("Bound pre-existing data object " << aoold->path() << " for " << name());
        return aoold;
      }

      // No equivalent found
      MSG_TRACE("Registered " << aonew->annotation("Type") << " " << aonew->path() << " for " << name());
      aonew.get()->unsetActiveWeight();
      _analysisobjects.push_back(aonew);
      return aonew;
    }

    /// Unregister a data object from the histogram system (by name)
    void removeAnalysisObject(const std::string& path);

    /// Unregister a data object from the histogram system (by pointer)
    void removeAnalysisObject(const MultiweightAOPtr& ao);

    // /// Get all data objects, for all analyses, from the AnalysisHandler
    // /// @todo Can we remove this? Why not call handler().getData()?
    // vector getAllData(bool includeorphans) const;

    /// Get a Rivet data object from the histogram system
    template <typename AO>
    const AO getAnalysisObject(const std::string& aoname) const {
      for (const MultiweightAOPtr& ao : analysisObjects()) {
        ao.get()->setActiveWeightIdx(_defaultWeightIndex());
        if (ao->path() == histoPath(aoname)) {
          // return dynamic_pointer_cast(ao);
          return AO(dynamic_pointer_cast(ao.get()));
        }
      }
      throw LookupError("Data object " + histoPath(aoname) + " not found");
    }

    // /// Get a data object from the histogram system
    // template
    // const std::shared_ptr getAnalysisObject(const std::string& name) const {
    //   foreach (const AnalysisObjectPtr& ao, analysisObjects()) {
    //     if (ao->path() == histoPath(name)) return dynamic_pointer_cast(ao);
    //   }
    //   throw LookupError("Data object " + histoPath(name) + " not found");
    // }

    // /// Get a data object from the histogram system (non-const)
    // template
    // std::shared_ptr getAnalysisObject(const std::string& name) {
    //   foreach (const AnalysisObjectPtr& ao, analysisObjects()) {
    //     if (ao->path() == histoPath(name)) return dynamic_pointer_cast(ao);
    //   }
    //   throw LookupError("Data object " + histoPath(name) + " not found");
    // }

    /// Get a data object from another analysis (e.g. preloaded
    /// calibration histogram).
    template <typename AO>
    AO getAnalysisObject(const std::string& ananame, const std::string& aoname) {
      MultiweightAOPtr ao = _getOtherAnalysisObject(ananame, aoname);
      // return dynamic_pointer_cast(ao);
      return AO(dynamic_pointer_cast(ao.get()));
    }

    // /// Get a named Histo1D object from the histogram system
    // const Histo1DPtr getHisto1D(const std::string& name) const {
    //   return getAnalysisObject(name);
    // }

    // /// Get a named Histo1D object from the histogram system (non-const)
    // Histo1DPtr getHisto1D(const std::string& name) {
    //   return getAnalysisObject(name);
    // }

    // /// Get a Histo1D object from the histogram system by axis ID codes (non-const)
    // const Histo1DPtr getHisto1D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) const {
    //   return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId));
    // }

    // /// Get a Histo1D object from the histogram system by axis ID codes (non-const)
    // Histo1DPtr getHisto1D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) {
    //   return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId));
    // }

    // /// Get a named Histo2D object from the histogram system
    // const Histo2DPtr getHisto2D(const std::string& name) const {
    //   return getAnalysisObject(name);
    // }

    // /// Get a named Histo2D object from the histogram system (non-const)
    // Histo2DPtr getHisto2D(const std::string& name) {
    //   return getAnalysisObject(name);
    // }

    // /// Get a Histo2D object from the histogram system by axis ID codes (non-const)
    // const Histo2DPtr getHisto2D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) const {
    //   return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId));
    // }

    // /// Get a Histo2D object from the histogram system by axis ID codes (non-const)
    // Histo2DPtr getHisto2D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) {
    //   return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId));
    // }

    // /// Get a named Profile1D object from the histogram system
    //
const Profile1DPtr getProfile1D(const std::string& name) const { // return getAnalysisObject(name); // } // /// Get a named Profile1D object from the histogram system (non-const) // Profile1DPtr getProfile1D(const std::string& name) { // return getAnalysisObject(name); // } // /// Get a Profile1D object from the histogram system by axis ID codes (non-const) // const Profile1DPtr getProfile1D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) const { // return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId)); // } // /// Get a Profile1D object from the histogram system by axis ID codes (non-const) // Profile1DPtr getProfile1D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) { // return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId)); // } // /// Get a named Profile2D object from the histogram system // const Profile2DPtr getProfile2D(const std::string& name) const { // return getAnalysisObject(name); // } // /// Get a named Profile2D object from the histogram system (non-const) // Profile2DPtr getProfile2D(const std::string& name) { // return getAnalysisObject(name); // } // /// Get a Profile2D object from the histogram system by axis ID codes (non-const) // const Profile2DPtr getProfile2D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) const { // return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId)); // } // /// Get a Profile2D object from the histogram system by axis ID codes (non-const) // Profile2DPtr getProfile2D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) { // return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId)); // } // /// Get a named Scatter2D object from the histogram system // const Scatter2DPtr getScatter2D(const std::string& name) const { // return getAnalysisObject(name); // } // /// Get a named Scatter2D object from the histogram system (non-const) // Scatter2DPtr getScatter2D(const std::string& name) { // return 
getAnalysisObject(name);
    // }

    // /// Get a Scatter2D object from the histogram system by axis ID codes (non-const)
    // const Scatter2DPtr getScatter2D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) const {
    //   return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId));
    // }

    // /// Get a Scatter2D object from the histogram system by axis ID codes (non-const)
    // Scatter2DPtr getScatter2D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) {
    //   return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId));
    // }

    //@}


  private:

    /// Name passed to constructor (used to find .info analysis data file, and as a fallback)
    string _defaultname;

    /// Pointer to analysis metadata object
    unique_ptr<AnalysisInfo> _info;

    /// Storage of all plot objects
    /// @todo Make this a map for fast lookup by path?
    vector<MultiweightAOPtr> _analysisobjects;

    /// @name Cross-section variables
    //@{
    double _crossSection;
    bool _gotCrossSection;
    //@}

    /// The controlling AnalysisHandler object.
    AnalysisHandler* _analysishandler;

    /// Collection of cached refdata to speed up many autobookings: the
    /// reference data file should only be read once.
    mutable std::map<std::string, YODA::AnalysisObjectPtr> _refdata;

    /// Options for this instance of the analysis
    map<string, string> _options;

    /// The string of options.
    string _optstring;


  private:

    /// @name Utility functions
    //@{

    /// Get the reference data for this paper and cache it.
    void _cacheRefData() const;

    //@}


    /// The assignment operator is private and must never be called.
    /// In fact, it should not even be implemented.
    Analysis& operator=(const Analysis&);

  };

}


// Include definition of analysis plugin system so that analyses automatically see it when including Analysis.hh
#include "Rivet/AnalysisBuilder.hh"

/// @def DECLARE_RIVET_PLUGIN
/// Preprocessor define to prettify the global-object plugin hook mechanism.
#define DECLARE_RIVET_PLUGIN(clsname) ::Rivet::AnalysisBuilder<clsname> plugin_ ## clsname

/// @def DECLARE_ALIASED_RIVET_PLUGIN
/// Preprocessor define to prettify the global-object plugin hook mechanism, with an extra alias name for this analysis.
// #define DECLARE_ALIASED_RIVET_PLUGIN(clsname, alias) Rivet::AnalysisBuilder<clsname> plugin_ ## clsname ## ( ## #alias ## )
#define DECLARE_ALIASED_RIVET_PLUGIN(clsname, alias) DECLARE_RIVET_PLUGIN(clsname)( #alias )

/// @def DEFAULT_RIVET_ANALYSIS_CONSTRUCTOR
/// Preprocessor define to prettify the manky constructor with name string argument
#define DEFAULT_RIVET_ANALYSIS_CONSTRUCTOR(clsname) clsname() : Analysis(# clsname) {}

/// @def DEFAULT_RIVET_ANALYSIS_CTOR
/// Slight abbreviation for DEFAULT_RIVET_ANALYSIS_CONSTRUCTOR
#define DEFAULT_RIVET_ANALYSIS_CTOR(clsname) DEFAULT_RIVET_ANALYSIS_CONSTRUCTOR(clsname)

#endif

diff --git a/include/Rivet/AnalysisHandler.hh b/include/Rivet/AnalysisHandler.hh
--- a/include/Rivet/AnalysisHandler.hh
+++ b/include/Rivet/AnalysisHandler.hh
@@ -1,360 +1,366 @@
// -*- C++ -*-
#ifndef RIVET_RivetHandler_HH
#define RIVET_RivetHandler_HH

#include "Rivet/Config/RivetCommon.hh"
#include "Rivet/Particle.hh"
#include "Rivet/AnalysisLoader.hh"
#include "Rivet/Tools/RivetYODA.hh"

namespace Rivet {

  // Forward declaration and smart pointer for Analysis
  class Analysis;
  typedef std::shared_ptr<Analysis> AnaHandle;


  /// A class which handles a number of analysis objects to be applied to
  /// generated events. An {@link Analysis}' AnalysisHandler is also responsible
  /// for handling the final writing-out of histograms.
  class AnalysisHandler {
  public:

    /// @name Constructors and destructors.
    //@{

    /// Preferred constructor, with optional run name.
    AnalysisHandler(const string& runname="");

    /// @brief Destructor
    /// The destructor is not virtual, as this class should not be inherited from.
    ~AnalysisHandler();

    //@}

  private:

    /// Get a logger object.
    Log& getLog() const;

  public:

    /// @name Run properties and weights
    //@{

    /// Get the name of this run.
    string runName() const;

    /// Get the number of events seen. Should only really be used by external
    /// steering code or analyses in the finalize phase.
    size_t numEvents() const;

    /// @brief Access the sum of the event weights seen
    ///
    /// This is the weighted equivalent of the number of events. It should only
    /// be used by external steering code or analyses in the finalize phase.
    double sumW() const { return _eventCounter->sumW(); }

    /// Access to the sum of squared-weights
    double sumW2() const { return _eventCounter->sumW2(); }

    /// Names of event weight categories
    const vector<string>& weightNames() const { return _weightNames; }

    /// Number of event weight categories
    size_t numWeights() const { return _weightNames.size(); }

    /// Are any of the weights non-numeric?
    bool haveNamedWeights() const;

    /// Set the weight names from a GenEvent
    void setWeightNames(const GenEvent& ge);

    /// Get the index of the nominal weight-stream
    size_t defaultWeightIndex() const { return _defaultWeightIdx; }

    //@}


    /// @name Cross-sections
    //@{

    /// Get the cross-section known to the handler.
    Scatter1DPtr crossSection() const { return _xs; }

    /// Set the cross-section for the process being generated.
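    /// A hedged sketch (illustrative steering code, not part of this header;
    /// the variable names are assumptions) of how the run-property accessors
    /// above might be used after a run:
    /// @code
    ///   AnalysisHandler ah("myrun");
    ///   // ... ah.init(firstEvent); ah.analyze(evt); ...; ah.finalize();
    ///   const double nev  = ah.numEvents();
    ///   const double sumw = ah.sumW();        // weighted event count
    ///   const size_t nwts = ah.numWeights();  // number of weight streams
    /// @endcode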
    AnalysisHandler& setCrossSection(double xs, double xserr);

    /// Get the nominal cross-section
    double nominalCrossSection() const {
      _xs.get()->setActiveWeightIdx(_defaultWeightIdx);
      const YODA::Scatter1D::Points& ps = _xs->points();
      if (ps.size() != 1) {
        string errMsg = "cross section missing when requesting nominal cross section";
        throw Error(errMsg);
      }
      double xs = ps[0].x();
      _xs.get()->unsetActiveWeight();
      return xs;
    }

    //@}


    /// @name Beams
    //@{

    /// Set the beam particles for this run
    AnalysisHandler& setRunBeams(const ParticlePair& beams) {
      _beams = beams;
      MSG_DEBUG("Setting run beams = " << beams << " @ " << sqrtS()/GeV << " GeV");
      return *this;
    }

    /// Get the beam particles for this run, usually determined from the first event.
    const ParticlePair& beams() const { return _beams; }

    /// Get beam IDs for this run, usually determined from the first event.
    /// @deprecated Use standalone beamIds(ah.beams()), to clean AH interface
    PdgIdPair beamIds() const;

    /// Get energy for this run, usually determined from the first event.
    /// @deprecated Use standalone sqrtS(ah.beams()), to clean AH interface
    double sqrtS() const;

    /// Setter for _ignoreBeams
    void setIgnoreBeams(bool ignore=true);

+    /// Setter for _skipWeights
+    void skipMultiWeights(bool ignore=false);
+
    //@}


    /// @name Handle analyses
    //@{

    /// Get a list of the currently registered analyses' names.
    std::vector<std::string> analysisNames() const;

    /// Get the collection of currently registered analyses.
    const std::map<std::string, AnaHandle>& analysesMap() const {
      return _analyses;
    }

    /// Get the collection of currently registered analyses.
    std::vector<AnaHandle> analyses() const {
      std::vector<AnaHandle> rtn;
      rtn.reserve(_analyses.size());
      for (const auto& apair : _analyses) rtn.push_back(apair.second);
      return rtn;
    }

    /// Get a registered analysis by name.
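    /// A hedged sketch (illustrative; MC_JETS and MC_XS are real Rivet analysis
    /// names, but the snippet itself is an assumption, not from this header) of
    /// registering and retrieving analyses:
    /// @code
    ///   AnalysisHandler ah;
    ///   ah.addAnalysis("MC_JETS");
    ///   ah.addAnalyses({"MC_JETS", "MC_XS"});  // unknown names are skipped, no throw
    ///   AnaHandle a = ah.analysis("MC_JETS");  // throws LookupError if missing
    /// @endcode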
    AnaHandle analysis(const std::string& analysisname) {
      if ( _analyses.find(analysisname) == _analyses.end() )
        throw LookupError("No analysis named '" + analysisname + "' registered in AnalysisHandler");
      try {
        return _analyses[analysisname];
      } catch (...) {
        throw LookupError("No analysis named '" + analysisname + "' registered in AnalysisHandler");
      }
    }

    /// Add an analysis to the run list by object
    AnalysisHandler& addAnalysis(Analysis* analysis);

    /// @brief Add an analysis to the run list using its name.
    ///
    /// The actual Analysis to be used will be obtained via
    /// AnalysisLoader::getAnalysis(string). If no matching analysis is found,
    /// no analysis is added (i.e. the null pointer is checked and discarded).
    AnalysisHandler& addAnalysis(const std::string& analysisname);

    /// @brief Add an analysis with a map of analysis options.
    AnalysisHandler& addAnalysis(const std::string& analysisname, std::map<string, string> pars);

    /// @brief Add analyses to the run list using their names.
    ///
    /// The actual {@link Analysis}' to be used will be obtained via
    /// AnalysisHandler::addAnalysis(string), which in turn uses
    /// AnalysisLoader::getAnalysis(string). If no matching analysis is found
    /// for a given name, no analysis is added, but also no error is thrown.
    AnalysisHandler& addAnalyses(const std::vector<std::string>& analysisnames);

    /// Remove an analysis from the run list using its name.
    AnalysisHandler& removeAnalysis(const std::string& analysisname);

    /// Remove analyses from the run list using their names.
    AnalysisHandler& removeAnalyses(const std::vector<std::string>& analysisnames);

    ///
    //@}


    /// @name Main init/execute/finalise
    //@{

    /// Initialize a run, with the run beams taken from the example event.
    void init(const GenEvent& event);

    /// @brief Analyze the given \a event by reference.
    ///
    /// This function will call the AnalysisBase::analyze() function of all
    /// included analysis objects.
    void analyze(const GenEvent& event);

    /// @brief Analyze the given \a event by pointer.
    ///
    /// This function will call the AnalysisBase::analyze() function of all
    /// included analysis objects, after checking the event pointer validity.
    void analyze(const GenEvent* event);

    /// Finalize a run. This function calls the AnalysisBase::finalize()
    /// functions of all included analysis objects.
    void finalize();

    //@}


    /// @name Histogram / data object access
    //@{

    /// Read analysis plots into the histo collection (via addData) from the named file.
    void readData(const std::string& filename);

    /// Get all multi-weight Rivet analysis object wrappers
    vector<MultiweightAOPtr> getRivetAOs() const;

    /// Get a pointer to a preloaded yoda object with the given path,
    /// or null if path is not found.
    const YODA::AnalysisObjectPtr getPreload(string path) const {
      auto it = _preloads.find(path);
      if ( it == _preloads.end() ) return nullptr;
      return it->second;
    }

    /// Write all analyses' plots (via getData) to the named file.
    void writeData(const std::string& filename) const;

    /// Tell Rivet to dump intermediate results to a file named @a
    /// dumpfile every @a period'th event. If @a period is not positive,
    /// no dumping will be done.
    void dump(string dumpfile, int period) {
      _dumpPeriod = period;
      _dumpFile = dumpfile;
    }

    /// Take the vector of yoda files and merge them together using
    /// the cross section and weight information provided in each
    /// file. Each file in @a aofiles is assumed to have been produced
    /// by Rivet. By default the files are assumed to contain
    /// different processes (or the same process but mutually
    /// exclusive cuts), but if @a equiv is true, the files are
    /// assumed to contain output of completely equivalent (but
    /// statistically independent) Rivet runs. The corresponding
    /// analyses will be loaded and their analysis objects will be
    /// filled with the merged result. finalize() will be run on each
    /// relevant analysis. The resulting YODA file can then be written
    /// out by writeData().
    /// If delopts is non-empty, it is assumed to
    /// contain names of different options to be merged into the same
    /// analysis objects.
    void mergeYodas(const vector<string>& aofiles,
                    const vector<string>& delopts = vector<string>(),
                    bool equiv = false);

    /// Helper function to strip specific options from data object paths.
    void stripOptions(YODA::AnalysisObjectPtr ao, const vector<string>& delopts) const;

    //@}


    /// Indicate which Rivet stage we're in.
    /// At the moment, only INIT is used to enable booking.
    enum class Stage { OTHER, INIT, FINALIZE };

    /// Which stage are we in?
    Stage stage() const { return _stage; }

  private:

    /// Current handler stage
    Stage _stage = Stage::OTHER;

    /// The collection of Analysis objects to be used.
    std::map<std::string, AnaHandle> _analyses;

    /// A vector of pre-loaded objects which do not have a valid
    /// Analysis plugged in.
    map<string, YODA::AnalysisObjectPtr> _preloads;

    /// A vector containing copies of analysis objects after
    /// finalize() has been run.
    vector<YODA::AnalysisObjectPtr> _finalizedAOs;


    /// @name Run properties
    //@{

    /// Weight names
    std::vector<std::string> _weightNames;
    std::vector<std::valarray<double> > _subEventWeights;
    size_t _numWeightTypes; // always == WeightVector.size()

    /// Run name
    std::string _runname;

    /// Event counter
    mutable CounterPtr _eventCounter;

    /// Cross-section known to AH
    Scatter1DPtr _xs;

    /// Beams used by this run.
    ParticlePair _beams;

    /// Flag to check if init has been called
    bool _initialised;

    /// Flag whether input event beams should be ignored in compatibility check
    bool _ignoreBeams;

+    /// Flag to check if multiweights should be included
+    bool _skipWeights;
+
    /// Current event number
    int _eventNumber;

    /// The index in the weight vector for the nominal weight stream
    size_t _defaultWeightIdx;

    /// Determines how often Rivet runs finalize() and writes the
    /// result to a YODA file.
    int _dumpPeriod;

    /// The name of a YODA file to which Rivet periodically dumps
    /// results.
    string _dumpFile;

    /// Flag to indicate periodic dumping is in progress
    bool _dumping;

    //@}


  private:

    /// The assignment operator is private and must never be called.
/// In fact, it should not even be implemented. AnalysisHandler& operator=(const AnalysisHandler&); /// The copy constructor is private and must never be called. In /// fact, it should not even be implemented. AnalysisHandler(const AnalysisHandler&); }; } #endif diff --git a/include/Rivet/Makefile.am b/include/Rivet/Makefile.am --- a/include/Rivet/Makefile.am +++ b/include/Rivet/Makefile.am @@ -1,375 +1,359 @@ ## Internal headers - not to be installed nobase_dist_noinst_HEADERS = ## Public headers - to be installed nobase_pkginclude_HEADERS = ## Rivet interface nobase_pkginclude_HEADERS += \ Rivet.hh \ Run.hh \ Event.hh \ ParticleBase.hh \ Particle.fhh Particle.hh \ Jet.fhh Jet.hh \ Projection.fhh Projection.hh \ ProjectionApplier.hh \ ProjectionHandler.hh \ Analysis.hh \ AnalysisHandler.hh \ AnalysisInfo.hh \ AnalysisBuilder.hh \ AnalysisLoader.hh ## Build config stuff nobase_pkginclude_HEADERS += \ Config/RivetCommon.hh ## Projections nobase_pkginclude_HEADERS += \ Projections/AliceCommon.hh \ Projections/AxesDefinition.hh \ Projections/Beam.hh \ Projections/BeamThrust.hh \ Projections/CentralEtHCM.hh \ Projections/CentralityProjection.hh \ Projections/ChargedFinalState.hh \ Projections/ChargedLeptons.hh \ Projections/ConstLossyFinalState.hh \ Projections/DirectFinalState.hh \ Projections/DISDiffHadron.hh \ Projections/DISFinalState.hh \ Projections/DISKinematics.hh \ Projections/DISLepton.hh \ Projections/DISRapidityGap.hh \ Projections/DressedLeptons.hh \ Projections/EventMixingFinalState.hh \ Projections/FastJets.hh \ Projections/PxConePlugin.hh \ Projections/FinalPartons.hh \ Projections/FinalState.hh \ Projections/FParameter.hh \ Projections/GeneratedPercentileProjection.hh \ Projections/HadronicFinalState.hh \ Projections/HeavyHadrons.hh \ Projections/Hemispheres.hh \ Projections/HepMCHeavyIon.hh \ Projections/IdentifiedFinalState.hh \ Projections/ImpactParameterProjection.hh \ Projections/IndirectFinalState.hh \ Projections/InitialQuarks.hh \ 
Projections/InvMassFinalState.hh \ Projections/JetAlg.hh \ Projections/JetShape.hh \ Projections/LeadingParticlesFinalState.hh \ Projections/LossyFinalState.hh \ Projections/MergedFinalState.hh \ Projections/MissingMomentum.hh \ Projections/NeutralFinalState.hh \ Projections/NonHadronicFinalState.hh \ Projections/NonPromptFinalState.hh \ Projections/ParisiTensor.hh \ Projections/ParticleFinder.hh \ Projections/PartonicTops.hh \ Projections/PercentileProjection.hh \ Projections/PrimaryHadrons.hh \ Projections/PrimaryParticles.hh \ Projections/PromptFinalState.hh \ Projections/SingleValueProjection.hh \ Projections/SmearedParticles.hh \ Projections/SmearedJets.hh \ Projections/SmearedMET.hh \ Projections/Sphericity.hh \ Projections/Spherocity.hh \ Projections/TauFinder.hh \ Projections/Thrust.hh \ Projections/TriggerCDFRun0Run1.hh \ Projections/TriggerCDFRun2.hh \ Projections/TriggerProjection.hh \ Projections/TriggerUA5.hh \ Projections/UnstableFinalState.hh \ Projections/UndressBeamLeptons.hh \ Projections/UnstableParticles.hh \ Projections/UserCentEstimate.hh \ Projections/VetoedFinalState.hh \ Projections/VisibleFinalState.hh \ Projections/WFinder.hh \ Projections/ZFinder.hh ## Meta-projection convenience headers nobase_pkginclude_HEADERS += \ Projections/FinalStates.hh \ Projections/Smearing.hh ## Analysis base class headers # TODO: Move to Rivet/AnalysisTools header dir? 
nobase_pkginclude_HEADERS += \ Analyses/MC_Cent_pPb.hh \ Analyses/MC_ParticleAnalysis.hh \ Analyses/MC_JetAnalysis.hh \ Analyses/MC_JetSplittings.hh ## Tools nobase_pkginclude_HEADERS += \ Tools/AliceCommon.hh \ Tools/AtlasCommon.hh \ Tools/BeamConstraint.hh \ Tools/BinnedHistogram.hh \ Tools/CentralityBinner.hh \ Tools/Cmp.fhh \ Tools/Cmp.hh \ Tools/Correlators.hh \ Tools/Cutflow.hh \ Tools/Cuts.fhh \ Tools/Cuts.hh \ Tools/Exceptions.hh \ Tools/JetUtils.hh \ Tools/Logging.hh \ Tools/Random.hh \ Tools/ParticleBaseUtils.hh \ Tools/ParticleIdUtils.hh \ Tools/ParticleUtils.hh \ Tools/ParticleName.hh \ Tools/Percentile.hh \ Tools/PrettyPrint.hh \ Tools/ReaderCompressedAscii.hh \ Tools/RivetPaths.hh \ Tools/RivetSTL.hh \ Tools/RivetFastJet.hh \ Tools/RivetHepMC.hh \ Tools/RivetYODA.hh \ Tools/RivetMT2.hh \ Tools/SmearingFunctions.hh \ Tools/MomentumSmearingFunctions.hh \ Tools/ParticleSmearingFunctions.hh \ Tools/JetSmearingFunctions.hh \ Tools/TypeTraits.hh \ Tools/WriterCompressedAscii.hh \ - Tools/Utils.hh \ - Tools/fjcontrib/AxesDefinition.hh \ - Tools/fjcontrib/BottomUpSoftDrop.hh \ - Tools/fjcontrib/EnergyCorrelator.hh \ - Tools/fjcontrib/ExtraRecombiners.hh \ - Tools/fjcontrib/IteratedSoftDrop.hh \ - Tools/fjcontrib/MeasureDefinition.hh \ - Tools/fjcontrib/ModifiedMassDropTagger.hh \ - Tools/fjcontrib/Njettiness.hh \ - Tools/fjcontrib/NjettinessPlugin.hh \ - Tools/fjcontrib/Nsubjettiness.hh \ - Tools/fjcontrib/Recluster.hh \ - Tools/fjcontrib/RecursiveSoftDrop.hh \ - Tools/fjcontrib/RecursiveSymmetryCutBase.hh \ - Tools/fjcontrib/SoftDrop.hh \ - Tools/fjcontrib/TauComponents.hh \ - Tools/fjcontrib/XConePlugin.hh + Tools/Utils.hh nobase_dist_noinst_HEADERS += \ Tools/osdir.hh \ Math/eigen3/COPYING.GPL \ Math/eigen3/README.md ## Maths nobase_pkginclude_HEADERS += \ Math/Matrices.hh \ Math/Vector3.hh \ Math/VectorN.hh \ Math/MatrixN.hh \ Math/MathHeader.hh \ Math/Vectors.hh \ Math/LorentzTrans.hh \ Math/Matrix3.hh \ Math/MathUtils.hh \ Math/Vector4.hh \ Math/Math.hh 
\ Math/Units.hh \ Math/Constants.hh \ Math/eigen3/Cholesky \ Math/eigen3/Core \ Math/eigen3/Dense \ Math/eigen3/Eigenvalues \ Math/eigen3/Geometry \ Math/eigen3/Householder \ Math/eigen3/Jacobi \ Math/eigen3/LU \ Math/eigen3/QR \ Math/eigen3/src/Cholesky/LDLT.h \ Math/eigen3/src/Cholesky/LLT.h \ Math/eigen3/src/Core/arch/AltiVec/Complex.h \ Math/eigen3/src/Core/arch/AltiVec/MathFunctions.h \ Math/eigen3/src/Core/arch/AltiVec/PacketMath.h \ Math/eigen3/src/Core/arch/AVX/Complex.h \ Math/eigen3/src/Core/arch/AVX/MathFunctions.h \ Math/eigen3/src/Core/arch/AVX/PacketMath.h \ Math/eigen3/src/Core/arch/AVX/TypeCasting.h \ Math/eigen3/src/Core/arch/AVX512/MathFunctions.h \ Math/eigen3/src/Core/arch/AVX512/PacketMath.h \ Math/eigen3/src/Core/arch/CUDA/Complex.h \ Math/eigen3/src/Core/arch/CUDA/Half.h \ Math/eigen3/src/Core/arch/CUDA/MathFunctions.h \ Math/eigen3/src/Core/arch/CUDA/PacketMath.h \ Math/eigen3/src/Core/arch/CUDA/PacketMathHalf.h \ Math/eigen3/src/Core/arch/CUDA/TypeCasting.h \ Math/eigen3/src/Core/arch/Default/Settings.h \ Math/eigen3/src/Core/arch/NEON/Complex.h \ Math/eigen3/src/Core/arch/NEON/MathFunctions.h \ Math/eigen3/src/Core/arch/NEON/PacketMath.h \ Math/eigen3/src/Core/arch/SSE/Complex.h \ Math/eigen3/src/Core/arch/SSE/MathFunctions.h \ Math/eigen3/src/Core/arch/SSE/PacketMath.h \ Math/eigen3/src/Core/arch/SSE/TypeCasting.h \ Math/eigen3/src/Core/arch/ZVector/Complex.h \ Math/eigen3/src/Core/arch/ZVector/MathFunctions.h \ Math/eigen3/src/Core/arch/ZVector/PacketMath.h \ Math/eigen3/src/Core/Array.h \ Math/eigen3/src/Core/ArrayBase.h \ Math/eigen3/src/Core/ArrayWrapper.h \ Math/eigen3/src/Core/Assign.h \ Math/eigen3/src/Core/AssignEvaluator.h \ Math/eigen3/src/Core/BandMatrix.h \ Math/eigen3/src/Core/Block.h \ Math/eigen3/src/Core/BooleanRedux.h \ Math/eigen3/src/Core/CommaInitializer.h \ Math/eigen3/src/Core/ConditionEstimator.h \ Math/eigen3/src/Core/CoreEvaluators.h \ Math/eigen3/src/Core/CoreIterators.h \ Math/eigen3/src/Core/CwiseBinaryOp.h \ 
Math/eigen3/src/Core/CwiseNullaryOp.h \ Math/eigen3/src/Core/CwiseTernaryOp.h \ Math/eigen3/src/Core/CwiseUnaryOp.h \ Math/eigen3/src/Core/CwiseUnaryView.h \ Math/eigen3/src/Core/DenseBase.h \ Math/eigen3/src/Core/DenseCoeffsBase.h \ Math/eigen3/src/Core/DenseStorage.h \ Math/eigen3/src/Core/Diagonal.h \ Math/eigen3/src/Core/DiagonalMatrix.h \ Math/eigen3/src/Core/DiagonalProduct.h \ Math/eigen3/src/Core/Dot.h \ Math/eigen3/src/Core/EigenBase.h \ Math/eigen3/src/Core/functors/AssignmentFunctors.h \ Math/eigen3/src/Core/functors/BinaryFunctors.h \ Math/eigen3/src/Core/functors/NullaryFunctors.h \ Math/eigen3/src/Core/functors/StlFunctors.h \ Math/eigen3/src/Core/functors/TernaryFunctors.h \ Math/eigen3/src/Core/functors/UnaryFunctors.h \ Math/eigen3/src/Core/Fuzzy.h \ Math/eigen3/src/Core/GeneralProduct.h \ Math/eigen3/src/Core/GenericPacketMath.h \ Math/eigen3/src/Core/GlobalFunctions.h \ Math/eigen3/src/Core/Inverse.h \ Math/eigen3/src/Core/IO.h \ Math/eigen3/src/Core/Map.h \ Math/eigen3/src/Core/MapBase.h \ Math/eigen3/src/Core/MathFunctions.h \ Math/eigen3/src/Core/MathFunctionsImpl.h \ Math/eigen3/src/Core/Matrix.h \ Math/eigen3/src/Core/MatrixBase.h \ Math/eigen3/src/Core/NestByValue.h \ Math/eigen3/src/Core/NoAlias.h \ Math/eigen3/src/Core/NumTraits.h \ Math/eigen3/src/Core/PermutationMatrix.h \ Math/eigen3/src/Core/PlainObjectBase.h \ Math/eigen3/src/Core/Product.h \ Math/eigen3/src/Core/ProductEvaluators.h \ Math/eigen3/src/Core/products/GeneralBlockPanelKernel.h \ Math/eigen3/src/Core/products/GeneralMatrixMatrix.h \ Math/eigen3/src/Core/products/GeneralMatrixMatrixTriangular.h \ Math/eigen3/src/Core/products/GeneralMatrixVector.h \ Math/eigen3/src/Core/products/Parallelizer.h \ Math/eigen3/src/Core/products/SelfadjointMatrixMatrix.h \ Math/eigen3/src/Core/products/SelfadjointMatrixVector.h \ Math/eigen3/src/Core/products/SelfadjointProduct.h \ Math/eigen3/src/Core/products/SelfadjointRank2Update.h \ Math/eigen3/src/Core/products/TriangularMatrixMatrix.h \ 
Math/eigen3/src/Core/products/TriangularMatrixVector.h \ Math/eigen3/src/Core/products/TriangularSolverMatrix.h \ Math/eigen3/src/Core/products/TriangularSolverVector.h \ Math/eigen3/src/Core/Random.h \ Math/eigen3/src/Core/Redux.h \ Math/eigen3/src/Core/Ref.h \ Math/eigen3/src/Core/Replicate.h \ Math/eigen3/src/Core/ReturnByValue.h \ Math/eigen3/src/Core/Reverse.h \ Math/eigen3/src/Core/Select.h \ Math/eigen3/src/Core/SelfAdjointView.h \ Math/eigen3/src/Core/SelfCwiseBinaryOp.h \ Math/eigen3/src/Core/Solve.h \ Math/eigen3/src/Core/SolverBase.h \ Math/eigen3/src/Core/SolveTriangular.h \ Math/eigen3/src/Core/StableNorm.h \ Math/eigen3/src/Core/Stride.h \ Math/eigen3/src/Core/Swap.h \ Math/eigen3/src/Core/Transpose.h \ Math/eigen3/src/Core/Transpositions.h \ Math/eigen3/src/Core/TriangularMatrix.h \ Math/eigen3/src/Core/util/BlasUtil.h \ Math/eigen3/src/Core/util/Constants.h \ Math/eigen3/src/Core/util/DisableStupidWarnings.h \ Math/eigen3/src/Core/util/ForwardDeclarations.h \ Math/eigen3/src/Core/util/Macros.h \ Math/eigen3/src/Core/util/Memory.h \ Math/eigen3/src/Core/util/Meta.h \ Math/eigen3/src/Core/util/MKL_support.h \ Math/eigen3/src/Core/util/NonMPL2.h \ Math/eigen3/src/Core/util/ReenableStupidWarnings.h \ Math/eigen3/src/Core/util/StaticAssert.h \ Math/eigen3/src/Core/util/XprHelper.h \ Math/eigen3/src/Core/VectorBlock.h \ Math/eigen3/src/Core/VectorwiseOp.h \ Math/eigen3/src/Core/Visitor.h \ Math/eigen3/src/Eigenvalues/ComplexEigenSolver.h \ Math/eigen3/src/Eigenvalues/ComplexSchur.h \ Math/eigen3/src/Eigenvalues/EigenSolver.h \ Math/eigen3/src/Eigenvalues/GeneralizedEigenSolver.h \ Math/eigen3/src/Eigenvalues/GeneralizedSelfAdjointEigenSolver.h \ Math/eigen3/src/Eigenvalues/HessenbergDecomposition.h \ Math/eigen3/src/Eigenvalues/MatrixBaseEigenvalues.h \ Math/eigen3/src/Eigenvalues/RealQZ.h \ Math/eigen3/src/Eigenvalues/RealSchur.h \ Math/eigen3/src/Eigenvalues/SelfAdjointEigenSolver.h \ Math/eigen3/src/Eigenvalues/Tridiagonalization.h \ 
Math/eigen3/src/Geometry/AlignedBox.h \ Math/eigen3/src/Geometry/AngleAxis.h \ Math/eigen3/src/Geometry/arch/Geometry_SSE.h \ Math/eigen3/src/Geometry/EulerAngles.h \ Math/eigen3/src/Geometry/Homogeneous.h \ Math/eigen3/src/Geometry/Hyperplane.h \ Math/eigen3/src/Geometry/OrthoMethods.h \ Math/eigen3/src/Geometry/ParametrizedLine.h \ Math/eigen3/src/Geometry/Quaternion.h \ Math/eigen3/src/Geometry/Rotation2D.h \ Math/eigen3/src/Geometry/RotationBase.h \ Math/eigen3/src/Geometry/Scaling.h \ Math/eigen3/src/Geometry/Transform.h \ Math/eigen3/src/Geometry/Translation.h \ Math/eigen3/src/Geometry/Umeyama.h \ Math/eigen3/src/Householder/BlockHouseholder.h \ Math/eigen3/src/Householder/Householder.h \ Math/eigen3/src/Householder/HouseholderSequence.h \ Math/eigen3/src/Jacobi/Jacobi.h \ Math/eigen3/src/LU/arch/Inverse_SSE.h \ Math/eigen3/src/LU/Determinant.h \ Math/eigen3/src/LU/FullPivLU.h \ Math/eigen3/src/LU/InverseImpl.h \ Math/eigen3/src/LU/PartialPivLU.h \ Math/eigen3/src/misc/Image.h \ Math/eigen3/src/misc/Kernel.h \ Math/eigen3/src/misc/RealSvd2x2.h \ Math/eigen3/src/plugins/ArrayCwiseBinaryOps.h \ Math/eigen3/src/plugins/ArrayCwiseUnaryOps.h \ Math/eigen3/src/plugins/BlockMethods.h \ Math/eigen3/src/plugins/CommonCwiseBinaryOps.h \ Math/eigen3/src/plugins/CommonCwiseUnaryOps.h \ Math/eigen3/src/plugins/MatrixCwiseBinaryOps.h \ Math/eigen3/src/plugins/MatrixCwiseUnaryOps.h \ Math/eigen3/src/QR/ColPivHouseholderQR.h \ Math/eigen3/src/QR/CompleteOrthogonalDecomposition.h \ Math/eigen3/src/QR/FullPivHouseholderQR.h \ Math/eigen3/src/QR/HouseholderQR.h \ Math/eigen3/src/SVD/BDCSVD.h \ Math/eigen3/src/SVD/JacobiSVD.h \ Math/eigen3/src/SVD/SVDBase.h \ Math/eigen3/src/SVD/UpperBidiagonalization.h \ Math/eigen3/SVD diff --git a/include/Rivet/Projections/Spherocity.hh b/include/Rivet/Projections/Spherocity.hh --- a/include/Rivet/Projections/Spherocity.hh +++ b/include/Rivet/Projections/Spherocity.hh @@ -1,134 +1,129 @@ // -*- C++ -*- #ifndef RIVET_Spherocity_HH #define 
RIVET_Spherocity_HH

#include "Rivet/Projection.hh"
#include "Rivet/Projections/AxesDefinition.hh"
#include "Rivet/Projections/FinalState.hh"
#include "Rivet/Event.hh"

namespace Rivet {

  /// @brief Get the transverse spherocity scalars for hadron-colliders.
  ///
  /// @author Holger Schulz
  ///
  /// The scalar (minimum) transverse spherocity is defined as
  /// \f[
  /// S = \frac{\pi^2}{4} \mathrm{min}_{\vec{n}_\perp} \left( \frac{\sum_i \left|\vec{p}_{\perp,i} \times \vec{n}_\perp \right|}{\sum_i |\vec{p}_{\perp,i}|} \right)^2
  /// \f],
  /// with the direction of the unit vector \f$ \vec{n}_\perp \f$ which minimises \f$ S \f$
  /// being identified as the spherocity axis. The unit vector which maximises the spherocity
  /// scalar in the plane perpendicular to \f$ \vec{n} \f$ is the "spherocity major"
  /// direction, and the vector perpendicular to both the spherocity and spherocity major directions
  /// is the spherocity minor. Both the major and minor directions have associated spherocity
  /// scalars.
  ///
  /// Care must be taken in the case of Drell-Yan processes - there we should use the
  /// newly proposed observable \f$ a_T \f$.
  class Spherocity : public AxesDefinition {
  public:

    // Default Constructor
    Spherocity() {}

    /// Constructor.
-   Spherocity(const FinalState& fsp)
-     : _calculatedSpherocity(false)
-   {
+   Spherocity(const FinalState& fsp) {
      setName("Spherocity");
      declare(fsp, "FS");
    }

    /// Clone on the heap.
    DEFAULT_RIVET_PROJ_CLONE(Spherocity);

  protected:

    /// Perform the projection on the Event
    void project(const Event& e) {
      const vector<Particle> ps = applyProjection<FinalState>(e, "FS").particles();
      calc(ps);
    }

    /// Compare projections
    CmpState compare(const Projection& p) const {
      return mkNamedPCmp(p, "FS");
    }

  public:

    /// @name Spherocity scalar accessors
    //@{
    /// The spherocity scalar, \f$ S \f$, (minimum spherocity).
    double spherocity() const { return _spherocities[0]; }
    //@}

    /// @name Spherocity axis accessors
    //@{
    /// The spherocity axis.
    const Vector3& spherocityAxis() const { return _spherocityAxes[0]; }
    /// The spherocity major axis (axis of max spherocity perpendicular to spherocity axis).
    const Vector3& spherocityMajorAxis() const { return _spherocityAxes[1]; }
    /// The spherocity minor axis (axis perpendicular to spherocity and spherocity major).
    const Vector3& spherocityMinorAxis() const { return _spherocityAxes[2]; }
    //@}

    /// @name AxesDefinition axis accessors.
    //@{
    const Vector3& axis1() const { return spherocityAxis(); }
    const Vector3& axis2() const { return spherocityMajorAxis(); }
    const Vector3& axis3() const { return spherocityMinorAxis(); }
    ///@}

  public:

    /// @name Direct methods
    /// Ways to do the calculation directly, without engaging the caching system
    //@{

    /// Manually calculate the spherocity, without engaging the caching system
    void calc(const FinalState& fs);

    /// Manually calculate the spherocity, without engaging the caching system
    void calc(const vector<Particle>& fsparticles);

    /// Manually calculate the spherocity, without engaging the caching system
    void calc(const vector<FourMomentum>& fsmomenta);

    /// Manually calculate the spherocity, without engaging the caching system
    void calc(const vector<Vector3>& threeMomenta);

    //@}

  private:

    /// The spherocity scalars.
    vector<double> _spherocities;

    /// The spherocity axes.
    vector<Vector3> _spherocityAxes;

-   /// Caching flag to avoid costly recalculations.
-   bool _calculatedSpherocity;
-
  private:

    /// Explicitly calculate the spherocity values.
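As an illustration of the transverse spherocity formula documented in this header, the minimisation can be sketched by a brute-force azimuthal scan. This is not the projection's real implementation (which is declared above and implemented elsewhere in Rivet); the function name and the (px, py) input format are purely illustrative:

```cpp
#include <algorithm>
#include <cmath>
#include <utility>
#include <vector>

// Brute-force sketch of S = (pi^2/4) * min_n ( sum_i |pT_i x n| / sum_i |pT_i| )^2,
// scanning the azimuthal angle of the transverse unit vector n on a fine grid.
// Input momenta are (px, py) pairs; accuracy is limited by the grid spacing.
inline double spherocityScan(const std::vector<std::pair<double,double>>& pts) {
  const double PI = std::acos(-1.0);
  double sumPt = 0;
  for (const auto& p : pts) sumPt += std::hypot(p.first, p.second);
  double best = 1e99;
  for (int i = 0; i < 3600; ++i) {
    const double phi = i * PI / 1800.0;
    const double nx = std::cos(phi), ny = std::sin(phi);
    // |pT x n| in the transverse plane is |px*ny - py*nx|
    double cross = 0;
    for (const auto& p : pts) cross += std::fabs(p.first*ny - p.second*nx);
    best = std::min(best, cross / sumPt);
  }
  return (PI*PI/4.0) * best * best;
}
```

A perfectly back-to-back ("pencil-like") event gives S near 0, while an isotropic transverse event gives S near 1, matching the pi^2/4 normalisation in the doc comment.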
    void _calcSpherocity(const vector<Vector3>& fsmomenta);

  };

}

#endif

diff --git a/include/Rivet/Tools/Cmp.fhh b/include/Rivet/Tools/Cmp.fhh
--- a/include/Rivet/Tools/Cmp.fhh
+++ b/include/Rivet/Tools/Cmp.fhh
@@ -1,18 +1,36 @@
// -*- C++ -*-
#ifndef RIVET_Cmp_FHH
#define RIVET_Cmp_FHH

namespace Rivet {

  // Forward-declare the Cmp template class
  template <typename T> class Cmp;

+
+ /// Enumeration of possible value-comparison states
  enum class CmpState {
-   UNDEF, LT, EQ, GT, NEQ
+   UNDEF, EQ, NEQ
  };

+ /// Representation of a CmpState as a string
+ inline std::string toString(const CmpState& cmpst) {
+   switch (cmpst) {
+     case CmpState::UNDEF: return "Cmp: ??";
+     case CmpState::EQ:    return "Cmp: ==";
+     case CmpState::NEQ:   return "Cmp: !=";
+   }
+   throw Error("CmpState value not in enum list");
+ }
+
+ /// Stream a CmpState via its toString representation
+ inline std::ostream& operator << (std::ostream& os, const CmpState& obj) {
+   os << toString(obj);
+   return os;
+ }
+
}

#endif

diff --git a/include/Rivet/Tools/Cmp.hh b/include/Rivet/Tools/Cmp.hh
--- a/include/Rivet/Tools/Cmp.hh
+++ b/include/Rivet/Tools/Cmp.hh
@@ -1,302 +1,303 @@
// -*- C++ -*-
#ifndef RIVET_Cmp_HH
#define RIVET_Cmp_HH

#include "Rivet/Config/RivetCommon.hh"
#include "Rivet/Projection.hh"
#include "Cmp.fhh"
#include <typeinfo>

namespace Rivet {

  /// Helper class when checking the ordering of two objects.
  ///
  /// Cmp is a helper class to be used when checking the ordering of two
  /// objects. When implicitly converted to an integer the value will be
  /// negative if the two objects used in the constructor are ordered and
  /// positive if they are not. Zero will be returned if they are equal.
  ///
  /// The main usage of the Cmp class is if several variables should be
  /// checked for ordering in which case several Cmp objects can be
  /// combined as follows: cmp(a1, a2) || cmp(b1, b2) || cmp(c1, c2)
  /// where cmp is a global function for easy creation of Cmp objects.
  template <typename T>
  class Cmp final {
  public:

    /// @name Standard constructors etc.
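The reduced three-state CmpState enum and its toString() helper from the Cmp.fhh hunk can be exercised standalone. A minimal self-contained mirror of that logic, with std::runtime_error standing in for Rivet's Error type:

```cpp
#include <stdexcept>
#include <string>

// Standalone mirror of the reduced CmpState enum from the Cmp.fhh hunk:
// the LT/GT states are gone, so a comparison can only be undefined,
// equal, or unequal.
enum class CmpState { UNDEF, EQ, NEQ };

// Same string forms as the new toString() in the patch; std::runtime_error
// stands in for Rivet's Error type to keep this sketch self-contained.
inline std::string toString(const CmpState& cmpst) {
  switch (cmpst) {
    case CmpState::UNDEF: return "Cmp: ??";
    case CmpState::EQ:    return "Cmp: ==";
    case CmpState::NEQ:   return "Cmp: !=";
  }
  throw std::runtime_error("CmpState value not in enum list");
}
```

The throw after the switch is unreachable for the three named enumerators but guards against an out-of-range value cast into the enum.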
    //@{

    /// The default constructor.
    Cmp(const T& t1, const T& t2)
      : _value(CmpState::UNDEF), _objects(&t1, &t2) { }

    /// The copy constructor.
    template <typename U>
    Cmp(const Cmp<U>& x)
      : _value(x._value), _objects(nullptr, nullptr) { }

    /// The assignment operator.
    template <typename U>
    const Cmp<T>& operator=(const Cmp<U>& x) {
      _value = x;
      return *this;
    }

    //@}

  public:

    /// Automatically convert to an enum.
    operator CmpState() const {
      _compare();
      return _value;
    }

    /// If this state is equivalent, set this state to the state of \a c.
    template <typename U>
    const Cmp<T>& operator||(const Cmp<U>& c) const {
      _compare();
      if (_value == CmpState::EQ) _value = c;
      return *this;
    }

  private:

    /// Perform the actual comparison if necessary.
    void _compare() const {
      if (_value == CmpState::UNDEF) {
        std::less<T> l;
-       if ( l(*_objects.first, *_objects.second) ) _value = CmpState::LT;
-       else if ( l(*_objects.second, *_objects.first) ) _value = CmpState::GT;
+       if ( l(*_objects.first, *_objects.second) ) _value = CmpState::NEQ;
+       else if ( l(*_objects.second, *_objects.first) ) _value = CmpState::NEQ;
        else _value = CmpState::EQ;
      }
    }

    /// The state of this object.
    mutable CmpState _value;

    /// The objects to be compared.
    const pair<const T*, const T*> _objects;

  };


  /// @brief Specialization of Cmp for checking the ordering of two @a {Projection}s.
  ///
  /// Specialization of the Cmp helper class to be used when checking the
  /// ordering of two Projection objects. When implicitly converted to an
  /// integer the value will be negative if the two objects used in the
  /// constructor are ordered and positive if they are not. Zero will be
  /// returned if they are equal. This specialization uses directly the
  /// virtual compare() function in the Projection class.
  ///
  /// The main usage of the Cmp class is if several variables should be
  /// checked for ordering in which case several Cmp objects can be
  /// combined as follows: cmp(a1, a2) || cmp(b1, b2) || cmp(c1, c2)
  /// where cmp is a global function for easy creation of Cmp objects.
  template <>
  class Cmp<Projection> final {
  public:

    /// @name Standard constructors and destructors.
    //@{

    /// The default constructor.
    Cmp(const Projection& p1, const Projection& p2)
      : _value(CmpState::UNDEF), _objects(&p1, &p2) { }

    /// The copy constructor.
    template <typename U>
    Cmp(const Cmp<U>& x)
      : _value(x), _objects(nullptr, nullptr) { }

    /// The assignment operator.
    template <typename U>
    const Cmp<Projection>& operator=(const Cmp<U>& x) {
      _value = x;
      return *this;
    }

    //@}

  public:

    /// Automatically convert to an enum.
    operator CmpState() const {
      _compare();
      return _value;
    }

    /// If this state is equivalent, set this state to the state of \a c.
    template <typename U>
    const Cmp<Projection>& operator||(const Cmp<U>& c) const {
      _compare();
      if (_value == CmpState::EQ) _value = c;
      return *this;
    }

  private:

    /// Perform the actual comparison if necessary.
    void _compare() const {
      if (_value == CmpState::UNDEF) {
        const std::type_info& id1 = typeid(*_objects.first);
        const std::type_info& id2 = typeid(*_objects.second);
-       if (id1.before(id2)) _value = CmpState::LT;
-       else if (id2.before(id1)) _value = CmpState::GT;
+       if (id1.before(id2)) _value = CmpState::NEQ;
+       else if (id2.before(id1)) _value = CmpState::NEQ;
        else {
-         _value = _objects.first->compare(*_objects.second);
+         CmpState cmps = _objects.first->compare(*_objects.second);
+         if (cmps == CmpState::EQ) _value = CmpState::EQ;
+         else _value = CmpState::NEQ;
        }
      }
    }

  private:

    /// The state of this object.
    mutable CmpState _value;

    /// The objects to be compared.
    const pair<const Projection*, const Projection*> _objects;

  };


  /// @brief Specialization of Cmp for checking the ordering of two floating point numbers.
  ///
  /// When implicitly converted to an integer the value will be negative if the
  /// two objects used in the constructor are ordered and positive if they are
  /// not. Zero will be returned if they are equal. This specialization uses the
  /// Rivet fuzzyEquals function to indicate equivalence protected from
  /// numerical precision effects.
  ///
  /// The main usage of the Cmp class is if several variables should be
  /// checked for ordering in which case several Cmp objects can be
  /// combined as follows: cmp(a1, a2) || cmp(b1, b2) || cmp(c1, c2)
  /// where cmp is a global function for easy creation of Cmp objects.
  template <>
  class Cmp<double> final {
  public:

    /// @name Standard constructors and destructors.
    //@{

    /// The default constructor.
    Cmp(const double p1, const double p2)
      : _value(CmpState::UNDEF), _numA(p1), _numB(p2) { }

    /// The copy constructor.
    template <typename U>
    Cmp(const Cmp<U>& x)
      : _value(x), _numA(0.0), _numB(0.0) { }

    /// The assignment operator.
    template <typename U>
    const Cmp<double>& operator=(const Cmp<U>& x) {
      _value = x;
      return *this;
    }

    //@}

  public:

    /// Automatically convert to an enum.
    operator CmpState() const {
      _compare();
      return _value;
    }

    /// If this state is equivalent, set this state to the state of \a c.
    template <typename U>
    const Cmp<double>& operator||(const Cmp<U>& c) const {
      _compare();
      if (_value == CmpState::EQ) _value = c;
      return *this;
    }

  private:

    /// Perform the actual comparison if necessary.
    void _compare() const {
      if (_value == CmpState::UNDEF) {
        if (fuzzyEquals(_numA,_numB)) _value = CmpState::EQ;
-       else if (_numA < _numB) _value = CmpState::LT;
-       else _value = CmpState::GT;
+       else _value = CmpState::NEQ;
      }
    }

  private:

    /// The state of this object.
    mutable CmpState _value;

    /// The objects to be compared.
    const double _numA, _numB;

  };


  ///////////////////////////////////////////////////////////////////


  /// Global helper function for easy creation of Cmp objects.
  template <typename T>
  inline Cmp<T> cmp(const T& t1, const T& t2) {
    return Cmp<T>(t1, t2);
  }

  /// Typedef for Cmp<Projection>
  using PCmp = Cmp<Projection>;

  /// Global helper function for easy creation of Cmp<Projection> objects.
  inline Cmp<Projection> pcmp(const Projection& p1, const Projection& p2) {
    return Cmp<Projection>(p1, p2);
  }

  /// Global helper function for easy creation of Cmp<Projection> objects from
  /// two parent projections and their common name for the projection to be compared.
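The Cmp<double> specialisation above now collapses the fuzzy comparison to just EQ/NEQ. That logic can be sketched standalone, with a simplified relative-tolerance fuzzyEquals standing in for Rivet's version (which also treats values near zero specially):

```cpp
#include <cmath>

// Standalone sketch of the Cmp<double> comparison after this patch.
enum class CmpState { UNDEF, EQ, NEQ };

// Simplified fuzzyEquals: relative tolerance against the mean magnitude.
// This is an assumption standing in for Rivet's fuzzyEquals.
inline bool fuzzyEquals(double a, double b, double tol = 1e-5) {
  const double absavg = (std::fabs(a) + std::fabs(b)) / 2.0;
  return std::fabs(a - b) <= tol * absavg;
}

// The LT/GT branches of _compare() both became NEQ, so only two
// meaningful outcomes remain once the comparison has been performed.
inline CmpState cmpDoubles(double a, double b) {
  return fuzzyEquals(a, b) ? CmpState::EQ : CmpState::NEQ;
}
```

With this collapse, numbers that differ only at the level of numerical noise compare as EQ, and any resolvable difference is simply NEQ, with no ordering information retained.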
  inline Cmp<Projection> pcmp(const Projection& parent1, const Projection& parent2, const string& pname) {
    return Cmp<Projection>(parent1.getProjection(pname), parent2.getProjection(pname));
  }

  /// Global helper function for easy creation of Cmp<Projection> objects from
  /// two parent projections and their common name for the projection to be compared.
  /// This version takes one parent as a pointer.
  inline Cmp<Projection> pcmp(const Projection* parent1, const Projection& parent2, const string& pname) {
    assert(parent1);
    return Cmp<Projection>(parent1->getProjection(pname), parent2.getProjection(pname));
  }

  /// Global helper function for easy creation of Cmp<Projection> objects from
  /// two parent projections and their common name for the projection to be compared.
  /// This version takes one parent as a pointer.
  inline Cmp<Projection> pcmp(const Projection& parent1, const Projection* parent2, const string& pname) {
    assert(parent2);
    return Cmp<Projection>(parent1.getProjection(pname), parent2->getProjection(pname));
  }

  /// Global helper function for easy creation of Cmp<Projection> objects from
  /// two parent projections and their common name for the projection to be compared.
  inline Cmp<Projection> pcmp(const Projection* parent1, const Projection* parent2, const string& pname) {
    assert(parent1);
    assert(parent2);
    return Cmp<Projection>(parent1->getProjection(pname), parent2->getProjection(pname));
  }

}

#endif

diff --git a/include/Rivet/Tools/JetSmearingFunctions.hh b/include/Rivet/Tools/JetSmearingFunctions.hh
--- a/include/Rivet/Tools/JetSmearingFunctions.hh
+++ b/include/Rivet/Tools/JetSmearingFunctions.hh
@@ -1,137 +1,137 @@
// -*- C++ -*-
#ifndef RIVET_JetSmearingFunctions_HH
#define RIVET_JetSmearingFunctions_HH

#include "Rivet/Jet.hh"
#include "Rivet/Tools/MomentumSmearingFunctions.hh"
#include "Rivet/Tools/ParticleSmearingFunctions.hh"
#include "Rivet/Tools/Random.hh"

namespace Rivet {

  /// @name Jet filtering, efficiency and smearing utils
  //@{

  /// @name Typedef for Jet smearing functions/functors
  typedef function<Jet(const Jet&)> JetSmearFn;

  /// @name Typedef for Jet efficiency functions/functors
  typedef function<double(const Jet&)> JetEffFn;


  /// Return a constant 0 given a Jet as argument
  inline double JET_EFF_ZERO(const Jet& p) { return 0; }

  /// Return a constant 1 given a Jet as argument
  inline double JET_EFF_ONE(const Jet& p) { return 1; }

  /// Take a Jet and return a constant efficiency
  struct JET_EFF_CONST {
    JET_EFF_CONST(double eff) : _eff(eff) {}
    double operator () (const Jet& ) const { return _eff; }
    double _eff;
  };


  /// Return 1 if the given Jet contains a b, otherwise 0
  inline double JET_BTAG_PERFECT(const Jet& j) { return j.bTagged() ? 1 : 0; }

  /// Return 1 if the given Jet contains a c, otherwise 0
  inline double JET_CTAG_PERFECT(const Jet& j) { return j.cTagged() ? 1 : 0; }


  /// @brief b-tagging efficiency functor, for more readable b-tag effs and mistag rates
  /// Note several constructors, allowing for optional specification of charm, tau, and light jet mistag rates
  struct JET_BTAG_EFFS {
    JET_BTAG_EFFS(double eff_b, double eff_light=0)
      : _eff_b(eff_b), _eff_c(-1), _eff_t(-1), _eff_l(eff_light) { }
    JET_BTAG_EFFS(double eff_b, double eff_c, double eff_light)
      : _eff_b(eff_b), _eff_c(eff_c), _eff_t(-1), _eff_l(eff_light) { }
    JET_BTAG_EFFS(double eff_b, double eff_c, double eff_tau, double eff_light)
      : _eff_b(eff_b), _eff_c(eff_c), _eff_t(eff_tau), _eff_l(eff_light) { }
    inline double operator () (const Jet& j) {
      if (j.bTagged()) return _eff_b;
      if (_eff_c >= 0 && j.cTagged()) return _eff_c;
      if (_eff_t >= 0 && j.tauTagged()) return _eff_t;
      return _eff_l;
    }
    double _eff_b, _eff_c, _eff_t, _eff_l;
  };


  /// Take a jet and return an unmodified copy
  /// @todo Modify constituent particle vectors for consistency
  /// @todo Set a null PseudoJet if the Jet is smeared?
  inline Jet JET_SMEAR_IDENTITY(const Jet& j) { return j; }

  /// Alias for JET_SMEAR_IDENTITY
  inline Jet JET_SMEAR_PERFECT(const Jet& j) { return j; }


  /// @brief Functor for simultaneous efficiency-filtering and smearing of Jets
  ///
  /// A central element of the SmearedJets system
  ///
  /// @todo Include tagging efficiency functions?
  struct JetEffSmearFn {
    JetEffSmearFn(const JetSmearFn& s, const JetEffFn& e) : sfn(s), efn(e) { }
    JetEffSmearFn(const JetEffFn& e, const JetSmearFn& s) : sfn(s), efn(e) { }
    JetEffSmearFn(const JetSmearFn& s) : sfn(s), efn(JET_EFF_ONE) { }
    JetEffSmearFn(const JetEffFn& e) : sfn(JET_SMEAR_IDENTITY), efn(e) { }
    JetEffSmearFn(double eff) : JetEffSmearFn(JET_EFF_CONST(eff)) { }

    /// Smear and calculate an efficiency for the given jet
    pair<Jet,double> operator() (const Jet& j) const {
      return make_pair(sfn(j), efn(j));
    }

    /// Compare to another, for use in the projection system
    CmpState cmp(const JetEffSmearFn& other) const {
      // cout << "Eff hashes = " << get_address(efn) << "," << get_address(other.efn) << "; "
      //      << "smear hashes = " << get_address(sfn) << "," << get_address(other.sfn) << '\n';
-     if (get_address(sfn) == 0 || get_address(other.sfn) == 0) return CmpState::UNDEF;
-     if (get_address(efn) == 0 || get_address(other.efn) == 0) return CmpState::UNDEF;
+     if (get_address(sfn) == 0 || get_address(other.sfn) == 0) return CmpState::NEQ;
+     if (get_address(efn) == 0 || get_address(other.efn) == 0) return CmpState::NEQ;
      return Rivet::cmp(get_address(sfn), get_address(other.sfn)) ||
             Rivet::cmp(get_address(efn), get_address(other.efn));
    }

    /// Automatic conversion to a smearing function
    operator JetSmearFn () { return sfn; }

    /// Automatic conversion to an efficiency function
    /// @todo Ambiguity re. whether reco eff or a tagging efficiency...
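The pattern used by JetEffSmearFn, one functor bundling a smearing function and an efficiency function and returning both results as a pair, can be sketched independently of Rivet's Jet type. Here a plain double stands in for the jet, and the struct and member names are illustrative:

```cpp
#include <functional>
#include <utility>

// Minimal stand-in for the eff+smear pattern: one callable returns both
// the smeared observable and the efficiency weight for the same input.
struct EffSmearFn {
  std::function<double(double)> sfn;  // smearing: input -> smeared value
  std::function<double(double)> efn;  // efficiency: input -> [0,1]

  // Mirrors JetEffSmearFn::operator(): apply both stored functions at once.
  std::pair<double,double> operator() (double x) const {
    return std::make_pair(sfn(x), efn(x));
  }
};
```

Usage mirrors the Rivet functor: a 10% momentum-scale smearing with a flat 80% efficiency would be `EffSmearFn f{[](double pt){ return 0.9*pt; }, [](double){ return 0.8; }};`, and `f(100.0)` yields the smeared value paired with its efficiency.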
    // operator JetEffFn () { return efn; }

    // Stored functions/functors
    JetSmearFn sfn;
    JetEffFn efn;
  };


  /// Return true if Jet @a j is chosen to survive a random efficiency selection
  template <typename FN>
  inline bool efffilt(const Jet& j, FN& feff) {
    return rand01() < feff(j);
  }

  /// A functor to return true if Jet @a j survives a random efficiency selection
  struct JetEffFilter {
    template <typename FN>
    JetEffFilter(const FN& feff) : _feff(feff) {}
    JetEffFilter(double eff) : JetEffFilter( [&](const Jet& j){return eff;} ) {}
    bool operator () (const Jet& j) const { return efffilt(j, _feff); }
  private:
    const JetEffFn _feff;
  };
  using jetEffFilter = JetEffFilter;

  //@}

}

#endif

diff --git a/include/Rivet/Tools/ParticleSmearingFunctions.hh b/include/Rivet/Tools/ParticleSmearingFunctions.hh
--- a/include/Rivet/Tools/ParticleSmearingFunctions.hh
+++ b/include/Rivet/Tools/ParticleSmearingFunctions.hh
@@ -1,118 +1,118 @@
// -*- C++ -*-
#ifndef RIVET_ParticleSmearingFunctions_HH
#define RIVET_ParticleSmearingFunctions_HH

#include "Rivet/Particle.hh"
#include "Rivet/Tools/MomentumSmearingFunctions.hh"
#include "Rivet/Tools/Random.hh"

namespace Rivet {

  /// @name Particle filtering, efficiency and smearing utils
  //@{

  /// @name Typedef for Particle smearing functions/functors
  typedef function<Particle(const Particle&)> ParticleSmearFn;

  /// @name Typedef for Particle efficiency functions/functors
  typedef function<double(const Particle&)> ParticleEffFn;


  /// Take a Particle and return 0
  inline double PARTICLE_EFF_ZERO(const Particle& ) { return 0; }
  /// Alias for PARTICLE_EFF_ZERO
  inline double PARTICLE_EFF_0(const Particle& ) { return 0; }
  /// Alias for PARTICLE_EFF_ZERO
  inline double PARTICLE_FN0(const Particle& ) { return 0; }

  /// Take a Particle and return 1
  inline double PARTICLE_EFF_ONE(const Particle& ) { return 1; }
  /// Alias for PARTICLE_EFF_ONE
  inline double PARTICLE_EFF_1(const Particle& ) { return 1; }
  /// Alias for PARTICLE_EFF_ONE
  inline double PARTICLE_EFF_PERFECT(const Particle& ) { return 1; }
  /// Alias for PARTICLE_EFF_ONE
  inline double PARTICLE_FN1(const Particle& ) { return 1; }

  /// Take a Particle and return a constant number
  struct PARTICLE_EFF_CONST {
    PARTICLE_EFF_CONST(double x) : _x(x) {}
    double operator () (const Particle& ) const { return _x; }
    double _x;
  };


  /// Take a Particle and return it unmodified
  inline Particle PARTICLE_SMEAR_IDENTITY(const Particle& p) { return p; }
  /// Alias for PARTICLE_SMEAR_IDENTITY
  inline Particle PARTICLE_SMEAR_PERFECT(const Particle& p) { return p; }


  /// @brief Functor for simultaneous efficiency-filtering and smearing of Particles
  ///
  /// A central element of the SmearedParticles system
  struct ParticleEffSmearFn {
    ParticleEffSmearFn(const ParticleSmearFn& s, const ParticleEffFn& e) : sfn(s), efn(e) { }
    ParticleEffSmearFn(const ParticleEffFn& e, const ParticleSmearFn& s) : sfn(s), efn(e) { }
    ParticleEffSmearFn(const ParticleSmearFn& s) : sfn(s), efn(PARTICLE_EFF_ONE) { }
    ParticleEffSmearFn(const ParticleEffFn& e) : sfn(PARTICLE_SMEAR_IDENTITY), efn(e) { }
    ParticleEffSmearFn(double eff) : ParticleEffSmearFn(PARTICLE_EFF_CONST(eff)) { }

    /// Smear and calculate an efficiency for the given particle
    pair<Particle,double> operator() (const Particle& p) const {
      return make_pair(sfn(p), efn(p));
    }

    /// Compare to another, for use in the projection system
    CmpState cmp(const ParticleEffSmearFn& other) const {
      // cout << "Eff hashes = " << get_address(efn) << "," << get_address(other.efn) << "; "
      //      << "smear hashes = " << get_address(sfn) << "," << get_address(other.sfn) << '\n';
-     if (get_address(sfn) == 0 || get_address(other.sfn) == 0) return CmpState::UNDEF;
-     if (get_address(efn) == 0 || get_address(other.efn) == 0) return CmpState::UNDEF;
+     if (get_address(sfn) == 0 || get_address(other.sfn) == 0) return CmpState::NEQ;
+     if (get_address(efn) == 0 || get_address(other.efn) == 0) return CmpState::NEQ;
      return Rivet::cmp(get_address(sfn), get_address(other.sfn)) ||
             Rivet::cmp(get_address(efn), get_address(other.efn));
    }

    /// Automatic conversion to a smearing function
    operator ParticleSmearFn () { return sfn; }

    /// Automatic conversion to an efficiency function
    operator ParticleEffFn () { return efn; }

    // Stored functions/functors
    const ParticleSmearFn sfn;
    const ParticleEffFn efn;
  };


  /// Return true if Particle @a p is chosen to survive a random efficiency selection
  inline bool efffilt(const Particle& p, const ParticleEffFn& feff) {
    return rand01() < feff(p);
  }

  /// A functor to return true if Particle @a p survives a random efficiency selection
  /// @deprecated Prefer
  struct ParticleEffFilter {
    template <typename FN>
    ParticleEffFilter(const FN& feff) : _feff(feff) {}
    ParticleEffFilter(double eff) : ParticleEffFilter( [&](const Particle& p){return eff;} ) {}
    bool operator () (const Particle& p) const { return efffilt(p, _feff); }
  private:
    const ParticleEffFn _feff;
  };
  using particleEffFilter = ParticleEffFilter;

  //@}

}

#endif

diff --git a/include/Rivet/Tools/RivetFastJet.hh b/include/Rivet/Tools/RivetFastJet.hh
--- a/include/Rivet/Tools/RivetFastJet.hh
+++ b/include/Rivet/Tools/RivetFastJet.hh
@@ -1,40 +1,45 @@
#ifndef RIVET_RIVETFASTJET_HH
#define RIVET_RIVETFASTJET_HH

#include "Rivet/Config/RivetCommon.hh"

#include "fastjet/JetDefinition.hh"
#include "fastjet/AreaDefinition.hh"
#include "fastjet/ClusterSequence.hh"
#include "fastjet/ClusterSequenceArea.hh"
#include "fastjet/PseudoJet.hh"
#include "fastjet/tools/Filter.hh"
#include "fastjet/tools/Recluster.hh"

+namespace fastjet {
+  namespace contrib { }
+}
+
namespace Rivet {

+ namespace fjcontrib = fastjet::contrib;

  /// Unscoped awareness of FastJet's PseudoJet
  using fastjet::PseudoJet;
  using fastjet::ClusterSequence;
  using fastjet::JetDefinition;

  /// Typedef for a collection of PseudoJet objects.
  /// @todo Make into an explicit container with conversion to Jets and FourMomenta?
  typedef std::vector<PseudoJet> PseudoJets;


  /// Make a 3-momentum vector from a FastJet pseudojet
  inline Vector3 momentum3(const fastjet::PseudoJet& pj) {
    return Vector3(pj.px(), pj.py(), pj.pz());
  }

  /// Make a 4-momentum vector from a FastJet pseudojet
  inline FourMomentum momentum(const fastjet::PseudoJet& pj) {
    return FourMomentum(pj.E(), pj.px(), pj.py(), pj.pz());
  }

}

#endif

diff --git a/include/Rivet/Tools/fjcontrib/AxesDefinition.hh b/include/Rivet/Tools/fjcontrib/AxesDefinition.hh
deleted file mode 100644
--- a/include/Rivet/Tools/fjcontrib/AxesDefinition.hh
+++ /dev/null
@@ -1,1283 +0,0 @@
-// Nsubjettiness Package
-// Questions/Comments? jthaler@jthaler.net
-//
-// Copyright (c) 2011-14
-// Jesse Thaler, Ken Van Tilburg, Christopher K. Vermilion, and TJ Wilkason
-//
-// $Id: AxesDefinition.hh 833 2015-07-23 14:35:23Z jthaler $
-//----------------------------------------------------------------------
-// This file is part of FastJet contrib.
-//
-// It is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the
-// Free Software Foundation; either version 2 of the License, or (at
-// your option) any later version.
-//
-// It is distributed in the hope that it will be useful, but WITHOUT
-// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
-// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
-// License for more details.
-//
-// You should have received a copy of the GNU General Public License
-// along with this code. If not, see <http://www.gnu.org/licenses/>.
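The RivetFastJet.hh hunk above pre-declares an empty fastjet::contrib namespace so that the fjcontrib namespace alias is always well-formed, whether or not any fjcontrib headers have been included yet. The trick in isolation, with an illustrative hello() function added so the alias has something to resolve:

```cpp
#include <string>

// Declaring a namespace (even empty) is enough for a namespace alias to it
// to compile; later declarations reopen the same namespace and are reached
// through the alias. The hello() function is purely illustrative.
namespace fastjet {
  namespace contrib {
    inline std::string hello() { return "fjcontrib"; }
  }
}

namespace Rivet {
  namespace fjcontrib = fastjet::contrib;
  inline std::string viaAlias() { return fjcontrib::hello(); }
}
```

In the real header the pre-declared namespace is empty; the alias only gains members once actual fjcontrib code is compiled into fastjet::contrib.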
-//---------------------------------------------------------------------- - -#ifndef __FASTJET_CONTRIB_AXES_DEFINITION_HH__ -#define __FASTJET_CONTRIB_AXES_DEFINITION_HH__ - - -#include "MeasureDefinition.hh" -#include "ExtraRecombiners.hh" - -#include "fastjet/PseudoJet.hh" -#include - -#include -#include -#include -#include - -namespace Rivet { -namespace fjcontrib { - -namespace Nsubjettiness { using namespace fastjet; - -// The following AxesDefinitions are currently available (and the relevant arguments, if needed) -class KT_Axes; -class CA_Axes; -class AntiKT_Axes; // (R0) -class WTA_KT_Axes; -class WTA_CA_Axes; -class GenKT_Axes; // (p, R0 = infinity) -class WTA_GenKT_Axes; // (p, R0 = infinity) -class GenET_GenKT_Axes; // (delta, p, R0 = infinity) -class Manual_Axes; - -class OnePass_KT_Axes; -class OnePass_CA_Axes; -class OnePass_AntiKT_Axes; // (R0) -class OnePass_WTA_KT_Axes; -class OnePass_WTA_CA_Axes; -class OnePass_GenKT_Axes; // (p, R0 = infinity) -class OnePass_WTA_GenKT_Axes; // (p, R0 = infinity) -class OnePass_GenET_GenKT_Axes; // (delta, p, R0 = infinity) -class OnePass_Manual_Axes; - -class MultiPass_Axes; // (NPass) (currently only defined for KT_Axes) -class MultiPass_Manual_Axes; // (NPass) - -class Comb_GenKT_Axes; // (nExtra, p, R0 = infinity) -class Comb_WTA_GenKT_Axes; // (nExtra, p, R0 = infinity) -class Comb_GenET_GenKT_Axes; // (nExtra, delta, p, R0 = infinity) - -/////// -// -// AxesDefinition -// -/////// - -///------------------------------------------------------------------------ -/// \class AxesDefinition -/// \brief Base class for axes definitions -/// -/// A generic AxesDefinition first finds a set of seed axes. -/// Then, if desired, uses measure information -/// (from MeasureDefinition) to refine those axes starting from those seed axes. -/// The AxesDefinitions are typically based on sequential jet algorithms. 
-///------------------------------------------------------------------------ -class AxesDefinition { - -public: - - /// This function should be overloaded in all derived classes, and defines how to find the seed axes. - /// If desired, the measure information (which might be NULL) can be used to test multiple axes choices, but should - /// not be used for iterative refining (since that is the job of MeasureDefinition). - virtual std::vector get_starting_axes(int n_jets, - const std::vector& inputs, - const MeasureDefinition * measure) const = 0; - - /// Short description of AxesDefinitions (and any parameters) - virtual std::string short_description() const = 0; - - /// Long description of AxesDefinitions (and any parameters) - virtual std::string description() const = 0; - - /// This has to be defined in all derived classes, and allows these to be copied around. - virtual AxesDefinition* create() const = 0; - -public: - - /// Starting from seeds, refine axes using one or more passes. - /// Note that in order to do >0 passes, we need information from the MeasureDefinition about how to do the appropriate minimization. 
- std::vector get_refined_axes(int n_jets, - const std::vector& inputs, - const std::vector& seedAxes, - const MeasureDefinition * measure = NULL) const { - - assert(n_jets == (int)seedAxes.size()); //added int casting to get rid of compiler warning - - if (_Npass == 0) { - // no refining, just use seeds - return seedAxes; - } else if (_Npass == 1) { - if (measure == NULL) throw Error("AxesDefinition: One-pass minimization requires specifying a MeasureDefinition."); - - // do one pass minimum using measure definition - return measure->get_one_pass_axes(n_jets, inputs, seedAxes,_nAttempts,_accuracy); - } else { - if (measure == NULL) throw Error("AxesDefinition: Multi-pass minimization requires specifying a MeasureDefinition."); - return get_multi_pass_axes(n_jets, inputs, seedAxes, measure); - } - } - - /// Combines get_starting_axes with get_refined_axes. - /// In the Njettiness class, these two steps are done separately in order to store seed axes information. - std::vector get_axes(int n_jets, - const std::vector& inputs, - const MeasureDefinition * measure = NULL) const { - std::vector seedAxes = get_starting_axes(n_jets, inputs, measure); - return get_refined_axes(n_jets,inputs,seedAxes,measure); - } - - - /// Short-hand for the get_axes function. Useful when trying to write terse code. - inline std::vector operator() (int n_jets, - const std::vector& inputs, - const MeasureDefinition * measure = NULL) const { - return get_axes(n_jets,inputs,measure); - } - - /// \enum AxesRefiningEnum - /// Defines the cases of zero pass and one pass for convenience - enum AxesRefiningEnum { - UNDEFINED_REFINE = -1, // added to create a default value - NO_REFINING = 0, - ONE_PASS = 1, - MULTI_PASS = 100, - }; - - /// A integer that is used externally to decide how to do multi-pass minimization - int nPass() const { return _Npass; } - - /// A flag that indicates whether results are deterministics. 
-   bool givesRandomizedResults() const {
-      return (_Npass > 1);
-   }
-
-   /// A flag that indicates whether manual axes are being used.
-   bool needsManualAxes() const {
-      return _needsManualAxes; // if there is no starting axes finder
-   }
-
-   /// Allows user to change number of passes.  Also used internally to set nPass.
-   /// Can also specify details of one/multi-pass minimization
-   void setNPass(int nPass,
-                 int nAttempts = 1000,
-                 double accuracy = 0.0001,
-                 double noise_range = 1.0 // only needed for MultiPass minimization
-                 )
-   {
-      _Npass = nPass;
-      _nAttempts = nAttempts;
-      _accuracy = accuracy;
-      _noise_range = noise_range;
-      if (nPass < 0) throw Error("AxesDefinition requires a nPass >= 0");
-   }
-
-   /// Destructor
-   virtual ~AxesDefinition() {};
-
-protected:
-
-   /// Default constructor contains no information.  Number of passes has to be set
-   /// manually by derived classes using the setNPass function.
-   AxesDefinition() : _Npass(UNDEFINED_REFINE),
-                      _nAttempts(0),
-                      _accuracy(0.0),
-                      _noise_range(0.0),
-                      _needsManualAxes(false) {}
-
-   /// Does multi-pass minimization by randomly jiggling the axes within _noise_range
-   std::vector<fastjet::PseudoJet> get_multi_pass_axes(int n_jets,
-                                                       const std::vector<fastjet::PseudoJet>& inputs,
-                                                       const std::vector<fastjet::PseudoJet>& seedAxes,
-                                                       const MeasureDefinition* measure) const;
-
-   /// Function to jiggle axes within _noise_range
-   PseudoJet jiggle(const PseudoJet& axis) const;
-
-   int _Npass;             ///< Number of passes (0 = no refining, 1 = one-pass, >1 multi-pass)
-   int _nAttempts;         ///< Number of attempts per pass
-   double _accuracy;       ///< Accuracy goal per pass
-   double _noise_range;    ///< Noise in rapidity/phi (for multi-pass minimization only)
-   bool _needsManualAxes;  ///< Flag to indicate special case of manual axes
-};
-
-///------------------------------------------------------------------------
-/// \class ExclusiveJetAxes
-/// \brief Base class for axes defined from an exclusive jet algorithm
-///
-/// This class finds axes by clustering particles with an exclusive jet definition.
-/// This can be implemented with different jet algorithms.  The user can call this directly
-/// using their favorite fastjet::JetDefinition
-///------------------------------------------------------------------------
-class ExclusiveJetAxes : public AxesDefinition {
-
-public:
-   /// Constructor takes JetDefinition as an argument
-   ExclusiveJetAxes(fastjet::JetDefinition def)
-   : AxesDefinition(), _def(def) {
-      setNPass(NO_REFINING); // default to no minimization
-   }
-
-   /// Starting axes obtained by creating a cluster sequence and running exclusive_jets.
-   virtual std::vector<fastjet::PseudoJet> get_starting_axes(int n_jets,
-                                                             const std::vector<fastjet::PseudoJet> & inputs,
-                                                             const MeasureDefinition * ) const {
-      fastjet::ClusterSequence jet_clust_seq(inputs, _def);
-
-      std::vector<fastjet::PseudoJet> axes = jet_clust_seq.exclusive_jets_up_to(n_jets);
-
-      if ((int)axes.size() < n_jets) {
-         _too_few_axes_warning.warn("ExclusiveJetAxes::get_starting_axes: Fewer than N axes found; results are unpredictable.");
-         axes.resize(n_jets); // resize to make sure there are enough axes to not yield an error elsewhere
-      }
-
-      return axes;
-   }
-
-   /// Short description
-   virtual std::string short_description() const { return "ExclAxes"; }
-   /// Long description
-   virtual std::string description() const { return "ExclAxes: " + _def.description(); }
-
-   /// To make it possible to copy around.
-   virtual ExclusiveJetAxes* create() const { return new ExclusiveJetAxes(*this); }
-
-private:
-   fastjet::JetDefinition _def; ///< Jet definition to use.
-   static LimitedWarning _too_few_axes_warning;
-};
-
-///------------------------------------------------------------------------
-/// \class ExclusiveCombinatorialJetAxes
-/// \brief Base class for axes defined from an exclusive jet algorithm, checking combinatorial options
-///
-/// This class finds axes by clustering particles with an exclusive jet definition.
-/// It takes an extra number of jets (specified by the user via nExtra), and then finds the set of N that minimizes N-jettiness.
-/// WARNING: If one wants to be guaranteed that results improve by increasing nExtra, then one should use
-/// winner-take-all-style recombination schemes
-///------------------------------------------------------------------------
-class ExclusiveCombinatorialJetAxes : public AxesDefinition {
-
-public:
-   /// Constructor takes JetDefinition and nExtra as options (nExtra=0 acts the same as ExclusiveJetAxes)
-   ExclusiveCombinatorialJetAxes(fastjet::JetDefinition def, int nExtra = 0)
-   : AxesDefinition(), _def(def), _nExtra(nExtra) {
-      if (nExtra < 0) throw Error("Need nExtra >= 0");
-      setNPass(NO_REFINING); // default to no minimization
-   }
-
-   /// Find n_jets + _nExtra axes, and then choose the n_jets subset with the smallest N-(sub)jettiness value.
-   virtual std::vector<fastjet::PseudoJet> get_starting_axes(int n_jets,
-                                                             const std::vector<fastjet::PseudoJet> & inputs,
-                                                             const MeasureDefinition *measure) const {
-      int starting_number = n_jets + _nExtra;
-      fastjet::ClusterSequence jet_clust_seq(inputs, _def);
-      std::vector<fastjet::PseudoJet> starting_axes = jet_clust_seq.exclusive_jets_up_to(starting_number);
-
-      if ((int)starting_axes.size() < n_jets) {
-         _too_few_axes_warning.warn("ExclusiveCombinatorialJetAxes::get_starting_axes: Fewer than N + nExtra axes found; results are unpredictable.");
-         starting_axes.resize(n_jets); // resize to make sure there are enough axes to not yield an error elsewhere
-      }
-
-      std::vector<fastjet::PseudoJet> final_axes;
-
-      // check so that no computation time is wasted if there are no extra axes
-      if (_nExtra == 0) final_axes = starting_axes;
-
-      else {
-
-         // define string of 1's based on number of desired jets
-         std::string bitmask(n_jets, 1);
-         // expand the array size to the total number of jets with extra 0's at the end, makes string easy to permute
-         bitmask.resize(starting_number, 0);
-
-         double min_tau = std::numeric_limits<double>::max();
-         std::vector<fastjet::PseudoJet> temp_axes;
-
-         do {
-
-            temp_axes.clear();
-
-            // only take an axis if it is listed as true (1) in the string
-            for (int i = 0; i < (int)starting_axes.size(); ++i) {
-               if (bitmask[i]) temp_axes.push_back(starting_axes[i]);
-            }
-
-            double temp_tau = measure->result(inputs, temp_axes);
-            if (temp_tau < min_tau) {
-               min_tau = temp_tau;
-               final_axes = temp_axes;
-            }
-
-            // permutes string of 1's and 0's according to next lexicographic ordering and returns true
-            // continues to loop through all possible lexicographic orderings
-            // returns false and breaks the loop when there are no more possible orderings
-         } while (std::prev_permutation(bitmask.begin(), bitmask.end()));
-      }
-
-      return final_axes;
-   }
-
-   /// Short description
-   virtual std::string short_description() const { return "ExclCombAxes"; }
-   /// Long description
-   virtual std::string description() const { return "ExclCombAxes: " + _def.description(); }
-   /// To make it possible to copy around.
-   virtual ExclusiveCombinatorialJetAxes* create() const { return new ExclusiveCombinatorialJetAxes(*this); }
-
-private:
-   fastjet::JetDefinition _def; ///< Jet definition to use
-   int _nExtra;                 ///< Extra axes to find
-   static LimitedWarning _too_few_axes_warning;
-};
-
-///------------------------------------------------------------------------
-/// \class HardestJetAxes
-/// \brief Base class for axes defined from an inclusive jet algorithm
-///
-/// This class finds axes by running an inclusive algorithm and then finding the n hardest jets.
-/// This can be implemented with different jet algorithms, and can be called by the user.
-///------------------------------------------------------------------------
-class HardestJetAxes : public AxesDefinition {
-public:
-   /// Constructor takes JetDefinition
-   HardestJetAxes(fastjet::JetDefinition def)
-   : AxesDefinition(), _def(def) {
-      setNPass(NO_REFINING); // default to no minimization
-   }
-
-   /// Finds seed axes by running a ClusterSequence, running inclusive_jets, and finding the N hardest
-   virtual std::vector<fastjet::PseudoJet> get_starting_axes(int n_jets,
-                                                             const std::vector<fastjet::PseudoJet> & inputs,
-                                                             const MeasureDefinition * ) const {
-      fastjet::ClusterSequence jet_clust_seq(inputs, _def);
-      std::vector<fastjet::PseudoJet> axes = sorted_by_pt(jet_clust_seq.inclusive_jets());
-
-      if ((int)axes.size() < n_jets) {
-         _too_few_axes_warning.warn("HardestJetAxes::get_starting_axes: Fewer than N axes found; results are unpredictable.");
-      }
-
-      axes.resize(n_jets); // only keep n hardest
-      return axes;
-   }
-
-   /// Short description
-   virtual std::string short_description() const { return "HardAxes"; }
-   /// Long description
-   virtual std::string description() const { return "HardAxes: " + _def.description(); }
-   /// To make it possible to copy around.
-   virtual HardestJetAxes* create() const { return new HardestJetAxes(*this); }
-
-private:
-   fastjet::JetDefinition _def; ///< Jet definition to use.
-
-   static LimitedWarning _too_few_axes_warning;
-
-};
-
-///------------------------------------------------------------------------
-/// \class KT_Axes
-/// \brief Axes from exclusive kT
-///
-/// Axes from kT algorithm with E_scheme recombination.
-///------------------------------------------------------------------------
-class KT_Axes : public ExclusiveJetAxes {
-public:
-   /// Constructor
-   KT_Axes()
-   : ExclusiveJetAxes(fastjet::JetDefinition(fastjet::kt_algorithm,
-                                             fastjet::JetDefinition::max_allowable_R, // maximum jet radius constant
-                                             fastjet::E_scheme,
-                                             fastjet::Best)
-                      ) {
-      setNPass(NO_REFINING);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      return "KT";
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "KT Axes";
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual KT_Axes* create() const { return new KT_Axes(*this); }
-
-};
-
-///------------------------------------------------------------------------
-/// \class CA_Axes
-/// \brief Axes from exclusive CA
-///
-/// Axes from CA algorithm with E_scheme recombination.
-///------------------------------------------------------------------------
-class CA_Axes : public ExclusiveJetAxes {
-public:
-   /// Constructor
-   CA_Axes()
-   : ExclusiveJetAxes(fastjet::JetDefinition(fastjet::cambridge_algorithm,
-                                             fastjet::JetDefinition::max_allowable_R, // maximum jet radius constant
-                                             fastjet::E_scheme,
-                                             fastjet::Best)
-                      ) {
-      setNPass(NO_REFINING);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      return "CA";
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "CA Axes";
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual CA_Axes* create() const { return new CA_Axes(*this); }
-
-};
-
-
-///------------------------------------------------------------------------
-/// \class AntiKT_Axes
-/// \brief Axes from inclusive anti-kT
-///
-/// Axes from anti-kT algorithm and E_scheme.
-/// The one parameter R0 is the subjet radius
-///------------------------------------------------------------------------
-class AntiKT_Axes : public HardestJetAxes {
-
-public:
-   /// Constructor.  Takes jet radius as argument
-   AntiKT_Axes(double R0)
-   : HardestJetAxes(fastjet::JetDefinition(fastjet::antikt_algorithm,
-                                           R0,
-                                           fastjet::E_scheme,
-                                           fastjet::Best)
-                    ), _R0(R0) {
-      setNPass(NO_REFINING);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "AKT" << _R0;
-      return stream.str();
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "Anti-KT Axes (R0 = " << _R0 << ")";
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual AntiKT_Axes* create() const { return new AntiKT_Axes(*this); }
-
-protected:
-   double _R0; ///< AKT jet radius
-
-};
-
-///------------------------------------------------------------------------
-/// \class JetDefinitionWrapper
-/// \brief Wrapper for jet definitions (for memory management)
-///
-/// This class was introduced to avoid an issue with a FastJet bug when using genKT clustering.
-/// Now used for all AxesDefinition with a manual recombiner, in order to use the delete_recombiner_when_unused function.
-///------------------------------------------------------------------------
-class JetDefinitionWrapper {
-
-public:
-
-   /// Default constructor
-   JetDefinitionWrapper(JetAlgorithm jet_algorithm_in, double R_in, double xtra_param_in, const JetDefinition::Recombiner *recombiner) {
-      jet_def = fastjet::JetDefinition(jet_algorithm_in, R_in, xtra_param_in);
-      jet_def.set_recombiner(recombiner);
-      jet_def.delete_recombiner_when_unused(); // added to prevent memory leaks
-   }
-
-   /// Additional constructor so that built-in FastJet algorithms can also be called
-   JetDefinitionWrapper(JetAlgorithm jet_algorithm_in, double R_in, const JetDefinition::Recombiner *recombiner, fastjet::Strategy strategy_in) {
-      jet_def = fastjet::JetDefinition(jet_algorithm_in, R_in, recombiner, strategy_in);
-      jet_def.delete_recombiner_when_unused();
-   }
-
-   /// Return jet definition
-   JetDefinition getJetDef() {
-      return jet_def;
-   }
-
-private:
-   JetDefinition jet_def; ///< my jet definition
-};
-
-///------------------------------------------------------------------------
-/// \class WTA_KT_Axes
-/// \brief Axes from exclusive kT, winner-take-all recombination
-///
-/// Axes from kT algorithm and winner-take-all recombination
-///------------------------------------------------------------------------
-class WTA_KT_Axes : public ExclusiveJetAxes {
-public:
-   /// Constructor
-   WTA_KT_Axes()
-   : ExclusiveJetAxes(JetDefinitionWrapper(fastjet::kt_algorithm,
-                                           fastjet::JetDefinition::max_allowable_R, // maximum jet radius constant
-                                           _recomb = new WinnerTakeAllRecombiner(), // needs to be explicitly declared (this will be deleted by JetDefinitionWrapper)
-                                           fastjet::Best).getJetDef()
-                      ) {
-      setNPass(NO_REFINING);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      return "WTA KT";
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "Winner-Take-All KT Axes";
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual WTA_KT_Axes* create() const { return new WTA_KT_Axes(*this); }
-
-private:
-   const WinnerTakeAllRecombiner *_recomb; ///< Internal recombiner
-
-};
-
-///------------------------------------------------------------------------
-/// \class WTA_CA_Axes
-/// \brief Axes from exclusive CA, winner-take-all recombination
-///
-/// Axes from CA algorithm and winner-take-all recombination
-///------------------------------------------------------------------------
-class WTA_CA_Axes : public ExclusiveJetAxes {
-public:
-   /// Constructor
-   WTA_CA_Axes()
-   : ExclusiveJetAxes(JetDefinitionWrapper(fastjet::cambridge_algorithm,
-                                           fastjet::JetDefinition::max_allowable_R, // maximum jet radius constant
-                                           _recomb = new WinnerTakeAllRecombiner(), // needs to be explicitly declared (this will be deleted by JetDefinitionWrapper)
-                                           fastjet::Best).getJetDef()) {
-      setNPass(NO_REFINING);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      return "WTA CA";
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "Winner-Take-All CA Axes";
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual WTA_CA_Axes* create() const { return new WTA_CA_Axes(*this); }
-
-private:
-   const WinnerTakeAllRecombiner *_recomb; ///< Internal recombiner
-
-};
-
-
-///------------------------------------------------------------------------
-/// \class GenKT_Axes
-/// \brief Axes from exclusive generalized kT
-///
-/// Axes from a general KT algorithm (standard E-scheme recombination).
-/// Requires the power of the KT algorithm to be used and the radius parameter
-///------------------------------------------------------------------------
-class GenKT_Axes : public ExclusiveJetAxes {
-
-public:
-   /// Constructor
-   GenKT_Axes(double p, double R0 = fastjet::JetDefinition::max_allowable_R)
-   : ExclusiveJetAxes(fastjet::JetDefinition(fastjet::genkt_algorithm,
-                                             R0,
-                                             p)), _p(p), _R0(R0) {
-      if (p < 0) throw Error("GenKT_Axes: Currently only p >= 0 is supported.");
-      setNPass(NO_REFINING);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "GenKT Axes";
-      return stream.str();
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "General KT (p = " << _p << "), R0 = " << _R0;
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual GenKT_Axes* create() const { return new GenKT_Axes(*this); }
-
-protected:
-   double _p;  ///< genkT power
-   double _R0; ///< jet radius
-};
-
-
-///------------------------------------------------------------------------
-/// \class WTA_GenKT_Axes
-/// \brief Axes from exclusive generalized kT, winner-take-all recombination
-///
-/// Axes from a general KT algorithm with a winner-take-all recombiner.
-/// Requires the power of the KT algorithm to be used and the radius parameter
-///------------------------------------------------------------------------
-class WTA_GenKT_Axes : public ExclusiveJetAxes {
-
-public:
-   /// Constructor
-   WTA_GenKT_Axes(double p, double R0 = fastjet::JetDefinition::max_allowable_R)
-   : ExclusiveJetAxes(JetDefinitionWrapper(fastjet::genkt_algorithm,
-                                           R0,
-                                           p,
-                                           _recomb = new WinnerTakeAllRecombiner()
-                                           ).getJetDef()), _p(p), _R0(R0) {
-      if (p < 0) throw Error("WTA_GenKT_Axes: Currently only p >= 0 is supported.");
-      setNPass(NO_REFINING);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "WTA, GenKT Axes";
-      return stream.str();
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "Winner-Take-All General KT (p = " << _p << "), R0 = " << _R0;
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual WTA_GenKT_Axes* create() const { return new WTA_GenKT_Axes(*this); }
-
-protected:
-   double _p;  ///< genkT power
-   double _R0; ///< jet radius
-   const WinnerTakeAllRecombiner *_recomb; ///< Internal recombiner
-};
-
-///------------------------------------------------------------------------
-/// \class GenET_GenKT_Axes
-/// \brief Axes from exclusive kT, generalized Et-scheme recombination
-///
-/// Class using a general KT algorithm with a more general recombination scheme.
-/// Requires the power of the KT algorithm, the power of the recombination weights, and the radius parameter
-///------------------------------------------------------------------------
-class GenET_GenKT_Axes : public ExclusiveJetAxes {
-
-public:
-   /// Constructor
-   GenET_GenKT_Axes(double delta, double p, double R0 = fastjet::JetDefinition::max_allowable_R)
-   : ExclusiveJetAxes((JetDefinitionWrapper(fastjet::genkt_algorithm, R0, p, _recomb = new GeneralEtSchemeRecombiner(delta))).getJetDef()),
-     _delta(delta), _p(p), _R0(R0) {
-      if (p < 0) throw Error("GenET_GenKT_Axes: Currently only p >= 0 is supported.");
-      if (delta <= 0) throw Error("GenET_GenKT_Axes: Currently only delta > 0 is supported.");
-      setNPass(NO_REFINING);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "GenET, GenKT Axes";
-      return stream.str();
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2);
-      // TODO: if _delta is huge, change to "WTA"
-      if (_delta < std::numeric_limits<double>::max()) stream << "General Recombiner (delta = " << _delta << "), " << "General KT (p = " << _p << ") Axes, R0 = " << _R0;
-      else stream << "Winner-Take-All General KT (p = " << _p << "), R0 = " << _R0;
-
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual GenET_GenKT_Axes* create() const { return new GenET_GenKT_Axes(*this); }
-
-protected:
-   double _delta; ///< Recombination pT weighting
-   double _p;     ///< genkT power
-   double _R0;    ///< jet radius
-   const GeneralEtSchemeRecombiner *_recomb; ///< Internal recombiner
-};
-
-///------------------------------------------------------------------------
-/// \class OnePass_KT_Axes
-/// \brief Axes from exclusive kT, with one-pass minimization
-///
-/// One-pass minimization from kT axes
-///------------------------------------------------------------------------
-class OnePass_KT_Axes : public KT_Axes {
-public:
-   /// Constructor
-   OnePass_KT_Axes() : KT_Axes() {
-      setNPass(ONE_PASS);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      return "OnePass KT";
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "One-Pass Minimization from KT Axes";
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual OnePass_KT_Axes* create() const { return new OnePass_KT_Axes(*this); }
-
-};
-
-///------------------------------------------------------------------------
-/// \class OnePass_CA_Axes
-/// \brief Axes from exclusive CA, with one-pass minimization
-///
-/// One-pass minimization from CA axes
-///------------------------------------------------------------------------
-class OnePass_CA_Axes : public CA_Axes {
-public:
-   /// Constructor
-   OnePass_CA_Axes() : CA_Axes() {
-      setNPass(ONE_PASS);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      return "OnePass CA";
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "One-Pass Minimization from CA Axes";
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual OnePass_CA_Axes* create() const { return new OnePass_CA_Axes(*this); }
-
-};
-
-///------------------------------------------------------------------------
-/// \class OnePass_AntiKT_Axes
-/// \brief Axes from inclusive anti-kT, with one-pass minimization
-///
-/// One-pass minimization from anti-kT axes, one parameter R0
-///------------------------------------------------------------------------
-class OnePass_AntiKT_Axes : public AntiKT_Axes {
-
-public:
-   /// Constructor
-   OnePass_AntiKT_Axes(double R0) : AntiKT_Axes(R0) {
-      setNPass(ONE_PASS);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "OnePassAKT" << _R0;
-      return stream.str();
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "One-Pass Minimization from Anti-KT Axes (R0 = " << _R0 << ")";
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual OnePass_AntiKT_Axes* create() const { return new OnePass_AntiKT_Axes(*this); }
-
-};
-
-///------------------------------------------------------------------------
-/// \class OnePass_WTA_KT_Axes
-/// \brief Axes from exclusive kT, winner-take-all recombination, with one-pass minimization
-///
-/// One-pass minimization from winner-take-all kT axes
-///------------------------------------------------------------------------
-class OnePass_WTA_KT_Axes : public WTA_KT_Axes {
-public:
-   /// Constructor
-   OnePass_WTA_KT_Axes() : WTA_KT_Axes() {
-      setNPass(ONE_PASS);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      return "OnePass WTA KT";
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "One-Pass Minimization from Winner-Take-All KT Axes";
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual OnePass_WTA_KT_Axes* create() const { return new OnePass_WTA_KT_Axes(*this); }
-
-};
-
-///------------------------------------------------------------------------
-/// \class OnePass_WTA_CA_Axes
-/// \brief Axes from exclusive CA, winner-take-all recombination, with one-pass minimization
-///
-/// One-pass minimization from winner-take-all CA axes
-///------------------------------------------------------------------------
-class OnePass_WTA_CA_Axes : public WTA_CA_Axes {
-
-public:
-   /// Constructor
-   OnePass_WTA_CA_Axes() : WTA_CA_Axes() {
-      setNPass(ONE_PASS);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      return "OnePass WTA CA";
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "One-Pass Minimization from Winner-Take-All CA Axes";
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual OnePass_WTA_CA_Axes* create() const { return new OnePass_WTA_CA_Axes(*this); }
-
-};
-
-///------------------------------------------------------------------------
-/// \class OnePass_GenKT_Axes
-/// \brief Axes from exclusive generalized kT with one-pass minimization
-///
-/// One-pass minimization, General KT axes (standard E-scheme recombination)
-///------------------------------------------------------------------------
-class OnePass_GenKT_Axes : public GenKT_Axes {
-
-public:
-   /// Constructor
-   OnePass_GenKT_Axes(double p, double R0 = fastjet::JetDefinition::max_allowable_R) : GenKT_Axes(p, R0) {
-      setNPass(ONE_PASS);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      return "OnePass GenKT";
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "One-Pass Minimization from General KT (p = " << _p << "), R0 = " << _R0;
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual OnePass_GenKT_Axes* create() const { return new OnePass_GenKT_Axes(*this); }
-};
-
-///------------------------------------------------------------------------
-/// \class OnePass_WTA_GenKT_Axes
-/// \brief Axes from exclusive generalized kT, winner-take-all recombination, with one-pass minimization
-///
-/// One-pass minimization from winner-take-all, General KT axes
-///------------------------------------------------------------------------
-class OnePass_WTA_GenKT_Axes : public WTA_GenKT_Axes {
-
-public:
-   /// Constructor
-   OnePass_WTA_GenKT_Axes(double p, double R0 = fastjet::JetDefinition::max_allowable_R) : WTA_GenKT_Axes(p, R0) {
-      setNPass(ONE_PASS);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      return "OnePass WTA GenKT";
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "One-Pass Minimization from Winner-Take-All General KT (p = " << _p << "), R0 = " << _R0;
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual OnePass_WTA_GenKT_Axes* create() const { return new OnePass_WTA_GenKT_Axes(*this); }
-};
-
-///------------------------------------------------------------------------
-/// \class OnePass_GenET_GenKT_Axes
-/// \brief Axes from exclusive generalized kT, generalized Et-scheme recombination, with one-pass minimization
-///
-/// One-pass minimization from General Recombiner, General KT axes
-///------------------------------------------------------------------------
-class OnePass_GenET_GenKT_Axes : public GenET_GenKT_Axes {
-
-public:
-   /// Constructor
-   OnePass_GenET_GenKT_Axes(double delta, double p, double R0 = fastjet::JetDefinition::max_allowable_R) : GenET_GenKT_Axes(delta, p, R0) {
-      setNPass(ONE_PASS);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      return "OnePass GenET, GenKT";
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2);
-      if (_delta < std::numeric_limits<double>::max()) stream << "One-Pass Minimization from General Recombiner (delta = "
-         << _delta << "), " << "General KT (p = " << _p << ") Axes, R0 = " << _R0;
-      else stream << "One-Pass Minimization from Winner-Take-All General KT (p = " << _p << "), R0 = " << _R0;
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual OnePass_GenET_GenKT_Axes* create() const { return new OnePass_GenET_GenKT_Axes(*this); }
-};
-
-
-///------------------------------------------------------------------------
-/// \class Manual_Axes
-/// \brief Manual axes finding
-///
-/// Allows the user to set the axes manually
-///------------------------------------------------------------------------
-class Manual_Axes : public AxesDefinition {
-public:
-   /// Constructor.  Note that _needsManualAxes is set to true.
-   Manual_Axes() : AxesDefinition() {
-      setNPass(NO_REFINING);
-      _needsManualAxes = true;
-   }
-
-   /// This is now a dummy function since this is manual mode
-   virtual std::vector<fastjet::PseudoJet> get_starting_axes(int,
-                                                             const std::vector<fastjet::PseudoJet>&,
-                                                             const MeasureDefinition *) const;
-
-   /// Short description
-   virtual std::string short_description() const {
-      return "Manual";
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "Manual Axes";
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual Manual_Axes* create() const { return new Manual_Axes(*this); }
-
-};
-
-///------------------------------------------------------------------------
-/// \class OnePass_Manual_Axes
-/// \brief Manual axes finding, with one-pass minimization
-///
-/// One-pass minimization from a manual starting point
-///------------------------------------------------------------------------
-class OnePass_Manual_Axes : public Manual_Axes {
-public:
-   /// Constructor.  Note that _needsManualAxes is set to true.
-   OnePass_Manual_Axes() : Manual_Axes() {
-      setNPass(ONE_PASS);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      return "OnePass Manual";
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "One-Pass Minimization from Manual Axes";
-      return stream.str();
-   };
-
-   // For copying purposes
-   virtual OnePass_Manual_Axes* create() const { return new OnePass_Manual_Axes(*this); }
-
-};
-
-///------------------------------------------------------------------------
-/// \class MultiPass_Axes
-/// \brief Axes finding from exclusive kT, with multi-pass (randomized) minimization
-///
-/// Multi-pass minimization from a kT starting point
-///------------------------------------------------------------------------
-class MultiPass_Axes : public KT_Axes {
-
-public:
-
-   /// Constructor
-   MultiPass_Axes(unsigned int Npass) : KT_Axes() {
-      setNPass(Npass);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      return "MultiPass";
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "Multi-Pass Axes (Npass = " << _Npass << ")";
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual MultiPass_Axes* create() const { return new MultiPass_Axes(*this); }
-
-};
-
-///------------------------------------------------------------------------
-/// \class MultiPass_Manual_Axes
-/// \brief Manual axes finding, with multi-pass (randomized) minimization
-///
-/// Multi-pass minimization from a manual starting point
-///------------------------------------------------------------------------
-class MultiPass_Manual_Axes : public Manual_Axes {
-
-public:
-   /// Constructor
-   MultiPass_Manual_Axes(unsigned int Npass) : Manual_Axes() {
-      setNPass(Npass);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      return "MultiPass Manual";
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "Multi-Pass Manual Axes (Npass = " << _Npass << ")";
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual MultiPass_Manual_Axes* create() const { return new MultiPass_Manual_Axes(*this); }
-
-};
-
-///------------------------------------------------------------------------
-/// \class Comb_GenKT_Axes
-/// \brief Axes from exclusive generalized kT with combinatorial testing
-///
-/// Axes from the kT algorithm (standard E-scheme recombination).
-/// Requires the nExtra parameter and returns the set of N that minimizes N-jettiness.
-/// Note that this method is not guaranteed to find a deeper minimum than GenKT_Axes.
-///------------------------------------------------------------------------
-class Comb_GenKT_Axes : public ExclusiveCombinatorialJetAxes {
-public:
-   /// Constructor
-   Comb_GenKT_Axes(int nExtra, double p, double R0 = fastjet::JetDefinition::max_allowable_R)
-   : ExclusiveCombinatorialJetAxes(fastjet::JetDefinition(fastjet::genkt_algorithm, R0, p), nExtra),
-     _p(p), _R0(R0) {
-      if (p < 0) throw Error("Comb_GenKT_Axes: Currently only p >= 0 is supported.");
-      setNPass(NO_REFINING);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      return "N Choose M GenKT";
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "N Choose M Minimization (nExtra = " << _nExtra << ") from General KT (p = " << _p << "), R0 = " << _R0;
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual Comb_GenKT_Axes* create() const { return new Comb_GenKT_Axes(*this); }
-
-private:
-   double _nExtra; ///< Number of extra axes
-   double _p;      ///< genkT power
-   double _R0;     ///< jet radius
-};
-
-
-
-///------------------------------------------------------------------------
-/// \class Comb_WTA_GenKT_Axes
-/// \brief Axes from exclusive generalized kT, winner-take-all recombination, with combinatorial testing
-///
-/// Axes from the kT algorithm and winner-take-all recombination.
-/// Requires the nExtra parameter and returns the set of N that minimizes N-jettiness.
-///------------------------------------------------------------------------
-class Comb_WTA_GenKT_Axes : public ExclusiveCombinatorialJetAxes {
-public:
-   /// Constructor
-   Comb_WTA_GenKT_Axes(int nExtra, double p, double R0 = fastjet::JetDefinition::max_allowable_R)
-   : ExclusiveCombinatorialJetAxes((JetDefinitionWrapper(fastjet::genkt_algorithm, R0, p, _recomb = new WinnerTakeAllRecombiner())).getJetDef(), nExtra),
-     _p(p), _R0(R0) {
-      if (p < 0) throw Error("Comb_WTA_GenKT_Axes: Currently only p >= 0 is supported.");
-      setNPass(NO_REFINING);
-   }
-
-   /// Short description
-   virtual std::string short_description() const {
-      return "N Choose M WTA GenKT";
-   };
-
-   /// Long description
-   virtual std::string description() const {
-      std::stringstream stream;
-      stream << std::fixed << std::setprecision(2)
-             << "N Choose M Minimization (nExtra = " << _nExtra << ") from Winner-Take-All General KT (p = " << _p << "), R0 = " << _R0;
-      return stream.str();
-   };
-
-   /// For copying purposes
-   virtual Comb_WTA_GenKT_Axes* create() const { return new Comb_WTA_GenKT_Axes(*this); }
-
-private:
-   double _nExtra; ///< Number of extra axes
-   double _p;      ///< genkT power
-   double _R0;     ///< jet radius
-   const WinnerTakeAllRecombiner *_recomb; ///< Internal recombiner
-};
-
-///------------------------------------------------------------------------
-/// \class Comb_GenET_GenKT_Axes
-/// \brief Axes from exclusive generalized kT, generalized Et-scheme recombination, with combinatorial testing
-///
-/// Axes from the kT algorithm and generalized Et-scheme recombination.
-/// Requires the nExtra parameter and returns the set of N that minimizes N-jettiness.
-///------------------------------------------------------------------------
-class Comb_GenET_GenKT_Axes : public ExclusiveCombinatorialJetAxes { -public: - /// Constructor - Comb_GenET_GenKT_Axes(int nExtra, double delta, double p, double R0 = fastjet::JetDefinition::max_allowable_R) - : ExclusiveCombinatorialJetAxes((JetDefinitionWrapper(fastjet::genkt_algorithm, R0, p, _recomb = new GeneralEtSchemeRecombiner(delta))).getJetDef(), nExtra), - _delta(delta), _p(p), _R0(R0) { - if (p < 0) throw Error("Comb_GenET_GenKT_Axes: Currently only p >=0 is supported."); - if (delta <= 0) throw Error("Comb_GenET_GenKT_Axes: Currently only delta > 0 is supported."); - setNPass(NO_REFINING); - } - - /// Short description - virtual std::string short_description() const { - return "N Choose M GenET GenKT"; - }; - - /// Long description - virtual std::string description() const { - std::stringstream stream; - stream << std::fixed << std::setprecision(2); - if (_delta < std::numeric_limits<double>::max()) stream << "N choose M Minimization (nExtra = " << _nExtra - << ") from General Recombiner (delta = " << _delta << "), " << "General KT (p = " << _p << ") Axes, R0 = " << _R0; - else stream << "N choose M Minimization (nExtra = " << _nExtra << ") from Winner-Take-All General KT (p = " << _p << "), R0 = " << _R0; - return stream.str(); - }; - - /// For copying purposes - virtual Comb_GenET_GenKT_Axes* create() const {return new Comb_GenET_GenKT_Axes(*this);} - -private: - double _nExtra; ///< Number of extra axes - double _delta; ///< Recombination pT weighting exponent - double _p; ///< GenkT power - double _R0; ///< jet radius - const GeneralEtSchemeRecombiner *_recomb; ///< Internal recombiner -}; - - -} // namespace Nsubjettiness - -} -} - -#endif // __FASTJET_CONTRIB_NJETTINESS_HH__ diff --git a/include/Rivet/Tools/fjcontrib/BottomUpSoftDrop.hh b/include/Rivet/Tools/fjcontrib/BottomUpSoftDrop.hh deleted file mode 100644 --- a/include/Rivet/Tools/fjcontrib/BottomUpSoftDrop.hh +++ /dev/null @@ -1,308 +0,0 @@ -// $Id: BottomUpSoftDrop.hh 1085 2017-10-11 02:16:59Z 
jthaler $ -// -// Copyright (c) 2017-, Gavin P. Salam, Gregory Soyez, Jesse Thaler, -// Kevin Zhou, Frederic Dreyer -// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see <http://www.gnu.org/licenses/>. -//---------------------------------------------------------------------- - -#ifndef __BOTTOMUPSOFTDROP_HH__ -#define __BOTTOMUPSOFTDROP_HH__ - -#include "fastjet/ClusterSequence.hh" -#include "fastjet/WrappedStructure.hh" -#include "fastjet/tools/Transformer.hh" - -#include -#include - -// TODO -// -// - missing class description -// -// - check what to do when pta=ptb=0 -// for the moment, we recombine both for multiple reasons -// . this avoids breaking the symmetry between pa and pb -// . 
it would be groomed in later steps anyway -// Note that this is slightly inconsistent with our use of -// > (instead of >=) in the cdt - -//FASTJET_BEGIN_NAMESPACE - -//namespace contrib{ - -namespace Rivet { - namespace fjcontrib { - using namespace fastjet; - - -// fwd declarations -class BottomUpSoftDrop; -class BottomUpSoftDropStructure; -class BottomUpSoftDropRecombiner; -class BottomUpSoftDropPlugin; - -//---------------------------------------------------------------------- -/// \class BottomUpSoftDrop -/// Implementation of the BottomUpSoftDrop transformer -/// -/// Bottom-Up Soft Drop grooms a jet by applying a modified -/// recombination scheme, where particles are recombined only if they -/// pass the Soft Drop condition -/// -/// \f[ -/// z > z_{\rm cut} (\theta/R0)^\beta -/// \f] -/// -/// the groomed jet contains the particles remaining after this -/// pair-wise recombination -/// -/// Note: -/// - one can use BottomUpSoftDrop on a full event with the -/// global_grooming(event) method. -/// - if two recombined particles a and b have momentum pta=ptb=0, -/// we recombine both. -/// - - -class BottomUpSoftDrop : public Transformer { -public: - /// minimal constructor, which sets the jet algorithm to CA, sets the radius - /// to JetDefinition::max_allowable_R (practically equivalent to - /// infinity) and also tries to use a recombiner based on the one in - /// the jet definition of the particular jet being Soft Dropped. 
- /// - /// \param beta the value for beta - /// \param symmetry_cut the value for symmetry_cut - /// \param R0 the value for R0 - BottomUpSoftDrop(double beta, double symmetry_cut, double R0 = 1.0) - : _jet_def(cambridge_algorithm, JetDefinition::max_allowable_R), - _beta(beta),_symmetry_cut(symmetry_cut), _R0(R0), - _get_recombiner_from_jet(true) {} - - /// alternative constructor which takes a specified jet algorithm - /// - /// \param jet_alg the jet algorithm for the internal clustering (uses R=infty) - /// \param beta the value for beta - /// \param symmetry_cut the value of symmetry_cut - /// \param R0 the value for R0 - BottomUpSoftDrop(const JetAlgorithm jet_alg, double beta, double symmetry_cut, - double R0 = 1.0) - : _jet_def(jet_alg, JetDefinition::max_allowable_R), - _beta(beta), _symmetry_cut(symmetry_cut), _R0(R0), - _get_recombiner_from_jet(true) {} - - - /// alternative ctor in which the full reclustering jet definition can - /// be specified. - /// - /// \param jet_def the jet definition for the internal clustering - /// \param beta the value for beta - /// \param symmetry_cut the value of symmetry_cut - /// \param R0 the value for R0 - BottomUpSoftDrop(const JetDefinition &jet_def, double beta, double symmetry_cut, - double R0 = 1.0) - : _jet_def(jet_def), _beta(beta), _symmetry_cut(symmetry_cut), _R0(R0), - _get_recombiner_from_jet(false) {} - - /// action on a single jet - virtual PseudoJet result(const PseudoJet &jet) const; - - /// global grooming on a full event - /// note: does not support jet areas - virtual std::vector<PseudoJet> global_grooming(const std::vector<PseudoJet> & event) const; - - /// description - virtual std::string description() const; - - // the type of the associated structure - typedef BottomUpSoftDropStructure StructureType; - -private: - /// check if the jet has explicit_ghosts (knowing that there is an - /// area support) - bool _check_explicit_ghosts(const PseudoJet &jet) const; - - /// see if there is a common recombiner among the 
pieces; if there - /// is, return true and set jet_def_for_recombiner so that the - /// recombiner can be taken from that JetDefinition. Otherwise, - /// return false. 'assigned' is initially false; when true, each - /// time we meet a new jet definition, we'll check it shares the - /// same recombiner as jet_def_for_recombiner. - bool _check_common_recombiner(const PseudoJet &jet, - JetDefinition &jet_def_for_recombiner, - bool assigned=false) const; - - - JetDefinition _jet_def; ///< the internal jet definition - double _beta; ///< the value of beta - double _symmetry_cut; ///< the value of symmetry_cut - double _R0; ///< the value of R0 - bool _get_recombiner_from_jet; ///< true for minimal constructor, - ///< causes recombiner to be set equal - ///< to that already used in the jet - ///< (if it can be deduced) -}; - -//---------------------------------------------------------------------- -/// The structure associated with a PseudoJet that has gone through a -/// bottom/up SoftDrop transformer -class BottomUpSoftDropStructure : public WrappedStructure{ -public: - /// default ctor - /// \param result_jet the jet for which we have to keep the structure - BottomUpSoftDropStructure(const PseudoJet & result_jet) - : WrappedStructure(result_jet.structure_shared_ptr()){} - - /// description - virtual std::string description() const{ - return "Bottom/Up Soft Dropped PseudoJet"; - } - - /// return the constituents that have been rejected - std::vector<PseudoJet> rejected() const{ - return validated_cs()->childless_pseudojets(); - } - - /// return the other jets that may have been found along with the - /// result of the bottom/up Soft Drop - /// The resulting vector is sorted in pt - std::vector<PseudoJet> extra_jets() const { - return sorted_by_pt((!SelectorNHardest(1))(validated_cs()->inclusive_jets())); - } - - /// return the value of beta that was used for this specific Soft Drop. 
- double beta() const {return _beta;} - - /// return the value of symmetry_cut that was used for this specific Soft Drop. - double symmetry_cut() const {return _symmetry_cut;} - - /// return the value of R0 that was used for this specific Soft Drop. - double R0() const {return _R0;} - -protected: - friend class BottomUpSoftDrop; ///< to allow setting the internal information - -private: - double _beta, _symmetry_cut, _R0; -}; - -//---------------------------------------------------------------------- -/// Class for Soft Drop recombination -/// recombines the objects that are not vetoed by Bottom-Up SoftDrop -/// -/// This recombiner only recombines, using the provided 'recombiner', -/// objects (i and j) that pass the following SoftDrop criterion: -/// -/// min(pti, ptj) > zcut (pti+ptj) (theta_ij/R0)^beta -/// -/// If the criterion fails, the hardest of i and j is kept and the -/// softest is rejected. -/// -/// Note that this is not meant for standalone use [in particular -/// because it could lead to memory issues due to the rejected indices -/// stored internally]. 
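The pairwise criterion above, min(pti, ptj) > zcut (pti+ptj) (theta_ij/R0)^beta, can be checked in isolation. The following is a minimal standalone sketch of that test; the function name and the numerical inputs are illustrative assumptions, not part of the fjcontrib code removed in this diff.

```cpp
#include <algorithm>
#include <cmath>

// Illustrative sketch (not fjcontrib code): returns true when a pair of
// subjets with transverse momenta pta, ptb separated by dR passes the
// bottom-up Soft Drop criterion min(pta, ptb) > zcut*(pta+ptb)*(dR/R0)^beta.
bool passes_soft_drop(double pta, double ptb, double dR,
                      double zcut, double beta, double R0 = 1.0) {
  const double softer = std::min(pta, ptb);
  return softer > zcut * (pta + ptb) * std::pow(dR / R0, beta);
}
```

When the check fails, the recombiner described above keeps the harder of the two objects and records the softer one as rejected.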
-/// -/// This class is a direct adaptation of PruningRecombiner in FastJet tools -class BottomUpSoftDropRecombiner : public JetDefinition::Recombiner { -public: - /// ctor - /// \param symmetry_cut value of cut on symmetry measure - /// \param beta value of the beta parameter - /// \param recomb pointer to a recombiner to use to cluster pairs - BottomUpSoftDropRecombiner(double beta, double symmetry_cut, double R0, - const JetDefinition::Recombiner *recombiner) - : _beta(beta), _symmetry_cut(symmetry_cut), _R0sqr(R0*R0), - _recombiner(recombiner) {} - - /// perform a recombination taking into account the Soft Drop - /// conditions - virtual void recombine(const PseudoJet &pa, - const PseudoJet &pb, - PseudoJet &pab) const; - - /// returns the description of the recombiner - virtual std::string description() const { - std::ostringstream oss; - oss << "SoftDrop recombiner with symmetry_cut = " << _symmetry_cut - << ", beta = " << _beta - << ", and underlying recombiner = " << _recombiner->description(); - return oss.str(); - } - - /// return the history indices that have been soft dropped away - const std::vector<unsigned int> & rejected() const{ return _rejected;} - - /// clears the list of rejected indices - /// - /// If one decides to use this recombiner standalone, one has to - /// call this after each clustering in order for the rejected() vector - /// to remain sensible and not grow to infinite size. 
- void clear_rejected(){ _rejected.clear();} - -private: - double _beta; ///< beta parameter - double _symmetry_cut; ///< value of symmetry_cut - double _R0sqr; ///< normalisation of the angular distance - const JetDefinition::Recombiner *_recombiner; ///< the underlying recombiner to use - mutable std::vector<unsigned int> _rejected; ///< list of rejected history indices -}; - -//---------------------------------------------------------------------- -/// \class BottomUpSoftDropPlugin -/// Class for a bottom/up Soft Drop algorithm, based on the Pruner plugin -/// -/// This is an internal plugin that clusters the particles using the -/// BottomUpSoftDropRecombiner. -/// -/// See BottomUpSoftDropRecombiner for a description of what bottom-up -/// SoftDrop does. -/// -/// Note that this is an internal class used by the BottomUpSoftDrop -/// transformer and it is not meant to be used as a standalone -/// clustering tool. -class BottomUpSoftDropPlugin : public JetDefinition::Plugin { -public: - /// ctor - /// \param jet_def the jet definition to be used for the - /// internal clustering - /// \param symmetry_cut value of cut on symmetry measure - /// \param beta value of beta parameter - BottomUpSoftDropPlugin(const JetDefinition &jet_def, double beta, double symmetry_cut, - double R0 = 1.0) - : _jet_def(jet_def), _beta(beta), _symmetry_cut(symmetry_cut), _R0(R0) {} - - /// the actual clustering work for the plugin - virtual void run_clustering(ClusterSequence &input_cs) const; - - /// description of the plugin - virtual std::string description() const; - - /// returns the radius - virtual double R() const {return _jet_def.R();} - -private: - JetDefinition _jet_def; ///< the internal jet definition - double _beta; ///< beta parameter - double _symmetry_cut; ///< value of symmetry_cut - double _R0; ///< normalisation of the angular distance -}; - - } } - - //FASTJET_END_NAMESPACE // defined in fastjet/internal/base.hh -#endif // __BOTTOMUPSOFTDROP_HH__ diff --git 
a/include/Rivet/Tools/fjcontrib/EnergyCorrelator.hh b/include/Rivet/Tools/fjcontrib/EnergyCorrelator.hh deleted file mode 100644 --- a/include/Rivet/Tools/fjcontrib/EnergyCorrelator.hh +++ /dev/null @@ -1,1080 +0,0 @@ -#ifndef __FASTJET_CONTRIB_ENERGYCORRELATOR_HH__ -#define __FASTJET_CONTRIB_ENERGYCORRELATOR_HH__ - -// EnergyCorrelator Package -// Questions/Comments? Email the authors. -// larkoski@mit.edu, lnecib@mit.edu, -// gavin.salam@cern.ch, jthaler@jthaler.net -// -// Copyright (c) 2013-2016 -// Andrew Larkoski, Lina Necib, Gavin Salam, and Jesse Thaler -// -// $Id: EnergyCorrelator.hh 1098 2018-01-07 20:17:52Z linoush $ -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see <http://www.gnu.org/licenses/>. -//---------------------------------------------------------------------- - -#include -#include "fastjet/FunctionOfPseudoJet.hh" - -#include -#include - -namespace Rivet { - // FASTJET_BEGIN_NAMESPACE // defined in fastjet/internal/base.hh - namespace fjcontrib { - - using namespace fastjet; - -/// \mainpage EnergyCorrelator contrib -/// -/// The EnergyCorrelator contrib provides an implementation of energy -/// correlators and their ratios as described in arXiv:1305.0007 by -/// Larkoski, Salam and Thaler. 
Additionally, the ratio observable -/// D2 described in arXiv:1409.6298 by Larkoski, Moult and Neill -/// is also included in this contrib. Finally, a generalized version of -/// the energy correlation functions is added, defined in -/// arXiv:1609.07483 by Moult, Necib and Thaler, which allows the -/// definition of the M series, N series, and U series observables. -/// There is also a generalized version of D2. -/// -/// -///

There are four main classes: -/// -/// - EnergyCorrelator -/// - EnergyCorrelatorRatio -/// - EnergyCorrelatorDoubleRatio -/// - EnergyCorrelatorGeneralized -/// -///

There are five classes that define useful combinations of the ECFs. -/// -/// - EnergyCorrelatorNseries -/// - EnergyCorrelatorMseries -/// - EnergyCorrelatorUseries -/// - EnergyCorrelatorD2 -/// - EnergyCorrelatorGeneralizedD2 -/// -///

There are also aliases for easier access: -/// - EnergyCorrelatorCseries (same as EnergyCorrelatorDoubleRatio) -/// - EnergyCorrelatorC1 (EnergyCorrelatorCseries with i=1) -/// - EnergyCorrelatorC2 (EnergyCorrelatorCseries with i=2) -/// - EnergyCorrelatorN2 (EnergyCorrelatorNseries with i=2) -/// - EnergyCorrelatorN3 (EnergyCorrelatorNseries with i=3) -/// - EnergyCorrelatorM2 (EnergyCorrelatorMseries with i=2) -/// - EnergyCorrelatorU1 (EnergyCorrelatorUseries with i=1) -/// - EnergyCorrelatorU2 (EnergyCorrelatorUseries with i=2) -/// - EnergyCorrelatorU3 (EnergyCorrelatorUseries with i=3) -/// -/// Each of these classes is a FastJet FunctionOfPseudoJet. -/// EnergyCorrelatorDoubleRatio (which is equivalent to EnergyCorrelatorCseries) -/// is in particular useful for quark/gluon discrimination and boosted -/// object tagging. -/// -/// Using the original 2- and 3-point correlators, EnergyCorrelatorD2 has -/// been shown to be the optimal combination for boosted 2-prong tagging. -/// -/// The EnergyCorrelatorNseries and EnergyCorrelatorMseries use -/// generalized correlation functions with different angular scaling, -/// and are intended for use on 2-prong and 3-prong jets. -/// The EnergyCorrelatorUseries is useful for quark/gluon discrimination. -/// -/// See the file example.cc for an illustration of usage and -/// example_basic_usage.cc for the most commonly used functions. - -//------------------------------------------------------------------------ -/// \class EnergyCorrelator -/// ECF(N,beta) is the N-point energy correlation function, with an angular exponent beta. -/// -/// It is defined as follows -/// -/// - \f$ \mathrm{ECF}(1,\beta) = \sum_i E_i \f$ -/// - \f$ \mathrm{ECF}(2,\beta) = \sum_{i<j} E_i E_j \theta_{ij}^\beta \f$ -/// - \f$ \mathrm{ECF}(3,\beta) = \sum_{i<j<k} E_i E_j E_k (\theta_{ij} \theta_{ik} \theta_{jk})^\beta \f$ -/// -/// where the energies and angles are determined by the chosen Measure. - class EnergyCorrelator : public FunctionOfPseudoJet<double> { - friend class EnergyCorrelatorGeneralized; ///< This allows ECFG to access the energy and angle definitions - ///< of this class, which are otherwise private. 
- public: - - enum Measure { - pt_R, ///< use transverse momenta and boost-invariant angles, - ///< eg \f$\mathrm{ECF}(2,\beta) = \sum_{i<j} p_{ti} p_{tj} \Delta R_{ij}^{\beta} \f$ - E_theta, ///< use energies and angles, - ///< eg \f$\mathrm{ECF}(2,\beta) = \sum_{i<j} E_i E_j \theta_{ij}^{\beta} \f$ - E_inv ///< use energies and invariant mass, - ///< eg \f$\mathrm{ECF}(2,\beta) = \sum_{i<j} E_i E_j \left( \frac{2 p_i \cdot p_j}{E_i E_j} \right)^{\beta/2} \f$ - }; - - enum Strategy { - slow, ///< the interparticle angles are not cached. - ///< For N>=3 this leads to many expensive recomputations, - ///< but has only O(n) memory usage for n particles - - storage_array /// the interparticle angles are cached. This gives a significant speed - /// improvement for N>=3, but has a memory requirement of (4n^2) bytes. - }; - - public: - - /// constructs an N-point correlator with angular exponent beta, - /// using the specified choice of energy and angular measure as well - /// as one of two possible underlying computational strategies - EnergyCorrelator(unsigned int N, - double beta, - Measure measure = pt_R, - Strategy strategy = storage_array) : - _N(N), _beta(beta), _measure(measure), _strategy(strategy) {}; - - /// destructor - virtual ~EnergyCorrelator(){} - - /// returns the value of the energy correlator for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - /// returns the part of the description related to the parameters - std::string description_parameters() const; - std::string description_no_N() const; - - private: - - unsigned int _N; - double _beta; - Measure _measure; - Strategy _strategy; - - double energy(const PseudoJet& jet) const; - double angleSquared(const PseudoJet& jet1, const PseudoJet& jet2) const; - double multiply_angles(double angles[], int n_angles, unsigned int N_total) const; - void precompute_energies_and_angles(std::vector<PseudoJet> const &particles, double* energyStore, double** angleStore) const; - double evaluate_n3(unsigned int nC, unsigned int n_angles, double* energyStore, double** angleStore) const; - double evaluate_n4(unsigned int nC, unsigned int n_angles, double* energyStore, double** angleStore) const; - double evaluate_n5(unsigned int nC, unsigned int n_angles, double* energyStore, double** angleStore) const; - }; - -// core 
EnergyCorrelator::result code in .cc file. - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorRatio -/// A class to calculate the ratio of (N+1)-point to N-point energy correlators, -/// ECF(N+1,beta)/ECF(N,beta), -/// called \f$ r_N^{(\beta)} \f$ in the publication. - class EnergyCorrelatorRatio : public FunctionOfPseudoJet<double> { - - public: - - /// constructs an (N+1)-point to N-point correlator ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well as one of two possible underlying - /// computational strategies - EnergyCorrelatorRatio(unsigned int N, - double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _N(N), _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorRatio() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - unsigned int _N; - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - inline double EnergyCorrelatorRatio::result(const PseudoJet& jet) const { - - double numerator = EnergyCorrelator(_N + 1, _beta, _measure, _strategy).result(jet); - double denominator = EnergyCorrelator(_N, _beta, _measure, _strategy).result(jet); - - return numerator/denominator; - - } - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorDoubleRatio -/// Calculates the double ratio of energy correlators, ECF(N-1,beta)*ECF(N+1,beta)/ECF(N,beta)^2. 
-/// -/// A class to calculate a double ratio of energy correlators, -/// ECF(N-1,beta)*ECF(N+1,beta)/ECF(N,beta)^2, -/// called \f$C_N^{(\beta)}\f$ in the publication, and equal to -/// \f$ r_N^{(\beta)}/r_{N-1}^{(\beta)} \f$. -/// - - class EnergyCorrelatorDoubleRatio : public FunctionOfPseudoJet<double> { - - public: - - EnergyCorrelatorDoubleRatio(unsigned int N, - double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _N(N), _beta(beta), _measure(measure), _strategy(strategy) { - - if (_N < 1) throw Error("EnergyCorrelatorDoubleRatio: N must be 1 or greater."); - - }; - - virtual ~EnergyCorrelatorDoubleRatio() {} - - - /// returns the value of the energy correlator double-ratio for a - /// jet's constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - unsigned int _N; - double _beta; - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorDoubleRatio::result(const PseudoJet& jet) const { - - double numerator = EnergyCorrelator(_N - 1, _beta, _measure, _strategy).result(jet) * EnergyCorrelator(_N + 1, _beta, _measure, _strategy).result(jet); - double denominator = pow(EnergyCorrelator(_N, _beta, _measure, _strategy).result(jet), 2.0); - - return numerator/denominator; - - } - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorC1 -/// A class to calculate the normalized 2-point energy correlators, -/// ECF(2,beta)/ECF(1,beta)^2, -/// called \f$ C_1^{(\beta)} \f$ in the publication. 
- class EnergyCorrelatorC1 : public FunctionOfPseudoJet<double> { - - public: - - /// constructs a 2-point correlator ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well as one of two possible underlying - /// computational strategies - EnergyCorrelatorC1(double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorC1() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorC1::result(const PseudoJet& jet) const { - - double numerator = EnergyCorrelator(2, _beta, _measure, _strategy).result(jet); - double denominator = EnergyCorrelator(1, _beta, _measure, _strategy).result(jet); - - return numerator/denominator/denominator; - - } - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorC2 -/// A class to calculate the double ratio of 3-point to 2-point -/// energy correlators, -/// ECF(3,beta)*ECF(1,beta)/ECF(2,beta)^2, -/// called \f$ C_2^{(\beta)} \f$ in the publication. 
- class EnergyCorrelatorC2 : public FunctionOfPseudoJet<double> { - - public: - - /// constructs a 3-point to 2-point correlator double ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well as one of two possible underlying - /// computational strategies - EnergyCorrelatorC2(double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorC2() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorC2::result(const PseudoJet& jet) const { - - double numerator3 = EnergyCorrelator(3, _beta, _measure, _strategy).result(jet); - double numerator1 = EnergyCorrelator(1, _beta, _measure, _strategy).result(jet); - double denominator = EnergyCorrelator(2, _beta, _measure, _strategy).result(jet); - - return numerator3*numerator1/denominator/denominator; - - } - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorD2 -/// A class to calculate the observable formed from the ratio of the -/// 3-point and 2-point energy correlators, -/// ECF(3,beta)*ECF(1,beta)^3/ECF(2,beta)^3, -/// called \f$ D_2^{(\beta)} \f$ in the publication. 
- class EnergyCorrelatorD2 : public FunctionOfPseudoJet<double> { - - public: - - /// constructs a 3-point to 2-point correlator ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well as one of two possible underlying - /// computational strategies - EnergyCorrelatorD2(double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorD2() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorD2::result(const PseudoJet& jet) const { - - double numerator3 = EnergyCorrelator(3, _beta, _measure, _strategy).result(jet); - double numerator1 = EnergyCorrelator(1, _beta, _measure, _strategy).result(jet); - double denominator2 = EnergyCorrelator(2, _beta, _measure, _strategy).result(jet); - - return numerator3*numerator1*numerator1*numerator1/denominator2/denominator2/denominator2; - - } - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorGeneralized -/// A generalized and normalized version of the N-point energy correlators, with -/// angular exponent beta and v number of pairwise angles. 
When \f$v = {N \choose 2}\f$ -/// (or, for convenience, \f$v = -1\f$), EnergyCorrelatorGeneralized just gives normalized -/// versions of EnergyCorrelator: -/// - \f$ \mathrm{ECFG}(-1,N,\beta) = \mathrm{ECFN}(N,\beta) = \mathrm{ECF}(N,\beta)/\mathrm{ECF}(1,\beta)^N\f$ -/// -/// Note that there is no separate class that implements ECFN, though it is a -/// notation that we will use in this documentation. Examples of the low-point normalized -/// correlators are: -/// - \f$\mathrm{ECFN}(1,\beta) = 1\f$ -/// - \f$\mathrm{ECFN}(2,\beta) = \sum_{i<j} z_i z_j \theta_{ij}^\beta\f$ - class EnergyCorrelatorGeneralized : public FunctionOfPseudoJet<double> { - public: - - /// constructs an N-point correlator with v_angles pairwise angles - /// and angular exponent beta, - /// using the specified choice of energy and angular measure as well - /// one of two possible underlying computational strategies - EnergyCorrelatorGeneralized(int v_angles, - unsigned int N, - double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _angles(v_angles), _N(N), _beta(beta), _measure(measure), _strategy(strategy), _helper_correlator(1,_beta, _measure, _strategy) {}; - - /// destructor - virtual ~EnergyCorrelatorGeneralized(){} - - /// returns the value of the normalized energy correlator for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()).
- - double result(const PseudoJet& jet) const; - std::vector<double> result_all_angles(const PseudoJet& jet) const; - - private: - - int _angles; - unsigned int _N; - double _beta; - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - EnergyCorrelator _helper_correlator; - - double energy(const PseudoJet& jet) const; - double angleSquared(const PseudoJet& jet1, const PseudoJet& jet2) const; - double multiply_angles(double angles[], int n_angles, unsigned int N_total) const; - void precompute_energies_and_angles(std::vector<PseudoJet> const &particles, double* energyStore, double** angleStore) const; - double evaluate_n3(unsigned int nC, unsigned int n_angles, double* energyStore, double** angleStore) const; - double evaluate_n4(unsigned int nC, unsigned int n_angles, double* energyStore, double** angleStore) const; - double evaluate_n5(unsigned int nC, unsigned int n_angles, double* energyStore, double** angleStore) const; - }; - - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorGeneralizedD2 -/// A class to calculate the observable formed from the ratio of the -/// 3-point and 2-point energy correlators, -/// ECFN(3,alpha)/ECFN(2,beta)^(3*alpha/beta), -/// called \f$ D_2^{(\alpha, \beta)} \f$ in the publication.
- class EnergyCorrelatorGeneralizedD2 : public FunctionOfPseudoJet { - - public: - - /// constructs an 3-point to 2-point correlator ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well one of two possible underlying - /// computational strategies - EnergyCorrelatorGeneralizedD2( - double alpha, - double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _alpha(alpha), _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorGeneralizedD2() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _alpha; - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorGeneralizedD2::result(const PseudoJet& jet) const { - - double numerator = EnergyCorrelatorGeneralized(-1, 3, _alpha, _measure, _strategy).result(jet); - double denominator = EnergyCorrelatorGeneralized(-1, 2, _beta, _measure, _strategy).result(jet); - - return numerator/pow(denominator, 3.0*_alpha/_beta); - - } - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorNseries -/// A class to calculate the observable formed from the ratio of the -/// 3-point and 2-point energy correlators, -/// N_n = ECFG(2,n+1,beta)/ECFG(1,n,beta)^2, -/// called \f$ N_i^{(\alpha, \beta)} \f$ in the publication. -/// By definition, N_1^{beta} = ECFG(1, 2, 2*beta), where the angular exponent -/// is twice as big since the N series should involve two pairwise angles. 
- class EnergyCorrelatorNseries : public FunctionOfPseudoJet { - - public: - - /// constructs a n 3-point to 2-point correlator ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well one of two possible underlying - /// computational strategies - EnergyCorrelatorNseries( - unsigned int n, - double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _n(n), _beta(beta), _measure(measure), _strategy(strategy) { - - if (_n < 1) throw Error("EnergyCorrelatorNseries: n must be 1 or greater."); - - }; - - virtual ~EnergyCorrelatorNseries() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - unsigned int _n; - double _beta; - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - }; - - - inline double EnergyCorrelatorNseries::result(const PseudoJet& jet) const { - - if (_n == 1) return EnergyCorrelatorGeneralized(1, 2, 2*_beta, _measure, _strategy).result(jet); - // By definition, N1 = ECFN(2, 2 beta) - double numerator = EnergyCorrelatorGeneralized(2, _n + 1, _beta, _measure, _strategy).result(jet); - double denominator = EnergyCorrelatorGeneralized(1, _n, _beta, _measure, _strategy).result(jet); - - return numerator/denominator/denominator; - - } - - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorN2 -/// A class to calculate the observable formed from the ratio of the -/// 3-point and 2-point energy correlators, -/// ECFG(2,3,beta)/ECFG(1,2,beta)^2, -/// called \f$ N_2^{(\beta)} \f$ in the publication. 
- class EnergyCorrelatorN2 : public FunctionOfPseudoJet { - - public: - - /// constructs an 3-point to 2-point correlator ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well one of two possible underlying - /// computational strategies - EnergyCorrelatorN2(double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorN2() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorN2::result(const PseudoJet& jet) const { - - double numerator = EnergyCorrelatorGeneralized(2, 3, _beta, _measure, _strategy).result(jet); - double denominator = EnergyCorrelatorGeneralized(1, 2, _beta, _measure, _strategy).result(jet); - - return numerator/denominator/denominator; - - } - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorN3 -/// A class to calculate the observable formed from the ratio of the -/// 3-point and 2-point energy correlators, -/// ECFG(2,4,beta)/ECFG(1,3,beta)^2, -/// called \f$ N_3^{(\beta)} \f$ in the publication. 
- class EnergyCorrelatorN3 : public FunctionOfPseudoJet { - - public: - - /// constructs an 3-point to 2-point correlator ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well one of two possible underlying - /// computational strategies - EnergyCorrelatorN3(double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorN3() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorN3::result(const PseudoJet& jet) const { - - double numerator = EnergyCorrelatorGeneralized(2, 4, _beta, _measure, _strategy).result(jet); - double denominator = EnergyCorrelatorGeneralized(1, 3, _beta, _measure, _strategy).result(jet); - - return numerator/denominator/denominator; - - } - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorMseries -/// A class to calculate the observable formed from the ratio of the -/// 3-point and 2-point energy correlators, -/// M_n = ECFG(1,n+1,beta)/ECFG(1,n,beta), -/// called \f$ M_i^{(\alpha, \beta)} \f$ in the publication. 
-/// By definition, M_1^{beta} = ECFG(1,2,beta) - class EnergyCorrelatorMseries : public FunctionOfPseudoJet { - - public: - - /// constructs a n 3-point to 2-point correlator ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well one of two possible underlying - /// computational strategies - EnergyCorrelatorMseries( - unsigned int n, - double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _n(n), _beta(beta), _measure(measure), _strategy(strategy) { - - if (_n < 1) throw Error("EnergyCorrelatorMseries: n must be 1 or greater."); - - }; - - virtual ~EnergyCorrelatorMseries() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - unsigned int _n; - double _beta; - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - }; - - - inline double EnergyCorrelatorMseries::result(const PseudoJet& jet) const { - - if (_n == 1) return EnergyCorrelatorGeneralized(1, 2, _beta, _measure, _strategy).result(jet); - - double numerator = EnergyCorrelatorGeneralized(1, _n + 1, _beta, _measure, _strategy).result(jet); - double denominator = EnergyCorrelatorGeneralized(1, _n, _beta, _measure, _strategy).result(jet); - - return numerator/denominator; - - } - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorM2 -/// A class to calculate the observable formed from the ratio of the -/// 3-point and 2-point energy correlators, -/// ECFG(1,3,beta)/ECFG(1,2,beta), -/// called \f$ M_2^{(\beta)} \f$ in the publication. 
- class EnergyCorrelatorM2 : public FunctionOfPseudoJet { - - public: - - /// constructs an 3-point to 2-point correlator ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well one of two possible underlying - /// computational strategies - EnergyCorrelatorM2(double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorM2() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorM2::result(const PseudoJet& jet) const { - - double numerator = EnergyCorrelatorGeneralized(1, 3, _beta, _measure, _strategy).result(jet); - double denominator = EnergyCorrelatorGeneralized(1, 2, _beta, _measure, _strategy).result(jet); - - return numerator/denominator; - - } - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorCseries -/// Calculates the C series energy correlators, ECFN(N-1,beta)*ECFN(N+1,beta)/ECFN(N,beta)^2. -/// This is equivalent to EnergyCorrelatorDoubleRatio -/// -/// A class to calculate a double ratio of energy correlators, -/// ECFN(N-1,beta)*ECFN(N+1,beta)/ECFN(N,beta)^2, -/// called \f$C_N^{(\beta)}\f$ in the publication, and equal to -/// \f$ r_N^{(\beta)}/r_{N-1}^{(\beta)} \f$. 
-/// - - class EnergyCorrelatorCseries : public FunctionOfPseudoJet { - - public: - - EnergyCorrelatorCseries(unsigned int N, - double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _N(N), _beta(beta), _measure(measure), _strategy(strategy) { - - if (_N < 1) throw Error("EnergyCorrelatorCseries: N must be 1 or greater."); - - }; - - virtual ~EnergyCorrelatorCseries() {} - - - /// returns the value of the energy correlator double-ratio for a - /// jet's constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - unsigned int _N; - double _beta; - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorCseries::result(const PseudoJet& jet) const { - - double numerator = EnergyCorrelatorGeneralized(-1, _N - 1, _beta, _measure, _strategy).result(jet) * EnergyCorrelatorGeneralized(-1, _N + 1, _beta, _measure, _strategy).result(jet); - double denominator = pow(EnergyCorrelatorGeneralized(-1, _N, _beta, _measure, _strategy).result(jet), 2.0); - - return numerator/denominator; - - } - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorUseries -/// A class to calculate the observable used for quark versus gluon discrimination -/// U_n = ECFG(1,n+1,beta), -/// called \f$ U_i^{(\beta)} \f$ in the publication. 
- - class EnergyCorrelatorUseries : public FunctionOfPseudoJet { - - public: - - /// constructs a n 3-point to 2-point correlator ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well one of two possible underlying - /// computational strategies - EnergyCorrelatorUseries( - unsigned int n, - double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _n(n), _beta(beta), _measure(measure), _strategy(strategy) { - - if (_n < 1) throw Error("EnergyCorrelatorUseries: n must be 1 or greater."); - - }; - - virtual ~EnergyCorrelatorUseries() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - unsigned int _n; - double _beta; - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - }; - - - inline double EnergyCorrelatorUseries::result(const PseudoJet& jet) const { - - double answer = EnergyCorrelatorGeneralized(1, _n + 1, _beta, _measure, _strategy).result(jet); - return answer; - - } - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorU1 -/// A class to calculate the observable formed from -/// ECFG(1,2,beta), -/// called \f$ U_1^{(\beta)} \f$ in the publication. 
- class EnergyCorrelatorU1 : public FunctionOfPseudoJet { - - public: - - /// constructs a 2-point correlator with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well one of two possible underlying - /// computational strategies - EnergyCorrelatorU1(double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorU1() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorU1::result(const PseudoJet& jet) const { - - double answer = EnergyCorrelatorGeneralized(1, 2, _beta, _measure, _strategy).result(jet); - - return answer; - - } - - - //------------------------------------------------------------------------ - /// \class EnergyCorrelatorU2 - /// A class to calculate the observable formed from - /// ECFG(1,3,beta), - /// called \f$ U_2^{(\beta)} \f$ in the publication. - class EnergyCorrelatorU2 : public FunctionOfPseudoJet { - - public: - - /// constructs a 3-point correlator with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well one of two possible underlying - /// computational strategies - EnergyCorrelatorU2(double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorU2() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. 
(Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorU2::result(const PseudoJet& jet) const { - - double answer = EnergyCorrelatorGeneralized(1, 3, _beta, _measure, _strategy).result(jet); - - return answer; - - } - - - //------------------------------------------------------------------------ - /// \class EnergyCorrelatorU3 - /// A class to calculate the observable formed from - /// ECFG(1,4,beta), - /// called \f$ U_3^{(\beta)} \f$ in the publication. - class EnergyCorrelatorU3 : public FunctionOfPseudoJet { - - public: - - /// constructs a 4-point correlator with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well one of two possible underlying - /// computational strategies - EnergyCorrelatorU3(double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorU3() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). 
- double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorU3::result(const PseudoJet& jet) const { - - double answer = EnergyCorrelatorGeneralized(1, 4, _beta, _measure, _strategy).result(jet); - - return answer; - - } - - - -} // namespace contrib - //FASTJET_END_NAMESPACE -} // namespace Rivet - -#endif // __FASTJET_CONTRIB_ENERGYCORRELATOR_HH__ diff --git a/include/Rivet/Tools/fjcontrib/ExtraRecombiners.hh b/include/Rivet/Tools/fjcontrib/ExtraRecombiners.hh deleted file mode 100644 --- a/include/Rivet/Tools/fjcontrib/ExtraRecombiners.hh +++ /dev/null @@ -1,105 +0,0 @@ -// Nsubjettiness Package -// Questions/Comments? jthaler@jthaler.net -// -// Copyright (c) 2011-14 -// Jesse Thaler, Ken Van Tilburg, Christopher K. Vermilion, and TJ Wilkason -// -// $Id: ExtraRecombiners.hh 828 2015-07-20 14:52:06Z jthaler $ -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see . 
-//---------------------------------------------------------------------- - -#ifndef __FASTJET_CONTRIB_WINNERTAKEALLRECOMBINER_HH__ -#define __FASTJET_CONTRIB_WINNERTAKEALLRECOMBINER_HH__ - -#include "fastjet/PseudoJet.hh" -#include "fastjet/JetDefinition.hh" - -#include -#include -#include -#include -#include -#include -#include - -namespace Rivet { -namespace fjcontrib { - -namespace Nsubjettiness { using namespace fastjet; - -///------------------------------------------------------------------------ -/// \class GeneralEtSchemeRecombiner -/// \brief Recombination scheme with generalized Et weighting -/// -/// GeneralEtSchemeRecombiner defines a new recombination scheme by inheriting from JetDefinition::Recombiner. -/// This scheme compares the pT of two input particles, and then combines them into a particle with -/// a pT equal to the sum of the two particle pTs and a direction (in rapidity/phi) weighted by the respective momenta of the -/// particle. The weighting is dependent on the power delta. For delta = infinity, this should return the same result as the -/// WinnerTakeAllRecombiner. 
-///------------------------------------------------------------------------ -class GeneralEtSchemeRecombiner : public fastjet::JetDefinition::Recombiner { -public: - - /// Constructor takes delta weighting - /// (delta = 1.0 for Et-scheme, delta = infinity for winner-take-all scheme) - GeneralEtSchemeRecombiner(double delta) : _delta(delta) {} - - /// Description - virtual std::string description() const; - - /// Recombine pa and pb and put result into pab - virtual void recombine(const fastjet::PseudoJet & pa, - const fastjet::PseudoJet & pb, - fastjet::PseudoJet & pab) const; - -private: - double _delta; ///< Weighting exponent -}; - -///------------------------------------------------------------------------ -/// \class WinnerTakeAllRecombiner -/// \brief Recombination scheme with winner-take-all weighting -/// -/// WinnerTakeAllRecombiner defines a new recombination scheme by inheriting from JetDefinition::Recombiner. -/// This scheme compares the pT of two input particles, and then combines them into a particle with -/// a pT equal to the sum of the two particle pTs and a direction (in rapidity/phi) identical to that of the harder -/// particle. This creates a jet with an axis guaranteed to align with a particle in the event. 
-///------------------------------------------------------------------------ -class WinnerTakeAllRecombiner : public fastjet::JetDefinition::Recombiner { -public: - - /// Constructor to choose value of alpha (defaulted to 1 for normal pT sum) - WinnerTakeAllRecombiner(double alpha = 1.0) : _alpha(alpha) {} - - /// Description - virtual std::string description() const; - - /// recombine pa and pb and put result into pab - virtual void recombine(const fastjet::PseudoJet & pa, - const fastjet::PseudoJet & pb, - fastjet::PseudoJet & pab) const; - -private: - double _alpha; //power of (pt/E) term when recombining particles -}; - -} //namespace Nsubjettiness - -} -} - -#endif // __FASTJET_CONTRIB_WINNERTAKEALLRECOMBINER_HH__ diff --git a/include/Rivet/Tools/fjcontrib/IteratedSoftDrop.hh b/include/Rivet/Tools/fjcontrib/IteratedSoftDrop.hh deleted file mode 100644 --- a/include/Rivet/Tools/fjcontrib/IteratedSoftDrop.hh +++ /dev/null @@ -1,223 +0,0 @@ -// $Id: IteratedSoftDrop.hh 1086 2017-10-11 08:07:26Z gsoyez $ -// -// Copyright (c) 2017-, Jesse Thaler, Kevin Zhou, Gavin P. Salam, -// Gregory Soyez -// -// based on arXiv:1704.06266 by Christopher Frye, Andrew J. Larkoski, -// Jesse Thaler, Kevin Zhou -// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see . 
-//---------------------------------------------------------------------- - -#ifndef __FASTJET_CONTRIB_ITERATEDSOFTDROP_HH__ -#define __FASTJET_CONTRIB_ITERATEDSOFTDROP_HH__ - -#include "RecursiveSoftDrop.hh" - -// FASTJET_BEGIN_NAMESPACE // defined in fastjet/internal/base.hh - - //namespace contrib{ - -namespace Rivet { - namespace fjcontrib { - using namespace fastjet; - -//------------------------------------------------------------------------ -/// \class IteratedSoftDropInfo -/// helper class that carries all the relevant information one can get -/// from running IteratedSoftDrop on a given jet (or vector of jets) -/// -class IteratedSoftDropInfo{ -public: - /// ctor without initialisation - IteratedSoftDropInfo(){} - - /// ctor with initialisation - IteratedSoftDropInfo(std::vector<std::pair<double,double> > zg_thetag_in) - : _all_zg_thetag(zg_thetag_in){} - - /// get the raw list of (angular-ordered) zg and thetag - const std::vector<std::pair<double,double> > &all_zg_thetag() const{ - return _all_zg_thetag; - } - - /// overloads the () operator so that it also returns the full (zg,thetag) list - const std::vector<std::pair<double,double> > & operator()() const{ - return _all_zg_thetag; - } - - /// overloads the [] operator to access the ith (zg,thetag) pair - const std::pair<double,double> & operator[](unsigned int i) const{ - return _all_zg_thetag[i]; - } - - /// returns the angularity with angular exponent alpha and z - /// exponent kappa calculated on the zg's and thetag's found by - /// iterated SoftDrop - /// - /// returns 0 if no substructure was found - double angularity(double alpha, double kappa=1.0) const; - - /// returns the Iterated SoftDrop multiplicity - unsigned int multiplicity() const{ return _all_zg_thetag.size(); } - - /// returns the Iterated SoftDrop multiplicity (i.e.
size) - unsigned int size() const{ return _all_zg_thetag.size(); } - -protected: - /// the real information: angular-ordered list of all the zg and - /// thetag that passed the (recursive) SD condition - std::vector<std::pair<double,double> > _all_zg_thetag; -}; - - - -//------------------------------------------------------------------------ -/// \class IteratedSoftDrop -/// implementation of the IteratedSoftDrop procedure -/// -/// This class provides an implementation of the IteratedSoftDrop -/// procedure. It is based on the SoftDrop procedure, which can be used to -/// define a 'groomed symmetry factor', equal to the symmetry factor -/// of the two subjets of the resulting groomed jet. The Iterated -/// Soft Drop procedure recursively performs Soft Drop on the harder -/// branch of the groomed jet, halting at a specified angular cut -/// \f$\theta_{\rm cut}\f$, returning a list of symmetry factors which -/// can be used to define observables. -/// -/// Like SoftDrop, the cut applied recursively is -/// \f[ -/// z > z_{\rm cut} (\theta/R_0)^\beta -/// \f] -/// with z the asymmetry measure and \f$\theta\f$ the geometrical -/// distance between the two subjets. The procedure halts when -/// \f$\theta < \theta_{\rm cut}\f$. -/// -/// By default, this implementation returns the IteratedSoftDropInfo -/// obtained after running IteratedSoftDrop on a jet -/// -/// Although all these quantities can be obtained from the returned -/// IteratedSoftDropInfo, we also provide helpers to directly get the -/// multiplicity, some (generalised) angularity, or the raw list of -/// (angular-ordered) (zg, thetag) pairs that passed the (recursive) -/// SoftDrop condition. -/// -/// We stress the fact that IteratedSoftDrop is _not_ a Transformer -/// since it returns an IteratedSoftDropInfo and not a modified -/// PseudoJet -/// -class IteratedSoftDrop : public FunctionOfPseudoJet<IteratedSoftDropInfo> {public: - /// Constructor.
Takes in the standard Soft Drop parameters, an angular cut \f$\theta_{\rm cut}\f$, - /// and a choice of angular and symmetry measure. - /// - /// \param beta the Soft Drop beta parameter - /// \param symmetry_cut the Soft Drop symmetry cut - /// \param angular_cut the angular cutoff to halt Iterated Soft Drop - /// \param R0 the angular distance normalization - /// \param subtractor an optional pointer to a pileup subtractor (ignored if zero) - IteratedSoftDrop(double beta, double symmetry_cut, double angular_cut, double R0 = 1.0, - const FunctionOfPseudoJet<PseudoJet> * subtractor = 0); - - /// Full constructor, which takes the following parameters: - /// - /// \param beta the value of the beta parameter - /// \param symmetry_cut the value of the cut on the symmetry measure - /// \param symmetry_measure the choice of measure to use to estimate the symmetry - /// \param angular_cut the angular cutoff to halt Iterated Soft Drop - /// \param R0 the angular distance normalisation [1 by default] - /// \param mu_cut the maximal allowed value of mass drop variable mu = m_heavy/m_parent - /// \param recursion_choice the strategy used to decide which subjet to recurse into - /// \param subtractor an optional pointer to a pileup subtractor (ignored if zero) - /// - /// Notes: - /// - /// - by default, SoftDrop will recluster the jet with the - /// Cambridge/Aachen algorithm if it is not already the case.
This - /// behaviour can be changed using the "set_reclustering" method - /// defined below - /// - IteratedSoftDrop(double beta, - double symmetry_cut, - RecursiveSoftDrop::SymmetryMeasure symmetry_measure, - double angular_cut, - double R0 = 1.0, - double mu_cut = std::numeric_limits<double>::infinity(), - RecursiveSoftDrop::RecursionChoice recursion_choice = RecursiveSoftDrop::larger_pt, - const FunctionOfPseudoJet<PseudoJet> * subtractor = 0); - - /// default destructor - virtual ~IteratedSoftDrop(){} - - //---------------------------------------------------------------------- - // behaviour tweaks (inherited from RecursiveSoftDrop and RecursiveSymmetryCutBase) - - /// switch to using a dynamical R0 (see RecursiveSoftDrop) - void set_dynamical_R0(bool value=true) { _rsd.set_dynamical_R0(value); } - bool use_dynamical_R0() const { return _rsd.use_dynamical_R0(); } - - /// an alternative way to set the subtractor (see RecursiveSymmetryCutBase) - void set_subtractor(const FunctionOfPseudoJet<PseudoJet> * subtractor_) {_rsd.set_subtractor(subtractor_);} - const FunctionOfPseudoJet<PseudoJet> * subtractor() const {return _rsd.subtractor();} - - /// returns the IteratedSoftDropInfo associated with the jet "jet" - IteratedSoftDropInfo result(const PseudoJet& jet) const; - - /// Tells the tagger whether to assume that the input jet has - /// already been subtracted (relevant only with a non-null - /// subtractor, see RecursiveSymmetryCutBase) - void set_input_jet_is_subtracted(bool is_subtracted) { _rsd.set_input_jet_is_subtracted(is_subtracted);} - bool input_jet_is_subtracted() const {return _rsd.input_jet_is_subtracted();} - - /// configure the reclustering prior to the recursive de-clustering - void set_reclustering(bool do_reclustering=true, const Recluster *recluster=0){ - _rsd.set_reclustering(do_reclustering, recluster); - } - - //---------------------------------------------------------------------- - // actions on jets - /// returns vector of ISD symmetry factors and splitting angles - std::vector<std::pair<double,double> >
all_zg_thetag(const PseudoJet& jet) const{ - return result(jet).all_zg_thetag(); - } - - /// returns the angularity with angular exponent alpha and z - /// exponent kappa calculated on the zg's and thetag's found by - /// iterated SoftDrop - /// - /// returns 0 if no substructure was found - double angularity(const PseudoJet& jet, double alpha, double kappa=1.0) const{ - return result(jet).angularity(alpha, kappa); - } - - /// returns the Iterated SoftDrop multiplicity - double multiplicity(const PseudoJet& jet) const{ return result(jet).multiplicity(); } - - /// description of the class - std::string description() const; - -protected: - RecursiveSoftDrop _rsd; -}; - - - -} } // namespace contrib - - //FASTJET_END_NAMESPACE - -#endif // __FASTJET_CONTRIB_ITERATEDSOFTDROP_HH__ diff --git a/include/Rivet/Tools/fjcontrib/MeasureDefinition.hh b/include/Rivet/Tools/fjcontrib/MeasureDefinition.hh deleted file mode 100644 --- a/include/Rivet/Tools/fjcontrib/MeasureDefinition.hh +++ /dev/null @@ -1,855 +0,0 @@ -// Nsubjettiness Package -// Questions/Comments? jthaler@jthaler.net -// -// Copyright (c) 2011-14 -// Jesse Thaler, Ken Van Tilburg, Christopher K. Vermilion, and TJ Wilkason -// -// $Id: MeasureDefinition.hh 828 2015-07-20 14:52:06Z jthaler $ -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see <http://www.gnu.org/licenses/>.
-//---------------------------------------------------------------------- - -#ifndef __FASTJET_CONTRIB_MEASUREDEFINITION_HH__ -#define __FASTJET_CONTRIB_MEASUREDEFINITION_HH__ - -#include "fastjet/PseudoJet.hh" -#include <cmath> -#include <vector> -#include <list> -#include <limits> - -#include "TauComponents.hh" - -namespace Rivet { -namespace fjcontrib { - -namespace Nsubjettiness{ - - - -// The following Measures are available (and their relevant arguments): -// Recommended for usage as jet shapes -class DefaultMeasure; // Default measure from which next classes derive (should not be called directly) -class NormalizedMeasure; // (beta,R0) -class UnnormalizedMeasure; // (beta) -class NormalizedCutoffMeasure; // (beta,R0,Rcutoff) -class UnnormalizedCutoffMeasure; // (beta,Rcutoff) - -// New measures as of v2.2 -// Recommended for usage as event shapes (or for jet finding) -class ConicalMeasure; // (beta,R) -class OriginalGeometricMeasure; // (R) -class ModifiedGeometricMeasure; // (R) -class ConicalGeometricMeasure; // (beta, gamma, R) -class XConeMeasure; // (beta, R) - -// Formerly GeometricMeasure, now no longer recommended, kept commented out only for cross-check purposes -//class DeprecatedGeometricMeasure; // (beta) -//class DeprecatedGeometricCutoffMeasure; // (beta,Rcutoff) - - -/////// -// -// MeasureDefinition -// -/////// - -//This is a helper class for the Minimum Axes Finders. It is defined later. -class LightLikeAxis; - -///------------------------------------------------------------------------ -/// \class MeasureDefinition -/// \brief Base class for measure definitions -/// -/// This is the base class for measure definitions. Derived classes will calculate -/// the tau_N of a jet given a specific measure and a set of axes. The measure is -/// determined by various jet and beam distances (and possible normalization factors).
-///------------------------------------------------------------------------ -class MeasureDefinition { - -public: - - /// Description of measure and parameters - virtual std::string description() const = 0; - - /// In derived classes, this should return a copy of the corresponding derived class - virtual MeasureDefinition* create() const = 0; - - //The following five functions define the measure by which tau_N is calculated, - //and are overloaded by the various measures below - - /// Distance to jet axis. This is called many times, so needs to be as fast as possible - /// Unless overloaded, it just calls jet_numerator - virtual double jet_distance_squared(const fastjet::PseudoJet& particle, const fastjet::PseudoJet& axis) const { - return jet_numerator(particle,axis); - } - - /// Distance to beam. This is called many times, so needs to be as fast as possible - /// Unless overloaded, it just calls beam_numerator - virtual double beam_distance_squared(const fastjet::PseudoJet& particle) const { - return beam_numerator(particle); - } - - /// The jet measure used in N-(sub)jettiness - virtual double jet_numerator(const fastjet::PseudoJet& particle, const fastjet::PseudoJet& axis) const = 0; - /// The beam measure used in N-(sub)jettiness - virtual double beam_numerator(const fastjet::PseudoJet& particle) const = 0; - - /// A possible normalization factor - virtual double denominator(const fastjet::PseudoJet& particle) const = 0; - - /// Run a one-pass minimization routine. There is a generic one-pass minimization that works for a wide variety of measures.
- /// This should be overloaded to create a measure-specific minimization scheme - virtual std::vector<fastjet::PseudoJet> get_one_pass_axes(int n_jets, - const std::vector<fastjet::PseudoJet>& inputs, - const std::vector<fastjet::PseudoJet>& seedAxes, - int nAttempts = 1000, // cap number of iterations - double accuracy = 0.0001 // cap distance of closest approach - ) const; - -public: - - /// Returns the tau value for a given set of particles and axes - double result(const std::vector<fastjet::PseudoJet>& particles, const std::vector<fastjet::PseudoJet>& axes) const { - return component_result(particles,axes).tau(); - } - - /// Short-hand for the result() function - inline double operator() (const std::vector<fastjet::PseudoJet>& particles, const std::vector<fastjet::PseudoJet>& axes) const { - return result(particles,axes); - } - - /// Return all of the TauComponents for specific input particles and axes - TauComponents component_result(const std::vector<fastjet::PseudoJet>& particles, const std::vector<fastjet::PseudoJet>& axes) const; - - /// Create the partitioning according to the jet/beam distances and store it in a TauPartition - TauPartition get_partition(const std::vector<fastjet::PseudoJet>& particles, const std::vector<fastjet::PseudoJet>& axes) const; - - /// Calculate the tau result using an existing partition - TauComponents component_result_from_partition(const TauPartition& partition, const std::vector<fastjet::PseudoJet>& axes) const; - - - - /// virtual destructor - virtual ~MeasureDefinition(){} - -protected: - - /// Flag set by derived classes to choose whether or not to use beam/denominator - TauMode _tau_mode; - - /// Flag set by derived classes to say whether cheap get_one_pass_axes method can be used (true by default) - bool _useAxisScaling; - - /// This is the only constructor, which requires _tau_mode and _useAxisScaling to be manually set by derived classes.
- MeasureDefinition() : _tau_mode(UNDEFINED_SHAPE), _useAxisScaling(true) {} - - - /// Used by derived classes to set whether or not to use beam/denominator information - void setTauMode(TauMode tau_mode) { - _tau_mode = tau_mode; - } - - /// Used by derived classes to say whether one can use cheap get_one_pass_axes - void setAxisScaling(bool useAxisScaling) { - _useAxisScaling = useAxisScaling; - } - - /// Uses denominator information? - bool has_denominator() const { return (_tau_mode == NORMALIZED_JET_SHAPE || _tau_mode == NORMALIZED_EVENT_SHAPE);} - /// Uses beam information? - bool has_beam() const {return (_tau_mode == UNNORMALIZED_EVENT_SHAPE || _tau_mode == NORMALIZED_EVENT_SHAPE);} - - /// Create light-like axis (used in default one-pass minimization routine) - fastjet::PseudoJet lightFrom(const fastjet::PseudoJet& input) const { - double length = sqrt(pow(input.px(),2) + pow(input.py(),2) + pow(input.pz(),2)); - return fastjet::PseudoJet(input.px()/length,input.py()/length,input.pz()/length,1.0); - } - - /// Shorthand for squaring - static inline double sq(double x) {return x*x;} - -}; - - -/////// -// -// Default Measures -// -/////// - - -///------------------------------------------------------------------------ -/// \enum DefaultMeasureType -/// \brief Options for default measure -/// -/// Can be used to switch between pp and ee measure types in the DefaultMeasure -///------------------------------------------------------------------------ -enum DefaultMeasureType { - pt_R, /// use transverse momenta and boost-invariant angles, - E_theta, /// use energies and angles, - lorentz_dot, /// use dot product inspired measure - perp_lorentz_dot /// use conical geometric inspired measures -}; - -///------------------------------------------------------------------------ -/// \class DefaultMeasure -/// \brief Base class for default N-subjettiness measure definitions -/// -/// This class is the default measure as defined in the original N-subjettiness papers. 
-/// Based on the conical measure, but with a normalization factor -/// This measure is defined as the pT of the particle multiplied by deltaR -/// to the power of beta. This class includes the normalization factor determined by R0 -///------------------------------------------------------------------------ -class DefaultMeasure : public MeasureDefinition { - -public: - - /// Description - virtual std::string description() const; - /// To allow copying around of these objects - virtual DefaultMeasure* create() const {return new DefaultMeasure(*this);} - - /// fast jet distance - virtual double jet_distance_squared(const fastjet::PseudoJet& particle, const fastjet::PseudoJet& axis) const { - return angleSquared(particle, axis); - } - - /// fast beam distance - virtual double beam_distance_squared(const fastjet::PseudoJet& /*particle*/) const { - return _RcutoffSq; - } - - /// true jet distance (given by general definitions of energy and angle) - virtual double jet_numerator(const fastjet::PseudoJet& particle, const fastjet::PseudoJet& axis) const{ - double jet_dist = angleSquared(particle, axis); - if (jet_dist > 0.0) { - return energy(particle) * std::pow(jet_dist,_beta/2.0); - } else { - return 0.0; - } - } - - /// true beam distance - virtual double beam_numerator(const fastjet::PseudoJet& particle) const { - return energy(particle) * std::pow(_Rcutoff,_beta); - } - - /// possible denominator for normalization - virtual double denominator(const fastjet::PseudoJet& particle) const { - return energy(particle) * std::pow(_R0,_beta); - } - - /// Special minimization routine (from v1.0 of N-subjettiness) - virtual std::vector<fastjet::PseudoJet> get_one_pass_axes(int n_jets, - const std::vector<fastjet::PseudoJet>& inputs, - const std::vector<fastjet::PseudoJet>& seedAxes, - int nAttempts, // cap number of iterations - double accuracy // cap distance of closest approach - ) const; - -protected: - double _beta; ///< Angular exponent - double _R0; ///< Normalization factor - double _Rcutoff; ///< Cutoff radius - double
_RcutoffSq; ///< Cutoff radius squared - DefaultMeasureType _measure_type; ///< Type of measure used (i.e. pp style or ee style) - - - /// Constructor is protected so that no one tries to call this directly. - DefaultMeasure(double beta, double R0, double Rcutoff, DefaultMeasureType measure_type = pt_R) - : MeasureDefinition(), _beta(beta), _R0(R0), _Rcutoff(Rcutoff), _RcutoffSq(sq(Rcutoff)), _measure_type(measure_type) - { - if (beta <= 0) throw Error("DefaultMeasure: You must choose beta > 0."); - if (R0 <= 0) throw Error("DefaultMeasure: You must choose R0 > 0."); - if (Rcutoff <= 0) throw Error("DefaultMeasure: You must choose Rcutoff > 0."); - } - - /// Added set measure method in case it becomes useful later - void setDefaultMeasureType(DefaultMeasureType measure_type) { - _measure_type = measure_type; - } - - /// Generalized energy value (determined by _measure_type) - double energy(const PseudoJet& jet) const; - /// Generalized angle value (determined by _measure_type) - double angleSquared(const PseudoJet& jet1, const PseudoJet& jet2) const; - - /// Name of _measure_type, so description will include the measure type - std::string measure_type_name() const { - if (_measure_type == pt_R) return "pt_R"; - else if (_measure_type == E_theta) return "E_theta"; - else if (_measure_type == lorentz_dot) return "lorentz_dot"; - else if (_measure_type == perp_lorentz_dot) return "perp_lorentz_dot"; - else return "Measure Type Undefined"; - } - - /// templated for speed (TODO: probably should remove, since not clear that there is a speed gain) - template <int N> std::vector<LightLikeAxis> UpdateAxesFast(const std::vector<LightLikeAxis> & old_axes, - const std::vector<fastjet::PseudoJet> & inputJets, - double accuracy) const; - - /// called by get_one_pass_axes to update axes iteratively - std::vector<LightLikeAxis> UpdateAxes(const std::vector<LightLikeAxis> & old_axes, - const std::vector<fastjet::PseudoJet> & inputJets, - double accuracy) const; -}; - - -///------------------------------------------------------------------------ -/// \class NormalizedCutoffMeasure -///
\brief Dimensionless default measure, with radius cutoff -/// -/// This measure is just a wrapper for DefaultMeasure -///------------------------------------------------------------------------ -class NormalizedCutoffMeasure : public DefaultMeasure { - -public: - - /// Constructor - NormalizedCutoffMeasure(double beta, double R0, double Rcutoff, DefaultMeasureType measure_type = pt_R) - : DefaultMeasure(beta, R0, Rcutoff, measure_type) { - setTauMode(NORMALIZED_JET_SHAPE); - } - - /// Description - virtual std::string description() const; - - /// For copying purposes - virtual NormalizedCutoffMeasure* create() const {return new NormalizedCutoffMeasure(*this);} - -}; - -///------------------------------------------------------------------------ -/// \class NormalizedMeasure -/// \brief Dimensionless default measure, with no cutoff -/// -/// This measure is the same as NormalizedCutoffMeasure, with Rcutoff taken to infinity. -///------------------------------------------------------------------------ -class NormalizedMeasure : public NormalizedCutoffMeasure { - -public: - - /// Constructor - NormalizedMeasure(double beta, double R0, DefaultMeasureType measure_type = pt_R) - : NormalizedCutoffMeasure(beta, R0, std::numeric_limits<double>::max(), measure_type) { - _RcutoffSq = std::numeric_limits<double>::max(); - setTauMode(NORMALIZED_JET_SHAPE); - } - - /// Description - virtual std::string description() const; - /// For copying purposes - virtual NormalizedMeasure* create() const {return new NormalizedMeasure(*this);} - -}; - - -///------------------------------------------------------------------------ -/// \class UnnormalizedCutoffMeasure -/// \brief Dimensionful default measure, with radius cutoff -/// -/// This class is the unnormalized conical measure. The only difference from NormalizedCutoffMeasure -/// is that the denominator is defined to be 1.0 by setting _has_denominator to false.
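The jet/beam assignment that all of these DefaultMeasure variants share can be sketched without fastjet. The snippet below is an illustrative stand-in, not the fjcontrib API: `ToyJet`, `deltaRsq` and `toy_tau` are invented names, and the toy assumes the unnormalized conical (pt_R) case, where each particle contributes pt times the smaller of its nearest-axis distance to the power beta and Rcutoff^beta.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Toy stand-in for a massless particle or axis: pt plus (rapidity, azimuth).
struct ToyJet { double pt, rap, phi; };

// Rapidity-azimuth distance squared, with the azimuth difference wrapped into [0, pi].
double deltaRsq(const ToyJet& a, const ToyJet& b) {
  double drap = a.rap - b.rap;
  double dphi = std::fabs(a.phi - b.phi);
  if (dphi > M_PI) dphi = 2.0 * M_PI - dphi;
  return drap * drap + dphi * dphi;
}

// Unnormalized conical tau_N: every particle contributes
// pt * min_axes( dR(axis)^beta, Rcutoff^beta ), i.e. it is assigned to its
// nearest axis unless the beam "distance" Rcutoff^beta is smaller.
double toy_tau(const std::vector<ToyJet>& particles,
               const std::vector<ToyJet>& axes,
               double beta, double Rcutoff) {
  double tau = 0.0;
  for (const ToyJet& p : particles) {
    double best = std::pow(Rcutoff, beta);  // beam contribution
    for (const ToyJet& ax : axes)
      best = std::min(best, std::pow(std::sqrt(deltaRsq(p, ax)), beta));
    tau += p.pt * best;
  }
  return tau;
}
```

With one axis at the origin, beta = 2 and Rcutoff = 1, a particle sitting on the axis contributes nothing, a nearby one contributes pt·dR², and a far-away one is capped at pt·Rcutoff^beta, which is exactly the partition that get_partition() records.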
-/// class UnnormalizedCutoffMeasure : public NormalizedCutoffMeasure { -///------------------------------------------------------------------------ -class UnnormalizedCutoffMeasure : public DefaultMeasure { - -public: - - /// Since all methods are identical, UnnormalizedMeasure inherits directly - /// from NormalizedMeasure. R0 is a dummy value since the value of R0 is unnecessary for this class, - /// and the "false" flag sets _has_denominator in MeasureDefinition to false so no denominator is used. - UnnormalizedCutoffMeasure(double beta, double Rcutoff, DefaultMeasureType measure_type = pt_R) - : DefaultMeasure(beta, std::numeric_limits<double>::quiet_NaN(), Rcutoff, measure_type) { - setTauMode(UNNORMALIZED_EVENT_SHAPE); - } - - /// Description - virtual std::string description() const; - /// For copying purposes - virtual UnnormalizedCutoffMeasure* create() const {return new UnnormalizedCutoffMeasure(*this);} - -}; - - -///------------------------------------------------------------------------ -/// \class UnnormalizedMeasure -/// \brief Dimensionless default measure, with no cutoff -/// -/// This measure is the same as UnnormalizedCutoffMeasure, with Rcutoff taken to infinity. -///------------------------------------------------------------------------ -class UnnormalizedMeasure : public UnnormalizedCutoffMeasure { - -public: - /// Since all methods are identical, UnnormalizedMeasure inherits directly - /// from NormalizedMeasure. R0 is a dummy value since the value of R0 is unnecessary for this class, - /// and the "false" flag sets _has_denominator in MeasureDefinition to false so no denominator is used.
- UnnormalizedMeasure(double beta, DefaultMeasureType measure_type = pt_R) - : UnnormalizedCutoffMeasure(beta, std::numeric_limits<double>::max(), measure_type) { - _RcutoffSq = std::numeric_limits<double>::max(); - setTauMode(UNNORMALIZED_JET_SHAPE); - } - - /// Description - virtual std::string description() const; - - /// For copying purposes - virtual UnnormalizedMeasure* create() const {return new UnnormalizedMeasure(*this);} - -}; - - -///------------------------------------------------------------------------ -/// \class ConicalMeasure -/// \brief Dimensionful event-shape measure, with radius cutoff -/// -/// Very similar to UnnormalizedCutoffMeasure, but with different normalization convention -/// and using the new default one-pass minimization algorithm. -/// Axes are also made to be light-like to ensure sensible behavior -/// Intended to be used as an event shape. -///------------------------------------------------------------------------ -class ConicalMeasure : public MeasureDefinition { - -public: - - /// Constructor - ConicalMeasure(double beta, double Rcutoff) - : MeasureDefinition(), _beta(beta), _Rcutoff(Rcutoff), _RcutoffSq(sq(Rcutoff)) { - if (beta <= 0) throw Error("ConicalMeasure: You must choose beta > 0."); - if (Rcutoff <= 0) throw Error("ConicalMeasure: You must choose Rcutoff > 0."); - setTauMode(UNNORMALIZED_EVENT_SHAPE); - } - - /// Description - virtual std::string description() const; - /// For copying purposes - virtual ConicalMeasure* create() const {return new ConicalMeasure(*this);} - - /// fast jet distance - virtual double jet_distance_squared(const fastjet::PseudoJet& particle, const fastjet::PseudoJet& axis) const { - PseudoJet lightAxis = lightFrom(axis); - return particle.squared_distance(lightAxis); - } - - /// fast beam distance - virtual double beam_distance_squared(const fastjet::PseudoJet& /*particle*/) const { - return _RcutoffSq; - } - - - /// true jet distance - virtual double jet_numerator(const fastjet::PseudoJet& particle, const
fastjet::PseudoJet& axis) const { - PseudoJet lightAxis = lightFrom(axis); - double jet_dist = particle.squared_distance(lightAxis)/_RcutoffSq; - double jet_perp = particle.perp(); - - if (_beta == 2.0) { - return jet_perp * jet_dist; - } else { - return jet_perp * pow(jet_dist,_beta/2.0); - } - } - - /// true beam distance - virtual double beam_numerator(const fastjet::PseudoJet& particle) const { - return particle.perp(); - } - - /// no denominator used for this measure - virtual double denominator(const fastjet::PseudoJet& /*particle*/) const { - return std::numeric_limits<double>::quiet_NaN(); - } - -protected: - double _beta; ///< angular exponent - double _Rcutoff; ///< effective jet radius - double _RcutoffSq;///< effective jet radius squared -}; - - - -///------------------------------------------------------------------------ -/// \class OriginalGeometricMeasure -/// \brief Dimensionful event-shape measure, with dot-product distances -/// -/// This class is the original (and hopefully now correctly coded) geometric measure. -/// This measure is defined by the Lorentz dot product between -/// the particle and the axis. This class does not include normalization of tau_N. -/// New in Nsubjettiness v2.2 -/// NOTE: This is defined differently from the DeprecatedGeometricMeasure which is now commented out.
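The dot-product distances used by the geometric measures can be checked with a few lines of standalone arithmetic. Everything below is a hypothetical stand-in for the fastjet calls (a plain `FourVec` instead of `fastjet::PseudoJet`, free functions instead of members), assuming the usual (px, py, pz, E) Minkowski convention; note that the beam measure min(n_a·p, n_b·p) with beam directions (0,0,±1,1) works out to E − |pz|.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Minimal (px, py, pz, E) four-vector; illustrative stand-in for fastjet::PseudoJet.
struct FourVec { double px, py, pz, E; };

// Minkowski dot product, the distance underlying the geometric measures.
double dot(const FourVec& a, const FourVec& b) {
  return a.E * b.E - a.px * b.px - a.py * b.py - a.pz * b.pz;
}

// Mirror of the lightFrom() helper above: rescale the 3-momentum to unit
// length and set E = 1, producing a light-like (massless) direction.
FourVec lightFrom(const FourVec& v) {
  double len = std::sqrt(v.px * v.px + v.py * v.py + v.pz * v.pz);
  return FourVec{v.px / len, v.py / len, v.pz / len, 1.0};
}

// Beam measure in the style of OriginalGeometricMeasure: the smaller Minkowski
// dot product with the two light-like beam directions, which equals E - |pz|.
double beam_measure(const FourVec& p) {
  FourVec beam_a{0.0, 0.0,  1.0, 1.0};
  FourVec beam_b{0.0, 0.0, -1.0, 1.0};
  return std::min(dot(beam_a, p), dot(beam_b, p));
}
```

As a sanity check, dot(lightFrom(v), lightFrom(v)) vanishes for any v, confirming the rescaled axis really is light-like, which is why no extra axis normalization (setAxisScaling(false)) is needed.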
-///------------------------------------------------------------------------ -class OriginalGeometricMeasure : public MeasureDefinition { - -public: - /// Constructor - OriginalGeometricMeasure(double Rcutoff) - : MeasureDefinition(), _Rcutoff(Rcutoff), _RcutoffSq(sq(Rcutoff)) { - if (Rcutoff <= 0) throw Error("OriginalGeometricMeasure: You must choose Rcutoff > 0."); - setTauMode(UNNORMALIZED_EVENT_SHAPE); - setAxisScaling(false); // No need to rescale axes (for speed up in one-pass minimization) - } - - /// Description - virtual std::string description() const; - /// For copying purposes - virtual OriginalGeometricMeasure* create() const {return new OriginalGeometricMeasure(*this);} - - // This class uses the default jet_distance_squared and beam_distance_squared - - /// true jet measure - virtual double jet_numerator(const fastjet::PseudoJet& particle, const fastjet::PseudoJet& axis) const { - return dot_product(lightFrom(axis), particle)/_RcutoffSq; - } - - /// true beam measure - virtual double beam_numerator(const fastjet::PseudoJet& particle) const { - fastjet::PseudoJet beam_a(0,0,1,1); - fastjet::PseudoJet beam_b(0,0,-1,1); - double min_perp = std::min(dot_product(beam_a, particle),dot_product(beam_b, particle)); - return min_perp; - } - - /// no denominator needed for this measure. - virtual double denominator(const fastjet::PseudoJet& /*particle*/) const { - return std::numeric_limits<double>::quiet_NaN(); - } - -protected: - double _Rcutoff; ///< Effective jet radius (rho = R^2) - double _RcutoffSq; ///< Effective jet radius squared - -}; - - -///------------------------------------------------------------------------ -/// \class ModifiedGeometricMeasure -/// \brief Dimensionful event-shape measure, with dot-product distances, modified beam measure -/// -/// This class is the Modified geometric measure. This jet measure is defined by the Lorentz dot product between -/// the particle and the axis, as in the Original Geometric Measure.
The beam measure is defined differently from -/// the above OriginalGeometric to allow for more conical jets. New in Nsubjettiness v2.2 -///------------------------------------------------------------------------ -class ModifiedGeometricMeasure : public MeasureDefinition { - -public: - /// Constructor - ModifiedGeometricMeasure(double Rcutoff) - : MeasureDefinition(), _Rcutoff(Rcutoff), _RcutoffSq(sq(Rcutoff)) { - if (Rcutoff <= 0) throw Error("ModifiedGeometricMeasure: You must choose Rcutoff > 0."); - setTauMode(UNNORMALIZED_EVENT_SHAPE); - setAxisScaling(false); // No need to rescale axes (for speed up in one-pass minimization) - } - - /// Description - virtual std::string description() const; - /// For copying purposes - virtual ModifiedGeometricMeasure* create() const {return new ModifiedGeometricMeasure(*this);} - - // This class uses the default jet_distance_squared and beam_distance_squared - - /// True jet measure - virtual double jet_numerator(const fastjet::PseudoJet& particle, const fastjet::PseudoJet& axis) const { - return dot_product(lightFrom(axis), particle)/_RcutoffSq; - } - - /// True beam measure - virtual double beam_numerator(const fastjet::PseudoJet& particle) const { - fastjet::PseudoJet lightParticle = lightFrom(particle); - return 0.5*particle.mperp()*lightParticle.pt(); - } - - /// This measure does not require a denominator - virtual double denominator(const fastjet::PseudoJet& /*particle*/) const { - return std::numeric_limits<double>::quiet_NaN(); - } - -protected: - double _Rcutoff; ///< Effective jet radius (rho = R^2) - double _RcutoffSq; ///< Effective jet radius squared - - -}; - -///------------------------------------------------------------------------ -/// \class ConicalGeometricMeasure -/// \brief Dimensionful event-shape measure, basis for XCone jet algorithm -/// -/// This class is the Conical Geometric measure.
This measure is defined by the Lorentz dot product between -/// the particle and the axis normalized by the axis and particle pT, as well as a factor of cosh(y) to vary -/// the rapidity dependence of the beam. New in Nsubjettiness v2.2, and the basis for the XCone jet algorithm -///------------------------------------------------------------------------ -class ConicalGeometricMeasure : public MeasureDefinition { - -public: - - /// Constructor - ConicalGeometricMeasure(double jet_beta, double beam_gamma, double Rcutoff) - : MeasureDefinition(), _jet_beta(jet_beta), _beam_gamma(beam_gamma), _Rcutoff(Rcutoff), _RcutoffSq(sq(Rcutoff)){ - if (jet_beta <= 0) throw Error("ConicalGeometricMeasure: You must choose beta > 0."); - if (beam_gamma <= 0) throw Error("ConicalGeometricMeasure: You must choose gamma > 0."); - if (Rcutoff <= 0) throw Error("ConicalGeometricMeasure: You must choose Rcutoff > 0."); - setTauMode(UNNORMALIZED_EVENT_SHAPE); - } - - /// Description - virtual std::string description() const; - /// For copying purposes - virtual ConicalGeometricMeasure* create() const {return new ConicalGeometricMeasure(*this);} - - /// fast jet measure - virtual double jet_distance_squared(const fastjet::PseudoJet& particle, const fastjet::PseudoJet& axis) const { - fastjet::PseudoJet lightAxis = lightFrom(axis); - double pseudoRsquared = 2.0*dot_product(lightFrom(axis),particle)/(lightAxis.pt()*particle.pt()); - return pseudoRsquared; - } - - /// fast beam measure - virtual double beam_distance_squared(const fastjet::PseudoJet& /*particle*/) const { - return _RcutoffSq; - } - - /// true jet measure - virtual double jet_numerator(const fastjet::PseudoJet& particle, const fastjet::PseudoJet& axis) const { - double jet_dist = jet_distance_squared(particle,axis)/_RcutoffSq; - if (jet_dist > 0.0) { - fastjet::PseudoJet lightParticle = lightFrom(particle); - double weight = (_beam_gamma == 1.0) ?
1.0 : std::pow(0.5*lightParticle.pt(),_beam_gamma - 1.0); - return particle.pt() * weight * std::pow(jet_dist,_jet_beta/2.0); - } else { - return 0.0; - } - } - - /// true beam measure - virtual double beam_numerator(const fastjet::PseudoJet& particle) const { - fastjet::PseudoJet lightParticle = lightFrom(particle); - double weight = (_beam_gamma == 1.0) ? 1.0 : std::pow(0.5*lightParticle.pt(),_beam_gamma - 1.0); - return particle.pt() * weight; - } - - /// no denominator needed - virtual double denominator(const fastjet::PseudoJet& /*particle*/) const { - return std::numeric_limits<double>::quiet_NaN(); - } - -protected: - double _jet_beta; ///< jet angular exponent - double _beam_gamma; ///< beam angular exponent (gamma = 1.0 is recommended) - double _Rcutoff; ///< effective jet radius - double _RcutoffSq; ///< effective jet radius squared - -}; - -///------------------------------------------------------------------------ -/// \class XConeMeasure -/// \brief Dimensionful event-shape measure used in XCone jet algorithm -/// -/// This class is the XCone Measure. This is the default measure for use with the -/// XCone algorithm. It is identical to the conical geometric measure but with gamma = 1.0. -///------------------------------------------------------------------------ -class XConeMeasure : public ConicalGeometricMeasure { - -public: - /// Constructor - XConeMeasure(double jet_beta, double R) - : ConicalGeometricMeasure(jet_beta, - 1.0, // beam_gamma, hard coded at gamma = 1.0 default - R // Rcutoff scale - ) { } - - /// Description - virtual std::string description() const; - /// For copying purposes - virtual XConeMeasure* create() const {return new XConeMeasure(*this);} - -}; - -///------------------------------------------------------------------------ -/// \class LightLikeAxis -/// \brief Helper class to define light-like axes directions -/// -/// This is a helper class for the minimization routines.
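A quick numerical check of the cosh(y) rapidity weighting mentioned for ConicalGeometricMeasure: for a massless particle, the light-like direction built by lightFrom() (unit 3-momentum, E = 1) has transverse momentum 1/cosh(y), independent of the particle's pt, which is how the (0.5·lightParticle.pt())^(gamma−1) factor acquires its rapidity dependence. The helper below is a standalone re-derivation for illustration, not fjcontrib code.

```cpp
#include <cassert>
#include <cmath>

// Transverse momentum of the light-like direction for a massless particle
// with transverse momentum pt and rapidity rap (phi = 0 for simplicity).
// For a massless particle |p| = pt*cosh(rap), so the result is 1/cosh(rap).
double light_pt(double pt, double rap) {
  double px = pt, py = 0.0, pz = pt * std::sinh(rap);
  double len = std::sqrt(px * px + py * py + pz * pz);  // what lightFrom() divides by
  double lpx = px / len, lpy = py / len;                // unit 3-vector components
  return std::sqrt(lpx * lpx + lpy * lpy);
}
```

At mid-rapidity the weight is maximal (light_pt = 1); at larger |y| it falls off like 1/cosh(y), which suppresses forward beam-region contributions relative to a flat pt weight.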
-/// It creates a convenient way of defining axes in order to better facilitate calculations. -///------------------------------------------------------------------------ -class LightLikeAxis { - -public: - /// Bare constructor - LightLikeAxis() : _rap(0.0), _phi(0.0), _weight(0.0), _mom(0.0) {} - /// Constructor - LightLikeAxis(double my_rap, double my_phi, double my_weight, double my_mom) : - _rap(my_rap), _phi(my_phi), _weight(my_weight), _mom(my_mom) {} - - /// Rapidity - double rap() const {return _rap;} - /// Azimuth - double phi() const {return _phi;} - /// weight factor - double weight() const {return _weight;} - /// pt momentum - double mom() const {return _mom;} - - /// set rapidity - void set_rap(double my_set_rap) {_rap = my_set_rap;} - /// set azimuth - void set_phi(double my_set_phi) {_phi = my_set_phi;} - /// set weight factor - void set_weight(double my_set_weight) {_weight = my_set_weight;} - /// set pt momentum - void set_mom(double my_set_mom) {_mom = my_set_mom;} - /// set all kinematics - void reset(double my_rap, double my_phi, double my_weight, double my_mom) {_rap=my_rap; _phi=my_phi; _weight=my_weight; _mom=my_mom;} - - /// Return PseudoJet version - fastjet::PseudoJet ConvertToPseudoJet(); - - /// Squared distance to PseudoJet - double DistanceSq(const fastjet::PseudoJet& input) const { - return DistanceSq(input.rap(),input.phi()); - } - - /// Distance to PseudoJet - double Distance(const fastjet::PseudoJet& input) const { - return std::sqrt(DistanceSq(input)); - } - - /// Squared distance to Lightlikeaxis - double DistanceSq(const LightLikeAxis& input) const { - return DistanceSq(input.rap(),input.phi()); - } - - /// Distance to Lightlikeaxis - double Distance(const LightLikeAxis& input) const { - return std::sqrt(DistanceSq(input)); - } - -private: - double _rap; ///< rapidity - double _phi; ///< azimuth - double _weight; ///< weight factor - double _mom; ///< pt momentum - - /// Internal squared distance calculation - double 
DistanceSq(double rap2, double phi2) const { - double rap1 = _rap; - double phi1 = _phi; - - double distRap = rap1-rap2; - double distPhi = std::fabs(phi1-phi2); - if (distPhi > M_PI) {distPhi = 2.0*M_PI - distPhi;} - return distRap*distRap + distPhi*distPhi; - } - - /// Internal distance calculation - double Distance(double rap2, double phi2) const { - return std::sqrt(DistanceSq(rap2,phi2)); - } - -}; - - -////------------------------------------------------------------------------ -///// \class DeprecatedGeometricCutoffMeasure -//// This class is the old, incorrectly coded geometric measure. -//// It is kept in case anyone wants to check old code, but should not be used for production purposes. -//class DeprecatedGeometricCutoffMeasure : public MeasureDefinition { -// -//public: -// -// // Please, please don't use this. -// DeprecatedGeometricCutoffMeasure(double jet_beta, double Rcutoff) -// : MeasureDefinition(), -// _jet_beta(jet_beta), -// _beam_beta(1.0), // This is hard coded, since alternative beta_beam values were never checked. 
-// _Rcutoff(Rcutoff), -// _RcutoffSq(sq(Rcutoff)) { -// setTauMode(UNNORMALIZED_EVENT_SHAPE); -// setAxisScaling(false); -// if (jet_beta != 2.0) { -// throw Error("Geometric minimization is currently only defined for beta = 2.0."); -// } -// } -// -// virtual std::string description() const; -// -// virtual DeprecatedGeometricCutoffMeasure* create() const {return new DeprecatedGeometricCutoffMeasure(*this);} -// -// virtual double jet_distance_squared(const fastjet::PseudoJet& particle, const fastjet::PseudoJet& axis) const { -// fastjet::PseudoJet lightAxis = lightFrom(axis); -// double pseudoRsquared = 2.0*dot_product(lightFrom(axis),particle)/(lightAxis.pt()*particle.pt()); -// return pseudoRsquared; -// } -// -// virtual double beam_distance_squared(const fastjet::PseudoJet& /*particle*/) const { -// return _RcutoffSq; -// } -// -// virtual double jet_numerator(const fastjet::PseudoJet& particle, const fastjet::PseudoJet& axis) const { -// fastjet::PseudoJet lightAxis = lightFrom(axis); -// double weight = (_beam_beta == 1.0) ? 1.0 : std::pow(lightAxis.pt(),_beam_beta - 1.0); -// return particle.pt() * weight * std::pow(jet_distance_squared(particle,axis),_jet_beta/2.0); -// } -// -// virtual double beam_numerator(const fastjet::PseudoJet& particle) const { -// double weight = (_beam_beta == 1.0) ? 
1.0 : std::pow(particle.pt()/particle.e(),_beam_beta - 1.0); -// return particle.pt() * weight * std::pow(_Rcutoff,_jet_beta); -// } -// -// virtual double denominator(const fastjet::PseudoJet& /*particle*/) const { -// return std::numeric_limits<double>::quiet_NaN(); -// } -// -// virtual std::vector<fastjet::PseudoJet> get_one_pass_axes(int n_jets, -// const std::vector<fastjet::PseudoJet>& inputs, -// const std::vector<fastjet::PseudoJet>& seedAxes, -// int nAttempts, // cap number of iterations -// double accuracy // cap distance of closest approach -// ) const; -// -//protected: -// double _jet_beta; -// double _beam_beta; -// double _Rcutoff; -// double _RcutoffSq; -// -//}; -// -//// ------------------------------------------------------------------------ -//// / \class DeprecatedGeometricMeasure -//// Same as DeprecatedGeometricCutoffMeasure, but with Rcutoff taken to infinity. -//// NOTE: This class should not be used for production purposes. -//class DeprecatedGeometricMeasure : public DeprecatedGeometricCutoffMeasure { -// -//public: -// DeprecatedGeometricMeasure(double beta) -// : DeprecatedGeometricCutoffMeasure(beta,std::numeric_limits<double>::max()) { -// _RcutoffSq = std::numeric_limits<double>::max(); -// setTauMode(UNNORMALIZED_JET_SHAPE); -// } -// -// virtual std::string description() const; -// -// virtual DeprecatedGeometricMeasure* create() const {return new DeprecatedGeometricMeasure(*this);} -//}; - - -} //namespace Nsubjettiness - -} -} - -#endif // __FASTJET_CONTRIB_MEASUREDEFINITION_HH__ diff --git a/include/Rivet/Tools/fjcontrib/ModifiedMassDropTagger.hh b/include/Rivet/Tools/fjcontrib/ModifiedMassDropTagger.hh deleted file mode 100644 --- a/include/Rivet/Tools/fjcontrib/ModifiedMassDropTagger.hh +++ /dev/null @@ -1,132 +0,0 @@ -// $Id: ModifiedMassDropTagger.hh 1032 2017-07-31 14:20:03Z gsoyez $ -// -// Copyright (c) 2014-, Gavin P. Salam -// based on arXiv:1307.0007 by Mrinal Dasgupta, Simone Marzani and Gavin P.
Salam -// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see . -//---------------------------------------------------------------------- - -#ifndef __FASTJET_CONTRIB_MODIFIEDMASSDROPTAGGER_HH__ -#define __FASTJET_CONTRIB_MODIFIEDMASSDROPTAGGER_HH__ - -#include "RecursiveSymmetryCutBase.hh" - -//FASTJET_BEGIN_NAMESPACE // defined in fastjet/internal/base.hh - - //namespace contrib{ - -namespace Rivet { - namespace fjcontrib { - using namespace fastjet; - -//------------------------------------------------------------------------ -/// \class ModifiedMassDropTagger -/// An implementation of the modified Mass-Drop Tagger from arXiv:1307.0007. -/// -class ModifiedMassDropTagger : public RecursiveSymmetryCutBase { -public: - - /// Simplified constructor, which takes just a symmetry cut (applied - /// on the scalar_z variable) and an optional subtractor. - /// - /// In this incarnation the ModifiedMassDropTagger is a bit of a - /// misnomer, because there is no mass-drop condition - /// applied. Recursion into the jet structure chooses the prong with - /// largest pt. (Results from arXiv:1307.0007 were based on the - /// largest mt, but this only makes a difference for values of the - /// symmetry_cut close to 1/2). 
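The simplified constructor below configures an mMDT that cuts on the scalar_z symmetry measure with no mass-drop condition, recursing into the larger-pt prong. A toy sketch of the per-step decision only (hypothetical helper names; the actual recursion lives in RecursiveSymmetryCutBase):

```cpp
#include <algorithm>

// Illustrative scalar_z symmetry measure: z = min(pt1, pt2) / (pt1 + pt2),
// evaluated on the transverse momenta of the two prongs of a declustering.
double scalar_z(double pt1, double pt2) {
  return std::min(pt1, pt2) / (pt1 + pt2);
}

// One mMDT declustering step: accept the splitting if z >= symmetry_cut;
// otherwise the tagger would discard the softer prong and recurse into the
// larger-pt prong (the recursion itself is omitted in this sketch).
bool mmdt_step_passes(double pt1, double pt2, double symmetry_cut) {
  return scalar_z(pt1, pt2) >= symmetry_cut;
}
```

So a 100/20 GeV splitting passes a zcut of 0.1 (z is about 0.17), while a 100/5 GeV splitting fails it and triggers recursion.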
- /// - /// If the (optional) pileup subtractor is supplied, then see - /// also the documentation for the set_input_jet_is_subtracted() member - /// function. - /// - /// NB: The configuration of MMDT provided by this constructor is - /// probably the most robust for use with subtraction. - ModifiedMassDropTagger(double symmetry_cut, - const FunctionOfPseudoJet<PseudoJet> * subtractor = 0 - ) : - RecursiveSymmetryCutBase(scalar_z, // the default SymmetryMeasure - std::numeric_limits<double>::infinity(), // the default is no mass drop - larger_pt, // the default RecursionChoice - subtractor), - _symmetry_cut(symmetry_cut) - {} - - /// Full constructor, which takes the following parameters: - /// - /// \param symmetry_cut the value of the cut on the symmetry measure - /// \param symmetry_measure the choice of measure to use to estimate the symmetry - /// \param mu_cut the maximal allowed value of mass drop variable mu = m_heavy/m_parent - /// \param recursion_choice the strategy used to decide which subjet to recurse into - /// \param subtractor an optional pointer to a pileup subtractor (ignored if zero) - /// - /// To obtain the mMDT as discussed in arXiv:1307.0007, use a - /// symmetry_measure that's one of the following - /// - /// - RecursiveSymmetryCutBase::y (for a cut on y) - /// - RecursiveSymmetryCutBase::scalar_z (for a cut on z) - /// - /// and use the default recursion choice of - /// RecursiveSymmetryCutBase::larger_pt (larger_mt will give something - /// very similar, while larger_m will give the behaviour of the - /// original, but now deprecated MassDropTagger) - /// - /// Notes: - /// - /// - By default the ModifiedMassDropTagger will recluster the jets - /// with the C/A algorithm (if needed). - /// - /// - the mu_cut parameter is mostly irrelevant when it's taken - /// larger than about 1/2: the tagger is then one that cuts - /// essentially on the (a)symmetry of the jet's momentum - /// sharing.
The default value of infinity turns off its use - /// entirely. - ModifiedMassDropTagger(double symmetry_cut, - SymmetryMeasure symmetry_measure, - double mu_cut = std::numeric_limits<double>::infinity(), - RecursionChoice recursion_choice = larger_pt, - const FunctionOfPseudoJet<PseudoJet> * subtractor = 0 - ) : - RecursiveSymmetryCutBase(symmetry_measure, mu_cut, recursion_choice, subtractor), - _symmetry_cut(symmetry_cut) - {} - - /// default destructor - virtual ~ModifiedMassDropTagger(){} - - //---------------------------------------------------------------------- - // access to class info - double symmetry_cut() const { return _symmetry_cut; } - -protected: - - /// The symmetry cut function for MMDT returns just a constant, since the cut value - /// has no dependence on the subjet kinematics - virtual double symmetry_cut_fn(const PseudoJet & /* p1 */, - const PseudoJet & /* p2 */, - void *extra_parameters = 0 - ) const {return _symmetry_cut;} - virtual std::string symmetry_cut_description() const; - - double _symmetry_cut; -}; - -} } // namespace contrib - - //FASTJET_END_NAMESPACE - -#endif // __FASTJET_CONTRIB_MODIFIEDMASSDROPTAGGER_HH__ diff --git a/include/Rivet/Tools/fjcontrib/Njettiness.hh b/include/Rivet/Tools/fjcontrib/Njettiness.hh deleted file mode 100644 --- a/include/Rivet/Tools/fjcontrib/Njettiness.hh +++ /dev/null @@ -1,266 +0,0 @@ -// Nsubjettiness Package -// Questions/Comments? jthaler@jthaler.net -// -// Copyright (c) 2011-14 -// Jesse Thaler, Ken Van Tilburg, Christopher K. Vermilion, and TJ Wilkason -// -// $Id: Njettiness.hh 822 2015-06-15 23:52:57Z jthaler $ -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version.
-// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see . -//---------------------------------------------------------------------- - -#ifndef __FASTJET_CONTRIB_NJETTINESS_HH__ -#define __FASTJET_CONTRIB_NJETTINESS_HH__ - - -#include "MeasureDefinition.hh" -#include "AxesDefinition.hh" -#include "TauComponents.hh" - -#include "fastjet/PseudoJet.hh" -#include "fastjet/SharedPtr.hh" -#include - -#include -#include -#include - -namespace Rivet { -namespace fjcontrib { - -namespace Nsubjettiness { using namespace fastjet; - -/** \mainpage Nsubjettiness Documentation - * - * These Doxygen pages provide automatically generated documentation for the - * Nsubjettiness FastJet contrib. This documentation is being slowly improved. 
- * - * \section core_classes Core Classes - * - * - Nsubjettiness: Calculating N-subjettiness jet shapes - * - NsubjettinessRatio: Calculating N-subjettiness ratios - * - XConePlugin: Running the XCone jet algorithm - * - NjettinessPlugin: Running generic N-jettiness as a jet algorithm - * - * \subsection core_classes_helper Helper Classes for Core - * - Njettiness: Core code for all N-(sub)jettiness calculations - * - NjettinessExtras: Way to access additional N-jettiness information - * - TauComponents: Output format for all N-(sub)jettiness information - * - TauPartition: Access to N-(sub)jettiness partitioning information - * - PseudoXConePlugin: Testing code to compare to XConePlugin (doesn't use one-pass minimization) - * - * \section axes_classes Axes Definitions - * - * \subsection axes_classes_base Base Classes - * - AxesDefinition: Defines generic axes definition - * - ExclusiveJetAxes: Axes finder from exclusive jet algorithm - * - ExclusiveCombinatorialJetAxes: Axes finder from exclusive jet algorithm (with extra axes to test combinatoric options) - * - HardestJetAxes: Axes finder from inclusive jet algorithm - * - * \subsection axes_classes_derived Derived Classes - * - * \b OnePass means iterative improvement to a (local) N-(sub)jettiness minimum. 
- * - * \b MultiPass means using multiple random seed inputs - * - * \b Comb means checking N + Nextra choose N combinatorial options - * - * - KT_Axes / OnePass_KT_Axes: Axes from exclusive kT - * - CA_Axes / OnePass_CA_Axes: Axes from exclusive Cambridge/Aachen - * - AntiKT_Axes / OnePass_AntiKT_Axes: Axes from inclusive anti-kT - * - WTA_KT_Axes / OnePass_WTA_KT_Axes: Axes from exclusive kT, using winner-take-all recombination - * - WTA_CA_Axes / OnePass_WTA_CA_Axes: Axes from exclusive CA, using winner-take-all recombination - * - * - GenET_GenKT_Axes / OnePass_GenET_GenKT_Axes / Comb_GenET_GenKT_Axes: Axes from exclusive generalized kt, with generalized Et recombination - * - WTA_GenKT_Axes / OnePass_WTA_GenKT_Axes / Comb_WTA_GenKT_Axes: Axes from exclusive generalized kt, with winner-take-all recombination - * - * - Manual_Axes / OnePass_Manual_Axes / MultiPass_Manual_Axes: Use manual axes - * - MultiPass_Axes: Multiple passes with random input seeds - * - * \subsection axes_classes_helper Helper Classes for Axes - * - GeneralEtSchemeRecombiner: Generalized Et-scheme recombiner - * - WinnerTakeAllRecombiner: Winner-take-all recombiner (similar functionality now in FastJet 3.1) - * - JetDefinitionWrapper: Wrapper for Jet Definitions to handle memory management - * - * \section measure_classes Measure Definitions - * - * \subsection measure_classes_bases Base Classes - * - MeasureDefinition: Defines generic measures - * - DefaultMeasure: Base class for the original N-subjettiness measures - * - * \subsection measure_classes_derived Derived Classes - * - NormalizedMeasure / UnnormalizedMeasure / NormalizedCutoffMeasure / UnnormalizedCutoffMeasure : Variants on the default N-subjettiness measure. 
- * - ConicalMeasure: Similar to default measure, but intended as N-jettiness event shape - * - ConicalGeometricMeasure / XConeMeasure: Measure used in XCone jet algorithm - * - OriginalGeometricMeasure / ModifiedGeometricMeasure: Dot product measures useful for theoretical calculations - * - * \subsection measure_classes_helper Helper Classes for Measures - * - LightLikeAxis : Defines light-like axis - */ - -/////// -// -// Main Njettiness Class -// -/////// - -///------------------------------------------------------------------------ -/// \class Njettiness -/// \brief Core class for N-(sub)jettiness calculations -/// -/** - * The N-jettiness event shape. - * - * This is the core class used to perform N-jettiness jet finding (via NjettinessPlugin or XConePlugin) - * as well as find the N-subjettiness jet shape (via Nsubjettiness). - * - * In general, the user should never need to call this object. In addition, its API should not be considered - * fixed, since all code improvements effectively happen from reorganizing this class. - * - * Njettiness uses AxesDefinition and MeasureDefinition together in order to find tau_N for the event. - * It also can return information about the axes and jets it used in the calculation, as well as - * information about how the event was partitioned. - */ -class Njettiness { -public: - - /// Main constructor that uses AxesMode and MeasureDefinition to specify measure - /// Unlike Nsubjettiness or NjettinessPlugin, the value N is not chosen - Njettiness(const AxesDefinition & axes_def, const MeasureDefinition & measure_def); - - /// Destructor - ~Njettiness() {}; - - /// setAxes for Manual mode - void setAxes(const std::vector<fastjet::PseudoJet> & myAxes); - - /// Calculates and returns all TauComponents that user would want. - /// This information is stored in _current_tau_components for later access as well.
- TauComponents getTauComponents(unsigned n_jets, const std::vector<fastjet::PseudoJet> & inputJets) const; - - /// Calculates the value of N-subjettiness, - /// but only returns the tau value from _current_tau_components - double getTau(unsigned n_jets, const std::vector<fastjet::PseudoJet> & inputJets) const { - return getTauComponents(n_jets, inputJets).tau(); - } - - /// Return all relevant information about tau components - TauComponents currentTauComponents() const {return _current_tau_components;} - /// Return axes found by getTauComponents. - std::vector<fastjet::PseudoJet> currentAxes() const { return _currentAxes;} - /// Return seedAxes used if onepass minimization (otherwise, same as currentAxes) - std::vector<fastjet::PseudoJet> seedAxes() const { return _seedAxes;} - /// Return jet partition found by getTauComponents. - std::vector<fastjet::PseudoJet> currentJets() const {return _currentPartition.jets();} - /// Return beam partition found by getTauComponents. - fastjet::PseudoJet currentBeam() const {return _currentPartition.beam();} - /// Return full partitioning information found by getTauComponents. - TauPartition currentPartition() const {return _currentPartition;} - - -private: - - /// AxesDefinition to use. Implemented as SharedPtrs to avoid memory management headaches - SharedPtr<const AxesDefinition> _axes_def; - /// MeasureDefinition to use. Implemented as SharedPtrs to avoid memory management headaches - SharedPtr<const MeasureDefinition> _measure_def; - - - // Information about the current calculation - // Defined as mutables, so user should be aware that these change when getTau is called. - // TODO: These are not thread safe and should be fixed somehow - mutable TauComponents _current_tau_components; //automatically set to have components of 0; these values will be set by the getTau function call - mutable std::vector<fastjet::PseudoJet> _currentAxes; //axes found after minimization - mutable std::vector<fastjet::PseudoJet> _seedAxes; // axes used prior to minimization (if applicable) - mutable TauPartition _currentPartition; //partitioning information - - /// Warning if the user tries to use v1.0.3 measure style.
- static LimitedWarning _old_measure_warning; - /// Warning if the user tries to use v1.0.3 axes style. - static LimitedWarning _old_axes_warning; - -public: - - // These interfaces are included for backwards compatibility, and will be deprecated in a future release (scheduled for deletion in v3.0) - - /// \deprecated - /// Deprecated enum to determine axes mode - /// The various axes choices available to the user - /// It is recommended to use AxesDefinition instead of these. - enum AxesMode { - kt_axes, // exclusive kt axes - ca_axes, // exclusive ca axes - antikt_0p2_axes, // inclusive hardest axes with antikt-0.2 - wta_kt_axes, // Winner Take All axes with kt - wta_ca_axes, // Winner Take All axes with CA - onepass_kt_axes, // one-pass minimization from kt starting point - onepass_ca_axes, // one-pass minimization from ca starting point - onepass_antikt_0p2_axes, // one-pass minimization from antikt-0.2 starting point - onepass_wta_kt_axes, //one-pass minimization of WTA axes with kt - onepass_wta_ca_axes, //one-pass minimization of WTA axes with ca - min_axes, // axes that minimize N-subjettiness (100 passes by default) - manual_axes, // set your own axes with setAxes() - onepass_manual_axes // one-pass minimization from manual starting point - }; - - /// \deprecated - /// Deprecated enum to determine measure mode - /// The measures available to the user. - /// "normalized_cutoff_measure" was the default in v1.0 of Nsubjettiness - /// "unnormalized_measure" is now the recommended default usage - /// But it is recommended to use MeasureDefinition instead of these.
- enum MeasureMode { - normalized_measure, //default normalized measure - unnormalized_measure, //default unnormalized measure - geometric_measure, //geometric measure - normalized_cutoff_measure, //default normalized measure with explicit Rcutoff - unnormalized_cutoff_measure, //default unnormalized measure with explicit Rcutoff - geometric_cutoff_measure //geometric measure with explicit Rcutoff - }; - - /// \deprecated - /// Intermediate constructor (needed to enable v1.0.3 backwards compatibility?) - Njettiness(AxesMode axes_mode, const MeasureDefinition & measure_def); - - /// \deprecated - /// Old-style constructor which takes axes/measure information as enums with measure parameters - /// This version absolutely is not recommended - Njettiness(AxesMode axes_mode, - MeasureMode measure_mode, - int num_para, - double para1 = std::numeric_limits<double>::quiet_NaN(), - double para2 = std::numeric_limits<double>::quiet_NaN(), - double para3 = std::numeric_limits<double>::quiet_NaN()) - : _axes_def(createAxesDef(axes_mode)), _measure_def(createMeasureDef(measure_mode, num_para, para1, para2, para3)) { - } - - /// \deprecated - /// Convert old style enums into new style AxesDefinition - AxesDefinition* createAxesDef(AxesMode axes_mode) const; - - /// \deprecated - /// Convert old style enums into new style MeasureDefinition - MeasureDefinition* createMeasureDef(MeasureMode measure_mode, int num_para, double para1, double para2, double para3) const; - -}; - -} // namespace Nsubjettiness - -} -} - -#endif // __FASTJET_CONTRIB_NJETTINESS_HH__ diff --git a/include/Rivet/Tools/fjcontrib/NjettinessPlugin.hh b/include/Rivet/Tools/fjcontrib/NjettinessPlugin.hh deleted file mode 100644 --- a/include/Rivet/Tools/fjcontrib/NjettinessPlugin.hh +++ /dev/null @@ -1,182 +0,0 @@ -// Nsubjettiness Package -// Questions/Comments? jthaler@jthaler.net -// -// Copyright (c) 2011-14 -// Jesse Thaler, Ken Van Tilburg, Christopher K.
Vermilion, and TJ Wilkason -// -// $Id: NjettinessPlugin.hh 822 2015-06-15 23:52:57Z jthaler $ -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see . -//---------------------------------------------------------------------- - -#ifndef __FASTJET_CONTRIB_NJETTINESSPLUGIN_HH__ -#define __FASTJET_CONTRIB_NJETTINESSPLUGIN_HH__ - -#include - -#include "Njettiness.hh" -#include "MeasureDefinition.hh" -#include "AxesDefinition.hh" -#include "TauComponents.hh" - -#include "fastjet/ClusterSequence.hh" -#include "fastjet/JetDefinition.hh" - -#include -#include - -namespace Rivet { -namespace fjcontrib { - -namespace Nsubjettiness { using namespace fastjet; - - -/// \class NjettinessPlugin -/// \brief Implements the N-jettiness Jet Algorithm -/** - * An exclusive jet finder that identifies N jets; first N axes are found, then - * particles are assigned to the nearest (DeltaR) axis and for each axis the - * corresponding jet is simply the four-momentum sum of these particles. - * - * As of version 2.2, it is recommended to use the XConePlugin, which has - * sensible default values for jet finding. - * - * Axes can be found in several ways, specified by the AxesDefinition argument. 
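The assignment step described above, where each particle is attached to the nearest axis in DeltaR, can be sketched with a hypothetical helper (for illustration only; the contrib implementation works on fastjet::PseudoJet):

```cpp
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

// Hypothetical minimal stand-in for a particle or axis position in the
// rapidity-azimuth plane.
struct RapPhi { double rap, phi; };

// Returns the index of the axis nearest to particle p in DeltaR, with the
// phi difference wrapped into [0, pi].
std::size_t nearest_axis(const RapPhi& p, const std::vector<RapPhi>& axes) {
  std::size_t best = 0;
  double best_d2 = std::numeric_limits<double>::max();
  for (std::size_t i = 0; i < axes.size(); ++i) {
    double drap = p.rap - axes[i].rap;
    double dphi = std::fabs(p.phi - axes[i].phi);
    if (dphi > M_PI) dphi = 2.0 * M_PI - dphi;
    double d2 = drap * drap + dphi * dphi;
    if (d2 < best_d2) { best_d2 = d2; best = i; }
  }
  return best;
}
```

Once every particle is assigned, each jet is simply the four-momentum sum over its assigned particles, as the class description says.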
- * The recommended AxesDefinitions for jet finding (different than for jet shapes) - * OnePass_AntiKT(R0) : one-pass minimization from anti-kT starting point - * OnePass_GenET_GenKT_Axes(delta, p, R0) : one-pass min. from GenET/KT - * OnePass_WTA_GenKT_Axes(p, R0) : one-pass min from WTA/GenKT - * For recommendations on which axes to use, please see the README file. - * - * Jet regions are determined by the MeasureDefinition. The recommended choices - * for jet finding are - * ConicalMeasure(beta,R0) : perfect cones in rapidity/azimuth plane - * XConeMeasure(beta,R0) : approximate cones based on dot product distances. - * - * Other measures introduced in version 2.2 include OriginalGeometricMeasure, - * ModifiedGeometricMeasure, and ConicalGeometricMeasure, which define N-jettiness - * through dot products of particle momenta with light-like axes. OriginalGeometricMeasure - * produces football-shaped jets due to its central weighting of the beam measure, - * but ModifiedGeometric and ConicalGeometric both deform the original geometric measure - * to allow for cone-shaped jets. The size of these cones can be controlled through Rcutoff - * just as in the other measures. See the README file or MeasureDefinition.hh for information - * on how to call these measures. 
- * - */ -class NjettinessPlugin : public JetDefinition::Plugin { -public: - - /// Constructor with same arguments as Nsubjettiness (N, AxesDefinition, MeasureDefinition) - NjettinessPlugin(int N, - const AxesDefinition & axes_def, - const MeasureDefinition & measure_def) - : _njettinessFinder(axes_def, measure_def), _N(N) {} - - - /// Description - virtual std::string description () const; - /// Jet radius (this does not make sense yet) - virtual double R() const {return -1.0;} // TODO: make this not stupid - - /// The actual clustering, which first calls Njettiness and then creates a dummy ClusterSequence - virtual void run_clustering(ClusterSequence&) const; - - /// For using manual axes with Njettiness Plugin - void setAxes(const std::vector<fastjet::PseudoJet> & myAxes) { - // Cross check that manual axes are being used is in Njettiness - _njettinessFinder.setAxes(myAxes); - } - - /// Destructor - virtual ~NjettinessPlugin() {} - -private: - - Njettiness _njettinessFinder; ///< The core Njettiness that does the heavy lifting - int _N; ///< Number of exclusive jets to find. - - /// Warning if the user tries to use v1.0.3 constructor. - static LimitedWarning _old_constructor_warning; - -public: - - // Alternative constructors that define the measure via enums and parameters - // These constructors are deprecated and will be removed in a future version. - - /// \deprecated - /// Old-style constructor with 0 arguments (DEPRECATED) - NjettinessPlugin(int N, - Njettiness::AxesMode axes_mode, - Njettiness::MeasureMode measure_mode) - : _njettinessFinder(axes_mode, measure_mode, 0), _N(N) { - _old_constructor_warning.warn("NjettinessPlugin: You are using the old style constructor. This is deprecated as of v2.1 and will be removed in v3.0.
Please use the NjettinessPlugin constructor based on AxesDefinition and MeasureDefinition instead."); - } - - /// \deprecated - /// Old-style constructor with 1 argument (DEPRECATED) - NjettinessPlugin(int N, - Njettiness::AxesMode axes_mode, - Njettiness::MeasureMode measure_mode, - double para1) - : _njettinessFinder(axes_mode, measure_mode, 1, para1), _N(N) { - _old_constructor_warning.warn("NjettinessPlugin: You are using the old style constructor. This is deprecated as of v2.1 and will be removed in v3.0. Please use the NjettinessPlugin constructor based on AxesDefinition and MeasureDefinition instead."); - - } - - /// \deprecated - /// Old-style constructor with 2 arguments (DEPRECATED) - NjettinessPlugin(int N, - Njettiness::AxesMode axes_mode, - Njettiness::MeasureMode measure_mode, - double para1, - double para2) - : _njettinessFinder(axes_mode, measure_mode, 2, para1, para2), _N(N) { - _old_constructor_warning.warn("NjettinessPlugin: You are using the old style constructor. This is deprecated as of v2.1 and will be removed in v3.0. Please use the NjettinessPlugin constructor based on AxesDefinition and MeasureDefinition instead."); - - } - - /// \deprecated - /// Old-style constructor with 3 arguments (DEPRECATED) - NjettinessPlugin(int N, - Njettiness::AxesMode axes_mode, - Njettiness::MeasureMode measure_mode, - double para1, - double para2, - double para3) - : _njettinessFinder(axes_mode, measure_mode, 3, para1, para2, para3), _N(N) { - _old_constructor_warning.warn("NjettinessPlugin: You are using the old style constructor. This is deprecated as of v2.1 and will be removed in v3.0. 
Please use the NjettinessPlugin constructor based on AxesDefinition and MeasureDefinition instead."); - } - - /// \deprecated - /// Old-style constructor for backwards compatibility with v1.0, when NormalizedCutoffMeasure was the only option - NjettinessPlugin(int N, - Njettiness::AxesMode mode, - double beta, - double R0, - double Rcutoff=std::numeric_limits<double>::max()) - : _njettinessFinder(mode, NormalizedCutoffMeasure(beta, R0, Rcutoff)), _N(N) { - _old_constructor_warning.warn("NjettinessPlugin: You are using the old style constructor. This is deprecated as of v2.1 and will be removed in v3.0. Please use the NjettinessPlugin constructor based on AxesDefinition and MeasureDefinition instead."); - } - - -}; - -} // namespace Nsubjettiness - -} -} - -#endif // __FASTJET_CONTRIB_NJETTINESSPLUGIN_HH__ diff --git a/include/Rivet/Tools/fjcontrib/Nsubjettiness.hh b/include/Rivet/Tools/fjcontrib/Nsubjettiness.hh deleted file mode 100644 --- a/include/Rivet/Tools/fjcontrib/Nsubjettiness.hh +++ /dev/null @@ -1,303 +0,0 @@ -// Nsubjettiness Package -// Questions/Comments? jthaler@jthaler.net -// -// Copyright (c) 2011-14 -// Jesse Thaler, Ken Van Tilburg, Christopher K. Vermilion, and TJ Wilkason -// -// $Id: Nsubjettiness.hh 822 2015-06-15 23:52:57Z jthaler $ -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see .
-//---------------------------------------------------------------------- - -#ifndef __FASTJET_CONTRIB_NSUBJETTINESS_HH__ -#define __FASTJET_CONTRIB_NSUBJETTINESS_HH__ - -#include - -#include "Njettiness.hh" - -#include "fastjet/FunctionOfPseudoJet.hh" -#include -#include - -namespace Rivet { -namespace fjcontrib { - -namespace Nsubjettiness { using namespace fastjet; - -// Classes defined in this file. -class Nsubjettiness; -class NsubjettinessRatio; - -///------------------------------------------------------------------------ -/// \class Nsubjettiness -/// \brief Implements the N-subjettiness jet shape -/// -/** - * The N-jettiness jet shape. - * - * Nsubjettiness extends the concept of Njettiness to a jet shape, but other - * than the set of particles considered, they are identical. This class - * wraps the core Njettiness code to provide the fastjet::FunctionOfPseudoJet - * interface for convenience in larger analyses. - * - * The recommended AxesDefinitions are: - * KT_Axes : exclusive kt axes - * WTA_KT_Axes : exclusive kt with winner-take-all recombination - * OnePass_KT_Axes : one-pass minimization from kt starting point - * OnePass_WTA_KT_Axes : one-pass min. 
from wta_kt starting point - * More AxesDefinitions are listed in the README and defined in AxesDefinition.hh - * - * The recommended MeasureDefinitions are (with the corresponding parameters) - * NormalizedMeasure(beta,R0) - * : This was the original N-subjettiness measure (dimensionless) - * UnnormalizedMeasure(beta) - * : This is the new recommended default, same as above but without - * : the normalization factor, and hence has units of GeV - * NormalizedCutoffMeasure(beta,R0,Rcutoff) - * : Same as normalized_measure, but cuts off at Rcutoff - * UnnormalizedCutoffMeasure(beta,Rcutoff) - * : Same as unnormalized_measure, but cuts off at Rcutoff - * More MeasureDefinitions are listed in the README and defined in MeasureDefinition.hh - * - * For example, for the UnnormalizedMeasure(beta), N-subjettiness is defined as: - * - * tau_N = Sum_{i in jet} p_T^i min((DR_i1)^beta, (DR_i2)^beta, ...) - * - * DR_ij is the distance sqrt(Delta_phi^2 + Delta_rap^2) between particle i - * and jet j. - * - * The NormalizedMeasure includes an extra parameter R0, and the various cutoff - * measures include an Rcutoff, which effectively defines an angular cutoff - * similar in effect to a cone-jet radius. - */ -class Nsubjettiness : public FunctionOfPseudoJet<double> { - -public: - - /// Main constructor, which takes N, the AxesDefinition, and the MeasureDefinition. - /// The Definitions are given in AxesDefinition.hh and MeasureDefinition.hh - Nsubjettiness(int N, - const AxesDefinition& axes_def, - const MeasureDefinition& measure_def) - : _njettinessFinder(axes_def,measure_def), _N(N) {} - - /// Returns tau_N, measured on the constituents of this jet - double result(const PseudoJet& jet) const; - - /// Returns components of tau_N, so that user can find individual tau values.
- TauComponents component_result(const PseudoJet& jet) const; - - /// To set axes in manual mode - void setAxes(const std::vector<fastjet::PseudoJet> & myAxes) { - // Note that cross check that manual mode has been set is in Njettiness - _njettinessFinder.setAxes(myAxes); - } - - /// returns seed axes used for onepass minimization (otherwise same as currentAxes) - std::vector<fastjet::PseudoJet> seedAxes() const { - return _njettinessFinder.seedAxes(); - } - - /// returns current axes found by result() calculation - std::vector<fastjet::PseudoJet> currentAxes() const { - return _njettinessFinder.currentAxes(); - } - - /// returns subjet regions found by result() calculation (these have valid constituents) - /// Note that the axes and the subjets are not the same - std::vector<fastjet::PseudoJet> currentSubjets() const { - return _njettinessFinder.currentJets(); - } - - /// returns components of tau_N without recalculating anything - TauComponents currentTauComponents() const { - return _njettinessFinder.currentTauComponents(); - } - - /// returns partition found by result() calculation without recalculating anything - TauPartition currentPartition() const { - return _njettinessFinder.currentPartition(); - } - -private: - - /// Core Njettiness code that is called - Njettiness _njettinessFinder; // TODO: should muck with this so result can be const without this mutable - /// Number of subjets to find - int _N; - - /// Warning if the user tries to use v1.0.3 constructor. - static LimitedWarning _old_constructor_warning; - -public: - - // The following interfaces are included for backwards compatibility, but no longer recommended.
- // They may be deleted in a future release - - /// \deprecated - /// Alternative constructors that define the measure via enums and parameters - /// These constructors will be removed in v3.0 - /// Zero parameter arguments - /// (Currently, no measure uses this) - Nsubjettiness(int N, - Njettiness::AxesMode axes_mode, - Njettiness::MeasureMode measure_mode) - : _njettinessFinder(axes_mode, measure_mode, 0), _N(N) { - _old_constructor_warning.warn("Nsubjettiness: You are using the old style constructor. This is deprecated as of v2.1 and will be removed in v3.0. Please use the Nsubjettiness constructor based on AxesDefinition and MeasureDefinition instead."); - } - - /// \deprecated - /// Constructor for one parameter argument - /// (for unnormalized_measure, para1=beta) - Nsubjettiness(int N, - Njettiness::AxesMode axes_mode, - Njettiness::MeasureMode measure_mode, - double para1) - : _njettinessFinder(axes_mode, measure_mode, 1, para1), _N(N) { - _old_constructor_warning.warn("Nsubjettiness: You are using the old style constructor. This is deprecated as of v2.1 and will be removed in v3.0. Please use the Nsubjettiness constructor based on AxesDefinition and MeasureDefinition instead."); - } - - /// \deprecated - /// Constructor for two parameter arguments - /// (for normalized_measure, para1=beta, para2=R0) - /// (for unnormalized_cutoff_measure, para1=beta, para2=Rcutoff) - Nsubjettiness(int N, - Njettiness::AxesMode axes_mode, - Njettiness::MeasureMode measure_mode, - double para1, - double para2) - : _njettinessFinder(axes_mode, measure_mode, 2, para1, para2), _N(N) { - _old_constructor_warning.warn("Nsubjettiness: You are using the old style constructor. This is deprecated as of v2.1 and will be removed in v3.0.
Please use the Nsubjettiness constructor based on AxesDefinition and MeasureDefinition instead."); - } - - /// \deprecated - /// Constructor for three parameter arguments - /// (for unnormalized_cutoff_measure, para1=beta, para2=R0, para3=Rcutoff) - Nsubjettiness(int N, - Njettiness::AxesMode axes_mode, - Njettiness::MeasureMode measure_mode, - double para1, - double para2, - double para3) - : _njettinessFinder(axes_mode, measure_mode, 3, para1, para2, para3), _N(N) { - _old_constructor_warning.warn("Nsubjettiness: You are using the old style constructor. This is deprecated as of v2.1 and will be removed in v3.0. Please use the Nsubjettiness constructor based on AxesDefinition and MeasureDefinition instead."); - } - - /// \deprecated - /// Old constructor for backwards compatibility with v1.0, - /// where normalized_cutoff_measure was the only option - Nsubjettiness(int N, - Njettiness::AxesMode axes_mode, - double beta, - double R0, - double Rcutoff=std::numeric_limits::max()) - : _njettinessFinder(axes_mode, NormalizedCutoffMeasure(beta,R0,Rcutoff)), _N(N) { - _old_constructor_warning.warn("Nsubjettiness: You are using the old style constructor. This is deprecated as of v2.1 and will be removed in v3.0. Please use the Nsubjettiness constructor based on AxesDefinition and MeasureDefinition instead."); - } - -}; - - -///------------------------------------------------------------------------ -/// \class NsubjettinessRatio -/// \brief Implements ratios of N-subjettiness jet shapes -/// -/// NsubjettinessRatio uses the results from Nsubjettiness to calculate the ratio -/// tau_N/tau_M, where N and M are specified by the user. The ratio of different tau values -/// is often used in analyses, so this class is helpful to streamline code. Note that -/// manual axis mode is not supported -class NsubjettinessRatio : public FunctionOfPseudoJet { -public: - - /// Main constructor. 
Apart from specifying both N and M, the same options as Nsubjettiness - NsubjettinessRatio(int N, - int M, - const AxesDefinition & axes_def, - const MeasureDefinition & measure_def) - : _nsub_numerator(N,axes_def,measure_def), - _nsub_denominator(M,axes_def,measure_def) { - if (axes_def.needsManualAxes()) { - throw Error("NsubjettinessRatio does not support ManualAxes mode."); - } - } - - /// Returns tau_N/tau_M based off the input jet using result function from Nsubjettiness - double result(const PseudoJet& jet) const; - -private: - - Nsubjettiness _nsub_numerator; ///< Function for numerator - Nsubjettiness _nsub_denominator; ///< Function for denominator - -public: - - // The following interfaces are included for backwards compatibility, but no longer recommended. - // They may be deprecated at some point. - - /// \deprecated - /// Old-style constructor for zero arguments - /// Alternative constructor with enums and parameters - /// Again, likely to be removed - NsubjettinessRatio(int N, - int M, - Njettiness::AxesMode axes_mode, - Njettiness::MeasureMode measure_mode) - : _nsub_numerator(N, axes_mode, measure_mode), - _nsub_denominator(M, axes_mode, measure_mode) {} - - /// \deprecated - /// Old-style constructor for one argument - NsubjettinessRatio(int N, - int M, - Njettiness::AxesMode axes_mode, - Njettiness::MeasureMode measure_mode, - double para1) - : _nsub_numerator(N, axes_mode, measure_mode, para1), - _nsub_denominator(M, axes_mode, measure_mode, para1) {} - - /// \deprecated - /// Old-style constructor for 2 arguments - NsubjettinessRatio(int N, - int M, - Njettiness::AxesMode axes_mode, - Njettiness::MeasureMode measure_mode, - double para1, - double para2) - : _nsub_numerator(N, axes_mode, measure_mode, para1, para2), - _nsub_denominator(M, axes_mode, measure_mode, para1, para2) {} - - /// \deprecated - /// Old-style constructor for 3 arguments - NsubjettinessRatio(int N, - int M, - Njettiness::AxesMode axes_mode, - Njettiness::MeasureMode 
measure_mode, - double para1, - double para2, - double para3) - : _nsub_numerator(N, axes_mode, measure_mode, para1, para2, para3), - _nsub_denominator(M, axes_mode, measure_mode, para1, para2, para3) {} - - -}; - -} // namespace Nsubjettiness - -} -} - -#endif // __FASTJET_CONTRIB_NSUBJETTINESS_HH__ diff --git a/include/Rivet/Tools/fjcontrib/Recluster.hh b/include/Rivet/Tools/fjcontrib/Recluster.hh deleted file mode 100644 --- a/include/Rivet/Tools/fjcontrib/Recluster.hh +++ /dev/null @@ -1,187 +0,0 @@ -#ifndef __FASTJET_CONTRIB_TOOLS_RECLUSTER_HH__ -#define __FASTJET_CONTRIB_TOOLS_RECLUSTER_HH__ - -// $Id: Recluster.hh 723 2014-07-30 09:11:01Z gsoyez $ -// -// Copyright (c) 2014-, Matteo Cacciari, Gavin P. Salam and Gregory Soyez -// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see . 
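[Editor's note, not part of the patch: the Nsubjettiness header deleted above documents tau_N for the UnnormalizedMeasure. The formula can be illustrated with a self-contained sketch; the `Particle` struct and the `deltaR`/`tauN` names here are hypothetical stand-ins, not the fjcontrib implementation, which operates on FastJet `PseudoJet`s.]

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// Hypothetical stand-in for a jet constituent: transverse momentum
// plus (rapidity, phi) coordinates. Not part of fjcontrib.
struct Particle { double pt, rap, phi; };

// DR_ij = sqrt(Delta_phi^2 + Delta_rap^2), as in the header comment.
double deltaR(const Particle& p, const Particle& axis) {
  const double pi = std::acos(-1.0);
  double dphi = std::fabs(p.phi - axis.phi);
  if (dphi > pi) dphi = 2.0 * pi - dphi;   // wrap phi into [0, pi]
  const double drap = p.rap - axis.rap;
  return std::sqrt(dphi * dphi + drap * drap);
}

// tau_N for the UnnormalizedMeasure(beta):
//   tau_N = Sum_{i in jet} p_T^i * min((DR_i1)^beta, (DR_i2)^beta, ...)
double tauN(const std::vector<Particle>& constituents,
            const std::vector<Particle>& axes, double beta) {
  double tau = 0.0;
  for (const Particle& p : constituents) {
    double dmin = std::numeric_limits<double>::max();
    for (const Particle& a : axes)
      dmin = std::min(dmin, std::pow(deltaR(p, a), beta));
    tau += p.pt * dmin;  // carries units of GeV, as the header notes
  }
  return tau;
}
```

A small tau_N indicates the jet radiation is well aligned with the N chosen axes, which is why ratios such as tau_2/tau_1 discriminate two-prong from one-prong jets.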
-//---------------------------------------------------------------------- - -#include -#include // to derive the ReclusterStructure from CompositeJetStructure -#include // to derive Recluster from Transformer -#include -#include - -//FASTJET_BEGIN_NAMESPACE // defined in fastjet/internal/base.hh - - //namespace contrib{ - -namespace Rivet { - namespace fjcontrib { - using namespace fastjet; - -//---------------------------------------------------------------------- -/// \class Recluster -/// Class that helps reclustering a jet with a new jet definition -/// -/// The result of the reclustering is returned as a single PseudoJet -/// with a CompositeJet structure. The pieces of that PseudoJet will -/// be the individual subjets -/// -/// When constructed from a JetDefinition, that definition will be -/// used to obtain the subjets. When constructed from a JetAlgorithm -/// and parameters (0 parameters for e+e-, just R or R and an extra -/// parameter for others) the recombination scheme will be taken as -/// the same one used to initially cluster the original jet. -/// -/// The result of this transformer depends on its usage. There are two -/// typical use-cases: either we recluster one fat jet into subjets, -/// OR, we recluster the jet with a different jet alg, when Recluster -/// is created from a full jet definition. The last parameter of the -/// constructors below dictates that behaviour: if "single" is true -/// (the default), a single jet, issued from a regular clustering is -/// returned (if there are more than one, the hardest is taken); -/// otherwise (single==false), the result will be a composite jet with -/// each subjet as pieces -/// -/// Open points for discussion: -/// -/// - do we add an option to force area support? [could be useful -/// e.g.
for the filter with a subtractor where area support is - /// mandatory] - /// -class Recluster : public Transformer {public: - /// define a recluster that decomposes a jet into subjets using a - /// generic JetDefinition - /// - /// \param subjet_def the jet definition applied to obtain the subjets - /// \param single when true, cluster the jet in a single jet (the - /// hardest one) with an associated ClusterSequence, - /// otherwise return a composite jet with subjets - /// as pieces. - Recluster(const JetDefinition & subjet_def, bool single=true) - : _subjet_def(subjet_def), _use_full_def(true), _single(single) {} - - /// define a recluster that decomposes a jet into subjets using a - /// JetAlgorithm and its parameters - /// - /// \param subjet_alg the jet algorithm applied to obtain the subjets - /// \param subjet_radius the jet radius if required - /// \param subjet_extra optional extra parameters for the jet algorithm (only when needed) - /// \param single when true, cluster the jet in a single jet (the - /// hardest one) with an associated ClusterSequence, - /// otherwise return a composite jet with subjets - /// as pieces. - /// - /// Typically, for e+e- algorithms you should use the third version - /// below with no parameters, for "standard" pp algorithms, just the - /// clustering radius has to be specified, and for genkt-type - /// algorithms, both the radius and the extra parameter have to be - /// specified.
- Recluster(JetAlgorithm subjet_alg, double subjet_radius, double subjet_extra, - bool single=true) - : _subjet_alg(subjet_alg), _use_full_def(false), - _subjet_radius(subjet_radius), _has_subjet_radius(true), - _subjet_extra(subjet_extra), _has_subjet_extra(true), _single(single) {} - Recluster(JetAlgorithm subjet_alg, double subjet_radius, bool single=true) - : _subjet_alg(subjet_alg), _use_full_def(false), - _subjet_radius(subjet_radius), _has_subjet_radius(true), - _has_subjet_extra(false), _single(single) {} - Recluster(JetAlgorithm subjet_alg, bool single=true) - : _subjet_alg(subjet_alg), _use_full_def(false), - _has_subjet_radius(false), _has_subjet_extra(false), _single(single) {} - - /// default dtor - virtual ~Recluster(){}; - - //---------------------------------------------------------------------- - // standard Transformer behaviour inherited from the base class - // (i.e. result(), description() and structural info) - - /// runs the reclustering and sets kept and rejected to be the jets of interest - /// (with non-zero rho, they will have been subtracted). 
- /// - /// \param jet the jet that gets reclustered - /// \return the reclustered jet - virtual PseudoJet result(const PseudoJet & jet) const; - - /// class description - virtual std::string description() const; - - // the type of the associated structure - typedef CompositeJetStructure StructureType; - -private: - /// set the reclustered elements in the simple case of C/A+C/A - void _recluster_cafilt(const std::vector & all_pieces, - std::vector & subjets, - double Rfilt) const; - - /// set the reclustered elements in the generic re-clustering case - void _recluster_generic(const PseudoJet & jet, - std::vector & subjets, - const JetDefinition & subjet_def, - bool do_areas) const; - - // a series of checks - //-------------------------------------------------------------------- - /// get the pieces down to the fundamental pieces - bool _get_all_pieces(const PseudoJet &jet, std::vector &all_pieces) const; - - /// get the common recombiner to all pieces (NULL if none) - const JetDefinition::Recombiner* _get_common_recombiner(const std::vector &all_pieces) const; - - /// construct the proper jet definition ensuring that the recombiner - /// is taken from the underlying pieces (an error is thrown if the - /// pieces do not share a common recombiner) - void _build_jet_def_with_recombiner(const std::vector &all_pieces, - JetDefinition &subjet_def) const; - - /// check if one can apply the simplified trick for C/A subjets - bool _check_ca(const std::vector &all_pieces, - const JetDefinition &subjet_def) const; - - /// check if the jet (or all its pieces) have explicit ghosts - /// (assuming the jet has area support) - /// - /// Note that if the jet has an associated cluster sequence that is no - /// longer valid, an error will be thrown - bool _check_explicit_ghosts(const std::vector &all_pieces) const; - - JetDefinition _subjet_def; ///< the jet definition to use to extract the subjets - JetAlgorithm _subjet_alg; ///< the jet algorithm to be used - bool _use_full_def;
///< true when the full JetDefinition is supplied to the ctor - double _subjet_radius; ///< the jet radius (only if needed for the jet alg) - bool _has_subjet_radius; ///< the subjet radius has been specified - double _subjet_extra; ///< the jet alg extra param (only if needed) - bool _has_subjet_extra; ///< the extra param has been specified - - bool _single; ///< (true) return a single jet with a - ///< regular clustering or (false) a - ///< composite jet with subjets as pieces - - static LimitedWarning _explicit_ghost_warning; -}; - -} } // namespace contrib - - //FASTJET_END_NAMESPACE // defined in fastjet/internal/base.hh - -#endif // __FASTJET_CONTRIB_TOOLS_RECLUSTER_HH__ diff --git a/include/Rivet/Tools/fjcontrib/RecursiveSoftDrop.hh b/include/Rivet/Tools/fjcontrib/RecursiveSoftDrop.hh deleted file mode 100644 --- a/include/Rivet/Tools/fjcontrib/RecursiveSoftDrop.hh +++ /dev/null @@ -1,212 +0,0 @@ -// $Id: RecursiveSoftDrop.hh 1082 2017-10-10 12:00:13Z gsoyez $ -// -// Copyright (c) 2014-, Gavin P. Salam, Gregory Soyez, Jesse Thaler, -// Kevin Zhou, Frederic Dreyer -// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see . 
-//---------------------------------------------------------------------- - -#ifndef __RECURSIVESOFTDROP_HH__ -#define __RECURSIVESOFTDROP_HH__ - -#include "Recluster.hh" -#include "SoftDrop.hh" -#include "fastjet/WrappedStructure.hh" - -#include -#include -#include - -//FASTJET_BEGIN_NAMESPACE - -//namespace contrib{ - -namespace Rivet { - namespace fjcontrib { - using namespace fastjet; - - -//------------------------------------------------------------------------ -/// \class RecursiveSoftDrop -/// An implementation of the RecursiveSoftDrop. -/// -/// Recursive Soft Drop will recursively groom a jet, removing -/// particles that fail the criterion -/// \f[ -/// z > z_{\rm cut} (\theta/R0)^\beta -/// \f] -/// until n subjets have been found. -/// -/// Several variants are supported: -/// - set_fixed_depth_mode() switches to fixed depth on all branches -/// of the clustering tree -/// - set_dynamical_R0() switches to dynamical R0 implementation of -/// RSD -/// - set_hardest_branch_only() switches to following only the -/// hardest branch (e.g. for Iterated Soft Drop) -/// - set_min_deltaR_square(val) sets a minimum angle considered for -/// substructure (e.g. for Iterated Soft Drop) -/// -/// Notes: -/// -/// - Even though the calls to "set_tagging_mode()" or -/// "set_grooming_mode(false)" are allowed, they should not be used -/// with n=-1, and the default grooming_mode has to remain -/// untouched (except for beta<0 and finite n). -/// -//---------------------------------------------------------------------- -class RecursiveSoftDrop : public SoftDrop { -public: - /// Simplified constructor. This takes the value of the "beta" - /// parameter and the symmetry cut (applied by default on the - /// scalar_z variable, as for the mMDT). It also takes an optional - /// subtractor. - /// - /// n is the number of times we require the SoftDrop condition to be - /// satisfied. n=-1 means infinity, i.e. 
we recurse into the jet - /// until individual constituents - /// - /// If the (optional) pileup subtractor can be supplied, then see - /// also the documentation for the set_input_jet_is_subtracted() member - /// function. - /// - /// \param beta the value of the beta parameter - /// \param symmetry_cut the value of the cut on the symmetry measure - /// \param n the requested number of iterations - /// \param R0 the angular distance normalisation [1 by default] - RecursiveSoftDrop(double beta, - double symmetry_cut, - int n = -1, - double R0 = 1, - const FunctionOfPseudoJet * subtractor = 0) : - SoftDrop(beta, symmetry_cut, R0, subtractor), _n(n) { set_defaults(); } - - /// Full constructor, which takes the following parameters: - /// - /// \param beta the value of the beta parameter - /// \param symmetry_cut the value of the cut on the symmetry measure - /// \param symmetry_measure the choice of measure to use to estimate the symmetry - /// \param n the requested number of iterations - /// \param R0 the angular distance normalisation [1 by default] - /// \param mu_cut the maximal allowed value of mass drop variable mu = m_heavy/m_parent - /// \param recursion_choice the strategy used to decide which subjet to recurse into - /// \param subtractor an optional pointer to a pileup subtractor (ignored if zero) - RecursiveSoftDrop(double beta, - double symmetry_cut, - SymmetryMeasure symmetry_measure, - int n = -1, - double R0 = 1.0, - double mu_cut = std::numeric_limits::infinity(), - RecursionChoice recursion_choice = larger_pt, - const FunctionOfPseudoJet * subtractor = 0) : - SoftDrop(beta, symmetry_cut, symmetry_measure, R0, mu_cut, recursion_choice, subtractor), - _n(n) { set_defaults(); } - - /// default destructor - virtual ~RecursiveSoftDrop(){} - - //---------------------------------------------------------------------- - // access to class info - int n() const { return _n; } - - //---------------------------------------------------------------------- - // on 
top of the tweaks that we inherit from SoftDrop (via - // RecursiveSymmetryBase): - // - set_verbose_structure() - // - set_subtractor() - // - set_input_jet_is_subtracted() - // we provide several other knobs, given below - - /// initialise all the flags below to their default value - void set_defaults(); - - /// switch to using the "same depth" variant where instead of - /// recursing from large to small angles and requiring n SD - /// conditions to be met (our default), we recurse simultaneously in - /// all the branches found during the previous iteration, up to a - /// maximum depth of n. - /// default: false - void set_fixed_depth_mode(bool value=true) { _fixed_depth = value; } - bool fixed_depth_mode() const { return _fixed_depth; } - - /// switch to using a dynamical R0 (used for the normalisation of - /// the symmetry measure) set by the last deltaR at which some - /// substructure was found. - /// default: false - void set_dynamical_R0(bool value=true) { _dynamical_R0 = value; } - bool use_dynamical_R0() const { return _dynamical_R0; } - - /// when finding some substructure, only follow the hardest branch - /// for the recursion - /// default: false (i.e. recurse in both branches) - void set_hardest_branch_only(bool value=true) { _hardest_branch_only = value; } - bool use_hardest_branch_only() const { return _hardest_branch_only; } - - /// set the minimum angle (squared) that we should consider for - /// substructure - /// default: -1.0 (i.e. no minimum) - void set_min_deltaR_squared(double value=-1.0) { _min_dR2 = value; } - double min_deltaR_squared() const { return _min_dR2; } - - /// description of the tool - virtual std::string description() const; - - //---------------------------------------------------------------------- - /// action on a single jet with RecursiveSoftDrop. - /// - /// uses "result_fixed_tags" by default (i.e. 
recurse from R0 to - /// smaller angles until n SD conditions have been met), or - /// "result_fixed_depth" where each of the previous SD branches are - /// recursed into down to a depth of n. - virtual PseudoJet result(const PseudoJet &jet) const; - - /// this routine applies the Soft Drop criterion recursively on the - /// CA tree until we find n subjets (or until it converges), and - /// adds them together into a groomed PseudoJet - PseudoJet result_fixed_tags(const PseudoJet &jet) const; - - /// this routine applies the Soft Drop criterion recursively on the - /// CA tree, recursing into all the branches found during the previous iteration - /// until n layers have been found (or until it converges) - PseudoJet result_fixed_depth(const PseudoJet &jet) const; - -protected: - /// return false if we reached desired layer of grooming _n - bool continue_grooming(int current_n) const { - return ((_n < 0) or (current_n < _n)); - } - -private: - int _n; ///< the value of n - - // behaviour tweaks - bool _fixed_depth; ///< look in parallel into all branches until depth n - bool _dynamical_R0; ///< when true, use the last deltaR with substructure as R0 - bool _hardest_branch_only; ///< recurse only in the hardest branch - /// when substructure is found - double _min_dR2; ///< the min allowed angle to search for substructure -}; - -// helper to get the (linear) list of prongs inside a jet resulting -// from RecursiveSoftDrop.
This would avoid having to manually go -// through the successive pairwise compositeness -std::vector recursive_soft_drop_prongs(const PseudoJet & rsd_jet); - -} } - - //FASTJET_END_NAMESPACE // defined in fastjet/internal/base.hh -#endif // __RECURSIVESOFTDROP_HH__ diff --git a/include/Rivet/Tools/fjcontrib/RecursiveSymmetryCutBase.hh b/include/Rivet/Tools/fjcontrib/RecursiveSymmetryCutBase.hh deleted file mode 100644 --- a/include/Rivet/Tools/fjcontrib/RecursiveSymmetryCutBase.hh +++ /dev/null @@ -1,392 +0,0 @@ -// $Id: RecursiveSymmetryCutBase.hh 1074 2017-09-18 15:15:20Z gsoyez $ -// -// Copyright (c) 2014-, Gavin P. Salam, Gregory Soyez, Jesse Thaler -// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see .
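[Editor's note, not part of the patch: the RecursiveSoftDrop header deleted above states the grooming criterion z > z_cut (theta/R0)^beta and the n = -1 "recurse to infinity" convention. Both can be sketched as standalone helpers; these free functions are invented for illustration, whereas the real tool applies the condition while walking the C/A cluster tree.]

```cpp
#include <cmath>

// The condition a splitting must satisfy to be kept, from the class docs:
//   z > z_cut * (theta / R0)^beta
// where z is the momentum fraction of the softer prong and theta the
// opening angle of the splitting.
bool passes_soft_drop(double z, double theta,
                      double z_cut, double beta, double R0 = 1.0) {
  return z > z_cut * std::pow(theta / R0, beta);
}

// Mirrors continue_grooming() in the header: n = -1 means recurse
// without limit, i.e. down to individual constituents.
bool continue_grooming(int current_n, int n) {
  return (n < 0) || (current_n < n);
}
```

With beta = 0 and n = 1 this reduces to the mMDT-style plain Soft Drop cut; larger beta penalises wide-angle soft radiation more strongly.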
-//---------------------------------------------------------------------- - -#ifndef __FASTJET_CONTRIB_RECURSIVESYMMETRYCUTBASE_HH__ -#define __FASTJET_CONTRIB_RECURSIVESYMMETRYCUTBASE_HH__ - -#include -#include -#include -#include "fastjet/tools/Transformer.hh" -#include "fastjet/WrappedStructure.hh" -#include "fastjet/CompositeJetStructure.hh" - -#include "fastjet/config.h" - -// we'll use the native FJ class for reclustering if available -#if FASTJET_VERSION_NUMBER >= 30100 -#include "fastjet/tools/Recluster.hh" -#else -#include "Recluster.hh" -#endif - -/** \mainpage RecursiveTools contrib - - The RecursiveTools contrib provides a set of tools for - recursive investigation of jet substructure. Currently it includes: - - fastjet::contrib::ModifiedMassDropTagger - - fastjet::contrib::SoftDrop - - fastjet::contrib::RecursiveSymmetryCutBase (from which the above two classes derive) - - fastjet::contrib::IteratedSoftDropSymmetryFactors (defines ISD procedure) - - fastjet::contrib::IteratedSoftDropMultiplicity (defines a useful observable using ISD) - - example*.cc provides usage examples - */ - - -//FASTJET_BEGIN_NAMESPACE // defined in fastjet/internal/base.hh - - //namespace contrib{ - -namespace Rivet { - namespace fjcontrib { - using namespace fastjet; - - -//------------------------------------------------------------------------ -/// \class RecursiveSymmetryCutBase -/// A base class for all the tools that de-cluster a jet until a -/// sufficiently symmetric configuration is found. -/// -/// Derived classes (so far, ModifiedMassDropTagger and SoftDrop) have to -/// implement the symmetry cut and its description -/// -/// Note that by default, the jet will be reclustered with -/// Cambridge/Aachen before applying the de-clustering procedure. This -/// behaviour can be changed using set_clustering (see below). -/// -/// By default, this class behaves as a tagger, i.e. returns an empty -/// PseudoJet if no substructure is found.
While the derived -/// ModifiedMassDropTagger is a tagger, the derived SoftDrop is a groomer -/// (i.e. it returns a non-zero jet even if no substructure is found). -/// -/// This class provides support for -/// - an optional mass-drop cut (see ctor) -/// - options to select which symmetry measure should be used (see ctor) -/// - options to select how the recursion proceeds (see ctor) -/// - options for reclustering the jet before running the de-clustering -/// (see set_reclustering) -/// - an optional subtractor (see ctor and other methods below) -class RecursiveSymmetryCutBase : public Transformer { -public: - // ctors and dtors - //---------------------------------------------------------------------- - - /// an enum of the different (a)symmetry measures that can be used - enum SymmetryMeasure{scalar_z, ///< \f$ \min(p_{ti}, p_{tj})/(p_{ti} + p_{tj}) \f$ - vector_z, ///< \f$ \min(p_{ti}, p_{tj})/p_{t(i+j)} \f$ - y, ///< \f$ \min(p_{ti}^2,p_{tj}^2) \Delta R_{ij}^2 / m_{ij}^2 \f$ - theta_E, ///< \f$ \min(E_i,E_j)/(E_i+E_j) \f$ with 3d angle (ee collisions) - cos_theta_E ///< \f$ \min(E_i,E_j)/(E_i+E_j) \f$ with - /// \f$ \sqrt{2[1-cos(theta)]}\f$ for angles (ee collisions) - }; - - /// an enum for the options of how to choose which of two subjets to recurse into - enum RecursionChoice{larger_pt, ///< choose the subjet with larger \f$ p_t \f$ - larger_mt, ///< choose the subjet with larger \f$ m_t \equiv (m^2+p_t^2)^{\frac12}] \f$ - larger_m, ///< choose the subjet with larger mass (deprecated) - larger_E ///< choose the subjet with larger energy (meant for ee collisions) - }; - - /// Full constructor, which takes the following parameters: - /// - /// \param symmetry_measure the choice of measure to use to estimate the symmetry - /// \param mu_cut the maximal allowed value of mass drop variable mu = m_heavy/m_parent - /// \param recursion_choice the strategy used to decide which subjet to recurse into - /// \param subtractor an optional pointer to a pileup 
subtractor (ignored if zero) - /// - /// If the (optional) pileup subtractor is supplied, then, by - /// default, the input jet is assumed unsubtracted and the - /// RecursiveSymmetryCutBase returns a subtracted 4-vector. [see - /// also the set_input_jet_is_subtracted() member function]. - /// - RecursiveSymmetryCutBase(SymmetryMeasure symmetry_measure = scalar_z, - double mu_cut = std::numeric_limits::infinity(), - RecursionChoice recursion_choice = larger_pt, - const FunctionOfPseudoJet * subtractor = 0 - ) : - _symmetry_measure(symmetry_measure), - _mu_cut(mu_cut), - _recursion_choice(recursion_choice), - _subtractor(subtractor), _input_jet_is_subtracted(false), - _do_reclustering(true), _recluster(0), - _grooming_mode(false), - _verbose_structure(false) // by default, don't store verbose information - {} - - /// default destructor - virtual ~RecursiveSymmetryCutBase(){} - - // access to class info - //---------------------------------------------------------------------- - SymmetryMeasure symmetry_measure() const { return _symmetry_measure; } - double mu_cut() const { return _mu_cut; } - RecursionChoice recursion_choice() const { return _recursion_choice; } - - // internal subtraction configuration - //---------------------------------------------------------------------- - - /// This tells the tagger whether to assume that the input jet has - /// already been subtracted. This is relevant only if a non-null - /// subtractor pointer has been supplied, and the default assumption - /// is that the input jet is passed unsubtracted. - /// - /// Note: given that subtractors usually change the momentum of the - /// main jet, but not that of the subjets, subjets will continue to - /// have subtraction applied to them.
- void set_input_jet_is_subtracted(bool is_subtracted) {_input_jet_is_subtracted = is_subtracted;} - - /// returns a bool to indicate if the input jet is assumed already subtracted - bool input_jet_is_subtracted() const {return _input_jet_is_subtracted;} - - /// an alternative way to set the subtractor - /// - /// Note that when a subtractor is provided, the result of the - /// RecursiveSymmetryCutBase will be a subtracted jet. - void set_subtractor(const FunctionOfPseudoJet * subtractor_) {_subtractor = subtractor_;} - - /// returns a pointer to the subtractor - const FunctionOfPseudoJet * subtractor() const {return _subtractor;} - - // reclustering behaviour - //---------------------------------------------------------------------- - - /// configure the reclustering prior to the recursive de-clustering - /// \param do_reclustering recluster the jet or not? - /// \param recluster how to recluster the jet - /// (only if do_recluster is true; - /// Cambridge/Aachen used if NULL) - /// - /// Note that the ModifiedMassDropTagger and SoftDrop are designed - /// to work with a Cambridge/Aachen clustering. Use any other option - /// at your own risk! 
- void set_reclustering(bool do_reclustering=true, const Recluster *recluster=0){ - _do_reclustering = do_reclustering; - _recluster = recluster; - } - - // what to do when no substructure is found - //---------------------------------------------------------------------- - /// specify the behaviour adopted when no substructure is found - /// - in tagging mode, an empty PseudoJet will be returned - /// - in grooming mode, a single particle is returned - /// for clarity, we provide both functions although they are redundant - void set_grooming_mode(bool enable=true){ _grooming_mode = enable;} - void set_tagging_mode(bool enable=true){ _grooming_mode = !enable;} - - - /// Allows access to verbose information about recursive declustering, - /// in particular values of symmetry, delta_R, and mu of dropped branches - void set_verbose_structure(bool enable=true) { _verbose_structure = enable; } - bool has_verbose_structure() const { return _verbose_structure; } - - - // inherited from the Transformer base - //---------------------------------------------------------------------- - - /// the function that carries out the tagging; if a subtractor is - /// being used, then this function assumes that input jet is - /// unsubtracted (unless set_input_jet_is_subtracted(true) has been - /// explicitly called before) and the result of the MMDT will be a - /// subtracted jet.
- virtual PseudoJet result(const PseudoJet & j) const; - - /// description of the tool - virtual std::string description() const; - - /// returns the geometrical distance between the two particles - /// depending on the symmetry measure used - double squared_geometric_distance(const PseudoJet &j1, - const PseudoJet &j2) const; - - - class StructureType; - - /// for testing - static bool _verbose; - -protected: - // the methods below have to be defined by derived classes - //---------------------------------------------------------------------- - /// the cut on the symmetry measure (typically z) that one needs to - /// apply for a given pair of subjets p1 and p2 - virtual double symmetry_cut_fn(const PseudoJet & /* p1 */, - const PseudoJet & /* p2 */, - void *extra_parameters = 0) const = 0; - /// the associated description - virtual std::string symmetry_cut_description() const = 0; - - //---------------------------------------------------------------------- - /// this defines status codes when checking for substructure - enum RecursionStatus{ - recursion_success=0, //< found some substructure - recursion_dropped, //< dropped softest prong; recursion continues - recursion_no_parents, //< down to constituents; bottom of recursion - recursion_issue //< something went wrong; recursion stops - }; - - //---------------------------------------------------------------------- - /// the method below is the one actually performing one step of the - /// recursion.
- /// - /// It returns a status code (defined above) - /// - /// In case of success, all the information is filled - /// In case of "no parents", piece1 is the same subjet - /// In case of trouble, piece2 will be a 0 PJ and piece1 is the PJ we - /// should return (either 0 itself if the issue was critical, or - /// non-zero in case of a minor issue just causing the recursion to - /// stop) - /// - /// The extra_parameter argument allows one to pass extra arguments - /// to the symmetry condition - RecursionStatus recurse_one_step(const PseudoJet & subjet, - PseudoJet &piece1, PseudoJet &piece2, - double &sym, double &mu2, - void *extra_parameters = 0) const; - - //---------------------------------------------------------------------- - /// helper for handling the reclustering - PseudoJet _recluster_if_needed(const PseudoJet &jet) const; - - //---------------------------------------------------------------------- - // helpers for selecting between ee and pp cases - bool is_ee() const{ - return ((_symmetry_measure==theta_E) || (_symmetry_measure==cos_theta_E)); - } - -private: - SymmetryMeasure _symmetry_measure; - double _mu_cut; - RecursionChoice _recursion_choice; - const FunctionOfPseudoJet * _subtractor; - bool _input_jet_is_subtracted; - - bool _do_reclustering; ///< start with a reclustering - const Recluster *_recluster; ///< how to recluster the jet - - bool _grooming_mode; ///< grooming or tagging mode - - static LimitedWarning _negative_mass_warning; - static LimitedWarning _mu2_gt1_warning; - //static LimitedWarning _nonca_warning; - static LimitedWarning _explicit_ghost_warning; - - // additional verbose structure information - bool _verbose_structure; - - /// decide what to return when no substructure has been found - PseudoJet _result_no_substructure(const PseudoJet &last_parent) const; -}; - - - -//---------------------------------------------------------------------- -/// class to hold the structure of a jet tagged by RecursiveSymmetryCutBase.
-class RecursiveSymmetryCutBase::StructureType : public WrappedStructure { -public: - StructureType(const PseudoJet & j) : - WrappedStructure(j.structure_shared_ptr()), _delta_R(-1.0), _symmetry(-1.0), _mu(-1.0), - _is_composite(false), _has_verbose(false) // by default, do not store verbose structure - {} - - /// construct a structure with - /// - basic info inherited from the reference jet "j" - /// - a given deltaR for substructure - /// - a given symmetry for substructure - /// - a given mu parameter for substructure - /// If j is a "composite jet", it means that it has further - /// substructure to potentially recurse into - StructureType(const PseudoJet & j, double delta_R_in, double symmetry_in, double mu_in=-1.0) : - WrappedStructure(j.structure_shared_ptr()), _delta_R(delta_R_in), _symmetry(symmetry_in), _mu(mu_in), - _is_composite(dynamic_cast(j.structure_ptr()) != NULL), - _has_verbose(false) // by default, do not store verbose structure - {} - - // information about kept branch - double delta_R() const {return _delta_R;} - double thetag() const {return _delta_R;} // alternative name - double symmetry() const {return _symmetry;} - double zg() const {return _symmetry;} // alternative name - double mu() const {return _mu;} - - // additional verbose information about dropped branches - bool has_verbose() const { return _has_verbose;} - void set_verbose(bool value) { _has_verbose = value;} - - /// returns true if the current jet has some substructure (i.e. has - /// been tagged by the recursion) or not - /// - /// Note that this should include deltaR==0 (e.g.
a perfectly - /// collinear branching with SoftDrop) - bool has_substructure() const { return _delta_R>=0; } - - /// number of dropped branches - int dropped_count(bool global=true) const; - - /// delta_R of dropped branches - /// when "global" is set, recurse into possible further substructure - std::vector dropped_delta_R(bool global=true) const; - void set_dropped_delta_R(const std::vector &v) { _dropped_delta_R = v; } - - /// symmetry values of dropped branches - std::vector dropped_symmetry(bool global=true) const; - void set_dropped_symmetry(const std::vector &v) { _dropped_symmetry = v; } - - /// mass drop values of dropped branches - std::vector dropped_mu(bool global=true) const; - void set_dropped_mu(const std::vector &v) { _dropped_mu = v; } - - /// maximum symmetry value dropped - double max_dropped_symmetry(bool global=true) const; - - /// (global) list of groomed away elements as zg and thetag - /// sorted from largest to smallest angle - std::vector > sorted_zg_and_thetag() const; - -private: - double _delta_R, _symmetry, _mu; - friend class RecursiveSymmetryCutBase; - - bool _is_composite; - - // additional verbose information - bool _has_verbose; - // information about dropped values - std::vector _dropped_delta_R; - std::vector _dropped_symmetry; - std::vector _dropped_mu; - - bool check_verbose(const std::string &what) const{ - if (!_has_verbose){ - throw Error("RecursiveSymmetryCutBase::StructureType: Verbose structure must be turned on to get "+what+"."); - return false; - } - return true; - } - - -}; - -} } // namespace contrib - - //FASTJET_END_NAMESPACE - -#endif // __FASTJET_CONTRIB_RECURSIVESYMMETRYCUTBASE_HH__ diff --git a/include/Rivet/Tools/fjcontrib/SoftDrop.hh b/include/Rivet/Tools/fjcontrib/SoftDrop.hh deleted file mode 100644 --- a/include/Rivet/Tools/fjcontrib/SoftDrop.hh +++ /dev/null @@ -1,152 +0,0 @@ -// $Id: SoftDrop.hh 1034 2017-08-01 10:03:53Z gsoyez $ -// -// Copyright (c) 2014-, Gregory Soyez, Jesse Thaler -// based on
arXiv:1402.2657 by Andrew J. Larkoski, Simone Marzani, -// Gregory Soyez, Jesse Thaler -// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see . -//---------------------------------------------------------------------- - -#ifndef __FASTJET_CONTRIB_SOFTDROP_HH__ -#define __FASTJET_CONTRIB_SOFTDROP_HH__ - -#include "RecursiveSymmetryCutBase.hh" - -//FASTJET_BEGIN_NAMESPACE // defined in fastjet/internal/base.hh - - -namespace Rivet { - namespace fjcontrib { - using namespace fastjet; - - -//------------------------------------------------------------------------ -/// \class SoftDrop -/// An implementation of the SoftDrop from arXiv:1402.2657. -/// -/// For the basic functionalities, we refer the reader to the -/// documentation of the RecursiveSymmetryCutBase from which SoftDrop -/// inherits. Here, we mostly put the emphasis on things specific to -/// SoftDrop: -/// -/// - the cut applied recursively is -/// \f[ -/// z > z_{\rm cut} (\theta/R0)^\beta -/// \f] -/// with z the asymmetry measure and \f$\theta\f$ the geometrical -/// distance between the two subjets. R0 is set to 1 by default. -/// -/// - by default, we work in "grooming mode" i.e. if no substructure -/// is found, we return a jet made of a single parton.
Note that -/// this behaviour differs from the mMDT (and can be a source of -/// differences when running SoftDrop with beta=0.) -/// -class SoftDrop : public RecursiveSymmetryCutBase { -public: - /// Simplified constructor. This takes the value of the "beta" - /// parameter and the symmetry cut (applied by default on the - /// scalar_z variable, as for the mMDT). It also takes an optional - /// subtractor. - /// - /// If the (optional) pileup subtractor can be supplied, then see - /// also the documentation for the set_input_jet_is_subtracted() member - /// function. - /// - /// \param beta the value of the beta parameter - /// \param symmetry_cut the value of the cut on the symmetry measure - /// \param R0 the angular distance normalisation [1 by default] - SoftDrop(double beta, - double symmetry_cut, - double R0 = 1, - const FunctionOfPseudoJet * subtractor = 0) : - RecursiveSymmetryCutBase(scalar_z, // the default SymmetryMeasure - std::numeric_limits::infinity(), // default is no mass drop - larger_pt, // the default RecursionChoice - subtractor), - _beta(beta), _symmetry_cut(symmetry_cut), _R0sqr(R0*R0) { - // change the default: use grooming mode - set_grooming_mode(); - } - - /// Full constructor, which takes the following parameters: - /// - /// \param beta the value of the beta parameter - /// \param symmetry_cut the value of the cut on the symmetry measure - /// \param symmetry_measure the choice of measure to use to estimate the symmetry - /// \param R0 the angular distance normalisation [1 by default] - /// \param mu_cut the maximal allowed value of mass drop variable mu = m_heavy/m_parent - /// \param recursion_choice the strategy used to decide which subjet to recurse into - /// \param subtractor an optional pointer to a pileup subtractor (ignored if zero) - /// - /// The default values provided for this constructor are suited to - /// obtain the SoftDrop as discussed in arXiv:1402.2657: - /// - no mass drop is requested - /// - recursion follows the 
branch with the largest pt - /// The symmetry measure has to be specified (scalar_z is the recommended value) - /// - /// Notes: - /// - /// - by default, SoftDrop will recluster the jet with the - /// Cambridge/Aachen algorithm if it is not already the case. This - /// behaviour can be changed using the "set_reclustering" method - /// defined below - /// - SoftDrop(double beta, - double symmetry_cut, - SymmetryMeasure symmetry_measure, - double R0 = 1.0, - double mu_cut = std::numeric_limits::infinity(), - RecursionChoice recursion_choice = larger_pt, - const FunctionOfPseudoJet * subtractor = 0) : - RecursiveSymmetryCutBase(symmetry_measure, mu_cut, recursion_choice, subtractor), - _beta(beta), _symmetry_cut(symmetry_cut), _R0sqr(R0*R0) - { - // change the default: use grooming mode - set_grooming_mode(); - } - - /// default destructor - virtual ~SoftDrop(){} - - //---------------------------------------------------------------------- - // access to class info - double beta() const { return _beta; } - double symmetry_cut() const { return _symmetry_cut; } - double R0() const { return sqrt(_R0sqr); } - -protected: - - // Unlike MMDT, the SoftDrop symmetry_cut_fn depends on the subjet kinematics - // since the symmetry condition depends on the DeltaR between subjets. 
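The condition the class documentation quotes, z > z_cut (θ/R0)^β, can be written as a standalone check. The sketch below is a paraphrase of that formula with illustrative names, not the contrib implementation of `symmetry_cut_fn`.

```cpp
#include <cmath>

// Standalone sketch of the SoftDrop condition z > zcut * (dR/R0)^beta
// (a paraphrase of the formula in the class documentation, not the
// actual symmetry_cut_fn implementation).
bool soft_drop_keeps(double z, double dR, double zcut,
                     double beta, double R0 = 1.0) {
  return z > zcut * std::pow(dR / R0, beta);
}
```

For beta = 0 this reduces to the mMDT condition z > zcut, so the difference from mMDT at beta = 0 lies only in the grooming-mode behaviour noted above.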
- virtual double symmetry_cut_fn(const PseudoJet & p1, - const PseudoJet & p2, - void * optional_R0sqr_ptr = 0) const; - virtual std::string symmetry_cut_description() const; - - //private: - double _beta; ///< the power of the angular distance to be used - ///< in the symmetry condition - double _symmetry_cut; ///< the value of zcut (the prefactor in the asymmetry cut) - double _R0sqr; ///< normalisation of the angular distance - ///< (typically set to the jet radius, 1 by default) -}; - -} } // namespace contrib - - //FASTJET_END_NAMESPACE - -#endif // __FASTJET_CONTRIB_SOFTDROP_HH__ diff --git a/include/Rivet/Tools/fjcontrib/TauComponents.hh b/include/Rivet/Tools/fjcontrib/TauComponents.hh deleted file mode 100644 --- a/include/Rivet/Tools/fjcontrib/TauComponents.hh +++ /dev/null @@ -1,354 +0,0 @@ -// Nsubjettiness Package -// Questions/Comments? jthaler@jthaler.net -// -// Copyright (c) 2011-14 -// Jesse Thaler, Ken Van Tilburg, Christopher K. Vermilion, and TJ Wilkason -// -// $Id: MeasureFunction.hh 742 2014-08-23 15:43:29Z jthaler $ -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see . 
-//---------------------------------------------------------------------- - -#ifndef __FASTJET_CONTRIB_TAUCOMPONENTS_HH__ -#define __FASTJET_CONTRIB_TAUCOMPONENTS_HH__ - -#include "fastjet/PseudoJet.hh" -#include "fastjet/ClusterSequence.hh" -#include "fastjet/WrappedStructure.hh" - - -#include -#include -#include -#include - - -namespace Rivet { -namespace fjcontrib { - -namespace Nsubjettiness { using namespace fastjet; - -// Classes defined in this file. -class TauComponents; -class TauPartition; -class NjettinessExtras; - -///------------------------------------------------------------------------ -/// \enum TauMode -/// Specifies whether the tau value has a beam region or denominators -///------------------------------------------------------------------------ -enum TauMode { - UNDEFINED_SHAPE = -1, // Added so that constructor would default to some value - UNNORMALIZED_JET_SHAPE = 0, - NORMALIZED_JET_SHAPE = 1, - UNNORMALIZED_EVENT_SHAPE = 2, - NORMALIZED_EVENT_SHAPE = 3, -}; - -/////// -// -// TauComponents -// -/////// - -///------------------------------------------------------------------------ -/// \class TauComponents -/// \brief Output wrapper for supplemental N-(sub)jettiness information -/// -/// This class creates a wrapper for the various tau/subtau values calculated in Njettiness. This class allows Njettiness access to these variables -/// without ever having to do the calculation itself. It takes in subtau numerators and tau denominator from MeasureFunction -/// and outputs tau numerator, and normalized tau and subtau.
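The combination just described can be sketched in isolation: the numerator is the sum of the jet-region pieces plus the beam piece, and the normalized tau divides by the denominator when one exists. The function and its signature below are my own illustration, not the actual TauComponents internals.

```cpp
#include <numeric>
#include <vector>

// Hedged sketch of how the tau components combine (illustrative names,
// not the real TauComponents members): sum the jet-region numerators
// and the beam piece, then normalize if a denominator is available.
double combine_tau(const std::vector<double>& jet_pieces_numerator,
                   double beam_piece_numerator,
                   double denominator /* <= 0 taken as "no denominator" */) {
  double numerator = beam_piece_numerator +
      std::accumulate(jet_pieces_numerator.begin(),
                      jet_pieces_numerator.end(), 0.0);
  return (denominator > 0.0) ? numerator / denominator : numerator;
}
```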
-///------------------------------------------------------------------------ -class TauComponents { - -public: - - /// empty constructor necessary to initialize tau_components in Njettiness - /// later set correctly in Njettiness::getTau function - TauComponents() {} - - /// This constructor takes input vector and double and calculates all necessary tau components - TauComponents(TauMode tau_mode, - const std::vector & jet_pieces_numerator, - double beam_piece_numerator, - double denominator, - const std::vector & jets, - const std::vector & axes - ); - - /// Test for denominator - bool has_denominator() const; - /// Test for beam region - bool has_beam() const; - - /// Return tau value - double tau() const { return _tau; } - /// Return jet regions - const std::vector& jet_pieces() const { return _jet_pieces; } - /// Return beam region - double beam_piece() const { return _beam_piece; } - - /// Return jet regions (no denominator) - std::vector jet_pieces_numerator() const { return _jet_pieces_numerator; } - /// Return beam regions (no denominator) - double beam_piece_numerator() const { return _beam_piece_numerator; } - /// Return numerator - double numerator() const { return _numerator; } - /// Return denominator - double denominator() const { return _denominator; } - - /// Four-vector of total jet (sum of clustered regions) - PseudoJet total_jet() const { return _total_jet;} - /// Four-vector of jet regions - const std::vector& jets() const { return _jets;} - /// Four-vector of axes - const std::vector& axes() const { return _axes;} - - class StructureType; - -protected: - - /// Defines whether there is a beam or denominator - TauMode _tau_mode; - - std::vector _jet_pieces_numerator; ///< Constructor input (jet region numerator) - double _beam_piece_numerator; ///< Constructor input (beam region numerator) - double _denominator; ///< Constructor input (denominator) - - std::vector _jet_pieces; ///< Derived value (jet regions) - double _beam_piece; ///< Derived 
value (beam region) - double _numerator; ///< Derived value (total numerator) - double _tau; ///< Derived value (final value) - - PseudoJet _total_jet; ///< Total jet four-vector - std::vector _jets; ///< Jet four-vectors - std::vector _axes; ///< Axes four-vectors - -}; - -/////// -// -// TauPartition -// -/////// - -///------------------------------------------------------------------------ -/// \class TauPartition -/// \brief Output wrapper for N-(sub)jettiness partitioning information -/// -/// Class for storing partitioning information. -///------------------------------------------------------------------------ -class TauPartition { - -public: - /// empty constructor - TauPartition() {} - - /// Make partition of size to hold n_jet partitions - TauPartition(int n_jet) { - _jets_list.resize(n_jet); - _jets_partition.resize(n_jet); - } - - /// add a particle to the jet - void push_back_jet(int jet_num, const PseudoJet& part_to_add, int part_index) { - _jets_list[jet_num].push_back(part_index); - _jets_partition[jet_num].push_back(part_to_add); - } - - /// add a particle to the beam - void push_back_beam(const PseudoJet& part_to_add, int part_index) { - _beam_list.push_back(part_index); - _beam_partition.push_back(part_to_add); - } - - /// return jet regions - PseudoJet jet(int jet_num) const { return join(_jets_partition.at(jet_num)); } - /// return beam region - PseudoJet beam() const { return join(_beam_partition);} - - /// return jets - std::vector jets() const { - std::vector jets; - for (unsigned int i = 0; i < _jets_partition.size(); i++) { - jets.push_back(jet(i)); - } - return jets; - } - - /// jets in list form - const std::list & jet_list(int jet_num) const { return _jets_list.at(jet_num);} - /// beam in list form - const std::list & beam_list() const { return _beam_list;} - /// all jets in list form - const std::vector > & jets_list() const { return _jets_list;} - -private: - - std::vector > _jets_list; ///< jets in list form - std::list _beam_list;
///< beam in list form - - std::vector > _jets_partition; ///< Partition in jet regions - std::vector _beam_partition; ///< Partition in beam region - -}; - - -/////// -// -// NjettinessExtras -// -/////// - -///------------------------------------------------------------------------ -/// \class NjettinessExtras -/// \brief ClusterSequence add on for N-jettiness information -/// -/// This class contains the same information as TauComponents, but adds additional ways of linking up -/// the jets found in the ClusterSequence::Extras class. -/// This is done in order to help improve the interface for the main NjettinessPlugin class. -///------------------------------------------------------------------------ -class NjettinessExtras : public ClusterSequence::Extras, public TauComponents { - -public: - /// Constructor - NjettinessExtras(TauComponents tau_components, - std::vector cluster_hist_indices) - : TauComponents(tau_components), _cluster_hist_indices(cluster_hist_indices) {} - - - - /// Ask for tau of the whole event, but by querying a jet - double tau(const fastjet::PseudoJet& /*jet*/) const {return _tau;} - - /// Ask for tau of an individual jet - double tau_piece(const fastjet::PseudoJet& jet) const { - if (labelOf(jet) == -1) return std::numeric_limits::quiet_NaN(); // nonsense - return _jet_pieces[labelOf(jet)]; - } - - /// Find axis associated with jet - fastjet::PseudoJet axis(const fastjet::PseudoJet& jet) const { - return _axes[labelOf(jet)]; - } - - /// Check if extra information is available. 
- bool has_njettiness_extras(const fastjet::PseudoJet& jet) const { - return (labelOf(jet) >= 0); - } - -private: - - /// Store cluster history indices to link up with ClusterSequence - std::vector _cluster_hist_indices; - - /// Figure out which jet things belonged to - int labelOf(const fastjet::PseudoJet& jet) const { - int thisJet = -1; - for (unsigned int i = 0; i < _jets.size(); i++) { - if (_cluster_hist_indices[i] == jet.cluster_hist_index()) { - thisJet = i; - break; - } - } - return thisJet; - } - -public: - - // These are old methods for gaining this information - // The recommended interface is given in TauComponents - - /// Tau value - double totalTau() const {return _tau;} - /// Jet regions - std::vector subTaus() const {return _jet_pieces;} - - /// Tau value - double totalTau(const fastjet::PseudoJet& /*jet*/) const { - return _tau; - } - - /// Jet region - double subTau(const fastjet::PseudoJet& jet) const { - if (labelOf(jet) == -1) return std::numeric_limits::quiet_NaN(); // nonsense - return _jet_pieces[labelOf(jet)]; - } - - /// beam region - double beamTau() const { - return _beam_piece; - } - -}; - - -/// Helper function to find out what njettiness_extras are (from jet) -inline const NjettinessExtras * njettiness_extras(const fastjet::PseudoJet& jet) { - const ClusterSequence * myCS = jet.associated_cluster_sequence(); - if (myCS == NULL) return NULL; - const NjettinessExtras* extras = dynamic_cast(myCS->extras()); - return extras; -} - -/// Helper function to find out what njettiness_extras are (from ClusterSequence) -inline const NjettinessExtras * njettiness_extras(const fastjet::ClusterSequence& myCS) { - const NjettinessExtras* extras = dynamic_cast(myCS.extras()); - return extras; -} - -/////// -// -// TauComponents::StructureType -// -/////// - - -///------------------------------------------------------------------------ -/// \class TauComponents::StructureType -/// \brief Wrapped structure for jet-based N-(sub)jettiness information 
-/// -/// Small wrapped structure to store tau information -/// TODO: Can these be auto-joined? -///------------------------------------------------------------------------ -class TauComponents::StructureType : public WrappedStructure { - -public: - /// Constructor - StructureType(const PseudoJet& j) : - WrappedStructure(j.structure_shared_ptr()) - {} - - /// tau associated with jet - double tau_piece() const { return _tau_piece; } - - /// alternative call, though might be confusing - double tau() const { return _tau_piece; } - -private: - friend class TauComponents; - double _tau_piece; ///< tau value associated with jet -}; - - - - -} //namespace Nsubjettiness - -} -} - -#endif // __FASTJET_CONTRIB_TAUCOMPONENTS_HH__ diff --git a/include/Rivet/Tools/fjcontrib/XConePlugin.hh b/include/Rivet/Tools/fjcontrib/XConePlugin.hh deleted file mode 100644 --- a/include/Rivet/Tools/fjcontrib/XConePlugin.hh +++ /dev/null @@ -1,170 +0,0 @@ -// Nsubjettiness Package -// Questions/Comments? jthaler@jthaler.net -// -// Copyright (c) 2011-14 -// Jesse Thaler, Ken Van Tilburg, Christopher K. Vermilion, and TJ Wilkason -// -// $Id: XConePlugin.hh 748 2014-10-02 06:13:28Z tjwilk $ -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see . 
-//---------------------------------------------------------------------- - -#ifndef __FASTJET_CONTRIB_XCONEPLUGIN_HH__ -#define __FASTJET_CONTRIB_XCONEPLUGIN_HH__ - -#include - -#include "NjettinessPlugin.hh" - -#include "fastjet/ClusterSequence.hh" -#include "fastjet/JetDefinition.hh" - -#include -#include - -namespace Rivet { -namespace fjcontrib { - -namespace Nsubjettiness { using namespace fastjet; - -///------------------------------------------------------------------------ -/// \class XConePlugin -/// \brief Implements the XCone Jet Algorithm -/** - * An exclusive jet finder that identifies N jets. First N axes are found, then - * particles are assigned to the nearest (approximate DeltaR) axis and for each axis the - * corresponding jet is simply the four-momentum sum of these particles. - * - * The XConePlugin is based on NjettinessPlugin, but with sensible default - * values for the AxesDefinition and MeasureDefinition. There are three arguments - * - * int N: number of exclusive jets to be found - * double R0: approximate jet radius - * double beta: determines style of jet finding with the recommended values being: - * beta = 2: standard "mean" jets where jet momentum/axis align approximately. - * beta = 1: recoil-free "median" variant where jet axis points at hardest cluster. - * - * The AxesDefinition is OnePass_GenET_GenKT_Axes, which uses a generalized kT - * clustering algorithm matched to the beta value. - * - * The MeasureDefinition is the XConeMeasure, which is based on the - * ConicalGeometric measure. - */ -class XConePlugin : public NjettinessPlugin { -public: - - /// Constructor with N, R0, and beta as the options. beta = 2.0 is the default - /// All this does is use the NjettinessPlugin with OnePass_GenET_GenKT_Axes and the XConeMeasure. - /// For more advanced usage, call NjettinessPlugin directly - /// Note that the order of the R0 and beta values is reversed from the XConeMeasure to - /// standard usage for Plugins.
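The recommended-parameter mapping the plugin derives from beta (delta = 1/(beta-1) for beta > 1, otherwise effectively infinite for "winner-take-all" recombination, and the generalized-kT power p = 1/beta) can be sketched standalone; the helper below mirrors the `calc_delta`/`calc_power` logic in this header but is my own paraphrase, not the plugin's API.

```cpp
#include <limits>
#include <utility>

// Sketch of the recommended axes parameters derived from beta:
// delta = 1/(beta-1) for beta > 1, otherwise effectively infinite
// ("winner-take-all"), and the generalized-kT power p = 1/beta.
std::pair<double, double> xcone_axes_params(double beta) {
  double delta = (beta > 1.0) ? 1.0 / (beta - 1.0)
                              : std::numeric_limits<double>::max();
  return {delta, 1.0 / beta};
}
```

So the recommended beta = 2 gives (delta, p) = (1, 0.5), while the recoil-free beta = 1 gives a winner-take-all delta with p = 1.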
- XConePlugin(int N, double R0, double beta = 2.0) - : NjettinessPlugin(N, - OnePass_GenET_GenKT_Axes(calc_delta(beta), calc_power(beta), R0), // use recommended axes method only - XConeMeasure(beta, R0) // use recommended XCone measure. - ), - _N(N), _R0(R0), _beta(beta) - {} - - // The things that are required by base class. - virtual std::string description () const; - virtual double R() const {return _R0;} - - // run_clustering is done by NjettinessPlugin - - virtual ~XConePlugin() {} - -private: - - /// Static call used within the constructor to set the recommended delta value - static double calc_delta(double beta) { - double delta; - if (beta > 1) delta = 1/(beta - 1); - else delta = std::numeric_limits::max(); // use winner take all - return delta; - } - - /// Static call used within the constructor to set the recommended p value - static double calc_power(double beta) { - return (double) 1.0/beta; - } - - double _N; ///< Number of desired jets - double _R0; ///< Jet radius - double _beta; ///< Angular exponent (beta = 2.0 is default, beta = 1.0 is recoil-free) - -public: - -}; - - -/// \class PseudoXConePlugin -/// \brief Implements a faster, non-optimal version of the XCone Jet Algorithm -/// -/// A "poor man's" version of XCone with no minimization step -/// Right now, used just for testing purposes by the developers -class PseudoXConePlugin : public NjettinessPlugin { -public: - - /// Constructor with N, R0, and beta as the options. beta = 2.0 is the default - /// All this does is use the NjettinessPlugin with GenET_GenKT_Axes and the XConeMeasure. - PseudoXConePlugin(int N, double R0, double beta = 2.0) - : NjettinessPlugin(N, - GenET_GenKT_Axes(calc_delta(beta), calc_power(beta), R0), // poor man's axes - XConeMeasure(beta, R0) // use recommended XCone measure. - ), - _N(N), _R0(R0), _beta(beta) - {} - - // The things that are required by base class.
- virtual std::string description () const; - virtual double R() const {return _R0;} - - // run_clustering is done by NjettinessPlugin - - virtual ~PseudoXConePlugin() {} - -private: - - /// Static call used within the constructor to set the recommended delta value - static double calc_delta(double beta) { - double delta; - if (beta > 1) delta = 1/(beta - 1); - else delta = std::numeric_limits::max(); // use winner take all - return delta; - } - - /// Static call used within the constructor to set the recommended p value - static double calc_power(double beta) { - return (double) 1.0/beta; - } - - double _N; ///< Number of desired jets - double _R0; ///< Jet radius - double _beta; ///< Angular exponent (beta = 2.0 is default, beta = 1.0 is recoil-free) - -public: - -}; - - - -} // namespace Nsubjettiness - -} -} - -#endif // __FASTJET_CONTRIB_XConePlugin_HH__ diff --git a/pyext/rivet/core.pyx b/pyext/rivet/core.pyx --- a/pyext/rivet/core.pyx +++ b/pyext/rivet/core.pyx @@ -1,250 +1,253 @@ # distutils: language = c++ cimport rivet as c from cython.operator cimport dereference as deref # Need to be careful with memory management -- perhaps use the base object that # we used in YODA?
cdef extern from "" namespace "std" nogil: cdef c.unique_ptr[c.Analysis] move(c.unique_ptr[c.Analysis]) cdef class AnalysisHandler: cdef c.AnalysisHandler *_ptr def __cinit__(self): self._ptr = new c.AnalysisHandler() def __del__(self): del self._ptr def setIgnoreBeams(self, ignore=True): self._ptr.setIgnoreBeams(ignore) + def skipMultiWeights(self, ignore=False): + self._ptr.skipMultiWeights(ignore) + def addAnalysis(self, name): self._ptr.addAnalysis(name.encode('utf-8')) return self def analysisNames(self): anames = self._ptr.analysisNames() return [ a.decode('utf-8') for a in anames ] # def analysis(self, aname): # cdef c.Analysis* ptr = self._ptr.analysis(aname) # cdef Analysis pyobj = Analysis.__new__(Analysis) # if not ptr: # return None # pyobj._ptr = ptr # return pyobj def readData(self, name): self._ptr.readData(name.encode('utf-8')) def writeData(self, name): self._ptr.writeData(name.encode('utf-8')) def nominalCrossSection(self): return self._ptr.nominalCrossSection() def finalize(self): self._ptr.finalize() def dump(self, file, period): self._ptr.dump(file, period) def mergeYodas(self, filelist, delopts, equiv): self._ptr.mergeYodas(filelist, delopts, equiv) cdef class Run: cdef c.Run *_ptr def __cinit__(self, AnalysisHandler h): self._ptr = new c.Run(h._ptr[0]) def __del__(self): del self._ptr def setCrossSection(self, double x): self._ptr.setCrossSection(x) return self def setListAnalyses(self, choice): self._ptr.setListAnalyses(choice) return self def init(self, name, weight=1.0): return self._ptr.init(name.encode('utf-8'), weight) def openFile(self, name, weight=1.0): return self._ptr.openFile(name.encode('utf-8'), weight) def readEvent(self): return self._ptr.readEvent() # def skipEvent(self): # return self._ptr.skipEvent() def processEvent(self): return self._ptr.processEvent() def finalize(self): return self._ptr.finalize() cdef class Analysis: cdef c.unique_ptr[c.Analysis] _ptr def __init__(self): raise RuntimeError('This class cannot be 
instantiated') def requiredBeams(self): return deref(self._ptr).requiredBeams() def requiredEnergies(self): return deref(self._ptr).requiredEnergies() def keywords(self): kws = deref(self._ptr).keywords() return [ k.decode('utf-8') for k in kws ] def validation(self): vld = deref(self._ptr).validation() return [ k.decode('utf-8') for k in vld ] def authors(self): auths = deref(self._ptr).authors() return [ a.decode('utf-8') for a in auths ] def bibKey(self): return deref(self._ptr).bibKey().decode('utf-8') def name(self): return deref(self._ptr).name().decode('utf-8') def bibTeX(self): return deref(self._ptr).bibTeX().decode('utf-8') def references(self): refs = deref(self._ptr).references() return [ r.decode('utf-8') for r in refs ] def collider(self): return deref(self._ptr).collider().decode('utf-8') def description(self): return deref(self._ptr).description().decode('utf-8') def experiment(self): return deref(self._ptr).experiment().decode('utf-8') def inspireId(self): return deref(self._ptr).inspireId().decode('utf-8') def spiresId(self): return deref(self._ptr).spiresId().decode('utf-8') def runInfo(self): return deref(self._ptr).runInfo().decode('utf-8') def status(self): return deref(self._ptr).status().decode('utf-8') def summary(self): return deref(self._ptr).summary().decode('utf-8') def year(self): return deref(self._ptr).year().decode('utf-8') def luminosityfb(self): return deref(self._ptr).luminosityfb().decode('utf-8') #cdef object LEVELS = dict(TRACE = 0, DEBUG = 10, INFO = 20, WARN = 30, WARNING = 30, ERROR = 40, CRITICAL = 50, ALWAYS = 50) cdef class AnalysisLoader: @staticmethod def analysisNames(): names = c.AnalysisLoader_analysisNames() return [ n.decode('utf-8') for n in names ] @staticmethod def getAnalysis(name): name = name.encode('utf-8') cdef c.unique_ptr[c.Analysis] ptr = c.AnalysisLoader_getAnalysis(name) cdef Analysis pyobj = Analysis.__new__(Analysis) if not ptr: return None pyobj._ptr = move(ptr) # Create python object return pyobj 
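All of these wrappers follow the same string-marshalling convention: Python str arguments are encoded to UTF-8 bytes at the C++ boundary, and returned std::strings are decoded back. A tiny illustrative sketch of that round-trip (the helper names are hypothetical, not part of the Rivet API):

```python
def to_cxx(s):
    # Python str -> bytes, as done for every name passed into the C++ layer
    return s.encode('utf-8')

def from_cxx(b):
    # bytes coming back from C++ -> Python str, as done for every accessor
    return b.decode('utf-8')

# Lists of C++ strings map back via a comprehension, mirroring e.g.
# analysisNames(): [ a.decode('utf-8') for a in anames ]
names = [from_cxx(n) for n in [b'ALEPH_1996_S3196992', b'CDF_2015_I1388868']]
```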
## Convenience versions in main rivet namespace def analysisNames(): return AnalysisLoader.analysisNames() def getAnalysis(name): return AnalysisLoader.getAnalysis(name) ## Path functions def getAnalysisLibPaths(): ps = c.getAnalysisLibPaths() return [ p.decode('utf-8') for p in ps ] def setAnalysisLibPaths(xs): bs = [ x.encode('utf-8') for x in xs ] c.setAnalysisLibPaths(bs) def addAnalysisLibPath(path): c.addAnalysisLibPath(path.encode('utf-8')) def setAnalysisDataPaths(xs): bs = [ x.encode('utf-8') for x in xs ] c.setAnalysisDataPaths(bs) def addAnalysisDataPath(path): c.addAnalysisDataPath(path.encode('utf-8')) def getAnalysisDataPaths(): ps = c.getAnalysisDataPaths() return [ p.decode('utf-8') for p in ps ] def findAnalysisDataFile(q): f = c.findAnalysisDataFile(q.encode('utf-8')) return f.decode('utf-8') def getAnalysisRefPaths(): ps = c.getAnalysisRefPaths() return [ p.decode('utf-8') for p in ps ] def findAnalysisRefFile(q): f = c.findAnalysisRefFile(q.encode('utf-8')) return f.decode('utf-8') def getAnalysisInfoPaths(): ps = c.getAnalysisInfoPaths() return [ p.decode('utf-8') for p in ps ] def findAnalysisInfoFile(q): f = c.findAnalysisInfoFile(q.encode('utf-8')) return f.decode('utf-8') def getAnalysisPlotPaths(): ps = c.getAnalysisPlotPaths() return [ p.decode('utf-8') for p in ps ] def findAnalysisPlotFile(q): f = c.findAnalysisPlotFile(q.encode('utf-8')) return f.decode('utf-8') def version(): return c.version().decode('utf-8') def setLogLevel(name, level): c.setLogLevel(name.encode('utf-8'), level) diff --git a/pyext/rivet/rivet.pxd b/pyext/rivet/rivet.pxd --- a/pyext/rivet/rivet.pxd +++ b/pyext/rivet/rivet.pxd @@ -1,95 +1,96 @@ from libcpp.map cimport map from libcpp.pair cimport pair from libcpp.vector cimport vector from libcpp cimport bool from libcpp.string cimport string from libcpp.memory cimport unique_ptr ctypedef int PdgId ctypedef pair[PdgId,PdgId] PdgIdPair cdef extern from "Rivet/AnalysisHandler.hh" namespace "Rivet": cdef cppclass 
AnalysisHandler: void setIgnoreBeams(bool) + void skipMultiWeights(bool) AnalysisHandler& addAnalysis(string) vector[string] analysisNames() const # Analysis* analysis(string) void writeData(string&) void readData(string&) double nominalCrossSection() void finalize() void dump(string, int) void mergeYodas(vector[string], vector[string], bool) cdef extern from "Rivet/Run.hh" namespace "Rivet": cdef cppclass Run: Run(AnalysisHandler) Run& setCrossSection(double) # For chaining? Run& setListAnalyses(bool) bool init(string, double) except + # $2=1.0 bool openFile(string, double) except + # $2=1.0 bool readEvent() except + bool skipEvent() except + bool processEvent() except + bool finalize() except + cdef extern from "Rivet/Analysis.hh" namespace "Rivet": cdef cppclass Analysis: vector[PdgIdPair]& requiredBeams() vector[pair[double, double]] requiredEnergies() vector[string] authors() vector[string] references() vector[string] keywords() vector[string] validation() string name() string bibTeX() string bibKey() string collider() string description() string experiment() string inspireId() string spiresId() string runInfo() string status() string summary() string year() string luminosityfb() # Might need to translate the following errors, although I believe 'what' is now # preserved. But often, we need the exception class name. 
#Error #RangeError #LogicError #PidError #InfoError #WeightError #UserError cdef extern from "Rivet/AnalysisLoader.hh": vector[string] AnalysisLoader_analysisNames "Rivet::AnalysisLoader::analysisNames" () unique_ptr[Analysis] AnalysisLoader_getAnalysis "Rivet::AnalysisLoader::getAnalysis" (string) cdef extern from "Rivet/Tools/RivetPaths.hh" namespace "Rivet": vector[string] getAnalysisLibPaths() void setAnalysisLibPaths(vector[string]) void addAnalysisLibPath(string) vector[string] getAnalysisDataPaths() void setAnalysisDataPaths(vector[string]) void addAnalysisDataPath(string) string findAnalysisDataFile(string) vector[string] getAnalysisRefPaths() string findAnalysisRefFile(string) vector[string] getAnalysisInfoPaths() string findAnalysisInfoFile(string) vector[string] getAnalysisPlotPaths() string findAnalysisPlotFile(string) cdef extern from "Rivet/Rivet.hh" namespace "Rivet": string version() cdef extern from "Rivet/Tools/Logging.hh": void setLogLevel "Rivet::Log::setLevel" (string, int) diff --git a/src/Core/Analysis.cc b/src/Core/Analysis.cc --- a/src/Core/Analysis.cc +++ b/src/Core/Analysis.cc @@ -1,1073 +1,921 @@ // -*- C++ -*- #include "Rivet/Config/RivetCommon.hh" #include "Rivet/Analysis.hh" #include "Rivet/AnalysisHandler.hh" #include "Rivet/AnalysisInfo.hh" #include "Rivet/Tools/BeamConstraint.hh" #include "Rivet/Projections/ImpactParameterProjection.hh" #include "Rivet/Projections/GeneratedPercentileProjection.hh" #include "Rivet/Projections/UserCentEstimate.hh" #include "Rivet/Projections/CentralityProjection.hh" namespace Rivet { Analysis::Analysis(const string& name) : _analysishandler(nullptr) { ProjectionApplier::_allowProjReg = false; _defaultname = name; unique_ptr<AnalysisInfo> ai = AnalysisInfo::make(name); assert(ai); _info = move(ai); assert(_info); } double Analysis::sqrtS() const { return handler().sqrtS(); } const ParticlePair& Analysis::beams() const { return handler().beams(); } const PdgIdPair Analysis::beamIds() const { return
handler().beamIds(); } const string Analysis::histoDir() const { /// @todo Cache in a member variable string _histoDir; if (_histoDir.empty()) { _histoDir = "/" + name(); if (handler().runName().length() > 0) { _histoDir = "/" + handler().runName() + _histoDir; } replace_all(_histoDir, "//", "/"); //< iterates until none } return _histoDir; } const string Analysis::histoPath(const string& hname) const { const string path = histoDir() + "/" + hname; return path; } const string Analysis::histoPath(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) const { return histoDir() + "/" + mkAxisCode(datasetId, xAxisId, yAxisId); } const string Analysis::mkAxisCode(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) const { std::stringstream axisCode; axisCode << "d"; if (datasetId < 10) axisCode << 0; axisCode << datasetId; axisCode << "-x"; if (xAxisId < 10) axisCode << 0; axisCode << xAxisId; axisCode << "-y"; if (yAxisId < 10) axisCode << 0; axisCode << yAxisId; return axisCode.str(); } Log& Analysis::getLog() const { string logname = "Rivet.Analysis." 
+ name(); return Log::getLog(logname); } /////////////////////////////////////////// size_t Analysis::numEvents() const { return handler().numEvents(); } double Analysis::sumW() const { return handler().sumW(); } double Analysis::sumW2() const { return handler().sumW2(); } /////////////////////////////////////////// bool Analysis::isCompatible(const ParticlePair& beams) const { return isCompatible(beams.first.pid(), beams.second.pid(), beams.first.energy(), beams.second.energy()); } bool Analysis::isCompatible(PdgId beam1, PdgId beam2, double e1, double e2) const { PdgIdPair beams(beam1, beam2); pair<double,double> energies(e1, e2); return isCompatible(beams, energies); } // bool Analysis::beamIDsCompatible(const PdgIdPair& beams) const { // bool beamIdsOk = false; // for (const PdgIdPair& bp : requiredBeams()) { // if (compatible(beams, bp)) { // beamIdsOk = true; // break; // } // } // return beamIdsOk; // } // /// Check that the energies are compatible (within 1% or 1 GeV, whichever is larger, for a bit of UI forgiveness) // bool Analysis::beamEnergiesCompatible(const pair<double,double>& energies) const { // /// @todo Use some sort of standard ordering to improve comparisons, esp. when the two beams are different particles // bool beamEnergiesOk = requiredEnergies().size() > 0 ?
false : true; // typedef pair<double,double> DoublePair; // for (const DoublePair& ep : requiredEnergies()) { // if ((fuzzyEquals(ep.first, energies.first, 0.01) && fuzzyEquals(ep.second, energies.second, 0.01)) || // (fuzzyEquals(ep.first, energies.second, 0.01) && fuzzyEquals(ep.second, energies.first, 0.01)) || // (abs(ep.first - energies.first) < 1*GeV && abs(ep.second - energies.second) < 1*GeV) || // (abs(ep.first - energies.second) < 1*GeV && abs(ep.second - energies.first) < 1*GeV)) { // beamEnergiesOk = true; // break; // } // } // return beamEnergiesOk; // } // bool Analysis::beamsCompatible(const PdgIdPair& beams, const pair<double,double>& energies) const { bool Analysis::isCompatible(const PdgIdPair& beams, const pair<double,double>& energies) const { // First check the beam IDs bool beamIdsOk = false; for (const PdgIdPair& bp : requiredBeams()) { if (compatible(beams, bp)) { beamIdsOk = true; break; } } if (!beamIdsOk) return false; // Next check that the energies are compatible (within 1% or 1 GeV, whichever is larger, for a bit of UI forgiveness) /// @todo Use some sort of standard ordering to improve comparisons, esp. when the two beams are different particles bool beamEnergiesOk = requiredEnergies().size() > 0 ?
false : true; typedef pair<double,double> DoublePair; for (const DoublePair& ep : requiredEnergies()) { if ((fuzzyEquals(ep.first, energies.first, 0.01) && fuzzyEquals(ep.second, energies.second, 0.01)) || (fuzzyEquals(ep.first, energies.second, 0.01) && fuzzyEquals(ep.second, energies.first, 0.01)) || (abs(ep.first - energies.first) < 1*GeV && abs(ep.second - energies.second) < 1*GeV) || (abs(ep.first - energies.second) < 1*GeV && abs(ep.second - energies.first) < 1*GeV)) { beamEnergiesOk = true; break; } } return beamEnergiesOk; } /////////////////////////////////////////// double Analysis::crossSection() const { const YODA::Scatter1D::Points& ps = handler().crossSection()->points(); if (ps.size() != 1) { string errMsg = "cross section missing for analysis " + name(); throw Error(errMsg); } return ps[0].x(); } double Analysis::crossSectionPerEvent() const { return crossSection()/sumW(); } //////////////////////////////////////////////////////////// // Histogramming void Analysis::_cacheRefData() const { if (_refdata.empty()) { MSG_TRACE("Getting refdata cache for paper " << name()); _refdata = getRefData(getRefDataName()); } } // vector Analysis::getAllData(bool includeorphans) const{ // return handler().getData(includeorphans, false, false); // } CounterPtr & Analysis::book(CounterPtr & ctr, - const string& cname, - const string& title) { + const string& cname) { // const string path = histoPath(cname); // ctr = CounterPtr(handler().weightNames(), Counter(path, title)); // ctr = addAnalysisObject(ctr); // return ctr; - return ctr = registerAO(Counter(histoPath(cname), title)); + return ctr = registerAO( Counter(histoPath(cname)) ); } - CounterPtr & Analysis::book(CounterPtr & ctr, unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId, - const string& title) { + CounterPtr & Analysis::book(CounterPtr & ctr, unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) { const string axisCode = mkAxisCode(datasetId, xAxisId, yAxisId); - return book(ctr,
axisCode, title); + return book(ctr, axisCode); } - Histo1DPtr & Analysis::book(Histo1DPtr & histo, const string& hname, - size_t nbins, double lower, double upper, - const string& title, - const string& xtitle, - const string& ytitle) { + Histo1DPtr & Analysis::book(Histo1DPtr & histo, const string& hname, size_t nbins, double lower, double upper) { const string path = histoPath(hname); - Histo1D hist = Histo1D(nbins, lower, upper, path, title); - hist.setAnnotation("XLabel", xtitle); - hist.setAnnotation("YLabel", ytitle); + Histo1D hist = Histo1D(nbins, lower, upper, path); // histo = Histo1DPtr(handler().weightNames(), hist); // histo = addAnalysisObject(histo); // return histo; return histo = registerAO(hist); } - Histo1DPtr & Analysis::book(Histo1DPtr & histo, const string& hname, - const initializer_list<double>& binedges, - const string& title, - const string& xtitle, - const string& ytitle) { - return book(histo, hname, vector<double>{binedges}, title, xtitle, ytitle); + Histo1DPtr & Analysis::book(Histo1DPtr & histo, const string& hname, const initializer_list<double>& binedges) { + return book(histo, hname, vector<double>{binedges}); } - Histo1DPtr & Analysis::book(Histo1DPtr & histo, const string& hname, - const vector<double>& binedges, - const string& title, - const string& xtitle, - const string& ytitle) { + Histo1DPtr & Analysis::book(Histo1DPtr & histo, const string& hname, const vector<double>& binedges) { const string path = histoPath(hname); - Histo1D hist = Histo1D(binedges, path, title); - hist.setAnnotation("XLabel", xtitle); - hist.setAnnotation("YLabel", ytitle); + Histo1D hist = Histo1D(binedges, path); // histo = Histo1DPtr(handler().weightNames(), hist); // histo = addAnalysisObject(histo); // return histo; return histo = registerAO(hist); } - Histo1DPtr & Analysis::book(Histo1DPtr & histo, const string& hname, - const string& title, - const string& xtitle, - const string& ytitle) { + Histo1DPtr & Analysis::book(Histo1DPtr & histo, const string& hname) { const Scatter2D& refdata =
refData(hname); - return book(histo, hname, refdata, title, xtitle, ytitle); + return book(histo, hname, refdata); } - Histo1DPtr & Analysis::book(Histo1DPtr & histo, unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId, - const string& title, - const string& xtitle, - const string& ytitle) { + Histo1DPtr & Analysis::book(Histo1DPtr & histo, unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) { const string axisCode = mkAxisCode(datasetId, xAxisId, yAxisId); - return book(histo, axisCode, title, xtitle, ytitle); + return book(histo, axisCode); } - Histo1DPtr & Analysis::book(Histo1DPtr& histo, - const string& hname, - const Scatter2D& refscatter, - const string& title, - const string& xtitle, - const string& ytitle) { + Histo1DPtr & Analysis::book(Histo1DPtr& histo, const string& hname, const Scatter2D& refscatter) { const string path = histoPath(hname); Histo1D hist = Histo1D(refscatter, path); - hist.setTitle(title); - hist.setAnnotation("XLabel", xtitle); - hist.setAnnotation("YLabel", ytitle); - if (hist.hasAnnotation("IsRef")) hist.rmAnnotation("IsRef"); + for (const string& a : hist.annotations()) { + if (a != "Path") hist.rmAnnotation(a); + } // histo = Histo1DPtr(handler().weightNames(), hist); // histo = addAnalysisObject(histo); // return histo; return histo = registerAO(hist); } ///////////////// Histo2DPtr & Analysis::book(Histo2DPtr & h2d,const string& hname, size_t nxbins, double xlower, double xupper, - size_t nybins, double ylower, double yupper, - const string& title, - const string& xtitle, - const string& ytitle, - const string& ztitle) - { + size_t nybins, double ylower, double yupper) { const string path = histoPath(hname); - Histo2D hist(nxbins, xlower, xupper, nybins, ylower, yupper, path, title); - hist.setAnnotation("XLabel", xtitle); - hist.setAnnotation("YLabel", ytitle); - hist.setAnnotation("ZLabel", ztitle); + Histo2D hist(nxbins, xlower, xupper, nybins, ylower, yupper, path); // h2d = 
Histo2DPtr(handler().weightNames(), hist); // h2d = addAnalysisObject(h2d); // return h2d; return h2d = registerAO(hist); } Histo2DPtr & Analysis::book(Histo2DPtr & h2d,const string& hname, const initializer_list<double>& xbinedges, - const initializer_list<double>& ybinedges, - const string& title, - const string& xtitle, - const string& ytitle, - const string& ztitle) - { - return book(h2d, hname, vector<double>{xbinedges}, vector<double>{ybinedges}, title, xtitle, ytitle, ztitle); + const initializer_list<double>& ybinedges) { + return book(h2d, hname, vector<double>{xbinedges}, vector<double>{ybinedges}); } Histo2DPtr & Analysis::book(Histo2DPtr & h2d,const string& hname, const vector<double>& xbinedges, - const vector<double>& ybinedges, - const string& title, - const string& xtitle, - const string& ytitle, - const string& ztitle) - { + const vector<double>& ybinedges) { const string path = histoPath(hname); - Histo2D hist(xbinedges, ybinedges, path, title); - hist.setAnnotation("XLabel", xtitle); - hist.setAnnotation("YLabel", ytitle); - hist.setAnnotation("ZLabel", ztitle); + Histo2D hist(xbinedges, ybinedges, path); // h2d = Histo2DPtr(handler().weightNames(), hist); // h2d = addAnalysisObject(h2d); // return h2d; return h2d = registerAO(hist); } - Histo2DPtr & Analysis::book(Histo2DPtr & histo, const string& hname, - const Scatter3D& refscatter, - const string& title, - const string& xtitle, - const string& ytitle, - const string& ztitle) { + Histo2DPtr & Analysis::book(Histo2DPtr & histo, const string& hname, const Scatter3D& refscatter) { const string path = histoPath(hname); Histo2D hist = Histo2D(refscatter, path); - hist.setTitle(title); - hist.setAnnotation("XLabel", xtitle); - hist.setAnnotation("YLabel", ytitle); - hist.setAnnotation("ZLabel", ztitle); - if (hist.hasAnnotation("IsRef")) hist.rmAnnotation("IsRef"); + for (const string& a : hist.annotations()) { + if (a != "Path") hist.rmAnnotation(a); + } // histo = Histo2DPtr(handler().weightNames(), hist); // histo = addAnalysisObject(histo); // return histo; return histo =
registerAO(hist); } - Histo2DPtr & Analysis::book(Histo2DPtr & histo, const string& hname, - const string& title, - const string& xtitle, - const string& ytitle, - const string& ztitle) { + Histo2DPtr & Analysis::book(Histo2DPtr & histo, const string& hname) { const Scatter3D& refdata = refData(hname); - return book(histo, hname, refdata, title, xtitle, ytitle, ztitle); + return book(histo, hname, refdata); } - Histo2DPtr & Analysis::book(Histo2DPtr & histo, unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId, - const string& title, - const string& xtitle, - const string& ytitle, - const string& ztitle) { + Histo2DPtr & Analysis::book(Histo2DPtr & histo, unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) { const string axisCode = mkAxisCode(datasetId, xAxisId, yAxisId); - return book(histo, axisCode, title, xtitle, ytitle, ztitle); + return book(histo, axisCode); } ///////////////// - Profile1DPtr & Analysis::book(Profile1DPtr & p1d,const string& hname, - size_t nbins, double lower, double upper, - const string& title, - const string& xtitle, - const string& ytitle) { + Profile1DPtr & Analysis::book(Profile1DPtr & p1d,const string& hname, size_t nbins, double lower, double upper) { const string path = histoPath(hname); - Profile1D prof(nbins, lower, upper, path, title); - prof.setAnnotation("XLabel", xtitle); - prof.setAnnotation("YLabel", ytitle); + Profile1D prof(nbins, lower, upper, path); // p1d = Profile1DPtr(handler().weightNames(), prof); // p1d = addAnalysisObject(p1d); // return p1d; return p1d = registerAO(prof); } - Profile1DPtr & Analysis::book(Profile1DPtr & p1d,const string& hname, - const initializer_list<double>& binedges, - const string& title, - const string& xtitle, - const string& ytitle) { - return book(p1d, hname, vector<double>{binedges}, title, xtitle, ytitle); + Profile1DPtr & Analysis::book(Profile1DPtr & p1d,const string& hname, const initializer_list<double>& binedges) { + return book(p1d, hname, vector<double>{binedges}); } -
Profile1DPtr & Analysis::book(Profile1DPtr & p1d, const string& hname, - const vector<double>& binedges, - const string& title, - const string& xtitle, - const string& ytitle) { + Profile1DPtr & Analysis::book(Profile1DPtr & p1d, const string& hname, const vector<double>& binedges) { const string path = histoPath(hname); - Profile1D prof(binedges, path, title); - prof.setAnnotation("XLabel", xtitle); - prof.setAnnotation("YLabel", ytitle); + Profile1D prof(binedges, path); // p1d = Profile1DPtr(handler().weightNames(), prof); // p1d = addAnalysisObject(p1d); // return p1d; return p1d = registerAO(prof); } - Profile1DPtr & Analysis::book(Profile1DPtr & p1d, const string& hname, - const Scatter2D& refscatter, - const string& title, - const string& xtitle, - const string& ytitle) { + Profile1DPtr & Analysis::book(Profile1DPtr & p1d, const string& hname, const Scatter2D& refscatter) { const string path = histoPath(hname); Profile1D prof(refscatter, path); - prof.setTitle(title); - prof.setAnnotation("XLabel", xtitle); - prof.setAnnotation("YLabel", ytitle); - if (prof.hasAnnotation("IsRef")) prof.rmAnnotation("IsRef"); + for (const string& a : prof.annotations()) { + if (a != "Path") prof.rmAnnotation(a); + } // p1d = Profile1DPtr(handler().weightNames(), prof); // p1d = addAnalysisObject(p1d); // return p1d; return p1d = registerAO(prof); } - Profile1DPtr & Analysis::book(Profile1DPtr & p1d,const string& hname, - const string& title, - const string& xtitle, - const string& ytitle) { + Profile1DPtr & Analysis::book(Profile1DPtr & p1d,const string& hname) { const Scatter2D& refdata = refData(hname); - book(p1d, hname, refdata, title, xtitle, ytitle); - return p1d; + return book(p1d, hname, refdata); } - Profile1DPtr & Analysis::book(Profile1DPtr & p1d,unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId, - const string& title, - const string& xtitle, - const string& ytitle) { + Profile1DPtr & Analysis::book(Profile1DPtr & p1d,unsigned int datasetId, unsigned int xAxisId,
unsigned int yAxisId) { const string axisCode = mkAxisCode(datasetId, xAxisId, yAxisId); - return book(p1d, axisCode, title, xtitle, ytitle); + return book(p1d, axisCode); } /////////////////// Profile2DPtr & Analysis::book(Profile2DPtr & p2d, const string& hname, size_t nxbins, double xlower, double xupper, - size_t nybins, double ylower, double yupper, - const string& title, - const string& xtitle, - const string& ytitle, - const string& ztitle) - { + size_t nybins, double ylower, double yupper) { const string path = histoPath(hname); - Profile2D prof(nxbins, xlower, xupper, nybins, ylower, yupper, path, title); - prof.setAnnotation("XLabel", xtitle); - prof.setAnnotation("YLabel", ytitle); - prof.setAnnotation("ZLabel", ztitle); + Profile2D prof(nxbins, xlower, xupper, nybins, ylower, yupper, path); // p2d = Profile2DPtr(handler().weightNames(), prof); // p2d = addAnalysisObject(p2d); // return p2d; return p2d = registerAO(prof); } Profile2DPtr & Analysis::book(Profile2DPtr & p2d, const string& hname, const initializer_list<double>& xbinedges, - const initializer_list<double>& ybinedges, - const string& title, - const string& xtitle, - const string& ytitle, - const string& ztitle) - { - return book(p2d, hname, vector<double>{xbinedges}, vector<double>{ybinedges}, title, xtitle, ytitle, ztitle); + const initializer_list<double>& ybinedges) { + return book(p2d, hname, vector<double>{xbinedges}, vector<double>{ybinedges}); } Profile2DPtr & Analysis::book(Profile2DPtr & p2d, const string& hname, const vector<double>& xbinedges, - const vector<double>& ybinedges, - const string& title, - const string& xtitle, - const string& ytitle, - const string& ztitle) - { + const vector<double>& ybinedges) { const string path = histoPath(hname); - Profile2D prof(xbinedges, ybinedges, path, title); - prof.setAnnotation("XLabel", xtitle); - prof.setAnnotation("YLabel", ytitle); - prof.setAnnotation("ZLabel", ztitle); + Profile2D prof(xbinedges, ybinedges, path); // p2d = Profile2DPtr(handler().weightNames(), prof); // p2d = addAnalysisObject(p2d); // return
p2d; return p2d = registerAO(prof); } /// @todo REINSTATE // Profile2DPtr Analysis::book(Profile2DPtr& prof,const string& hname, - // const Scatter3D& refscatter, - // const string& title, - // const string& xtitle, - // const string& ytitle, - // const string& ztitle) { + // const Scatter3D& refscatter) { // const string path = histoPath(hname); // /// @todo Add no-metadata argument to YODA copy constructors // Profile2D prof(refscatter, path); - // prof.setTitle(title); - // prof.setAnnotation("XLabel", xtitle); - // prof.setAnnotation("YLabel", ytitle); - // prof.setAnnotation("ZLabel", ztitle); // if (prof.hasAnnotation("IsRef")) prof.rmAnnotation("IsRef"); // p2d = Profile2DPtr(handler().weightNames(), prof); // p2d = addAnalysisObject(p2d); // return p2d; // } - // Profile2DPtr Analysis::book(Profile2DPtr& prof, const string& hname, - // const string& title, - // const string& xtitle, - // const string& ytitle, - // const string& ztitle) { + // Profile2DPtr Analysis::book(Profile2DPtr& prof, const string& hname) { // const Scatter3D& refdata = refData(hname); - // return book(prof, hname, refdata, title, xtitle, ytitle, ztitle); + // return book(prof, hname, refdata); // } /////////////// /// @todo Should be able to book Scatter1Ds /////////////// - Scatter2DPtr & Analysis::book(Scatter2DPtr & s2d, unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId, - bool copy_pts, - const string& title, - const string& xtitle, - const string& ytitle) { + Scatter2DPtr & Analysis::book(Scatter2DPtr & s2d, unsigned int datasetId, + unsigned int xAxisId, unsigned int yAxisId, bool copy_pts) { const string axisCode = mkAxisCode(datasetId, xAxisId, yAxisId); - return book(s2d, axisCode, copy_pts, title, xtitle, ytitle); + return book(s2d, axisCode, copy_pts); } - Scatter2DPtr & Analysis::book(Scatter2DPtr & s2d, const string& hname, - bool copy_pts, - const string& title, - const string& xtitle, - const string& ytitle) { + Scatter2DPtr & 
Analysis::book(Scatter2DPtr & s2d, const string& hname, bool copy_pts) { const string path = histoPath(hname); Scatter2D scat; if (copy_pts) { const Scatter2D& refdata = refData(hname); scat = Scatter2D(refdata, path); for (Point2D& p : scat.points()) p.setY(0, 0); + for (const string& a : scat.annotations()) { + if (a != "Path") scat.rmAnnotation(a); + } } else { scat = Scatter2D(path); } - scat.setTitle(title); - scat.setAnnotation("XLabel", xtitle); - scat.setAnnotation("YLabel", ytitle); - if (scat.hasAnnotation("IsRef")) scat.rmAnnotation("IsRef"); - // s2d = Scatter2DPtr(handler().weightNames(), scat); // s2d = addAnalysisObject(s2d); // return s2d; return s2d = registerAO(scat); } - Scatter2DPtr & Analysis::book(Scatter2DPtr & s2d, const string& hname, - size_t npts, double lower, double upper, - const string& title, - const string& xtitle, - const string& ytitle) { + Scatter2DPtr & Analysis::book(Scatter2DPtr & s2d, const string& hname, size_t npts, double lower, double upper) { const string path = histoPath(hname); Scatter2D scat; const double binwidth = (upper-lower)/npts; for (size_t pt = 0; pt < npts; ++pt) { const double bincentre = lower + (pt + 0.5) * binwidth; scat.addPoint(bincentre, 0, binwidth/2.0, 0); } - scat.setTitle(title); - scat.setAnnotation("XLabel", xtitle); - scat.setAnnotation("YLabel", ytitle); - // s2d = Scatter2DPtr(handler().weightNames(), scat); // s2d = addAnalysisObject(s2d); // return s2d; return s2d = registerAO(scat); } - Scatter2DPtr & Analysis::book(Scatter2DPtr & s2d, const string& hname, - const vector<double>& binedges, - const string& title, - const string& xtitle, - const string& ytitle) { + Scatter2DPtr & Analysis::book(Scatter2DPtr & s2d, const string& hname, const vector<double>& binedges) { const string path = histoPath(hname); Scatter2D scat; for (size_t pt = 0; pt < binedges.size()-1; ++pt) { const double bincentre = (binedges[pt] + binedges[pt+1]) / 2.0; const double binwidth = binedges[pt+1] - binedges[pt];
scat.addPoint(bincentre, 0, binwidth/2.0, 0); } - scat.setTitle(title); - scat.setAnnotation("XLabel", xtitle); - scat.setAnnotation("YLabel", ytitle); - // s2d = Scatter2DPtr(handler().weightNames(), scat); // s2d = addAnalysisObject(s2d); // return s2d; return s2d = registerAO(scat); } /// @todo Book Scatter3Ds? ///////////////////// void Analysis::divide(CounterPtr c1, CounterPtr c2, Scatter1DPtr s) const { const string path = s->path(); *s = *c1 / *c2; s->setPath(path); } void Analysis::divide(const Counter& c1, const Counter& c2, Scatter1DPtr s) const { const string path = s->path(); *s = c1 / c2; s->setPath(path); } void Analysis::divide(Histo1DPtr h1, Histo1DPtr h2, Scatter2DPtr s) const { const string path = s->path(); *s = *h1 / *h2; s->setPath(path); } void Analysis::divide(const Histo1D& h1, const Histo1D& h2, Scatter2DPtr s) const { const string path = s->path(); *s = h1 / h2; s->setPath(path); } void Analysis::divide(Profile1DPtr p1, Profile1DPtr p2, Scatter2DPtr s) const { const string path = s->path(); *s = *p1 / *p2; s->setPath(path); } void Analysis::divide(const Profile1D& p1, const Profile1D& p2, Scatter2DPtr s) const { const string path = s->path(); *s = p1 / p2; s->setPath(path); } void Analysis::divide(Histo2DPtr h1, Histo2DPtr h2, Scatter3DPtr s) const { const string path = s->path(); *s = *h1 / *h2; s->setPath(path); } void Analysis::divide(const Histo2D& h1, const Histo2D& h2, Scatter3DPtr s) const { const string path = s->path(); *s = h1 / h2; s->setPath(path); } void Analysis::divide(Profile2DPtr p1, Profile2DPtr p2, Scatter3DPtr s) const { const string path = s->path(); *s = *p1 / *p2; s->setPath(path); } void Analysis::divide(const Profile2D& p1, const Profile2D& p2, Scatter3DPtr s) const { const string path = s->path(); *s = p1 / p2; s->setPath(path); } /// @todo Counter and Histo2D efficiencies and asymms void Analysis::efficiency(Histo1DPtr h1, Histo1DPtr h2, Scatter2DPtr s) const { const string path = s->path(); *s = 
YODA::efficiency(*h1, *h2); s->setPath(path); } void Analysis::efficiency(const Histo1D& h1, const Histo1D& h2, Scatter2DPtr s) const { const string path = s->path(); *s = YODA::efficiency(h1, h2); s->setPath(path); } void Analysis::asymm(Histo1DPtr h1, Histo1DPtr h2, Scatter2DPtr s) const { const string path = s->path(); *s = YODA::asymm(*h1, *h2); s->setPath(path); } void Analysis::asymm(const Histo1D& h1, const Histo1D& h2, Scatter2DPtr s) const { const string path = s->path(); *s = YODA::asymm(h1, h2); s->setPath(path); } void Analysis::scale(CounterPtr cnt, Analysis::CounterAdapter factor) { if (!cnt) { MSG_WARNING("Failed to scale counter=NULL in analysis " << name() << " (scale=" << double(factor) << ")"); return; } if (std::isnan(double(factor)) || std::isinf(double(factor))) { MSG_WARNING("Failed to scale counter=" << cnt->path() << " in analysis: " << name() << " (invalid scale factor = " << double(factor) << ")"); factor = 0; } MSG_TRACE("Scaling counter " << cnt->path() << " by factor " << double(factor)); try { cnt->scaleW(factor); } catch (YODA::Exception& we) { MSG_WARNING("Could not scale counter " << cnt->path()); return; } } void Analysis::normalize(Histo1DPtr histo, Analysis::CounterAdapter norm, bool includeoverflows) { if (!histo) { MSG_WARNING("Failed to normalize histo=NULL in analysis " << name() << " (norm=" << double(norm) << ")"); return; } MSG_TRACE("Normalizing histo " << histo->path() << " to " << double(norm)); try { const double hint = histo->integral(includeoverflows); if (hint == 0) MSG_WARNING("Skipping histo with null area " << histo->path()); else histo->normalize(norm, includeoverflows); } catch (YODA::Exception& we) { MSG_WARNING("Could not normalize histo " << histo->path()); return; } } void Analysis::scale(Histo1DPtr histo, Analysis::CounterAdapter factor) { if (!histo) { MSG_WARNING("Failed to scale histo=NULL in analysis " << name() << " (scale=" << double(factor) << ")"); return; } if (std::isnan(double(factor)) || 
std::isinf(double(factor))) { MSG_WARNING("Failed to scale histo=" << histo->path() << " in analysis: " << name() << " (invalid scale factor = " << double(factor) << ")"); factor = 0; } MSG_TRACE("Scaling histo " << histo->path() << " by factor " << double(factor)); try { histo->scaleW(factor); } catch (YODA::Exception& we) { MSG_WARNING("Could not scale histo " << histo->path()); return; } } void Analysis::normalize(Histo2DPtr histo, Analysis::CounterAdapter norm, bool includeoverflows) { if (!histo) { MSG_ERROR("Failed to normalize histo=NULL in analysis " << name() << " (norm=" << double(norm) << ")"); return; } MSG_TRACE("Normalizing histo " << histo->path() << " to " << double(norm)); try { const double hint = histo->integral(includeoverflows); if (hint == 0) MSG_WARNING("Skipping histo with null area " << histo->path()); else histo->normalize(norm, includeoverflows); } catch (YODA::Exception& we) { MSG_WARNING("Could not normalize histo " << histo->path()); return; } } void Analysis::scale(Histo2DPtr histo, Analysis::CounterAdapter factor) { if (!histo) { MSG_ERROR("Failed to scale histo=NULL in analysis " << name() << " (scale=" << double(factor) << ")"); return; } if (std::isnan(double(factor)) || std::isinf(double(factor))) { MSG_ERROR("Failed to scale histo=" << histo->path() << " in analysis: " << name() << " (invalid scale factor = " << double(factor) << ")"); factor = 0; } MSG_TRACE("Scaling histo " << histo->path() << " by factor " << double(factor)); try { histo->scaleW(factor); } catch (YODA::Exception& we) { MSG_WARNING("Could not scale histo " << histo->path()); return; } } void Analysis::integrate(Histo1DPtr h, Scatter2DPtr s) const { // preserve the path info const string path = s->path(); *s = toIntegralHisto(*h); s->setPath(path); } void Analysis::integrate(const Histo1D& h, Scatter2DPtr s) const { // preserve the path info const string path = s->path(); *s = toIntegralHisto(h); s->setPath(path); } } /// @todo 2D versions of integrate... 
defined how, exactly?!? ////////////////////////////////// // namespace { // void errormsg(std::string name) { // // #ifdef HAVE_BACKTRACE // // void * buffer[4]; // // backtrace(buffer, 4); // // backtrace_symbols_fd(buffer, 4 , 1); // // #endif // std::cerr << name << ": Can't book objects outside of init().\n"; // assert(false); // } // } namespace Rivet { // void Analysis::addAnalysisObject(const MultiweightAOPtr & ao) { // if (handler().stage() == AnalysisHandler::Stage::INIT) { // _analysisobjects.push_back(ao); // } // else { // errormsg(name()); // } // } void Analysis::removeAnalysisObject(const string& path) { for (auto it = _analysisobjects.begin(); it != _analysisobjects.end(); ++it) { if ((*it)->path() == path) { _analysisobjects.erase(it); break; } } } void Analysis::removeAnalysisObject(const MultiweightAOPtr & ao) { for (auto it = _analysisobjects.begin(); it != _analysisobjects.end(); ++it) { if ((*it) == ao) { _analysisobjects.erase(it); break; } } } const CentralityProjection & Analysis::declareCentrality(const SingleValueProjection &proj, string calAnaName, string calHistName, const string projName, bool increasing) { CentralityProjection cproj; // Select the centrality variable from option. Use REF as default. // Other selections are "GEN", "IMP" and "USR" (USR only in HEPMC 3). 
string sel = getOption("cent","REF"); set<string> done; if ( sel == "REF" ) { YODA::Scatter2DPtr refscat; auto refmap = getRefData(calAnaName); if ( refmap.find(calHistName) != refmap.end() ) refscat = dynamic_pointer_cast<YODA::Scatter2D>(refmap.find(calHistName)->second); if ( !refscat ) { MSG_WARNING("No reference calibration histogram for " << "CentralityProjection " << projName << " found " << "(requested histogram " << calHistName << " in " << calAnaName << ")"); } else { MSG_INFO("Found calibration histogram " << sel << " " << refscat->path()); cproj.add(PercentileProjection(proj, *refscat, increasing), sel); } } else if ( sel == "GEN" ) { YODA::Histo1DPtr genhists = getPreload<YODA::Histo1D>("/" + calAnaName + "/" + calHistName); // for ( YODA::AnalysisObjectPtr ao : handler().getData(true) ) { // if ( ao->path() == histpath ) // genhist = dynamic_pointer_cast<YODA::Histo1D>(ao); // } if ( !genhists || genhists->numEntries() <= 1 ) { MSG_WARNING("No generated calibration histogram for " << "CentralityProjection " << projName << " found " << "(requested histogram " << calHistName << " in " << calAnaName << ")"); } else { MSG_INFO("Found calibration histogram " << sel << " " << genhists->path()); cproj.add(PercentileProjection(proj, *genhists, increasing), sel); } } else if ( sel == "IMP" ) { YODA::Histo1DPtr imphists = getPreload<YODA::Histo1D>("/" + calAnaName + "/" + calHistName + "_IMP"); if ( !imphists || imphists->numEntries() <= 1 ) { MSG_WARNING("No impact parameter calibration histogram for " << "CentralityProjection " << projName << " found " << "(requested histogram " << calHistName << "_IMP in " << calAnaName << ")"); } else { MSG_INFO("Found calibration histogram " << sel << " " << imphists->path()); cproj.add(PercentileProjection(ImpactParameterProjection(), *imphists, true), sel); } } else if ( sel == "USR" ) { #if HEPMC_VERSION_CODE >= 3000000 YODA::Histo1DPtr usrhists = getPreload<YODA::Histo1D>("/" + calAnaName + "/" + calHistName + "_USR"); if ( !usrhists || usrhists->numEntries() <= 1 ) { MSG_WARNING("No user-defined
calibration histogram for " << "CentralityProjection " << projName << " found " << "(requested histogram " << calHistName << "_USR in " << calAnaName << ")"); } else { MSG_INFO("Found calibration histogram " << sel << " " << usrhists->path()); cproj.add(PercentileProjection(UserCentEstimate(), *usrhists, true), sel); } #else MSG_WARNING("UserCentEstimate is only available with HepMC3."); #endif } else if ( sel == "RAW" ) { #if HEPMC_VERSION_CODE >= 3000000 cproj.add(GeneratedCentrality(), sel); #else MSG_WARNING("GeneratedCentrality is only available with HepMC3."); #endif } else MSG_WARNING("'" << sel << "' is not a valid PercentileProjection tag."); if ( cproj.empty() ) MSG_WARNING("CentralityProjection " << projName << " did not contain any valid PercentileProjections."); return declare(cproj, projName); } vector<string> Analysis::_weightNames() const { return handler().weightNames(); } YODA::AnalysisObjectPtr Analysis::_getPreload(string path) const { return handler().getPreload(path); } size_t Analysis::_defaultWeightIndex() const { return handler().defaultWeightIndex(); } MultiweightAOPtr Analysis::_getOtherAnalysisObject(const std::string & ananame, const std::string& name) { std::string path = "/" + ananame + "/" + name; const auto& ana = handler().analysis(ananame); return ana->getAnalysisObject(name); //< @todo includeorphans check??
} void Analysis::_checkBookInit() const { if (handler().stage() != AnalysisHandler::Stage::INIT) { MSG_ERROR("Can't book objects outside of init()"); throw UserError(name() + ": Can't book objects outside of init()."); } } bool Analysis::inInit() const { return handler().stage() == AnalysisHandler::Stage::INIT; } bool Analysis::inFinalize() const { return handler().stage() == AnalysisHandler::Stage::FINALIZE; } } diff --git a/src/Core/AnalysisHandler.cc b/src/Core/AnalysisHandler.cc --- a/src/Core/AnalysisHandler.cc +++ b/src/Core/AnalysisHandler.cc @@ -1,691 +1,696 @@ // -*- C++ -*- #include "Rivet/Config/RivetCommon.hh" #include "Rivet/AnalysisHandler.hh" #include "Rivet/Analysis.hh" #include "Rivet/Tools/ParticleName.hh" #include "Rivet/Tools/BeamConstraint.hh" #include "Rivet/Tools/Logging.hh" #include "Rivet/Projections/Beam.hh" #include "YODA/IO.h" -#include #include using std::cout; using std::cerr; namespace Rivet { AnalysisHandler::AnalysisHandler(const string& runname) : _runname(runname), - _initialised(false), _ignoreBeams(false), + _initialised(false), _ignoreBeams(false), _skipWeights(false), _defaultWeightIdx(0), _dumpPeriod(0), _dumping(false) { } AnalysisHandler::~AnalysisHandler() { } Log& AnalysisHandler::getLog() const { return Log::getLog("Rivet.Analysis.Handler"); } /// http://stackoverflow.com/questions/4654636/how-to-determine-if-a-string-is-a-number-with-c namespace { bool is_number(const std::string& s) { std::string::const_iterator it = s.begin(); while (it != s.end() && std::isdigit(*it)) ++it; return !s.empty() && it == s.end(); } } /// Check if any of the weightnames is not a number bool AnalysisHandler::haveNamedWeights() const { bool dec=false; for (unsigned int i=0;i<_weightNames.size();++i) { string s = _weightNames[i]; if (!is_number(s)) { dec=true; break; } } return dec; } void AnalysisHandler::init(const GenEvent& ge) { if (_initialised) throw UserError("AnalysisHandler::init has already been called: cannot re-initialize!"); /// 
@todo Should the Rivet analysis objects know about weight names? setRunBeams(Rivet::beams(ge)); MSG_DEBUG("Initialising the analysis handler"); _eventNumber = ge.event_number(); setWeightNames(ge); - if (haveNamedWeights()) + if (_skipWeights) + MSG_INFO("Only using nominal weight. Variation weights will be ignored."); + else if (haveNamedWeights()) MSG_INFO("Using named weights"); else MSG_INFO("NOT using named weights. Using first weight as nominal weight"); _eventCounter = CounterPtr(weightNames(), Counter("_EVTCOUNT")); // Set the cross section based on what is reported by this event. if (ge.cross_section()) { MSG_TRACE("Getting cross section."); double xs = ge.cross_section()->cross_section(); double xserr = ge.cross_section()->cross_section_error(); setCrossSection(xs, xserr); } // Check that analyses are beam-compatible, and remove those that aren't const size_t num_anas_requested = analysisNames().size(); vector<string> anamestodelete; for (const AnaHandle a : analyses()) { if (!_ignoreBeams && !a->isCompatible(beams())) { //MSG_DEBUG(a->name() << " requires beams " << a->requiredBeams() << " @ " << a->requiredEnergies() << " GeV"); anamestodelete.push_back(a->name()); } } for (const string& aname : anamestodelete) { MSG_WARNING("Analysis '" << aname << "' is incompatible with the provided beams: removing"); removeAnalysis(aname); } if (num_anas_requested > 0 && analysisNames().empty()) { cerr << "All analyses were incompatible with the first event's beams\n" << "Exiting, since this probably wasn't intentional!"
<< endl; exit(1); } // Warn if any analysis' status is not unblemished for (const AnaHandle a : analyses()) { if ( a->info().preliminary() ) { MSG_WARNING("Analysis '" << a->name() << "' is preliminary: be careful, it may change and/or be renamed!"); } else if ( a->info().obsolete() ) { MSG_WARNING("Analysis '" << a->name() << "' is obsolete: please update!"); } else if (( a->info().unvalidated() ) ) { MSG_WARNING("Analysis '" << a->name() << "' is unvalidated: be careful, it may be broken!"); } } // Initialize the remaining analyses _stage = Stage::INIT; for (AnaHandle a : analyses()) { MSG_DEBUG("Initialising analysis: " << a->name()); try { // Allow projection registration in the init phase onwards a->_allowProjReg = true; a->init(); //MSG_DEBUG("Checking consistency of analysis: " << a->name()); //a->checkConsistency(); } catch (const Error& err) { cerr << "Error in " << a->name() << "::init method: " << err.what() << endl; exit(1); } MSG_DEBUG("Done initialising analysis: " << a->name()); } _stage = Stage::OTHER; _initialised = true; MSG_DEBUG("Analysis handler initialised"); } void AnalysisHandler::setWeightNames(const GenEvent& ge) { - _weightNames = HepMCUtils::weightNames(ge); - if ( _weightNames.empty() ) _weightNames.push_back(""); + if (!_skipWeights) _weightNames = HepMCUtils::weightNames(ge); + if ( _weightNames.empty() ) _weightNames.push_back(""); for ( int i = 0, N = _weightNames.size(); i < N; ++i ) if ( _weightNames[i] == "" ) _defaultWeightIdx = i; } void AnalysisHandler::analyze(const GenEvent& ge) { // Call init with event as template if not already initialised if (!_initialised) init(ge); assert(_initialised); // Ensure that beam details match those from the first event (if we're checking beams) if ( !_ignoreBeams ) { const PdgIdPair beams = Rivet::beamIds(ge); const double sqrts = Rivet::sqrtS(ge); if (!compatible(beams, _beams) || !fuzzyEquals(sqrts, sqrtS())) { cerr << "Event beams mismatch: " << PID::toBeamsString(beams) << " @ " << 
sqrts/GeV << " GeV" << " vs. first beams " << this->beams() << " @ " << this->sqrtS()/GeV << " GeV" << endl; exit(1); } } // Create the Rivet event wrapper /// @todo Filter/normalize the event here bool strip = ( getEnvParam("RIVET_STRIP_HEPMC", string("NOOOO") ) != "NOOOO" ); Event event(ge, strip); // set the cross section based on what is reported by this event. // if no cross section MSG_TRACE("getting cross section."); if (ge.cross_section()) { MSG_TRACE("getting cross section from GenEvent."); double xs = ge.cross_section()->cross_section(); double xserr = ge.cross_section()->cross_section_error(); setCrossSection(xs, xserr); } // Won't happen for first event because _eventNumber is set in init() if (_eventNumber != ge.event_number()) { /// @todo Can we get away with not passing a matrix? MSG_TRACE("AnalysisHandler::analyze(): Pushing _eventCounter to persistent."); _eventCounter.get()->pushToPersistent(_subEventWeights); // if this is indeed a new event, push the temporary // histograms and reset for (const AnaHandle& a : analyses()) { for (auto ao : a->analysisObjects()) { MSG_TRACE("AnalysisHandler::analyze(): Pushing " << a->name() << "'s " << ao->name() << " to persistent."); ao.get()->pushToPersistent(_subEventWeights); } MSG_TRACE("AnalysisHandler::analyze(): finished pushing " << a->name() << "'s objects to persistent."); } _eventNumber = ge.event_number(); MSG_DEBUG("nominal event # " << _eventCounter.get()->_persistent[0]->numEntries()); MSG_DEBUG("nominal sum of weights: " << _eventCounter.get()->_persistent[0]->sumW()); MSG_DEBUG("Event has " << _subEventWeights.size() << " sub events."); _subEventWeights.clear(); } MSG_TRACE("starting new sub event"); _eventCounter.get()->newSubEvent(); for (const AnaHandle& a : analyses()) { for (auto ao : a->analysisObjects()) { ao.get()->newSubEvent(); } } _subEventWeights.push_back(event.weights()); MSG_DEBUG("Analyzing subevent #" << _subEventWeights.size() - 1 << "."); _eventCounter->fill(); // Run the 
analyses for (AnaHandle a : analyses()) { MSG_TRACE("About to run analysis " << a->name()); try { a->analyze(event); } catch (const Error& err) { cerr << "Error in " << a->name() << "::analyze method: " << err.what() << endl; exit(1); } MSG_TRACE("Finished running analysis " << a->name()); } if ( _dumpPeriod > 0 && numEvents() > 0 && numEvents()%_dumpPeriod == 0 ) { MSG_INFO("Dumping intermediate results to " << _dumpFile << "."); _dumping = numEvents()/_dumpPeriod; finalize(); writeData(_dumpFile); _dumping = 0; } } void AnalysisHandler::analyze(const GenEvent* ge) { if (ge == nullptr) { MSG_ERROR("AnalysisHandler received null pointer to GenEvent"); //throw Error("AnalysisHandler received null pointer to GenEvent"); } analyze(*ge); } void AnalysisHandler::finalize() { if (!_initialised) return; MSG_INFO("Finalising analyses"); _stage = Stage::FINALIZE; // First push all analyses' objects to persistent and final MSG_TRACE("AnalysisHandler::finalize(): Pushing analysis objects to persistent."); _eventCounter.get()->pushToPersistent(_subEventWeights); _eventCounter.get()->pushToFinal(); _xs.get()->pushToFinal(); for (const AnaHandle& a : analyses()) { for (auto ao : a->analysisObjects()) { ao.get()->pushToPersistent(_subEventWeights); ao.get()->pushToFinal(); } } for (AnaHandle a : analyses()) { if ( _dumping && !a->info().reentrant() ) { if ( _dumping == 1 ) MSG_INFO("Skipping finalize in periodic dump of " << a->name() << " as it is not declared reentrant."); continue; } for (size_t iW = 0; iW < numWeights(); iW++) { _eventCounter.get()->setActiveFinalWeightIdx(iW); _xs.get()->setActiveFinalWeightIdx(iW); for (auto ao : a->analysisObjects()) ao.get()->setActiveFinalWeightIdx(iW); try { MSG_TRACE("running " << a->name() << "::finalize() for weight " << iW << "."); a->finalize(); } catch (const Error& err) { cerr << "Error in " << a->name() << "::finalize method: " << err.what() << '\n'; exit(1); } } } // Print out number of events processed const int nevts = 
numEvents(); MSG_INFO("Processed " << nevts << " event" << (nevts != 1 ? "s" : "")); _stage = Stage::OTHER; if ( _dumping ) return; // Print out MCnet boilerplate if (getLog().getLevel()<=20){ cout << endl; cout << "The MCnet usage guidelines apply to Rivet: see http://www.montecarlonet.org/GUIDELINES" << endl; cout << "Please acknowledge plots made with Rivet analyses, and cite arXiv:1003.0694 (http://arxiv.org/abs/1003.0694)" << endl; } } AnalysisHandler& AnalysisHandler::addAnalysis(const string& analysisname, std::map<string, string> pars) { // Make an option handle. std::string parHandle = ""; for (map<string, string>::iterator par = pars.begin(); par != pars.end(); ++par) { parHandle +=":"; parHandle += par->first + "=" + par->second; } return addAnalysis(analysisname + parHandle); } AnalysisHandler& AnalysisHandler::addAnalysis(const string& analysisname) { // Check for a duplicate analysis /// @todo Might we want to be able to run an analysis twice, with different params? /// Requires avoiding histo tree clashes, i.e. storing the histos on the analysis objects. string ananame = analysisname; vector<string> anaopt = split(analysisname, ":"); if ( anaopt.size() > 1 ) ananame = anaopt[0]; AnaHandle analysis( AnalysisLoader::getAnalysis(ananame) ); if (analysis.get() != 0) { // < Check for null analysis. MSG_DEBUG("Adding analysis '" << analysisname << "'"); map<string, string> opts; for ( int i = 1, N = anaopt.size(); i < N; ++i ) { vector<string> opt = split(anaopt[i], "="); if ( opt.size() != 2 ) { MSG_WARNING("Error in option specification. Skipping analysis " << analysisname); return *this; } if ( !analysis->info().validOption(opt[0], opt[1]) ) { MSG_WARNING("Cannot set option '" << opt[0] << "' to '" << opt[1] << "'.
Skipping analysis " << analysisname); return *this; } opts[opt[0]] = opt[1]; } for ( auto opt: opts) { analysis->_options[opt.first] = opt.second; analysis->_optstring += ":" + opt.first + "=" + opt.second; } for (const AnaHandle& a : analyses()) { if (a->name() == analysis->name() ) { MSG_WARNING("Analysis '" << analysisname << "' already registered: skipping duplicate"); return *this; } } analysis->_analysishandler = this; _analyses[analysisname] = analysis; } else { MSG_WARNING("Analysis '" << analysisname << "' not found."); } // MSG_WARNING(_analyses.size()); // for (const AnaHandle& a : _analyses) MSG_WARNING(a->name()); return *this; } AnalysisHandler& AnalysisHandler::removeAnalysis(const string& analysisname) { MSG_DEBUG("Removing analysis '" << analysisname << "'"); if (_analyses.find(analysisname) != _analyses.end()) _analyses.erase(analysisname); // } return *this; } // void AnalysisHandler::addData(const std::vector& aos) { // for (const YODA::AnalysisObjectPtr ao : aos) { // string path = ao->path(); // if ( path.substr(0, 5) != "/RAW/" ) { // _orphanedPreloads.push_back(ao); // continue; // } // path = path.substr(4); // ao->setPath(path); // if (path.size() > 1) { // path > "/" // try { // const string ananame = ::split(path, "/")[0]; // AnaHandle a = analysis(ananame); // /// @todo FIXXXXX // //MultiweightAOPtr mao = ????; /// @todo generate right Multiweight object from ao // //a->addAnalysisObject(mao); /// @todo Need to statistically merge... 
// } catch (const Error& e) { // MSG_TRACE("Adding analysis object " << path << // " to the list of orphans."); // _orphanedPreloads.push_back(ao); // } // } // } // } void AnalysisHandler::stripOptions(YODA::AnalysisObjectPtr ao, const vector<string>& delopts) const { string path = ao->path(); string ananame = split(path, "/")[0]; vector<string> anaopts = split(ananame, ":"); for ( int i = 1, N = anaopts.size(); i < N; ++i ) for ( auto opt : delopts ) if ( opt == "*" || anaopts[i].find(opt + "=") == 0 ) path.replace(path.find(":" + anaopts[i]), (":" + anaopts[i]).length(), ""); ao->setPath(path); } void AnalysisHandler::mergeYodas(const vector<string>& aofiles, const vector<string>& delopts, bool equiv) { // Convenient typedef; typedef multimap<string, YODA::AnalysisObjectPtr> AOMap; // Store all found weights here. set<string> foundWeightNames; // Store all found analyses. set<string> foundAnalyses; // Store all analysis objects here. vector<AOMap> allaos; // Go through all files and collect information. for ( auto file : aofiles ) { allaos.push_back(AOMap()); AOMap & aomap = allaos.back(); vector<YODA::AnalysisObject*> aos_raw; try { YODA::read(file, aos_raw); } catch (...) { //< YODA::ReadError& throw UserError("Unexpected error in reading file: " + file); } for (YODA::AnalysisObject* aor : aos_raw) { YODA::AnalysisObjectPtr ao(aor); AOPath path(ao->path()); if ( !path ) throw UserError("Invalid path name in file: " + file); if ( !path.isRaw() ) continue; foundWeightNames.insert(path.weight()); // Now check if any options should be removed. for ( string delopt : delopts ) if ( path.hasOption(delopt) ) path.removeOption(delopt); path.setPath(); if ( path.analysisWithOptions() != "" ) foundAnalyses.insert(path.analysisWithOptions()); aomap.insert(make_pair(path.path(), ao)); } } // Now make analysis handler aware of the weight names present.
_weightNames.clear(); _weightNames.push_back(""); _defaultWeightIdx = 0; for ( string name : foundWeightNames ) _weightNames.push_back(name); // Then we create and initialize all analyses for ( string ananame : foundAnalyses ) addAnalysis(ananame); for (AnaHandle a : analyses() ) { MSG_TRACE("Initialising analysis: " << a->name()); if ( !a->info().reentrant() ) MSG_WARNING("Analysis " << a->name() << " has not been validated to have " << "a reentrant finalize method. The merged result is unpredictable."); try { // Allow projection registration in the init phase onwards a->_allowProjReg = true; a->init(); } catch (const Error& err) { cerr << "Error in " << a->name() << "::init method: " << err.what() << endl; exit(1); } MSG_TRACE("Done initialising analysis: " << a->name()); } // Now get all booked analysis objects. vector<MultiweightAOPtr> raos; for (AnaHandle a : analyses()) { for (const auto & ao : a->analysisObjects()) { raos.push_back(ao); } } // Collect global weights and cross sections and fix scaling for // all files. _eventCounter = CounterPtr(weightNames(), Counter("_EVTCOUNT")); _xs = Scatter1DPtr(weightNames(), Scatter1D("_XSEC")); for (size_t iW = 0; iW < numWeights(); iW++) { _eventCounter.get()->setActiveWeightIdx(iW); _xs.get()->setActiveWeightIdx(iW); YODA::Counter & sumw = *_eventCounter; YODA::Scatter1D & xsec = *_xs; vector<YODA::Scatter1DPtr> xsecs; vector<YODA::CounterPtr> sows; for ( auto & aomap : allaos ) { auto xit = aomap.find(xsec.path()); if ( xit != aomap.end() ) xsecs.push_back(dynamic_pointer_cast<YODA::Scatter1D>(xit->second)); else xsecs.push_back(YODA::Scatter1DPtr()); xit = aomap.find(sumw.path()); if ( xit != aomap.end() ) sows.push_back(dynamic_pointer_cast<YODA::Counter>(xit->second)); else sows.push_back(YODA::CounterPtr()); } double xs = 0.0, xserr = 0.0; for ( int i = 0, N = sows.size(); i < N; ++i ) { if ( !sows[i] || !xsecs[i] ) continue; double xseci = xsecs[i]->point(0).x(); double xsecerri = sqr(xsecs[i]->point(0).xErrAvg()); sumw += *sows[i]; double effnent = sows[i]->effNumEntries(); xs += (equiv?
effnent: 1.0)*xseci; xserr += (equiv? sqr(effnent): 1.0)*xsecerri; } vector<double> scales(sows.size(), 1.0); if ( equiv ) { xs /= sumw.effNumEntries(); xserr = sqrt(xserr)/sumw.effNumEntries(); } else { xserr = sqrt(xserr); for ( int i = 0, N = sows.size(); i < N; ++i ) scales[i] = (sumw.sumW()/sows[i]->sumW())* (xsecs[i]->point(0).x()/xs); } xsec.point(0) = Point1D(xs, xserr); // Go through all analyses and add stuff to their analysis objects. for (AnaHandle a : analyses()) { for (const auto & ao : a->analysisObjects()) { ao.get()->setActiveWeightIdx(iW); YODA::AnalysisObjectPtr yao = ao.get()->activeYODAPtr(); for ( int i = 0, N = sows.size(); i < N; ++i ) { if ( !sows[i] || !xsecs[i] ) continue; auto range = allaos[i].equal_range(yao->path()); for ( auto aoit = range.first; aoit != range.second; ++aoit ) if ( !addaos(yao, aoit->second, scales[i]) ) MSG_WARNING("Cannot merge objects with path " << yao->path() <<" of type " << yao->annotation("Type") ); } ao.get()->unsetActiveWeight(); } } _eventCounter.get()->unsetActiveWeight(); _xs.get()->unsetActiveWeight(); } // Finally we just have to finalize all analyses, leaving it to the // controlling program to write the result out to some yoda file. finalize(); } void AnalysisHandler::readData(const string& filename) { try { /// @todo Use new YODA SFINAE to fill the smart ptr vector directly vector<YODA::AnalysisObject*> aos_raw; YODA::read(filename, aos_raw); for (YODA::AnalysisObject* aor : aos_raw) _preloads[aor->path()] = YODA::AnalysisObjectPtr(aor); } catch (...) { //< YODA::ReadError& throw UserError("Unexpected error in reading file: " + filename); } } vector<MultiweightAOPtr> AnalysisHandler::getRivetAOs() const { vector<MultiweightAOPtr> rtn; for (AnaHandle a : analyses()) { for (const auto & ao : a->analysisObjects()) { rtn.push_back(ao); } } rtn.push_back(_eventCounter); rtn.push_back(_xs); return rtn; } void AnalysisHandler::writeData(const string& filename) const { // This is where we store the AOs to be written.
vector<YODA::AnalysisObjectPtr> output; // First get all multiweight AOs vector<MultiweightAOPtr> raos = getRivetAOs(); output.reserve(raos.size()*2*numWeights()); // Fix the ordering so that the default weight is written out first. vector<size_t> order; if ( _defaultWeightIdx >= 0 && _defaultWeightIdx < numWeights() ) order.push_back(_defaultWeightIdx); for ( size_t i = 0; i < numWeights(); ++i ) if ( i != _defaultWeightIdx ) order.push_back(i); // Then we go through all finalized AOs one weight at a time for (size_t iW : order ) { for ( auto rao : raos ) { rao.get()->setActiveFinalWeightIdx(iW); if ( rao->path().find("/TMP/") != string::npos ) continue; output.push_back(rao.get()->activeYODAPtr()); } } // Finally the RAW objects. for (size_t iW : order ) { for ( auto rao : raos ) { rao.get()->setActiveWeightIdx(iW); output.push_back(rao.get()->activeYODAPtr()); } } try { YODA::write(filename, output.begin(), output.end()); } catch (...) { //< YODA::WriteError& throw UserError("Unexpected error in writing file: " + filename); } } string AnalysisHandler::runName() const { return _runname; } size_t AnalysisHandler::numEvents() const { return _eventCounter->numEntries(); } std::vector<std::string> AnalysisHandler::analysisNames() const { std::vector<std::string> rtn; for (AnaHandle a : analyses()) { rtn.push_back(a->name()); } return rtn; } AnalysisHandler& AnalysisHandler::addAnalyses(const std::vector<std::string>& analysisnames) { for (const string& aname : analysisnames) { //MSG_DEBUG("Adding analysis '" << aname << "'"); addAnalysis(aname); } return *this; } AnalysisHandler& AnalysisHandler::removeAnalyses(const std::vector<std::string>& analysisnames) { for (const string& aname : analysisnames) { removeAnalysis(aname); } return *this; } AnalysisHandler& AnalysisHandler::setCrossSection(double xs, double xserr) { _xs = Scatter1DPtr(weightNames(), Scatter1D("_XSEC")); _eventCounter.get()->setActiveWeightIdx(_defaultWeightIdx); double nomwgt = sumW(); // The cross section of each weight variation is the nominal cross section // times the sumW(variation) /
sumW(nominal). // This way the cross section will work correctly for (size_t iW = 0; iW < numWeights(); iW++) { _eventCounter.get()->setActiveWeightIdx(iW); double s = sumW() / nomwgt; _xs.get()->setActiveWeightIdx(iW); _xs->addPoint(xs*s, xserr*s); } _eventCounter.get()->unsetActiveWeight(); _xs.get()->unsetActiveWeight(); return *this; } AnalysisHandler& AnalysisHandler::addAnalysis(Analysis* analysis) { analysis->_analysishandler = this; // _analyses.insert(AnaHandle(analysis)); _analyses[analysis->name()] = AnaHandle(analysis); return *this; } PdgIdPair AnalysisHandler::beamIds() const { return Rivet::beamIds(beams()); } double AnalysisHandler::sqrtS() const { return Rivet::sqrtS(beams()); } void AnalysisHandler::setIgnoreBeams(bool ignore) { _ignoreBeams=ignore; } + void AnalysisHandler::skipMultiWeights(bool ignore) { + _skipWeights=ignore; + } + } diff --git a/src/Core/Projection.cc b/src/Core/Projection.cc --- a/src/Core/Projection.cc +++ b/src/Core/Projection.cc @@ -1,62 +1,62 @@ // -*- C++ -*- #include "Rivet/Event.hh" #include "Rivet/Projection.hh" #include "Rivet/Tools/Logging.hh" #include "Rivet/Tools/BeamConstraint.hh" #include "Rivet/Tools/Cmp.hh" namespace Rivet { Projection::Projection() : _name("BaseProjection"), _isValid(true) { addPdgIdPair(PID::ANY, PID::ANY); } Projection:: ~Projection() = default; Projection& Projection::operator = (const Projection&) { return *this; } bool Projection::before(const Projection& p) const { + // actual ordering is irrelevant, return true if projections not equal const std::type_info& thisid = typeid(*this); const std::type_info& otherid = typeid(p); if (thisid == otherid) { const CmpState cmpst = compare(p); - // const bool cmp = (cmpst == CmpState::LT || cmpst == CmpState::NEQ || cmpst == CmpState::UNDEF); - const bool cmp = (cmpst != CmpState::GT); + const bool cmp = (cmpst != CmpState::EQ); MSG_TRACE("Comparing projections of same RTTI type: " << this << " < " << &p << " = " << cmp); return cmp; } else { 
const bool cmp = thisid.before(otherid); MSG_TRACE("Ordering projections of different RTTI type: " << this << " < " << &p << " = " << cmp); return cmp; } } const set<PdgIdPair> Projection::beamPairs() const { set<PdgIdPair> ret = _beamPairs; set<ConstProjectionPtr> projs = getProjections(); for (set<ConstProjectionPtr>::const_iterator ip = projs.begin(); ip != projs.end(); ++ip) { ConstProjectionPtr p = *ip; getLog() << Log::TRACE << "Proj addr = " << p << '\n'; if (p) ret = intersection(ret, p->beamPairs()); } return ret; } Cmp<Projection> Projection::mkNamedPCmp(const Projection& otherparent, const string& pname) const { return pcmp(*this, otherparent, pname); } Cmp<Projection> Projection::mkPCmp(const Projection& otherparent, const string& pname) const { return pcmp(*this, otherparent, pname); } } diff --git a/src/Makefile.am b/src/Makefile.am --- a/src/Makefile.am +++ b/src/Makefile.am @@ -1,25 +1,24 @@ SUBDIRS = Core Tools Projections AnalysisTools lib_LTLIBRARIES = libRivet.la libRivet_la_SOURCES = libRivet_la_LDFLAGS = -export-dynamic -avoid-version -L$(YODALIBPATH) libRivet_la_LIBADD = \ Core/libRivetCore.la \ Projections/libRivetProjections.la \ Tools/libRivetTools.la \ AnalysisTools/libRivetAnalysisTools.la \ -lYODA -ldl -lm \ -$(FASTJETCONFIGLIBADD) \ -$(GSL_LDFLAGS) +$(FASTJETCONFIGLIBADD) if ENABLE_HEPMC_3 libRivet_la_LIBADD += -lHepMC3 -lHepMC3search libRivet_la_LDFLAGS += -L$(HEPMC3LIBPATH) else libRivet_la_LIBADD += -lHepMC libRivet_la_LDFLAGS += -L$(HEPMCLIBPATH) endif diff --git a/src/Projections/ChargedFinalState.cc b/src/Projections/ChargedFinalState.cc --- a/src/Projections/ChargedFinalState.cc +++ b/src/Projections/ChargedFinalState.cc @@ -1,45 +1,45 @@ // -*- C++ -*- #include "Rivet/Projections/ChargedFinalState.hh" namespace Rivet { ChargedFinalState::ChargedFinalState(const FinalState& fsp) { setName("ChargedFinalState"); declare(fsp, "FS"); } ChargedFinalState::ChargedFinalState(const Cut& c) { setName("ChargedFinalState"); declare(FinalState(c), "FS"); } CmpState ChargedFinalState::compare(const Projection& p) const {
return mkNamedPCmp(p, "FS"); } } -namespace { +/*namespace { inline bool chargedParticleFilter(const Rivet::Particle& p) { return Rivet::PID::charge3(p.pid()) == 0; } -} +}*/ namespace Rivet { void ChargedFinalState::project(const Event& e) { const FinalState& fs = applyProjection<FinalState>(e, "FS"); // _theParticles.clear(); // std::remove_copy_if(fs.particles().begin(), fs.particles().end(), // std::back_inserter(_theParticles), // [](const Rivet::Particle& p) { return p.charge3() == 0; }); _theParticles = filter_select(fs.particles(), isCharged); MSG_DEBUG("Number of charged final-state particles = " << _theParticles.size()); if (getLog().isActive(Log::TRACE)) { for (const Particle& p : _theParticles) { MSG_TRACE("Selected: " << p.pid() << ", charge = " << p.charge()); } } } } diff --git a/src/Projections/FinalState.cc b/src/Projections/FinalState.cc --- a/src/Projections/FinalState.cc +++ b/src/Projections/FinalState.cc @@ -1,83 +1,83 @@ // -*- C++ -*- #include "Rivet/Projections/FinalState.hh" namespace Rivet { FinalState::FinalState(const Cut& c) : ParticleFinder(c) { setName("FinalState"); const bool isopen = (c == Cuts::open()); MSG_TRACE("Check for open FS conditions: " << std::boolalpha << isopen); if (!isopen) declare(FinalState(), "OpenFS"); } FinalState::FinalState(const FinalState& fsp, const Cut& c) : ParticleFinder(c) { setName("FinalState"); MSG_TRACE("Registering base FSP as 'PrevFS'"); declare(fsp, "PrevFS"); } CmpState FinalState::compare(const Projection& p) const { const FinalState& other = dynamic_cast<const FinalState&>(p); // First check if there is a PrevFS and if it matches - if (hasProjection("PrevFS") != other.hasProjection("PrevFS")) return CmpState::UNDEF; + if (hasProjection("PrevFS") != other.hasProjection("PrevFS")) return CmpState::NEQ; if (hasProjection("PrevFS")) { const PCmp prevcmp = mkPCmp(other, "PrevFS"); - if (prevcmp != CmpState::EQ) return prevcmp; + if (prevcmp != CmpState::EQ) return CmpState::NEQ; } // Then check the extra cuts const bool cutcmp
= _cuts == other._cuts; MSG_TRACE(_cuts << " VS " << other._cuts << " -> EQ == " << std::boolalpha << cutcmp); if (!cutcmp) return CmpState::NEQ; // Checks all passed: these FSes are equivalent return CmpState::EQ; } void FinalState::project(const Event& e) { _theParticles.clear(); // Handle "open FS" special case, which should not/cannot recurse if (_cuts == Cuts::OPEN) { MSG_TRACE("Open FS processing: should only see this once per event (" << e.genEvent()->event_number() << ")"); for (ConstGenParticlePtr p : HepMCUtils::particles(e.genEvent())) { if (p->status() == 1) { MSG_TRACE("FS GV = " << p->production_vertex()->position()); _theParticles.push_back(Particle(p)); } } MSG_TRACE("Number of open-FS selected particles = " << _theParticles.size()); return; } // Base the calculation on PrevFS if available, otherwise OpenFS /// @todo In general, we'd like to calculate a restrictive FS based on the most restricted superset FS. const Particles& allstable = applyProjection(e, (hasProjection("PrevFS") ? "PrevFS" : "OpenFS")).particles(); MSG_TRACE("Beginning Cuts selection"); for (const Particle& p : allstable) { const bool passed = accept(p); MSG_TRACE("Choosing: ID = " << p.pid() << ", pT = " << p.pT()/GeV << " GeV" << ", eta = " << p.eta() << ": result = " << std::boolalpha << passed); if (passed) _theParticles.push_back(p); } MSG_TRACE("Number of final-state particles = " << _theParticles.size()); } /// Decide if a particle is to be accepted or not. bool FinalState::accept(const Particle& p) const { // Not having status == 1 should never happen! 
assert(p.genParticle() == NULL || p.genParticle()->status() == 1); return _cuts->accept(p); } } diff --git a/src/Projections/LeadingParticlesFinalState.cc b/src/Projections/LeadingParticlesFinalState.cc --- a/src/Projections/LeadingParticlesFinalState.cc +++ b/src/Projections/LeadingParticlesFinalState.cc @@ -1,73 +1,72 @@ #include "Rivet/Projections/LeadingParticlesFinalState.hh" #include "Rivet/Particle.hh" namespace Rivet { CmpState LeadingParticlesFinalState::compare(const Projection& p) const { // First compare the final states we are running on CmpState fscmp = mkNamedPCmp(p, "FS"); if (fscmp != CmpState::EQ) return fscmp; // Then compare the two as final states const LeadingParticlesFinalState& other = dynamic_cast(p); fscmp = FinalState::compare(other); if (fscmp != CmpState::EQ) return fscmp; CmpState locmp = cmp(_leading_only, other._leading_only); if (locmp != CmpState::EQ) return locmp; // Finally compare the IDs - if (_ids < other._ids) return CmpState::LT; - else if (other._ids < _ids) return CmpState::GT; - return CmpState::EQ; + if (_ids == other._ids) return CmpState::EQ; + return CmpState::NEQ; } void LeadingParticlesFinalState::project(const Event & e) { _theParticles.clear(); const FinalState& fs = applyProjection(e, "FS"); const Particles& particles = fs.particles(); MSG_DEBUG("Original final state particles size " << particles.size()); map tmp; for (const Particle& p : particles) { const PdgId pid = p.pid(); // If it's a PID we're looking for, and passes the cuts if (_ids.find(pid) != _ids.end() && FinalState::accept(p.genParticle())) { // Look for an existing particle in tmp container if (tmp.find(pid) != tmp.end()) { // if a particle with this type has been already selected const Particle& p2 = *tmp.find(pid)->second; // If the new pT is higher than the previous one, then substitute... 
if (p.pT() > p2.pT()) tmp[pid] = &p; } else { // ...otherwise insert in the container tmp[pid] = &p; } } } // Loop on the tmp container and fill _theParticles for (const pair& id_p : tmp) { MSG_DEBUG("Accepting particle ID " << id_p.first << " with momentum " << id_p.second->momentum()); _theParticles.push_back(*id_p.second); } if (_leading_only) { double ptmax=0.0; Particle pmax; for (const Particle& p : _theParticles) { if (p.pT() > ptmax) { ptmax = p.pT(); pmax = p; } } _theParticles.clear(); _theParticles.push_back(pmax); } } } diff --git a/src/Projections/ParticleFinder.cc b/src/Projections/ParticleFinder.cc --- a/src/Projections/ParticleFinder.cc +++ b/src/Projections/ParticleFinder.cc @@ -1,12 +1,12 @@ // -*- C++ -*- #include "Rivet/Projections/ParticleFinder.hh" namespace Rivet { /// @todo HOW DO WE COMPARE CUTS OBJECTS? CmpState ParticleFinder::compare(const Projection& p) const { const ParticleFinder& other = dynamic_cast(p); - return _cuts == other._cuts ? CmpState::EQ : CmpState::UNDEF; + return _cuts == other._cuts ? 
CmpState::EQ : CmpState::NEQ; } } diff --git a/src/Tools/Makefile.am b/src/Tools/Makefile.am --- a/src/Tools/Makefile.am +++ b/src/Tools/Makefile.am @@ -1,34 +1,27 @@ -SUBDIRS = fjcontrib - noinst_LTLIBRARIES = libRivetTools.la libRivetTools_la_SOURCES = \ BinnedHistogram.cc \ Correlators.cc \ Cuts.cc \ JetUtils.cc \ Random.cc \ Logging.cc \ ParticleUtils.cc \ ParticleName.cc \ Percentile.cc \ RivetYODA.cc \ RivetMT2.cc \ RivetPaths.cc \ Utils.cc \ binreloc.c if ENABLE_HEPMC_3 libRivetTools_la_SOURCES += RivetHepMC_3.cc ReaderCompressedAscii.cc WriterCompressedAscii.cc else libRivetTools_la_SOURCES += RivetHepMC_2.cc endif dist_noinst_HEADERS = binreloc.h lester_mt2_bisect.hh libRivetTools_la_CPPFLAGS = $(AM_CPPFLAGS) -DENABLE_BINRELOC -DDEFAULTDATADIR=\"$(datadir)\" -DDEFAULTLIBDIR=\"$(libdir)\" - -libRivetTools_la_LIBADD = \ - fjcontrib/EnergyCorrelator/libRivetEnergyCorrelator.la \ - fjcontrib/Nsubjettiness/libRivetNsubjettiness.la \ - fjcontrib/RecursiveTools/libRivetRecursiveTools.la diff --git a/src/Tools/RivetHepMC_2.cc b/src/Tools/RivetHepMC_2.cc --- a/src/Tools/RivetHepMC_2.cc +++ b/src/Tools/RivetHepMC_2.cc @@ -1,174 +1,199 @@ // -*- C++ -*- -#include <regex> +//#include <regex> #include "Rivet/Tools/Utils.hh" #include "Rivet/Tools/RivetHepMC.hh" #include "Rivet/Tools/Logging.hh" -namespace { +/*namespace { inline std::vector<std::string> split(const std::string& input, const std::string& regex) { // passing -1 as the submatch index parameter performs splitting std::regex re(regex); std::sregex_token_iterator first{input.begin(), input.end(), re, -1}, last; return {first, last}; } -} +}*/ namespace Rivet{ const Relatives Relatives::PARENTS = HepMC::parents; const Relatives Relatives::CHILDREN = HepMC::children; const Relatives Relatives::ANCESTORS = HepMC::ancestors; const Relatives Relatives::DESCENDANTS = HepMC::descendants; namespace HepMCUtils{ ConstGenParticlePtr getParticlePtr(const RivetHepMC::GenParticle & gp) { return &gp; } std::vector<ConstGenParticlePtr> particles(ConstGenEventPtr ge){
std::vector result; for(GenEvent::particle_const_iterator pi = ge->particles_begin(); pi != ge->particles_end(); ++pi){ result.push_back(*pi); } return result; } std::vector particles(const GenEvent *ge){ std::vector result; for(GenEvent::particle_const_iterator pi = ge->particles_begin(); pi != ge->particles_end(); ++pi){ result.push_back(*pi); } return result; } std::vector vertices(ConstGenEventPtr ge){ std::vector result; for(GenEvent::vertex_const_iterator vi = ge->vertices_begin(); vi != ge->vertices_end(); ++vi){ result.push_back(*vi); } return result; } std::vector vertices(const GenEvent *ge){ std::vector result; for(GenEvent::vertex_const_iterator vi = ge->vertices_begin(); vi != ge->vertices_end(); ++vi){ result.push_back(*vi); } return result; } std::vector particles(ConstGenVertexPtr gv, const Relatives &relo){ std::vector result; /// @todo A particle_const_iterator on GenVertex would be nice... // Before HepMC 2.7.0 there were no GV::particles_const_iterators and constness consistency was all screwed up :-/ #if HEPMC_VERSION_CODE >= 2007000 for (HepMC::GenVertex::particle_iterator pi = gv->particles_begin(relo); pi != gv->particles_end(relo); ++pi) result.push_back(*pi); #else HepMC::GenVertex* gv2 = const_cast(gv); for (HepMC::GenVertex::particle_iterator pi = gv2->particles_begin(relo); pi != gv2->particles_end(relo); ++pi) result.push_back(const_cast(*pi)); #endif return result; } std::vector particles(ConstGenParticlePtr gp, const Relatives &relo){ ConstGenVertexPtr vtx; switch(relo){ case HepMC::parents: case HepMC::ancestors: vtx = gp->production_vertex(); break; case HepMC::children: case HepMC::descendants: vtx = gp->end_vertex(); break; default: throw std::runtime_error("Not implemented!"); break; } return particles(vtx, relo); } int uniqueId(ConstGenParticlePtr gp){ return gp->barcode(); } int particles_size(ConstGenEventPtr ge){ return ge->particles_size(); } int particles_size(const GenEvent *ge){ return ge->particles_size(); } std::pair 
beams(const GenEvent *ge){ return ge->beam_particles(); } std::shared_ptr makeReader(std::istream &istr, std::string *) { return make_shared(istr); } bool readEvent(std::shared_ptr io, std::shared_ptr evt){ if(io->rdstate() != 0) return false; if(!io->fill_next_event(evt.get())) return false; return true; } // These functions could be filled with code doing the same stuff as // in the HepMC3 version of this file. void strip(GenEvent &, const set<string> &) {} vector<string> weightNames(const GenEvent & ge) { /// reroute the print output to a std::stringstream and process /// The iteration is done over a map in hepmc2 so this is safe vector<string> ret; - std::ostringstream stream; + + /// Obtaining weight names using regex is probably neater, but regex + /// is not defined in GCC4.8, which is currently used by Lxplus. + /// Attempt an alternative solution based on stringstreams: + std::stringstream stream; + ge.weights().print(stream); + std::string pair; // placeholder for substring matches + while (std::getline(stream, pair, ' ')) { + pair.erase(pair.begin()); // removes the "(" on the LHS + pair.pop_back(); // removes the ")" on the RHS + if (pair.empty()) continue; + std::stringstream spair(pair); + vector<string> temp; + while (std::getline(spair, pair, ',')) { + temp.push_back(std::move(pair)); + } + if (temp.size() == 2) { + // store the default weight based on weight names + if (temp[0] == "Weight" || temp[0] == "0" || temp[0] == "Default") { + ret.push_back(""); + } + else ret.push_back(temp[0]); + } + } + /// Possible future solution based on regex + /*std::ostringstream stream; ge.weights().print(stream); // Super lame, I know string str = stream.str(); std::regex re("\\(([^()]+)\\)"); // Regex for stuff enclosed by parentheses () for (std::sregex_iterator i = std::sregex_iterator(str.begin(), str.end(), re); i != std::sregex_iterator(); ++i ) { std::smatch m = *i; vector<string> temp = ::split(m.str(), "[,]"); if (temp.size() ==2) { // store the default weight based on weight names if (temp[0] ==
"Weight" || temp[0] == "0" || temp[0] == "Default") { ret.push_back(""); } else ret.push_back(temp[0]); } - } + }*/ return ret; } double crossSection(const GenEvent & ge) { return ge.cross_section()->cross_section(); } std::valarray weights(const GenEvent & ge) { const size_t W = ge.weights().size(); std::valarray wts(W); for (unsigned int iw = 0; iw < W; ++iw) wts[iw] = ge.weights()[iw]; return wts; } } } diff --git a/src/Tools/RivetYODA.cc b/src/Tools/RivetYODA.cc --- a/src/Tools/RivetYODA.cc +++ b/src/Tools/RivetYODA.cc @@ -1,507 +1,507 @@ #include "Rivet/Config/RivetCommon.hh" #include "Rivet/Tools/RivetYODA.hh" #include "Rivet/Tools/RivetPaths.hh" #include "YODA/ReaderYODA.h" #include "YODA/ReaderAIDA.h" // use execinfo for backtrace if available #include "DummyConfig.hh" #ifdef HAVE_EXECINFO_H #include #endif #include #include using namespace std; namespace Rivet { template Wrapper::~Wrapper() {} template Wrapper::Wrapper(const vector& weightNames, const T & p) { _basePath = p.path(); for (const string& weightname : weightNames) { _persistent.push_back(make_shared(p)); _final.push_back(make_shared(p)); auto obj = _persistent.back(); auto final = _final.back(); if (weightname != "") { obj->setPath("/RAW" + obj->path() + "[" + weightname + "]"); final->setPath(final->path() + "[" + weightname + "]"); } } } template typename T::Ptr Wrapper::active() const { if ( !_active ) { #ifdef HAVE_BACKTRACE void * buffer[4]; backtrace(buffer, 4); backtrace_symbols_fd(buffer, 4 , 1); #endif assert(false && "No active pointer set. 
Was this object booked in init()?"); } return _active; } template void Wrapper::newSubEvent() { typename TupleWrapper::Ptr tmp = make_shared>(_persistent[0]->clone()); tmp->reset(); _evgroup.push_back( tmp ); _active = _evgroup.back(); assert(_active); } string getDatafilePath(const string& papername) { /// Try to find YODA otherwise fall back to try AIDA const string path1 = findAnalysisRefFile(papername + ".yoda"); if (!path1.empty()) return path1; const string path2 = findAnalysisRefFile(papername + ".aida"); if (!path2.empty()) return path2; throw Rivet::Error("Couldn't find ref data file '" + papername + ".yoda" + " in data path, '" + getRivetDataPath() + "', or '.'"); } map getRefData(const string& papername) { const string datafile = getDatafilePath(papername); // Make an appropriate data file reader and read the data objects /// @todo Remove AIDA support some day... YODA::Reader& reader = (datafile.find(".yoda") != string::npos) ? \ YODA::ReaderYODA::create() : YODA::ReaderAIDA::create(); vector aovec; reader.read(datafile, aovec); // Return value, to be populated map rtn; for ( YODA::AnalysisObject* ao : aovec ) { YODA::AnalysisObjectPtr refdata(ao); if (!refdata) continue; const string plotpath = refdata->path(); // Split path at "/" and only return the last field, i.e. the histogram ID const size_t slashpos = plotpath.rfind("/"); const string plotname = (slashpos+1 < plotpath.size()) ? 
plotpath.substr(slashpos+1) : ""; rtn[plotname] = refdata; } return rtn; } } namespace { using Rivet::Fill; using Rivet::Fills; using Rivet::TupleWrapper; template double get_window_size(const typename T::Ptr & histo, typename T::BinType x) { // the bin index we fall in const auto binidx = histo->binIndexAt(x); // gaps, overflow, underflow don't contribute if ( binidx == -1 ) return 0; const auto & b = histo->bin(binidx); // if we don't have a valid neighbouring bin, // we use infinite width typename T::Bin b1(-1.0/0.0, 1.0/0.0); // points in the top half compare to the upper neighbour if ( x > b.xMid() ) { size_t nextidx = binidx + 1; if ( nextidx < histo->bins().size() ) b1 = histo->bin(nextidx); } else { // compare to the lower neighbour int nextidx = binidx - 1; if ( nextidx >= 0 ) b1 = histo->bin(nextidx); } // the factor 2 is arbitrary, could poss. be smaller return min( b.width(), b1.width() ) / 2.0; } template typename T::BinType fillT2binT(typename T::FillType a) { return a; } template <> YODA::Profile1D::BinType fillT2binT(YODA::Profile1D::FillType a) { return get<0>(a); } template <> YODA::Profile2D::BinType fillT2binT(YODA::Profile2D::FillType a) { return YODA::Profile2D::BinType{ get<0>(a), get<1>(a) }; } template void commit(vector & persistent, const vector< vector> > & tuple, const vector> & weights ) { // TODO check if all the xs are in the same bin anyway! // Then no windowing needed assert(persistent.size() == weights[0].size()); for ( const auto & x : tuple ) { double maxwindow = 0.0; for ( const auto & xi : x ) { // TODO check for NOFILL here // persistent[0] has the same binning as all the other persistent objects double window = get_window_size(persistent[0], fillT2binT(xi.first)); if ( window > maxwindow ) maxwindow = window; } const double wsize = maxwindow; // all windows have same size set edgeset; // bin edges need to be in here! 
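Side note on the windowing logic in commit() above: the rule implemented by get_window_size is "half the smaller of this bin's width and the width of the nearer neighbouring bin, with a missing neighbour treated as infinitely wide, and zero for fills outside all bins". A minimal standalone sketch of that rule, using a plain bin-edge vector and a hypothetical windowSize helper instead of Rivet's templated YODA version:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <limits>
#include <vector>

// Standalone sketch (not Rivet's get_window_size) of the window rule:
// the window around a fill at x is half the smaller of the width of the
// bin containing x and the width of its nearer neighbour. The name
// windowSize and the raw edge-vector binning are illustrative assumptions.
double windowSize(const std::vector<double>& edges, double x) {
  int bin = -1;
  for (std::size_t i = 0; i + 1 < edges.size(); ++i) {
    if (x >= edges[i] && x < edges[i + 1]) { bin = static_cast<int>(i); break; }
  }
  if (bin < 0) return 0.0;  // underflow/overflow/gaps contribute no window
  const double lo = edges[bin], hi = edges[bin + 1];
  // A missing neighbour counts as infinitely wide, so only this
  // bin's own width limits the window in that case.
  double nbrWidth = std::numeric_limits<double>::infinity();
  if (x > 0.5 * (lo + hi)) {  // top half of the bin: upper neighbour
    if (bin + 2 < static_cast<int>(edges.size())) nbrWidth = edges[bin + 2] - hi;
  } else {                    // bottom half: lower neighbour
    if (bin - 1 >= 0) nbrWidth = lo - edges[bin - 1];
  }
  // the factor 2 is arbitrary, as the original comment notes
  return std::min(hi - lo, nbrWidth) / 2.0;
}
```

With edges {0, 1, 2, 4}, a fill at 1.75 sits in the top half of the 1-unit bin next to a 2-unit bin, so the window is min(1, 2)/2 = 0.5; a fill at 3.5 has no upper neighbour, giving min(2, inf)/2 = 1.0.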
for ( const auto & xi : x ) { edgeset.insert(fillT2binT(xi.first) - wsize); edgeset.insert(fillT2binT(xi.first) + wsize); } vector< std::tuple,double> > hfill; double sumf = 0.0; auto edgit = edgeset.begin(); double ehi = *edgit; while ( ++edgit != edgeset.end() ) { double elo = ehi; ehi = *edgit; valarray sumw(0.0, persistent.size()); // need m copies of this bool gap = true; // Check for gaps between the sub-windows. for ( size_t i = 0; i < x.size(); ++i ) { // check equals comparisons here! if ( fillT2binT(x[i].first) + wsize >= ehi && fillT2binT(x[i].first) - wsize <= elo ) { sumw += x[i].second * weights[i]; gap = false; } } if ( gap ) continue; hfill.push_back( make_tuple( (ehi + elo)/2.0, sumw, (ehi - elo) ) ); sumf += ehi - elo; } for ( auto f : hfill ) for ( size_t m = 0; m < persistent.size(); ++m ) persistent[m]->fill( get<0>(f), get<1>(f)[m], get<2>(f)/sumf ); // Note the scaling to one single fill } } template<> void commit(vector & persistent, const vector< vector> > & tuple, const vector> & weights) {} template<> void commit(vector & persistent, const vector< vector> > & tuple, const vector> & weights) {} template double distance(T a, T b) { return abs(a - b); } template <> double distance >(tuple a, tuple b) { return Rivet::sqr(get<0>(a) - get<0>(b)) + Rivet::sqr(get<1>(a) - get<1>(b)); } } namespace Rivet { bool copyao(YODA::AnalysisObjectPtr src, YODA::AnalysisObjectPtr dst) { for (const std::string& a : src->annotations()) dst->setAnnotation(a, src->annotation(a)); if ( aocopy(src,dst) ) return true; if ( aocopy(src,dst) ) return true; if ( aocopy(src,dst) ) return true; if ( aocopy(src,dst) ) return true; if ( aocopy(src,dst) ) return true; if ( aocopy(src,dst) ) return true; if ( aocopy(src,dst) ) return true; if ( aocopy(src,dst) ) return true; return false; } bool addaos(YODA::AnalysisObjectPtr dst, YODA::AnalysisObjectPtr src, double scale) { if ( aoadd(dst,src,scale) ) return true; if ( aoadd(dst,src,scale) ) return true; if ( 
aoadd(dst,src,scale) ) return true; if ( aoadd(dst,src,scale) ) return true; if ( aoadd(dst,src,scale) ) return true; return false; } } namespace { /// fills is a vector of sub-event with an ordered set of x-values of /// the fills in each sub-event. NOFILL should be an "impossible" /// value for this histogram. Returns a vector of sub-events with /// an ordered vector of fills (including NOFILLs) for each sub-event. template vector< vector > > match_fills(const vector::Ptr> & evgroup, const Fill & NOFILL) { vector< vector > > matched; // First just copy subevents into vectors and find the longest vector. unsigned int maxfill = 0; // length of biggest vector int imax = 0; // index position of biggest vector for ( const auto & it : evgroup ) { const auto & subev = it->fills(); if ( subev.size() > maxfill ) { maxfill = subev.size(); imax = matched.size(); } matched.push_back(vector >(subev.begin(), subev.end())); } // Now, go through all subevents with missing fills. const vector> & full = matched[imax]; // the longest one for ( auto & subev : matched ) { if ( subev.size() == maxfill ) continue; // Add NOFILLs to the end; while ( subev.size() < maxfill ) subev.push_back(NOFILL); // Iterate from the back and shift all fill values backwards by // swapping with NOFILLs so that they better match the full // subevent. 
for ( int i = maxfill - 1; i >= 0; --i ) { if ( subev[i] == NOFILL ) continue; size_t j = i; while ( j + 1 < maxfill && subev[j + 1] == NOFILL && distance(fillT2binT(subev[j].first), fillT2binT(full[j].first)) > distance(fillT2binT(subev[j].first), fillT2binT(full[j + 1].first)) ) { swap(subev[j], subev[j + 1]); ++j; } } } // transpose vector>> result(maxfill,vector>(matched.size())); for (size_t i = 0; i < matched.size(); ++i) for (size_t j = 0; j < maxfill; ++j) result.at(j).at(i) = matched.at(i).at(j); return result; } } namespace Rivet { template void Wrapper::pushToPersistent(const vector >& weight) { assert( _evgroup.size() == weight.size() ); // have we had subevents at all? const bool have_subevents = _evgroup.size() > 1; if ( ! have_subevents ) { // simple replay of all tuple entries // each recorded fill is inserted into all persistent weightname histos for ( size_t m = 0; m < _persistent.size(); ++m ) for ( const auto & f : _evgroup[0]->fills() ) _persistent[m]->fill( f.first, f.second * weight[0][m] ); } else { // outer index is subevent, inner index is jets in the event vector>> linedUpXs = match_fills(_evgroup, {typename T::FillType(), 0.0}); commit( _persistent, linedUpXs, weight ); } _evgroup.clear(); _active.reset(); } template void Wrapper::pushToFinal() { for ( size_t m = 0; m < _persistent.size(); ++m ) { copyao(_persistent.at(m), _final.at(m)); if ( _final[m]->path().substr(0,4) == "/RAW" ) _final[m]->setPath(_final[m]->path().substr(4)); } } template <> void Wrapper::pushToPersistent(const vector >& weight) { for ( size_t m = 0; m < _persistent.size(); ++m ) { for ( size_t n = 0; n < _evgroup.size(); ++n ) { for ( const auto & f : _evgroup[n]->fills() ) { _persistent[m]->fill( f.second * weight[n][m] ); } } } _evgroup.clear(); _active.reset(); } template <> void Wrapper::pushToPersistent(const vector >& weight) { _evgroup.clear(); _active.reset(); } template <> void Wrapper::pushToPersistent(const vector >& weight) { _evgroup.clear(); 
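The single-subevent branch of pushToPersistent() above is a plain replay: each recorded fill (value, fill-weight) is inserted into every per-weightname persistent object, scaled by that weight stream's event weight. A minimal sketch of that replay under stated assumptions (plain running sums stand in for the YODA histograms; the name replayFills is hypothetical):

```cpp
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

// Standalone sketch of the single-subevent replay in pushToPersistent():
// every recorded fill f = (x, fillWeight) goes into one accumulator per
// event-weight stream, scaled by that stream's weight. Here a double
// sum per stream replaces the per-weightname histogram fill.
std::vector<double> replayFills(const std::vector<std::pair<double, double>>& fills,
                                const std::vector<double>& eventWeights) {
  std::vector<double> sums(eventWeights.size(), 0.0);
  for (std::size_t m = 0; m < eventWeights.size(); ++m) {
    for (const auto& f : fills) {
      sums[m] += f.second * eventWeights[m];  // fill weight times event weight
    }
  }
  return sums;
}
```

The multi-subevent case cannot use this simple replay, which is why the code above first lines up the fills across subevents with match_fills before committing them.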
_active.reset(); } template <> void Wrapper::pushToPersistent(const vector >& weight) { _evgroup.clear(); _active.reset(); } // explicitly instantiate all wrappers template class Wrapper; template class Wrapper; template class Wrapper; template class Wrapper; template class Wrapper; template class Wrapper; template class Wrapper; template class Wrapper; AOPath::AOPath(string fullpath) { - // First checck if This is a global system path + // First check if this is a global system path _path = fullpath; std::regex resys("^(/RAW)?/([^\\[/]+)(\\[(.+)\\])?$"); smatch m; _valid = regex_search(fullpath, m, resys); if ( _valid ) { _raw = (m[1] == "/RAW"); _name = m[2]; _weight = m[4]; return; } // If not, assume it is a normal analysis path. std::regex repath("^(/RAW)?(/REF)?/([^/:]+)(:[^/]+)?(/TMP)?/([^\\[]+)(\\[(.+)\\])?"); _valid = regex_search(fullpath, m, repath); if ( !_valid ) return; _raw = (m[1] == "/RAW"); _ref = (m[2] == "/REF"); _analysis = m[3]; _optionstring = m[4]; _tmp = (m[5] == "/TMP"); _name = m[6]; _weight = m[8]; std::regex reopt(":([^=]+)=([^:]+)"); string s = _optionstring; while ( regex_search(s, m, reopt) ) { _options[m[1]] = m[2]; s = m.suffix(); } } void AOPath::fixOptionString() { ostringstream oss; for ( auto optval : _options ) oss << ":" << optval.first << "=" << optval.second; _optionstring = oss.str(); } string AOPath::mkPath() const { ostringstream oss; if ( isRaw() ) oss << "/RAW"; else if ( isRef() ) oss << "/REF"; if ( _analysis != "" ) oss << "/" << analysis(); for ( auto optval : _options ) oss << ":" << optval.first << "=" << optval.second; if ( isTmp() ) oss << "/TMP"; oss << "/" << name(); if ( weight() != "" ) oss << "[" << weight() << "]"; return oss.str(); } void AOPath::debug() const { cout << "Full path: " << _path << endl; if ( !_valid ) { cout << "This is not a valid analysis object path" << endl << endl; return; } cout << "Check path: " << mkPath() << endl; cout << "Analysis: " << _analysis << endl; cout << "Name: " << 
_name << endl; cout << "Weight: " << _weight << endl; cout << "Properties: "; if ( _raw ) cout << "raw "; if ( _tmp ) cout << "tmp "; if ( _ref ) cout << "ref "; cout << endl; cout << "Options: "; for ( auto opt : _options ) cout << opt.first << "->" << opt.second << " "; cout << endl << endl; } } diff --git a/src/Tools/fjcontrib/EnergyCorrelator/AUTHORS b/src/Tools/fjcontrib/EnergyCorrelator/AUTHORS deleted file mode 100644 --- a/src/Tools/fjcontrib/EnergyCorrelator/AUTHORS +++ /dev/null @@ -1,23 +0,0 @@ -The EnergyCorrelator FastJet contrib was written and is maintained and developed by: - - Andrew Larkoski - Lina Necib - Gavin Salam - Jesse Thaler - -For physics details, see: - - Energy Correlation Functions for Jet Substructure. - Andrew J. Larkoski, Gavin P. Salam, and Jesse Thaler. - JHEP 1306, 108 (2013) - arXiv:1305.0007. - - Power Counting to Better Jet Observables. - Andrew J. Larkoski, Ian Moult, and Duff Neill. - JHEP 1412, 009 (2014) - arXiv:1409.6298. - - New Angles on Energy Correlation Functions. - Ian Moult, Lina Necib, and Jesse Thaler. - arXiv:1609.07483. ----------------------------------------------------------------------- diff --git a/src/Tools/fjcontrib/EnergyCorrelator/COPYING b/src/Tools/fjcontrib/EnergyCorrelator/COPYING deleted file mode 100644 --- a/src/Tools/fjcontrib/EnergyCorrelator/COPYING +++ /dev/null @@ -1,360 +0,0 @@ -The EnergyCorrelator contrib to FastJet is released -under the terms of the GNU General Public License v2 (GPLv2). - -A copy of the GPLv2 is to be found at the end of this file. - -While the GPL license grants you considerable freedom, please bear in -mind that this code's use falls under guidelines similar to those that -are standard for Monte Carlo event generators -(http://www.montecarlonet.org/GUIDELINES). 
In particular, if you use -this code as part of work towards a scientific publication, whether -directly or contained within another program you should include a citation -to - - arXiv:1305.0007 - arXiv:1409.6298 - arXiv:1609.07483 - -====================================================================== -====================================================================== -====================================================================== - GNU GENERAL PUBLIC LICENSE - Version 2, June 1991 - - Copyright (C) 1989, 1991 Free Software Foundation, Inc. - 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - Everyone is permitted to copy and distribute verbatim copies - of this license document, but changing it is not allowed. - - Preamble - - The licenses for most software are designed to take away your -freedom to share and change it. By contrast, the GNU General Public -License is intended to guarantee your freedom to share and change free -software--to make sure the software is free for all its users. This -General Public License applies to most of the Free Software -Foundation's software and to any other program whose authors commit to -using it. (Some other Free Software Foundation software is covered by -the GNU Library General Public License instead.) You can apply it to -your programs, too. - - When we speak of free software, we are referring to freedom, not -price. Our General Public Licenses are designed to make sure that you -have the freedom to distribute copies of free software (and charge for -this service if you wish), that you receive source code or can get it -if you want it, that you can change the software or use pieces of it -in new free programs; and that you know you can do these things. - - To protect your rights, we need to make restrictions that forbid -anyone to deny you these rights or to ask you to surrender the rights. 
-These restrictions translate to certain responsibilities for you if you -distribute copies of the software, or if you modify it. - - For example, if you distribute copies of such a program, whether -gratis or for a fee, you must give the recipients all the rights that -you have. You must make sure that they, too, receive or can get the -source code. And you must show them these terms so they know their -rights. - - We protect your rights with two steps: (1) copyright the software, and -(2) offer you this license which gives you legal permission to copy, -distribute and/or modify the software. - - Also, for each author's protection and ours, we want to make certain -that everyone understands that there is no warranty for this free -software. If the software is modified by someone else and passed on, we -want its recipients to know that what they have is not the original, so -that any problems introduced by others will not reflect on the original -authors' reputations. - - Finally, any free program is threatened constantly by software -patents. We wish to avoid the danger that redistributors of a free -program will individually obtain patent licenses, in effect making the -program proprietary. To prevent this, we have made it clear that any -patent must be licensed for everyone's free use or not licensed at all. - - The precise terms and conditions for copying, distribution and -modification follow. - - GNU GENERAL PUBLIC LICENSE - TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION - - 0. This License applies to any program or other work which contains -a notice placed by the copyright holder saying it may be distributed -under the terms of this General Public License. 
The "Program", below, -refers to any such program or work, and a "work based on the Program" -means either the Program or any derivative work under copyright law: -that is to say, a work containing the Program or a portion of it, -either verbatim or with modifications and/or translated into another -language. (Hereinafter, translation is included without limitation in -the term "modification".) Each licensee is addressed as "you". - -Activities other than copying, distribution and modification are not -covered by this License; they are outside its scope. The act of -running the Program is not restricted, and the output from the Program -is covered only if its contents constitute a work based on the -Program (independent of having been made by running the Program). -Whether that is true depends on what the Program does. - - 1. You may copy and distribute verbatim copies of the Program's -source code as you receive it, in any medium, provided that you -conspicuously and appropriately publish on each copy an appropriate -copyright notice and disclaimer of warranty; keep intact all the -notices that refer to this License and to the absence of any warranty; -and give any other recipients of the Program a copy of this License -along with the Program. - -You may charge a fee for the physical act of transferring a copy, and -you may at your option offer warranty protection in exchange for a fee. - - 2. You may modify your copy or copies of the Program or any portion -of it, thus forming a work based on the Program, and copy and -distribute such modifications or work under the terms of Section 1 -above, provided that you also meet all of these conditions: - - a) You must cause the modified files to carry prominent notices - stating that you changed the files and the date of any change. 
- - b) You must cause any work that you distribute or publish, that in - whole or in part contains or is derived from the Program or any - part thereof, to be licensed as a whole at no charge to all third - parties under the terms of this License. - - c) If the modified program normally reads commands interactively - when run, you must cause it, when started running for such - interactive use in the most ordinary way, to print or display an - announcement including an appropriate copyright notice and a - notice that there is no warranty (or else, saying that you provide - a warranty) and that users may redistribute the program under - these conditions, and telling the user how to view a copy of this - License. (Exception: if the Program itself is interactive but - does not normally print such an announcement, your work based on - the Program is not required to print an announcement.) - -These requirements apply to the modified work as a whole. If -identifiable sections of that work are not derived from the Program, -and can be reasonably considered independent and separate works in -themselves, then this License, and its terms, do not apply to those -sections when you distribute them as separate works. But when you -distribute the same sections as part of a whole which is a work based -on the Program, the distribution of the whole must be on the terms of -this License, whose permissions for other licensees extend to the -entire whole, and thus to each and every part regardless of who wrote it. - -Thus, it is not the intent of this section to claim rights or contest -your rights to work written entirely by you; rather, the intent is to -exercise the right to control the distribution of derivative or -collective works based on the Program. 
- -In addition, mere aggregation of another work not based on the Program -with the Program (or with a work based on the Program) on a volume of -a storage or distribution medium does not bring the other work under -the scope of this License. - - 3. You may copy and distribute the Program (or a work based on it, -under Section 2) in object code or executable form under the terms of -Sections 1 and 2 above provided that you also do one of the following: - - a) Accompany it with the complete corresponding machine-readable - source code, which must be distributed under the terms of Sections - 1 and 2 above on a medium customarily used for software interchange; or, - - b) Accompany it with a written offer, valid for at least three - years, to give any third party, for a charge no more than your - cost of physically performing source distribution, a complete - machine-readable copy of the corresponding source code, to be - distributed under the terms of Sections 1 and 2 above on a medium - customarily used for software interchange; or, - - c) Accompany it with the information you received as to the offer - to distribute corresponding source code. (This alternative is - allowed only for noncommercial distribution and only if you - received the program in object code or executable form with such - an offer, in accord with Subsection b above.) - -The source code for a work means the preferred form of the work for -making modifications to it. For an executable work, complete source -code means all the source code for all modules it contains, plus any -associated interface definition files, plus the scripts used to -control compilation and installation of the executable. 
However, as a -special exception, the source code distributed need not include -anything that is normally distributed (in either source or binary -form) with the major components (compiler, kernel, and so on) of the -operating system on which the executable runs, unless that component -itself accompanies the executable. - -If distribution of executable or object code is made by offering -access to copy from a designated place, then offering equivalent -access to copy the source code from the same place counts as -distribution of the source code, even though third parties are not -compelled to copy the source along with the object code. - - 4. You may not copy, modify, sublicense, or distribute the Program -except as expressly provided under this License. Any attempt -otherwise to copy, modify, sublicense or distribute the Program is -void, and will automatically terminate your rights under this License. -However, parties who have received copies, or rights, from you under -this License will not have their licenses terminated so long as such -parties remain in full compliance. - - 5. You are not required to accept this License, since you have not -signed it. However, nothing else grants you permission to modify or -distribute the Program or its derivative works. These actions are -prohibited by law if you do not accept this License. Therefore, by -modifying or distributing the Program (or any work based on the -Program), you indicate your acceptance of this License to do so, and -all its terms and conditions for copying, distributing or modifying -the Program or works based on it. - - 6. Each time you redistribute the Program (or any work based on the -Program), the recipient automatically receives a license from the -original licensor to copy, distribute or modify the Program subject to -these terms and conditions. You may not impose any further -restrictions on the recipients' exercise of the rights granted herein. 
-You are not responsible for enforcing compliance by third parties to -this License. - - 7. If, as a consequence of a court judgment or allegation of patent -infringement or for any other reason (not limited to patent issues), -conditions are imposed on you (whether by court order, agreement or -otherwise) that contradict the conditions of this License, they do not -excuse you from the conditions of this License. If you cannot -distribute so as to satisfy simultaneously your obligations under this -License and any other pertinent obligations, then as a consequence you -may not distribute the Program at all. For example, if a patent -license would not permit royalty-free redistribution of the Program by -all those who receive copies directly or indirectly through you, then -the only way you could satisfy both it and this License would be to -refrain entirely from distribution of the Program. - -If any portion of this section is held invalid or unenforceable under -any particular circumstance, the balance of the section is intended to -apply and the section as a whole is intended to apply in other -circumstances. - -It is not the purpose of this section to induce you to infringe any -patents or other property right claims or to contest validity of any -such claims; this section has the sole purpose of protecting the -integrity of the free software distribution system, which is -implemented by public license practices. Many people have made -generous contributions to the wide range of software distributed -through that system in reliance on consistent application of that -system; it is up to the author/donor to decide if he or she is willing -to distribute software through any other system and a licensee cannot -impose that choice. - -This section is intended to make thoroughly clear what is believed to -be a consequence of the rest of this License. - - 8. 
If the distribution and/or use of the Program is restricted in -certain countries either by patents or by copyrighted interfaces, the -original copyright holder who places the Program under this License -may add an explicit geographical distribution limitation excluding -those countries, so that distribution is permitted only in or among -countries not thus excluded. In such case, this License incorporates -the limitation as if written in the body of this License. - - 9. The Free Software Foundation may publish revised and/or new versions -of the General Public License from time to time. Such new versions will -be similar in spirit to the present version, but may differ in detail to -address new problems or concerns. - -Each version is given a distinguishing version number. If the Program -specifies a version number of this License which applies to it and "any -later version", you have the option of following the terms and conditions -either of that version or of any later version published by the Free -Software Foundation. If the Program does not specify a version number of -this License, you may choose any version ever published by the Free Software -Foundation. - - 10. If you wish to incorporate parts of the Program into other free -programs whose distribution conditions are different, write to the author -to ask for permission. For software which is copyrighted by the Free -Software Foundation, write to the Free Software Foundation; we sometimes -make exceptions for this. Our decision will be guided by the two goals -of preserving the free status of all derivatives of our free software and -of promoting the sharing and reuse of software generally. - - NO WARRANTY - - 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY -FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. 
EXCEPT WHEN -OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES -PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED -OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF -MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS -TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE -PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, -REPAIR OR CORRECTION. - - 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING -WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR -REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, -INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING -OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED -TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY -YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER -PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE -POSSIBILITY OF SUCH DAMAGES. - - END OF TERMS AND CONDITIONS - - How to Apply These Terms to Your New Programs - - If you develop a new program, and you want it to be of the greatest -possible use to the public, the best way to achieve this is to make it -free software which everyone can redistribute and change under these terms. - - To do so, attach the following notices to the program. It is safest -to attach them to the start of each source file to most effectively -convey the exclusion of warranty; and each file should have at least -the "copyright" line and a pointer to where the full notice is found. - - <one line to give the program's name and a brief idea of what it does.> - Copyright (C) <year> <name of author> - - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2 of the License, or - (at your option) any later version. 
- - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - GNU General Public License for more details. - - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - - -Also add information on how to contact you by electronic and paper mail. - -If the program is interactive, make it output a short notice like this -when it starts in an interactive mode: - - Gnomovision version 69, Copyright (C) year name of author - Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. - This is free software, and you are welcome to redistribute it - under certain conditions; type `show c' for details. - -The hypothetical commands `show w' and `show c' should show the appropriate -parts of the General Public License. Of course, the commands you use may -be called something other than `show w' and `show c'; they could even be -mouse-clicks or menu items--whatever suits your program. - -You should also get your employer (if you work as a programmer) or your -school, if any, to sign a "copyright disclaimer" for the program, if -necessary. Here is a sample; alter the names: - - Yoyodyne, Inc., hereby disclaims all copyright interest in the program - `Gnomovision' (which makes passes at compilers) written by James Hacker. - - <signature of Ty Coon>, 1 April 1989 - Ty Coon, President of Vice - -This General Public License does not permit incorporating your program into -proprietary programs. If your program is a subroutine library, you may -consider it more useful to permit linking proprietary applications with the -library. If this is what you want to do, use the GNU Library General -Public License instead of this License. 
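The files removed in the next hunks vendored fjcontrib's EnergyCorrelator, which computes N-point energy correlation functions over jet constituents. As a minimal standalone sketch of the quantity its slow-path N = 2 loop computes — not the fjcontrib implementation itself; the `Particle` struct and `ecf2` name are illustrative stand-ins for `fastjet::PseudoJet` and `EnergyCorrelator::result()` — the 2-point correlator in the pt_R measure is:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical stand-in for fastjet::PseudoJet: transverse momentum,
// rapidity and azimuth are all the pt_R measure needs.
struct Particle { double pt, rap, phi; };

// ECF(2, beta) in the pt_R measure: sum over distinct pairs i < j of
// pt_i * pt_j * (DeltaR_ij^2)^(beta/2), mirroring the removed
// double loop with its j = i + 1 offset.
double ecf2(const std::vector<Particle>& ps, double beta) {
  const double half_beta = beta / 2.0;
  const double pi = 3.14159265358979323846;
  double answer = 0.0;
  for (std::size_t i = 0; i < ps.size(); ++i) {
    for (std::size_t j = i + 1; j < ps.size(); ++j) { // never the same pair twice
      double drap = ps[i].rap - ps[j].rap;
      double dphi = std::fabs(ps[i].phi - ps[j].phi);
      if (dphi > pi) dphi = 2.0 * pi - dphi;          // wrap azimuth into [0, pi]
      double dr2 = drap * drap + dphi * dphi;         // squared angular distance
      answer += ps[i].pt * ps[j].pt * std::pow(dr2, half_beta);
    }
  }
  return answer;
}
```

For beta = 2 and two unit-pt particles at DeltaR = 0.5 this returns 0.25, and fewer than two constituents give exactly zero, matching the `particles.size() < _N` early return in the deleted code; the real class additionally supports the E_theta and E_inv measures and N up to 5.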
diff --git a/src/Tools/fjcontrib/EnergyCorrelator/ChangeLog b/src/Tools/fjcontrib/EnergyCorrelator/ChangeLog deleted file mode 100644 --- a/src/Tools/fjcontrib/EnergyCorrelator/ChangeLog +++ /dev/null @@ -1,307 +0,0 @@ -2018-02-08 Jesse Thaler - - * NEWS - * VERSION - Making 1.3.1 - - -2018-02-08 Lina Necib - - * EnergyCorrelator.cc - Debugging memory allocation, deleting energyStore/angleStore properly this time - - -2018-02-04 Lina Necib - - * EnergyCorrelator.cc - Debugging memory allocation, deleting energyStore/angleStore - - * VERSION - Making 1.3.1-devel - -2018-01-09 Jesse Thaler - - * NEWS - * VERSION - Finalizing 1.3.0 - -2018-01-09 Lina Necib - - * NEWS - Preparing for version candidate - - * VERSION - Making 1.3.0-rc1 - -2018-01-07 Lina Necib - - * EnergyCorrelator.hh/cc - Sped up evaluations of ECFs. - Functions are defined in ECFs and called to evaluate ECFGs. - - * VERSION - Making 1.2.3-devel - - -2018-01-04 Lina Necib - - * EnergyCorrelator.hh/cc - Sped up evaluations of ECFG by a factor of 4. - Refactored some of the code in functions - - * VERSION - Making 1.2.2-devel - - * example.cc - Added some timings tests for ECFGs, N2, and N3. - - -2017-01-25 Jesse Thaler - - * EnergyCorrelator.hh/cc - Converting all _N to unsigned integers, removing _N < 0 warning - Added warning to constructors for N < 1 in some cases. - - * VERSION - Making 1.2.1-devel - - * Makefile - Added -Wextra to compiler flags - -2016-10-07 Jesse Thaler - - * AUTHORS/COPYING: - Updated the publication arXiv number to 1609.07483. - - * EnergyCorrelator.hh - Fixed typo in EnergyCorrelatorGeneralized description - - * NEWS - Added "USeries" to news. - - * VERSION - Changed version to 1.2.0 - -2016-09-27 Lina Necib - - * EnergyCorrelator.hh/README - Updated the publication arXiv number to 1609.07483. 
- -2016-09-23 Lina Necib - - * EnergyCorrelator.cc - Made the evaluation of ECFG faster by replacing sort -> partial_sort and simplified multiplication syntax - * example.cc/ref - Fixed minor typo - * VERSION - Changed version to 1.2.0-rc3 - -2016-09-23 Lina Necib - - * EnergyCorrelator.hh - Fixed minor doxygen typo - * example.cc/ref - Changed EnergyCorrelatorNormalized -> EnergyCorrelatorGeneralized in function explanations - * VERSION - Changed version to 1.2.0-rc2 - -2016-09-22 Jesse Thaler - - * EnergyCorrelator.cc/hh - Renamed EnergyCorrelatorNormalized -> EnergyCorrelatorGeneralized - Changed order of arguments for EnergyCorrelatorGeneralized - Updated doxygen documentation - * example_basic_usage.cc - Changed EnergyCorrelatorNormalized -> EnergyCorrelatorGeneralized - * README - Updated for EnergyCorrelatorGeneralized - -2016-09-15 Lina Necib - - *VERSION: - 1.2.0-rc1 - -2016-08-24 Jesse Thaler - - Minor comment fixes throughout. - - * EnergyCorrelator.cc/hh - Put in _helper_correlator when defining overall normalization - Removed angle error for result_all_angles() - *NEWS: - Made this more readable. - *README: - Clarified some of the documentation. - -2016-08-23 Lina Necib - - *VERSION: - *NEWS: - *AUTHORS: - *COPYING: - *README: - *EnergyCorrelator.cc - Added if statement that the ECF and ECFNs return exactly zero - if the number of particles is less than N of the ECF. - *EnergyCorrelator.hh - *example.cc - *example_basic_usage.cc - Updated this example. - -2016-08-21 Lina Necib - - *VERSION: - *NEWS: - *AUTHORS: - *COPYING: - *README: - *EnergyCorrelator.cc - Added Cseries. - *EnergyCorrelator.hh - Added Cseries. - *example.cc - Added example of Cseries - *example_basic_usage.cc - Simplified examples. 
- -2016-08-17 Lina Necib - - *VERSION: - *NEWS: - *AUTHORS: - Added placeholder for new reference - *COPYING: - Added placeholder for new reference - *README: - added information on different measures E_inv - *EnergyCorrelator.cc - Minor text edits + added comments - *EnergyCorrelator.hh - Minor text edits + added comments - *example_basic_usage.cc - added a simplified example file that shows the use of the - different observables. - - -2016-06-23 Lina Necib - - *VERSION: - 1.2.0-alpha0 - *NEWS: - *AUTHORS: - Lina Necib - *COPYING: - *README: - added descriptions of functions that will appear shortly - arXiv:160X.XXXXX - - *EnergyCorrelator.cc - added code to calculate normalized ECFS. - *EnergyCorrelator.hh - added code to calculate normalized ECFS, Nseries, generalized - D2, N2, N3, and M2. - - *example.cc - added calculation of normalized ECFS, Nseries, generalized - D2, N2, N3, and M2 to example file. - - -2014-11-20 Jesse Thaler - - * README: - Typos fixed - -2014-11-13 Andrew Larkoski - - *VERSION: - *NEWS: - release of version 1.1.0 - - *AUTHORS: - *COPYING: - *README: - added reference to arXiv:1409.6298. - - *EnergyCorrelator.cc - *EnergyCorrelator.hh - added code to calculate C1, C2, and D2 observables. - - *example.cc - added calculation of C1, C2, and D2 to example file. - -2013-05-01 Gavin Salam - - * VERSION: - * NEWS: - release of version 1.0.1 - - * README: - updated to reflect v1.0 interface. - -2013-05-01 Jesse Thaler - - * EnergyCorrelator.cc - Switched from NAN to std::numeric_limits<double>::quiet_NaN() - -2013-04-30 Jesse Thaler - - * EnergyCorrelator.cc - Implemented Gregory Soyez's suggestions on errors/asserts - -2013-04-30 Gavin Salam - - * VERSION: - * NEWS: - released v. 1.0.0 - - * EnergyCorrelator.hh: - * example.cc - small changes to documentation to reflect the change below + an - gave an explicit command-line in the example. 
- -2013-04-29 Jesse Thaler - - * EnergyCorrelator.cc - - Added support for N = 5 - - * example.cc|.ref - - Added N = 5 to test suite. - -2013-04-29 Gavin Salam - - * EnergyCorrelator.hh|cc: - - reduced memory usage from roughly 8n^2 to 4n^2 bytes (note that - sums are now performed in a different order, so results may - change to within rounding error). - - - Implemented description() for all three classes. - - - Worked on doxygen-style comments and moved some bits of code into - the .cc file. - - * Doxyfile: *** ADDED *** - - * example.cc: - developers' timing component now uses clock(), to get - finer-grained timing, and also outputs a description for some of - the correlators being used. - -2013-04-26 + 04-27 Jesse Thaler - - * EnergyCorrelator.hh|cc: - added temporary storage array for interparticle angles, to speed - up the energy correlator calculation for N>2 - - * example.cc - has internal option for printing out timing info. - -2013-04-26 Gavin Salam + Jesse & Andrew - - * Makefile: - added explicit dependencies - - * example.cc (analyze): - added a little bit of commented code for timing tests. - -2013-03-10 - Initial creation diff --git a/src/Tools/fjcontrib/EnergyCorrelator/EnergyCorrelator.cc b/src/Tools/fjcontrib/EnergyCorrelator/EnergyCorrelator.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/EnergyCorrelator/EnergyCorrelator.cc +++ /dev/null @@ -1,1093 +0,0 @@ -// EnergyCorrelator Package -// Questions/Comments? Email the authors: -// larkoski@mit.edu, lnecib@mit.edu, -// gavin.salam@cern.ch jthaler@jthaler.net -// -// Copyright (c) 2013-2016 -// Andrew Larkoski, Lina Necib, Gavin Salam, and Jesse Thaler -// -// $Id: EnergyCorrelator.cc 1106 2018-02-09 01:47:28Z linoush $ -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. 
-// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see <http://www.gnu.org/licenses/>. -//---------------------------------------------------------------------- - -#include "fjcontrib/EnergyCorrelator.hh" -#include <limits> -#include <climits> -using namespace std; - -namespace Rivet { -// FASTJET_BEGIN_NAMESPACE // defined in fastjet/internal/base.hh - namespace fjcontrib { - using namespace fastjet; - - - double EnergyCorrelator::result(const PseudoJet& jet) const { - - // if jet does not have constituents, throw error - if (!jet.has_constituents()) throw Error("EnergyCorrelator called on jet with no constituents."); - - // get N = 0 case out of the way - if (_N == 0) return 1.0; - - // find constituents - std::vector<PseudoJet> particles = jet.constituents(); - - // return zero if the number of constituents is less than _N - if (particles.size() < _N) return 0.0 ; - - double answer = 0.0; - - // take care of N = 1 case. - if (_N == 1) { - for (unsigned int i = 0; i < particles.size(); i++) { - answer += energy(particles[i]); - } - return answer; - } - - double half_beta = _beta/2.0; - - // take care of N = 2 case. 
- if (_N == 2) { - for (unsigned int i = 0; i < particles.size(); i++) { - for (unsigned int j = i + 1; j < particles.size(); j++) { //note offset by one so that angle is never called on identical pairs - answer += energy(particles[i]) - * energy(particles[j]) - * pow(angleSquared(particles[i],particles[j]), half_beta); - } - } - return answer; - } - - - // if N > 5, then throw error - if (_N > 5) { - throw Error("EnergyCorrelator is only hard coded for N = 0,1,2,3,4,5"); - } - - - // Now deal with N = 3,4,5. Different options if storage array is used or not. - if (_strategy == storage_array) { - - // For N > 2, fill static storage array to save computation time. - unsigned int nC = particles.size(); - // Make energy storage - double *energyStore = new double[nC]; - - // Make angular storage - double **angleStore = new double*[nC]; - - precompute_energies_and_angles(particles, energyStore, angleStore); - - // Define n_angles so it is the same function for ECFs and ECFGs - unsigned int n_angles = _N * (_N - 1) / 2; - // now do recursion - if (_N == 3) { - answer = evaluate_n3(nC, n_angles, energyStore, angleStore); - } else if (_N == 4) { - answer = evaluate_n4(nC, n_angles, energyStore, angleStore); - } else if (_N == 5) { - answer = evaluate_n5(nC, n_angles, energyStore, angleStore); - } else { - assert(_N <= 5); - } - // Deleting arrays - delete[] energyStore; - - for (unsigned int i = 0; i < particles.size(); i++) { - delete[] angleStore[i]; - } - delete[] angleStore; - - } else if (_strategy == slow) { - if (_N == 3) { - for (unsigned int i = 0; i < particles.size(); i++) { - for (unsigned int j = i + 1; j < particles.size(); j++) { - double ans_ij = energy(particles[i]) - * energy(particles[j]) - * pow(angleSquared(particles[i],particles[j]), half_beta); - for (unsigned int k = j + 1; k < particles.size(); k++) { - answer += ans_ij - * energy(particles[k]) - * pow(angleSquared(particles[i],particles[k]), half_beta) - * 
pow(angleSquared(particles[j],particles[k]), half_beta); - } - } - } - } else if (_N == 4) { - for (unsigned int i = 0; i < particles.size(); i++) { - for (unsigned int j = i + 1; j < particles.size(); j++) { - double ans_ij = energy(particles[i]) - * energy(particles[j]) - * pow(angleSquared(particles[i],particles[j]), half_beta); - for (unsigned int k = j + 1; k < particles.size(); k++) { - double ans_ijk = ans_ij - * energy(particles[k]) - * pow(angleSquared(particles[i],particles[k]), half_beta) - * pow(angleSquared(particles[j],particles[k]), half_beta); - for (unsigned int l = k + 1; l < particles.size(); l++) { - answer += ans_ijk - * energy(particles[l]) - * pow(angleSquared(particles[i],particles[l]), half_beta) - * pow(angleSquared(particles[j],particles[l]), half_beta) - * pow(angleSquared(particles[k],particles[l]), half_beta); - } - } - } - } - } else if (_N == 5) { - for (unsigned int i = 0; i < particles.size(); i++) { - for (unsigned int j = i + 1; j < particles.size(); j++) { - double ans_ij = energy(particles[i]) - * energy(particles[j]) - * pow(angleSquared(particles[i],particles[j]), half_beta); - for (unsigned int k = j + 1; k < particles.size(); k++) { - double ans_ijk = ans_ij - * energy(particles[k]) - * pow(angleSquared(particles[i],particles[k]), half_beta) - * pow(angleSquared(particles[j],particles[k]), half_beta); - for (unsigned int l = k + 1; l < particles.size(); l++) { - double ans_ijkl = ans_ijk - * energy(particles[l]) - * pow(angleSquared(particles[i],particles[l]), half_beta) - * pow(angleSquared(particles[j],particles[l]), half_beta) - * pow(angleSquared(particles[k],particles[l]), half_beta); - for (unsigned int m = l + 1; m < particles.size(); m++) { - answer += ans_ijkl - * energy(particles[m]) - * pow(angleSquared(particles[i],particles[m]), half_beta) - * pow(angleSquared(particles[j],particles[m]), half_beta) - * pow(angleSquared(particles[k],particles[m]), half_beta) - * pow(angleSquared(particles[l],particles[m]), 
half_beta); - } - } - } - } - } - } else { - assert(_N <= 5); - } - } else { - assert(_strategy == slow || _strategy == storage_array); - } - - return answer; - } - - double EnergyCorrelator::energy(const PseudoJet& jet) const { - if (_measure == pt_R) { - return jet.perp(); - } else if (_measure == E_theta || _measure == E_inv) { - return jet.e(); - } else { - assert(_measure==pt_R || _measure==E_theta || _measure==E_inv); - return std::numeric_limits<double>::quiet_NaN(); - } - } - - double EnergyCorrelator::angleSquared(const PseudoJet& jet1, const PseudoJet& jet2) const { - if (_measure == pt_R) { - return jet1.squared_distance(jet2); - } else if (_measure == E_theta) { - // doesn't seem to be a fastjet built in for this - double dot = jet1.px()*jet2.px() + jet1.py()*jet2.py() + jet1.pz()*jet2.pz(); - double norm1 = jet1.px()*jet1.px() + jet1.py()*jet1.py() + jet1.pz()*jet1.pz(); - double norm2 = jet2.px()*jet2.px() + jet2.py()*jet2.py() + jet2.pz()*jet2.pz(); - - double costheta = dot/(sqrt(norm1 * norm2)); - if (costheta > 1.0) costheta = 1.0; // Need to handle case of numerical overflow - double theta = acos(costheta); - return theta*theta; - - } else if (_measure == E_inv) { - if (jet1.E() < 0.0000001 || jet2.E() < 0.0000001) return 0.0; - else { - double dot4 = max(jet1.E()*jet2.E() - jet1.px()*jet2.px() - jet1.py()*jet2.py() - jet1.pz()*jet2.pz(),0.0); - return 2.0 * dot4 / jet1.E() / jet2.E(); - } - } else { - assert(_measure==pt_R || _measure==E_theta || _measure==E_inv); - return std::numeric_limits<double>::quiet_NaN(); - } - } - - - double EnergyCorrelator::multiply_angles(double angle_list[], int n_angles, unsigned int N_total) const { - // Compute the product of the n_angles smallest angles. - // std::partial_sort could also work, but since angle_list contains - // less than 10 elements, this way is usually faster. 
- double product = 1; - - for (int a = 0; a < n_angles; a++) { - double cur_min = angle_list[0]; - int cur_min_pos = 0; - for (unsigned int b = 1; b < N_total; b++) { - if (angle_list[b] < cur_min) { - cur_min = angle_list[b]; - cur_min_pos = b; - } - } - - // multiply it by the next smallest - product *= cur_min; - angle_list[cur_min_pos] = INT_MAX; - } - return product; - } - - void EnergyCorrelator::precompute_energies_and_angles(std::vector<PseudoJet> const &particles, double* energyStore, double** angleStore) const { - // Fill storage with energy/angle information - unsigned int nC = particles.size(); - for (unsigned int i = 0; i < nC; i++) { - angleStore[i] = new double[i]; - } - - double half_beta = _beta/2.0; - for (unsigned int i = 0; i < particles.size(); i++) { - energyStore[i] = energy(particles[i]); - for (unsigned int j = 0; j < i; j++) { - if (half_beta == 1.0){ - angleStore[i][j] = angleSquared(particles[i], particles[j]); - } else { - angleStore[i][j] = pow(angleSquared(particles[i], particles[j]), half_beta); - } - } - } - } - - double EnergyCorrelator::evaluate_n3(unsigned int nC, unsigned int n_angles, double* energyStore, double** angleStore) const { - unsigned int N_total = 3; - double angle1, angle2, angle3; - double angle; - double answer = 0; - - for (unsigned int i = 2; i < nC; i++) { - for (unsigned int j = 1; j < i; j++) { - double mult_energy_i_j = energyStore[i] * energyStore[j]; - - for (unsigned int k = 0; k < j; k++) { - angle1 = angleStore[i][j]; - angle2 = angleStore[i][k]; - angle3 = angleStore[j][k]; - - double angle_list[] = {angle1, angle2, angle3}; - - if (n_angles == N_total) { - angle = angle1 * angle2 * angle3; - } else { - angle = multiply_angles(angle_list, n_angles, N_total); - } - - answer += mult_energy_i_j - * energyStore[k] - * angle; - } - } - } - return answer; - } - - double EnergyCorrelator::evaluate_n4(unsigned int nC, unsigned int n_angles, double* energyStore, double** angleStore) const { - double answer = 0; - double 
angle1, angle2, angle3, angle4, angle5, angle6; - unsigned int N_total = 6; - double angle; - - for (unsigned int i = 3; i < nC; i++) { - for (unsigned int j = 2; j < i; j++) { - for (unsigned int k = 1; k < j; k++) { - for (unsigned int l = 0; l < k; l++) { - - angle1 = angleStore[i][j]; - angle2 = angleStore[i][k]; - angle3 = angleStore[i][l]; - angle4 = angleStore[j][k]; - angle5 = angleStore[j][l]; - angle6 = angleStore[k][l]; - - double angle_list[] = {angle1, angle2, angle3, angle4, angle5, angle6}; - - if (n_angles == N_total) { - angle = angle1 * angle2 * angle3 * angle4 * angle5 * angle6; - } else { - angle = multiply_angles(angle_list, n_angles, N_total); - } - - answer += energyStore[i] - * energyStore[j] - * energyStore[k] - * energyStore[l] - * angle; - } - } - } - } - return answer; - } - - double EnergyCorrelator::evaluate_n5(unsigned int nC, unsigned int n_angles, double* energyStore, double** angleStore) const { - - double answer = 0; - double angle1, angle2, angle3, angle4, angle5, angle6, angle7, angle8, angle9, angle10; - unsigned int N_total = 10; - double angle; - - for (unsigned int i = 4; i < nC; i++) { - for (unsigned int j = 3; j < i; j++) { - for (unsigned int k = 2; k < j; k++) { - for (unsigned int l = 1; l < k; l++) { - for (unsigned int m = 0; m < l; m++) { - - angle1 = angleStore[i][j]; - angle2 = angleStore[i][k]; - angle3 = angleStore[i][l]; - angle4 = angleStore[i][m]; - angle5 = angleStore[j][k]; - angle6 = angleStore[j][l]; - angle7 = angleStore[j][m]; - angle8 = angleStore[k][l]; - angle9 = angleStore[k][m]; - angle10 = angleStore[l][m]; - - double angle_list[] = {angle1, angle2, angle3, angle4, angle5, angle6, angle7, angle8, - angle9, angle10}; - - angle = multiply_angles(angle_list, n_angles, N_total); - - answer += energyStore[i] - * energyStore[j] - * energyStore[k] - * energyStore[l] - * energyStore[m] - * angle; - } - } - } - } - } - return answer; - } - - - - double EnergyCorrelatorGeneralized::multiply_angles(double 
angle_list[], int n_angles, unsigned int N_total) const { - - return _helper_correlator.multiply_angles(angle_list, n_angles, N_total); - } - - void EnergyCorrelatorGeneralized::precompute_energies_and_angles(std::vector<PseudoJet> const &particles, double* energyStore, double** angleStore) const { - - return _helper_correlator.precompute_energies_and_angles(particles, energyStore, angleStore); - } - - double EnergyCorrelatorGeneralized::evaluate_n3(unsigned int nC, unsigned int n_angles, double* energyStore, double** angleStore) const { - - return _helper_correlator.evaluate_n3(nC, n_angles, energyStore, angleStore); - } - - double EnergyCorrelatorGeneralized::evaluate_n4(unsigned int nC, unsigned int n_angles, double* energyStore, double** angleStore) const { - - return _helper_correlator.evaluate_n4(nC, n_angles, energyStore, angleStore); - } - - double EnergyCorrelatorGeneralized::evaluate_n5(unsigned int nC, unsigned int n_angles, double* energyStore, double** angleStore) const { - - return _helper_correlator.evaluate_n5(nC, n_angles, energyStore, angleStore); - } - - double EnergyCorrelatorGeneralized::result(const PseudoJet& jet) const { - - // if jet does not have constituents, throw error - if (!jet.has_constituents()) throw Error("EnergyCorrelator called on jet with no constituents."); - - // Throw an error if N < 0 - // Not needed if N is unsigned integer - //if (_N < 0 ) throw Error("N cannot be negative"); - // get N = 0 case out of the way - if (_N == 0) return 1.0; - - // take care of N = 1 case. 
- if (_N == 1) return 1.0; - - // find constituents - std::vector<PseudoJet> particles = jet.constituents(); - double answer = 0.0; - - // return zero if the number of constituents is less than _N for the ECFG - if (particles.size() < _N) return 0.0 ; - - // The normalization is the energy or pt of the jet, which is also ECF(1, beta) - double EJ = _helper_correlator.result(jet); - - // The overall normalization - double norm = pow(EJ, _N); - - // Find the max number of angles and throw an error if unsuitable - int N_total = int(_N*(_N-1)/2); - if (_angles > N_total) throw Error("Requested number of angles for EnergyCorrelatorGeneralized is larger than number of angles available"); - if (_angles < -1) throw Error("Negative number of angles called for EnergyCorrelatorGeneralized"); - - double half_beta = _beta/2.0; - - // take care of N = 2 case. - if (_N == 2) { - for (unsigned int i = 0; i < particles.size(); i++) { - for (unsigned int j = i + 1; j < particles.size(); j++) { //note offset by one so that angle is never called on identical pairs - answer += energy(particles[i]) - * energy(particles[j]) - * pow(angleSquared(particles[i],particles[j]), half_beta)/norm; - } - } - return answer; - } - - - // if N > 4, then throw error - if (_N > 5) { - throw Error("EnergyCorrelatorGeneralized is only hard coded for N = 0,1,2,3,4,5"); - } - - // Now deal with N = 3,4,5. Different options if storage array is used or not. - if (_strategy == EnergyCorrelator::storage_array) { - - // For N > 2, fill static storage array to save computation time. 
- - unsigned int nC = particles.size(); - // Make energy storage -// double energyStore[nC]; - double *energyStore = new double[nC]; - - // Make angular storage -// double angleStore[nC][nC]; - double **angleStore = new double*[nC]; - - precompute_energies_and_angles(particles, energyStore, angleStore); - - unsigned int n_angles = _angles; - if (_angles < 0) { - n_angles = N_total; - } - - // now do recursion - if (_N == 3) { - answer = evaluate_n3(nC, n_angles, energyStore, angleStore) / norm; - } else if (_N == 4) { - answer = evaluate_n4(nC, n_angles, energyStore, angleStore) / norm; - } else if (_N == 5) { - answer = evaluate_n5(nC, n_angles, energyStore, angleStore) / norm; - } else { - assert(_N <= 5); - } - // Deleting arrays - delete[] energyStore; - - for (unsigned int i = 0; i < particles.size(); i++) { - delete[] angleStore[i]; - } - delete[] angleStore; - } else if (_strategy == EnergyCorrelator::slow) { - if (_N == 3) { - unsigned int N_total = 3; - double angle1, angle2, angle3; - double angle; - - for (unsigned int i = 0; i < particles.size(); i++) { - for (unsigned int j = i + 1; j < particles.size(); j++) { - for (unsigned int k = j + 1; k < particles.size(); k++) { - - angle1 = angleSquared(particles[i], particles[j]); - angle2 = angleSquared(particles[i], particles[k]); - angle3 = angleSquared(particles[j], particles[k]); - - if (_angles == -1){ - angle = angle1*angle2*angle3; - } else { - double angle_list[] = {angle1, angle2, angle3}; - std::vector<double> angle_vector(angle_list, angle_list + N_total); - std::sort(angle_vector.begin(), angle_vector.begin() + N_total); - - angle = angle_vector[0]; - for ( int l = 1; l < _angles; l++) { angle = angle * angle_vector[l]; } - } - answer += energy(particles[i]) - * energy(particles[j]) - * energy(particles[k]) - * pow(angle, half_beta) /norm; - } - } - } - } else if (_N == 4) { - double angle1, angle2, angle3, angle4, angle5, angle6; - unsigned int N_total = 6; - double angle; - - for (unsigned int i 
< particles.size(); i++) { - for (unsigned int j = i + 1; j < particles.size(); j++) { - for (unsigned int k = j + 1; k < particles.size(); k++) { - for (unsigned int l = k + 1; l < particles.size(); l++) { - - angle1 = angleSquared(particles[i], particles[j]); - angle2 = angleSquared(particles[i], particles[k]); - angle3 = angleSquared(particles[i], particles[l]); - angle4 = angleSquared(particles[j], particles[k]); - angle5 = angleSquared(particles[j], particles[l]); - angle6 = angleSquared(particles[k], particles[l]); - - if(_angles == -1) { - angle = angle1*angle2*angle3*angle4*angle5*angle6; - } else { - - double angle_list[] = {angle1, angle2, angle3, angle4, angle5, angle6}; - std::vector<double> angle_vector(angle_list, angle_list + N_total); - std::sort(angle_vector.begin(), angle_vector.begin() + N_total); - - angle = angle_vector[0]; - for ( int s = 1; s < _angles; s++) { angle = angle * angle_vector[s]; } - - } - answer += energy(particles[i]) - * energy(particles[j]) - * energy(particles[k]) - * energy(particles[l]) - * pow(angle, half_beta)/norm; - } - } - } - } - } else if (_N == 5) { - double angle1, angle2, angle3, angle4, angle5, angle6, angle7, angle8, angle9, angle10; - unsigned int N_total = 10; - double angle; - - for (unsigned int i = 0; i < particles.size(); i++) { - for (unsigned int j = i + 1; j < particles.size(); j++) { - for (unsigned int k = j + 1; k < particles.size(); k++) { - for (unsigned int l = k + 1; l < particles.size(); l++) { - for (unsigned int m = l + 1; m < particles.size(); m++) { - - angle1 = angleSquared(particles[i], particles[j]); - angle2 = angleSquared(particles[i], particles[k]); - angle3 = angleSquared(particles[i], particles[l]); - angle4 = angleSquared(particles[j], particles[k]); - angle5 = angleSquared(particles[j], particles[l]); - angle6 = angleSquared(particles[k], particles[l]); - angle7 = angleSquared(particles[m], particles[i]); - angle8 = angleSquared(particles[m], particles[j]); - angle9 = 
angleSquared(particles[m], particles[k]); - angle10 = angleSquared(particles[m], particles[l]); - - if (_angles == -1){ - angle = angle1*angle2*angle3*angle4*angle5*angle6*angle7*angle8*angle9*angle10; - } else { - double angle_list[] = {angle1, angle2, angle3, angle4, angle5, angle6, - angle7, angle8, angle9, angle10}; - std::vector<double> angle_vector(angle_list, angle_list + N_total); - std::sort(angle_vector.begin(), angle_vector.begin() + N_total); - - angle = angle_vector[0]; - for ( int s = 1; s < _angles; s++) { angle = angle * angle_vector[s]; } - } - answer += energy(particles[i]) - * energy(particles[j]) - * energy(particles[k]) - * energy(particles[l]) - * energy(particles[m]) - * pow(angle, half_beta) /norm; - } - } - } - } - } - } else { - assert(_N <= 5); - } - } else { - assert(_strategy == EnergyCorrelator::slow || _strategy == EnergyCorrelator::storage_array); - } - return answer; - } - - - std::vector<double> EnergyCorrelatorGeneralized::result_all_angles(const PseudoJet& jet) const { - - // if jet does not have constituents, throw error - if (!jet.has_constituents()) throw Error("EnergyCorrelator called on jet with no constituents."); - - // Throw an error if N < 1 - if (_N < 1 ) throw Error("N cannot be negative or zero"); - - // get the N = 1 case out of the way - if (_N == 1) { - std::vector<double> ans (1, 1.0); - return ans; - } - - // find constituents - std::vector<PseudoJet> particles = jet.constituents(); - - // return zero if the number of constituents is less than _N for the ECFG - if (particles.size() < _N) { - std::vector<double> ans (_N, 0.0); - return ans; - } - - // The normalization is the energy or pt of the jet, which is also ECF(1, beta) - double EJ = _helper_correlator.result(jet); - - // The overall normalization - double norm = pow(EJ, _N); - - // Find the max number of angles and throw an error if it is unsuitable - int N_total = _N * (_N - 1)/2; - - double half_beta = _beta/2.0; - - // take care of N = 2 case. 
- if (_N == 2) { - double answer = 0.0; - for (unsigned int i = 0; i < particles.size(); i++) { - for (unsigned int j = i + 1; j < particles.size(); j++) { //note offset by one so that angle is never called on identical pairs - answer += energy(particles[i]) - * energy(particles[j]) - * pow(angleSquared(particles[i],particles[j]), half_beta)/norm; - } - } - std::vector<double> ans(N_total, answer); - return ans; - } - - // Prepare the answer vector - std::vector<double> ans (N_total, 0.0); - // if N > 5, then throw error - if (_N > 5) { - throw Error("EnergyCorrelatorGeneralized is only hard coded for N = 0,1,2,3,4,5"); - } - - // Now deal with N = 3,4,5. Different options if storage array is used or not. - if (_strategy == EnergyCorrelator::storage_array) { - - // For N > 2, fill static storage array to save computation time. - - // Make energy storage - std::vector<double> energyStore; - energyStore.resize(particles.size()); - - // Make angular storage - std::vector< std::vector<double> > angleStore; - angleStore.resize(particles.size()); - for (unsigned int i = 0; i < angleStore.size(); i++) { - angleStore[i].resize(i); - } - - // Fill storage with energy/angle information - for (unsigned int i = 0; i < particles.size(); i++) { - energyStore[i] = energy(particles[i]); - for (unsigned int j = 0; j < i; j++) { - if (half_beta == 1){ - angleStore[i][j] = angleSquared(particles[i], particles[j]); - } else { - angleStore[i][j] = pow(angleSquared(particles[i], particles[j]), half_beta); - } - } - } - - // now do recursion - if (_N == 3) { - double angle1, angle2, angle3; - - for (unsigned int i = 2; i < particles.size(); i++) { - for (unsigned int j = 1; j < i; j++) { - for (unsigned int k = 0; k < j; k++) { - - angle1 = angleStore[i][j]; - angle2 = angleStore[i][k]; - angle3 = angleStore[j][k]; - - double angle_list[] = {angle1, angle2, angle3}; - std::vector<double> angle_vector(angle_list, angle_list + N_total); - std::sort(angle_vector.begin(), angle_vector.begin() + N_total); - - std::vector<double> 
final_angles (N_total, angle_vector[0]); - - double z_product = energyStore[i] * energyStore[j] * energyStore[k]/norm; - ans[0] += z_product * final_angles[0]; - for ( int s=1 ; s < N_total ; s++) { - final_angles[s] = final_angles[s-1] * angle_vector[s]; - ans[s] += z_product * final_angles[s]; - } - } - } - } - } else if (_N == 4) { - double angle1, angle2, angle3, angle4, angle5, angle6; - - for (unsigned int i = 3; i < particles.size(); i++) { - for (unsigned int j = 2; j < i; j++) { - for (unsigned int k = 1; k < j; k++) { - for (unsigned int l = 0; l < k; l++) { - - angle1 = angleStore[i][j]; - angle2 = angleStore[i][k]; - angle3 = angleStore[i][l]; - angle4 = angleStore[j][k]; - angle5 = angleStore[j][l]; - angle6 = angleStore[k][l]; - - double angle_list[] = {angle1, angle2, angle3, angle4, angle5, angle6}; - std::vector<double> angle_vector(angle_list, angle_list + N_total); - std::sort(angle_vector.begin(), angle_vector.begin() + N_total); - - std::vector<double> final_angles (N_total, angle_vector[0]); - - double z_product = energyStore[i] * energyStore[j] * energyStore[k] * energyStore[l]/norm; - ans[0] += z_product * final_angles[0]; - for ( int s=1 ; s < N_total ; s++) { - final_angles[s] = final_angles[s-1] * angle_vector[s]; - ans[s] += z_product * final_angles[s]; - } - } - } - } - } - } else if (_N == 5) { - double angle1, angle2, angle3, angle4, angle5, angle6, angle7, angle8, angle9, angle10; - - for (unsigned int i = 4; i < particles.size(); i++) { - for (unsigned int j = 3; j < i; j++) { - for (unsigned int k = 2; k < j; k++) { - for (unsigned int l = 1; l < k; l++) { - for (unsigned int m = 0; m < l; m++) { - - angle1 = angleStore[i][j]; - angle2 = angleStore[i][k]; - angle3 = angleStore[i][l]; - angle4 = angleStore[j][k]; - angle5 = angleStore[j][l]; - angle6 = angleStore[k][l]; - angle7 = angleStore[i][m]; - angle8 = angleStore[j][m]; - angle9 = angleStore[k][m]; - angle10 = angleStore[l][m]; - - double angle_list[] = {angle1, angle2, angle3, angle4, angle5, angle6, - angle7, angle8, angle9, angle10}; - std::vector<double> angle_vector(angle_list, angle_list + N_total); - std::sort(angle_vector.begin(), angle_vector.begin() + N_total); - - std::vector<double> final_angles (N_total, angle_vector[0]); - - double z_product = energyStore[i] * energyStore[j] * energyStore[k] - * energyStore[l] * energyStore[m]/norm; - ans[0] += z_product * final_angles[0]; - for ( int s=1 ; s < N_total ; s++) { - final_angles[s] = final_angles[s-1] * angle_vector[s]; - ans[s] += z_product * final_angles[s]; - } - } - } - } - } - } - } else { - assert(_N <= 5); - } - } else if (_strategy == EnergyCorrelator::slow) { - if (_N == 3) { - double angle1, angle2, angle3; - - for (unsigned int i = 0; i < particles.size(); i++) { - for (unsigned int j = i + 1; j < particles.size(); j++) { - for (unsigned int k = j + 1; k < particles.size(); k++) { - - angle1 = pow(angleSquared(particles[i], particles[j]), half_beta); - angle2 = pow(angleSquared(particles[i], particles[k]), half_beta); - angle3 = pow(angleSquared(particles[j], particles[k]), half_beta); - - double angle_list[] = {angle1, angle2, angle3}; - std::vector<double> angle_vector(angle_list, angle_list + N_total); - std::sort(angle_vector.begin(), angle_vector.begin() + N_total); - - std::vector<double> final_angles (N_total, angle_vector[0]); - - double z_product = energy(particles[i]) - * energy(particles[j]) - * energy(particles[k])/norm; - - ans[0] += z_product * final_angles[0]; - for ( int s=1 ; s < N_total ; s++) { - final_angles[s] = final_angles[s-1] * angle_vector[s]; - ans[s] += z_product * final_angles[s]; - } - } - } - } - } else if (_N == 4) { - double angle1, angle2, angle3, angle4, angle5, angle6; - - for (unsigned int i = 0; i < particles.size(); i++) { - for (unsigned int j = i + 1; j < particles.size(); j++) { - for (unsigned int k = j + 1; k < particles.size(); k++) { - for (unsigned int l = k + 1; l < particles.size(); l++) { - - angle1 = pow(angleSquared(particles[i], particles[j]), half_beta); - angle2 = pow(angleSquared(particles[i], particles[k]), half_beta); - angle3 = pow(angleSquared(particles[i], particles[l]), half_beta); - angle4 = pow(angleSquared(particles[j], particles[k]), half_beta); - angle5 = pow(angleSquared(particles[j], particles[l]), half_beta); - angle6 = pow(angleSquared(particles[k], particles[l]), half_beta); - - double angle_list[] = {angle1, angle2, angle3, angle4, angle5, angle6}; - std::vector<double> angle_vector(angle_list, angle_list + N_total); - std::sort(angle_vector.begin(), angle_vector.begin() + N_total); - - std::vector<double> final_angles (N_total, angle_vector[0]); - - double z_product = energy(particles[i]) - * energy(particles[j]) - * energy(particles[k]) - * energy(particles[l])/norm; - ans[0] += z_product * final_angles[0]; - for ( int s=1 ; s < N_total ; s++) { - final_angles[s] = final_angles[s-1] * angle_vector[s]; - ans[s] += z_product * final_angles[s]; - } - } - } - } - } - } else if (_N == 5) { - double angle1, angle2, angle3, angle4, angle5, angle6, angle7, angle8, angle9, angle10; - - for (unsigned int i = 0; i < particles.size(); i++) { - for (unsigned int j = i + 1; j < particles.size(); j++) { - for (unsigned int k = j + 1; k < particles.size(); k++) { - for (unsigned int l = k + 1; l < particles.size(); l++) { - for (unsigned int m = l + 1; m < particles.size(); m++) { - - angle1 = pow(angleSquared(particles[i], particles[j]), half_beta); - angle2 = pow(angleSquared(particles[i], particles[k]), half_beta); - angle3 = pow(angleSquared(particles[i], particles[l]), half_beta); - angle4 = pow(angleSquared(particles[j], particles[k]), half_beta); - angle5 = pow(angleSquared(particles[j], particles[l]), half_beta); - angle6 = pow(angleSquared(particles[k], particles[l]), half_beta); - angle7 = pow(angleSquared(particles[m], particles[i]), half_beta); - angle8 = pow(angleSquared(particles[m], particles[j]), half_beta); - angle9 = pow(angleSquared(particles[m], particles[k]), half_beta); - angle10 = pow(angleSquared(particles[m], particles[l]), half_beta); - - double angle_list[] = {angle1, angle2, angle3, angle4, angle5, angle6, - angle7, angle8, angle9, angle10}; - std::vector<double> angle_vector(angle_list, angle_list + N_total); - std::sort(angle_vector.begin(), angle_vector.begin() + N_total); - - std::vector<double> final_angles (N_total, angle_vector[0]); - - double z_product = energy(particles[i]) - * energy(particles[j]) - * energy(particles[k]) - * energy(particles[l]) - * energy(particles[m])/norm; - - ans[0] += z_product * final_angles[0]; - for ( int s=1 ; s < N_total ; s++) { - final_angles[s] = final_angles[s-1] * angle_vector[s]; - ans[s] += z_product * final_angles[s]; - } - } - } - } - } - } - } else { - assert(_N <= 5); - } - } else { - assert(_strategy == EnergyCorrelator::slow || _strategy == EnergyCorrelator::storage_array); - } - return ans; - } - -} // namespace contrib - -FASTJET_END_NAMESPACE 
-//---------------------------------------------------------------------- - -#include <fastjet/internal/base.hh> -#include "fastjet/FunctionOfPseudoJet.hh" - -#include <string> -#include <vector> - -FASTJET_BEGIN_NAMESPACE // defined in fastjet/internal/base.hh - -namespace contrib{ - -/// \mainpage EnergyCorrelator contrib -/// -/// The EnergyCorrelator contrib provides an implementation of energy -/// correlators and their ratios as described in arXiv:1305.0007 by -/// Larkoski, Salam and Thaler. Additionally, the ratio observable -/// D2 described in arXiv:1409.6298 by Larkoski, Moult and Neill -/// is also included in this contrib. Finally, a generalized version of -/// the energy correlation functions is added, defined in -/// arXiv:1609.07483 by Moult, Necib and Thaler, which allows the -/// definition of the M series, N series, and U series observables. -/// There is also a generalized version of D2. -/// -/// -///

There are 4 main classes: -/// -/// - EnergyCorrelator -/// - EnergyCorrelatorRatio -/// - EnergyCorrelatorDoubleRatio -/// - EnergyCorrelatorGeneralized -/// -///

There are five classes that define useful combinations of the ECFs. -/// -/// - EnergyCorrelatorNseries -/// - EnergyCorrelatorMseries -/// - EnergyCorrelatorUseries -/// - EnergyCorrelatorD2 -/// - EnergyCorrelatorGeneralizedD2 -/// -///

There are also aliases for easier access: -/// - EnergyCorrelatorCseries (same as EnergyCorrelatorDoubleRatio) -/// - EnergyCorrelatorC1 (EnergyCorrelatorCseries with i=1) -/// - EnergyCorrelatorC2 (EnergyCorrelatorCseries with i=2) -/// - EnergyCorrelatorN2 (EnergyCorrelatorNseries with i=2) -/// - EnergyCorrelatorN3 (EnergyCorrelatorNseries with i=3) -/// - EnergyCorrelatorM2 (EnergyCorrelatorMseries with i=2) -/// - EnergyCorrelatorU1 (EnergyCorrelatorUseries with i=1) -/// - EnergyCorrelatorU2 (EnergyCorrelatorUseries with i=2) -/// - EnergyCorrelatorU3 (EnergyCorrelatorUseries with i=3) -/// -/// Each of these classes is a FastJet FunctionOfPseudoJet. -/// EnergyCorrelatorDoubleRatio (which is equivalent to EnergyCorrelatorCseries) -/// is in particular useful for quark/gluon discrimination and boosted -/// object tagging. -/// -/// Using the original 2- and 3-point correlators, EnergyCorrelatorD2 has -/// been shown to be the optimal combination for boosted 2-prong tagging. -/// -/// The EnergyCorrelatorNseries and EnergyCorrelatorMseries use -/// generalized correlation functions with different angular scaling, -/// and are intended for use on 2-prong and 3-prong jets. -/// The EnergyCorrelatorUseries is useful for quark/gluon discrimination. -/// -/// See the file example.cc for an illustration of usage and -/// example_basic_usage.cc for the most commonly used functions. - -//------------------------------------------------------------------------ -/// \class EnergyCorrelator -/// ECF(N,beta) is the N-point energy correlation function, with an angular exponent beta. -/// -/// It is defined as follows -/// -/// - \f$ \mathrm{ECF}(1,\beta) = \sum_i E_i \f$ -/// - \f$ \mathrm{ECF}(2,\beta) = \sum_{i<j} E_i E_j \theta_{ij}^{\beta} \f$ -/// - \f$ \mathrm{ECF}(3,\beta) = \sum_{i<j<k} E_i E_j E_k \theta_{ij}^{\beta} \theta_{ik}^{\beta} \theta_{jk}^{\beta} \f$ -/// -/// and analogously for higher N, with one angular factor \f$\theta^{\beta}\f$ for each pair of particles in the sum. -/// - class EnergyCorrelator : public FunctionOfPseudoJet<double> { - friend class EnergyCorrelatorGeneralized; ///< This allows ECFG to access the energy and angle definitions - ///< of this class, which are otherwise private. 
- public: - - enum Measure { - pt_R, ///< use transverse momenta and boost-invariant angles, - ///< eg \f$\mathrm{ECF}(2,\beta) = \sum_{i<j} p_{ti}\, p_{tj}\, \Delta R_{ij}^{\beta} \f$ - - E_theta, ///< use energies and angles, - ///< eg \f$\mathrm{ECF}(2,\beta) = \sum_{i<j} E_i E_j \theta_{ij}^{\beta} \f$ - - E_inv ///< use energies and invariant mass, - ///< eg \f$\mathrm{ECF}(2,\beta) = \sum_{i<j} E_i E_j \left(\frac{2 p_i \cdot p_j}{E_i E_j}\right)^{\beta/2} \f$ - }; - - enum Strategy { - slow, ///< the interparticle angles are not cached. - ///< For N>=3 this leads to many expensive recomputations, - ///< but has only O(n) memory usage for n particles - - storage_array /// the interparticle angles are cached. This gives a significant speed - /// improvement for N>=3, but has a memory requirement of (4n^2) bytes. - }; - - public: - - /// constructs an N-point correlator with angular exponent beta, - /// using the specified choice of energy and angular measure as well - /// one of two possible underlying computational Strategy - EnergyCorrelator(unsigned int N, - double beta, - Measure measure = pt_R, - Strategy strategy = storage_array) : - _N(N), _beta(beta), _measure(measure), _strategy(strategy) {}; - - /// destructor - virtual ~EnergyCorrelator(){} - - /// returns the value of the energy correlator for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - /// returns the part of the description related to the parameters - std::string description_parameters() const; - std::string description_no_N() const; - - private: - - unsigned int _N; - double _beta; - Measure _measure; - Strategy _strategy; - - double energy(const PseudoJet& jet) const; - double angleSquared(const PseudoJet& jet1, const PseudoJet& jet2) const; - double multiply_angles(double angles[], int n_angles, unsigned int N_total) const; - void precompute_energies_and_angles(std::vector<PseudoJet> const &particles, double* energyStore, double** angleStore) const; - double evaluate_n3(unsigned int nC, unsigned int n_angles, double* energyStore, double** angleStore) const; - double evaluate_n4(unsigned int nC, unsigned int n_angles, double* energyStore, double** angleStore) const; - double evaluate_n5(unsigned int nC, unsigned int n_angles, double* energyStore, double** angleStore) const; - }; - -// core 
EnergyCorrelator::result code in .cc file. - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorRatio -/// A class to calculate the ratio of (N+1)-point to N-point energy correlators, -/// ECF(N+1,beta)/ECF(N,beta), -/// called \f$ r_N^{(\beta)} \f$ in the publication. - class EnergyCorrelatorRatio : public FunctionOfPseudoJet<double> { - - public: - - /// constructs an (N+1)-point to N-point correlator ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well one of two possible underlying - /// computational strategies - EnergyCorrelatorRatio(unsigned int N, - double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _N(N), _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorRatio() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - unsigned int _N; - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - inline double EnergyCorrelatorRatio::result(const PseudoJet& jet) const { - - double numerator = EnergyCorrelator(_N + 1, _beta, _measure, _strategy).result(jet); - double denominator = EnergyCorrelator(_N, _beta, _measure, _strategy).result(jet); - - return numerator/denominator; - - } - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorDoubleRatio -/// Calculates the double ratio of energy correlators, ECF(N-1,beta)*ECF(N+1,beta)/ECF(N,beta)^2. 
-/// -/// A class to calculate a double ratio of energy correlators, -/// ECF(N-1,beta)*ECF(N+1,beta)/ECF(N,beta)^2, -/// called \f$C_N^{(\beta)}\f$ in the publication, and equal to -/// \f$ r_N^{(\beta)}/r_{N-1}^{(\beta)} \f$. -/// - - class EnergyCorrelatorDoubleRatio : public FunctionOfPseudoJet<double> { - - public: - - EnergyCorrelatorDoubleRatio(unsigned int N, - double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _N(N), _beta(beta), _measure(measure), _strategy(strategy) { - - if (_N < 1) throw Error("EnergyCorrelatorDoubleRatio: N must be 1 or greater."); - - }; - - virtual ~EnergyCorrelatorDoubleRatio() {} - - - /// returns the value of the energy correlator double-ratio for a - /// jet's constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - unsigned int _N; - double _beta; - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorDoubleRatio::result(const PseudoJet& jet) const { - - double numerator = EnergyCorrelator(_N - 1, _beta, _measure, _strategy).result(jet) * EnergyCorrelator(_N + 1, _beta, _measure, _strategy).result(jet); - double denominator = pow(EnergyCorrelator(_N, _beta, _measure, _strategy).result(jet), 2.0); - - return numerator/denominator; - - } - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorC1 -/// A class to calculate the normalized 2-point energy correlators, -/// ECF(2,beta)/ECF(1,beta)^2, -/// called \f$ C_1^{(\beta)} \f$ in the publication. 
- class EnergyCorrelatorC1 : public FunctionOfPseudoJet<double> { - - public: - - /// constructs a 2-point correlator ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well one of two possible underlying - /// computational strategies - EnergyCorrelatorC1(double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorC1() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorC1::result(const PseudoJet& jet) const { - - double numerator = EnergyCorrelator(2, _beta, _measure, _strategy).result(jet); - double denominator = EnergyCorrelator(1, _beta, _measure, _strategy).result(jet); - - return numerator/denominator/denominator; - - } - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorC2 -/// A class to calculate the double ratio of 3-point to 2-point -/// energy correlators, -/// ECF(3,beta)*ECF(1,beta)/ECF(2,beta)^2, -/// called \f$ C_2^{(\beta)} \f$ in the publication. 
- class EnergyCorrelatorC2 : public FunctionOfPseudoJet<double> { - - public: - - /// constructs a 3-point to 2-point correlator double ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well one of two possible underlying - /// computational strategies - EnergyCorrelatorC2(double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorC2() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorC2::result(const PseudoJet& jet) const { - - double numerator3 = EnergyCorrelator(3, _beta, _measure, _strategy).result(jet); - double numerator1 = EnergyCorrelator(1, _beta, _measure, _strategy).result(jet); - double denominator = EnergyCorrelator(2, _beta, _measure, _strategy).result(jet); - - return numerator3*numerator1/denominator/denominator; - - } - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorD2 -/// A class to calculate the observable formed from the ratio of the -/// 3-point and 2-point energy correlators, -/// ECF(3,beta)*ECF(1,beta)^3/ECF(2,beta)^3, -/// called \f$ D_2^{(\beta)} \f$ in the publication. 
- class EnergyCorrelatorD2 : public FunctionOfPseudoJet<double> { - - public: - - /// constructs an 3-point to 2-point correlator ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well one of two possible underlying - /// computational strategies - EnergyCorrelatorD2(double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorD2() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorD2::result(const PseudoJet& jet) const { - - double numerator3 = EnergyCorrelator(3, _beta, _measure, _strategy).result(jet); - double numerator1 = EnergyCorrelator(1, _beta, _measure, _strategy).result(jet); - double denominator2 = EnergyCorrelator(2, _beta, _measure, _strategy).result(jet); - - return numerator3*numerator1*numerator1*numerator1/denominator2/denominator2/denominator2; - - } - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorGeneralized -/// A generalized and normalized version of the N-point energy correlators, with -/// angular exponent beta and v number of pairwise angles. 
When \f$v = {N \choose 2}\f$ -/// (or, for convenience, \f$v = -1\f$), EnergyCorrelatorGeneralized just gives normalized -/// versions of EnergyCorrelator: -/// - \f$ \mathrm{ECFG}(-1,N,\beta) = \mathrm{ECFN}(N,\beta) = \mathrm{ECF}(N,\beta)/\mathrm{ECF}(1,\beta)^N\f$ -/// -/// Note that there is no separate class that implements ECFN, though it is a -/// notation that we will use in this documentation. Examples of the low-point normalized -/// correlators are: -/// - \f$\mathrm{ECFN}(1,\beta) = 1\f$ -/// - \f$\mathrm{ECFN}(2,\beta) = \sum_{i<j} z_i z_j \theta_{ij}^{\beta} \f$ -/// - \f$\mathrm{ECFN}(3,\beta) = \sum_{i<j<k} z_i z_j z_k \theta_{ij}^{\beta} \theta_{ik}^{\beta} \theta_{jk}^{\beta} \f$ -/// -/// where \f$ z_i \f$ is the energy (or transverse momentum) fraction of particle i. -/// - class EnergyCorrelatorGeneralized : public FunctionOfPseudoJet<double> { - public: - - /// constructs an N-point correlator with v_angles pairwise angles - /// and angular exponent beta, - /// using the specified choice of energy and angular measure as well - /// one of two possible underlying computational Strategy - EnergyCorrelatorGeneralized(int v_angles, - unsigned int N, - double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _angles(v_angles), _N(N), _beta(beta), _measure(measure), _strategy(strategy), _helper_correlator(1,_beta, _measure, _strategy) {}; - - /// destructor - virtual ~EnergyCorrelatorGeneralized(){} - - /// returns the value of the normalized energy correlator for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). 
- - double result(const PseudoJet& jet) const; - std::vector<double> result_all_angles(const PseudoJet& jet) const; - - private: - - int _angles; - unsigned int _N; - double _beta; - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - EnergyCorrelator _helper_correlator; - - double energy(const PseudoJet& jet) const; - double angleSquared(const PseudoJet& jet1, const PseudoJet& jet2) const; - double multiply_angles(double angles[], int n_angles, unsigned int N_total) const; - void precompute_energies_and_angles(std::vector<PseudoJet> const &particles, double* energyStore, double** angleStore) const; - double evaluate_n3(unsigned int nC, unsigned int n_angles, double* energyStore, double** angleStore) const; - double evaluate_n4(unsigned int nC, unsigned int n_angles, double* energyStore, double** angleStore) const; - double evaluate_n5(unsigned int nC, unsigned int n_angles, double* energyStore, double** angleStore) const; - }; - - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorGeneralizedD2 -/// A class to calculate the observable formed from the ratio of the -/// 3-point and 2-point energy correlators, -/// ECFN(3,alpha)/ECFN(2,beta)^(3 alpha/beta), -/// called \f$ D_2^{(\alpha, \beta)} \f$ in the publication. 
- class EnergyCorrelatorGeneralizedD2 : public FunctionOfPseudoJet<double> { - - public: - - /// constructs an 3-point to 2-point correlator ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well one of two possible underlying - /// computational strategies - EnergyCorrelatorGeneralizedD2( - double alpha, - double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _alpha(alpha), _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorGeneralizedD2() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _alpha; - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorGeneralizedD2::result(const PseudoJet& jet) const { - - double numerator = EnergyCorrelatorGeneralized(-1, 3, _alpha, _measure, _strategy).result(jet); - double denominator = EnergyCorrelatorGeneralized(-1, 2, _beta, _measure, _strategy).result(jet); - - return numerator/pow(denominator, 3.0*_alpha/_beta); - - } - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorNseries -/// A class to calculate the observable formed from the ratio of the -/// (n+1)-point and n-point generalized energy correlators, -/// N_n = ECFG(2,n+1,beta)/ECFG(1,n,beta)^2, -/// called \f$ N_i^{(\beta)} \f$ in the publication. -/// By definition, N_1^{beta} = ECFG(1, 2, 2*beta), where the angular exponent -/// is twice as big since the N series should involve two pairwise angles. 
- class EnergyCorrelatorNseries : public FunctionOfPseudoJet<double> { - - public: - - /// constructs a n 3-point to 2-point correlator ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well one of two possible underlying - /// computational strategies - EnergyCorrelatorNseries( - unsigned int n, - double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _n(n), _beta(beta), _measure(measure), _strategy(strategy) { - - if (_n < 1) throw Error("EnergyCorrelatorNseries: n must be 1 or greater."); - - }; - - virtual ~EnergyCorrelatorNseries() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - unsigned int _n; - double _beta; - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - }; - - - inline double EnergyCorrelatorNseries::result(const PseudoJet& jet) const { - - if (_n == 1) return EnergyCorrelatorGeneralized(1, 2, 2*_beta, _measure, _strategy).result(jet); - // By definition, N1 = ECFN(2, 2 beta) - double numerator = EnergyCorrelatorGeneralized(2, _n + 1, _beta, _measure, _strategy).result(jet); - double denominator = EnergyCorrelatorGeneralized(1, _n, _beta, _measure, _strategy).result(jet); - - return numerator/denominator/denominator; - - } - - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorN2 -/// A class to calculate the observable formed from the ratio of the -/// 3-point and 2-point energy correlators, -/// ECFG(2,3,beta)/ECFG(1,2,beta)^2, -/// called \f$ N_2^{(\beta)} \f$ in the publication. 
- class EnergyCorrelatorN2 : public FunctionOfPseudoJet<double> { - - public: - - /// constructs an 3-point to 2-point correlator ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well one of two possible underlying - /// computational strategies - EnergyCorrelatorN2(double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorN2() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorN2::result(const PseudoJet& jet) const { - - double numerator = EnergyCorrelatorGeneralized(2, 3, _beta, _measure, _strategy).result(jet); - double denominator = EnergyCorrelatorGeneralized(1, 2, _beta, _measure, _strategy).result(jet); - - return numerator/denominator/denominator; - - } - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorN3 -/// A class to calculate the observable formed from the ratio of the -/// 4-point and 3-point energy correlators, -/// ECFG(2,4,beta)/ECFG(1,3,beta)^2, -/// called \f$ N_3^{(\beta)} \f$ in the publication. 
- class EnergyCorrelatorN3 : public FunctionOfPseudoJet<double> { - - public: - - /// constructs an 3-point to 2-point correlator ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well one of two possible underlying - /// computational strategies - EnergyCorrelatorN3(double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorN3() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorN3::result(const PseudoJet& jet) const { - - double numerator = EnergyCorrelatorGeneralized(2, 4, _beta, _measure, _strategy).result(jet); - double denominator = EnergyCorrelatorGeneralized(1, 3, _beta, _measure, _strategy).result(jet); - - return numerator/denominator/denominator; - - } - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorMseries -/// A class to calculate the observable formed from the ratio of the -/// (n+1)-point and n-point generalized energy correlators, -/// M_n = ECFG(1,n+1,beta)/ECFG(1,n,beta), -/// called \f$ M_i^{(\beta)} \f$ in the publication. 
-/// By definition, M_1^{beta} = ECFG(1,2,beta) - class EnergyCorrelatorMseries : public FunctionOfPseudoJet<double> { - - public: - - /// constructs a n 3-point to 2-point correlator ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well one of two possible underlying - /// computational strategies - EnergyCorrelatorMseries( - unsigned int n, - double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _n(n), _beta(beta), _measure(measure), _strategy(strategy) { - - if (_n < 1) throw Error("EnergyCorrelatorMseries: n must be 1 or greater."); - - }; - - virtual ~EnergyCorrelatorMseries() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - unsigned int _n; - double _beta; - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - }; - - - inline double EnergyCorrelatorMseries::result(const PseudoJet& jet) const { - - if (_n == 1) return EnergyCorrelatorGeneralized(1, 2, _beta, _measure, _strategy).result(jet); - - double numerator = EnergyCorrelatorGeneralized(1, _n + 1, _beta, _measure, _strategy).result(jet); - double denominator = EnergyCorrelatorGeneralized(1, _n, _beta, _measure, _strategy).result(jet); - - return numerator/denominator; - - } - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorM2 -/// A class to calculate the observable formed from the ratio of the -/// 3-point and 2-point energy correlators, -/// ECFG(1,3,beta)/ECFG(1,2,beta), -/// called \f$ M_2^{(\beta)} \f$ in the publication. 
- class EnergyCorrelatorM2 : public FunctionOfPseudoJet<double> { - - public: - - /// constructs a 3-point to 2-point correlator ratio with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well as one of two possible underlying - /// computational strategies - EnergyCorrelatorM2(double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorM2() {} - - /// returns the value of the energy correlator ratio for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorM2::result(const PseudoJet& jet) const { - - double numerator = EnergyCorrelatorGeneralized(1, 3, _beta, _measure, _strategy).result(jet); - double denominator = EnergyCorrelatorGeneralized(1, 2, _beta, _measure, _strategy).result(jet); - - return numerator/denominator; - - } - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorCseries -/// Calculates the C series energy correlators, ECFN(N-1,beta)*ECFN(N+1,beta)/ECFN(N,beta)^2. -/// This is equivalent to EnergyCorrelatorDoubleRatio -/// -/// A class to calculate a double ratio of energy correlators, -/// ECFN(N-1,beta)*ECFN(N+1,beta)/ECFN(N,beta)^2, -/// called \f$C_N^{(\beta)}\f$ in the publication, and equal to -/// \f$ r_N^{(\beta)}/r_{N-1}^{(\beta)} \f$.
-/// - - class EnergyCorrelatorCseries : public FunctionOfPseudoJet<double> { - - public: - - EnergyCorrelatorCseries(unsigned int N, - double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _N(N), _beta(beta), _measure(measure), _strategy(strategy) { - - if (_N < 1) throw Error("EnergyCorrelatorCseries: N must be 1 or greater."); - - }; - - virtual ~EnergyCorrelatorCseries() {} - - - /// returns the value of the energy correlator double-ratio for a - /// jet's constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - unsigned int _N; - double _beta; - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorCseries::result(const PseudoJet& jet) const { - - double numerator = EnergyCorrelatorGeneralized(-1, _N - 1, _beta, _measure, _strategy).result(jet) * EnergyCorrelatorGeneralized(-1, _N + 1, _beta, _measure, _strategy).result(jet); - double denominator = pow(EnergyCorrelatorGeneralized(-1, _N, _beta, _measure, _strategy).result(jet), 2.0); - - return numerator/denominator; - - } - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorUseries -/// A class to calculate the observable used for quark versus gluon discrimination -/// U_n = ECFG(1,n+1,beta), -/// called \f$ U_i^{(\beta)} \f$ in the publication.
- - class EnergyCorrelatorUseries : public FunctionOfPseudoJet<double> { - - public: - - /// constructs an (n+1)-point correlator with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well as one of two possible underlying - /// computational strategies - EnergyCorrelatorUseries( - unsigned int n, - double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _n(n), _beta(beta), _measure(measure), _strategy(strategy) { - - if (_n < 1) throw Error("EnergyCorrelatorUseries: n must be 1 or greater."); - - }; - - virtual ~EnergyCorrelatorUseries() {} - - /// returns the value of the energy correlator for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - unsigned int _n; - double _beta; - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - }; - - - inline double EnergyCorrelatorUseries::result(const PseudoJet& jet) const { - - double answer = EnergyCorrelatorGeneralized(1, _n + 1, _beta, _measure, _strategy).result(jet); - return answer; - - } - - -//------------------------------------------------------------------------ -/// \class EnergyCorrelatorU1 -/// A class to calculate the observable formed from -/// ECFG(1,2,beta), -/// called \f$ U_1^{(\beta)} \f$ in the publication.
- class EnergyCorrelatorU1 : public FunctionOfPseudoJet<double> { - - public: - - /// constructs a 2-point correlator with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well as one of two possible underlying - /// computational strategies - EnergyCorrelatorU1(double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorU1() {} - - /// returns the value of the energy correlator for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorU1::result(const PseudoJet& jet) const { - - double answer = EnergyCorrelatorGeneralized(1, 2, _beta, _measure, _strategy).result(jet); - - return answer; - - } - - - //------------------------------------------------------------------------ - /// \class EnergyCorrelatorU2 - /// A class to calculate the observable formed from - /// ECFG(1,3,beta), - /// called \f$ U_2^{(\beta)} \f$ in the publication. - class EnergyCorrelatorU2 : public FunctionOfPseudoJet<double> { - - public: - - /// constructs a 3-point correlator with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well as one of two possible underlying - /// computational strategies - EnergyCorrelatorU2(double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorU2() {} - - /// returns the value of the energy correlator for a jet's - /// constituents.
(Normally accessed by the parent class's - /// operator()). - double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorU2::result(const PseudoJet& jet) const { - - double answer = EnergyCorrelatorGeneralized(1, 3, _beta, _measure, _strategy).result(jet); - - return answer; - - } - - - //------------------------------------------------------------------------ - /// \class EnergyCorrelatorU3 - /// A class to calculate the observable formed from - /// ECFG(1,4,beta), - /// called \f$ U_3^{(\beta)} \f$ in the publication. - class EnergyCorrelatorU3 : public FunctionOfPseudoJet<double> { - - public: - - /// constructs a 4-point correlator with - /// angular exponent beta, using the specified choice of energy and - /// angular measure as well as one of two possible underlying - /// computational strategies - EnergyCorrelatorU3(double beta, - EnergyCorrelator::Measure measure = EnergyCorrelator::pt_R, - EnergyCorrelator::Strategy strategy = EnergyCorrelator::storage_array) - : _beta(beta), _measure(measure), _strategy(strategy) {}; - - virtual ~EnergyCorrelatorU3() {} - - /// returns the value of the energy correlator for a jet's - /// constituents. (Normally accessed by the parent class's - /// operator()).
- double result(const PseudoJet& jet) const; - - std::string description() const; - - private: - - double _beta; - - EnergyCorrelator::Measure _measure; - EnergyCorrelator::Strategy _strategy; - - - }; - - - inline double EnergyCorrelatorU3::result(const PseudoJet& jet) const { - - double answer = EnergyCorrelatorGeneralized(1, 4, _beta, _measure, _strategy).result(jet); - - return answer; - - } - - - -} // namespace contrib - -FASTJET_END_NAMESPACE - -#endif // __FASTJET_CONTRIB_ENERGYCORRELATOR_HH__ diff --git a/src/Tools/fjcontrib/EnergyCorrelator/Makefile.am b/src/Tools/fjcontrib/EnergyCorrelator/Makefile.am deleted file mode 100644 --- a/src/Tools/fjcontrib/EnergyCorrelator/Makefile.am +++ /dev/null @@ -1,3 +0,0 @@ -noinst_LTLIBRARIES = libRivetEnergyCorrelator.la -libRivetEnergyCorrelator_la_SOURCES = EnergyCorrelator.cc -libRivetEnergyCorrelator_la_CPPFLAGS = $(AM_CPPFLAGS) -I${top_srcdir}/include/Rivet/Tools diff --git a/src/Tools/fjcontrib/EnergyCorrelator/NEWS b/src/Tools/fjcontrib/EnergyCorrelator/NEWS deleted file mode 100644 --- a/src/Tools/fjcontrib/EnergyCorrelator/NEWS +++ /dev/null @@ -1,14 +0,0 @@ -2018-02-08: Release of version 1.3.1: - Fixing memory leak, functionality unchanged. -2018-01-09: Release of version 1.3.0: - Speed up of core ECF code, functionality unchanged. -2016-10-07: Release of version 1.2.0: - Incorporates the generalized energy correlation functions - and adds the NSeries, MSeries, USeries, and GeneralizedD2 - observables. 
-2014-11-13: Release of version 1.1.0: - Added the C1, C2, and D2 observables -2013-05-01: Release of version 1.0.1: - Improved error reporting -2013-04-30: Release of version 1.0.0: - Initial release diff --git a/src/Tools/fjcontrib/EnergyCorrelator/README b/src/Tools/fjcontrib/EnergyCorrelator/README deleted file mode 100644 --- a/src/Tools/fjcontrib/EnergyCorrelator/README +++ /dev/null @@ -1,190 +0,0 @@ -The EnergyCorrelator package is based on the physics described in: - - Energy Correlation Functions for Jet Substructure. - Andrew J. Larkoski, Gavin Salam, and Jesse Thaler. - JHEP 1306, 108 (2013) - arXiv:1305.0007. - -Additional information and a new observable formed from the -energy correlation functions was described in - - Power Counting to Better Jet Observables. - Andrew J. Larkoski, Ian Moult, and Duff Neill. - JHEP 1412, 009 (2014) - arXiv:1409.6298. - -Additional observables based on generalizations of the energy -correlation functions are described in - - New Angles on Energy Correlation Functions. - Ian Moult, Lina Necib, and Jesse Thaler. - arXiv:1609.07483. - -This FastJet-contrib package contains a number of classes derived from -FunctionOfPseudoJet. - ----------------------------------------------------------------------------- - -The core classes from 1305.0007, and defined since version 1.0, are: - -EnergyCorrelator(int N, double beta, Measure measure) - - Called ECF(N,beta) in arXiv:1305.0007. Corresponds to the N-point - correlation function, with beta the angular exponent, while measure - = pt_R (default) or E_theta sets how energies and angles are - determined. - -EnergyCorrelatorRatio(int N, double beta, Measure measure) - - Called r_N^(beta) in arXiv:1305.0007. - Equals ECF(N+1,beta)/ECF(N,beta). - -EnergyCorrelatorDoubleRatio(int N, double beta, Measure measure) - - Called C_N^(beta) in arXiv:1305.0007. Equals r_N/r_{N-1}. This - observable provides good boosted N-prong object discrimination. 
- (N=1 for quark/gluon, N=2 for boosted W/Z/H, N=3 for boosted top) - Also given in EnergyCorrelatorCseries as of version 1.2. - ----------------------------------------------------------------------------- - -The D2 observable from 1409.6298, as well as C1 and C2 alias classes, were -added in version 1.1: - -EnergyCorrelatorC1(double beta, Measure measure) - - This calculates the double ratio observable C_1^(beta) which is - useful for quark versus gluon discrimination. - -EnergyCorrelatorC2(double beta, Measure measure) - - This calculates the double ratio observable C_2^(beta) which is - useful for boosted W/Z/H identification. - -EnergyCorrelatorD2(double beta, Measure measure) - - Called D_2^(beta) in arXiv:1409.6298. - Equals ECF(3,beta)*ECF(1,beta)^3/ECF(2,beta)^3. - This is the recommended function for boosted 2-prong object - discrimination (boosted W/Z/H). - ----------------------------------------------------------------------------- - -Generalized energy correlators were introduced in 1609.07483 and appear in -version 1.2. They are defined in the class: - -EnergyCorrelatorGeneralized(int angles, int N, double beta, Measure measure) - - Called {}_v e_n^{(beta)} in 1609.07483, but will be denoted here as - ECFG(angles,N,beta), where v=angles and n=N. As for EnergyCorrelator, - beta is the angular exponent, while measure = pt_R (default) or E_theta - sets how energies and angles are determined. The integer angles - determines the number of angles in the observable. The choice angles=-1 - sets angles = N choose 2, which corresponds to the N-point - normalized (dimensionless) correlation function, with - ECFN(N,beta) = ECFG(N choose 2,N,beta) = ECF(N,beta)/ECF(1,beta)^N - -From the generalized correlators, a variety of useful ratios are defined. -They are mainly organized by series, with special values highlighted for -recommended usage. 
- ---------------------------------------------------------------------------- - -EnergyCorrelatorGeneralizedD2(double alpha, double beta, Measure measure) - - Called D_2^(alpha, beta) in arXiv:1609.07483 - Equals ECFN(3,alpha)/ECFN(2,beta)^(3 alpha/beta). - Useful for groomed 2-prong object tagging. We recommend the use of alpha=1 - and beta=2. - ---------------------------------------------------------------------------- - -EnergyCorrelatorNseries(int i, double beta, Measure measure) - - Called N_i^(beta) in arXiv:1609.07483 - Equals ECFG(2,n+1,beta)/ECFG(1,n,beta)^2. - -EnergyCorrelatorN2(double beta, Measure measure) - - Called N_2^(beta) in arXiv:1609.07483 - Equals ECFG(2,3,beta)/ECFG(1,2,beta)^2. - Useful for groomed and ungroomed 2-prong object tagging. - -EnergyCorrelatorN3(double beta, Measure measure) - - Called N_3^(beta) in arXiv:1609.07483 - Equals ECFG(2,4,beta)/ECFG(1,3,beta)^2. - Useful for groomed 3-prong object tagging. - ---------------------------------------------------------------------------- - -EnergyCorrelatorMseries(int i, double beta, Measure measure) - - Called M_i^(beta) in arXiv:1609.07483 - Equals ECFG(1,n+1,beta)/ECFG(1,n,beta). - -EnergyCorrelatorM2(double beta, Measure measure) - - Called M_2^(beta) in arXiv:1609.07483 - Equals ECFG(1,3,beta)/ECFG(1,2,beta). - Useful for groomed 2-prong object tagging. - - ---------------------------------------------------------------------------- - -EnergyCorrelatorUseries(int i, double beta, Measure measure) - - Called U_i^(beta) in arXiv:1609.07483 - Equals ECFG(1,n+1,beta). - -EnergyCorrelatorU1(double beta, Measure measure) - - Called U_1^(beta) in arXiv:1609.07483 - Equals ECFG(1,2,beta). - Useful for quark vs. gluon discrimination. - -EnergyCorrelatorU2(double beta, Measure measure) - - Called U_2^(beta) in arXiv:1609.07483 - Equals ECFG(1,3,beta). - Useful for quark vs. gluon discrimination.
- -EnergyCorrelatorU3(double beta, Measure measure) - - Called U_3^(beta) in arXiv:1609.07483 - Equals ECFG(1,4,beta). - Useful for quark vs. gluon discrimination. ----------------------------------------------------------------------------- - -The argument Measure in each of the above functions sets how energies -and angles are defined in the observable. The measure - - pt_R - -uses hadron collider coordinates (transverse momenta and boost-invariant -angles). The "energy" in this case is defined as the pT of the jet, -and the "angle" is the distance between the jets in phi, eta space. - -The measure - - E_theta - -uses particle energies and angles and is appropriate for e+e- -collider applications. The "energy" is the jet energy and the angle -between 2 jets is computed from the dot product of the 3-vectors p1 and p2. - -The measure - - E_inv - -uses particle energies and angles and is also appropriate for e+e- -collider applications. In this case "theta" is replaced by Mandelstam -invariants with the same behavior in the collinear limits, leading to more -calculation-friendly observables. The "energy" is defined as the jet energy -and the "angle squared" is defined as (2p_i \cdot p_j/E_i E_j), -where p_i,p_j are the momenta of the jets i and j, and E_i, E_j are their -respective energies. - -General usage is shown in the example.cc program, and recommended usage -is shown in example_basic_usage.cc. - diff --git a/src/Tools/fjcontrib/EnergyCorrelator/VERSION b/src/Tools/fjcontrib/EnergyCorrelator/VERSION deleted file mode 100644 --- a/src/Tools/fjcontrib/EnergyCorrelator/VERSION +++ /dev/null @@ -1,1 +0,0 @@ -1.3.1 \ No newline at end of file diff --git a/src/Tools/fjcontrib/EnergyCorrelator/example.cc b/src/Tools/fjcontrib/EnergyCorrelator/example.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/EnergyCorrelator/example.cc +++ /dev/null @@ -1,686 +0,0 @@ -// Example showing usage of energy correlator classes.
-// -// Compile it with "make example" and run it with -// -// ./example < ../data/single-event.dat -// -// Copyright (c) 2013-2016 -// Andrew Larkoski, Lina Necib, Gavin Salam, and Jesse Thaler -// -// $Id: example.cc 1097 2018-01-05 00:04:20Z linoush $ -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see . -//---------------------------------------------------------------------- - -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include "fastjet/PseudoJet.hh" -#include "fastjet/ClusterSequence.hh" -#include "fastjet/JetDefinition.hh" - -#include -#include "EnergyCorrelator.hh" // In external code, this should be fastjet/contrib/EnergyCorrelator.hh - -using namespace std; -using namespace fastjet; -using namespace fastjet::contrib; - -// forward declaration to make things clearer -void read_event(vector<PseudoJet> &event); -void analyze(const vector<PseudoJet> & input_particles); - -//---------------------------------------------------------------------- -int main(){ - - //---------------------------------------------------------- - // read in input particles - vector<PseudoJet> event; - read_event(event); - cout << "# read an event with " << event.size() << " particles" << endl; - - //---------------------------------------------------------- - // illustrate how this EnergyCorrelator contrib
works - - analyze(event); - - return 0; -} - -// read in input particles -void read_event(vector<PseudoJet> &event){ - string line; - while (getline(cin, line)) { - istringstream linestream(line); - // take substrings to avoid problems when there are extra "pollution" - // characters (e.g. line-feed). - if (line.substr(0,4) == "#END") {return;} - if (line.substr(0,1) == "#") {continue;} - double px,py,pz,E; - linestream >> px >> py >> pz >> E; - PseudoJet particle(px,py,pz,E); - - // push particle onto back of event vector - event.push_back(particle); - } -} - -//////// -// -// Main Routine for Analysis -// -/////// - -void analyze(const vector<PseudoJet> & input_particles) { - - /////// EnergyCorrelator ///////////////////////////// - - // Initial clustering with anti-kt algorithm - JetAlgorithm algorithm = antikt_algorithm; - double jet_rad = 1.00; // jet radius for anti-kt algorithm - JetDefinition jetDef = JetDefinition(algorithm,jet_rad,E_scheme,Best); - ClusterSequence clust_seq(input_particles,jetDef); - vector<PseudoJet> antikt_jets = sorted_by_pt(clust_seq.inclusive_jets()); - - for (int j = 0; j < 2; j++) { // Two hardest jets per event - if (antikt_jets[j].perp() > 200) { - - PseudoJet myJet = antikt_jets[j]; - - // various values of beta - vector<double> betalist; - betalist.push_back(0.1); - betalist.push_back(0.2); - betalist.push_back(0.5); - betalist.push_back(1.0); - betalist.push_back(1.5); - betalist.push_back(2.0); - - // various values of alpha - vector<double> alphalist; - alphalist.push_back(0.1); - alphalist.push_back(0.2); - alphalist.push_back(0.5); - alphalist.push_back(1.0); - - - // checking the two energy/angle modes - vector<EnergyCorrelator::Measure> measurelist; - measurelist.push_back(EnergyCorrelator::pt_R); - measurelist.push_back(EnergyCorrelator::E_theta); - //measurelist.push_back(EnergyCorrelator::E_inv); - - vector<string> modename; - modename.push_back("pt_R"); - modename.push_back("E_theta"); - //modename.push_back("E_inv"); - - for (unsigned int M = 0; M < measurelist.size(); M++) { - - cout <<
"-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelator: ECF(N,beta) with " << modename[M] << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %14s %14s %14s %14s %15s\n","beta", "N=1 (GeV)", "N=2 (GeV^2)", "N=3 (GeV^3)", "N=4 (GeV^4)", "N=5 (GeV^5)"); - - for (unsigned int B = 0; B < betalist.size(); B++) { - double beta = betalist[B]; - - EnergyCorrelator ECF0(0,beta,measurelist[M]); - EnergyCorrelator ECF1(1,beta,measurelist[M]); - EnergyCorrelator ECF2(2,beta,measurelist[M]); - EnergyCorrelator ECF3(3,beta,measurelist[M]); - EnergyCorrelator ECF4(4,beta,measurelist[M]); - EnergyCorrelator ECF5(5,beta,measurelist[M]); - - printf("%7.3f %14.2f %14.2f %14.2f %14.2f %15.2f \n",beta,ECF1(myJet),ECF2(myJet),ECF3(myJet),ECF4(myJet),ECF5(myJet)); - } - cout << "-------------------------------------------------------------------------------------" << endl << endl; - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelatorRatio: r_N^(beta) = ECF(N+1,beta)/ECF(N,beta) with " << modename[M] << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %14s %14s %14s %14s %15s \n","beta", "N=0 (GeV)", "N=1 (GeV)", "N=2 (GeV)", "N=3 (GeV)","N=4 (GeV)"); - - for (unsigned int B = 0; B < betalist.size(); B++) { - double beta = betalist[B]; - - EnergyCorrelatorRatio r0(0,beta,measurelist[M]); - EnergyCorrelatorRatio r1(1,beta,measurelist[M]); - EnergyCorrelatorRatio r2(2,beta,measurelist[M]); - EnergyCorrelatorRatio r3(3,beta,measurelist[M]); - EnergyCorrelatorRatio r4(4,beta,measurelist[M]); - - printf("%7.3f %14.4f %14.4f %14.4f %14.4f %15.4f \n",beta,r0(myJet),r1(myJet),r2(myJet),r3(myJet),r4(myJet)); - } - cout << 
"-------------------------------------------------------------------------------------" << endl << endl; - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelatorDoubleRatio: C_N^(beta) = r_N^(beta)/r_{N-1}^(beta) with " << modename[M] << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %14s %14s %14s %14s \n","beta", "N=1", "N=2", "N=3", "N=4"); - - for (unsigned int B = 0; B < betalist.size(); B++) { - double beta = betalist[B]; - - EnergyCorrelatorDoubleRatio C1(1,beta,measurelist[M]); - EnergyCorrelatorDoubleRatio C2(2,beta,measurelist[M]); - EnergyCorrelatorDoubleRatio C3(3,beta,measurelist[M]); - EnergyCorrelatorDoubleRatio C4(4,beta,measurelist[M]); - - printf("%7.3f %14.6f %14.6f %14.6f %14.6f \n",beta,C1(myJet),C2(myJet),C3(myJet),C4(myJet)); - } - cout << "-------------------------------------------------------------------------------------" << endl << endl; - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelatorC1: C_1^(beta) = ECF(2,beta)/ECF(1,beta)^2 with " << modename[M] << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %14s \n","beta","C1 obs"); - - for (unsigned int B = 0; B < betalist.size(); B++) { - double beta = betalist[B]; - - EnergyCorrelatorC1 c1(beta,measurelist[M]); - - printf("%7.3f %14.6f \n",beta,c1(myJet)); - } - cout << "-------------------------------------------------------------------------------------" << endl << endl; - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelatorC2: C_2^(beta) = ECF(3,beta)*ECF(1,beta)/ECF(2,beta)^2 with " << modename[M] << endl; - cout << 
"-------------------------------------------------------------------------------------" << endl; - printf("%7s %14s \n","beta","C2 obs"); - - for (unsigned int B = 0; B < betalist.size(); B++) { - double beta = betalist[B]; - - EnergyCorrelatorC2 c2(beta,measurelist[M]); - - printf("%7.3f %14.6f \n",beta,c2(myJet)); - } - cout << "-------------------------------------------------------------------------------------" << endl << endl; - - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelatorD2: D_2^(beta) = ECF(3,beta)*ECF(1,beta)^3/ECF(2,beta)^3 with " << modename[M] << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %14s \n","beta","D2 obs"); - - for (unsigned int B = 0; B < betalist.size(); B++) { - double beta = betalist[B]; - - EnergyCorrelatorD2 d2(beta,measurelist[M]); - - printf("%7.3f %14.6f \n",beta,d2(myJet)); - } - cout << "-------------------------------------------------------------------------------------" << endl << endl; - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelatorGeneralizedD2: D_2^(alpha, beta) = ECFN(3,alpha)/ECFN(2,beta)^(3*alpha/beta) with " << modename[M] << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %18s %18s %18s %18s\n","beta","alpha = 0.100","alpha = 0.200","alpha = 0.500","alpha = 1.000"); - - for (unsigned int B = 1; B < betalist.size(); B++) { - double beta = betalist[B]; - - printf("%7.3f ", beta); - for (unsigned int A = 0; A < alphalist.size(); A++) { - double alpha = alphalist[A]; - - EnergyCorrelatorGeneralizedD2 d2(alpha, beta, measurelist[M]); - - printf("%18.4f ", d2(myJet)); - } - printf("\n"); - } - cout << "-------------------------------------------------------------------------------------" << 
endl << endl; - - - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelatorGeneralized (angles = N Choose 2): ECFN(N, beta) with " << modename[M] << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %7s %14s %14s %14s\n","beta", "N=1", "N=2", "N=3", "N=4"); - - - for (unsigned int B = 0; B < betalist.size(); B++) { - double beta = betalist[B]; - - EnergyCorrelatorGeneralized ECF1(-1,1, beta, measurelist[M]); - EnergyCorrelatorGeneralized ECF2(-1,2, beta, measurelist[M]); - EnergyCorrelatorGeneralized ECF3(-1,3, beta, measurelist[M]); - EnergyCorrelatorGeneralized ECF4(-1,4, beta, measurelist[M]); - //EnergyCorrelatorGeneralized ECF5(-1, 5, beta, measurelist[M]); - - printf("%7.3f %7.2f %14.10f %14.10f %14.10f \n", beta, ECF1(myJet), ECF2(myJet), ECF3(myJet), - ECF4(myJet)); - } - cout << "-------------------------------------------------------------------------------------" << - endl << endl; - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelatorGeneralized: ECFG(angles, N, beta=1) with " << modename[M] << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %7s %14s %14s %14s\n","angles", "N=1", "N=2", "N=3", "N=4"); - - double beta = 1.0; - for (unsigned int A = 1; A < 2; A++) { - double angle = A; - - EnergyCorrelatorGeneralized ECF1(angle, 1, beta, measurelist[M]); - EnergyCorrelatorGeneralized ECF2(angle, 2, beta, measurelist[M]); - EnergyCorrelatorGeneralized ECF3(angle, 3, beta, measurelist[M]); - EnergyCorrelatorGeneralized ECF4(angle, 4, beta, measurelist[M], EnergyCorrelator::slow); - - printf("%7.0f %7.2f %14.10f %14.10f %14.10f \n", angle, ECF1(myJet), ECF2(myJet), ECF3(myJet), - ECF4(myJet)); - - } - - for (unsigned int A = 2; A < 4; A++) { - 
double angle = A; - - EnergyCorrelatorGeneralized ECF3(angle, 3, beta, measurelist[M]); - EnergyCorrelatorGeneralized ECF4(angle, 4, beta, measurelist[M]); - - printf("%7.0f %7s %14s %14.10f %14.10f \n", angle, " " , " " ,ECF3(myJet), ECF4(myJet)); - } - - for (unsigned int A = 4; A < 7; A++) { - double angle = A; - - EnergyCorrelatorGeneralized ECF4(angle, 4, beta, measurelist[M]); - printf("%7.0f %7s %14s %14s %14.10f \n", angle, " ", " ", " ", ECF4(myJet) ); - } - cout << "-------------------------------------------------------------------------------------" << - endl << endl; - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelatorNseries: N_i(beta) = ECFG(i+1, 2, beta)/ECFG(i, 1, beta)^2 with " << modename[M] << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %14s %14s %14s \n","beta", "N=1", "N=2", "N=3"); - - for (unsigned int B = 0; B < betalist.size(); B++) { - double beta = betalist[B]; - - EnergyCorrelatorNseries N1(1,beta,measurelist[M]); - EnergyCorrelatorNseries N2(2,beta,measurelist[M]); - EnergyCorrelatorNseries N3(3,beta,measurelist[M]); - - printf("%7.3f %14.6f %14.6f %14.6f \n",beta,N1(myJet),N2(myJet),N3(myJet)); - - } - cout << "-------------------------------------------------------------------------------------" << endl << endl; - - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelatorN2: N2(beta) = ECFG(3, 2, beta)/ECFG(2, 1, beta)^2 with " << modename[M] << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %14s \n","beta", "N2 obs"); - - for (unsigned int B = 0; B < betalist.size(); B++) { - double beta = betalist[B]; - - EnergyCorrelatorN2 N2(beta,measurelist[M]); - - printf("%7.3f %14.6f \n",beta,N2(myJet)); - } - cout << 
"-------------------------------------------------------------------------------------" << endl << endl; - - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelatorN3: N3(beta) = ECFG(4, 2, beta)/ECFG(3, 1, beta)^2 with " << modename[M] << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %14s \n","beta", "N3 obs"); - - for (unsigned int B = 0; B < betalist.size(); B++) { - double beta = betalist[B]; - - EnergyCorrelatorN3 N3(beta,measurelist[M]); - - printf("%7.3f %14.6f \n",beta,N3(myJet)); - } - cout << "-------------------------------------------------------------------------------------" << endl << endl; - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelatorMseries: M_i(beta) = ECFG(i+1, 1, beta)/ECFG(i, 1, beta) with " << modename[M] << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %14s %14s %14s \n","beta", "N=1", "N=2", "N=3"); - - for (unsigned int B = 0; B < betalist.size(); B++) { - double beta = betalist[B]; - - EnergyCorrelatorMseries M1(1,beta,measurelist[M]); - EnergyCorrelatorMseries M2(2,beta,measurelist[M]); - EnergyCorrelatorMseries M3(3,beta,measurelist[M]); - - - printf("%7.3f %14.6f %14.6f %14.6f \n",beta,M1(myJet),M2(myJet),M3(myJet)); - } - cout << "-------------------------------------------------------------------------------------" << endl << endl; - - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelatorM2: M2(beta) = ECFG(3, 1, beta)/ECFG(2, 1, beta) with " << modename[M] << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %14s \n","beta", "M2 obs"); - - for (unsigned
int B = 0; B < betalist.size(); B++) { - double beta = betalist[B]; - - EnergyCorrelatorM2 M2(beta,measurelist[M]); - - printf("%7.3f %14.6f \n",beta,M2(myJet)); - } - cout << "-------------------------------------------------------------------------------------" << endl << endl; - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelatorCseries: C_i(beta) = ECFN(i-1, beta)*ECFN(i+1, beta)/ECFN(i, beta)^2 with " << modename[M] << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %20s %20s %20s \n","beta", "N=1", "N=2", "N=3"); - - for (unsigned int B = 0; B < betalist.size(); B++) { - double beta = betalist[B]; - - EnergyCorrelatorCseries C1(1,beta,measurelist[M]); - EnergyCorrelatorCseries C2(2,beta,measurelist[M]); - EnergyCorrelatorCseries C3(3,beta,measurelist[M]); - - - printf("%7.3f %20.10f %20.10f %20.10f \n",beta,C1(myJet),C2(myJet),C3(myJet)); - } - cout << "-------------------------------------------------------------------------------------" << endl << endl; - - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelatorUseries: U_i(beta) = ECFG(i+1, 1, beta) with " << modename[M] << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %20s %20s %20s \n","beta", "N=1", "N=2", "N=3"); - - for (unsigned int B = 0; B < betalist.size(); B++) { - double beta = betalist[B]; - - EnergyCorrelatorUseries U1(1,beta,measurelist[M]); - EnergyCorrelatorUseries U2(2,beta,measurelist[M]); - EnergyCorrelatorUseries U3(3,beta,measurelist[M]); - - - printf("%7.3f %20.10f %20.10f %20.10f \n",beta,U1(myJet),U2(myJet),U3(myJet)); - } - cout << "-------------------------------------------------------------------------------------" << endl << endl; - - cout << 
"-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelatorU1: U1(beta) = ECFG(2, 1, beta) with " << modename[M] << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %14s \n","beta", "U1 obs"); - - for (unsigned int B = 0; B < betalist.size(); B++) { - double beta = betalist[B]; - - EnergyCorrelatorU1 U1(beta,measurelist[M]); - - printf("%7.3f %14.10f \n",beta,U1(myJet)); - } - cout << "-------------------------------------------------------------------------------------" << endl << endl; - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelatorU2: U2(beta) = ECFG(3, 1, beta) with " << modename[M] << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %14s \n","beta", "U2 obs"); - - for (unsigned int B = 0; B < betalist.size(); B++) { - double beta = betalist[B]; - - EnergyCorrelatorU2 U2(beta,measurelist[M]); - - printf("%7.3f %14.10f \n",beta,U2(myJet)); - } - cout << "-------------------------------------------------------------------------------------" << endl << endl; - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelatorU3: U3(beta) = ECFG(4, 1, beta) with " << modename[M] << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %14s \n","beta", "U3 obs"); - - for (unsigned int B = 0; B < betalist.size(); B++) { - double beta = betalist[B]; - - EnergyCorrelatorU3 U3(beta,measurelist[M]); - - printf("%7.3f %14.10f \n",beta,U3(myJet)); - } - cout << "-------------------------------------------------------------------------------------" << endl << endl; - - - // timing tests for the developers - double do_timing_test = 
false; - if (do_timing_test) { - - cout << "jet with pt = " << myJet.pt() << " and " << myJet.constituents().size() << " constituents" << endl; - - clock_t clock_begin, clock_end; - double num_iter; - double beta = 0.5; - - cout << setprecision(6); - - // test C1 - num_iter = 20000; - clock_begin = clock(); - EnergyCorrelatorDoubleRatio C1s(1,beta,measurelist[M],EnergyCorrelator::slow); - EnergyCorrelatorDoubleRatio C1f(1,beta,measurelist[M],EnergyCorrelator::storage_array); - cout << "timing " << C1s.description() << endl; - cout << "timing " << C1f.description() << endl; - for (int t = 0; t < num_iter; t++) { - C1s(myJet); - } - clock_end = clock(); - cout << "Slow method: " << (clock_end-clock_begin)/double(CLOCKS_PER_SEC*num_iter)*1000 << " ms per C1"<< endl; - - num_iter = 20000; - clock_begin = clock(); - for (int t = 0; t < num_iter; t++) { - C1f(myJet); - } - clock_end = clock(); - cout << "Storage array method: " << (clock_end-clock_begin)/double(CLOCKS_PER_SEC*num_iter)*1000 << " ms per C1"<< endl; - - - // test C2 - num_iter = 1000; - clock_begin = clock(); - for (int t = 0; t < num_iter; t++) { - EnergyCorrelatorDoubleRatio C2(2,beta,measurelist[M],EnergyCorrelator::slow); - C2(myJet); - } - clock_end = clock(); - cout << "Slow method: " << (clock_end-clock_begin)/double(CLOCKS_PER_SEC*num_iter)*1000 << " ms per C2"<< endl; - - num_iter = 10000; - clock_begin = clock(); - for (int t = 0; t < num_iter; t++) { - EnergyCorrelatorDoubleRatio C2(2,beta,measurelist[M],EnergyCorrelator::storage_array); - C2(myJet); - } - clock_end = clock(); - cout << "Storage array method: " << (clock_end-clock_begin)/double(CLOCKS_PER_SEC*num_iter)*1000 << " ms per C2"<< endl; - - // test C3 - num_iter = 100; - clock_begin = clock(); - - for (int t = 0; t < num_iter; t++) { - EnergyCorrelatorDoubleRatio C3(3,beta,measurelist[M],EnergyCorrelator::slow); - C3(myJet); - } - clock_end = clock(); - cout << "Slow method: " << 
(clock_end-clock_begin)/double(CLOCKS_PER_SEC*num_iter)*1000 << " ms per C3"<< endl; - - num_iter = 3000; - clock_begin = clock(); - for (int t = 0; t < num_iter; t++) { - EnergyCorrelatorDoubleRatio C3(3,beta,measurelist[M],EnergyCorrelator::storage_array); - C3(myJet); - } - clock_end = clock(); - cout << "Storage array method: " << (clock_end-clock_begin)/double(CLOCKS_PER_SEC*num_iter)*1000 << " ms per C3"<< endl; - - // test C4 - num_iter = 10; - clock_begin = clock(); - - for (int t = 0; t < num_iter; t++) { - EnergyCorrelatorDoubleRatio C4(4,beta,measurelist[M],EnergyCorrelator::slow); - C4(myJet); - } - clock_end = clock(); - cout << "Slow method: " << (clock_end-clock_begin)/double(CLOCKS_PER_SEC*num_iter)*1000 << " ms per C4"<< endl; - - num_iter = 300; - clock_begin = clock(); - for (int t = 0; t < num_iter; t++) { - EnergyCorrelatorDoubleRatio C4(4,beta,measurelist[M],EnergyCorrelator::storage_array); - C4(myJet); - } - clock_end = clock(); - cout << "Storage array method: " << (clock_end-clock_begin)/double(CLOCKS_PER_SEC*num_iter)*1000 << " ms per C4"<< endl; - - // test N2 - num_iter = 10; - clock_begin = clock(); - - num_iter = 300; - clock_begin = clock(); - for (int t = 0; t < num_iter; t++) { - EnergyCorrelatorN2 N2(beta,measurelist[M],EnergyCorrelator::storage_array); - N2(myJet); - } - clock_end = clock(); - EnergyCorrelatorN2 N2test(beta,measurelist[M],EnergyCorrelator::storage_array); - cout << "Beta is: "<< beta << endl; - cout << "Result of N2: "<< N2test(myJet) << endl; - cout << "Storage array method: " << (clock_end-clock_begin)/double(CLOCKS_PER_SEC*num_iter)*1000 << " ms per N2"<< endl; - - - num_iter = 300; - clock_begin = clock(); - for (int t = 0; t < num_iter; t++) { - EnergyCorrelatorN3 N3(beta,measurelist[M],EnergyCorrelator::storage_array); - N3(myJet); - } - clock_end = clock(); - EnergyCorrelatorN3 N3test(beta,measurelist[M],EnergyCorrelator::storage_array); - cout << "Beta is: "<< beta << endl; - cout << "Result of N3: "<< 
N3test(myJet) << endl; - cout << "Storage array method: " << (clock_end-clock_begin)/double(CLOCKS_PER_SEC*num_iter)*1000 << " ms per N3"<< endl; - - - - - num_iter = 300; - clock_begin = clock(); - for (int t = 0; t < num_iter; t++) { - EnergyCorrelatorGeneralized ECF1(2,3, beta, measurelist[M]); - ECF1(myJet); - } - clock_end = clock(); - EnergyCorrelatorGeneralized ECF1test(2,3, beta, measurelist[M]); - cout << "Beta is: "<< beta << endl; - cout << "Result of 2e3: "<< ECF1test(myJet) << endl; - cout << "Storage array method: " << (clock_end-clock_begin)/double(CLOCKS_PER_SEC*num_iter)*1000 << " ms per 2e3"<< endl; - - - num_iter = 300; - clock_begin = clock(); - for (int t = 0; t < num_iter; t++) { - EnergyCorrelatorGeneralized ECF3(2,4, beta, measurelist[M]); - ECF3(myJet); - } - clock_end = clock(); - EnergyCorrelatorGeneralized ECF2test(2,4, beta, measurelist[M]); - cout << "Beta is: "<< beta << endl; - cout << "Result of 2e4: "<< ECF2test(myJet) << endl; - cout << "Storage array method: " << (clock_end-clock_begin)/double(CLOCKS_PER_SEC*num_iter)*1000 << " ms per 2e4"<< endl; - - -// num_iter = 300; -// clock_begin = clock(); -// for (int t = 0; t < num_iter; t++) { -// EnergyCorrelatorGeneralized ECF5(2,5, beta, measurelist[M]); -// ECF5(myJet); -// } -// clock_end = clock(); -// EnergyCorrelatorGeneralized ECF5test(2,5, beta, measurelist[M]); -// cout << "Beta is: "<< beta << endl; -// cout << "Result of 2e5: "<< ECF5test(myJet) << endl; -// cout << "Storage array method: " << (clock_end-clock_begin)/double(CLOCKS_PER_SEC*num_iter)*1000 << " ms per 2e5"<< endl; -// - - // test M2 - num_iter = 10; - clock_begin = clock(); - - for (int t = 0; t < num_iter; t++) { - EnergyCorrelatorM2 M2(beta,measurelist[M],EnergyCorrelator::slow); - M2(myJet); - } - clock_end = clock(); - cout << "Slow method: " << (clock_end-clock_begin)/double(CLOCKS_PER_SEC*num_iter)*1000 << " ms per M2"<< endl; - - num_iter = 300; - clock_begin = clock(); - for (int t = 0; t < num_iter; 
t++) { - EnergyCorrelatorM2 M2(beta,measurelist[M],EnergyCorrelator::storage_array); - M2(myJet); - } - clock_end = clock(); - cout << "Storage array method: " << (clock_end-clock_begin)/double(CLOCKS_PER_SEC*num_iter)*1000 << " ms per M2"<< endl; - - // test M3 - num_iter = 10; - clock_begin = clock(); - - for (int t = 0; t < num_iter; t++) { - EnergyCorrelatorMseries M3(3,beta,measurelist[M],EnergyCorrelator::slow); - M3(myJet); - } - clock_end = clock(); - cout << "Slow method: " << (clock_end-clock_begin)/double(CLOCKS_PER_SEC*num_iter)*1000 << " ms per M3"<< endl; - - num_iter = 300; - clock_begin = clock(); - for (int t = 0; t < num_iter; t++) { - EnergyCorrelatorMseries M3(3,beta,measurelist[M],EnergyCorrelator::storage_array); - M3(myJet); - } - clock_end = clock(); - cout << "Storage array method: " << (clock_end-clock_begin)/double(CLOCKS_PER_SEC*num_iter)*1000 << " ms per M3"<< endl; - - - } - } - } - } -} - - - diff --git a/src/Tools/fjcontrib/EnergyCorrelator/example.ref b/src/Tools/fjcontrib/EnergyCorrelator/example.ref deleted file mode 100644 --- a/src/Tools/fjcontrib/EnergyCorrelator/example.ref +++ /dev/null @@ -1,922 +0,0 @@ -# read an event with 354 particles -#-------------------------------------------------------------------------- -# FastJet release 3.2.0 -# M. Cacciari, G.P. Salam and G. Soyez -# A software package for jet finding and analysis at colliders -# http://fastjet.fr -# -# Please cite EPJC72(2012)1896 [arXiv:1111.6097] if you use this package -# for scientific work and optionally PLB641(2006)57 [hep-ph/0512210]. -# -# FastJet is provided without warranty under the terms of the GNU GPLv2. -# It uses T. Chan's closest pair algorithm, S. Fortune's Voronoi code -# and 3rd party plugin jet algorithms. See COPYING file for details. 
-#--------------------------------------------------------------------------
--------------------------------------------------------------------------------------
-EnergyCorrelator: ECF(N,beta) with pt_R
--------------------------------------------------------------------------------------
- beta N=1 (GeV) N=2 (GeV^2) N=3 (GeV^3) N=4 (GeV^4) N=5 (GeV^5)
- 0.100 983.64 265690.56 27540663.76 1287213926.38 30113720909.66
- 0.200 983.64 172787.11 8222752.45 134175168.40 888560847.26
- 0.500 983.64 52039.98 364021.49 608244.06 356259.05
- 1.000 983.64 10006.49 9934.06 2670.10 271.98
- 1.500 983.64 3001.20 1066.15 118.17 4.24
- 2.000 983.64 1272.64 260.18 14.88 0.22
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorRatio: r_N^(beta) = ECF(N+1,beta)/ECF(N,beta) with pt_R
--------------------------------------------------------------------------------------
- beta N=0 (GeV) N=1 (GeV) N=2 (GeV) N=3 (GeV) N=4 (GeV)
- 0.100 983.6369 270.1104 103.6569 46.7387 23.3945
- 0.200 983.6369 175.6615 47.5889 16.3175 6.6224
- 0.500 983.6369 52.9057 6.9950 1.6709 0.5857
- 1.000 983.6369 10.1730 0.9928 0.2688 0.1019
- 1.500 983.6369 3.0511 0.3552 0.1108 0.0359
- 2.000 983.6369 1.2938 0.2044 0.0572 0.0151
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorDoubleRatio: C_N^(beta) = r_N^(beta)/r_{N-1}^(beta) with pt_R
--------------------------------------------------------------------------------------
- beta N=1 N=2 N=3 N=4
- 0.100 0.274604 0.383758 0.450898 0.500538
- 0.200 0.178584 0.270913 0.342885 0.405845
- 0.500 0.053786 0.132217 0.238870 0.350540
- 1.000 0.010342 0.097588 0.270742 0.378972
- 1.500 0.003102 0.116430 0.312016 0.323906
- 2.000 0.001315 0.158011 0.279675 0.263310
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorC1: C_1^(beta) = ECF(2,beta)/ECF(1,beta)^2 with pt_R
--------------------------------------------------------------------------------------
- beta C1 obs
- 0.100 0.274604
- 0.200 0.178584
- 0.500 0.053786
- 1.000 0.010342
- 1.500 0.003102
- 2.000 0.001315
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorC2: C_2^(beta) = ECF(3,beta)*ECF(1,beta)/ECF(2,beta)^2 with pt_R
--------------------------------------------------------------------------------------
- beta C2 obs
- 0.100 0.383758
- 0.200 0.270913
- 0.500 0.132217
- 1.000 0.097588
- 1.500 0.116430
- 2.000 0.158011
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorD2: D_2^(beta) = ECF(3,beta)*ECF(1,beta)^3/ECF(2,beta)^3 with pt_R
--------------------------------------------------------------------------------------
- beta D2 obs
- 0.100 1.397496
- 0.200 1.517007
- 0.500 2.458216
- 1.000 9.435950
- 1.500 37.535182
- 2.000 120.129760
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorGeneralizedD2: D_2^(alpha, beta) = ECFN(3,alpha)/ECFN(2,beta)^(3*alpha/beta) with pt_R
--------------------------------------------------------------------------------------
- beta alpha = 0.100 alpha = 0.200 alpha = 0.500 alpha = 1.000
- 0.200 0.3834 1.5170 156.2463 1741793.7615
- 0.500 0.1671 0.2882 2.4582 431.1389
- 1.000 0.1140 0.1342 0.3637 9.4359
- 1.500 0.0919 0.0871 0.1233 1.0849
- 2.000 0.0783 0.0632 0.0554 0.2188
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorGeneralized (angles = N Choose 2): ECFN(N, beta) with pt_R
--------------------------------------------------------------------------------------
- beta N=1 N=2 N=3 N=4
- 0.100 1.00 0.2746037664 0.0289380967 0.0013750278
- 0.200 1.00 0.1785836568 0.0086399808 0.0001433286
- 0.500 1.00 0.0537857869 0.0003824922 0.0000006497
- 1.000 1.00 0.0103421840 0.0000104381 0.0000000029
- 1.500 1.00 0.0031018827 0.0000011202 0.0000000001
- 2.000 1.00 0.0013153375 0.0000002734 0.0000000000
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorGeneralized: ECFG(angles, N, beta=1) with pt_R
--------------------------------------------------------------------------------------
- angles N=1 N=2 N=3 N=4
- 1 1.00 0.0103421840 0.0007353402 0.0000733472
- 2 0.0000460939 0.0000010704
- 3 0.0000104381 0.0000000552
- 4 0.0000000114
- 5 0.0000000045
- 6 0.0000000029
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorNseries: N_i(beta) = ECFG(i+1, 2, beta)/ECFG(i, 1, beta)^2 with pt_R
--------------------------------------------------------------------------------------
- beta N=1 N=2 N=3
- 0.100 0.178584 0.551898 1.506618
- 0.200 0.078954 0.534774 1.521976
- 0.500 0.010342 0.487547 1.601014
- 1.000 0.001315 0.430942 1.979487
- 1.500 0.000437 0.391935 3.250084
- 2.000 0.000220 0.365859 6.700247
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorN2: N2(beta) = ECFG(3, 2, beta)/ECFG(2, 1, beta)^2 with pt_R
--------------------------------------------------------------------------------------
- beta N2 obs
- 0.100 0.551898
- 0.200 0.534774
- 0.500 0.487547
- 1.000 0.430942
- 1.500 0.391935
- 2.000 0.365859
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorN3: N3(beta) = ECFG(4, 2, beta)/ECFG(3, 1, beta)^2 with pt_R
--------------------------------------------------------------------------------------
- beta N3 obs
- 0.100 1.506618
- 0.200 1.521976
- 0.500 1.601014
- 1.000 1.979487
- 1.500 3.250084
- 2.000 6.700247
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorMseries: M_i(beta) = ECFG(i+1, 1, beta)/ECFN(i, 1, beta) with pt_R
--------------------------------------------------------------------------------------
- beta N=1 N=2 N=3
- 0.100 0.274604 0.225158 0.151396
- 0.200 0.178584 0.206357 0.146715
- 0.500 0.053786 0.149827 0.130985
- 1.000 0.010342 0.071101 0.099746
- 1.500 0.003102 0.027603 0.065806
- 2.000 0.001315 0.010448 0.036511
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorM2: M2(beta) = ECFG(3, 1, beta)/ECFG(3, 1, beta) with pt_R
--------------------------------------------------------------------------------------
- beta M2 obs
- 0.100 0.225158
- 0.200 0.206357
- 0.500 0.149827
- 1.000 0.071101
- 1.500 0.027603
- 2.000 0.010448
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorCseries: C_i(beta) = ECFN(i-1, beta)*ECFN(i+1, beta)/ECFN(i, beta)^2 with pt_R
--------------------------------------------------------------------------------------
- beta N=1 N=2 N=3
- 0.100 0.2746037664 0.3837575950 0.4508977207
- 0.200 0.1785836568 0.2709126926 0.3428854490
- 0.500 0.0537857869 0.1322170711 0.2388696533
- 1.000 0.0103421840 0.0975883303 0.2707417692
- 1.500 0.0031018827 0.1164297328 0.3120164994
- 2.000 0.0013153375 0.1580111821 0.2796746959
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorUseries: U_i(beta) = ECFG(i+1, 1, beta) with pt_R
--------------------------------------------------------------------------------------
- beta N=1 N=2 N=3
- 0.100 0.2746037664 0.0618292258 0.0093606928
- 0.200 0.1785836568 0.0368519186 0.0054067344
- 0.500 0.0537857869 0.0080585540 0.0010555469
- 1.000 0.0103421840 0.0007353402 0.0000733472
- 1.500 0.0031018827 0.0000856204 0.0000056343
- 2.000 0.0013153375 0.0000137425 0.0000005017
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorU1: U1(beta) = ECFG(2, 1, beta) with pt_R
--------------------------------------------------------------------------------------
- beta U1 obs
- 0.100 0.2746037664
- 0.200 0.1785836568
- 0.500 0.0537857869
- 1.000 0.0103421840
- 1.500 0.0031018827
- 2.000 0.0013153375
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorU2: U2(beta) = ECFG(3, 1, beta) with pt_R
--------------------------------------------------------------------------------------
- beta U2 obs
- 0.100 0.0618292258
- 0.200 0.0368519186
- 0.500 0.0080585540
- 1.000 0.0007353402
- 1.500 0.0000856204
- 2.000 0.0000137425
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorU3: U3(beta) = ECFG(4, 1, beta) with pt_R
--------------------------------------------------------------------------------------
- beta U3 obs
- 0.100 0.0093606928
- 0.200 0.0054067344
- 0.500 0.0010555469
- 1.000 0.0000733472
- 1.500 0.0000056343
- 2.000 0.0000005017
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelator: ECF(N,beta) with E_theta
--------------------------------------------------------------------------------------
- beta N=1 (GeV) N=2 (GeV^2) N=3 (GeV^3) N=4 (GeV^4) N=5 (GeV^5)
- 0.100 1378.16 504056.94 68362746.10 4040636656.75 115642116882.67
- 0.200 1378.16 316828.68 18441161.70 345185274.86 2475861856.72
- 0.500 1378.16 86243.22 614652.00 941844.70 459093.24
- 1.000 1378.16 14186.79 11682.63 2060.77 92.98
- 1.500 1378.16 3751.90 907.11 39.34 0.27
- 2.000 1378.16 1439.57 152.71 1.84 0.00
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorRatio: r_N^(beta) = ECF(N+1,beta)/ECF(N,beta) with E_theta
--------------------------------------------------------------------------------------
- beta N=0 (GeV) N=1 (GeV) N=2 (GeV) N=3 (GeV) N=4 (GeV)
- 0.100 1378.1622 365.7457 135.6250 59.1058 28.6198
- 0.200 1378.1622 229.8922 58.2055 18.7182 7.1726
- 0.500 1378.1622 62.5784 7.1270 1.5323 0.4874
- 1.000 1378.1622 10.2940 0.8235 0.1764 0.0451
- 1.500 1378.1622 2.7224 0.2418 0.0434 0.0069
- 2.000 1378.1622 1.0446 0.1061 0.0121 0.0013
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorDoubleRatio: C_N^(beta) = r_N^(beta)/r_{N-1}^(beta) with E_theta
--------------------------------------------------------------------------------------
- beta N=1 N=2 N=3 N=4
- 0.100 0.265387 0.370818 0.435803 0.484212
- 0.200 0.166811 0.253186 0.321588 0.383186
- 0.500 0.045407 0.113888 0.215004 0.318106
- 1.000 0.007469 0.079997 0.214207 0.255786
- 1.500 0.001975 0.088810 0.179360 0.160180
- 2.000 0.000758 0.101555 0.113867 0.106401
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorC1: C_1^(beta) = ECF(2,beta)/ECF(1,beta)^2 with E_theta
--------------------------------------------------------------------------------------
- beta C1 obs
- 0.100 0.265387
- 0.200 0.166811
- 0.500 0.045407
- 1.000 0.007469
- 1.500 0.001975
- 2.000 0.000758
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorC2: C_2^(beta) = ECF(3,beta)*ECF(1,beta)/ECF(2,beta)^2 with E_theta
--------------------------------------------------------------------------------------
- beta C2 obs
- 0.100 0.370818
- 0.200 0.253186
- 0.500 0.113888
- 1.000 0.079997
- 1.500 0.088810
- 2.000 0.101555
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorD2: D_2^(beta) = ECF(3,beta)*ECF(1,beta)^3/ECF(2,beta)^3 with E_theta
--------------------------------------------------------------------------------------
- beta D2 obs
- 0.100 1.397274
- 0.200 1.517804
- 0.500 2.508161
- 1.000 10.709981
- 1.500 44.958245
- 2.000 133.988418
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorGeneralizedD2: D_2^(alpha, beta) = ECFN(3,alpha)/ECFN(2,beta)^(3*alpha/beta) with E_theta
--------------------------------------------------------------------------------------
- beta alpha = 0.100 alpha = 0.200 alpha = 0.500 alpha = 1.000
- 0.200 0.3833 1.5178 159.9740 2071485.6064
- 0.500 0.1670 0.2880 2.5082 509.2064
- 1.000 0.1135 0.1330 0.3637 10.7100
- 1.500 0.0907 0.0850 0.1189 1.1438
- 2.000 0.0767 0.0608 0.0514 0.2139
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorGeneralized (angles = N Choose 2): ECFN(N, beta) with E_theta
--------------------------------------------------------------------------------------
- beta N=1 N=2 N=3 N=4
- 0.100 1.00 0.2653865755 0.0261167137 0.0011200786
- 0.200 1.00 0.1668106769 0.0070451023 0.0000956866
- 0.500 1.00 0.0454071552 0.0002348163 0.0000002611
- 1.000 1.00 0.0074693640 0.0000044631 0.0000000006
- 1.500 1.00 0.0019753776 0.0000003465 0.0000000000
- 2.000 1.00 0.0007579352 0.0000000583 0.0000000000
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorGeneralized: ECFG(angles, N, beta=1) with E_theta
--------------------------------------------------------------------------------------
- angles N=1 N=2 N=3 N=4
- 1 1.00 0.0074693640 0.0005219446 0.0000521319
- 2 0.0000242551 0.0000005388
- 3 0.0000044631 0.0000000205
- 4 0.0000000035
- 5 0.0000000011
- 6 0.0000000006
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorNseries: N_i(beta) = ECFG(i+1, 2, beta)/ECFG(i, 1, beta)^2 with E_theta
--------------------------------------------------------------------------------------
- beta N=1 N=2 N=3
- 0.100 0.166811 0.551800 1.506192
- 0.200 0.068920 0.534652 1.521253
- 0.500 0.007469 0.487710 1.598907
- 1.000 0.000758 0.434746 1.977674
- 1.500 0.000207 0.401351 3.333270
- 2.000 0.000082 0.372570 7.518334
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorN2: N2(beta) = ECFG(3, 2, beta)/ECFG(2, 1, beta)^2 with E_theta
--------------------------------------------------------------------------------------
- beta N2 obs
- 0.100 0.551800
- 0.200 0.534652
- 0.500 0.487710
- 1.000 0.434746
- 1.500 0.401351
- 2.000 0.372570
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorN3: N3(beta) = ECFG(4, 2, beta)/ECFG(3, 1, beta)^2 with E_theta
--------------------------------------------------------------------------------------
- beta N3 obs
- 0.100 1.506192
- 0.200 1.521253
- 0.500 1.598907
- 1.000 1.977674
- 1.500 3.333270
- 2.000 7.518334
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorMseries: M_i(beta) = ECFG(i+1, 1, beta)/ECFN(i, 1, beta) with E_theta
--------------------------------------------------------------------------------------
- beta N=1 N=2 N=3
- 0.100 0.265387 0.225134 0.151350
- 0.200 0.166811 0.206335 0.146685
- 0.500 0.045407 0.149607 0.131015
- 1.000 0.007469 0.069878 0.099880
- 1.500 0.001975 0.025930 0.065823
- 2.000 0.000758 0.009248 0.036027
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorM2: M2(beta) = ECFG(3, 1, beta)/ECFG(3, 1, beta) with E_theta
--------------------------------------------------------------------------------------
- beta M2 obs
- 0.100 0.225134
- 0.200 0.206335
- 0.500 0.149607
- 1.000 0.069878
- 1.500 0.025930
- 2.000 0.009248
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorCseries: C_i(beta) = ECFN(i-1, beta)*ECFN(i+1, beta)/ECFN(i, beta)^2 with E_theta
--------------------------------------------------------------------------------------
- beta N=1 N=2 N=3
- 0.100 0.2653865755 0.3708178471 0.4358031826
- 0.200 0.1668106769 0.2531859580 0.3215882713
- 0.500 0.0454071552 0.1138884647 0.2150035566
- 1.000 0.0074693640 0.0799967491 0.2142068062
- 1.500 0.0019753776 0.0888095113 0.1793603915
- 2.000 0.0007579352 0.1015545382 0.1138668084
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorUseries: U_i(beta) = ECFG(i+1, 1, beta) with E_theta
--------------------------------------------------------------------------------------
- beta N=1 N=2 N=3
- 0.100 0.2653865755 0.0597476192 0.0090428096
- 0.200 0.1668106769 0.0344188111 0.0050487063
- 0.500 0.0454071552 0.0067932369 0.0008900155
- 1.000 0.0074693640 0.0005219446 0.0000521319
- 1.500 0.0019753776 0.0000512213 0.0000033715
- 2.000 0.0007579352 0.0000070092 0.0000002525
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorU1: U1(beta) = ECFG(2, 1, beta) with E_theta
--------------------------------------------------------------------------------------
- beta U1 obs
- 0.100 0.2653865755
- 0.200 0.1668106769
- 0.500 0.0454071552
- 1.000 0.0074693640
- 1.500 0.0019753776
- 2.000 0.0007579352
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorU2: U2(beta) = ECFG(3, 1, beta) with E_theta
--------------------------------------------------------------------------------------
- beta U2 obs
- 0.100 0.0597476192
- 0.200 0.0344188111
- 0.500 0.0067932369
- 1.000 0.0005219446
- 1.500 0.0000512213
- 2.000 0.0000070092
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorU3: U3(beta) = ECFG(4, 1, beta) with E_theta
--------------------------------------------------------------------------------------
- beta U3 obs
- 0.100 0.0090428096
- 0.200 0.0050487063
- 0.500 0.0008900155
- 1.000 0.0000521319
- 1.500 0.0000033715
- 2.000 0.0000002525
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelator: ECF(N,beta) with pt_R
--------------------------------------------------------------------------------------
- beta N=1 (GeV) N=2 (GeV^2) N=3 (GeV^3) N=4 (GeV^4) N=5 (GeV^5)
- 0.100 910.03 163661.63 10858223.95 340994615.02 5907941830.38
- 0.200 910.03 111307.49 3950123.18 59638399.31 478368362.42
- 0.500 910.03 40470.38 446453.20 2147680.90 5564873.72
- 1.000 910.03 13863.90 65435.19 142222.94 166409.02
- 1.500 910.03 8417.59 26548.97 39560.12 31484.21
- 2.000 910.03 6328.24 16871.11 21081.97 13743.58
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorRatio: r_N^(beta) = ECF(N+1,beta)/ECF(N,beta) with pt_R
--------------------------------------------------------------------------------------
- beta N=0 (GeV) N=1 (GeV) N=2 (GeV) N=3 (GeV) N=4 (GeV)
- 0.100 910.0320 179.8416 66.3456 31.4043 17.3256
- 0.200 910.0320 122.3116 35.4884 15.0979 8.0211
- 0.500 910.0320 44.4714 11.0316 4.8105 2.5911
- 1.000 910.0320 15.2345 4.7198 2.1735 1.1701
- 1.500 910.0320 9.2498 3.1540 1.4901 0.7959
- 2.000 910.0320 6.9539 2.6660 1.2496 0.6519
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorDoubleRatio: C_N^(beta) = r_N^(beta)/r_{N-1}^(beta) with pt_R
--------------------------------------------------------------------------------------
- beta N=1 N=2 N=3 N=4
- 0.100 0.197621 0.368911 0.473344 0.551696
- 0.200 0.134404 0.290147 0.425431 0.531277
- 0.500 0.048868 0.248061 0.436069 0.538632
- 1.000 0.016741 0.309811 0.460503 0.538330
- 1.500 0.010164 0.340980 0.472444 0.534103
- 2.000 0.007641 0.383385 0.468712 0.521701
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorC1: C_1^(beta) = ECF(2,beta)/ECF(1,beta)^2 with pt_R
--------------------------------------------------------------------------------------
- beta C1 obs
- 0.100 0.197621
- 0.200 0.134404
- 0.500 0.048868
- 1.000 0.016741
- 1.500 0.010164
- 2.000 0.007641
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorC2: C_2^(beta) = ECF(3,beta)*ECF(1,beta)/ECF(2,beta)^2 with pt_R
--------------------------------------------------------------------------------------
- beta C2 obs
- 0.100 0.368911
- 0.200 0.290147
- 0.500 0.248061
- 1.000 0.309811
- 1.500 0.340980
- 2.000 0.383385
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorD2: D_2^(beta) = ECF(3,beta)*ECF(1,beta)^3/ECF(2,beta)^3 with pt_R
--------------------------------------------------------------------------------------
- beta D2 obs
- 0.100 1.866758
- 0.200 2.158775
- 0.500 5.076145
- 1.000 18.506536
- 1.500 33.547049
- 2.000 50.172493
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorGeneralizedD2: D_2^(alpha, beta) = ECFN(3,alpha)/ECFN(2,beta)^(3*alpha/beta) with pt_R
--------------------------------------------------------------------------------------
- beta alpha = 0.100 alpha = 0.200 alpha = 0.500 alpha = 1.000
- 0.200 0.2924 2.1588 2039.4962 1029141861.7962
- 0.500 0.0881 0.1962 5.0761 6375.2544
- 1.000 0.0491 0.0610 0.2735 18.5065
- 1.500 0.0361 0.0329 0.0583 0.8404
- 2.000 0.0299 0.0226 0.0229 0.1300
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorGeneralized (angles = N Choose 2): ECFN(N, beta) with pt_R
--------------------------------------------------------------------------------------
- beta N=1 N=2 N=3 N=4
- 0.100 1.00 0.1976212201 0.0144075072 0.0004971883
- 0.200 1.00 0.1344036550 0.0052413202 0.0000869560
- 0.500 1.00 0.0488679347 0.0005923876 0.0000031314
- 1.000 1.00 0.0167406417 0.0000868243 0.0000002074
- 1.500 1.00 0.0101642297 0.0000352272 0.0000000577
- 2.000 1.00 0.0076413374 0.0000223859 0.0000000307
--------------------------------------------------------------------------------------
-
--------------------------------------------------------------------------------------
-EnergyCorrelatorGeneralized: ECFG(angles, N,
beta=1) with pt_R -------------------------------------------------------------------------------------- - angles N=1 N=2 N=3 N=4 - 1 1.00 0.0167406417 0.0005149099 0.0000275956 - 2 0.0001201519 0.0000010521 - 3 0.0000868243 0.0000003239 - 4 0.0000002207 - 5 0.0000001864 - 6 0.0000002074 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorNseries: N_i(beta) = ECFG(i+1, 2, beta)/ECFG(i, 1, beta)^2 with pt_R -------------------------------------------------------------------------------------- - beta N=1 N=2 N=3 - 0.100 0.134404 0.501515 2.093230 - 0.200 0.066590 0.490962 2.107355 - 0.500 0.016741 0.472128 2.270271 - 1.000 0.007641 0.428733 3.968204 - 1.500 0.005197 0.367850 11.456776 - 2.000 0.003938 0.329248 20.513687 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorN2: N2(beta) = ECFG(3, 2, beta)/ECFG(2, 1, beta)^2 with pt_R -------------------------------------------------------------------------------------- - beta N2 obs - 0.100 0.501515 - 0.200 0.490962 - 0.500 0.472128 - 1.000 0.428733 - 1.500 0.367850 - 2.000 0.329248 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorN3: N3(beta) = ECFG(4, 2, beta)/ECFG(3, 1, beta)^2 with pt_R -------------------------------------------------------------------------------------- - beta N3 obs - 0.100 2.093230 - 0.200 2.107355 - 0.500 2.270271 - 1.000 3.968204 - 1.500 11.456776 - 2.000 20.513687 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorMseries: M_i(beta) = 
ECFG(i+1, 1, beta)/ECFN(i, 1, beta) with pt_R -------------------------------------------------------------------------------------- - beta N=1 N=2 N=3 - 0.100 0.197621 0.140440 0.090587 - 0.200 0.134404 0.127907 0.086996 - 0.500 0.048868 0.087042 0.076041 - 1.000 0.016741 0.030758 0.053593 - 1.500 0.010164 0.010637 0.025871 - 2.000 0.007641 0.005839 0.009492 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorM2: M2(beta) = ECFG(3, 1, beta)/ECFG(3, 1, beta) with pt_R -------------------------------------------------------------------------------------- - beta M2 obs - 0.100 0.140440 - 0.200 0.127907 - 0.500 0.087042 - 1.000 0.030758 - 1.500 0.010637 - 2.000 0.005839 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorCseries: C_i(beta) = ECFN(i-1, beta)*ECFN(i+1, beta)/ECFN(i, beta)^2 with pt_R -------------------------------------------------------------------------------------- - beta N=1 N=2 N=3 - 0.100 0.1976212201 0.3689110739 0.4733439247 - 0.200 0.1344036550 0.2901472976 0.4254309496 - 0.500 0.0488679347 0.2480607340 0.4360689366 - 1.000 0.0167406417 0.3098112813 0.4605028221 - 1.500 0.0101642297 0.3409799143 0.4724436009 - 2.000 0.0076413374 0.3833849510 0.4687122507 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorUseries: U_i(beta) = ECFG(i+1, 1, beta) with pt_R -------------------------------------------------------------------------------------- - beta N=1 N=2 N=3 - 0.100 0.1976212201 0.0277540195 0.0025141656 - 0.200 0.1344036550 0.0171911179 0.0014955586 - 0.500 0.0488679347 0.0042535497 0.0003234433 - 1.000 0.0167406417 
0.0005149099 0.0000275956 - 1.500 0.0101642297 0.0001081211 0.0000027972 - 2.000 0.0076413374 0.0000446167 0.0000004235 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorU1: U1(beta) = ECFG(2, 1, beta) with pt_R -------------------------------------------------------------------------------------- - beta U1 obs - 0.100 0.1976212201 - 0.200 0.1344036550 - 0.500 0.0488679347 - 1.000 0.0167406417 - 1.500 0.0101642297 - 2.000 0.0076413374 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorU2: U2(beta) = ECFG(3, 1, beta) with pt_R -------------------------------------------------------------------------------------- - beta U2 obs - 0.100 0.0277540195 - 0.200 0.0171911179 - 0.500 0.0042535497 - 1.000 0.0005149099 - 1.500 0.0001081211 - 2.000 0.0000446167 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorU3: U3(beta) = ECFG(4, 1, beta) with pt_R -------------------------------------------------------------------------------------- - beta U3 obs - 0.100 0.0025141656 - 0.200 0.0014955586 - 0.500 0.0003234433 - 1.000 0.0000275956 - 1.500 0.0000027972 - 2.000 0.0000004235 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelator: ECF(N,beta) with E_theta -------------------------------------------------------------------------------------- - beta N=1 (GeV) N=2 (GeV^2) N=3 (GeV^3) N=4 (GeV^4) N=5 (GeV^5) - 0.100 934.39 173035.93 11869881.64 387687952.45 7030672399.84 - 0.200 934.39 117791.85 4351824.23 69148951.24 
587495287.84 - 0.500 934.39 43220.34 507378.92 2589996.65 7002091.93 - 1.000 934.39 15182.02 75656.66 165115.95 178354.07 - 1.500 934.39 9296.03 30143.02 40417.01 24563.71 - 2.000 934.39 6943.30 18259.83 17679.34 7116.38 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorRatio: r_N^(beta) = ECF(N+1,beta)/ECF(N,beta) with E_theta -------------------------------------------------------------------------------------- - beta N=0 (GeV) N=1 (GeV) N=2 (GeV) N=3 (GeV) N=4 (GeV) - 0.100 934.3868 185.1866 68.5978 32.6615 18.1349 - 0.200 934.3868 126.0633 36.9450 15.8896 8.4961 - 0.500 934.3868 46.2553 11.7394 5.1047 2.7035 - 1.000 934.3868 16.2481 4.9833 2.1824 1.0802 - 1.500 934.3868 9.9488 3.2426 1.3408 0.6078 - 2.000 934.3868 7.4309 2.6299 0.9682 0.4025 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorDoubleRatio: C_N^(beta) = r_N^(beta)/r_{N-1}^(beta) with E_theta -------------------------------------------------------------------------------------- - beta N=1 N=2 N=3 N=4 - 0.100 0.198191 0.370425 0.476130 0.555237 - 0.200 0.134916 0.293067 0.430089 0.534693 - 0.500 0.049503 0.253795 0.434833 0.529617 - 1.000 0.017389 0.306701 0.437950 0.494940 - 1.500 0.010647 0.325926 0.413512 0.453265 - 2.000 0.007953 0.353909 0.368161 0.415742 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorC1: C_1^(beta) = ECF(2,beta)/ECF(1,beta)^2 with E_theta -------------------------------------------------------------------------------------- - beta C1 obs - 0.100 0.198191 - 0.200 0.134916 - 0.500 0.049503 - 1.000 0.017389 - 1.500 0.010647 - 2.000 0.007953 
-------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorC2: C_2^(beta) = ECF(3,beta)*ECF(1,beta)/ECF(2,beta)^2 with E_theta -------------------------------------------------------------------------------------- - beta C2 obs - 0.100 0.370425 - 0.200 0.293067 - 0.500 0.253795 - 1.000 0.306701 - 1.500 0.325926 - 2.000 0.353909 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorD2: D_2^(beta) = ECF(3,beta)*ECF(1,beta)^3/ECF(2,beta)^3 with E_theta -------------------------------------------------------------------------------------- - beta D2 obs - 0.100 1.869035 - 0.200 2.172229 - 0.500 5.126818 - 1.000 17.637552 - 1.500 30.610782 - 2.000 44.502015 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorGeneralizedD2: D_2^(alpha, beta) = ECFN(3,alpha)/ECFN(2,beta)^(3*alpha/beta) with E_theta -------------------------------------------------------------------------------------- - beta alpha = 0.100 alpha = 0.200 alpha = 0.500 alpha = 1.000 - 0.200 0.2936 2.1722 2081.0777 1038338617.9694 - 0.500 0.0883 0.1966 5.1268 6301.7080 - 1.000 0.0491 0.0607 0.2712 17.6376 - 1.500 0.0361 0.0328 0.0584 0.8180 - 2.000 0.0300 0.0227 0.0234 0.1308 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorGeneralized (angles = N Choose 2): ECFN(N, beta) with E_theta -------------------------------------------------------------------------------------- - beta N=1 N=2 N=3 N=4 - 0.100 1.00 0.1981905437 0.0145501117 0.0005085991 - 0.200 1.00 
0.1349155085 0.0053344701 0.0000907149 - 0.500 1.00 0.0495033749 0.0006219455 0.0000033978 - 1.000 1.00 0.0173890665 0.0000927400 0.0000002166 - 1.500 1.00 0.0106474132 0.0000369493 0.0000000530 - 2.000 1.00 0.0079526591 0.0000223829 0.0000000232 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorGeneralized: ECFG(angles, N, beta=1) with E_theta -------------------------------------------------------------------------------------- - angles N=1 N=2 N=3 N=4 - 1 1.00 0.0173890665 0.0005243822 0.0000276026 - 2 0.0001290539 0.0000011436 - 3 0.0000927400 0.0000003720 - 4 0.0000002479 - 5 0.0000002021 - 6 0.0000002166 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorNseries: N_i(beta) = ECFG(i+1, 2, beta)/ECFG(i, 1, beta)^2 with E_theta -------------------------------------------------------------------------------------- - beta N=1 N=2 N=3 - 0.100 0.134916 0.502292 2.091587 - 0.200 0.067180 0.491827 2.107891 - 0.500 0.017389 0.472830 2.289141 - 1.000 0.007953 0.426794 4.159019 - 1.500 0.005251 0.366194 11.630367 - 2.000 0.003828 0.328039 19.240170 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorN2: N2(beta) = ECFG(3, 2, beta)/ECFG(2, 1, beta)^2 with E_theta -------------------------------------------------------------------------------------- - beta N2 obs - 0.100 0.502292 - 0.200 0.491827 - 0.500 0.472830 - 1.000 0.426794 - 1.500 0.366194 - 2.000 0.328039 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- 
-EnergyCorrelatorN3: N3(beta) = ECFG(4, 2, beta)/ECFG(3, 1, beta)^2 with E_theta -------------------------------------------------------------------------------------- - beta N3 obs - 0.100 2.091587 - 0.200 2.107891 - 0.500 2.289141 - 1.000 4.159019 - 1.500 11.630367 - 2.000 19.240170 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorMseries: M_i(beta) = ECFG(i+1, 1, beta)/ECFN(i, 1, beta) with E_theta -------------------------------------------------------------------------------------- - beta N=1 N=2 N=3 - 0.100 0.198191 0.140807 0.091081 - 0.200 0.134916 0.127947 0.087436 - 0.500 0.049503 0.086121 0.076197 - 1.000 0.017389 0.030156 0.052638 - 1.500 0.010647 0.010921 0.024416 - 2.000 0.007953 0.006325 0.009110 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorM2: M2(beta) = ECFG(3, 1, beta)/ECFG(3, 1, beta) with E_theta -------------------------------------------------------------------------------------- - beta M2 obs - 0.100 0.140807 - 0.200 0.127947 - 0.500 0.086121 - 1.000 0.030156 - 1.500 0.010921 - 2.000 0.006325 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorCseries: C_i(beta) = ECFN(i-1, beta)*ECFN(i+1, beta)/ECFN(i, beta)^2 with E_theta -------------------------------------------------------------------------------------- - beta N=1 N=2 N=3 - 0.100 0.1981905437 0.3704251544 0.4761303115 - 0.200 0.1349155085 0.2930674174 0.4300888197 - 0.500 0.0495033749 0.2537948134 0.4348330516 - 1.000 0.0173890665 0.3067005727 0.4379497669 - 1.500 0.0106474132 0.3259256469 0.4135119649 - 2.000 0.0079526591 0.3539093501 0.3681613681 
-------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorUseries: U_i(beta) = ECFG(i+1, 1, beta) with E_theta -------------------------------------------------------------------------------------- - beta N=1 N=2 N=3 - 0.100 0.1981905437 0.0279066279 0.0025417550 - 0.200 0.1349155085 0.0172619856 0.0015093221 - 0.500 0.0495033749 0.0042632685 0.0003248489 - 1.000 0.0173890665 0.0005243822 0.0000276026 - 1.500 0.0106474132 0.0001162849 0.0000028392 - 2.000 0.0079526591 0.0000503042 0.0000004582 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorU1: U1(beta) = ECFG(2, 1, beta) with E_theta -------------------------------------------------------------------------------------- - beta U1 obs - 0.100 0.1981905437 - 0.200 0.1349155085 - 0.500 0.0495033749 - 1.000 0.0173890665 - 1.500 0.0106474132 - 2.000 0.0079526591 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorU2: U2(beta) = ECFG(3, 1, beta) with E_theta -------------------------------------------------------------------------------------- - beta U2 obs - 0.100 0.0279066279 - 0.200 0.0172619856 - 0.500 0.0042632685 - 1.000 0.0005243822 - 1.500 0.0001162849 - 2.000 0.0000503042 -------------------------------------------------------------------------------------- - -------------------------------------------------------------------------------------- -EnergyCorrelatorU3: U3(beta) = ECFG(4, 1, beta) with E_theta -------------------------------------------------------------------------------------- - beta U3 obs - 0.100 0.0025417550 - 0.200 0.0015093221 - 0.500 0.0003248489 - 1.000 0.0000276026 - 1.500 
0.0000028392 - 1.500 0.0000004582 -------------------------------------------------------------------------------------- - diff --git a/src/Tools/fjcontrib/EnergyCorrelator/example_basic_usage.cc b/src/Tools/fjcontrib/EnergyCorrelator/example_basic_usage.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/EnergyCorrelator/example_basic_usage.cc +++ /dev/null @@ -1,234 +0,0 @@ -// Example showing basic usage of energy correlator classes. -// -// Compile it with "make example" and run it with -// -// ./example_basic_usage < ../data/single-event.dat -// -// Copyright (c) 2013-2016 -// Andrew Larkoski, Lina Necib, Gavin Salam, and Jesse Thaler -// -// $Id: example.cc 958 2016-08-17 00:25:14Z linoush $ -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see <http://www.gnu.org/licenses/>.
-//---------------------------------------------------------------------- - -#include <iostream> -#include <iomanip> -#include <cstdio> -#include <cstdlib> -#include <cmath> -#include <ctime> -#include <istream> -#include <fstream> -#include <string> - -#include "fastjet/PseudoJet.hh" -#include "fastjet/ClusterSequence.hh" -#include "fastjet/JetDefinition.hh" - -#include <sstream> -#include "EnergyCorrelator.hh" // In external code, this should be fastjet/contrib/EnergyCorrelator.hh - -using namespace std; -using namespace fastjet; -using namespace fastjet::contrib; - -// forward declaration to make things clearer -void read_event(vector<PseudoJet> &event); -void analyze(const vector<PseudoJet> & input_particles); - -//---------------------------------------------------------------------- -int main(){ - - //---------------------------------------------------------- - // read in input particles - vector<PseudoJet> event; - read_event(event); - cout << "# read an event with " << event.size() << " particles" << endl; - - //---------------------------------------------------------- - // illustrate how this EnergyCorrelator contrib works - - analyze(event); - - return 0; -} - -// read in input particles -void read_event(vector<PseudoJet> &event){ - string line; - while (getline(cin, line)) { - istringstream linestream(line); - // take substrings to avoid problems when there are extra "pollution" - // characters (e.g. line-feed).
- if (line.substr(0,4) == "#END") {return;} - if (line.substr(0,1) == "#") {continue;} - double px,py,pz,E; - linestream >> px >> py >> pz >> E; - PseudoJet particle(px,py,pz,E); - - // push particle onto back of the event vector - event.push_back(particle); - } -} - -//////// -// -// Main Routine for Analysis -// -/////// - -void analyze(const vector<PseudoJet> & input_particles) { - - /////// EnergyCorrelator ///////////////////////////// - - // Initial clustering with anti-kt algorithm - JetAlgorithm algorithm = antikt_algorithm; - double jet_rad = 1.00; // jet radius for anti-kt algorithm - JetDefinition jetDef = JetDefinition(algorithm,jet_rad,E_scheme,Best); - ClusterSequence clust_seq(input_particles,jetDef); - vector<PseudoJet> antikt_jets = sorted_by_pt(clust_seq.inclusive_jets()); - - for (int j = 0; j < 1; j++) { // Hardest jet per event - if (antikt_jets[j].perp() > 200) { - - PseudoJet myJet = antikt_jets[j]; - - - // The angularity is set by the value of beta - double beta; - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelator: C series " << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %14s %14s %14s\n","beta", "C1", "C2", "C3"); - - - beta = 1.0; - //Defining the Cseries for beta= 1.0 - EnergyCorrelatorC1 C1s(beta); - EnergyCorrelatorC2 C2s(beta); - EnergyCorrelatorCseries C3s(3, beta); - - - printf("%7.3f %14.6f %14.6f %14.6f\n",beta,C1s(myJet),C2s(myJet),C3s(myJet)); - - beta = 2.0; - //Defining the Cseries for beta= 2.0 - EnergyCorrelatorC1 C1s_2(beta); - EnergyCorrelatorC2 C2s_2(beta); - EnergyCorrelatorCseries C3s_2(3, beta); - - - printf("%7.3f %14.6f %14.6f %14.6f\n",beta,C1s_2(myJet),C2s_2(myJet),C3s_2(myJet)); - - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelator: D2, original (alpha=beta) and generalized " << endl; - 
cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %14s %14s %14s\n","beta","D2", "D2(alpha=1)", "D2(alpha=2)"); - - - beta = 1.0; - EnergyCorrelatorD2 d2(beta); - double alpha = 1.0; - EnergyCorrelatorGeneralizedD2 d2_generalized(alpha, beta); - alpha = 2.0; - EnergyCorrelatorGeneralizedD2 d2_generalized_2(alpha, beta); - - printf("%7.3f %14.6f %14.6f %14.6f\n",beta,d2(myJet), d2_generalized(myJet), d2_generalized_2(myJet)); - beta = 2.0; - EnergyCorrelatorD2 d2_2(beta); - alpha = 1.0; - EnergyCorrelatorGeneralizedD2 d2_generalized_3(alpha, beta); - alpha = 2.0; - EnergyCorrelatorGeneralizedD2 d2_generalized_4(alpha, beta); - printf("%7.3f %14.6f %14.6f %14.6f\n",beta,d2_2(myJet), d2_generalized_3(myJet), d2_generalized_4(myJet)); - - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelator: N series " << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %14s %14s\n","beta", "N2", "N3"); - - - beta = 1.0; - // Directly defining the EnergyCorrelator N2 and N3 - EnergyCorrelatorN2 N2(beta); - EnergyCorrelatorN3 N3(beta); - printf("%7.3f %14.6f %14.6f\n",beta, N2(myJet), N3(myJet)); - - beta = 2.0; - EnergyCorrelatorN2 N2_2(beta); - EnergyCorrelatorN3 N3_2(beta); - printf("%7.3f %14.6f %14.6f\n",beta, N2_2(myJet), N3_2(myJet)); - - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelator: M series " << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %14s %14s\n","beta", "M2", "M3"); - - beta = 1.0; - //Directly defining M2 - EnergyCorrelatorM2 M2(beta); - EnergyCorrelatorMseries M3(3, beta); - printf("%7.3f %14.6f %14.6f\n", beta, M2(myJet), M3(myJet)); - - beta = 2.0; - EnergyCorrelatorM2 
M2_2(beta); - EnergyCorrelatorMseries M3_2(3, beta); - printf("%7.3f %14.6f %14.6f\n", beta, M2_2(myJet), M3_2(myJet)); - - cout << "-------------------------------------------------------------------------------------" << endl; - cout << "EnergyCorrelator: U series " << endl; - cout << "-------------------------------------------------------------------------------------" << endl; - printf("%7s %14s %14s %14s\n","beta", "U1", "U2", "U3"); - - - beta = 0.5; - //Defining the Useries for beta= 0.5 - EnergyCorrelatorU1 U1s(beta); - EnergyCorrelatorU2 U2s(beta); - EnergyCorrelatorU3 U3s(beta); - - - printf("%7.3f %14.8f %14.8f %14.8f\n",beta,U1s(myJet),U2s(myJet),U3s(myJet)); - - beta = 1.0; - //Defining the Useries for beta= 1.0 - EnergyCorrelatorU1 U1s_2(beta); - EnergyCorrelatorU2 U2s_2(beta); - EnergyCorrelatorU3 U3s_2(beta); - - - printf("%7.3f %14.8f %14.8f %14.8f\n",beta,U1s_2(myJet),U2s_2(myJet),U3s_2(myJet)); - - beta = 2.0; - //Defining the Useries for beta= 2.0 - EnergyCorrelatorU1 U1s_3(beta); - EnergyCorrelatorU2 U2s_3(beta); - EnergyCorrelatorU3 U3s_3(beta); - - - printf("%7.3f %14.8f %14.8f %14.8f\n",beta,U1s_3(myJet),U2s_3(myJet),U3s_3(myJet)); - } - } -} - - - - diff --git a/src/Tools/fjcontrib/EnergyCorrelator/example_basic_usage.ref b/src/Tools/fjcontrib/EnergyCorrelator/example_basic_usage.ref deleted file mode 100644 --- a/src/Tools/fjcontrib/EnergyCorrelator/example_basic_usage.ref +++ /dev/null @@ -1,45 +0,0 @@ -# read an event with 354 particles -#-------------------------------------------------------------------------- -# FastJet release 3.2.0 -# M. Cacciari, G.P. Salam and G. Soyez -# A software package for jet finding and analysis at colliders -# http://fastjet.fr -# -# Please cite EPJC72(2012)1896 [arXiv:1111.6097] if you use this package -# for scientific work and optionally PLB641(2006)57 [hep-ph/0512210]. -# -# FastJet is provided without warranty under the terms of the GNU GPLv2. -# It uses T. Chan's closest pair algorithm, S. 
Fortune's Voronoi code -# and 3rd party plugin jet algorithms. See COPYING file for details. -#-------------------------------------------------------------------------- -------------------------------------------------------------------------------------- -EnergyCorrelator: C series -------------------------------------------------------------------------------------- - beta C1 C2 C3 - 1.000 0.010342 0.097588 0.270742 - 2.000 0.001315 0.158011 0.279675 -------------------------------------------------------------------------------------- -EnergyCorrelator: D2, original (alpha=beta) and generalized -------------------------------------------------------------------------------------- - beta D2 D2(alpha=1) D2(alpha=2) - 1.000 9.435950 9.435950 223402.841831 - 2.000 120.129760 0.218810 120.129760 -------------------------------------------------------------------------------------- -EnergyCorrelator: N series -------------------------------------------------------------------------------------- - beta N2 N3 - 1.000 0.430942 1.979487 - 2.000 0.365859 6.700247 -------------------------------------------------------------------------------------- -EnergyCorrelator: M series -------------------------------------------------------------------------------------- - beta M2 M3 - 1.000 0.071101 0.099746 - 2.000 0.010448 0.036511 -------------------------------------------------------------------------------------- -EnergyCorrelator: U series -------------------------------------------------------------------------------------- - beta U1 U2 U3 - 0.500 0.05378579 0.00805855 0.00105555 - 1.000 0.01034218 0.00073534 0.00007335 - 2.000 0.00131534 0.00001374 0.00000050 diff --git a/src/Tools/fjcontrib/Makefile.am b/src/Tools/fjcontrib/Makefile.am deleted file mode 100644 --- a/src/Tools/fjcontrib/Makefile.am +++ /dev/null @@ -1,1 +0,0 @@ -SUBDIRS = EnergyCorrelator Nsubjettiness RecursiveTools diff --git a/src/Tools/fjcontrib/Nsubjettiness/AUTHORS
b/src/Tools/fjcontrib/Nsubjettiness/AUTHORS deleted file mode 100644 --- a/src/Tools/fjcontrib/Nsubjettiness/AUTHORS +++ /dev/null @@ -1,47 +0,0 @@ -------------------------------------------------------------------------------- - -The Nsubjettiness FastJet contrib was written, maintained, and developed by: - - Jesse Thaler - Ken Van Tilburg - Christopher K. Vermilion - TJ Wilkason - -Questions and comments should be directed to: jthaler@mit.edu. - -------------------------------------------------------------------------------- - -For physics details on N-subjettiness, see: - - Identifying Boosted Objects with N-subjettiness. - Jesse Thaler and Ken Van Tilburg. - JHEP 1103:015 (2011), arXiv:1011.2268. - - Maximizing Boosted Top Identification by Minimizing N-subjettiness. - Jesse Thaler and Ken Van Tilburg. - JHEP 1202:093 (2012), arXiv:1108.2701. - -New in v2.0 is the winner-take-all axis, described in: - - Jet Observables Without Jet Algorithms. - Daniele Bertolini, Tucker Chan, and Jesse Thaler. - JHEP 1404:013 (2014), arXiv:1310.7584. - - Jet Shapes with the Broadening Axis. - Andrew J. Larkoski, Duff Neill, and Jesse Thaler. - JHEP 1404:017 (2014), arXiv:1401.2158. - - Unpublished work by Gavin Salam. - -New in v2.2 is the XCone jet algorithm, described in: - - XCone: N-jettiness as an Exclusive Cone Jet Algorithm. - Iain W. Stewart, Frank J. Tackmann, Jesse Thaler, - Christopher K. Vermilion, and Thomas F. Wilkason. - arXiv:1508.01516. - - Resolving Boosted Jets with XCone. - Jesse Thaler and Thomas F. Wilkason. - arXiv:1508.01518. - -------------------------------------------------------------------------------- diff --git a/src/Tools/fjcontrib/Nsubjettiness/AxesDefinition.cc b/src/Tools/fjcontrib/Nsubjettiness/AxesDefinition.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/Nsubjettiness/AxesDefinition.cc +++ /dev/null @@ -1,97 +0,0 @@ -// Nsubjettiness Package -// Questions/Comments? 
jthaler@jthaler.net -// -// Copyright (c) 2011-14 -// Jesse Thaler, Ken Van Tilburg, Christopher K. Vermilion, and TJ Wilkason -// -// $Id: AxesDefinition.cc 833 2015-07-23 14:35:23Z jthaler $ -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see <http://www.gnu.org/licenses/>. -//---------------------------------------------------------------------- - -#include "AxesDefinition.hh" - -namespace Rivet { -namespace fjcontrib { - -namespace Nsubjettiness { using namespace fastjet; - -// Repeatedly calls the one pass finder to try to find global minimum -std::vector<PseudoJet> AxesDefinition::get_multi_pass_axes(int n_jets, - const std::vector<PseudoJet> & inputJets, - const std::vector<PseudoJet> & seedAxes, - const MeasureDefinition* measure) const { - - assert(n_jets == (int)seedAxes.size()); //added int casting to get rid of compiler warning - - // first iteration - std::vector<PseudoJet> bestAxes = measure->get_one_pass_axes(n_jets, inputJets, seedAxes,_nAttempts,_accuracy); - - double bestTau = measure->result(inputJets,bestAxes); - - for (int l = 1; l < _Npass; l++) { // Do minimization procedure multiple times (l = 1 to start since first iteration is done already) - - // Add noise to current best axes - std::vector< PseudoJet > noiseAxes(n_jets, PseudoJet(0,0,0,0)); - for (int k = 0; k < n_jets; k++) { - noiseAxes[k] = jiggle(bestAxes[k]); - } - - std::vector<PseudoJet> testAxes =
measure->get_one_pass_axes(n_jets, inputJets, noiseAxes,_nAttempts,_accuracy); - double testTau = measure->result(inputJets,testAxes); - - if (testTau < bestTau) { - bestTau = testTau; - bestAxes = testAxes; - } - } - - return bestAxes; -} - - -// Used by multi-pass minimization to jiggle axes with noise range. -PseudoJet AxesDefinition::jiggle(const PseudoJet& axis) const { - double phi_noise = ((double)rand()/(double)RAND_MAX) * _noise_range * 2.0 - _noise_range; - double rap_noise = ((double)rand()/(double)RAND_MAX) * _noise_range * 2.0 - _noise_range; - - double new_phi = axis.phi() + phi_noise; - if (new_phi >= 2.0*M_PI) new_phi -= 2.0*M_PI; - if (new_phi <= -2.0*M_PI) new_phi += 2.0*M_PI; - - PseudoJet newAxis(0,0,0,0); - newAxis.reset_PtYPhiM(axis.perp(),axis.rap() + rap_noise,new_phi); - return newAxis; -} - - -LimitedWarning HardestJetAxes::_too_few_axes_warning; -LimitedWarning ExclusiveJetAxes::_too_few_axes_warning; -LimitedWarning ExclusiveCombinatorialJetAxes::_too_few_axes_warning; - -std::vector<PseudoJet> Manual_Axes::get_starting_axes(int, - const std::vector<PseudoJet>&, - const MeasureDefinition *) const { - // This is a dummy function and should never be called - assert(false); - std::vector<PseudoJet> dummy; - return dummy; -} - -} // namespace Nsubjettiness - -} -} diff --git a/src/Tools/fjcontrib/Nsubjettiness/COPYING b/src/Tools/fjcontrib/Nsubjettiness/COPYING deleted file mode 100644 --- a/src/Tools/fjcontrib/Nsubjettiness/COPYING +++ /dev/null @@ -1,369 +0,0 @@ -The Nsubjettiness contrib to FastJet is released -under the terms of the GNU General Public License v2 (GPLv2). - -A copy of the GPLv2 is to be found at the end of this file. - -While the GPL license grants you considerable freedom, please bear in -mind that this code's use falls under guidelines similar to those that -are standard for Monte Carlo event generators -(http://www.montecarlonet.org/GUIDELINES).
In particular, if you use -this code as part of work towards a scientific publication, whether -directly or contained within another program you should include a citation -to - -N-subjettiness: - arXiv:1011.2268 - arXiv:1108.2701 - -Winner-take-all axes: - arXiv:1310.7584 - arXiv:1401.2158 - -XCone: - arXiv:1508.01516 - arXiv:1508.01518 - - -====================================================================== -====================================================================== -====================================================================== - GNU GENERAL PUBLIC LICENSE - Version 2, June 1991 - - Copyright (C) 1989, 1991 Free Software Foundation, Inc. - 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - Everyone is permitted to copy and distribute verbatim copies - of this license document, but changing it is not allowed. - - Preamble - - The licenses for most software are designed to take away your -freedom to share and change it. By contrast, the GNU General Public -License is intended to guarantee your freedom to share and change free -software--to make sure the software is free for all its users. This -General Public License applies to most of the Free Software -Foundation's software and to any other program whose authors commit to -using it. (Some other Free Software Foundation software is covered by -the GNU Library General Public License instead.) You can apply it to -your programs, too. - - When we speak of free software, we are referring to freedom, not -price. Our General Public Licenses are designed to make sure that you -have the freedom to distribute copies of free software (and charge for -this service if you wish), that you receive source code or can get it -if you want it, that you can change the software or use pieces of it -in new free programs; and that you know you can do these things. - - To protect your rights, we need to make restrictions that forbid -anyone to deny you these rights or to ask you to surrender the rights. 
-These restrictions translate to certain responsibilities for you if you -distribute copies of the software, or if you modify it. - - For example, if you distribute copies of such a program, whether -gratis or for a fee, you must give the recipients all the rights that -you have. You must make sure that they, too, receive or can get the -source code. And you must show them these terms so they know their -rights. - - We protect your rights with two steps: (1) copyright the software, and -(2) offer you this license which gives you legal permission to copy, -distribute and/or modify the software. - - Also, for each author's protection and ours, we want to make certain -that everyone understands that there is no warranty for this free -software. If the software is modified by someone else and passed on, we -want its recipients to know that what they have is not the original, so -that any problems introduced by others will not reflect on the original -authors' reputations. - - Finally, any free program is threatened constantly by software -patents. We wish to avoid the danger that redistributors of a free -program will individually obtain patent licenses, in effect making the -program proprietary. To prevent this, we have made it clear that any -patent must be licensed for everyone's free use or not licensed at all. - - The precise terms and conditions for copying, distribution and -modification follow. - - GNU GENERAL PUBLIC LICENSE - TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION - - 0. This License applies to any program or other work which contains -a notice placed by the copyright holder saying it may be distributed -under the terms of this General Public License. 
The "Program", below, -refers to any such program or work, and a "work based on the Program" -means either the Program or any derivative work under copyright law: -that is to say, a work containing the Program or a portion of it, -either verbatim or with modifications and/or translated into another -language. (Hereinafter, translation is included without limitation in -the term "modification".) Each licensee is addressed as "you". - -Activities other than copying, distribution and modification are not -covered by this License; they are outside its scope. The act of -running the Program is not restricted, and the output from the Program -is covered only if its contents constitute a work based on the -Program (independent of having been made by running the Program). -Whether that is true depends on what the Program does. - - 1. You may copy and distribute verbatim copies of the Program's -source code as you receive it, in any medium, provided that you -conspicuously and appropriately publish on each copy an appropriate -copyright notice and disclaimer of warranty; keep intact all the -notices that refer to this License and to the absence of any warranty; -and give any other recipients of the Program a copy of this License -along with the Program. - -You may charge a fee for the physical act of transferring a copy, and -you may at your option offer warranty protection in exchange for a fee. - - 2. You may modify your copy or copies of the Program or any portion -of it, thus forming a work based on the Program, and copy and -distribute such modifications or work under the terms of Section 1 -above, provided that you also meet all of these conditions: - - a) You must cause the modified files to carry prominent notices - stating that you changed the files and the date of any change. 
- - b) You must cause any work that you distribute or publish, that in - whole or in part contains or is derived from the Program or any - part thereof, to be licensed as a whole at no charge to all third - parties under the terms of this License. - - c) If the modified program normally reads commands interactively - when run, you must cause it, when started running for such - interactive use in the most ordinary way, to print or display an - announcement including an appropriate copyright notice and a - notice that there is no warranty (or else, saying that you provide - a warranty) and that users may redistribute the program under - these conditions, and telling the user how to view a copy of this - License. (Exception: if the Program itself is interactive but - does not normally print such an announcement, your work based on - the Program is not required to print an announcement.) - -These requirements apply to the modified work as a whole. If -identifiable sections of that work are not derived from the Program, -and can be reasonably considered independent and separate works in -themselves, then this License, and its terms, do not apply to those -sections when you distribute them as separate works. But when you -distribute the same sections as part of a whole which is a work based -on the Program, the distribution of the whole must be on the terms of -this License, whose permissions for other licensees extend to the -entire whole, and thus to each and every part regardless of who wrote it. - -Thus, it is not the intent of this section to claim rights or contest -your rights to work written entirely by you; rather, the intent is to -exercise the right to control the distribution of derivative or -collective works based on the Program. 
- -In addition, mere aggregation of another work not based on the Program -with the Program (or with a work based on the Program) on a volume of -a storage or distribution medium does not bring the other work under -the scope of this License. - - 3. You may copy and distribute the Program (or a work based on it, -under Section 2) in object code or executable form under the terms of -Sections 1 and 2 above provided that you also do one of the following: - - a) Accompany it with the complete corresponding machine-readable - source code, which must be distributed under the terms of Sections - 1 and 2 above on a medium customarily used for software interchange; or, - - b) Accompany it with a written offer, valid for at least three - years, to give any third party, for a charge no more than your - cost of physically performing source distribution, a complete - machine-readable copy of the corresponding source code, to be - distributed under the terms of Sections 1 and 2 above on a medium - customarily used for software interchange; or, - - c) Accompany it with the information you received as to the offer - to distribute corresponding source code. (This alternative is - allowed only for noncommercial distribution and only if you - received the program in object code or executable form with such - an offer, in accord with Subsection b above.) - -The source code for a work means the preferred form of the work for -making modifications to it. For an executable work, complete source -code means all the source code for all modules it contains, plus any -associated interface definition files, plus the scripts used to -control compilation and installation of the executable. 
However, as a -special exception, the source code distributed need not include -anything that is normally distributed (in either source or binary -form) with the major components (compiler, kernel, and so on) of the -operating system on which the executable runs, unless that component -itself accompanies the executable. - -If distribution of executable or object code is made by offering -access to copy from a designated place, then offering equivalent -access to copy the source code from the same place counts as -distribution of the source code, even though third parties are not -compelled to copy the source along with the object code. - - 4. You may not copy, modify, sublicense, or distribute the Program -except as expressly provided under this License. Any attempt -otherwise to copy, modify, sublicense or distribute the Program is -void, and will automatically terminate your rights under this License. -However, parties who have received copies, or rights, from you under -this License will not have their licenses terminated so long as such -parties remain in full compliance. - - 5. You are not required to accept this License, since you have not -signed it. However, nothing else grants you permission to modify or -distribute the Program or its derivative works. These actions are -prohibited by law if you do not accept this License. Therefore, by -modifying or distributing the Program (or any work based on the -Program), you indicate your acceptance of this License to do so, and -all its terms and conditions for copying, distributing or modifying -the Program or works based on it. - - 6. Each time you redistribute the Program (or any work based on the -Program), the recipient automatically receives a license from the -original licensor to copy, distribute or modify the Program subject to -these terms and conditions. You may not impose any further -restrictions on the recipients' exercise of the rights granted herein. 
-You are not responsible for enforcing compliance by third parties to -this License. - - 7. If, as a consequence of a court judgment or allegation of patent -infringement or for any other reason (not limited to patent issues), -conditions are imposed on you (whether by court order, agreement or -otherwise) that contradict the conditions of this License, they do not -excuse you from the conditions of this License. If you cannot -distribute so as to satisfy simultaneously your obligations under this -License and any other pertinent obligations, then as a consequence you -may not distribute the Program at all. For example, if a patent -license would not permit royalty-free redistribution of the Program by -all those who receive copies directly or indirectly through you, then -the only way you could satisfy both it and this License would be to -refrain entirely from distribution of the Program. - -If any portion of this section is held invalid or unenforceable under -any particular circumstance, the balance of the section is intended to -apply and the section as a whole is intended to apply in other -circumstances. - -It is not the purpose of this section to induce you to infringe any -patents or other property right claims or to contest validity of any -such claims; this section has the sole purpose of protecting the -integrity of the free software distribution system, which is -implemented by public license practices. Many people have made -generous contributions to the wide range of software distributed -through that system in reliance on consistent application of that -system; it is up to the author/donor to decide if he or she is willing -to distribute software through any other system and a licensee cannot -impose that choice. - -This section is intended to make thoroughly clear what is believed to -be a consequence of the rest of this License. - - 8. 
If the distribution and/or use of the Program is restricted in -certain countries either by patents or by copyrighted interfaces, the -original copyright holder who places the Program under this License -may add an explicit geographical distribution limitation excluding -those countries, so that distribution is permitted only in or among -countries not thus excluded. In such case, this License incorporates -the limitation as if written in the body of this License. - - 9. The Free Software Foundation may publish revised and/or new versions -of the General Public License from time to time. Such new versions will -be similar in spirit to the present version, but may differ in detail to -address new problems or concerns. - -Each version is given a distinguishing version number. If the Program -specifies a version number of this License which applies to it and "any -later version", you have the option of following the terms and conditions -either of that version or of any later version published by the Free -Software Foundation. If the Program does not specify a version number of -this License, you may choose any version ever published by the Free Software -Foundation. - - 10. If you wish to incorporate parts of the Program into other free -programs whose distribution conditions are different, write to the author -to ask for permission. For software which is copyrighted by the Free -Software Foundation, write to the Free Software Foundation; we sometimes -make exceptions for this. Our decision will be guided by the two goals -of preserving the free status of all derivatives of our free software and -of promoting the sharing and reuse of software generally. - - NO WARRANTY - - 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY -FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. 
EXCEPT WHEN -OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES -PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED -OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF -MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS -TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE -PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, -REPAIR OR CORRECTION. - - 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING -WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR -REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, -INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING -OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED -TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY -YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER -PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE -POSSIBILITY OF SUCH DAMAGES. - - END OF TERMS AND CONDITIONS - - How to Apply These Terms to Your New Programs - - If you develop a new program, and you want it to be of the greatest -possible use to the public, the best way to achieve this is to make it -free software which everyone can redistribute and change under these terms. - - To do so, attach the following notices to the program. It is safest -to attach them to the start of each source file to most effectively -convey the exclusion of warranty; and each file should have at least -the "copyright" line and a pointer to where the full notice is found. - - - <one line to give the program's name and a brief idea of what it does.> - Copyright (C) <year> <name of author> - - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2 of the License, or - (at your option) any later version.
- - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - GNU General Public License for more details. - - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - - -Also add information on how to contact you by electronic and paper mail. - -If the program is interactive, make it output a short notice like this -when it starts in an interactive mode: - - Gnomovision version 69, Copyright (C) year name of author - Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. - This is free software, and you are welcome to redistribute it - under certain conditions; type `show c' for details. - -The hypothetical commands `show w' and `show c' should show the appropriate -parts of the General Public License. Of course, the commands you use may -be called something other than `show w' and `show c'; they could even be -mouse-clicks or menu items--whatever suits your program. - -You should also get your employer (if you work as a programmer) or your -school, if any, to sign a "copyright disclaimer" for the program, if -necessary. Here is a sample; alter the names: - - Yoyodyne, Inc., hereby disclaims all copyright interest in the program - `Gnomovision' (which makes passes at compilers) written by James Hacker. - - <signature of Ty Coon>, 1 April 1989 - Ty Coon, President of Vice - -This General Public License does not permit incorporating your program into -proprietary programs. If your program is a subroutine library, you may -consider it more useful to permit linking proprietary applications with the -library. If this is what you want to do, use the GNU Library General -Public License instead of this License.
diff --git a/src/Tools/fjcontrib/Nsubjettiness/ChangeLog b/src/Tools/fjcontrib/Nsubjettiness/ChangeLog deleted file mode 100644 --- a/src/Tools/fjcontrib/Nsubjettiness/ChangeLog +++ /dev/null @@ -1,326 +0,0 @@ -2016-06-08 - Fixed bug in MeasureDefinition.cc where axes were not completely defined, - leading to problems with multi-pass axes -2016-04-04 - Fixed Njettiness.cc to give value of _current_tau_components even if less - than N constituents - Delete extraneous code in example_advanced_usage.cc -2016-03-29 - Update for FJ 3.2.0 to deal with SharedPtr () deprecation -2015-09-28 - Updated NEWS for 2.2.1 release. -2015-09-18 - Fixed duplicate XConePlugin entry in Makefile. -2015-08-20 - Trying to fix "abs" bug in ExtraRecombiners.cc -2015-08-19 - Adding arXiv numbers to XCone papers - Used this_jet in example_basic_usage. - Fixed typo in example_advanced_usage header. - Added copy/paste code in README file. -2015-08-13 - Ready for 2.2.0 release -2015-07-23 - Fixed typo in GenET_GenKT_Axes error message - Added _too_few_axes_warning to ExclusiveJetAxes and ExclusiveCombinatorialJetAxes - Switched to ../data/single_event_ee.dat for make check -2015-07-20 - Renamed WinnerTakeAllRecombiner.hh/cc to ExtraRecombiners.hh/cc - Added _too_few_axes_warning to HardestJetAxes - Added GenKT_Axes and OnePass_GenKT_Axes and Comb_GenKT_Axes (using E-scheme recombination). - Added warning about p < 0 or delta <=0 in GenKT axes finders. - Added warning about beta <= 0 in all measures. -2015-07-10 - Putting in small tweaks in documentation to get ready for 2.2 release candidate 1. -2015-06-15 - Starting doxygen file for eventual improved documentation. - Starting long process of improving documentation throughout. - Made the basic usage file a bit easier to read. - Adding in LimitedWarnings for old style constructors -2015-06-12 - Synchronized definition of new measures with XCone paper. 
- In MeasureDefinition, added default values of jet_distance_squared and beam_distance_squared for cases where we don't want to optimize specifically. - Fixed bug in OriginalGeometricMeasure and ModifiedGeometricMeasure - Commented out DeprecatedGeometricMeasure and DeprecatedGeometricCutoffMeasure since they were only causing confusion -2015-05-26 - Removed axis_scale_factor(), added bool to calculate this value if needed to save computation time - Defined small offset in denominator of axis scaling according to accuracy of refinement - Updated advanced examples to include tau values and number of jet constituents -2015-05-25 - Clean up of AxesDefinition - Splitting get_axes into get_starting_axes and get_refined_axes - Putting in proper noise ranges (hopefully) for MultiPass - Clean up of MeasureDefinition, rename jet_gamma to beam_gamma - Put in zero checking for jet_distance in ConicalGeometricMeasure - Added in ConicalMeasure for consistency - Changing OnePass Minimization to allow for temporary uphill -2015-05-24 - Added Combinatorial GenET_GenKT_Axes and MultiPass_Manual_Axes - Moved Axes refining information into MeasureDefinition, associated each measure with corresponding axes refiner - Moved get_one_pass_axes into MeasureDefinition, removed any mention of Npass - Moved all information on number of passes to AxesDefinition - Made AxesRefiner.hh/.cc into defunct files -2015-05-22 - Cleaning out commented text. Renaming classes to be consistent with recommended usage.
-2015-05-22 - Added XConePlugin as a specific implementation of NjettinessPlugin - Added usage of XCone beta = 1.0 and beta = 2.0 to both basic and advanced example files - Added OriginalGeometric, ModifiedGeometric, ConicalGeometric, and XCone measures to list of test measures - Added OnePass_GenRecomb_GenKT_Axes to list of test axes - Added description to XCone measure in MeasureDefinition -2015-05-21 - Updated minimization scheme to avoid divide-by-zero errors - Fixed various factors of 2 in the definition of the measures -2015-04-19 - Fixed bug in minimization scheme for GeneralAxesRefiner - Moved measure_type to DefaultMeasure, removed geometric measure from e+e- example file -2015-03-22 - Added OriginalGeometricMeasure and ModifiedGeometricMeasure definitions - Changed all instances of GeometricMeasure to DeprecatedGeometricMeasure, and added error statements - Made GeneralAxesRefiner the default axes refiner for Measure Definition, overwritten by DefaultMeasure and GeometricMeasure - Created DefaultMeasure class for all the conical measure subclasses - Separated out e+e- and pp measures into separate example files -2015-03-09 - Added ConicalGeometric measures with jet_beta and jet_gamma definitions - Added XCone measures derived from ConicalGeometric with jet_gamma = 1.0 - Added GeneralAxesRefiner for use with any measure (currently defined with XCone measure) - Added axes_numerator in MeasureDefinition to define the momentum scaling for minimization (currently only defined for Conical Geometric measure) -2014-11-28 - Minor change to default parameters in axes definition -2014-10-08 - Updated example file with new e+e- measure definitions - Added measure type to measure definition descriptions - Changed order of parameters in new axes definitions - Added standard C++ epsilon definition to GeneralERecombiner -2014-10-07 - Updated example_advanced_usage with new axes choices - Reversed inheritance of NormalizedMeasure and NormalizedCutoffMeasure (and 
Geometric) back to original - Storing _RcutoffSq as separate variable, and recalculating it in NormalizedMeasure - Cleaning up ExclusiveCombinatorialJetAxes and added comments to explain the process - Fixed memory leaks using delete_recombiner_when_unused() - Fixed manual axes bug in Njettiness - Cleaned up enum definitions -2014-10-01 - Added new parameterized recombination scheme to Winner-Take-All recombiner - Created Winner-Take-All GenKT and general Recomb GenKT axes finders and onepass versions - Created new N choose M minimization axis finder, created N choose M WTA GenKT axis finder as example - Removed NPass as constructor argument in AxesDefinition, made it set through protected method - Removed TauMode as constructor argument in MeasureDefinition, made it set through protected method - Flipped inheritance of NormalizedMeasure and NormalizedCutoffMeasure (same for Geometric) to remove error of squaring the integer maximum - Created new MeasureType enum to allow user to choose between pp and ee variables (ee variables need testing) - Updated MeasureDefinition constructors to take in extra MeasureType parameter (but defaulted to pp variables) - Added new Default TauMode argument - Fixed unsigned integers in various places - Added setAxes method to NjettinessPlugin -2014-08-26 - Enhanced TauComponents to include more information - NjettinessExtras now inherits from TauComponents - Removed getPartition from Njettiness, to avoid code duplication - Fixed double calculating issue in NjettinessPlugin::run_clustering() - Now AxesDefinition can use measure information without running AxesRefiner - Added TauStructure so the jets returned by TauComponents can know their tau value. -2014-08-25 - Merged MeasureDefinition and MeasureFunction into new MeasureDefinition. - Merged StartingAxesFinder and AxesDefinition into new AxesDefinition.
- Renamed AxesFinder.cc/hh to AxesRefiner.cc/hh - Renamed NjettinessDefinition.cc/hh to AxesDefinition.cc/hh - Renamed MeasureFunction.cc/hh to MeasureDefinition.cc/hh - Renaming result() function in MeasureDefinition to be consistent with Nsubjettiness interface. - Split off TauComponents into separate header - Added TauPartition class for readability of partitioning - Moved NjettinessExtras into TauComponents, as this will eventually be the logical location - Added cross check of new MeasureDefinition and AxesDefinition in example_advanced_usage. - Lots of comments updated. - Changed version number to 2.2.0-alpha-dev, since this is going to be a bigger update than I had originally thought -2014-08-20 - Incorporated code in NjettinessPlugin to handle FJ3.1 treatment of auto_ptr (thanks Gregory) - Changed version number to 2.1.1-alpha-dev - Split AxesFinder into StartingAxesFinder and RefiningAxesFinder for clarity. - Manual axes mode now corresponds to a NULL StartingAxesFinder in Njettiness (so removed AxesFinderFromUserInput) - Added AxesRefiningMode to make selection of minimization routine more transparent in Njettiness - Moved sq() to more appropriate place in AxesFinder.hh - Rearranged Nsubjettiness.hh to make the old code less visible. - Renamed AxesFinderFromOnePassMinimization -> AxesFinderFromConicalMinimization - Renamed DefaultUnnormalizedMeasureFunction -> ConicalUnnormalizedMeasureFunction - Removed supportsMultiPassMinimization() from MeasureDefinition since any One Pass algorithm can be multipass. -2014-07-09 - Changed version for 2.1.0 release. - Updated NEWS to reflect 2.1.0 release -2014-07-07 - Added forward declaration of options in NjettinessDefinition for readability. 
- Updated README with some clarifications - Added usage information in the example file - Reran svn propset svn:keywords Id *.cc *.hh -2014-06-25 - Declaring release candidate of 2.1 -2014-06-11 - Fixed virtual destructor issue in AxesFinder - Changing copy() to create() in NjettinessDefinition for "new" clarity - Converted some SharedPtr to regular pointers in NjettinessDefinition to be consistent on meaning of "create" commands. -2014-06-10 - Slight modification of example_advanced_usage - Fixed bug in GeometricCutoffMeasure (incorrect denominator setting) -2014-06-05 - Moved public before private in the .hh files for readability - Starting process of switching to SharedPtr internally - Clean up of AxesFinderFromGeometricMinimization - Simplified AxesFinder interface such that it doesn't know about starting axes finders (this is now handled in Njettiness). - Added const qualifiers in Njettiness -2014-06-04 - Implemented AxesDefinition class - Added descriptions to AxesDefinition and MeasureDefinition - Simplified example_advanced_usage with new Definitions - Made copy constructor private for Njettiness, to avoid copying -2014-06-03 - Implemented remaining suggestions from FJ authors (Thanks!) - Fixed bug in example_advanced_usage where wrong beta value was used for NjettinessPlugin tests. - Removed NANs as signals for number of parameters in Nsubjettiness and NjettinessPlugin - Reduced the number of allowed parameters from 4 to 3. - Wrapped NEWS to 80 characters - Added MeasureDefinition as way to safely store information about the measures used - Converted a few NANs to std::numeric_limits<double>::quiet_NaN() when a parameter shouldn't be used.
- Added AxesStruct and MeasureStruct to simplify the form of example_advanced_usage - Added example_v1p0p3 to check for backwards compatibility with v1.0.3 - Changed the names of the MeasureFunctions in order to avoid conflicts with the new MeasureDefinitions - Fixed bug in correlation between subjets and tau values in NjettinessPlugin - Added currentTauComponents to Nsubjettiness - Added subTau information to example_basic_usage - Added file NjettinessDefinition to hold MeasureDefinition - Changed Njettiness constructors to treat MeasureSpecification as primary object - Fixed segmentation fault with ClusterSequenceAreas -2014-06-02 - Implemented many suggestions from FJ authors (Thanks!) - Removed FastJet 2 specific code - Made sq() function into internal namespace (as "inline static" to avoid conflicts with other packages) - Made setAxes() take const reference argument - Rewrapped README to 80 characters and updated/improved some of the descriptions - Clarified NEWS about what parts of the Nsubjettiness code are backwards compatible with v1.0.3 - Clarified the parameter choices in Nsubjettiness constructor -2014-04-30 - Added (void)(n_jets) in AxesFinder.hh to fix unused-parameter warning -2014-04-29 - Added manual definition of NAN for compilers that don't have it. - Removed a few more unused parameters for compilation -2014-04-22 - Turned on -Wunused-parameter compiler flag to fix ATLAS compile issues. -2014-04-18 - Tweaks to NEWS and README. Preparing for 2.0.0-rc1 release. -2014-04-16 - Decided that enough has changed that this should be v2.0 - Added Id tags -2014-04-14 - Added get_partition_list to MeasureFunction - Removed do_cluster from MeasureFunction (no longer needed) - Fixed bug with NjettinessPlugin where jets were listed in backwards order from axes. - Removed various commented out pieces of code. -2014-03-16 - Added partitioning information to Nsubjettiness - Partitioning is now calculated in MeasureFunction and stored by Njettiness. 
- Rewrote MeasureFunction result() to call result_from_partition() - Added subjet (and constituent counting) information to example_basic_usage - Commented out redundant "getJets" function -2014-02-25 - Added access to seedAxes used for one-pass minimization routines. - Added axes print out to example_basic_usage, and fixed too many PrintJets declarations -2014-02-24 - Fixed embarrassing bug with min_axes (error introduced after v1.0 to make it the same as onepass_kt) - Simplified GeometricMeasure and added possibility of beta dependence - Commented out WTA2 options, since those have not been fully tested (nor do they seem particularly useful at the moment). They can be reinstated if the physics case can be made to use them. - Split example into example_basic_usage and example_advanced_usage -2014-01-28 - Added new options in WinnerTakeAllRecombiner to use either pT or pT^2/E to recombine particles -2014-01-24 - Added access to currentAxes from Nsubjettiness. -2014-01-18 - Added beam regions to MeasureFunction, correspondingly renamed functions to have jet and beam regions - Renamed functions in TauComponents for consistency with MeasureFunction - Added debugging code to AxesFinderFromOnePassMinimization::getAxes - Worked extensively on example.cc to make sure that it tests all available options. - Rewrote PrintJets command in example.cc for later improvements - Converted some magic numbers to std::numeric_limits<double>::max() -2014-01-17 - Rewrote KMeansMinimization to call OnePassMinimization, adding noise explicitly. - Removed any notion of noise from OnePassMinimization - Removed Double32_t for ROOT usage in Nsubjettiness - Clean up of many comments throughout the code, updating of README file - Removed unnecessary establishAxes in Njettiness - Removed bare constructor for Njettiness to avoid incompatibility with enum choices, may reinstate later. 
Also removed setMeasureFunction, setAxesFinder for same reason - NjettinessExtras now calls TauComponents -2014-01-16 - Moved minimization functions to OnePassMinimization, changed KMeansMinimization class to simply call OnePassMinimization a specified number of times - Added extra tau function in TauComponents for users to get tau directly - Changed radius parameter in AxesFinderFromExclusiveJet subclasses to use max_allowable_R - Updated example.ref to account for changes due to change in radius parameter -2014-01-15 - Changed NjettinessComponents to TauComponents - Updated MeasureFunction with "result" function that returns TauComponents object - TauComponents changed to calculate all tau components given subtaus_numerator and tau_denominator - Njettiness updated to return TauComponents object rather than individual components - Nsubjettiness and NjettinessPlugin updated to have option for 4th parameter -2014-01-14 - Added NjettinessComponents class so Njettiness does not recalculate tau values - Removed old Njettiness constructors, updated Nsubjettiness and NjettinessPlugin constructors to use new constructor - Added geometric minimization to OnePassAxesFinders - Created new Njettiness function to set OnePassAxesFinders to reduce code - Updated LightLikeAxis with ConvertToPseudoJet function - Updated README with new functionality of code -2014-01-12 - Removed NsubGeometricParameters in all functions/constructors, replaced with Rcutoff double - Added three new measure mode options where Rcutoff is declared explicitly in parameters - Added checks so minimization axes finders are not used for geometric measures - AxesFinderFromOnePassMinimization class created as child of AxesFinderFromKmeansMinimization - Added new NsubjettinessRatio constructor to include MeasureMode option - Moved AxesFinder and MeasureFunction declarations from AxesMode and MeasureMode into separate Njettiness function - Removed R0 from AxesFinderFromKmeansMinimization - Changed example.cc to 
get rid of use of NsubGeometricParameters -2014-01-9 - Removed NsubParameters in all functions/constructors, replaced with three separate parameters - Added checks for correct number of parameters in Njettiness constructor -2014-01-8 - Removed normalization information from Nsubjettiness - Added flag to MeasureFunction to give option of using the denominator - Split DefaultMeasure into separate normalized and unnormalized classes -2014-01-7 - Added capability of choosing a specific Measure in Njettiness - Added new Nsubjettiness constructor to allow choice of both AxesMode and MeasureMode -2014-01-6 - Updated copyright information - Fixed bug in WinnerTakeAllRecombiner - Moved KMeansParameters to AxesFinder - Updated README with descriptions of new header files -2013-12-30 - Changed name of MeasureFunctor to MeasureFunction - Created separate .hh/.cc files for MeasureFunction, AxesFinder, and WinnerTakeAllRecombiner - Updated Makefile to account for new files - Removed getMinimumAxes in AxesFinderFromKMeansMinimization, consolidated with getAxes - Updated comments on classes and major functions -2013-12-22 - Created .cc files and moved all function definitions into .cc files - Updated Makefile to account for new .cc files -2013-11-12 - Added to fjcontrib svn -2013-11-12 - Debugging svn -2013-11-11 - Changed MeasureFunctor to separately treat tau numerator and denominator - Changed some of the function names in MeasureFunctor. Should not affect users - Added more informative function names to Njettiness. 
- Njettiness now allows finding unnormalized tau values - Added WTARecombiner to define winner-take-all axes - Added various WTA options to AxesMode - Added setAxes to Nsubjettiness - Added NsubjettinessRatio function -2013-08-26 - Added inlines to fix compile issue - Put some of the minimization code inside of the AxesFinderFromKmeansMinimization class -2013-02-23 - Fixed dependency issue (now using make depend) -2013-02-22 - Fixed memory management and failed make check issues. -2013-02-21 - First version submitted to fjcontrib -2013-02-20 - Initial creation based on previous plugin hosted at http://www.jthaler.net/jets/ - - - diff --git a/src/Tools/fjcontrib/Nsubjettiness/ExtraRecombiners.cc b/src/Tools/fjcontrib/Nsubjettiness/ExtraRecombiners.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/Nsubjettiness/ExtraRecombiners.cc +++ /dev/null @@ -1,105 +0,0 @@ -// Nsubjettiness Package -// Questions/Comments? jthaler@jthaler.net -// -// Copyright (c) 2011-14 -// Jesse Thaler, Ken Van Tilburg, Christopher K. Vermilion, and TJ Wilkason -// -// $Id: ExtraRecombiners.cc 842 2015-08-20 13:44:31Z jthaler $ -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see <http://www.gnu.org/licenses/>. 
-//---------------------------------------------------------------------- - -#include "ExtraRecombiners.hh" - -namespace Rivet { -namespace fjcontrib { - -namespace Nsubjettiness{ - -std::string GeneralEtSchemeRecombiner::description() const { - return "General Et-scheme recombination"; -} - -// recombine pa and pb according to a generalized Et-scheme parameterized by the power delta -void GeneralEtSchemeRecombiner::recombine(const fastjet::PseudoJet & pa, const fastjet::PseudoJet & pb, fastjet::PseudoJet & pab) const { - - // Define new weights for recombination according to delta - // definition of ratio done so that we do not encounter issues about numbers being too large for huge values of delta - double ratio; - if (std::abs(_delta - 1.0) < std::numeric_limits::epsilon()) ratio = pb.perp()/pa.perp(); // save computation time of pow() - else ratio = pow(pb.perp()/pa.perp(), _delta); - double weighta = 1.0/(1.0 + ratio); - double weightb = 1.0/(1.0 + 1.0/ratio); - - double perp_ab = pa.perp() + pb.perp(); - // reweight the phi and rap sums according to the weights above - if (perp_ab != 0.0) { - double y_ab = (weighta * pa.rap() + weightb * pb.rap()); - - double phi_a = pa.phi(), phi_b = pb.phi(); - if (phi_a - phi_b > pi) phi_b += twopi; - if (phi_a - phi_b < -pi) phi_b -= twopi; - double phi_ab = (weighta * phi_a + weightb * phi_b); - - pab.reset_PtYPhiM(perp_ab, y_ab, phi_ab); - - } - else { - pab.reset(0.0,0.0,0.0,0.0); - } -} - - -std::string WinnerTakeAllRecombiner::description() const { - return "Winner-Take-All recombination"; -} - -// recombine pa and pb by creating pab with energy of the sum of particle energies in the direction of the harder particle -// updated recombiner to use more general form of a metric equal to E*(pT/E)^(alpha), which reduces to pT*cosh(rap)^(1-alpha) -// alpha is specified by the user. The default is alpha = 1, which is the typical behavior. 
alpha = 2 provides a metric which more -// favors central jets -void WinnerTakeAllRecombiner::recombine(const fastjet::PseudoJet & pa, const fastjet::PseudoJet & pb, fastjet::PseudoJet & pab) const { - double a_pt = pa.perp(), b_pt = pb.perp(), a_rap = pa.rap(), b_rap = pb.rap(); - - // special case of alpha = 1, everything is just pt (made separate so that pow function isn't called) - if (_alpha == 1.0) { - if (a_pt >= b_pt) { - pab.reset_PtYPhiM(a_pt + b_pt, a_rap, pa.phi()); - } - else if (b_pt > a_pt) { - pab.reset_PtYPhiM(a_pt + b_pt, b_rap, pb.phi()); - } - } - - // every other case uses additional cosh(rap) term - else { - double a_metric = a_pt*pow(cosh(a_rap), 1.0-_alpha); - double b_metric = b_pt*pow(cosh(b_rap), 1.0-_alpha); - if (a_metric >= b_metric) { - double new_pt = a_pt + b_pt*pow(cosh(b_rap)/cosh(a_rap), 1.0-_alpha); - pab.reset_PtYPhiM(new_pt, a_rap, pa.phi()); - } - if (b_metric > a_metric) { - double new_pt = b_pt + a_pt*pow(cosh(a_rap)/cosh(b_rap), 1.0-_alpha); - pab.reset_PtYPhiM(new_pt, b_rap, pb.phi()); - } - } -} - -} //namespace Nsubjettiness - -} -} diff --git a/src/Tools/fjcontrib/Nsubjettiness/Makefile.am b/src/Tools/fjcontrib/Nsubjettiness/Makefile.am deleted file mode 100644 --- a/src/Tools/fjcontrib/Nsubjettiness/Makefile.am +++ /dev/null @@ -1,13 +0,0 @@ -noinst_LTLIBRARIES = libRivetNsubjettiness.la - -libRivetNsubjettiness_la_SOURCES = \ - Nsubjettiness.cc \ - Njettiness.cc \ - NjettinessPlugin.cc \ - XConePlugin.cc \ - MeasureDefinition.cc \ - ExtraRecombiners.cc \ - AxesDefinition.cc \ - TauComponents.cc - -libRivetNsubjettiness_la_CPPFLAGS = $(AM_CPPFLAGS) -I${top_srcdir}/include/Rivet/Tools/fjcontrib diff --git a/src/Tools/fjcontrib/Nsubjettiness/MeasureDefinition.cc b/src/Tools/fjcontrib/Nsubjettiness/MeasureDefinition.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/Nsubjettiness/MeasureDefinition.cc +++ /dev/null @@ -1,630 +0,0 @@ -// Nsubjettiness Package -// Questions/Comments? 
jthaler@jthaler.net -// -// Copyright (c) 2011-14 -// Jesse Thaler, Ken Van Tilburg, Christopher K. Vermilion, and TJ Wilkason -// -// $Id: MeasureDefinition.cc 946 2016-06-14 19:11:27Z jthaler $ -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see <http://www.gnu.org/licenses/>. -//---------------------------------------------------------------------- - - -// #include "AxesRefiner.hh" -#include "MeasureDefinition.hh" - -#include <iomanip> - - -namespace Rivet { -namespace fjcontrib { - -namespace Nsubjettiness { using namespace fastjet; - -/////// - // - // Measure Function - // - /////// - - -//descriptions updated to include measure type -std::string DefaultMeasure::description() const { - std::stringstream stream; - stream << std::fixed << std::setprecision(2) - << "Default Measure (should not be used directly)"; - return stream.str(); -} - -std::string NormalizedMeasure::description() const { - std::stringstream stream; - stream << std::fixed << std::setprecision(2) - << "Normalized Measure (beta = " << _beta << ", R0 = " << _R0 << ")"; - return stream.str(); -} - -std::string UnnormalizedMeasure::description() const { - std::stringstream stream; - stream << std::fixed << std::setprecision(2) - << "Unnormalized Measure (beta = " << _beta << ", in GeV)"; - return stream.str(); -} - - -std::string NormalizedCutoffMeasure::description() const { - 
std::stringstream stream; - stream << std::fixed << std::setprecision(2) - << "Normalized Cutoff Measure (beta = " << _beta << ", R0 = " << _R0 << ", Rcut = " << _Rcutoff << ")"; - return stream.str(); -} - -std::string UnnormalizedCutoffMeasure::description() const { - std::stringstream stream; - stream << std::fixed << std::setprecision(2) - << "Unnormalized Cutoff Measure (beta = " << _beta << ", Rcut = " << _Rcutoff << ", in GeV)"; - return stream.str(); -} - -//std::string DeprecatedGeometricMeasure::description() const { -// std::stringstream stream; -// stream << std::fixed << std::setprecision(2) -// << "Deprecated Geometric Measure (beta = " << _jet_beta << ", in GeV)"; -// return stream.str(); -//} - -//std::string DeprecatedGeometricCutoffMeasure::description() const { -// std::stringstream stream; -// stream << std::fixed << std::setprecision(2) -// << "Deprecated Geometric Cutoff Measure (beta = " << _jet_beta << ", Rcut = " << _Rcutoff << ", in GeV)"; -// return stream.str(); -//} - -std::string ConicalMeasure::description() const { - std::stringstream stream; - stream << std::fixed << std::setprecision(2) - << "Conical Measure (beta = " << _beta << ", Rcut = " << _Rcutoff << ", in GeV)"; - return stream.str(); -} - -std::string OriginalGeometricMeasure::description() const { - std::stringstream stream; - stream << std::fixed << std::setprecision(2) - << "Original Geometric Measure (Rcut = " << _Rcutoff << ", in GeV)"; - return stream.str(); -} - -std::string ModifiedGeometricMeasure::description() const { - std::stringstream stream; - stream << std::fixed << std::setprecision(2) - << "Modified Geometric Measure (Rcut = " << _Rcutoff << ", in GeV)"; - return stream.str(); -} - -std::string ConicalGeometricMeasure::description() const { - std::stringstream stream; - stream << std::fixed << std::setprecision(2) - << "Conical Geometric Measure (beta = " << _jet_beta << ", gamma = " << _beam_gamma << ", Rcut = " << _Rcutoff << ", in GeV)"; - return 
stream.str(); -} - - -std::string XConeMeasure::description() const { - std::stringstream stream; - stream << std::fixed << std::setprecision(2) - << "XCone Measure (beta = " << _jet_beta << ", Rcut = " << _Rcutoff << ", in GeV)"; - return stream.str(); -} - -// Return all of the necessary TauComponents for specific input particles and axes -TauComponents MeasureDefinition::component_result(const std::vector& particles, - const std::vector& axes) const { - - // first find partition - TauPartition partition = get_partition(particles,axes); - - // then return result calculated from partition - return component_result_from_partition(partition,axes); -} - -TauPartition MeasureDefinition::get_partition(const std::vector& particles, - const std::vector& axes) const { - - TauPartition myPartition(axes.size()); - - // Figures out the partiting of the input particles into the various jet pieces - // Based on which axis the parition is closest to - for (unsigned i = 0; i < particles.size(); i++) { - - // find minimum distance; start with beam (-1) for reference - int j_min = -1; - double minRsq; - if (has_beam()) minRsq = beam_distance_squared(particles[i]); - else minRsq = std::numeric_limits::max(); // make it large value - - - // check to see which axis the particle is closest to - for (unsigned j = 0; j < axes.size(); j++) { - double tempRsq = jet_distance_squared(particles[i],axes[j]); // delta R distance - - if (tempRsq < minRsq) { - minRsq = tempRsq; - j_min = j; - } - } - - if (j_min == -1) { - assert(has_beam()); // should have beam for this to make sense. - myPartition.push_back_beam(particles[i],i); - } else { - myPartition.push_back_jet(j_min,particles[i],i); - } - } - - return myPartition; -} - -// Uses existing partition and calculates result -// TODO: Can we cache this for speed up when doing area subtraction? 
-TauComponents MeasureDefinition::component_result_from_partition(const TauPartition& partition, - const std::vector& axes) const { - - std::vector jetPieces(axes.size(), 0.0); - double beamPiece = 0.0; - - double tauDen = 0.0; - if (!has_denominator()) tauDen = 1.0; // if no denominator, then 1.0 for no normalization factor - - // first find jet pieces - for (unsigned j = 0; j < axes.size(); j++) { - std::vector thisPartition = partition.jet(j).constituents(); - for (unsigned i = 0; i < thisPartition.size(); i++) { - jetPieces[j] += jet_numerator(thisPartition[i],axes[j]); //numerator jet piece - if (has_denominator()) tauDen += denominator(thisPartition[i]); // denominator - } - } - - // then find beam piece - if (has_beam()) { - std::vector beamPartition = partition.beam().constituents(); - - for (unsigned i = 0; i < beamPartition.size(); i++) { - beamPiece += beam_numerator(beamPartition[i]); //numerator beam piece - if (has_denominator()) tauDen += denominator(beamPartition[i]); // denominator - } - } - - // create jets for storage in TauComponents - std::vector jets = partition.jets(); - - return TauComponents(_tau_mode, jetPieces, beamPiece, tauDen, jets, axes); -} - -// new methods added to generalize energy and angle squared for different measure types -double DefaultMeasure::energy(const PseudoJet& jet) const { - double energy; - switch (_measure_type) { - case pt_R : - case perp_lorentz_dot : - energy = jet.perp(); - break; - case E_theta : - case lorentz_dot : - energy = jet.e(); - break; - default : { - assert(_measure_type == pt_R || _measure_type == E_theta || _measure_type == lorentz_dot || _measure_type == perp_lorentz_dot); - energy = std::numeric_limits::quiet_NaN(); - break; - } - } - return energy; -} - -double DefaultMeasure::angleSquared(const PseudoJet& jet1, const PseudoJet& jet2) const { - double pseudoRsquared; - switch(_measure_type) { - case pt_R : { - pseudoRsquared = jet1.squared_distance(jet2); - break; - } - case E_theta : { - // 
doesn't seem to be a fastjet built in for this - double dot = jet1.px()*jet2.px() + jet1.py()*jet2.py() + jet1.pz()*jet2.pz(); - double norm1 = sqrt(jet1.px()*jet1.px() + jet1.py()*jet1.py() + jet1.pz()*jet1.pz()); - double norm2 = sqrt(jet2.px()*jet2.px() + jet2.py()*jet2.py() + jet2.pz()*jet2.pz()); - - double costheta = dot/(norm1 * norm2); - if (costheta > 1.0) costheta = 1.0; // Need to handle case of numerical overflow - double theta = acos(costheta); - pseudoRsquared = theta*theta; - break; - } - case lorentz_dot : { - double dotproduct = dot_product(jet1,jet2); - pseudoRsquared = 2.0 * dotproduct / (jet1.e() * jet2.e()); - break; - } - case perp_lorentz_dot : { - PseudoJet lightJet = lightFrom(jet2); // assuming jet2 is the axis - double dotproduct = dot_product(jet1,lightJet); - pseudoRsquared = 2.0 * dotproduct / (lightJet.pt() * jet1.pt()); - break; - } - default : { - assert(_measure_type == pt_R || _measure_type == E_theta || _measure_type == lorentz_dot || _measure_type == perp_lorentz_dot); - pseudoRsquared = std::numeric_limits::quiet_NaN(); - break; - } - } - - return pseudoRsquared; - -} - - -/////// -// -// Axes Refining -// -/////// - -// uses minimization of N-jettiness to continually update axes until convergence. 
-// The function returns the axes found at the (local) minimum -// This is the general axes refiner that can be used for a generic measure (but is -// overwritten in the case of the conical measure and the deprecated geometric measure) -std::vector MeasureDefinition::get_one_pass_axes(int n_jets, - const std::vector & particles, - const std::vector& currentAxes, - int nAttempts, - double accuracy) const { - - assert(n_jets == (int)currentAxes.size()); - - std::vector seedAxes = currentAxes; - - std::vector temp_axes(seedAxes.size(),fastjet::PseudoJet(0,0,0,0)); - for (unsigned int k = 0; k < seedAxes.size(); k++) { - seedAxes[k] = lightFrom(seedAxes[k]) * seedAxes[k].E(); // making light-like, but keeping energy - } - - double seedTau = result(particles, seedAxes); - - std::vector bestAxesSoFar = seedAxes; - double bestTauSoFar = seedTau; - - for (int i_att = 0; i_att < nAttempts; i_att++) { - - std::vector newAxes(seedAxes.size(),fastjet::PseudoJet(0,0,0,0)); - std::vector summed_jets(seedAxes.size(), fastjet::PseudoJet(0,0,0,0)); - - // find closest axis and assign to that - for (unsigned int i = 0; i < particles.size(); i++) { - - // start from unclustered beam measure - int minJ = -1; - double minDist = beam_distance_squared(particles[i]); - - // which axis am I closest to? - for (unsigned int j = 0; j < seedAxes.size(); j++) { - double tempDist = jet_distance_squared(particles[i],seedAxes[j]); - if (tempDist < minDist) { - minDist = tempDist; - minJ = j; - } - } - - // if not unclustered, then cluster - if (minJ != -1) { - summed_jets[minJ] += particles[i]; // keep track of energy to use later. 
- if (_useAxisScaling) { - double pseudoMomentum = dot_product(lightFrom(seedAxes[minJ]),particles[i]) + accuracy; // need small offset to avoid potential divide by zero issues - double axis_scaling = (double)jet_numerator(particles[i], seedAxes[minJ])/pseudoMomentum; - - newAxes[minJ] += particles[i]*axis_scaling; - } - } - } - if (!_useAxisScaling) newAxes = summed_jets; - - // convert the axes to LightLike and then back to PseudoJet - for (unsigned int k = 0; k < newAxes.size(); k++) { - if (newAxes[k].perp() > 0) { - newAxes[k] = lightFrom(newAxes[k]); - newAxes[k] *= summed_jets[k].E(); // scale by energy to get sensible result - } - } - - // calculate tau on new axes - double newTau = result(particles, newAxes); - - // find the smallest value of tau (and the corresponding axes) so far - if (newTau < bestTauSoFar) { - bestAxesSoFar = newAxes; - bestTauSoFar = newTau; - } - - if (fabs(newTau - seedTau) < accuracy) {// close enough for jazz - seedAxes = newAxes; - seedTau = newTau; - break; - } - - seedAxes = newAxes; - seedTau = newTau; - -} - - // return the axes corresponding to the smallest tau found throughout all iterations; - // this is to prevent the minimization from returning a non-minimized value of tau due to potential oscillations around the minimum - return bestAxesSoFar; - -} - - -// One pass minimization for the DefaultMeasure - -// Given starting axes, update to find better axes by using Kmeans clustering around the old axes -template <int N> -std::vector<LightLikeAxis> DefaultMeasure::UpdateAxesFast(const std::vector<LightLikeAxis> & old_axes, - const std::vector<fastjet::PseudoJet> & inputJets, - double accuracy - ) const { - assert(old_axes.size() == N); - - // some storage, declared static to save allocation/re-allocation costs - static LightLikeAxis new_axes[N]; - static fastjet::PseudoJet new_jets[N]; - for (int n = 0; n < N; ++n) { - new_axes[n].reset(0.0,0.0,0.0,0.0); - new_jets[n].reset_momentum(0.0,0.0,0.0,0.0); - } - - double precision = accuracy; //TODO: actually cascade this in - 
/////////////// Assignment Step ////////////////////////////////////////////////////////// - std::vector assignment_index(inputJets.size()); - int k_assign = -1; - - for (unsigned i = 0; i < inputJets.size(); i++){ - double smallestDist = std::numeric_limits::max(); //large number - for (int k = 0; k < N; k++) { - double thisDist = old_axes[k].DistanceSq(inputJets[i]); - if (thisDist < smallestDist) { - smallestDist = thisDist; - k_assign = k; - } - } - if (smallestDist > sq(_Rcutoff)) {k_assign = -1;} - assignment_index[i] = k_assign; - } - - //////////////// Update Step ///////////////////////////////////////////////////////////// - double distPhi, old_dist; - for (unsigned i = 0; i < inputJets.size(); i++) { - int old_jet_i = assignment_index[i]; - if (old_jet_i == -1) {continue;} - - const fastjet::PseudoJet& inputJet_i = inputJets[i]; - LightLikeAxis& new_axis_i = new_axes[old_jet_i]; - double inputPhi_i = inputJet_i.phi(); - double inputRap_i = inputJet_i.rap(); - - // optimize pow() call - // add noise (the precision term) to make sure we don't divide by zero - if (_beta == 1.0) { - double DR = std::sqrt(sq(precision) + old_axes[old_jet_i].DistanceSq(inputJet_i)); - old_dist = 1.0/DR; - } else if (_beta == 2.0) { - old_dist = 1.0; - } else if (_beta == 0.0) { - double DRSq = sq(precision) + old_axes[old_jet_i].DistanceSq(inputJet_i); - old_dist = 1.0/DRSq; - } else { - old_dist = sq(precision) + old_axes[old_jet_i].DistanceSq(inputJet_i); - old_dist = std::pow(old_dist, (0.5*_beta-1.0)); - } - - // TODO: Put some of these addition functions into light-like axes - // rapidity sum - new_axis_i.set_rap(new_axis_i.rap() + inputJet_i.perp() * inputRap_i * old_dist); - // phi sum - distPhi = inputPhi_i - old_axes[old_jet_i].phi(); - if (fabs(distPhi) <= M_PI){ - new_axis_i.set_phi( new_axis_i.phi() + inputJet_i.perp() * inputPhi_i * old_dist ); - } else if (distPhi > M_PI) { - new_axis_i.set_phi( new_axis_i.phi() + inputJet_i.perp() * (-2*M_PI + inputPhi_i) * 
old_dist ); - } else if (distPhi < -M_PI) { - new_axis_i.set_phi( new_axis_i.phi() + inputJet_i.perp() * (+2*M_PI + inputPhi_i) * old_dist ); - } - // weights sum - new_axis_i.set_weight( new_axis_i.weight() + inputJet_i.perp() * old_dist ); - // momentum magnitude sum - new_jets[old_jet_i] += inputJet_i; - } - // normalize sums - for (int k = 0; k < N; k++) { - if (new_axes[k].weight() == 0) { - // no particles were closest to this axis! Return to old axis instead of (0,0,0,0) - new_axes[k] = old_axes[k]; - } else { - new_axes[k].set_rap( new_axes[k].rap() / new_axes[k].weight() ); - new_axes[k].set_phi( new_axes[k].phi() / new_axes[k].weight() ); - new_axes[k].set_phi( std::fmod(new_axes[k].phi() + 2*M_PI, 2*M_PI) ); - new_axes[k].set_mom( std::sqrt(new_jets[k].modp2()) ); - } - } - std::vector new_axes_vec(N); - for (unsigned k = 0; k < N; ++k) new_axes_vec[k] = new_axes[k]; - return new_axes_vec; -} - -// Given N starting axes, this function updates all axes to find N better axes. -// (This is just a wrapper for the templated version above.) 
-// TODO: Consider removing this in a future version -std::vector DefaultMeasure::UpdateAxes(const std::vector & old_axes, - const std::vector & inputJets, - double accuracy) const { - int N = old_axes.size(); - switch (N) { - case 1: return UpdateAxesFast<1>(old_axes, inputJets, accuracy); - case 2: return UpdateAxesFast<2>(old_axes, inputJets, accuracy); - case 3: return UpdateAxesFast<3>(old_axes, inputJets, accuracy); - case 4: return UpdateAxesFast<4>(old_axes, inputJets, accuracy); - case 5: return UpdateAxesFast<5>(old_axes, inputJets, accuracy); - case 6: return UpdateAxesFast<6>(old_axes, inputJets, accuracy); - case 7: return UpdateAxesFast<7>(old_axes, inputJets, accuracy); - case 8: return UpdateAxesFast<8>(old_axes, inputJets, accuracy); - case 9: return UpdateAxesFast<9>(old_axes, inputJets, accuracy); - case 10: return UpdateAxesFast<10>(old_axes, inputJets, accuracy); - case 11: return UpdateAxesFast<11>(old_axes, inputJets, accuracy); - case 12: return UpdateAxesFast<12>(old_axes, inputJets, accuracy); - case 13: return UpdateAxesFast<13>(old_axes, inputJets, accuracy); - case 14: return UpdateAxesFast<14>(old_axes, inputJets, accuracy); - case 15: return UpdateAxesFast<15>(old_axes, inputJets, accuracy); - case 16: return UpdateAxesFast<16>(old_axes, inputJets, accuracy); - case 17: return UpdateAxesFast<17>(old_axes, inputJets, accuracy); - case 18: return UpdateAxesFast<18>(old_axes, inputJets, accuracy); - case 19: return UpdateAxesFast<19>(old_axes, inputJets, accuracy); - case 20: return UpdateAxesFast<20>(old_axes, inputJets, accuracy); - default: std::cout << "N-jettiness is hard-coded to only allow up to 20 jets!" << std::endl; - return std::vector(); - } - -} - -// uses minimization of N-jettiness to continually update axes until convergence. 
-// The function returns the axes found at the (local) minimum -std::vector DefaultMeasure::get_one_pass_axes(int n_jets, - const std::vector & inputJets, - const std::vector& seedAxes, - int nAttempts, - double accuracy - ) const { - - // if the measure type doesn't use the pt_R metric, then the standard minimization scheme should be used - if (_measure_type != pt_R) { - return MeasureDefinition::get_one_pass_axes(n_jets, inputJets, seedAxes, nAttempts, accuracy); - } - - // convert from PseudoJets to LightLikeAxes - std::vector< LightLikeAxis > old_axes(n_jets, LightLikeAxis(0,0,0,0)); - for (int k = 0; k < n_jets; k++) { - old_axes[k].set_rap( seedAxes[k].rap() ); - old_axes[k].set_phi( seedAxes[k].phi() ); - old_axes[k].set_mom( seedAxes[k].modp() ); - } - - // Find new axes by iterating (only one pass here) - std::vector< LightLikeAxis > new_axes(n_jets, LightLikeAxis(0,0,0,0)); - double cmp = std::numeric_limits::max(); //large number - int h = 0; - - while (cmp > accuracy && h < nAttempts) { // Keep updating axes until near-convergence or too many update steps - cmp = 0.0; - h++; - new_axes = UpdateAxes(old_axes, inputJets,accuracy); // Update axes - for (int k = 0; k < n_jets; k++) { - cmp += old_axes[k].Distance(new_axes[k]); - } - cmp = cmp / ((double) n_jets); - old_axes = new_axes; - } - - // Convert from internal LightLikeAxes to PseudoJet - std::vector outputAxes; - for (int k = 0; k < n_jets; k++) { - fastjet::PseudoJet temp = old_axes[k].ConvertToPseudoJet(); - outputAxes.push_back(temp); - } - - // this is used to debug the minimization routine to make sure that it works. 
- /*bool do_debug = false; - if (do_debug) { - // get this information to make sure that minimization is working properly - double seed_tau = result(inputJets, seedAxes); - double outputTau = result(inputJets, outputAxes); - assert(outputTau <= seed_tau); - }*/ - - return outputAxes; -} - -//// One-pass minimization for the Deprecated Geometric Measure -//// Uses minimization of the geometric distance in order to find the minimum axes. -//// It continually updates until it reaches convergence or it reaches the maximum number of attempts. -//// This is essentially the same as a stable cone finder. -//std::vector<fastjet::PseudoJet> DeprecatedGeometricCutoffMeasure::get_one_pass_axes(int n_jets, -// const std::vector<fastjet::PseudoJet> & particles, -// const std::vector<fastjet::PseudoJet>& currentAxes, -// int nAttempts, -// double accuracy) const { -// -// assert(n_jets == (int)currentAxes.size()); //added int casting to get rid of compiler warning -// -// std::vector<fastjet::PseudoJet> seedAxes = currentAxes; -// double seedTau = result(particles, seedAxes); -// -// for (int i = 0; i < nAttempts; i++) { -// -// std::vector<fastjet::PseudoJet> newAxes(seedAxes.size(),fastjet::PseudoJet(0,0,0,0)); -// -// // find closest axis and assign to that -// for (unsigned int i = 0; i < particles.size(); i++) { -// -// // start from unclustered beam measure -// int minJ = -1; -// double minDist = beam_distance_squared(particles[i]); -// -// // which axis am I closest to? -// for (unsigned int j = 0; j < seedAxes.size(); j++) { -// double tempDist = jet_distance_squared(particles[i],seedAxes[j]); -// if (tempDist < minDist) { -// minDist = tempDist; -// minJ = j; -// } -// } -// -// // if not unclustered, then cluster -// if (minJ != -1) newAxes[minJ] += particles[i]; -// } -// -// // calculate tau on new axes -// seedAxes = newAxes; -// double tempTau = result(particles, newAxes); -// -// // close enough to stop? 
-// if (fabs(tempTau - seedTau) < accuracy) break; -// seedTau = tempTau; -// } -// -// return seedAxes; -//} - - -// Go from internal LightLikeAxis to PseudoJet -fastjet::PseudoJet LightLikeAxis::ConvertToPseudoJet() { - double px, py, pz, E; - E = _mom; - pz = (std::exp(2.0*_rap) - 1.0) / (std::exp(2.0*_rap) + 1.0) * E; - px = std::cos(_phi) * std::sqrt( std::pow(E,2) - std::pow(pz,2) ); - py = std::sin(_phi) * std::sqrt( std::pow(E,2) - std::pow(pz,2) ); - return fastjet::PseudoJet(px,py,pz,E); -} - -} //namespace Nsubjettiness - -} -} diff --git a/src/Tools/fjcontrib/Nsubjettiness/NEWS b/src/Tools/fjcontrib/Nsubjettiness/NEWS deleted file mode 100644 --- a/src/Tools/fjcontrib/Nsubjettiness/NEWS +++ /dev/null @@ -1,99 +0,0 @@ -------------------------- -Version 2 -------------------------- - -This is a streamlining of the N-subjettiness code, developed mainly by TJ -Wilkason. The core functionality is unchanged, but classes have been -dramatically reorganized to allow for later expansion. Because the API for -Njettiness has changed, we have called this v2 (http://semver.org). - -Note that we have maintained backwards compatibility for the typical ways that -Nsubjettiness was used. In particular, all of the Nsubjettiness class code in -the example file from v1.0.3 still compiles, as does the NjettinessPlugin class -code that uses the default measure. - -The key new features are: - - * NsubjettinessRatio: Direct access to tau_N / tau_M (the most requested - feature) - * MeasureDefinition to allow access to normalized and unnormalized measures - * AxesDefinition to allow for access to more general axes modes - * Winner-Take-All recombination axes: a faster way to find axes than beta=1 - minimization, but with comparable performance. - * TauComponents to get access to the pieces of the N-(sub)jettiness - calculation. - * TauExtras to get complete access to partitioning and axes information. 
- * For clarity, split the example file into an example_basic_usage and - example_advanced_usage (and example_advanced_usage_ee for e+e- collisions). - * In Nsubjettiness, access to seedAxes() and currentAxes() to figure out the - axes used before and after minimization. - * In Nsubjettiness, access to currentSubjets() to get the subjet fourvectors. - * (v2.2) XConePlugin, which improves on the previous NjettinessPlugin to use - N-jettiness as a jet finder using the new ConicalGeometric measure. - --- 2.2.4: (Jun 14, 2016) Fixed bug where multi-pass minimization could yield - pathological axes (thanks Gregory Soyez) --- 2.2.3: (Apr 4, 2016) Fixed bug where a jet with fewer than N constituents - could give random value for tau_N (thanks Nathan Hartland) --- 2.2.2: (Mar 29, 2016) Updating SharedPtr interface for FJ 3.2 --- 2.2.1: (Sept 28, 2015) Fix of small Makefile bug --- 2.2.0: (Sept 7, 2015) Inclusion of the XCone jet algorithm, as well as a - few new measures, including the "conical geometric" measure and - options for e+e- colliders. Improvement of the - Measure/AxesDefinition interface to allow for direct - use in calculations. - * Fixed bug where MultiPass_Axes did not actually minimize - * Fixed floating point error with infinity^2 in various measures - --- 2.1.0: (July 9, 2014) Inclusion of Measure/AxesDefinition interface. - This was the first publicly available version of Nsubjettiness v2. --- 2.0.0: Initial release of v2.0. This was never officially made public. - -------------------------- -Version 1 -------------------------- - -This was a new release using FastJet contrib framework, primarily developed by -Jesse Thaler. - --- 1.0.3: Added inlines to fix compile issue (thanks Matthew Low) --- 1.0.2: Fixed potential dependency issue (thanks FJ authors) --- 1.0.1: Fixed memory leak in Njettiness.hh (thanks FJ authors) --- 1.0.0: New release using FastJet contrib framework. This includes a new -makefile and a simplified example program. 
- -------------------------- -Previous Versions -------------------------- - -The previous versions of this code were developed initially by Ken Van Tilburg, -tweaked by Jesse Thaler, and made into a robust FastJet add on by Chris -Vermilion. - -Previous versions available from: - http://jthaler.net/jets/Njettiness-0.5.1.tar.gz (Experimental Version) - http://jthaler.net/jets/Njettiness-0.4.1.tar.gz (Stable Version) - -Previous version history: --- 0.5.1: For Njettiness Plugin, added access to currentTau values and axes via - ClusterSequence::Extras class. (thanks to Dinko Ferencek and John - Paul Chou) --- 0.5.0: Corrected fatal error in ConstituentTauValue (TauValue unaffected). - Started process of allowing for more general measures and alternative - minimization schemes. Extremely preliminary inclusion of alternative - "geometric" measure. --- 0.4.1: Corrected bug where a too-small value of Rcut would cause the - minimization procedure to fail (thanks Marat Freytsis, Brian Shuve) --- 0.4.0: Adding Nsubjettiness FunctionOfPseudoJet. Re-organizing file - structure and doing some re-naming to clarify Njettiness vs. - Nsubjettiness. Some speedup in UpdateAxes code. (CKV) --- 0.3.2: Returns zero instead of a segmentation fault when the number of - particles in a jet is smaller than the N value in tau_N (thanks - Grigory Ovanesyan) --- 0.3.2: Fixed -Wshadow errors (thanks Grigory Ovanesyan) --- 0.3.1: Fixed stray comma/semicolon compiler error (thanks Grigory Ovanesyan) --- 0.3.1: Corrected tarbomb issue (thanks Jonathan Walsh) --- 0.3.1: Added anti-kT seeds as option --- 0.3.1: Fixed bug in minimization code with R_cutoff (thanks Chris Vermilion) --- 0.3.1: Added getPartition() and getJets() functions as helper functions for - Chris Vermilion. 
(JT) diff --git a/src/Tools/fjcontrib/Nsubjettiness/Njettiness.cc b/src/Tools/fjcontrib/Nsubjettiness/Njettiness.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/Nsubjettiness/Njettiness.cc +++ /dev/null @@ -1,225 +0,0 @@ -// Nsubjettiness Package -// Questions/Comments? jthaler@jthaler.net -// -// Copyright (c) 2011-14 -// Jesse Thaler, Ken Van Tilburg, Christopher K. Vermilion, and TJ Wilkason -// -// $Id: Njettiness.cc 933 2016-04-04 22:23:32Z jthaler $ -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see <http://www.gnu.org/licenses/>. 
-//---------------------------------------------------------------------- - -#include "Njettiness.hh" - -namespace Rivet { -namespace fjcontrib { - -namespace Nsubjettiness { using namespace fastjet; - - -/////// -// -// Main Njettiness Class -// -/////// - -LimitedWarning Njettiness::_old_measure_warning; -LimitedWarning Njettiness::_old_axes_warning; - - -// Constructor -Njettiness::Njettiness(const AxesDefinition & axes_def, const MeasureDefinition & measure_def) -: _axes_def(axes_def.create()), _measure_def(measure_def.create()) {} - -// setAxes for Manual mode -void Njettiness::setAxes(const std::vector<fastjet::PseudoJet> & myAxes) { - if (_axes_def->needsManualAxes()) { - _currentAxes = myAxes; - } else { - throw Error("You can only use setAxes for manual AxesDefinitions"); - } -} - -// Calculates and returns all TauComponents that user would want. -// This information is stored in _current_tau_components for later access as well. -TauComponents Njettiness::getTauComponents(unsigned n_jets, const std::vector<fastjet::PseudoJet> & inputJets) const { - if (inputJets.size() <= n_jets) { //if not enough particles, return zero - _currentAxes = inputJets; - _currentAxes.resize(n_jets,fastjet::PseudoJet(0.0,0.0,0.0,0.0)); - - // Put in empty tau components - std::vector<double> dummy_jet_pieces; - _current_tau_components = TauComponents(UNDEFINED_SHAPE, - dummy_jet_pieces, - 0.0, - 1.0, - _currentAxes, - _currentAxes - ); - _seedAxes = _currentAxes; - _currentPartition = TauPartition(n_jets); // empty partition - } else { - assert(_axes_def); // this should never fail. 
- - if (_axes_def->needsManualAxes()) { // if manual mode - // take current axes as seeds - _seedAxes = _currentAxes; - - // refine axes if requested - _currentAxes = _axes_def->get_refined_axes(n_jets,inputJets,_seedAxes, _measure_def.get()); - } else { // non-manual axes - - //set starting point for minimization - _seedAxes = _axes_def->get_starting_axes(n_jets,inputJets,_measure_def.get()); - - // refine axes as needed - _currentAxes = _axes_def->get_refined_axes(n_jets,inputJets,_seedAxes, _measure_def.get()); - - // NOTE: The above two function calls are combined in "AxesDefinition::get_axes" - // but are separated here to allow seed axes to be stored. - } - - // Find and store partition - _currentPartition = _measure_def->get_partition(inputJets,_currentAxes); - - // Find and store tau value - _current_tau_components = _measure_def->component_result_from_partition(_currentPartition, _currentAxes); // sets current Tau Values - } - return _current_tau_components; -} - - -/////// -// -// Below is code for backward compatibility to use the old interface. -// May be deleted in a future version -// -/////// - -Njettiness::Njettiness(AxesMode axes_mode, const MeasureDefinition & measure_def) -: _axes_def(createAxesDef(axes_mode)), _measure_def(measure_def.create()) {} - -// Convert from MeasureMode enum to MeasureDefinition -// This returns a pointer that will be claimed by a SharedPtr -MeasureDefinition* Njettiness::createMeasureDef(MeasureMode measure_mode, int num_para, double para1, double para2, double para3) const { - - _old_measure_warning.warn("Njettiness::createMeasureDef: You are using the old MeasureMode way of specifying N-subjettiness measures. This is deprecated as of v2.1 and will be removed in v3.0. 
Please use MeasureDefinition instead."); - - // definition of maximum Rcutoff for non-cutoff measures, changed later by other measures - double Rcutoff = std::numeric_limits::max(); //large number - // Most (but not all) measures have some kind of beta value - double beta = std::numeric_limits::quiet_NaN(); - // The normalized measures have an R0 value. - double R0 = std::numeric_limits::quiet_NaN(); - - // Find the MeasureFunction and set the parameters. - switch (measure_mode) { - case normalized_measure: - beta = para1; - R0 = para2; - if(num_para == 2) { - return new NormalizedMeasure(beta,R0); - } else { - throw Error("normalized_measure needs 2 parameters (beta and R0)"); - } - break; - case unnormalized_measure: - beta = para1; - if(num_para == 1) { - return new UnnormalizedMeasure(beta); - } else { - throw Error("unnormalized_measure needs 1 parameter (beta)"); - } - break; - case geometric_measure: - throw Error("This class has been removed. Please use OriginalGeometricMeasure, ModifiedGeometricMeasure, or ConicalGeometricMeasure with the new Njettiness constructor."); - break; - case normalized_cutoff_measure: - beta = para1; - R0 = para2; - Rcutoff = para3; //Rcutoff parameter is 3rd parameter in normalized_cutoff_measure - if (num_para == 3) { - return new NormalizedCutoffMeasure(beta,R0,Rcutoff); - } else { - throw Error("normalized_cutoff_measure has 3 parameters (beta, R0, Rcutoff)"); - } - break; - case unnormalized_cutoff_measure: - beta = para1; - Rcutoff = para2; //Rcutoff parameter is 2nd parameter in normalized_cutoff_measure - if (num_para == 2) { - return new UnnormalizedCutoffMeasure(beta,Rcutoff); - } else { - throw Error("unnormalized_cutoff_measure has 2 parameters (beta, Rcutoff)"); - } - break; - case geometric_cutoff_measure: - throw Error("This class has been removed. 
Please use OriginalGeometricMeasure, ModifiedGeometricMeasure, or ConicalGeometricMeasure with the new Njettiness constructor."); - default: - assert(false); - break; - } - return NULL; -} - -// Convert from AxesMode enum to AxesDefinition -// This returns a pointer that will be claimed by a SharedPtr -AxesDefinition* Njettiness::createAxesDef(Njettiness::AxesMode axes_mode) const { - - _old_axes_warning.warn("Njettiness::createAxesDef: You are using the old AxesMode way of specifying N-subjettiness axes. This is deprecated as of v2.1 and will be removed in v3.0. Please use AxesDefinition instead."); - - - switch (axes_mode) { - case wta_kt_axes: - return new WTA_KT_Axes(); - case wta_ca_axes: - return new WTA_CA_Axes(); - case kt_axes: - return new KT_Axes(); - case ca_axes: - return new CA_Axes(); - case antikt_0p2_axes: - return new AntiKT_Axes(0.2); - case onepass_wta_kt_axes: - return new OnePass_WTA_KT_Axes(); - case onepass_wta_ca_axes: - return new OnePass_WTA_CA_Axes(); - case onepass_kt_axes: - return new OnePass_KT_Axes(); - case onepass_ca_axes: - return new OnePass_CA_Axes(); - case onepass_antikt_0p2_axes: - return new OnePass_AntiKT_Axes(0.2); - case onepass_manual_axes: - return new OnePass_Manual_Axes(); - case min_axes: - return new MultiPass_Axes(100); - case manual_axes: - return new Manual_Axes(); - default: - assert(false); - return NULL; - } -} - - - - - -} // namespace Nsubjettiness - -} -} diff --git a/src/Tools/fjcontrib/Nsubjettiness/NjettinessPlugin.cc b/src/Tools/fjcontrib/Nsubjettiness/NjettinessPlugin.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/Nsubjettiness/NjettinessPlugin.cc +++ /dev/null @@ -1,98 +0,0 @@ -// Nsubjettiness Package -// Questions/Comments? jthaler@jthaler.net -// -// Copyright (c) 2011-14 -// Jesse Thaler, Ken Van Tilburg, Christopher K. 
Vermilion, and TJ Wilkason -// -// $Id: NjettinessPlugin.cc 821 2015-06-15 18:50:53Z jthaler $ -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see <http://www.gnu.org/licenses/>. -//---------------------------------------------------------------------- - -#include "NjettinessPlugin.hh" - -namespace Rivet { -namespace fjcontrib { - -namespace Nsubjettiness{ - -LimitedWarning NjettinessPlugin::_old_constructor_warning; - -std::string NjettinessPlugin::description() const {return "N-jettiness jet finder";} - - -// Clusters the particles according to the Njettiness jet algorithm -// Apologies for the complication with this code, but we need to make -// a fake jet clustering tree. The partitioning is done by getPartitionList -void NjettinessPlugin::run_clustering(ClusterSequence& cs) const -{ - std::vector<fastjet::PseudoJet> particles = cs.jets(); - - // HACK: remove area information from particles (in case this is called by - // a ClusterSequenceArea. 
Will be fixed in a future FastJet release) - for (unsigned i = 0; i < particles.size(); i++) { - particles[i].set_structure_shared_ptr(SharedPtr<PseudoJetStructureBase>()); - } - - - TauComponents tau_components = _njettinessFinder.getTauComponents(_N, particles); - TauPartition tau_partition = _njettinessFinder.currentPartition(); - std::vector<std::list<int> > partition = tau_partition.jets_list(); - - std::vector<int> jet_indices_for_extras; - - // output clusterings for each jet - for (size_t i0 = 0; i0 < partition.size(); ++i0) { - size_t i = partition.size() - 1 - i0; // reversed order of reading to match axes order - std::list<int>& indices = partition[i]; - if (indices.size() == 0) continue; - while (indices.size() > 1) { - int merge_i = indices.back(); indices.pop_back(); - int merge_j = indices.back(); indices.pop_back(); - int newIndex; - double fakeDij = -1.0; - - cs.plugin_record_ij_recombination(merge_i, merge_j, fakeDij, newIndex); - - indices.push_back(newIndex); - } - double fakeDib = -1.0; - - int finalJet = indices.back(); - cs.plugin_record_iB_recombination(finalJet, fakeDib); - jet_indices_for_extras.push_back(cs.jets()[finalJet].cluster_hist_index()); // Get the four vector for the final jets to compare later. - } - -  //HACK: Re-reverse order of reading to match CS order - reverse(jet_indices_for_extras.begin(),jet_indices_for_extras.end()); - - // Store extra information about jets - NjettinessExtras * extras = new NjettinessExtras(tau_components,jet_indices_for_extras); - -#if FASTJET_VERSION_NUMBER>=30100 - cs.plugin_associate_extras(extras); -#else - // auto_ptr no longer supported, apparently - cs.plugin_associate_extras(std::auto_ptr<ClusterSequence::Extras>(extras)); -#endif - -} - - -} // namespace Nsubjettiness - -} -} diff --git a/src/Tools/fjcontrib/Nsubjettiness/Nsubjettiness.cc b/src/Tools/fjcontrib/Nsubjettiness/Nsubjettiness.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/Nsubjettiness/Nsubjettiness.cc +++ /dev/null @@ -1,58 +0,0 @@ -// Nsubjettiness Package -// Questions/Comments? 
jthaler@jthaler.net -// -// Copyright (c) 2011-14 -// Jesse Thaler, Ken Van Tilburg, Christopher K. Vermilion, and TJ Wilkason -// -// $Id: Nsubjettiness.cc 821 2015-06-15 18:50:53Z jthaler $ -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see <http://www.gnu.org/licenses/>. -//---------------------------------------------------------------------- - -#include "Nsubjettiness.hh" - -namespace Rivet { -namespace fjcontrib { - -namespace Nsubjettiness { - -using namespace fastjet; - -LimitedWarning Nsubjettiness::_old_constructor_warning; - - -//result returns tau_N with normalization dependent on what is specified in constructor -double Nsubjettiness::result(const PseudoJet& jet) const { - std::vector<fastjet::PseudoJet> particles = jet.constituents(); - return _njettinessFinder.getTau(_N, particles); -} - -TauComponents Nsubjettiness::component_result(const PseudoJet& jet) const { - std::vector<fastjet::PseudoJet> particles = jet.constituents(); - return _njettinessFinder.getTauComponents(_N, particles); -} - -//ratio result uses Nsubjettiness result to find the ratio tau_N/tau_M, where N and M are specified by user -double NsubjettinessRatio::result(const PseudoJet& jet) const { - double numerator = _nsub_numerator.result(jet); - double denominator = _nsub_denominator.result(jet); - return numerator/denominator; -} - -} // namespace Nsubjettiness - -} -} diff --git 
a/src/Tools/fjcontrib/Nsubjettiness/README b/src/Tools/fjcontrib/Nsubjettiness/README deleted file mode 100644 --- a/src/Tools/fjcontrib/Nsubjettiness/README +++ /dev/null @@ -1,394 +0,0 @@ --------------------------------------------------------------------------------- -Nsubjettiness Package --------------------------------------------------------------------------------- - -The Nsubjettiness package is based on the physics described in: - - Identifying Boosted Objects with N-subjettiness. - Jesse Thaler and Ken Van Tilburg. - JHEP 1103:015 (2011), arXiv:1011.2268. - - Maximizing Boosted Top Identification by Minimizing N-subjettiness. - Jesse Thaler and Ken Van Tilburg. - JHEP 1202:093 (2012), arXiv:1108.2701. - -New in v2.0 is the winner-take-all axis, which is described in: - - Jet Observables Without Jet Algorithms. - Daniele Bertolini, Tucker Chan, and Jesse Thaler. - JHEP 1404:013 (2014), arXiv:1310.7584. - - Jet Shapes with the Broadening Axis. - Andrew J. Larkoski, Duff Neill, and Jesse Thaler. - JHEP 1404:017 (2014), arXiv:1401.2158. - - Unpublished work by Gavin Salam - -New in v2.2 are new measures used in the XCone jet algorithm, described in: - - XCone: N-jettiness as an Exclusive Cone Jet Algorithm. - Iain W. Stewart, Frank J. Tackmann, Jesse Thaler, - Christopher K. Vermilion, and Thomas F. Wilkason. - arXiv:1508.01516. - - Resolving Boosted Jets with XCone. - Jesse Thaler and Thomas F. Wilkason. - arXiv:1508.01518. 
- -------------------------------------------------------------------------------- -Core Classes -------------------------------------------------------------------------------- - -There are various ways to access N-(sub)jettiness variables, described -in more detail below: - -Nsubjettiness [Nsubjettiness.hh]: - A FunctionOfPseudoJet<double> interface to measure the - N-subjettiness jet shape - (Recommended for most users) - -NsubjettinessRatio [Nsubjettiness.hh]: - A FunctionOfPseudoJet<double> interface to measure ratios of - two different N-subjettiness (i.e. tau3/tau2) - (Recommended for most users) - -XConePlugin [XConePlugin.hh]: - A FastJet plugin for using the XCone jet algorithm. - (Recommended for most users) - -NjettinessPlugin [NjettinessPlugin.hh]: - A FastJet plugin for finding jets by minimizing N-jettiness. - Same basic philosophy as XCone, but many more options. - (Recommended for advanced users only.) - -Njettiness [Njettiness.hh]: - Access to the core Njettiness code. - (Not recommended for users, since the interface might change) - -The code assumes that you have FastJet 3, but does not (yet) require FastJet 3.1 - --------------------------------------------------------------------------------- -Basic Usage: Nsubjettiness and NsubjettinessRatio [Nsubjettiness.hh] --------------------------------------------------------------------------------- - -Most users will only need to use the Nsubjettiness class. The basic -functionality is given by: - - Nsubjettiness nSub(N, AxesDefinition, MeasureDefinition) - // N specifies the number of (sub) jets to measure - // AxesDefinition is WTA_KT_Axes, OnePass_KT_Axes, etc. - // MeasureDefinition is UnnormalizedMeasure(beta), - // NormalizedMeasure(beta,R0), etc. 
- - // get tau value - double tauN = nSub.result(PseudoJet); - -Also available are ratios of N-subjettiness values - NsubjettinessRatio nSubRatio(N, M, AxesDefinition, - MeasureDefinition) - // N and M give tau_N / tau_M, all other options the same - -For example, if you just want the tau_2/tau_1 value of a jet, using recommended -parameter choices, do this: - - PseudoJet this_jet = /*from your favorite jet algorithm*/; - double beta = 1.0; - NsubjettinessRatio nSub21(2,1, - OnePass_WTA_KT_Axes(), - UnnormalizedMeasure(beta)); - double tau21 = nSub21(this_jet); - --------------------------------------------------------------------------------- -AxesDefinition [NjettinessDefinition.hh] --------------------------------------------------------------------------------- - -N-(sub)jettiness requires choosing axes as well as a measure (see below). There -are a number of axes choices available to the user, though modes with a (*) are -recommended. Arguments in parentheses are parameters that the user must set. - -Axes can be found using standard recursive clustering procedures. New in v2 is -the option to use the "winner-take-all" recombination scheme: -(*) KT_Axes // exclusive kt axes - CA_Axes // exclusive ca axes - AntiKT_Axes(R0) // inclusive hardest axes with antikt, R0 = radius -(*) WTA_KT_Axes // exclusive kt with winner-take-all recombination - WTA_CA_Axes // exclusive ca with winner-take-all recombination - -New in v2.2 are generalized recombination/clustering schemes: - GenET_GenKT_Axes(delta, p, R0 = inf) - WTA_GenKT_Axes(p, R0 = inf) - GenKT_Axes(p, R0 = inf) -Here, delta > 0 labels the generalized ET recombination scheme (delta = 1 for -standard ET scheme, delta = 2 for ET^2 scheme, delta = infinity for WTA scheme) -p >= 0 labels the generalized KT clustering metric (p = 0 for ca, p = 1 for kt), -R0 is the radius parameter, and the clustering is run in exclusive mode. The -GenKT_Axes mode uses standard E-scheme recombination. 
By default the value of -R0 is set to "infinity", namely fastjet::JetDefinition::max_allowable_R. - -Also new in v2.2 is option of identifying nExtra axes through exclusive -clustering and then looking at all (N + nExtra) choose N axes and finding the -one that gives the smallest N-(sub)jettiness value: - Comb_GenET_GenKT_Axes(nExtra, delta, p, R0 = inf) - Comb_WTA_GenKT_Axes(nExtra, p, R0 = inf) - Comb_GenKT_Axes(nExtra, p, R0 = inf) -These modes are not recommended for reasons of speed. - -Starting from any set of seed axes, one can run a minimization routine to find -a (local) minimum of N-(sub)jettiness. Note that the one-pass minimization -routine is tied to the choice of MeasureDefinition. -(*) OnePass_KT_Axes // one-pass minimization from kt starting point - OnePass_CA_Axes // one-pass min. from ca starting point - OnePass_AntiKT(R0) // one-pass min. from antikt starting point,R0=rad -(*) OnePass_WTA_KT_Axes // one-pass min. from wta_kt starting point - OnePass_WTA_CA_Axes // one-pass min. from wta_ca starting point - OnePass_GenET_GenKT_Axes(delta, p, R0 = inf) // one-pass min. from GenET/KT - OnePass_WTA_GenKT_Axes(p, R0 = inf) // one-pass min from WTA/GenKT - OnePass_GenKT_Axes(p, R0 = inf) // one-pass min from GenKT - -For one-pass minimization, OnePass_CA_Axes and OnePass_WTA_CA_Axes are not -recommended as they provide a poor choice of seed axes. - -In general, it is difficult to find the global minimum, but this mode attempts -to do so: - MultiPass_Axes(NPass) // axes that (attempt to) minimize N-subjettiness - // (NPass = 100 is typical) -This does multi-pass minimization from KT_Axes starting points. - -Finally, one can set manual axes: - Manual_Axes // set your own axes with setAxes() - OnePass_Manual_Axes // one-pass minimization from manual starting point - MultiPass_Manual_Axes(Npass) // multi-pass min. 
from manual - -If one wants to change the number of passes used by any of the axes finders, one -can call the function - setNPass(NPass,nAttempts,accuracy,noise_range) -where NPass = 0 only uses the seed axes, NPass = 1 is one-pass minimization, and -NPass = 100 is the default multi-pass. nAttempts is the number of iterations to -use in each pass, accuracy is how close to the minimum one tries to get, and -noise_range is how much in rapidity/azimuth the random axes are jiggled. - -For most cases, running with OnePass_KT_Axes or OnePass_WTA_KT_Axes gives -reasonable results (and the results are IRC safe). Because it uses random -number seeds, MultiPass_Axes is not IRC safe (and the code is rather slow). -Note that for the minimization routines, beta = 1.1 is faster than beta = 1, -with comparable performance. - --------------------------------------------------------------------------------- -MeasureDefinition [NjettinessDefinition.hh] --------------------------------------------------------------------------------- - -The value of N-(sub)jettiness depends crucially on the choice of measure. Each -measure has a different number of parameters, so one has to be careful when -switching between measures. The one indicated by (*) is the one recommended for -use by users new to Nsubjettiness. 
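As a concrete reference for what these measures compute, the unnormalized pt_R form is tau_N = sum_k pT_k * min_j (DeltaR_jk)^beta: each constituent contributes its pT weighted by its distance to the nearest axis, raised to the power beta. A minimal standalone sketch with invented toy values — the package's MeasureDefinition classes are the real implementation and additionally handle phi wrap-around, normalization and cutoffs:

```cpp
#include <cassert>
#include <cmath>

struct Particle { double pt, rap, phi; };

// Unnormalized pt_R measure: tau_N = sum_k pT_k * min_j (DeltaR_jk)^beta.
// Sketch of the definition only; assumes |dphi| < pi so no wrapping is needed.
double tau(const Particle* parts, int n_parts,
           const Particle* axes, int n_axes, double beta) {
  double sum = 0.0;
  for (int k = 0; k < n_parts; k++) {
    double min_dist = 1e300;
    for (int j = 0; j < n_axes; j++) {
      double drap = parts[k].rap - axes[j].rap;
      double dphi = parts[k].phi - axes[j].phi;
      double dR = std::sqrt(drap * drap + dphi * dphi);
      if (dR < min_dist) min_dist = dR;
    }
    sum += parts[k].pt * std::pow(min_dist, beta);
  }
  return sum;
}
```

Small tau_N means the event (or jet) is well described by N axes; the normalized measures divide this by a pT-weighted factor involving R0 to make it dimensionless.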
- -The original N-subjettiness measures are: - NormalizedMeasure(beta,R0) //default normalized measure with - //parameters beta and R0 (dimensionless) -(*) UnnormalizedMeasure(beta) //default unnormalized measure with just - //parameter beta (dimensionful) - -There are also measures that incorporate a radial cutoff: - NormalizedCutoffMeasure(beta,R0,Rcutoff) //normalized measure with - //additional Rcutoff - UnnormalizedCutoffMeasure(beta,Rcutoff) //unnormalized measure with - //additional Rcutoff - -For all of the above measures, there is an optional argument to change from the -ordinary pt_R distance measure recommended for pp collisions to an -E_theta distance measure recommended for ee collisions. There are also -lorentz_dot and perp_lorentz_dot distance measures recommended only for -advanced users. - -New for v2.2 is a set of measures defined in arXiv:1508.01516. First, there is -the "conical measure": - - ConicalMeasure(beta,R0) // same jets as UnnormalizedCutoffMeasure - // but differs in normalization and specifics - // of one-pass minimization - -Next, there is the geometric measure (as well as a modified version to yield -more conical jet regions): - - OriginalGeometricMeasure(R) // not recommended for analysis - ModifiedGeometricMeasure(R) - -(Prior to v2.2, there was a "GeometricMeasure" which unfortunately had the wrong -definition. These have been commented out in the code as -"DeprecatedGeometricMeasure" and "DeprecatedGeometricCutoffMeasure", but they -should not be used.) - -Next, there is a "conical geometric" measure: - - ConicalGeometricMeasure(beta, gamma, Rcutoff) - -This is a hybrid between the conical and geometric measures and is the basis for -the XCone jet algorithm. Finally, setting to the gamma = 1 default gives the -XCone default measure, which is used in the XConePlugin jet finder - -(*) XConeMeasure(beta,Rcutoff) - -where beta = 2 is the recommended default value and beta = 1 is the recoil-free -default. 
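All of the measures above carry the angular exponent beta, and beta controls which axis minimises the measure: for beta = 2 the optimum in each coordinate is the pT-weighted mean of the constituent positions, while beta = 1 favours a weighted-median-like ("hardest cluster") axis. A one-dimensional toy check of the beta = 2 case, with invented values:

```cpp
#include <cassert>
#include <cmath>

// For beta = 2 the quantity sum_k pT_k * (x_k - a)^2 is minimised by the
// pT-weighted mean position a -- the "thrust/mass"-like axis behaviour.
// (One-dimensional toy; real axes live in rapidity-azimuth space.)
double weighted_mean(const double* w, const double* x, int n) {
  double num = 0.0, den = 0.0;
  for (int i = 0; i < n; i++) { num += w[i] * x[i]; den += w[i]; }
  return num / den;
}

double beta2_measure(const double* w, const double* x, int n, double a) {
  double sum = 0.0;
  for (int i = 0; i < n; i++) sum += w[i] * (x[i] - a) * (x[i] - a);
  return sum;
}
```

The analogous statement for beta = 1 (weighted median) is why winner-take-all axes, which lock onto the hardest direction, approximate the beta = 1 minimum.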
- -------------------------------------------------------------------------------- -A note on beta dependence -------------------------------------------------------------------------------- - -The angular exponent in N-subjettiness is called beta. The original -N-subjettiness paper advocated beta = 1, but it is now understood that different -beta values can be useful in different contexts. The two main choices are: - -beta = 1: aka broadening/girth/width measure - the axes behave like the "median" in that they point to the hardest cluster - wta_kt_axes are approximately the same as minimizing beta = 1 measure - -beta = 2: aka thrust/mass measure - the axes behave like the "mean" in that they point along the jet momentum - kt_axes are approximately the same as minimizing beta = 2 measure - -N.B. The minimization routines are only valid for 1 < beta < 3. - -For quark/gluon discrimination with N = 1, beta~0.2 with wta_kt_axes appears -to be a good choice. - --------------------------------------------------------------------------------- -XConePlugin [XConePlugin.hh] --------------------------------------------------------------------------------- - -The XCone FastJet plugin is an exclusive cone jet finder which yields a -fixed N number of jets with approximately conical boundaries. The algorithm -finds N axes, and jets are simply the sum of particles closest to a given axis -(or unclustered if they are closest to the beam). Unlike the NjettinessPlugin -below, the user is restricted to using the XConeMeasure. - - XConePlugin plugin(N,R,beta=2); - JetDefinition def(&plugin); - ClusterSequence cs(vector<PseudoJet>,def); - vector<PseudoJet> jets = cs.inclusive_jets(); - -Note that despite being an exclusive jet algorithm, one finds the jets using the -inclusive_jets() call. - -The AxesDefinition and MeasureDefinition are defaulted in this measure to -OnePass_GenET_GenKT_Axes and XConeMeasure, respectively. 
The parameters chosen -for the OnePass_GenET_GenKT_Axes are defined according to the chosen value of -beta as delta = 1/(beta - 1) and p = 1/beta. These have been shown to give the -optimal choice of seed axes. The R value for finding the axes is chosen to be -the same as the R for the jet algorithm, although in principle, these two radii -could be different. - -N.B.: The order of the R, beta arguments is *reversed* from the XConeMeasure -itself, since this ordering is the more natural one to use for Plugins. We -apologize in advance for any confusion this might cause. - --------------------------------------------------------------------------------- -Advanced Usage: NjettinessPlugin [NjettinessPlugin.hh] --------------------------------------------------------------------------------- - -Same as the XConePlugin, but the axes-finding methods and measures are the same -as for Nsubjettiness, allowing more flexibility. - - NjettinessPlugin plugin(N, AxesDefinition, MeasureDefinition); - JetDefinition def(&plugin); - ClusterSequence cs(vector<PseudoJet>,def); - vector<PseudoJet> jets = cs.inclusive_jets(); - --------------------------------------------------------------------------------- -Very Advanced Usage: Njettiness [Njettiness.hh] --------------------------------------------------------------------------------- - -Most users will want to use the Nsubjettiness or NjettinessPlugin classes to -access N-(sub)jettiness information. For direct access to the Njettiness class, -one can use Njettiness.hh directly. This class is in constant evolution, so -users who wish to extend its functionality should contact the authors first. - --------------------------------------------------------------------------------- -TauComponents [MeasureDefinition.hh] --------------------------------------------------------------------------------- - -Most users will need only the value of N-subjettiness (i.e. tau) -itself. Advanced users can access the individual tau components (i.e.
-the individual numerator pieces, the denominator, etc.) - - TauComponents tauComp = nSub.component_result(jet); - vector<double> numer = tauComp.jet_pieces_numerator(); //tau for each subjet - double denom = tauComp.denominator(); //normalization factor - --------------------------------------------------------------------------------- -Extra Recombiners [ExtraRecombiners.hh] --------------------------------------------------------------------------------- - -New in v2.0 are winner-take-all axes. (These have now been included in -FastJet 3.1, but we have left the code here to allow the plugin to work under -FJ 3.0). These axes are found with the help of the WinnerTakeAllRecombiner. -This class defines a new recombination scheme for clustering particles. This -scheme recombines two PseudoJets into a PseudoJet with pT equal to the sum of the two -input PseudoJet pTs and direction of the harder PseudoJet. This is a -"recoil-free" recombination scheme that guarantees that the axis is aligned with -one of the input particles. It is IRC safe. Axes found with the standard -E-scheme recombiner are similar to the beta = 2 minimization, while -winner-take-all is similar to the beta = 1 measure. - -New in v2.2 is the GeneralEtSchemeRecombiner, as defined in arxiv:1506.XXXX. -This functions similarly to the Et-scheme defined in Fastjet, but the reweighting -of the sum of rap and phi is parameterized by an exponent delta. Thus, delta = 1 -is the normal Et-scheme recombination, delta = 2 is Et^2 recombination, and -delta = infinity is the winner-take-all recombination. This recombination scheme -is used in GenET_GenKT_Axes, and we find that optimal seed axes for minimization -can be found by using delta = 1/(beta - 1). - -Note that the WinnerTakeAllRecombiner can be used outside of Nsubjettiness -itself for jet finding. For example, the direction of anti-kT jets found -with the WinnerTakeAllRecombiner is particularly robust against soft jet -contamination.
That said, this functionality is now included in FJ 3.1, so this -code is likely to be deprecated in a future version. - --------------------------------------------------------------------------------- -Technical Details --------------------------------------------------------------------------------- - -In general, the user will never need access to these header files. Here is a -brief description of how they are used in the calculation of -N-(sub)jettiness: - -AxesDefinition.hh: - -The AxesDefinition class (and derived classes) defines the axes used in the -calculation of N-(sub)jettiness. These axes can be defined from the exclusive -jets from a kT or CA algorithm, the hardest jets from an anti-kT algorithm, -manually, or from minimization of N-jettiness. In the future, the user will be -able to write their own axes finder, though currently the interface is still -evolving. At the moment, the user should stick to the options allowed by -AxesDefinition. - -MeasureDefinition.hh: - -The MeasureDefinition class (and derived classes) defines the measure by which -N-(sub)jettiness is calculated. This measure is calculated between each -particle and its corresponding axis, and then summed and normalized to -produce N-(sub)jettiness. The default measure for this calculation is -pT*dR^beta, where dR is the rapidity-azimuth distance between the particle -and its axis, and beta is the angular exponent. Again, in the future the user -will be able to write their own measures, but for the time being, only the -predefined MeasureDefinition values should be used. Note that the one-pass -minimization algorithms are defined within MeasureDefinition, since they are -measure specific. - --------------------------------------------------------------------------------- -Known Issues --------------------------------------------------------------------------------- - --- The MultiPass_Axes mode gives different answers on different runs, since - random numbers are used.
--- For the default measures, in rare cases, one-pass minimization can give a - larger value of Njettiness than without minimization, because the axes in the - default measure are not defined to be light-like. --- Nsubjettiness is not thread-safe, since there are mutable members in Njettiness. --- If the AxesDefinition does not find N axes, then it adds zero vectors to the - list of axes to get the total up to N. This can lead to unpredictable - results (including divide-by-zero issues), and a warning is thrown to alert - the user. - -------------------------------------------------------------------------------- --------------------------------------------------------------------------------- \ No newline at end of file diff --git a/src/Tools/fjcontrib/Nsubjettiness/TauComponents.cc b/src/Tools/fjcontrib/Nsubjettiness/TauComponents.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/Nsubjettiness/TauComponents.cc +++ /dev/null @@ -1,91 +0,0 @@ -// Nsubjettiness Package -// Questions/Comments? jthaler@jthaler.net -// -// Copyright (c) 2011-14 -// Jesse Thaler, Ken Van Tilburg, Christopher K. Vermilion, and TJ Wilkason -// -// $Id: NjettinessDefinition.cc 704 2014-07-07 14:30:43Z jthaler $ -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see <http://www.gnu.org/licenses/>.
-//---------------------------------------------------------------------- - -#include "TauComponents.hh" -#include "MeasureDefinition.hh" - -namespace Rivet { -namespace fjcontrib { - -namespace Nsubjettiness { using namespace fastjet; - -// This constructor takes input vector and double and calculates all necessary tau components -TauComponents::TauComponents(TauMode tau_mode, - const std::vector<double> & jet_pieces_numerator, - double beam_piece_numerator, - double denominator, - const std::vector<PseudoJet> & jets, - const std::vector<PseudoJet> & axes - ) -: _tau_mode(tau_mode), -_jet_pieces_numerator(jet_pieces_numerator), -_beam_piece_numerator(beam_piece_numerator), -_denominator(denominator), -_jets(jets), -_axes(axes) -{ - - if (!has_denominator()) assert(_denominator == 1.0); //make sure no effect from _denominator if _has_denominator is false - if (!has_beam()) assert (_beam_piece_numerator == 0.0); //make sure no effect from _beam_piece_numerator if _has_beam is false - - // Put the pieces together - _numerator = _beam_piece_numerator; - _jet_pieces.resize(_jet_pieces_numerator.size(),0.0); - for (unsigned j = 0; j < _jet_pieces_numerator.size(); j++) { - _jet_pieces[j] = _jet_pieces_numerator[j]/_denominator; - _numerator += _jet_pieces_numerator[j]; - - // Add structural information to jets - StructureType * structure = new StructureType(_jets[j]); - structure->_tau_piece = _jet_pieces[j]; - _jets[j].set_structure_shared_ptr(SharedPtr<StructureType>(structure)); - } - - _beam_piece = _beam_piece_numerator/_denominator; - _tau = _numerator/_denominator; - - // Add total_jet with structural information - _total_jet = join(_jets); - StructureType * total_structure = new StructureType(_total_jet); - total_structure->_tau_piece = _tau; - _total_jet.set_structure_shared_ptr(SharedPtr<StructureType>(total_structure)); -} - - - -// test for denominator/beams -bool TauComponents::has_denominator() const { - return (_tau_mode == NORMALIZED_JET_SHAPE - || _tau_mode == NORMALIZED_EVENT_SHAPE); -} - -bool
TauComponents::has_beam() const { - return (_tau_mode == UNNORMALIZED_EVENT_SHAPE - || _tau_mode == NORMALIZED_EVENT_SHAPE); -} - -} // namespace Nsubjettiness - -} -} diff --git a/src/Tools/fjcontrib/Nsubjettiness/VERSION b/src/Tools/fjcontrib/Nsubjettiness/VERSION deleted file mode 100644 --- a/src/Tools/fjcontrib/Nsubjettiness/VERSION +++ /dev/null @@ -1,1 +0,0 @@ -2.2.4 \ No newline at end of file diff --git a/src/Tools/fjcontrib/Nsubjettiness/XConePlugin.cc b/src/Tools/fjcontrib/Nsubjettiness/XConePlugin.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/Nsubjettiness/XConePlugin.cc +++ /dev/null @@ -1,48 +0,0 @@ -// Nsubjettiness Package -// Questions/Comments? jthaler@jthaler.net -// -// Copyright (c) 2011-14 -// Jesse Thaler, Ken Van Tilburg, Christopher K. Vermilion, and TJ Wilkason -// -// $Id: XConePlugin.cc 745 2014-08-26 23:51:48Z jthaler $ -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see . 
-//---------------------------------------------------------------------- - -#include "XConePlugin.hh" - -namespace Rivet { -namespace fjcontrib { - -namespace Nsubjettiness{ - -std::string XConePlugin::description() const { - std::stringstream stream; - stream << "XCone Jet Algorithm with N = " << _N << std::fixed << std::setprecision(2) << ", Rcut = " << _R0 << ", beta = " << _beta; - return stream.str(); -} - -std::string PseudoXConePlugin::description() const { - std::stringstream stream; - stream - << "PseudoXCone Jet Algorithm with N = " << _N << std::fixed << std::setprecision(2) << ", Rcut = " << _R0 << ", beta = " << _beta; - return stream.str(); -} - -} // namespace Nsubjettiness - -} -} diff --git a/src/Tools/fjcontrib/RecursiveTools/AUTHORS b/src/Tools/fjcontrib/RecursiveTools/AUTHORS deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/AUTHORS +++ /dev/null @@ -1,31 +0,0 @@ -The RecursiveTools FastJet contrib is developed and maintained by: - - Gavin P. Salam - Gregory Soyez - Jesse Thaler - Kevin Zhou - Frederic Dreyer - -The physics is based on: - - [ModifiedMassDropTagger] - Towards an understanding of jet substructure. - Mrinal Dasgupta, Alessandro Fregoso, Simone Marzani, and Gavin P. Salam. - JHEP 1309:029 (2013), arXiv:1307.0007. - - [SoftDrop] - Soft Drop. - Andrew J. Larkoski, Simone Marzani, Gregory Soyez, and Jesse Thaler. - JHEP 1405:146 (2014), arXiv:1402.2657 - - [IteratedSoftDrop] - Casimir Meets Poisson: Improved Quark/Gluon Discrimination with Counting Observables. - Christopher Frye, Andrew J. Larkoski, Jesse Thaler, Kevin Zhou. - JHEP 1709:083 (2017), arXiv:1704.06266 - - [RecursiveSoftDrop] - [BottomUpSoftDrop] - Recursive Soft Drop. - Frederic A. 
Dreyer, Lina Necib, Gregory Soyez, and Jesse Thaler - arXiv:1804.03657 - \ No newline at end of file diff --git a/src/Tools/fjcontrib/RecursiveTools/BottomUpSoftDrop.cc b/src/Tools/fjcontrib/RecursiveTools/BottomUpSoftDrop.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/BottomUpSoftDrop.cc +++ /dev/null @@ -1,329 +0,0 @@ -// $Id: BottomUpSoftDrop.cc 1064 2017-09-08 09:19:57Z gsoyez $ -// -// Copyright (c) 2017-, Gavin P. Salam, Gregory Soyez, Jesse Thaler, -// Kevin Zhou, Frederic Dreyer -// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see . 
-//---------------------------------------------------------------------- - -#include "BottomUpSoftDrop.hh" -#include -#include -#include -#include -#include "fastjet/ClusterSequenceActiveAreaExplicitGhosts.hh" -#include "fastjet/Selector.hh" -#include "fastjet/config.h" - - -using namespace std; - - -//FASTJET_BEGIN_NAMESPACE // defined in fastjet/internal/base.hh - -//namespace contrib{ - -namespace Rivet{ - namespace fjcontrib{ - - - -//---------------------------------------------------------------------- -// BottomUpSoftDrop class -//---------------------------------------------------------------------- - -// action on a single jet -PseudoJet BottomUpSoftDrop::result(const PseudoJet &jet) const{ - // soft drop can only be applied to jets that have constituents - if (!jet.has_constituents()){ - throw Error("BottomUpSoftDrop: trying to apply the Soft Drop transformer to a jet that has no constituents"); - } - - // if the jet has area support and there are explicit ghosts, we can - // transfer that support to the internal re-clustering - // - // Note that this is just meant to maintain the information since - // all the jes will have a 0 area - bool do_areas = jet.has_area() && _check_explicit_ghosts(jet); - - // build the soft drop plugin - BottomUpSoftDropPlugin * softdrop_plugin; - - // for some constructors, we get the recombiner from the - // input jet -- some acrobatics are needed - if (_get_recombiner_from_jet) { - JetDefinition jet_def = _jet_def; - - // if all the pieces have a shared recombiner, we'll use that - // one. Otherwise, use the one from _jet_def as a fallback. 
- JetDefinition jet_def_for_recombiner; - if (_check_common_recombiner(jet, jet_def_for_recombiner)){ -#if FASTJET_VERSION_NUMBER >= 30100 - // Note that this is better than the option directly passing the - // recombiner (for cases where th ejet def own its recombiner) - // but it's only available in FJ>=3.1 - jet_def.set_recombiner(jet_def_for_recombiner); -#else - jet_def.set_recombiner(jet_def_for_recombiner.recombiner()); -#endif - } - softdrop_plugin = new BottomUpSoftDropPlugin(jet_def, _beta, _symmetry_cut, _R0); - } else { - softdrop_plugin = new BottomUpSoftDropPlugin(_jet_def, _beta, _symmetry_cut, _R0); - } - - // now recluster the constituents of the jet with that plugin - JetDefinition internal_jet_def(softdrop_plugin); - // flag the plugin for automatic deletion _before_ we make - // copies (so that as long as the copies are also present - // it doesn't get deleted). - internal_jet_def.delete_plugin_when_unused(); - - ClusterSequence * cs; - if (do_areas){ - vector particles, ghosts; - SelectorIsPureGhost().sift(jet.constituents(), ghosts, particles); - // determine the ghost area from the 1st ghost (if none, any value - // will do, as the area will be 0 and subtraction will have - // no effect!) - double ghost_area = (ghosts.size()) ? ghosts[0].area() : 0.01; - cs = new ClusterSequenceActiveAreaExplicitGhosts(particles, internal_jet_def, - ghosts, ghost_area); - } else { - cs = new ClusterSequence(jet.constituents(), internal_jet_def); - } - - PseudoJet result_local = SelectorNHardest(1)(cs->inclusive_jets())[0]; - BottomUpSoftDropStructure * s = new BottomUpSoftDropStructure(result_local); - s->_beta = _beta; - s->_symmetry_cut = _symmetry_cut; - s->_R0 = _R0; - result_local.set_structure_shared_ptr(SharedPtr(s)); - - // make sure things remain persistent -- i.e. tell the jet definition - // and the cluster sequence that it is their responsibility to clean - // up memory once the "result" reaches the end of its life in the user's - // code. 
(The CS deletes itself when the result goes out of scope and - // that also triggers deletion of the plugin) - cs->delete_self_when_unused(); - - return result_local; -} - -// global grooming on a full event -// note: does not support jet areas -vector BottomUpSoftDrop::global_grooming(const vector & event) const { - // start by reclustering the event into one very large jet - ClusterSequence cs(event, _jet_def); - std::vector global_jet = SelectorNHardest(1)(cs.inclusive_jets()); - // if the event is empty, do nothing - if (global_jet.size() == 0) return vector(); - - // apply the groomer to the large jet - PseudoJet result = this->result(global_jet[0]); - return result.constituents(); -} - -// check if the jet has explicit_ghosts (knowing that there is an -// area support) -bool BottomUpSoftDrop::_check_explicit_ghosts(const PseudoJet &jet) const { - // if the jet comes from a Clustering check explicit ghosts in that - // clustering - if (jet.has_associated_cluster_sequence()) - return jet.validated_csab()->has_explicit_ghosts(); - - // if the jet has pieces, recurse in the pieces - if (jet.has_pieces()){ - vector pieces = jet.pieces(); - for (unsigned int i=0;ijet_def().has_same_recombiner(jet_def_for_recombiner); - - // otherwise, assign it. - jet_def_for_recombiner = jet.validated_cs()->jet_def(); - assigned = true; - return true; - } - - // if the jet has pieces, recurse in the pieces - if (jet.has_pieces()){ - vector pieces = jet.pieces(); - if (pieces.size() == 0) return false; - for (unsigned int i=0;i kept(internal_hist.size(), true); - const vector &sd_rej = softdrop_recombiner.rejected(); - for (unsigned int i=0;i internal2input(internal_hist.size()); - for (unsigned int i=0; i - Copyright (C) - - This program is free software; you can redistribute it and/or modify - it under the terms of the GNU General Public License as published by - the Free Software Foundation; either version 2 of the License, or - (at your option) any later version. 
- - This program is distributed in the hope that it will be useful, - but WITHOUT ANY WARRANTY; without even the implied warranty of - MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the - GNU General Public License for more details. - - You should have received a copy of the GNU General Public License - along with this program; if not, write to the Free Software - Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA - - -Also add information on how to contact you by electronic and paper mail. - -If the program is interactive, make it output a short notice like this -when it starts in an interactive mode: - - Gnomovision version 69, Copyright (C) year name of author - Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. - This is free software, and you are welcome to redistribute it - under certain conditions; type `show c' for details. - -The hypothetical commands `show w' and `show c' should show the appropriate -parts of the General Public License. Of course, the commands you use may -be called something other than `show w' and `show c'; they could even be -mouse-clicks or menu items--whatever suits your program. - -You should also get your employer (if you work as a programmer) or your -school, if any, to sign a "copyright disclaimer" for the program, if -necessary. Here is a sample; alter the names: - - Yoyodyne, Inc., hereby disclaims all copyright interest in the program - `Gnomovision' (which makes passes at compilers) written by James Hacker. - - , 1 April 1989 - Ty Coon, President of Vice - -This General Public License does not permit incorporating your program into -proprietary programs. If your program is a subroutine library, you may -consider it more useful to permit linking proprietary applications with the -library. If this is what you want to do, use the GNU Library General -Public License instead of this License. 
diff --git a/src/Tools/fjcontrib/RecursiveTools/ChangeLog b/src/Tools/fjcontrib/RecursiveTools/ChangeLog deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/ChangeLog +++ /dev/null @@ -1,674 +0,0 @@ -2018-05-29 Jesse Thaler - - * VERSION - * NEWS - Changed to 2.0.0-beta2, noted in news - -2018-04-21 Jesse Thaler - - * AUTHORS: updated arxiv number for RecursiveSoftDrop - -2018-04-21 Gavin Salam - - * README: updated arxiv number for RecursiveSoftDrop & Bottom-up - Soft Drop. - -2018-04-04 Gregory Soyez - - * RecursiveSoftDrop.cc (contrib): - fixed syntax of calls to structure_of<...> - (thanks to Attila Krasznahorkay) - -2018-01-24 Gregory Soyez - - * RecursiveSoftDrop.cc: - for the (default) dynamical R0 implementation, the dynamical R0 is - evolved independently in each branch. - -2017-10-10 Jesse Thaler - * AUTHORS - Added mention of BUSD - - * README - Some tweaks to the wording - -2017-10-11 Gregory Soyez - - * IteratedSoftDrop.hh: - IteratedSoftDropInfo::size() and multiplicity() now return an - unsigned int instead of a double - -2017-10-10 Jesse Thaler - * AUTHORS - Updated journal reference for ISD - - * example_isd.{cc,ref}: - Included soft drop multiplicity in example - - * README - Added warning at top that documentation has not been updated - - * VERSION - Changed to 2.0.0-beta1 - -2017-10-10 Gregory Soyez - - * example_isd.ref: - updated to reflect the bugfix below - - * IteratedSoftDrop.cc: - fixed issue in description (was taking sqrt of -ve number when - there was no angular cut) - - * NEWS: - drafted for RecursiveTools-2.0.0-beta1 - - * TODO: - updated list in view of a beta release - - * example_isd.cc: - pointed to the right header for IteratedSoftDrop - - * RecursiveSoftDrop.{hh,cc}: - * IteratedSoftDrop.{hh,cc}: - moved IteratedSoftDrop to its own file (replacing the old - implementation) - - * example_advanced_usage.ref: - updated reference file following the fix below.
- -2017-09-28 Gregory Soyez - - * RecursiveSymmetryCutBase.cc: - when no substructure is found, keep the _symmetry, _delta_R and - _mu structure variables at -1. This for example allows one to - trigger on substructure by checking if delta_R>=0. - - Note: we could set it to 0 (as it was before) and trigger using - _delta_R>0 but there might be some genuine substructure exactly - collinear. - -2017-09-19 Gregory Soyez - - * example_isd.ref: - updated to the latest version of the example - -2017-09-18 Gregory Soyez - - * Makefile: - updating make check to use all the examples (failing on ISD as - expected, see below) - - * example_bottomup_softdrop.cc: - fixed typo - - * example_bottomup_softdrop.ref: - * example_recursive_softdrop.ref: - added reference output for this example - - * RecursiveSoftDrop.{hh,cc}: - * RecursiveSymmetryCutBase.{cc,hh}: - moved the "all_prongs" method from the base structure to a - standalone function in RecursiveSoftDrop.hh - - In practice, this is irrelevant for mMDT and SD (since pieces() - gets the job done, and the substructure class does not have (as - is) reliable info to get the full structure) - - * RecursiveSymmetryCutBase.cc: - revamped a series of requests for substructure info to better - handle possible recursion into deeper jet substructure. - - * RecursiveSoftDrop.{hh,cc}: - updated "description" to reuse the info from the base class - - * example_isd.cc: - updated to use the newer implementation of ISD. Checked that it - gives the same results as the former implementation. - - Note: make check still needs fixing because the example now - computes a different set of angularities - - * RecursiveSoftDrop.hh: - added a few helpers to IteratedSoftDropInfo ([] operator and size, - meaning it can be used as a vector<pair<double,double> >) - - * RecursiveSymmetryCutBase.cc: - . fixed bugs in the calculation of the geometric distances for ee - coordinates - .
fixed bug in the computation of the (zg,thetag) pairs [it was - returning the groomed ones instead of the ones passing the - condition] - - * example_recursive_softdrop.cc: - set the R0 parameter to the original jet radius - -2017-09-13 Gregory Soyez - - * example_recursive_softdrop.cc: - tidied up a few comments and the code output - - * RecursiveSymmetryCutBase.{hh,cc}: - removed the unneeded _is_composite - - * RecursiveSoftDrop.cc: - fixed issue with "verbose" dropped info on branches with no - further substructure - -2017-09-11 Gregory Soyez - - * RecursiveSoftDrop.{hh,cc}: - have IteratedSoftDrop returning by default an object of type - IteratedSoftDropInfo; added several helpers - -2017-09-08 Gregory Soyez - - * RecursiveSoftDrop.{hh,cc}: - updated IteratedSoftDrop to give it the flexibility of - RecursiveSoftDrop - - * RecursiveSymmetryCutBase.hh: - fixed typo in comment - - * example_mmdt_ee.cc: *** ADDED *** - added an example to illustrate usage in ee collisions - - * example_isd.cc: - * BottomUpSoftDrop.cc: - * IteratedSoftDrop.cc: - * RecursiveSoftDrop.cc: - Fixed compilation issues with FJ 3.0 (mostly the usage of features - introduced only in FJ3.1) - - * RecursiveSymmetryCutBase.{hh,cc}: - used the internal Recluster class for FJ<3.1.0 and the FJ native - one for FJ>=3.1.0 - - * BottomUpSoftDrop.{hh,cc}: - moved the implementation of global_grooming to the source file and - fixed a few trivial compilation errors - -2017-09-08 Frédéric Dreyer - - * BottomUpSoftDrop.hh: - added the global_grooming method to process a full event - -2017-09-07 Gregory Soyez - - * RecursiveSoftDrop.cc: - cleaned (mostly by removing older commented-out code) - - * RecursiveSoftDrop.{hh,cc}: - * RecursiveSymmetryCutBase.{hh,cc}: - * SoftDrop.cc: - - added support for ee coordinates.
For that, the symmetry measure - has to be set to either theta_E (which uses the 3-vector angle - theta) or to cos_theta_E which uses sqrt(2*[1-cos(theta)]) - - Accordingly, the recursion_choice can be set to larger_E to - recurse into the branch with the largest energy. The larger_m - mode, recursing into the larger-mass branch, is also possible but - not advised (for the same reason as the pp version). - - * RecursiveSymmetryCutBase.{hh,cc}: - switched to the Recluster class provided with FastJet. Also - included the recluster description in RecursiveSymmetryCutBase - when it is user-defined. - -2017-09-06 Gregory Soyez - - * BottomUpSoftDrop.{hh,cc}: - . renamed SoftDropStructure -> BottomUpSoftDropStructure - SoftDropRecombiner -> BottomUpSoftDropRecombiner - SoftDropPlugin -> BottomUpSoftDropPlugin - . moved 'description' to source file (instead of header) - . kept the "area" information when available (jets will just - appear as having a 0 area) - . added most class descriptions (main class still missing) - - * RecursiveSoftDrop.cc: - . put internal classes to handle CS history in an internal namespace - . replaced the "switch" in the main loop by a series of if (allows - us a few simplifications/cleaning) - . more uniform treatment of issues in the presence of an angular cut - (as part of the above reorganisation) - - * example_advanced_usage.ref: - updated reference output file following the bugfix (missing - "grooming mode" initialisation in one of the SoftDrop ctors) on - 2017-08-01 - - * RecursiveSymmetryCutBase.cc: - removed redundant code - -2017-08-10 Gregory Soyez - - * RecursiveSoftDrop.cc: - fixed trivial typo in variable name - ->>>>>>> .r1071 -2017-08-04 Gregory Soyez - - * RecursiveSoftDrop.cc: - do not impose an angular cut in IterativeSD if it is -ve.
- -2017-08-01 Gregory Soyez - - * example_recursive_softdrop.cc: - added a series of optional flags - - * RecursiveSoftDrop.cc: - fixed a few issues with the fixed depth version - - * RecursiveSymmetryCutBase.hh: - a jet is now considered as having substructure if deltaR>0 - (coherent with released version) - - * SoftDrop.hh: - bugfix: set the "grooming mode" by default in all ctors - EDIT: caused issue with make check, fixed on 2017-09-06 (see - above) - - * RecursiveSoftDrop.{hh,cc}: - added support for the "same depth" variant - - * RecursiveSymmetryCutBase.cc: - also associate a RecursiveSymmetryCutBase::StructureType structure - to the result jet in case it is just a single particle (in - grooming mode) - -2017-07-31 Gregory Soyez - - * RecursiveSymmetryCutBase.{hh,cc}: - added the option to pass an extra parameter to the symmetry cut - function - - * RecursiveSymmetryCutBase.{hh,cc}: - * ModifiedMassDropTagger.hh - * SoftDrop.hh: - minor adaptations due to the above change + added a few methods to - query the class information (symmetry cut, beta, ...) - - * RecursiveSoftDrop.{hh,cc}: - Added support for - - a dynamical R0 - - recursing only in the hardest branch - - imposing a min deltaR cut - - Added a tentative IterativeSoftDrop class - -2017-07-28 Gregory Soyez - - * RecursiveSoftDrop.cc: - adapted to the latest changes in RecursiveSymmetryCutBase - - * RecursiveSymmetryCutBase.{hh,cc}: - reorganised the output of the recursion step (recurse_one_step) - using an enum to make it more readable (and to fix issues where - the dropped prong is actually 0, e.g. after subtraction) - -2017-07-26 Gregory Soyez - - * example_recursive_softdrop.cc: *** ADDED *** - added a draft example for the use of RecursiveSoftDrop - - * RecursiveSoftDrop.{hh,cc}: *** ADDED *** - added a first pass at an implementation of RecursiveSoftDrop. - - This is based on Frederic's initial version but keeps the - branching structure of the jet.
Some of the features, like - dynamical R0, direct access to the substructure or the same depth - variant, are still unsupported. - - * SoftDrop.hh: - declared _beta, _symmetry_cut and _R0sqr as protected (was - private) so they can be used by RecursiveSoftDrop - - * RecursiveSymmetryCutBase.{hh,cc}: - extracted from result() the part that performs one step of the - recursion (implemented as recurse_one_step()). This is repeatedly - called by result(). It has specific conventions to indicate - whether or not some substructure has been found or if one ran into - some issue. - -2017-04-25 Kevin Zhou - - * IteratedSoftDrop.hh - . Added Doxygen documentation - - * RecursiveSymmetryCutBase.hh - . Added references to ISD - -2017-04-25 Jesse Thaler - * AUTHORS, COPYING: - . Added ISD arXiv number - - * example_isd.{cc,ref} - . Added ISD arXiv number - . Changing z_cut to be optimal value (with Lambda = 1 GeV) - . Tweaked formatting - - * IteratedSoftDrop.{hh,cc} - . Added ISD arXiv number - . Assert added if recluster does not return one jet. - - * README - . Added ISD arXiv number and tweaked wording - - -2017-04-20 Kevin Zhou - - * example_isd.{cc,ref} ** ADDED ** - * IteratedSoftDrop.{cc,hh} ** ADDED ** - - * Makefile - . Added IteratedSoftDrop (ISD) as appropriate - - * AUTHORS - . Added myself to author list - . Added placeholder for ISD paper - - * COPYING - . Added placeholder for ISD paper - - * README - . Added description of ISD - - * TODO - . Added tasks to integrate ISD with other classes, e.g. SD, - Recluster, and the future RecursiveSoftDrop (RSD) - - * NEWS - . Added dummy for release of 1.1.0 - - * VERSION: - . Switched version number to 1.1.0-dev - - * example_advanced_usage.cc: - .
Added $Id$ tag, didn't change anything else - -2014-07-30 Gregory Soyez - - * Recluster.hh: fixed the name of the #define for the header - -2014-07-09 Gregory Soyez + Jesse - - * NEWS: - release of RecursiveTools v1.0.0 - - * VERSION: - switched version number to 1.0.0 - -2014-07-08 Gavin Salam - - * README (RecursionChoice): - added ref to .hh for constness specs of virtual functions (to - reduce risk of failed overloading due to wrong constness). - -2014-07-08 Gregory Soyez + Jesse - - * README: - partially rewrote the description of set_subtractor - -2014-07-07 Gregory Soyez + Jesse - - * example_advanced_usage.cc: - * example_softdrop.cc: - a few small fixes in the headers of the files - - * VERSION: - switched over to 1.0.0-alpha2-devel - - * README: - Reordered a few things and added a few details. - - * Makefile (check): - Be quieter during "make check" - - * Recluster.cc (contrib): - Documented the "single" ctor argument - - * RecursiveSymmetryCutBase.cc (contrib): - If the user sets the reclustering himself, disable the "non-CA" - warning (we assume that he knows what he is doing). Mentioned in - the comments that non-CA reclustering has to be used at the user's - risk. Also throw when the input jet has no constituents or when - there is no cluster sequence after reclustering. - - * Recluster.cc (contrib): - replaced a remaining mention of "filtering" with reclustering - -2014-07-04 Jesse Thaler - * VERSION - . Ready for alpha release - -2014-06-17 Jesse Thaler - * example_advanced_usage.{cc,ref} ** ADDED ** - * Makefile - . New example file to test a bunch of soft drop options - . Put in makefile as well - . Fixed nasty memory bug with pointers to Recluster - - * RecursiveSymmetryCutBase.cc - * example_softdrop.ref - . description() now says Groomer vs. Tagger - - * RecursiveSymmetryCutBase.{cc,hh} - . Added optional verbose logging information about - kinematics of dropped branches - - * example_softdrop.cc - * example_advanced_usage.cc - .
Fixed - - -2014-06-16 Gregory Soyez - - * Makefile: - also install the RecursiveSymmetryuCutBase.hh header - -2014-06-13 Jesse Thaler - * AUTHORS - . Added myself to author list - . Put complete bibliographic details on papers - - * COPYING - . Added boilerplate MC/GPLv2 statement - - * example.cc: ** REMOVED ** renamed to... - * example_mmdt.cc ** ADDED ** - * Makefile - . Made name change for consistency - . Made corresponding changes in Makefile - - * example_mmdt_sub.cc: - * example_mmdt.cc: - * example_recluster.cc: - . light editing of comments - - * example_softdrop.cc: - . light editing of comments - . added assert for sdjet != 0, since SoftDrop is a groomer - - * ModifiedMassDropTagger.hh - * Recluster.{cc,hh} - * RecursiveSymmetryCutBase.{cc,hh} - * SoftDrop.hh - . Updated some comments - - * README - . Updated to include basic usage description and some - technical details - - * TODO: - . Added some discussion points. - -2014-06-13 Gregory Soyez - - * example_softdrop.{cc,ref}: - added an example for SoftDrop - - * SoftDrop.{hh,cc}: - * ModifiedMassDropTagger.{hh,cc}: - * RecursiveSymmetryCutBase.{hh,cc}: *** ADDED *** - . added a base class for both the mMDT and SoftDrop - . made mMDT and SoftDrop inherit from RecursiveSymmetryCutBase - . moved the reclustering to the base class. By default, both - mMDT and SoftDrop now recluster the jet with C/A - . added set_grooming_mode and set_tagging_mode methods to the - base class - - * Merging the development branch 1.0-beta1-softdrop-addition back - into the trunk (will correspond to revision 682) - - * VERSION: - switched back to 1.0.0-devel - - * SoftDrop.{hh,cc}: - added support for re-clustering through set_reclustering(bool, - Recluster*). By default, reclustering is done with - Cambridge/Aachen. - - * example_recluster.{cc,ref}: *** ADDED *** - added an example of reclustering - - * Recluster.{hh,cc}: - added a 'single' ctor argument [true by default]. 
When true, the - hardest jet after reclustering is returned, otherwise, the result - is a composite jet with all the subjets as its pieces. - -2014-05-15 Gregory Soyez - - * VERSION: - set version number to 1.0-alpha-PUWS14.1 in preparation for a - fastjet-contrib release for the pileup-workshop at CERN on May - 2014. - -2014-04-25 Gregory Soyez - - * ModifiedMassDropTagger.hh: - * ModifiedMassDropTagger.cc: - Added comments at various places - - * AUTHORS: - * README: - Updated info about what is now included in this contrib - - * SoftDrop.hh: *** ADDED *** - * SoftDrop.cc: *** ADDED *** - * Recluster.hh: *** ADDED *** - * Recluster.cc: *** ADDED *** - Added tools for reclustering and SoftDrop - -2014-04-25 Gregory Soyez - - branch started at revision 611 to start including SoftDrop in the - RecursiveTools contrib - -2014-04-24 Gregory Soyez - - * ModifiedMassDropTagger.hh: - added a mention of the fact that when result is called in the - presence of a subtractor, then the output is a subtracted - PseudoJet. - - * ModifiedMassDropTagger.hh: - declared _symmetry_cut as protected (rather than private) so it - can be accessed if symmetry_cut_description() is overloaded. - - * example.cc: - trivial typo fixed in a comment - -2014-02-04 Gavin Salam - - * VERSION: - upped it to 1.0-beta1 - - * example_mmdt_sub.cc (main): - added an #if to make sure FJ3.1 features are only used if FJ3.1 - is available. (Currently FJ3.1 is only available to FJ developers). - -2014-01-26 Gavin Salam - - * VERSION: - renamed to 1.0-beta0 - - * ModifiedMassDropTagger.hh: - * README: - added info on author names - - * example_mmdt_sub.ref: *** ADDED *** - added reference output for the pileup test. - -2014-01-25 Gavin Salam - - * example_mmdt_sub.cc: - * Makefile: - added an extra example illustrating functionality with pileup - subtraction.
- -2014-01-24 Gavin Salam - - * ModifiedMassDropTagger.cc: - - Reorganised code so that (sub)jet.m2()>0 check is only used when - absolutely necessary: so if using a scalar_z symmetry measure, - whenever scalar_z < zcut, then there is no point checking the mu - condition. This means that there's no issue if the (sub)jet mass - is negative, and one simply recurses down into the jet. (Whereas - before it would bail out, reducing the tagging efficiency). - - Also removed the verbose code. - -2014-01-23 Gavin Salam - - * ModifiedMassDropTagger.cc|hh: - * example.cc - replaced "asymmetry" with "symmetry" in a number of places; - implemented the structural information and added it to the example; - added a new simplified constructor; - improved doxygen documentation; - started renaming -> RecursiveTools - - * README - some tidying - - * VERSION - 1.0-b0 - -2014-01-22 Gavin Salam - - * ModifiedMassDropTagger.cc (contrib): - -ve mass now bails out also when using the "y" asymmetry measure. - Also, default mu is now infinite. - -2014-01-20 Gavin Salam + Gregory - - - * ModifiedMassDropTagger.cc|hh: - introduced a virtual asymmetry_cut_fn (essentially a - dummy function returning a constant), to allow for derived classes - to do fancier things. - - added warning about non-C/A clustering. - explicitly labelled some (inherited) virtual functions as - virtual. - -2014-01-20 Gavin Salam - - * example.ref: - * example.cc (main): - got a first working example and make check setup.
- - * ModifiedMassDropTagger.cc|hh: - improved doxygen comments; - added option whereby input jet is assumed already subtracted - -2014-01-19 Gavin Salam - - * ModifiedMassDropTagger.cc|hh: - * many other files - - Initial creation, with basic code for MMDT - diff --git a/src/Tools/fjcontrib/RecursiveTools/IteratedSoftDrop.cc b/src/Tools/fjcontrib/RecursiveTools/IteratedSoftDrop.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/IteratedSoftDrop.cc +++ /dev/null @@ -1,132 +0,0 @@ -// $Id: IteratedSoftDrop.cc 1084 2017-10-10 20:36:50Z gsoyez $ -// -// Copyright (c) 2017-, Jesse Thaler, Kevin Zhou, Gavin P. Salam -// and Gregory Soyez -// -// based on arXiv:1704.06266 by Christopher Frye, Andrew J. Larkoski, -// Jesse Thaler, Kevin Zhou -// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see <http://www.gnu.org/licenses/>.
-//---------------------------------------------------------------------- - -#include "IteratedSoftDrop.hh" -#include -#include -#include - -using namespace std; - -//FASTJET_BEGIN_NAMESPACE // defined in fastjet/internal/base.hh - - //namespace contrib{ - - namespace Rivet{ - namespace fjcontrib{ - - -//======================================================================== -// implementation of IteratedSoftDropInfo -//======================================================================== - -// returns the angularity with angular exponent alpha and z -// exponent kappa calculated on the zg's and thetag's found by -// iterated SoftDrop -// -// returns 0 if no substructure was found -double IteratedSoftDropInfo::angularity(double alpha, double kappa) const{ - double sum = 0.0; - for (unsigned int i=0; i< _all_zg_thetag.size(); ++i) - sum += pow(_all_zg_thetag[i].first, kappa) * pow(_all_zg_thetag[i].second, alpha); - return sum; -} - -//======================================================================== -// implementation of IteratedSoftDrop -//======================================================================== - -// Constructor. Takes in the standard Soft Drop parameters, an angular cut \f$\theta_{\rm cut}\f$, -// and a choice of angular and symmetry measure. 
-// -// - beta the Soft Drop beta parameter -// - symmetry_cut the Soft Drop symmetry cut -// - angular_cut the angular cutoff to halt Iterated Soft Drop -// - R0 the angular distance normalization -IteratedSoftDrop::IteratedSoftDrop(double beta, double symmetry_cut, double angular_cut, double R0, - const FunctionOfPseudoJet<double> * subtractor) : - _rsd(beta, symmetry_cut, -1, R0, subtractor){ - _rsd.set_hardest_branch_only(true); - if (angular_cut>0) - _rsd.set_min_deltaR_squared(angular_cut*angular_cut); -} - - -// Full constructor, which takes the following parameters: - // -// \param beta the value of the beta parameter -// \param symmetry_cut the value of the cut on the symmetry measure -// \param symmetry_measure the choice of measure to use to estimate the symmetry -// \param angular_cut the angular cutoff to halt Iterated Soft Drop -// \param R0 the angular distance normalisation [1 by default] -// \param mu_cut the maximal allowed value of mass drop variable mu = m_heavy/m_parent -// \param recursion_choice the strategy used to decide which subjet to recurse into -// \param subtractor an optional pointer to a pileup subtractor (ignored if zero) -IteratedSoftDrop::IteratedSoftDrop(double beta, - double symmetry_cut, - RecursiveSoftDrop::SymmetryMeasure symmetry_measure, - double angular_cut, - double R0, - double mu_cut, - RecursiveSoftDrop::RecursionChoice recursion_choice, - const FunctionOfPseudoJet<double> * subtractor) - : _rsd(beta, symmetry_cut, symmetry_measure, -1, R0, mu_cut, recursion_choice, subtractor){ - _rsd.set_hardest_branch_only(true); - if (angular_cut>0) - _rsd.set_min_deltaR_squared(angular_cut*angular_cut); -} - - -// returns vector of ISD symmetry factors and splitting angles -IteratedSoftDropInfo IteratedSoftDrop::result(const PseudoJet& jet) const{ - PseudoJet rsd_jet = _rsd(jet); - if (!
rsd_jet.has_structure_of<RecursiveSoftDrop>()) - return IteratedSoftDropInfo(); - return IteratedSoftDropInfo(rsd_jet.structure_of<RecursiveSoftDrop>().sorted_zg_and_thetag()); -} - - -std::string IteratedSoftDrop::description() const{ - std::ostringstream oss; - oss << "IteratedSoftDrop with beta=" << _rsd.beta() - << ", symmetry_cut=" << _rsd.symmetry_cut() - << ", R0=" << _rsd.R0(); - - if (_rsd.min_deltaR_squared() >= 0){ - oss << " and angular_cut=" << sqrt(_rsd.min_deltaR_squared()); - } else { - oss << " and no angular_cut"; - } - - if (_rsd.subtractor()){ - oss << ", and with internal subtraction using [" << _rsd.subtractor()->description() << "]"; - } - return oss.str(); -} - - - } } // namespace contrib - - //FASTJET_END_NAMESPACE diff --git a/src/Tools/fjcontrib/RecursiveTools/Makefile.am b/src/Tools/fjcontrib/RecursiveTools/Makefile.am deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/Makefile.am +++ /dev/null @@ -1,12 +0,0 @@ -noinst_LTLIBRARIES = libRivetRecursiveTools.la - -libRivetRecursiveTools_la_SOURCES = \ - BottomUpSoftDrop.cc \ - IteratedSoftDrop.cc \ - ModifiedMassDropTagger.cc \ - Recluster.cc \ - RecursiveSoftDrop.cc \ - RecursiveSymmetryCutBase.cc \ - SoftDrop.cc - -libRivetRecursiveTools_la_CPPFLAGS = -I${top_srcdir}/include/Rivet/Tools/fjcontrib $(AM_CPPFLAGS) diff --git a/src/Tools/fjcontrib/RecursiveTools/ModifiedMassDropTagger.cc b/src/Tools/fjcontrib/RecursiveTools/ModifiedMassDropTagger.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/ModifiedMassDropTagger.cc +++ /dev/null @@ -1,48 +0,0 @@ -// $Id: ModifiedMassDropTagger.cc 683 2014-06-13 14:38:38Z gsoyez $ -// -// Copyright (c) 2014-, Gavin P. Salam -// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib.
-// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see <http://www.gnu.org/licenses/>. -//---------------------------------------------------------------------- - -#include "ModifiedMassDropTagger.hh" -#include "fastjet/JetDefinition.hh" -#include "fastjet/ClusterSequenceAreaBase.hh" -#include -#include - -using namespace std; - -//FASTJET_BEGIN_NAMESPACE // defined in fastjet/internal/base.hh - - //namespace contrib{ - - namespace Rivet{ - namespace fjcontrib{ - - -//---------------------------------------------------------------------- -string ModifiedMassDropTagger::symmetry_cut_description() const { - ostringstream ostr; - ostr << _symmetry_cut << " [ModifiedMassDropTagger]"; - return ostr.str(); -} - - - } } // namespace contrib - - //FASTJET_END_NAMESPACE diff --git a/src/Tools/fjcontrib/RecursiveTools/NEWS b/src/Tools/fjcontrib/RecursiveTools/NEWS deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/NEWS +++ /dev/null @@ -1,11 +0,0 @@ -2018/05/31: release of version 2.0.0-beta2 with corrected syntax - -2017/10/10: release of version 2.0.0-beta1 including implementations of - -* RecursiveSoftDrop (see example_rsd.hh for usage) -* IteratedSoftDrop (see example_isd.hh for usage) -* e+e- version of the recursive tools (see example_mmdt_ee.hh for usage) -* BottomUpSoftDrop (see example_bottomup_softdrop.cc for usage) - -2014/07/09: release of version 1.0.0 of RecursiveTools including - ModifiedMassDropTagger and SoftDrop (as well as
Recluster) diff --git a/src/Tools/fjcontrib/RecursiveTools/README b/src/Tools/fjcontrib/RecursiveTools/README deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/README +++ /dev/null @@ -1,346 +0,0 @@ ------------------------------------------------------------------------- -RecursiveTools FastJet contrib ------------------------------------------------------------------------- - -The RecursiveTools FastJet contrib aims to provide a common contrib -for a number of tools that involve recursive reclustering/declustering -of a jet for tagging or grooming purposes. - -Currently it contains: - -- ModifiedMassDropTagger - This corresponds to arXiv:1307.0007 by Mrinal Dasgupta, Alessandro - Fregoso, Simone Marzani and Gavin P. Salam - -- SoftDrop - This corresponds to arXiv:1402.2657 by Andrew J. Larkoski, Simone - Marzani, Gregory Soyez, Jesse Thaler - -- RecursiveSoftDrop -- BottomUpSoftDrop - This corresponds to arXiv:1804.03657 by Frederic Dreyer, Lina - Necib, Gregory Soyez and Jesse Thaler - -- IteratedSoftDrop - This corresponds to arXiv:1704.06266 by Christopher Frye, Andrew J. - Larkoski, Jesse Thaler, Kevin Zhou - -- Recluster - A generic tool to recluster a given jet into subjets - Note: a Recluster class is available natively in FastJet since v3.1. - Users are therefore encouraged to use the FastJet version - rather than this one which is mostly provided for - compatibility of this contrib with older versions of FastJet. - -The interface for these tools is described in more detail below, with -all of the available options documented in the header files. - -One note about nomenclature. A groomer is a procedure that takes a -PseudoJet and always returns another (non-zero) PseudoJet. A tagger is -a procedure that takes a PseudoJet, and either returns another PseudoJet -(i.e. tags it) or returns an empty PseudoJet (i.e. doesn't tag it). 
- ------------------------------------------------------------------------ -ModifiedMassDropTagger ------------------------------------------------------------------------- - -The Modified Mass Drop Tagger (mMDT) recursively declusters a jet, -following the largest pT subjet until a pair of subjets is found that -satisfies the symmetry condition on the energy sharing - - z > z_cut - -where z_cut is a predetermined value. By default, z is calculated as -the scalar pT fraction of the softest subjet. Note that larger values -of z_cut correspond to a more restrictive tagging criterion. - -By default, mMDT will first recluster the jet using the CA clustering -algorithm, which means that mMDT can be called on any jet, regardless -of the original jet finding measure. - -A default mMDT can be created via - - double z_cut = 0.10; - ModifiedMassDropTagger mMDT(z_cut); - -More options are available in the full constructor. To apply mMDT, -one simply calls it on the jet of interest. - - PseudoJet tagged_jet = mMDT(original_jet); - -Note that mMDT is a tagger, such that tagged_jet will only be non-zero -if the symmetry cut z > z_cut is satisfied by some branching of the -clustering tree. - -To gain additional information about the mMDT procedure, one can use - - tagged_jet.structure_of<ModifiedMassDropTagger>() - -which gives access to information about the delta_R between the tagged -subjets, their z value, etc. - ------------------------------------------------------------------------- -SoftDrop ------------------------------------------------------------------------- - -The SoftDrop procedure is very similar to mMDT, albeit with a -generalised symmetry condition: - - z > z_cut * (R / R0)^beta - -Note that larger z_cut and smaller beta correspond to more aggressive -grooming of the jet. - -SoftDrop is intended to be used as a groomer (instead of as a tagger), -such that if the symmetry condition fails throughout the whole -clustering tree, SoftDrop will still return a single particle in the -end.
Apart from the tagger/groomer distinction, SoftDrop with beta=0 is -the same as mMDT. - -A default SoftDrop groomer can be created via: - - double z_cut = 0.10; - double beta = 2.0; - double R0 = 1.0; // this is the default value - SoftDrop sd(z_cut,beta,R0); - -and acts on a desired jet as - - PseudoJet groomed_jet = sd(original_jet); - -and additional information can be obtained via - - groomed_jet.structure_of<SoftDrop>() - -SoftDrop is typically called with beta > 0, though beta < 0 is still a -viable option. Because beta < 0 is infrared-collinear unsafe in -grooming mode, one probably wants to switch to tagging mode for negative -beta, via set_tagging_mode(). - ------------------------------------------------------------------------- -RecursiveSoftDrop ------------------------------------------------------------------------- - -The RecursiveSoftDrop procedure applies the Soft Drop procedure N times -in a jet in order to find up to N+1 prongs. N=0 makes no modification -to the jet, and N=1 is equivalent to the original SoftDrop. - -Once one has more than one prong, one has to decide which will be -declustered next. At each step of the declustering procedure, one -undoes the clustering which has the largest declustering angle -(amongst all the branches that are searched for substructure). [see -"set_fixed_depth" below for an alternative] - -Compared to SoftDrop, RecursiveSoftDrop takes an extra argument N -specifying the number of times the SoftDrop procedure is recursively -applied. Negative N means that the procedure is applied until no -further substructure is found (i.e. corresponds to taking N=infinity).
- - double z_cut = 0.10; - double beta = 2.0; - double R0 = 1.0; // this is the default value - int N = -1; - RecursiveSoftDrop rsd(z_cut, beta, N, R0); - -One then acts on a jet as - - PseudoJet groomed_jet = rsd(jet); - -and gets additional information via - - groomed_jet.structure_of<RecursiveSoftDrop>() - ------------------------------------------------------------------------- -IteratedSoftDrop ------------------------------------------------------------------------- - -Iterated Soft Drop (ISD) is a repeated variant of SoftDrop. After -performing the Soft Drop procedure once, it logs the groomed symmetry -factor, then recursively performs Soft Drop again on the harder -branch. This procedure is repeated down to an (optional) angular cut -theta_cut, yielding a set of symmetry factors from which observables -can be built. - -An IteratedSoftDrop tool can be created as follows: - - double beta = -1.0; - double z_cut = 0.005; - double theta_cut = 0.0; - double R0 = 0.5; // characteristic radius of jet algorithm - IteratedSoftDrop isd(beta, z_cut, theta_cut, R0); - -By default, ISD applied on a jet gives a result of type -IteratedSoftDropInfo that can then be probed to obtain physical -observables - - IteratedSoftDropInfo isd_info = isd(jet); - - unsigned int multiplicity = isd_info.multiplicity(); - double kappa = 1.0; // changes angular scale of ISD angularity - double isd_width = isd_info.angularity(kappa); - vector<pair<double,double> > zg_thetags = isd_info.all_zg_thetag(); - vector<pair<double,double> > zg_thetags = isd_info(); - for (unsigned int i=0; i< isd_info.size(); ++i){ - cout << "(zg, theta_g)_" << i << " = " - << isd_info[i].first << " " << isd_info[i].second << endl; - } - -Alternatively, one can directly get the multiplicity, angularity, and -(zg,thetag) pairs from the IteratedSoftDrop class, at the expense of -re-running the declustering procedure: - - unsigned int multiplicity = isd.multiplicity(jet); - double isd_width = isd.angularity(jet, 1.0); - vector<pair<double,double> > zg_thetags = isd.all_zg_thetag(jet); - -
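For orientation, the ISD angularity returned above is just a weighted sum over the logged (zg, theta_g) pairs. Below is a minimal FastJet-free sketch of that formula; the function name and the sample values are illustrative only, not part of the contrib API:

```cpp
#include <cmath>
#include <utility>
#include <vector>

// Sketch of the ISD angularity: for each declustering that passed the
// Soft Drop condition, sum zg^kappa * thetag^alpha over the logged pairs.
// Returns 0 if no substructure was found (empty list), as in the contrib.
double isd_angularity(const std::vector<std::pair<double, double> >& zg_thetag,
                      double alpha, double kappa = 1.0) {
  double sum = 0.0;
  for (const auto& p : zg_thetag)
    sum += std::pow(p.first, kappa) * std::pow(p.second, alpha);
  return sum;
}
```

With alpha = 0 and kappa = 1 this reduces to the plain sum of the groomed symmetry factors, and the ISD multiplicity is simply the number of logged pairs.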
-Note: the iterative declustering procedure is the same as what one - would obtain with RecursiveSoftDrop with an (optional) angular cut - and recursing only in the hardest branch [see the "Changing - behaviour" section below for details], except that it returns some - information about the jet instead of a modified jet as RSD does. - - ------------------------------------------------------------------------- -BottomUpSoftDrop ------------------------------------------------------------------------- - -This is a bottom-up version of the RecursiveSoftDrop procedure, in a -similar way as Pruning can be seen as a bottom-up version of Trimming. - -In practice, the jet is reclustered and at each step of the clustering -one checks the SoftDrop condition - - z > z_cut * (R / R0)^beta - -If the condition is met, the pair is recombined. If the condition is -not met, only the hardest of the two objects is kept for further -clustering and the softest is rejected. - ------------------------------------------------------------------------- -Recluster ------------------------------------------------------------------------- - - *** NOTE: this is provided only for backwards compatibility *** - *** with FastJet <3.1. For FastJet >=3.1, the native *** - *** fastjet::Recluster is used instead *** - -The Recluster class allows the constituents of a jet to be reclustered -with a different recursive clustering algorithm. This is used -internally in the mMDT/SoftDrop/RecursiveSoftDrop/IteratedSoftDrop -code in order to recluster the jet using the CA algorithm. This is -achieved via - - Recluster ca_reclusterer(cambridge_algorithm, - JetDefinition::max_allowable_R); - PseudoJet reclustered_jet = ca_reclusterer(original_jet); - -Note that reclustered_jet creates a new ClusterSequence that knows to -delete_self_when_unused. 
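All of the Soft Drop variants above (SoftDrop, RecursiveSoftDrop, BottomUpSoftDrop) share the same symmetry condition z > z_cut * (theta/R0)^beta; they differ only in where in the clustering tree it is applied. A FastJet-free sketch of the bare condition (function and parameter names here are illustrative):

```cpp
#include <cmath>

// Sketch of the Soft Drop condition: a splitting with momentum fraction z
// at angle theta passes iff z > z_cut * (theta/R0)^beta.
// beta = 0 reduces to the flat mMDT cut z > z_cut.
bool soft_drop_pass(double z, double theta,
                    double z_cut, double beta, double R0 = 1.0) {
  return z > z_cut * std::pow(theta / R0, beta);
}
```

In BottomUpSoftDrop this check decides at clustering time whether a pair is recombined; in the top-down tools it decides whether the declustering recursion stops.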
- ------------------------------------------------------------------------ -Changing behaviour ------------------------------------------------------------------------- - -The behaviour of all the tools provided here -(ModifiedMassDropTagger, SoftDrop, RecursiveSoftDrop and -IteratedSoftDrop) can be tweaked using the following options: - -SymmetryMeasure = {scalar_z, vector_z, y, theta_E, cos_theta_E} - [constructor argument] - : The definition of the energy sharing between subjets, with 0 - corresponding to the most asymmetric. - . scalar_z = min(pt1,pt2)/(pt1+pt2) [default] - . vector_z = min(pt1,pt2)/pt_{1+2} - . y = min(pt1^2,pt2^2)/m_{12}^2 (original y from MDT) - . theta_E = min(E1,E2)/(E1+E2), - with angular measure theta_{12}^2 - . cos_theta_E = min(E1,E2)/(E1+E2), - with angular measure 2[1-cos(theta_{12})] - The last two variants are meant for use in e+e- collisions, - together with the "larger_E" recursion choice (see below) - -RecursionChoice = {larger_pt, larger_mt, larger_m, larger_E} - [constructor argument] - : The path to recurse through the tree after the symmetry condition - fails. Options refer to transverse momentum (pt), transverse mass - (mt=sqrt(pt^2+m^2)), mass (m) or energy (E). The latter is meant - for use in e+e- collisions - -mu_cut [constructor argument] - : An optional mass drop condition - -set_subtractor(subtractor*) [or subtractor as a constructor argument] - : provide a subtractor. When a subtractor is supplied, the - kinematic constraints are applied on subtracted 4-vectors. In - this case, the result of the ModifiedMassDropTagger/SoftDrop is a - subtracted PseudoJet, and it is assumed that - ModifiedMassDropTagger/SoftDrop is applied to an unsubtracted jet. - The latter default can be changed by calling - set_input_jet_is_subtracted(). - -set_reclustering(bool, Recluster*) - : An optional setting to recluster a jet with a different - recursive jet algorithm.
The code is only designed to give sensible - results with the CA algorithm, but other reclustering algorithms - (especially kT) may be appropriate in certain contexts. - Use at your own risk. - -set_grooming_mode()/set_tagging_mode() - : In grooming mode, the algorithm will return a single particle if the - symmetry condition fails for the whole tree. In tagging mode, the - algorithm will return a zero PseudoJet if no symmetry condition - passes. Note that ModifiedMassDropTagger defaults to tagging mode - and SoftDrop defaults to grooming mode. - -set_verbose_structure(bool) - : when set to true, additional information will be stored in the jet - structure. This includes in particular values of symmetry, - delta_R, and mu of dropped branches - -For the specific case of RecursiveSoftDrop, additional tweaking is -possible via the following methods - -set_fixed_depth_mode(bool) - : when this is true, RSD will recurse (N times) into all the - branches found during the previous iteration [instead of recursing - through the largest declustering angle until N prongs have been - found]. This yields at most 2^N prongs. For infinite N, the two - options are equivalent. - -set_dynamical_R0(bool) - : By default the angles in the SD condition are normalised to the - parameter R0. With "dynamical R0", RSD will dynamically adjust R0 - to be the angle between the two prongs found during the previous - iteration. - -set_hardest_branch_only(bool) - : When substructure is found, only recurse into the hardest of the - two branches for further substructure search. This uses the class - RecursionChoice. - -set_min_deltaR_squared(double) - : set a minimal angle (squared) at which we stop the declustering - procedure. This cut is ineffective for negative values of the - argument.
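To make the SymmetryMeasure options above concrete, here is a FastJet-free sketch of two of them as plain functions of the subjet kinematics; the real implementations live in RecursiveSymmetryCutBase and act on PseudoJets, so the names and signatures below are illustrative only:

```cpp
#include <algorithm>

// scalar_z (the default measure): min(pt1, pt2) / (pt1 + pt2)
double scalar_z(double pt1, double pt2) {
  return std::min(pt1, pt2) / (pt1 + pt2);
}

// theta_E (meant for e+e- collisions): min(E1, E2) / (E1 + E2),
// used together with the theta_{12}^2 angular measure and the
// larger_E recursion choice
double theta_E(double E1, double E2) {
  return std::min(E1, E2) / (E1 + E2);
}
```

Either way, the measure is 0 for a maximally asymmetric splitting and 0.5 for a perfectly symmetric one, which is why 0 is described above as "the most asymmetric".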
- ------------------------------------------------------------------------- -Technical Details ------------------------------------------------------------------------- - -Both ModifiedMassDropTagger and SoftDrop inherit from -RecursiveSymmetryCutBase, which provides a common codebase for recursive -declustering of a jet with a symmetry cut condition. A generic -RecursiveSymmetryCutBase depends on the following (virtual) functions -(see header file for exact full specs, including constness): - -double symmetry_cut_fn(PseudoJet &, PseudoJet &) - : The function that defines the symmetry cut. This is what actually - defines different recursive declustering schemes, and all classes - that inherit from RecursiveSymmetryCutBase must define this - function. - -string symmetry_cut_description() - : the string description of the symmetry cut. - ------------------------------------------------------------------------- diff --git a/src/Tools/fjcontrib/RecursiveTools/Recluster.cc b/src/Tools/fjcontrib/RecursiveTools/Recluster.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/Recluster.cc +++ /dev/null @@ -1,394 +0,0 @@ -// $Id: Recluster.cc 699 2014-07-07 09:58:12Z gsoyez $ -// -// Copyright (c) 2014-, Matteo Cacciari, Gavin P. Salam and Gregory Soyez -// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. 
If not, see <http://www.gnu.org/licenses/>. -//---------------------------------------------------------------------- - -#include "Recluster.hh" -#include -#include -#include - -using namespace std; - -// Comments: -// -// - If the jet comes from a C/A clustering (or is a composite jet -// made of C/A clusterings) and we recluster it with a C/A -// algorithm, we just use exclusive jets instead of doing the -// clustering explicitly. In this specific case, we need to make -// sure that all the pieces share the same cluster sequence. -// -// - If the recombiner has to be taken from the original jet and that -// jet is composite, we need to check that all the pieces were -// obtained with the same recombiner. -// -// TODO: -// -// - check this actually works!!! - -//FASTJET_BEGIN_NAMESPACE // defined in fastjet/internal/base.hh - - //namespace contrib{ - - namespace Rivet{ - namespace fjcontrib{ - - -LimitedWarning Recluster::_explicit_ghost_warning; - -// class description -string Recluster::description() const { - ostringstream ostr; - ostr << "Recluster with subjet_def = "; - if (_use_full_def) { - ostr << _subjet_def.description(); - } else { - if (_subjet_alg == kt_algorithm) { - ostr << "Longitudinally invariant kt algorithm with R = " << _subjet_radius; - } else if (_subjet_alg == cambridge_algorithm) { - ostr << "Longitudinally invariant Cambridge/Aachen algorithm with R = " << _subjet_radius; - } else if (_subjet_alg == antikt_algorithm) { - ostr << "Longitudinally invariant anti-kt algorithm with R = " << _subjet_radius; - } else if (_subjet_alg == genkt_algorithm) { - ostr << "Longitudinally invariant generalised kt algorithm with R = " << _subjet_radius - << ", p = " << _subjet_extra; - } else if (_subjet_alg == cambridge_for_passive_algorithm) { - ostr << "Longitudinally invariant Cambridge/Aachen algorithm with R = " << _subjet_radius - << " and a special hack whereby particles with kt < " - << _subjet_extra << " are treated as passive ghosts"; - } else if (_subjet_alg ==
ee_kt_algorithm) { - ostr << "e+e- kt (Durham) algorithm"; - } else if (_subjet_alg == ee_genkt_algorithm) { - ostr << "e+e- generalised kt algorithm with R = " << _subjet_radius - << ", p = " << _subjet_extra; - } else if (_subjet_alg == undefined_jet_algorithm) { - ostr << "uninitialised JetDefinition (jet_algorithm=undefined_jet_algorithm)" ; - } else { - ostr << "unrecognized jet_algorithm"; - } - ostr << ", a recombiner obtained from the jet being reclustered"; - } - - if (_single) - ostr << " and keeping the hardest subjet"; - else - ostr << " and joining all subjets in a composite jet"; - - return ostr.str(); -} - -// return a vector of subjets, which are the ones that would be kept -// by the filtering -PseudoJet Recluster::result(const PseudoJet &jet) const { - // generic sanity checks - //------------------------------------------------------------------- - // make sure that the jet has constituents - if (! jet.has_constituents()) - throw Error("Filter can only be applied on jets having constituents"); - - // tests particular to certain configurations - //------------------------------------------------------------------- - - // for a whole variety of checks, we shall need the "recursive" - // pieces of the jet (the jet itself or recursing down to its most - // fundamental pieces). So we do compute these once and for all. 
- // - // Note that the pieces are always needed (either for C/A or for the - // area checks) - vector all_pieces; //.clear(); - if ((!_get_all_pieces(jet, all_pieces)) || (all_pieces.size()==0)){ - throw Error("Recluster: failed to retrieve all the pieces composing the jet."); - } - - // decide which jet definition to use - //------------------------------------------------------------------- - JetDefinition subjet_def; - if (_use_full_def){ - subjet_def = _subjet_def; - } else { - _build_jet_def_with_recombiner(all_pieces, subjet_def); - } - - - // the vector that will ultimately hold the subjets - vector subjets; - - // check if we can apply the simplification for C/A jets reclustered - // with C/A - // - // we apply C/A clustering iff - // - the requested subjet_def is C/A - // - the jet is either directly coming from C/A or if it is a - // superposition of C/A jets from the same cluster sequence - // - the pieces agree with the recombination scheme of subjet_def - // - // Note that in this case area support will be automatically - // inherited so we can only worry about this later - //------------------------------------------------------------------- - if (_check_ca(all_pieces, subjet_def)){ - _recluster_cafilt(all_pieces, subjets, subjet_def.R()); - subjets = sorted_by_pt(subjets); - return _single - ?
subjets[0] - : join(subjets, *(subjet_def.recombiner())); - } - - // decide if area support has to be kept - //------------------------------------------------------------------- - bool include_area_support = jet.has_area(); - if ((include_area_support) && (!_check_explicit_ghosts(all_pieces))){ - _explicit_ghost_warning.warn("Recluster: the original cluster sequence is lacking explicit ghosts; area support will no longer be available after re-clustering"); - include_area_support = false; - } - - // extract the subjets - //------------------------------------------------------------------- - _recluster_generic(jet, subjets, subjet_def, include_area_support); - subjets = sorted_by_pt(subjets); - - return _single - ? subjets[0] - : join(subjets, *(subjet_def.recombiner())); -} - -//---------------------------------------------------------------------- -// the parts that really do the reclustering -//---------------------------------------------------------------------- - -// get the subjets in the simple case of C/A+C/A -void Recluster::_recluster_cafilt(const vector & all_pieces, - vector & subjets, - double Rfilt) const{ - subjets.clear(); - - // each individual piece should have a C/A cluster sequence - // associated to it. 
So a simple loop would do the job - for (vector::const_iterator piece_it = all_pieces.begin(); - piece_it!=all_pieces.end(); piece_it++){ - // just extract the exclusive subjets of 'jet' - const ClusterSequence *cs = piece_it->associated_cluster_sequence(); - vector local_subjets; - - double dcut = Rfilt / cs->jet_def().R(); - if (dcut>=1.0){ - local_subjets.push_back(*piece_it); - } else { - local_subjets = piece_it->exclusive_subjets(dcut*dcut); - } - - copy(local_subjets.begin(), local_subjets.end(), back_inserter(subjets)); - } -} - - -// set the filtered elements in the generic re-clustering case (w/o -// subtraction) -void Recluster::_recluster_generic(const PseudoJet & jet, - vector & subjets, - const JetDefinition & subjet_def, - bool do_areas) const{ - // create a new, internal, ClusterSequence from the jet constituents - // get the subjets directly from there - // - // If the jet has area support then we separate the ghosts from the - // "regular" particles so the subjets will also have area - // support. Note that we do this regardless of whether rho is zero - // or not. - // - // Note that to be able to separate the ghosts, one needs explicit - // ghosts!! - // --------------------------------------------------------------- - if (do_areas){ - vector all_constituents = jet.constituents(); - vector regular_constituents, ghosts; - - for (vector::iterator it = all_constituents.begin(); - it != all_constituents.end(); it++){ - if (it->is_pure_ghost()) - ghosts.push_back(*it); - else - regular_constituents.push_back(*it); - } - - // figure out the ghost area from the 1st ghost (if none, any value - // would probably do as the area will be 0 and subtraction will have - // no effect!) - double ghost_area = (ghosts.size()) ? 
ghosts[0].area() : 0.01; - ClusterSequenceActiveAreaExplicitGhosts * csa - = new ClusterSequenceActiveAreaExplicitGhosts(regular_constituents, - subjet_def, - ghosts, ghost_area); - - subjets = csa->inclusive_jets(); - - // allow the cs to be deleted when it's no longer used - // - // Note that there is at least one constituent in the jet so there - // is in principle at least one subjet But one may have used a - // nasty recombiner that left an empty set of subjets, so we'd - // rather play it safe - if (subjets.size()) - csa->delete_self_when_unused(); - else - delete csa; - } else { - ClusterSequence * cs = new ClusterSequence(jet.constituents(), subjet_def); - subjets = cs->inclusive_jets(); - // allow the cs to be deleted when it's no longer used (again, we - // add an extra safety check) - if (subjets.size()) - cs->delete_self_when_unused(); - else - delete cs; - } -} - - -//---------------------------------------------------------------------- -// various checks and internal constructs -//---------------------------------------------------------------------- - -// fundamental info for CompositeJets -//---------------------------------------------------------------------- - -// get the pieces down to the fundamental pieces -// -// Note that this just checks that there is an associated CS to the -// fundamental pieces, not that it is still valid -bool Recluster::_get_all_pieces(const PseudoJet &jet, vector &all_pieces) const{ - if (jet.has_associated_cluster_sequence()){ - all_pieces.push_back(jet); - return true; - } - - if (jet.has_pieces()){ - const vector pieces = jet.pieces(); - for (vector::const_iterator it=pieces.begin(); it!=pieces.end(); it++) - if (!_get_all_pieces(*it, all_pieces)) return false; - return true; - } - - return false; -} - -// treatment of recombiners -//---------------------------------------------------------------------- -// get the common recombiner to all pieces (NULL if none) -// -// Note that if the jet has an associated 
cluster sequence that is no -// longer valid, an error will be thrown (needed since it could be the -// 1st check called after the enumeration of the pieces) -const JetDefinition::Recombiner* Recluster::_get_common_recombiner(const vector &all_pieces) const{ - const JetDefinition & jd_ref = all_pieces[0].validated_cs()->jet_def(); - for (unsigned int i=1; ijet_def().has_same_recombiner(jd_ref)) return NULL; - - return jd_ref.recombiner(); -} - -void Recluster::_build_jet_def_with_recombiner(const vector &all_pieces, - JetDefinition &subjet_def) const{ - // the recombiner has to be guessed from the pieces - const JetDefinition::Recombiner * common_recombiner = _get_common_recombiner(all_pieces); - if (common_recombiner) { - if (typeid(*common_recombiner) == typeid(JetDefinition::DefaultRecombiner)) { - RecombinationScheme scheme = - static_cast(common_recombiner)->scheme(); - if (_has_subjet_extra) - subjet_def = JetDefinition(_subjet_alg, _subjet_radius, _subjet_extra, scheme); - else if (_has_subjet_radius) - subjet_def = JetDefinition(_subjet_alg, _subjet_radius, scheme); - else - subjet_def = JetDefinition(_subjet_alg, scheme); - } else { - if (_has_subjet_extra) - subjet_def = JetDefinition(_subjet_alg, _subjet_radius, _subjet_extra, common_recombiner); - else if (_has_subjet_radius) - subjet_def = JetDefinition(_subjet_alg, _subjet_radius, common_recombiner); - else - subjet_def = JetDefinition(_subjet_alg, common_recombiner); - } - } else { - throw Error("Recluster: requested to guess the recombination scheme (or recombiner) from the original jet but an inconsistency was found between the pieces constituing that jet."); - } -} - -// area support -//---------------------------------------------------------------------- - -// check if the jet (or all its pieces) have explicit ghosts -// (assuming the jet has area support). 
-// -// Note that if the jet has an associated cluster sequence that is no -// longer valid, an error will be thrown (needed since it could be the -// 1st check called after the enumeration of the pieces) -bool Recluster::_check_explicit_ghosts(const vector &all_pieces) const{ - for (vector::const_iterator it=all_pieces.begin(); it!=all_pieces.end(); it++) - if (! it->validated_csab()->has_explicit_ghosts()) return false; - return true; -} - -// C/A specific tests -//---------------------------------------------------------------------- - -// check if one can apply the simplification for C/A subjets -// -// This includes: -// - the subjet definition asks for C/A subjets -// - all the pieces share the same CS -// - that CS is C/A with the same recombiner as the subjet def -// - the filtering radius is not larger than any of the pairwise -// distance between the pieces -// -// Note that if the jet has an associated cluster sequence that is no -// longer valid, an error will be thrown (needed since it could be the -// 1st check called after the enumeration of the pieces) -bool Recluster::_check_ca(const vector &all_pieces, - const JetDefinition &subjet_def) const{ - if (subjet_def.jet_algorithm() != cambridge_algorithm) return false; - - // check that the 1st of all the pieces (we're sure there is at - // least one) is coming from a C/A clustering. Then check that all - // the following pieces share the same ClusterSequence - const ClusterSequence * cs_ref = all_pieces[0].validated_cs(); - if (cs_ref->jet_def().jet_algorithm() != cambridge_algorithm) return false; - for (unsigned int i=1; ijet_def().has_same_recombiner(subjet_def)) return false; - - // we also have to make sure that the reclustering radius is not larger - // than any of the inter-pieces distance - double Rsub2 = subjet_def.R(); - Rsub2 *= Rsub2; - for (unsigned int i=0; i. 
-//---------------------------------------------------------------------- - -#include "RecursiveSoftDrop.hh" -#include "fastjet/ClusterSequence.hh" - -using namespace std; - -// FASTJET_BEGIN_NAMESPACE // defined in fastjet/internal/base.hh - - //namespace contrib{ - - namespace Rivet{ - namespace fjcontrib{ - - -namespace internal_recursive_softdrop{ - -//======================================================================== -/// \class RSDHistoryElement -/// a helper class to help keeping track of the RSD tree -/// -/// The element is created at the top of a branch and updated each -/// time one grooms something away. -class RSDHistoryElement{ -public: - RSDHistoryElement(const PseudoJet &jet, const RecursiveSoftDrop* rsd_ptr, double R0sqr) : - R0_squared(R0sqr), - child1_in_history(-1), child2_in_history(-1), symmetry(-1.0), mu2(-1.0){ - reset(jet, rsd_ptr); - } - - void reset(const PseudoJet &jet, const RecursiveSoftDrop* rsd_ptr){ - current_in_ca_tree = jet.cluster_hist_index(); - PseudoJet piece1, piece2; - theta_squared = (jet.has_parents(piece1, piece2)) - ?
rsd_ptr->squared_geometric_distance(piece1,piece2) : 0.0; - } - - int current_in_ca_tree; ///< (history) index of the current particle in the C/A tree - double theta_squared; ///< squared angle at which this decays - double R0_squared; ///< squared angle at the top of the branch - ///< (used for RSD with dynamic_R0) - int child1_in_history; ///< hardest of the 2 decay products (-1 if untagged) - int child2_in_history; ///< softest of the 2 decay products (-1 if untagged) - - // info about what has been dropped and the local substructure - vector dropped_delta_R; - vector dropped_symmetry; - vector dropped_mu; - double symmetry, mu2; -}; - - -/// \class OrderRSDHistoryElements -/// angular ordering of (pointers to) the history elements -/// -/// our priority queue will use pointers to these elements that are -/// ordered in angle (of the objects they point to) -class OrderRSDHistoryElements{ -public: - bool operator()(const RSDHistoryElement *e1, const RSDHistoryElement *e2) const { - return e1->theta_squared < e2->theta_squared; - } -}; - -} // internal_recursive_softdrop - -//======================================================================== - -// initialise all the flags and parameters to their default value -void RecursiveSoftDrop::set_defaults(){ - set_fixed_depth_mode(false); - set_dynamical_R0(false); - set_hardest_branch_only(false); - set_min_deltaR_squared(-1.0); -} - -// description of the tool -string RecursiveSoftDrop::description() const{ - ostringstream res; - res << "recursive application of [" - << RecursiveSymmetryCutBase::description() - << "]"; - - if (_fixed_depth){ - res << ", recursively applied down to a maximal depth of N="; - if (_n==-1) res << "infinity"; else res << _n; - } else { - res << ", applied N="; - if (_n==-1) res << "infinity"; else res << _n; - res << " times"; - } - - if (_dynamical_R0) - res << ", with R0 dynamically scaled"; - else - res << ", with R0 kept fixed"; - - if (_hardest_branch_only) - res << ", following 
only the hardest branch"; - - if (_min_dR2>0) - res << ", with minimal angle (squared) = " << _min_dR2; - - return res.str(); -} - - -// action on a single jet with RecursiveSoftDrop. -// -// uses "result_fixed_tags" by default (i.e. recurse from R0 to -// smaller angles until n SD conditions have been met), or -// "result_fixed_depth" where each of the previous SD branches are -// recursed into down to a depth of n. -PseudoJet RecursiveSoftDrop::result(const PseudoJet &jet) const{ - return _fixed_depth ? result_fixed_depth(jet) : result_fixed_tags(jet); -} - -// this routine applies the Soft Drop criterion recursively on the -// CA tree until we find n subjets (or until it converges), and -// adds them together into a groomed PseudoJet -PseudoJet RecursiveSoftDrop::result_fixed_tags(const PseudoJet &jet) const { - // start by reclustering jet with C/A algorithm - PseudoJet ca_jet = _recluster_if_needed(jet); - - if (! ca_jet.has_valid_cluster_sequence()){ - throw Error("RecursiveSoftDrop can only be applied to jets associated to a (valid) cluster sequence"); - } - - const ClusterSequence *cs = ca_jet.validated_cluster_sequence(); - const vector &cs_history = cs->history(); - const vector &cs_jets = cs->jets(); - - // initialize counter to 1 subjet (i.e.
the full ca_jet) - int n_tagged = 0; - int max_njet = ca_jet.constituents().size(); - - // create the list of branches - unsigned int max_history_size = 2*max_njet; - if ((_n>0) && (_n history; - history.reserve(max_history_size); // could be one shorter - history.push_back(internal_recursive_softdrop::RSDHistoryElement(ca_jet, this, _R0sqr)); - - // create a priority queue containing the subjets and a comparison definition - // initialise to the full ca_jet - priority_queue, internal_recursive_softdrop::OrderRSDHistoryElements> active_branches; - active_branches.push(& (history[0])); - - PseudoJet parent, piece1, piece2; - double sym, mu2; - - // which R0 to use - //double R0sqr = _R0sqr; - - // loop over C/A tree until we reach the appropriate number of subjets - while ((continue_grooming(n_tagged)) && (active_branches.size())) { - // get the element corresponding to the max dR - // and the associated PJ - internal_recursive_softdrop::RSDHistoryElement * elm = active_branches.top(); - PseudoJet parent = cs_jets[cs_history[elm->current_in_ca_tree].jetp_index]; - - // do one step of SD - RecursionStatus status = recurse_one_step(parent, piece1, piece2, sym, mu2, &elm->R0_squared); - - // check if we passed the SD condition - if (status==recursion_success){ - // check for the optional angular cut - if ((_min_dR2 > 0) && (squared_geometric_distance(piece1,piece2) < _min_dR2)) - break; - - // both subjets are kept in the list for potential further de-clustering - elm->child1_in_history = history.size(); - elm->child2_in_history = history.size()+1; - elm->symmetry = sym; - elm->mu2 = mu2; - active_branches.pop(); - - // update the history - double next_R0_squared = (_dynamical_R0) - ? 
piece1.squared_distance(piece2) : elm->R0_squared; - - internal_recursive_softdrop::RSDHistoryElement elm1(piece1, this, next_R0_squared); - history.push_back(elm1); - active_branches.push(&(history.back())); - internal_recursive_softdrop::RSDHistoryElement elm2(piece2, this, next_R0_squared); - history.push_back(elm2); - if (!_hardest_branch_only){ - active_branches.push(&(history.back())); - } - - ++n_tagged; - } else if (status==recursion_dropped){ - // check for the optional angular cut - if ((_min_dR2 > 0) && (squared_geometric_distance(piece1,piece2) < _min_dR2)) - break; - - active_branches.pop(); - // tagging failed and the softest branch should be dropped - // keep track of what has been groomed away - max_njet -= piece2.constituents().size(); - elm->dropped_delta_R .push_back((elm->theta_squared >= 0) ? sqrt(elm->theta_squared) : -sqrt(elm->theta_squared)); - elm->dropped_symmetry.push_back(sym); - elm->dropped_mu .push_back((mu2>=0) ? sqrt(mu2) : -sqrt(mu2)); - - // keep the hardest branch in the recursion - elm->reset(piece1, this); - active_branches.push(elm); - } else if (status==recursion_no_parents){ - if (_min_dR2 > 0) break; - active_branches.pop(); - // nothing specific to do: we just keep the current jet as a "leaf" - } else { // recursion_issue - active_branches.pop(); - // we've met an issue - // if the piece2 is null as well, it means we've had a critical problem.
- // In that case, return an empty PseudoJet - if (piece2 == 0) return PseudoJet(); - - // otherwise, we should consider "piece2" as a final particle - // not to be recursed into - if (_min_dR2 > 0) break; - max_njet -= (piece2.constituents().size()-1); - break; - } - - // If the missing number of tags is exactly the number of objects - // we have left in the recursion, stop - if (n_tagged == max_njet) break; - } - - // now we have a bunch of history elements that we can use to build the final jet - vector mapped_to_history(history.size()); - unsigned int history_index = history.size(); - do { - --history_index; - const internal_recursive_softdrop::RSDHistoryElement & elm = history[history_index]; - - // two kinds of events: either just a final leaf, potentially with grooming - // or a branching (also with potential grooming at the end) - if (elm.child1_in_history<0){ - // this is a leaf, i.e. with no further substructure - PseudoJet & subjet = mapped_to_history[history_index] - = cs_jets[cs_history[elm.current_in_ca_tree].jetp_index]; - - StructureType * structure = new StructureType(subjet); - if (has_verbose_structure()){ - structure->set_verbose(true); - structure->set_dropped_delta_R (elm.dropped_delta_R); - structure->set_dropped_symmetry(elm.dropped_symmetry); - structure->set_dropped_mu (elm.dropped_mu); - } - subjet.set_structure_shared_ptr(SharedPtr(structure)); - } else { - PseudoJet & subjet = mapped_to_history[history_index] - = join(mapped_to_history[elm.child1_in_history], mapped_to_history[elm.child2_in_history]); - StructureType * structure = new StructureType(subjet, sqrt(elm.theta_squared), elm.symmetry, sqrt(elm.mu2)); - if (has_verbose_structure()){ - structure->set_verbose(true); - structure->set_dropped_delta_R (elm.dropped_delta_R); - structure->set_dropped_symmetry(elm.dropped_symmetry); - structure->set_dropped_mu (elm.dropped_mu); - } - subjet.set_structure_shared_ptr(SharedPtr(structure)); - } - } while (history_index>0); - - return
mapped_to_history[0]; -} - -// this routine applies the Soft Drop criterion recursively on the -// CA tree, recursing into all the branches found during the previous iteration -// until n layers have been found (or until it converges) -PseudoJet RecursiveSoftDrop::result_fixed_depth(const PseudoJet &jet) const { - // start by reclustering jet with C/A algorithm - PseudoJet ca_jet = _recluster_if_needed(jet); - - if (! ca_jet.has_valid_cluster_sequence()){ - throw Error("RecursiveSoftDrop can only be applied to jets associated to a (valid) cluster sequence"); - } - - const ClusterSequence *cs = ca_jet.validated_cluster_sequence(); - const vector &cs_history = cs->history(); - const vector &cs_jets = cs->jets(); - - // initialize counter to 1 subjet (i.e. the full ca_jet) - int n_depth = 0; - int max_njet = ca_jet.constituents().size(); - - // create the list of branches - unsigned int max_history_size = 2*max_njet; - //if ((_n>0) && (_n history; - history.reserve(max_history_size); // could be one shorter - history.push_back(internal_recursive_softdrop::RSDHistoryElement(ca_jet, this, _R0sqr)); - history.back().theta_squared = _R0sqr; - - // create a priority queue containing the subjets and a comparison definition - // initialize counter to 1 subjet (i.e. 
the full ca_jet) - list active_branches; - active_branches.push_back(& (history[0])); - - PseudoJet parent, piece1, piece2; - - while ((continue_grooming(n_depth)) && (active_branches.size())) { - // loop over all the branches and look for substructure - list::iterator hist_it=active_branches.begin(); - while (hist_it!=active_branches.end()){ - // get the element corresponding to the max dR - // and the associated PJ - internal_recursive_softdrop::RSDHistoryElement * elm = (*hist_it); - PseudoJet parent = cs_jets[cs_history[elm->current_in_ca_tree].jetp_index]; - - // we need to iterate this branch until we find some substructure - PseudoJet result_sd; - if (_dynamical_R0){ - SoftDrop sd(_beta, _symmetry_cut, symmetry_measure(), sqrt(elm->theta_squared), - mu_cut(), recursion_choice(), subtractor()); - sd.set_reclustering(false); - sd.set_verbose_structure(has_verbose_structure()); - result_sd = sd(parent); - } else { - result_sd = SoftDrop::result(parent); - } - - // if we had an empty PJ, that means we ran into some problems. 
- // just return an empty PJ ourselves - if (result_sd == 0) return PseudoJet(); - - // update the history element to reflect our iteration - elm->current_in_ca_tree = result_sd.cluster_hist_index(); - - if (has_verbose_structure()){ - elm->dropped_delta_R = result_sd.structure_of().dropped_delta_R(); - elm->dropped_symmetry = result_sd.structure_of().dropped_symmetry(); - elm->dropped_mu = result_sd.structure_of().dropped_mu(); - } - - // if some substructure was found: - if (result_sd.structure_of().has_substructure()){ - // update the history element to reflect our iteration - elm->child1_in_history = history.size(); - elm->child2_in_history = history.size()+1; - elm->theta_squared = result_sd.structure_of().delta_R(); - elm->theta_squared *= elm->theta_squared; - elm->symmetry = result_sd.structure_of().symmetry(); - elm->mu2 = result_sd.structure_of().mu(); - elm->mu2 *= elm->mu2; - - // the next iteration will have to handle 2 new history - // elements (the R0squared argument here is unused) - result_sd.has_parents(piece1, piece2); - internal_recursive_softdrop::RSDHistoryElement elm1(piece1, this, _R0sqr); - history.push_back(elm1); - // insert it in the active branches if needed - if (elm1.theta_squared>0) - active_branches.insert(hist_it,&(history.back())); // insert just before - - internal_recursive_softdrop::RSDHistoryElement elm2(piece2, this, _R0sqr); - history.push_back(elm2); - if ((!_hardest_branch_only) && (elm2.theta_squared>0)){ - active_branches.insert(hist_it,&(history.back())); // insert just before - } - } - // otherwise we've just reached the end of the recursion the - // history information has been updated above - // - // we just need to make sure that we do not recurse into that - // element any longer - - list::iterator current = hist_it; - ++hist_it; - active_branches.erase(current); - } // loop over branches at current depth - ++n_depth; - } // loop over depth - - // now we have a bunch of history elements that we can use to build the 
final jet - vector mapped_to_history(history.size()); - unsigned int history_index = history.size(); - do { - --history_index; - const internal_recursive_softdrop::RSDHistoryElement & elm = history[history_index]; - - // two kinds of events: either just a final leaf, potentially with grooming - // or a branching (also with potential grooming at the end) - if (elm.child1_in_history<0){ - // this is a leaf, i.e. with no further substructure - PseudoJet & subjet = mapped_to_history[history_index] - = cs_jets[cs_history[elm.current_in_ca_tree].jetp_index]; - - StructureType * structure = new StructureType(subjet); - if (has_verbose_structure()){ - structure->set_verbose(true); - } - subjet.set_structure_shared_ptr(SharedPtr(structure)); - } else { - PseudoJet & subjet = mapped_to_history[history_index] - = join(mapped_to_history[elm.child1_in_history], mapped_to_history[elm.child2_in_history]); - StructureType * structure = new StructureType(subjet, sqrt(elm.theta_squared), elm.symmetry, sqrt(elm.mu2)); - if (has_verbose_structure()){ - structure->set_verbose(true); - structure->set_dropped_delta_R (elm.dropped_delta_R); - structure->set_dropped_symmetry(elm.dropped_symmetry); - structure->set_dropped_mu (elm.dropped_mu); - } - subjet.set_structure_shared_ptr(SharedPtr(structure)); - } - } while (history_index>0); - - return mapped_to_history[0]; -} - - -//======================================================================== -// implementation of the helpers -//======================================================================== - -// helper to get all the prongs in a jet that has been obtained using -// RecursiveSoftDrop (instead of recursively parsing the 1->2 -// composite jet structure) -vector recursive_soft_drop_prongs(const PseudoJet & rsd_jet){ - // make sure that the jet has the appropriate RecursiveSoftDrop structure - if (!rsd_jet.has_structure_of()) - return vector(); - - // if this jet has no substructure, just return a 1-prong object - if
(!rsd_jet.structure_of().has_substructure()) - return vector(1, rsd_jet); - - // otherwise fill a vector with all the prongs (no specific ordering) - vector prongs; - - // parse the list of PseudoJet we still need to deal with - vector to_parse = rsd_jet.pieces(); // valid both for a C/A recombination step or a RSD join - unsigned int i_parse = 0; - while (i_parse()) && - (current.structure_of().has_substructure())){ - // if this has some deeper substructure, add it to the list of - // things to further process - vector pieces = current.pieces(); - assert(pieces.size() == 2); - to_parse[i_parse] = pieces[0]; - to_parse.push_back(pieces[1]); - } else { - // no further substructure, just add this as a branch - prongs.push_back(current); - ++i_parse; - } - } - - return prongs; -} - - } } - - //FASTJET_END_NAMESPACE diff --git a/src/Tools/fjcontrib/RecursiveTools/RecursiveSymmetryCutBase.cc b/src/Tools/fjcontrib/RecursiveTools/RecursiveSymmetryCutBase.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/RecursiveSymmetryCutBase.cc +++ /dev/null @@ -1,646 +0,0 @@ -// $Id: RecursiveSymmetryCutBase.cc 1080 2017-09-28 07:51:37Z gsoyez $ -// -// Copyright (c) 2014-, Gavin P. Salam, Gregory Soyez, Jesse Thaler -// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see . 
-//---------------------------------------------------------------------- - -#include "RecursiveSymmetryCutBase.hh" -#include "fastjet/JetDefinition.hh" -#include "fastjet/ClusterSequenceAreaBase.hh" -#include -#include -#include - -using namespace std; - -//FASTJET_BEGIN_NAMESPACE // defined in fastjet/internal/base.hh - - //namespace contrib{ - - namespace Rivet{ - namespace fjcontrib{ - - -LimitedWarning RecursiveSymmetryCutBase::_negative_mass_warning; -LimitedWarning RecursiveSymmetryCutBase::_mu2_gt1_warning; -//LimitedWarning RecursiveSymmetryCutBase::_nonca_warning; -LimitedWarning RecursiveSymmetryCutBase::_explicit_ghost_warning; - -bool RecursiveSymmetryCutBase::_verbose = false; - -//---------------------------------------------------------------------- -PseudoJet RecursiveSymmetryCutBase::result(const PseudoJet & jet) const { - // construct the input jet (by default, recluster with C/A) - if (! jet.has_constituents()){ - throw Error("RecursiveSymmetryCutBase can only be applied to jets with constituents"); - } - - PseudoJet j = _recluster_if_needed(jet); - - // sanity check: the jet must have a valid CS - if (! j.has_valid_cluster_sequence()){ - throw Error("RecursiveSymmetryCutBase can only be applied to jets associated to a (valid) cluster sequence"); - } - - // check that area information is there in case we have a subtractor - // GS: do we really need this since subtraction may not require areas? 
- if (_subtractor) { - const ClusterSequenceAreaBase * csab = - dynamic_cast(j.associated_cs()); - if (csab == 0 || (!csab->has_explicit_ghosts())) - _explicit_ghost_warning.warn("RecursiveSymmetryCutBase: there is no clustering sequence, or it lacks explicit ghosts: subtraction is not guaranteed to function properly"); - } - - // establish the first subjet and optionally subtract it - PseudoJet subjet = j; - if (_subtractor && (!_input_jet_is_subtracted)) { - subjet = (*_subtractor)(subjet); - } - - // variables for tracking what will happen - PseudoJet piece1, piece2; - - // vectors for storing optional verbose structure - // these hold the deltaR, symmetry, and mu values of dropped branches - std::vector dropped_delta_R; - std::vector dropped_symmetry; - std::vector dropped_mu; - - double sym, mu2; - - // now recurse into the jet's structure - RecursionStatus status; - while ((status=recurse_one_step(subjet, piece1, piece2, sym, mu2)) != recursion_success) { - // start with sanity checks: - if ((status == recursion_issue) || (status == recursion_no_parents)) { - // we should return piece1 by our convention for recurse_one_step - PseudoJet result; - if (status == recursion_issue){ - result = piece1; - if (_verbose) cout << "reached end; returning null jet " << endl; - } else { - result = _result_no_substructure(piece1); - if (_verbose) cout << "no parents found; returning last PJ or empty jet" << endl; - } - - if (result != 0) { - // if in grooming mode, add dummy structure information - StructureType * structure = new StructureType(result); - // structure->_symmetry = 0.0; - // structure->_mu = 0.0; - // structure->_delta_R = 0.0; - if (_verbose_structure) { // still want to store verbose information about dropped branches - structure->_has_verbose = true; - structure->_dropped_symmetry = dropped_symmetry; - structure->_dropped_mu = dropped_mu; - structure->_dropped_delta_R = dropped_delta_R; - } - result.set_structure_shared_ptr(SharedPtr(structure)); - } - - 
return result; - } - - assert(status == recursion_dropped); - - // if desired, store information about dropped branches before recursing - if (_verbose_structure) { - dropped_delta_R.push_back(piece1.delta_R(piece2)); - dropped_symmetry.push_back(sym); - dropped_mu.push_back((mu2 >= 0) ? sqrt(mu2) : -sqrt(-mu2)); - } - - subjet = piece1; - } - - - // we've tagged the splitting, return the jet with its substructure - StructureType * structure = new StructureType(subjet); - structure->_symmetry = sym; - structure->_mu = (mu2 >= 0) ? sqrt(mu2) : -sqrt(-mu2); - structure->_delta_R = sqrt(squared_geometric_distance(piece1, piece2)); - if (_verbose_structure) { - structure->_has_verbose = true; - structure->_dropped_symmetry = dropped_symmetry; - structure->_dropped_mu = dropped_mu; - structure->_dropped_delta_R = dropped_delta_R; - } - subjet.set_structure_shared_ptr(SharedPtr(structure)); - return subjet; -} - - - -//---------------------------------------------------------------------- -// the method below is the one actually performing one step of the -// recursion. 
-// -// It returns a status code (defined above) -// -// In case of success, all the information is filled -// In case of "no parents", piece1 is the same subjet -// In case of trouble, piece2 will be a 0 PJ and piece1 is the PJ we -// should return (either 0 itself if the issue was critical, or -// non-zero in case of a minor issue just causing the recursion to -// stop) -RecursiveSymmetryCutBase::RecursionStatus - RecursiveSymmetryCutBase::recurse_one_step(const PseudoJet & subjet, - PseudoJet &piece1, PseudoJet &piece2, - double &sym, double &mu2, - void *extra_parameters) const { - if (!subjet.has_parents(piece1, piece2)){ - piece1 = subjet; - piece2 = PseudoJet(); - return recursion_no_parents; - } - - // first sanity check: - // - zero or negative pts are not allowed for the input subjet - // - zero or negative masses are not allowed for configurations - // in which the mass will effectively appear in a denominator - // (The masses will be checked later) - if (subjet.pt2() <= 0){ // this is a critical problem, return an empty PJ - piece1 = piece2 = PseudoJet(); - return recursion_issue; - } - - if (_subtractor) { - piece1 = (*_subtractor)(piece1); - piece2 = (*_subtractor)(piece2); - } - - // determine the symmetry parameter - if (_symmetry_measure == y) { - // the original d_{ij}/m^2 choice from MDT - // first make sure the mass is sensible - if (subjet.m2() <= 0) { - _negative_mass_warning.warn("RecursiveSymmetryCutBase: cannot calculate y, because (sub)jet mass is negative; bailing out"); - // since rounding errors can give -ve masses, be a bit more - // tolerant and consider that no substructure has been found - piece1 = _result_no_substructure(subjet); - piece2 = PseudoJet(); - return recursion_issue; - } - sym = piece1.kt_distance(piece2) / subjet.m2(); - - } else if (_symmetry_measure == vector_z) { - // min(pt1, pt2)/(pt), where the denominator is a vector sum - // of the two subjets - sym = min(piece1.pt(), piece2.pt()) / subjet.pt(); - } else if
(_symmetry_measure == scalar_z) { - // min(pt1, pt2)/(pt1+pt2), where the denominator is a scalar sum - // of the two subjets - double pt1 = piece1.pt(); - double pt2 = piece2.pt(); - // make sure denominator is non-zero - sym = pt1 + pt2; - if (sym == 0){ // this is a critical problem, return an empty PJ - piece1 = piece2 = PseudoJet(); - return recursion_issue; - } - sym = min(pt1, pt2) / sym; - } else if ((_symmetry_measure == theta_E) || (_symmetry_measure == cos_theta_E)){ - // min(E1, E2)/(E1+E2) - double E1 = piece1.E(); - double E2 = piece2.E(); - // make sure denominator is non-zero - sym = E1 + E2; - if (sym == 0){ // this is a critical problem, return an empty PJ - piece1 = piece2 = PseudoJet(); - return recursion_issue; - } - sym = min(E1, E2) / sym; - } else { - throw Error ("Unrecognized choice of symmetry_measure"); - } - - // determine the symmetry cut - // (This function is specified in the derived classes) - double this_symmetry_cut = symmetry_cut_fn(piece1, piece2, extra_parameters); - - // and make a first tagging decision based on symmetry cut - bool tagged = (sym > this_symmetry_cut); - - // if tagged based on symmetry cut, then check the mu cut (if relevant) - // and update the tagging decision. Calculate mu^2 regardless, for cases - // of users not cutting on mu2, but still interested in its value. 
- bool use_mu_cut = (_mu_cut != numeric_limits::infinity()); - mu2 = max(piece1.m2(), piece2.m2())/subjet.m2(); - if (tagged && use_mu_cut) { - // first a sanity check -- mu2 won't be sensible if the subjet mass - // is negative, so we can't then trust the mu cut - bail out - if (subjet.m2() <= 0) { - _negative_mass_warning.warn("RecursiveSymmetryCutBase: cannot trust mu, because (sub)jet mass is negative; bailing out"); - piece1 = piece2 = PseudoJet(); - return recursion_issue; - } - if (mu2 > 1) _mu2_gt1_warning.warn("RecursiveSymmetryCutBase encountered mu^2 value > 1"); - if (mu2 > pow(_mu_cut,2)) tagged = false; - } - - // we'll continue unclustering, allowing for the different - // ways of choosing which parent to look into - if (_recursion_choice == larger_pt) { - if (piece1.pt2() < piece2.pt2()) std::swap(piece1, piece2); - } else if (_recursion_choice == larger_mt) { - if (piece1.mt2() < piece2.mt2()) std::swap(piece1, piece2); - } else if (_recursion_choice == larger_m) { - if (piece1.m2() < piece2.m2()) std::swap(piece1, piece2); - } else if (_recursion_choice == larger_E) { - if (piece1.E() < piece2.E()) std::swap(piece1, piece2); - } else { - throw Error ("Unrecognized value for recursion_choice"); - } - - return tagged ? recursion_success : recursion_dropped; -} - - -//---------------------------------------------------------------------- -string RecursiveSymmetryCutBase::description() const { - ostringstream ostr; - ostr << "Recursive " << (_grooming_mode ? 
"Groomer" : "Tagger") << " with a symmetry cut "; - - switch(_symmetry_measure) { - case y: - ostr << "y"; break; - case scalar_z: - ostr << "scalar_z"; break; - case vector_z: - ostr << "vector_z"; break; - case theta_E: - ostr << "theta_E"; break; - case cos_theta_E: - ostr << "cos_theta_E"; break; - default: - cerr << "failed to interpret symmetry_measure" << endl; exit(-1); - } - ostr << " > " << symmetry_cut_description(); - - if (_mu_cut != numeric_limits::infinity()) { - ostr << ", mass-drop cut mu=max(m1,m2)/m < " << _mu_cut; - } else { - ostr << ", no mass-drop requirement"; - } - - ostr << ", recursion into the subjet with larger "; - switch(_recursion_choice) { - case larger_pt: - ostr << "pt"; break; - case larger_mt: - ostr << "mt(=sqrt(m^2+pt^2))"; break; - case larger_m: - ostr << "mass"; break; - case larger_E: - ostr << "energy"; break; - default: - cerr << "failed to interpret recursion_choice" << endl; exit(-1); - } - - if (_subtractor) { - ostr << ", subtractor: " << _subtractor->description(); - if (_input_jet_is_subtracted) {ostr << " (input jet is assumed already subtracted)";} - } - - if (_recluster) { - ostr << " and reclustering using " << _recluster->description(); - } - - return ostr.str(); -} - -//---------------------------------------------------------------------- -// helper for handling the reclustering -PseudoJet RecursiveSymmetryCutBase::_recluster_if_needed(const PseudoJet &jet) const{ - if (! 
_do_reclustering) return jet; - if (_recluster) return (*_recluster)(jet); - if (is_ee()){ -#if FASTJET_VERSION_NUMBER >= 30100 - return Recluster(JetDefinition(ee_genkt_algorithm, JetDefinition::max_allowable_R, 0.0), true)(jet); -#else - return Recluster(JetDefinition(ee_genkt_algorithm, JetDefinition::max_allowable_R, 0.0))(jet); -#endif - } - - return Recluster(cambridge_algorithm, JetDefinition::max_allowable_R)(jet); -} - -//---------------------------------------------------------------------- -// decide what to return when no substructure has been found -double RecursiveSymmetryCutBase::squared_geometric_distance(const PseudoJet &j1, - const PseudoJet &j2) const{ - if (_symmetry_measure == theta_E){ - double dot_3d = j1.px()*j2.px() + j1.py()*j2.py() + j1.pz()*j2.pz(); - double cos_theta = max(-1.0,min(1.0, dot_3d/sqrt(j1.modp2()*j2.modp2()))); - double theta = acos(cos_theta); - return theta*theta; - } else if (_symmetry_measure == cos_theta_E){ - double dot_3d = j1.px()*j2.px() + j1.py()*j2.py() + j1.pz()*j2.pz(); - return max(0.0, 2*(1-dot_3d/sqrt(j1.modp2()*j2.modp2()))); - } - - return j1.squared_distance(j2); -} - -//---------------------------------------------------------------------- -PseudoJet RecursiveSymmetryCutBase::_result_no_substructure(const PseudoJet &last_parent) const{ - if (_grooming_mode){ - // in grooming mode, return the last parent - return last_parent; - } else { - // in tagging mode, return an empty PseudoJet - return PseudoJet(); - } -} - - -//======================================================================== -// implementation of the details of the structure - -// the number of dropped subjets -int RecursiveSymmetryCutBase::StructureType::dropped_count(bool global) const { - check_verbose("dropped_count()"); - - // if this jet has no substructure, just return an empty vector - if (!has_substructure()) return _dropped_delta_R.size(); - - // deal with the non-global case - if (!global) return _dropped_delta_R.size(); - - // 
for the global case, we've unfolded the recursion (likely more - // efficient as it requires less copying) - unsigned int count = 0; - vector to_parse; - to_parse.push_back(this); - - unsigned int i_parse = 0; - while (i_parse_dropped_delta_R.size(); - - // check if we need to recurse deeper in the substructure - // - // we can have 2 situations here for the underlying structure (the - // one we've wrapped around): - // - it's of the clustering type - // - it's a composite jet - // only in the 2nd case do we have to recurse deeper - const CompositeJetStructure *css = dynamic_cast(current->_structure.get()); - if (css == 0){ ++i_parse; continue; } - - vector prongs = css->pieces(PseudoJet()); // argument irrelevant - assert(prongs.size() == 2); - for (unsigned int i_prong=0; i_prong<2; ++i_prong){ - if (prongs[i_prong].has_structure_of()){ - RecursiveSymmetryCutBase::StructureType* prong_structure - = (RecursiveSymmetryCutBase::StructureType*) prongs[i_prong].structure_ptr(); - if (prong_structure->has_substructure()) - to_parse.push_back(prong_structure); - } - } - - ++i_parse; - } - return count; -} - -// the delta_R of all the dropped subjets -vector RecursiveSymmetryCutBase::StructureType::dropped_delta_R(bool global) const { - check_verbose("dropped_delta_R()"); - - // if this jet has no substructure, just return an empty vector - if (!has_substructure()) return vector(); - - // deal with the non-global case - if (!global) return _dropped_delta_R; - - // for the global case, we've unfolded the recursion (likely more - // efficient as it requires less copying) - vector all_dropped; - vector to_parse; - to_parse.push_back(this); - - unsigned int i_parse = 0; - while (i_parse_dropped_delta_R.begin(), current->_dropped_delta_R.end()); - - // check if we need to recurse deeper in the substructure - // - // we can have 2 situations here for the underlying structure (the - // one we've wrapped around): - // - it's of the clustering type - // - it's a composite jet - 
// only in the 2nd case do we have to recurse deeper - const CompositeJetStructure *css = dynamic_cast(current->_structure.get()); - if (css == 0){ ++i_parse; continue; } - - vector prongs = css->pieces(PseudoJet()); // argument irrelevant - assert(prongs.size() == 2); - for (unsigned int i_prong=0; i_prong<2; ++i_prong){ - if (prongs[i_prong].has_structure_of()){ - RecursiveSymmetryCutBase::StructureType* prong_structure - = (RecursiveSymmetryCutBase::StructureType*) prongs[i_prong].structure_ptr(); - if (prong_structure->has_substructure()) - to_parse.push_back(prong_structure); - } - } - - ++i_parse; - } - return all_dropped; -} - -// the symmetry of all the dropped subjets -vector RecursiveSymmetryCutBase::StructureType::dropped_symmetry(bool global) const { - check_verbose("dropped_symmetry()"); - - // if this jet has no substructure, just return an empty vector - if (!has_substructure()) return vector(); - - // deal with the non-global case - if (!global) return _dropped_symmetry; - - // for the global case, we've unfolded the recursion (likely more - // efficient as it requires less copying) - vector all_dropped; - vector to_parse; - to_parse.push_back(this); - - unsigned int i_parse = 0; - while (i_parse_dropped_symmetry.begin(), current->_dropped_symmetry.end()); - - // check if we need to recurse deeper in the substructure - // - // we can have 2 situations here for the underlying structure (the - // one we've wrapped around): - // - it's of the clustering type - // - it's a composite jet - // only in the 2nd case do we have to recurse deeper - const CompositeJetStructure *css = dynamic_cast(current->_structure.get()); - if (css == 0){ ++i_parse; continue; } - - vector prongs = css->pieces(PseudoJet()); // argument irrelevant - assert(prongs.size() == 2); - for (unsigned int i_prong=0; i_prong<2; ++i_prong){ - if (prongs[i_prong].has_structure_of()){ - RecursiveSymmetryCutBase::StructureType* prong_structure - = (RecursiveSymmetryCutBase::StructureType*) 
prongs[i_prong].structure_ptr(); - if (prong_structure->has_substructure()) - to_parse.push_back(prong_structure); - } - } - - ++i_parse; - } - return all_dropped; -} - -// the mu of all the dropped subjets -vector RecursiveSymmetryCutBase::StructureType::dropped_mu(bool global) const { - check_verbose("dropped_mu()"); - - // if this jet has no substructure, just return an empty vector - if (!has_substructure()) return vector(); - - // deal with the non-global case - if (!global) return _dropped_mu; - - // for the global case, we've unfolded the recursion (likely more - // efficient as it requires less copying) - vector all_dropped; - vector to_parse; - to_parse.push_back(this); - - unsigned int i_parse = 0; - while (i_parse_dropped_mu.begin(), current->_dropped_mu.end()); - - // check if we need to recurse deeper in the substructure - // - // we can have 2 situations here for the underlying structure (the - // one we've wrapped around): - // - it's of the clustering type - // - it's a composite jet - // only in the 2nd case do we have to recurse deeper - const CompositeJetStructure *css = dynamic_cast(current->_structure.get()); - if (css == 0){ ++i_parse; continue; } - - vector prongs = css->pieces(PseudoJet()); // argument irrelevant - assert(prongs.size() == 2); - for (unsigned int i_prong=0; i_prong<2; ++i_prong){ - if (prongs[i_prong].has_structure_of()){ - RecursiveSymmetryCutBase::StructureType* prong_structure - = (RecursiveSymmetryCutBase::StructureType*) prongs[i_prong].structure_ptr(); - if (prong_structure->has_substructure()) - to_parse.push_back(prong_structure); - } - } - - ++i_parse; - } - return all_dropped; -} - -// the maximum of the symmetry over the dropped subjets -double RecursiveSymmetryCutBase::StructureType::max_dropped_symmetry(bool global) const { - check_verbose("max_dropped_symmetry()"); - - // if there is no substructure, just exit - if (!has_substructure()){ return 0.0; } - - // local value of the max_dropped_symmetry - double 
local_max = (_dropped_symmetry.size() == 0) - ? 0.0 : *max_element(_dropped_symmetry.begin(),_dropped_symmetry.end()); - - // recurse down the structure if instructed to do so - if (global){ - // we can have 2 situations here for the underlying structure (the - // one we've wrapped around): - // - it's of the clustering type - // - it's a composite jet - // only in the 2nd case do we have to recurse deeper - const CompositeJetStructure *css = dynamic_cast(_structure.get()); - if (css == 0) return local_max; - - vector prongs = css->pieces(PseudoJet()); // argument irrelevant - assert(prongs.size() == 2); - for (unsigned int i_prong=0; i_prong<2; ++i_prong){ - // check if the prong has further substructure - if (prongs[i_prong].has_structure_of()){ - RecursiveSymmetryCutBase::StructureType* prong_structure - = (RecursiveSymmetryCutBase::StructureType*) prongs[i_prong].structure_ptr(); - local_max = max(local_max, prong_structure->max_dropped_symmetry(true)); - } - } - } - - return local_max; -} - -//------------------------------------------------------------------------ -// helper class to sort by decreasing thetag -class SortRecursiveSoftDropStructureZgThetagPair{ -public: - bool operator()(const pair &p1, const pair &p2) const{ - return p1.second > p2.second; - } -}; -//------------------------------------------------------------------------ - -// the (zg,thetag) pairs of all the splitting that were found and passed the SD condition -vector > RecursiveSymmetryCutBase::StructureType::sorted_zg_and_thetag() const { - //check_verbose("sorted_zg_and_thetag()"); - - // if this jet has no substructure, just return an empty vector - if (!has_substructure()) return vector >(); - - // otherwise fill a vector with all the prongs (no specific ordering) - vector > all; - vector to_parse; - to_parse.push_back(this); - - unsigned int i_parse = 0; - while (i_parse(current->_symmetry, current->_delta_R)); - - vector prongs = current->pieces(PseudoJet()); - assert(prongs.size() 
== 2); - for (unsigned int i_prong=0; i_prong<2; ++i_prong){ - if (prongs[i_prong].has_structure_of()){ - RecursiveSymmetryCutBase::StructureType* prong_structure - = (RecursiveSymmetryCutBase::StructureType*) prongs[i_prong].structure_ptr(); - if (prong_structure->has_substructure()) - to_parse.push_back(prong_structure); - } - } - - ++i_parse; - } - - sort(all.begin(), all.end(), SortRecursiveSoftDropStructureZgThetagPair()); - return all; -} - - } } // namespace contrib - - //FASTJET_END_NAMESPACE diff --git a/src/Tools/fjcontrib/RecursiveTools/SoftDrop.cc b/src/Tools/fjcontrib/RecursiveTools/SoftDrop.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/SoftDrop.cc +++ /dev/null @@ -1,78 +0,0 @@ -// $Id: SoftDrop.cc 1059 2017-09-07 20:48:00Z gsoyez $ -// -// Copyright (c) 2014-, Gregory Soyez, Jesse. Thaler -// based on arXiv:1402.2657 by Andrew J. Larkoski, Simone Marzani, -// Gregory Soyez, Jesse Thaler -// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see . 
-//---------------------------------------------------------------------- - -#include "SoftDrop.hh" -#include -#include - -using namespace std; - -//FASTJET_BEGIN_NAMESPACE // defined in fastjet/internal/base.hh - - //namespace contrib{ - - namespace Rivet{ - namespace fjcontrib{ - - -//---------------------------------------------------------------------- -// TODO: -// -// - implement reclustering (at the moment we assume it's C/A as for mMDT) -// -// - what is returned if no substructure is found? [and for negative -// pt, m2 or other situations where mMDT currently returns an empty -// PseudoJet] -// -// GS.: mMDT seeks substructure so it makes sense for it to return -// an empty PseudoJet when no substructure is found (+ it is the -// original behaviour). For SoftDrop, in grooming mode (beta>0), it -// would make sense to return a jet with a single particle. In -// tagging mode (beta<0), the situation is less clear. At the level -// of the implementation, having a virtual function could work -// (with a bit of care to cover the -ve pt or m2 cases) -// -// - Do we include Andrew and Simone in the "contrib-author" list -// since they are on the paper? Do we include Gavin in the author's -// list since he started this contrib? -// -//---------------------------------------------------------------------- - -//---------------------------------------------------------------------- -double SoftDrop::symmetry_cut_fn(const PseudoJet & p1, - const PseudoJet & p2, - void *optional_R0sqr_ptr) const{ - double R0sqr = (optional_R0sqr_ptr == 0) ?
_R0sqr : *((double*) optional_R0sqr_ptr); - return _symmetry_cut * pow(squared_geometric_distance(p1,p2)/R0sqr, 0.5*_beta); -} - -//---------------------------------------------------------------------- -string SoftDrop::symmetry_cut_description() const { - ostringstream ostr; - ostr << _symmetry_cut << " (theta/" << sqrt(_R0sqr) << ")^" << _beta << " [SoftDrop]"; - return ostr.str(); -} - - } } // namespace contrib - -//FASTJET_END_NAMESPACE diff --git a/src/Tools/fjcontrib/RecursiveTools/TODO b/src/Tools/fjcontrib/RecursiveTools/TODO deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/TODO +++ /dev/null @@ -1,26 +0,0 @@ -For v2.0 --------- - -- CHECK: lines 100-102 of RecursiveSymmetryCutBase.hh: this was - initially setting the structure members to 0. I think it's highly - preferable to set them to their default of -1, signalling the - absence of substructure. [0 dR might happen with a perfectly - collinear splitting] - -- CHECK: are we happy with how the structure info is stored and the - recursive_soft_drop_prongs(...) function in RecursiveSoftDrop.hh? - -- QUESTION: do we move set_min_deltaR_squared in the base class? - -- Add option to define z wrt the original jet? - -For future releases? --------------------- -- JDT: In Recluster, change "single" to an enum for user clarity? - [do we have any other option than "single" or "subjets"] -- JDT: Should we have a FASTJET_CONTRIB_BEGIN_NAMESPACE? It would - help those of us who use XCode to edit, which is aggressive about - auto-indenting. [Consider this for the full contrib??] -- More generic kinematic cuts? -- KZ: common base class for IteratedSoftDrop and RecursiveSoftDrop?
-- KZ: make IteratedSoftDrop use Recluster \ No newline at end of file diff --git a/src/Tools/fjcontrib/RecursiveTools/VERSION b/src/Tools/fjcontrib/RecursiveTools/VERSION deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/VERSION +++ /dev/null @@ -1,1 +0,0 @@ -2.0.0-beta2 diff --git a/src/Tools/fjcontrib/RecursiveTools/example_advanced_usage.cc b/src/Tools/fjcontrib/RecursiveTools/example_advanced_usage.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/example_advanced_usage.cc +++ /dev/null @@ -1,362 +0,0 @@ -//---------------------------------------------------------------------- -/// \file example_advanced_usage.cc -/// -/// This example program is meant to illustrate some advanced features -/// of how the fastjet::contrib::SoftDrop class is used. -/// -/// Run this example with -/// -/// \verbatim -/// ./example_advanced_usage < ../data/single-event.dat -/// \endverbatim -//---------------------------------------------------------------------- - -// $Id: example_advanced_usage.cc 1016 2017-04-20 16:51:52Z knzhou $ -// -// Copyright (c) 2014, Gavin P. Salam -// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see .
-//---------------------------------------------------------------------- - -#include -#include - -#include -#include -#include -#include "fastjet/ClusterSequence.hh" -#include "SoftDrop.hh" // In external code, this should be fastjet/contrib/SoftDrop.hh - -#define MY_INF std::numeric_limits::infinity() - -using namespace std; -using namespace fastjet; -using namespace fastjet::contrib; - -// forward declaration to make things clearer -void read_event(vector &event); -ostream & operator<<(ostream &, const PseudoJet &); - -// Simple class to store SoftDrop objects along with display information -class SoftDropStruct { - -private: - string _name; - double _beta; - double _z_cut; - SoftDrop::SymmetryMeasure _symmetry_measure; - double _R0; - double _mu; //mass drop - SoftDrop::RecursionChoice _recursion_choice; - JetAlgorithm _recluster_algorithm; - bool _tagging_mode; - SoftDrop _soft_drop; - Recluster _reclusterer; - -public: - SoftDropStruct(string name, - double beta, - double z_cut, - SoftDrop::SymmetryMeasure symmetry_measure, - double R0, - double mu, - SoftDrop::RecursionChoice recursion_choice, - JetAlgorithm recluster_algorithm, - bool tagging_mode = false) - : _name(name), - _beta(beta), - _z_cut(z_cut), - _symmetry_measure(symmetry_measure), - _R0(R0), - _mu(mu), - _recursion_choice(recursion_choice), - _recluster_algorithm(recluster_algorithm), - _tagging_mode(tagging_mode), - _soft_drop(beta,z_cut,symmetry_measure,R0,mu,recursion_choice), - _reclusterer(recluster_algorithm,JetDefinition::max_allowable_R) - { - // no need to recluster if already CA algorithm - if (recluster_algorithm != cambridge_algorithm) { - _soft_drop.set_reclustering(true,&_reclusterer); - } - - // if beta is negative, typically want to use in tagging mode - // MMDT behavior is also tagging mode - // set this option here - if (tagging_mode) { - _soft_drop.set_tagging_mode(); - } - - //turn verbose structure on (off by default) - _soft_drop.set_verbose_structure(); - } - - const
SoftDrop& soft_drop() const { return _soft_drop;} - string name() const { return _name;} - double beta() const { return _beta;} - double z_cut() const { return _z_cut;} - string symmetry_measure_name() const { - switch (_symmetry_measure) { - case SoftDrop::scalar_z: - return "scalar_z"; - case SoftDrop::vector_z: - return "vector_z"; - case SoftDrop::y: - return "y"; - default: - return "unknown"; - } - } - double R0() const { return _R0;} - double mu() const { return _mu;} - - string recursion_choice_name() const { - switch (_recursion_choice) { - case SoftDrop::larger_pt: - return "larger_pt"; - case SoftDrop::larger_mt: - return "larger_mt"; - case SoftDrop::larger_m: - return "larger_m"; - default: - return "unknown"; - } - } - - string reclustering_name() const { - switch (_recluster_algorithm) { - case kt_algorithm: - return "KT"; - case cambridge_algorithm: - return "CA"; - default: - return "unknown"; - } - } - - string tag_or_groom() const { - return (_tagging_mode ? "tag" : "groom"); - } - -}; - - -//---------------------------------------------------------------------- -int main(){ - - //---------------------------------------------------------- - // read in input particles - vector event; - read_event(event); - cout << "# read an event with " << event.size() << " particles" << endl; - - // first get some anti-kt jets - double R = 1.0, ptmin = 20.0; - JetDefinition jet_def(antikt_algorithm, R); - ClusterSequence cs(event, jet_def); - vector jets = sorted_by_pt(cs.inclusive_jets(ptmin)); - - - //---------------------------------------------------------- - // Make vector of structs to store a bunch of different SoftDrop options - // This is a vector of pointers because SoftDropStruct doesn't - // have a valid copy constructor - vector sd_vec; - - // make some standard SoftDrop - sd_vec.push_back(new SoftDropStruct("beta=2.0 zcut=.1", 2.0,0.10,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,cambridge_algorithm)); - sd_vec.push_back(new 
SoftDropStruct("beta=1.0 zcut=.1", 1.0,0.10,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,cambridge_algorithm)); - sd_vec.push_back(new SoftDropStruct("beta=0.5 zcut=.1", 0.5,0.10,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,cambridge_algorithm)); - sd_vec.push_back(new SoftDropStruct("beta=2.0 zcut=.2", 2.0,0.20,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,cambridge_algorithm)); - sd_vec.push_back(new SoftDropStruct("beta=1.0 zcut=.2", 1.0,0.20,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,cambridge_algorithm)); - sd_vec.push_back(new SoftDropStruct("beta=0.5 zcut=.2", 0.5,0.20,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,cambridge_algorithm)); - - // make a mMDT-like tagger using SoftDrop - sd_vec.push_back(new SoftDropStruct("MMDT-like zcut=.1", 0.0,0.10,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,cambridge_algorithm,true)); - sd_vec.push_back(new SoftDropStruct("MMDT-like zcut=.2", 0.0,0.20,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,cambridge_algorithm,true)); - sd_vec.push_back(new SoftDropStruct("MMDT-like zcut=.3", 0.0,0.30,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,cambridge_algorithm,true)); - sd_vec.push_back(new SoftDropStruct("MMDT-like zcut=.4", 0.0,0.40,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,cambridge_algorithm,true)); - - // make some tagging SoftDrop (negative beta) - sd_vec.push_back(new SoftDropStruct("beta=-2.0 zcut=.05", -2.0,0.05,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,cambridge_algorithm,true)); - sd_vec.push_back(new SoftDropStruct("beta=-1.0 zcut=.05", -1.0,0.05,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,cambridge_algorithm,true)); - sd_vec.push_back(new SoftDropStruct("beta=-0.5 zcut=.05", -0.5,0.05,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,cambridge_algorithm,true)); - - // make a SoftDrop with R0 parameter - sd_vec.push_back(new SoftDropStruct("b=.5 z=.3 R0=1.0", 0.5,0.30,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,cambridge_algorithm)); - 
sd_vec.push_back(new SoftDropStruct("b=.5 z=.3 R0=0.5", 0.5,0.30,SoftDrop::scalar_z,0.5,MY_INF,SoftDrop::larger_pt,cambridge_algorithm)); - sd_vec.push_back(new SoftDropStruct("b=.5 z=.3 R0=0.2", 0.5,0.30,SoftDrop::scalar_z,0.2,MY_INF,SoftDrop::larger_pt,cambridge_algorithm)); - - // make a SoftDrop with different symmetry measure - sd_vec.push_back(new SoftDropStruct("b=2 z=.4 scalar_z", 2.0,0.4,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,cambridge_algorithm)); - sd_vec.push_back(new SoftDropStruct("b=2 z=.4 vector_z", 2.0,0.4,SoftDrop::vector_z,1.0,MY_INF,SoftDrop::larger_pt,cambridge_algorithm)); - sd_vec.push_back(new SoftDropStruct("b=2 z=.4 y", 2.0,0.4,SoftDrop::y, 1.0,MY_INF,SoftDrop::larger_pt,cambridge_algorithm)); - - // make a SoftDrop with different recursion choice - sd_vec.push_back(new SoftDropStruct("b=3 z=.2 larger_pt", 3.0,0.20,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,cambridge_algorithm)); - sd_vec.push_back(new SoftDropStruct("b=3 z=.2 larger_mt", 3.0,0.20,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_mt,cambridge_algorithm)); - sd_vec.push_back(new SoftDropStruct("b=3 z=.2 larger_m", 3.0,0.20,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_m,cambridge_algorithm)); - - // make a SoftDrop with mass drop - sd_vec.push_back(new SoftDropStruct("b=2 z=.1 mu=1.0", 2.0,0.10,SoftDrop::scalar_z,1.0,1.0,SoftDrop::larger_pt,cambridge_algorithm)); - sd_vec.push_back(new SoftDropStruct("b=2 z=.1 mu=0.8", 2.0,0.10,SoftDrop::scalar_z,1.0,0.8,SoftDrop::larger_pt,cambridge_algorithm)); - sd_vec.push_back(new SoftDropStruct("b=2 z=.1 mu=0.5", 2.0,0.10,SoftDrop::scalar_z,1.0,0.5,SoftDrop::larger_pt,cambridge_algorithm)); - - // make a SoftDrop with a different clustering scheme (kT instead of default CA) - sd_vec.push_back(new SoftDropStruct("b=2.0 z=.2 kT", 2.0,0.20,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,kt_algorithm)); - sd_vec.push_back(new SoftDropStruct("b=1.0 z=.2 kT", 
1.0,0.20,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,kt_algorithm)); - sd_vec.push_back(new SoftDropStruct("b=0.5 z=.2 kT", 0.5,0.20,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,kt_algorithm)); - sd_vec.push_back(new SoftDropStruct("b=2.0 z=.4 kT", 2.0,0.40,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,kt_algorithm)); - sd_vec.push_back(new SoftDropStruct("b=1.0 z=.4 kT", 1.0,0.40,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,kt_algorithm)); - sd_vec.push_back(new SoftDropStruct("b=0.5 z=.4 kT", 0.5,0.40,SoftDrop::scalar_z,1.0,MY_INF,SoftDrop::larger_pt,kt_algorithm)); - - //---------------------------------------------------------- - // Output information about the various Soft Drop algorithms - - - // header lines - cout << "---------------------------------------------------------------------------------------------" << endl; - cout << "Soft Drops to be tested:" << endl; - cout << "---------------------------------------------------------------------------------------------" << endl; - cout << std::setw(18) << "name" - << std::setw(8) << "beta" - << std::setw(8) << "z_cut" - << std::setw(9) << "sym" - << std::setw(8) << "R0" - << std::setw(8) << "mu" - << std::setw(10)<< "recurse" - << std::setw(8) << "reclust" - << std::setw(8) << "mode" - << endl; - - // set precision for display - cout << setprecision(3) << fixed; - - // line for each SoftDrop - for (unsigned i_sd = 0; i_sd < sd_vec.size(); i_sd++) { - cout << std::setw(18) << sd_vec[i_sd]->name() - << std::setw(8) << sd_vec[i_sd]->beta() - << std::setw(8) << sd_vec[i_sd]->z_cut() - << std::setw(9) << sd_vec[i_sd]->symmetry_measure_name() - << std::setw(8) << sd_vec[i_sd]->R0() - << std::setw(8) << sd_vec[i_sd]->mu() - << std::setw(10)<< sd_vec[i_sd]->recursion_choice_name() - << std::setw(8) << sd_vec[i_sd]->reclustering_name() - << std::setw(8) << sd_vec[i_sd]->tag_or_groom() - << endl; - } - cout << 
"---------------------------------------------------------------------------------------------" << endl; - - - - for (unsigned ijet = 0; ijet < jets.size(); ijet++) { - - cout << "---------------------------------------------------------------------------------------------" << endl; - cout << "Analyzing Jet " << ijet + 1 << ":" << endl; - cout << "---------------------------------------------------------------------------------------------" << endl; - - cout << std::setw(18) << "name" - << std::setw(10) << "pt" - << std::setw(9) << "m" - << std::setw(8) << "y" - << std::setw(8) << "phi" - << std::setw(8) << "constit" - << std::setw(8) << "delta_R" - << std::setw(8) << "sym" - << std::setw(8) << "mu" - << std::setw(8) << "mxdropz" // max_dropped_z from verbose logging - - << endl; - - PseudoJet original_jet = jets[ijet]; - - // set precision for display - cout << setprecision(4) << fixed; - - cout << std::setw(18) << "Original Jet" - << std::setw(10) << original_jet.pt() - << std::setw(9) << original_jet.m() - << std::setw(8) << original_jet.rap() - << std::setw(8) << original_jet.phi() - << std::setw(8) << original_jet.constituents().size() - << endl; - - for (unsigned i_sd = 0; i_sd < sd_vec.size(); i_sd++) { - // the current Soft Drop - const SoftDrop & sd = (*sd_vec[i_sd]).soft_drop(); - PseudoJet sd_jet = sd(jets[ijet]); - - cout << std::setw(18) << sd_vec[i_sd]->name(); - if (sd_jet != 0.0) { - cout << std::setw(10) << sd_jet.pt() - << std::setw(9) << sd_jet.m() - << std::setw(8) << sd_jet.rap() - << std::setw(8) << sd_jet.phi() - << std::setw(8) << sd_jet.constituents().size() - << std::setw(8) << sd_jet.structure_of<SoftDrop>().delta_R() - << std::setw(8) << sd_jet.structure_of<SoftDrop>().symmetry() - << std::setw(8) << sd_jet.structure_of<SoftDrop>().mu() - // the next line is part of the verbose information that is off by default - << std::setw(8) << sd_jet.structure_of<SoftDrop>().max_dropped_symmetry(); - } else { - cout << " ---- untagged jet ----"; - } - cout << endl; - - } - } - - // clean
up - for (unsigned i_sd = 0; i_sd < sd_vec.size(); i_sd++) { - delete sd_vec[i_sd]; - } - - return 0; -} - -//---------------------------------------------------------------------- -/// read in input particles -void read_event(vector<PseudoJet> &event){ - string line; - while (getline(cin, line)) { - istringstream linestream(line); - // take substrings to avoid problems when there are extra "pollution" - // characters (e.g. line-feed). - if (line.substr(0,4) == "#END") {return;} - if (line.substr(0,1) == "#") {continue;} - double px,py,pz,E; - linestream >> px >> py >> pz >> E; - PseudoJet particle(px,py,pz,E); - - // push the particle onto the back of the event vector - event.push_back(particle); - } -} - -//---------------------------------------------------------------------- -/// overloaded jet info output -ostream & operator<<(ostream & ostr, const PseudoJet & jet) { - if (jet == 0) { - ostr << " 0 "; - } else { - ostr << " pt = " << jet.pt() - << " m = " << jet.m() - << " y = " << jet.rap() - << " phi = " << jet.phi(); - } - return ostr; -} diff --git a/src/Tools/fjcontrib/RecursiveTools/example_advanced_usage.ref b/src/Tools/fjcontrib/RecursiveTools/example_advanced_usage.ref deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/example_advanced_usage.ref +++ /dev/null @@ -1,144 +0,0 @@ ---------------------------------------------------------------------------------------------- -Soft Drops to be tested: ---------------------------------------------------------------------------------------------- - name beta z_cut sym R0 mu recurse reclust mode - beta=2.0 zcut=.1 2.000 0.100 scalar_z 1.000 inf larger_pt CA groom - beta=1.0 zcut=.1 1.000 0.100 scalar_z 1.000 inf larger_pt CA groom - beta=0.5 zcut=.1 0.500 0.100 scalar_z 1.000 inf larger_pt CA groom - beta=2.0 zcut=.2 2.000 0.200 scalar_z 1.000 inf larger_pt CA groom - beta=1.0 zcut=.2 1.000 0.200 scalar_z 1.000 inf larger_pt CA groom - beta=0.5 zcut=.2 0.500 0.200 scalar_z 1.000 inf larger_pt CA groom - MMDT-like
zcut=.1 0.000 0.100 scalar_z 1.000 inf larger_pt CA tag - MMDT-like zcut=.2 0.000 0.200 scalar_z 1.000 inf larger_pt CA tag - MMDT-like zcut=.3 0.000 0.300 scalar_z 1.000 inf larger_pt CA tag - MMDT-like zcut=.4 0.000 0.400 scalar_z 1.000 inf larger_pt CA tag -beta=-2.0 zcut=.05 -2.000 0.050 scalar_z 1.000 inf larger_pt CA tag -beta=-1.0 zcut=.05 -1.000 0.050 scalar_z 1.000 inf larger_pt CA tag -beta=-0.5 zcut=.05 -0.500 0.050 scalar_z 1.000 inf larger_pt CA tag - b=.5 z=.3 R0=1.0 0.500 0.300 scalar_z 1.000 inf larger_pt CA groom - b=.5 z=.3 R0=0.5 0.500 0.300 scalar_z 0.500 inf larger_pt CA groom - b=.5 z=.3 R0=0.2 0.500 0.300 scalar_z 0.200 inf larger_pt CA groom - b=2 z=.4 scalar_z 2.000 0.400 scalar_z 1.000 inf larger_pt CA groom - b=2 z=.4 vector_z 2.000 0.400 vector_z 1.000 inf larger_pt CA groom - b=2 z=.4 y 2.000 0.400 y 1.000 inf larger_pt CA groom -b=3 z=.2 larger_pt 3.000 0.200 scalar_z 1.000 inf larger_pt CA groom -b=3 z=.2 larger_mt 3.000 0.200 scalar_z 1.000 inf larger_mt CA groom - b=3 z=.2 larger_m 3.000 0.200 scalar_z 1.000 inf larger_m CA groom - b=2 z=.1 mu=1.0 2.000 0.100 scalar_z 1.000 1.000 larger_pt CA groom - b=2 z=.1 mu=0.8 2.000 0.100 scalar_z 1.000 0.800 larger_pt CA groom - b=2 z=.1 mu=0.5 2.000 0.100 scalar_z 1.000 0.500 larger_pt CA groom - b=2.0 z=.2 kT 2.000 0.200 scalar_z 1.000 inf larger_pt KT groom - b=1.0 z=.2 kT 1.000 0.200 scalar_z 1.000 inf larger_pt KT groom - b=0.5 z=.2 kT 0.500 0.200 scalar_z 1.000 inf larger_pt KT groom - b=2.0 z=.4 kT 2.000 0.400 scalar_z 1.000 inf larger_pt KT groom - b=1.0 z=.4 kT 1.000 0.400 scalar_z 1.000 inf larger_pt KT groom - b=0.5 z=.4 kT 0.500 0.400 scalar_z 1.000 inf larger_pt KT groom ---------------------------------------------------------------------------------------------- ---------------------------------------------------------------------------------------------- -Analyzing Jet 1: ---------------------------------------------------------------------------------------------- - name pt m 
y phi constit delta_R sym mu mxdropz - Original Jet 983.3873 39.9912 -0.8673 2.9051 35 - beta=2.0 zcut=.1 980.4398 27.1382 -0.8675 2.9053 25 0.2092 0.0047 0.8353 0.0014 - beta=1.0 zcut=.1 971.5229 20.8426 -0.8686 2.9051 22 0.0954 0.0294 0.5476 0.0047 - beta=0.5 zcut=.1 933.8859 9.9073 -0.8697 2.9075 13 0.0217 0.0174 0.8786 0.0294 - beta=2.0 zcut=.2 975.8387 22.6675 -0.8684 2.9057 23 0.1316 0.0045 0.9195 0.0047 - beta=1.0 zcut=.2 971.5229 20.8426 -0.8686 2.9051 22 0.0954 0.0294 0.5476 0.0047 - beta=0.5 zcut=.2 917.6736 8.7048 -0.8696 2.9079 12 0.0200 0.1543 0.5576 0.0294 - MMDT-like zcut=.1 917.6736 8.7048 -0.8696 2.9079 12 0.0200 0.1543 0.5576 0.0294 - MMDT-like zcut=.2 669.6354 1.9839 -0.8677 2.9075 4 0.0044 0.2031 0.7051 0.1543 - MMDT-like zcut=.3 446.5813 1.0182 -0.8678 2.9069 2 0.0030 0.4251 0.4848 0.2031 - MMDT-like zcut=.4 446.5813 1.0182 -0.8678 2.9069 2 0.0030 0.4251 0.4848 0.2031 -beta=-2.0 zcut=.05 ---- untagged jet ---- -beta=-1.0 zcut=.05 ---- untagged jet ---- -beta=-0.5 zcut=.05 ---- untagged jet ---- - b=.5 z=.3 R0=1.0 917.6736 8.7048 -0.8696 2.9079 12 0.0200 0.1543 0.5576 0.0294 - b=.5 z=.3 R0=0.5 917.6736 8.7048 -0.8696 2.9079 12 0.0200 0.1543 0.5576 0.0294 - b=.5 z=.3 R0=0.2 917.6736 8.7048 -0.8696 2.9079 12 0.0200 0.1543 0.5576 0.0294 - b=2 z=.4 scalar_z 971.5229 20.8426 -0.8686 2.9051 22 0.0954 0.0294 0.5476 0.0047 - b=2 z=.4 vector_z 971.5229 20.8426 -0.8686 2.9051 22 0.0954 0.0294 0.5476 0.0047 - b=2 z=.4 y 971.5229 20.8426 -0.8686 2.9051 22 0.0954 0.0171 0.5476 0.0013 -b=3 z=.2 larger_pt 980.4398 27.1382 -0.8675 2.9053 25 0.2092 0.0047 0.8353 0.0014 -b=3 z=.2 larger_mt 980.4398 27.1382 -0.8675 2.9053 25 0.2092 0.0047 0.8353 0.0014 - b=3 z=.2 larger_m 980.4398 27.1382 -0.8675 2.9053 25 0.2092 0.0047 0.8353 0.0014 - b=2 z=.1 mu=1.0 980.4398 27.1382 -0.8675 2.9053 25 0.2092 0.0047 0.8353 0.0014 - b=2 z=.1 mu=0.8 971.5229 20.8426 -0.8686 2.9051 22 0.0954 0.0294 0.5476 0.0047 - b=2 z=.1 mu=0.5 446.5813 1.0182 -0.8678 2.9069 2 0.0030 0.4251 0.4848 
0.2031 - b=2.0 z=.2 kT 983.3873 39.9912 -0.8673 2.9051 35 0.1119 0.0377 0.5030 0.0000 - b=1.0 z=.2 kT 983.3873 39.9912 -0.8673 2.9051 35 0.1119 0.0377 0.5030 0.0000 - b=0.5 z=.2 kT 946.4934 20.1170 -0.8699 2.9084 21 0.0233 0.1579 0.5951 0.0377 - b=2.0 z=.4 kT 983.3873 39.9912 -0.8673 2.9051 35 0.1119 0.0377 0.5030 0.0000 - b=1.0 z=.4 kT 946.4934 20.1170 -0.8699 2.9084 21 0.0233 0.1579 0.5951 0.0377 - b=0.5 z=.4 kT 946.4934 20.1170 -0.8699 2.9084 21 0.0233 0.1579 0.5951 0.0377 ---------------------------------------------------------------------------------------------- -Analyzing Jet 2: ---------------------------------------------------------------------------------------------- - name pt m y phi constit delta_R sym mu mxdropz - Original Jet 908.0979 87.7124 0.2195 6.0349 47 - beta=2.0 zcut=.1 887.9353 11.3171 0.2232 6.0303 15 0.1116 0.0044 0.7910 0.0104 - beta=1.0 zcut=.1 884.0262 8.9520 0.2228 6.0306 14 0.0570 0.0067 0.8862 0.0104 - beta=0.5 zcut=.1 872.2856 7.0426 0.2230 6.0308 12 0.0346 0.0232 0.7445 0.0104 - beta=2.0 zcut=.2 887.9353 11.3171 0.2232 6.0303 15 0.1116 0.0044 0.7910 0.0104 - beta=1.0 zcut=.2 872.2856 7.0426 0.2230 6.0308 12 0.0346 0.0232 0.7445 0.0104 - beta=0.5 zcut=.2 852.0552 5.2435 0.2230 6.0300 10 0.0154 0.0608 0.7701 0.0232 - MMDT-like zcut=.1 800.2694 4.0381 0.2230 6.0310 9 0.0101 0.2350 0.2291 0.0608 - MMDT-like zcut=.2 800.2694 4.0381 0.2230 6.0310 9 0.0101 0.2350 0.2291 0.0608 - MMDT-like zcut=.3 ---- untagged jet ---- - MMDT-like zcut=.4 ---- untagged jet ---- -beta=-2.0 zcut=.05 ---- untagged jet ---- -beta=-1.0 zcut=.05 ---- untagged jet ---- -beta=-0.5 zcut=.05 ---- untagged jet ---- - b=.5 z=.3 R0=1.0 852.0552 5.2435 0.2230 6.0300 10 0.0154 0.0608 0.7701 0.0232 - b=.5 z=.3 R0=0.5 852.0552 5.2435 0.2230 6.0300 10 0.0154 0.0608 0.7701 0.0232 - b=.5 z=.3 R0=0.2 800.2694 4.0381 0.2230 6.0310 9 0.0101 0.2350 0.2291 0.0608 - b=2 z=.4 scalar_z 884.0262 8.9520 0.2228 6.0306 14 0.0570 0.0067 0.8862 0.0104 - b=2 z=.4 vector_z 884.0262 8.9520 
0.2228 6.0306 14 0.0570 0.0067 0.8862 0.0104 - b=2 z=.4 y 884.0262 8.9520 0.2228 6.0306 14 0.0570 0.0014 0.8862 0.0050 -b=3 z=.2 larger_pt 887.9353 11.3171 0.2232 6.0303 15 0.1116 0.0044 0.7910 0.0104 -b=3 z=.2 larger_mt 887.9353 11.3171 0.2232 6.0303 15 0.1116 0.0044 0.7910 0.0104 - b=3 z=.2 larger_m 887.9353 11.3171 0.2232 6.0303 15 0.1116 0.0044 0.7910 0.0104 - b=2 z=.1 mu=1.0 887.9353 11.3171 0.2232 6.0303 15 0.1116 0.0044 0.7910 0.0104 - b=2 z=.1 mu=0.8 887.9353 11.3171 0.2232 6.0303 15 0.1116 0.0044 0.7910 0.0104 - b=2 z=.1 mu=0.5 800.2694 4.0381 0.2230 6.0310 9 0.0101 0.2350 0.2291 0.0608 - b=2.0 z=.2 kT 900.1551 59.2173 0.2190 6.0292 31 0.0176 0.2314 0.8689 0.0104 - b=1.0 z=.2 kT 900.1551 59.2173 0.2190 6.0292 31 0.0176 0.2314 0.8689 0.0104 - b=0.5 z=.2 kT 900.1551 59.2173 0.2190 6.0292 31 0.0176 0.2314 0.8689 0.0104 - b=2.0 z=.4 kT 900.1551 59.2173 0.2190 6.0292 31 0.0176 0.2314 0.8689 0.0104 - b=1.0 z=.4 kT 900.1551 59.2173 0.2190 6.0292 31 0.0176 0.2314 0.8689 0.0104 - b=0.5 z=.4 kT 900.1551 59.2173 0.2190 6.0292 31 0.0176 0.2314 0.8689 0.0104 ---------------------------------------------------------------------------------------------- -Analyzing Jet 3: ---------------------------------------------------------------------------------------------- - name pt m y phi constit delta_R sym mu mxdropz - Original Jet 72.9429 23.4022 -1.1908 6.1199 43 - beta=2.0 zcut=.1 71.5847 20.0943 -1.1761 6.1254 37 0.6803 0.0730 0.5627 0.0111 - beta=1.0 zcut=.1 71.5847 20.0943 -1.1761 6.1254 37 0.6803 0.0730 0.5627 0.0111 - beta=0.5 zcut=.1 67.3730 11.3061 -1.1810 6.0789 29 0.3790 0.0855 0.6966 0.0730 - beta=2.0 zcut=.2 67.3730 11.3061 -1.1810 6.0789 29 0.3790 0.0855 0.6966 0.0730 - beta=1.0 zcut=.2 67.3730 11.3061 -1.1810 6.0789 29 0.3790 0.0855 0.6966 0.0730 - beta=0.5 zcut=.2 57.6019 5.9671 -1.1998 6.1138 16 0.1161 0.2463 0.7416 0.0855 - MMDT-like zcut=.1 57.6019 5.9671 -1.1998 6.1138 16 0.1161 0.2463 0.7416 0.0855 - MMDT-like zcut=.2 57.6019 5.9671 -1.1998 6.1138 16 
0.1161 0.2463 0.7416 0.0855 - MMDT-like zcut=.3 43.4356 4.4251 -1.2213 6.0950 13 0.1136 0.4598 0.5944 0.2463 - MMDT-like zcut=.4 43.4356 4.4251 -1.2213 6.0950 13 0.1136 0.4598 0.5944 0.2463 -beta=-2.0 zcut=.05 ---- untagged jet ---- -beta=-1.0 zcut=.05 43.4356 4.4251 -1.2213 6.0950 13 0.1136 0.4598 0.5944 0.2463 -beta=-0.5 zcut=.05 71.5847 20.0943 -1.1761 6.1254 37 0.6803 0.0730 0.5627 0.0111 - b=.5 z=.3 R0=1.0 57.6019 5.9671 -1.1998 6.1138 16 0.1161 0.2463 0.7416 0.0855 - b=.5 z=.3 R0=0.5 57.6019 5.9671 -1.1998 6.1138 16 0.1161 0.2463 0.7416 0.0855 - b=.5 z=.3 R0=0.2 57.6019 5.9671 -1.1998 6.1138 16 0.1161 0.2463 0.7416 0.0855 - b=2 z=.4 scalar_z 67.3730 11.3061 -1.1810 6.0789 29 0.3790 0.0855 0.6966 0.0730 - b=2 z=.4 vector_z 67.3730 11.3061 -1.1810 6.0789 29 0.3790 0.0859 0.6966 0.0741 - b=2 z=.4 y 57.6019 5.9671 -1.1998 6.1138 16 0.1161 0.0763 0.7416 0.0376 -b=3 z=.2 larger_pt 71.5847 20.0943 -1.1761 6.1254 37 0.6803 0.0730 0.5627 0.0111 -b=3 z=.2 larger_mt 71.5847 20.0943 -1.1761 6.1254 37 0.6803 0.0730 0.5627 0.0111 - b=3 z=.2 larger_m 71.5847 20.0943 -1.1761 6.1254 37 0.6803 0.0730 0.5627 0.0111 - b=2 z=.1 mu=1.0 71.5847 20.0943 -1.1761 6.1254 37 0.6803 0.0730 0.5627 0.0111 - b=2 z=.1 mu=0.8 71.5847 20.0943 -1.1761 6.1254 37 0.6803 0.0730 0.5627 0.0111 - b=2 z=.1 mu=0.5 10.2806 0.1396 -1.1761 6.0554 1 -1.0000 -1.0000 -1.0000 0.0000 - b=2.0 z=.2 kT 72.9429 23.4022 -1.1908 6.1199 43 0.2517 0.3453 0.4508 0.0000 - b=1.0 z=.2 kT 72.9429 23.4022 -1.1908 6.1199 43 0.2517 0.3453 0.4508 0.0000 - b=0.5 z=.2 kT 72.9429 23.4022 -1.1908 6.1199 43 0.2517 0.3453 0.4508 0.0000 - b=2.0 z=.4 kT 72.9429 23.4022 -1.1908 6.1199 43 0.2517 0.3453 0.4508 0.0000 - b=1.0 z=.4 kT 72.9429 23.4022 -1.1908 6.1199 43 0.2517 0.3453 0.4508 0.0000 - b=0.5 z=.4 kT 72.9429 23.4022 -1.1908 6.1199 43 0.2517 0.3453 0.4508 0.0000 diff --git a/src/Tools/fjcontrib/RecursiveTools/example_bottomup_softdrop.cc b/src/Tools/fjcontrib/RecursiveTools/example_bottomup_softdrop.cc deleted file mode 100644 
--- a/src/Tools/fjcontrib/RecursiveTools/example_bottomup_softdrop.cc +++ /dev/null @@ -1,123 +0,0 @@ -//---------------------------------------------------------------------- -/// \file example_bottomup_softdrop.cc -/// -/// This example program is meant to illustrate how the -/// fastjet::contrib::BottomUpSoftDrop class is used. -/// -/// Run this example with -/// -/// \verbatim -/// ./example_bottomup_softdrop < ../data/single-event.dat -/// \endverbatim -//---------------------------------------------------------------------- - -// $Id: example_bottomup_softdrop.cc 1075 2017-09-18 15:27:09Z gsoyez $ -// -// Copyright (c) 2017-, Gavin P. Salam, Gregory Soyez, Jesse Thaler, -// Kevin Zhou, Frederic Dreyer -// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see <http://www.gnu.org/licenses/>.
-//---------------------------------------------------------------------- - -#include <iostream> -#include <sstream> - -#include <vector> -#include <string> -#include <cassert> -#include "fastjet/ClusterSequence.hh" -#include "BottomUpSoftDrop.hh" // In external code, this should be fastjet/contrib/BottomUpSoftDrop.hh - -using namespace std; -using namespace fastjet; - -// forward declaration to make things clearer -void read_event(vector<PseudoJet> &event); -void print_prongs(const PseudoJet &jet, const string &pprefix); -ostream & operator<<(ostream &, const PseudoJet &); - -//---------------------------------------------------------------------- -int main(){ - - //---------------------------------------------------------- - // read in input particles - vector<PseudoJet> event; - read_event(event); - cout << "# read an event with " << event.size() << " particles" << endl; - - // first get some anti-kt jets - double R = 1.0, ptmin = 100.0; - JetDefinition jet_def(antikt_algorithm, R); - ClusterSequence cs(event, jet_def); - vector<PseudoJet> jets = sorted_by_pt(cs.inclusive_jets(ptmin)); - - //---------------------------------------------------------------------- - // give the soft drop groomer a short name - // Use a symmetry cut z > z_cut R^beta - // By default, there is no mass-drop requirement - double z_cut = 0.2; - double beta = 1.0; - contrib::BottomUpSoftDrop busd(beta, z_cut); - - //---------------------------------------------------------------------- - cout << "BottomUpSoftDrop groomer is: " << busd.description() << endl; - - for (unsigned ijet = 0; ijet < jets.size(); ijet++) { - // Run SoftDrop and examine the output - PseudoJet busd_jet = busd(jets[ijet]); - cout << endl; - cout << "original jet: " << jets[ijet] << endl; - cout << "BottomUpSoftDropped jet: " << busd_jet << endl; - - assert(busd_jet != 0); // because bottom-up soft drop is a groomer (not a tagger), it should always return a soft-dropped jet - - } - - return 0; -} - -//---------------------------------------------------------------------- -/// read in input particles -void
read_event(vector<PseudoJet> &event){ - string line; - while (getline(cin, line)) { - istringstream linestream(line); - // take substrings to avoid problems when there are extra "pollution" - // characters (e.g. line-feed). - if (line.substr(0,4) == "#END") {return;} - if (line.substr(0,1) == "#") {continue;} - double px,py,pz,E; - linestream >> px >> py >> pz >> E; - PseudoJet particle(px,py,pz,E); - - // push the particle onto the back of the event vector - event.push_back(particle); - } -} - -//---------------------------------------------------------------------- -/// overloaded jet info output -ostream & operator<<(ostream & ostr, const PseudoJet & jet) { - if (jet == 0) { - ostr << " 0 "; - } else { - ostr << " pt = " << jet.pt() - << " m = " << jet.m() - << " y = " << jet.rap() - << " phi = " << jet.phi(); - } - return ostr; -} diff --git a/src/Tools/fjcontrib/RecursiveTools/example_bottomup_softdrop.ref b/src/Tools/fjcontrib/RecursiveTools/example_bottomup_softdrop.ref deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/example_bottomup_softdrop.ref +++ /dev/null @@ -1,21 +0,0 @@ -# read an event with 354 particles -#-------------------------------------------------------------------------- -# FastJet release 3.3.1-devel -# M. Cacciari, G.P. Salam and G. Soyez -# A software package for jet finding and analysis at colliders -# http://fastjet.fr -# -# Please cite EPJC72(2012)1896 [arXiv:1111.6097] if you use this package -# for scientific work and optionally PLB641(2006)57 [hep-ph/0512210]. -# -# FastJet is provided without warranty under the terms of the GNU GPLv2. -# It uses T. Chan's closest pair algorithm, S. Fortune's Voronoi code -# and 3rd party plugin jet algorithms. See COPYING file for details.
-#-------------------------------------------------------------------------- -BottomUpSoftDrop groomer is: BottomUpSoftDrop with jet_definition = (Longitudinally invariant Cambridge/Aachen algorithm with R = 1000 and E scheme recombination), symmetry_cut = 0.2, beta = 1, R0 = 1 - -original jet: pt = 983.387 m = 39.9912 y = -0.867307 phi = 2.90511 -BottomUpSoftDropped jet: pt = 962.379 m = 19.9368 y = -0.868399 phi = 2.90497 - -original jet: pt = 908.098 m = 87.7124 y = 0.219482 phi = 6.03487 -BottomUpSoftDropped jet: pt = 872.286 m = 7.04265 y = 0.223045 phi = 6.03083 diff --git a/src/Tools/fjcontrib/RecursiveTools/example_isd.cc b/src/Tools/fjcontrib/RecursiveTools/example_isd.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/example_isd.cc +++ /dev/null @@ -1,170 +0,0 @@ -//---------------------------------------------------------------------- -/// \file example_isd.cc -/// -/// This example program is meant to illustrate how the -/// fastjet::contrib::IteratedSoftDrop class is used. -/// -/// Run this example with -/// -/// \verbatim -/// ./example_isd < ../data/single-event.dat -/// \endverbatim -//---------------------------------------------------------------------- - -// $Id: example_isd.cc 1115 2018-04-21 13:37:04Z jthaler $ -// -// Copyright (c) 2017, Jesse Thaler, Kevin Zhou -// based on arXiv:1704.06266 by Christopher Frye, Andrew J. Larkoski, -// Jesse Thaler, Kevin Zhou -// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see <http://www.gnu.org/licenses/>. -//---------------------------------------------------------------------- - -#include <iostream> -#include <sstream> - -#include <vector> -#include <string> -#include <limits> -#include "fastjet/ClusterSequence.hh" - -#include "IteratedSoftDrop.hh" // In external code, this should be fastjet/contrib/IteratedSoftDrop.hh - -using namespace std; -using namespace fastjet; - -// forward declaration to make things clearer -void read_event(vector<PseudoJet> &event); - -//---------------------------------------------------------------------- -int main(){ - - //---------------------------------------------------------- - // read in input particles - vector<PseudoJet> event; - read_event(event); - cout << "# read an event with " << event.size() << " particles" << endl; - - // first get some anti-kt jets - double R = 0.5; - JetDefinition jet_def(antikt_algorithm, R); - double ptmin = 200.0; - Selector pt_min_selector = SelectorPtMin(ptmin); - - ClusterSequence cs(event,jet_def); - vector<PseudoJet> jets = pt_min_selector(sorted_by_pt(cs.inclusive_jets())); - - // Determine optimal scale from 1704.06266 for beta = -1 - double expected_jet_pt = 1000; // in GeV - double NP_scale = 1; // approximately Lambda_QCD in GeV - double optimal_z_cut = (NP_scale / expected_jet_pt / R); - - // set up iterated soft drop objects - double z_cut = optimal_z_cut; - double beta = -1.0; - double theta_cut = 0.0; - - contrib::IteratedSoftDrop isd(beta, z_cut, theta_cut, R); - contrib::IteratedSoftDrop isd_ee(beta, z_cut, contrib::RecursiveSoftDrop::theta_E, - theta_cut, R, std::numeric_limits<double>::infinity(), - contrib::RecursiveSoftDrop::larger_E); - - cout << "---------------------------------------------------" << endl; - cout << "Iterated Soft Drop" << endl; - cout << "---------------------------------------------------" << endl; - - cout << endl; - cout << "Computing with:" << endl; - cout << " " <<
isd.description() << endl; - cout << " " << isd_ee.description() << endl; - cout << endl; - - for (unsigned ijet = 0; ijet < jets.size(); ijet++) { - - cout << "---------------------------------------------------" << endl; - cout << "Processing Jet " << ijet << endl; - cout << "---------------------------------------------------" << endl; - - - cout << endl; - cout << "Jet pT: " << jets[ijet].pt() << " GeV" << endl; - cout << endl; - - - // Run full iterated soft drop - // - // Instead of asking for an IteratedSoftDropInfo which contains - // all the information to calculate physics observables, one can - // use - // isd.all_zg_thetag(jet) for the list of symmetry factors and angles - // isd.multiplicity(jet) for the ISD multiplicity - // isd.angularity(jet,alpha) for the angularity obtained from the (zg, thetag) - // - contrib::IteratedSoftDropInfo syms = isd(jets[ijet]); - - cout << "Soft Drop Multiplicity (pt_R measure, beta=" << beta << ", z_cut=" << z_cut << "):" << endl; - cout << syms.multiplicity() << endl; - cout << endl; - - cout << "Symmetry Factors (pt_R measure, beta=" << beta << ", z_cut=" << z_cut << "):" << endl; - for (unsigned i = 0; i < syms.size(); i++){ - cout << syms[i].first << " "; - } - cout << endl; - cout << endl; - - cout << "Soft Drop Angularities (pt_R measure, beta=" << beta << ", z_cut=" << z_cut << "):" << endl; - cout << " alpha = 0, kappa = 0 : " << syms.angularity(0.0, 0.0) << endl; - cout << " alpha = 0, kappa = 2 : " << syms.angularity(0.0, 2.0) << endl; - cout << " alpha = 0.5, kappa = 1 : " << syms.angularity(0.5) << endl; - cout << " alpha = 1, kappa = 1 : " << syms.angularity(1.0) << endl; - cout << " alpha = 2, kappa = 1 : " << syms.angularity(2.0) << endl; - cout << endl; - - // Alternative version with e+e- measure - contrib::IteratedSoftDropInfo syms_ee = isd_ee(jets[ijet]); - cout << "Symmetry Factors (E_theta measure, beta=" << beta << ", z_cut=" << z_cut << "):" << endl; - for (unsigned i = 0; i < syms_ee.size(); 
i++){ - cout << syms_ee[i].first << " "; - } - cout << endl; - cout << endl; - - - } - - return 0; -} - -//---------------------------------------------------------------------- -/// read in input particles -void read_event(vector<PseudoJet> &event){ - string line; - while (getline(cin, line)) { - istringstream linestream(line); - // take substrings to avoid problems when there are extra "pollution" - // characters (e.g. line-feed). - if (line.substr(0,4) == "#END") {return;} - if (line.substr(0,1) == "#") {continue;} - double px,py,pz,E; - linestream >> px >> py >> pz >> E; - PseudoJet particle(px,py,pz,E); - - // push the particle onto the back of the event vector - event.push_back(particle); - } -} diff --git a/src/Tools/fjcontrib/RecursiveTools/example_isd.ref b/src/Tools/fjcontrib/RecursiveTools/example_isd.ref deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/example_isd.ref +++ /dev/null @@ -1,66 +0,0 @@ -# read an event with 354 particles -#-------------------------------------------------------------------------- -# FastJet release 3.2.0 -# M. Cacciari, G.P. Salam and G. Soyez -# A software package for jet finding and analysis at colliders -# http://fastjet.fr -# -# Please cite EPJC72(2012)1896 [arXiv:1111.6097] if you use this package -# for scientific work and optionally PLB641(2006)57 [hep-ph/0512210]. -# -# FastJet is provided without warranty under the terms of the GNU GPLv2. -# It uses T. Chan's closest pair algorithm, S. Fortune's Voronoi code -# and 3rd party plugin jet algorithms. See COPYING file for details.
-#-------------------------------------------------------------------------- ---------------------------------------------------- -Iterated Soft Drop ---------------------------------------------------- - -Computing with: - IteratedSoftDrop with beta =-1, symmetry_cut=0.002, R0=0.5 and no angular_cut - IteratedSoftDrop with beta =-1, symmetry_cut=0.002, R0=0.5 and no angular_cut - ---------------------------------------------------- -Processing Jet 0 ---------------------------------------------------- - -Jet pT: 982.342 GeV - -Soft Drop Multiplicity (pt_R measure, beta=-1, z_cut=0.002): -3 - -Symmetry Factors (pt_R measure, beta=-1, z_cut=0.002): -0.0294298 0.154333 0.42512 - -Soft Drop Angularities (pt_R measure, beta=-1, z_cut=0.002): - alpha = 0, kappa = 0 : 3 - alpha = 0, kappa = 2 : 0.205411 - alpha = 0.5, kappa = 1 : 0.0541032 - alpha = 1, kappa = 1 : 0.0071635 - alpha = 2, kappa = 1 : 0.000333819 - -Symmetry Factors (E_theta measure, beta=-1, z_cut=0.002): -0.0286322 0.156129 - ---------------------------------------------------- -Processing Jet 1 ---------------------------------------------------- - -Jet pT: 899.993 GeV - -Soft Drop Multiplicity (pt_R measure, beta=-1, z_cut=0.002): -4 - -Symmetry Factors (pt_R measure, beta=-1, z_cut=0.002): -0.00357435 0.0033004 0.00518061 0.235041 - -Soft Drop Angularities (pt_R measure, beta=-1, z_cut=0.002): - alpha = 0, kappa = 0 : 4 - alpha = 0, kappa = 2 : 0.0552947 - alpha = 0.5, kappa = 1 : 0.0307531 - alpha = 1, kappa = 1 : 0.00657501 - alpha = 2, kappa = 1 : 0.00151741 - -Symmetry Factors (E_theta measure, beta=-1, z_cut=0.002): -0.00367919 0.00338026 0.00506002 0.235347 - diff --git a/src/Tools/fjcontrib/RecursiveTools/example_mmdt.cc b/src/Tools/fjcontrib/RecursiveTools/example_mmdt.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/example_mmdt.cc +++ /dev/null @@ -1,131 +0,0 @@ -//---------------------------------------------------------------------- -/// \file example_mmdt.cc -/// -/// 
This example program is meant to illustrate how the -/// fastjet::contrib::ModifiedMassDropTagger class is used. -/// -/// Run this example with -/// -/// \verbatim -/// ./example_mmdt < ../data/single-event.dat -/// \endverbatim -/// -/// It also shows operation in conjunction with a Filter. -//---------------------------------------------------------------------- - -// $Id: example_mmdt.cc 686 2014-06-14 03:25:09Z jthaler $ -// -// Copyright (c) 2014, Gavin P. Salam -// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see <http://www.gnu.org/licenses/>.
-//---------------------------------------------------------------------- - -#include <iostream> -#include <sstream> - -#include <vector> -#include <string> -#include <algorithm> -#include "fastjet/ClusterSequence.hh" -#include "fastjet/tools/Filter.hh" -#include "ModifiedMassDropTagger.hh" // In external code, this should be fastjet/contrib/ModifiedMassDropTagger.hh - -using namespace std; -using namespace fastjet; - -// forward declaration to make things clearer -void read_event(vector<PseudoJet> &event); -ostream & operator<<(ostream &, const PseudoJet &); - -//---------------------------------------------------------------------- -int main(){ - - //---------------------------------------------------------- - // read in input particles - vector<PseudoJet> event; - read_event(event); - cout << "# read an event with " << event.size() << " particles" << endl; - - // first get some Cambridge/Aachen jets - double R = 1.0, ptmin = 20.0; - JetDefinition jet_def(cambridge_algorithm, R); - ClusterSequence cs(event, jet_def); - vector<PseudoJet> jets = sorted_by_pt(cs.inclusive_jets(ptmin)); - - // give the tagger a short name - typedef contrib::ModifiedMassDropTagger MMDT; - // use just a symmetry cut, with no mass-drop requirement - double z_cut = 0.10; - MMDT tagger(z_cut); - cout << "tagger is: " << tagger.description() << endl; - - for (unsigned ijet = 0; ijet < jets.size(); ijet++) { - // first run MMDT and examine the output - PseudoJet tagged_jet = tagger(jets[ijet]); - cout << endl; - cout << "original jet: " << jets[ijet] << endl; - cout << "tagged jet: " << tagged_jet << endl; - if (tagged_jet == 0) continue; // If symmetry condition not satisfied, jet is not tagged - cout << " delta_R between subjets: " << tagged_jet.structure_of<MMDT>().delta_R() << endl; - cout << " symmetry measure(z): " << tagged_jet.structure_of<MMDT>().symmetry() << endl; - cout << " mass drop(mu): " << tagged_jet.structure_of<MMDT>().mu() << endl; - - // then filter the jet (useful for studies at moderate pt) - // with a dynamic Rfilt choice (as in arXiv:0802.2470) - double Rfilt =
min(0.3, tagged_jet.structure_of<MMDT>().delta_R()*0.5);
-    int nfilt = 3;
-    Filter filter(Rfilt, SelectorNHardest(nfilt));
-    PseudoJet filtered_jet = filter(tagged_jet);
-    cout << "filtered jet: " << filtered_jet << endl;
-    cout << endl;
-  }
-
-  return 0;
-}
-
-//----------------------------------------------------------------------
-/// read in input particles
-void read_event(vector<PseudoJet> &event){
-  string line;
-  while (getline(cin, line)) {
-    istringstream linestream(line);
-    // take substrings to avoid problems when there are extra "pollution"
-    // characters (e.g. line-feed).
-    if (line.substr(0,4) == "#END") {return;}
-    if (line.substr(0,1) == "#") {continue;}
-    double px,py,pz,E;
-    linestream >> px >> py >> pz >> E;
-    PseudoJet particle(px,py,pz,E);
-
-    // push event onto back of full_event vector
-    event.push_back(particle);
-  }
-}
-
-//----------------------------------------------------------------------
-/// overloaded jet info output
-ostream & operator<<(ostream & ostr, const PseudoJet & jet) {
-  if (jet == 0) {
-    ostr << " 0 ";
-  } else {
-    ostr << " pt = " << jet.pt()
-         << " m = " << jet.m()
-         << " y = " << jet.rap()
-         << " phi = " << jet.phi();
-  }
-  return ostr;
-}
diff --git a/src/Tools/fjcontrib/RecursiveTools/example_mmdt.ref b/src/Tools/fjcontrib/RecursiveTools/example_mmdt.ref
deleted file mode 100644
--- a/src/Tools/fjcontrib/RecursiveTools/example_mmdt.ref
+++ /dev/null
@@ -1,39 +0,0 @@
-# read an event with 354 particles
-#--------------------------------------------------------------------------
-# FastJet release 3.1.0-devel
-# M. Cacciari, G.P. Salam and G. Soyez
-# A software package for jet finding and analysis at colliders
-# http://fastjet.fr
-#
-# Please cite EPJC72(2012)1896 [arXiv:1111.6097] if you use this package
-# for scientific work and optionally PLB641(2006)57 [hep-ph/0512210].
-#
-# FastJet is provided without warranty under the terms of the GNU GPLv2.
-# It uses T. Chan's closest pair algorithm, S.
Fortune's Voronoi code, -# CGAL and 3rd party plugin jet algorithms. See COPYING file for details. -#-------------------------------------------------------------------------- -tagger is: Recursive Tagger with a symmetry cut scalar_z > 0.1 [ModifiedMassDropTagger], no mass-drop requirement, recursion into the subjet with larger pt - -original jet: pt = 983.387 m = 39.9912 y = -0.867307 phi = 2.90511 -tagged jet: pt = 917.674 m = 8.70484 y = -0.869593 phi = 2.90788 - delta_R between subjets: 0.0200353 - symmetry measure(z): 0.154333 - mass drop(mu): 0.557579 -filtered jet: pt = 917.674 m = 8.70484 y = -0.869593 phi = 2.90788 - - -original jet: pt = 910.164 m = 122.615 y = 0.223738 phi = 6.04265 -tagged jet: pt = 800.269 m = 4.03811 y = 0.223011 phi = 6.03096 - delta_R between subjets: 0.0101382 - symmetry measure(z): 0.235041 - mass drop(mu): 0.229134 -filtered jet: pt = 778.731 m = 3.66481 y = 0.223066 phi = 6.0309 - - -original jet: pt = 73.2118 m = 21.4859 y = -1.16399 phi = 6.11977 -tagged jet: pt = 57.6019 m = 5.96709 y = -1.19982 phi = 6.11382 - delta_R between subjets: 0.116062 - symmetry measure(z): 0.246343 - mass drop(mu): 0.741581 -filtered jet: pt = 46.7389 m = 4.6395 y = -1.19991 phi = 6.12566 - diff --git a/src/Tools/fjcontrib/RecursiveTools/example_mmdt_ee.cc b/src/Tools/fjcontrib/RecursiveTools/example_mmdt_ee.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/example_mmdt_ee.cc +++ /dev/null @@ -1,142 +0,0 @@ -//---------------------------------------------------------------------- -/// \file example_ee.cc -/// -/// This example program is meant to illustrate how to use -/// RecursiveTools for e+e- events. 
It is done using the
-/// ModifiedMassDropTagger class but the same strategy would work as
-/// well for SoftDrop, RecursiveSoftDrop and IteratedSoftDrop
-///
-/// Run this example with
-///
-/// \verbatim
-/// ./example_ee < ../data/single-ee-event.dat
-/// \endverbatim
-//----------------------------------------------------------------------
-
-// $Id: example_mmdt_ee.cc 1064 2017-09-08 09:19:57Z gsoyez $
-//
-// Copyright (c) 2017-, Gavin P. Salam, Gregory Soyez, Jesse Thaler,
-// Kevin Zhou, Frederic Dreyer
-//
-//----------------------------------------------------------------------
-// This file is part of FastJet contrib.
-//
-// It is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the
-// Free Software Foundation; either version 2 of the License, or (at
-// your option) any later version.
-//
-// It is distributed in the hope that it will be useful, but WITHOUT
-// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
-// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
-// License for more details.
-//
-// You should have received a copy of the GNU General Public License
-// along with this code. If not, see <http://www.gnu.org/licenses/>.
-//----------------------------------------------------------------------
-
-#include
-#include
-
-#include
-#include
-#include
-#include "fastjet/ClusterSequence.hh"
-#include "fastjet/tools/Filter.hh"
-#include "ModifiedMassDropTagger.hh" // In external code, this should be fastjet/contrib/ModifiedMassDropTagger.hh
-
-using namespace std;
-using namespace fastjet;
-
-// forward declaration to make things clearer
-void read_event(vector<PseudoJet> &event);
-ostream & operator<<(ostream &, const PseudoJet &);
-
-//----------------------------------------------------------------------
-int main(){
-
-  //----------------------------------------------------------
-  // read in input particles
-  vector<PseudoJet> event;
-  read_event(event);
-  cout << "# read an event with " << event.size() << " particles" << endl;
-
-  // first get some Cambridge/Aachen jets
-  double R = 1.0;
-  JetDefinition jet_def(ee_genkt_algorithm, R, 0.0);
-  ClusterSequence cs(event, jet_def);
-
-  double Emin = 10.0;
-  Selector sel_jets = SelectorEMin(Emin);
-  vector<PseudoJet> jets = sorted_by_E(sel_jets(cs.inclusive_jets()));
-
-  // give the tagger a short name
-  typedef contrib::ModifiedMassDropTagger MMDT;
-
-  // This version uses the following setup:
-  //  - use energy for the symmetry measure
-  //    Note: since the mMDT does not require angular information,
-  //    here we could use either theta_E or cos_theta_E (it would
-  //    only change the value of DeltaR).
For
-  //    SoftDrop/RecursiveSoftDrop/IteratedSoftDrop, this would make
-  //    a difference and we have
-  //      DeltaR_{ij}^2 = theta_{ij}^2          (theta_E)
-  //      DeltaR_{ij}^2 = 2 [1-cos(theta_{ij})] (cos_theta_E)
-  //
-  //  - recurse into the branch with the largest energy
-  //
-  //  - use a symmetry cut, with no mass-drop requirement
-  double z_cut = 0.20;
-  MMDT tagger(z_cut,
-              MMDT::cos_theta_E,
-              std::numeric_limits<double>::infinity(),
-              MMDT::larger_E);
-  cout << "tagger is: " << tagger.description() << endl;
-
-  for (unsigned ijet = 0; ijet < jets.size(); ijet++) {
-    // first run MMDT and examine the output
-    PseudoJet tagged_jet = tagger(jets[ijet]);
-    cout << endl;
-    cout << "original jet: " << jets[ijet] << endl;
-    cout << "tagged jet: " << tagged_jet << endl;
-    if (tagged_jet == 0) continue; // If symmetry condition not satisfied, jet is not tagged
-    cout << " delta_R between subjets: " << tagged_jet.structure_of<MMDT>().delta_R() << endl;
-    cout << " symmetry measure(z): " << tagged_jet.structure_of<MMDT>().symmetry() << endl;
-    cout << " mass drop(mu): " << tagged_jet.structure_of<MMDT>().mu() << endl;
-
-    cout << endl;
-  }
-
-  return 0;
-}
-
-//----------------------------------------------------------------------
-/// read in input particles
-void read_event(vector<PseudoJet> &event){
-  string line;
-  while (getline(cin, line)) {
-    istringstream linestream(line);
-    // take substrings to avoid problems when there are extra "pollution"
-    // characters (e.g. line-feed).
-    if (line.substr(0,4) == "#END") {return;}
-    if (line.substr(0,1) == "#") {continue;}
-    double px,py,pz,E;
-    linestream >> px >> py >> pz >> E;
-    PseudoJet particle(px,py,pz,E);
-
-    // push event onto back of full_event vector
-    event.push_back(particle);
-  }
-}
-
-//----------------------------------------------------------------------
-/// overloaded jet info output
-ostream & operator<<(ostream & ostr, const PseudoJet & jet) {
-  if (jet == 0) {
-    ostr << " 0 ";
-  } else {
-    ostr << " E = " << jet.pt()
-         << " m = " << jet.m();
-  }
-  return ostr;
-}
diff --git a/src/Tools/fjcontrib/RecursiveTools/example_mmdt_ee.ref b/src/Tools/fjcontrib/RecursiveTools/example_mmdt_ee.ref
deleted file mode 100644
--- a/src/Tools/fjcontrib/RecursiveTools/example_mmdt_ee.ref
+++ /dev/null
@@ -1,36 +0,0 @@
-# read an event with 70 particles
-#--------------------------------------------------------------------------
-# FastJet release 3.3.1-devel
-# M. Cacciari, G.P. Salam and G. Soyez
-# A software package for jet finding and analysis at colliders
-# http://fastjet.fr
-#
-# Please cite EPJC72(2012)1896 [arXiv:1111.6097] if you use this package
-# for scientific work and optionally PLB641(2006)57 [hep-ph/0512210].
-#
-# FastJet is provided without warranty under the terms of the GNU GPLv2.
-# It uses T. Chan's closest pair algorithm, S. Fortune's Voronoi code
-# and 3rd party plugin jet algorithms. See COPYING file for details.
-#-------------------------------------------------------------------------- -tagger is: Recursive Tagger with a symmetry cut cos_theta_E > 0.2 [ModifiedMassDropTagger], no mass-drop requirement, recursion into the subjet with larger energy - -original jet: E = 11.7799 m = 19.2957 -tagged jet: E = 6.02018 m = 6.52181 - delta_R between subjets: 0.270322 - symmetry measure(z): 0.49629 - mass drop(mu): 0.403395 - - -original jet: E = 15.9528 m = 7.28134 -tagged jet: E = 8.08779 m = 0.992053 - delta_R between subjets: 0.113801 - symmetry measure(z): 0.289966 - mass drop(mu): 0.413689 - - -original jet: E = 10.3356 m = 9.53801 -tagged jet: E = 10.3356 m = 9.53801 - delta_R between subjets: 0.779204 - symmetry measure(z): 0.35827 - mass drop(mu): 0.331619 - diff --git a/src/Tools/fjcontrib/RecursiveTools/example_mmdt_sub.cc b/src/Tools/fjcontrib/RecursiveTools/example_mmdt_sub.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/example_mmdt_sub.cc +++ /dev/null @@ -1,211 +0,0 @@ -// $Id: example_mmdt_sub.cc 686 2014-06-14 03:25:09Z jthaler $ -// -// Copyright (c) 2014, Gavin Salam -// -/// \file example_mmdt_sub.cc -/// -/// An example to illustrate how to use the ModifiedMassDropTagger -/// together with pileup subtraction. -/// -/// Usage: -/// -/// \verbatim -/// ./example_mmdt_sub < ../data/Pythia-Zp2jets-lhc-pileup-1ev.dat -/// \endverbatim -/// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. 
-//
-// You should have received a copy of the GNU General Public License
-// along with this code. If not, see <http://www.gnu.org/licenses/>.
-//----------------------------------------------------------------------
-
-#include
-#include
-
-#include
-#include
-#include "fastjet/ClusterSequenceArea.hh"
-#include "fastjet/tools/GridMedianBackgroundEstimator.hh"
-#include "fastjet/tools/Subtractor.hh"
-#include "fastjet/tools/Filter.hh"
-#include "fastjet/config.h"
-#include "ModifiedMassDropTagger.hh" // In external code, this should be fastjet/contrib/ModifiedMassDropTagger.hh
-
-using namespace std;
-using namespace fastjet;
-
-// forward declaration to make things clearer
-void read_event(vector<PseudoJet> &hard_event, vector<PseudoJet> &full_event);
-void do_analysis(const vector<PseudoJet> & jets, const Subtractor * subtractor);
-ostream & operator<<(ostream &, const PseudoJet &);
-
-// give the tagger a short name
-typedef contrib::ModifiedMassDropTagger MMDT;
-
-//----------------------------------------------------------------------
-int main(){
-
-  // Specify our basic jet tools:
-  // jet definition & area definition
-  double R = 1.0, rapmax = 5.0, ghost_area = 0.01;
-  int repeat = 1;
-  JetDefinition jet_def(cambridge_algorithm, R);
-  AreaDefinition area_def(active_area_explicit_ghosts,
-                          GhostedAreaSpec(SelectorAbsRapMax(rapmax),repeat,ghost_area));
-  cout << "# " << jet_def.description() << endl;
-  cout << "# " << area_def.description() << endl;
-
-  // then our background estimator: use the (fast) grid-median method,
-  // and manually include reasonable rapidity dependence for
-  // particle-level 14 TeV events.
-  double grid_size = 0.55;
-  GridMedianBackgroundEstimator bge(rapmax, grid_size);
-  BackgroundRescalingYPolynomial rescaling (1.1685397, 0, -0.0246807, 0, 5.94119e-05);
-  bge.set_rescaling_class(&rescaling);
-  // define a corresponding subtractor
-  Subtractor subtractor(&bge);
-
-
-  //----------------------------------------------------------
-  // next read in input particles and get corresponding jets
-  // for the hard event (no pileup) and full event (with pileup)
-  vector<PseudoJet> hard_event, full_event;
-  read_event(hard_event, full_event);
-  cout << "# read a hard event with " << hard_event.size() << " particles" ;
-#if (FASTJET_VERSION_NUMBER >= 30100) // Selector.sum(..) works only for FJ >= 3.1
-  cout << ", centre-of-mass energy = " << SelectorIdentity().sum(hard_event).m();
-#endif
-  cout << endl;
-  cout << "# read a full event with " << full_event.size() << " particles" << endl;
-
-  // then get the CS and jets for both hard and full events
-  ClusterSequenceArea csa_hard(hard_event, jet_def, area_def);
-  ClusterSequenceArea csa_full(full_event, jet_def, area_def);
-  vector<PseudoJet> hard_jets = SelectorNHardest(2)(csa_hard.inclusive_jets());
-  vector<PseudoJet> full_jets = SelectorNHardest(2)(csa_full.inclusive_jets());
-  hard_jets = sorted_by_rapidity(hard_jets);
-  full_jets = sorted_by_rapidity(full_jets);
-
-  // estimate the background (the subtractor is automatically tied to this)
-  bge.set_particles(full_event);
-
-  // then do analyses with and without PU, and with and without subtraction
-  cout << endl << "-----------------------------------------" << endl
-       << "No pileup, no subtraction" << endl;
-  do_analysis(hard_jets, 0);
-  cout << endl << "-----------------------------------------" << endl
-       << "Pileup, no subtraction" << endl;
-  do_analysis(full_jets, 0);
-  cout << endl << "-----------------------------------------" << endl
-       << "Pileup, with subtraction" << endl;
-  do_analysis(full_jets, &subtractor);
-
-  return 0;
-}
-
-
-//----------------------------------------------------------------------
-/// do a simple MMDT + filter analysis, optionally with a subtractor
-void do_analysis(const vector<PseudoJet> & jets, const Subtractor * subtractor) {
-
-  // use just a symmetry cut for the tagger, with no mass-drop requirement
-  double z_cut = 0.10;
-  MMDT tagger(z_cut);
-  cout << "tagger is: " << tagger.description() << endl;
-  // tell the tagger that we will subtract the input jet ourselves
-  tagger.set_subtractor(subtractor);
-  tagger.set_input_jet_is_subtracted(true);
-
-
-  PseudoJet jet;
-  for (unsigned ijet = 0; ijet < jets.size(); ijet++) {
-    if (subtractor) {
-      jet = (*subtractor)(jets[ijet]);
-    } else {
-      jet = jets[ijet];
-    }
-    PseudoJet tagged_jet = tagger(jet);
-    cout << endl;
-    cout << "original jet" << jet << endl;
-    cout << "tagged jet" << tagged_jet << endl;
-    if (tagged_jet != 0) { // get additional information about satisfied symmetry condition
-      cout << " delta_R between subjets: " << tagged_jet.structure_of<MMDT>().delta_R() << endl;
-      cout << " symmetry measure(z): " << tagged_jet.structure_of<MMDT>().symmetry() << endl;
-      cout << " mass drop(mu): " << tagged_jet.structure_of<MMDT>().mu() << endl;
-    }
-
-    // then filter the jet (useful for studies at moderate pt)
-    // with a dynamic Rfilt choice (as in arXiv:0802.2470)
-    double Rfilt = min(0.3, tagged_jet.structure_of<MMDT>().delta_R()*0.5);
-    int nfilt = 3;
-    Filter filter(Rfilt, SelectorNHardest(nfilt));
-    filter.set_subtractor(subtractor);
-    PseudoJet filtered_jet = filter(tagged_jet);
-    cout << "filtered jet: " << filtered_jet << endl;
-    cout << endl;
-  }
-
-
-}
-
-
-//------------------------------------------------------------------------
-// read the event with and without pileup
-void read_event(vector<PseudoJet> &hard_event, vector<PseudoJet> &full_event){
-  string line;
-  int nsub = 0; // counter to keep track of which sub-event we're reading
-  while (getline(cin, line)) {
-    istringstream linestream(line);
-    // take substrings to avoid problems when there are
extra "pollution"
-    // characters (e.g. line-feed).
-    if (line.substr(0,4) == "#END") {break;}
-    if (line.substr(0,9) == "#SUBSTART") {
-      // if more sub events follow, make copy of first one (the hard one) here
-      if (nsub == 1) hard_event = full_event;
-      nsub += 1;
-    }
-    if (line.substr(0,1) == "#") {continue;}
-    double px,py,pz,E;
-    linestream >> px >> py >> pz >> E;
-    PseudoJet particle(px,py,pz,E);
-
-    // push event onto back of full_event vector
-    full_event.push_back(particle);
-  }
-
-  // if we have read in only one event, copy it across here...
-  if (nsub == 1) hard_event = full_event;
-
-  // if there was nothing in the event
-  if (nsub == 0) {
-    cerr << "Error: read empty event\n";
-    exit(-1);
-  }
-
-  cout << "# " << nsub-1 << " pileup events on top of the hard event" << endl;
-}
-
-//----------------------------------------------------------------------
-/// overloaded jet info output
-ostream & operator<<(ostream & ostr, const PseudoJet & jet) {
-  if (jet == 0) {
-    ostr << " 0 ";
-  } else {
-    ostr << " pt = " << jet.pt()
-         << " m = " << jet.m()
-         << " y = " << jet.rap()
-         << " phi = " << jet.phi();
-  }
-  return ostr;
-}
diff --git a/src/Tools/fjcontrib/RecursiveTools/example_mmdt_sub.ref b/src/Tools/fjcontrib/RecursiveTools/example_mmdt_sub.ref
deleted file mode 100644
--- a/src/Tools/fjcontrib/RecursiveTools/example_mmdt_sub.ref
+++ /dev/null
@@ -1,78 +0,0 @@
-# Longitudinally invariant Cambridge/Aachen algorithm with R = 1 and E scheme recombination
-# Active area (explicit ghosts) with ghosts of area 0.00997331 (had requested 0.01), placed according to selector (|rap| <= 5), scattered wrt to perfect grid by (rel) 1, mean_ghost_pt = 1e-100, rel pt_scatter = 0.1, n repetitions of ghost distributions = 1
-# 20 pileup events on top of the hard event
-# read a hard event with 309 particles, centre-of-mass energy = 14000
-# read a full event with 3109 particles
-#--------------------------------------------------------------------------
-# FastJet release
3.1.0-devel -# M. Cacciari, G.P. Salam and G. Soyez -# A software package for jet finding and analysis at colliders -# http://fastjet.fr -# -# Please cite EPJC72(2012)1896 [arXiv:1111.6097] if you use this package -# for scientific work and optionally PLB641(2006)57 [hep-ph/0512210]. -# -# FastJet is provided without warranty under the terms of the GNU GPLv2. -# It uses T. Chan's closest pair algorithm, S. Fortune's Voronoi code, -# CGAL and 3rd party plugin jet algorithms. See COPYING file for details. -#-------------------------------------------------------------------------- - ------------------------------------------ -No pileup, no subtraction -tagger is: Recursive Tagger with a symmetry cut scalar_z > 0.1 [ModifiedMassDropTagger], no mass-drop requirement, recursion into the subjet with larger pt - -original jet pt = 221.812 m = 32.2905 y = -1.05822 phi = 0.0756913 -tagged jet pt = 206.662 m = 6.022 y = -1.07399 phi = 0.074484 - delta_R between subjets: 0.0625986 - symmetry measure(z): 0.10081 - mass drop(mu): 0.629535 -filtered jet: pt = 198.618 m = 5.00366 y = -1.07664 phi = 0.0741277 - - -original jet pt = 147.272 m = 32.4736 y = 0.684444 phi = 3.45888 -tagged jet pt = 125.59 m = 9.00653 y = 0.673318 phi = 3.44347 - delta_R between subjets: 0.0732829 - symmetry measure(z): 0.348016 - mass drop(mu): 0.604284 -filtered jet: pt = 72.4046 m = 4.32252 y = 0.668029 phi = 3.44446 - - ------------------------------------------ -Pileup, no subtraction -tagger is: Recursive Tagger with a symmetry cut scalar_z > 0.1 [ModifiedMassDropTagger], no mass-drop requirement, recursion into the subjet with larger pt - -original jet pt = 238.971 m = 82.4889 y = -1.07084 phi = 0.0430975 -tagged jet pt = 206.662 m = 6.022 y = -1.07399 phi = 0.074484 - delta_R between subjets: 0.0625986 - symmetry measure(z): 0.10081 - mass drop(mu): 0.629535 -filtered jet: pt = 198.618 m = 5.00366 y = -1.07664 phi = 0.0741277 - - -original jet pt = 173.75 m = 74.6701 y = 0.73638 phi = 3.48392 
-tagged jet pt = 133.133 m = 13.4499 y = 0.667906 phi = 3.44217 - delta_R between subjets: 0.10831 - symmetry measure(z): 0.107147 - mass drop(mu): 0.862327 -filtered jet: pt = 109.81 m = 7.14298 y = 0.676746 phi = 3.43921 - - ------------------------------------------ -Pileup, with subtraction -tagger is: Recursive Tagger with a symmetry cut scalar_z > 0.1 [ModifiedMassDropTagger], no mass-drop requirement, recursion into the subjet with larger pt - -original jet pt = 203.782 m = -0.967618 y = -1.05353 phi = 0.0821247 -tagged jet pt = 206.662 m = 6.022 y = -1.07399 phi = 0.074484 - delta_R between subjets: 0.0625986 - symmetry measure(z): 0.10081 - mass drop(mu): 0.629535 -filtered jet: pt = 198.618 m = 5.00366 y = -1.07664 phi = 0.0741277 - - -original jet pt = 146.109 m = 51.1929 y = 0.705189 phi = 3.44819 -tagged jet pt = 132.536 m = 13.3758 y = 0.668335 phi = 3.44231 - delta_R between subjets: 0.107608 - symmetry measure(z): 0.104244 - mass drop(mu): 0.866401 -filtered jet: pt = 109.661 m = 7.13553 y = 0.676798 phi = 3.43925 - diff --git a/src/Tools/fjcontrib/RecursiveTools/example_recluster.cc b/src/Tools/fjcontrib/RecursiveTools/example_recluster.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/example_recluster.cc +++ /dev/null @@ -1,173 +0,0 @@ -//---------------------------------------------------------------------- -/// \file example_recluster.cc -/// -/// This example program is meant to illustrate how the -/// fastjet::contrib::ModifiedMassDropTagger class is used. -/// -/// Run this example with -/// -/// \verbatim -/// ./example_recluster < ../data/single-event.dat -/// \endverbatim -//---------------------------------------------------------------------- - -// $Id: example_recluster.cc 705 2014-07-07 14:37:03Z gsoyez $ -// -// Copyright (c) 2014, Gavin P. Salam -// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. 
-//
-// It is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the
-// Free Software Foundation; either version 2 of the License, or (at
-// your option) any later version.
-//
-// It is distributed in the hope that it will be useful, but WITHOUT
-// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
-// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
-// License for more details.
-//
-// You should have received a copy of the GNU General Public License
-// along with this code. If not, see <http://www.gnu.org/licenses/>.
-//----------------------------------------------------------------------
-
-#include
-#include
-
-#include
-#include
-#include
-#include "fastjet/ClusterSequence.hh"
-
-#include "Recluster.hh" // In external code, this should be fastjet/contrib/Recluster.hh
-
-using namespace std;
-using namespace fastjet;
-
-// forward declaration to make things clearer
-void read_event(vector<PseudoJet> &event);
-ostream & operator<<(ostream &, const PseudoJet &);
-
-//----------------------------------------------------------------------
-int main(){
-
-  //----------------------------------------------------------
-  // read in input particles
-  vector<PseudoJet> event;
-  read_event(event);
-  cout << "# read an event with " << event.size() << " particles" << endl;
-
-  double R = 1.0;
-  double ptmin = 20.0;
-  double Rsub = 0.3;
-
-  //----------------------------------------------------------
-  // start with an example from anti-kt jets
-  cout << "--------------------------------------------------" << endl;
-  JetDefinition jet_def_akt(antikt_algorithm, R);
-  ClusterSequence cs_akt(event, jet_def_akt);
-  vector<PseudoJet> jets_akt = sorted_by_pt(cs_akt.inclusive_jets(ptmin));
-  PseudoJet jet_akt = jets_akt[0];
-  cout << "Starting from a jet obtained from: " << jet_def_akt.description() << endl
-       << " " << jet_akt << endl << endl;
-
-  // recluster with C/A ("infinite" radius)
-  contrib::Recluster
recluster_ca_inf(cambridge_algorithm, JetDefinition::max_allowable_R); - PseudoJet rec_jet_ca_inf = recluster_ca_inf(jet_akt); - cout << "Reclustering with: " << recluster_ca_inf.description() << endl - << " " << rec_jet_ca_inf << endl << endl;; - - // recluster with C/A (small radius), keeping all subjets - contrib::Recluster recluster_ca_sub(cambridge_algorithm, Rsub, false); - PseudoJet rec_jet_ca_sub = recluster_ca_sub(jet_akt); - cout << "Reclustering with: " << recluster_ca_sub.description() << endl - << " " << rec_jet_ca_sub << endl; - vector pieces = rec_jet_ca_sub.pieces(); - cout << " subjets: " << endl; - for (unsigned int i=0;i jets_ca = sorted_by_pt(cs_ca.inclusive_jets(ptmin)); - PseudoJet jet_ca = jets_ca[0]; - cout << "Starting from a jet obtained from: " << jet_def_ca.description() << endl - << " " << jet_ca << endl << endl; - - // recluster with C/A ("infinite" radius) - rec_jet_ca_inf = recluster_ca_inf(jet_ca); - cout << "Reclustering with: " << recluster_ca_inf.description() << endl - << " " << rec_jet_ca_inf << endl << endl; - - // recluster with C/A (small radius), keeping all subjets - rec_jet_ca_sub = recluster_ca_sub(jet_ca); - cout << "Reclustering with: " << recluster_ca_sub.description() << endl - << " " << rec_jet_ca_sub << endl; - pieces = rec_jet_ca_sub.pieces(); - cout << " subjets: " << endl; - for (unsigned int i=0;i &event){ - string line; - while (getline(cin, line)) { - istringstream linestream(line); - // take substrings to avoid problems when there are extra "pollution" - // characters (e.g. line-feed). 
-    if (line.substr(0,4) == "#END") {return;}
-    if (line.substr(0,1) == "#") {continue;}
-    double px,py,pz,E;
-    linestream >> px >> py >> pz >> E;
-    PseudoJet particle(px,py,pz,E);
-
-    // push event onto back of full_event vector
-    event.push_back(particle);
-  }
-}
-
-//----------------------------------------------------------------------
-/// overloaded jet info output
-ostream & operator<<(ostream & ostr, const PseudoJet & jet) {
-  if (jet == 0) {
-    ostr << " 0 ";
-  } else {
-    ostr << " pt = " << jet.pt()
-         << " m = " << jet.m()
-         << " y = " << jet.rap()
-         << " phi = " << jet.phi()
-         << " ClusSeq = " << (jet.has_associated_cs() ? "yes" : "no");
-  }
-  return ostr;
-}
diff --git a/src/Tools/fjcontrib/RecursiveTools/example_recluster.ref b/src/Tools/fjcontrib/RecursiveTools/example_recluster.ref
deleted file mode 100644
--- a/src/Tools/fjcontrib/RecursiveTools/example_recluster.ref
+++ /dev/null
@@ -1,64 +0,0 @@
-# read an event with 354 particles
---------------------------------------------------
-#--------------------------------------------------------------------------
-# FastJet release 3.1.0-devel
-# M. Cacciari, G.P. Salam and G. Soyez
-# A software package for jet finding and analysis at colliders
-# http://fastjet.fr
-#
-# Please cite EPJC72(2012)1896 [arXiv:1111.6097] if you use this package
-# for scientific work and optionally PLB641(2006)57 [hep-ph/0512210].
-#
-# FastJet is provided without warranty under the terms of the GNU GPLv2.
-# It uses T. Chan's closest pair algorithm, S. Fortune's Voronoi code,
-# CGAL and 3rd party plugin jet algorithms. See COPYING file for details.
-#-------------------------------------------------------------------------- -Starting from a jet obtained from: Longitudinally invariant anti-kt algorithm with R = 1 and E scheme recombination - pt = 983.387 m = 39.9912 y = -0.867307 phi = 2.90511 ClusSeq = yes - -Reclustering with: Recluster with subjet_def = Longitudinally invariant Cambridge/Aachen algorithm with R = 1000, a recombiner obtained from the jet being reclustered and keeping the hardest subjet - pt = 983.387 m = 39.9912 y = -0.867307 phi = 2.90511 ClusSeq = yes - -Reclustering with: Recluster with subjet_def = Longitudinally invariant Cambridge/Aachen algorithm with R = 0.3, a recombiner obtained from the jet being reclustered and joining all subjets in a composite jet - pt = 983.387 m = 39.9912 y = -0.867307 phi = 2.90511 ClusSeq = no - subjets: - pt = 980.937 m = 27.6178 y = -0.86753 phi = 2.90527 ClusSeq = yes - pt = 1.34048 m = 0.13498 y = -0.448775 phi = 2.76161 ClusSeq = yes - pt = 0.461349 m = 0.13957 y = -1.31787 phi = 3.17806 ClusSeq = yes - pt = 0.362421 m = 0.0359117 y = -0.593531 phi = 3.08425 ClusSeq = yes - pt = 0.241708 m = -3.12684e-06 y = -1.16407 phi = 2.42725 ClusSeq = yes - pt = 0.115239 m = 0.13957 y = -1.6844 phi = 2.54038 ClusSeq = yes - -Reclustering with: Recluster with subjet_def = Longitudinally invariant kt algorithm with R = 0.3, a recombiner obtained from the jet being reclustered and joining all subjets in a composite jet - pt = 983.387 m = 39.9912 y = -0.867307 phi = 2.90511 ClusSeq = no - subjets: - pt = 982.621 m = 32.8883 y = -0.866837 phi = 2.90514 ClusSeq = yes - pt = 0.461349 m = 0.13957 y = -1.31787 phi = 3.17806 ClusSeq = yes - pt = 0.241708 m = -3.12684e-06 y = -1.16407 phi = 2.42725 ClusSeq = yes - pt = 0.115239 m = 0.13957 y = -1.6844 phi = 2.54038 ClusSeq = yes - --------------------------------------------------- -Starting from a jet obtained from: Longitudinally invariant Cambridge/Aachen algorithm with R = 1 and E scheme recombination - pt = 983.387 m = 
39.9912 y = -0.867307 phi = 2.90511 ClusSeq = yes - -Reclustering with: Recluster with subjet_def = Longitudinally invariant Cambridge/Aachen algorithm with R = 1000, a recombiner obtained from the jet being reclustered and keeping the hardest subjet - pt = 983.387 m = 39.9912 y = -0.867307 phi = 2.90511 ClusSeq = yes - -Reclustering with: Recluster with subjet_def = Longitudinally invariant Cambridge/Aachen algorithm with R = 0.3, a recombiner obtained from the jet being reclustered and joining all subjets in a composite jet - pt = 983.387 m = 39.9912 y = -0.867307 phi = 2.90511 ClusSeq = no - subjets: - pt = 980.937 m = 27.6178 y = -0.86753 phi = 2.90527 ClusSeq = yes - pt = 1.34048 m = 0.13498 y = -0.448775 phi = 2.76161 ClusSeq = yes - pt = 0.461349 m = 0.13957 y = -1.31787 phi = 3.17806 ClusSeq = yes - pt = 0.362421 m = 0.0359117 y = -0.593531 phi = 3.08425 ClusSeq = yes - pt = 0.241708 m = -3.12684e-06 y = -1.16407 phi = 2.42725 ClusSeq = yes - pt = 0.115239 m = 0.13957 y = -1.6844 phi = 2.54038 ClusSeq = yes - -Reclustering with: Recluster with subjet_def = Longitudinally invariant kt algorithm with R = 0.3, a recombiner obtained from the jet being reclustered and joining all subjets in a composite jet - pt = 983.387 m = 39.9912 y = -0.867307 phi = 2.90511 ClusSeq = no - subjets: - pt = 982.621 m = 32.8883 y = -0.866837 phi = 2.90514 ClusSeq = yes - pt = 0.461349 m = 0.13957 y = -1.31787 phi = 3.17806 ClusSeq = yes - pt = 0.241708 m = -3.12684e-06 y = -1.16407 phi = 2.42725 ClusSeq = yes - pt = 0.115239 m = 0.13957 y = -1.6844 phi = 2.54038 ClusSeq = yes - diff --git a/src/Tools/fjcontrib/RecursiveTools/example_recursive_softdrop.cc b/src/Tools/fjcontrib/RecursiveTools/example_recursive_softdrop.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/example_recursive_softdrop.cc +++ /dev/null @@ -1,240 +0,0 @@ -//---------------------------------------------------------------------- -/// \file example_recursive_softdrop.cc -/// -/// This 
example program is meant to illustrate how the
-/// fastjet::contrib::RecursiveSoftDrop class is used.
-///
-/// Run this example with
-///
-/// \verbatim
-/// ./example_recursive_softdrop < ../data/single-event.dat
-/// \endverbatim
-//----------------------------------------------------------------------
-
-// $Id: example_recursive_softdrop.cc 1074 2017-09-18 15:15:20Z gsoyez $
-//
-// Copyright (c) 2017-, Gavin P. Salam, Gregory Soyez, Jesse Thaler,
-// Kevin Zhou, Frederic Dreyer
-//
-//----------------------------------------------------------------------
-// This file is part of FastJet contrib.
-//
-// It is free software; you can redistribute it and/or modify it under
-// the terms of the GNU General Public License as published by the
-// Free Software Foundation; either version 2 of the License, or (at
-// your option) any later version.
-//
-// It is distributed in the hope that it will be useful, but WITHOUT
-// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
-// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
-// License for more details.
-//
-// You should have received a copy of the GNU General Public License
-// along with this code. If not, see <http://www.gnu.org/licenses/>.
-//---------------------------------------------------------------------- - -#include <iostream> -#include <sstream> - -#include <iomanip> -#include <cassert> -#include "fastjet/ClusterSequence.hh" -#include "RecursiveSoftDrop.hh" // In external code, this should be fastjet/contrib/RecursiveSoftDrop.hh - -using namespace std; -using namespace fastjet; - -// forward declaration to make things clearer -void read_event(vector<PseudoJet> &event); - -void print_prongs_with_clustering_info(const PseudoJet &jet, const string &prefix); -void print_raw_prongs(const PseudoJet &jet); - -ostream & operator<<(ostream &, const PseudoJet &); - -//---------------------------------------------------------------------- -int main(){ - - //---------------------------------------------------------- - // read in input particles - vector<PseudoJet> event; - read_event(event); - cout << "# read an event with " << event.size() << " particles" << endl; - - // first get some anti-kt jets - double R = 1.0, ptmin = 100.0; - JetDefinition jet_def(antikt_algorithm, R); - ClusterSequence cs(event, jet_def); - vector<PseudoJet> jets = sorted_by_pt(cs.inclusive_jets(ptmin)); - - //---------------------------------------------------------------------- - // give the soft drop groomer a short name - // Use a symmetry cut z > z_cut R^beta - // By default, there is no mass-drop requirement - double z_cut = 0.2; - double beta = 0.5; - int n=4; // number of layers (-1 <> infinite) - contrib::RecursiveSoftDrop rsd(beta, z_cut, n, R); - - // keep additional structure info (used below) - rsd.set_verbose_structure(true); - - // (optionally) use the same-depth variant - // - // instead of recursing into the largest Delta R branch until "n+1" - // branches have been found, the same-depth variant recurses n times - // into all the branches found in the previous iteration - // - //rsd.set_fixed_depth_mode(); - - // (optionally) use a dynamical R0 - // - // Instead of being normalised by the initial jet radius R0, angles - // are normalised by the delta R of the previous iteration - 
// - rsd.set_dynamical_R0(); - - // (optionally) recurse only in the hardest branch - // - // Instead of recursing into both branches found by the previous - // iteration, only keep recursing into the hardest one - // - //rsd.set_hardest_branch_only(); - - - //---------------------------------------------------------------------- - cout << "RecursiveSoftDrop groomer is: " << rsd.description() << endl; - - for (unsigned ijet = 0; ijet < jets.size(); ijet++) { - // Run SoftDrop and examine the output - PseudoJet rsd_jet = rsd(jets[ijet]); - cout << endl; - cout << "original jet: " << jets[ijet] << endl; - cout << "RecursiveSoftDropped jet: " << rsd_jet << endl; - - assert(rsd_jet != 0); //because soft drop is a groomer (not a tagger), it should always return a soft-dropped jet - - // print the prong structure of the jet - // - // This can be done in 2 ways: - // - // - either keeping the clustering information and get the - // branches as a succession of 2->1 recombinations (this is - // done calling "pieces" recursively) - cout << endl - << "Prongs with clustering information" << endl - << "----------------------------------" << endl; - print_prongs_with_clustering_info(rsd_jet, " "); - // - // - or getting all the branches in a single go (done directly - // through the jet associated structure) - cout << endl - << "Prongs without clustering information" << endl - << "-------------------------------------" << endl; - print_raw_prongs(rsd_jet); - - cout << "Groomed prongs information:" << endl; - cout << "index zg thetag" << endl; - vector<pair<double,double> > ztg = rsd_jet.structure_of<contrib::RecursiveSoftDrop>().sorted_zg_and_thetag(); - for (unsigned int i=0; i<ztg.size(); ++i) - cout << setw(5) << i+1 << setw(11) << ztg[i].first << setw(14) << ztg[i].second << endl; - } - - return 0; -} - -//---------------------------------------------------------------------- -// print the prongs of the jet together with their clustering information -void print_prongs_with_clustering_info(const PseudoJet &jet, const string &prefix){ - const contrib::RecursiveSoftDrop::StructureType &structure = jet.structure_of<contrib::RecursiveSoftDrop>(); - double dR = structure.delta_R(); - cout << " " << left << setw(14) << (prefix.substr(0, prefix.size()-1)+"+--> ") << right - << setw(8) << jet.pt() << setw(14) << jet.m() - << setw(5) << structure.dropped_count(false) - << setw(5) << structure.dropped_count() - << setw(11) << structure.max_dropped_symmetry(false); - - if 
(structure.has_substructure()){ - cout << setw(11) << structure.symmetry() - << setw(11) << structure.delta_R(); - } - cout << endl; - - if (dR>=0){ - vector<PseudoJet> pieces = jet.pieces(); - assert(pieces.size()==2); - print_prongs_with_clustering_info(pieces[0], prefix+" |"); - print_prongs_with_clustering_info(pieces[1], prefix+" "); - } -} - -//---------------------------------------------------------------------- -// print all the prongs inside the jet (no clustering info) -void print_raw_prongs(const PseudoJet &jet){ - cout << "(Raw) list of prongs:" << endl; - if (!jet.has_structure_of<contrib::RecursiveSoftDrop>()){ - cout << " None (bad structure)" << endl; - return; - } - - cout << setw(5) << " " << setw(11) << "pt" << setw(14) << "mass" << endl; - - vector<PseudoJet> prongs = contrib::recursive_soft_drop_prongs(jet); - for (unsigned int iprong=0; iprong<prongs.size(); ++iprong){ - const PseudoJet &prong = prongs[iprong]; - const contrib::RecursiveSoftDrop::StructureType &structure = prong.structure_of<contrib::RecursiveSoftDrop>(); - cout << setw(5) << iprong << setw(11) << prong.pt() << setw(14) << prong.m() << endl; - - assert(!structure.has_substructure()); - } - cout << endl; -} - -//---------------------------------------------------------------------- -/// read in input particles -void read_event(vector<PseudoJet> &event){ - string line; - while (getline(cin, line)) { - istringstream linestream(line); - // take substrings to avoid problems when there are extra "pollution" - // characters (e.g. line-feed). 
- if (line.substr(0,4) == "#END") {return;} - if (line.substr(0,1) == "#") {continue;} - double px,py,pz,E; - linestream >> px >> py >> pz >> E; - PseudoJet particle(px,py,pz,E); - - // push event onto back of full_event vector - event.push_back(particle); - } -} - -//---------------------------------------------------------------------- -/// overloaded jet info output -ostream & operator<<(ostream & ostr, const PseudoJet & jet) { - if (jet == 0) { - ostr << " 0 "; - } else { - ostr << " pt = " << jet.pt() - << " m = " << jet.m() - << " y = " << jet.rap() - << " phi = " << jet.phi(); - } - return ostr; -} diff --git a/src/Tools/fjcontrib/RecursiveTools/example_recursive_softdrop.ref b/src/Tools/fjcontrib/RecursiveTools/example_recursive_softdrop.ref deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/example_recursive_softdrop.ref +++ /dev/null @@ -1,83 +0,0 @@ -# read an event with 354 particles -#-------------------------------------------------------------------------- -# FastJet release 3.3.1-devel -# M. Cacciari, G.P. Salam and G. Soyez -# A software package for jet finding and analysis at colliders -# http://fastjet.fr -# -# Please cite EPJC72(2012)1896 [arXiv:1111.6097] if you use this package -# for scientific work and optionally PLB641(2006)57 [hep-ph/0512210]. -# -# FastJet is provided without warranty under the terms of the GNU GPLv2. -# It uses T. Chan's closest pair algorithm, S. Fortune's Voronoi code -# and 3rd party plugin jet algorithms. See COPYING file for details. 
-#-------------------------------------------------------------------------- -RecursiveSoftDrop groomer is: recursive application of [Recursive Groomer with a symmetry cut scalar_z > 0.2 (theta/1)^0.5 [SoftDrop], no mass-drop requirement, recursion into the subjet with larger pt], applied N=4 times, with R0 dynamically scaled - -original jet: pt = 983.387 m = 39.9912 y = -0.867307 phi = 2.90511 -RecursiveSoftDropped jet: pt = 811.261 m = 6.45947 y = -0.87094 phi = 2.9083 - -Prongs with clustering information ----------------------------------- - branch branch N_groomed max loc substructure - pt mass loc tot zdrop zg thetag - +--> 811.261 6.45947 13 13 0.0294298 0.154333 0.0200353 - +--> 669.635 1.98393 2 2 0 - +--> 141.627 0.765329 0 0 0 0.199906 0.00734861 - +--> 113.316 0.462946 0 0 0 0.208947 0.00550512 - | +--> 89.6387 0.211913 0 0 0 - | +--> 23.6769 0.13957 0 0 0 - +--> 28.3123 0.170026 0 0 0 0.248173 0.00447786 - +--> 21.286 0.13957 0 0 0 - +--> 7.02636 -3.21461e-05 0 0 0 - -Prongs without clustering information -------------------------------------- -(Raw) list of prongs: - pt mass - 0 669.635 1.98393 - 1 89.6387 0.211913 - 2 21.286 0.13957 - 3 23.6769 0.13957 - 4 7.02636 -3.21461e-05 - -Groomed prongs information: -index zg thetag - 1 0.154333 0.0200353 - 2 0.199906 0.00734861 - 3 0.208947 0.00550512 - 4 0.248173 0.00447786 - -original jet: pt = 908.098 m = 87.7124 y = 0.219482 phi = 6.03487 -RecursiveSoftDropped jet: pt = 830.517 m = 4.91035 y = 0.223054 phi = 6.02995 - -Prongs with clustering information ----------------------------------- - branch branch N_groomed max loc substructure - pt mass loc tot zdrop zg thetag - +--> 830.517 4.91035 12 13 0.0232056 0.060784 0.0153863 - +--> 778.731 3.66481 0 1 0 0.235041 0.0101382 - | +--> 599.106 0.403809 1 1 0 - | +--> 179.628 0.853363 0 1 0 0.25773 0.00871739 - | +--> 131.15 0.378456 1 1 0.0606587 0.315246 0.00417298 - | | +--> 89.8058 0.107191 0 0 0 - | | +--> 41.3448 0.13957 0 0 0 - | +--> 48.4785 0.13957 0 
0 0 - +--> 51.7916 0.13957 0 0 0 - -Prongs without clustering information -------------------------------------- -(Raw) list of prongs: - pt mass - 0 599.106 0.403809 - 1 51.7916 0.13957 - 2 89.8058 0.107191 - 3 48.4785 0.13957 - 4 41.3448 0.13957 - -Groomed prongs information: -index zg thetag - 1 0.060784 0.0153863 - 2 0.235041 0.0101382 - 3 0.25773 0.00871739 - 4 0.315246 0.00417298 diff --git a/src/Tools/fjcontrib/RecursiveTools/example_softdrop.cc b/src/Tools/fjcontrib/RecursiveTools/example_softdrop.cc deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/example_softdrop.cc +++ /dev/null @@ -1,122 +0,0 @@ -//---------------------------------------------------------------------- -/// \file example_softdrop.cc -/// -/// This example program is meant to illustrate how the -/// fastjet::contrib::SoftDrop class is used. -/// -/// Run this example with -/// -/// \verbatim -/// ./example_softdrop < ../data/single-event.dat -/// \endverbatim -//---------------------------------------------------------------------- - -// $Id: example_softdrop.cc 705 2014-07-07 14:37:03Z gsoyez $ -// -// Copyright (c) 2014, Gavin P. Salam -// -//---------------------------------------------------------------------- -// This file is part of FastJet contrib. -// -// It is free software; you can redistribute it and/or modify it under -// the terms of the GNU General Public License as published by the -// Free Software Foundation; either version 2 of the License, or (at -// your option) any later version. -// -// It is distributed in the hope that it will be useful, but WITHOUT -// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY -// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public -// License for more details. -// -// You should have received a copy of the GNU General Public License -// along with this code. If not, see . 
-//---------------------------------------------------------------------- - -#include <iostream> -#include <sstream> - -#include <string> -#include <vector> -#include <cassert> -#include "fastjet/ClusterSequence.hh" -#include "SoftDrop.hh" // In external code, this should be fastjet/contrib/SoftDrop.hh - -using namespace std; -using namespace fastjet; - -// forward declaration to make things clearer -void read_event(vector<PseudoJet> &event); -ostream & operator<<(ostream &, const PseudoJet &); - -//---------------------------------------------------------------------- -int main(){ - - //---------------------------------------------------------- - // read in input particles - vector<PseudoJet> event; - read_event(event); - cout << "# read an event with " << event.size() << " particles" << endl; - - // first get some anti-kt jets - double R = 1.0, ptmin = 20.0; - JetDefinition jet_def(antikt_algorithm, R); - ClusterSequence cs(event, jet_def); - vector<PseudoJet> jets = sorted_by_pt(cs.inclusive_jets(ptmin)); - - // give the soft drop groomer a short name - // Use a symmetry cut z > z_cut R^beta - // By default, there is no mass-drop requirement - double z_cut = 0.10; - double beta = 2.0; - contrib::SoftDrop sd(beta, z_cut); - cout << "SoftDrop groomer is: " << sd.description() << endl; - - for (unsigned ijet = 0; ijet < jets.size(); ijet++) { - // Run SoftDrop and examine the output - PseudoJet sd_jet = sd(jets[ijet]); - cout << endl; - cout << "original jet: " << jets[ijet] << endl; - cout << "SoftDropped jet: " << sd_jet << endl; - - assert(sd_jet != 0); //because soft drop is a groomer (not a tagger), it should always return a soft-dropped jet - - cout << " delta_R between subjets: " << sd_jet.structure_of<contrib::SoftDrop>().delta_R() << endl; - cout << " symmetry measure(z): " << sd_jet.structure_of<contrib::SoftDrop>().symmetry() << endl; - cout << " mass drop(mu): " << sd_jet.structure_of<contrib::SoftDrop>().mu() << endl; - } - - return 0; -} - -//---------------------------------------------------------------------- -/// read in input particles -void read_event(vector<PseudoJet> &event){ - string line; 
- while (getline(cin, line)) { - istringstream linestream(line); - // take substrings to avoid problems when there are extra "pollution" - // characters (e.g. line-feed). - if (line.substr(0,4) == "#END") {return;} - if (line.substr(0,1) == "#") {continue;} - double px,py,pz,E; - linestream >> px >> py >> pz >> E; - PseudoJet particle(px,py,pz,E); - - // push event onto back of full_event vector - event.push_back(particle); - } -} - -//---------------------------------------------------------------------- -/// overloaded jet info output -ostream & operator<<(ostream & ostr, const PseudoJet & jet) { - if (jet == 0) { - ostr << " 0 "; - } else { - ostr << " pt = " << jet.pt() - << " m = " << jet.m() - << " y = " << jet.rap() - << " phi = " << jet.phi(); - } - return ostr; -} diff --git a/src/Tools/fjcontrib/RecursiveTools/example_softdrop.ref b/src/Tools/fjcontrib/RecursiveTools/example_softdrop.ref deleted file mode 100644 --- a/src/Tools/fjcontrib/RecursiveTools/example_softdrop.ref +++ /dev/null @@ -1,33 +0,0 @@ -# read an event with 354 particles -#-------------------------------------------------------------------------- -# FastJet release 3.0.6 -# M. Cacciari, G.P. Salam and G. Soyez -# A software package for jet finding and analysis at colliders -# http://fastjet.fr -# -# Please cite EPJC72(2012)1896 [arXiv:1111.6097] if you use this package -# for scientific work and optionally PLB641(2006)57 [hep-ph/0512210]. -# -# FastJet is provided without warranty under the terms of the GNU GPLv2. -# It uses T. Chan's closest pair algorithm, S. Fortune's Voronoi code -# and 3rd party plugin jet algorithms. See COPYING file for details. 
-#-------------------------------------------------------------------------- -SoftDrop groomer is: Recursive Groomer with a symmetry cut scalar_z > 0.1 (theta/1)^2 [SoftDrop], no mass-drop requirement, recursion into the subjet with larger pt - -original jet: pt = 983.387 m = 39.9912 y = -0.867307 phi = 2.90511 -SoftDropped jet: pt = 980.44 m = 27.1382 y = -0.867485 phi = 2.90532 - delta_R between subjets: 0.209158 - symmetry measure(z): 0.0047049 - mass drop(mu): 0.835262 - -original jet: pt = 908.098 m = 87.7124 y = 0.219482 phi = 6.03487 -SoftDropped jet: pt = 887.935 m = 11.3171 y = 0.223247 phi = 6.03034 - delta_R between subjets: 0.111631 - symmetry measure(z): 0.00440752 - mass drop(mu): 0.791013 - -original jet: pt = 72.9429 m = 23.4022 y = -1.19083 phi = 6.1199 -SoftDropped jet: pt = 71.5847 m = 20.0943 y = -1.1761 phi = 6.12539 - delta_R between subjets: 0.680305 - symmetry measure(z): 0.0730158 - mass drop(mu): 0.56265 diff --git a/test/testCmp.cc b/test/testCmp.cc --- a/test/testCmp.cc +++ b/test/testCmp.cc @@ -1,43 +1,30 @@ #include <iostream> #include <cassert> #include "Rivet/Tools/Cmp.hh" using namespace std; -ostream & operator<<(ostream & os, Rivet::CmpState c) { - string s; - switch (c) { - case Rivet::CmpState::UNDEF : s = "UNDEF"; break; - case Rivet::CmpState::LT : s = "LT"; break; - case Rivet::CmpState::EQ : s = "EQ"; break; - case Rivet::CmpState::GT : s = "GT"; break; - default: s = "OTHER"; break; - } - os << s; - return os; -} - int main() { using namespace Rivet; CmpState cs = CmpState::UNDEF; cs = cmp(0.5, 0.6); cout << "cmp(0.5, 0.6) = " << cs << '\n'; - assert(cs == CmpState::LT); + assert(cs == CmpState::NEQ); cs = cmp(0.5, 0.5); cout << "cmp(0.5, 0.5) = " << cs << '\n'; assert(cs == CmpState::EQ); cs = cmp(0.6, 0.5); cout << "cmp(0.6, 0.5) = " << cs << '\n'; - assert(cs == CmpState::GT); + assert(cs == CmpState::NEQ); cs = cmp(1.,1.) || cmp(0.6, 0.5); cout << "cmp(1.,1.) 
|| cmp(0.6, 0.5) = " << cs << '\n'; - assert(cs == CmpState::GT); + assert(cs == CmpState::NEQ); return EXIT_SUCCESS; }