diff --git a/ChangeLog b/ChangeLog
--- a/ChangeLog
+++ b/ChangeLog
@@ -1,6590 +1,6599 @@
+2019-02-20  Andy Buckley
+
+    * Move UnstableParticles to a consistently-named header, with
+    UnstableFinalState.hh retained for backward compatibility.
+
+    * Improve/fix UnstableParticles projection's Cut constructor
+    argument to apply the cut on a Rivet::Particle rather than a
+    HepMC::FourVector, meaning that PID cuts can now be used.
+
2019-02-17  Andy Buckley

    * Convert ATLAS_2013_I1217863 analysis variants to use the LMODE analysis option (from Jon Butterworth).

2019-02-15  Leif Lönnblad

    * Release 2.7.0

2019-02-12  Christian Bierlich

    * Introduced CentralityProjection, allowing an analysis to cut on percentiles of single-event quantities, preloaded from a user-generated or experiment-supplied histogram. Notably used for the centrality definition in heavy-ion analyses. The user specifies the centrality definition as a special analysis option called "cent", e.g. "MyAnalysis:cent=GEN". Example usage: calibration analysis MC_Cent_pPb_Calib; analysis using that calibration: MC_Cent_pPb_Eta.

    * Introduced EventMixingFinalState to provide simple event-mixing functionality. Currently only works with unit event weights. Example usage: ALICE_2016_I1507157.

    * Introduced Correlators, a framework for calculating single-event correlators based on the generic framework (arXiv:1010.0233 and arXiv:1312.3572), and performing all event averages giving flow coefficients. Implemented as a new analysis base class. Example usage: ALICE_2016_I1419244.

    * Introduced a PrimaryParticle projection, replicating experimental definitions of stable particles through decay chains. Recommended for analyses which would otherwise have to require stable particles at generator level.

    * Introduced AliceCommon and AtlasCommon convenience tools, defining several triggers, primary-particle definitions and acceptances.
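The generic-framework machinery behind the Correlators entry above can be illustrated with a self-contained toy sketch. This is plain C++, not the Rivet Correlators API; the function names here are hypothetical, and only the standard Q-vector formulae from the cited generic-framework papers are assumed:

```cpp
#include <cmath>
#include <complex>
#include <vector>

// Toy illustration (not the Rivet Correlators API): the generic framework
// builds flow observables from per-event Q-vectors, Q_n = sum_j exp(i*n*phi_j).
// The single-event 2-particle correlator is <2>_n = (|Q_n|^2 - M) / (M*(M-1)),
// whose all-event average gives the flow coefficient v_n{2} = sqrt(<<2>>_n).
std::complex<double> qvector(const std::vector<double>& phis, int n) {
  std::complex<double> q(0.0, 0.0);
  for (double phi : phis) q += std::exp(std::complex<double>(0.0, n * phi));
  return q;
}

double twoParticleCorrelator(const std::vector<double>& phis, int n) {
  const double m = phis.size();
  const double q2 = std::norm(qvector(phis, n));  // |Q_n|^2
  return (q2 - m) / (m * (m - 1.0));              // removes self-correlations
}
```

Perfectly aligned azimuthal angles give a correlator of exactly 1, while an isotropic event tends towards zero (negative for small multiplicities, reflecting the -M self-correlation subtraction).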
    * Contributed, validated analyses using above features:
      ALICE_2010_I880049: Multiplicity at mid-rapidity, PbPb @ 2.76 TeV/nn.
      ALICE_2012_I1127497: Nuclear modification factor, PbPb @ 2.76 TeV/nn.
      ALICE_2012_I930312: Di-hadron correlations, PbPb @ 2.76 TeV/nn.

    * Contributed, unvalidated analyses using above features:
      BRAHMS_2004_I647076: pi, K, p spectra as function of rapidity, AuAu @ 200 GeV/nn.
      ALICE_2012_I1126966: pi, K, p spectra, PbPb @ 2.76 TeV/nn.
      ALICE_2013_I1225979: Charged multiplicity, PbPb @ 2.76 TeV/nn.
      ALICE_2014_I1243865: Multi-strange baryons, PbPb @ 2.76 TeV/nn.
      ALICE_2014_I1244523: Multi-strange baryons, pPb @ 5.02 TeV/nn.
      ALICE_2015_PBPBCentrality: Centrality calibration for PbPb. Note that the included 5.02 TeV/nn data is not well defined at particle level, and cannot be compared to experiment without full detector simulation.
      ALICE_2016_I1394676: Charged multiplicity, PbPb @ 2.76 TeV/nn.
      ALICE_2016_I1419244: Multiparticle correlations (flow) using generic framework, PbPb @ 5.02 TeV/nn.
      ALICE_2016_I1471838: Multi-strange baryons, pp @ 7 TeV.
      ALICE_2016_I1507090: Charged multiplicity, PbPb @ 5.02 TeV/nn.
      ALICE_2016_I1507157: Angular correlations, pp @ 7 TeV.
      ATLAS_2015_I1386475: Charged multiplicity, pPb @ 5.02 TeV/nn.
      ATLAS_PBPB_CENTRALITY: Forward energy flow + centrality calibration; data not unfolded, but well defined at particle level, PbPb @ 2.76 TeV/nn.
      ATLAS_2015_I1360290: Charged multiplicity + spectra, PbPb @ 2.76 TeV/nn.
      ATLAS_pPb_Calib: Forward energy flow + centrality calibration; data not unfolded, but well defined at particle level, pPb @ 5.02 TeV/nn.
      STAR_2016_I1414638: Di-hadron correlations, AuAu @ 200 GeV/nn.
      CMS_2017_I1471287: Multiparticle correlations (flow) using generic framework, pp @ 7 TeV.

    * Contributed analyses without data:
      ALICE_2015_PPCentrality: ALICE pp centrality (multiplicity classes) calibration.
      BRAHMS_2004_CENTRALITY: BRAHMS centrality calibration.
      STAR_BES_CALIB: STAR centrality calibration.
      MC_Cent_pPb_Calib: Example analysis, centrality calibration.
      MC_Cent_pPb_Eta: Example analysis, centrality usage.

2019-01-29  Andy Buckley

    * Add real CMS Run 1 and Run 2 MET resolution smearing functions, based on the 8 TeV paper and 13 TeV PAS.

2019-01-07  Leif Lönnblad

    * Reintroduced the PXCONE option in FastJets, using a local version of the Fortran-based pxcone algorithm converted to C++ with f2c and slightly hacked to avoid a dependency on Fortran runtime libraries.

    * Introduced rivet-merge for statistically correct merging of YODA files produced by Rivet. Only works on analyses with reentrant finalize.

    * Introduced a --dump flag to the rivet script to periodically run finalize and write out the YODA file, for analyses with reentrant finalize.

    * Introduced reentrant finalize. Rivet now produces YODA files where all analysis objects are stored in two versions: one is prefixed by "/RAW" and gives the state of the object before finalize was run, and the other is the properly finalized object. Analyses must be flagged "Reentrant: True" in the .info file to properly use this feature.

    * Added an option system. Analyses can now be added to Rivet with options: adding e.g. "MyAnalysis:Opt1=val1:Opt2=val2" will create and add a MyAnalysis object, making the options available through the Analysis::getOption() function. Several MyAnalysis objects with different options can be added in the same run. Allowed options must be specified in the MyAnalysis.info file.

    * Added several utilities for heavy ions.

2019-01-03  Andy Buckley

    * Add setting of cross-section error in AnalysisHandler and Run.

2018-12-21  Andy Buckley

    * Add hasNoTag jet-classification functor, to complement hasBTag and hasCTag.

2018-12-20  Andy Buckley

    * Rework VetoedFinalState to be based on Cuts, and to be constructible from Cut arguments.

    * Pre-emptively exclude all hadrons and partons from returning true in isDirect computations.

    * Cache the results of isDirect calculations on Particle (a bit awkwardly... roll on C++17).
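The "MyAnalysis:Opt1=val1:Opt2=val2" option syntax described in the 2019-01-07 entry can be sketched with a minimal parser. This is an illustration only, not Rivet's actual implementation; the struct and function names here are hypothetical:

```cpp
#include <map>
#include <sstream>
#include <string>

// Minimal sketch of the "MyAnalysis:Opt1=val1:Opt2=val2" option syntax
// (illustration only -- not Rivet's parser). The first ':'-separated token
// is the analysis name; the remaining tokens are key=value options, as
// would later be retrieved via Analysis::getOption().
struct AnaSpec {
  std::string name;
  std::map<std::string, std::string> options;
};

AnaSpec parseAnaSpec(const std::string& spec) {
  AnaSpec out;
  std::istringstream ss(spec);
  std::string tok;
  std::getline(ss, out.name, ':');        // analysis name before first ':'
  while (std::getline(ss, tok, ':')) {    // each remaining token is key=value
    const size_t eq = tok.find('=');
    if (eq != std::string::npos)
      out.options[tok.substr(0, eq)] = tok.substr(eq + 1);
  }
  return out;
}
```

Parsing "MC_Cent_pPb_Eta:cent=GEN" under this sketch yields the analysis name plus a single option cent=GEN, matching the centrality-option example in the 2019-02-12 entry.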
    * Add a default-FinalState version of the DressedLeptons constructor.

2018-12-19  Andy Buckley

    * Add a FIRST/LAST enum switch for PartonicTops, to choose which top-quark clone to use.

2018-12-14  Andy Buckley

    * Add a FastJet clustering mode for DressedLeptons.

2018-12-10  Andy Buckley

    * Release 2.6.2

    * Info file bugfixes for LHCF_2016_I1385877, from Eugenio Berti.

    * Update references in three CMS analysis .info files.

2018-12-05  Andy Buckley

    * Rework the default no-build of the doc directory to be compatible with 'make dist' packaging.

    * Add fjcontrib RecursiveTools to the Rivet/Tools/fjcontrib set.

2018-11-21  Andy Buckley

    * Add CMS_2018_I1653948, CMS_2018_I1682495, and CMS_2018_I1690148 analyses.

    * Add FastJet EnergyCorrelator and rejig the internal fjcontrib bundle a little.

2018-11-15  Andy Buckley

    * Merge ATLAS_2017_I1517194_MU and ATLAS_2018_I1656578.

    * Add an optional signed-calculation bool argument on all deltaPhi functions.

2018-11-12  Andy Buckley

    * Fix CMS_2012_I1102908 efficiency calculation. Thanks to Anton Karneyeu!

2018-11-09  Andy Buckley

    * Remove the doc dir from the default top-level make.

2018-09-20  Andy Buckley

    * Use updated ATLAS Run 2 muon efficiencies.

    * Use proper ATLAS photon efficiency functions for Runs 1 and 2, from arXiv:1606.01813 and ATL-PHYS-PUB-2016-014.

2018-08-31  Andy Buckley

    * Update embedded yaml-cpp to v0.6.0.

2018-08-29  Andy Buckley

    * Add RIVET_WEIGHT_INDEX=-1 -> ignore-event-weights behaviour. Slow, but sometimes useful for debugging.

2018-08-29  Christian Gutschow

    * Allow the reference data file name to differ from the plugin name via a setRefDataName(fname) method, aiming to unify HepData records.

2018-08-14  Andy Buckley

    * Version 2.6.1 release.

2018-08-08  Andy Buckley

    * Add a RIVET_RANDOM_SEED variable to fix the smearing random-seed engine for validation comparisons.
2018-07-19  Andy Buckley

    * Merge in ATLAS_2017_I1604029 (ttbar+gamma), ATLAS_2017_I1626105 (dileptonic ttbar), ATLAS_2017_I1644367 (triphotons), and ATLAS_2017_I1645627 (photon + jets).

    * Postpone the Particles enhancement for now, since the required C++11 isn't supported on lxplus7 = CentOS7.

    * Add MC_DILEPTON analysis.

2018-07-10  Andy Buckley

    * Fix HepData tarball download handling: StringIO is *not* safe anymore.

2018-07-08  Andy Buckley

    * Add LorentzTransform factory functions direct from FourMomentum, and operator()s.

2018-06-20  Andy Buckley

    * Add a FinalState(fs, cut) augmenting constructor, and PrevFS projection machinery. Validated for an abscharge > 0 cut.

    * Add hasProjection() methods to ProjectionHandler and ProjectionApplier.

    * Clone MC_GENERIC as MC_FSPARTICLES and deprecate the badly-named original.

    * Fix Spires -> Inspire ID for CMS_2017_I1518399.

2018-06-04  Andy Buckley

    * Fix installation of (In)DirectFinalState.hh

2018-05-31  Andy Buckley

    * Add init-time setting of a single weight-vector index from the RIVET_WEIGHT_INDEX environment variable. To be removed in v3, but really we should have done this years ago... and we don't know how long the handover will be.

2018-05-22  Neil Warrack

    * Include 'unphysical' photon parents in PartonicTops' veto of prompt leptons from photon conversions.

2018-05-20  Andy Buckley

    * Make Particles and Jets into actual specialisations of std::vector rather than typedefs, and update surrounding classes to use them. The specialisations can implicitly cast to vectors of FourMomentum (and maybe Pseudojet).

2018-05-18  Andy Buckley

    * Make CmpAnaHandle::operator() const, for GCC 8 (thanks to CMS).

2018-05-07  Andy Buckley

    * CMS_2016_I1421646.cc: Add patch from CMS to veto the event if the leading jets are outside |y| < 2.5, rather than only considering jets in that acceptance. Thanks to CMS and Markus Seidel.

2018-04-27  Andy Buckley

    * Tidy keywords and luminosity entries, and add both to BSM search .info files.
    * Add Luminosity_fb and Keywords placeholders in mkanalysis output.

2018-04-26  Andy Buckley

    * Add pairMass and pairPt functions.

    * Add (i)discardIfAnyDeltaRLess and (i)discardIfAnyDeltaPhiLess functions.

    * Add normalize() methods to Cutflow and Cutflows.

    * Add DirectFinalState and IndirectFinalState alias headers, for forward compatibility: 'Prompt' is confusing.

2018-04-24  Andy Buckley

    * Add an initializer_list overload for binIndex. Needed for other util functions operating on vectors.

    * Fix a function-signature bug in the isMT2 overload.

    * Add isSameSign, isOppSign, isSameFlav, isOppFlav, isOSSF, etc. functions on PIDs and Particles.

2018-03-27  Andy Buckley

    * Add a RatioPlotLogY key to make-plots. Thanks to Antonin Maire.

2018-02-22  Andy Buckley

    * Adding boolean-operator syntactic sugar for composition of bool functors.

    * Copy & paste error fixes in the implementation of BoolJetAND, OR, NOT.

2018-02-01  Andy Buckley

    * Make the project() and compare() methods of projections public.

    * Fix a serious bug in the SmearedParticles and SmearedJets compare methods.

    * Add string representations and streamability to the Cut objects, for debugging.

2018-01-08  Andy Buckley

    * Add highlighted source to HTML analysis metadata listings.

2017-12-21  Andy Buckley

    * Version 2.6.0 release.

2017-12-20  Andy Buckley

    * Typo fix in TOTEM_2012_I1220862 data -- thanks to Anton Karneyeu.

2017-12-19  Andy Buckley

    * Adding contributed analyses: 1 ALICE, 6 ATLAS, 1 CMS.

    * Fix bugged PID codes in MC_PRINTEVENT.

2017-12-13  Andy Buckley

    * Protect Run methods and the rivet script against being told to run from a missing or unreadable file.

2017-12-11  Andy Buckley

    * Replace manual event count & weight handling with a YODA Counter object.

2017-11-28  Andy Buckley

    * Providing neater & more YODA-consistent sumW and sumW2 methods on AnalysisHandler and Analysis.
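What a pairMass helper (2018-04-26 entry above) computes can be shown with a toy four-momentum type. This is an illustration only; Rivet's own FourMomentum class and pairMass signature may differ, and the struct here is hypothetical:

```cpp
#include <cmath>

// Toy four-momentum (E, px, py, pz), illustration only -- not Rivet's
// FourMomentum. A pairMass(a, b) helper returns the invariant mass of the
// summed four-momentum: m = sqrt(E^2 - px^2 - py^2 - pz^2).
struct P4 { double E, px, py, pz; };

double pairMass(const P4& a, const P4& b) {
  const double E  = a.E  + b.E;
  const double px = a.px + b.px;
  const double py = a.py + b.py;
  const double pz = a.pz + b.pz;
  return std::sqrt(E * E - px * px - py * py - pz * pz);
}
```

For two back-to-back massless particles of energy E each, the pair mass is 2E, since the spatial momenta cancel in the sum.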
    * Fix to the Python version check for >= 2.7.10 (patch submitted to GNU).

2017-11-17  Andy Buckley

    * Various improvements to DISKinematics, DISLepton, and the ZEUS 2001 analysis.

2017-11-06  Andy Buckley

    * Extend the AOPath regex to allow dots and underscores in weight names.

2017-10-27  Andy Buckley

    * Add energy to the list of cuts (both as Cuts::E and Cuts::energy).

    * Add missing pT (rather than Et) functions to SmearedMET, although they are just copies of the MET functions for now.

2017-10-09  Andy Buckley

    * Embed zstr and enable transparent reading of gzipped HepMC streams.

2017-10-03  Andy Buckley

    * Use the Lester MT2 bisection header, and expose a few more mT2 function signatures.

2017-09-26  Andy Buckley

    * Use generic YODA read and write functions -- enables zipped yoda.gz output.

    * Add a ChargedLeptons enum and mode argument to the ZFinder and WFinder constructors, to allow control over whether the selected charged leptons are prompt. This is mostly cosmetic/for symmetry in the case of ZFinder, since the same can be achieved by passing a PromptFinalState as the fs argument, but for WFinder it's essential, since passing a prompt final state screws up the MET calculation. Both are slightly different in the treatment of the lepton dressing, although conventionally this is an area where only prompt photons are used.

2017-09-25  Andy Buckley

    * Add deltaR2 functions for squared distances.

2017-09-10  Andy Buckley

    * Add white backgrounds to the make-plots main and ratio plot frames.

2017-09-05  Andy Buckley

    * Add CMS_2016_PAS_TOP_15_006, jet multiplicity in lepton+jets ttbar at 8 TeV.

    * Add CMS_2017_I1467451, Higgs -> WW -> emu + MET in 8 TeV pp.

    * Add ATLAS_2017_I1609448, Z->ll + pTmiss analysis.

    * Add vectorMissingEt/Pt and vectorMET/MPT convenience methods to MissingMomentum.

    * Add ATLAS_2017_I1598613, J/psi + mu analysis.

    * Add CMS SUSY 0-lepton search CMS_2017_I1594909 (unofficial implementation, validated vs.
      published cutflows).

2017-09-04  Andy Buckley

    * Change the license explicitly to GPLv3, cf. the MCnet3 agreement.

    * Add a better jet smearing resolution parametrisation, based on GAMBIT code from Matthias Danninger.

2017-08-16  Andy Buckley

    * Protect make-plots against NaNs in error-band values (patch from Dmitry Kalinkin).

2017-07-20  Andy Buckley

    * Add sumPt, sumP4, sumP3 utility functions.

    * Record truth particles as constituents of SmearedParticles output.

    * Rename UnstableFinalState -> UnstableParticles, and convert ZFinder to be a general ParticleFinder rather than a FinalState.

2017-07-19  Andy Buckley

    * Add implicit casts from FourVector & FourMomentum to Vector3, and tweak the mT implementation.

    * Add rawParticles() to ParticleFinder, and update DressedLeptons, WFinder, ZFinder and VetoedFinalState to cope.

    * Add isCharged() and isChargedLepton() to Particle.

    * Add constituents() and rawConstituents() to Particle.

    * Add support for specifying bin edges as braced initializer lists rather than an explicit vector.

2017-07-18  Andy Buckley

    * Enable methods for booking of Histo2D and Profile2D from Scatter3D reference data.

    * Remove the IsRef annotation from autobooked histogram objects.

2017-07-17  Andy Buckley

    * Add pair-smearing to SmearedJets.

2017-07-08  Andy Buckley

    * Add Event::centrality(), for non-HepMC access to the generator value if one has been recorded -- otherwise -1.

2017-06-28  Andy Buckley

    * Split the smearing functions into separate header files for generic/momentum, Particle, Jet, and experiment-specific smearings & efficiencies.

2017-06-27  Andy Buckley

    * Add a 'JetFinder' alias for JetAlg, by analogy with ParticleFinder.

2017-06-26  Andy Buckley

    * Convert SmearedParticles to a more general list of combined efficiency+smearing functions, with extra constructors and some variadic template cleverness to allow implicit conversions from single-operation eff and smearing functions.
      Yay for C++11 ;-) This work is based on a macro-based version of combined eff/smear functions by Karl Nordstrom -- thanks!

    * Add *EffFn, *SmearFn, and *EffSmearFn types to SmearingFunctions.hh.

2017-06-23  Andy Buckley

    * Add portable OpenMP-enabling flags to AM_CXXFLAGS.

2017-06-22  Andy Buckley

    * Fix the smearing random-number seed and make it thread-specific if OpenMP is available (not yet in the build system).

    * Remove the UNUSED macro and find an alternative solution for the cases where it was used, since there was a risk of macro clashes with embedding codes.

    * Add a -o output-directory option to make-plots.

    * Vector4.hh: Add mT2(vec,vec) functions.

2017-06-21  Andy Buckley

    * Add a full set of in-range kinematics functors: ptInRange, (abs)etaInRange, (abs)phiInRange, deltaRInRange, deltaPhiInRange, deltaEtaInRange, deltaRapInRange.

    * Add a convenience JET_BTAG_EFFS functor with several constructors to handle mistag rates.

    * Add const efficiency functors operating on Particle, Jet, and FourMomentum.

    * Add const-efficiency constructor variants for SmearedParticles.

2017-06-21  Jon Butterworth

    * Fix normalisations in CMS_2016_I1454211.

    * Fix the analysis name in ref histo paths for ATLAS_2017_I1591327.

2017-06-18  Andy Buckley

    * Move all standard plugin files into subdirs of src/Analyses, with some custom make rules driving rivet-buildplugin.

2017-06-18  David Grellscheid

    * Parallelise rivet-buildplugin, with source-file cat'ing and use of a temporary Makefile.

2017-06-18  Holger Schulz

    * Version 2.5.4 release!

2017-06-17  Holger Schulz

    * Fix 8 TeV DY (ATLAS_2016_I1467454): the EL/MU bits were missing.

    * Add 13 TeV DY (ATLAS_2017_I1514251) and mark ATLAS_2015_CONF_2015_041 obsolete.

    * Add missing install statement for ATLAS_2016_I1448301.yoda/plot/info, which led to a segfault.

2017-06-09  Andy Buckley

    * Slight improvements to Particle constructors.

    * Improvement to the Beam projection: before falling back to barcodes 1 & 2, try a manual search for status=4 particles.
      Based on a patch from Andrii Verbytskyi.

2017-06-05  Andy Buckley

    * Add CMS_2016_I1430892: dilepton-channel ttbar charge asymmetry analysis.

    * Add CMS_2016_I1413748: dilepton-channel ttbar spin correlations and polarisation analysis.

    * Add CMS_2017_I1518399: leading jet mass for boosted top quarks at 8 TeV.

    * Add convenience constructors for the ChargedLeptons projection.

2017-06-03  Andy Buckley

    * Add FinalState and (optional) Cut constructor arguments and usage to DISFinalState. Thanks to Andrii Verbytskyi for the idea and initial patch.

2017-05-23  Andy Buckley

    * Add ATLAS_2016_I1448301, Z/gamma cross-section measurement at 8 TeV.

    * Add ATLAS_2016_I1426515, WW production at 8 TeV.

2017-05-19  Holger Schulz

    * Add BELLE measurement of semileptonic B0bar -> D*+ ell nu decays, BELLE_2017_I1512299. I took the liberty of correcting the data in the sense that I take the bin widths into account in the normalisation. This is a nice analysis, as it probes the hadronic and the leptonic side of the decay, so it is very valuable for model building, and of course it is rare as it is an unfolded B measurement.

2017-05-17  Holger Schulz

    * Add ALEPH measurement of hadronic tau decays, ALEPH_2014_I1267648.

    * Add ALEPH dimuon invariant mass (OS and SS) analysis, ALEPH_2016_I1492968. The latter needed the GENKTEE FastJet algorithm, so I added that to FastJets.

    * Protection against a logspace exception in the histo booking of MC_JetAnalysis.

    * Fix compiler complaints about an uninitialised variable in OPAL_2004.

2017-05-16  Holger Schulz

    * Tidy the ALEPH_1999 charm fragmentation analysis and normalise to the data integral. Added DSTARPLUS and DSTARMINUS to PID.

2017-05-16  Andy Buckley

    * Add ATLAS_2016_CONF_2016_092, inclusive jet cross-sections using early 13 TeV data.

    * Add ATLAS_2017_I1591327, isolated diphoton + X differential cross-sections.

    * Add ATLAS_2017_I1589844, ATLAS_2017_I1589844_EL, ATLAS_2017_I1589844_MU: kT splittings in Z events at 8 TeV.

    * Add ATLAS_2017_I1509919, track-based underlying event at 13 TeV in ATLAS.
    * Add ATLAS_2016_I1492320_2l2j and ATLAS_2016_I1492320_3l, the WWW cross-section at 8 TeV.

2017-05-12  Andy Buckley

    * Add ATLAS_2016_I1449082, charge asymmetry in top-quark pair production in the dilepton channel.

    * Add ATLAS_2015_I1394865, inclusive 4-lepton/ZZ lineshape.

2017-05-11  Andy Buckley

    * Add ATLAS_2013_I1234228, high-mass Drell-Yan at 7 TeV.

2017-05-10  Andy Buckley

    * Add CMS_2017_I1519995, search for new physics with dijet angular distributions in proton-proton collisions at sqrt(s) = 13 TeV.

    * Add CMS_2017_I1511284, inclusive energy spectrum in the very forward direction in proton-proton collisions at 13 TeV.

    * Add CMS_2016_I1486238, studies of 2 b-jet + 2 jet production in proton-proton collisions at 7 TeV.

    * Add CMS_2016_I1454211, boosted ttbar in pp collisions at sqrt(s) = 8 TeV.

    * Add CMS_2016_I1421646, CMS azimuthal decorrelations at 8 TeV.

2017-05-09  Andy Buckley

    * Add CMS_2015_I1380605, per-event yield of the highest-transverse-momentum charged particle and charged-particle jet.

    * Add CMS_2015_I1370682_PARTON, a partonic-top version of the CMS 7 TeV pseudotop ttbar differential cross-section analysis.

    * Adding EHS_1988_I265504 from Felix Riehn: charged-particle production in K+ p, pi+ p and pp interactions at 250 GeV/c.

    * Fix ALICE_2012_I1116147 for pi0 and Lambda feed-down.

2017-05-08  Andy Buckley

    * Add protection against leptons from QED FSR photon conversions in assigning PartonicTop decay modes. Thanks to Markus Seidel for the report and suggested fix.

    * Reimplement FastJets methods in terms of new static helper functions.

    * Add new mkClusterInputs, mkJet and mkJets static methods to FastJets, to help with direct calls to FastJet where particle lookup for constituents and ghost tags are required.

    * Fix the Doxygen config and Makefile target to allow working with out-of-source builds. Thanks to Christian Holm Christensen.

    * Improve DISLepton for HERA analyses: thanks to Andrii Verbytskyi for the patch!
2017-03-30  Andy Buckley

    * Replace non-template Analysis::refData functions with C++11 default T=Scatter2D.

2017-03-29  Andy Buckley

    * Allow yes/no and true/false values for LogX, etc. plot options.

    * Add --errs as an alias for --mc-errs to rivet-mkhtml and rivet-cmphistos.

2017-03-08  Peter Richardson

    * Added 6 analyses, AMY_1990_I295160, HRS_1986_I18502, JADE_1983_I190818, PLUTO_1980_I154270, TASSO_1989_I277658, TPC_1987_I235694, for charged multiplicity in e+e- at CMS energies below the Z pole.

    * Added 2 analyses for charged multiplicity at the Z pole: DELPHI_1991_I301657, OPAL_1992_I321190.

    * Updated ALEPH_1991_S2435284 to plot the average charged multiplicity.

    * Added analyses OPAL_2004_I631361, OPAL_2004_I631361_qq, OPAL_2004_I648738 for gluon jets in e+e-; most need a fictitious e+e- -> g g process.

2017-03-29  Andy Buckley

    * Add Cut and functor selection args to HeavyHadrons accessor methods.

2017-03-03  Andy Buckley

    * bin/rivet-mkanalysis: Add the FastJets.hh include by default -- it's almost always used.

2017-03-02  Andy Buckley

    * src/Analyses/CMS_2016_I1473674.cc: Patch from CMS to use partonic tops.

    * src/Analyses/CMS_2015_I1370682.cc: Patch to inline jet finding from CMS.

2017-03-01  Andy Buckley

    * Convert DressedLeptons' use of fromDecay to instead veto photons that match fromHadron() || fromHadronicTau() -- meaning that electrons and muons from leptonic taus will now be dressed.

    * Move Particle and Jet std::function aliases to .fhh files, and replace many uses of templates for functor arguments with ParticleSelector meta-types instead.

    * Move the canonical implementations of hasAncestorWith, etc. and isLastWith, etc. from ParticleUtils.hh into Particle.

    * Disable the event-to-event beam consistency check if the ignore-beams mode is active.

2017-02-27  Andy Buckley

    * Add BoolParticleAND, BoolJetOR, etc. functor combiners to Tools/ParticleUtils.hh and Tools/JetUtils.hh.
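The boolean-functor combiners mentioned in the 2017-02-27 entry above can be sketched generically. This is an illustration in the spirit of BoolParticleAND / BoolJetOR, not the Rivet implementation; the names and signatures here are simplified and hypothetical:

```cpp
#include <functional>

// Sketch of boolean-functor composition (illustration only, not Rivet's
// BoolParticleAND/BoolJetOR implementation). Combining two predicates yields
// a new predicate, usable wherever a single selection function is expected.
template <typename T>
using BoolFn = std::function<bool(const T&)>;

template <typename T>
BoolFn<T> boolAND(BoolFn<T> f, BoolFn<T> g) {
  return [f, g](const T& x) { return f(x) && g(x); };
}

template <typename T>
BoolFn<T> boolOR(BoolFn<T> f, BoolFn<T> g) {
  return [f, g](const T& x) { return f(x) || g(x); };
}
```

The 2018-02-22 entry's "boolean operator syntactic sugar" goes one step further by overloading && and || directly on the functor types, so compositions read like ordinary boolean expressions.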
2017-02-24  Andy Buckley

    * Mark ATLAS_2016_CONF_2016_078 and CMS_2016_PAS_SUS_16_14 analyses as validated, since their cutflows match the documentation.

2017-02-22  Andy Buckley

    * Add aggregate signal regions to CMS_2016_PAS_SUS_16_14.

2017-02-18  Andy Buckley

    * Add a getEnvParam function, for neater use of environment-variable parameters with a required default.

2017-02-05  Andy Buckley

    * Add HasBTag and HasCTag jet functors, with lower-case aliases.

2017-01-18  Andy Buckley

    * Use std::function in functor-expanded method signatures on JetAlg.

2017-01-16  Andy Buckley

    * Convert FinalState particles() accessors to use std::function rather than a template arg for sorting, and add filtering functor support -- including a mix of filtering and sorting functors. Yay for C++11!

    * Add ParticleEffFilter and JetEffFilter constructors from a double (encoding constant efficiency).

    * Add Vector3::abseta().

2016-12-13  Andy Buckley

    * Version 2.5.3 release.

2016-12-12  Holger Schulz

    * Add a cut in the BZ calculation in the OPAL 4-jet analysis (OPAL_2001_S4553896). The paper is not clear about the treatment of parallel vectors, which leads to division by zero, nan-fill, and a subsequent YODA RangeError.

2016-12-12  Andy Buckley

    * Fix bugs in the SmearedJets treatment of b- and c-tagging rates.

    * Adding ATLAS_2016_I1467454 analysis (high-mass Drell-Yan at 8 TeV).

    * Tweak the 'convert' call to improve the thumbnail quality from rivet-mkhtml/make-plots.

2016-12-07  Andy Buckley

    * Require Cython 0.24 or later.

2016-12-02  Andy Buckley

    * Adding L3_2004_I652683 (LEP 1 & 2 event shapes) and LHCB_2014_I1262703 (Z+jet at 7 TeV).

    * Adding leading dijet-mass plots to MC_JetAnalysis (and all derived classes). Thanks to Chris Gutschow!

    * Adding CMS_2012_I1298807 (ZZ cross-section at 8 TeV), CMS_2016_I1459051 (inclusive jet cross-sections at 13 TeV) and CMS_PAS_FSQ_12_020 (preliminary 7 TeV leading-track underlying event).
    * Adding CDF_2015_1388868 (ppbar underlying event at 300, 900, and 1960 GeV).

    * Adding ATLAS_2016_I1467230 (13 TeV min bias), ATLAS_2016_I1468167 (13 TeV inelastic pp cross-section), and ATLAS_2016_I1479760 (7 TeV pp double-parton scattering with 4 jets).

2016-12-01  Andy Buckley

    * Adding ALICE_2012_I1116147 (eta and pi0 pTs and ratio) and ATLAS_2011_I929691 (7 TeV jet frag).

2016-11-30  Andy Buckley

    * Fix bash bugs in rivet-buildplugin, including fixing the --cmd mode.

2016-11-28  Andy Buckley

    * Add LHC Run 2 BSM analyses ATLAS_2016_CONF_2016_037 (3-lepton and same-sign 2-lepton), ATLAS_2016_CONF_2016_054 (1-lepton + jets), ATLAS_2016_CONF_2016_078 (ICHEP jets + MET), ATLAS_2016_CONF_2016_094 (1-lepton + many jets), CMS_2013_I1223519 (alphaT + b-jets), and CMS_2016_PAS_SUS_16_14 (jets + MET).

    * Provide convenience reversed-argument versions of apply and declare methods, to allow presentational choice of declare syntax in situations where the projection argument is very long, and to reduce requirements on the user's memory, since this is one situation in Rivet where there is no 'most natural' ordering choice.

2016-11-24  Andy Buckley

    * Adding a pTvec() function to 4-vectors and ParticleBase.

    * Fix the --pwd option of the rivet script.

2016-11-21  Andy Buckley

    * Add weights and scaling to Cutflow/s.

2016-11-19  Andy Buckley

    * Add an Et(const ParticleBase&) unbound function.

2016-11-18  Andy Buckley

    * Fix a missing YAML quote mark in rivet-mkanalysis.

2016-11-15  Andy Buckley

    * Fix constness requirements on ifilter_select() and Particle/JetEffFilter::operator().

    * src/Analyses/ATLAS_2016_I1458270.cc: Fix inverted particle efficiency filtering.

2016-10-24  Andy Buckley

    * Add rough ATLAS and CMS photon reco efficiency functions from Delphes (the ATLAS and CMS versions are identical, hmmm).

2016-10-12  Andy Buckley

    * Tidying/fixing the make-plots custom z-ticks code. Thanks to Dmitry Kalinkin.
2016-10-03  Holger Schulz

    * Fix SpiresID -> InspireID in some analyses (show-analysis pointed to a non-existing web page).

2016-09-29  Holger Schulz

    * Add Luminosity_fb to AnalysisInfo.

    * Added some keywords and Lumi to ATLAS_2016_I1458270.

2016-09-28  Andy Buckley

    * Merge the ATLAS and CMS from-Delphes electron and muon tracking efficiency functions into generic trkeff functions -- this is how it should be.

    * Fix a return-type typo in the Jet::bTagged(FN) templated method.

    * Add eta and pT cuts to the ATLAS truth b-jet definition.

    * Use rounding rather than truncation in Cutflow percentage-efficiency printing.

2016-09-28  Frank Siegert

    * make-plots bugfix in y-axis labels for RatioPlotMode=deviation.

2016-09-27  Andy Buckley

    * Add vector and scalar pT (rather than Et) to MissingMomentum.

2016-09-27  Holger Schulz

    * Analysis keyword machinery:
      rivet -a @semileptonic
      rivet -a @semileptonic@^bdecays -a @semileptonic@^ddecays

2016-09-22  Holger Schulz

    * Release version 2.5.2

2016-09-21  Andy Buckley

    * Add a requirement to DressedLeptons that the FinalState passed as 'bareleptons' will be filtered to only contain charged leptons, if that is not already the case. Thanks to Markus Seidel for the suggestion.

2016-09-21  Holger Schulz

    * Add Simone Amoroso's plugin for hadron spectra (ALEPH_1995_I382179).

    * Add Simone Amoroso's plugin for hadron spectra (OPAL_1993_I342766).

2016-09-20  Holger Schulz

    * Add CMS ttbar analysis from contrib, mark validated (CMS_2016_I1473674).

    * Extend rivet-mkhtml --booklet to also work with pdfmerge.

2016-09-20  Andy Buckley

    * Fix make-plots automatic YMax calculation, which had a typo from code cleaning (mea culpa!).

    * Fix the ChargedLeptons projection, which failed to exclude neutrinos!!! Thanks to Markus Seidel.

    * Add templated FN filtering-arg versions of the Jet::*Tags() and Jet::*Tagged() functions.
2016-09-18  Andy Buckley

    * Add CMS partonic-top analysis (CMS_2015_I1397174).

2016-09-18  Holger Schulz

    * Add L3 xp analysis of eta mesons, thanks Simone (L3_1992_I336180).

    * Add D0 1.8 TeV jet shapes analysis, thanks Simone (D0_1995_I398175).

2016-09-17  Andy Buckley

    * Add has{Ancestor,Parent,Child,Descendant}With functions and HasParticle{Ancestor,Parent,Child,Descendant}With functors.

2016-09-16  Holger Schulz

    * Add ATLAS 8 TeV ttbar analysis from contrib (ATLAS_2015_I1404878).

2016-09-16  Andy Buckley

    * Add particles(GenParticlePtr) to RivetHepMC.hh.

    * Add hasParent, hasParentWith, and hasAncestorWith to Particle.

2016-09-15  Holger Schulz

    * Add ATLAS 8 TeV dijet analysis from contrib (ATLAS_2015_I1393758).

    * Add ATLAS 8 TeV 'number of tracks in jets' analysis from contrib (ATLAS_2016_I1419070).

    * Add ATLAS 8 TeV g->H->WW->enumunu analysis from contrib (ATLAS_2016_I1444991).

2016-09-14  Holger Schulz

    * Explicit std::toupper and std::tolower to make clang happy.

2016-09-14  Andy Buckley

    * Add ATLAS Run 2 0-lepton SUSY and monojet search papers (ATLAS_2016_I1452559, ATLAS_2016_I1458270).

2016-09-13  Andy Buckley

    * Add experimental Cutflow and Cutflows objects for BSM cut tracking.

    * Add 'direct' versions of any, all, none to Utils.hh, with an implicit bool() transforming function.

2016-09-13  Holger Schulz

    * Add and mark validated a B+ to omega analysis (BABAR_2013_I1116411).

    * Add and mark validated a D0 to pi- analysis (BABAR_2015_I1334693).

    * Add a few more particle names, and use PID names in recently added analyses.

    * Add Simone's OPAL b-frag analysis (OPAL_2003_I599181) after some cleanup and heavy usage of new features.

    * Restructured DELPHI_2011_I890503 in the same manner -- picks up a few more B-hadrons now (e.g.
      20523 and such).

    * Clean up and add ATLAS 8 TeV MinBias (from contrib, ATLAS_2016_I1426695).

2016-09-12  Andy Buckley

    * Add a static constexpr DBL_NAN to Utils.hh for convenience, and move some utils stuff out of MathHeader.hh.

2016-09-12  Holger Schulz

    * Add a count function to Tools/Utils.h.

    * Add and mark validated B0bar and Bminus-decay to pi analysis (BELLE_2013_I1238273).

    * Add and mark validated B0-decay analysis (BELLE_2011_I878990).

    * Add and mark validated B to D decay analysis (BELLE_2011_I878990).

2016-09-08  Andy Buckley

    * Add C-array versions of multi-target Analysis::scale() and normalize(), and fix (semantic) constness.

    * Add == and != operators for cuts applied to integers.

    * Add missing delta{Phi,Eta,Rap}{Gtr,Less} functors to ParticleBaseUtils.hh.

2016-09-07  Andy Buckley

    * Add templated functor filtering args to the Particle parent/child/descendent methods.

2016-09-06  Andy Buckley

    * Add ATLAS Run 1 medium and tight electron ID efficiency functions.

    * Update configure scripts to use newer (Py3-safe) Python testing macros.

2016-09-02  Andy Buckley

    * Add isFirstWith(out), isLastWith(out) functions, and functor wrappers, using Cut and templated function/functor args.

    * Add Particle::parent() method.

    * Add using import/typedef of HepMC *Ptr types (a useful step for HepMC 2.07 and 3.00).

    * Various typo fixes (and canonical renaming) in the ParticleBaseUtils functor collection.

    * Add ATLAS MV2c10 and MV2c20 b-tagging effs to the SmearingFunctions.hh collection.

2016-09-01  Andy Buckley

    * Add a PartonicTops projection.

    * Add overloaded versions of the Event::allParticles() method with selection Cut or templated selection function arguments.

2016-08-25  Andy Buckley

    * Add a rapidity-scheme arg to DeltaR functor constructors.

2016-08-23  Andy Buckley

    * Provide an Analysis::bookCounter(d,x,y, title) function, for convenience and making the mkanalysis template valid.
  * Improve container utils functions, and provide combined remove_if+erase filter_* functions for both select- and discard-type selector functions.

2016-08-22  Holger Schulz

  * Bugfix in rivet-mkhtml (NoneType: ana.spiresID() --> spiresid)
  * Add include to Rivet/Tools/Utils.h to make gcc6 happy

2016-08-22  Andy Buckley

  * Add efffilt() functions and Particle/JetEffFilt functors to SmearingFunctions.hh

2016-08-20  Andy Buckley

  * Add filterBy methods for Particle and Jet which accept generic boolean functions as well as the Cut specialisation.

2016-08-18  Andy Buckley

  * Add a Jet::particles(Cut&) method, for inline filtering of jet constituents.
  * Add 'conjugate' behaviours to container head and tail functions via negative length arg values.

2016-08-15  Andy Buckley

  * Add convenience headers for including all final-state and smearing projections, to save user typing.

2016-08-12  Andy Buckley

  * Add standard MET functions for ATLAS R1 (and currently copies for R2 and CMS).
  * Add lots of vector/container helpers for e.g. container slicing, summing, and min/max calculation.
  * Adapt SmearedMET to take *two* arguments, since SET is typically used to calculate MET resolution.
  * Add functors for computing vector & ParticleBase differences w.r.t. another vector.

2016-08-12  Holger Schulz

  * Implement a few more cuts in the prompt photon analysis (CDF_1993_S2742446), but to no avail: the rise of the data towards larger costheta values cannot be reproduced. Maybe this is a candidate for more scrutiny, using the boosting machinery so that the c.m. cuts can be done in a non-approximate way.

2016-08-11  Holger Schulz

  * Rename CDF_2009_S8383952 to CDF_2009_I856131 due to invalid Spires entry.
  * Add InspireID to all analyses known by their Spires key

2016-08-09  Holger Schulz

  * Release 2.5.1

2016-08-08  Andy Buckley

  * Add a simple MC_MET analysis for out-of-the-box MET distribution testing.
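The 2016-08-18 entry gives the container head and tail helpers a 'conjugate' behaviour for negative length arguments: head(c, -n) means "all but the last n" and tail(c, -n) means "all but the first n". A Python sketch of that convention (names mirror the C++ helpers but are illustrative only):

```python
def head(c, n):
    # n >= 0: first n elements; n < 0: all but the last |n| (the 'conjugate')
    return c[:n] if n >= 0 else c[:len(c) + n]

def tail(c, n):
    # n >= 0: last n elements; n < 0: all but the first |n|
    return c[len(c) - n:] if n >= 0 else c[-n:]

v = [1, 2, 3, 4, 5]
print(head(v, 2))   # [1, 2]
print(head(v, -2))  # [1, 2, 3] -- everything except the last two
print(tail(v, 2))   # [4, 5]
print(tail(v, -2))  # [3, 4, 5] -- everything except the first two
```

The negative-length convention saves a len() computation at the call site when you want "all but n" rather than "exactly n".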
2016-08-08  Holger Schulz

  * Add DELPHI_2011_I890503 b-quark fragmentation function measurement, which supersedes DELPHI_2002_069_CONF_603. The latter is marked OBSOLETE.

2016-08-05  Holger Schulz

  * Use Jet mass and energy smearing in CDF_1997_... six-jet analysis; mark validated.
  * Mark CDF_2001_S4563131 validated
  * D0_1996_S3214044 --- cut on jet Et rather than pT, fix filling of costheta and theta plots, mark validated. Concerning the jet algorithm, I tried the fastjet implementation fastjet/D0RunIConePlugin.hh, but that really does not help.
  * D0_1996_S3324664 --- fix normalisations, sort jets properly now, clean up, and mark validated.

2016-08-04  Holger Schulz

  * Use Jet mass and energy smearing in CDF_1996_S310 ... jet properties analysis. Clean up analysis and mark validated. Added some more run info. The same for CDF_1996_S334... (pretty much the same cuts, different observables).
  * Minor fixes in SmearedJets projection

2016-08-03  Andy Buckley

  * Protect SmearedJets against loss of tagging information if a momentum smearing function is used (rather than a dedicated Jet smearing fn) via implicit casts.

2016-08-02  Andy Buckley

  * Add SmearedMET projection, wrapping MissingMomentum.
  * Include base truth-level projections in SmearedParticles/Jets compare() methods.

2016-07-29  Andy Buckley

  * Rename TOTEM_2012_002 to the proper TOTEM_2012_I1220862 name.
  * Remove conditional building of obsolete, preliminary and unvalidated analyses. Now always built, since there are sufficient warnings.

2016-07-28  Holger Schulz

  * Mark D0_2000... W pT analysis validated
  * Mark LHCB_2011_S919... phi meson analysis validated

2016-07-25  Andy Buckley

  * Add unbound accessors for momentum properties of ParticleBase objects.
  * Add Rivet/Tools/ParticleBaseUtils.hh to collect tools like functors for particle & jet filtering.
  * Add vector versions of Analysis::scale() and ::normalize(), for batched scaling.
  * Add Analysis::scale() and Analysis::divide() methods for Counter types.
  * Utils.hh: add a generic sum() function for containers, and use auto in loop to support arrays.
  * Set data path as well as lib path in scripts with --pwd option, and use abs path to $PWD.
  * Add setAnalysisDataPaths and addAnalysisDataPath to RivetPaths.hh/cc and Python.
  * Pass absolutized RIVET_DATA_PATH from rivet-mkhtml to rivet-cmphistos.

2016-07-24  Holger Schulz

  * Mark CDF_2008_S77... b jet shapes validated
  * Added protection against low-stats yoda exception in finalize for that analysis

2016-07-22  Andy Buckley

  * Fix newly introduced bug in make-plots which led to data point markers being skipped for all but the last bin.

2016-07-21  Andy Buckley

  * Add pid, abspid, charge, abscharge, charge3, and abscharge3 Cut enums, handled by Particle cut targets.
  * Add abscharge() and abscharge3() methods to Particle.
  * Add optional Cut and duplicate-removal flags to Particle children & descendants methods.
  * Add unbound versions of Particle is* and from* methods, for easier functor use.
  * Add Particle::isPrompt() as a member rather than an unbound function.
  * Add protections against -ve mass from numerical precision errors in smearing functions.

2016-07-20  Andy Buckley

  * Move several internal system headers into the include/Rivet/Tools directory.
  * Fix median-computing safety logic in ATLAS_2010_S8914702, and tidy this and @todo markers in several similar analyses.
  * Add to_str/toString and stream functions for Particle, and a bit of Particle util function reorganisation.
  * Add isStrange/Charm/Bottom PID and Particle functions.
  * Add RangeError exception throwing from MathUtils.hh stats functions if given empty/mismatched datasets.
  * Add Rivet/Tools/PrettyPrint.hh, based on https://louisdx.github.io/cxx-prettyprint/
  * Allow use of path regex group references in .plot file keyed values.

2016-07-20  Holger Schulz

  * Fix the --nskip behaviour on the main rivet script.
2016-07-07  Andy Buckley

  * Release version 2.5.0

2016-07-01  Andy Buckley

  * Fix pandoc interface flag version detection.

2016-06-28  Andy Buckley

  * Release version 2.4.3
  * Add ATLAS_2016_I1468168 early ttbar fully leptonic fiducial cross-section analysis at 13 TeV.

2016-06-21  Andy Buckley

  * Add ATLAS_2016_I1457605 inclusive photon analysis at 8 TeV.

2016-06-15  Andy Buckley

  * Add a --show-bibtex option to the rivet script, for convenient outputting of a BibTeX db for the used analyses.

2016-06-14  Andy Buckley

  * Add and rename 4-vector boost calculation methods: new methods beta, betaVec, gamma & gammaVec are now preferred to the deprecated boostVector method.

2016-06-13  Andy Buckley

  * Add and use projection handling methods declare(proj, pname) and apply(evt, pname) rather than the longer and explicitly 'projectiony' addProjection & applyProjection.
  * Start using the DEFAULT_RIVET_ANALYSIS_CTOR macro (newly created preferred alias to long-present DEFAULT_RIVET_ANA_CONSTRUCTOR)
  * Add a DEFAULT_RIVET_PROJ_CLONE macro for implementing the clone() method boiler-plate code in projections.

2016-06-10  Andy Buckley

  * Add a NonPromptFinalState projection, and tweak the PromptFinalState and unbound Particle functions a little in response. May need some more finessing.
  * Add user-facing aliases to ProjectionApplier add, get, and apply methods... the templated versions of which can now be called without using the word 'projection', which makes the function names a bit shorter and pithier, and reduces semantic repetition.

2016-06-10  Andy Buckley

  * Adding ATLAS_2015_I1397635 Wt at 8 TeV analysis.
  * Adding ATLAS_2015_I1390114 tt+b(b) at 8 TeV analysis.

2016-06-09  Andy Buckley

  * Downgrade some non-fatal error messages from ERROR to WARNING status, because *sigh* ATLAS's software treats any appearance of the word 'ERROR' in its log file as a reason to report the job as failed (facepalm).

2016-06-07  Andy Buckley

  * Adding ATLAS 13 TeV minimum bias analysis, ATLAS_2016_I1419652.
2016-05-30  Andy Buckley

  * pyext/rivet/util.py: Add pandoc --wrap/--no-wrap CLI detection and batch conversion.
  * bin/rivet: add -o as a more standard 'output' option flag alias to -H.

2016-05-23  Andy Buckley

  * Remove the last ref-data bin from table 16 of ATLAS_2010_S8918562, due to data corruption. The corresponding HepData record will be amended by ATLAS.

2016-05-12  Holger Schulz

  * Mark ATLAS_2012_I1082009 as validated after exhaustive tests with Pythia8 and Sherpa in inclusive QCD mode.

2016-05-11  Andy Buckley

  * Specialise return error codes from the rivet script.

2016-05-11  Andy Buckley

  * Add Event::allParticles() to provide neater (but not *helpful*) access to Rivet-wrapped versions of the raw particles in the Event::genEvent() record, and hence reduce HepMC digging.

2016-05-05  Andy Buckley

  * Version 2.4.2 release!
  * Update SLD_2002_S4869273 ref data to match publication erratum, now updated in HepData. Thanks to Peter Skands for the report, and Mike Whalley / Graeme Watt for the quick fix and heads-up.

2016-04-27  Andy Buckley

  * Add CMS_2014_I1305624 event shapes analysis, with standalone variable calculation struct embedded in an unnamed namespace.

2016-04-19  Andy Buckley

  * Various clean-ups and fixes in ATLAS analyses using isolated photons with median pT density correction.

2016-04-18  Andy Buckley

  * Add transformBy(LT) methods to Particle and Jet.
  * Add mkObjectTransform and mkFrameTransform factory methods to LorentzTransform.

2016-04-17  Andy Buckley

  * Add null GenVertex protection in Particle children & descendants methods.

2016-04-15  Andy Buckley

  * Add ATLAS_2015_I1397637, ATLAS 8 TeV boosted top cross-section vs. pT

2016-04-14  Andy Buckley

  * Add a --no-histos argument to the rivet script.

2016-04-13  Andy Buckley

  * Add ATLAS_2015_I1351916 (8 TeV Z FB asymmetry) and ATLAS_2015_I1408516 (8 TeV Z phi* and pT) analyses, and their _EL, _MU variants.

2016-04-12  Andy Buckley

  * Patch PID utils for ordering issues in baryon decoding.
2016-04-11  Andy Buckley

  * Actually implement ZEUS_2001_S4815815... only 10 years late!

2016-04-08  Andy Buckley

  * Add a --guess-prefix flag to rivet-config, cf. fastjet-config.
  * Add RIVET_DATA_PATH variable and related functions in C++ and Python as a common first-fallback for RIVET_REF_PATH, RIVET_INFO_PATH, and RIVET_PLOT_PATH.
  * Add --pwd options to rivet-mkhtml and rivet-cmphistos

2016-04-07  Andy Buckley

  * Remove implicit conventional event rotation for HERA -- this needs to be done explicitly from now on.
  * Add comBoost functions and methods to Beam.hh, and tidy LorentzTransformation.
  * Restructure Beam projection functions for beam particle and sqrtS extraction, and add asqrtS functions.
  * Rename and improve PID and Particle Z,A,lambda functions -> nuclZ,nuclA,nuclNlambda.

2016-04-05  Andy Buckley

  * Improve binIndex function, with an optional argument to allow overflow lookup, and add it to testMath.
  * Adding setPE, setPM, setPtEtaPhiM, etc. methods and corresponding mk* static methods to FourMomentum, as well as adding more convenience aliases and vector attributes for completeness. Coordinate conversion functions taken from HEPUtils::P4. New attrs also mapped to ParticleBase.

2016-03-29  Andy Buckley

  * ALEPH_1996_S3196992.cc, ATLAS_2010_S8914702.cc, ATLAS_2011_I921594.cc, ATLAS_2011_S9120807.cc, ATLAS_2012_I1093738.cc, ATLAS_2012_I1199269.cc, ATLAS_2013_I1217867.cc, ATLAS_2013_I1244522.cc, ATLAS_2013_I1263495.cc, ATLAS_2014_I1307756.cc, ATLAS_2015_I1364361.cc, CDF_2008_S7540469.cc, CMS_2015_I1370682.cc, MC_JetSplittings.cc, STAR_2006_S6870392.cc: Updates for the new FastJets interface, and other cleaning.
  * Deprecate 'standalone' FastJets constructors -- they are misleading.
  * More improvements around jets, including unbound conversion and filtering routines between collections of Particles, Jets, and PseudoJets.
  * Place 'Cut' forward declaration in a new Cuts.fhh header.
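The 2016-04-05 entry improves the binIndex function with an optional argument allowing overflow lookup. A sketch of one plausible convention for such a helper, over a sorted edge list (the exact Rivet signature and overflow convention may differ):

```python
import bisect

def bin_index(x, edges, allow_overflow=False):
    """Return the bin index of x given sorted edges (n+1 edges = n bins,
    each bin [edges[i], edges[i+1])); -1 if x lies outside all bins."""
    if x < edges[0]:
        return -1  # underflow is always out of range
    if x >= edges[-1]:
        # overflow: report it as an extra top bin only if explicitly allowed
        return len(edges) - 1 if allow_overflow else -1
    return bisect.bisect_right(edges, x) - 1

edges = [0.0, 1.0, 2.0, 5.0]
print(bin_index(1.5, edges))                       # 1
print(bin_index(7.0, edges))                       # -1: overflow rejected
print(bin_index(7.0, edges, allow_overflow=True))  # 3: overflow bin index
```

Binary search via bisect keeps the lookup O(log n) even for fine binnings, which matters when the function is called per particle per event.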
  * Adding a Cuts::OPEN extern const (a bit more standard- and constant-looking than Cuts::open())

2016-03-28  Andy Buckley

  * Improvements to FastJets constructors, including specification of an optional AreaDefinition as a constructor arg, disabling dodgy no-FS constructors which I suspect don't work properly in the brave new world of automatic ghost tagging, using a bit of judicious constructor delegation, and completing/exposing use of shared_ptr for internal memory management.

2016-03-26  Andy Buckley

  * Remove Rivet/Tools/RivetBoost.hh and Boost references from rivet-config, rivet-buildplugin, and configure.ac. It's gone ;-)
  * Replace Boost assign usage with C++11 brace initialisers. All Boost use is gone from Rivet!
  * Replace Boost lexical_cast and string algorithms.

2016-03-25  Andy Buckley

  * Bug-fix in semi-leptonic top selection of CMS_2015_I1370682.

2016-03-12  Andy Buckley

  * Allow multi-line major tick labels on make-plots linear x and y axes. Linebreaks are indicated by \n in the .dat file.

2016-03-09  Andy Buckley

  * Release 2.4.1

2016-03-03  Andy Buckley

  * Add a --nskip flag to the rivet command-line tool, to allow processing to begin in the middle of an event file (useful for batched processing of large files, in combination with --nevts)

2016-03-03  Holger Schulz

  * Add ATLAS 7 TeV event shapes in Z+jets analysis (ATLAS_2016_I1424838)

2016-02-29  Andy Buckley

  * Update make-plots to use multiprocessing rather than threading.
  * Add FastJets::trimJet method, thanks to James Monk for the suggestion and patch.
  * Add new preferred name PID::charge3 in place of PID::threeCharge, and also convenience PID::abscharge and PID::abscharge3 functions -- all derived from changes in external HEPUtils.
  * Add analyze(const GenEvent*) and analysis(string&) methods to AnalysisHandler, plus some docstring improvements.

2016-02-23  Andy Buckley

  * New ATLAS_2015_I1394679 analysis.
  * New MC_HHJETS analysis from Andreas Papaefstathiou.
  * Ref data updates for ATLAS_2013_I1219109, ATLAS_2014_I1312627, and ATLAS_2014_I1319490.
  * Add automatic output paging to 'rivet --show-analyses'

2016-02-16  Andy Buckley

  * Apply cross-section unit fixes and plot styling improvements to ATLAS_2013_I1217863 analyses, thanks to Christian Gutschow.
  * Fix rivet-cmphistos to avoid overwriting RatioPlotYLabel if already set via e.g. the PLOT pseudo-file. Thanks to Johann Felix v. Soden-Fraunhofen.

2016-02-15  Andy Buckley

  * Add Analysis::bookCounter and some machinery in rivet-cmphistos to avoid getting tripped up by unplottable (for now) data types.
  * Add --font and --format options to rivet-mkhtml and make-plots, to replace the individual flags used for that purpose. Not fully cleaned up, but a necessary step.
  * Add new plot styling options to rivet-cmphistos and rivet-mkhtml. Thanks to Gavin Hesketh.
  * Modify rivet-cmphistos and rivet-mkhtml to apply plot hiding if *any* path component is hidden by an underscore prefix, as implemented in AOPath, plus other tidying using new AOPath methods.
  * Add pyext/rivet/aopaths.py, containing the AOPath object for central & standard decoding of Rivet-standard analysis object path structures.

2016-02-12  Andy Buckley

  * Update ParticleIdUtils.hh (i.e. PID:: functions) to use the functions from the latest version of MCUtils' PIDUtils.h.

2016-01-15  Andy Buckley

  * Change rivet-cmphistos path matching logic from match to search (user can add an explicit ^ marker if they want match semantics).

2015-12-20  Andy Buckley

  * Improve linspace (and hence also logspace) precision errors by using multiplication rather than repeated addition to build the edge list (thanks to Holger Schulz for the suggestion).

2015-12-15  Andy Buckley

  * Add cmphistos and make-plots machinery for handling 'suffix' variations on plot paths, currently just by plotting every line, with the variations in a 70% faded tint.
  * Add Beam::pv() method for finding the beam interaction primary vertex 4-position.
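The 2015-12-20 linspace entry rests on a floating-point fact worth illustrating: building edges by repeated addition (x += step) accumulates rounding error at every step, while computing each edge as start + i*step keeps each edge's error independent. A self-contained demonstration (not Rivet's C++ code, just the numeric point):

```python
def linspace_add(start, stop, nbins):
    # naive approach: repeated addition, rounding error compounds per step
    edges, x = [], start
    step = (stop - start) / nbins
    for _ in range(nbins + 1):
        edges.append(x)
        x += step
    return edges

def linspace_mul(start, stop, nbins):
    # improved approach: each edge computed independently by multiplication
    step = (stop - start) / nbins
    return [start + i * step for i in range(nbins + 1)]

a = linspace_add(0.0, 1.0, 10)
m = linspace_mul(0.0, 1.0, 10)
print(a[-1] == 1.0)  # False: ten additions of ~0.1 drift off the endpoint
print(m[-1] == 1.0)  # True: 0.0 + 10*0.1 lands exactly on 1.0
```

For histogram binning the drifting endpoint matters: a last edge of 0.9999999999999999 instead of 1.0 can silently misclassify values sitting exactly on the boundary.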
  * Add a new Particle::setMomentum(E,x,y,z) method, and an origin position member which is automatically populated from the GenParticle, with access methods corresponding to the momentum ones.

2015-12-10  Andy Buckley

  * make-plots: improve custom tick attribute handling, allowing empty lists. Also, any whitespace now counts as a tick separator -- explicit whitespace in labels should be done via ~ or similar LaTeX markup.

2015-12-04  Andy Buckley

  * Pro-actively use -m/-M arguments when initially loading histograms in mkhtml, *before* passing them to cmphistos.

2015-12-03  Andy Buckley

  * Move contains() and has_key() functions on STL containers from std to Rivet namespaces.
  * Adding IsRef attributes to all YODA refdata files; this will be used to replace the /REF prefix in Rivet v3 onwards. The migration has also removed leading # characters from BEGIN/END blocks, as per YODA format evolution: new YODA versions as required by current Rivet releases are able to read both the old and new formats.

2015-12-02  Andy Buckley

  * Add handling of a command-line PLOT 'file' argument to rivet-mkhtml, cf. rivet-cmphistos.
  * Improvements to rivet-mkhtml behaviour re. consistency with rivet-cmphistos in how multi-part histo paths are decomposed into analysis-name + histo name, and removal of 'NONE' strings.

2015-11-30  Andy Buckley

  * Relax rivet/plotinfo.py pattern matching on .plot file components, to allow leading whitespace and whitespace around = signs, and to make the leading # optional on BEGIN/END blocks.

2015-11-26  Andy Buckley

  * Write out intermediate histogram files by default, with an event interval of 10k.

2015-11-25  Andy Buckley

  * Protect make-plots against lock-up due to partial pstricks command when there are no data points.

2015-11-17  Andy Buckley

  * rivet-cmphistos: Use a ratio label that doesn't mention 'data' when plotting MC vs. MC.

2015-11-12  Andy Buckley

  * Tweak plot and subplot sizing defaults in make-plots so the total canvas is always the same size by default.
2015-11-10  Andy Buckley

  * Handle 2D histograms better in rivet-cmphistos (since they can't be overlaid)

2015-11-05  Andy Buckley

  * Allow comma-separated analysis name lists to be passed to a single -a/--analysis/--analyses option.
  * Convert namespace-global const variables to be static, to suppress compiler warnings.
  * Use standard MAX_DBL and MAX_INT macros as a source for MAXDOUBLE and MAXINT, to suppress GCC5 warnings.

2015-11-04  Holger Schulz

  * Adding LHCB inelastic xsection measurement (LHCB_2015_I1333223)
  * Adding ATLAS colour flow in ttbar->semileptonic measurement (ATLAS_2015_I1376945)

2015-10-07  Chris Pollard

  * Release 2.4.0

2015-10-06  Holger Schulz

  * Adding CMS_2015_I1327224 dijet analysis (Mjj > 2 TeV)

2015-10-03  Holger Schulz

  * Adding CMS_2015_I1346843 Z+gamma

2015-09-30  Andy Buckley

  * Important improvement in FourVector & FourMomentum: new reverse() method to return a 4-vector in which only the spatial component has been inverted, cf. operator- which flips the t/E component as well.

2015-09-28  Holger Schulz

  * Adding D0_2000_I503361 ZPT at 1800 GeV

2015-09-29  Chris Pollard

  * Adding ATLAS_2015_CONF_2015_041

2015-09-29  Chris Pollard

  * Adding ATLAS_2015_I1387176

2015-09-29  Chris Pollard

  * Adding ATLAS_2014_I1327229

2015-09-28  Chris Pollard

  * Adding ATLAS_2014_I1326641

2015-09-28  Holger Schulz

  * Adding CMS_2013_I1122847 FB asymmetry in DY analysis

2015-09-28  Andy Buckley

  * Adding CMS_2015_I1385107 LHA pp 2.76 TeV track-jet underlying event.

2015-09-27  Andy Buckley

  * Adding CMS_2015_I1384119 LHC Run 2 minimum bias dN/deta with no B field.

2015-09-25  Andy Buckley

  * Adding TOTEM_2014_I1328627 forward charged density in eta analysis.

2015-09-23  Andy Buckley

  * Add CMS_2015_I1310737 Z+jets analysis.
  * Allow running MC_{W,Z}INC, MC_{W,Z}JETS as separate bare lepton analyses.

2015-09-23  Andy Buckley

  * FastJets now allows use of FastJet pure ghosts, by excluding them from the constituents of Rivet Jet objects. Thanks to James Monk for raising the issue and providing a patch.

2015-09-15  Andy Buckley

  * More MissingMomentum changes: add optional 'mass target' argument when retrieving the vector sum as a 4-momentum, with the mass defaulting to 0 rather than sqrt(sum(E)^2 - sum(p)^2).
  * Require Boost 1.55 for robust compilation, as pointed out by Andrii Verbytskyi.

2015-09-10  Andy Buckley

  * Allow access to the MissingMomentum projection via WFinder.
  * Adding extra methods to MissingMomentum, to make it more user-friendly.

2015-09-09  Andy Buckley

  * Fix factor of 2 in LHCB_2013_I1218996 normalisation, thanks to Felix Riehn for the report.

2015-08-20  Frank Siegert

  * Add function to ZFinder to retrieve all fiducial dressed leptons, e.g. to allow vetoing on a third one (proposed by Christian Gutschow).

2015-08-18  Andy Buckley

  * Rename xs and counter AOs to start with underscores, and modify rivet-cmphistos to skip AOs whose basenames start with _.

2015-08-17  Andy Buckley

  * Add writing out of cross-section and total event counter by default. Need to add some name protection to avoid them being plotted.

2015-08-16  Andy Buckley

  * Add templated versions of Analysis::refData() to use data types other than Scatter2DPtr, and convert the cached ref data store to generic AnalysisObjectPtrs to make it possible.

2015-07-29  Andy Buckley

  * Add optional Cut arguments to all the Jet tag methods.
  * Add exception handling and pre-emptive testing for a non-writeable output directory (based on a patch from Lukas Heinrich).

2015-07-24  Andy Buckley

  * Version 2.3.0 release.

2015-07-02  Holger Schulz

  * Tidy up ATLAS Higgs combination analysis.
  * Add ALICE kaon, pion analysis (ALICE_2015_I1357424)
  * Add ALICE strange baryon analysis (ALICE_2014_I1300380)
  * Add CDF ZpT measurement in Z->ee events analysis (CDF_2012_I1124333)
  * Add validated ATLAS W+charm measurement (ATLAS_2014_I1282447)
  * Add validated CMS jet and dijet analysis (CMS_2013_I1208923)

2015-07-01  Andy Buckley

  * Define a private virtual operator= on Projection, to block 'sliced' accidental value copies of derived class instances.
  * Add new muon-in-jet options to FastJets constructors, pass that and the invisibles enums correctly to JetAlg, tweak the default strategies, and add a FastJets constructor from a fastjet::JetDefinition (while deprecating the plugin-by-reference constructor).

2015-07-01  Holger Schulz

  * Add D0 phi* measurement (D0_2015_I1324946).
  * Remove WUD and MC_PHOTONJETUE analyses
  * Don't abort ATLAS_2015_I1364361 if there is no stable Higgs; print a warning instead and veto the event

2015-07-01  Andy Buckley

  * Add all, none, from-decay muon filtering options to JetAlg and FastJets.
  * Rename NONPROMPT_INVISIBLES to DECAY_INVISIBLES for clarity & extensibility.
  * Remove FastJets::ySubJet, splitJet, and filterJet methods -- they're BDRS-paper-specific, and you can now use the FastJet objects directly to do this and much more.
  * Adding InvisiblesStrategy to JetAlg, using it rather than a bool in the useInvisibles method, and updating FastJets to use this approach for its particle filtering and to optionally use the enum in the constructor arguments. The new default invisibles-using behaviour is to still exclude _prompt_ invisibles, and the default is still to exclude them all. Only one analysis (src/Analyses/STAR_2006_S6870392.cc) required updating, since it was the only one to be using the FastJets legacy seed_threshold constructor argument.
  * Adding isVisible method to Particle, taken from VisibleFinalState (which now uses this).

2015-06-30  Andy Buckley

  * Marking many old & superseded ATLAS analyses as obsolete.
  * Adding cmpMomByMass and cmpMomByAscMass sorting functors.
  * Bump version to 2.3.0 and require YODA > 1.4.0 (current head at time of development).

2015-06-08  Andy Buckley

  * Add handling of -m/-M flags on rivet-cmphistos and rivet-mkhtml, moving the current rivet-mkhtml -m/-M to -a/-A (for analysis name pattern matching). Requires YODA head (will be YODA 1.3.2 or 1.4.0).
  * src/Analyses/ATLAS_2015_I1364361.cc: Now use the built-in prompt photon selecting functions.
  * Tweak legend positions in MC_JETS .plot file.
  * Add a bit more debug output from ZFinder and WFinder.

2015-05-24  Holger Schulz

  * Normalisation discussion concerning ATLAS_2014_I1325553 is resolved. Changed YLabel accordingly.

2015-05-19  Holger Schulz

  * Add (preliminary) ATLAS combined Higgs analysis (ATLAS_2015_I1364361). Data will be updated and more histos added as soon as the paper is published in a journal. For now using data taken from the public resource https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/HIGG-2014-11/

2015-05-19  Peter Richardson

  * Fix ATLAS_2014_I1325553: normalisation of histograms was wrong by a factor of two (|y| vs. y problem)

2015-05-01  Andy Buckley

  * Fix MC_HJETS/HINC/HKTSPLITTINGS analyses to (ab)use the ZFinder with a mass range of 115-135 GeV and a mass target of 125 GeV (was previously 115-125 and a mass target of mZ)

2015-04-30  Andy Buckley

  * Removing uses of boost::assign::list_of, preferring the existing comma-based assign override for now, for C++11 compatibility.
  * Convert MC_Z* analysis finalize methods to use scale() rather than normalize().

2015-04-01  Holger Schulz

  * Add CMS 7 TeV rapidity gap analysis (CMS_2015_I1356998).
  * Remove FinalState Projection.

2015-03-30  Holger Schulz

  * Add ATLAS 7 TeV photon + jets analysis (ATLAS_2013_I1244522).

2015-03-26  Andy Buckley

  * Updates for HepMC 2.07 interface constness improvements.

2015-03-25  Holger Schulz

  * Add ATLAS double parton scattering in W+2j analysis (ATLAS_2013_I1216670).

2015-03-24  Andy Buckley

  * 2.2.1 release!
2015-03-23  Holger Schulz

  * Add ATLAS differential Higgs analysis (ATLAS_2014_I1306615).

2015-03-19  Chris Pollard

  * Add ATLAS V+gamma analyses (ATLAS_2013_I1217863)

2015-03-20  Andy Buckley

  * Adding ATLAS R-jets analysis, i.e. ratios of W+jets and Z+jets observables (ATLAS_2014_I1312627 and _EL, _MU variants)
  * include/Rivet/Tools/ParticleUtils.hh: Adding same/oppSign and same/opp/diffCharge functions, operating on two Particles.
  * include/Rivet/Tools/ParticleUtils.hh: Adding HasAbsPID functor and removing the optional abs arg from HasPID.

2015-03-19  Andy Buckley

  * Mark ATLAS_2012_I1083318 as VALIDATED and fix d25-x01-y02 ref data.

2015-03-19  Chris Pollard

  * Add ATLAS W and Z angular analyses (ATLAS_2011_I928289)

2015-03-19  Andy Buckley

  * Add LHCb charged particle multiplicities and densities analysis (LHCB_2014_I1281685)
  * Add LHCb Z y and phi* analysis (LHCB_2012_I1208102)

2015-03-19  Holger Schulz

  * Add ATLAS dijet analysis (ATLAS_2014_I1325553).
  * Add ATLAS Z pT analysis (ATLAS_2014_I1300647).
  * Add ATLAS low-mass Drell-Yan analysis (ATLAS_2014_I1288706).
  * Add ATLAS gap fractions analysis (ATLAS_2014_I1307243).

2015-03-18  Andy Buckley

  * Adding CMS_2014_I1298810 and CMS_2014_I1303894 analyses.

2015-03-18  Holger Schulz

  * Add PDG_TAUS analysis which makes use of the TauFinder.
  * Add ATLAS 'traditional' Underlying Event in Z->mumu analysis (ATLAS_2014_I1315949).

2015-03-18  Andy Buckley

  * Change UnstableFinalState duplicate resolution to use the last in a chain rather than the first.

2015-03-17  Holger Schulz

  * Update TauFinder to use decay type (can be HADRONIC, LEPTONIC or ANY); in FastJet.cc, set TauFinder mode to hadronic for tau-tagging

2015-03-16  Chris Pollard

  * Removed fuzzyEquals() from Vector3::angle()

2015-03-16  Andy Buckley

  * Adding Cuts-based constructor to PrimaryHadrons.
  * Adding missing compare() method to HeavyHadrons projection.
2015-03-15  Chris Pollard

  * Adding FinalPartons projection which selects the quarks and gluons immediately before hadronization

2015-03-05  Andy Buckley

  * Adding Cuts-based constructors and other tidying in UnstableFinalState and HeavyHadrons

2015-03-03  Andy Buckley

  * Add support for a PLOT meta-file argument to rivet-cmphistos.

2015-02-27  Andy Buckley

  * Improved time reporting.

2015-02-24  Andy Buckley

  * Add Particle::fromHadron and Particle::fromPromptTau, and add a boolean 'prompt' argument to Particle::fromTau.
  * Fix WFinder use-transverse-mass property setting. Thanks to Christian Gutschow.

2015-02-04  Andy Buckley

  * Add more protection against math domain errors with log axes.
  * Add some protection against nan-valued points and error bars in make-plots.

2015-02-03  Andy Buckley

  * Converting 'bitwise' to 'logical' Cuts combinations in all analyses.

2015-02-02  Andy Buckley

  * Use vector MET rather than scalar VET (doh...) in WFinder cut. Thanks to Ines Ochoa for the bug report.
  * Updating and tidying analyses with deprecation warnings.
  * Adding more Cuts/FS constructors for Charged,Neutral,UnstableFinalState.
  * Add &&, || and ! operators for without-parens-warnings Cut combining. Note these don't short-circuit, but this is ok since Cut comparisons don't have side-effects.
  * Add absetaIn, absrapIn Cut range definitions.
  * Updating use of sorted particle/jet access methods and cmp functors in projections and analyses.

2014-12-09  Andy Buckley

  * Adding a --cmd arg to rivet-buildplugin to allow the output paths to be sed'ed (to help deal with naive Grid distribution). For example: BUILDROOT=`rivet-config --prefix`; rivet-buildplugin PHOTONS.cc --cmd | sed -e "s:$BUILDROOT:$SITEROOT:g"

2014-11-26  Andy Buckley

  * Interface improvements in DressedLeptons constructor.
  * Adding DEPRECATED macro to throw compiler deprecation warnings when using deprecated features.
2014-11-25  Andy Buckley

  * Adding Cut-based constructors, and various constructors with lists of PDG codes, to IdentifiedFinalState.

2014-11-20  Andy Buckley

  * Analysis updates (ATLAS, CMS, CDF, D0) to apply the changes below.
  * Adding JetAlg jets(Cut, Sorter) methods, and other interface improvements for cut and sorted ParticleBase retrieval from JetAlg and ParticleFinder projections. Some old many-doubles versions removed, syntactic sugar sorting methods deprecated.
  * Adding Cuts::Et and Cuts::ptIn, Cuts::etIn, Cuts::massIn.
  * Moving FastJet includes, conversions, uses etc. into Tools/RivetFastJet.hh

2014-10-07  Andy Buckley

  * Fix a bug in the isCharmHadron(pid) function and remove isStrange* functions.

2014-09-30  Andy Buckley

  * 2.2.0 release!
  * Mark Jet::containsBottom and Jet::containsCharm as deprecated methods: use the new methods. Analyses updated.
  * Add Jet::bTagged(), Jet::cTagged() and Jet::tauTagged() as ghost-assoc-based replacements for the 'contains' tagging methods.

2014-09-17  Andy Buckley

  * Adding support for 1D and 3D YODA scatters, and helper methods for calling the efficiency, asymm and 2D histo divide functions.

2014-09-12  Andy Buckley

  * Adding 5 new ATLAS analyses:
      ATLAS_2011_I921594: Inclusive isolated prompt photon analysis with full 2010 LHC data
      ATLAS_2013_I1263495: Inclusive isolated prompt photon analysis with 2011 LHC data
      ATLAS_2014_I1279489: Measurements of electroweak production of dijets + $Z$ boson, and distributions sensitive to vector boson fusion
      ATLAS_2014_I1282441: The differential production cross section of the $\phi(1020)$ meson in $\sqrt{s}=7$ TeV $pp$ collisions measured with the ATLAS detector
      ATLAS_2014_I1298811: Leading jet underlying event at 7 TeV in ATLAS
  * Adding a median(vector) function and fixing the other stats functions to operate on vector rather than vector.

2014-09-03  Andy Buckley

  * Fix wrong behaviour of LorentzTransform with a null boost vector -- thanks to Michael Grosse.
2014-08-26  Andy Buckley

  * Add calc() methods to Hemispheres as requested, to allow it to be used with Jet or FourMomentum inputs outside the normal projection system.

2014-08-17  Andy Buckley

  * Improvements to the particles methods on ParticleFinder/FinalState, in particular adding the range of cuts arguments cf. JetAlg (and tweaking the sorted jets equivalent), and returning as a copy rather than a reference if cut/sorted, to avoid accidentally messing up the cached copy.
  * Creating ParticleFinder projection base class, and moving Particles-accessing methods from FinalState into it.
  * Adding basic forms of MC_ELECTRONS, MC_MUONS, and MC_TAUS analyses.

2014-08-15  Andy Buckley

  * Version bump to 2.2.0beta1 for use at BOOST and MCnet school.

2014-08-13  Andy Buckley

  * New analyses:
      ATLAS_2014_I1268975 (high mass dijet cross-section at 7 TeV)
      ATLAS_2014_I1304688 (jet multiplicity and pT at 7 TeV)
      ATLAS_2014_I1307756 (scalar diphoton resonance search at 8 TeV -- no histograms!)
      CMSTOTEM_2014_I1294140 (charged particle pseudorapidity at 8 TeV)

2014-08-09  Andy Buckley

  * Adding PromptFinalState, based on code submitted by Alex Grohsjean and Will Bell. Thanks!

2014-08-06  Andy Buckley

  * Adding MC_HFJETS and MC_JETTAGS analyses.

2014-08-05  Andy Buckley

  * Update all analyses to use the xMin/Max/Mid, xMean, xWidth, etc. methods on YODA classes rather than the deprecated lowEdge etc.
  * Merge new HasPID functor from Holger Schulz into Rivet/Tools/ParticleUtils.hh, mainly for use with the any() function in Rivet/Tools/Utils.hh

2014-08-04  Andy Buckley

  * Add ghost tagging of charms, bottoms and taus to FastJets, and tag info accessors to Jet.
  * Add constructors from, and cast operators to, FastJet's PseudoJet object from Particle and Jet.
  * Convert inRange to not use fuzzy comparisons on closed intervals, providing the old version as fuzzyInRange.

2014-07-30  Andy Buckley

  * Remove classifier functions accepting a Particle from the PID inner namespace.
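The 2014-08-04 inRange change is easy to misread without seeing the two behaviours side by side: the new inRange is an exact closed-interval test, while the retained fuzzyInRange still accepts values that miss the interval by a small tolerance. A Python sketch of the distinction (the epsilon value and names are illustrative, not Rivet's exact implementation):

```python
EPS = 1e-9  # illustrative tolerance; Rivet's fuzzy comparisons use their own

def in_range(x, lo, hi):
    # new behaviour: exact test on the closed interval [lo, hi]
    return lo <= x <= hi

def fuzzy_in_range(x, lo, hi, eps=EPS):
    # old behaviour: tolerate values just outside the interval boundaries
    return (lo - eps) <= x <= (hi + eps)

x = 1.0 + 1e-12  # strictly above the upper edge, but only just
print(in_range(x, 0.0, 1.0))        # False: exact comparison rejects it
print(fuzzy_in_range(x, 0.0, 1.0))  # True: within the fuzz tolerance
```

Making the exact form the default avoids surprising inclusions at bin edges, while fuzzyInRange remains available where floating-point slop is genuinely expected.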
2014-07-29  Andy Buckley

    * MC_JetAnalysis.cc: re-enable +- ratios for eta and y, now that YODA divide doesn't throw an exception.

    * ATLAS_2012_I1093734: fix a loop index error which led to the first bin value being unfilled for half the dphi plots.

    * Fix accidental passing of a GenParticle pointer as a PID code int in HeavyHadrons.cc. The effect was limited to incorrect deductions about excited HF decay chains and should be small. Thanks to Tomasz Przedzinski for finding and reporting the issue during HepMC3 design work!

2014-07-23  Andy Buckley

    * Fix to logspace: make sure that the start and end values are exact, not the result of exp(log(x)).

2014-07-16  Andy Buckley

    * Fix setting of library paths for doc building: Python can't influence the dynamic loader in its own process by setting an environment variable, because the loader only looks at the variable once, when it starts.

2014-07-02  Andy Buckley

    * rivet-cmphistos now uses the generic yoda.read() function rather than readYODA() -- AIDA files can also be compared and plotted directly now.

2014-06-24  Andy Buckley

    * Add a stupid missing include and std:: prefix in Rivet.hh

2014-06-20  Holger Schulz

    * bin/make-plots: Automatic generation of minor xtick labels if LogX is requested but the data reside e.g. in [200, 700]. Fixes the m_12 plots of, e.g., ATLAS_2010_S8817804.

2014-06-17  David Grellscheid

    * pyext/rivet/Makefile.am: 'make distcheck' and out-of-source builds should work now.

2014-06-10  Andy Buckley

    * Fix use of the install command for bash completion installation on Macs.

2014-06-07  Andy Buckley

    * Removing direct includes of MathUtils.hh and others from analysis code files.

2014-06-02  Andy Buckley

    * Rivet 2.1.2 release!

2014-05-30  Andy Buckley

    * Using Particle absrap(), abseta() and abspid() where automatic conversion was feasible.

    * Adding a few extra kinematics mappings to ParticleBase.

    * Adding p3() accessors for the 3-momentum on FourMomentum, Particle, and Jet.
    * Using Jet and Particle kinematics methods directly (without momentum()) where possible.

    * More tweaks to make-plots 2D histo parsing behaviour.

2014-05-30  Holger Schulz

    * Actually fill the XQ 2D histo, .plot decorations.

    * Have make-plots produce colourmaps using YODA_3D_SCATTER objects. Remove the grid in colourmaps.

    * Some tweaks for the SFM analysis, trying to contact Martin Wunsch who did the unfolding back then.

2014-05-29  Holger Schulz

    * Re-enable the 2D histo in MC_PDFS

2014-05-28  Andy Buckley

    * Updating analysis and projection routines to use Particle::pid() by preference to Particle::pdgId(), and Particle::abspid() by preference to abs(Particle::pdgId()), etc.

    * Adding interfacing of smart pointer types and booking etc. for YODA 2D histograms and profiles.

    * Improving ParticleIdUtils and ParticleUtils functions based on a merge of improved function collections from MCUtils, and dropping the compiled ParticleIdUtils.cc file.

2014-05-27  Andy Buckley

    * Adding CMS_2012_I1090423 (dijet angular distributions), CMS_2013_I1256943 (Zbb xsec and angular correlations), CMS_2013_I1261026 (jet and UE properties vs. Nch) and D0_2000_I499943 (bbbar production xsec and angular correlations).

2014-05-26  Andy Buckley

    * Fixing a bug in plot file handling, and adding a texpand() routine to rivet.util, to be used to expand some 'standard' physics TeX macros.

    * Adding ATLAS_2012_I1124167 (min bias event shapes), ATLAS_2012_I1203852 (ZZ cross-section), and ATLAS_2013_I1190187 (WW cross-section) analyses.

2014-05-16  Andy Buckley

    * Adding any(iterable, fn) and all(iterable, fn) template functions for convenience.

2014-05-15  Holger Schulz

    * Fix some bugs in the identified hadron PIDs in OPAL_1998_S3749908.

2014-05-13  Andy Buckley

    * Writing out [UNVALIDATED], [PRELIMINARY], etc. in the --list-analyses output if an analysis is not VALIDATED.

2014-05-12  Andy Buckley

    * Adding the CMS_2013_I1265659 colour coherence analysis.

2014-05-07  Andy Buckley

    * Bug fixes in CMS_2013_I1209721 from Giulio Lenzi.
    * Fixing compiler warnings from clang, including one which indicated a misapplied cut bug in CDF_2006_S6653332.

2014-05-05  Andy Buckley

    * Fix missing abs() in Particle::abspid()!!!!

2014-04-14  Andy Buckley

    * Adding the namespace protection workaround for Boost described at http://www.boost.org/doc/libs/1_55_0/doc/html/foreach.html

2014-04-13  Andy Buckley

    * Adding a rivet.pc template file and installation rule for pkg-config to use.

    * Updating data/refdata/ALEPH_2001_S4656318.yoda to the corrected version in HepData.

2014-03-27  Andy Buckley

    * Flattening PNG output of make-plots (i.e. no transparency) and other tweaks.

2014-03-23  Andy Buckley

    * Renaming the internal meta-particle class in DressedLeptons (also exposed in the W/ZFinders) from ClusteredLepton to DressedLepton, for consistency with the change in name of its containing class.

    * Removing the need for cmake and unportable yaml-cpp trickery by using libtool to build an embedded symbol-mangled copy of yaml-cpp, rather than trying to mangle and build directly from the tarball.

2014-03-10  Andy Buckley

    * Rivet 2.1.1 release.

2014-03-07  Andy Buckley

    * Adding the ATLAS multilepton search (no ref data file), ATLAS_2012_I1204447.

2014-03-05  Andy Buckley

    * Also renaming the Breit-Wigner functions to cdfBW, invcdfBW and bwspace.

    * Renaming index_between() to the more Rivety binIndex(), since that's the only real use of such a function... plus a bit of SFINAE type relaxation trickery.

2014-03-04  Andy Buckley

    * Adding programmatic access to final histograms via AnalysisHandler::getData().

    * Adding the CMS 4-jet correlations analysis, CMS_2013_I1273574.

    * Adding the CMS W + 2 jet double parton scattering analysis, CMS_2013_I1272853.

    * Adding the ATLAS isolated diphoton measurement, ATLAS_2012_I1199269.

    * Improving the index_between function so the numeric types don't have to match exactly.

    * Adding better momentum comparison functors, and sortBy, sortByX functions to use them easily on containers of Particle, Jet, and FourMomentum.
2014-02-10  Andy Buckley

    * Removing duplicate and unused ParticleBase sorting functors.

    * Removing the unused HT increment and units in ATLAS_2012_I1180197 (unvalidated SUSY).

    * Fixing a photon isolation logic bug in CMS_2013_I1258128 (Z rapidity).

    * Replacing internal uses of #include Rivet/Rivet.hh with Rivet/Config/RivetCommon.hh, removing the MAXRAPIDITY const, and repurposing Rivet/Rivet.hh as a convenience include for external API users.

    * Adding isStable, children, allDescendants, stableDescendants, and flightLength functions to Particle.

    * Replacing the Particle and Jet deltaX functions with generic ones on ParticleBase, and adding deltaRap variants.

    * Adding a Jet.fhh forward declaration header, including fastjet::PseudoJet.

    * Adding a RivetCommon.hh header to allow Rivet.hh to be used externally.

    * Fixing HeavyHadrons to apply pT cuts if specified.

2014-02-06  Andy Buckley

    * 2.1.0 release!

2014-02-05  Andy Buckley

    * Protect against an invalid prefix value if the --prefix configure option is unused.

2014-02-03  Andy Buckley

    * Adding the ATLAS_2012_I1093734 fwd-bwd / azimuthal minbias correlations analysis.

    * Adding the LHCB_2013_I1208105 forward energy flow analysis.

2014-01-31  Andy Buckley

    * Checking the YODA minimum version in the configure script.

    * Fixing the JADE_OPAL analysis ycut values to the midpoints, thanks to information from Christoph Pahl / Stefan Kluth.

2014-01-29  Andy Buckley

    * Removing unused/overrestrictive Isolation* headers.

2014-01-27  Andy Buckley

    * Re-bundling yaml-cpp, now built as a mangled static lib based on the LHAPDF6 experience.

    * Throw a UserError rather than an assert if AnalysisHandler::init is called more than once.

2014-01-25  David Grellscheid

    * src/Core/Cuts.cc: New Cuts machinery, already used in FinalState. Old-style "mineta, maxeta, minpt" constructors kept around for ease of transition. A minimal set of convenience functions is available, like EtaIn(); it should be expanded as needed.
2014-01-22  Andy Buckley

    * configure.ac: Remove the opportunistic C++11 build, until this becomes mandatory (in version 2.2.0?). Anyone who wants C++11 can explicitly set the CXXFLAGS (and DYLDFLAGS for pre-Mavericks Macs).

2014-01-21  Leif Lonnblad

    * src/Core/Analysis.cc: Fixed a bug in Analysis::isCompatible where an 'abs' was left out when checking that the beam energies do not differ by more than 1 GeV.

    * src/Analyses/CMS_2011_S8978280.cc: Fixed checking of the beam energy and booking of the corresponding histograms.

2013-12-19  Andy Buckley

    * Adding pid() and abspid() methods to Particle.

    * Adding hasCharm and hasBottom methods to Particle.

    * Adding a sorting functor arg version of the ZFinder::constituents() method.

    * Adding pTmin cut accessors to HeavyHadrons.

    * Tweak to the WFinder constructor to place the target W (trans) mass argument last.

2013-12-18  Andy Buckley

    * Adding a GenParticle* cast operator to Particle, removing the Particle and Jet copies of the momentum cmp functors, and general tidying/improvement/unification of the momentum properties of jets and particles.

2013-12-17  Andy Buckley

    * Using SFINAE techniques to improve the math util functions.

    * Adding isNeutrino to ParticleIdUtils, and isHadron/isMeson/isBaryon/isLepton/isNeutrino methods to Particle.

    * Adding a FourMomentum cast operator to ParticleBase, so that Particle and Jet objects can be used directly as FourMomentums.

2013-12-16  Andy Buckley

    * LeptonClusters renamed to DressedLeptons.

    * Adding singular particle accessor functions to WFinder and ZFinder.

    * Removing ClusteredPhotons and converting ATLAS_2010_S8919674.

2013-12-12  Andy Buckley

    * Fixing a problem with --disable-analyses (thanks to David Hall)

    * Require FastJet version 3.

    * Bumped version to 2.1.0a0

    * Adding -DNDEBUG to the default build flags, unless in --enable-debug mode.

    * Adding a special treatment of RIVET_*_PATH variables: if they end in :: the default search paths will not be appended.
      Used primarily to restrict the doc builds to look only inside the build dirs, but potentially also useful in other special circumstances.

    * Adding a definition of exec_prefix to rivet-buildplugin.

    * Adding -DNDEBUG to the default non-debug build flags.

2013-11-27  Andy Buckley

    * Removing an accidentally still-present no-as-needed linker flag from rivet-config.

    * Lots of analysis clean-up and migration to use new features and the W/Z finder APIs.

    * More momentum method forwarding on ParticleBase, and adding abseta(), absrap() etc. functions.

    * Adding the DEFAULT_RIVET_ANA_CONSTRUCTOR cosmetic macro.

    * Adding deltaRap() etc. function variations.

    * Adding a no-decay photon clustering option to WFinder and ZFinder, and replacing opaque bool args with enums.

    * Adding an option for ignoring photons from hadron/tau decays in LeptonClusters.

2013-11-22  Andy Buckley

    * Adding Particle::fromBottom/Charm/Tau() members. LHCb were already mocking this up, so it seemed sensible to add it to the interface as a more popular (and even less dangerous) version of hasAncestor().

    * Adding an empty() member to the JetAlg interface.

2013-11-07  Andy Buckley

    * Adding the GSL lib path to the library path in the env scripts and the rivet-config --ldflags output.

2013-10-25  Andy Buckley

    * 2.0.0 release!!!!!!

2013-10-24  Andy Buckley

    * Supporting zsh completion via bash completion compatibility.

2013-10-22  Andy Buckley

    * Updating the manual to describe YODA rather than AIDA, and the new rivet-cmphistos script.

    * bin/make-plots: Adding paths to error messages in histogram combination.

    * CDF_2005_S6217184: fixes to low stats errors and final scatter plot binning.

2013-10-21  Andy Buckley

    * Several small fixes in jet shape analyses, SFM_1984, etc. found in the last H++ validation run.

2013-10-18  Andy Buckley

    * Updates to configure and the rivetenv scripts to try harder to discover YODA.
2013-09-26  Andy Buckley

    * Now bundling Cython-generated files in the tarballs, so Cython is not a build requirement for non-developers.

2013-09-24  Andy Buckley

    * Removing unnecessary uses of a momentum() indirection when accessing kinematic variables.

    * Clean-up in Jet, Particle, and ParticleBase, in particular splitting the PID functions on Particle from those on PID codes, and adding convenience kinematic functions to ParticleBase.

2013-09-23  Andy Buckley

    * Add the -avoid-version flag to libtool.

    * Final analysis histogramming issues resolved.

2013-08-16  Andy Buckley

    * Adding a ConnectBins flag in make-plots, to decide whether to connect adjacent, gapless bins with a vertical line. Enabled by default (good for the step-histo default look of MC lines), but rivet-cmphistos now disables it for the reference data.

2013-08-14  Andy Buckley

    * Making 2.0.0beta3 -- just a few analysis migration issues remaining, but it's worth making another beta since there were lots of framework fixes/improvements.

2013-08-11  Andy Buckley

    * ARGUS_1993_S2669951 also fixed using scatter autobooking.

    * Fixing remaining issues with booking in BABAR_2007_S7266081 using the feature below (far nicer than hard-coding).

    * Adding a copy_pts param to some Analysis::bookScatter2D methods: pre-setting the points with x values is sometimes genuinely useful.

2013-07-26  Andy Buckley

    * Removed the (officially) obsolete CDF 2008 LEADINGJETS and NOTE_9351 underlying event analyses -- superseded by the proper versions of these analyses based on the final combined paper.

    * Removed the semi-demo Multiplicity projection -- only the EXAMPLE analysis and the trivial ALEPH_1991_S2435284 needed adaptation.

2013-07-24  Andy Buckley

    * Adding a rejection of histo paths containing /TMP/ from the writeData function. Use this to handle booked temporary histograms... for now.

2013-07-23  Andy Buckley

    * Make rivet-cmphistos _not_ draw a ratio plot if there is only one line.
    * Improvements and fixes to HepData lookup with rivet-mkanalysis.

2013-07-22  Andy Buckley

    * Add -std=c++11 or -std=c++0x to the Rivet compiler flags if supported.

    * Various fixes to analyses with non-zero numerical diffs.

2013-06-12  Andy Buckley

    * Adding a new HeavyHadrons projection.

    * Adding optional extra include_end args to logspace() and linspace().

2013-06-11  Andy Buckley

    * Moving the Rivet/RivetXXX.hh tools headers into Rivet/Tools/.

    * Adding a PrimaryHadrons projection.

    * Adding particles_in/out functions on GenParticle to RivetHepMC.

    * Moved the STL extensions from Utils.hh to RivetSTL.hh, and tidying.

    * Tidying, improving, extending, and documenting in RivetSTL.hh.

    * Adding a #include of Logging.hh into Projection.hh, and removing unnecessary #includes from all Projection headers.

2013-06-10  Andy Buckley

    * Moving the htmlify() and detex() Python functions into rivet.util.

    * Add a HepData URL for Inspire ID lookup to the rivet script.

    * Fix analyses' info files which accidentally listed the Inspire ID under the SpiresID metadata key.

2013-06-07  Andy Buckley

    * Updating the mk-analysis-html script to produce MathJax output.

    * Adding a version of Analysis::removeAnalysisObject() which takes an AO pointer as its argument.

    * bin/rivet: Adding pandoc-based conversion of TeX summary and description strings to plain text on the terminal output.

    * Add MathJax to rivet-mkhtml output, set up so the .info entries should render ok.

    * Mark the OPAL 1993 analysis as UNVALIDATED: from the H++ benchmark runs it looks nothing like the data, and there are some outstanding ambiguities.

2013-06-06  Andy Buckley

    * Releasing the 2.0.0b2 beta version.

2013-06-05  Andy Buckley

    * Renaming Analysis::add() etc. to the very explicit addAnalysisObject(), sorting out shared_pointer polymorphism issues via the Boost dynamic_pointer_cast, and adding a full set of getHisto1D() etc. explicitly named and typed accessors, including ones with HepData dataset/axis ID signatures.
    * Adding histo booking from an explicit reference Scatter2D (and more placeholders for 2D histos / 3D scatters), and rewriting the existing autobooking to use this.

    * Converting inappropriate uses of size_t to unsigned int in Analysis.

    * Moving Analysis::addPlot to Analysis::add() (or reg()?) and adding get() and remove() (or unreg()?)

    * Fixing the attempted abstraction of import fallbacks in rivet.util.import_ET().

    * Removing a broken attempt at histoDir() caching which led to all histograms being registered under the same analysis name.

2013-06-04  Andy Buckley

    * Updating the Cython version requirement to 0.18.

    * Adding Analysis::integrate() functions and tidying the Analysis.hh file a bit.

2013-06-03  Andy Buckley

    * Adding explicit protection against using inf/nan scalefactors in ATLAS_2011_S9131140 and H1_2000_S4129130.

    * Making Analysis::scale noisily complain about invalid scalefactors.

2013-05-31  Andy Buckley

    * Reducing the TeX main memory to ~500MB. Turns out that it *can* be too large with new versions of TeXLive!

2013-05-30  Andy Buckley

    * Reverting bookScatter2D behaviour to never look at ref data, and updating a few affected analyses. This should fix some bugs with doubled datapoints introduced by the previous behaviour+addPoint.

    * Adding a couple of minor Utils.hh and MathUtils.hh features.

2013-05-29  Andy Buckley

    * Removing the Constraints.hh header.

    * Minor bugfixes and improvements in Scatter2D booking and MC_JetAnalysis.

2013-05-28  Andy Buckley

    * Removing the defunct HistoFormat.hh and HistoHandler.{hh,cc}

2013-05-27  Andy Buckley

    * Removing includes of Logging.hh, RivetYODA.hh, and ParticleIdUtils.hh from analyses (and adding an include of ParticleIdUtils.hh to Analysis.hh).

    * Removing now-unused .fhh files.

    * Removing lots of unnecessary .fhh includes from core classes: everything still compiling ok. A good opportunity to tidy this up before the release.
    * Moving the rivet-completion script from the data dir to bin (the completion is for scripts in bin, and this makes development easier).

    * Updating the bash completion scripts for the YODA format and the compare-histos -> rivet-cmphistos rename.

2013-05-23  Andy Buckley

    * Adding Doxy comments and a couple of useful convenience functions to Utils.hh.

    * Final tweaks to the ATLAS ttbar jet veto analysis (checked the logic with Kiran Joshi).

2013-05-15  Andy Buckley

    * Many 1.0 -> weight bugfixes in ATLAS_2011_I945498.

    * yaml-cpp v3 support re-introduced in .info parsing.

    * Lots of analysis clean-ups for YODA TODO issues.

2013-05-13  Andy Buckley

    * Analysis histo booking improvements for Scatter2D, placeholders for 2D histos, and general tidying.

2013-05-12  Andy Buckley

    * Adding configure-time differentiation between yaml-cpp API versions 3 and 5.

2013-05-07  Andy Buckley

    * Converting info file reading to use the yaml-cpp 0.5.x API.

2013-04-12  Andy Buckley

    * Tagging as 2.0.0b1

2013-04-04  Andy Buckley

    * Removing the bundling of yaml-cpp: it needs to be installed by the user / bootstrap script from now on.

2013-04-03  Andy Buckley

    * Removing the svn:external m4 directory, and converting Boost detection to use the better boost.m4 macros.

2013-03-22  Andy Buckley

    * Moving the PID consts to the PID namespace, with corresponding code updates and opportunistic clean-ups.

    * Adding a Particle::fromDecay() method.

2013-03-09  Andy Buckley

    * Version bump to 2.0.0b1 in anticipation of the first beta release.

    * Adding many more 'popular' particle ID code named-consts and aliases, updating the RapScheme enum with ETA -> ETARAP, and fixing affected analyses (plus other opportunistic tidying / minor bug-fixing).

    * Fixing a symbol misnaming in ATLAS_2012_I1119557.

2013-03-07  Andy Buckley

    * Renaming existing uses of ParticleVector to the new 'Particles' type.

    * Updating util classes, projections, and analyses to deal with the HepMC return value changes.

    * Adding a new Particle(const GenParticle*) constructor.
    * Converting Particle::genParticle() to return a const pointer rather than a reference, for the same reason as below (+ consistency within Rivet and with the HepMC pointer-centric coding design).

    * Converting Event to use a different implementation of original and modified GenParticles, and to manage the memory in a more future-proof way. Event::genParticle() now returns a const pointer rather than a reference, to signal that the user is leaving the happy pastures of 'normal' Rivet behind.

    * Adding a Particles typedef by analogy to Jets, and in preference to the cumbersome ParticleVector.

    * bin/: Lots of tidying/pruning of messy/defunct scripts.

    * Creating spiresbib, util, and plotinfo rivet Python module submodules: this eliminates lighthisto and the standalone spiresbib modules. util contains convenience functions for Python version testing, clean ElementTree import, and process renaming, for primary use by the rivet-* scripts.

    * Removing defunct scripts that have been replaced/obsoleted by YODA.

2013-03-06  Andy Buckley

    * Fixing the doc build so that the reference histos and titles are ~correctly documented. We may want to truncate some of the lists!

2013-03-06  Hendrik Hoeth

    * Added the ATLAS_2012_I1125575 analysis

    * Converted rivet-mkhtml to yoda

    * Introduced rivet-cmphistos as a yoda-based replacement for compare-histos

2013-03-05  Andy Buckley

    * Replacing all AIDA ref data with YODA versions.

    * Fixing the histograms entries in the documentation to be tolerant to plotinfo loading failures.

    * Making the findDatafile() function primarily find YODA data files, then fall back to AIDA. The ref data loader will use the appropriate YODA format reader.

2013-02-05  David Grellscheid

    * include/Rivet/Math/MathUtils.hh: added a BWspace bin edge method to give equal-area Breit-Wigner bins.

2013-02-01  Andy Buckley

    * Adding an element to the PhiMapping enum and a new mapAngle(angle, mapping) function.
    * Fixes to the Vector3::azimuthalAngle and Vector3::polarAngle calculations (using the mapAngle functions).

2013-01-25  Frank Siegert

    * Split the MC_*JETS analyses into three separate bits:
      MC_*INC (inclusive properties)
      MC_*JETS (jet properties)
      MC_*KTSPLITTINGS (kT splitting scales)

2013-01-22  Hendrik Hoeth

    * Fix the TeX variable in the rivetenv scripts, especially for csh

2012-12-21  Andy Buckley

    * Version 1.8.2 release!

2012-12-20  Andy Buckley

    * Adding the ATLAS_2012_I1119557 analysis (from Roman Lysak and Lily Asquith).

2012-12-18  Andy Buckley

    * Adding the TOTEM_2012_002 analysis, from Sercan Sen.

2012-12-18  Hendrik Hoeth

    * Added the CMS_2011_I954992 analysis

2012-12-17  Hendrik Hoeth

    * Added the CMS_2012_I1193338 analysis

    * Fixed the xi cut in ATLAS_2011_I894867

2012-12-17  Andy Buckley

    * Adding analysis descriptions to the HTML analysis page ToC.

2012-12-14  Hendrik Hoeth

    * Added the CMS_2012_PAS_FWD_11_003 analysis

    * Added the LHCB_2012_I1119400 analysis

2012-12-12  Andy Buckley

    * Correction to the jet acceptance in CMS_2011_S9120041, from Sercan Sen: thanks!

2012-12-12  Hendrik Hoeth

    * Added the CMS_2012_PAS_QCD_11_010 analysis

2012-12-07  Andy Buckley

    * Version number bump to 1.8.2 -- release approaching.

    * Rewrite of the ALICE_2012_I1181770 analysis to make it a bit more sane and acceptable.

    * Adding a note on FourVector and FourMomentum that operator- and operator-= invert both the space and time components: use of -= can result in a vector with negative energy.

    * Adding particlesByRapidity and particlesByAbsRapidity to FinalState.

2012-12-07  Hendrik Hoeth

    * Added the ALICE_2012_I1181770 analysis

    * Bump version to 1.8.2

2012-12-06  Hendrik Hoeth

    * Added the ATLAS_2012_I1188891 analysis

    * Added the ATLAS_2012_I1118269 analysis

    * Added the CMS_2012_I1184941 analysis

    * Added the LHCB_2010_I867355 analysis

    * Added TGraphErrors support to root2flat

2012-11-27  Andy Buckley

    * Converting the CMS_2012_I1102908 analysis to use YODA.

    * Adding XLabel and YLabel setting in histo/profile/scatter booking.
2012-11-27  Hendrik Hoeth

    * Fix make-plots png creation for SL5

2012-11-23  Peter Richardson

    * Added the ATLAS_2012_CONF_2012_153 4-lepton SUSY search

2012-11-17  Andy Buckley

    * Adding MC_PHOTONS, by Steve Lloyd and AB, for testing general unisolated photon properties, especially those associated with charged leptons (e and mu).

2012-11-16  Andy Buckley

    * Adding MC_PRINTEVENT, a convenient (but verbose!) analysis for printing out event details to stdout.

2012-11-15  Andy Buckley

    * Removing the long-unused/defunct autopackage system.

2012-11-15  Hendrik Hoeth

    * Added the LHCF_2012_I1115479 analysis

    * Added the ATLAS_2011_I894867 analysis

2012-11-14  Hendrik Hoeth

    * Added the CMS_2012_I1102908 analysis

2012-11-14  Andy Buckley

    * Converting the argument order of logspace, clarifying the arguments, updating affected code, and removing Analysis::logBinEdges.

    * Merging updates from the AIDA maintenance branch up to r4002 (the latest revision for the next merges is r4009).

2012-11-11  Andy Buckley

    * include/Math/: Various numerical fixes to Vector3::angle, and changing the 4-vector mass treatment to permit spacelike virtualities (in some cases even the fuzzy isZero assert check was being violated). The angle check allows a clean-up of some workaround code in MC_VH2BB.

2012-10-15  Hendrik Hoeth

    * Added the CMS_2012_I1107658 analysis

2012-10-11  Hendrik Hoeth

    * Added the CDF_2012_NOTE10874 analysis

2012-10-04  Hendrik Hoeth

    * Added the ATLAS_2012_I1183818 analysis

2012-07-17  Hendrik Hoeth

    * Cleanup and multiple fixes in CMS_2011_S9120041

    * Bugfixes in ALEPH_2004_S5765862 and ATLAS_2010_CONF_2010_049 (thanks to Anil Pratap)

2012-08-09  Andy Buckley

    * Fixing the aida2root command-line help message, and converting to TH* rather than TGraph by default.

2012-07-24  Andy Buckley

    * Improvements/migrations to rivet-mkhtml, rivet-mkanalysis, and rivet-buildplugin.

2012-07-17  Hendrik Hoeth

    * Add CMS_2012_I1087342

2012-07-12  Andy Buckley

    * Fix rivet-mkanalysis a bit for YODA compatibility.

2012-07-05  Hendrik Hoeth

    * Version 1.8.1!
2012-07-05  Holger Schulz

    * Add ATLAS_2011_I945498

2012-07-03  Hendrik Hoeth

    * Bugfix for the transverse mass (thanks to Gavin Hesketh)

2012-06-29  Hendrik Hoeth

    * Merge the YODA branch into trunk. YODA is alive!!!!!!

2012-06-26  Holger Schulz

    * Add ATLAS_2012_I1091481

2012-06-20  Hendrik Hoeth

    * Added D0_2011_I895662: 3-jet mass

2012-04-24  Hendrik Hoeth

    * Fixed a few bugs in rivet-rmgaps

    * Added a new TOTEM dN/deta analysis

2012-03-19  Andy Buckley

    * Version 1.8.0!

    * src/Projections/UnstableFinalState.cc: Fix a compiler warning.

    * Version bump for testing: 1.8.0beta1.

    * src/Core/AnalysisInfo.cc: Add printout of YAML parser exception error messages to aid debugging.

    * bin/Makefile.am: Attempt to fix the rivet-nopy build on SLC5.

    * src/Analyses/LHCB_2010_S8758301.cc: Add two missing entries to the PDGID -> lifetime map.

    * src/Projections/UnstableFinalState.cc: Extend the list of vetoed particles to include reggeons.

2012-03-16  Andy Buckley

    * Version change to 1.8.0beta0 -- nearly ready for the long-awaited release!

    * pyext/setup.py.in: Adding handling for the YAML library: fix for the Genser build from Anton Karneyeu.

    * src/Analyses/LHCB_2011_I917009.cc: Hiding the lifetime-lookup error message if the offending particle is not a hadron.

    * include/Rivet/Math/MathHeader.hh: Using unnamespaced std::isnan and std::isinf as standard.

2012-03-16  Hendrik Hoeth

    * Improve default plot behaviour for 2D histograms

2012-03-15  Hendrik Hoeth

    * Make ATLAS_2012_I1084540 less verbose, and general code cleanup of that analysis.

    * New-style plugin hook in ATLAS_2011_I926145, ATLAS_2011_I944826 and ATLAS_2012_I1084540

    * Fix compiler warnings in ATLAS_2011_I944826 and CMS_2011_S8973270

    * CMS_2011_S8941262: Weights are double, not int.

    * Disable the inRange() tests in test/testMath.cc until we have a proper fix for the compiler warnings we see on SL5.

2012-03-07  Andy Buckley

    * Marking ATLAS_2011_I919017 as VALIDATED (this should have happened a long time ago) and adding more references.
2012-02-28  Hendrik Hoeth

    * lighthisto.py: Caching for re.compile(). This speeds up aida2flat and flat2aida by more than an order of magnitude.

2012-02-27  Andy Buckley

    * doc/mk-analysis-html: Adding more LaTeX/text -> HTML conversion replacements, including better <,> handling.

2012-02-26  Andy Buckley

    * Adding CMS_2011_S8973270, CMS_2011_S8941262, CMS_2011_S9215166, and CMS_QCD_10_024, from CMS.

    * Adding the LHCB_2011_I917009 analysis, from Alex Grecu.

    * src/Core/Analysis.cc, include/Rivet/Analysis.hh: Add a numeric-arg version of histoPath().

2012-02-24  Holger Schulz

    * Adding the ATLAS Ks/Lambda analysis.

2012-02-20  Andy Buckley

    * src/Analyses/ATLAS_2011_I925932.cc: Using the new overflow-aware normalize() in place of counters and scale(..., 1/count)

2012-02-14  Andy Buckley

    * Splitting MC_GENERIC to put the PDF and PID plotting into MC_PDFS and MC_IDENTIFIED respectively.

    * Renaming MC_LEADINGJETS to MC_LEADJETUE.

2012-02-14  Hendrik Hoeth

    * DELPHI_1996_S3430090 and ALEPH_1996_S3486095: fix rapidity vs. {Thrust,Sphericity}-axis.

2012-02-14  Andy Buckley

    * bin/compare-histos: Don't attempt to remove bins from MC histos where they aren't found in the ref file, if the ref file is not expt data, or if the new --no-rmgapbins arg is given.

    * bin/rivet: Remove the conversion of requested analysis names to upper-case: mixed-case analysis names will now work.

2012-02-14  Frank Siegert

    * Bugfixes and improvements for MC_TTBAR:
      - Avoid assert failure with logspace starting at 0.0
      - Ignore the charged lepton in jet finding (otherwise the jet multiplicity is always +1).
      - Add some dR/deta/dphi distributions as noted in the TODO
      - Change pT plots to logspace as well (to avoid low-stat high-pT bins)

2012-02-10  Hendrik Hoeth

    * The rivet-mkhtml -c option now has the semantics of a .plot file. The contents are appended to the dat output by compare-histos.
2012-02-09  David Grellscheid

    * Fixed broken UnstableFS behaviour

2012-01-25  Frank Siegert

    * Improvements in make-plots:
      - Add PlotTickLabels and RatioPlotTickLabels options (cf. make-plots.txt)
      - Make ErrorBars and ErrorBands non-exclusive (and change their order, such that Bars are on top of Bands)

2012-01-25  Holger Schulz

    * Add the ATLAS diffractive gap analysis

2012-01-23  Andy Buckley

    * bin/rivet: When using --list-analyses, the analysis summary is now printed out when the log level is <= INFO, rather than < INFO. The effect on command-line behaviour is that useful identifying info is now printed by default when using --list-analyses, rather than requiring --list-analyses -v. To get the old behaviour, e.g. if using the output of rivet --list-analyses for scripting, now use --list-analyses -q.

2012-01-22  Andy Buckley

    * Tidying lighthisto, including fixing the order in which +- error values are passed to the Bin constructor in fromFlatHisto.

2012-01-16  Frank Siegert

    * Bugfix in ATLAS_2012_I1083318: Include non-signal neutrinos in jet clustering.

    * Add a first version of ATLAS_2012_I1083318 (W+jets). Still UNVALIDATED until final happiness with validation plots arises and the data is in HepData.

    * Bugfix in ATLAS_2010_S8919674: Really use the neutrino with highest pT for Etmiss. Doesn't seem to make very much difference, but is more correct in principle.

2012-01-16  Peter Richardson

    * Fixes to ATLAS_2011_S9225137 to include reference data

2012-01-13  Holger Schulz

    * Add the ATLAS inclusive lepton analysis

2012-01-12  Hendrik Hoeth

    * Font selection support in rivet-mkhtml

2012-01-11  Peter Richardson

    * Added pi0 to the list of particles.

2012-01-11  Andy Buckley

    * Removing references to Boost random numbers.

2011-12-30  Andy Buckley

    * Adding a placeholder rivet-which script (not currently installed).
    * Tweaking to avoid a very time-consuming debug printout in compare-histos with the -v flag, and modifying the Rivet env vars in rivet-mkhtml before calling compare-histos, to eliminate problems induced by relative paths (i.e. "." does not mean the same thing when the directory is changed within the script).

2011-12-12  Andy Buckley

    * Adding a command-line completion function for rivet-mkhtml.

2011-12-12  Frank Siegert

    * Fix for a factor of 2.0 in the normalisation of CMS_2011_S9086218

    * Add an --ignore-missing option to rivet-mkhtml to ignore non-existing AIDA files.

2011-12-06  Andy Buckley

    * Include the underflow and overflow bins in the normalisation when calling Analysis::normalise(h)

2011-11-23  Andy Buckley

    * Bumping the version to 1.8.0alpha0, since the Jet interface changes are quite a major break with backward compatibility (although the vast majority of analyses should be unaffected).

    * Removing crufty legacy stuff from the Jet class -- there is never any ambiguity now between whether Particle or FourMomentum objects are the constituents, and the jet 4-momentum is set explicitly by the jet alg, so that e.g. there is no mismatch if the FastJet pt recombination scheme is used.

    * Adding default do-nothing implementations of Analysis::init() and Analysis::finalize(), since it is possible for analysis implementations to not need to do anything in these methods, and forcing analysis authors to write do-nothing boilerplate code is not "the Rivet way"!

2011-11-19  Andy Buckley

    * Adding variant constructors to FastJets with a more natural Plugin* argument, and decrufting the constructor implementations a bit.

    * bin/rivet: Adding a more helpful error message if the rivet module can't be loaded, grouping the option parser options, removing the -A option (this just doesn't seem useful anymore), and providing a --pwd option as a shortcut to append "." to the search path.

2011-11-18  Andy Buckley

    * Adding a guide to compiling a new analysis template to the output message of rivet-mkanalysis.
2011-11-11 Andy Buckley

  * Version 1.7.0 release!

  * Protecting the OPAL 2004 analysis against NaNs in the hemispheres projection -- I can't track the origin of these and suspect some occasional memory corruption.

2011-11-09 Andy Buckley

  * Renaming source files for EXAMPLE and PDG_HADRON_MULTIPLICITIES(_RATIOS) analyses to match the analysis names.

  * Cosmetic fixes in ATLAS_2011_S9212183 SUSY analysis.

  * Adding new ATLAS W pT analysis from Elena Yatsenko (slightly adapted).

2011-10-20 Frank Siegert

  * Extend API of W/ZFinder to allow for specification of input final state in which to search for leptons/photons.

2011-10-19 Andy Buckley

  * Adding new version of LHCB_2010_S8758301, based on submission from Alex Grecu. There is some slightly dodgy-looking GenParticle* fiddling going on, but apparently it's necessary (and hopefully robust).

2011-10-17 Andy Buckley

  * bin/rivet-nopy linker line tweak to make compilation work with GCC 4.6 (-lHepMC has to be explicitly added for some reason).

2011-10-13 Frank Siegert

  * Add four CMS QCD analyses kindly provided by CMS.

2011-10-12 Andy Buckley

  * Adding a separate test program for non-matrix/vector math functions, and adding a new set of int/float literal arg tests for the inRange functions in it.

  * Adding a jet multiplicity plot for jets with pT > 30 GeV to MC_TTBAR.

2011-10-11 Andy Buckley

  * Removing SVertex.

2011-10-11 James Monk

  * root2flat was missing the first bin (plus spurious last bin).

  * My version of bash does not understand the pipe syntax |& in rivet-buildplugin.

2011-09-30 James Monk

  * Fix bug in ATLAS_2010_S8817804 that misidentified the akt4 jets as akt6.

2011-09-29 Andy Buckley

  * Converting FinalStateHCM to a slightly more general DISFinalState.

2011-09-26 Andy Buckley

  * Adding a default libname argument to rivet-buildplugin. If the first argument doesn't have a .so library suffix, then use RivetAnalysis.so as the default.

2011-09-19 Hendrik Hoeth

  * make-plots: Fixing regex for \physicscoor. Adding "FrameColor" option.

2011-09-17 Andy Buckley

  * Improving interactive metadata printout, by not printing headings for missing info.

  * Bumping the release number to 1.7.0alpha0, since with these SPIRES/Inspire changes and the MissingMomentum API change we need more than a minor release.

  * Updating the mkanalysis, BibTeX-grabbing and other places that care about analysis SPIRES IDs to also be able to handle the new Inspire system record IDs. The missing link is getting to HepData from an Inspire code...

  * Using the .info file rather than an in-code declaration to specify that an analysis needs cross-section information.

  * Adding Inspire support to the AnalysisInfo and Analysis interfaces. Maybe we can find a way to combine the two, e.g. return the SPIRES code prefixed with an "S" if no Inspire ID is available...

2011-09-17 Hendrik Hoeth

  * Added ALICE_2011_S8909580 (strange particle production at 900 GeV).

  * Feed-down correction in ALICE_2011_S8945144.

2011-09-16 Andy Buckley

  * Adding ATLAS track jet analysis, modified from the version provided by Seth Zenz: ATLAS_2011_I919017. Note that this analysis is currently using the Inspire ID rather than the Spires one: we're clearly going to have to update the API to handle Inspire codes, so might as well start now...

2011-09-14 Andy Buckley

  * Adding the ATLAS Z pT measurement at 7 TeV (ATLAS_2011_S9131140) and an MC analysis for VH->bb events (MC_VH2BB).

2011-09-12 Andy Buckley

  * Removing uses of getLog, cout, cerr, and endl from all standard analyses and projections, except in a very few special cases.

2011-09-10 Andy Buckley

  * Changing the behaviour and interface of the MissingMomentum projection to calculate vector ET correctly. This was previously calculated according to the common definition of -E*sin(theta) of the summed visible 4-momentum in the event, but that is incorrect because the timelike term grows monotonically. Instead, transverse 2-vectors of size ET need to be constructed for each visible particle, and vector-summed in the transverse plane. The rewrite of this behaviour made it opportune to make an API improvement: the previous method names scalarET/vectorET() have been renamed to scalar/vectorEt() to better match the Rivet FourMomentum::Et() method, and MissingMomentum::vectorEt() now returns a Vector3 rather than a double so that the transverse missing Et direction is also available. Only one data analysis has been affected by this change in behaviour: the D0_2004_S5992206 dijet delta(phi) analysis. It's expected that this change will not be very significant, as it is a *veto* on significant missing ET to reduce non-QCD contributions. MC studies using this analysis ~always run with QCD events only, so these contributions should be small. The analysis efficiency may have been greatly improved, as fewer events will now fail the missing ET veto cut.

  * Add sorting of the ParticleVector returned by the ChargedLeptons projection.

  * configure.ac: Adding a check to make sure that no-one tries to install into --prefix=$PWD.

2011-09-04 Andy Buckley

  * lighthisto fixes from Christian Roehr.

2011-08-26 Andy Buckley

  * Removing deprecated features: the setBeams(...) method on Analysis, the MaxRapidity constant, the split(...) function, the default init() method from AnalysisHandler and its test, and the deprecated TotalVisibleMomentum and PVertex projections.

2011-08-23 Andy Buckley

  * Adding a new DECLARE_RIVET_PLUGIN wrapper macro to hide the details of the plugin hook system from analysis authors. Migration of all analyses and the rivet-mkanalysis script to use this as the standard plugin hook syntax.
  * Also call the --cflags option on root-config when using the --root option with rivet-buildplugin (thanks to Richard Corke for the report).

2011-08-23 Frank Siegert

  * Added ATLAS_2011_S9126244.

  * Added ATLAS_2011_S9128077.

2011-08-23 Hendrik Hoeth

  * Added ALICE_2011_S8945144.

  * Remove obsolete setBeams() from the analyses.

  * Update CMS_2011_S8957746 reference data to the official numbers.

  * Use Inspire rather than Spires.

2011-08-19 Frank Siegert

  * More NLO parton-level generator friendliness: Don't crash or fail when there are no beam particles.

  * Add --ignore-beams option to skip compatibility check.

2011-08-09 David Mallows

  * Fix aida2flat to ignore empty dataPointSet.

2011-08-07 Andy Buckley

  * Adding TEXINPUTS and LATEXINPUTS prepend definitions to the variables provided by rivetenv.(c)sh. A manual setting of these variables that didn't include the Rivet TEXMFHOME path was breaking make-plots on lxplus, presumably since the system LaTeX packages are so old there.

2011-08-02 Frank Siegert

  * Version 1.6.0 release!

2011-08-01 Frank Siegert

  * Overhaul of the WFinder and ZFinder projections, including a change of interface. This solves potential problems with leptons which are not W/Z constituents being excluded from the RemainingFinalState.

2011-07-29 Andy Buckley

  * Version 1.5.2 release!

  * New version of aida2root from James Monk.

2011-07-29 Frank Siegert

  * Fix implementation of --config file option in make-plots.

2011-07-27 David Mallows

  * Updated MC_TTBAR.plot to reflect updated analysis.

2011-07-25 Andy Buckley

  * Adding a useTransverseMass flag method and implementation to InvMassFinalState, and using it in the WFinder, after feedback from Gavin Hesketh. This was the neatest way I could do it :S Some other tidying up happened along the way.

  * Adding transverse mass massT and massT2 methods and functions for FourMomentum.
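The massT/massT2 helpers mentioned in the 2011-07-25 entry can be sketched standalone. This assumes the common single-momentum definition mT^2 = E^2 - pz^2; the struct and function names are invented for the illustration, not the exact Rivet FourMomentum API:

```cpp
#include <cassert>
#include <cmath>

// Minimal stand-in for a 4-momentum (E, px, py, pz).
struct Mom4 { double E, px, py, pz; };

// Transverse mass squared: mT^2 = E^2 - pz^2 (assumed convention).
double massT2(const Mom4& p) { return p.E*p.E - p.pz*p.pz; }

// Transverse mass, guarding against tiny negative mT^2 from round-off.
double massT(const Mom4& p) {
  const double mt2 = massT2(p);
  return mt2 > 0 ? std::sqrt(mt2) : 0.0;
}
```

For a massless particle in the transverse plane (pz = 0), massT reduces to the energy; for a particle purely along the beam axis it vanishes.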
2011-07-22 Frank Siegert

  * Added ATLAS_2011_S9120807.

  * Add two more observables to MC_DIPHOTON and make its isolation cut more LHC-like.

  * Add linear photon pT histo to MC_PHOTONJETS.

2011-07-20 Andy Buckley

  * Making MC_TTBAR work with semileptonic ttbar events and generally tidying the code.

2011-07-19 Andy Buckley

  * Version bump to 1.5.2.b01 in preparation for a release in the very near future.

2011-07-18 David Mallows

  * Replaced MC_TTBAR: Added t,tbar reconstruction. Not yet working.

2011-07-18 Andy Buckley

  * bin/rivet-buildplugin.in: Pass the AM_CXXFLAGS variable (including the warning flags) to the C++ compiler when building user analysis plugins.

  * include/LWH/DataPointSet.h: Fix accidental setting of errorMinus = scalefactor * error_Plus_. Thanks to Anton Karneyeu for the bug report!

2011-07-18 Hendrik Hoeth

  * Added CMS_2011_S8884919 (charged hadron multiplicity in NSD events corrected to pT>=0).

  * Added CMS_2010_S8656010 and CMS_2010_S8547297 (charged hadron pT and eta in NSD events).

  * Added CMS_2011_S8968497 (chi_dijet).

  * Added CMS_2011_S8978280 (strangeness).

2011-07-13 Andy Buckley

  * Rivet PDF manual updates, to not spread disinformation about bootstrapping a Genser repo.

2011-07-12 Andy Buckley

  * bin/make-plots: Protect property reading against unstripped \r characters from DOS newlines.

  * bin/rivet-mkhtml: Add a -M unmatch regex flag (note that these are matching the analysis path rather than individual histos on this script), and speed up the initial analysis identification and selection by avoiding loops of regex comparisons for repeats of strings which have already been analysed.

  * bin/compare-histos: remove the completely (?) unused histogram list, and add -m and -M regex flags, as for aida2flat and flat2aida.

2011-06-30 Hendrik Hoeth

  * fix fromFlat() in lighthisto: It would ignore histogram paths before.

  * flat2aida: preserve histogram order from .dat files.

2011-06-27 Andy Buckley

  * pyext/setup.py.in: Use CXXFLAGS and LDFLAGS safely in the Python extension build, and improve the use of build/src directory arguments.

2011-06-23 Andy Buckley

  * Adding a tentative rivet-updateanalyses script, based on lhapdf-getdata, which will download new analyses as requested. We could change our analysis-providing behaviour a bit to allow this sort of delivery mechanism to be used as the normal way of getting analysis updates without us having to make a whole new Rivet release. It is nice to be able to identify analyses with releases, though, for tracking whether bugs have been addressed.

2011-06-10 Frank Siegert

  * Bugfixes in WFinder.

2011-06-10 Andy Buckley

  * Adding \physicsxcoor and \physicsycoor treatment to make-plots.

2011-06-06 Hendrik Hoeth

  * Allow for negative cross-sections. NLO tools need this.

  * make-plots: For RatioPlotMode=deviation also consider the MC uncertainties, not just data.

2011-06-04 Hendrik Hoeth

  * Add support for goodness-of-fit calculations to make-plots. The results are shown in the legend, and one histogram can be selected to determine the color of the plot margin. See the documentation for more details.

2011-06-04 Andy Buckley

  * Adding auto conversion of Histogram2D to DataPointSets in the AnalysisHandler _normalizeTree method.

2011-06-03 Andy Buckley

  * Adding a file-weight feature to the Run object, which will optionally rescale the weights in the provided HepMC files. This should be useful for e.g. running on multiple differently-weighted AlpGen HepMC files/streams. The new functionality is used by the rivet command via an optional weight appended to the filename with a colon delimiter, e.g. "rivet fifo1.hepmc fifo2.hepmc:2.31".

2011-06-01 Hendrik Hoeth

  * Add BeamThrust projection.

2011-05-31 Hendrik Hoeth

  * Fix LIBS for fastjet-3.0.

  * Add basic infrastructure for Taylor plots in make-plots.

  * Fix OPAL_2004_S6132243: They are using charged+neutral.

  * Release 1.5.1

2011-05-22 Andy Buckley

  * Adding plots of stable and decayed PID multiplicities to MC_GENERIC (useful for sanity-checking generator setups).

  * Removing actually-unused ProjectionApplier.fhh forward declaration header.

2011-05-20 Andy Buckley

  * Removing import of ipython shell from rivet-rescale, having just seen it throw a multi-coloured warning message on a student's lxplus Rivet session!

  * Adding support for the compare-histos --no-ratio flag when using rivet-mkhtml. Adding --rel-ratio, --linear, etc. is an exercise for the enthusiast ;-)

2011-05-10 Andy Buckley

  * Internal minor changes to the ProjectionHandler and ProjectionApplier interfaces, in particular changing the ProjectionHandler::create() function to be called getInstance and to return a reference rather than a pointer. The reference change is to make way for an improved singleton implementation, which cannot yet be used due to a bug in projection memory management. The code of the improved singleton is available, but commented out, in ProjectionManager.hh to allow for easier migration and to avoid branching.

2011-05-08 Andy Buckley

  * Extending flat2aida to be able to read from and write to stdin/out as for aida2flat, and also eliminating the internal histo parsing representation in favour of the one in lighthisto. lighthisto's fromFlat also needed a bit of an overhaul: it has been extended to parse each histo's chunk of text (including BEGIN and END lines) in fromFlatHisto, and for fromFlat to parse a collection of histos from a file, in keeping with the behaviour of fromDPS/fromAIDA. Merging into Professor is now needed.

  * Extending aida2flat to have a better usage message, to accept input from stdin for command chaining via pipes, and to be a bit more sensibly internally structured (although it also now has to hold all histos in memory before writing out -- that shouldn't be a problem for anything other than truly huge histo files).
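The "filename:weight" syntax described in the 2011-06-03 entry above ("rivet fifo1.hepmc fifo2.hepmc:2.31") could be parsed along these lines. This is purely an illustrative sketch with invented names, not the actual Rivet parsing code:

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>
#include <string>
#include <utility>

// Split an argument of the form "file.hepmc:2.31" into (filename, weight).
// If no parseable weight suffix is present, fall back to unit weight.
std::pair<std::string, double> splitFileWeight(const std::string& arg) {
  const std::size_t pos = arg.rfind(':');
  if (pos != std::string::npos) {
    const std::string tail = arg.substr(pos + 1);
    char* end = nullptr;
    const double w = std::strtod(tail.c_str(), &end);
    // Only treat the tail as a weight if it parsed completely as a number
    if (!tail.empty() && end && *end == '\0')
      return {arg.substr(0, pos), w};
  }
  return {arg, 1.0};  // default: unit weight
}
```

Using rfind and requiring the whole tail to parse as a number means a colon elsewhere in a path (e.g. in a URL-like name) does not get misread as a weight.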
2011-05-04 Andy Buckley

  * compare-histos: If using --mc-errs style, prefer dotted and dashdotted line styles to dashed, since dashes are often too long to be distinguishable from solid lines. Even better might be to always use a solid line for MC errs style, and to add more colours.

  * rivet-mkhtml: use a no-mc-errors drawing style by default, to match the behaviour of compare-histos, which it calls. The --no-mc-errs option has been replaced with an --mc-errs option.

2011-05-04 Hendrik Hoeth

  * Ignore duplicate files in compare-histos.

2011-04-25 Andy Buckley

  * Adding some hadron-specific N and sumET vs. |eta| plots to MC_GENERIC.

  * Re-adding an explicit attempt to get the beam particles, since HepMC's IO_HERWIG seems to not always set them even though it's meant to.

2011-04-19 Hendrik Hoeth

  * Added ATLAS_2011_S9002537 W asymmetry analysis.

2011-04-14 Hendrik Hoeth

  * deltaR, deltaPhi, deltaEta now available in all combinations of FourVector, FourMomentum, Vector3, doubles. They also accept jets and particles as arguments now.

2011-04-13 David Grellscheid

  * added ATLAS 8983313: 0-lepton BSM.

2011-04-01 Andy Buckley

  * bin/rivet-mkanalysis: Don't try to download SPIRES or HepData info if it's not a standard analysis (i.e. if the SPIRES ID is not known), and make the default .info file validly parseable by YAML, which was an unfortunate gotcha for anyone writing a first analysis.

2011-03-31 Andy Buckley

  * bin/compare-histos: Write more appropriate ratio plot labels when not comparing to data, and use the default make-plots labels when comparing to data.

  * bin/rivet-mkhtml: Adding a timestamp to the generated pages, and a -t/--title option to allow setting the main HTML page title on the command line: otherwise it becomes impossible to tell these pages apart when you have a lot of them, except by URL!

2011-03-24 Andy Buckley

  * bin/aida2flat: Adding a -M option to *exclude* histograms whose paths match a regex. Writing a negative lookahead regex with positive matching was far too awkward!

2011-03-18 Leif Lonnblad

  * src/Core/AnalysisHandler.cc (AnalysisHandler::removeAnalysis): Fixed strange shared pointer assignment that caused seg-fault.

2011-03-13 Hendrik Hoeth

  * filling of functions works now in a more intuitive way (I hope).

2011-03-09 Andy Buckley

  * Version 1.5.0 release!

2011-03-08 Andy Buckley

  * Adding some extra checks for external packages in make-plots.

2011-03-07 Andy Buckley

  * Changing the accuracy of the beam energy checking to 1%, to make the UI a bit more forgiving. It's still best to specify exactly the right energy of course!

2011-03-01 Andy Buckley

  * Adding --no-plottitle to compare-histos (+ completion).

  * Fixing segfaults in UA1_1990_S2044935 and UA5_1982_S875503.

  * Bump ABI version numbers for 1.5.0 release.

  * Use AnalysisInfo for storage of the NeedsCrossSection analysis flag.

  * Allow field setting in AnalysisInfo.

2011-02-27 Hendrik Hoeth

  * Support LineStyle=dashdotted in make-plots.

  * New command-line option --style for compare-histos. Options are "default", "bw" and "talk".

  * cleaner uninstall

2011-02-26 Andy Buckley

  * Changing internal storage and return type of Particle::pdgId() to PdgId, and adding Particle::energy().

  * Renaming Analysis::energies() as Analysis::requiredEnergies().

  * Adding beam energies into beam consistency checking: Analysis::isCompatible methods now also require the beam energies to be provided.

  * Removing long-deprecated AnalysisHandler::init() constructor and AnalysisHandler::removeIncompatibleAnalyses() methods.

2011-02-25 Andy Buckley

  * Adding --disable-obsolete, which takes its value from the value of --disable-preliminary by default.

  * Replacing RivetUnvalidated and RivetPreliminary plugin libraries with optionally-configured analysis contents in the experiment-specific plugin libraries. This avoids issues with making libraries rebuild consistently when sources were reassigned between libraries.
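The deltaPhi/deltaEta/deltaR helpers mentioned in the 2011-04-14 entry above follow the usual collider conventions: deltaPhi is folded into [0, pi] and deltaR is the quadrature sum. A standalone sketch (invented signatures, double arguments only, not the full overload set described in the entry):

```cpp
#include <cassert>
#include <cmath>

const double PI = 3.14159265358979323846;

// Azimuthal separation folded into [0, pi].
double deltaPhi(double phi1, double phi2) {
  double dphi = std::fabs(phi1 - phi2);
  while (dphi > PI) dphi = std::fabs(dphi - 2*PI);
  return dphi;
}

// deltaR = sqrt(deltaEta^2 + deltaPhi^2) in the (eta, phi) plane.
double deltaR(double eta1, double phi1, double eta2, double phi2) {
  const double deta = eta1 - eta2;
  const double dphi = deltaPhi(phi1, phi2);
  return std::sqrt(deta*deta + dphi*dphi);
}
```

The folding step matters near the phi = +/-pi boundary: two particles at phi = 3.0 and phi = -3.0 are separated by about 0.28 rad, not 6.0.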
2011-02-24 Andy Buckley

  * Changing analysis plugin registration to fall back through available paths rather than have RIVET_ANALYSIS_PATH totally override the built-in paths. The first analysis hook of a given name to be found is now the one that's used: any duplicates found will be warned about but unused. getAnalysisLibPaths() now returns *all* the search paths, in keeping with the new search behaviour.

2011-02-22 Andy Buckley

  * Moving the definition of the MSG_* macros into the Logging.hh header. They can't be used everywhere, though, as they depend on the existence of a this->getLog() method in the call scope. This move makes them available in e.g. AnalysisHandler and other bits of framework other than projections and analyses.

  * Adding a gentle print-out from the Rivet AnalysisHandler if preliminary analyses are being used, and strengthening the current warning if unvalidated analyses are used.

  * Adding documentation about the validation "process" and the (un)validated and preliminary analysis statuses.

  * Adding the new RivetPreliminary analysis library, and the corresponding --disable-preliminary configure flag. Analyses in this library are subject to change names, histograms, reference data values, etc. between releases: make sure you check any dependences on these analyses when upgrading Rivet.

  * Change the Python script ref data search behaviours to use Rivet ref data by default where available, rather than requiring a -R option. Where relevant, -R is still a valid option, to avoid breaking legacy scripts, and there is a new --no-rivet-refs option to turn the default searching *off*.

  * Add the prepending and appending optional arguments to the path searching functions. This will make it easier to combine the search functions with user-supplied paths in Python scripts.

  * Make make-plots killable!

  * Adding Rivet version to top of run printout.

  * Adding Run::crossSection() and printing out the cross-section in pb at the end of a Rivet run.
2011-02-22 Hendrik Hoeth

  * Make lighthisto.py aware of 2D histograms.

  * Adding published versions of the CDF_2008 leading jets and DY analyses, and marking the preliminary ones as "OBSOLETE".

2011-02-21 Andy Buckley

  * Adding PDF documentation for path searching and .info/.plot files, and tidying overfull lines.

  * Removing unneeded const declarations from various return-by-value path and internal binning functions. Should not affect ABI compatibility but will force recompilation of external packages using the RivetPaths.hh and Utils.hh headers.

  * Adding findAnalysis*File(fname) functions, to be used by Rivet scripts and external programs to find files known to Rivet according to Rivet's (newly standard) lookup rule.

  * Changing search path function behaviour to always return *all* search directories rather than overriding the built-in locations if the environment variables are set.

2011-02-20 Andy Buckley

  * Adding the ATLAS 2011 transverse jet shapes analysis.

2011-02-18 Hendrik Hoeth

  * Support for transparency in make-plots.

2011-02-18 Frank Siegert

  * Added ATLAS prompt photon analysis ATLAS_2010_S8914702.

2011-02-10 Hendrik Hoeth

  * Simple NOOP constructor for Thrust projection.

  * Add CMS event shape analysis. Data read off the plots. We will get the final numbers when the paper is accepted by the journal.

2011-02-10 Frank Siegert

  * Add final version of ATLAS dijet azimuthal decorrelation.

2011-02-10 Hendrik Hoeth

  * remove ATLAS conf note analyses for which we have final data

  * reshuffle histograms in ATLAS minbias analysis to match Hepdata

  * small pT-cut fix in ATLAS track-based UE analysis

2011-01-31 Andy Buckley

  * Doc tweaks and adding cmp-by-|p| functions for Jets, to match those added by Hendrik for Particles.

  * Don't sum photons around muons in the D0 2010 Z pT analysis.

2011-01-27 Andy Buckley

  * Adding ATLAS 2010 min bias and underlying event analyses and data.

2011-01-23 Andy Buckley

  * Make make-plots write out PDF rather than PS by default.
2011-01-12 Andy Buckley

  * Fix several rendering and comparison bugs in rivet-mkhtml.

  * Allow make-plots to write into an existing directory, at the user's own risk.

  * Make rivet-mkhtml produce PDF-based output rather than PS by default (most people want PDF these days). Can we do the same change of default for make-plots?

  * Add getAnalysisPlotPaths() function, and use it in compare-histos.

  * Use proper .info file search path function internally in AnalysisInfo::make.

2011-01-11 Andy Buckley

  * Clean up ATLAS dijet analysis.

2010-12-30 Andy Buckley

  * Adding a run timeout option, and small bug-fixes to the event timeout handling, and making first event timeout work nicely with the run timeout. Run timeout is intended to be used in conjunction with timed batch token expiry, of the type that likes to make 0 byte AIDA files on LCG when Grid proxies time out.

2010-12-21 Andy Buckley

  * Fix the cuts in the CDF 1994 colour coherence analysis.

2010-12-19 Andy Buckley

  * Fixing CDF midpoint cone jet algorithm default construction to have an overlap threshold of 0.5 rather than 0.75. This was recommended by the FastJet manual, and noticed while adding the ATLAS and CMS cones.

  * Adding ATLAS and CMS old iterative cones as "official" FastJets constructor options (they could always have been used by explicit instantiation and attachment of a Fastjet plugin object).

  * Removing defunct and unused ClosestJetShape projection.

2010-12-16 Andy Buckley

  * bin/compare-histos, pyext/lighthisto.py: Take ref paths from rivet module API rather than getting the environment by hand.

  * pyext/lighthisto.py: Only read .plot info from the first matching file (speed-up compare-histos).

2010-12-14 Andy Buckley

  * Augmenting the physics vector functionality to make FourMomentum support maths operators with the correct return type (FourMomentum rather than FourVector).
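The return-type point in the 2010-12-14 entry can be illustrated with a toy hierarchy: when a momentum type derives from a general vector type, the arithmetic operators should yield the derived type so momentum-specific methods remain available on results. All names below are invented for the sketch, not the real Rivet classes:

```cpp
#include <cassert>

// General 4-vector base: (t, x, y, z) components.
struct Vec4 {
  double t = 0, x = 0, y = 0, z = 0;
};

// Momentum specialisation with a physics-flavoured accessor.
struct Mom4 : Vec4 {
  Mom4() = default;
  Mom4(double E, double px, double py, double pz) { t = E; x = px; y = py; z = pz; }
  double E() const { return t; }
};

// Defining operator+ for Mom4 directly means p1 + p2 is again a Mom4,
// so momentum-specific methods like E() stay available on the result --
// the problem the entry describes is an operator on the base returning Vec4.
Mom4 operator+(const Mom4& a, const Mom4& b) {
  return Mom4(a.t + b.t, a.x + b.x, a.y + b.y, a.z + b.z);
}
```

With only a base-class operator, `(p1 + p2).E()` would not compile; with the derived-type operator it does, which is the usability win the entry records.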
2010-12-11 Andy Buckley

  * Adding a --event-timeout option to control the event timeout, adding it to the completion script, and making sure that the init time check is turned OFF once successful!

  * Adding a 3600-second timeout for initialising an event file. If it takes longer than (or anywhere close to) this long, chances are that the event source is inactive for some reason (perhaps accidentally unspecified and stdin is not active, or the event generator has died at the other end of the pipe). The reason for not making it something shorter is that e.g. Herwig++ or Sherpa can have long initialisation times to set up the MPI handler or to run the matrix element integration. A timeout after an hour is still better than a batch job which runs for two days before you realise that you forgot to generate any events!

2010-12-10 Andy Buckley

  * Fixing unbooked-histo segfault in UA1_1990_S2044935 at 63 GeV.

2010-12-08 Hendrik Hoeth

  * Fixes in ATLAS_2010_CONF_083, declaring it validated.

  * Added ATLAS_2010_CONF_046, only two plots are implemented. The paper will be out soon, and we don't need the other plots right now. Data is read off the plots in the note.

  * New option "SmoothLine" for HISTOGRAM sections in make-plots.

  * Changed CustomTicks to CustomMajorTicks and added CustomMinorTicks in make-plots.

2010-12-07 Andy Buckley

  * Update the documentation to explain this latest bump to path lookup behaviours.

  * Various improvements to existing path lookups. In particular, the analysis lib path locations are added to the info and ref paths to avoid having to set three variables when you have all three file types in the same personal plugin directory.

  * Adding setAnalysisLibPaths and addAnalysisLibPath functions. rivet --analysis-path{,-append} now use these and work correctly. Hurrah!

  * Add --show-analyses as an alias for --show-analysis, following a comment at the ATLAS tutorial.

2010-12-07 Hendrik Hoeth

  * Change LegendXPos behaviour in make-plots. Now the top left corner of the legend is used as anchor point.

2010-12-03 Andy Buckley

  * 1.4.0 release.

  * Add bin-skipping to compare-histos to avoid one use of rivet-rmgaps (it's still needed for non-plotting post-processing like Professor).

2010-12-03 Hendrik Hoeth

  * Fix normalisation issues in UA5 and ALEPH analyses.

2010-11-27 Andy Buckley

  * MathUtils.hh: Adding fuzzyGtrEquals and fuzzyLessEquals, and tidying up the math utils collection a bit.

  * CDF 1994 colour coherence analysis overhauled and correction/norm factors fixed. Moved to VALIDATED status.

  * Adding programmable completion for aida2flat and flat2aida.

  * Improvements to programmable completion using the neat _filedir completion shell function which I just discovered.

2010-11-26 Andy Buckley

  * Tweak to floating point inRange to use fuzzyEquals for CLOSED interval equality comparisons.

  * Some BibTeX generation improvements, and fixing the ATLAS dijet BibTeX key.

  * Resolution upgrade in PNG make-plots output.

  * CDF_2005_S6217184.cc, CDF_2008_S7782535.cc: Updates to use the new per-jet JetAlg interface (and some other fixes).

  * JetAlg.cc: Changed the interface on request to return per-jet rather than per-event jet shapes, with an extra jet index argument.

  * MathUtils.hh: Adding index_between(...) function, which is handy for working out which bin a value falls in, given a set of bin edges.

2010-11-25 Andy Buckley

  * Cmp.hh: Adding ASC/DESC (and ANTISORTED) as preferred non-EQUIVALENT enum value synonyms over misleading SORTED/UNSORTED.

  * Change of rapidity scheme enum name to RapScheme.

  * Reworking JetShape a bit further: constructor args now avoid inconsistencies (it was previously possible to define incompatible range-ends and interval). Internal binning implementation also reworked to use a vector of bin edges: the bin details are available via the interface. The general jet pT cuts can be applied via the JetShape constructor.

  * MathUtils.hh: Adding linspace and logspace utility functions. Useful for defining binnings.

  * Adding more general cuts on jet pT and (pseudo)rapidity.

2010-11-11 Andy Buckley

  * Adding special handling of FourMomentum::mass() for computed zero-mass vectors for which mass2 can go (very slightly) negative due to numerical precision.

2010-11-10 Hendrik Hoeth

  * Adding ATLAS-CONF-2010-083 conference note. Data is read from plots. When I run Pythia 6 the bins close to pi/2 are higher than in the note, so I call this "unvalidated". But then ... the note doesn't specify a tune or even just a version for the generators in the plots. Not even if they used Pythia 6 or Pythia 8. Probably 6, since they mention AGILe.

2010-11-10 Andy Buckley

  * Adding a JetAlg::useInvisibles(bool) mechanism to allow ATLAS jet studies to include neutrinos. Anyone who chooses to use this mechanism had better be careful to remove hard neutrinos manually in the provided FinalState object.

2010-11-09 Hendrik Hoeth

  * Adding ATLAS-CONF-2010-049 conference note. Data is read from plots. Fragmentation functions look good, but I can't reproduce the MC lines (or even the relative differences between them) in the jet cross-section plots. So consider those unvalidated for now. Oh, and it seems ATLAS screwed up the error bands in their ratio plots, too. They are upside-down.

2010-11-07 Hendrik Hoeth

  * Adding ATLAS-CONF-2010-081 conference note. Data is read from plots.

2010-11-06 Andy Buckley

  * Deprecating the old JetShape projection and renaming to ClosestJetShape: the algorithm has a tenuous relationship with that actually used in the CDF (and ATLAS) jet shape analyses. CDF analyses to be migrated to the new JetShape projection... and some of that projection's features, design elements, etc. to be finished off: we may as well take this opportunity to clear up what was one of our nastiest pieces of code.

2010-11-05 Hendrik Hoeth

  * Adding ATLAS-CONF-2010-031 conference note. Data is read from plots.
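The binning helpers added in the 2010-11-25/26 MathUtils entries above (linspace, logspace, index_between) can be sketched standalone. The signatures below are guesses for illustration, not the exact Rivet API:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// nbins+1 evenly spaced bin edges from lo to hi.
std::vector<double> linspace(int nbins, double lo, double hi) {
  std::vector<double> edges(nbins + 1);
  for (int i = 0; i <= nbins; ++i)
    edges[i] = lo + (hi - lo) * i / nbins;
  return edges;
}

// Edges evenly spaced in log(x): exponentiate a linspace of the logs.
std::vector<double> logspace(int nbins, double lo, double hi) {
  std::vector<double> edges = linspace(nbins, std::log(lo), std::log(hi));
  for (double& e : edges) e = std::exp(e);
  return edges;
}

// Which bin does x fall in, given sorted bin edges? -1 if out of range.
int index_between(double x, const std::vector<double>& edges) {
  for (std::size_t i = 0; i + 1 < edges.size(); ++i)
    if (x >= edges[i] && x < edges[i + 1]) return static_cast<int>(i);
  return -1;
}
```

For example, with edges from linspace(4, 0, 1), the value 0.3 falls in bin 1 (the [0.25, 0.5) interval), which is exactly the "which bin does a value fall in" lookup the index_between entry describes.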
2010-10-29 Andy Buckley * Making rivet-buildplugin use the same C++ compiler and CXXFLAGS variable as used for the Rivet system build. * Fixing NeutralFinalState projection to, erm, actually select neutral particles (by Hendrik). * Allow passing a general FinalState reference to the JetShape projection, rather than requiring a VetoedFS. 2010-10-07 Andy Buckley * Adding a --with-root flag to rivet-buildplugin to add root-config --libs flags to the plugin build command. 2010-09-24 Andy Buckley * Releasing as Rivet 1.3.0. * Bundling underscore.sty to fix problems with running make-plots on dat files generated by compare-histos from AIDA files with underscores in their names. 2010-09-16 Andy Buckley * Fix error in N_effective definition for weighted profile errors. 2010-08-18 Andy Buckley * Adding MC_GENERIC analysis. NB. Frank Siegert also added MC_HJETS. 2010-08-03 Andy Buckley * Fixing compare-histos treatment of what is now a ref file, and speeding things up... again. What a mess! 2010-08-02 Andy Buckley * Adding rivet-nopy: a super-simple Rivet C++ command line interface which avoids Python to make profiling and debugging easier. * Adding graceful exception handling to the AnalysisHandler event loop methods. * Changing compare-histos behaviour to always show plots for which there is at least one MC histo. The default behaviour should now be the correct one in 99% of use cases. 2010-07-30 Andy Buckley * Merging in a fix for shared_ptrs not being compared for insertion into a set based on raw pointer value. 2010-07-16 Andy Buckley * Adding an explicit library dependency declaration on libHepMC, and hence removing the -lHepMC from the rivet-config --libs output. 2010-07-14 Andy Buckley * Adding a manual section on use of Rivet (and AGILe) as libraries, and how to use the -config scripts to aid compilation. * FastJets projection now allows setting of a jet area definition, plus a hacky mapping for getting the area-enabled cluster sequence. 
Requested by Pavel Starovoitov & Paolo Francavilla. * Lots of script updates in last two weeks! 2010-06-30 Andy Buckley * Minimising amount of Log class mapped into SWIG. * Making Python ext build checks fail with error rather than warning if it has been requested (or, rather, not explicitly disabled). 2010-06-28 Andy Buckley * Converting rivet Python module to be a package, with the dlopen flag setting etc. done around the SWIG generated core wrapper module (rivet.rivetwrap). * Requiring Python >= 2.4.0 in rivet scripts (and adding a Python version checker function to rivet module) * Adding --epspng option to make-plots (and converting to use subprocess.Popen). 2010-06-27 Andy Buckley * Converting JADE_OPAL analysis to use the fastjet exclusive_ymerge_*max* function, rather than just exclusive_ymerge: everything looks good now. It seems that fastjet >= 2.4.2 is needed for this to work properly. 2010-06-24 Andy Buckley * Making rivet-buildplugin look in its own bin directory when trying to find rivet-config. 2010-06-23 Andy Buckley * Adding protection and warning about numerical precision issues in jet mass calculation/histogramming to the MC_JetAnalysis analysis. * Numerical precision improvement in calculation of Vector4::mass2. * Adding relative scale ratio plot flag to compare-histos * Extended command completion to rivet-config, compare-histos, and make-plots. * Providing protected log messaging macros, MSG_{TRACE,DEBUG,INFO,WARNING,ERROR} cf. Athena. * Adding environment-aware functions for Rivet search path list access. 2010-06-21 Andy Buckley * Using .info file beam ID and energy info in HTML and LaTeX documentation. * Using .info file beam ID and energy info in command-line printout. * Fixing a couple of references to temporary variables in the analysis beam info, which had been introduced during refactoring: have reinstated reference-type returns as the more efficient solution. This should not affect API compatibility. 
  * Making SWIG configure-time check include testing for incompatibilities with the C++ compiler (re. the recurring _const_ char* literals issue).

  * Various tweaks to scripts: make-plots and compare-histos processes are now renamed (on Linux), rivet-config is avoided when computing the Rivet version, and RIVET_REF_PATH is also set using the rivet --analysis-path* flags. compare-histos now uses multiple ref data paths for .aida file globbing.

  * Hendrik changed VetoedFinalState comparison to always return UNDEFINED if vetoing on the results of other FS projections is being used. This is the only simple way to avoid problems emanating from the remainingFinalState thing.

2010-06-19  Andy Buckley

  * Adding --analysis-path and --analysis-path-append command-line flags to the rivet script, as a "persistent" way to set or extend the RIVET_ANALYSIS_PATH variable.

  * Changing -Q/-V script verbosity arguments to more standard -q/-v, after Hendrik moaned about it ;)

  * Small fix to TinyXML operator precedence: removes a warning, and I think fixes a small bug.

  * Adding plotinfo entries for new jet rapidity and jet mass plots in MC_JetAnalysis derivatives.

  * Moving MC_JetAnalysis base class into a new libRivetAnalysisTools library, with analysis base class and helper headers to be stored in the reinstated Rivet/Analyses include directory.

2010-06-08  Andy Buckley

  * Removing check for CEDARSTD #define guard, since we no longer compile against AGILe and don't have to be careful about duplication.

  * Moving crappy closest approach and decay significance functions from Utils into SVertex, which is the only place they have ever been used (and is itself almost entirely pointless).

  * Overhauling particle ID <-> name system to clear up ambiguities between enums, ints, particles and beams. There are no more enums, although the names are still available as const static ints, and names are now obtained via a singleton class which wraps an STL map for name/ID lookups in both directions.
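The singleton-wrapped bidirectional name/ID map described in the 2010-06-08 entry can be sketched as follows. This is an illustrative stand-in, not Rivet's actual C++ class: the class name, the lookup method names, and the tiny ID table are all invented for the example.

```python
class ParticleNames:
    """Sketch of a singleton registry mapping PDG IDs <-> names both ways."""
    _instance = None

    def __init__(self):
        # A few illustrative PDG ID/name pairs; the real table is much larger.
        self._id_to_name = {11: "ELECTRON", 13: "MUON", 22: "PHOTON"}
        # The inverse map is derived from the forward one, so the two
        # directions can never get out of sync.
        self._name_to_id = {n: i for i, n in self._id_to_name.items()}

    @classmethod
    def instance(cls):
        # Lazily construct the single shared instance.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def name(self, pid):
        return self._id_to_name.get(pid, "UNKNOWN")

    def pid(self, name):
        return self._name_to_id[name]
```

Both directions then go through the one shared instance, e.g. `ParticleNames.instance().name(13)` and `ParticleNames.instance().pid("PHOTON")`.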
2010-05-18  Hendrik Hoeth

  * Fixing factor-of-2 bug in the error calculation when scaling histograms.

  * Fixing D0_2001_S4674421 analysis.

2010-05-11  Andy Buckley

  * Replacing TotalVisibleMomentum with MissingMomentum in analyses and WFinder. Using vector ET rather than scalar ET in some places.

2010-05-07  Andy Buckley

  * Revamping the AnalysisHandler constructor and data writing, with some LWH/AIDA mangling to bypass the stupid AIDA idea of having to specify the sole output file and format when making the data tree. Preferred AnalysisHandler constructor now takes only one arg -- the runname -- and there is a new AH.writeData(outfile) method to replace AH.commitData(). Doing this now to begin migration to more flexible histogramming in the long term.

2010-04-21  Hendrik Hoeth

  * Fixing LaTeX problems (make-plots) on ancient machines, like lxplus.

2010-04-29  Andy Buckley

  * Fixing (I hope!) the treatment of weighted profile bin errors in LWH.

2010-04-21  Andy Buckley

  * Removing defunct and unused KtJets and Configuration classes.

  * Hiding some internal details from Doxygen.

  * Add @brief Doxygen comments to all analyses, projections and core classes which were missing them.

2010-04-21  Hendrik Hoeth

  * remove obsolete reference to InitialQuarks from DELPHI_2002

  * fix normalisation in CDF_2000_S4155203

2010-04-20  Hendrik Hoeth

  * bin/make-plots: real support for 2-dim histograms plotted as colormaps, updated the documentation accordingly.

  * fix misleading help comment in configure.ac

2010-04-08  Andy Buckley

  * bin/root2flat: Adding this little helper script, minimally modified from one which Holger Schulz made for internal use in ATLAS.

2010-04-05  Andy Buckley

  * Using spiresbib in rivet-mkanalysis: analysis templates made with rivet-mkanalysis will now contain a SPIRES-dumped BibTeX key and entry if possible!

  * Adding BibKey and BibTeX entries to analysis metadata files, and updating doc build to use them rather than the time-consuming SPIRES screen-scraping.
  Added SPIRES BibTeX dumps to all analysis metadata using new (uninstalled & unpackaged) doc/get-spires-data script hack.

  * Updating metadata files to add Energies, Beams and PtCuts entries to all of them.

  * Adding ToDo, NeedsCrossSection, and better treatment of Beams and Energies entries in metadata files and in AnalysisInfo and Analysis interfaces.

2010-04-03  Andy Buckley

  * Frank Siegert: Update of rivet-mkhtml to conform to improved compare-histos.

  * Frank Siegert: LWH output in precision-8 scientific notation, to solve a binning precision problem... the first time we've noticed a problem!

  * Improved treatment of data/reference datasets and labels in compare-histos.

  * Rewrite of rivet-mkanalysis in Python to make way for neat additions.

  * Improving SWIG tests, since once again the user's build system must include SWIG (no test to check that it's a 'good SWIG', since the meaning of that depends on which compiler is being used and we hope that the user system is consistent... evidence from Finkified Macs and bloody SLC5 notwithstanding).

2010-03-23  Andy Buckley

  * Tag as patch release 1.2.1.

2010-03-22  Andy Buckley

  * General tidying of return arguments and intentionally unused parameters to keep -Wextra happy (some complaints remain from TinyXML, FastJet, and HepMC).

  * Some extra bug fixes: in the FastJets projection with an explicit plugin argument, and removing the muon veto cut on FoxWolframMoments.

  * Adding UNUSED macro to help with places where compiler warnings can't be helped.

  * Turning on -Wextra warnings, and fixing some violations.

2010-03-21  Andy Buckley

  * Adding MissingMomentum projection, as replacement for ~all uses of now-deprecated TotalVisibleMomentum projection.

  * Fixing bug with TotalVisibleMomentum projection usage in MC_SUSY analysis.

  * Frank Siegert fixed major bug in pTmin param passing to FastJets projection. D'oh: requires patch release.

2010-03-02  Andy Buckley

  * Tagging for 1.2.0 release... at last!
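The binning-precision point in the 2010-04-03 entry is that bin edges written with too few significant digits no longer compare equal to the original edges when read back. A toy illustration of why eight-digit scientific notation fixes it (not LWH's actual writer code):

```python
def fmt_edge(x):
    # Write a bin edge in precision-8 scientific notation.
    return "%.8e" % x

edge = 1.0 / 3.0
low_prec = float("%.2e" % edge)    # short output: large roundtrip error
high_prec = float(fmt_edge(edge))  # precision-8 output: tiny roundtrip error

# The write/read roundtrip error drops from ~1e-3 to below 1e-8 of the
# edge value, small enough for bin-edge matching to succeed.
```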
2010-03-01  Andy Buckley

  * Updates to manual, manual generation scripts, analysis info etc.

  * Add HepData URL to metadata print-out with rivet --show-analysis.

  * Fix average Et plot in UA1 analysis to only apply to the tracker acceptance (but to include neutral particle contributions, i.e. the region of the calorimeter in the tracker acceptance).

  * Use Et rather than pT in filling the scalar Et measure in TotalVisibleMomentum.

  * Fixes to UA1 normalisation (which is rather funny in the paper).

2010-02-26  Andy Buckley

  * Update WFinder to not place cuts and other restrictions on the neutrino.

2010-02-11  Andy Buckley

  * Change analysis loader behaviour to use ONLY RIVET_ANALYSIS_PATH locations if set, otherwise use ONLY the standard Rivet analysis install path. Should only impact users of personal plugin analyses, who should now explicitly set RIVET_ANALYSIS_PATH to load their analysis... and who can now create personal versions of standard analyses without getting an error message about duplicate loading.

2010-01-15  Andy Buckley

  * Add tests for "stable" heavy flavour hadrons in jets (rather than just testing for c/b hadrons in the ancestor lists of stable jet constituents).

2009-12-23  Hendrik Hoeth

  * New option "RatioPlotMode=deviation" in make-plots.

2009-12-14  Hendrik Hoeth

  * New option "MainPlot" in make-plots. For people who only want the ratio plot and nothing else.

  * New option "ConnectGaps" in make-plots. Set to 1 if you want to connect gaps in histograms with a line when ErrorBars=0. Works both in PLOT and in HISTOGRAM sections.

  * Eliminated global variables for coordinates in make-plots and enabled multithreading.

2009-12-14  Andy Buckley

  * AnalysisHandler::execute now calls AnalysisHandler::init(event) if it has not yet been initialised.

  * Adding more beam configuration features to Beam and AnalysisHandler: the setRunBeams(...) methods on the latter now allow a beam configuration for the run to be specified without using the Run class.
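The loader rule in the 2010-02-11 entry (user path, if set, fully replaces the install path rather than extending it) can be sketched like this; the function name and signature are invented for illustration, not Rivet's actual loader API:

```python
import os

def analysis_search_dirs(install_dir, env=os.environ):
    # If RIVET_ANALYSIS_PATH is set, use ONLY those locations; otherwise
    # use ONLY the standard install path. Note: no merging of the two,
    # which is what lets users shadow standard analyses without duplicate
    # loading errors.
    user_path = env.get("RIVET_ANALYSIS_PATH")
    if user_path:
        return user_path.split(":")
    return [install_dir]
```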
2009-12-11  Andy Buckley

  * Removing use of PVertex from the few remaining analyses. Still used by SVertex, which is itself hardly used and could maybe be removed...

2009-12-10  Andy Buckley

  * Updating JADE_OPAL to do the histo booking in init(), since sqrtS() is now available at that stage.

  * Renaming and slightly re-engineering all MC_*_* analyses to not be collider-specific, now that the Analysis::sqrtS()/beams() methods mean that histograms can be dynamically binned.

  * Creating RivetUnvalidated.so plugin library for unvalidated analyses. Unvalidated analyses now need to be explicitly enabled with a --enable-unvalidated flag on the configure script.

  * Various min bias analyses updated and validated.

2009-12-10  Hendrik Hoeth

  * Propagate SPECIAL and HISTOGRAM sections from .plot files through compare-histos.

  * STAR_2006_S6860818: vs particle mass, validate analysis

2009-12-04  Andy Buckley

  * Use scaling rather than normalising in DELPHI_1996: this is generally desirable, since normalising to 1 for 1/sig dsig/dx observables isn't correct if any events fall outside the histo bounds.

  * Many fixes to OPAL_2004.

  * Improved Hemispheres interface to remove unnecessary consts on returned doubles, and to also return non-squared versions of (scaled) hemisphere masses.

  * Add "make pyclean" make target at the top level to make it easier for developers to clean their Python module build when the API is extended.

  * Identify use of unvalidated analyses with a warning message at runtime.

  * Providing Analysis::sqrtS() and Analysis::beams(), and making sure they're available by the time the init methods are called.

2009-12-02  Andy Buckley

  * Adding passing of first event sqrt(s) and beams to analysis handler.

  * Restructuring running to only use one HepMC input file (no-one was using multiple ones, right?) and to break down the Run class to cleanly separate the init and event loop phases. End of file is now neater.
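The scaling-vs-normalising point in the 2009-12-04 entry can be made concrete with toy numbers (all values below are invented for illustration): if some event weight falls outside the histogram range, forcing the visible bins to unit area overestimates the distribution, while scaling by 1/sumOfWeights correctly leaves the in-range integral below 1.

```python
bin_width = 1.0
in_range = [4.0, 4.0]   # sums of weights landing in the two visible bins
out_of_range = 2.0      # weight falling outside the histogram bounds
sum_w = sum(in_range) + out_of_range  # total event weight = 10

# Correct for 1/sig dsig/dx: scale by 1/sum_w, so the in-range integral
# is 0.8, reflecting that 20% of the weight lies outside the bounds.
scaled = [h / sum_w / bin_width for h in in_range]

# Wrong: normalise the visible bins to unit area, forcing the integral to 1
# and inflating every bin by 25% in this example.
normed = [h / sum(in_range) / bin_width for h in in_range]
```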
2009-12-01  Andy Buckley

  * Adding parsing of beam types and pairs of energies from YAML.

2009-12-01  Hendrik Hoeth

  * Fixing trigger efficiency in CDF_2009_S8233977.

2009-11-30  Andy Buckley

  * Using shared pointers to make I/O object memory management neater and less error-prone.

  * Adding crossSectionPerEvent() method [== crossSection()/sumOfWeights()] to Analysis. Useful for histogram scaling since the numerator of sumW_passed/sumW_total (to calculate the pass-cuts xsec) is cancelled by dividing the histo by sumW_passed.

  * Clean-up of Particle class and provision of inline PID:: functions which take a Particle as an argument to avoid having to explicitly call the Particle::pdgId() method.

2009-11-30  Hendrik Hoeth

  * Fixing division by zero in Profile1D bin errors for bins with just a single entry.

2009-11-24  Hendrik Hoeth

  * First working version of STAR_2006_S6860818.

2009-11-23  Hendrik Hoeth

  * Adding missing CDF_2001_S4751469 plots to uemerge.

  * New "ShowZero" option in make-plots.

  * Improving lots of plot defaults.

  * Fixing typos / non-existing bins in CDF_2001_S4751469 and CDF_2004_S5839831 reference data.

2009-11-19  Hendrik Hoeth

  * Fixing our compare() for doubles.

2009-11-17  Hendrik Hoeth

  * Zeroth version of STAR_2006_S6860818 analysis (identified strange particles). Not working yet for unstable particles.

2009-11-11  Andy Buckley

  * Adding separate jet-oriented and photon-oriented observables to MC PHOTONJETUE analysis.

  * Bug fix in MC leading jets analysis, and general tidying of leading jet analyses to insert units, etc. (should not affect any current results).

2009-11-10  Hendrik Hoeth

  * Fixing last issues in STAR_2006_S6500200 and setting it to VALIDATED.

  * Normalise STAR_2006_S6870392 to cross-section.

2009-11-09  Andy Buckley

  * Overhaul of jet caching and ParticleBase interface.

  * Adding lists of analyses' histograms (obtained by scanning the plot info files) to the LaTeX documentation.
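The cancellation behind crossSectionPerEvent() in the 2009-11-30 entry is just this arithmetic (the numbers below are invented for illustration): a histogram whose integral is sumW_passed, scaled by sigma/sumW_total, automatically integrates to the pass-cuts cross-section.

```python
xsec = 50.0          # total generator cross-section (in pb, say)
sumw_total = 1000.0  # sum of all event weights
sumw_passed = 400.0  # sum of weights passing the cuts
                     # (= integral of the filled histogram)

# Pass-cuts cross-section computed directly:
xsec_passed = xsec * sumw_passed / sumw_total

# Scaling the histogram by crossSectionPerEvent() == xsec/sumw_total gives
# the same result, with no explicit division by sumw_passed needed:
xsec_per_event = xsec / sumw_total
assert abs(sumw_passed * xsec_per_event - xsec_passed) < 1e-12
```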
2009-11-07  Andy Buckley

  * Adding checking system to ensure that Projections aren't registered before the init phase of analyses.

  * Now that the ProjHandler isn't full of defunct pointers (which tend to coincidentally point to *new* Projection pointers rather than undefined memory, hence it wasn't noticed until recently!), use of a duplicate projection name is now banned with a helpful message at runtime.

  * (Huge) overhaul of ProjectionHandler system to use shared_ptr: projections are now deleted much more efficiently, naturally cleaning themselves out of the central repository as they go out of scope.

2009-11-06  Andy Buckley

  * Adding Cmp specialisation, using fuzzyEquals().

2009-11-05  Hendrik Hoeth

  * Fixing histogram division code.

2009-11-04  Hendrik Hoeth

  * New analysis STAR_2006_S6500200 (pion and proton pT spectra in pp collisions at 200 GeV). It is still unclear if they used a cut in rapidity or pseudorapidity, thus the analysis is declared "UNDER DEVELOPMENT" and "DO NOT USE".

  * Fixing compare() in NeutralFinalState and MergedFinalState.

2009-11-04  Andy Buckley

  * Adding consistency checking on beam ID and sqrt(s) vs. those from the first event.

2009-11-03  Andy Buckley

  * Adding more assertion checks to linear algebra testing.

2009-11-02  Hendrik Hoeth

  * Fixing normalisation issue with stacked histograms in make-plots.

2009-10-30  Hendrik Hoeth

  * CDF_2009_S8233977: Updating data and axes labels to match final paper. Normalise to cross-section instead of data.

2009-10-23  Andy Buckley

  * Fixing Cheese-3 plot in CDF 2004... at last!

2009-10-23  Hendrik Hoeth

  * Fix muon veto in CDF_1994_S2952106, CDF_2005_S6217184, CDF_2008_S7782535, and D0_2004_S5992206.

2009-10-19  Andy Buckley

  * Adding analysis info files for MC SUSY and PHOTONJETUE analyses.

  * Adding MC UE analysis in photon+jet events.

2009-10-19  Hendrik Hoeth

  * Adding new NeutralFinalState projection. Note that this final state takes E_T instead of p_T as argument (makes more sense for neutral particles).
  The compare() method does not yet work as expected (E_T comparison still missing).

  * Adding new MergedFinalState projection. This merges two final states, removing duplicate particles. Duplicates are identified by looking at the genParticle(), so users need to take care of any manually added particles themselves.

  * Fixing most open issues with the STAR_2009_UE_HELEN analysis. There is only one question left, regarding the away region.

  * Set the default split-merge value for SISCone in our FastJets projection to the recommended (but not Fastjet-default!) value of 0.75.

2009-10-17  Andy Buckley

  * Adding parsing of units in cross-sections passed to the "-x" flag, i.e. "-x 101 mb" is parsed internally into 1.01e11 pb.

2009-10-16  Hendrik Hoeth

  * Disabling DELPHI_2003_WUD_03_11 in the Makefiles, since I don't trust the data.

  * Getting STAR_2009_UE_HELEN to work.

2009-10-04  Andy Buckley

  * Adding triggers and other tidying to (still unvalidated) UA1_1990 analysis.

  * Fixing definition of UA5 trigger to not be intrinsically different for pp and ppbar: this is corrected for (although it takes some reading to work this out) in the 1982 paper, which I think is the only one to compare the two modes.

  * Moving projection setup and registration into init() method for remaining analyses.

  * Adding trigger implementations as projections for CDF Runs 0 & 1, and for UA5.

2009-10-01  Andy Buckley

  * Moving projection setup and registration into init() method for analyses from ALEPH, CDF and the MC_ group.

  * Adding generic SUSY validation analysis, based on plots used in ATLAS Herwig++ validation.

  * Adding sorted particle accessors to FinalState (cf. JetAlg).

2009-09-29  Andy Buckley

  * Adding optional use of args as regex match expressions with -l/--list-analyses.

2009-09-03  Andy Buckley

  * Passing GSL include path to compiler, since its absence was breaking builds on systems with no GSL installation in a standard location (such as SLC5, for some mysterious reason!).
  * Removing lib extension passing to compiler from the configure script, because Macs and Linux now both use .so extensions for the plugin analysis modules.

2009-09-02  Andy Buckley

  * Adding analysis info file path search with RIVET_DATA_PATH variable (and using this to fix doc build).

  * Improvements to AnalysisLoader path search.

  * Moving analysis sources back into single directory, after a proletarian uprising ;)

2009-09-01  Andy Buckley

  * Adding WFinder and WAnalysis, based on Z proj and analysis, with some tidying of the Z code.

  * ClusteredPhotons now uses an IdentifiedFS to pick the photons to be looped over, and only clusters photons around *charged* signal particles.

2009-08-31  Andy Buckley

  * Splitting analyses by directory, to make it easier to disable building of particular analysis group plugin libs.

  * Removing/merging headers for all analyses except for the special MC_JetAnalysis base class.

  * Exit with an error message if addProjection is used twice from the same parent with distinct projections.

2009-08-28  Andy Buckley

  * Changed naming convention for analysis plugin libraries, since the loader has changed so much: they must now *start* with the word "Rivet" (i.e. no lib prefix).

  * Split standard plugin analyses into several plugin libraries: these will eventually move into separate subdirs for extra build convenience.

  * Started merging analysis headers into the source files, now that we can (the plugin hooks previously forbade this).

  * Replacement of analysis loader system with a new one based on ideas from ThePEG, which uses dlopen-time instantiation of templated global variables to reduce boilerplate plugin hooks to one line in analyses.

2009-07-14  Frank Siegert

  * Replacing in-source histo-booking metadata with .plot files.

2009-07-14  Andy Buckley

  * Making Python wrapper files copy into place based on bundled versions for each active HepMC interface (2.3, 2.4 & 2.5), using a new HepMC version detector test in configure.
  * Adding YAML metadata files and parser, removing same metadata from the analysis classes' source headers.

2009-07-07  Andy Buckley

  * Adding Jet::hadronicEnergy().

  * Adding VisibleFinalState and automatically using it in JetAlg projections.

  * Adding YAML parser for new metadata (and eventually ref data) files.

2009-07-02  Andy Buckley

  * Adding Jet::neutralEnergy() (and Jet::totalEnergy() for convenience/symmetry).

2009-06-25  Andy Buckley

  * Tidying and small efficiency improvements in CDF_2008_S7541902 W+jets analysis (remove unneeded second stage of jet storing, sorting the jets twice, using foreach, etc.).

2009-06-24  Andy Buckley

  * Fixing Jet's containsBottom and containsCharm methods, since B hadrons are not necessarily to be found in the final state. Discovered at the same time that HepMC::GenParticle defines a massively unhelpful copy constructor that actually loses the tree information; it would be better to hide it entirely!

  * Adding RivetHepMC.hh, which defines container-type accessors to HepMC particles and vertices, making it possible to use Boost foreach and hence avoiding the usual huge boilerplate for-loops.

2009-06-11  Andy Buckley

  * Adding --disable-pdfmanual option, to make the bootstrap a bit more robust.

  * Re-enabling D0IL in FastJets: adding 10^-10 to the pTmin removes the numerical instability!

  * Fixing CDF_2004 min/max cone analysis to use calo jets for the leading jet Et binning. Thanks to Markus Warsinsky for (re)discovering this bug: I was sure it had been fixed. I'm optimistic that this will fix the main distributions, although Swiss Cheese "minus 3" is still likely to be broken. Early tests look okay, but it'll take more stats before we can remove the "do not trust" sign.

2009-06-10  Andy Buckley

  * Providing "calc" methods so that Thrust and Sphericity projections can be used as calculators without having to use the projecting/caching system.

2009-06-09  Andy Buckley

  * 1.1.3 release!

  * More doc building and SWIG robustness tweaks.
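The "-x 101 mb" unit parsing noted in the 2009-10-17 entry above amounts to a conversion table into Rivet's internal picobarns. A hypothetical helper mirroring that behaviour (the function name and the set of supported units are invented for the example):

```python
def xsec_to_pb(value, unit):
    # Convert a cross-section with a unit suffix into picobarns,
    # e.g. "-x 101 mb" -> 101 * 1e9 pb = 1.01e11 pb.
    to_pb = {"fb": 1e-3, "pb": 1.0, "nb": 1e3, "ub": 1e6, "mb": 1e9, "b": 1e12}
    return value * to_pb[unit]
```

With these factors, `xsec_to_pb(101, "mb")` reproduces the 1.01e11 pb figure quoted in the entry.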
2009-06-07  Andy Buckley

  * Make doc build from metadata work even before the library is installed.

2009-06-07  Hendrik Hoeth

  * Fix phi rotation in CDF_2008_LEADINGJETS.

2009-06-07  Andy Buckley

  * Disabling D0 IL midpoint cone (using CDF midpoint instead), since there seems to be a crashing bug in FastJet's implementation: we can't release that way, since ~no D0 analyses will run.

2009-06-03  Andy Buckley

  * Putting SWIG-generated source files under SVN control to make life easier for people who we advise to check out the SVN head version, but who don't have a sufficiently modern copy of SWIG to

  * Adding the --disable-analyses option, for people who just want to use Rivet as a framework for their own analyses.

  * Enabling HepMC cross-section reading, now that HepMC 2.5.0 has been released.

2009-05-23  Hendrik Hoeth

  * Using gsl-config to locate libgsl.

  * Fix the paths for linking such that our own libraries are found before any system libraries, e.g. for the case that there is an outdated fastjet version installed on the system while we want to use our own up-to-date version.

  * Change dmerge to ymerge in the e+e- analyses using JADE or DURHAM from fastjet. That's what it is called in fastjet-2.4 now.

2009-05-18  Andy Buckley

  * Adding use of gsl-config in configure script.

2009-05-16  Andy Buckley

  * Removing argument to vetoEvent macro, since no weight subtraction is now needed. It's now just an annotated return, with built-in debug log message.

  * Adding an "open" FinalState, which is only calculated once per event, then used by all other FSes, avoiding the loop over non-status 1 particles.

2009-05-15  Andy Buckley

  * Removing incorrect setting of DPS x-errs in CDF_2008 jet shape analysis: the DPS autobooking already gets this bit right.

  * Using Jet rather than FastJet::PseudoJet where possible, as it means that the phi ranges match up nicely between Particle and the Jet object.
  The FastJet objects are only really needed if you want to do detailed things like look at split/merge scales for e.g. diff jet rates or "y-split" analyses.

  * Tidying and debugging CDF jet shape analyses and jet shape plugin... ongoing, but I think I've found at least one real bug, plus a lot of stuff that can be done a lot more nicely.

  * Fully removing deprecated math functions and updating affected analyses.

2009-05-14  Andy Buckley

  * Removing redundant rotation in DISKinematics... this was a legacy of Peter using theta rather than pi-theta in his rotation.

  * Adding convenience phi, rho, eta, theta, and perp,perp2 methods to the 3 and 4 vector classes.

2009-05-12  Andy Buckley

  * Adding event auto-rotation for events with one proton... more complete approach?

2009-05-09  Hendrik Hoeth

  * Renaming CDF_2008_NOTE_9337 to CDF_2009_S8233977.

  * Numerous small bug fixes in ALEPH_1996_S3486095.

  * Adding data for one of the Rick-Field-style STAR UE analyses.

2009-05-08  Andy Buckley

  * Adding rivet-mkanalysis script, to make generating new analysis source templates easier.

2009-05-07  Andy Buckley

  * Adding null vector check to Vector3::azimuthalAngle().

  * Fixing definition of HCM/Breit frames in DISKinematics, and adding asserts to check that the transformation is doing what it should.

2009-05-05  Andy Buckley

  * Removing eta and Et cuts from CDF 2000 Z pT analysis, based on our reading of the paper, and converting most of the analysis to a call of the ZFinder projection.

2009-05-05  Hendrik Hoeth

  * Support non-default seed_threshold in CDF cone jet algorithms.

  * New analyses STAR_2006_S6870392 and STAR_2008_S7993412. In STAR_2008_S7993412 only the first distribution is filled at the moment. STAR_2006_S6870392 is normalised to data instead of the Monte Carlo cross-section, since we don't have that available in the HepMC stream yet.
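The vector convenience methods mentioned in the 2009-05-14 entry correspond to the standard collider-kinematics relations; here is a standalone sketch (free functions for illustration, not the actual Vector3/Vector4 member API):

```python
import math

def perp(x, y, z):
    # Transverse magnitude (rho): sqrt(x^2 + y^2).
    return math.hypot(x, y)

def phi(x, y, z):
    # Azimuthal angle in the x-y plane.
    return math.atan2(y, x)

def theta(x, y, z):
    # Polar angle measured from the z (beam) axis.
    return math.atan2(perp(x, y, z), z)

def eta(x, y, z):
    # Pseudorapidity: eta = -ln tan(theta/2); 0 for a purely
    # transverse vector, diverging along the beam axis.
    return -math.log(math.tan(theta(x, y, z) / 2.0))
```

For example, a purely transverse vector (1, 0, 0) has theta = pi/2 and eta = 0.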
2009-05-05  Andy Buckley

  * Changing Event wrapper to copy whole GenEvents rather than pointers, use std units if supported in HepMC, and run a placeholder function for event auto-orientation.

2009-04-28  Andy Buckley

  * Removing inclusion of IsolationTools header by analyses that aren't actually using the isolation tools... which is all of them. Leaving the isolation tools in place for now, as there might still be use cases for them and there's quite a lot of code there that deserves a second chance to be used!

2009-04-24  Andy Buckley

  * Deleting Rivet implementations of TrackJet and D0ILConeJets: the code from these has now been incorporated into FastJet 2.4.0.

  * Removed all mentions of the FastJet JADE patch and the HAVE_JADE preprocessor macro.

  * Bug fix to D0_2008_S6879055 to ensure that cuts compare to both electron and positron momenta (was just comparing against electrons, twice, probably thanks to the miracle of cut and paste).

  * Converting all D0 IL Cone jets to use FastJets. Involved tidying D0_2004 jet azimuthal decorrelation analysis and D0_2008_S6879055 as part of migration away from using the getLorentzJets method, and removing the D0ILConeJets header from quite a few analyses that weren't using it at all.

  * Updating CDF 2001 to use FastJets in place of TrackJet, and adding axis labels to its plots.

  * Note that ZEUS_2001_S4815815 uses the wrong jet definition: it should be a cone but currently uses kT.

  * Fixing CDF_2005_S6217184 to use correct (midpoint, R=0.7) jet definition. That this was using a kT definition with R=1.0 was only made obvious when the default FastJets constructor was removed.

  * Removing FastJets default constructor: since there are now several good (IRC safe) jet definitions available, there is no obvious safe default and analyses should have to specify which they use.

  * Moving FastJets constructors into implementation file to reduce recompilation dependencies, and adding new plugins.
  * Ensuring that axis labels actually get written to the output data file.

2009-04-22  Andy Buckley

  * Adding explicit FastJet CDF jet alg overlap_threshold constructor param values, since the default value from 2.3.x is now removed in version 2.4.0.

  * Removing use of HepMC ThreeVector::mag method (in one place only) since this has been removed in version 2.5.0b.

  * Fix to hepmc.i (included by rivet.i) to ignore new HepMC 2.5.0b GenEvent stream operator.

2009-04-21  Andy Buckley

  * Dependency on FastJet now requires version 2.4.0 or later. Jade algorithm is now native.

  * Moving all analysis constructors and Projection headers from the analysis header files into their .cc implementation files, cutting header dependencies.

  * Removing AIDA headers: now using LWH headers only, with enhancement to use axis labels. This facility is now used by the histo booking routines, and calling the booking function versions which don't specify axis labels will result in a runtime warning.

2009-04-07  Andy Buckley

  * Adding $(DESTDIR) prefix to call to Python module "setup.py install".

  * Moving HepMC SWIG mappings into Python Rivet module for now: seems to work-around the SL type-mapping bug.

2009-04-03  Andy Buckley

  * Adding MC analysis for LHC UE: higher-pT replica of Tevatron 2008 leading jets study.

  * Adding CDF_1990 pseudorapidity analysis.

  * Moving CDF_2001 constructor into implementation file.

  * Cleaning up CDF_2008_LEADINGJETS a bit, e.g. using foreach loops.

  * Adding function interface for specifying axis labels in histo bookings. Currently has no effect, since AIDA doesn't seem to have a mechanism for axis labels. It really is a piece of crap.

2009-03-18  Andy Buckley

  * Adding docs "make upload" and other tweaks to make the doc files fit in with the Rivet website.

  * Improving LaTeX docs to show email addresses in printable form and to group analyses by collider or other metadata.

  * Adding doc script to include run info in LaTeX docs, and to make HTML docs.
  * Removing WZandh projection, which wasn't generator independent and whose sole usage was already replaced by ZFinder.

  * Improvements to constructors of ZFinder and InvMassFS.

  * Changing ExampleTree to use real FS-based Z finding.

2009-03-16  Andy Buckley

  * Allow the -H histo file spec to give a full name if wanted. If it doesn't end in the desired extension, it will be added.

  * Adding --runname option (and API elements) to choose a run name to be prepended as a "top level directory" in histo paths. An empty value results in no extra TLD.

2009-03-06  Andy Buckley

  * Adding R=0.2 photon clustering to the electrons in the CDF 2000 Z pT analysis.

2009-03-04  Andy Buckley

  * Fixing use of fastjet-config to not use the user's PATH variable.

  * Fixing SWIG type table for HepMC object interchange.

2009-02-20  Andy Buckley

  * Adding use of new metadata in command line analysis querying with the rivet command, and in building the PDF Rivet manual.

  * Adding extended metadata methods to the Analysis interface and the Python wrapper. All standard analyses comply with this new interface.

2009-02-19  Andy Buckley

  * Adding usefully-scoped config headers, a Rivet::version() function which uses them, and installing the generated headers to fix "external" builds against an installed copy of Rivet. The version() function has been added to the Python wrapper.

2009-02-05  Andy Buckley

  * Removing ROOT dependency and linking. Woo! There's no need for this now, because the front-end accepts no histo format switch and we'll just use aida2root for output conversions. Simpler this way, and it avoids about half of our compilation bug reports from 32/64 bit ROOT build confusions.

2009-02-04  Andy Buckley

  * Adding automatic generation of LaTeX manual entries for the standard analyses.

2009-01-20  Andy Buckley

  * Removing RivetGun and TCLAP source files!

2009-01-19  Andy Buckley

  * Added psyco Python optimiser to rivet, make-plots and compare-histos.
  * bin/aida2root: Added "-" -> "_" mangling, following requests.

2009-01-17  Andy Buckley

  * 1.1.2 release.

2009-01-15  Andy Buckley

  * Converting Python build system to bundle SWIG output in tarball.

2009-01-14  Andy Buckley

  * Converting AIDA/LWH divide function to return a DPS so that bin width factors don't get all screwed up. Analyses adapted to use the new division operation (a DPS/DPS divide would also be nice... but can wait for YODA).

2009-01-06  Andy Buckley

  * bin/make-plots: Added --png option for making PNG output files, using 'convert' (after making a PDF --- it's a bit messy).

  * bin/make-plots: Added --eps option for output filtering through ps2eps.

2009-01-05  Andy Buckley

  * Python: reworking Python extension build to use distutils and newer m4 Python macros. Probably breaks distcheck but is otherwise more robust and platform independent (i.e. it should now work on Macs).

2008-12-19  Andy Buckley

  * make-plots: Multi-threaded make-plots and cleaned up the LaTeX building a bit (necessary to remove the implicit single global state).

2008-12-18  Andy Buckley

  * make-plots: Made LaTeX run in no-stop mode.

  * compare-histos: Updated to use a nicer labelling syntax on the command line and to successfully build MC-MC plots.

2008-12-16  Andy Buckley

  * Made LWH bin edge comparisons safe against numerical errors.

  * Added Particle comparison functions for sorting.

  * Removing most bad things from ExampleTree and tidying up. Marked WZandh projection for removal.

2008-12-03  Hendrik Hoeth

  * Added the two missing observables to the CDF_2008_NOTE_9337 analysis, i.e. track pT and sum(ET). There is a small difference between our MC output and the MC plots of the analysis' author; we're still waiting for the author's comments.

2008-12-02  Andy Buckley

  * Overloading use of a std::set in the interface, since the version of SWIG on Sci Linux doesn't have a predefined mapping for STL sets.

2008-12-02  Hendrik Hoeth

  * Fixed uemerge.
  The output was seriously broken by a single line of debug information in fillAbove(). Also changed uemerge output to exponential notation.

  * Unified ref and mc histos in compare-histos. Histos with one bin are plotted linear. Option for disabling the ratio plot. Several fixes for labels, legends, output directories, ...

  * Changed rivetgun's fallback directory for parameter files to $PREFIX/share/AGILe, since that's where the steering files now are.

  * Running aida2flat in split mode now produces make-plots compatible dat-files for direct plotting.

2008-11-28  Andy Buckley

  * Replaced binreloc with an upgraded and symbol-independent copy.

2008-11-25  Andy Buckley

  * Added searching of $RIVET_REF_PATH for AIDA reference data files.

2008-11-24  Andy Buckley

  * Removing "get"s and other obfuscated syntax from ProjectionApplier (Projection and Analysis) interfaces.

2008-11-21  Andy Buckley

  * Using new "global" Jet and V4 sorting functors in TrackJet. Looks like there was a sorting direction problem before...

  * Verbose mode with --list-analyses now shows descriptions as well as analysis names.

  * Moved data/Rivet to data/refdata and moved data/RivetGun contents to AGILe (since generator steering is no longer a Rivet thing).

  * Added unchecked ratio plots to D0 Run II jet + photon analysis.

  * Added D0 inclusive photon analysis.

  * Added D0 Z rapidity analysis.

  * Tidied up constructor interface and projection chain implementation of InvMassFinalState.

  * Added ~complete set of Jet and FourMomentum sorting functors.

2008-11-20  Andy Buckley

  * Added IdentifiedFinalState.

  * Moved a lot of TrackJet and Jet code into .cc files.

  * Fixed a caching bug in Jet: cache flag resets should never be conditional, since they are then sensitive to initialisation errors.

  * Added quark enum values to ParticleName.

  * Rationalised JetAlg interfaces somewhat, with "size()" and "jets()" methods in the interface.

  * Added D0 W charge asymmetry and D0 inclusive jets analyses.
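The "sorting functor" idea from the 2008-11-20/21 entries (reusable comparison objects for ordering jets and four-momenta) maps naturally onto key functions in a sketch; these are illustrative stand-ins, not the real cmp-style functors or their names.

```python
def pt(mom):
    # Transverse momentum of a (px, py, pz, E) tuple.
    px, py, pz, E = mom
    return (px**2 + py**2) ** 0.5

def sort_by_pt(moms):
    # Descending-pT ordering, the usual convention for jet lists:
    # one reusable "functor" (here, a key function) shared by all callers.
    return sorted(moms, key=pt, reverse=True)
```

For example, `sort_by_pt([(1, 0, 0, 5), (3, 4, 0, 9), (0, 2, 0, 4)])` puts the pT = 5 momentum first.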
2008-11-18 Andy Buckley

* Adding D0 inclusive Z pT shape analysis.

* Added D0 Z + jet pT and photon + jet pT spectrum analyses.

* Lots of tidying up of particle, event, particle name etc.

* Now the first event is used to detect the beam type and remove incompatible analyses.

2008-11-17 Andy Buckley

* Added bash completion for rivetgun.

* Starting to provide stand-alone call methods on projections so they can be used without the caching infrastructure. This could also be handy for unit testing.

* Adding functionality (sorting function and built-in sorting schemes) to the JetAlg interface.

2008-11-10 Hendrik Hoeth

* Fix floating point number output format in aida2flat and flat2aida.

* Added CDF_2002_S4796047: CDF Run-I charged multiplicity distribution.

* Renamed CDF_2008_MINBIAS to CDF_2008_NOTE_9337, since the note is publicly available now.

2008-11-10 Hendrik Hoeth

* Added DELPHI_2003_WUD_03_11: Delphi 4-jet angular distributions. There is still a problem with the analysis, so don't use it yet. But I need to commit the code because my laptop is broken ...

2008-11-06 Andy Buckley

* Code review: lots of tidying up of projections and analyses.

* Fixes for compatibility with the LLVM C & C++ compiler.

* Change of Particle interface to remove "get"-prefixed method names.

2008-11-05 Andy Buckley

* Adding ability to query analysis metadata from the command line.

* Example of a plugin analysis now in plugindemo, with a "make check" test to make sure that the plugin analysis is recognised by the command line "rivet" tool.

* GCC 4.3 fix to mat-vec tests.

2008-11-04 Andy Buckley

* Adding native logger control from Python interface.

2008-11-03 Andy Buckley

* Adding bash_completion for rivet executable.

2008-10-31 Andy Buckley

* Clean-up of histo titles and analysis code review.

* Added momentum construction functions from FastJet PseudoJets.

2008-10-28 Andy Buckley

* Auto-booking of histograms with a name, rather than the HepData ID 3-tuple, is now possible.
* Fix in CDF 2001 pT spectra to get the normalisations to depend on the pT_lead cutoff.

2008-10-23 Andy Buckley

* rivet handles signals neatly, as for rivetgun, so that premature killing of the analysis process will still result in an analysis file.

* rivet now accepts cross-section as a command line argument and, if it is missing and is required, will prompt the user for it interactively.

2008-10-22 Andy Buckley

* rivet (Python interface) now can list analyses, check when adding analyses that the given names are valid, specify histo file name, and provide sensibly graded event number logging.

2008-10-20 Andy Buckley

* Corrections to CDF 2004 analysis based on correspondence with Joey Huston. Min bias dbns now use whole event within |eta| < 0.7, and Cheese plots aren't filled at all if there are insufficient jets (and the correct ETlead is used).

2008-10-08 Andy Buckley

* Added AnalysisHandler::commitData() method, to allow the Python interface to write out a histo file without having to know anything about the histogramming API.

* Reduced SWIG interface file to just map a subset of Analysis and AnalysisHandler functionality. This will be the basis for a new command line interface.

2008-10-06 Andy Buckley

* Converted FastJets plugin to use a Boost shared_pointer to the cached ClusterSequence. The nullness of the pointer is now used to indicate an empty tracks (and hence jets) set. Once FastJet natively supports empty CSeqs, we can rewrite this a bit more neatly and ditch the shared_ptr.

2008-10-02 Andy Buckley

* The CDF_2004 (Acosta) data file now includes the full range of pT for the min bias data at both 630 and 1800 GeV. Previously, only the small low-pT insert plot had been entered into HepData.

2008-09-30 Andy Buckley

* Lots of updates to CDF_2004 (Acosta) UE analysis, including sorting jets by E rather than Et, and factorising transverse cone code into a function so that it can be called with a random "leading jet" in min bias mode.
Min bias histos are now being trial-filled just with tracks in the transverse cones, since the paper is very unclear on this.

* Discovered a serious caching problem in FastJets projection when an empty tracks vector is passed from the FinalState. Unfortunately, FastJet provides no API way to solve the problem, so we'll have to report this upstream. For now, we're solving this for CDF_2004 by explicitly vetoing events with no tracks.

* Added Doxygen to the build with target "dox".

* Moved detection of whether cross-section information is needed into the AnalysisHandler, with dynamic computation by scanning contained analyses.

* Improved robustness of event reading to detect properly when the input file is smaller than expected.

2008-09-29 Hendrik Hoeth

* New analysis: CDF_2000_S4155203

2008-09-23 Andy Buckley

* rivetgun can now be built and run without AGILe. Based on a patch by Frank Siegert.

2008-09-23 Hendrik Hoeth

* Some preliminary numbers for the CDF_2008_LEADINGJETS analysis (only transverse region and not all observables, but all we have now).

2008-09-17 Andy Buckley

* Breaking up the mammoth "generate" function, to make Python mapping easier, among other reasons.

* Added if-zero-return-zero checking to angle mapping functions, to avoid problems where 1e-19 gets mapped on to 2 pi and then fails boundary asserts.

* Added HistoHandler singleton class, which will be a central repository for holding analyses' histogram objects to be accessed via a user-chosen name.

2008-08-26 Andy Buckley

* Allowing rivet-config to return combined flags.

2008-08-14 Andy Buckley

* Fixed some g++ 4.3 compilation bugs, including "vector" not being a valid name for a method which returns a physics vector, since it clashes with std::vector (which we globally import). Took the opportunity to rationalise the Jet interface a bit, since "particle" was used to mean "FourMomentum", and "Particle" types required a call to "getFullParticle".
I removed the "get"s at the same time, as part of our gradual migration to a coherent naming policy.

2008-08-11 Andy Buckley

* Tidying of FastJets and added new data files from HepData.

2008-08-10 James Monk

* FastJets now uses the user_index property of fastjet::PseudoJet to reconstruct PID information in jet contents.

2008-08-07 Andy Buckley

* Reworking of param file and command line parsing. Tab characters are now handled by the parser, in a way equivalent to spaces.

2008-08-06 Andy Buckley

* Added extra histos and filling to Acosta analysis - all HepData histos should now be filled, depending on sqrt{s}. Also trialling use of LaTeX math mode in titles.

2008-08-05 Andy Buckley

* More data files for CDF analyses (2 x 2008, 1 x 1994), and moved the RivetGun AtlasPythia6.params file to the more standard fpythia-atlas.params (and added it to the install list).

2008-08-04 Andy Buckley

* Reduced size of available options blocks in RivetGun help text by removing "~" negating variants (which are hardly ever used in practice) and restricting beam particles to PROTON, ANTIPROTON, ELECTRON and POSITRON.

* Fixed Et sorting in Acosta analysis.

2008-08-01 Andy Buckley

* Added AIDA headers to the install list, since external (plugin-type) analyses need them to be present for compilation to succeed.

2008-07-29 Andy Buckley

* Fixed missing ROOT compile flags for libRivet.

* Added command line repetition to logging.

2008-07-29 Hendrik Hoeth

* Included the missing numbers and three more observables in the CDF_2008_NOTE_9351 analysis.

2008-07-29 Andy Buckley

* Fixed wrong flags on rivet-config.

2008-07-28 Hendrik Hoeth

* Renamed CDF_2008_DRELLYAN to CDF_2008_NOTE_9351. Updated numbers and cuts to the final version of this CDF note.

2008-07-28 Andy Buckley

* Fixed polar angle calculation to use atan2.

* Added "mk" prefixes and x/setX convention to math classes.
2008-07-28 Hendrik Hoeth

* Fixed definition of FourMomentum::pT (it had been returning pT2).

2008-07-27 Andy Buckley

* Added better tests for Boost headers.

* Added testing for -ansi, -pedantic and -Wall compiler flags.

2008-07-25 Hendrik Hoeth

* Updated DELPHI_2002_069_CONF_603 according to information from the author.

2008-07-17 Andy Buckley

* Improvements to aida2flat: now can produce one output file per histo, and there is a -g "gnuplot mode" which comments out the YODA/make_plot headers to make the format readable by gnuplot.

* Import boost::assign namespace contents into the Rivet namespace --- provides very useful intuitive collection initialising functions.

2008-07-15 Andy Buckley

* Fixed missing namespace in vector/matrix testing.

* Removed Boost headers: now a system dependency.

* Fixed polarRadius infinite loop.

2008-07-09 Andy Buckley

* Fixed definitions of mapAngleMPiToPi, etc. and used them to fix the Jet::getPhi method.

* Trialling use of "foreach" loop in CDF_2004: it works! Very nice.

2008-07-08 Andy Buckley

* Removed accidental reference to an "FS" projection in FinalStateHCM's compare method. rivetgun -A now works again.

* Added TASSO, SLD and D0_2008 reference data. The TASSO and SLD papers aren't installed or included in the tarball since there are currently no plans to implement these analyses.

* Added Rivet namespacing to vector, matrix etc. classes. This required some re-writing and the opportunity was taken to move some canonical function definitions inside the classes and to improve the header structure of the Math area.

2008-07-07 Andy Buckley

* Added Rivet namespace to Units.hh and Constants.hh.

* Added Doxygen "@brief" flags to analyses.

* Added "RIVET_" namespacing to all header guards.

* Merged Giulio Lenzi's isolation/vetoing/invmass projections and D0 2008 analysis.

2008-06-23 Jon Butterworth

* Modified FastJet to fix ysplit and split and filter.

* Modified ExampleTree to show how to call them.
2008-06-19 Hendrik Hoeth

* Added first version of the CDF_2008_DRELLYAN analysis described on http://www-cdf.fnal.gov/physics/new/qcd/abstracts/UEinDY_08.html There is a small difference between the analysis and this implementation, but it's too small to be visible. The fpythia-cdfdrellyan.params parameter file is for this analysis.

* Added first version of the CDF_2008_MINBIAS analysis described on http://www-cdf.fnal.gov/physics/new/qcd/abstracts/minbias_08.html The .aida file is read from the plots on the web and will change. I'm still discussing some open questions about the analysis with the author.

2008-06-18 Jon Butterworth

* Added first versions of splitJet and filterJet methods to fastjet.cc. Not yet tested, buyer beware.

2008-06-18 Andy Buckley

* Added extra sorted Jets and Pseudojets methods to FastJets, and added ptmin argument to the JetAlg getJets() method, requiring a change to TrackJet.

2008-06-13 Andy Buckley

* Fixed processing of "RG" parameters to ensure that invalid iterators are never used.

2008-06-10 Andy Buckley

* Updated AIDA reference files, changing "/HepData" root path to "/REF". Still missing a couple of reference files due to upstream problems with the HepData records.

2008-06-09 Andy Buckley

* rivetgun now handles termination signals (SIGTERM, SIGINT and SIGHUP) gracefully, finishing the event loop and finalising histograms. This means that histograms will always get written out, even if not all the requested events have been generated.

2008-06-04 Hendrik Hoeth

* Added DELPHI_2002_069_CONF_603 analysis.

2008-05-30 Hendrik Hoeth

* Added InitialQuarks projection.

* Added OPAL_1998_S3780481 analysis.

2008-05-29 Andy Buckley

* distcheck compatibility fixes and autotools tweaks.

2008-05-28 Andy Buckley

* Converted FastJet to use Boost smart_ptr for its plugin handling, to solve double-delete errors stemming from the heap cloning of projections.

* Added (a subset of) Boost headers, particularly the smart pointers.
2008-05-24 Andy Buckley

* Added autopackage spec files.

* Merged these changes into the trunk.

* Added a registerClonedProjection(...) method to ProjectionHandler: this is needed so that cloned projections will have valid pointer entries in the ProjectionHandler repository.

* Added clone() methods to all projections (we need to use this, since the templated "new PROJ(proj)" approach to cloning can't handle object polymorphism).

2008-05-19 Andy Buckley

* Moved projection-applying functions into the ProjectionApplier base class (from which Projection and Analysis both derive).

* Added Rivet-specific exceptions in place of std::runtime_error.

* Removed unused HepML reference files.

* Added error handling for requested analyses with wrong case convention / missing name.

2008-05-15 Hendrik Hoeth

* New analysis PDG_Hadron_Multiplicities.

* flat2aida converter.

2008-05-15 Andy Buckley

* Removed unused mysterious Perl scripts!

* Added RivetGun.HepMC logging of HepMC event details.

2008-05-14 Hendrik Hoeth

* New analysis DELPHI_1995_S3137023. This analysis contains the xp spectra of Xi+- and Sigma(1385)+-.

2008-05-13 Andy Buckley

* Improved logging interface: log levels are now integers (for cross-library compatibility), and level setting also applies to existing loggers.

2008-05-09 Andy Buckley

* Improvements to robustness of ROOT checks.

* Added --version flag on config scripts and rivetgun.

2008-05-06 Hendrik Hoeth

* New UnstableFinalState projection which selects all hadrons, leptons and real photons, including unstable particles.

* In the DELPHI_1996_S3430090 analysis the multiplicities for pi+/pi- and pi0 are filled, using the UnstableFinalState projection.

2008-05-06 Andy Buckley

* FastJets projection now protects against the case where no particles exist in the final state (where FastJet throws an exception).

* AIDA file writing is now separated from the AnalysisHandler::finalize method...
API users can choose what to do with the histo objects, be that writing out or further processing.

2008-04-29 Andy Buckley

* Increased default tolerances in floating point comparisons as they were overly stringent and valid f.p. precision errors were being treated as significant.

* Implemented remainder of Acosta UE analysis.

* Added proper getEtSum() to Jet.

* Added Et2() member and function to FourMomentum.

* Added aida2flat conversion script.

* Fixed ambiguity in TrackJet algorithm as to how the iteration continues when tracks are merged into jets in the inner loop.

2008-04-28 Andy Buckley

* Merged in major "ProjectionHandler" branch. Projections are now all stored centrally in the singleton ProjectionHandler container, rather than as member pointers in projections and analyses. This also affects the applyProjection mechanism, which is now available as a templated method on Analysis and Projection. Still a few wrinkles need to be worked out.

* The branch changes required a comprehensive review of all existing projections and analyses: lots of tidying up of these classes, as well as the auxiliary code like math utils, has taken place. Too much to list and track, unfortunately!

2008-03-28 Andy Buckley

* Started second CDF UE analysis ("Acosta"): histograms defined.

* Fixed anomalous factor of 2 in LWH conversion from Profile1D to DataPointSet.

* Added pT distribution histos to CDF 2001 UE analysis.

2008-03-26 Andy Buckley

* Removed charged+neutral versions of histograms and projections from DELPHI analysis since they just duplicate the more robust charged-only measurements and aren't really of interest for tuning.

2008-03-10 Andy Buckley

* Profile histograms now use error computation with proper weighting, as described here: http://en.wikipedia.org/wiki/Weighted_average

2008-02-28 Andy Buckley

* Added --enable-jade flag for Professor studies with patched FastJet.

* Minor fixes to LCG tag generator and gfilt m4 macros.
* Fixed projection slicing issues with Field UE analysis.

* Added Analysis::vetoEvent(e) function, which keeps track of the correction to the sum of weights due to event vetoing in analysis classes.

2008-02-26 Andy Buckley

* Vector and derived classes now initialise to have zeroed components when the no-arg constructor is used.

* Added Analysis::scale() function to scale 1D histograms. Analysis::normalize() uses it internally, as do the DELPHI (A)EEC analyses, whose histo weights are not pure event weights and are normalised using scale(h, 1/sumEventWeights).

2008-02-21 Hendrik Hoeth

* Added EEC and AEEC to the DELPHI_1996_S3430090 analysis. The normalisation of these histograms is still broken (ticket #163).

2008-02-19 Hendrik Hoeth

* Many fixes to the DELPHI_1996_S3430090 analysis: bugfix in the calculation of eigenvalues/eigenvectors in MatrixDiag.hh for the sphericity, rewrite of Thrust/Major/Minor, fixed scaled momentum, hemisphere masses, normalisation in single particle events, final state slicing problems in the projections for Thrust, Sphericity and Hemispheres.

2008-02-08 Andy Buckley

* Applied fixes and extensions to DIS classes, based on submissions by Dan Traynor.

2008-02-06 Andy Buckley

* Made projection pointers used for cut combining into const pointers. Required some redefinition of the Projection* comparison operator.

* Temporarily added FinalState member to ChargedFinalState to stop projection lifetime crash.

2008-02-01 Andy Buckley

* Fixed another misplaced factor of bin width in the Analysis::normalize() method.

2008-01-30 Andy Buckley

* Fixed the conversion of IHistogram1D to DPS, both via the explicit Analysis::normalize() method and the implicit AnalysisHandler::treeNormalize() route. The root of the problem is the AIDA choice of the word "height" to represent the sum of weights in a bin: i.e. the bin width is not taken into account either in computing bin height or error.
2008-01-22 Andy Buckley

* Beam projection now uses the HepMC GenEvent::beam_particles() method to get the beam particles. This is more portable and robust for C++ generators, and equivalent to the existing "first two" method for Fortran generators.

2008-01-17 Andy Buckley

* Added angle range fix to pseudorapidity function (thanks to Piergiulio Lenzi).

2008-01-10 Andy Buckley

* Changed autobooking plot codes to use zero-padding (gets the order right in JAS, file browser, ROOT etc.). Also changed the 'ds' part to 'd' for consistency. HepData's AIDA output has been correspondingly updated, as have the bundled data files.

2008-01-04 Andy Buckley

* Tidied up JetShape projection a bit, including making the constructor params const references. This seems to have sorted the runtime segfault in the CDF_2005 analysis.

* Added caching of the analysis bin edges from the AIDA file - each analysis object will now only read its reference file once, which massively speeds up the rivetgun startup time for analyses with large numbers of autobooked histos (e.g. the DELPHI_1996_S3430090 analysis).

2008-01-02 Andy Buckley

* CDF_2001_S4751469 now uses the LossyFinalState projection, with an 8% loss rate.

* Added LossyFinalState and HadronicFinalState, and fixed a "polarity" bug in the charged final state projection (it was keeping only the *uncharged* particles).

* Now using isatty(1) to determine whether or not color escapes can be used. Also removed --color argument, since it can't have an effect (TCLAP doesn't do position-based flag toggling).

* Made Python extension build optional (and disabled by default).

2008-01-01 Andy Buckley

* Removed some unwanted DEBUG statements, and lowered the level of some infrastructure DEBUGs to TRACE level.

* Added bash color escapes to the logger system.

2007-12-21 Leif Lonnblad

* include/LWH/ManagedObject.h: Fixed infinite loop in encodeForXML, cf. ticket #135.

2007-12-20 Andy Buckley

* Removed HepPID, HepPDT and Boost dependencies.
* Fixed XML entity encoding in LWH. Updated CDF_2007_S7057202 analysis to not do its own XML encoding of titles.

2007-12-19 Andy Buckley

* Changed units header to set GeV = 1 (HepMC convention) and using units in CDF UE analysis.

2007-12-15 Andy Buckley

* Introduced analysis metadata methods for all analyses (and made them part of the Analysis interface).

2007-12-11 Andy Buckley

* Added JetAlg base projection for TrackJet, FastJet etc.

2007-12-06 Andy Buckley

* Added checking for Boost library, and the standard Boost test program for shared_ptr.

* Got basic Python interface running - required some tweaking since Python and Rivet's uses of dlopen collide (another RTLD_GLOBAL issue - see http://muttley.hates-software.com/2006/01/25/c37456e6.html ).

2007-12-05 Andy Buckley

* Replaced all use of KtJets projection with FastJets projection. KtJets projection disabled but left undeleted for now. CLHEP and KtJet libraries removed from configure searches and Makefile flags.

2007-12-04 Andy Buckley

* Param file loading now falls back to the share/RivetGun directory if a local file can't be found and the provided name has no directory separators in it.

* Converted TrackJet projection to update the jet centroid with each particle added, using pT weighting in the eta and phi averaging.

2007-12-03 Andy Buckley

* Merged all command line handling functions into one large parse function, since only one executable now needs them. This removes a few awkward memory leaks.

* Removed rivet executable - HepMC file reading functionality will move into rivetgun.

* Now using HepMC IO_GenEvent format (IO_Ascii and IO_ExtendedAscii are deprecated). Now requires HepMC >= 2.3.0.

* Added forward declarations of GSL diagonalisation routines, eliminating the need for GSL headers to be installed on the build machine.

2007-11-27 Andy Buckley

* Removed charge differentiation from Multiplicity projection (use CFS proj) and updated ExampleAnalysis to produce more useful numbers.
* Introduced binreloc for runtime path determination.

* Fixed several bugs in FinalState, ChargedFinalState, TrackJet and Field analysis.

* Completed move to new analysis naming scheme.

2007-11-26 Andy Buckley

* Removed conditional HAVE_FASTJET bits: FastJet is now compulsory.

* Merging appropriate RivetGun parts into Rivet. RivetGun currently broken.

2007-11-23 Andy Buckley

* Renaming analyses to Spires-ID scheme: currently of form S, to become __.

2007-11-20 Andy Buckley

* Merged replacement vectors, matrices and boosts into trunk.

2007-11-15 Leif Lonnblad

* src/Analysis.cc, include/Rivet/Analysis.hh: Introduced normalize function. See ticket #126.

2007-10-31 Andy Buckley

* Tagging as 1.0b2 for HERA-LHC meeting.

2007-10-25 Andy Buckley

* Added AxesDefinition base interface to Sphericity and Thrust, used by Hemispheres.

* Exposed BinaryCut class, improved its interface and fixed a few bugs. It's now used by VetoedFinalState for momentum cuts.

* Removed extra output from autobooking AIDA reader.

* Added automatic DPS booking.

2007-10-12 Andy Buckley

* Improved a few features of the build system.

2007-10-09 James Monk

* Fixed dylib dlopen on Mac OS X.

2007-10-05 Andy Buckley

* Added new reference files.

2007-10-03 Andy Buckley

* Fixed bug in configure.ac which led to explicit CXX setting being ignored.

* Including Logging.hh in Projection.hh, hence new transitive dependency on Logging.hh being installed. Since this is the normal behaviour, I don't think this is a problem.

* Fixed segfaulting bug due to use of addProjection() in locally-scoped contained projections. This isn't a proper fix, since the whole framework should be designed to avoid the possibility of bugs like this.

* Added newly built HepML and AIDA reference files for current analyses.
2007-10-02 Andy Buckley

* Fixed possible null-pointer dereference in Particle copy constructor and copy assignment: this removes one of two blocker segfaults, the other of which is related to the copy-assignment of the TotalVisMomentum projection in the ExampleTree analysis.

2007-10-01 Andy Buckley

* Fixed portable path to Rivet share directory.

2007-09-28 Andy Buckley

* Added more functionality to the rivet-config script: now has libdir, includedir, cppflags, ldflags and ldlibs options.

2007-09-26 Andy Buckley

* Added the analysis library closer function to the AnalysisHandler finalize() method, and also moved the analysis delete loop into AnalysisHandler::finalize() so as not to try deleting objects whose libraries have already closed.

* Replaced the RivetPaths.cc.in method for portable paths with something using -D defines - much simpler!

2007-09-21 Lars Sonnenschein

* Added HepEx0505013 analysis and JetShape projection (some fixes by AB).

* Added GetLorentzJets member function to D0 RunII cone jet projection.

2007-09-21 Andy Buckley

* Fixed lots of bugs and bad practice in HepEx0505013 (to make it compile-able!).

* Downclassed the log messages from the Test analysis to DEBUG level.

* Added isEmpty() method to final state projection.

* Added testing for empty final state and useful debug log messages to sphericity projection.

2007-09-20 Andy Buckley

* Added Hemispheres projection, which calculates event hemisphere masses and broadenings.

2007-09-19 Andy Buckley

* Added an explicit copy assignment operator to Particle: the absence of one of these was responsible for the double-delete error.

* Added a "fuzzy equals" utility function for float/double types to Utils.hh (which already contains a variety of handy little functions).

* Removed deprecated Beam::operator().

* Added ChargedFinalState projection and de-pointered the contained FinalState projection in VetoedFinalState.
2007-09-18 Andy Buckley

* Major bug fixes to the regularised version of the sphericity projection (and hence the Parisi tensor projection). Don't trust C & D param results from any previous version!

* Added extra methods to thrust and sphericity projections to get the oblateness and the sphericity basis (currently returns dummy axes since I can't yet work out how to get the similarity transform eigenvectors from CLHEP).

2007-09-14 Andy Buckley

* Merged in a branch of pluggable analysis mechanisms.

2007-06-25 Jon Butterworth

* Fixed some bugs in the root output for DataPoint.h.

2007-06-25 Andy Buckley

* include/Rivet/**/Makefile.am: No longer installing headers for "internal" functionality.

* include/Rivet/Projections/*.hh: Removed the private restrictions on copy-assignment operators.

2007-06-18 Leif Lonnblad

* include/LWH/Tree.h: Fixed minor bug in listObjectNames.

* include/LWH/DataPointSet.h: Fixed setCoordinate functions so that they resize the vector of DataPoints if it initially was empty.

* include/LWH/DataPoint.h: Added constructor taking a vector of measurements.

2007-06-16 Leif Lonnblad

* include/LWH/Tree.h: Implemented the listObjectNames and ls functions.

* include/Rivet/Projections/FinalStateHCM.hh, include/Rivet/Projections/VetoedFinalState.hh: removed _theParticles and corresponding access function. Use base class variable instead.

* include/Rivet/Projections/FinalState.hh: Made _theParticles protected.

2007-06-13 Leif Lonnblad

* src/Projections/FinalStateHCM.cc, src/Projections/DISKinematics.cc: Equality checks using GenParticle::operator== changed to check for pointer equality.

* include/Rivet/Analysis/HepEx9506012.hh: Uses modified DISLepton projection.

* include/Rivet/Particle.hh: Added member function to check if a GenParticle is associated.

* include/Rivet/Projections/DISLepton.hh, src/Projections/DISLepton.cc: Fixed bug in projection. Introduced final state projection to limit searching for scattered lepton. Still not properly tested.
2007-06-08 Leif Lonnblad

* include/Rivet/Projections/PVertex.hh, src/Projections/PVertex.cc: Fixed the projection to simply get the signal_process_vertex from the GenEvent. This is the way it should work. If the GenEvent does not have a signal_process_vertex properly set up in this way, the problem is with the class that fills the GenEvent.

2007-06-06 Jon Butterworth

* Merged TotalVisibleMomentum and CalMET.

* Added pT ranges to Vetoed final state projection.

2007-05-27 Jon Butterworth

* Fixed initialization of VetoedFinalStateProjection in ExampleTree.

2007-05-27 Leif Lonnblad

* include/Rivet/Projections/KtJets.*: Make sure the KtEvent is deleted properly.

2007-05-26 Jon Butterworth

* Added leptons to the ExampleTree.

* Added TotalVisibleEnergy projection, and added output to ExampleTree.

2007-05-25 Jon Butterworth

* Added a charged lepton projection.

2007-05-23 Andy Buckley

* src/Analysis/HepEx0409040.cc: Changed range of the histograms to the "pi" range rather than the "128" range.

* src/Analysis/Analysis.cc: Fixed a bug in the AIDA path building. Histogram auto-booking now works.

2007-05-23 Leif Lonnblad

* src/Analysis/HepEx9506012.cc: Now uses the histogram booking function in the Analysis class.

2007-05-23 Jon Butterworth

* Fixed bug in PRD65092002 (was failing on zero jets).

2007-05-23 Andy Buckley

* Added (but haven't properly tested) a VetoedFinalState projection.

* Added normalize() method for AIDA 1D histograms.

* Added configure checking for Mac OS X version, and setting the development target flag accordingly.

2007-05-22 Andy Buckley

* Added an ostream method for AnalysisName enums.

* Converted Analyses and Projections to use projection lists, cuts and beam constraints.

* Added beam pair combining to the BeamPair sets of Projections by finding set meta-intersections.

* Added methods to Cuts, Analysis and Projection to make Cut definition easier.
* Fixed default fall-through in cut handling switch statement and now using -numeric_limits::max() rather than min().

* Added more control of logging presentation via static flag methods on Log.

2007-05-13 Andy Buckley

* Added self-consistency checking mechanisms for Cuts and Beam.

* Re-implemented the cut-handling part of RivetInfo as a Cuts class.

* Changed names of Analysis and Projection name() and handler() methods to getName() and getHandler() to be more consistent with the rest of the public method names in those classes.

2007-05-02 Andy Buckley

* Added auto-booking of histogram bins from AIDA XML files. The AIDA files are located via a C++ function which is generated from RivetPaths.cc.in by running configure.

2007-04-18 Andy Buckley

* Added a preliminary version of the Rick Field UE analysis, under the name PRD65092002.

2007-04-19 Leif Lonnblad

* src/Analysis/HepEx0409040.cc: The reason this did not compile under gcc-4 is that some iterators into a vector were wrongly assumed to be pointers and were initialized to 0 and later compared to 0. I've changed this to initialize to end() of the corresponding vector and to compare with the same end() later.

2007-04-05 Andy Buckley

* Lots of name changes in anticipation of the MCNet school. RivetHandler is now AnalysisHandler (since that's what it does!), BeamParticle has become ParticleName, and RivetInfo has been split into Cut and BeamConstraint portions.

* Added BeamConstraint mechanism, which can be used to determine if an analysis is compatible with the beams being used in the generator. The ParticleName includes an "ANY" wildcard for this purpose.

2006-03-19 Andy Buckley

* Added "rivet" executable which can read in HepMC ASCII dump files and apply Rivet analyses on the events.
2007-02-24 Leif Lonnblad

* src/Projections/KtJets.cc: Added comparison of member variables in compare() function.

* all: Merged changes from polymorphic-projections branch into trunk.

2007-02-17 Leif Lonnblad

* all: projections and analysis handlers: All projections which use other projections now have a pointer rather than a copy of those projections, to allow for polymorphism. The constructors have also been changed to require the used projections themselves, rather than the arguments needed to construct them.

2007-02-17 Leif Lonnblad

* src/Projections/FinalState.cc, include/Rivet/Projections/FinalState.icc (Rivet), include/Rivet/Projections/FinalState.hh: Added cut in transverse momentum on the particles to be included in the final state.

2007-02-06 Leif Lonnblad

* include/LWH/HistogramFactory.h: Fixed divide-by-zero in divide function. Also fixed bug in error calculation in divide function. Introduced checkBin function to make sure two histograms are equal even if they have variable bin widths.

* include/LWH/Histogram1D.h: In normalize(double), do not do anything if the sum of the bins is zero, to avoid dividing by zero.

2007-01-20 Leif Lonnblad

* src/Test/testLWH.cc: Modified to output files using the Tree.

* configure.ac: Removed AC_CONFIG_AUX_DIR([include/Rivet/Config]) since the directory does not exist anymore.

2006-12-21 Andy Buckley

* Rivet will now conditionally install the AIDA and LWH headers if it can't find them when configure'ing.

* Started integrating Leif's LWH package to fulfill the AIDA duties.

* Replaced multitude of CLHEP wrapper headers with a single RivetCLHEP.h header.

2006-11-20 Andy Buckley

* Introduced log4cpp logging.

* Added analysis enum, which can be used as input to an analysis factory by Rivet users.
2006-11-02  Andy Buckley

	* Yet more, almost pointless, administrative moving around of
	things with the intention of making the structure a bit
	better-defined:

	* The RivetInfo and RivetHandler classes have been moved from
	src/Analysis into src, as they are really the main Rivet
	interface classes. The Rivet.h header has also been moved into
	the "header root".

	* The build of a single shared library in lib has been disabled,
	with the library being built instead in src.

2006-10-14  Andy Buckley

	* Introduced a minimal subset of the Sherpa math tools, such as
	Vector{3,4}D, Matrix, etc. The intention is to eventually cut
	the dependency on CLHEP.

2006-07-28  Andy Buckley

	* Moving things around: all sources now in directories under src.

2006-06-04  Leif Lonnblad

	* Analysis/Examples/HZ95108.*: Now uses CentralEtHCM. Also set
	GeV units on the relevant histograms.

	* Projections/CentralEtHCM.*: Making a special class just to get
	out one number - the summed Et in the central rapidity bin - may
	seem like overkill, but someone else might need it...

2006-06-03  Leif Lonnblad

	* Analysis/Examples/HZ95108.*: Added the hz95108 energy flow
	analysis from HZtool.

	* Projections/DISLepton.*: Since many HERA measurements do not
	care if we have an electron or positron beam, it is now possible
	to specify lepton or anti-lepton.

	* Projections/Event.*: Added member and access function for the
	weight of an event (taken from the GenEvent object's
	weights()[0]).

	* Analysis/RivetHandler.*: Now depends explicitly on the AIDA
	interface. An AIDA analysis factory must be specified in the
	constructor, where a tree and histogram factory are
	automatically created. Added access functions to the relevant
	AIDA objects.

	* Analysis/AnalysisBase.*: Added access to the RivetHandler and
	its AIDA factories.

2005-12-27  Leif Lonnblad

	* configure.ac: Added -I$THEPEGPATH/include to AM_CPPFLAGS.

	* Config/Rivet.h: Added some std includes and using std::
	declarations.

	* Analysis/RivetInfo.*: Fixed some bugs.
The RivetInfo facility now works, although it has not been thoroughly tested. * Analysis/Examples/TestMultiplicity.*: Re-introduced FinalStateHCM for testing purposes but commented it away again. * .: Made a number of changes to implement handling of RivetInfo objects. diff --git a/analyses/pluginALICE/ALICE_2011_S8909580.cc b/analyses/pluginALICE/ALICE_2011_S8909580.cc --- a/analyses/pluginALICE/ALICE_2011_S8909580.cc +++ b/analyses/pluginALICE/ALICE_2011_S8909580.cc @@ -1,103 +1,103 @@ #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { class ALICE_2011_S8909580 : public Analysis { public: ALICE_2011_S8909580() : Analysis("ALICE_2011_S8909580") {} public: void init() { - const UnstableFinalState ufs(Cuts::abseta < 15); + const UnstableParticles ufs(Cuts::abseta < 15); declare(ufs, "UFS"); _histPtK0s = bookHisto1D(1, 1, 1); _histPtLambda = bookHisto1D(2, 1, 1); _histPtAntiLambda = bookHisto1D(3, 1, 1); _histPtXi = bookHisto1D(4, 1, 1); _histPtPhi = bookHisto1D(5, 1, 1); _temp_h_Lambdas = bookHisto1D("TMP/h_Lambdas", refData(6, 1, 1)); _temp_h_Kzeros = bookHisto1D("TMP/h_Kzeros", refData(6, 1, 1)); _h_LamKzero = bookScatter2D(6, 1, 1); } void analyze(const Event& event) { const double weight = event.weight(); - const UnstableFinalState& ufs = apply(event, "UFS"); + const UnstableParticles& ufs = apply(event, "UFS"); foreach (const Particle& p, ufs.particles()) { const double absrap = p.absrap(); const double pT = p.pT()/GeV; if (absrap < 0.8) { switch(p.pid()) { case 3312: case -3312: if ( !( p.hasAncestor(3334) || p.hasAncestor(-3334) ) ) { _histPtXi->fill(pT, weight); } break; if (absrap < 0.75) { case 310: _histPtK0s->fill(pT, weight); _temp_h_Kzeros->fill(pT, 2*weight); break; case 3122: if ( !( p.hasAncestor(3322) || p.hasAncestor(-3322) || p.hasAncestor(3312) || p.hasAncestor(-3312) || p.hasAncestor(3334) || 
p.hasAncestor(-3334) ) ) { _histPtLambda->fill(pT, weight); _temp_h_Lambdas->fill(pT, weight); } break; case -3122: if ( !( p.hasAncestor(3322) || p.hasAncestor(-3322) || p.hasAncestor(3312) || p.hasAncestor(-3312) || p.hasAncestor(3334) || p.hasAncestor(-3334) ) ) { _histPtAntiLambda->fill(pT, weight); _temp_h_Lambdas->fill(pT, weight); } break; } if (absrap<0.6) { case 333: _histPtPhi->fill(pT, weight); break; } } } } } void finalize() { scale(_histPtK0s, 1./(1.5*sumOfWeights())); scale(_histPtLambda, 1./(1.5*sumOfWeights())); scale(_histPtAntiLambda, 1./(1.5*sumOfWeights())); scale(_histPtXi, 1./(1.6*sumOfWeights())); scale(_histPtPhi, 1./(1.2*sumOfWeights())); divide(_temp_h_Lambdas, _temp_h_Kzeros, _h_LamKzero); } private: Histo1DPtr _histPtK0s, _histPtLambda, _histPtAntiLambda, _histPtXi, _histPtPhi; Histo1DPtr _temp_h_Lambdas, _temp_h_Kzeros; Scatter2DPtr _h_LamKzero; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ALICE_2011_S8909580); } diff --git a/analyses/pluginALICE/ALICE_2012_I1116147.cc b/analyses/pluginALICE/ALICE_2012_I1116147.cc --- a/analyses/pluginALICE/ALICE_2012_I1116147.cc +++ b/analyses/pluginALICE/ALICE_2012_I1116147.cc @@ -1,87 +1,87 @@ //-*- C++ -*- #include "Rivet/Analysis.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { class ALICE_2012_I1116147 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(ALICE_2012_I1116147); /// Initialise projections and histograms void init() { - const UnstableFinalState ufs(Cuts::absrap < RAPMAX); + const UnstableParticles ufs(Cuts::absrap < RAPMAX); addProjection(ufs, "UFS"); // Check if cm energy is 7 TeV or 0.9 TeV if (fuzzyEquals(sqrtS()/GeV, 900, 1E-3)) _cm_energy_case = 1; else if (fuzzyEquals(sqrtS()/GeV, 7000, 1E-3)) _cm_energy_case = 2; if (_cm_energy_case == 0) throw UserError("Center of mass energy of the given input is neither 900 nor 7000 GeV."); // Book histos if (_cm_energy_case == 1) { 
_h_pi0 = bookHisto1D(2,1,1); } else { _h_pi0 = bookHisto1D(1,1,1); _h_eta = bookHisto1D(3,1,1); _h_etaToPion = bookScatter2D(4,1,1); } // Temporary plots with the binning of _h_etaToPion to construct the eta/pi0 ratio _temp_h_pion = bookHisto1D("TMP/h_pion", refData(4,1,1)); _temp_h_eta = bookHisto1D("TMP/h_eta", refData(4,1,1)); } /// Per-event analysis void analyze(const Event& event) { const double weight = event.weight(); - const FinalState& ufs = apply(event, "UFS"); + const FinalState& ufs = apply(event, "UFS"); for (const Particle& p : ufs.particles()) { const double normfactor = TWOPI*p.pT()/GeV*2*RAPMAX; if (p.pid() == 111) { // Neutral pion; ALICE corrects for pi0 feed-down from K_0_s and Lambda if (p.hasAncestor(310) || p.hasAncestor(3122) || p.hasAncestor(-3122)) continue; //< K_0_s, Lambda, Anti-Lambda _h_pi0->fill(p.pT()/GeV, weight/normfactor); _temp_h_pion->fill(p.pT()/GeV, weight); } else if (p.pid() == 221 && _cm_energy_case == 2) { // eta meson (only for 7 TeV) _h_eta->fill(p.pT()/GeV, weight/normfactor); _temp_h_eta->fill(p.pT()/GeV, weight); } } } /// Normalize histos and construct ratio void finalize() { scale(_h_pi0, crossSection()/microbarn/sumOfWeights()); if (_cm_energy_case == 2) { divide(_temp_h_eta, _temp_h_pion, _h_etaToPion); scale(_h_eta, crossSection()/microbarn/sumOfWeights()); } } private: const double RAPMAX = 0.8; int _cm_energy_case = 0; Histo1DPtr _h_pi0, _h_eta; Histo1DPtr _temp_h_pion, _temp_h_eta; Scatter2DPtr _h_etaToPion; }; DECLARE_RIVET_PLUGIN(ALICE_2012_I1116147); } diff --git a/analyses/pluginALICE/ALICE_2014_I1300380.cc b/analyses/pluginALICE/ALICE_2014_I1300380.cc --- a/analyses/pluginALICE/ALICE_2014_I1300380.cc +++ b/analyses/pluginALICE/ALICE_2014_I1300380.cc @@ -1,120 +1,120 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { class ALICE_2014_I1300380 : public Analysis { public: 
ALICE_2014_I1300380() : Analysis("ALICE_2014_I1300380") {} public: void init() { - const UnstableFinalState cfs(Cuts::absrap<0.5); + const UnstableParticles cfs(Cuts::absrap<0.5); declare(cfs, "CFS"); // Plots from the paper _histPtSigmaStarPlus = bookHisto1D("d01-x01-y01"); // Sigma*+ _histPtSigmaStarMinus = bookHisto1D("d01-x01-y02"); // Sigma*- _histPtSigmaStarPlusAnti = bookHisto1D("d01-x01-y03"); // anti Sigma*- _histPtSigmaStarMinusAnti = bookHisto1D("d01-x01-y04"); // anti Sigma*+ _histPtXiStar = bookHisto1D("d02-x01-y01"); // 0.5 * (xi star + anti xi star) _histAveragePt = bookProfile1D("d03-x01-y01"); // profile } void analyze(const Event& event) { const double weight = event.weight(); - const UnstableFinalState& cfs = apply(event, "CFS"); + const UnstableParticles& cfs = apply(event, "CFS"); foreach (const Particle& p, cfs.particles()) { // protections against mc generators decaying long-lived particles if ( !(p.hasAncestor(310) || p.hasAncestor(-310) || // K0s p.hasAncestor(130) || p.hasAncestor(-130) || // K0l p.hasAncestor(3322) || p.hasAncestor(-3322) || // Xi0 p.hasAncestor(3122) || p.hasAncestor(-3122) || // Lambda p.hasAncestor(3222) || p.hasAncestor(-3222) || // Sigma+/- p.hasAncestor(3312) || p.hasAncestor(-3312) || // Xi-/+ p.hasAncestor(3334) || p.hasAncestor(-3334) )) // Omega-/+ { int aid = abs(p.pdgId()); if (aid == 211 || // pi+ aid == 321 || // K+ aid == 313 || // K*(892)0 aid == 2212 || // proton aid == 333 ) { // phi(1020) _histAveragePt->fill(p.mass()/GeV, p.pT()/GeV, weight); } } // end if "rejection of long-lived particles" switch (p.pdgId()) { case 3224: _histPtSigmaStarPlus->fill(p.pT()/GeV, weight); _histAveragePt->fill(p.mass()/GeV, p.pT()/GeV, weight); break; case -3224: _histPtSigmaStarPlusAnti->fill(p.pT()/GeV, weight); _histAveragePt->fill(p.mass()/GeV, p.pT()/GeV, weight); break; case 3114: _histPtSigmaStarMinus->fill(p.pT()/GeV, weight); _histAveragePt->fill(p.mass()/GeV, p.pT()/GeV, weight); break; case -3114: 
_histPtSigmaStarMinusAnti->fill(p.pT()/GeV, weight); _histAveragePt->fill(p.mass()/GeV, p.pT()/GeV, weight); break; case 3324: _histPtXiStar->fill(p.pT()/GeV, weight); _histAveragePt->fill(p.mass()/GeV, p.pT()/GeV, weight); break; case -3324: _histPtXiStar->fill(p.pT()/GeV, weight); _histAveragePt->fill(p.mass()/GeV, p.pT()/GeV, weight); break; case 3312: _histAveragePt->fill(p.mass()/GeV, p.pT()/GeV, weight); break; case -3312: _histAveragePt->fill(p.mass()/GeV, p.pT()/GeV, weight); break; case 3334: _histAveragePt->fill(p.mass()/GeV, p.pT()/GeV, weight); break; case -3334: _histAveragePt->fill(p.mass()/GeV, p.pT()/GeV, weight); break; } } } void finalize() { scale(_histPtSigmaStarPlus, 1./sumOfWeights()); scale(_histPtSigmaStarPlusAnti, 1./sumOfWeights()); scale(_histPtSigmaStarMinus, 1./sumOfWeights()); scale(_histPtSigmaStarMinusAnti, 1./sumOfWeights()); scale(_histPtXiStar, 1./sumOfWeights()/ 2.); } private: // plots from the paper Histo1DPtr _histPtSigmaStarPlus; Histo1DPtr _histPtSigmaStarPlusAnti; Histo1DPtr _histPtSigmaStarMinus; Histo1DPtr _histPtSigmaStarMinusAnti; Histo1DPtr _histPtXiStar; Profile1DPtr _histAveragePt; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ALICE_2014_I1300380); } diff --git a/analyses/pluginALICE/ALICE_2017_I1512110.cc b/analyses/pluginALICE/ALICE_2017_I1512110.cc --- a/analyses/pluginALICE/ALICE_2017_I1512110.cc +++ b/analyses/pluginALICE/ALICE_2017_I1512110.cc @@ -1,88 +1,88 @@ //-*- C++ -*- #include "Rivet/Analysis.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { class ALICE_2017_I1512110 : public Analysis { public: /// Constructor ALICE_2017_I1512110() : Analysis("ALICE_2017_I1512110"), _rapmax(0.8) { } void init() { - const UnstableFinalState ufs(Cuts::absrap < _rapmax); + const UnstableParticles ufs(Cuts::absrap < _rapmax); addProjection(ufs, "UFS"); _h_pi0 = bookHisto1D(3,1,1); _h_eta = bookHisto1D(4,1,1); _h_etaToPion = 
bookScatter2D(5,1,1); // temporary plots with the binning of _h_etaToPion // to construct the eta/pi0 ratio in the end _temp_h_pion = bookHisto1D("TMP/h_pion",refData(5,1,1)); _temp_h_eta = bookHisto1D("TMP/h_eta",refData(5,1,1)); } void analyze(const Event& event) { const double weight = event.weight(); - const UnstableFinalState& ufs = applyProjection(event, "UFS"); + const UnstableParticles& ufs = applyProjection(event, "UFS"); for (const Particle& p : ufs.particles()) { if (p.pid() == 111) { // neutral pion; ALICE corrects for pi0 feed-down if ( !(p.hasAncestor(310) || p.hasAncestor(130) || // K0_s, K0_l p.hasAncestor(321) || p.hasAncestor(-321) || // K+,K- p.hasAncestor(3122) || p.hasAncestor(-3122) || // Lambda, Anti-Lambda p.hasAncestor(3212) || p.hasAncestor(-3212) || // Sigma0 p.hasAncestor(3222) || p.hasAncestor(-3222) || // Sigmas p.hasAncestor(3112) || p.hasAncestor(-3112) || // Sigmas p.hasAncestor(3322) || p.hasAncestor(-3322) || // Cascades p.hasAncestor(3312) || p.hasAncestor(-3312) )) // Cascades { _h_pi0->fill(p.pT()/GeV, weight /(TWOPI*p.pT()/GeV*2*_rapmax)); _temp_h_pion->fill(p.pT()/GeV, weight); } } else if (p.pid() == 221){ // eta meson _h_eta->fill(p.pT()/GeV, weight /(TWOPI*p.pT()/GeV*2*_rapmax)); _temp_h_eta->fill(p.pT()/GeV, weight); } } } void finalize() { scale(_h_pi0, crossSection()/picobarn/sumOfWeights()); scale(_h_eta, crossSection()/picobarn/sumOfWeights()); divide(_temp_h_eta, _temp_h_pion, _h_etaToPion); } private: double _rapmax; Histo1DPtr _h_pi0; Histo1DPtr _h_eta; Histo1DPtr _temp_h_pion; Histo1DPtr _temp_h_eta; Scatter2DPtr _h_etaToPion; }; DECLARE_RIVET_PLUGIN(ALICE_2017_I1512110); } diff --git a/analyses/pluginALICE/ALICE_2017_I1620477.cc b/analyses/pluginALICE/ALICE_2017_I1620477.cc --- a/analyses/pluginALICE/ALICE_2017_I1620477.cc +++ b/analyses/pluginALICE/ALICE_2017_I1620477.cc @@ -1,90 +1,90 @@ //-*- C++ -*- #include "Rivet/Analysis.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include 
"Rivet/Projections/UnstableParticles.hh" #include "Rivet/Tools/ParticleUtils.hh" namespace Rivet { class ALICE_2017_I1620477 : public Analysis { public: /// Constructor ALICE_2017_I1620477() : Analysis("ALICE_2017_I1620477"), _rapmax(0.8) { } void init() { - const UnstableFinalState ufs(Cuts::absrap < _rapmax); + const UnstableParticles ufs(Cuts::absrap < _rapmax); addProjection(ufs, "UFS"); _h_pi0 = bookHisto1D(1,1,1); _h_eta = bookHisto1D(2,1,1); _h_etaToPion = bookScatter2D(8,1,1); // temporary plots with the binning of _h_etaToPion // to construct the eta/pi0 ratio in the end _temp_h_pion = bookHisto1D("TMP/h_pion",refData(8,1,1)); _temp_h_eta = bookHisto1D("TMP/h_eta",refData(8,1,1)); } void analyze(const Event& event) { const double weight = event.weight(); - const UnstableFinalState& ufs = applyProjection(event, "UFS"); + const UnstableParticles& ufs = applyProjection(event, "UFS"); for(auto p: ufs.particles()) { if (p.pid() == 111) { // neutral pion; ALICE corrects for pi0 feed-down if ( !(p.hasAncestor(310) || p.hasAncestor(130) || // K0_s, K0_l p.hasAncestor(321) || p.hasAncestor(-321) || // K+,K- p.hasAncestor(3122) || p.hasAncestor(-3122) || // Lambda, Anti-Lambda p.hasAncestor(3212) || p.hasAncestor(-3212) || // Sigma0 p.hasAncestor(3222) || p.hasAncestor(-3222) || // Sigmas p.hasAncestor(3112) || p.hasAncestor(-3112) || // Sigmas p.hasAncestor(3322) || p.hasAncestor(-3322) || // Cascades p.hasAncestor(3312) || p.hasAncestor(-3312) )) // Cascades { _h_pi0->fill(p.pT()/GeV, weight /(TWOPI*p.pT()/GeV*2*_rapmax)); _temp_h_pion->fill(p.pT()/GeV, weight); } } else if (p.pid() == 221) { // eta meson _h_eta->fill(p.pT()/GeV, weight /(TWOPI*p.pT()/GeV*2*_rapmax)); _temp_h_eta->fill(p.pT()/GeV, weight); } } } void finalize() { scale(_h_pi0, crossSection()/picobarn/sumOfWeights()); scale(_h_eta, crossSection()/picobarn/sumOfWeights()); divide(_temp_h_eta, _temp_h_pion, _h_etaToPion); } private: double _rapmax; Histo1DPtr _h_pi0; Histo1DPtr _h_eta; Histo1DPtr 
_temp_h_pion; Histo1DPtr _temp_h_eta; Scatter2DPtr _h_etaToPion; }; DECLARE_RIVET_PLUGIN(ALICE_2017_I1620477); } diff --git a/analyses/pluginATLAS/ATLAS_2011_I944826.cc b/analyses/pluginATLAS/ATLAS_2011_I944826.cc --- a/analyses/pluginATLAS/ATLAS_2011_I944826.cc +++ b/analyses/pluginATLAS/ATLAS_2011_I944826.cc @@ -1,260 +1,260 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { class ATLAS_2011_I944826 : public Analysis { public: /// Constructor ATLAS_2011_I944826() : Analysis("ATLAS_2011_I944826") { _sum_w_ks = 0.0; _sum_w_lambda = 0.0; _sum_w_passed = 0.0; } /// Book histograms and initialise projections before the run void init() { - UnstableFinalState ufs(Cuts::pT > 100*MeV); + UnstableParticles ufs(Cuts::pT > 100*MeV); declare(ufs, "UFS"); ChargedFinalState mbts(Cuts::absetaIn(2.09, 3.84)); declare(mbts, "MBTS"); IdentifiedFinalState nstable(Cuts::abseta < 2.5 && Cuts::pT >= 100*MeV); nstable.acceptIdPair(PID::ELECTRON) .acceptIdPair(PID::MUON) .acceptIdPair(PID::PIPLUS) .acceptIdPair(PID::KPLUS) .acceptIdPair(PID::PROTON); declare(nstable, "nstable"); if (fuzzyEquals(sqrtS()/GeV, 7000, 1e-3)) { _hist_Ks_pT = bookHisto1D(1, 1, 1); _hist_Ks_y = bookHisto1D(2, 1, 1); _hist_Ks_mult = bookHisto1D(3, 1, 1); _hist_L_pT = bookHisto1D(7, 1, 1); _hist_L_y = bookHisto1D(8, 1, 1); _hist_L_mult = bookHisto1D(9, 1, 1); _hist_Ratio_v_y = bookScatter2D(13, 1, 1); _hist_Ratio_v_pT = bookScatter2D(14, 1, 1); // _temp_lambda_v_y = Histo1D(10, 0.0, 2.5); _temp_lambdabar_v_y = Histo1D(10, 0.0, 2.5); _temp_lambda_v_pT = Histo1D(18, 0.5, 4.1); _temp_lambdabar_v_pT = Histo1D(18, 0.5, 4.1); } else if (fuzzyEquals(sqrtS()/GeV, 900, 1E-3)) { _hist_Ks_pT = bookHisto1D(4, 1, 1); _hist_Ks_y = bookHisto1D(5, 1, 1); _hist_Ks_mult = bookHisto1D(6, 1, 1); _hist_L_pT = 
bookHisto1D(10, 1, 1); _hist_L_y = bookHisto1D(11, 1, 1); _hist_L_mult = bookHisto1D(12, 1, 1); _hist_Ratio_v_y = bookScatter2D(15, 1, 1); _hist_Ratio_v_pT = bookScatter2D(16, 1, 1); // _temp_lambda_v_y = Histo1D(5, 0.0, 2.5); _temp_lambdabar_v_y = Histo1D(5, 0.0, 2.5); _temp_lambda_v_pT = Histo1D(8, 0.5, 3.7); _temp_lambdabar_v_pT = Histo1D(8, 0.5, 3.7); } } // This function is required to impose the flight time cuts on Kaons and Lambdas double getPerpFlightDistance(const Rivet::Particle& p) { const HepMC::GenParticle* genp = p.genParticle(); const HepMC::GenVertex* prodV = genp->production_vertex(); const HepMC::GenVertex* decV = genp->end_vertex(); const HepMC::ThreeVector prodPos = prodV->point3d(); if (decV) { const HepMC::ThreeVector decPos = decV->point3d(); double dy = prodPos.y() - decPos.y(); double dx = prodPos.x() - decPos.x(); return add_quad(dx, dy); } return numeric_limits<double>::max(); } bool daughtersSurviveCuts(const Rivet::Particle& p) { // We require the Kshort or Lambda to decay into two charged // particles with at least pT = 100 MeV inside acceptance region const HepMC::GenParticle* genp = p.genParticle(); const HepMC::GenVertex* decV = genp->end_vertex(); bool decision = true; if (!decV) return false; if (decV->particles_out_size() == 2) { std::vector<double> pTs; std::vector<int> charges; std::vector<double> etas; foreach (const HepMC::GenParticle* gp, particles(decV, HepMC::children)) { pTs.push_back(gp->momentum().perp()); etas.push_back(fabs(gp->momentum().eta())); charges.push_back( Rivet::PID::threeCharge(gp->pdg_id()) ); // gp->print(); } if ( (pTs[0]/Rivet::GeV < 0.1) || (pTs[1]/Rivet::GeV < 0.1) ) { decision = false; MSG_DEBUG("Failed pT cut: " << pTs[0]/Rivet::GeV << " " << pTs[1]/Rivet::GeV); } if ( etas[0] > 2.5 || etas[1] > 2.5 ) { decision = false; MSG_DEBUG("Failed eta cut: " << etas[0] << " " << etas[1]); } if ( charges[0] * charges[1] >= 0 ) { decision = false; MSG_DEBUG("Failed opposite charge cut: " << charges[0] << " " << charges[1]); } } else {
decision = false; MSG_DEBUG("Failed nDaughters cut: " << decV->particles_out_size()); } return decision; } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); // ATLAS MBTS trigger requirement of at least one hit in either hemisphere if (apply(event, "MBTS").size() < 1) { MSG_DEBUG("Failed trigger cut"); vetoEvent; } // Veto event also when we find less than 2 particles in the acceptance region of type 211,2212,11,13,321 if (apply(event, "nstable").size() < 2) { MSG_DEBUG("Failed stable particle cut"); vetoEvent; } _sum_w_passed += weight; // This ufs holds all the Kaons and Lambdas - const UnstableFinalState& ufs = apply<UnstableFinalState>(event, "UFS"); + const UnstableParticles& ufs = apply<UnstableParticles>(event, "UFS"); // Some counters int n_KS0 = 0; int n_LAMBDA = 0; // Particle loop foreach (const Particle& p, ufs.particles()) { // General particle quantities const double pT = p.pT(); const double y = p.rapidity(); const PdgId apid = p.abspid(); double flightd = 0.0; // Look for Kaons, Lambdas switch (apid) { case PID::K0S: flightd = getPerpFlightDistance(p); if (!inRange(flightd/mm, 4., 450.)
) { MSG_DEBUG("Kaon failed flight distance cut:" << flightd); break; } if (daughtersSurviveCuts(p) ) { _hist_Ks_y ->fill(y, weight); _hist_Ks_pT->fill(pT/GeV, weight); _sum_w_ks += weight; n_KS0++; } break; case PID::LAMBDA: if (pT < 0.5*GeV) { // Lambdas have an additional pT cut of 500 MeV MSG_DEBUG("Lambda failed pT cut:" << pT/GeV << " GeV"); break; } flightd = getPerpFlightDistance(p); if (!inRange(flightd/mm, 17., 450.)) { MSG_DEBUG("Lambda failed flight distance cut:" << flightd/mm << " mm"); break; } if ( daughtersSurviveCuts(p) ) { if (p.pid() == PID::LAMBDA) { _temp_lambda_v_y.fill(fabs(y), weight); _temp_lambda_v_pT.fill(pT/GeV, weight); _hist_L_y->fill(y, weight); _hist_L_pT->fill(pT/GeV, weight); _sum_w_lambda += weight; n_LAMBDA++; } else if (p.pid() == -PID::LAMBDA) { _temp_lambdabar_v_y.fill(fabs(y), weight); _temp_lambdabar_v_pT.fill(pT/GeV, weight); } } break; } } // Fill multiplicity histos _hist_Ks_mult->fill(n_KS0, weight); _hist_L_mult->fill(n_LAMBDA, weight); } /// Normalise histograms etc., after the run void finalize() { MSG_DEBUG("# Events that pass the trigger: " << _sum_w_passed); MSG_DEBUG("# Kshort events: " << _sum_w_ks); MSG_DEBUG("# Lambda events: " << _sum_w_lambda); /// @todo Replace with normalize()? scale(_hist_Ks_pT, 1.0/_sum_w_ks); scale(_hist_Ks_y, 1.0/_sum_w_ks); scale(_hist_Ks_mult, 1.0/_sum_w_passed); /// @todo Replace with normalize()? 
scale(_hist_L_pT, 1.0/_sum_w_lambda); scale(_hist_L_y, 1.0/_sum_w_lambda); scale(_hist_L_mult, 1.0/_sum_w_passed); // Division of histograms to obtain lambda_bar/lambda ratios divide(_temp_lambdabar_v_y, _temp_lambda_v_y, _hist_Ratio_v_y); divide(_temp_lambdabar_v_pT, _temp_lambda_v_pT, _hist_Ratio_v_pT); } private: /// Counters double _sum_w_ks, _sum_w_lambda, _sum_w_passed; /// @name Persistent histograms //@{ Histo1DPtr _hist_Ks_pT, _hist_Ks_y, _hist_Ks_mult; Histo1DPtr _hist_L_pT, _hist_L_y, _hist_L_mult; Scatter2DPtr _hist_Ratio_v_pT, _hist_Ratio_v_y; //@} /// @name Temporary histograms //@{ Histo1D _temp_lambda_v_y, _temp_lambdabar_v_y; Histo1D _temp_lambda_v_pT, _temp_lambdabar_v_pT; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ATLAS_2011_I944826); } diff --git a/analyses/pluginATLAS/ATLAS_2011_S9035664.cc b/analyses/pluginATLAS/ATLAS_2011_S9035664.cc --- a/analyses/pluginATLAS/ATLAS_2011_S9035664.cc +++ b/analyses/pluginATLAS/ATLAS_2011_S9035664.cc @@ -1,138 +1,138 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief J/psi production at ATLAS class ATLAS_2011_S9035664: public Analysis { public: /// Constructor ATLAS_2011_S9035664() : Analysis("ATLAS_2011_S9035664") {} /// @name Analysis methods //@{ void init() { - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); _nonPrRapHigh = bookHisto1D( 14, 1, 1); _nonPrRapMedHigh = bookHisto1D( 13, 1, 1); _nonPrRapMedLow = bookHisto1D( 12, 1, 1); _nonPrRapLow = bookHisto1D( 11, 1, 1); _PrRapHigh = bookHisto1D( 18, 1, 1); _PrRapMedHigh = bookHisto1D( 17, 1, 1); _PrRapMedLow = bookHisto1D( 16, 1, 1); _PrRapLow = bookHisto1D( 15, 1, 1); _IncRapHigh = bookHisto1D( 20, 1, 1); _IncRapMedHigh = bookHisto1D( 21, 1, 1); 
_IncRapMedLow = bookHisto1D( 22, 1, 1); _IncRapLow = bookHisto1D( 23, 1, 1); } void analyze(const Event& e) { // Get event weight for histo filling const double weight = e.weight(); // Final state of unstable particles to get particle spectra - const UnstableFinalState& ufs = apply(e, "UFS"); + const UnstableParticles& ufs = apply(e, "UFS"); foreach (const Particle& p, ufs.particles()) { if (p.abspid() != 443) continue; const GenVertex* gv = p.genParticle()->production_vertex(); bool nonPrompt = false; if (gv) { foreach (const GenParticle* pi, Rivet::particles(gv, HepMC::ancestors)) { const PdgId pid2 = pi->pdg_id(); if (PID::isHadron(pid2) && PID::hasBottom(pid2)) { nonPrompt = true; break; } } } double absrap = p.absrap(); double xp = p.perp(); if (absrap<=2.4 and absrap>2.) { if (nonPrompt) _nonPrRapHigh->fill(xp, weight); else if (!nonPrompt) _PrRapHigh->fill(xp, weight); _IncRapHigh->fill(xp, weight); } else if (absrap<=2. and absrap>1.5) { if (nonPrompt) _nonPrRapMedHigh->fill(xp, weight); else if (!nonPrompt) _PrRapMedHigh->fill(xp, weight); _IncRapMedHigh->fill(xp, weight); } else if (absrap<=1.5 and absrap>0.75) { if (nonPrompt) _nonPrRapMedLow->fill(xp, weight); else if (!nonPrompt) _PrRapMedLow->fill(xp, weight); _IncRapMedLow->fill(xp, weight); } else if (absrap<=0.75) { if (nonPrompt) _nonPrRapLow->fill(xp, weight); else if (!nonPrompt) _PrRapLow->fill(xp, weight); _IncRapLow->fill(xp, weight); } } } /// Finalize void finalize() { double factor = crossSection()/nanobarn*0.0593; scale(_PrRapHigh , factor/sumOfWeights()); scale(_PrRapMedHigh , factor/sumOfWeights()); scale(_PrRapMedLow , factor/sumOfWeights()); scale(_PrRapLow , factor/sumOfWeights()); scale(_nonPrRapHigh , factor/sumOfWeights()); scale(_nonPrRapMedHigh, factor/sumOfWeights()); scale(_nonPrRapMedLow , factor/sumOfWeights()); scale(_nonPrRapLow , factor/sumOfWeights()); scale(_IncRapHigh , 1000.*factor/sumOfWeights()); scale(_IncRapMedHigh , 1000.*factor/sumOfWeights()); 
scale(_IncRapMedLow , 1000.*factor/sumOfWeights()); scale(_IncRapLow , 1000.*factor/sumOfWeights()); } //@} private: Histo1DPtr _nonPrRapHigh; Histo1DPtr _nonPrRapMedHigh; Histo1DPtr _nonPrRapMedLow; Histo1DPtr _nonPrRapLow; Histo1DPtr _PrRapHigh; Histo1DPtr _PrRapMedHigh; Histo1DPtr _PrRapMedLow; Histo1DPtr _PrRapLow; Histo1DPtr _IncRapHigh; Histo1DPtr _IncRapMedHigh; Histo1DPtr _IncRapMedLow; Histo1DPtr _IncRapLow; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ATLAS_2011_S9035664); } diff --git a/analyses/pluginATLAS/ATLAS_2012_I1082009.cc b/analyses/pluginATLAS/ATLAS_2012_I1082009.cc --- a/analyses/pluginATLAS/ATLAS_2012_I1082009.cc +++ b/analyses/pluginATLAS/ATLAS_2012_I1082009.cc @@ -1,147 +1,147 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/MissingMomentum.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/LeadingParticlesFinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { class ATLAS_2012_I1082009 : public Analysis { public: /// @name Constructors etc. //@{ /// Constructor ATLAS_2012_I1082009() : Analysis("ATLAS_2012_I1082009"), _weight25_30(0.),_weight30_40(0.),_weight40_50(0.), _weight50_60(0.),_weight60_70(0.),_weight25_70(0.) 
{ } //@} public: /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { // Input for the jets: No neutrinos, no muons VetoedFinalState veto; veto.addVetoPairId(PID::MUON); veto.vetoNeutrinos(); FastJets jets(veto, FastJets::ANTIKT, 0.6); declare(jets, "jets"); // unstable final-state for D* - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); _h_pt25_30 = bookHisto1D( 8,1,1); _h_pt30_40 = bookHisto1D( 9,1,1); _h_pt40_50 = bookHisto1D(10,1,1); _h_pt50_60 = bookHisto1D(11,1,1); _h_pt60_70 = bookHisto1D(12,1,1); _h_pt25_70 = bookHisto1D(13,1,1); } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); // get the jets Jets jets; foreach (const Jet& jet, apply(event, "jets").jetsByPt(25.0*GeV)) { if ( jet.abseta() < 2.5 ) jets.push_back(jet); } // get the D* mesons - const UnstableFinalState& ufs = apply(event, "UFS"); + const UnstableParticles& ufs = apply(event, "UFS"); Particles Dstar; foreach (const Particle& p, ufs.particles()) { const int id = p.abspid(); if(id==413) Dstar.push_back(p); } // loop over the jobs foreach (const Jet& jet, jets ) { double perp = jet.perp(); bool found = false; double z(0.); if(perp<25.||perp>70.) continue; foreach(const Particle & p, Dstar) { if(p.perp()<7.5) continue; if(deltaR(p, jet.momentum())<0.6) { Vector3 axis = jet.p3().unit(); z = axis.dot(p.p3())/jet.E(); if(z<0.3) continue; found = true; break; } } _weight25_70 += weight; if(found) _h_pt25_70->fill(z,weight); if(perp>=25.&&perp<30.) { _weight25_30 += weight; if(found) _h_pt25_30->fill(z,weight); } else if(perp>=30.&&perp<40.) { _weight30_40 += weight; if(found) _h_pt30_40->fill(z,weight); } else if(perp>=40.&&perp<50.) { _weight40_50 += weight; if(found) _h_pt40_50->fill(z,weight); } else if(perp>=50.&&perp<60.) { _weight50_60 += weight; if(found) _h_pt50_60->fill(z,weight); } else if(perp>=60.&&perp<70.) 
{ _weight60_70 += weight; if(found) _h_pt60_70->fill(z,weight); } } } /// Normalise histograms etc., after the run void finalize() { scale(_h_pt25_30,1./_weight25_30); scale(_h_pt30_40,1./_weight30_40); scale(_h_pt40_50,1./_weight40_50); scale(_h_pt50_60,1./_weight50_60); scale(_h_pt60_70,1./_weight60_70); scale(_h_pt25_70,1./_weight25_70); } //@} private: /// @name Histograms //@{ double _weight25_30,_weight30_40,_weight40_50; double _weight50_60,_weight60_70,_weight25_70; Histo1DPtr _h_pt25_30; Histo1DPtr _h_pt30_40; Histo1DPtr _h_pt40_50; Histo1DPtr _h_pt50_60; Histo1DPtr _h_pt60_70; Histo1DPtr _h_pt25_70; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ATLAS_2012_I1082009); } diff --git a/analyses/pluginATLAS/ATLAS_2012_I1093734.cc b/analyses/pluginATLAS/ATLAS_2012_I1093734.cc --- a/analyses/pluginATLAS/ATLAS_2012_I1093734.cc +++ b/analyses/pluginATLAS/ATLAS_2012_I1093734.cc @@ -1,320 +1,320 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Projections/MissingMomentum.hh" namespace Rivet { namespace { inline double sumAB(vector vecX, vector vecY, vector vecW) { assert(vecX.size() == vecY.size() && vecX.size() == vecW.size()); double sum(0); for (size_t i = 0; i < vecX.size(); i++) sum += vecW[i] * vecX[i] * vecY[i]; return sum; } inline double sumA(vector vecX, vector vecW) { assert(vecX.size() == vecW.size()); double sum(0); for (size_t i = 0; i < vecX.size(); i++) sum += vecX[i]*vecW[i]; return sum; } inline double sumW(vector vecW) { double sum(0); for (size_t i = 0; i < vecW.size(); i++) sum += vecW[i]; return sum; } inline double mean(vector vecX, vector vecW) { return sumA(vecX, vecW) / sumW(vecW); } inline double standard_deviation(vector vecX, vector vecW) { const double x_bar = mean(vecX, vecW); double sum(0); for (size_t i = 0; i < 
vecX.size(); i++) { sum += vecW[i] * sqr(vecX[i] - x_bar); } return sqrt( sum / sumW(vecW) ); } inline double a0_regression(vector vecX, vector vecY, vector vecW) { const double numerator = sumA(vecY, vecW) * sumAB(vecX, vecX, vecW) - sumA(vecX, vecW) * sumAB(vecX, vecY, vecW); const double denominator = sumW(vecW) * sumAB(vecX, vecX, vecW) - sumA(vecX, vecW) * sumA(vecX, vecW); return numerator / denominator; } inline double a1_regression(vector vecX, vector vecY, vector vecW) { const double numerator = sumW(vecW) * sumAB(vecX,vecY,vecW) - sumA(vecX, vecW) * sumA(vecY, vecW); const double denominator = sumW(vecW) * sumAB(vecX,vecX,vecW) - sumA(vecX, vecW) * sumA(vecX, vecW); return numerator/ denominator; } inline double a1_regression2(vector vecX, vector vecY, vector vecW) { const double x_bar = mean(vecX, vecW); const double y_bar = mean(vecY, vecW); double sumXY(0); for (size_t i = 0; i < vecX.size(); i++) { sumXY += vecW[i] * (vecY[i]-y_bar) * (vecX[i]-x_bar); } return sumXY / ( standard_deviation(vecX, vecW) * standard_deviation(vecY, vecW) * sumW(vecW) ); } inline double quadra_sum_residual(vector vecX, vector vecY, vector vecW) { const double a0 = a0_regression(vecX, vecY, vecW); const double a1 = a1_regression(vecX, vecY, vecW); double sum(0); for (size_t i = 0; i < vecX.size(); i++) { const double y_est = a0 + a1*vecX[i]; sum += vecW[i] * sqr(vecY[i] - y_est); } return sum; } inline double error_on_slope(vector vecX, vector vecY, vector vecW) { const double quadra_sum_res = quadra_sum_residual(vecX, vecY, vecW); const double sqrt_quadra_sum_x = standard_deviation(vecX, vecW) * sqrt(sumW(vecW)); return sqrt(quadra_sum_res/(sumW(vecW)-2)) / sqrt_quadra_sum_x; } } /// Forward-backward and azimuthal correlations in minimum bias events class ATLAS_2012_I1093734 : public Analysis { public: /// Constructor ATLAS_2012_I1093734() : Analysis("ATLAS_2012_I1093734") { // Stat convergence happens around 20k events, so it doesn't make sense to run this // analysis with 
much less than that. Given that, let's avoid some unnecessary vector
      // resizing by allocating sensible amounts in the first place.
      for (int ipt = 0; ipt < NPTBINS; ++ipt) {
        for (int k = 0; k < NETABINS; ++k) {
          _vecsNchF [ipt][k].reserve(10000);
          _vecsNchB [ipt][k].reserve(10000);
          _vecWeight[ipt][k].reserve(10000);
          if (ipt == 0) {
            _vecsSumptF[k].reserve(10000);
            _vecsSumptB[k].reserve(10000);
          }
        }
      }
    }

  public:

    /// @name Analysis methods
    //@{

    /// Book histograms and initialise projections before the run
    void init() {

      // FB correlations part

      // Projections
      for (int ipt = 0; ipt < NPTBINS; ++ipt) {
        const double ptmin = PTMINVALUES[ipt]*MeV;
        for (int ieta = 0; ieta < NETABINS; ++ieta) {
          declare(ChargedFinalState(-ETAVALUES[ieta], -ETAVALUES[ieta]+0.5, ptmin), "Tracks"+ETABINNAMES[ieta]+"B"+PTBINNAMES[ipt]);
          declare(ChargedFinalState( ETAVALUES[ieta]-0.5,  ETAVALUES[ieta], ptmin), "Tracks"+ETABINNAMES[ieta]+"F"+PTBINNAMES[ipt]);
        }
        declare(ChargedFinalState(-2.5, 2.5, ptmin), "CFS" + PTBINNAMES[ipt]);
      }

      // Histos
      if (fuzzyEquals(sqrtS(), 7000*GeV, 1e-3)) {
        for (int ipt = 0; ipt < NPTBINS ; ++ipt ) _s_NchCorr_vsEta[ipt] = bookScatter2D(1+ipt, 2, 1, true);
        for (int ieta = 0; ieta < NETABINS; ++ieta) _s_NchCorr_vsPt [ieta] = bookScatter2D(8+ieta, 2, 1, true);
        _s_PtsumCorr = bookScatter2D(13, 2, 1, true);
      } else if (fuzzyEquals(sqrtS(), 900*GeV, 1e-3)) {
        _s_NchCorr_vsEta[0] = bookScatter2D(14, 2, 1, true);
        _s_PtsumCorr        = bookScatter2D(15, 2, 1, true);
      }

      // Azimuthal correlations part

      // Projections
      const double ptmin = 500*MeV;
      declare(ChargedFinalState(-2.5, 2.5, ptmin), "ChargedTracks25");
      declare(ChargedFinalState(-2.0, 2.0, ptmin), "ChargedTracks20");
      declare(ChargedFinalState(-1.0, 1.0, ptmin), "ChargedTracks10");

      // Histos
      /// @todo Declare/book as temporary
      for (size_t ieta = 0; ieta < 3; ++ieta) {
        if (fuzzyEquals(sqrtS(), 7000*GeV, 1e-3)) {
          _s_dphiMin[ieta] = bookScatter2D(2+2*ieta, 1, 1, true);
          _s_diffSO[ieta]  = bookScatter2D(8+2*ieta, 1, 1, true);
          _th_dphi[ieta]   =
YODA::Histo1D(refData(2+2*ieta, 1, 1));
          _th_same[ieta]   = YODA::Histo1D(refData(8+2*ieta, 1, 1));
          _th_oppo[ieta]   = YODA::Histo1D(refData(8+2*ieta, 1, 1));
        } else if (fuzzyEquals(sqrtS(), 900*GeV, 1e-3)) {
          _s_dphiMin[ieta] = bookScatter2D(1+2*ieta, 1, 1, true);
          _s_diffSO[ieta]  = bookScatter2D(7+2*ieta, 1, 1, true);
          _th_dphi[ieta]   = YODA::Histo1D(refData(1+2*ieta, 1, 1));
          _th_same[ieta]   = YODA::Histo1D(refData(7+2*ieta, 1, 1));
          _th_oppo[ieta]   = YODA::Histo1D(refData(7+2*ieta, 1, 1));
        }
      }
    }


    /// Perform the per-event analysis
    void analyze(const Event& event) {
      const double weight = event.weight();

      for (int ipt = 0; ipt < NPTBINS; ++ipt) {
        const FinalState& charged = apply<FinalState>(event, "CFS" + PTBINNAMES[ipt]);
        if (charged.particles().size() >= 2) {
          for (int ieta = 0; ieta < NETABINS; ++ieta) {
            const string fname = "Tracks" + ETABINNAMES[ieta] + "F" + PTBINNAMES[ipt];
            const string bname = "Tracks" + ETABINNAMES[ieta] + "B" + PTBINNAMES[ipt];
            const ParticleVector particlesF = apply<FinalState>(event, fname).particles();
            const ParticleVector particlesB = apply<FinalState>(event, bname).particles();
            _vecsNchF [ipt][ieta].push_back((double) particlesF.size());
            _vecsNchB [ipt][ieta].push_back((double) particlesB.size());
            _vecWeight[ipt][ieta].push_back(weight);

            // Sum pT only for 100 MeV particles
            if (ipt == 0) {
              double sumptF = 0;
              double sumptB = 0;
              foreach (const Particle& p, particlesF) sumptF += p.pT();
              foreach (const Particle& p, particlesB) sumptB += p.pT();
              _vecsSumptF[ieta].push_back(sumptF);
              _vecsSumptB[ieta].push_back(sumptB);
            }
          }
        }
      }

      string etabin[3] = { "10", "20", "25" };
      for (int ieta = 0; ieta < 3; ieta++) {
        const string fname = "ChargedTracks" + etabin[ieta];
        const ParticleVector partTrks = apply<ChargedFinalState>(event, fname).particlesByPt();
        if (partTrks.empty()) continue; // guard against events with no track in this acceptance

        // Find the leading track and fill the temp histograms
        const Particle& plead = partTrks[0];
        foreach (const Particle& p, partTrks) {
          if (&plead == &p) continue; ///< Don't compare the lead particle to itself
          const double dphi = deltaPhi(p.momentum(), plead.momentum());
_th_dphi[ieta].fill(dphi, weight); const bool sameside = (plead.eta() * p.eta() > 0); (sameside ? _th_same : _th_oppo)[ieta].fill(dphi, weight); } } } /// Finalize void finalize() { // FB part // @todo For 2D plots we will need _vecsNchF[i], _vecsNchB[j] for (int ipt = 0; ipt < NPTBINS; ++ipt) { for (int ieta = 0; ieta < NETABINS; ++ieta) { _s_NchCorr_vsEta[ipt]->point(ieta).setY(a1_regression2(_vecsNchF[ipt][ieta], _vecsNchB[ipt][ieta], _vecWeight[ipt][ieta])); _s_NchCorr_vsEta[ipt]->point(ieta).setYErr(error_on_slope(_vecsNchF[ipt][ieta], _vecsNchB[ipt][ieta], _vecWeight[ipt][ieta])); } // There is just one plot at 900 GeV so exit the loop here if (fuzzyEquals(sqrtS(), 900*GeV, 1e-3) && ipt == 0) break; } if (!fuzzyEquals(sqrtS(), 900*GeV, 1e-3)) { ///< No plots at 900 GeV for (int ieta = 0; ieta < NETABINS; ++ieta) { for (int ipt = 0; ipt < NPTBINS; ++ipt) { _s_NchCorr_vsPt[ieta]->point(ipt).setY(a1_regression2(_vecsNchF[ipt][ieta], _vecsNchB[ipt][ieta], _vecWeight[ipt][ieta])); _s_NchCorr_vsPt[ieta]->point(ipt).setYErr(error_on_slope(_vecsNchF[ipt][ieta], _vecsNchB[ipt][ieta], _vecWeight[ipt][ieta])); } } } // Sum pt only for 100 MeV particles for (int ieta = 0; ieta < NETABINS; ++ieta) { _s_PtsumCorr->point(ieta).setY(a1_regression2(_vecsSumptF[ieta], _vecsSumptB[ieta], _vecWeight[0][ieta])); _s_PtsumCorr->point(ieta).setYErr(error_on_slope(_vecsSumptF[ieta], _vecsSumptB[ieta], _vecWeight[0][ieta])); } // Azimuthal part for (int ieta = 0; ieta < 3; ieta++) { /// @note We don't just do a subtraction because of the risk of negative values and negative errors /// @todo Should the difference always be shown as positive?, i.e. y -> abs(y), etc. /// @todo Should the normalization be done _after_ the -ve value treatment? YODA::Histo1D hdiffSO = _th_same[ieta] - _th_oppo[ieta]; hdiffSO.normalize(hdiffSO.bin(0).xWidth()); for (size_t i = 0; i < hdiffSO.numBins(); ++i) { const double y = hdiffSO.bin(i).height() >= 0 ? 
hdiffSO.bin(i).height() : 0;
          const double yerr = hdiffSO.bin(i).heightErr() >= 0 ? hdiffSO.bin(i).heightErr() : 0;
          _s_diffSO[ieta]->point(i).setY(y, yerr);
        }

        // Extract minimal value
        double histMin = _th_dphi[ieta].bin(0).height();
        for (size_t iphi = 1; iphi < _th_dphi[ieta].numBins(); ++iphi) {
          histMin = std::min(histMin, _th_dphi[ieta].bin(iphi).height());
        }

        // Build scatter of differences
        double sumDiff = 0;
        for (size_t iphi = 0; iphi < _th_dphi[ieta].numBins(); ++iphi) {
          const double diff = _th_dphi[ieta].bin(iphi).height() - histMin;
          _s_dphiMin[ieta]->point(iphi).setY(diff, _th_dphi[ieta].bin(iphi).heightErr());
          sumDiff += diff;
        }

        // Normalize
        _s_dphiMin[ieta]->scale(1, 1/sumDiff);
      }
    }

    //@}


  private:

    static const int NPTBINS = 7;
    static const int NETABINS = 5;
    static const double PTMINVALUES[NPTBINS];
    static const string PTBINNAMES[NPTBINS];
    static const double ETAVALUES[NETABINS];
    static const string ETABINNAMES[NETABINS];

    vector<double> _vecWeight[NPTBINS][NETABINS];
    vector<double> _vecsNchF[NPTBINS][NETABINS];
    vector<double> _vecsNchB[NPTBINS][NETABINS];
    vector<double> _vecsSumptF[NETABINS];
    vector<double> _vecsSumptB[NETABINS];

    /// @name Histograms
    //@{
    Scatter2DPtr _s_NchCorr_vsEta[NPTBINS], _s_NchCorr_vsPt[NETABINS], _s_PtsumCorr;
    Scatter2DPtr _s_dphiMin[3], _s_diffSO[3];
    YODA::Histo1D _th_dphi[3], _th_same[3], _th_oppo[3];
    //@}

  };


  /// @todo Initialize these inline at declaration with C++11
  const double ATLAS_2012_I1093734::PTMINVALUES[] = {100, 200, 300, 500, 1000, 1500, 2000 };
  const string ATLAS_2012_I1093734::PTBINNAMES[] = { "100", "200", "300", "500", "1000", "1500", "2000" };
  const double ATLAS_2012_I1093734::ETAVALUES[] = {0.5, 1.0, 1.5, 2.0, 2.5};
  const string ATLAS_2012_I1093734::ETABINNAMES[] = { "05", "10", "15", "20", "25" };


  // Hook for the plugin system
  DECLARE_RIVET_PLUGIN(ATLAS_2012_I1093734);

}
diff --git a/analyses/pluginATLAS/ATLAS_2012_I1094568.cc b/analyses/pluginATLAS/ATLAS_2012_I1094568.cc
--- a/analyses/pluginATLAS/ATLAS_2012_I1094568.cc
+++
b/analyses/pluginATLAS/ATLAS_2012_I1094568.cc @@ -1,366 +1,366 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" #include "Rivet/Projections/LeadingParticlesFinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Projections/HadronicFinalState.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/FastJets.hh" namespace Rivet { struct ATLAS_2012_I1094568_Plots { // Track which veto region this is, to match the autobooked histograms int region_index; // Lower rapidity boundary or veto region double y_low; // Upper rapidity boundary or veto region double y_high; double vetoJetPt_Q0; double vetoJetPt_Qsum; // Histograms to store the veto jet pT and sum(veto jet pT) histograms. Histo1DPtr _h_vetoJetPt_Q0; Histo1DPtr _h_vetoJetPt_Qsum; // Scatter2Ds for the gap fractions Scatter2DPtr _d_gapFraction_Q0; Scatter2DPtr _d_gapFraction_Qsum; }; /// Top pair production with central jet veto class ATLAS_2012_I1094568 : public Analysis { public: /// Constructor ATLAS_2012_I1094568() : Analysis("ATLAS_2012_I1094568") { } /// Book histograms and initialise projections before the run void init() { const FinalState fs(Cuts::abseta < 4.5); declare(fs, "ALL_FS"); /// Get electrons from truth record IdentifiedFinalState elec_fs(Cuts::abseta < 2.47 && Cuts::pT > 25*GeV); elec_fs.acceptIdPair(PID::ELECTRON); declare(elec_fs, "ELEC_FS"); /// Get muons which pass the initial kinematic cuts: IdentifiedFinalState muon_fs(Cuts::abseta < 2.5 && Cuts::pT > 20*GeV); muon_fs.acceptIdPair(PID::MUON); declare(muon_fs, "MUON_FS"); /// Get all neutrinos. These will not be used to form jets. 
/// We'll use the highest 2 pT neutrinos to calculate the MET IdentifiedFinalState neutrino_fs(Cuts::abseta < 4.5); neutrino_fs.acceptNeutrinos(); declare(neutrino_fs, "NEUTRINO_FS"); // Final state used as input for jet-finding. // We include everything except the muons and neutrinos VetoedFinalState jet_input(fs); jet_input.vetoNeutrinos(); jet_input.addVetoPairId(PID::MUON); declare(jet_input, "JET_INPUT"); // Get the jets FastJets jets(jet_input, FastJets::ANTIKT, 0.4); declare(jets, "JETS"); // Initialise weight counter m_total_weight = 0.0; // Init histogramming for the various regions m_plots[0].region_index = 1; m_plots[0].y_low = 0.0; m_plots[0].y_high = 0.8; initializePlots(m_plots[0]); // m_plots[1].region_index = 2; m_plots[1].y_low = 0.8; m_plots[1].y_high = 1.5; initializePlots(m_plots[1]); // m_plots[2].region_index = 3; m_plots[2].y_low = 1.5; m_plots[2].y_high = 2.1; initializePlots(m_plots[2]); // m_plots[3].region_index = 4; m_plots[3].y_low = 0.0; m_plots[3].y_high = 2.1; initializePlots(m_plots[3]); } void initializePlots(ATLAS_2012_I1094568_Plots& plots) { const string vetoPt_Q0_name = "TMP/vetoJetPt_Q0_" + to_str(plots.region_index); plots.vetoJetPt_Q0 = 0.0; plots._h_vetoJetPt_Q0 = bookHisto1D(vetoPt_Q0_name, 200, 0.0, 1000.0); plots._d_gapFraction_Q0 = bookScatter2D(plots.region_index, 1, 1); foreach (Point2D p, refData(plots.region_index, 1, 1).points()) { p.setY(0, 0); plots._d_gapFraction_Q0->addPoint(p); } const string vetoPt_Qsum_name = "TMP/vetoJetPt_Qsum_" + to_str(plots.region_index); plots._h_vetoJetPt_Qsum = bookHisto1D(vetoPt_Qsum_name, 200, 0.0, 1000.0); plots._d_gapFraction_Qsum = bookScatter2D(plots.region_index, 2, 1); plots.vetoJetPt_Qsum = 0.0; foreach (Point2D p, refData(plots.region_index, 2, 1).points()) { p.setY(0, 0); plots._d_gapFraction_Qsum->addPoint(p); } } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); /// Get the various sets of final state particles 
const Particles& elecFS = apply<IdentifiedFinalState>(event, "ELEC_FS").particlesByPt();
      const Particles& muonFS = apply<IdentifiedFinalState>(event, "MUON_FS").particlesByPt();
      const Particles& neutrinoFS = apply<IdentifiedFinalState>(event, "NEUTRINO_FS").particlesByPt();

      // Get all jets with pT > 25 GeV
      const Jets& jets = apply<FastJets>(event, "JETS").jetsByPt(25.0*GeV);

      // Keep any jets that pass the initial rapidity cut
      vector<const Jet*> central_jets;
      foreach(const Jet& j, jets) {
        if (j.absrap() < 2.4) central_jets.push_back(&j);
      }

      // For each of the jets that pass the rapidity cut, only keep those that are not
      // too close to any leptons
      vector<const Jet*> good_jets;
      foreach(const Jet* j, central_jets) {
        bool goodJet = true;
        foreach (const Particle& e, elecFS) {
          double elec_jet_dR = deltaR(e.momentum(), j->momentum());
          if (elec_jet_dR < 0.4) { goodJet = false; break; }
        }
        if (!goodJet) continue;
        foreach (const Particle& m, muonFS) {
          double muon_jet_dR = deltaR(m.momentum(), j->momentum());
          if (muon_jet_dR < 0.4) { goodJet = false; break; }
        }
        if (!goodJet) continue;
        good_jets.push_back(j);
      }

      // Get b hadrons with pT > 5 GeV
-      /// @todo This is a hack -- replace with UnstableFinalState
+      /// @todo This is a hack -- replace with UnstableParticles
      vector<const GenParticle*> B_hadrons;
      vector<const GenParticle*> allParticles = particles(event.genEvent());
      for (size_t i = 0; i < allParticles.size(); i++) {
        const GenParticle* p = allParticles[i];
        if (!PID::isHadron(p->pdg_id()) || !PID::hasBottom(p->pdg_id())) continue;
        if (p->momentum().perp() < 5*GeV) continue;
        B_hadrons.push_back(p);
      }

      // For each of the good jets, check whether any are b-jets (via dR matching)
      vector<const Jet*> b_jets;
      foreach (const Jet* j, good_jets) {
        bool isbJet = false;
        foreach (const GenParticle* b, B_hadrons) {
          if (deltaR(j->momentum(), FourMomentum(b->momentum())) < 0.3) isbJet = true;
        }
        if (isbJet) b_jets.push_back(j);
      }

      // Check the good jets again and keep track of the "additional jets"
      // i.e.
those which are not either of the 2 highest pT b-jets vector veto_jets; int n_bjets_matched = 0; foreach (const Jet* j, good_jets) { bool isBJet = false; foreach (const Jet* b, b_jets) { if (n_bjets_matched == 2) break; if (b == j){isBJet = true; ++ n_bjets_matched;} } if (!isBJet) veto_jets.push_back(j); } // Get the MET by taking the vector sum of all neutrinos /// @todo Use MissingMomentum instead? double MET = 0; FourMomentum p_MET; foreach (const Particle& p, neutrinoFS) { p_MET = p_MET + p.momentum(); } MET = p_MET.pT(); // Now we have everything we need to start doing the event selections bool passed_ee = false; vector vetoJets_ee; // We want exactly 2 electrons... if (elecFS.size() == 2) { // ... with opposite sign charges. if (charge(elecFS[0]) != charge(elecFS[1])) { // Check the MET if (MET >= 40*GeV) { // Do some dilepton mass cuts const double dilepton_mass = (elecFS[0].momentum() + elecFS[1].momentum()).mass(); if (dilepton_mass >= 15*GeV) { if (fabs(dilepton_mass - 91.0*GeV) >= 10.0*GeV) { // We need at least 2 b-jets if (b_jets.size() > 1) { // This event has passed all the cuts; passed_ee = true; } } } } } } bool passed_mumu = false; // Now do the same checks for the mumu channel vector vetoJets_mumu; // So we now want 2 good muons... if (muonFS.size() == 2) { // ...with opposite sign charges. 
if (charge(muonFS[0]) != charge(muonFS[1])) { // Check the MET if (MET >= 40*GeV) { // and do some di-muon mass cuts const double dilepton_mass = (muonFS.at(0).momentum() + muonFS.at(1).momentum()).mass(); if (dilepton_mass >= 15*GeV) { if (fabs(dilepton_mass - 91.0*GeV) >= 10.0*GeV) { // Need at least 2 b-jets if (b_jets.size() > 1) { // This event has passed all mumu-channel cuts passed_mumu = true; } } } } } } bool passed_emu = false; // Finally, the same again with the emu channel vector vetoJets_emu; // We want exactly 1 electron and 1 muon if (elecFS.size() == 1 && muonFS.size() == 1) { // With opposite sign charges if (charge(elecFS[0]) != charge(muonFS[0])) { // Calculate HT: scalar sum of the pTs of the leptons and all good jets double HT = 0; HT += elecFS[0].pT(); HT += muonFS[0].pT(); foreach (const Jet* j, good_jets) HT += fabs(j->pT()); // Keep events with HT > 130 GeV if (HT > 130.0*GeV) { // And again we want 2 or more b-jets if (b_jets.size() > 1) { passed_emu = true; } } } } if (passed_ee == true || passed_mumu == true || passed_emu == true) { // If the event passes the selection, we use it for all gap fractions m_total_weight += weight; // Loop over each veto jet foreach (const Jet* j, veto_jets) { const double pt = j->pT(); const double rapidity = fabs(j->rapidity()); // Loop over each region for (size_t i = 0; i < 4; ++i) { // If the jet falls into this region, get its pT and increment sum(pT) if (inRange(rapidity, m_plots[i].y_low, m_plots[i].y_high)) { m_plots[i].vetoJetPt_Qsum += pt; // If we've already got a veto jet, don't replace it if (m_plots[i].vetoJetPt_Q0 == 0.0) m_plots[i].vetoJetPt_Q0 = pt; } } } for (size_t i = 0; i < 4; ++i) { m_plots[i]._h_vetoJetPt_Q0->fill(m_plots[i].vetoJetPt_Q0, weight); m_plots[i]._h_vetoJetPt_Qsum->fill(m_plots[i].vetoJetPt_Qsum, weight); m_plots[i].vetoJetPt_Q0 = 0.0; m_plots[i].vetoJetPt_Qsum = 0.0; } } } /// Normalise histograms etc., after the run void finalize() { for (size_t i = 0; i < 4; ++i) { 
finalizeGapFraction(m_total_weight, m_plots[i]._d_gapFraction_Q0, m_plots[i]._h_vetoJetPt_Q0); finalizeGapFraction(m_total_weight, m_plots[i]._d_gapFraction_Qsum, m_plots[i]._h_vetoJetPt_Qsum); } } /// Convert temporary histos to cumulative efficiency scatters /// @todo Should be possible to replace this with a couple of YODA one-lines for diff -> integral and "efficiency division" void finalizeGapFraction(double total_weight, Scatter2DPtr gapFraction, Histo1DPtr vetoPt) { // Stores the cumulative frequency of the veto jet pT histogram double vetoPtWeightSum = 0.0; // Keep track of which gap fraction point we're currently populating (#final_points != #tmp_bins) size_t fgap_point = 0; for (size_t i = 0; i < vetoPt->numBins(); ++i) { // If we've done the last "final" point, stop if (fgap_point == gapFraction->numPoints()) break; // Increment the cumulative vetoPt counter for this temp histo bin /// @todo Get rid of this and use vetoPt->integral(i+1) when points and bins line up? vetoPtWeightSum += vetoPt->bin(i).sumW(); // If this temp histo bin's upper edge doesn't correspond to the reference point, don't finalise the scatter. // Note that points are ON the bin edges and have no width: they represent the integral up to exactly that point. if ( !fuzzyEquals(vetoPt->bin(i).xMax(), gapFraction->point(fgap_point).x()) ) continue; // Calculate the gap fraction and its uncertainty const double frac = (total_weight != 0.0) ? vetoPtWeightSum/total_weight : 0; const double fracErr = (total_weight != 0.0) ? 
sqrt(frac*(1-frac)/total_weight) : 0; gapFraction->point(fgap_point).setY(frac, fracErr); ++fgap_point; } } private: // Weight counter double m_total_weight; // Structs containing all the plots, for each event selection ATLAS_2012_I1094568_Plots m_plots[4]; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ATLAS_2012_I1094568); } diff --git a/analyses/pluginATLAS/ATLAS_2012_I1203852.cc b/analyses/pluginATLAS/ATLAS_2012_I1203852.cc --- a/analyses/pluginATLAS/ATLAS_2012_I1203852.cc +++ b/analyses/pluginATLAS/ATLAS_2012_I1203852.cc @@ -1,373 +1,373 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" #include "Rivet/Projections/WFinder.hh" #include "Rivet/Projections/LeadingParticlesFinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/DressedLeptons.hh" #include "Rivet/Projections/MergedFinalState.hh" #include "Rivet/Projections/MissingMomentum.hh" #include "Rivet/Projections/InvMassFinalState.hh" namespace Rivet { /// Generic Z candidate struct Zstate : public ParticlePair { Zstate() { } Zstate(ParticlePair _particlepair) : ParticlePair(_particlepair) { } FourMomentum mom() const { return first.momentum() + second.momentum(); } operator FourMomentum() const { return mom(); } static bool cmppT(const Zstate& lx, const Zstate& rx) { return lx.mom().pT() < rx.mom().pT(); } }; /// @name ZZ analysis class ATLAS_2012_I1203852 : public Analysis { public: /// Default constructor ATLAS_2012_I1203852() : Analysis("ATLAS_2012_I1203852") { } void init() { // NB Missing ET is not required to be neutrinos FinalState fs(-5.0, 5.0, 0.0*GeV); // Final states to form Z bosons vids.push_back(make_pair(PID::ELECTRON, PID::POSITRON)); vids.push_back(make_pair(PID::MUON, PID::ANTIMUON)); IdentifiedFinalState 
Photon(fs); Photon.acceptIdPair(PID::PHOTON); IdentifiedFinalState bare_EL(fs); bare_EL.acceptIdPair(PID::ELECTRON); IdentifiedFinalState bare_MU(fs); bare_MU.acceptIdPair(PID::MUON); // Selection 1: ZZ-> llll selection Cut etaranges_lep = Cuts::abseta < 3.16 && Cuts::pT > 7*GeV; DressedLeptons electron_sel4l(Photon, bare_EL, 0.1, etaranges_lep); declare(electron_sel4l, "ELECTRON_sel4l"); DressedLeptons muon_sel4l(Photon, bare_MU, 0.1, etaranges_lep); declare(muon_sel4l, "MUON_sel4l"); // Selection 2: ZZ-> llnunu selection Cut etaranges_lep2 = Cuts::abseta < 2.5 && Cuts::pT > 10*GeV; DressedLeptons electron_sel2l2nu(Photon, bare_EL, 0.1, etaranges_lep2); declare(electron_sel2l2nu, "ELECTRON_sel2l2nu"); DressedLeptons muon_sel2l2nu(Photon, bare_MU, 0.1, etaranges_lep2); declare(muon_sel2l2nu, "MUON_sel2l2nu"); /// Get all neutrinos. These will not be used to form jets. IdentifiedFinalState neutrino_fs(Cuts::abseta < 4.5); neutrino_fs.acceptNeutrinos(); declare(neutrino_fs, "NEUTRINO_FS"); // Calculate missing ET from the visible final state, not by requiring neutrinos addProjection(MissingMomentum(Cuts::abseta < 4.5), "MISSING"); VetoedFinalState jetinput; jetinput.addVetoOnThisFinalState(bare_MU); jetinput.addVetoOnThisFinalState(neutrino_fs); FastJets jetpro(fs, FastJets::ANTIKT, 0.4); declare(jetpro, "jet"); // Both ZZ on-shell histos _h_ZZ_xsect = bookHisto1D(1, 1, 1); _h_ZZ_ZpT = bookHisto1D(3, 1, 1); _h_ZZ_phill = bookHisto1D(5, 1, 1); _h_ZZ_mZZ = bookHisto1D(7, 1, 1); // One Z off-shell (ZZstar) histos _h_ZZs_xsect = bookHisto1D(1, 1, 2); // ZZ -> llnunu histos _h_ZZnunu_xsect = bookHisto1D(1, 1, 3); _h_ZZnunu_ZpT = bookHisto1D(4, 1, 1); _h_ZZnunu_phill = bookHisto1D(6, 1, 1); _h_ZZnunu_mZZ = bookHisto1D(8, 1, 1); } /// Do the analysis void analyze(const Event& e) { const double weight = e.weight(); //////////////////////////////////////////////////////////////////// // preselection of leptons for ZZ-> llll final state 
////////////////////////////////////////////////////////////////////
      Particles leptons_sel4l;

      const vector<DressedLepton>& mu_sel4l = apply<DressedLeptons>(e, "MUON_sel4l").dressedLeptons();
      const vector<DressedLepton>& el_sel4l = apply<DressedLeptons>(e, "ELECTRON_sel4l").dressedLeptons();

      vector<DressedLepton> leptonsFS_sel4l;
      leptonsFS_sel4l.insert( leptonsFS_sel4l.end(), mu_sel4l.begin(), mu_sel4l.end() );
      leptonsFS_sel4l.insert( leptonsFS_sel4l.end(), el_sel4l.begin(), el_sel4l.end() );

      ////////////////////////////////////////////////////////////////////
      // OVERLAP removal dR(l,l)>0.2
      ////////////////////////////////////////////////////////////////////
      foreach ( const DressedLepton& l1, leptonsFS_sel4l) {
        bool isolated = true;
        foreach (DressedLepton& l2, leptonsFS_sel4l) {
          const double dR = deltaR(l1, l2);
          if (dR < 0.2 && l1 != l2) { isolated = false; break; }
        }
        if (isolated) leptons_sel4l.push_back(l1);
      }

      //////////////////////////////////////////////////////////////////
      // Exactly two opposite charged leptons
      //////////////////////////////////////////////////////////////////

      // calculate total 'flavour' charge
      double totalcharge = 0;
      foreach (Particle& l, leptons_sel4l) totalcharge += l.pid();

      // Analyze 4 lepton events
      if (leptons_sel4l.size() == 4 && totalcharge == 0 ) {

        Zstate Z1, Z2;
        // Identifies Z states from 4 lepton pairs
        identifyZstates(Z1, Z2, leptons_sel4l);

        ////////////////////////////////////////////////////////////////////////////
        // Z MASS WINDOW
        //  -ZZ: for both Z: 66-116 GeV
        ///////////////////////////////////////////////////////////////////////////
        Zstate leadPtZ = std::max(Z1, Z2, Zstate::cmppT);

        double mZ1   = Z1.mom().mass();
        double mZ2   = Z2.mom().mass();
        double ZpT   = leadPtZ.mom().pT();
        double phill = fabs(deltaPhi(leadPtZ.first, leadPtZ.second));
        if (phill > M_PI) phill = 2*M_PI - phill;
        double mZZ   = (Z1.mom() + Z2.mom()).mass();

        if (mZ1 > 20*GeV && mZ2 > 20*GeV) {
          // ZZ* selection
          if (inRange(mZ1, 66*GeV, 116*GeV) || inRange(mZ2, 66*GeV, 116*GeV)) {
            _h_ZZs_xsect -> fill(sqrtS()*GeV, weight);
          }
          // ZZ selection
          if
(inRange(mZ1, 66*GeV, 116*GeV) && inRange(mZ2, 66*GeV, 116*GeV)) { _h_ZZ_xsect -> fill(sqrtS()*GeV, weight); _h_ZZ_ZpT -> fill(ZpT , weight); _h_ZZ_phill -> fill(phill , weight); _h_ZZ_mZZ -> fill(mZZ , weight); } } } //////////////////////////////////////////////////////////////////// /// preselection of leptons for ZZ-> llnunu final state //////////////////////////////////////////////////////////////////// Particles leptons_sel2l2nu; // output const vector& mu_sel2l2nu = apply(e, "MUON_sel2l2nu").dressedLeptons(); const vector& el_sel2l2nu = apply(e, "ELECTRON_sel2l2nu").dressedLeptons(); vector leptonsFS_sel2l2nu; leptonsFS_sel2l2nu.insert( leptonsFS_sel2l2nu.end(), mu_sel2l2nu.begin(), mu_sel2l2nu.end() ); leptonsFS_sel2l2nu.insert( leptonsFS_sel2l2nu.end(), el_sel2l2nu.begin(), el_sel2l2nu.end() ); // Lepton preselection for ZZ-> llnunu if ((mu_sel2l2nu.empty() || el_sel2l2nu.empty()) // cannot have opposite flavour && (leptonsFS_sel2l2nu.size() == 2) // exactly two leptons && (leptonsFS_sel2l2nu[0].charge() * leptonsFS_sel2l2nu[1].charge() < 1 ) // opposite charge && (deltaR(leptonsFS_sel2l2nu[0], leptonsFS_sel2l2nu[1]) > 0.3) // overlap removal && (leptonsFS_sel2l2nu[0].pT() > 20*GeV && leptonsFS_sel2l2nu[1].pT() > 20*GeV)) { // trigger requirement leptons_sel2l2nu.insert(leptons_sel2l2nu.end(), leptonsFS_sel2l2nu.begin(), leptonsFS_sel2l2nu.end()); } if (leptons_sel2l2nu.empty()) vetoEvent; // no further analysis, fine to veto Particles leptons_sel2l2nu_jetveto; foreach (const DressedLepton& l, mu_sel2l2nu) leptons_sel2l2nu_jetveto.push_back(l.constituentLepton()); foreach (const DressedLepton& l, el_sel2l2nu) leptons_sel2l2nu_jetveto.push_back(l.constituentLepton()); double ptll = (leptons_sel2l2nu[0].momentum() + leptons_sel2l2nu[1].momentum()).pT(); // Find Z1-> ll FinalState fs2(-3.2, 3.2); InvMassFinalState imfs(fs2, vids, 20*GeV, sqrtS()); imfs.calc(leptons_sel2l2nu); if (imfs.particlePairs().size() != 1) vetoEvent; const ParticlePair& Z1constituents 
= imfs.particlePairs()[0]; FourMomentum Z1 = Z1constituents.first.momentum() + Z1constituents.second.momentum(); // Z to neutrinos candidate from missing ET const MissingMomentum & missmom = applyProjection(e, "MISSING"); const FourMomentum Z2 = missmom.missingMomentum(ZMASS); double met_Znunu = missmom.missingEt(); //Z2.pT(); // mTZZ const double mT2_1st_term = add_quad(ZMASS, ptll) + add_quad(ZMASS, met_Znunu); const double mT2_2nd_term = Z1.px() + Z2.px(); const double mT2_3rd_term = Z1.py() + Z2.py(); const double mTZZ = sqrt(sqr(mT2_1st_term) - sqr(mT2_2nd_term) - sqr(mT2_3rd_term)); if (!inRange(Z2.mass(), 66*GeV, 116*GeV)) vetoEvent; if (!inRange(Z1.mass(), 76*GeV, 106*GeV)) vetoEvent; ///////////////////////////////////////////////////////////// // AXIAL MET < 75 GeV //////////////////////////////////////////////////////////// double dPhiZ1Z2 = fabs(deltaPhi(Z1, Z2)); if (dPhiZ1Z2 > M_PI) dPhiZ1Z2 = 2*M_PI - dPhiZ1Z2; const double axialEtmiss = -Z2.pT()*cos(dPhiZ1Z2); if (axialEtmiss < 75*GeV) vetoEvent; const double ZpT = Z1.pT(); double phill = fabs(deltaPhi(Z1constituents.first, Z1constituents.second)); if (phill > M_PI) phill = 2*M_PI - phill; //////////////////////////////////////////////////////////////////////////// // JETS // -"j": found by "jetpro" projection && pT() > 25 GeV && |eta| < 4.5 // -"goodjets": "j" && dR(electron/muon,jet) > 0.3 // // JETVETO: veto all events with at least one good jet /////////////////////////////////////////////////////////////////////////// vector good_jets; foreach (const Jet& j, apply(e, "jet").jetsByPt(25)) { if (j.abseta() > 4.5) continue; bool isLepton = 0; foreach (const Particle& l, leptons_sel2l2nu_jetveto) { const double dR = deltaR(l.momentum(), j.momentum()); if (dR < 0.3) { isLepton = true; break; } } if (!isLepton) good_jets.push_back(j); } size_t n_sel_jets = good_jets.size(); if (n_sel_jets != 0) vetoEvent; ///////////////////////////////////////////////////////////// // Fractional MET and lepton pair 
difference: "RatioMet" < 0.4
      ////////////////////////////////////////////////////////////
      double ratioMet = fabs(Z2.pT() - Z1.pT()) / Z1.pT();
      if (ratioMet > 0.4 ) vetoEvent;

      // End of ZZllnunu selection: now fill histograms
      _h_ZZnunu_xsect->fill(sqrtS()/GeV, weight);
      _h_ZZnunu_ZpT  ->fill(ZpT, weight);
      _h_ZZnunu_phill->fill(phill, weight);
      _h_ZZnunu_mZZ  ->fill(mTZZ, weight);
    }


    /// Finalize
    void finalize() {
      const double norm = crossSection()/sumOfWeights()/femtobarn;
      scale(_h_ZZ_xsect, norm);
      normalize(_h_ZZ_ZpT);
      normalize(_h_ZZ_phill);
      normalize(_h_ZZ_mZZ);
      scale(_h_ZZs_xsect, norm);
      scale(_h_ZZnunu_xsect, norm);
      normalize(_h_ZZnunu_ZpT);
      normalize(_h_ZZnunu_phill);
      normalize(_h_ZZnunu_mZZ);
    }


  private:

    void identifyZstates(Zstate& Z1, Zstate& Z2, const Particles& leptons_sel4l);

    Histo1DPtr _h_ZZ_xsect, _h_ZZ_ZpT, _h_ZZ_phill, _h_ZZ_mZZ;
    Histo1DPtr _h_ZZs_xsect;
    Histo1DPtr _h_ZZnunu_xsect, _h_ZZnunu_ZpT, _h_ZZnunu_phill, _h_ZZnunu_mZZ;

    vector< pair<int, int> > vids;
    const double ZMASS = 91.1876; // GeV

  };


  /// 4l to ZZ assignment -- algorithm
  void ATLAS_2012_I1203852::identifyZstates(Zstate& Z1, Zstate& Z2, const Particles& leptons_sel4l) {

    /////////////////////////////////////////////////////////////////////////////
    /// ZZ->4l pairing
    /// - Exactly two same flavour opposite charged leptons
    /// - Ambiguities in pairing are resolved by choosing the combination
    ///   that results in the smaller value of the sum |mll - mZ| for the two pairs
    /////////////////////////////////////////////////////////////////////////////
    Particles part_pos_el, part_neg_el, part_pos_mu, part_neg_mu;
    foreach (const Particle& l , leptons_sel4l) {
      if (l.abspid() == PID::ELECTRON) {
        if (l.pid() < 0) part_neg_el.push_back(l);
        if (l.pid() > 0) part_pos_el.push_back(l);
      } else if (l.abspid() == PID::MUON) {
        if (l.pid() < 0) part_neg_mu.push_back(l);
        if (l.pid() > 0) part_pos_mu.push_back(l);
      }
    }

    // ee/mm channel
    if ( part_neg_el.size() == 2 || part_neg_mu.size() == 2) {
      Zstate Zcand_1, Zcand_2, Zcand_3,
Zcand_4; if (part_neg_el.size() == 2) { // ee Zcand_1 = Zstate( ParticlePair( part_neg_el[0], part_pos_el[0] ) ); Zcand_2 = Zstate( ParticlePair( part_neg_el[0], part_pos_el[1] ) ); Zcand_3 = Zstate( ParticlePair( part_neg_el[1], part_pos_el[0] ) ); Zcand_4 = Zstate( ParticlePair( part_neg_el[1], part_pos_el[1] ) ); } else { // mumu Zcand_1 = Zstate( ParticlePair( part_neg_mu[0], part_pos_mu[0] ) ); Zcand_2 = Zstate( ParticlePair( part_neg_mu[0], part_pos_mu[1] ) ); Zcand_3 = Zstate( ParticlePair( part_neg_mu[1], part_pos_mu[0] ) ); Zcand_4 = Zstate( ParticlePair( part_neg_mu[1], part_pos_mu[1] ) ); } // We can have the following pairs: (Z1 + Z4) || (Z2 + Z3) double minValue_1, minValue_2; minValue_1 = fabs( Zcand_1.mom().mass() - ZMASS ) + fabs( Zcand_4.mom().mass() - ZMASS); minValue_2 = fabs( Zcand_2.mom().mass() - ZMASS ) + fabs( Zcand_3.mom().mass() - ZMASS); if (minValue_1 < minValue_2 ) { Z1 = Zcand_1; Z2 = Zcand_4; } else { Z1 = Zcand_2; Z2 = Zcand_3; } // emu channel } else if (part_neg_mu.size() == 1 && part_neg_el.size() == 1) { Z1 = Zstate ( ParticlePair (part_neg_mu[0], part_pos_mu[0] ) ); Z2 = Zstate ( ParticlePair (part_neg_el[0], part_pos_el[0] ) ); } } // The hook for the plugin system DECLARE_RIVET_PLUGIN(ATLAS_2012_I1203852); } diff --git a/analyses/pluginATLAS/ATLAS_2012_I1204447.cc b/analyses/pluginATLAS/ATLAS_2012_I1204447.cc --- a/analyses/pluginATLAS/ATLAS_2012_I1204447.cc +++ b/analyses/pluginATLAS/ATLAS_2012_I1204447.cc @@ -1,1060 +1,1060 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/VisibleFinalState.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Projections/FastJets.hh" namespace Rivet { class ATLAS_2012_I1204447 : public Analysis { public: 
/// Constructor ATLAS_2012_I1204447() : Analysis("ATLAS_2012_I1204447") { } /// Book histograms and initialise projections before the run void init() { // To calculate the acceptance without having the fiducial lepton efficiencies included, this part can be turned off _use_fiducial_lepton_efficiency = true; // Random numbers for simulation of ATLAS detector reconstruction efficiency srand(160385); // Read in all signal regions _signal_regions = getSignalRegions(); // Set number of events per signal region to 0 for (size_t i = 0; i < _signal_regions.size(); i++) _eventCountsPerSR[_signal_regions[i]] = 0.0; // Final state including all charged and neutral particles const FinalState fs(-5.0, 5.0, 1*GeV); declare(fs, "FS"); // Final state including all charged particles declare(ChargedFinalState(Cuts::abseta < 2.5 && Cuts::pT > 1*GeV), "CFS"); // Final state including all visible particles (to calculate MET, Jets etc.) declare(VisibleFinalState(Cuts::abseta < 5.0), "VFS"); // Final state including all AntiKt 04 Jets VetoedFinalState vfs; vfs.addVetoPairId(PID::MUON); declare(FastJets(vfs, FastJets::ANTIKT, 0.4), "AntiKtJets04"); // Final state including all unstable particles (including taus) - declare(UnstableFinalState(Cuts::abseta < 5.0 && Cuts::pT > 5*GeV), "UFS"); + declare(UnstableParticles(Cuts::abseta < 5.0 && Cuts::pT > 5*GeV), "UFS"); // Final state including all electrons IdentifiedFinalState elecs(Cuts::abseta < 2.47 && Cuts::pT > 10*GeV); elecs.acceptIdPair(PID::ELECTRON); declare(elecs, "elecs"); // Final state including all muons IdentifiedFinalState muons(Cuts::abseta < 2.5 && Cuts::pT > 10*GeV); muons.acceptIdPair(PID::MUON); declare(muons, "muons"); // Book histograms _h_HTlep_all = bookHisto1D("HTlep_all" , 30, 0, 1500); _h_HTjets_all = bookHisto1D("HTjets_all", 30, 0, 1500); _h_MET_all = bookHisto1D("MET_all" , 20, 0, 1000); _h_Meff_all = bookHisto1D("Meff_all" , 30, 0, 3000); _h_e_n = bookHisto1D("e_n" , 10, -0.5, 9.5); _h_mu_n = bookHisto1D("mu_n" 
, 10, -0.5, 9.5); _h_tau_n = bookHisto1D("tau_n", 10, -0.5, 9.5); _h_pt_1_3l = bookHisto1D("pt_1_3l", 100, 0, 2000); _h_pt_2_3l = bookHisto1D("pt_2_3l", 100, 0, 2000); _h_pt_3_3l = bookHisto1D("pt_3_3l", 100, 0, 2000); _h_pt_1_2ltau = bookHisto1D("pt_1_2ltau", 100, 0, 2000); _h_pt_2_2ltau = bookHisto1D("pt_2_2ltau", 100, 0, 2000); _h_pt_3_2ltau = bookHisto1D("pt_3_2ltau", 100, 0, 2000); _h_excluded = bookHisto1D("excluded", 2, -0.5, 1.5); } /// Perform the per-event analysis void analyze(const Event& event) { // Muons Particles muon_candidates; const Particles charged_tracks = apply<ChargedFinalState>(event, "CFS").particles(); const Particles visible_particles = apply<VisibleFinalState>(event, "VFS").particles(); foreach (const Particle& mu, apply<IdentifiedFinalState>(event, "muons").particlesByPt()) { // Calculate pTCone30 variable (pT of all tracks within dR<0.3 - pT of muon itself) double pTinCone = -mu.pT(); foreach (const Particle& track, charged_tracks) { if (deltaR(mu.momentum(), track.momentum()) < 0.3) pTinCone += track.pT(); } // Calculate eTCone30 variable (pT of all visible particles within 0.1 < dR < 0.3) double eTinCone = 0.; foreach (const Particle& visible_particle, visible_particles) { if (visible_particle.abspid() != PID::MUON && inRange(deltaR(mu.momentum(), visible_particle.momentum()), 0.1, 0.3)) eTinCone += visible_particle.pT(); } // Apply reconstruction efficiency and simulate reco int muon_id = 13; if ( mu.hasAncestor(15) || mu.hasAncestor(-15)) muon_id = 14; const double eff = (_use_fiducial_lepton_efficiency) ? 
apply_reco_eff(muon_id, mu) : 1.0; const bool keep_muon = rand()/static_cast<double>(RAND_MAX) <= eff; // Keep muon if pTCone30/pT < 0.15 and eTCone30/pT < 0.2 and reconstructed if (keep_muon && pTinCone/mu.pT() <= 0.15 && eTinCone/mu.pT() < 0.2) muon_candidates.push_back(mu); } // Electrons Particles electron_candidates; foreach (const Particle& e, apply<IdentifiedFinalState>(event, "elecs").particlesByPt()) { // Neglect electrons in crack regions if (inRange(e.abseta(), 1.37, 1.52)) continue; // Calculate pTCone30 variable (pT of all tracks within dR<0.3 - pT of electron itself) double pTinCone = -e.pT(); foreach (const Particle& track, charged_tracks) { if (deltaR(e.momentum(), track.momentum()) < 0.3) pTinCone += track.pT(); } // Calculate eTCone30 variable (pT of all visible particles (except muons) within 0.1 < dR < 0.3) double eTinCone = 0.; foreach (const Particle& visible_particle, visible_particles) { if (visible_particle.abspid() != PID::MUON && inRange(deltaR(e.momentum(), visible_particle.momentum()), 0.1, 0.3)) eTinCone += visible_particle.pT(); } // Apply reconstruction efficiency and simulate reco int elec_id = 11; if (e.hasAncestor(15) || e.hasAncestor(-15)) elec_id = 12; const double eff = (_use_fiducial_lepton_efficiency) ? apply_reco_eff(elec_id, e) : 1.0; const bool keep_elec = rand()/static_cast<double>(RAND_MAX) <= eff; // Keep electron if pTCone30/pT < 0.13 and eTCone30/pT < 0.2 and reconstructed if (keep_elec && pTinCone/e.pT() <= 0.13 && eTinCone/e.pT() < 0.2) electron_candidates.push_back(e); } // Taus /// @todo This could benefit from a tau finder projection Particles tau_candidates; - foreach (const Particle& tau, apply<UnstableFinalState>(event, "UFS").particlesByPt()) { + foreach (const Particle& tau, apply<UnstableParticles>(event, "UFS").particlesByPt()) { // Only pick taus out of all unstable particles if (tau.abspid() != PID::TAU) continue; // Check that tau has decayed into daughter particles /// @todo Huh? Unstable taus with no decay vtx? Can use Particle.isStable()? But why in this situation? 
if (tau.genParticle()->end_vertex() == 0) continue; // Calculate visible tau pT from pT of tau neutrino in tau decay for pT and |eta| cuts FourMomentum daughter_tau_neutrino_momentum = get_tau_neutrino_mom(tau); Particle tau_vis = tau; tau_vis.setMomentum(tau.momentum()-daughter_tau_neutrino_momentum); // keep only taus in certain eta region and above 15 GeV of visible tau pT if ( tau_vis.pT() <= 15.0*GeV || tau_vis.abseta() > 2.5) continue; // Get prong number (number of tracks) in tau decay and check if tau decays leptonically unsigned int nprong = 0; bool lep_decaying_tau = false; get_prong_number(tau.genParticle(), nprong, lep_decaying_tau); // Apply reconstruction efficiency int tau_id = 15; if (nprong == 1) tau_id = 15; else if (nprong == 3) tau_id = 16; // Get fiducial lepton efficiency to simulate reco efficiency const double eff = (_use_fiducial_lepton_efficiency) ? apply_reco_eff(tau_id, tau_vis) : 1.0; const bool keep_tau = rand()/static_cast<double>(RAND_MAX) <= eff; // Keep tau if nprong == 1, it decays hadronically, and it's reconstructed by the detector if ( !lep_decaying_tau && nprong == 1 && keep_tau) tau_candidates.push_back(tau_vis); } // Jets (all anti-kt R=0.4 jets with pT > 25 GeV and |eta| < 4.9) Jets jet_candidates; foreach (const Jet& jet, apply<FastJets>(event, "AntiKtJets04").jetsByPt(25*GeV)) { if (jet.abseta() < 4.9) jet_candidates.push_back(jet); } // ETmiss Particles vfs_particles = apply<VisibleFinalState>(event, "VFS").particles(); FourMomentum pTmiss; foreach (const Particle& p, vfs_particles) pTmiss -= p.momentum(); double eTmiss = pTmiss.pT()/GeV; //------------------ // Overlap removal // electron - electron Particles electron_candidates_2; for (size_t ie = 0; ie < electron_candidates.size(); ++ie) { const Particle& e = electron_candidates[ie]; bool away = true; // If electron pair within dR < 0.1: remove electron with lower pT for (size_t ie2=0; ie2 < electron_candidates_2.size(); ++ie2) { if ( deltaR( e.momentum(), electron_candidates_2[ie2].momentum()) < 0.1 ) { away 
= false; break; } } // If isolated keep it if ( away ) electron_candidates_2.push_back( e ); } // jet - electron Jets recon_jets; foreach (const Jet& jet, jet_candidates) { bool away = true; // if jet within dR < 0.2 of electron: remove jet foreach (const Particle& e, electron_candidates_2) { if (deltaR(e.momentum(), jet.momentum()) < 0.2) { away = false; break; } } // jet - tau if (away) { // If jet within dR < 0.2 of tau: remove jet foreach (const Particle& tau, tau_candidates) { if (deltaR(tau.momentum(), jet.momentum()) < 0.2) { away = false; break; } } } // If isolated keep it if ( away ) recon_jets.push_back( jet ); } // electron - jet Particles recon_leptons, recon_e; for (size_t ie = 0; ie < electron_candidates_2.size(); ++ie) { const Particle& e = electron_candidates_2[ie]; // If electron within 0.2 < dR < 0.4 from any jets: remove electron bool away = true; foreach (const Jet& jet, recon_jets) { if (deltaR(e.momentum(), jet.momentum()) < 0.4) { away = false; break; } } // electron - muon // if electron within dR < 0.1 of a muon: remove electron if (away) { foreach (const Particle& mu, muon_candidates) { if (deltaR(mu.momentum(), e.momentum()) < 0.1) { away = false; break; } } } // If isolated keep it if (away) { recon_e += e; recon_leptons += e; } } // tau - electron Particles recon_tau; foreach ( const Particle& tau, tau_candidates ) { bool away = true; // If tau within dR < 0.2 of an electron: remove tau foreach ( const Particle& e, recon_e ) { if (deltaR( tau.momentum(), e.momentum()) < 0.2) { away = false; break; } } // tau - muon // If tau within dR < 0.2 of a muon: remove tau if (away) { foreach (const Particle& mu, muon_candidates) { if (deltaR(tau.momentum(), mu.momentum()) < 0.2) { away = false; break; } } } // If isolated keep it if (away) recon_tau.push_back( tau ); } // Muon - jet isolation Particles recon_mu, trigger_mu; // If muon within dR < 0.4 of a jet, remove muon foreach (const Particle& mu, muon_candidates) { bool away = true; foreach 
(const Jet& jet, recon_jets) { if ( deltaR( mu.momentum(), jet.momentum()) < 0.4 ) { away = false; break; } } if (away) { recon_mu.push_back( mu ); recon_leptons.push_back( mu ); if (mu.abseta() < 2.4) trigger_mu.push_back( mu ); } } // End overlap removal //------------------ // Jet cleaning if (rand()/static_cast<double>(RAND_MAX) <= 0.42) { foreach (const Jet& jet, recon_jets) { const double eta = jet.rapidity(); const double phi = jet.azimuthalAngle(MINUSPI_PLUSPI); if (jet.pT() > 25*GeV && inRange(eta, -0.1, 1.5) && inRange(phi, -0.9, -0.5)) vetoEvent; } } // Post-isolation event cuts // Require at least 3 charged tracks in event if (charged_tracks.size() < 3) vetoEvent; // And at least one e/mu passing trigger if (!( !recon_e .empty() && recon_e[0] .pT() > 25*GeV) && !( !trigger_mu.empty() && trigger_mu[0].pT() > 25*GeV) ) { MSG_DEBUG("Hardest lepton fails trigger"); vetoEvent; } // And only accept events with at least 2 light leptons (e/mu) and at least 3 leptons in total if (recon_mu.size() + recon_e.size() + recon_tau.size() < 3 || recon_leptons.size() < 2) vetoEvent; // Now it's worth getting the event weight const double weight = event.weight(); // Sort leptons by decreasing pT sortByPt(recon_leptons); sortByPt(recon_tau); // Calculate HTlep, fill lepton pT histograms & store chosen combination of 3 leptons double HTlep = 0.; Particles chosen_leptons; if ( recon_leptons.size() > 2 ) { _h_pt_1_3l->fill(recon_leptons[0].perp()/GeV, weight); _h_pt_2_3l->fill(recon_leptons[1].perp()/GeV, weight); _h_pt_3_3l->fill(recon_leptons[2].perp()/GeV, weight); HTlep = (recon_leptons[0].pT() + recon_leptons[1].pT() + recon_leptons[2].pT())/GeV; chosen_leptons.push_back( recon_leptons[0] ); chosen_leptons.push_back( recon_leptons[1] ); chosen_leptons.push_back( recon_leptons[2] ); } else { _h_pt_1_2ltau->fill(recon_leptons[0].perp()/GeV, weight); _h_pt_2_2ltau->fill(recon_leptons[1].perp()/GeV, weight); _h_pt_3_2ltau->fill(recon_tau[0].perp()/GeV, weight); HTlep = 
(recon_leptons[0].pT() + recon_leptons[1].pT() + recon_tau[0].pT())/GeV ; chosen_leptons.push_back( recon_leptons[0] ); chosen_leptons.push_back( recon_leptons[1] ); chosen_leptons.push_back( recon_tau[0] ); } // Number of prompt e/mu and had taus _h_e_n ->fill(recon_e.size() , weight); _h_mu_n ->fill(recon_mu.size() , weight); _h_tau_n->fill(recon_tau.size(), weight); // Calculate HTjets double HTjets = 0.; foreach ( const Jet & jet, recon_jets ) HTjets += jet.perp()/GeV; // Calculate meff double meff = eTmiss + HTjets; Particles all_leptons; foreach ( const Particle & e , recon_e ) { meff += e.perp()/GeV; all_leptons.push_back( e ); } foreach ( const Particle & mu, recon_mu ) { meff += mu.perp()/GeV; all_leptons.push_back( mu ); } foreach ( const Particle & tau, recon_tau ) { meff += tau.perp()/GeV; all_leptons.push_back( tau ); } // Fill histogram of kinematic variables _h_HTlep_all ->fill(HTlep , weight); _h_HTjets_all->fill(HTjets, weight); _h_MET_all ->fill(eTmiss, weight); _h_Meff_all ->fill(meff , weight); // Determine signal region (3l/2ltau, onZ/offZ) string basic_signal_region; if ( recon_mu.size() + recon_e.size() > 2 ) basic_signal_region += "3l_"; else if ( (recon_mu.size() + recon_e.size() == 2) && (recon_tau.size() > 0)) basic_signal_region += "2ltau_"; // Is there an OSSF pair or a three lepton combination with an invariant mass close to the Z mass int onZ = isonZ(chosen_leptons); if (onZ == 1) basic_signal_region += "onZ"; else if (onZ == 0) basic_signal_region += "offZ"; // Check in which signal regions this event falls and adjust event counters fillEventCountsPerSR(basic_signal_region, onZ, HTlep, eTmiss, HTjets, meff, weight); } /// Normalise histograms etc., after the run void finalize() { // Normalize to an integrated luminosity of 1 fb-1 double norm = crossSection()/femtobarn/sumOfWeights(); string best_signal_region = ""; double ratio_best_SR = 0.; // Loop over all signal regions and find signal region with best sensitivity (ratio signal 
events/visible cross-section) for (size_t i = 0; i < _signal_regions.size(); i++) { double signal_events = _eventCountsPerSR[_signal_regions[i]] * norm; // Use expected upper limits to find best signal region double UL95 = getUpperLimit(_signal_regions[i], false); double ratio = signal_events / UL95; if (ratio > ratio_best_SR) { best_signal_region = _signal_regions[i]; ratio_best_SR = ratio; } } double signal_events_best_SR = _eventCountsPerSR[best_signal_region] * norm; double exp_UL_best_SR = getUpperLimit(best_signal_region, false); double obs_UL_best_SR = getUpperLimit(best_signal_region, true); // Print out result cout << "----------------------------------------------------------------------------------------" << endl; cout << "Best signal region: " << best_signal_region << endl; cout << "Normalized number of signal events in this best signal region (per fb-1): " << signal_events_best_SR << endl; cout << "Efficiency*Acceptance: " << _eventCountsPerSR[best_signal_region]/sumOfWeights() << endl; cout << "Cross-section [fb]: " << crossSection()/femtobarn << endl; cout << "Expected visible cross-section (per fb-1): " << exp_UL_best_SR << endl; cout << "Ratio (signal events / expected visible cross-section): " << ratio_best_SR << endl; cout << "Observed visible cross-section (per fb-1): " << obs_UL_best_SR << endl; cout << "Ratio (signal events / observed visible cross-section): " << signal_events_best_SR/obs_UL_best_SR << endl; cout << "----------------------------------------------------------------------------------------" << endl; cout << "Using the EXPECTED limits (visible cross-section) of the analysis: " << endl; if (signal_events_best_SR > exp_UL_best_SR) { cout << "Since the number of signal events > the visible cross-section, this model/grid point is EXCLUDED with 95% CL." << endl; _h_excluded->fill(1); } else { cout << "Since the number of signal events < the visible cross-section, this model/grid point is NOT EXCLUDED." 
<< endl; _h_excluded->fill(0); } cout << "----------------------------------------------------------------------------------------" << endl; cout << "Using the OBSERVED limits (visible cross-section) of the analysis: " << endl; if (signal_events_best_SR > obs_UL_best_SR) { cout << "Since the number of signal events > the visible cross-section, this model/grid point is EXCLUDED with 95% CL." << endl; _h_excluded->fill(1); } else { cout << "Since the number of signal events < the visible cross-section, this model/grid point is NOT EXCLUDED." << endl; _h_excluded->fill(0); } cout << "----------------------------------------------------------------------------------------" << endl; // Normalize to cross section if (norm != 0) { scale(_h_HTlep_all, norm); scale(_h_HTjets_all, norm); scale(_h_MET_all, norm); scale(_h_Meff_all, norm); scale(_h_pt_1_3l, norm); scale(_h_pt_2_3l, norm); scale(_h_pt_3_3l, norm); scale(_h_pt_1_2ltau, norm); scale(_h_pt_2_2ltau, norm); scale(_h_pt_3_2ltau, norm); scale(_h_e_n, norm); scale(_h_mu_n, norm); scale(_h_tau_n, norm); scale(_h_excluded, signal_events_best_SR); } } /// Helper functions //@{ /// Function giving a list of all signal regions vector<string> getSignalRegions() { // List of basic signal regions vector<string> basic_signal_regions; basic_signal_regions.push_back("3l_offZ"); basic_signal_regions.push_back("3l_onZ"); basic_signal_regions.push_back("2ltau_offZ"); basic_signal_regions.push_back("2ltau_onZ"); // List of kinematic variables vector<string> kinematic_variables; kinematic_variables.push_back("HTlep"); kinematic_variables.push_back("METStrong"); kinematic_variables.push_back("METWeak"); kinematic_variables.push_back("Meff"); kinematic_variables.push_back("MeffStrong"); vector<string> signal_regions; // Loop over all kinematic variables and basic signal regions for (size_t i0 = 0; i0 < kinematic_variables.size(); i0++) { for (size_t i1 = 0; i1 < basic_signal_regions.size(); i1++) { // Is signal region onZ? 
int onZ = (basic_signal_regions[i1].find("onZ") != string::npos) ? 1 : 0; // Get cut values for this kinematic variable vector<int> cut_values = getCutsPerSignalRegion(kinematic_variables[i0], onZ); // Loop over all cut values for (size_t i2 = 0; i2 < cut_values.size(); i2++) { // push signal region into vector signal_regions.push_back( (kinematic_variables[i0] + "_" + basic_signal_regions[i1] + "_cut_" + toString(cut_values[i2])) ); } } } return signal_regions; } /// Function giving all cut values per kinematic variable (taking onZ for MET into account) vector<int> getCutsPerSignalRegion(const string& signal_region, int onZ=0) { vector<int> cutValues; // Cut values for HTlep if (signal_region.compare("HTlep") == 0) { cutValues.push_back(0); cutValues.push_back(100); cutValues.push_back(150); cutValues.push_back(200); cutValues.push_back(300); } // Cut values for METStrong (HTjets > 100 GeV) and METWeak (HTjets < 100 GeV) else if (signal_region.compare("METStrong") == 0 || signal_region.compare("METWeak") == 0) { if (onZ == 0) cutValues.push_back(0); else if (onZ == 1) cutValues.push_back(20); cutValues.push_back(50); cutValues.push_back(75); } // Cut values for Meff and MeffStrong (MET > 75 GeV) if (signal_region.compare("Meff") == 0 || signal_region.compare("MeffStrong") == 0) { cutValues.push_back(0); cutValues.push_back(150); cutValues.push_back(300); cutValues.push_back(500); } return cutValues; } /// Function fills map EventCountsPerSR by looping over all signal regions /// and looking if the event falls into this signal region void fillEventCountsPerSR(const string& basic_signal_region, int onZ, double HTlep, double eTmiss, double HTjets, double meff, double weight) { // Get cut values for HTlep, loop over them and add event if cut is passed vector<int> cut_values = getCutsPerSignalRegion("HTlep", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (HTlep > cut_values[i]) _eventCountsPerSR[("HTlep_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } // Get cut 
values for METStrong, loop over them and add event if cut is passed cut_values = getCutsPerSignalRegion("METStrong", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (eTmiss > cut_values[i] && HTjets > 100.) _eventCountsPerSR[("METStrong_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } // Get cut values for METWeak, loop over them and add event if cut is passed cut_values = getCutsPerSignalRegion("METWeak", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (eTmiss > cut_values[i] && HTjets <= 100.) _eventCountsPerSR[("METWeak_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } // Get cut values for Meff, loop over them and add event if cut is passed cut_values = getCutsPerSignalRegion("Meff", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (meff > cut_values[i]) _eventCountsPerSR[("Meff_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } // Get cut values for MeffStrong, loop over them and add event if cut is passed cut_values = getCutsPerSignalRegion("MeffStrong", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (meff > cut_values[i] && eTmiss > 75.) 
_eventCountsPerSR[("MeffStrong_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } } /// Function returning 4-vector of daughter-particle if it is a tau neutrino /// @todo Move to TauFinder and make less HepMC-ish FourMomentum get_tau_neutrino_mom(const Particle& p) { assert(p.abspid() == PID::TAU); const GenVertex* dv = p.genParticle()->end_vertex(); assert(dv != NULL); for (GenVertex::particles_out_const_iterator pp = dv->particles_out_const_begin(); pp != dv->particles_out_const_end(); ++pp) { if (abs((*pp)->pdg_id()) == PID::NU_TAU) return FourMomentum((*pp)->momentum()); } return FourMomentum(); } /// Function calculating the prong number of taus /// @todo Move to TauFinder and make less HepMC-ish void get_prong_number(const GenParticle* p, unsigned int& nprong, bool& lep_decaying_tau) { assert(p != NULL); //const int tau_barcode = p->barcode(); const GenVertex* dv = p->end_vertex(); assert(dv != NULL); for (GenVertex::particles_out_const_iterator pp = dv->particles_out_const_begin(); pp != dv->particles_out_const_end(); ++pp) { // If they have status 1 and are charged they will produce a track and the prong number is +1 if ((*pp)->status() == 1 ) { const int id = (*pp)->pdg_id(); if (Rivet::PID::charge(id) != 0 ) ++nprong; // Check if tau decays leptonically // @todo Can a tau decay include a tau in its decay daughters?! if ((abs(id) == PID::ELECTRON || abs(id) == PID::MUON || abs(id) == PID::TAU) && abs(p->pdg_id()) == PID::TAU) lep_decaying_tau = true; } // If the status of the daughter particle is 2 it is unstable and the further decays are checked else if ((*pp)->status() == 2 ) { get_prong_number(*pp, nprong, lep_decaying_tau); } } } /// Function giving fiducial lepton efficiency double apply_reco_eff(int flavor, const Particle& p) { float pt = p.pT()/GeV; float eta = p.eta(); double eff = 0.; //double err = 0.; if (flavor == 11) { // weight prompt electron -- now including data/MC ID SF in eff. 
//float rho = 0.820; float p0 = 7.34; float p1 = 0.8977; //float ep0= 0.5 ; float ep1= 0.0087; eff = p1 - p0/pt; //double err0 = ep0/pt; // d(eff)/dp0 //double err1 = ep1; // d(eff)/dp1 //err = sqrt(err0*err0 + err1*err1 - 2*rho*err0*err1); double avgrate = 0.6867; float wz_ele_eta[] = {0.588717,0.603674,0.666135,0.747493,0.762202,0.675051,0.751606,0.745569,0.665333,0.610432,0.592693,}; //float ewz_ele_eta[] ={0.00292902,0.002476,0.00241209,0.00182319,0.00194339,0.00299785,0.00197339,0.00182004,0.00241793,0.00245997,0.00290394,}; int ibin = 3; if (eta >= -2.5 && eta < -2.0) ibin = 0; if (eta >= -2.0 && eta < -1.5) ibin = 1; if (eta >= -1.5 && eta < -1.0) ibin = 2; if (eta >= -1.0 && eta < -0.5) ibin = 3; if (eta >= -0.5 && eta < -0.1) ibin = 4; if (eta >= -0.1 && eta < 0.1) ibin = 5; if (eta >= 0.1 && eta < 0.5) ibin = 6; if (eta >= 0.5 && eta < 1.0) ibin = 7; if (eta >= 1.0 && eta < 1.5) ibin = 8; if (eta >= 1.5 && eta < 2.0) ibin = 9; if (eta >= 2.0 && eta < 2.5) ibin = 10; double eff_eta = wz_ele_eta[ibin]; //double err_eta = ewz_ele_eta[ibin]; eff = (eff*eff_eta)/avgrate; } if (flavor == 12) { // weight electron from tau //float rho = 0.884; float p0 = 6.799; float p1 = 0.842; //float ep0= 0.664; float ep1= 0.016; eff = p1 - p0/pt; //double err0 = ep0/pt; // d(eff)/dp0 //double err1 = ep1; // d(eff)/dp1 //err = sqrt(err0*err0 + err1*err1 - 2*rho*err0*err1); double avgrate = 0.5319; float wz_elet_eta[] = {0.468945,0.465953,0.489545,0.58709,0.59669,0.515829,0.59284,0.575828,0.498181,0.463536,0.481738,}; //float ewz_elet_eta[] ={0.00933795,0.00780868,0.00792679,0.00642083,0.00692652,0.0101568,0.00698452,0.00643524,0.0080002,0.00776238,0.0094699,}; int ibin = 3; if (eta >= -2.5 && eta < -2.0) ibin = 0; if (eta >= -2.0 && eta < -1.5) ibin = 1; if (eta >= -1.5 && eta < -1.0) ibin = 2; if (eta >= -1.0 && eta < -0.5) ibin = 3; if (eta >= -0.5 && eta < -0.1) ibin = 4; if (eta >= -0.1 && eta < 0.1) ibin = 5; if (eta >= 0.1 && eta < 0.5) ibin = 6; if (eta >= 0.5 && eta < 
1.0) ibin = 7; if (eta >= 1.0 && eta < 1.5) ibin = 8; if (eta >= 1.5 && eta < 2.0) ibin = 9; if (eta >= 2.0 && eta < 2.5) ibin = 10; double eff_eta = wz_elet_eta[ibin]; //double err_eta = ewz_elet_eta[ibin]; eff = (eff*eff_eta)/avgrate; } if (flavor == 13) {// weight prompt muon //if eta>0.1 float p0 = -18.21; float p1 = 14.83; float p2 = 0.9312; //float ep0= 5.06; float ep1= 1.9; float ep2=0.00069; if ( fabs(eta) < 0.1) { p0 = 7.459; p1 = 2.615; p2 = 0.5138; //ep0 = 10.4; ep1 = 4.934; ep2 = 0.0034; } double arg = ( pt-p0 )/( 2.*p1 ) ; eff = 0.5 * p2 * (1.+erf(arg)); //err = 0.1*eff; } if (flavor == 14) {// weight muon from tau if (fabs(eta) < 0.1) { float p0 = -1.756; float p1 = 12.38; float p2 = 0.4441; //float ep0= 10.39; float ep1= 7.9; float ep2=0.022; double arg = ( pt-p0 )/( 2.*p1 ) ; eff = 0.5 * p2 * (1.+erf(arg)); //err = 0.1*eff; } else { float p0 = 2.102; float p1 = 0.8293; //float ep0= 0.271; float ep1= 0.0083; eff = p1 - p0/pt; //double err0 = ep0/pt; // d(eff)/dp0 //double err1 = ep1; // d(eff)/dp1 //err = sqrt(err0*err0 + err1*err1 - 2*rho*err0*err1); } } if (flavor == 15) {// weight hadronic tau 1p float wz_tau1p[] = {0.0249278,0.146978,0.225049,0.229212,0.21519,0.206152,0.201559,0.197917,0.209249,0.228336,0.193548,}; //float ewz_tau1p[] ={0.00178577,0.00425252,0.00535052,0.00592126,0.00484684,0.00612941,0.00792099,0.0083006,0.0138307,0.015568,0.0501751,}; int ibin = 0; if (pt > 15) ibin = 1; if (pt > 20) ibin = 2; if (pt > 25) ibin = 3; if (pt > 30) ibin = 4; if (pt > 40) ibin = 5; if (pt > 50) ibin = 6; if (pt > 60) ibin = 7; if (pt > 80) ibin = 8; if (pt > 100) ibin = 9; if (pt > 200) ibin = 10; eff = wz_tau1p[ibin]; //err = ewz_tau1p[ibin]; double avgrate = 0.1718; float wz_tau1p_eta[] = {0.162132,0.176393,0.139619,0.178813,0.185144,0.210027,0.203937,0.178688,0.137034,0.164216,0.163713,}; //float ewz_tau1p_eta[] ={0.00706705,0.00617989,0.00506798,0.00525172,0.00581865,0.00865675,0.00599245,0.00529877,0.00506368,0.00617025,0.00726219,}; ibin = 3; 
if (eta >= -2.5 && eta < -2.0) ibin = 0; if (eta >= -2.0 && eta < -1.5) ibin = 1; if (eta >= -1.5 && eta < -1.0) ibin = 2; if (eta >= -1.0 && eta < -0.5) ibin = 3; if (eta >= -0.5 && eta < -0.1) ibin = 4; if (eta >= -0.1 && eta < 0.1) ibin = 5; if (eta >= 0.1 && eta < 0.5) ibin = 6; if (eta >= 0.5 && eta < 1.0) ibin = 7; if (eta >= 1.0 && eta < 1.5) ibin = 8; if (eta >= 1.5 && eta < 2.0) ibin = 9; if (eta >= 2.0 && eta < 2.5) ibin = 10; double eff_eta = wz_tau1p_eta[ibin]; //double err_eta = ewz_tau1p_eta[ibin]; eff = (eff*eff_eta)/avgrate; } if (flavor == 16) { //weight hadronic tau 3p float wz_tau3p[] = {0.000587199,0.00247181,0.0013031,0.00280112,}; //float ewz_tau3p[] ={0.000415091,0.000617187,0.000582385,0.00197792,}; int ibin = 0; if (pt > 15) ibin = 1; if (pt > 20) ibin = 2; if (pt > 40) ibin = 3; if (pt > 80) ibin = 4; eff = wz_tau3p[ibin]; //err = ewz_tau3p[ibin]; } return eff; } /// Function giving observed or expected upper limit (visible cross-section) double getUpperLimit(const string& signal_region, bool observed) { map<string, double> upperLimitsObserved; upperLimitsObserved["HTlep_3l_offZ_cut_0"] = 11.; upperLimitsObserved["HTlep_3l_offZ_cut_100"] = 8.7; upperLimitsObserved["HTlep_3l_offZ_cut_150"] = 4.0; upperLimitsObserved["HTlep_3l_offZ_cut_200"] = 4.4; upperLimitsObserved["HTlep_3l_offZ_cut_300"] = 1.6; upperLimitsObserved["HTlep_2ltau_offZ_cut_0"] = 25.; upperLimitsObserved["HTlep_2ltau_offZ_cut_100"] = 14.; upperLimitsObserved["HTlep_2ltau_offZ_cut_150"] = 6.1; upperLimitsObserved["HTlep_2ltau_offZ_cut_200"] = 3.3; upperLimitsObserved["HTlep_2ltau_offZ_cut_300"] = 1.2; upperLimitsObserved["HTlep_3l_onZ_cut_0"] = 48.; upperLimitsObserved["HTlep_3l_onZ_cut_100"] = 38.; upperLimitsObserved["HTlep_3l_onZ_cut_150"] = 14.; upperLimitsObserved["HTlep_3l_onZ_cut_200"] = 7.2; upperLimitsObserved["HTlep_3l_onZ_cut_300"] = 4.5; upperLimitsObserved["HTlep_2ltau_onZ_cut_0"] = 85.; upperLimitsObserved["HTlep_2ltau_onZ_cut_100"] = 53.; upperLimitsObserved["HTlep_2ltau_onZ_cut_150"] = 
11.0; upperLimitsObserved["HTlep_2ltau_onZ_cut_200"] = 5.2; upperLimitsObserved["HTlep_2ltau_onZ_cut_300"] = 3.0; upperLimitsObserved["METStrong_3l_offZ_cut_0"] = 2.6; upperLimitsObserved["METStrong_3l_offZ_cut_50"] = 2.1; upperLimitsObserved["METStrong_3l_offZ_cut_75"] = 2.1; upperLimitsObserved["METStrong_2ltau_offZ_cut_0"] = 4.2; upperLimitsObserved["METStrong_2ltau_offZ_cut_50"] = 3.1; upperLimitsObserved["METStrong_2ltau_offZ_cut_75"] = 2.6; upperLimitsObserved["METStrong_3l_onZ_cut_20"] = 11.0; upperLimitsObserved["METStrong_3l_onZ_cut_50"] = 6.4; upperLimitsObserved["METStrong_3l_onZ_cut_75"] = 5.1; upperLimitsObserved["METStrong_2ltau_onZ_cut_20"] = 5.9; upperLimitsObserved["METStrong_2ltau_onZ_cut_50"] = 3.4; upperLimitsObserved["METStrong_2ltau_onZ_cut_75"] = 1.2; upperLimitsObserved["METWeak_3l_offZ_cut_0"] = 11.; upperLimitsObserved["METWeak_3l_offZ_cut_50"] = 5.3; upperLimitsObserved["METWeak_3l_offZ_cut_75"] = 3.1; upperLimitsObserved["METWeak_2ltau_offZ_cut_0"] = 23.; upperLimitsObserved["METWeak_2ltau_offZ_cut_50"] = 4.3; upperLimitsObserved["METWeak_2ltau_offZ_cut_75"] = 3.1; upperLimitsObserved["METWeak_3l_onZ_cut_20"] = 41.; upperLimitsObserved["METWeak_3l_onZ_cut_50"] = 16.; upperLimitsObserved["METWeak_3l_onZ_cut_75"] = 8.0; upperLimitsObserved["METWeak_2ltau_onZ_cut_20"] = 80.; upperLimitsObserved["METWeak_2ltau_onZ_cut_50"] = 4.4; upperLimitsObserved["METWeak_2ltau_onZ_cut_75"] = 1.8; upperLimitsObserved["Meff_3l_offZ_cut_0"] = 11.; upperLimitsObserved["Meff_3l_offZ_cut_150"] = 8.1; upperLimitsObserved["Meff_3l_offZ_cut_300"] = 3.1; upperLimitsObserved["Meff_3l_offZ_cut_500"] = 2.1; upperLimitsObserved["Meff_2ltau_offZ_cut_0"] = 25.; upperLimitsObserved["Meff_2ltau_offZ_cut_150"] = 12.; upperLimitsObserved["Meff_2ltau_offZ_cut_300"] = 3.9; upperLimitsObserved["Meff_2ltau_offZ_cut_500"] = 2.2; upperLimitsObserved["Meff_3l_onZ_cut_0"] = 48.; upperLimitsObserved["Meff_3l_onZ_cut_150"] = 37.; upperLimitsObserved["Meff_3l_onZ_cut_300"] = 11.; 
upperLimitsObserved["Meff_3l_onZ_cut_500"] = 4.8; upperLimitsObserved["Meff_2ltau_onZ_cut_0"] = 85.; upperLimitsObserved["Meff_2ltau_onZ_cut_150"] = 28.; upperLimitsObserved["Meff_2ltau_onZ_cut_300"] = 5.9; upperLimitsObserved["Meff_2ltau_onZ_cut_500"] = 1.9; upperLimitsObserved["MeffStrong_3l_offZ_cut_0"] = 3.8; upperLimitsObserved["MeffStrong_3l_offZ_cut_150"] = 3.8; upperLimitsObserved["MeffStrong_3l_offZ_cut_300"] = 2.8; upperLimitsObserved["MeffStrong_3l_offZ_cut_500"] = 2.1; upperLimitsObserved["MeffStrong_2ltau_offZ_cut_0"] = 3.9; upperLimitsObserved["MeffStrong_2ltau_offZ_cut_150"] = 4.0; upperLimitsObserved["MeffStrong_2ltau_offZ_cut_300"] = 2.9; upperLimitsObserved["MeffStrong_2ltau_offZ_cut_500"] = 1.5; upperLimitsObserved["MeffStrong_3l_onZ_cut_0"] = 10.0; upperLimitsObserved["MeffStrong_3l_onZ_cut_150"] = 10.0; upperLimitsObserved["MeffStrong_3l_onZ_cut_300"] = 6.8; upperLimitsObserved["MeffStrong_3l_onZ_cut_500"] = 3.9; upperLimitsObserved["MeffStrong_2ltau_onZ_cut_0"] = 1.6; upperLimitsObserved["MeffStrong_2ltau_onZ_cut_150"] = 1.4; upperLimitsObserved["MeffStrong_2ltau_onZ_cut_300"] = 1.5; upperLimitsObserved["MeffStrong_2ltau_onZ_cut_500"] = 0.9; // Expected upper limits are used in finalize() to pick the most sensitive signal region map<string, double> upperLimitsExpected; upperLimitsExpected["HTlep_3l_offZ_cut_0"] = 11.; upperLimitsExpected["HTlep_3l_offZ_cut_100"] = 8.5; upperLimitsExpected["HTlep_3l_offZ_cut_150"] = 4.6; upperLimitsExpected["HTlep_3l_offZ_cut_200"] = 3.6; upperLimitsExpected["HTlep_3l_offZ_cut_300"] = 1.9; upperLimitsExpected["HTlep_2ltau_offZ_cut_0"] = 23.; upperLimitsExpected["HTlep_2ltau_offZ_cut_100"] = 14.; upperLimitsExpected["HTlep_2ltau_offZ_cut_150"] = 6.4; upperLimitsExpected["HTlep_2ltau_offZ_cut_200"] = 3.6; upperLimitsExpected["HTlep_2ltau_offZ_cut_300"] = 1.5; upperLimitsExpected["HTlep_3l_onZ_cut_0"] = 33.; upperLimitsExpected["HTlep_3l_onZ_cut_100"] = 25.; upperLimitsExpected["HTlep_3l_onZ_cut_150"] = 12.; 
upperLimitsExpected["HTlep_3l_onZ_cut_200"] = 6.5; upperLimitsExpected["HTlep_3l_onZ_cut_300"] = 3.1; upperLimitsExpected["HTlep_2ltau_onZ_cut_0"] = 94.; upperLimitsExpected["HTlep_2ltau_onZ_cut_100"] = 61.; upperLimitsExpected["HTlep_2ltau_onZ_cut_150"] = 9.9; upperLimitsExpected["HTlep_2ltau_onZ_cut_200"] = 4.5; upperLimitsExpected["HTlep_2ltau_onZ_cut_300"] = 1.9; upperLimitsExpected["METStrong_3l_offZ_cut_0"] = 3.1; upperLimitsExpected["METStrong_3l_offZ_cut_50"] = 2.4; upperLimitsExpected["METStrong_3l_offZ_cut_75"] = 2.3; upperLimitsExpected["METStrong_2ltau_offZ_cut_0"] = 4.8; upperLimitsExpected["METStrong_2ltau_offZ_cut_50"] = 3.3; upperLimitsExpected["METStrong_2ltau_offZ_cut_75"] = 2.1; upperLimitsExpected["METStrong_3l_onZ_cut_20"] = 8.7; upperLimitsExpected["METStrong_3l_onZ_cut_50"] = 4.9; upperLimitsExpected["METStrong_3l_onZ_cut_75"] = 3.8; upperLimitsExpected["METStrong_2ltau_onZ_cut_20"] = 7.3; upperLimitsExpected["METStrong_2ltau_onZ_cut_50"] = 2.8; upperLimitsExpected["METStrong_2ltau_onZ_cut_75"] = 1.5; upperLimitsExpected["METWeak_3l_offZ_cut_0"] = 10.; upperLimitsExpected["METWeak_3l_offZ_cut_50"] = 4.7; upperLimitsExpected["METWeak_3l_offZ_cut_75"] = 3.0; upperLimitsExpected["METWeak_2ltau_offZ_cut_0"] = 21.; upperLimitsExpected["METWeak_2ltau_offZ_cut_50"] = 4.0; upperLimitsExpected["METWeak_2ltau_offZ_cut_75"] = 2.6; upperLimitsExpected["METWeak_3l_onZ_cut_20"] = 30.; upperLimitsExpected["METWeak_3l_onZ_cut_50"] = 10.; upperLimitsExpected["METWeak_3l_onZ_cut_75"] = 5.4; upperLimitsExpected["METWeak_2ltau_onZ_cut_20"] = 88.; upperLimitsExpected["METWeak_2ltau_onZ_cut_50"] = 5.5; upperLimitsExpected["METWeak_2ltau_onZ_cut_75"] = 2.2; upperLimitsExpected["Meff_3l_offZ_cut_0"] = 11.; upperLimitsExpected["Meff_3l_offZ_cut_150"] = 8.8; upperLimitsExpected["Meff_3l_offZ_cut_300"] = 3.7; upperLimitsExpected["Meff_3l_offZ_cut_500"] = 2.1; upperLimitsExpected["Meff_2ltau_offZ_cut_0"] = 23.; upperLimitsExpected["Meff_2ltau_offZ_cut_150"] = 13.; 
upperLimitsExpected["Meff_2ltau_offZ_cut_300"] = 4.9; upperLimitsExpected["Meff_2ltau_offZ_cut_500"] = 2.4; upperLimitsExpected["Meff_3l_onZ_cut_0"] = 33.; upperLimitsExpected["Meff_3l_onZ_cut_150"] = 25.; upperLimitsExpected["Meff_3l_onZ_cut_300"] = 9.; upperLimitsExpected["Meff_3l_onZ_cut_500"] = 3.9; upperLimitsExpected["Meff_2ltau_onZ_cut_0"] = 94.; upperLimitsExpected["Meff_2ltau_onZ_cut_150"] = 35.; upperLimitsExpected["Meff_2ltau_onZ_cut_300"] = 6.8; upperLimitsExpected["Meff_2ltau_onZ_cut_500"] = 2.5; upperLimitsExpected["MeffStrong_3l_offZ_cut_0"] = 3.9; upperLimitsExpected["MeffStrong_3l_offZ_cut_150"] = 3.9; upperLimitsExpected["MeffStrong_3l_offZ_cut_300"] = 3.0; upperLimitsExpected["MeffStrong_3l_offZ_cut_500"] = 2.0; upperLimitsExpected["MeffStrong_2ltau_offZ_cut_0"] = 3.8; upperLimitsExpected["MeffStrong_2ltau_offZ_cut_150"] = 3.9; upperLimitsExpected["MeffStrong_2ltau_offZ_cut_300"] = 3.1; upperLimitsExpected["MeffStrong_2ltau_offZ_cut_500"] = 1.6; upperLimitsExpected["MeffStrong_3l_onZ_cut_0"] = 6.9; upperLimitsExpected["MeffStrong_3l_onZ_cut_150"] = 7.1; upperLimitsExpected["MeffStrong_3l_onZ_cut_300"] = 4.9; upperLimitsExpected["MeffStrong_3l_onZ_cut_500"] = 3.0; upperLimitsExpected["MeffStrong_2ltau_onZ_cut_0"] = 2.4; upperLimitsExpected["MeffStrong_2ltau_onZ_cut_150"] = 2.5; upperLimitsExpected["MeffStrong_2ltau_onZ_cut_300"] = 2.0; upperLimitsExpected["MeffStrong_2ltau_onZ_cut_500"] = 1.1; if (observed) return upperLimitsObserved[signal_region]; else return upperLimitsExpected[signal_region]; } /// Function checking if there is an OSSF lepton pair or a combination of 3 leptons with an invariant mass close to the Z mass /// @todo Should the reference Z mass be 91.2? 
int isonZ (const Particles& particles) { int onZ = 0; double best_mass_2 = 999.; double best_mass_3 = 999.; // Loop over all 2 particle combinations to find invariant mass of OSSF pair closest to Z mass foreach ( const Particle& p1, particles ) { foreach ( const Particle& p2, particles ) { double mass_difference_2_old = fabs(91.0 - best_mass_2); double mass_difference_2_new = fabs(91.0 - (p1.momentum() + p2.momentum()).mass()/GeV); // If particle combination is OSSF pair calculate mass difference to Z mass if ( (p1.pid()*p2.pid() == -121 || p1.pid()*p2.pid() == -169) ) { // Get invariant mass closest to Z mass if (mass_difference_2_new < mass_difference_2_old) best_mass_2 = (p1.momentum() + p2.momentum()).mass()/GeV; // In case there is an OSSF pair take also 3rd lepton into account (e.g. from FSR and photon to electron conversion) foreach ( const Particle & p3 , particles ) { double mass_difference_3_old = fabs(91.0 - best_mass_3); double mass_difference_3_new = fabs(91.0 - (p1.momentum() + p2.momentum() + p3.momentum()).mass()/GeV); if (mass_difference_3_new < mass_difference_3_old) best_mass_3 = (p1.momentum() + p2.momentum() + p3.momentum()).mass()/GeV; } } } } // Pick the minimum invariant mass of the best OSSF pair combination and the best 3 lepton combination // If this mass is in a 20 GeV window around the Z mass, the event is classified as onZ double best_mass = min(best_mass_2, best_mass_3); if (fabs(91.0 - best_mass) < 20) onZ = 1; return onZ; } //@} private: /// Histograms //@{ Histo1DPtr _h_HTlep_all, _h_HTjets_all, _h_MET_all, _h_Meff_all; Histo1DPtr _h_pt_1_3l, _h_pt_2_3l, _h_pt_3_3l, _h_pt_1_2ltau, _h_pt_2_2ltau, _h_pt_3_2ltau; Histo1DPtr _h_e_n, _h_mu_n, _h_tau_n; Histo1DPtr _h_excluded; //@} /// Fiducial efficiencies to model the effects of the ATLAS detector bool _use_fiducial_lepton_efficiency; /// List of signal regions and event counts per signal region vector<string> _signal_regions; map<string, double> _eventCountsPerSR; }; 
DECLARE_RIVET_PLUGIN(ATLAS_2012_I1204447); } diff --git a/analyses/pluginATLAS/ATLAS_2014_I1282441.cc b/analyses/pluginATLAS/ATLAS_2014_I1282441.cc --- a/analyses/pluginATLAS/ATLAS_2014_I1282441.cc +++ b/analyses/pluginATLAS/ATLAS_2014_I1282441.cc @@ -1,91 +1,91 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" namespace Rivet { class ATLAS_2014_I1282441 : public Analysis { public: ATLAS_2014_I1282441() : Analysis("ATLAS_2014_I1282441") { } void init() { // Use a large eta range such that we can discriminate on y /// @todo Convert to use a y-cut directly - UnstableFinalState ufs(Cuts::abseta < 10 && Cuts::pT > 500*MeV); + UnstableParticles ufs(Cuts::abseta < 10 && Cuts::pT > 500*MeV); IdentifiedFinalState phis(ufs); phis.acceptIdPair(PID::PHI); declare(phis, "Phis"); IdentifiedFinalState kpms(Cuts::abseta < 2.0 && Cuts::pT > 230*MeV); kpms.acceptIdPair(PID::KPLUS); declare(kpms, "Kpms"); _h_phi_rapidity = bookHisto1D(2,1,1); _h_phi_pT = bookHisto1D(1,1,1); } void analyze(const Event& event) { const Particles& ks_all = apply<IdentifiedFinalState>(event, "Kpms").particles(); Particles kp, km; foreach (const Particle& p, ks_all) { if (!p.hasAncestor(PID::PHI)) { MSG_DEBUG("-- K not from phi."); continue; } if (p.p3().mod() > 800*MeV) { MSG_DEBUG("-- p K too high."); continue; } (p.charge() > 0 ? 
kp : km).push_back(p); } const Particles& phis_all = apply<IdentifiedFinalState>(event, "Phis").particles(); Particles phis; /// @todo Use particles(Cuts&) instead foreach (const Particle& p, phis_all) { if ( p.absrap() > 0.8 ) { MSG_DEBUG("-- phi Y too high."); continue; } if ( p.pT() > 1.2*GeV ) { MSG_DEBUG("-- phi pT too high."); continue; } phis.push_back(p); } // Find Phi -> KK decays through matching of the kinematics if (!kp.empty() && !km.empty() && !phis.empty()) { const double w = event.weight(); MSG_DEBUG("Numbers of particles: #phi=" << phis.size() << ", #K+=" << kp.size() << ", #K-=" << km.size()); for (size_t ip = 0; ip < phis.size(); ++ip) { const Particle& phi = phis[ip]; for (size_t ikm = 0; ikm < km.size(); ++ikm) { for (size_t ikp = 0; ikp < kp.size(); ++ikp) { const FourMomentum mom = kp[ikp].mom() + km[ikm].mom(); if ( fuzzyEquals(mom.mass(), phi.mass(), 1e-5) ) { MSG_DEBUG("Accepted combinatoric: phi#:" << ip << " K+#:" << ikp << " K-#:" << ikm); _h_phi_rapidity->fill(phi.absrap(), w); _h_phi_pT->fill(phi.pT()/MeV, w); } else { MSG_DEBUG("Rejected combinatoric: phi#:" << ip << " K+#:" << ikp << " K-#:" << ikm << " Mass difference is " << mom.mass()-phi.mass()); } } } } } } void finalize() { scale(_h_phi_rapidity, crossSection()/microbarn/sumOfWeights()); scale(_h_phi_pT, crossSection()/microbarn/sumOfWeights()); } private: Histo1DPtr _h_phi_rapidity, _h_phi_pT; }; DECLARE_RIVET_PLUGIN(ATLAS_2014_I1282441); } diff --git a/analyses/pluginATLAS/ATLAS_2014_I1282447.cc b/analyses/pluginATLAS/ATLAS_2014_I1282447.cc --- a/analyses/pluginATLAS/ATLAS_2014_I1282447.cc +++ b/analyses/pluginATLAS/ATLAS_2014_I1282447.cc @@ -1,597 +1,597 @@ // -*- C++ -*- // ATLAS W+c analysis ////////////////////////////////////////////////////////////////////////// /* Description of rivet analysis ATLAS_2014_I1282447 W+c production This rivet routine implements the ATLAS W+c analysis. 
Apart from the histograms described and published on HEPData, some helper histograms are defined here; these are: d02-x01-y01, d02-x01-y02 and d08-x01-y01 are ratios; the numerator ("_plus") and denominator ("_minus") histograms are also given, so that the ratios can be reconstructed if need be (e.g. when running on separate samples). d05 and d06 are ratios over inclusive W production. The routine has to be run on a sample for inclusive W production in order to make sure the denominator ("_winc") is correctly filled. The ratios can be constructed using the following sample code: python divideWCharm.py import yoda hists_wc = yoda.read("Rivet_Wc.yoda") hists_winc = yoda.read("Rivet_Winc.yoda") ## division histograms --> ONLY for different plus minus runs # (merge before using yodamerge Rivet_plus.yoda Rivet_minus.yoda > Rivet_Wc.yoda) d02y01_plus = hists_wc["/ATLAS_2014_I1282447/d02-x01-y01_plus"] d02y01_minus = hists_wc["/ATLAS_2014_I1282447/d02-x01-y01_minus"] ratio_d02y01 = d02y01_plus.divide(d02y01_minus) ratio_d02y01.path = "/ATLAS_2014_I1282447/d02-x01-y01" d02y02_plus = hists_wc["/ATLAS_2014_I1282447/d02-x01-y02_plus"] d02y02_minus = hists_wc["/ATLAS_2014_I1282447/d02-x01-y02_minus"] ratio_d02y02 = d02y02_plus.divide(d02y02_minus) ratio_d02y02.path = "/ATLAS_2014_I1282447/d02-x01-y02" d08y01_plus = hists_wc["/ATLAS_2014_I1282447/d08-x01-y01_plus"] d08y01_minus = hists_wc["/ATLAS_2014_I1282447/d08-x01-y01_minus"] ratio_d08y01 = d08y01_plus.divide(d08y01_minus) ratio_d08y01.path = "/ATLAS_2014_I1282447/d08-x01-y01" # inclusive cross section h_winc = hists_winc["/ATLAS_2014_I1282447/d05-x01-y01"] h_d = hists_wc["/ATLAS_2014_I1282447/d01-x01-y02"] h_dstar = hists_wc["/ATLAS_2014_I1282447/d01-x01-y03"] ratio_wd = h_d.divide(h_winc) ratio_wd.path = "/ATLAS_2014_I1282447/d05-x01-y02" ratio_wdstar = h_dstar.divide(h_winc) ratio_wdstar.path = "/ATLAS_2014_I1282447/d05-x01-y03" # pT differential h_winc_plus = hists_winc["/ATLAS_2014_I1282447/d06-x01-y01_winc"] h_winc_minus 
= hists_winc["/ATLAS_2014_I1282447/d06-x01-y02_winc"] h_wd_plus = hists_wc["/ATLAS_2014_I1282447/d06-x01-y01_wplus"] h_wd_minus = hists_wc["/ATLAS_2014_I1282447/d06-x01-y02_wminus"] h_wdstar_plus = hists_wc["/ATLAS_2014_I1282447/d06-x01-y03_wplus"] h_wdstar_minus = hists_wc["/ATLAS_2014_I1282447/d06-x01-y04_wminus"] ratio_wd_plus = h_wd_plus.divide(h_winc_plus) ratio_wd_plus.path = "/ATLAS_2014_I1282447/d06-x01-y01" ratio_wd_minus = h_wd_minus.divide(h_winc_minus) ratio_wd_minus.path = "/ATLAS_2014_I1282447/d06-x01-y02" ratio_wdstar_plus = h_wdstar_plus.divide(h_winc_plus) ratio_wdstar_plus.path = "/ATLAS_2014_I1282447/d06-x01-y03" ratio_wdstar_minus = h_wdstar_minus.divide(h_winc_minus) ratio_wdstar_minus.path = "/ATLAS_2014_I1282447/d06-x01-y04" ## copy other histograms for plotting d01x01y01= hists_wc["/ATLAS_2014_I1282447/d01-x01-y01"] d01x01y01.path = "/ATLAS_2014_I1282447/d01-x01-y01" d01x01y02= hists_wc["/ATLAS_2014_I1282447/d01-x01-y02"] d01x01y02.path = "/ATLAS_2014_I1282447/d01-x01-y02" d01x01y03= hists_wc["/ATLAS_2014_I1282447/d01-x01-y03"] d01x01y03.path = "/ATLAS_2014_I1282447/d01-x01-y03" d03x01y01= hists_wc["/ATLAS_2014_I1282447/d03-x01-y01"] d03x01y01.path = "/ATLAS_2014_I1282447/d03-x01-y01" d03x01y02= hists_wc["/ATLAS_2014_I1282447/d03-x01-y02"] d03x01y02.path = "/ATLAS_2014_I1282447/d03-x01-y02" d04x01y01= hists_wc["/ATLAS_2014_I1282447/d04-x01-y01"] d04x01y01.path = "/ATLAS_2014_I1282447/d04-x01-y01" d04x01y02= hists_wc["/ATLAS_2014_I1282447/d04-x01-y02"] d04x01y02.path = "/ATLAS_2014_I1282447/d04-x01-y02" d04x01y03= hists_wc["/ATLAS_2014_I1282447/d04-x01-y03"] d04x01y03.path = "/ATLAS_2014_I1282447/d04-x01-y03" 
d04x01y04= hists_wc["/ATLAS_2014_I1282447/d04-x01-y04"] d04x01y04.path = "/ATLAS_2014_I1282447/d04-x01-y04" d07x01y01= hists_wc["/ATLAS_2014_I1282447/d07-x01-y01"] d07x01y01.path = "/ATLAS_2014_I1282447/d07-x01-y01" yoda.write([ratio_d02y01,ratio_d02y02,ratio_d08y01, ratio_wd ,ratio_wdstar,ratio_wd_plus,ratio_wd_minus ,ratio_wdstar_plus,ratio_wdstar_minus,d01x01y01,d01x01y02,d01x01y03,d03x01y01,d03x01y02,d04x01y01,d04x01y02,d04x01y03,d04x01y04,d07x01y01],"validation.yoda") */ ////////////////////////////////////////////////////////////////////////// #include "Rivet/Analysis.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Projections/WFinder.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/VetoedFinalState.hh" namespace Rivet { class ATLAS_2014_I1282447 : public Analysis { public: /// Constructor ATLAS_2014_I1282447() : Analysis("ATLAS_2014_I1282447") { setNeedsCrossSection(true); } /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { /// @todo Initialise and register projections here - UnstableFinalState fs; + UnstableParticles fs; Cut cuts = Cuts::etaIn(-2.5, 2.5) & (Cuts::pT > 20*GeV); /// should use sample WITHOUT QED radiation off the electron WFinder wfinder_born_el(fs, cuts, PID::ELECTRON, 25*GeV, 8000*GeV, 15*GeV, 0.1, WFinder::CLUSTERALL, WFinder::TRACK); declare(wfinder_born_el, "WFinder_born_el"); WFinder wfinder_born_mu(fs, cuts, PID::MUON , 25*GeV, 8000*GeV, 15*GeV, 0.1, WFinder::CLUSTERALL, WFinder::TRACK); declare(wfinder_born_mu, "WFinder_born_mu"); // all hadrons that could be coming from a charm decay -- // -- for safety, use region -3.5 - 3.5 - declare(UnstableFinalState(Cuts::abseta <3.5), "hadrons"); + declare(UnstableParticles(Cuts::abseta <3.5), "hadrons"); // Input for the jets: no neutrinos, no muons, and no electron which passed the 
electron cuts // also: NO electron, muon or tau (needed due to ATLAS jet truth reconstruction feature) VetoedFinalState veto; veto.addVetoOnThisFinalState(wfinder_born_el); veto.addVetoOnThisFinalState(wfinder_born_mu); veto.addVetoPairId(PID::ELECTRON); veto.addVetoPairId(PID::MUON); veto.addVetoPairId(PID::TAU); FastJets jets(veto, FastJets::ANTIKT, 0.4); declare(jets, "jets"); // Book histograms // charge separated integrated cross sections _hist_wcjet_charge = bookHisto1D("d01-x01-y01"); _hist_wd_charge = bookHisto1D("d01-x01-y02"); _hist_wdstar_charge = bookHisto1D("d01-x01-y03"); // charge integrated total cross sections _hist_wcjet_ratio = bookScatter2D("d02-x01-y01"); _hist_wd_ratio = bookScatter2D("d02-x01-y02"); _hist_wcjet_minus = bookHisto1D("d02-x01-y01_minus"); _hist_wd_minus = bookHisto1D("d02-x01-y02_minus"); _hist_wcjet_plus = bookHisto1D("d02-x01-y01_plus"); _hist_wd_plus = bookHisto1D("d02-x01-y02_plus"); // eta distributions _hist_wplus_wcjet_eta_lep = bookHisto1D("d03-x01-y01"); _hist_wminus_wcjet_eta_lep = bookHisto1D("d03-x01-y02"); _hist_wplus_wdminus_eta_lep = bookHisto1D("d04-x01-y01"); _hist_wminus_wdplus_eta_lep = bookHisto1D("d04-x01-y02"); _hist_wplus_wdstar_eta_lep = bookHisto1D("d04-x01-y03"); _hist_wminus_wdstar_eta_lep = bookHisto1D("d04-x01-y04"); // ratio of cross section (WD over W inclusive) // postprocess! _hist_w_inc = bookHisto1D("d05-x01-y01"); _hist_wd_winc_ratio = bookScatter2D("d05-x01-y02"); _hist_wdstar_winc_ratio = bookScatter2D("d05-x01-y03"); // ratio of cross section (WD over W inclusive -- function of pT of D meson) _hist_wplusd_wplusinc_pt_ratio = bookScatter2D("d06-x01-y01"); _hist_wminusd_wminusinc_pt_ratio = bookScatter2D("d06-x01-y02"); _hist_wplusdstar_wplusinc_pt_ratio = bookScatter2D("d06-x01-y03"); _hist_wminusdstar_wminusinc_pt_ratio = bookScatter2D("d06-x01-y04"); // could use for postprocessing! 
_hist_wplusd_wplusinc_pt = bookHisto1D("d06-x01-y01_wplus"); _hist_wminusd_wminusinc_pt = bookHisto1D("d06-x01-y02_wminus"); _hist_wplusdstar_wplusinc_pt = bookHisto1D("d06-x01-y03_wplus"); _hist_wminusdstar_wminusinc_pt = bookHisto1D("d06-x01-y04_wminus"); _hist_wplus_winc = bookHisto1D("d06-x01-y01_winc"); _hist_wminus_winc = bookHisto1D("d06-x01-y02_winc"); // jet multiplicity of charge integrated W+cjet cross section (+0 or +1 jet in addition to the charm jet) _hist_wcjet_jets = bookHisto1D("d07-x01-y01"); // jet multiplicity of W+cjet cross section ratio (+0 or +1 jet in addition to the charm jet) _hist_wcjet_jets_ratio = bookScatter2D("d08-x01-y01"); _hist_wcjet_jets_plus = bookHisto1D("d08-x01-y01_plus"); _hist_wcjet_jets_minus = bookHisto1D("d08-x01-y01_minus"); } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); double charge_weight = 0; // account for OS/SS events int lepton_charge = 0; double lepton_eta = 0.; /// Find leptons const WFinder& wfinder_born_el = apply<WFinder>(event, "WFinder_born_el"); const WFinder& wfinder_born_mu = apply<WFinder>(event, "WFinder_born_mu"); if (wfinder_born_el.empty() && wfinder_born_mu.empty()) { MSG_DEBUG("No W bosons found"); vetoEvent; } bool keepevent = false; //check electrons if (!wfinder_born_el.empty()) { const FourMomentum nu = wfinder_born_el.constituentNeutrinos()[0]; if (wfinder_born_el.mT() > 40*GeV && nu.pT() > 25*GeV) { keepevent = true; lepton_charge = wfinder_born_el.constituentLeptons()[0].charge(); lepton_eta = fabs(wfinder_born_el.constituentLeptons()[0].pseudorapidity()); } } //check muons if (!wfinder_born_mu.empty()) { const FourMomentum nu = wfinder_born_mu.constituentNeutrinos()[0]; if (wfinder_born_mu.mT() > 40*GeV && nu.pT() > 25*GeV) { keepevent = true; lepton_charge = wfinder_born_mu.constituentLeptons()[0].charge(); lepton_eta = fabs(wfinder_born_mu.constituentLeptons()[0].pseudorapidity()); } } if (!keepevent) { MSG_DEBUG("Event does not pass mT and 
MET cuts"); vetoEvent; } if (lepton_charge > 0) { _hist_wplus_winc->fill(10., weight); _hist_wplus_winc->fill(16., weight); _hist_wplus_winc->fill(30., weight); _hist_wplus_winc->fill(60., weight); _hist_w_inc->fill(+1, weight); } else if (lepton_charge < 0) { _hist_wminus_winc->fill(10., weight); _hist_wminus_winc->fill(16., weight); _hist_wminus_winc->fill(30., weight); _hist_wminus_winc->fill(60., weight); _hist_w_inc->fill(-1, weight); } // Find hadrons in the event - const UnstableFinalState& fs = apply<UnstableFinalState>(event, "hadrons"); + const UnstableParticles& fs = apply<UnstableParticles>(event, "hadrons"); /// FIND Different channels // 1: wcjet // get jets const Jets& jets = apply<FastJets>(event, "jets").jetsByPt(Cuts::pT>25.0*GeV && Cuts::abseta<2.5); // loop over jets to select jets used to match to charm Jets js; int matched_charmHadron = 0; double charm_charge = 0.; int njets = 0; int nj = 0; bool mat_jet = false; double ptcharm = 0; if (matched_charmHadron > -1) { for (const Jet& j : jets) { mat_jet = false; njets += 1; for (const Particle& p : fs.particles()) { /// @todo Avoid touching HepMC! 
const GenParticle* part = p.genParticle(); if (p.hasCharm()) { //if(isFromBDecay(p)) continue; if (p.fromBottom()) continue; if (p.pT() < 5*GeV ) continue; if (hasCharmedChildren(part)) continue; if (deltaR(p, j) < 0.3) { mat_jet = true; if (p.pT() > ptcharm) { charm_charge = part->pdg_id(); ptcharm = p.pT(); } } } } if (mat_jet) nj++; } if (charm_charge * lepton_charge > 0) charge_weight = -1; else charge_weight = +1; if (nj == 1) { if (lepton_charge > 0) { _hist_wcjet_charge ->fill( 1, weight*charge_weight); _hist_wcjet_plus ->fill( 0, weight*charge_weight); _hist_wplus_wcjet_eta_lep ->fill(lepton_eta, weight*charge_weight); _hist_wcjet_jets_plus ->fill(njets-1 , weight*charge_weight); } else if (lepton_charge < 0) { _hist_wcjet_charge ->fill( -1, weight*charge_weight); _hist_wcjet_minus ->fill( 0, weight*charge_weight); _hist_wminus_wcjet_eta_lep->fill(lepton_eta, weight*charge_weight); _hist_wcjet_jets_minus ->fill(njets-1 , weight*charge_weight); } _hist_wcjet_jets->fill(njets-1, weight*charge_weight); } } // // 1/2: w+d(*) meson for (const Particle& p : fs.particles()) { /// @todo Avoid touching HepMC! 
const GenParticle* part = p.genParticle(); if (p.pT() < 8*GeV) continue; if (fabs(p.eta()) > 2.2) continue; // W+D if (abs(part->pdg_id()) == 411) { if (lepton_charge * part->pdg_id() > 0) charge_weight = -1; else charge_weight = +1; // fill histos if (lepton_charge > 0) { _hist_wd_charge ->fill( 1, weight*charge_weight); _hist_wd_plus ->fill( 0, weight*charge_weight); _hist_wplus_wdminus_eta_lep->fill(lepton_eta, weight*charge_weight); _hist_wplusd_wplusinc_pt ->fill( p.pT(), weight*charge_weight); } else if (lepton_charge < 0) { _hist_wd_charge ->fill( -1, weight*charge_weight); _hist_wd_minus ->fill( 0, weight*charge_weight); _hist_wminus_wdplus_eta_lep->fill(lepton_eta, weight*charge_weight); _hist_wminusd_wminusinc_pt ->fill(p.pT() , weight*charge_weight); } } // W+Dstar if ( abs(part->pdg_id()) == 413 ) { if (lepton_charge*part->pdg_id() > 0) charge_weight = -1; else charge_weight = +1; if (lepton_charge > 0) { _hist_wdstar_charge->fill(+1, weight*charge_weight); _hist_wd_plus->fill( 0, weight*charge_weight); _hist_wplus_wdstar_eta_lep->fill( lepton_eta, weight*charge_weight); _hist_wplusdstar_wplusinc_pt->fill( p.pT(), weight*charge_weight); } else if (lepton_charge < 0) { _hist_wdstar_charge->fill(-1, weight*charge_weight); _hist_wd_minus->fill(0, weight*charge_weight); _hist_wminus_wdstar_eta_lep->fill(lepton_eta, weight*charge_weight); _hist_wminusdstar_wminusinc_pt->fill(p.pT(), weight*charge_weight); } } } } /// Normalise histograms etc., after the run void finalize() { const double sf = crossSection() / sumOfWeights(); // norm to cross section // d01 scale(_hist_wcjet_charge, sf); scale(_hist_wd_charge, sf); scale(_hist_wdstar_charge, sf); //d02 scale(_hist_wcjet_plus, sf); scale(_hist_wcjet_minus, sf); scale(_hist_wd_plus, sf); scale(_hist_wd_minus, sf); divide(_hist_wcjet_plus, _hist_wcjet_minus, _hist_wcjet_ratio); divide(_hist_wd_plus, _hist_wd_minus, _hist_wd_ratio ); //d03 scale(_hist_wplus_wcjet_eta_lep, sf); scale(_hist_wminus_wcjet_eta_lep, 
sf); //d04 scale(_hist_wplus_wdminus_eta_lep, crossSection()/sumOfWeights()); scale(_hist_wminus_wdplus_eta_lep, crossSection()/sumOfWeights()); scale(_hist_wplus_wdstar_eta_lep , crossSection()/sumOfWeights()); scale(_hist_wminus_wdstar_eta_lep, crossSection()/sumOfWeights()); //d05 scale(_hist_w_inc, 0.01 * sf); // in percent --> /100 divide(_hist_wd_charge, _hist_w_inc, _hist_wd_winc_ratio ); divide(_hist_wdstar_charge, _hist_w_inc, _hist_wdstar_winc_ratio); //d06, in percentage! scale(_hist_wplusd_wplusinc_pt, sf); scale(_hist_wminusd_wminusinc_pt, sf); scale(_hist_wplusdstar_wplusinc_pt, sf); scale(_hist_wminusdstar_wminusinc_pt, sf); scale(_hist_wplus_winc, 0.01 * sf); // in percent --> /100 scale(_hist_wminus_winc, 0.01 * sf); // in percent --> /100 divide(_hist_wplusd_wplusinc_pt, _hist_wplus_winc , _hist_wplusd_wplusinc_pt_ratio ); divide(_hist_wminusd_wminusinc_pt, _hist_wminus_winc, _hist_wminusd_wminusinc_pt_ratio ); divide(_hist_wplusdstar_wplusinc_pt, _hist_wplus_winc , _hist_wplusdstar_wplusinc_pt_ratio ); divide(_hist_wminusdstar_wminusinc_pt, _hist_wminus_winc, _hist_wminusdstar_wminusinc_pt_ratio); //d07 scale(_hist_wcjet_jets, sf); //d08 scale(_hist_wcjet_jets_minus, sf); scale(_hist_wcjet_jets_plus, sf); divide(_hist_wcjet_jets_plus, _hist_wcjet_jets_minus , _hist_wcjet_jets_ratio); } //@} private: // Data members like post-cuts event weight counters go here // Check whether particle comes from b-decay /// @todo Use built-in method and avoid HepMC bool isFromBDecay(const Particle& p) { bool isfromB = false; if (p.genParticle() == nullptr) return false; const GenParticle* part = p.genParticle(); const GenVertex* ivtx = const_cast<GenVertex*>(part->production_vertex()); while (ivtx) { if (ivtx->particles_in_size() < 1) { isfromB = false; break; } const HepMC::GenVertex::particles_in_const_iterator iPart_invtx = ivtx->particles_in_const_begin(); part = (*iPart_invtx); if (!part) { isfromB = false; break; } isfromB = PID::hasBottom(part->pdg_id()); if (isfromB 
== true) break; ivtx = const_cast<GenVertex*>(part->production_vertex()); if ( part->pdg_id() == 2212 || !ivtx ) break; // reached beam } return isfromB; } // Check whether particle has charmed children /// @todo Use built-in method and avoid HepMC! bool hasCharmedChildren(const GenParticle *part) { bool hasCharmedChild = false; if (part == nullptr) return false; const GenVertex* ivtx = const_cast<GenVertex*>(part->end_vertex()); if (ivtx == nullptr) return false; // if (ivtx->particles_out_size() < 2) return false; HepMC::GenVertex::particles_out_const_iterator iPart_invtx = ivtx->particles_out_const_begin(); HepMC::GenVertex::particles_out_const_iterator end_invtx = ivtx->particles_out_const_end(); for ( ; iPart_invtx != end_invtx; iPart_invtx++ ) { const GenParticle* p2 = (*iPart_invtx); if (p2 == part) continue; hasCharmedChild = PID::hasCharm(p2->pdg_id()); if (hasCharmedChild == true) break; hasCharmedChild = hasCharmedChildren(p2); if (hasCharmedChild == true) break; } return hasCharmedChild; } private: /// @name Histograms //@{ //d01-x01- Histo1DPtr _hist_wcjet_charge; Histo1DPtr _hist_wd_charge; Histo1DPtr _hist_wdstar_charge; //d02-x01- Scatter2DPtr _hist_wcjet_ratio; Scatter2DPtr _hist_wd_ratio; Histo1DPtr _hist_wcjet_plus; Histo1DPtr _hist_wd_plus; Histo1DPtr _hist_wcjet_minus; Histo1DPtr _hist_wd_minus; //d03-x01- Histo1DPtr _hist_wplus_wcjet_eta_lep; Histo1DPtr _hist_wminus_wcjet_eta_lep; //d04-x01- Histo1DPtr _hist_wplus_wdminus_eta_lep; Histo1DPtr _hist_wminus_wdplus_eta_lep; //d05-x01- Histo1DPtr _hist_wplus_wdstar_eta_lep; Histo1DPtr _hist_wminus_wdstar_eta_lep; // postprocessing histos //d05-x01 Histo1DPtr _hist_w_inc; Scatter2DPtr _hist_wd_winc_ratio; Scatter2DPtr _hist_wdstar_winc_ratio; //d06-x01 Histo1DPtr _hist_wplus_winc; Histo1DPtr _hist_wminus_winc; Scatter2DPtr _hist_wplusd_wplusinc_pt_ratio; Scatter2DPtr _hist_wminusd_wminusinc_pt_ratio; Scatter2DPtr _hist_wplusdstar_wplusinc_pt_ratio; Scatter2DPtr _hist_wminusdstar_wminusinc_pt_ratio; Histo1DPtr 
_hist_wplusd_wplusinc_pt ; Histo1DPtr _hist_wminusd_wminusinc_pt; Histo1DPtr _hist_wplusdstar_wplusinc_pt; Histo1DPtr _hist_wminusdstar_wminusinc_pt; // d07-x01 Histo1DPtr _hist_wcjet_jets ; //d08-x01 Scatter2DPtr _hist_wcjet_jets_ratio ; Histo1DPtr _hist_wcjet_jets_plus ; Histo1DPtr _hist_wcjet_jets_minus; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ATLAS_2014_I1282447); } diff --git a/analyses/pluginATLAS/ATLAS_2014_I1327229.cc b/analyses/pluginATLAS/ATLAS_2014_I1327229.cc --- a/analyses/pluginATLAS/ATLAS_2014_I1327229.cc +++ b/analyses/pluginATLAS/ATLAS_2014_I1327229.cc @@ -1,1330 +1,1330 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/VisibleFinalState.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Projections/FastJets.hh" namespace Rivet { class ATLAS_2014_I1327229 : public Analysis { public: /// Constructor ATLAS_2014_I1327229() : Analysis("ATLAS_2014_I1327229") { } /// Book histograms and initialise projections before the run void init() { // To calculate the acceptance without having the fiducial lepton efficiencies included, this part can be turned off _use_fiducial_lepton_efficiency = true; // Random numbers for simulation of ATLAS detector reconstruction efficiency /// @todo Replace with SmearedParticles etc. 
srand(160385); // Read in all signal regions _signal_regions = getSignalRegions(); // Set number of events per signal region to 0 for (size_t i = 0; i < _signal_regions.size(); i++) _eventCountsPerSR[_signal_regions[i]] = 0.0; // Final state including all charged and neutral particles const FinalState fs(-5.0, 5.0, 1*GeV); declare(fs, "FS"); // Final state including all charged particles declare(ChargedFinalState(-2.5, 2.5, 1*GeV), "CFS"); // Final state including all visible particles (to calculate MET, Jets etc.) declare(VisibleFinalState(-5.0,5.0),"VFS"); // Final state including all AntiKt 04 Jets VetoedFinalState vfs; vfs.addVetoPairId(PID::MUON); declare(FastJets(vfs, FastJets::ANTIKT, 0.4), "AntiKtJets04"); // Final state including all unstable particles (including taus) - declare(UnstableFinalState(Cuts::abseta < 5.0 && Cuts::pT > 5*GeV),"UFS"); + declare(UnstableParticles(Cuts::abseta < 5.0 && Cuts::pT > 5*GeV),"UFS"); // Final state including all electrons IdentifiedFinalState elecs(Cuts::abseta < 2.47 && Cuts::pT > 10*GeV); elecs.acceptIdPair(PID::ELECTRON); declare(elecs, "elecs"); // Final state including all muons IdentifiedFinalState muons(Cuts::abseta < 2.5 && Cuts::pT > 10*GeV); muons.acceptIdPair(PID::MUON); declare(muons, "muons"); /// Book histograms: _h_HTlep_all = bookHisto1D("HTlep_all", 30,0,3000); _h_HTjets_all = bookHisto1D("HTjets_all", 30,0,3000); _h_MET_all = bookHisto1D("MET_all", 30,0,1500); _h_Meff_all = bookHisto1D("Meff_all", 50,0,5000); _h_min_pT_all = bookHisto1D("min_pT_all", 50, 0, 2000); _h_mT_all = bookHisto1D("mT_all", 50, 0, 2000); _h_e_n = bookHisto1D("e_n", 10, -0.5, 9.5); _h_mu_n = bookHisto1D("mu_n", 10, -0.5, 9.5); _h_tau_n = bookHisto1D("tau_n", 10, -0.5, 9.5); _h_pt_1_3l = bookHisto1D("pt_1_3l", 100, 0, 2000); _h_pt_2_3l = bookHisto1D("pt_2_3l", 100, 0, 2000); _h_pt_3_3l = bookHisto1D("pt_3_3l", 100, 0, 2000); _h_pt_1_2ltau = bookHisto1D("pt_1_2ltau", 100, 0, 2000); _h_pt_2_2ltau = bookHisto1D("pt_2_2ltau", 100, 0, 
2000); _h_pt_3_2ltau = bookHisto1D("pt_3_2ltau", 100, 0, 2000); _h_excluded = bookHisto1D("excluded", 2, -0.5, 1.5); } /// Perform the per-event analysis void analyze(const Event& event) { // Muons Particles muon_candidates; const Particles charged_tracks = apply<ChargedFinalState>(event, "CFS").particles(); const Particles visible_particles = apply<VisibleFinalState>(event, "VFS").particles(); for (const Particle& mu : apply<IdentifiedFinalState>(event, "muons").particlesByPt() ) { // Calculate pTCone30 variable (pT of all tracks within dR<0.3 - pT of muon itself) double pTinCone = -mu.pT(); for (const Particle& track : charged_tracks ) { if (deltaR(mu.momentum(),track.momentum()) < 0.3 ) pTinCone += track.pT(); } // Calculate eTCone30 variable (pT of all visible particles within dR<0.3) double eTinCone = 0.; for (const Particle& visible_particle : visible_particles) { if (visible_particle.abspid() != PID::MUON && inRange(deltaR(mu.momentum(),visible_particle.momentum()), 0.1, 0.3)) eTinCone += visible_particle.pT(); } // Apply reconstruction efficiency and simulate reconstruction int muon_id = 13; if (mu.hasAncestor(PID::TAU) || mu.hasAncestor(-PID::TAU)) muon_id = 14; const double eff = (_use_fiducial_lepton_efficiency) ? 
apply_reco_eff(muon_id,mu) : 1.0; const bool keep_muon = rand()/static_cast<double>(RAND_MAX)<=eff; // Keep muon if pTCone30/pT < 0.15 and eTCone30/pT < 0.2 and reconstructed if (keep_muon && pTinCone/mu.pT() <= 0.1 && eTinCone/mu.pT() < 0.1) muon_candidates.push_back(mu); } // Electrons Particles electron_candidates; for (const Particle& e : apply<IdentifiedFinalState>(event, "elecs").particlesByPt() ) { // Neglect electrons in crack regions if (inRange(e.abseta(), 1.37, 1.52)) continue; // Calculate pTCone30 variable (pT of all tracks within dR<0.3 - pT of electron itself) double pTinCone = -e.pT(); for (const Particle& track : charged_tracks) { if (deltaR(e.momentum(), track.momentum()) < 0.3 ) pTinCone += track.pT(); } // Calculate eTCone30 variable (pT of all visible particles (except muons) within dR<0.3) double eTinCone = 0.; for (const Particle& visible_particle : visible_particles) { if (visible_particle.abspid() != PID::MUON && inRange(deltaR(e.momentum(),visible_particle.momentum()), 0.1, 0.3)) eTinCone += visible_particle.pT(); } // Apply reconstruction efficiency and simulate reconstruction int elec_id = 11; if (e.hasAncestor(15) || e.hasAncestor(-15)) elec_id = 12; const double eff = (_use_fiducial_lepton_efficiency) ? 
apply_reco_eff(elec_id, e) : 1.0;
        const bool keep_elec = rand()/static_cast<double>(RAND_MAX) <= eff;
        // Keep electron if pTCone30/pT <= 0.1 and eTCone30/pT < 0.1 and it is reconstructed
        if (keep_elec && pTinCone/e.pT() <= 0.1 && eTinCone/e.pT() < 0.1) electron_candidates.push_back(e);
      }

      // Taus
      Particles tau_candidates;
      for (const Particle& tau : apply<UnstableParticles>(event, "UFS").particles()) {
        // Only pick taus out of all unstable particles
        if (tau.abspid() != PID::TAU) continue;
        // Check that the tau has decayed into daughter particles
        if (tau.genParticle()->end_vertex() == 0) continue;
        // Calculate the visible tau momentum using the tau-neutrino momentum in the tau decay
        FourMomentum daughter_tau_neutrino_momentum = get_tau_neutrino_momentum(tau);
        Particle tau_vis = tau;
        tau_vis.setMomentum(tau.momentum() - daughter_tau_neutrino_momentum);
        // Keep only taus with |eta| < 2.5 and above 15 GeV of visible tau pT
        if (tau_vis.pT()/GeV <= 15.0 || tau_vis.abseta() > 2.5) continue;
        // Get the prong number (number of tracks) in the tau decay and check if the tau decays leptonically
        unsigned int nprong = 0;
        bool lep_decaying_tau = false;
        get_prong_number(tau.genParticle(), nprong, lep_decaying_tau);
        // Apply reconstruction efficiency and simulate reconstruction
        int tau_id = 15;
        if (nprong == 1) tau_id = 15;
        else if (nprong == 3) tau_id = 16;
        const double eff = (_use_fiducial_lepton_efficiency) ?
apply_reco_eff(tau_id, tau_vis) : 1.0;
        const bool keep_tau = rand()/static_cast<double>(RAND_MAX) <= eff;
        // Keep tau if nprong == 1, it decays hadronically and it is reconstructed
        if (!lep_decaying_tau && nprong == 1 && keep_tau) tau_candidates.push_back(tau_vis);
      }

      // Jets (all anti-kt R = 0.4 jets with pT > 30 GeV and |eta| < 4.9)
      Jets jet_candidates;
      for (const Jet& jet : apply<FastJets>(event, "AntiKtJets04").jetsByPt(30.0*GeV)) {
        if (jet.abseta() < 4.9) jet_candidates.push_back(jet);
      }

      // ETmiss
      Particles vfs_particles = apply<VisibleFinalState>(event, "VFS").particles();
      FourMomentum pTmiss;
      for (const Particle& p : vfs_particles) pTmiss -= p.momentum();
      double eTmiss = pTmiss.pT()/GeV;

      // -------------------------
      // Overlap removal

      // electron - electron
      Particles electron_candidates_2;
      for (size_t ie = 0; ie < electron_candidates.size(); ++ie) {
        const Particle& e = electron_candidates[ie];
        bool away = true;
        // If an electron pair is within dR < 0.1: remove the electron with the lower pT
        for (size_t ie2 = 0; ie2 < electron_candidates_2.size(); ++ie2) {
          if (deltaR(e.momentum(), electron_candidates_2[ie2].momentum()) < 0.1) { away = false; break; }
        }
        // If isolated, keep it
        if (away) electron_candidates_2.push_back(e);
      }

      // jet - electron
      Jets recon_jets;
      for (const Jet& jet : jet_candidates) {
        bool away = true;
        // If the jet is within dR < 0.2 of an electron: remove the jet
        for (const Particle& e : electron_candidates_2) {
          if (deltaR(e.momentum(), jet.momentum()) < 0.2) { away = false; break; }
        }
        // jet - tau
        if (away) {
          // If the jet is within dR < 0.2 of a tau: remove the jet
          for (const Particle& tau : tau_candidates) {
            if (deltaR(tau.momentum(), jet.momentum()) < 0.2) { away = false; break; }
          }
        }
        // If isolated, keep it
        if (away) recon_jets.push_back(jet);
      }

      // electron - jet
      Particles recon_leptons, recon_e;
      for (size_t ie = 0; ie < electron_candidates_2.size(); ++ie) {
        const Particle& e = electron_candidates_2[ie];
        // If the electron is within 0.2 < dR < 0.4 of any jet: remove the electron
        bool away = true;
        for (const Jet& jet :
recon_jets) {
          if (deltaR(e.momentum(), jet.momentum()) < 0.4) { away = false; break; }
        }
        // electron - muon
        // If the electron is within dR < 0.1 of a muon: remove the electron
        if (away) {
          for (const Particle& mu : muon_candidates) {
            if (deltaR(mu.momentum(), e.momentum()) < 0.1) { away = false; break; }
          }
        }
        // If isolated, keep it
        if (away) {
          recon_e.push_back(e);
          recon_leptons.push_back(e);
        }
      }

      // tau - electron
      Particles recon_tau;
      for (const Particle& tau : tau_candidates) {
        bool away = true;
        // If the tau is within dR < 0.2 of an electron: remove the tau
        for (const Particle& e : recon_e) {
          if (deltaR(tau.momentum(), e.momentum()) < 0.2) { away = false; break; }
        }
        // tau - muon
        // If the tau is within dR < 0.2 of a muon: remove the tau
        if (away) {
          for (const Particle& mu : muon_candidates) {
            if (deltaR(tau.momentum(), mu.momentum()) < 0.2) { away = false; break; }
          }
        }
        // If isolated, keep it
        if (away) recon_tau.push_back(tau);
      }

      // muon - jet
      Particles recon_mu, trigger_mu;
      // If the muon is within dR < 0.4 of a jet: remove the muon
      for (const Particle& mu : muon_candidates) {
        bool away = true;
        for (const Jet& jet : recon_jets) {
          if (deltaR(mu.momentum(), jet.momentum()) < 0.4) { away = false; break; }
        }
        if (away) {
          recon_mu.push_back(mu);
          recon_leptons.push_back(mu);
          if (mu.abseta() < 2.4) trigger_mu.push_back(mu);
        }
      }

      // End overlap removal
      // ---------------------

      // Jet cleaning
      if (rand()/static_cast<double>(RAND_MAX) <= 0.42) {
        for (const Jet& jet : recon_jets) {
          const double eta = jet.rapidity();
          const double phi = jet.azimuthalAngle(MINUSPI_PLUSPI);
          if (jet.pT() > 25*GeV && inRange(eta, -0.1, 1.5) && inRange(phi, -0.9, -0.5)) vetoEvent;
        }
      }

      // Event selection
      // Require at least 3 charged tracks in the event
      if (charged_tracks.size() < 3) vetoEvent;
      // And at least one e/mu passing the trigger
      if (!(!recon_e.empty() && recon_e[0].pT() > 26.*GeV) && !(!trigger_mu.empty() && trigger_mu[0].pT() > 26.*GeV)) {
        MSG_DEBUG("Hardest lepton fails trigger");
        vetoEvent;
      }
      // And only accept events with at least 2 electrons/muons and at least 3 leptons in total
      if (recon_mu.size() + recon_e.size() + recon_tau.size() < 3 || recon_leptons.size() < 2) vetoEvent;

      // Get the event weight
      const double weight = event.weight();

      // Sort leptons by decreasing pT
      sortByPt(recon_leptons);
      sortByPt(recon_tau);

      // Calculate HTlep, fill the lepton pT histograms and store the chosen combination of 3 leptons
      double HTlep = 0.;
      Particles chosen_leptons;
      if (recon_leptons.size() > 2) {
        _h_pt_1_3l->fill(recon_leptons[0].pT()/GeV, weight);
        _h_pt_2_3l->fill(recon_leptons[1].pT()/GeV, weight);
        _h_pt_3_3l->fill(recon_leptons[2].pT()/GeV, weight);
        HTlep = (recon_leptons[0].pT() + recon_leptons[1].pT() + recon_leptons[2].pT())/GeV;
        chosen_leptons.push_back(recon_leptons[0]);
        chosen_leptons.push_back(recon_leptons[1]);
        chosen_leptons.push_back(recon_leptons[2]);
      } else {
        _h_pt_1_2ltau->fill(recon_leptons[0].pT()/GeV, weight);
        _h_pt_2_2ltau->fill(recon_leptons[1].pT()/GeV, weight);
        _h_pt_3_2ltau->fill(recon_tau[0].pT()/GeV, weight);
        HTlep = recon_leptons[0].pT()/GeV + recon_leptons[1].pT()/GeV + recon_tau[0].pT()/GeV;
        chosen_leptons.push_back(recon_leptons[0]);
        chosen_leptons.push_back(recon_leptons[1]);
        chosen_leptons.push_back(recon_tau[0]);
      }

      // Calculate the mT and mTW variables
      Particles mT_leptons;
      Particles mTW_leptons;
      for (size_t i1 = 0; i1 < 3; i1++) {
        for (size_t i2 = i1+1; i2 < 3; i2++) {
          double OSSF_inv_mass = isOSSF_mass(chosen_leptons[i1], chosen_leptons[i2]);
          if (OSSF_inv_mass != 0.) {
            for (size_t i3 = 0; i3 < 3; i3++) {
              if (i3 != i2 && i3 != i1) {
                mT_leptons.push_back(chosen_leptons[i3]);
                if (fabs(91.0 - OSSF_inv_mass) < 20.
) mTW_leptons.push_back(chosen_leptons[i3]);
              }
            }
          } else {
            mT_leptons.push_back(chosen_leptons[0]);
            mTW_leptons.push_back(chosen_leptons[0]);
          }
        }
      }
      sortByPt(mT_leptons);
      sortByPt(mTW_leptons);
      double mT  = sqrt(2*pTmiss.pT()/GeV*mT_leptons[0].pT()/GeV*(1-cos(pTmiss.phi()-mT_leptons[0].phi())));
      double mTW = sqrt(2*pTmiss.pT()/GeV*mTW_leptons[0].pT()/GeV*(1-cos(pTmiss.phi()-mTW_leptons[0].phi())));

      // Calculate the min-pT variable
      double min_pT = chosen_leptons[2].pT()/GeV;

      // Number of prompt e/mu and hadronic taus
      _h_e_n->fill(recon_e.size(), weight);
      _h_mu_n->fill(recon_mu.size(), weight);
      _h_tau_n->fill(recon_tau.size(), weight);

      // Calculate the HTjets variable
      double HTjets = 0.;
      for (const Jet& jet : recon_jets) HTjets += jet.pT()/GeV;

      // Calculate the meff variable
      double meff = eTmiss + HTjets;
      Particles all_leptons;
      for (const Particle& e   : recon_e)   { meff += e.pT()/GeV;   all_leptons.push_back(e); }
      for (const Particle& mu  : recon_mu)  { meff += mu.pT()/GeV;  all_leptons.push_back(mu); }
      for (const Particle& tau : recon_tau) { meff += tau.pT()/GeV; all_leptons.push_back(tau); }

      // Fill histograms of kinematic variables
      _h_HTlep_all->fill(HTlep, weight);
      _h_HTjets_all->fill(HTjets, weight);
      _h_MET_all->fill(eTmiss, weight);
      _h_Meff_all->fill(meff, weight);
      _h_min_pT_all->fill(min_pT, weight);
      _h_mT_all->fill(mT, weight);

      // Determine the signal region (3l / 2ltau, onZ / offZ-OSSF / offZ-noOSSF)
      // 3l vs. 2ltau
      string basic_signal_region;
      if (recon_mu.size() + recon_e.size() > 2) basic_signal_region += "3l_";
      else if ((recon_mu.size() + recon_e.size() == 2) && (recon_tau.size() > 0)) basic_signal_region += "2ltau_";
      // Is there an OSSF pair or a three-lepton combination with an invariant mass close to the Z mass?
      int onZ = isonZ(chosen_leptons);
      if (onZ == 1) basic_signal_region += "onZ";
      else if (onZ == 0) {
        bool OSSF = isOSSF(chosen_leptons);
        if (OSSF) basic_signal_region += "offZ_OSSF";
        else basic_signal_region += "offZ_noOSSF";
      }
      // Check in which signal regions this event falls and adjust the event counters
      // INFO: The b-jet signal regions of the paper are not included in this Rivet implementation
      fillEventCountsPerSR(basic_signal_region, onZ, HTlep, eTmiss, HTjets, meff, min_pT, mTW, weight);
    }


    /// Normalise histograms etc., after the run
    void finalize() {

      // Normalize to an integrated luminosity of 1 fb-1
      double norm = crossSection()/femtobarn/sumOfWeights();
      string best_signal_region = "";
      double ratio_best_SR = 0.;
      // Loop over all signal regions and find the signal region with the best sensitivity (ratio of signal events to visible cross-section)
      for (size_t i = 0; i < _signal_regions.size(); i++) {
        double signal_events = _eventCountsPerSR[_signal_regions[i]] * norm;
        // Use expected upper limits to find the best signal region:
        double UL95 = getUpperLimit(_signal_regions[i], false);
        double ratio = signal_events / UL95;
        if (ratio > ratio_best_SR) {
          best_signal_region = _signal_regions.at(i);
          ratio_best_SR = ratio;
        }
      }
      double signal_events_best_SR = _eventCountsPerSR[best_signal_region] * norm;
      double exp_UL_best_SR = getUpperLimit(best_signal_region, false);
      double obs_UL_best_SR = getUpperLimit(best_signal_region, true);

      // Print out the result
      cout << "----------------------------------------------------------------------------------------" << endl;
      cout << "Number of total events: " << sumOfWeights() << endl;
      cout << "Best signal region: " << best_signal_region << endl;
      cout <<
"Normalized number of signal events in this best signal region (per fb-1): " << signal_events_best_SR << endl; cout << "Efficiency*Acceptance: " << _eventCountsPerSR[best_signal_region]/sumOfWeights() << endl; cout << "Cross-section [fb]: " << crossSection()/femtobarn << endl; cout << "Expected visible cross-section (per fb-1): " << exp_UL_best_SR << endl; cout << "Ratio (signal events / expected visible cross-section): " << ratio_best_SR << endl; cout << "Observed visible cross-section (per fb-1): " << obs_UL_best_SR << endl; cout << "Ratio (signal events / observed visible cross-section): " << signal_events_best_SR/obs_UL_best_SR << endl; cout << "----------------------------------------------------------------------------------------" << endl; cout << "Using the EXPECTED limits (visible cross-section) of the analysis: " << endl; if (signal_events_best_SR > exp_UL_best_SR) { cout << "Since the number of signal events > the visible cross-section, this model/grid point is EXCLUDED with 95% C.L." << endl; _h_excluded->fill(1); } else { cout << "Since the number of signal events < the visible cross-section, this model/grid point is NOT EXCLUDED." << endl; _h_excluded->fill(0); } cout << "----------------------------------------------------------------------------------------" << endl; cout << "Using the OBSERVED limits (visible cross-section) of the analysis: " << endl; if (signal_events_best_SR > obs_UL_best_SR) { cout << "Since the number of signal events > the visible cross-section, this model/grid point is EXCLUDED with 95% C.L." << endl; _h_excluded->fill(1); } else { cout << "Since the number of signal events < the visible cross-section, this model/grid point is NOT EXCLUDED." << endl; _h_excluded->fill(0); } cout << "----------------------------------------------------------------------------------------" << endl; cout << "INFO: The b-jet signal regions of the paper are not included in this Rivet implementation." 
<< endl;
      cout << "----------------------------------------------------------------------------------------" << endl;

      /// Normalize to the cross-section
      if (norm != 0) {
        scale(_h_HTlep_all, norm);
        scale(_h_HTjets_all, norm);
        scale(_h_MET_all, norm);
        scale(_h_Meff_all, norm);
        scale(_h_min_pT_all, norm);
        scale(_h_mT_all, norm);
        scale(_h_pt_1_3l, norm);
        scale(_h_pt_2_3l, norm);
        scale(_h_pt_3_3l, norm);
        scale(_h_pt_1_2ltau, norm);
        scale(_h_pt_2_2ltau, norm);
        scale(_h_pt_3_2ltau, norm);
        scale(_h_e_n, norm);
        scale(_h_mu_n, norm);
        scale(_h_tau_n, norm);
        scale(_h_excluded, norm);
      }
    }


    /// Helper functions
    //@{

    /// Function giving a list of all signal regions
    vector<string> getSignalRegions() {
      // List of basic signal regions
      vector<string> basic_signal_regions;
      basic_signal_regions.push_back("3l_offZ_OSSF");
      basic_signal_regions.push_back("3l_offZ_noOSSF");
      basic_signal_regions.push_back("3l_onZ");
      basic_signal_regions.push_back("2ltau_offZ_OSSF");
      basic_signal_regions.push_back("2ltau_offZ_noOSSF");
      basic_signal_regions.push_back("2ltau_onZ");
      // List of kinematic variables
      vector<string> kinematic_variables;
      kinematic_variables.push_back("HTlep");
      kinematic_variables.push_back("METStrong");
      kinematic_variables.push_back("METWeak");
      kinematic_variables.push_back("Meff");
      kinematic_variables.push_back("MeffStrong");
      kinematic_variables.push_back("MeffMt");
      kinematic_variables.push_back("MinPt");
      vector<string> signal_regions;
      // Loop over all kinematic variables and basic signal regions
      for (size_t i0 = 0; i0 < kinematic_variables.size(); i0++) {
        for (size_t i1 = 0; i1 < basic_signal_regions.size(); i1++) {
          // Is the signal region onZ?
          int onZ = (basic_signal_regions[i1].find("onZ") != string::npos) ?
1 : 0;
          // Get the cut values for this kinematic variable
          vector<int> cut_values = getCutsPerSignalRegion(kinematic_variables[i0], onZ);
          // Loop over all cut values
          for (size_t i2 = 0; i2 < cut_values.size(); i2++) {
            // Push the signal region into the vector
            signal_regions.push_back(kinematic_variables[i0] + "_" + basic_signal_regions[i1] + "_cut_" + toString(cut_values[i2]));
          }
        }
      }
      return signal_regions;
    }


    /// Function giving all cut values per kinematic variable
    vector<int> getCutsPerSignalRegion(const string& signal_region, int onZ = 0) {
      vector<int> cutValues;
      // Cut values for HTlep
      if (signal_region.compare("HTlep") == 0) {
        cutValues.push_back(0);
        cutValues.push_back(200);
        cutValues.push_back(500);
        cutValues.push_back(800);
      }
      // Cut values for MinPt
      else if (signal_region.compare("MinPt") == 0) {
        cutValues.push_back(0);
        cutValues.push_back(50);
        cutValues.push_back(100);
        cutValues.push_back(150);
      }
      // Cut values for METStrong (HTjets > 150 GeV) and METWeak (HTjets < 150 GeV)
      else if (signal_region.compare("METStrong") == 0 || signal_region.compare("METWeak") == 0) {
        cutValues.push_back(0);
        cutValues.push_back(100);
        cutValues.push_back(200);
        cutValues.push_back(300);
      }
      // Cut values for Meff
      if (signal_region.compare("Meff") == 0) {
        cutValues.push_back(0);
        cutValues.push_back(600);
        cutValues.push_back(1000);
        cutValues.push_back(1500);
      }
      // Cut values for MeffStrong (eTmiss > 100 GeV) and MeffMt (mTW > 100 GeV), onZ regions only
      if ((signal_region.compare("MeffStrong") == 0 || signal_region.compare("MeffMt") == 0) && onZ == 1) {
        cutValues.push_back(0);
        cutValues.push_back(600);
        cutValues.push_back(1200);
      }
      return cutValues;
    }


    /// Function that fills the map _eventCountsPerSR by looping over all signal regions
    /// and checking whether the event falls into each signal region
    void fillEventCountsPerSR(const string& basic_signal_region, int onZ, double HTlep, double eTmiss, double HTjets, double meff, double min_pT, double mTW, double weight) {
      // Get the cut values for HTlep, loop over them and add the event if the cut is passed
      vector<int> cut_values =
getCutsPerSignalRegion("HTlep", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (HTlep > cut_values[i]) _eventCountsPerSR[("HTlep_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } // Get cut values for MinPt, loop over them and add event if cut is passed cut_values = getCutsPerSignalRegion("MinPt", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (min_pT > cut_values[i]) _eventCountsPerSR[("MinPt_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } // Get cut values for METStrong, loop over them and add event if cut is passed cut_values = getCutsPerSignalRegion("METStrong", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (eTmiss > cut_values[i] && HTjets > 150.) _eventCountsPerSR[("METStrong_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } // Get cut values for METWeak, loop over them and add event if cut is passed cut_values = getCutsPerSignalRegion("METWeak", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (eTmiss > cut_values[i] && HTjets <= 150.) _eventCountsPerSR[("METWeak_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } // Get cut values for Meff, loop over them and add event if cut is passed cut_values = getCutsPerSignalRegion("Meff", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (meff > cut_values[i]) _eventCountsPerSR[("Meff_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } // Get cut values for MeffStrong, loop over them and add event if cut is passed cut_values = getCutsPerSignalRegion("MeffStrong", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (meff > cut_values[i] && eTmiss > 100.) 
_eventCountsPerSR[("MeffStrong_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } // Get cut values for MeffMt, loop over them and add event if cut is passed cut_values = getCutsPerSignalRegion("MeffMt", onZ); for (size_t i = 0; i < cut_values.size(); i++) { if (meff > cut_values[i] && mTW > 100. && onZ == 1) _eventCountsPerSR[("MeffMt_" + basic_signal_region + "_cut_" + toString(cut_values[i]))] += weight; } } /// Function returning 4-momentum of daughter-particle if it is a tau neutrino FourMomentum get_tau_neutrino_momentum(const Particle& p) { assert(p.abspid() == PID::TAU); const GenVertex* dv = p.genParticle()->end_vertex(); assert(dv != NULL); // Loop over all daughter particles for (GenVertex::particles_out_const_iterator pp = dv->particles_out_const_begin(); pp != dv->particles_out_const_end(); ++pp) { if (abs((*pp)->pdg_id()) == PID::NU_TAU) return FourMomentum((*pp)->momentum()); } return FourMomentum(); } /// Function calculating the prong number of taus void get_prong_number(const GenParticle* p, unsigned int& nprong, bool& lep_decaying_tau) { assert(p != NULL); const GenVertex* dv = p->end_vertex(); assert(dv != NULL); for (GenVertex::particles_out_const_iterator pp = dv->particles_out_const_begin(); pp != dv->particles_out_const_end(); ++pp) { // If they have status 1 and are charged they will produce a track and the prong number is +1 if ((*pp)->status() == 1 ) { const int id = (*pp)->pdg_id(); if (Rivet::PID::charge(id) != 0 ) ++nprong; // Check if tau decays leptonically if (( abs(id) == PID::ELECTRON || abs(id) == PID::MUON || abs(id) == PID::TAU ) && abs(p->pdg_id()) == PID::TAU) lep_decaying_tau = true; } // If the status of the daughter particle is 2 it is unstable and the further decays are checked else if ((*pp)->status() == 2 ) { get_prong_number((*pp),nprong,lep_decaying_tau); } } } /// Function giving fiducial lepton efficiency double apply_reco_eff(int flavor, const Particle& p) { double pt = p.pT()/GeV; double 
eta = p.eta(); double eff = 0.; if (flavor == 11) { // weight prompt electron -- now including data/MC ID SF in eff. double avgrate = 0.685; const static double wz_ele[] = {0.0256,0.522,0.607,0.654,0.708,0.737,0.761,0.784,0.815,0.835,0.851,0.841,0.898}; // double ewz_ele[] = {0.000257,0.00492,0.00524,0.00519,0.00396,0.00449,0.00538,0.00513,0.00773,0.00753,0.0209,0.0964,0.259}; int ibin = 0; if(pt > 10 && pt < 15) ibin = 0; if(pt > 15 && pt < 20) ibin = 1; if(pt > 20 && pt < 25) ibin = 2; if(pt > 25 && pt < 30) ibin = 3; if(pt > 30 && pt < 40) ibin = 4; if(pt > 40 && pt < 50) ibin = 5; if(pt > 50 && pt < 60) ibin = 6; if(pt > 60 && pt < 80) ibin = 7; if(pt > 80 && pt < 100) ibin = 8; if(pt > 100 && pt < 200) ibin = 9; if(pt > 200 && pt < 400) ibin = 10; if(pt > 400 && pt < 600) ibin = 11; if(pt > 600) ibin = 12; double eff_pt = 0.; eff_pt = wz_ele[ibin]; eta = fabs(eta); const static double wz_ele_eta[] = {0.65,0.714,0.722,0.689,0.635,0.615}; // double ewz_ele_eta[] = {0.00642,0.00355,0.00335,0.004,0.00368,0.00422}; ibin = 0; if(eta > 0 && eta < 0.1) ibin = 0; if(eta > 0.1 && eta < 0.5) ibin = 1; if(eta > 0.5 && eta < 1.0) ibin = 2; if(eta > 1.0 && eta < 1.5) ibin = 3; if(eta > 1.5 && eta < 2.0) ibin = 4; if(eta > 2.0 && eta < 2.5) ibin = 5; double eff_eta = 0.; eff_eta = wz_ele_eta[ibin]; eff = (eff_pt * eff_eta) / avgrate; } if (flavor == 12) { // weight electron from tau double avgrate = 0.476; const static double wz_ele[] = {0.00855,0.409,0.442,0.55,0.632,0.616,0.615,0.642,0.72,0.617}; // double ewz_ele[] = {0.000573,0.0291,0.0366,0.0352,0.0363,0.0474,0.0628,0.0709,0.125,0.109}; int ibin = 0; if(pt > 10 && pt < 15) ibin = 0; if(pt > 15 && pt < 20) ibin = 1; if(pt > 20 && pt < 25) ibin = 2; if(pt > 25 && pt < 30) ibin = 3; if(pt > 30 && pt < 40) ibin = 4; if(pt > 40 && pt < 50) ibin = 5; if(pt > 50 && pt < 60) ibin = 6; if(pt > 60 && pt < 80) ibin = 7; if(pt > 80 && pt < 100) ibin = 8; if(pt > 100) ibin = 9; double eff_pt = 0.; eff_pt = wz_ele[ibin]; eta = 
fabs(eta); const static double wz_ele_eta[] = {0.546,0.5,0.513,0.421,0.47,0.433}; //double ewz_ele_eta[] = {0.0566,0.0257,0.0263,0.0263,0.0303,0.0321}; ibin = 0; if(eta > 0 && eta < 0.1) ibin = 0; if(eta > 0.1 && eta < 0.5) ibin = 1; if(eta > 0.5 && eta < 1.0) ibin = 2; if(eta > 1.0 && eta < 1.5) ibin = 3; if(eta > 1.5 && eta < 2.0) ibin = 4; if(eta > 2.0 && eta < 2.5) ibin = 5; double eff_eta = 0.; eff_eta = wz_ele_eta[ibin]; eff = (eff_pt * eff_eta) / avgrate; } if (flavor == 13) { // weight prompt muon int ibin = 0; if(pt > 10 && pt < 15) ibin = 0; if(pt > 15 && pt < 20) ibin = 1; if(pt > 20 && pt < 25) ibin = 2; if(pt > 25 && pt < 30) ibin = 3; if(pt > 30 && pt < 40) ibin = 4; if(pt > 40 && pt < 50) ibin = 5; if(pt > 50 && pt < 60) ibin = 6; if(pt > 60 && pt < 80) ibin = 7; if(pt > 80 && pt < 100) ibin = 8; if(pt > 100 && pt < 200) ibin = 9; if(pt > 200 && pt < 400) ibin = 10; if(pt > 400) ibin = 11; if(fabs(eta) < 0.1) { const static double wz_mu[] = {0.00705,0.402,0.478,0.49,0.492,0.499,0.527,0.512,0.53,0.528,0.465,0.465}; //double ewz_mu[] = {0.000298,0.0154,0.017,0.0158,0.0114,0.0123,0.0155,0.0133,0.0196,0.0182,0.0414,0.0414}; double eff_pt = 0.; eff_pt = wz_mu[ibin]; eff = eff_pt; } if(fabs(eta) > 0.1) { const static double wz_mu[] = {0.0224,0.839,0.887,0.91,0.919,0.923,0.925,0.925,0.922,0.918,0.884,0.834}; //double ewz_mu[] = {0.000213,0.00753,0.0074,0.007,0.00496,0.00534,0.00632,0.00583,0.00849,0.00804,0.0224,0.0963}; double eff_pt = 0.; eff_pt = wz_mu[ibin]; eff = eff_pt; } } if (flavor == 14) { // weight muon from tau int ibin = 0; if(pt > 10 && pt < 15) ibin = 0; if(pt > 15 && pt < 20) ibin = 1; if(pt > 20 && pt < 25) ibin = 2; if(pt > 25 && pt < 30) ibin = 3; if(pt > 30 && pt < 40) ibin = 4; if(pt > 40 && pt < 50) ibin = 5; if(pt > 50 && pt < 60) ibin = 6; if(pt > 60 && pt < 80) ibin = 7; if(pt > 80 && pt < 100) ibin = 8; if(pt > 100) ibin = 9; if(fabs(eta) < 0.1) { const static double wz_mu[] = 
{0.0,0.664,0.124,0.133,0.527,0.283,0.495,0.25,0.5,0.331};
        //double ewz_mu[] = {0.0,0.192,0.0437,0.0343,0.128,0.107,0.202,0.125,0.25,0.191};
        double eff_pt = 0.;
        eff_pt = wz_mu[ibin];
        eff = eff_pt;
      }
      if (fabs(eta) > 0.1) {
        const static double wz_mu[] = {0.0,0.617,0.655,0.676,0.705,0.738,0.712,0.783,0.646,0.745};
        //double ewz_mu[] = {0.0,0.043,0.0564,0.0448,0.0405,0.0576,0.065,0.0825,0.102,0.132};
        double eff_pt = 0.;
        eff_pt = wz_mu[ibin];
        eff = eff_pt;
      }
    }
    if (flavor == 15) { // weight hadronic tau 1p
      double avgrate = 0.16;
      const static double wz_tau1p[] = {0.0,0.0311,0.148,0.229,0.217,0.292,0.245,0.307,0.227,0.277};
      //double ewz_tau1p[] = {0.0,0.00211,0.0117,0.0179,0.0134,0.0248,0.0264,0.0322,0.0331,0.0427};
      int ibin = 0;
      if (pt > 10 && pt < 15) ibin = 0;
      if (pt > 15 && pt < 20) ibin = 1;
      if (pt > 20 && pt < 25) ibin = 2;
      if (pt > 25 && pt < 30) ibin = 3;
      if (pt > 30 && pt < 40) ibin = 4;
      if (pt > 40 && pt < 50) ibin = 5;
      if (pt > 50 && pt < 60) ibin = 6;
      if (pt > 60 && pt < 80) ibin = 7;
      if (pt > 80 && pt < 100) ibin = 8;
      if (pt > 100) ibin = 9;
      double eff_pt = 0.;
      eff_pt = wz_tau1p[ibin];
      const static double wz_tau1p_eta[] = {0.166,0.15,0.188,0.175,0.142,0.109};
      //double ewz_tau1p_eta[] = {0.0166,0.00853,0.0097,0.00985,0.00949,0.00842};
      ibin = 0;
      if (eta > 0.0 && eta < 0.1) ibin = 0;
      if (eta > 0.1 && eta < 0.5) ibin = 1;
      if (eta > 0.5 && eta < 1.0) ibin = 2;
      if (eta > 1.0 && eta < 1.5) ibin = 3;
      if (eta > 1.5 && eta < 2.0) ibin = 4;
      if (eta > 2.0 && eta < 2.5) ibin = 5;
      double eff_eta = 0.;
      eff_eta = wz_tau1p_eta[ibin];
      eff = (eff_pt * eff_eta) / avgrate;
    }
    return eff;
  }


    /// Function giving the observed and expected upper limits (on the visible cross-section)
    double getUpperLimit(const string& signal_region, bool observed) {
      map<string,double> upperLimitsObserved;
      map<string,double> upperLimitsExpected;
      upperLimitsObserved["HTlep_3l_offZ_OSSF_cut_0"]   = 2.435;
      upperLimitsObserved["HTlep_3l_offZ_OSSF_cut_200"] = 0.704;
      upperLimitsObserved["HTlep_3l_offZ_OSSF_cut_500"] = 0.182;
upperLimitsObserved["HTlep_3l_offZ_OSSF_cut_800"] = 0.147; upperLimitsObserved["HTlep_2ltau_offZ_OSSF_cut_0"] = 13.901; upperLimitsObserved["HTlep_2ltau_offZ_OSSF_cut_200"] = 1.677; upperLimitsObserved["HTlep_2ltau_offZ_OSSF_cut_500"] = 0.141; upperLimitsObserved["HTlep_2ltau_offZ_OSSF_cut_800"] = 0.155; upperLimitsObserved["HTlep_3l_offZ_noOSSF_cut_0"] = 1.054; upperLimitsObserved["HTlep_3l_offZ_noOSSF_cut_200"] = 0.341; upperLimitsObserved["HTlep_3l_offZ_noOSSF_cut_500"] = 0.221; upperLimitsObserved["HTlep_3l_offZ_noOSSF_cut_800"] = 0.140; upperLimitsObserved["HTlep_2ltau_offZ_noOSSF_cut_0"] = 4.276; upperLimitsObserved["HTlep_2ltau_offZ_noOSSF_cut_200"] = 0.413; upperLimitsObserved["HTlep_2ltau_offZ_noOSSF_cut_500"] = 0.138; upperLimitsObserved["HTlep_2ltau_offZ_noOSSF_cut_800"] = 0.150; upperLimitsObserved["HTlep_3l_onZ_cut_0"] = 29.804; upperLimitsObserved["HTlep_3l_onZ_cut_200"] = 3.579; upperLimitsObserved["HTlep_3l_onZ_cut_500"] = 0.466; upperLimitsObserved["HTlep_3l_onZ_cut_800"] = 0.298; upperLimitsObserved["HTlep_2ltau_onZ_cut_0"] = 205.091; upperLimitsObserved["HTlep_2ltau_onZ_cut_200"] = 3.141; upperLimitsObserved["HTlep_2ltau_onZ_cut_500"] = 0.290; upperLimitsObserved["HTlep_2ltau_onZ_cut_800"] = 0.157; upperLimitsObserved["METStrong_3l_offZ_OSSF_cut_0"] = 1.111; upperLimitsObserved["METStrong_3l_offZ_OSSF_cut_100"] = 0.354; upperLimitsObserved["METStrong_3l_offZ_OSSF_cut_200"] = 0.236; upperLimitsObserved["METStrong_3l_offZ_OSSF_cut_300"] = 0.150; upperLimitsObserved["METStrong_2ltau_offZ_OSSF_cut_0"] = 1.881; upperLimitsObserved["METStrong_2ltau_offZ_OSSF_cut_100"] = 0.406; upperLimitsObserved["METStrong_2ltau_offZ_OSSF_cut_200"] = 0.194; upperLimitsObserved["METStrong_2ltau_offZ_OSSF_cut_300"] = 0.134; upperLimitsObserved["METStrong_3l_offZ_noOSSF_cut_0"] = 0.770; upperLimitsObserved["METStrong_3l_offZ_noOSSF_cut_100"] = 0.295; upperLimitsObserved["METStrong_3l_offZ_noOSSF_cut_200"] = 0.149; upperLimitsObserved["METStrong_3l_offZ_noOSSF_cut_300"] = 
0.140; upperLimitsObserved["METStrong_2ltau_offZ_noOSSF_cut_0"] = 2.003; upperLimitsObserved["METStrong_2ltau_offZ_noOSSF_cut_100"] = 0.806; upperLimitsObserved["METStrong_2ltau_offZ_noOSSF_cut_200"] = 0.227; upperLimitsObserved["METStrong_2ltau_offZ_noOSSF_cut_300"] = 0.138; upperLimitsObserved["METStrong_3l_onZ_cut_0"] = 6.383; upperLimitsObserved["METStrong_3l_onZ_cut_100"] = 0.959; upperLimitsObserved["METStrong_3l_onZ_cut_200"] = 0.549; upperLimitsObserved["METStrong_3l_onZ_cut_300"] = 0.182; upperLimitsObserved["METStrong_2ltau_onZ_cut_0"] = 10.658; upperLimitsObserved["METStrong_2ltau_onZ_cut_100"] = 0.637; upperLimitsObserved["METStrong_2ltau_onZ_cut_200"] = 0.291; upperLimitsObserved["METStrong_2ltau_onZ_cut_300"] = 0.227; upperLimitsObserved["METWeak_3l_offZ_OSSF_cut_0"] = 1.802; upperLimitsObserved["METWeak_3l_offZ_OSSF_cut_100"] = 0.344; upperLimitsObserved["METWeak_3l_offZ_OSSF_cut_200"] = 0.189; upperLimitsObserved["METWeak_3l_offZ_OSSF_cut_300"] = 0.148; upperLimitsObserved["METWeak_2ltau_offZ_OSSF_cut_0"] = 12.321; upperLimitsObserved["METWeak_2ltau_offZ_OSSF_cut_100"] = 0.430; upperLimitsObserved["METWeak_2ltau_offZ_OSSF_cut_200"] = 0.137; upperLimitsObserved["METWeak_2ltau_offZ_OSSF_cut_300"] = 0.134; upperLimitsObserved["METWeak_3l_offZ_noOSSF_cut_0"] = 0.562; upperLimitsObserved["METWeak_3l_offZ_noOSSF_cut_100"] = 0.153; upperLimitsObserved["METWeak_3l_offZ_noOSSF_cut_200"] = 0.154; upperLimitsObserved["METWeak_3l_offZ_noOSSF_cut_300"] = 0.141; upperLimitsObserved["METWeak_2ltau_offZ_noOSSF_cut_0"] = 2.475; upperLimitsObserved["METWeak_2ltau_offZ_noOSSF_cut_100"] = 0.244; upperLimitsObserved["METWeak_2ltau_offZ_noOSSF_cut_200"] = 0.141; upperLimitsObserved["METWeak_2ltau_offZ_noOSSF_cut_300"] = 0.142; upperLimitsObserved["METWeak_3l_onZ_cut_0"] = 24.769; upperLimitsObserved["METWeak_3l_onZ_cut_100"] = 0.690; upperLimitsObserved["METWeak_3l_onZ_cut_200"] = 0.198; upperLimitsObserved["METWeak_3l_onZ_cut_300"] = 0.138; 
upperLimitsObserved["METWeak_2ltau_onZ_cut_0"] = 194.360; upperLimitsObserved["METWeak_2ltau_onZ_cut_100"] = 0.287; upperLimitsObserved["METWeak_2ltau_onZ_cut_200"] = 0.144; upperLimitsObserved["METWeak_2ltau_onZ_cut_300"] = 0.130; upperLimitsObserved["Meff_3l_offZ_OSSF_cut_0"] = 2.435; upperLimitsObserved["Meff_3l_offZ_OSSF_cut_600"] = 0.487; upperLimitsObserved["Meff_3l_offZ_OSSF_cut_1000"] = 0.156; upperLimitsObserved["Meff_3l_offZ_OSSF_cut_1500"] = 0.140; upperLimitsObserved["Meff_2ltau_offZ_OSSF_cut_0"] = 13.901; upperLimitsObserved["Meff_2ltau_offZ_OSSF_cut_600"] = 0.687; upperLimitsObserved["Meff_2ltau_offZ_OSSF_cut_1000"] = 0.224; upperLimitsObserved["Meff_2ltau_offZ_OSSF_cut_1500"] = 0.155; upperLimitsObserved["Meff_3l_offZ_noOSSF_cut_0"] = 1.054; upperLimitsObserved["Meff_3l_offZ_noOSSF_cut_600"] = 0.249; upperLimitsObserved["Meff_3l_offZ_noOSSF_cut_1000"] = 0.194; upperLimitsObserved["Meff_3l_offZ_noOSSF_cut_1500"] = 0.145; upperLimitsObserved["Meff_2ltau_offZ_noOSSF_cut_0"] = 4.276; upperLimitsObserved["Meff_2ltau_offZ_noOSSF_cut_600"] = 0.772; upperLimitsObserved["Meff_2ltau_offZ_noOSSF_cut_1000"] = 0.218; upperLimitsObserved["Meff_2ltau_offZ_noOSSF_cut_1500"] = 0.204; upperLimitsObserved["Meff_3l_onZ_cut_0"] = 29.804; upperLimitsObserved["Meff_3l_onZ_cut_600"] = 2.933; upperLimitsObserved["Meff_3l_onZ_cut_1000"] = 0.912; upperLimitsObserved["Meff_3l_onZ_cut_1500"] = 0.225; upperLimitsObserved["Meff_2ltau_onZ_cut_0"] = 205.091; upperLimitsObserved["Meff_2ltau_onZ_cut_600"] = 1.486; upperLimitsObserved["Meff_2ltau_onZ_cut_1000"] = 0.641; upperLimitsObserved["Meff_2ltau_onZ_cut_1500"] = 0.204; upperLimitsObserved["MeffStrong_3l_offZ_OSSF_cut_0"] = 0.479; upperLimitsObserved["MeffStrong_3l_offZ_OSSF_cut_600"] = 0.353; upperLimitsObserved["MeffStrong_3l_offZ_OSSF_cut_1200"] = 0.187; upperLimitsObserved["MeffStrong_2ltau_offZ_OSSF_cut_0"] = 0.617; upperLimitsObserved["MeffStrong_2ltau_offZ_OSSF_cut_600"] = 0.320; 
upperLimitsObserved["MeffStrong_2ltau_offZ_OSSF_cut_1200"] = 0.281; upperLimitsObserved["MeffStrong_3l_offZ_noOSSF_cut_0"] = 0.408; upperLimitsObserved["MeffStrong_3l_offZ_noOSSF_cut_600"] = 0.240; upperLimitsObserved["MeffStrong_3l_offZ_noOSSF_cut_1200"] = 0.150; upperLimitsObserved["MeffStrong_2ltau_offZ_noOSSF_cut_0"] = 0.774; upperLimitsObserved["MeffStrong_2ltau_offZ_noOSSF_cut_600"] = 0.417; upperLimitsObserved["MeffStrong_2ltau_offZ_noOSSF_cut_1200"] = 0.266; upperLimitsObserved["MeffStrong_3l_onZ_cut_0"] = 1.208; upperLimitsObserved["MeffStrong_3l_onZ_cut_600"] = 0.837; upperLimitsObserved["MeffStrong_3l_onZ_cut_1200"] = 0.269; upperLimitsObserved["MeffStrong_2ltau_onZ_cut_0"] = 0.605; upperLimitsObserved["MeffStrong_2ltau_onZ_cut_600"] = 0.420; upperLimitsObserved["MeffStrong_2ltau_onZ_cut_1200"] = 0.141; upperLimitsObserved["MeffMt_3l_onZ_cut_0"] = 1.832; upperLimitsObserved["MeffMt_3l_onZ_cut_600"] = 0.862; upperLimitsObserved["MeffMt_3l_onZ_cut_1200"] = 0.222; upperLimitsObserved["MeffMt_2ltau_onZ_cut_0"] = 1.309; upperLimitsObserved["MeffMt_2ltau_onZ_cut_600"] = 0.481; upperLimitsObserved["MeffMt_2ltau_onZ_cut_1200"] = 0.146; upperLimitsObserved["MinPt_3l_offZ_OSSF_cut_0"] = 2.435; upperLimitsObserved["MinPt_3l_offZ_OSSF_cut_50"] = 0.500; upperLimitsObserved["MinPt_3l_offZ_OSSF_cut_100"] = 0.203; upperLimitsObserved["MinPt_3l_offZ_OSSF_cut_150"] = 0.128; upperLimitsObserved["MinPt_2ltau_offZ_OSSF_cut_0"] = 13.901; upperLimitsObserved["MinPt_2ltau_offZ_OSSF_cut_50"] = 0.859; upperLimitsObserved["MinPt_2ltau_offZ_OSSF_cut_100"] = 0.158; upperLimitsObserved["MinPt_2ltau_offZ_OSSF_cut_150"] = 0.155; upperLimitsObserved["MinPt_3l_offZ_noOSSF_cut_0"] = 1.054; upperLimitsObserved["MinPt_3l_offZ_noOSSF_cut_50"] = 0.295; upperLimitsObserved["MinPt_3l_offZ_noOSSF_cut_100"] = 0.148; upperLimitsObserved["MinPt_3l_offZ_noOSSF_cut_150"] = 0.137; upperLimitsObserved["MinPt_2ltau_offZ_noOSSF_cut_0"] = 4.276; upperLimitsObserved["MinPt_2ltau_offZ_noOSSF_cut_50"] = 
0.314; upperLimitsObserved["MinPt_2ltau_offZ_noOSSF_cut_100"] = 0.134; upperLimitsObserved["MinPt_2ltau_offZ_noOSSF_cut_150"] = 0.140; upperLimitsObserved["MinPt_3l_onZ_cut_0"] = 29.804; upperLimitsObserved["MinPt_3l_onZ_cut_50"] = 1.767; upperLimitsObserved["MinPt_3l_onZ_cut_100"] = 0.690; upperLimitsObserved["MinPt_3l_onZ_cut_150"] = 0.301; upperLimitsObserved["MinPt_2ltau_onZ_cut_0"] = 205.091; upperLimitsObserved["MinPt_2ltau_onZ_cut_50"] = 1.050; upperLimitsObserved["MinPt_2ltau_onZ_cut_100"] = 0.155; upperLimitsObserved["MinPt_2ltau_onZ_cut_150"] = 0.146; upperLimitsObserved["nbtag_3l_offZ_OSSF_cut_0"] = 2.435; upperLimitsObserved["nbtag_3l_offZ_OSSF_cut_1"] = 0.865; upperLimitsObserved["nbtag_3l_offZ_OSSF_cut_2"] = 0.474; upperLimitsObserved["nbtag_2ltau_offZ_OSSF_cut_0"] = 13.901; upperLimitsObserved["nbtag_2ltau_offZ_OSSF_cut_1"] = 1.566; upperLimitsObserved["nbtag_2ltau_offZ_OSSF_cut_2"] = 0.426; upperLimitsObserved["nbtag_3l_offZ_noOSSF_cut_0"] = 1.054; upperLimitsObserved["nbtag_3l_offZ_noOSSF_cut_1"] = 0.643; upperLimitsObserved["nbtag_3l_offZ_noOSSF_cut_2"] = 0.321; upperLimitsObserved["nbtag_2ltau_offZ_noOSSF_cut_0"] = 4.276; upperLimitsObserved["nbtag_2ltau_offZ_noOSSF_cut_1"] = 2.435; upperLimitsObserved["nbtag_2ltau_offZ_noOSSF_cut_2"] = 1.073; upperLimitsObserved["nbtag_3l_onZ_cut_0"] = 29.804; upperLimitsObserved["nbtag_3l_onZ_cut_1"] = 3.908; upperLimitsObserved["nbtag_3l_onZ_cut_2"] = 0.704; upperLimitsObserved["nbtag_2ltau_onZ_cut_0"] = 205.091; upperLimitsObserved["nbtag_2ltau_onZ_cut_1"] = 9.377; upperLimitsObserved["nbtag_2ltau_onZ_cut_2"] = 0.657; upperLimitsExpected["HTlep_3l_offZ_OSSF_cut_0"] = 2.893; upperLimitsExpected["HTlep_3l_offZ_OSSF_cut_200"] = 1.175; upperLimitsExpected["HTlep_3l_offZ_OSSF_cut_500"] = 0.265; upperLimitsExpected["HTlep_3l_offZ_OSSF_cut_800"] = 0.155; upperLimitsExpected["HTlep_2ltau_offZ_OSSF_cut_0"] = 14.293; upperLimitsExpected["HTlep_2ltau_offZ_OSSF_cut_200"] = 1.803; 
upperLimitsExpected["HTlep_2ltau_offZ_OSSF_cut_500"] = 0.159; upperLimitsExpected["HTlep_2ltau_offZ_OSSF_cut_800"] = 0.155; upperLimitsExpected["HTlep_3l_offZ_noOSSF_cut_0"] = 0.836; upperLimitsExpected["HTlep_3l_offZ_noOSSF_cut_200"] = 0.340; upperLimitsExpected["HTlep_3l_offZ_noOSSF_cut_500"] = 0.218; upperLimitsExpected["HTlep_3l_offZ_noOSSF_cut_800"] = 0.140; upperLimitsExpected["HTlep_2ltau_offZ_noOSSF_cut_0"] = 4.132; upperLimitsExpected["HTlep_2ltau_offZ_noOSSF_cut_200"] = 0.599; upperLimitsExpected["HTlep_2ltau_offZ_noOSSF_cut_500"] = 0.146; upperLimitsExpected["HTlep_2ltau_offZ_noOSSF_cut_800"] = 0.148; upperLimitsExpected["HTlep_3l_onZ_cut_0"] = 32.181; upperLimitsExpected["HTlep_3l_onZ_cut_200"] = 4.879; upperLimitsExpected["HTlep_3l_onZ_cut_500"] = 0.473; upperLimitsExpected["HTlep_3l_onZ_cut_800"] = 0.266; upperLimitsExpected["HTlep_2ltau_onZ_cut_0"] = 217.801; upperLimitsExpected["HTlep_2ltau_onZ_cut_200"] = 3.676; upperLimitsExpected["HTlep_2ltau_onZ_cut_500"] = 0.235; upperLimitsExpected["HTlep_2ltau_onZ_cut_800"] = 0.150; upperLimitsExpected["METStrong_3l_offZ_OSSF_cut_0"] = 1.196; upperLimitsExpected["METStrong_3l_offZ_OSSF_cut_100"] = 0.423; upperLimitsExpected["METStrong_3l_offZ_OSSF_cut_200"] = 0.208; upperLimitsExpected["METStrong_3l_offZ_OSSF_cut_300"] = 0.158; upperLimitsExpected["METStrong_2ltau_offZ_OSSF_cut_0"] = 2.158; upperLimitsExpected["METStrong_2ltau_offZ_OSSF_cut_100"] = 0.461; upperLimitsExpected["METStrong_2ltau_offZ_OSSF_cut_200"] = 0.186; upperLimitsExpected["METStrong_2ltau_offZ_OSSF_cut_300"] = 0.138; upperLimitsExpected["METStrong_3l_offZ_noOSSF_cut_0"] = 0.495; upperLimitsExpected["METStrong_3l_offZ_noOSSF_cut_100"] = 0.284; upperLimitsExpected["METStrong_3l_offZ_noOSSF_cut_200"] = 0.150; upperLimitsExpected["METStrong_3l_offZ_noOSSF_cut_300"] = 0.146; upperLimitsExpected["METStrong_2ltau_offZ_noOSSF_cut_0"] = 1.967; upperLimitsExpected["METStrong_2ltau_offZ_noOSSF_cut_100"] = 0.732; 
upperLimitsExpected["METStrong_2ltau_offZ_noOSSF_cut_200"] = 0.225; upperLimitsExpected["METStrong_2ltau_offZ_noOSSF_cut_300"] = 0.147; upperLimitsExpected["METStrong_3l_onZ_cut_0"] = 7.157; upperLimitsExpected["METStrong_3l_onZ_cut_100"] = 1.342; upperLimitsExpected["METStrong_3l_onZ_cut_200"] = 0.508; upperLimitsExpected["METStrong_3l_onZ_cut_300"] = 0.228; upperLimitsExpected["METStrong_2ltau_onZ_cut_0"] = 12.441; upperLimitsExpected["METStrong_2ltau_onZ_cut_100"] = 0.534; upperLimitsExpected["METStrong_2ltau_onZ_cut_200"] = 0.243; upperLimitsExpected["METStrong_2ltau_onZ_cut_300"] = 0.218; upperLimitsExpected["METWeak_3l_offZ_OSSF_cut_0"] = 2.199; upperLimitsExpected["METWeak_3l_offZ_OSSF_cut_100"] = 0.391; upperLimitsExpected["METWeak_3l_offZ_OSSF_cut_200"] = 0.177; upperLimitsExpected["METWeak_3l_offZ_OSSF_cut_300"] = 0.144; upperLimitsExpected["METWeak_2ltau_offZ_OSSF_cut_0"] = 12.431; upperLimitsExpected["METWeak_2ltau_offZ_OSSF_cut_100"] = 0.358; upperLimitsExpected["METWeak_2ltau_offZ_OSSF_cut_200"] = 0.150; upperLimitsExpected["METWeak_2ltau_offZ_OSSF_cut_300"] = 0.135; upperLimitsExpected["METWeak_3l_offZ_noOSSF_cut_0"] = 0.577; upperLimitsExpected["METWeak_3l_offZ_noOSSF_cut_100"] = 0.214; upperLimitsExpected["METWeak_3l_offZ_noOSSF_cut_200"] = 0.155; upperLimitsExpected["METWeak_3l_offZ_noOSSF_cut_300"] = 0.140; upperLimitsExpected["METWeak_2ltau_offZ_noOSSF_cut_0"] = 2.474; upperLimitsExpected["METWeak_2ltau_offZ_noOSSF_cut_100"] = 0.382; upperLimitsExpected["METWeak_2ltau_offZ_noOSSF_cut_200"] = 0.144; upperLimitsExpected["METWeak_2ltau_offZ_noOSSF_cut_300"] = 0.146; upperLimitsExpected["METWeak_3l_onZ_cut_0"] = 26.305; upperLimitsExpected["METWeak_3l_onZ_cut_100"] = 1.227; upperLimitsExpected["METWeak_3l_onZ_cut_200"] = 0.311; upperLimitsExpected["METWeak_3l_onZ_cut_300"] = 0.188; upperLimitsExpected["METWeak_2ltau_onZ_cut_0"] = 205.198; upperLimitsExpected["METWeak_2ltau_onZ_cut_100"] = 0.399; upperLimitsExpected["METWeak_2ltau_onZ_cut_200"] = 
0.166; upperLimitsExpected["METWeak_2ltau_onZ_cut_300"] = 0.140; upperLimitsExpected["Meff_3l_offZ_OSSF_cut_0"] = 2.893; upperLimitsExpected["Meff_3l_offZ_OSSF_cut_600"] = 0.649; upperLimitsExpected["Meff_3l_offZ_OSSF_cut_1000"] = 0.252; upperLimitsExpected["Meff_3l_offZ_OSSF_cut_1500"] = 0.150; upperLimitsExpected["Meff_2ltau_offZ_OSSF_cut_0"] = 14.293; upperLimitsExpected["Meff_2ltau_offZ_OSSF_cut_600"] = 0.657; upperLimitsExpected["Meff_2ltau_offZ_OSSF_cut_1000"] = 0.226; upperLimitsExpected["Meff_2ltau_offZ_OSSF_cut_1500"] = 0.154; upperLimitsExpected["Meff_3l_offZ_noOSSF_cut_0"] = 0.836; upperLimitsExpected["Meff_3l_offZ_noOSSF_cut_600"] = 0.265; upperLimitsExpected["Meff_3l_offZ_noOSSF_cut_1000"] = 0.176; upperLimitsExpected["Meff_3l_offZ_noOSSF_cut_1500"] = 0.146; upperLimitsExpected["Meff_2ltau_offZ_noOSSF_cut_0"] = 4.132; upperLimitsExpected["Meff_2ltau_offZ_noOSSF_cut_600"] = 0.678; upperLimitsExpected["Meff_2ltau_offZ_noOSSF_cut_1000"] = 0.243; upperLimitsExpected["Meff_2ltau_offZ_noOSSF_cut_1500"] = 0.184; upperLimitsExpected["Meff_3l_onZ_cut_0"] = 32.181; upperLimitsExpected["Meff_3l_onZ_cut_600"] = 3.219; upperLimitsExpected["Meff_3l_onZ_cut_1000"] = 0.905; upperLimitsExpected["Meff_3l_onZ_cut_1500"] = 0.261; upperLimitsExpected["Meff_2ltau_onZ_cut_0"] = 217.801; upperLimitsExpected["Meff_2ltau_onZ_cut_600"] = 1.680; upperLimitsExpected["Meff_2ltau_onZ_cut_1000"] = 0.375; upperLimitsExpected["Meff_2ltau_onZ_cut_1500"] = 0.178; upperLimitsExpected["MeffStrong_3l_offZ_OSSF_cut_0"] = 0.571; upperLimitsExpected["MeffStrong_3l_offZ_OSSF_cut_600"] = 0.386; upperLimitsExpected["MeffStrong_3l_offZ_OSSF_cut_1200"] = 0.177; upperLimitsExpected["MeffStrong_2ltau_offZ_OSSF_cut_0"] = 0.605; upperLimitsExpected["MeffStrong_2ltau_offZ_OSSF_cut_600"] = 0.335; upperLimitsExpected["MeffStrong_2ltau_offZ_OSSF_cut_1200"] = 0.249; upperLimitsExpected["MeffStrong_3l_offZ_noOSSF_cut_0"] = 0.373; upperLimitsExpected["MeffStrong_3l_offZ_noOSSF_cut_600"] = 0.223; 
upperLimitsExpected["MeffStrong_3l_offZ_noOSSF_cut_1200"] = 0.150; upperLimitsExpected["MeffStrong_2ltau_offZ_noOSSF_cut_0"] = 0.873; upperLimitsExpected["MeffStrong_2ltau_offZ_noOSSF_cut_600"] = 0.428; upperLimitsExpected["MeffStrong_2ltau_offZ_noOSSF_cut_1200"] = 0.210; upperLimitsExpected["MeffStrong_3l_onZ_cut_0"] = 2.034; upperLimitsExpected["MeffStrong_3l_onZ_cut_600"] = 1.093; upperLimitsExpected["MeffStrong_3l_onZ_cut_1200"] = 0.293; upperLimitsExpected["MeffStrong_2ltau_onZ_cut_0"] = 0.690; upperLimitsExpected["MeffStrong_2ltau_onZ_cut_600"] = 0.392; upperLimitsExpected["MeffStrong_2ltau_onZ_cut_1200"] = 0.156; upperLimitsExpected["MeffMt_3l_onZ_cut_0"] = 2.483; upperLimitsExpected["MeffMt_3l_onZ_cut_600"] = 0.845; upperLimitsExpected["MeffMt_3l_onZ_cut_1200"] = 0.255; upperLimitsExpected["MeffMt_2ltau_onZ_cut_0"] = 1.448; upperLimitsExpected["MeffMt_2ltau_onZ_cut_600"] = 0.391; upperLimitsExpected["MeffMt_2ltau_onZ_cut_1200"] = 0.146; upperLimitsExpected["MinPt_3l_offZ_OSSF_cut_0"] = 2.893; upperLimitsExpected["MinPt_3l_offZ_OSSF_cut_50"] = 0.703; upperLimitsExpected["MinPt_3l_offZ_OSSF_cut_100"] = 0.207; upperLimitsExpected["MinPt_3l_offZ_OSSF_cut_150"] = 0.143; upperLimitsExpected["MinPt_2ltau_offZ_OSSF_cut_0"] = 14.293; upperLimitsExpected["MinPt_2ltau_offZ_OSSF_cut_50"] = 0.705; upperLimitsExpected["MinPt_2ltau_offZ_OSSF_cut_100"] = 0.149; upperLimitsExpected["MinPt_2ltau_offZ_OSSF_cut_150"] = 0.155; upperLimitsExpected["MinPt_3l_offZ_noOSSF_cut_0"] = 0.836; upperLimitsExpected["MinPt_3l_offZ_noOSSF_cut_50"] = 0.249; upperLimitsExpected["MinPt_3l_offZ_noOSSF_cut_100"] = 0.135; upperLimitsExpected["MinPt_3l_offZ_noOSSF_cut_150"] = 0.136; upperLimitsExpected["MinPt_2ltau_offZ_noOSSF_cut_0"] = 4.132; upperLimitsExpected["MinPt_2ltau_offZ_noOSSF_cut_50"] = 0.339; upperLimitsExpected["MinPt_2ltau_offZ_noOSSF_cut_100"] = 0.149; upperLimitsExpected["MinPt_2ltau_offZ_noOSSF_cut_150"] = 0.145; upperLimitsExpected["MinPt_3l_onZ_cut_0"] = 32.181; 
upperLimitsExpected["MinPt_3l_onZ_cut_50"] = 2.260; upperLimitsExpected["MinPt_3l_onZ_cut_100"] = 0.438; upperLimitsExpected["MinPt_3l_onZ_cut_150"] = 0.305; upperLimitsExpected["MinPt_2ltau_onZ_cut_0"] = 217.801; upperLimitsExpected["MinPt_2ltau_onZ_cut_50"] = 1.335; upperLimitsExpected["MinPt_2ltau_onZ_cut_100"] = 0.162; upperLimitsExpected["MinPt_2ltau_onZ_cut_150"] = 0.149; upperLimitsExpected["nbtag_3l_offZ_OSSF_cut_0"] = 2.893; upperLimitsExpected["nbtag_3l_offZ_OSSF_cut_1"] = 0.923; upperLimitsExpected["nbtag_3l_offZ_OSSF_cut_2"] = 0.452; upperLimitsExpected["nbtag_2ltau_offZ_OSSF_cut_0"] = 14.293; upperLimitsExpected["nbtag_2ltau_offZ_OSSF_cut_1"] = 1.774; upperLimitsExpected["nbtag_2ltau_offZ_OSSF_cut_2"] = 0.549; upperLimitsExpected["nbtag_3l_offZ_noOSSF_cut_0"] = 0.836; upperLimitsExpected["nbtag_3l_offZ_noOSSF_cut_1"] = 0.594; upperLimitsExpected["nbtag_3l_offZ_noOSSF_cut_2"] = 0.298; upperLimitsExpected["nbtag_2ltau_offZ_noOSSF_cut_0"] = 4.132; upperLimitsExpected["nbtag_2ltau_offZ_noOSSF_cut_1"] = 2.358; upperLimitsExpected["nbtag_2ltau_offZ_noOSSF_cut_2"] = 0.958; upperLimitsExpected["nbtag_3l_onZ_cut_0"] = 32.181; upperLimitsExpected["nbtag_3l_onZ_cut_1"] = 3.868; upperLimitsExpected["nbtag_3l_onZ_cut_2"] = 0.887; upperLimitsExpected["nbtag_2ltau_onZ_cut_0"] = 217.801; upperLimitsExpected["nbtag_2ltau_onZ_cut_1"] = 9.397; upperLimitsExpected["nbtag_2ltau_onZ_cut_2"] = 0.787; if (observed) return upperLimitsObserved[signal_region]; else return upperLimitsExpected[signal_region]; } /// Function checking if there is an OSSF lepton pair or a combination of 3 leptons with an invariant mass close to the Z mass int isonZ (const Particles& particles) { int onZ = 0; double best_mass_2 = 999.; double best_mass_3 = 999.; // Loop over all 2 particle combinations to find invariant mass of OSSF pair closest to Z mass for (const Particle& p1 : particles) { for (const Particle& p2 : particles) { double mass_difference_2_old = fabs(91.0 - best_mass_2); double 
mass_difference_2_new = fabs(91.0 - (p1.momentum() + p2.momentum()).mass()/GeV); // If particle combination is OSSF pair calculate mass difference to Z mass if ((p1.pid()*p2.pid() == -121 || p1.pid()*p2.pid() == -169)) { // Get invariant mass closest to Z mass if (mass_difference_2_new < mass_difference_2_old) best_mass_2 = (p1.momentum() + p2.momentum()).mass()/GeV; // In case there is an OSSF pair take also 3rd lepton into account (e.g. from FSR and photon to electron conversion) for (const Particle& p3 : particles ) { double mass_difference_3_old = fabs(91.0 - best_mass_3); double mass_difference_3_new = fabs(91.0 - (p1.momentum() + p2.momentum() + p3.momentum()).mass()/GeV); if (mass_difference_3_new < mass_difference_3_old) best_mass_3 = (p1.momentum() + p2.momentum() + p3.momentum()).mass()/GeV; } } } } // Pick the minimum invariant mass of the best OSSF pair combination and the best 3 lepton combination double best_mass = min(best_mass_2, best_mass_3); // If this mass is within a 20 GeV window around the Z mass, the event is classified as onZ if ( fabs(91.0 - best_mass) < 20. ) onZ = 1; return onZ; } /// Function checking whether two leptons form an OSSF pair, returning the invariant mass (0 if no OSSF pair) double isOSSF_mass (const Particle& p1, const Particle& p2) { double inv_mass = 0.; // Is particle combination OSSF pair? 
if ((p1.pid()*p2.pid() == -121 || p1.pid()*p2.pid() == -169)) { // Get invariant mass inv_mass = (p1.momentum() + p2.momentum()).mass()/GeV; } return inv_mass; } /// Function checking if there is an OSSF lepton pair bool isOSSF (const Particles& particles) { for (size_t i1=0 ; i1 < 3 ; i1 ++) { for (size_t i2 = i1+1 ; i2 < 3 ; i2 ++) { if ((particles[i1].pid()*particles[i2].pid() == -121 || particles[i1].pid()*particles[i2].pid() == -169)) { return true; } } } return false; } //@} private: /// Histograms //@{ Histo1DPtr _h_HTlep_all, _h_HTjets_all, _h_MET_all, _h_Meff_all, _h_min_pT_all, _h_mT_all; Histo1DPtr _h_pt_1_3l, _h_pt_2_3l, _h_pt_3_3l, _h_pt_1_2ltau, _h_pt_2_2ltau, _h_pt_3_2ltau; Histo1DPtr _h_e_n, _h_mu_n, _h_tau_n; Histo1DPtr _h_excluded; //@} /// Fiducial efficiencies to model the effects of the ATLAS detector bool _use_fiducial_lepton_efficiency; /// List of signal regions and event counts per signal region vector<string> _signal_regions; map<string, double> _eventCountsPerSR; }; DECLARE_RIVET_PLUGIN(ATLAS_2014_I1327229); } diff --git a/analyses/pluginATLAS/ATLAS_2015_I1394865.cc b/analyses/pluginATLAS/ATLAS_2015_I1394865.cc --- a/analyses/pluginATLAS/ATLAS_2015_I1394865.cc +++ b/analyses/pluginATLAS/ATLAS_2015_I1394865.cc @@ -1,270 +1,270 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" #include "Rivet/Projections/WFinder.hh" #include "Rivet/Projections/LeadingParticlesFinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/DressedLeptons.hh" #include "Rivet/Projections/MergedFinalState.hh" #include "Rivet/Projections/MissingMomentum.hh" #include "Rivet/Projections/InvMassFinalState.hh" namespace Rivet { /// Inclusive 4-lepton lineshape class ATLAS_2015_I1394865 : public Analysis { public: /// Default 
constructor DEFAULT_RIVET_ANALYSIS_CTOR(ATLAS_2015_I1394865); void init() { FinalState fs(Cuts::abseta < 5.0); IdentifiedFinalState photon(fs, PID::PHOTON); IdentifiedFinalState bare_EL(fs, {PID::ELECTRON, -PID::ELECTRON}); IdentifiedFinalState bare_MU(fs, {PID::MUON, -PID::MUON}); // Selection 1: ZZ-> llll selection Cut etaranges_el = Cuts::abseta < 2.5 && Cuts::pT > 7*GeV; Cut etaranges_mu = Cuts::abseta < 2.7 && Cuts::pT > 6*GeV; DressedLeptons electron_sel4l(photon, bare_EL, 0.1, etaranges_el); declare(electron_sel4l, "ELECTRON_sel4l"); DressedLeptons muon_sel4l(photon, bare_MU, 0.1, etaranges_mu); declare(muon_sel4l, "MUON_sel4l"); // Both ZZ on-shell histos _h_ZZ_mZZ = bookHisto1D(1, 1, 1); _h_ZZ_pTZZ = bookHisto1D(2, 1, 1); } /// Do the analysis void analyze(const Event& e) { const double weight = e.weight(); //////////////////////////////////////////////////////////////////// // Preselection of leptons for ZZ-> llll final state //////////////////////////////////////////////////////////////////// Particles leptons_sel4l; const vector<DressedLepton>& mu_sel4l = apply<DressedLeptons>(e, "MUON_sel4l").dressedLeptons(); const vector<DressedLepton>& el_sel4l = apply<DressedLeptons>(e, "ELECTRON_sel4l").dressedLeptons(); const vector<DressedLepton> leptonsFS_sel4l = mu_sel4l + el_sel4l; // leptonsFS_sel4l.insert( leptonsFS_sel4l.end(), mu_sel4l.begin(), mu_sel4l.end() ); // leptonsFS_sel4l.insert( leptonsFS_sel4l.end(), el_sel4l.begin(), el_sel4l.end() ); // mu: pT > 6 GeV, eta < 2.7; ele: pT > 7 GeV, eta < 2.5 for (const DressedLepton& l : leptonsFS_sel4l) { if (l.abspid() == PID::ELECTRON) leptons_sel4l.push_back(l); // REDUNDANT: if (l.pT() > 7*GeV && l.abseta() < 2.5) else if (l.abspid() == PID::MUON) leptons_sel4l.push_back(l); // REDUNDANT: if (l.pT() > 6*GeV && l.abseta() < 2.7) } ////////////////////////////////////////////////////////////////// // Exactly two oppositely charged leptons ////////////////////////////////////////////////////////////////// // Calculate total 'flavour' charge double totalcharge = 0; for (const Particle& l 
: leptons_sel4l) totalcharge += l.pid(); // Analyze 4 lepton events if (leptons_sel4l.size() != 4 || totalcharge != 0) vetoEvent; // Identify Z states from 4 lepton pairs Zstate Z1, Z2, Z1_alt, Z2_alt; if ( !identifyZstates(Z1, Z2, Z1_alt, Z2_alt, leptons_sel4l) ) vetoEvent; const double mZ1 = Z1.mom().mass(); const double mZ2 = Z2.mom().mass(); const double mZ1_alt = Z1_alt.mom().mass(); const double mZ2_alt = Z2_alt.mom().mass(); const double pTZ1 = Z1.mom().pT(); const double pTZ2 = Z2.mom().pT(); const double mZZ = (Z1.mom() + Z2.mom()).mass(); const double pTZZ = (Z1.mom() + Z2.mom()).pT(); // Event selections // pT(Z) > 2 GeV bool pass = pTZ1 > 2*GeV && pTZ2 > 2*GeV; if (!pass) vetoEvent; // Lepton kinematics: pT > 20, 15, 10 (8 if muon) GeV int n1 = 0, n2 = 0, n3 = 0; for (Particle& l : leptons_sel4l) { if (l.pT() > 20*GeV) ++n1; if (l.pT() > 15*GeV) ++n2; if (l.pT() > 10*GeV && l.abspid() == PID::ELECTRON) ++n3; if (l.pT() > 8*GeV && l.abspid() == PID::MUON) ++n3; } pass = pass && n1>=1 && n2>=2 && n3>=3; if (!pass) vetoEvent; // Dilepton mass: 50 < mZ1 < 120 GeV, 12 < mZ2 < 120 GeV pass = pass && mZ1 > 50*GeV && mZ1 < 120*GeV; pass = pass && mZ2 > 12*GeV && mZ2 < 120*GeV; if (!pass) vetoEvent; // Lepton separation: deltaR(l, l') > 0.1 (0.2) for same- (different-) flavor leptons for (size_t i = 0; i < leptons_sel4l.size(); ++i) { for (size_t j = i + 1; j < leptons_sel4l.size(); ++j) { const Particle& l1 = leptons_sel4l[i]; const Particle& l2 = leptons_sel4l[j]; pass = pass && deltaR(l1, l2) > (l1.abspid() == l2.abspid() ? 
0.1 : 0.2); if (!pass) vetoEvent; } } // J/Psi veto: m(l+l-) > 5 GeV pass = pass && mZ1 > 5*GeV && mZ2 > 5*GeV && mZ1_alt > 5*GeV && mZ2_alt > 5*GeV; if (!pass) vetoEvent; // 80 < m4l < 1000 GeV pass = pass && mZZ > 80*GeV && mZZ < 1000*GeV; if (!pass) vetoEvent; // Fill histograms _h_ZZ_mZZ->fill(mZZ, weight); _h_ZZ_pTZZ->fill(pTZZ, weight); } /// Finalize void finalize() { const double norm = crossSection()/sumOfWeights()/femtobarn/TeV; scale(_h_ZZ_mZZ, norm); scale(_h_ZZ_pTZZ, norm); } /// Generic Z candidate struct Zstate : public ParticlePair { Zstate() { } Zstate(ParticlePair _particlepair) : ParticlePair(_particlepair) { } FourMomentum mom() const { return first.momentum() + second.momentum(); } operator FourMomentum() const { return mom(); } static bool cmppT(const Zstate& lx, const Zstate& rx) { return lx.mom().pT() < rx.mom().pT(); } }; /// @brief 4l to ZZ assignment algorithm /// /// ZZ->4l pairing /// - At least two same flavour opposite sign (SFOS) lepton pairs /// - Ambiguities in pairing are resolved following the procedure /// 1. the leading Z (Z1) is chosen as the SFOS pair with dilepton mass closest to the Z mass /// 2. 
the subleading Z (Z2) is chosen as the remaining SFOS dilepton pair /// /// Z1, Z2: the selected pairing /// Z1_alt, Z2_alt: the alternative pairing (the same as Z1, Z2 in 2e2m case) bool identifyZstates(Zstate& Z1, Zstate& Z2, Zstate& Z1_alt, Zstate& Z2_alt, const Particles& leptons_sel4l) { const double ZMASS = 91.1876*GeV; bool findZZ = false; Particles part_pos_el, part_neg_el, part_pos_mu, part_neg_mu; for (const Particle& l : leptons_sel4l) { if (l.abspid() == PID::ELECTRON) { if (l.pid() < 0) part_neg_el.push_back(l); if (l.pid() > 0) part_pos_el.push_back(l); } else if (l.abspid() == PID::MUON) { if (l.pid() < 0) part_neg_mu.push_back(l); if (l.pid() > 0) part_pos_mu.push_back(l); } } // eeee/mmmm channel if ((part_neg_el.size() == 2 && part_pos_el.size() == 2) || (part_neg_mu.size() == 2 && part_pos_mu.size() == 2)) { findZZ = true; Zstate Zcand_1, Zcand_2, Zcand_3, Zcand_4; Zstate Zcand_1_tmp, Zcand_2_tmp, Zcand_3_tmp, Zcand_4_tmp; if (part_neg_el.size() == 2) { // eeee Zcand_1_tmp = Zstate( ParticlePair( part_neg_el[0], part_pos_el[0] ) ); Zcand_2_tmp = Zstate( ParticlePair( part_neg_el[0], part_pos_el[1] ) ); Zcand_3_tmp = Zstate( ParticlePair( part_neg_el[1], part_pos_el[0] ) ); Zcand_4_tmp = Zstate( ParticlePair( part_neg_el[1], part_pos_el[1] ) ); } else { // mmmm Zcand_1_tmp = Zstate( ParticlePair( part_neg_mu[0], part_pos_mu[0] ) ); Zcand_2_tmp = Zstate( ParticlePair( part_neg_mu[0], part_pos_mu[1] ) ); Zcand_3_tmp = Zstate( ParticlePair( part_neg_mu[1], part_pos_mu[0] ) ); Zcand_4_tmp = Zstate( ParticlePair( part_neg_mu[1], part_pos_mu[1] ) ); } // We can have the following pairs: (Z1 + Z4) || (Z2 + Z3) // Firstly, reorder within each quadruplet to have // - fabs(mZ1 - ZMASS) < fabs(mZ4 - ZMASS) // - fabs(mZ2 - ZMASS) < fabs(mZ3 - ZMASS) if (fabs(Zcand_1_tmp.mom().mass() - ZMASS) < fabs(Zcand_4_tmp.mom().mass() - ZMASS)) { Zcand_1 = Zcand_1_tmp; Zcand_4 = Zcand_4_tmp; } else { Zcand_1 = Zcand_4_tmp; Zcand_4 = Zcand_1_tmp; } if 
(fabs(Zcand_2_tmp.mom().mass() - ZMASS) < fabs(Zcand_3_tmp.mom().mass() - ZMASS)) { Zcand_2 = Zcand_2_tmp; Zcand_3 = Zcand_3_tmp; } else { Zcand_2 = Zcand_3_tmp; Zcand_3 = Zcand_2_tmp; } // We can have the following pairs: (Z1 + Z4) || (Z2 + Z3) // Secondly, select the leading and subleading Z following // 1. the leading Z (Z1) is chosen as the SFOS pair with dilepton mass closest to the Z mass // 2. the subleading Z (Z2) is chosen as the remaining SFOS dilepton pair if (fabs(Zcand_1.mom().mass() - ZMASS) < fabs(Zcand_2.mom().mass() - ZMASS)) { Z1 = Zcand_1; Z2 = Zcand_4; Z1_alt = Zcand_2; Z2_alt = Zcand_3; } else { Z1 = Zcand_2; Z2 = Zcand_3; Z1_alt = Zcand_1; Z2_alt = Zcand_4; } } // end of eeee/mmmm channel else if (part_neg_el.size() == 1 && part_pos_el.size() == 1 && part_neg_mu.size() == 1 && part_pos_mu.size() == 1) { // 2e2m channel findZZ = true; Zstate Zcand_1, Zcand_2; Zcand_1 = Zstate( ParticlePair( part_neg_mu[0], part_pos_mu[0] ) ); Zcand_2 = Zstate( ParticlePair( part_neg_el[0], part_pos_el[0] ) ); if (fabs(Zcand_1.mom().mass() - ZMASS) < fabs(Zcand_2.mom().mass() - ZMASS)) { Z1 = Zcand_1; Z2 = Zcand_2; } else { Z1 = Zcand_2; Z2 = Zcand_1; } Z1_alt = Z1; Z2_alt = Z2; } return findZZ; } private: Histo1DPtr _h_ZZ_pTZZ, _h_ZZ_mZZ; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ATLAS_2015_I1394865); } diff --git a/analyses/pluginATLAS/ATLAS_2015_I1397637.cc b/analyses/pluginATLAS/ATLAS_2015_I1397637.cc --- a/analyses/pluginATLAS/ATLAS_2015_I1397637.cc +++ b/analyses/pluginATLAS/ATLAS_2015_I1397637.cc @@ -1,218 +1,218 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" #include "Rivet/Projections/PromptFinalState.hh" #include "Rivet/Projections/DressedLeptons.hh" #include "Rivet/Projections/FastJets.hh" namespace 
Rivet { class ATLAS_2015_I1397637 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(ATLAS_2015_I1397637); /// Book projections and histograms void init() { // Base final state definition const FinalState fs(Cuts::abseta < 4.5); // Neutrinos for MET IdentifiedFinalState nu_id; nu_id.acceptNeutrinos(); PromptFinalState neutrinos(nu_id); neutrinos.acceptTauDecays(true); declare(neutrinos, "neutrinos"); // Get photons used to dress leptons IdentifiedFinalState photons(fs); photons.acceptId(PID::PHOTON); // Use all bare muons as input to the DressedMuons projection IdentifiedFinalState mu_id(fs); mu_id.acceptIdPair(PID::MUON); PromptFinalState bare_mu(mu_id); bare_mu.acceptTauDecays(true); // Use all bare electrons as input to the DressedElectrons projection IdentifiedFinalState el_id(fs); el_id.acceptIdPair(PID::ELECTRON); PromptFinalState bare_el(el_id); bare_el.acceptTauDecays(true); // Use all bare leptons including taus for single-lepton filter IdentifiedFinalState lep_id(fs); lep_id.acceptIdPair(PID::MUON); lep_id.acceptIdPair(PID::ELECTRON); PromptFinalState bare_lep(lep_id); declare(bare_lep, "bare_lep"); // Tau finding /// @todo Use TauFinder - UnstableFinalState ufs; + UnstableParticles ufs; IdentifiedFinalState tau_id(ufs); tau_id.acceptIdPair(PID::TAU); PromptFinalState bare_tau(tau_id); declare(bare_tau, "bare_tau"); // Muons and electrons must have |eta| < 2.5 Cut eta_ranges = Cuts::abseta < 2.5; // Get dressed muons and the good muons (pt>25GeV) DressedLeptons all_dressed_mu(photons, bare_mu, 0.1, eta_ranges, true); DressedLeptons dressed_mu(photons, bare_mu, 0.1, eta_ranges && Cuts::pT > 25*GeV, true); declare(dressed_mu, "muons"); // Get dressed electrons and the good electrons (pt>25GeV) DressedLeptons all_dressed_el(photons, bare_el, 0.1, eta_ranges, true); DressedLeptons dressed_el(photons, bare_el, 0.1, eta_ranges && Cuts::pT > 25*GeV, true); declare(dressed_el, "electrons"); // Jet clustering VetoedFinalState vfs(fs); 
vfs.addVetoOnThisFinalState(all_dressed_el); vfs.addVetoOnThisFinalState(all_dressed_mu); vfs.addVetoOnThisFinalState(neutrinos); // Small-R jets /// @todo Use extra constructor args FastJets jets(vfs, FastJets::ANTIKT, 0.4); jets.useInvisibles(JetAlg::ALL_INVISIBLES); jets.useMuons(JetAlg::DECAY_MUONS); declare(jets, "jets"); // Large-R jets /// @todo Use extra constructor args FastJets large_jets(vfs, FastJets::ANTIKT, 1.0); large_jets.useInvisibles(JetAlg::ALL_INVISIBLES); large_jets.useMuons(JetAlg::DECAY_MUONS); declare(large_jets, "fat_jets"); /// Book histogram _h_pttop = bookHisto1D(1, 1, 1); } /// Perform the per-event analysis void analyze(const Event& event) { // Single lepton filter on bare leptons with no cuts const Particles& bare_lep = apply<PromptFinalState>(event, "bare_lep").particles(); const Particles& bare_tau = apply<PromptFinalState>(event, "bare_tau").particles(); if (bare_lep.size() + bare_tau.size() != 1) vetoEvent; // Electrons and muons const vector<DressedLepton>& electrons = apply<DressedLeptons>(event, "electrons").dressedLeptons(); const vector<DressedLepton>& muons = apply<DressedLeptons>(event, "muons").dressedLeptons(); if (electrons.size() + muons.size() != 1) vetoEvent; const DressedLepton& lepton = muons.empty() ? electrons[0] : muons[0]; // Get the neutrinos from the event record (they have pT > 0.0 and |eta| < 4.5 at this stage) const Particles& neutrinos = apply<PromptFinalState>(event, "neutrinos").particlesByPt(); FourMomentum met; for (const Particle& nu : neutrinos) met += nu.momentum(); if (met.pT() < 20*GeV) vetoEvent; // Thin jets and trimmed fat jets /// @todo Use Rivet built-in FJ trimming support const Jets& jets = apply<FastJets>(event, "jets").jetsByPt(Cuts::pT > 25*GeV && Cuts::abseta < 2.5); const PseudoJets& fat_pjets = apply<FastJets>(event, "fat_jets").pseudoJetsByPt(); const double Rfilt = 0.3, ptFrac_min = 0.05; ///< @todo Need to be careful about the units for the pT cut passed to FJ? 
PseudoJets trimmed_fat_pjets; fastjet::Filter trimmer(fastjet::JetDefinition(fastjet::kt_algorithm, Rfilt), fastjet::SelectorPtFractionMin(ptFrac_min)); for (const PseudoJet& pjet : fat_pjets) trimmed_fat_pjets += trimmer(pjet); trimmed_fat_pjets = fastjet::sorted_by_pt(trimmed_fat_pjets); // Jet reclustering // Use a kT cluster sequence to recluster the trimmed jets so that a d12 can then be obtained from the reclustered jet vector<double> splittingScales; for (const PseudoJet& tpjet : trimmed_fat_pjets) { const PseudoJets tpjet_constits = tpjet.constituents(); const fastjet::ClusterSequence kt_cs(tpjet_constits, fastjet::JetDefinition(fastjet::kt_algorithm, 1.5, fastjet::E_scheme, fastjet::Best)); const PseudoJets kt_jets = kt_cs.inclusive_jets(); const double d12 = 1.5 * sqrt(kt_jets[0].exclusive_subdmerge(1)); splittingScales += d12; } Jets trimmed_fat_jets; for (size_t i = 0; i < trimmed_fat_pjets.size(); ++i) { const Jet tj = trimmed_fat_pjets[i]; if (tj.mass() <= 100*GeV) continue; if (tj.pT() <= 300*GeV) continue; if (splittingScales[i] <= 40*GeV) continue; if (tj.abseta() >= 2.0) continue; trimmed_fat_jets += tj; } if (trimmed_fat_jets.empty()) vetoEvent; // Jet b-tagging Jets bjets, non_bjets; for (const Jet& jet : jets) (jet.bTagged() ? 
bjets : non_bjets) += jet; if (bjets.empty()) vetoEvent; // Boosted selection: lepton/jet overlap const double transmass = sqrt( 2 * lepton.pT() * met.pT() * (1 - cos(deltaPhi(lepton, met))) ); if (transmass + met.pt() <= 60*GeV) vetoEvent; int lepJetIndex = -1; for (size_t i = 0; i < jets.size(); ++i) { const Jet& jet = jets[i]; if (deltaR(jet, lepton) < 1.5) { lepJetIndex = i; break; } } if (lepJetIndex < 0) vetoEvent; const Jet& ljet = jets[lepJetIndex]; // Boosted selection: lepton-jet/fat-jet matching int fatJetIndex = -1; for (size_t j = 0; j < trimmed_fat_jets.size(); ++j) { const Jet& fjet = trimmed_fat_jets[j]; const double dR_fatjet = deltaR(ljet, fjet); const double dPhi_fatjet = deltaPhi(lepton, fjet); if (dR_fatjet > 1.5 && dPhi_fatjet > 2.3) { fatJetIndex = j; break; } } if (fatJetIndex < 0) vetoEvent; const Jet& fjet = trimmed_fat_jets[fatJetIndex]; // Boosted selection: b-tag matching const bool lepbtag = ljet.bTagged(); bool hadbtag = false; for (const Jet& bjet : bjets) { hadbtag |= (deltaR(fjet, bjet) < 1.0); } // Fill histo if selection passed if (hadbtag || lepbtag) _h_pttop->fill(fjet.pT()/GeV, event.weight()); } /// Normalise histograms etc., after the run void finalize() { scale(_h_pttop, crossSection()/femtobarn / sumOfWeights()); } private: Histo1DPtr _h_pttop; }; DECLARE_RIVET_PLUGIN(ATLAS_2015_I1397637); } diff --git a/analyses/pluginATLAS/ATLAS_2017_I1604029.cc b/analyses/pluginATLAS/ATLAS_2017_I1604029.cc --- a/analyses/pluginATLAS/ATLAS_2017_I1604029.cc +++ b/analyses/pluginATLAS/ATLAS_2017_I1604029.cc @@ -1,149 +1,149 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/DressedLeptons.hh" #include "Rivet/Projections/PromptFinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { ///@brief: ttbar + gamma at 8 TeV class ATLAS_2017_I1604029 : public 
Analysis { public: // Constructor DEFAULT_RIVET_ANALYSIS_CTOR(ATLAS_2017_I1604029); // Book histograms and initialise projections before the run void init() { const FinalState fs; // signal photons PromptFinalState prompt_ph(Cuts::abspid == PID::PHOTON && Cuts::pT > 15*GeV && Cuts::abseta < 2.37); declare(prompt_ph, "photons"); // bare leptons Cut base_cuts = (Cuts::abseta < 2.7) && (Cuts::pT > 10*GeV); IdentifiedFinalState bare_leps(base_cuts); bare_leps.acceptIdPair(PID::MUON); bare_leps.acceptIdPair(PID::ELECTRON); declare(bare_leps, "bare_leptons"); // dressed leptons Cut dressed_cuts = (Cuts::abseta < 2.5) && (Cuts::pT > 25*GeV); PromptFinalState prompt_mu(base_cuts && Cuts::abspid == PID::MUON); PromptFinalState prompt_el(base_cuts && Cuts::abspid == PID::ELECTRON); IdentifiedFinalState all_photons(fs, PID::PHOTON); DressedLeptons elecs(all_photons, prompt_el, 0.1, dressed_cuts); declare(elecs, "elecs"); DressedLeptons muons(all_photons, prompt_mu, 0.1, dressed_cuts); declare(muons, "muons"); // auxiliary projections for 'single-lepton ttbar filter' PromptFinalState prompt_lep(Cuts::abspid == PID::MUON || Cuts::abspid == PID::ELECTRON); declare(prompt_lep, "prompt_leps"); - declare(UnstableFinalState(), "ufs"); + declare(UnstableParticles(), "ufs"); // jets FastJets jets(fs, FastJets::ANTIKT, 0.4, JetAlg::NO_MUONS, JetAlg::NO_INVISIBLES); declare(jets, "jets"); // BOOK HISTOGRAMS _h["pt"] = bookHisto1D(2,1,1); _h["eta"] = bookHisto1D(3,1,1); } // Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); // analysis extrapolated to 1-lepton-plus-jets channel, where "lepton" cannot be a tau // (i.e. 
contribution from dileptonic ttbar where one of the leptons is outside // the detector acceptance has been subtracted as a background) if (applyProjection<PromptFinalState>(event, "prompt_leps").particles().size() != 1) vetoEvent; - for (const auto& p : apply<UnstableFinalState>(event, "ufs").particles()) { + for (const auto& p : apply<UnstableParticles>(event, "ufs").particles()) { if (p.fromPromptTau()) vetoEvent; } // photon selection Particles photons = applyProjection<PromptFinalState>(event, "photons").particlesByPt(); Particles bare_leps = apply<IdentifiedFinalState>(event, "bare_leptons").particles(); for (const Particle& lep : bare_leps) ifilter_discard(photons, deltaRLess(lep, 0.1)); if (photons.size() != 1) vetoEvent; const Particle& photon = photons[0]; // jet selection Jets jets = apply<FastJets>(event, "jets").jetsByPt(Cuts::abseta < 2.5 && Cuts::pT > 25*GeV); // lepton selection const vector<DressedLepton>& elecs = apply<DressedLeptons>(event, "elecs").dressedLeptons(); const vector<DressedLepton>& all_muons = apply<DressedLeptons>(event, "muons").dressedLeptons(); // jet photon/electron overlap removal for (const DressedLepton& e : elecs) ifilter_discard(jets, deltaRLess(e, 0.2, RAPIDITY)); for (const Particle& ph : photons) ifilter_discard(jets, deltaRLess(ph, 0.1, RAPIDITY)); if (jets.size() < 4) vetoEvent; // photon-jet minimum deltaR double mindR_phjet = 999.; for (Jet jet : jets) { const double dR_phjet = deltaR(photon, jet); if (dR_phjet < mindR_phjet) mindR_phjet = dR_phjet; } if (mindR_phjet < 0.5) vetoEvent; // muon jet overlap removal vector<DressedLepton> muons; foreach (DressedLepton mu, all_muons) { bool overlaps = false; foreach (Jet jet, jets) { if (deltaR(mu, jet) < 0.4) { overlaps = true; break; } } if (overlaps) continue; muons.push_back(mu); } // one electron XOR one muon bool isEl = elecs.size() == 1 && muons.size() == 0; bool isMu = muons.size() == 1 && elecs.size() == 0; if (!isEl && !isMu) vetoEvent; // photon-lepton deltaR double mindR_phlep = deltaR(photon, isEl?
elecs[0] : muons[0]); if (mindR_phlep < 0.7) vetoEvent; // b-tagging Jets bjets; foreach (Jet jet, jets) { if (jet.bTagged(Cuts::pT > 5*GeV)) bjets += jet; } if (bjets.empty()) vetoEvent; _h["pt"]->fill(photon.pT()/GeV, weight); _h["eta"]->fill(photon.abseta(), weight); } // Normalise histograms etc., after the run void finalize() { const double normto(crossSection() / femtobarn / sumOfWeights()); for (auto &hist : _h) { scale(hist.second, normto); } } private: map<string, Histo1DPtr> _h; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ATLAS_2017_I1604029); } diff --git a/analyses/pluginCMS/CMS_2011_S8973270.cc b/analyses/pluginCMS/CMS_2011_S8973270.cc --- a/analyses/pluginCMS/CMS_2011_S8973270.cc +++ b/analyses/pluginCMS/CMS_2011_S8973270.cc @@ -1,164 +1,164 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Projections/FastJets.hh" namespace Rivet { class CMS_2011_S8973270 : public Analysis { public: /// Constructor CMS_2011_S8973270() : Analysis("CMS_2011_S8973270") { } void init() { FinalState fs; FastJets jetproj(fs, FastJets::ANTIKT, 0.5); jetproj.useInvisibles(); declare(jetproj, "Jets"); - UnstableFinalState ufs; + UnstableParticles ufs; declare(ufs, "UFS"); // Book histograms _h_dsigma_dR_56GeV = bookHisto1D(1,1,1); _h_dsigma_dR_84GeV = bookHisto1D(2,1,1); _h_dsigma_dR_120GeV = bookHisto1D(3,1,1); _h_dsigma_dPhi_56GeV = bookHisto1D(4,1,1); _h_dsigma_dPhi_84GeV = bookHisto1D(5,1,1); _h_dsigma_dPhi_120GeV = bookHisto1D(6,1,1); _countMCDR56 = 0; _countMCDR84 = 0; _countMCDR120 = 0; _countMCDPhi56 = 0; _countMCDPhi84 = 0; _countMCDPhi120 = 0; } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); const Jets& jets = apply<FastJets>(event,"Jets").jetsByPt(); - const UnstableFinalState& ufs = apply<UnstableFinalState>(event, "UFS"); + const UnstableParticles& ufs = apply<UnstableParticles>(event, "UFS"); // Find
the leading jet pT and eta if (jets.size() == 0) vetoEvent; const double ljpT = jets[0].pT(); const double ljeta = jets[0].eta(); MSG_DEBUG("Leading jet pT / eta: " << ljpT << " / " << ljeta); // Minimum requirement for event if (ljpT > 56*GeV && fabs(ljeta) < 3.0) { // Find B hadrons in event int nab = 0, nb = 0; //counters for all B and independent B hadrons double etaB1 = 7.7, etaB2 = 7.7; double phiB1 = 7.7, phiB2 = 7.7; double pTB1 = 7.7, pTB2 = 7.7; foreach (const Particle& p, ufs.particles()) { int aid = p.abspid(); if (aid/100 == 5 || aid/1000==5) { nab++; // 2J+1 == 1 (mesons) or 2 (baryons) if (aid%10 == 1 || aid%10 == 2) { // No B decaying to B if (aid != 5222 && aid != 5112 && aid != 5212 && aid != 5322) { if (nb==0) { etaB1 = p.eta(); phiB1 = p.phi(); pTB1 = p.pT(); } else if (nb==1) { etaB2 = p.eta(); phiB2 = p.phi(); pTB2 = p.pT(); } nb++; } } MSG_DEBUG("ID " << aid << " B hadron"); } } if (nb==2 && pTB1 > 15*GeV && pTB2 > 15*GeV && fabs(etaB1) < 2.0 && fabs(etaB2) < 2.0) { double dPhi = deltaPhi(phiB1, phiB2); double dR = deltaR(etaB1, phiB1, etaB2, phiB2); MSG_DEBUG("DR/DPhi " << dR << " " << dPhi); // MC counters if (dR > 2.4) _countMCDR56 += weight; if (dR > 2.4 && ljpT > 84*GeV) _countMCDR84 += weight; if (dR > 2.4 && ljpT > 120*GeV) _countMCDR120 += weight; if (dPhi > 3.*PI/4.) _countMCDPhi56 += weight; if (dPhi > 3.*PI/4. && ljpT > 84*GeV) _countMCDPhi84 += weight; if (dPhi > 3.*PI/4. 
&& ljpT > 120*GeV) _countMCDPhi120 += weight; _h_dsigma_dR_56GeV->fill(dR, weight); if (ljpT > 84*GeV) _h_dsigma_dR_84GeV->fill(dR, weight); if (ljpT > 120*GeV) _h_dsigma_dR_120GeV->fill(dR, weight); _h_dsigma_dPhi_56GeV->fill(dPhi, weight); if (ljpT > 84*GeV) _h_dsigma_dPhi_84GeV->fill(dPhi, weight); if (ljpT > 120*GeV) _h_dsigma_dPhi_120GeV->fill(dPhi, weight); //MSG_DEBUG("nb " << nb << " " << nab); } } } /// Normalise histograms etc., after the run void finalize() { MSG_DEBUG("crossSection " << crossSection() << " sumOfWeights " << sumOfWeights()); // Hardcoded bin widths double DRbin = 0.4; double DPhibin = PI/8.0; // Find out the correct numbers double nDataDR56 = 25862.20; double nDataDR84 = 5675.55; double nDataDR120 = 1042.72; double nDataDPhi56 = 24220.00; double nDataDPhi84 = 4964.00; double nDataDPhi120 = 919.10; double normDR56 = (_countMCDR56 > 0.) ? nDataDR56/_countMCDR56 : crossSection()/sumOfWeights(); double normDR84 = (_countMCDR84 > 0.) ? nDataDR84/_countMCDR84 : crossSection()/sumOfWeights(); double normDR120 = (_countMCDR120 > 0.) ? nDataDR120/_countMCDR120 : crossSection()/sumOfWeights(); double normDPhi56 = (_countMCDPhi56 > 0.) ? nDataDPhi56/_countMCDPhi56 : crossSection()/sumOfWeights(); double normDPhi84 = (_countMCDPhi84 > 0.) ? nDataDPhi84/_countMCDPhi84 : crossSection()/sumOfWeights(); double normDPhi120 = (_countMCDPhi120 > 0.) ? 
nDataDPhi120/_countMCDPhi120 : crossSection()/sumOfWeights(); scale(_h_dsigma_dR_56GeV, normDR56*DRbin); scale(_h_dsigma_dR_84GeV, normDR84*DRbin); scale(_h_dsigma_dR_120GeV, normDR120*DRbin); scale(_h_dsigma_dPhi_56GeV, normDPhi56*DPhibin); scale(_h_dsigma_dPhi_84GeV, normDPhi84*DPhibin); scale(_h_dsigma_dPhi_120GeV, normDPhi120*DPhibin); } //@} private: /// @name Counters //@{ double _countMCDR56, _countMCDR84, _countMCDR120; double _countMCDPhi56, _countMCDPhi84, _countMCDPhi120; //@} /// @name Histograms //@{ Histo1DPtr _h_dsigma_dR_56GeV, _h_dsigma_dR_84GeV, _h_dsigma_dR_120GeV; Histo1DPtr _h_dsigma_dPhi_56GeV, _h_dsigma_dPhi_84GeV, _h_dsigma_dPhi_120GeV; //@} }; // Hook for the plugin system DECLARE_RIVET_PLUGIN(CMS_2011_S8973270); } diff --git a/analyses/pluginCMS/CMS_2011_S8978280.cc b/analyses/pluginCMS/CMS_2011_S8978280.cc --- a/analyses/pluginCMS/CMS_2011_S8978280.cc +++ b/analyses/pluginCMS/CMS_2011_S8978280.cc @@ -1,114 +1,114 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief CMS strange particle spectra (Ks, Lambda, Cascade) in pp at 900 and 7000 GeV /// @author Kevin Stenson class CMS_2011_S8978280 : public Analysis { public: /// Constructor CMS_2011_S8978280() : Analysis("CMS_2011_S8978280") { } void init() { - UnstableFinalState ufs(Cuts::absrap < 2); + UnstableParticles ufs(Cuts::absrap < 2); declare(ufs, "UFS"); // Particle distributions versus rapidity and transverse momentum if (fuzzyEquals(sqrtS()/GeV, 900*GeV)){ _h_dNKshort_dy = bookHisto1D(1, 1, 1); _h_dNKshort_dpT = bookHisto1D(2, 1, 1); _h_dNLambda_dy = bookHisto1D(3, 1, 1); _h_dNLambda_dpT = bookHisto1D(4, 1, 1); _h_dNXi_dy = bookHisto1D(5, 1, 1); _h_dNXi_dpT = bookHisto1D(6, 1, 1); // _h_LampT_KpT = bookScatter2D(7, 1, 1); _h_XipT_LampT = bookScatter2D(8, 1, 1); _h_Lamy_Ky = bookScatter2D(9, 1, 1); _h_Xiy_Lamy = bookScatter2D(10, 1, 1); } else if 
(fuzzyEquals(sqrtS()/GeV, 7000*GeV)){ _h_dNKshort_dy = bookHisto1D(1, 1, 2); _h_dNKshort_dpT = bookHisto1D(2, 1, 2); _h_dNLambda_dy = bookHisto1D(3, 1, 2); _h_dNLambda_dpT = bookHisto1D(4, 1, 2); _h_dNXi_dy = bookHisto1D(5, 1, 2); _h_dNXi_dpT = bookHisto1D(6, 1, 2); // _h_LampT_KpT = bookScatter2D(7, 1, 2); _h_XipT_LampT = bookScatter2D(8, 1, 2); _h_Lamy_Ky = bookScatter2D(9, 1, 2); _h_Xiy_Lamy = bookScatter2D(10, 1, 2); } } void analyze(const Event& event) { const double weight = event.weight(); - const UnstableFinalState& parts = apply<UnstableFinalState>(event, "UFS"); + const UnstableParticles& parts = apply<UnstableParticles>(event, "UFS"); foreach (const Particle& p, parts.particles()) { switch (p.abspid()) { case PID::K0S: _h_dNKshort_dy->fill(p.absrap(), weight); _h_dNKshort_dpT->fill(p.pT(), weight); break; case PID::LAMBDA: // Lambda should not have Cascade or Omega ancestors since they should not decay. But just in case... if ( !( p.hasAncestor(3322) || p.hasAncestor(-3322) || p.hasAncestor(3312) || p.hasAncestor(-3312) || p.hasAncestor(3334) || p.hasAncestor(-3334) ) ) { _h_dNLambda_dy->fill(p.absrap(), weight); _h_dNLambda_dpT->fill(p.pT(), weight); } break; case PID::XIMINUS: // Cascade should not have Omega ancestors since it should not decay. But just in case...
if ( !( p.hasAncestor(3334) || p.hasAncestor(-3334) ) ) { _h_dNXi_dy->fill(p.absrap(), weight); _h_dNXi_dpT->fill(p.pT(), weight); } break; } } } void finalize() { divide(_h_dNLambda_dpT,_h_dNKshort_dpT, _h_LampT_KpT); divide(_h_dNXi_dpT,_h_dNLambda_dpT, _h_XipT_LampT); divide(_h_dNLambda_dy,_h_dNKshort_dy, _h_Lamy_Ky); divide(_h_dNXi_dy,_h_dNLambda_dy, _h_Xiy_Lamy); const double normpT = 1.0/sumOfWeights(); const double normy = 0.5*normpT; // Accounts for using |y| instead of y scale(_h_dNKshort_dy, normy); scale(_h_dNKshort_dpT, normpT); scale(_h_dNLambda_dy, normy); scale(_h_dNLambda_dpT, normpT); scale(_h_dNXi_dy, normy); scale(_h_dNXi_dpT, normpT); } private: // Particle distributions versus rapidity and transverse momentum Histo1DPtr _h_dNKshort_dy, _h_dNKshort_dpT, _h_dNLambda_dy, _h_dNLambda_dpT, _h_dNXi_dy, _h_dNXi_dpT; Scatter2DPtr _h_LampT_KpT, _h_XipT_LampT, _h_Lamy_Ky, _h_Xiy_Lamy; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(CMS_2011_S8978280); } diff --git a/analyses/pluginCMS/CMS_2012_PAS_QCD_11_010.cc b/analyses/pluginCMS/CMS_2012_PAS_QCD_11_010.cc --- a/analyses/pluginCMS/CMS_2012_PAS_QCD_11_010.cc +++ b/analyses/pluginCMS/CMS_2012_PAS_QCD_11_010.cc @@ -1,89 +1,89 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/ChargedFinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Projections/FastJets.hh" namespace Rivet { class CMS_2012_PAS_QCD_11_010 : public Analysis { public: CMS_2012_PAS_QCD_11_010() : Analysis("CMS_2012_PAS_QCD_11_010") { } void init() { const FastJets jets(ChargedFinalState(Cuts::abseta < 2.5 && Cuts::pT > 0.5*GeV), FastJets::ANTIKT, 0.5); declare(jets, "Jets"); - const UnstableFinalState ufs(Cuts::abseta < 2 && Cuts::pT > 0.6*GeV); + const UnstableParticles ufs(Cuts::abseta < 2 && Cuts::pT > 0.6*GeV); declare(ufs, "UFS"); _h_nTrans_Lambda = bookProfile1D(1, 1, 1); _h_nTrans_Kaon = bookProfile1D(2, 1, 1); 
_h_ptsumTrans_Lambda = bookProfile1D(3, 1, 1); _h_ptsumTrans_Kaon = bookProfile1D(4, 1, 1); } void analyze(const Event& event) { const double weight = event.weight(); Jets jets = apply<FastJets>(event, "Jets").jetsByPt(1.0*GeV); if (jets.size() < 1) vetoEvent; if (fabs(jets[0].eta()) >= 2) { // cuts on leading jets vetoEvent; } FourMomentum p_lead = jets[0].momentum(); const double pTlead = p_lead.pT(); - const UnstableFinalState& ufs = apply<UnstableFinalState>(event, "UFS"); + const UnstableParticles& ufs = apply<UnstableParticles>(event, "UFS"); int numTrans_Kaon = 0; int numTrans_Lambda = 0; double ptSumTrans_Kaon = 0.; double ptSumTrans_Lambda = 0.; foreach (const Particle& p, ufs.particles()) { double dphi = deltaPhi(p, p_lead); double pT = p.pT(); const PdgId id = p.abspid(); if (dphi > PI/3. && dphi < 2./3.*PI) { if (id == 310 && pT > 0.6*GeV) { ptSumTrans_Kaon += pT/GeV; numTrans_Kaon++; } else if (id == 3122 && pT > 1.5*GeV) { ptSumTrans_Lambda += pT/GeV; numTrans_Lambda++; } } } _h_nTrans_Kaon->fill(pTlead/GeV, numTrans_Kaon / (8.0 * PI/3.0), weight); _h_nTrans_Lambda->fill(pTlead/GeV, numTrans_Lambda / (8.0 * PI/3.0), weight); _h_ptsumTrans_Kaon->fill(pTlead/GeV, ptSumTrans_Kaon / (GeV * (8.0 * PI/3.0)), weight); _h_ptsumTrans_Lambda->fill(pTlead/GeV, ptSumTrans_Lambda / (GeV * (8.0 * PI/3.0)), weight); } void finalize() { } private: Profile1DPtr _h_nTrans_Kaon; Profile1DPtr _h_nTrans_Lambda; Profile1DPtr _h_ptsumTrans_Kaon; Profile1DPtr _h_ptsumTrans_Lambda; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(CMS_2012_PAS_QCD_11_010); } diff --git a/analyses/pluginCMS/CMS_2013_I1256943.cc b/analyses/pluginCMS/CMS_2013_I1256943.cc --- a/analyses/pluginCMS/CMS_2013_I1256943.cc +++ b/analyses/pluginCMS/CMS_2013_I1256943.cc @@ -1,194 +1,194 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/ZFinder.hh" #include "Rivet/Projections/FinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// CMS
cross-section and angular correlations in Z boson + b-hadrons events at 7 TeV class CMS_2013_I1256943 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(CMS_2013_I1256943); /// Add projections and book histograms void init() { _sumW = 0; _sumW50 = 0; _sumWpT = 0; FinalState fs(Cuts::abseta < 2.4 && Cuts::pT > 20*GeV); declare(fs, "FS"); - UnstableFinalState ufs(Cuts::abseta < 2 && Cuts::pT > 15*GeV); + UnstableParticles ufs(Cuts::abseta < 2 && Cuts::pT > 15*GeV); declare(ufs, "UFS"); Cut zetacut = Cuts::abseta < 2.4; ZFinder zfindermu(fs, zetacut, PID::MUON, 81.0*GeV, 101.0*GeV, 0.1, ZFinder::NOCLUSTER, ZFinder::TRACK, 91.2*GeV); declare(zfindermu, "ZFinderMu"); ZFinder zfinderel(fs, zetacut, PID::ELECTRON, 81.0*GeV, 101.0*GeV, 0.1, ZFinder::NOCLUSTER, ZFinder::TRACK, 91.2*GeV); declare(zfinderel, "ZFinderEl"); // Histograms in non-boosted region of Z pT _h_dR_BB = bookHisto1D(1, 1, 1); _h_dphi_BB = bookHisto1D(2, 1, 1); _h_min_dR_ZB = bookHisto1D(3, 1, 1); _h_A_ZBB = bookHisto1D(4, 1, 1); // Histograms in boosted region of Z pT (pT > 50 GeV) _h_dR_BB_boost = bookHisto1D(5, 1, 1); _h_dphi_BB_boost = bookHisto1D(6, 1, 1); _h_min_dR_ZB_boost = bookHisto1D(7, 1, 1); _h_A_ZBB_boost = bookHisto1D(8, 1, 1); _h_min_ZpT = bookHisto1D(9,1,1); } /// Do the analysis void analyze(const Event& e) { - const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS"); + const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS"); const ZFinder& zfindermu = apply<ZFinder>(e, "ZFinderMu"); const ZFinder& zfinderel = apply<ZFinder>(e, "ZFinderEl"); // Look for a Z --> mu+ mu- event in the final state if (zfindermu.empty() && zfinderel.empty()) vetoEvent; const Particles& z = !zfindermu.empty() ?
zfindermu.bosons() : zfinderel.bosons(); const bool is_boosted = ( z[0].pT() > 50*GeV ); vector<FourMomentum> Bmom; // Loop over the unstable particles for (const Particle& p : ufs.particles()) { const PdgId pid = p.pid(); // Look for particles with a bottom quark if (PID::hasBottom(pid)) { bool good_B = false; const GenParticle* pgen = p.genParticle(); const GenVertex* vgen = pgen -> end_vertex(); // Loop over the decay products of each unstable particle, looking for a b-hadron pair /// @todo Avoid HepMC API if (vgen) { for (GenVertex::particles_out_const_iterator it = vgen->particles_out_const_begin(); it != vgen->particles_out_const_end(); ++it) { // If the particle produced has a bottom quark do not count it and go to the next loop cycle. if (!( PID::hasBottom( (*it)->pdg_id() ) ) ) { good_B = true; continue; } else { good_B = false; break; } } if (good_B ) Bmom.push_back( p.momentum() ); } else continue; } } // If there are not exactly two B's in the final state veto the event if (Bmom.size() != 2 ) { Bmom.clear(); vetoEvent; } // Calculate the observables double dphiBB = deltaPhi(Bmom[0], Bmom[1]); double dRBB = deltaR(Bmom[0], Bmom[1]); const FourMomentum& pZ = z[0].momentum(); const bool closest_B = ( deltaR(pZ, Bmom[0]) < deltaR(pZ, Bmom[1]) ); const double mindR_ZB = closest_B ? deltaR(pZ, Bmom[0]) : deltaR(pZ, Bmom[1]); const double maxdR_ZB = closest_B ?
deltaR(pZ, Bmom[1]) : deltaR(pZ, Bmom[0]); const double AZBB = ( maxdR_ZB - mindR_ZB ) / ( maxdR_ZB + mindR_ZB ); // Get event weight for histogramming const double weight = e.weight(); // Fill the histograms in the non-boosted region _h_dphi_BB->fill(dphiBB, weight); _h_dR_BB->fill(dRBB, weight); _h_min_dR_ZB->fill(mindR_ZB, weight); _h_A_ZBB->fill(AZBB, weight); _sumW += weight; _sumWpT += weight; // Fill the histograms in the boosted region if (is_boosted) { _sumW50 += weight; _h_dphi_BB_boost->fill(dphiBB, weight); _h_dR_BB_boost->fill(dRBB, weight); _h_min_dR_ZB_boost->fill(mindR_ZB, weight); _h_A_ZBB_boost->fill(AZBB, weight); } // Fill Z pT (cumulative) histogram _h_min_ZpT->fill(0, weight); if (pZ.pT() > 40*GeV ) { _sumWpT += weight; _h_min_ZpT->fill(40, weight); } if (pZ.pT() > 80*GeV ) { _sumWpT += weight; _h_min_ZpT->fill(80, weight); } if (pZ.pT() > 120*GeV ) { _sumWpT += weight; _h_min_ZpT->fill(120, weight); } Bmom.clear(); } /// Finalize void finalize() { // Normalize excluding overflow bins (d'oh) normalize(_h_dR_BB, 0.7*crossSection()*_sumW/sumOfWeights(), false); // d01-x01-y01 normalize(_h_dphi_BB, 0.53*crossSection()*_sumW/sumOfWeights(), false); // d02-x01-y01 normalize(_h_min_dR_ZB, 0.84*crossSection()*_sumW/sumOfWeights(), false); // d03-x01-y01 normalize(_h_A_ZBB, 0.2*crossSection()*_sumW/sumOfWeights(), false); // d04-x01-y01 normalize(_h_dR_BB_boost, 0.84*crossSection()*_sumW50/sumOfWeights(), false); // d05-x01-y01 normalize(_h_dphi_BB_boost, 0.63*crossSection()*_sumW50/sumOfWeights(), false); // d06-x01-y01 normalize(_h_min_dR_ZB_boost, 1*crossSection()*_sumW50/sumOfWeights(), false); // d07-x01-y01 normalize(_h_A_ZBB_boost, 0.25*crossSection()*_sumW50/sumOfWeights(), false); // d08-x01-y01 normalize(_h_min_ZpT, 40*crossSection()*_sumWpT/sumOfWeights(), false); // d09-x01-y01 } private: /// @name Weight counters //@{ double _sumW, _sumW50, _sumWpT; //@} /// @name Histograms //@{ Histo1DPtr _h_dphi_BB, _h_dR_BB, _h_min_dR_ZB, _h_A_ZBB; 
Histo1DPtr _h_dphi_BB_boost, _h_dR_BB_boost, _h_min_dR_ZB_boost, _h_A_ZBB_boost, _h_min_ZpT; //@} }; // Hook for the plugin system DECLARE_RIVET_PLUGIN(CMS_2013_I1256943); } diff --git a/analyses/pluginCMS/CMS_2016_I1486238.cc b/analyses/pluginCMS/CMS_2016_I1486238.cc --- a/analyses/pluginCMS/CMS_2016_I1486238.cc +++ b/analyses/pluginCMS/CMS_2016_I1486238.cc @@ -1,124 +1,124 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/InitialQuarks.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Projections/FastJets.hh" namespace Rivet { /// Studies of 2 b-jet + 2 jet production in proton-proton collisions at 7 TeV class CMS_2016_I1486238 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(CMS_2016_I1486238); /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { FastJets akt(FinalState(), FastJets::ANTIKT, 0.5); addProjection(akt, "antikT"); _h_Deltaphi_newway = bookHisto1D(1,1,1); _h_deltaphiafterlight = bookHisto1D(9,1,1); _h_SumPLight = bookHisto1D(5,1,1); _h_LeadingBJetpt = bookHisto1D(11,1,1); _h_SubleadingBJetpt = bookHisto1D(15,1,1); _h_LeadingLightJetpt = bookHisto1D(13,1,1); _h_SubleadingLightJetpt = bookHisto1D(17,1,1); _h_LeadingBJeteta = bookHisto1D(10,1,1); _h_SubleadingBJeteta = bookHisto1D(14,1,1); _h_LeadingLightJeteta = bookHisto1D(12,1,1); _h_SubleadingLightJeteta = bookHisto1D(16,1,1); } /// Perform the per-event analysis void analyze(const Event& event) { const Jets& jets = apply<FastJets>(event, "antikT").jetsByPt(Cuts::absrap < 4.7 && Cuts::pT > 20*GeV); if (jets.size() < 4) vetoEvent; // Initial quarks /// @note Quark-level tagging...
Particles bquarks; for (const GenParticle* p : particles(event.genEvent())) { if (abs(p->pdg_id()) == PID::BQUARK) bquarks += Particle(p); } Jets bjets, ljets; for (const Jet& j : jets) { const bool btag = any(bquarks, deltaRLess(j, 0.3)); // for (const Particle& b : bquarks) if (deltaR(j, b) < 0.3) btag = true; (btag && j.abseta() < 2.4 ? bjets : ljets).push_back(j); } // Fill histograms const double weight = event.weight(); if (bjets.size() >= 2 && ljets.size() >= 2) { _h_LeadingBJetpt->fill(bjets[0].pT()/GeV, weight); _h_SubleadingBJetpt->fill(bjets[1].pT()/GeV, weight); _h_LeadingLightJetpt->fill(ljets[0].pT()/GeV, weight); _h_SubleadingLightJetpt->fill(ljets[1].pT()/GeV, weight); // _h_LeadingBJeteta->fill(bjets[0].eta(), weight); _h_SubleadingBJeteta->fill(bjets[1].eta(), weight); _h_LeadingLightJeteta->fill(ljets[0].eta(), weight); _h_SubleadingLightJeteta->fill(ljets[1].eta(), weight); const double lightdphi = deltaPhi(ljets[0], ljets[1]); _h_deltaphiafterlight->fill(lightdphi, weight); const double vecsumlightjets = sqrt(sqr(ljets[0].px()+ljets[1].px()) + sqr(ljets[0].py()+ljets[1].py())); //< @todo Just (lj0+lj1).pT()? Or use add_quad const double term2 = vecsumlightjets/(sqrt(sqr(ljets[0].px()) + sqr(ljets[0].py())) + sqrt(sqr(ljets[1].px()) + sqr(ljets[1].py()))); //< @todo lj0.pT() + lj1.pT()? 
Or add_quad _h_SumPLight->fill(term2, weight); const double pxBsyst2 = bjets[0].px()+bjets[1].px(); // @todo (bj0+bj1).px() const double pyBsyst2 = bjets[0].py()+bjets[1].py(); // @todo (bj0+bj1).py() const double pxJetssyst2 = ljets[0].px()+ljets[1].px(); // @todo (lj0+lj1).px() const double pyJetssyst2 = ljets[0].py()+ljets[1].py(); // @todo (lj0+lj1).py() const double modulusB2 = sqrt(sqr(pxBsyst2)+sqr(pyBsyst2)); //< @todo add_quad const double modulusJets2 = sqrt(sqr(pxJetssyst2)+sqr(pyJetssyst2)); //< @todo add_quad const double cosphiBsyst2 = pxBsyst2/modulusB2; const double cosphiJetssyst2 = pxJetssyst2/modulusJets2; const double phiBsyst2 = ((pyBsyst2 > 0) ? 1 : -1) * acos(cosphiBsyst2); //< @todo sign(pyBsyst2) const double phiJetssyst2 = sign(pyJetssyst2) * acos(cosphiJetssyst2); const double Dphi2 = deltaPhi(phiBsyst2, phiJetssyst2); _h_Deltaphi_newway->fill(Dphi2,weight); } } /// Normalise histograms etc., after the run void finalize() { const double invlumi = crossSection()/picobarn/sumOfWeights(); normalize({_h_SumPLight, _h_deltaphiafterlight, _h_Deltaphi_newway}); scale({_h_LeadingLightJetpt, _h_SubleadingLightJetpt, _h_LeadingBJetpt, _h_SubleadingBJetpt}, invlumi); scale({_h_LeadingLightJeteta, _h_SubleadingLightJeteta, _h_LeadingBJeteta, _h_SubleadingBJeteta}, invlumi); } //@} private: /// @name Histograms //@{ Histo1DPtr _h_deltaphiafterlight, _h_Deltaphi_newway, _h_SumPLight; Histo1DPtr _h_LeadingBJetpt, _h_SubleadingBJetpt, _h_LeadingLightJetpt, _h_SubleadingLightJetpt; Histo1DPtr _h_LeadingBJeteta, _h_SubleadingBJeteta, _h_LeadingLightJeteta, _h_SubleadingLightJeteta; }; // Hook for the plugin system DECLARE_RIVET_PLUGIN(CMS_2016_I1486238); } diff --git a/analyses/pluginLEP/ALEPH_1996_S3486095.cc b/analyses/pluginLEP/ALEPH_1996_S3486095.cc --- a/analyses/pluginLEP/ALEPH_1996_S3486095.cc +++ b/analyses/pluginLEP/ALEPH_1996_S3486095.cc @@ -1,557 +1,557 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include 
"Rivet/Projections/Sphericity.hh" #include "Rivet/Projections/Thrust.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/ParisiTensor.hh" #include "Rivet/Projections/Hemispheres.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief ALEPH QCD study with event shapes and identified particles /// @author Holger Schulz class ALEPH_1996_S3486095 : public Analysis { public: /// Constructor ALEPH_1996_S3486095() : Analysis("ALEPH_1996_S3486095") { _numChParticles = 0; _weightedTotalPartNum = 0; _weightedTotalNumPiPlus = 0; _weightedTotalNumKPlus = 0; _weightedTotalNumP = 0; _weightedTotalNumPhoton = 0; _weightedTotalNumPi0 = 0; _weightedTotalNumEta = 0; _weightedTotalNumEtaPrime = 0; _weightedTotalNumK0 = 0; _weightedTotalNumLambda0 = 0; _weightedTotalNumXiMinus = 0; _weightedTotalNumSigma1385Plus= 0; _weightedTotalNumXi1530_0 = 0; _weightedTotalNumRho = 0; _weightedTotalNumOmega782 = 0; _weightedTotalNumKStar892_0 = 0; _weightedTotalNumPhi = 0; _weightedTotalNumKStar892Plus = 0; } /// @name Analysis methods //@{ void init() { // Set up projections declare(Beam(), "Beams"); const ChargedFinalState cfs; declare(cfs, "FS"); - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); declare(FastJets(cfs, FastJets::DURHAM, 0.7), "DurhamJets"); declare(Sphericity(cfs), "Sphericity"); declare(ParisiTensor(cfs), "Parisi"); const Thrust thrust(cfs); declare(thrust, "Thrust"); declare(Hemispheres(thrust), "Hemispheres"); // Book histograms _histSphericity = bookHisto1D(1, 1, 1); _histAplanarity = bookHisto1D(2, 1, 1); _hist1MinusT = bookHisto1D(3, 1, 1); _histTMinor = bookHisto1D(4, 1, 1); _histY3 = bookHisto1D(5, 1, 1); _histHeavyJetMass = bookHisto1D(6, 1, 1); _histCParam = bookHisto1D(7, 1, 1); _histOblateness = bookHisto1D(8, 1, 1); _histScaledMom = bookHisto1D(9, 
1, 1); _histRapidityT = bookHisto1D(10, 1, 1); _histPtSIn = bookHisto1D(11, 1, 1); _histPtSOut = bookHisto1D(12, 1, 1); _histLogScaledMom = bookHisto1D(17, 1, 1); _histChMult = bookHisto1D(18, 1, 1); _histMeanChMult = bookHisto1D(19, 1, 1); _histMeanChMultRapt05= bookHisto1D(20, 1, 1); _histMeanChMultRapt10= bookHisto1D(21, 1, 1); _histMeanChMultRapt15= bookHisto1D(22, 1, 1); _histMeanChMultRapt20= bookHisto1D(23, 1, 1); // Particle spectra _histMultiPiPlus = bookHisto1D(25, 1, 1); _histMultiKPlus = bookHisto1D(26, 1, 1); _histMultiP = bookHisto1D(27, 1, 1); _histMultiPhoton = bookHisto1D(28, 1, 1); _histMultiPi0 = bookHisto1D(29, 1, 1); _histMultiEta = bookHisto1D(30, 1, 1); _histMultiEtaPrime = bookHisto1D(31, 1, 1); _histMultiK0 = bookHisto1D(32, 1, 1); _histMultiLambda0 = bookHisto1D(33, 1, 1); _histMultiXiMinus = bookHisto1D(34, 1, 1); _histMultiSigma1385Plus = bookHisto1D(35, 1, 1); _histMultiXi1530_0 = bookHisto1D(36, 1, 1); _histMultiRho = bookHisto1D(37, 1, 1); _histMultiOmega782 = bookHisto1D(38, 1, 1); _histMultiKStar892_0 = bookHisto1D(39, 1, 1); _histMultiPhi = bookHisto1D(40, 1, 1); _histMultiKStar892Plus = bookHisto1D(43, 1, 1); // Mean multiplicities _histMeanMultiPi0 = bookHisto1D(44, 1, 2); _histMeanMultiEta = bookHisto1D(44, 1, 3); _histMeanMultiEtaPrime = bookHisto1D(44, 1, 4); _histMeanMultiK0 = bookHisto1D(44, 1, 5); _histMeanMultiRho = bookHisto1D(44, 1, 6); _histMeanMultiOmega782 = bookHisto1D(44, 1, 7); _histMeanMultiPhi = bookHisto1D(44, 1, 8); _histMeanMultiKStar892Plus = bookHisto1D(44, 1, 9); _histMeanMultiKStar892_0 = bookHisto1D(44, 1, 10); _histMeanMultiLambda0 = bookHisto1D(44, 1, 11); _histMeanMultiSigma0 = bookHisto1D(44, 1, 12); _histMeanMultiXiMinus = bookHisto1D(44, 1, 13); _histMeanMultiSigma1385Plus = bookHisto1D(44, 1, 14); _histMeanMultiXi1530_0 = bookHisto1D(44, 1, 15); _histMeanMultiOmegaOmegaBar = bookHisto1D(44, 1, 16); } void analyze(const Event& e) { // First, veto on leptonic events by requiring at least 4 charged FS 
particles const FinalState& fs = apply<FinalState>(e, "FS"); const size_t numParticles = fs.particles().size(); // Even if we only generate hadronic events, we still need a cut on numCharged >= 2. if (numParticles < 2) { MSG_DEBUG("Failed leptonic event cut"); vetoEvent; } MSG_DEBUG("Passed leptonic event cut"); // Get event weight for histo filling const double weight = e.weight(); _weightedTotalPartNum += numParticles * weight; // Get beams and average beam momentum const ParticlePair& beams = apply<Beam>(e, "Beams").beams(); const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0; MSG_DEBUG("Avg beam momentum = " << meanBeamMom); // Thrusts MSG_DEBUG("Calculating thrust"); const Thrust& thrust = apply<Thrust>(e, "Thrust"); _hist1MinusT->fill(1 - thrust.thrust(), weight); _histTMinor->fill(thrust.thrustMinor(), weight); _histOblateness->fill(thrust.oblateness(), weight); // Jets MSG_DEBUG("Calculating differential jet rate plots:"); const FastJets& durjet = apply<FastJets>(e, "DurhamJets"); if (durjet.clusterSeq()) { double y3 = durjet.clusterSeq()->exclusive_ymerge_max(2); if (y3>0.0) _histY3->fill(-1. * std::log(y3), weight); } // Sphericities MSG_DEBUG("Calculating sphericity"); const Sphericity& sphericity = apply<Sphericity>(e, "Sphericity"); _histSphericity->fill(sphericity.sphericity(), weight); _histAplanarity->fill(sphericity.aplanarity(), weight); // C param MSG_DEBUG("Calculating Parisi params"); const ParisiTensor& parisi = apply<ParisiTensor>(e, "Parisi"); _histCParam->fill(parisi.C(), weight); // Hemispheres MSG_DEBUG("Calculating hemisphere variables"); const Hemispheres& hemi = apply<Hemispheres>(e, "Hemispheres"); _histHeavyJetMass->fill(hemi.scaledM2high(), weight); // Iterate over all the charged final state particles. double Evis = 0.0; double rapt05 = 0.; double rapt10 = 0.; double rapt15 = 0.; double rapt20 = 0.; //int numChParticles = 0; MSG_DEBUG("About to iterate over charged FS particles"); foreach (const Particle& p, fs.particles()) { // Get momentum and energy of each particle.
const Vector3 mom3 = p.p3(); const double energy = p.E(); Evis += energy; _numChParticles += weight; // Scaled momenta. const double mom = mom3.mod(); const double scaledMom = mom/meanBeamMom; const double logInvScaledMom = -std::log(scaledMom); _histLogScaledMom->fill(logInvScaledMom, weight); _histScaledMom->fill(scaledMom, weight); // Get momenta components w.r.t. thrust and sphericity. const double momT = dot(thrust.thrustAxis(), mom3); const double pTinS = dot(mom3, sphericity.sphericityMajorAxis()); const double pToutS = dot(mom3, sphericity.sphericityMinorAxis()); _histPtSIn->fill(fabs(pTinS/GeV), weight); _histPtSOut->fill(fabs(pToutS/GeV), weight); // Calculate rapidities w.r.t. thrust. const double rapidityT = 0.5 * std::log((energy + momT) / (energy - momT)); _histRapidityT->fill(fabs(rapidityT), weight); if (std::fabs(rapidityT) <= 0.5) { rapt05 += 1.0; } if (std::fabs(rapidityT) <= 1.0) { rapt10 += 1.0; } if (std::fabs(rapidityT) <= 1.5) { rapt15 += 1.0; } if (std::fabs(rapidityT) <= 2.0) { rapt20 += 1.0; } } _histChMult->fill(numParticles, weight); _histMeanChMultRapt05->fill(_histMeanChMultRapt05->bin(0).xMid(), rapt05 * weight); _histMeanChMultRapt10->fill(_histMeanChMultRapt10->bin(0).xMid(), rapt10 * weight); _histMeanChMultRapt15->fill(_histMeanChMultRapt15->bin(0).xMid(), rapt15 * weight); _histMeanChMultRapt20->fill(_histMeanChMultRapt20->bin(0).xMid(), rapt20 * weight); _histMeanChMult->fill(_histMeanChMult->bin(0).xMid(), numParticles*weight); //// Final state of unstable particles to get particle spectra - const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS"); + const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS"); for (Particles::const_iterator p = ufs.particles().begin(); p != ufs.particles().end(); ++p) { const Vector3 mom3 = p->momentum().p3(); int id = abs(p->pid()); const double mom = mom3.mod(); const double energy = p->momentum().E(); const double scaledMom = mom/meanBeamMom; const double scaledEnergy = energy/meanBeamMom; // meanBeamMom is
approximately beam energy switch (id) { case 22: _histMultiPhoton->fill(-1.*std::log(scaledMom), weight); _weightedTotalNumPhoton += weight; break; case -321: case 321: _weightedTotalNumKPlus += weight; _histMultiKPlus->fill(scaledMom, weight); break; case 211: case -211: _histMultiPiPlus->fill(scaledMom, weight); _weightedTotalNumPiPlus += weight; break; case 2212: case -2212: _histMultiP->fill(scaledMom, weight); _weightedTotalNumP += weight; break; case 111: _histMultiPi0->fill(scaledMom, weight); _histMeanMultiPi0->fill(_histMeanMultiPi0->bin(0).xMid(), weight); _weightedTotalNumPi0 += weight; break; case 221: if (scaledMom >= 0.1) { _histMultiEta->fill(scaledEnergy, weight); _histMeanMultiEta->fill(_histMeanMultiEta->bin(0).xMid(), weight); _weightedTotalNumEta += weight; } break; case 331: if (scaledMom >= 0.1) { _histMultiEtaPrime->fill(scaledEnergy, weight); _histMeanMultiEtaPrime->fill(_histMeanMultiEtaPrime->bin(0).xMid(), weight); _weightedTotalNumEtaPrime += weight; } break; case 130: //klong case 310: //kshort _histMultiK0->fill(scaledMom, weight); _histMeanMultiK0->fill(_histMeanMultiK0->bin(0).xMid(), weight); _weightedTotalNumK0 += weight; break; case 113: _histMultiRho->fill(scaledMom, weight); _histMeanMultiRho->fill(_histMeanMultiRho->bin(0).xMid(), weight); _weightedTotalNumRho += weight; break; case 223: _histMultiOmega782->fill(scaledMom, weight); _histMeanMultiOmega782->fill(_histMeanMultiOmega782->bin(0).xMid(), weight); _weightedTotalNumOmega782 += weight; break; case 333: _histMultiPhi->fill(scaledMom, weight); _histMeanMultiPhi->fill(_histMeanMultiPhi->bin(0).xMid(), weight); _weightedTotalNumPhi += weight; break; case 313: case -313: _histMultiKStar892_0->fill(scaledMom, weight); _histMeanMultiKStar892_0->fill(_histMeanMultiKStar892_0->bin(0).xMid(), weight); _weightedTotalNumKStar892_0 += weight; break; case 323: case -323: _histMultiKStar892Plus->fill(scaledEnergy, weight); 
_histMeanMultiKStar892Plus->fill(_histMeanMultiKStar892Plus->bin(0).xMid(), weight); _weightedTotalNumKStar892Plus += weight; break; case 3122: case -3122: _histMultiLambda0->fill(scaledMom, weight); _histMeanMultiLambda0->fill(_histMeanMultiLambda0->bin(0).xMid(), weight); _weightedTotalNumLambda0 += weight; break; case 3212: case -3212: _histMeanMultiSigma0->fill(_histMeanMultiSigma0->bin(0).xMid(), weight); break; case 3312: case -3312: _histMultiXiMinus->fill(scaledEnergy, weight); _histMeanMultiXiMinus->fill(_histMeanMultiXiMinus->bin(0).xMid(), weight); _weightedTotalNumXiMinus += weight; break; case 3114: case -3114: case 3224: case -3224: _histMultiSigma1385Plus->fill(scaledEnergy, weight); _histMeanMultiSigma1385Plus->fill(_histMeanMultiSigma1385Plus->bin(0).xMid(), weight); _weightedTotalNumSigma1385Plus += weight; break; case 3324: case -3324: _histMultiXi1530_0->fill(scaledEnergy, weight); _histMeanMultiXi1530_0->fill(_histMeanMultiXi1530_0->bin(0).xMid(), weight); _weightedTotalNumXi1530_0 += weight; break; case 3334: _histMeanMultiOmegaOmegaBar->fill(_histMeanMultiOmegaOmegaBar->bin(0).xMid(), weight); break; } } } /// Finalize void finalize() { // Normalize inclusive single particle distributions to the average number // of charged particles per event. 
const double avgNumParts = _weightedTotalPartNum / sumOfWeights(); normalize(_histPtSIn, avgNumParts); normalize(_histPtSOut, avgNumParts); normalize(_histRapidityT, avgNumParts); normalize(_histY3); normalize(_histLogScaledMom, avgNumParts); normalize(_histScaledMom, avgNumParts); // particle spectra scale(_histMultiPiPlus ,1./sumOfWeights()); scale(_histMultiKPlus ,1./sumOfWeights()); scale(_histMultiP ,1./sumOfWeights()); scale(_histMultiPhoton ,1./sumOfWeights()); scale(_histMultiPi0 ,1./sumOfWeights()); scale(_histMultiEta ,1./sumOfWeights()); scale(_histMultiEtaPrime ,1./sumOfWeights()); scale(_histMultiK0 ,1./sumOfWeights()); scale(_histMultiLambda0 ,1./sumOfWeights()); scale(_histMultiXiMinus ,1./sumOfWeights()); scale(_histMultiSigma1385Plus ,1./sumOfWeights()); scale(_histMultiXi1530_0 ,1./sumOfWeights()); scale(_histMultiRho ,1./sumOfWeights()); scale(_histMultiOmega782 ,1./sumOfWeights()); scale(_histMultiKStar892_0 ,1./sumOfWeights()); scale(_histMultiPhi ,1./sumOfWeights()); scale(_histMultiKStar892Plus ,1./sumOfWeights()); //normalize(_histMultiPiPlus ,_weightedTotalNumPiPlus / sumOfWeights()); //normalize(_histMultiKPlus ,_weightedTotalNumKPlus/sumOfWeights()); //normalize(_histMultiP ,_weightedTotalNumP/sumOfWeights()); //normalize(_histMultiPhoton ,_weightedTotalNumPhoton/sumOfWeights()); //normalize(_histMultiPi0 ,_weightedTotalNumPi0/sumOfWeights()); //normalize(_histMultiEta ,_weightedTotalNumEta/sumOfWeights()); //normalize(_histMultiEtaPrime ,_weightedTotalNumEtaPrime/sumOfWeights()); //normalize(_histMultiK0 ,_weightedTotalNumK0/sumOfWeights()); //normalize(_histMultiLambda0 ,_weightedTotalNumLambda0/sumOfWeights()); //normalize(_histMultiXiMinus ,_weightedTotalNumXiMinus/sumOfWeights()); //normalize(_histMultiSigma1385Plus ,_weightedTotalNumSigma1385Plus/sumOfWeights()); //normalize(_histMultiXi1530_0 ,_weightedTotalNumXi1530_0 /sumOfWeights()); //normalize(_histMultiRho ,_weightedTotalNumRho/sumOfWeights()); 
//normalize(_histMultiOmegaMinus ,_weightedTotalNumOmegaMinus/sumOfWeights()); //normalize(_histMultiKStar892_0 ,_weightedTotalNumKStar892_0/sumOfWeights()); //normalize(_histMultiPhi ,_weightedTotalNumPhi/sumOfWeights()); //normalize(_histMultiKStar892Plus ,_weightedTotalNumKStar892Plus/sumOfWeights()); // event shape normalize(_hist1MinusT); normalize(_histTMinor); normalize(_histOblateness); normalize(_histSphericity); normalize(_histAplanarity); normalize(_histHeavyJetMass); normalize(_histCParam); // mean multiplicities scale(_histChMult , 2.0/sumOfWeights()); // taking into account the binwidth of 2 scale(_histMeanChMult , 1.0/sumOfWeights()); scale(_histMeanChMultRapt05 , 1.0/sumOfWeights()); scale(_histMeanChMultRapt10 , 1.0/sumOfWeights()); scale(_histMeanChMultRapt15 , 1.0/sumOfWeights()); scale(_histMeanChMultRapt20 , 1.0/sumOfWeights()); scale(_histMeanMultiPi0 , 1.0/sumOfWeights()); scale(_histMeanMultiEta , 1.0/sumOfWeights()); scale(_histMeanMultiEtaPrime , 1.0/sumOfWeights()); scale(_histMeanMultiK0 , 1.0/sumOfWeights()); scale(_histMeanMultiRho , 1.0/sumOfWeights()); scale(_histMeanMultiOmega782 , 1.0/sumOfWeights()); scale(_histMeanMultiPhi , 1.0/sumOfWeights()); scale(_histMeanMultiKStar892Plus , 1.0/sumOfWeights()); scale(_histMeanMultiKStar892_0 , 1.0/sumOfWeights()); scale(_histMeanMultiLambda0 , 1.0/sumOfWeights()); scale(_histMeanMultiSigma0 , 1.0/sumOfWeights()); scale(_histMeanMultiXiMinus , 1.0/sumOfWeights()); scale(_histMeanMultiSigma1385Plus, 1.0/sumOfWeights()); scale(_histMeanMultiXi1530_0 , 1.0/sumOfWeights()); scale(_histMeanMultiOmegaOmegaBar, 1.0/sumOfWeights()); } //@} private: /// Store the weighted sums of numbers of charged / charged+neutral /// particles - used to calculate average number of particles for the /// inclusive single particle distributions' normalisations. 
double _weightedTotalPartNum; double _weightedTotalNumPiPlus; double _weightedTotalNumKPlus; double _weightedTotalNumP; double _weightedTotalNumPhoton; double _weightedTotalNumPi0; double _weightedTotalNumEta; double _weightedTotalNumEtaPrime; double _weightedTotalNumK0; double _weightedTotalNumLambda0; double _weightedTotalNumXiMinus; double _weightedTotalNumSigma1385Plus; double _weightedTotalNumXi1530_0; double _weightedTotalNumRho; double _weightedTotalNumOmega782; double _weightedTotalNumKStar892_0; double _weightedTotalNumPhi; double _weightedTotalNumKStar892Plus; double _numChParticles; /// @name Histograms //@{ Histo1DPtr _histSphericity; Histo1DPtr _histAplanarity; Histo1DPtr _hist1MinusT; Histo1DPtr _histTMinor; Histo1DPtr _histY3; Histo1DPtr _histHeavyJetMass; Histo1DPtr _histCParam; Histo1DPtr _histOblateness; Histo1DPtr _histScaledMom; Histo1DPtr _histRapidityT; Histo1DPtr _histPtSIn; Histo1DPtr _histPtSOut; Histo1DPtr _histJetRate2Durham; Histo1DPtr _histJetRate3Durham; Histo1DPtr _histJetRate4Durham; Histo1DPtr _histJetRate5Durham; Histo1DPtr _histLogScaledMom; Histo1DPtr _histChMult; Histo1DPtr _histMultiPiPlus; Histo1DPtr _histMultiKPlus; Histo1DPtr _histMultiP; Histo1DPtr _histMultiPhoton; Histo1DPtr _histMultiPi0; Histo1DPtr _histMultiEta; Histo1DPtr _histMultiEtaPrime; Histo1DPtr _histMultiK0; Histo1DPtr _histMultiLambda0; Histo1DPtr _histMultiXiMinus; Histo1DPtr _histMultiSigma1385Plus; Histo1DPtr _histMultiXi1530_0; Histo1DPtr _histMultiRho; Histo1DPtr _histMultiOmega782; Histo1DPtr _histMultiKStar892_0; Histo1DPtr _histMultiPhi; Histo1DPtr _histMultiKStar892Plus; // mean multiplicities Histo1DPtr _histMeanChMult; Histo1DPtr _histMeanChMultRapt05; Histo1DPtr _histMeanChMultRapt10; Histo1DPtr _histMeanChMultRapt15; Histo1DPtr _histMeanChMultRapt20; Histo1DPtr _histMeanMultiPi0; Histo1DPtr _histMeanMultiEta; Histo1DPtr _histMeanMultiEtaPrime; Histo1DPtr _histMeanMultiK0; Histo1DPtr _histMeanMultiRho; Histo1DPtr _histMeanMultiOmega782; Histo1DPtr 
_histMeanMultiPhi;
    Histo1DPtr _histMeanMultiKStar892Plus;
    Histo1DPtr _histMeanMultiKStar892_0;
    Histo1DPtr _histMeanMultiLambda0;
    Histo1DPtr _histMeanMultiSigma0;
    Histo1DPtr _histMeanMultiXiMinus;
    Histo1DPtr _histMeanMultiSigma1385Plus;
    Histo1DPtr _histMeanMultiXi1530_0;
    Histo1DPtr _histMeanMultiOmegaOmegaBar;
    //@}

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(ALEPH_1996_S3486095);

}

diff --git a/analyses/pluginLEP/ALEPH_1999_S4193598.cc b/analyses/pluginLEP/ALEPH_1999_S4193598.cc
--- a/analyses/pluginLEP/ALEPH_1999_S4193598.cc
+++ b/analyses/pluginLEP/ALEPH_1999_S4193598.cc
@@ -1,78 +1,78 @@
 // -*- C++ -*-
 #include "Rivet/Analysis.hh"
 #include "Rivet/Projections/Beam.hh"
 #include "Rivet/Projections/ChargedFinalState.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

 namespace Rivet {

  class ALEPH_1999_S4193598 : public Analysis {
  public:

    /// @name Constructors etc.
    //@{

    /// Constructor
    ALEPH_1999_S4193598()
      : Analysis("ALEPH_1999_S4193598")
    {  }

    //@}

  public:

    /// Book histograms and initialise projections before the run
    void init() {
      declare(Beam(), "Beams");
-     declare(UnstableFinalState(), "UFS");
+     declare(UnstableParticles(), "UFS");
      declare(ChargedFinalState(), "CFS");
      _h_Xe_Ds = bookHisto1D(1, 1, 1);
    }

    /// Perform the per-event analysis
    void analyze(const Event& event) {
      const double weight = event.weight();

      // Trigger condition
      const ChargedFinalState& cfs = apply<ChargedFinalState>(event, "CFS");
      if (cfs.size() < 5) vetoEvent;

-     const UnstableFinalState& ufs = apply<UnstableFinalState>(event, "UFS");
+     const UnstableParticles& ufs = apply<UnstableParticles>(event, "UFS");

      // Get beams and average beam momentum
      const ParticlePair& beams = apply<Beam>(event, "Beams").beams();
      const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0/GeV;

      // Accept all D*+- decays. Normalisation to data in finalize
      for (const Particle& p : filter_select(ufs.particles(), Cuts::abspid==PID::DSTARPLUS)) {
        // Scaled energy.
        const double energy = p.E()/GeV;
        const double scaledEnergy = energy/meanBeamMom;
        _h_Xe_Ds->fill(scaledEnergy, weight);
      }
    }

    /// Normalise histograms etc., after the run
    void finalize() {
      // Normalize to data integral
      normalize(_h_Xe_Ds, 0.00498);
    }

  private:

    Histo1DPtr _h_Xe_Ds;

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(ALEPH_1999_S4193598);

}

diff --git a/analyses/pluginLEP/ALEPH_2002_S4823664.cc b/analyses/pluginLEP/ALEPH_2002_S4823664.cc
--- a/analyses/pluginLEP/ALEPH_2002_S4823664.cc
+++ b/analyses/pluginLEP/ALEPH_2002_S4823664.cc
@@ -1,91 +1,91 @@
 // -*- C++ -*-
 #include "Rivet/Analysis.hh"
 #include "Rivet/Projections/Beam.hh"
 #include "Rivet/Projections/FinalState.hh"
 #include "Rivet/Projections/ChargedFinalState.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

 namespace Rivet {

  /// @brief ALEPH eta/omega fragmentation function paper
  /// @author Peter Richardson
  class ALEPH_2002_S4823664 : public Analysis {
  public:

    /// Constructor
    ALEPH_2002_S4823664()
      : Analysis("ALEPH_2002_S4823664")
    {}

    /// @name Analysis methods
    //@{

    void init() {
      declare(Beam(), "Beams");
      declare(ChargedFinalState(), "FS");
-     declare(UnstableFinalState(), "UFS");
+     declare(UnstableParticles(), "UFS");
      _histXpEta   = bookHisto1D( 2, 1, 2);
      _histXpOmega = bookHisto1D( 3, 1, 2);
    }

    void analyze(const Event& e) {
      // First, veto on leptonic events by requiring at least 4 charged FS particles
      const FinalState& fs = apply<FinalState>(e, "FS");
      const size_t numParticles = fs.particles().size();

      // Even if we only generate hadronic events, we still need a cut on numCharged >= 2.
if (numParticles < 2) { MSG_DEBUG("Failed leptonic event cut"); vetoEvent; } MSG_DEBUG("Passed leptonic event cut"); // Get event weight for histo filling const double weight = e.weight(); // Get beams and average beam momentum const ParticlePair& beams = apply(e, "Beams").beams(); const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0; MSG_DEBUG("Avg beam momentum = " << meanBeamMom); // Final state of unstable particles to get particle spectra - const UnstableFinalState& ufs = apply(e, "UFS"); + const UnstableParticles& ufs = apply(e, "UFS"); foreach (const Particle& p, ufs.particles()) { if(p.abspid()==221) { double xp = p.p3().mod()/meanBeamMom; _histXpEta->fill(xp, weight); } else if(p.abspid()==223) { double xp = p.p3().mod()/meanBeamMom; _histXpOmega->fill(xp, weight); } } } /// Finalize void finalize() { scale(_histXpEta , 1./sumOfWeights()); scale(_histXpOmega, 1./sumOfWeights()); } //@} private: Histo1DPtr _histXpEta; Histo1DPtr _histXpOmega; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ALEPH_2002_S4823664); } diff --git a/analyses/pluginLEP/ALEPH_2014_I1267648.cc b/analyses/pluginLEP/ALEPH_2014_I1267648.cc --- a/analyses/pluginLEP/ALEPH_2014_I1267648.cc +++ b/analyses/pluginLEP/ALEPH_2014_I1267648.cc @@ -1,142 +1,142 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief Add a short analysis description here class ALEPH_2014_I1267648 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(ALEPH_2014_I1267648); /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { // Initialise and register projections - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); // Book histograms _h_pip0 = bookHisto1D(1, 1, 1); _h_pi2p0 = bookHisto1D(2, 1, 1); _h_pi3p0 = bookHisto1D(3, 1, 1); _h_3pi = 
bookHisto1D(4, 1, 1);
      _h_3pip0 = bookHisto1D(5, 1, 1);
    }

    // Helper function to look for specific decays
    bool isSpecificDecay(const Particle& mother, vector<int> ids) {
      // Trivial check to ignore any other decays but the one in question modulo photons
      const Particles children = mother.children(Cuts::pid!=PID::PHOTON);
      if (children.size()!=ids.size()) return false;
      // Specific bits for tau -> pi decays
      unsigned int n_pi0(0), n_piplus(0), n_piminus(0), n_nutau(0), n_nutaubar(0);
      for (int id : ids) {
        if      (id == PID::PI0)       n_pi0++;
        else if (id == PID::PIPLUS)    n_piplus++;
        else if (id == PID::PIMINUS)   n_piminus++;
        else if (id == PID::NU_TAU)    n_nutau++;
        else if (id == PID::NU_TAUBAR) n_nutaubar++;
      }
      // Check for the explicit decay -- easy as we only deal with pi0 and pi+/-
      if ( count(children, hasPID(PID::PI0))       != n_pi0 )      return false;
      if ( count(children, hasPID(PID::PIPLUS))    != n_piplus )   return false;
      if ( count(children, hasPID(PID::PIMINUS))   != n_piminus )  return false;
      if ( count(children, hasPID(PID::NU_TAU))    != n_nutau )    return false;
      if ( count(children, hasPID(PID::NU_TAUBAR)) != n_nutaubar ) return false;
      return true;
    }

    // Convenience function to get m2 of the sum of all hadronic tau decay product 4-vectors
    double hadronicm2(const Particle& mother) {
      FourMomentum p_tot(0,0,0,0);
      // Iterate over all children that are mesons
      for (const Particle& meson : filter_select(mother.children(), isMeson)) {
        // Add this meson's 4-momentum to the total 4-momentum
        p_tot += meson.momentum();
      }
      return p_tot.mass2();
    }

    /// Perform the per-event analysis
    void analyze(const Event& event) {
      // Loop over taus
-     for (const Particle& tau : apply<UnstableFinalState>(event, "UFS").particles(Cuts::abspid==PID::TAU)) {
+     for (const Particle& tau : apply<UnstableParticles>(event, "UFS").particles(Cuts::abspid==PID::TAU)) {
        // tau -> pi pi0 nu_tau (both charges)
        if (isSpecificDecay(tau, {PID::PIPLUS, PID::PI0, PID::NU_TAUBAR}) ||
            isSpecificDecay(tau, {PID::PIMINUS, PID::PI0, PID::NU_TAU}) ) {
          _h_pip0->fill(hadronicm2(tau), event.weight());
        }
        // tau -> pi pi0 pi0 nu_tau (both charges)
        else if (isSpecificDecay(tau, {PID::PIPLUS, PID::PI0, PID::PI0, PID::NU_TAUBAR}) ||
                 isSpecificDecay(tau, {PID::PIMINUS, PID::PI0, PID::PI0, PID::NU_TAU}) ) {
          _h_pi2p0->fill(hadronicm2(tau), event.weight());
        }
        // tau -> pi pi0 pi0 pi0 (3,1,1)
        else if (isSpecificDecay(tau, {PID::PIPLUS, PID::PI0, PID::PI0, PID::PI0, PID::NU_TAUBAR}) ||
                 isSpecificDecay(tau, {PID::PIMINUS, PID::PI0, PID::PI0, PID::PI0, PID::NU_TAU}) ) {
          _h_pi3p0->fill(hadronicm2(tau), event.weight());
        }
        // tau -> 3 charged pions (4,1,1)
        else if (isSpecificDecay(tau, {PID::PIPLUS, PID::PIPLUS, PID::PIMINUS, PID::NU_TAUBAR}) ||
                 isSpecificDecay(tau, {PID::PIMINUS, PID::PIMINUS, PID::PIPLUS, PID::NU_TAU}) ) {
          _h_3pi->fill(hadronicm2(tau), event.weight());
        }
        // tau -> 3 charged pions + pi0 (5,1,1)
        else if (isSpecificDecay(tau, {PID::PIPLUS, PID::PIPLUS, PID::PIMINUS, PID::PI0, PID::NU_TAUBAR}) ||
                 isSpecificDecay(tau, {PID::PIMINUS, PID::PIMINUS, PID::PIPLUS, PID::PI0, PID::NU_TAU}) ) {
          _h_3pip0->fill(hadronicm2(tau), event.weight());
        }
      }
    }

    /// Normalise histograms etc., after the run
    void finalize() {
      normalize(_h_pip0);  // normalize to unity
      normalize(_h_pi2p0); // normalize to unity
      normalize(_h_pi3p0); // normalize to unity
      normalize(_h_3pi);   // normalize to unity
      normalize(_h_3pip0); // normalize to unity
    }

    //@}

  private:

    /// @name Histograms
    //@{
    Histo1DPtr _h_pip0;
    Histo1DPtr _h_pi2p0;
    Histo1DPtr _h_pi3p0;
    Histo1DPtr _h_3pi;
    Histo1DPtr _h_3pip0;
    //@}

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(ALEPH_2014_I1267648);

}

diff --git a/analyses/pluginLEP/DELPHI_1995_S3137023.cc b/analyses/pluginLEP/DELPHI_1995_S3137023.cc
--- a/analyses/pluginLEP/DELPHI_1995_S3137023.cc
+++ b/analyses/pluginLEP/DELPHI_1995_S3137023.cc
@@ -1,107 +1,107 @@
 // -*- C++ -*-
 #include "Rivet/Analysis.hh"
 #include "Rivet/Projections/Beam.hh"
 #include "Rivet/Projections/FinalState.hh"
 #include "Rivet/Projections/ChargedFinalState.hh"
-#include
"Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief DELPHI strange baryon paper /// @author Hendrik Hoeth class DELPHI_1995_S3137023 : public Analysis { public: /// Constructor DELPHI_1995_S3137023() : Analysis("DELPHI_1995_S3137023") { _weightedTotalNumXiMinus = 0; _weightedTotalNumSigma1385Plus = 0; } /// @name Analysis methods //@{ void init() { declare(Beam(), "Beams"); declare(ChargedFinalState(), "FS"); - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); _histXpXiMinus = bookHisto1D(2, 1, 1); _histXpSigma1385Plus = bookHisto1D(3, 1, 1); } void analyze(const Event& e) { // First, veto on leptonic events by requiring at least 4 charged FS particles const FinalState& fs = apply(e, "FS"); const size_t numParticles = fs.particles().size(); // Even if we only generate hadronic events, we still need a cut on numCharged >= 2. if (numParticles < 2) { MSG_DEBUG("Failed leptonic event cut"); vetoEvent; } MSG_DEBUG("Passed leptonic event cut"); // Get event weight for histo filling const double weight = e.weight(); // Get beams and average beam momentum const ParticlePair& beams = apply(e, "Beams").beams(); const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0; MSG_DEBUG("Avg beam momentum = " << meanBeamMom); // Final state of unstable particles to get particle spectra - const UnstableFinalState& ufs = apply(e, "UFS"); + const UnstableParticles& ufs = apply(e, "UFS"); foreach (const Particle& p, ufs.particles()) { const int id = p.abspid(); switch (id) { case 3312: _histXpXiMinus->fill(p.p3().mod()/meanBeamMom, weight); _weightedTotalNumXiMinus += weight; break; case 3114: case 3224: _histXpSigma1385Plus->fill(p.p3().mod()/meanBeamMom, weight); _weightedTotalNumSigma1385Plus += weight; break; } } } /// Finalize void finalize() { normalize(_histXpXiMinus , _weightedTotalNumXiMinus/sumOfWeights()); normalize(_histXpSigma1385Plus , 
_weightedTotalNumSigma1385Plus/sumOfWeights()); } //@} private: /// Store the weighted sums of numbers of charged / charged+neutral /// particles - used to calculate average number of particles for the /// inclusive single particle distributions' normalisations. double _weightedTotalNumXiMinus; double _weightedTotalNumSigma1385Plus; Histo1DPtr _histXpXiMinus; Histo1DPtr _histXpSigma1385Plus; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(DELPHI_1995_S3137023); } diff --git a/analyses/pluginLEP/DELPHI_1996_S3430090.cc b/analyses/pluginLEP/DELPHI_1996_S3430090.cc --- a/analyses/pluginLEP/DELPHI_1996_S3430090.cc +++ b/analyses/pluginLEP/DELPHI_1996_S3430090.cc @@ -1,552 +1,552 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/Sphericity.hh" #include "Rivet/Projections/Thrust.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/ParisiTensor.hh" #include "Rivet/Projections/Hemispheres.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /** * @brief DELPHI event shapes and identified particle spectra * @author Andy Buckley * @author Hendrik Hoeth * * This is the paper which was used for the original PROFESSOR MC tuning * study. It studies a wide range of e+ e- event shape variables, differential * jet rates in the Durham and JADE schemes, and incorporates identified * particle spectra, from other LEP analyses. * * @par Run conditions * * @arg LEP1 beam energy: \f$ \sqrt{s} = \f$ 91.2 GeV * @arg Run with generic QCD events. 
* @arg No \f$ p_\perp^\text{min} \f$ cutoff is required */ class DELPHI_1996_S3430090 : public Analysis { public: /// Constructor DELPHI_1996_S3430090() : Analysis("DELPHI_1996_S3430090") { _weightedTotalPartNum = 0.0; _passedCutWeightSum = 0.0; _passedCut3WeightSum = 0.0; _passedCut4WeightSum = 0.0; _passedCut5WeightSum = 0.0; } /// @name Analysis methods //@{ void init() { declare(Beam(), "Beams"); // Don't try to introduce a pT or eta cut here. It's all corrected // back. (See Section 2 of the paper.) const ChargedFinalState cfs; declare(cfs, "FS"); - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); declare(FastJets(cfs, FastJets::JADE, 0.7), "JadeJets"); declare(FastJets(cfs, FastJets::DURHAM, 0.7), "DurhamJets"); declare(Sphericity(cfs), "Sphericity"); declare(ParisiTensor(cfs), "Parisi"); const Thrust thrust(cfs); declare(thrust, "Thrust"); declare(Hemispheres(thrust), "Hemispheres"); _histPtTIn = bookHisto1D(1, 1, 1); _histPtTOut = bookHisto1D(2, 1, 1); _histPtSIn = bookHisto1D(3, 1, 1); _histPtSOut = bookHisto1D(4, 1, 1); _histRapidityT = bookHisto1D(5, 1, 1); _histRapidityS = bookHisto1D(6, 1, 1); _histScaledMom = bookHisto1D(7, 1, 1); _histLogScaledMom = bookHisto1D(8, 1, 1); _histPtTOutVsXp = bookProfile1D(9, 1, 1); _histPtVsXp = bookProfile1D(10, 1, 1); _hist1MinusT = bookHisto1D(11, 1, 1); _histTMajor = bookHisto1D(12, 1, 1); _histTMinor = bookHisto1D(13, 1, 1); _histOblateness = bookHisto1D(14, 1, 1); _histSphericity = bookHisto1D(15, 1, 1); _histAplanarity = bookHisto1D(16, 1, 1); _histPlanarity = bookHisto1D(17, 1, 1); _histCParam = bookHisto1D(18, 1, 1); _histDParam = bookHisto1D(19, 1, 1); _histHemiMassH = bookHisto1D(20, 1, 1); _histHemiMassL = bookHisto1D(21, 1, 1); _histHemiMassD = bookHisto1D(22, 1, 1); _histHemiBroadW = bookHisto1D(23, 1, 1); _histHemiBroadN = bookHisto1D(24, 1, 1); _histHemiBroadT = bookHisto1D(25, 1, 1); _histHemiBroadD = bookHisto1D(26, 1, 1); // Binned in y_cut _histDiffRate2Durham = 
bookHisto1D(27, 1, 1); _histDiffRate2Jade = bookHisto1D(28, 1, 1); _histDiffRate3Durham = bookHisto1D(29, 1, 1); _histDiffRate3Jade = bookHisto1D(30, 1, 1); _histDiffRate4Durham = bookHisto1D(31, 1, 1); _histDiffRate4Jade = bookHisto1D(32, 1, 1); // Binned in cos(chi) _histEEC = bookHisto1D(33, 1, 1); _histAEEC = bookHisto1D(34, 1, 1); _histMultiCharged = bookHisto1D(35, 1, 1); _histMultiPiPlus = bookHisto1D(36, 1, 1); _histMultiPi0 = bookHisto1D(36, 1, 2); _histMultiKPlus = bookHisto1D(36, 1, 3); _histMultiK0 = bookHisto1D(36, 1, 4); _histMultiEta = bookHisto1D(36, 1, 5); _histMultiEtaPrime = bookHisto1D(36, 1, 6); _histMultiDPlus = bookHisto1D(36, 1, 7); _histMultiD0 = bookHisto1D(36, 1, 8); _histMultiBPlus0 = bookHisto1D(36, 1, 9); _histMultiF0 = bookHisto1D(37, 1, 1); _histMultiRho = bookHisto1D(38, 1, 1); _histMultiKStar892Plus = bookHisto1D(38, 1, 2); _histMultiKStar892_0 = bookHisto1D(38, 1, 3); _histMultiPhi = bookHisto1D(38, 1, 4); _histMultiDStar2010Plus = bookHisto1D(38, 1, 5); _histMultiF2 = bookHisto1D(39, 1, 1); _histMultiK2Star1430_0 = bookHisto1D(39, 1, 2); _histMultiP = bookHisto1D(40, 1, 1); _histMultiLambda0 = bookHisto1D(40, 1, 2); _histMultiXiMinus = bookHisto1D(40, 1, 3); _histMultiOmegaMinus = bookHisto1D(40, 1, 4); _histMultiDeltaPlusPlus = bookHisto1D(40, 1, 5); _histMultiSigma1385Plus = bookHisto1D(40, 1, 6); _histMultiXi1530_0 = bookHisto1D(40, 1, 7); _histMultiLambdaB0 = bookHisto1D(40, 1, 8); } void analyze(const Event& e) { // First, veto on leptonic events by requiring at least 4 charged FS particles const FinalState& fs = apply(e, "FS"); const size_t numParticles = fs.particles().size(); // Even if we only generate hadronic events, we still need a cut on numCharged >= 2. 
if (numParticles < 2) { MSG_DEBUG("Failed leptonic event cut"); vetoEvent; } MSG_DEBUG("Passed leptonic event cut"); const double weight = e.weight(); _passedCutWeightSum += weight; _weightedTotalPartNum += numParticles * weight; // Get beams and average beam momentum const ParticlePair& beams = apply(e, "Beams").beams(); const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0; MSG_DEBUG("Avg beam momentum = " << meanBeamMom); // Thrusts MSG_DEBUG("Calculating thrust"); const Thrust& thrust = apply(e, "Thrust"); _hist1MinusT->fill(1 - thrust.thrust(), weight); _histTMajor->fill(thrust.thrustMajor(), weight); _histTMinor->fill(thrust.thrustMinor(), weight); _histOblateness->fill(thrust.oblateness(), weight); // Jets const FastJets& durjet = apply(e, "DurhamJets"); const FastJets& jadejet = apply(e, "JadeJets"); if (numParticles >= 3) { _passedCut3WeightSum += weight; if (durjet.clusterSeq()) _histDiffRate2Durham->fill(durjet.clusterSeq()->exclusive_ymerge_max(2), weight); if (jadejet.clusterSeq()) _histDiffRate2Jade->fill(jadejet.clusterSeq()->exclusive_ymerge_max(2), weight); } if (numParticles >= 4) { _passedCut4WeightSum += weight; if (durjet.clusterSeq()) _histDiffRate3Durham->fill(durjet.clusterSeq()->exclusive_ymerge_max(3), weight); if (jadejet.clusterSeq()) _histDiffRate3Jade->fill(jadejet.clusterSeq()->exclusive_ymerge_max(3), weight); } if (numParticles >= 5) { _passedCut5WeightSum += weight; if (durjet.clusterSeq()) _histDiffRate4Durham->fill(durjet.clusterSeq()->exclusive_ymerge_max(4), weight); if (jadejet.clusterSeq()) _histDiffRate4Jade->fill(jadejet.clusterSeq()->exclusive_ymerge_max(4), weight); } // Sphericities MSG_DEBUG("Calculating sphericity"); const Sphericity& sphericity = apply(e, "Sphericity"); _histSphericity->fill(sphericity.sphericity(), weight); _histAplanarity->fill(sphericity.aplanarity(), weight); _histPlanarity->fill(sphericity.planarity(), weight); // C & D params MSG_DEBUG("Calculating Parisi 
params"); const ParisiTensor& parisi = apply(e, "Parisi"); _histCParam->fill(parisi.C(), weight); _histDParam->fill(parisi.D(), weight); // Hemispheres MSG_DEBUG("Calculating hemisphere variables"); const Hemispheres& hemi = apply(e, "Hemispheres"); _histHemiMassH->fill(hemi.scaledM2high(), weight); _histHemiMassL->fill(hemi.scaledM2low(), weight); _histHemiMassD->fill(hemi.scaledM2diff(), weight); _histHemiBroadW->fill(hemi.Bmax(), weight); _histHemiBroadN->fill(hemi.Bmin(), weight); _histHemiBroadT->fill(hemi.Bsum(), weight); _histHemiBroadD->fill(hemi.Bdiff(), weight); // Iterate over all the charged final state particles. double Evis = 0.0; double Evis2 = 0.0; MSG_DEBUG("About to iterate over charged FS particles"); foreach (const Particle& p, fs.particles()) { // Get momentum and energy of each particle. const Vector3 mom3 = p.p3(); const double energy = p.E(); Evis += energy; // Scaled momenta. const double mom = mom3.mod(); const double scaledMom = mom/meanBeamMom; const double logInvScaledMom = -std::log(scaledMom); _histLogScaledMom->fill(logInvScaledMom, weight); _histScaledMom->fill(scaledMom, weight); // Get momenta components w.r.t. thrust and sphericity. const double momT = dot(thrust.thrustAxis(), mom3); const double momS = dot(sphericity.sphericityAxis(), mom3); const double pTinT = dot(mom3, thrust.thrustMajorAxis()); const double pToutT = dot(mom3, thrust.thrustMinorAxis()); const double pTinS = dot(mom3, sphericity.sphericityMajorAxis()); const double pToutS = dot(mom3, sphericity.sphericityMinorAxis()); const double pT = sqrt(pow(pTinT, 2) + pow(pToutT, 2)); _histPtTIn->fill(fabs(pTinT/GeV), weight); _histPtTOut->fill(fabs(pToutT/GeV), weight); _histPtSIn->fill(fabs(pTinS/GeV), weight); _histPtSOut->fill(fabs(pToutS/GeV), weight); _histPtVsXp->fill(scaledMom, fabs(pT/GeV), weight); _histPtTOutVsXp->fill(scaledMom, fabs(pToutT/GeV), weight); // Calculate rapidities w.r.t. thrust and sphericity. 
const double rapidityT = 0.5 * std::log((energy + momT) / (energy - momT)); const double rapidityS = 0.5 * std::log((energy + momS) / (energy - momS)); _histRapidityT->fill(fabs(rapidityT), weight); _histRapidityS->fill(fabs(rapidityS), weight); MSG_TRACE(fabs(rapidityT) << " " << scaledMom/GeV); } Evis2 = Evis*Evis; // (A)EEC // Need iterators since second loop starts at current outer loop iterator, i.e. no "foreach" here! for (Particles::const_iterator p_i = fs.particles().begin(); p_i != fs.particles().end(); ++p_i) { for (Particles::const_iterator p_j = p_i; p_j != fs.particles().end(); ++p_j) { if (p_i == p_j) continue; const Vector3 mom3_i = p_i->momentum().p3(); const Vector3 mom3_j = p_j->momentum().p3(); const double energy_i = p_i->momentum().E(); const double energy_j = p_j->momentum().E(); const double cosij = dot(mom3_i.unit(), mom3_j.unit()); const double eec = (energy_i*energy_j) / Evis2; _histEEC->fill(cosij, eec*weight); if (cosij < 0) _histAEEC->fill( cosij, eec*weight); else _histAEEC->fill(-cosij, -eec*weight); } } _histMultiCharged->fill(_histMultiCharged->bin(0).xMid(), numParticles*weight); // Final state of unstable particles to get particle spectra - const UnstableFinalState& ufs = apply(e, "UFS"); + const UnstableParticles& ufs = apply(e, "UFS"); foreach (const Particle& p, ufs.particles()) { int id = p.abspid(); switch (id) { case 211: _histMultiPiPlus->fill(_histMultiPiPlus->bin(0).xMid(), weight); break; case 111: _histMultiPi0->fill(_histMultiPi0->bin(0).xMid(), weight); break; case 321: _histMultiKPlus->fill(_histMultiKPlus->bin(0).xMid(), weight); break; case 130: case 310: _histMultiK0->fill(_histMultiK0->bin(0).xMid(), weight); break; case 221: _histMultiEta->fill(_histMultiEta->bin(0).xMid(), weight); break; case 331: _histMultiEtaPrime->fill(_histMultiEtaPrime->bin(0).xMid(), weight); break; case 411: _histMultiDPlus->fill(_histMultiDPlus->bin(0).xMid(), weight); break; case 421: _histMultiD0->fill(_histMultiD0->bin(0).xMid(), 
weight); break; case 511: case 521: case 531: _histMultiBPlus0->fill(_histMultiBPlus0->bin(0).xMid(), weight); break; case 9010221: _histMultiF0->fill(_histMultiF0->bin(0).xMid(), weight); break; case 113: _histMultiRho->fill(_histMultiRho->bin(0).xMid(), weight); break; case 323: _histMultiKStar892Plus->fill(_histMultiKStar892Plus->bin(0).xMid(), weight); break; case 313: _histMultiKStar892_0->fill(_histMultiKStar892_0->bin(0).xMid(), weight); break; case 333: _histMultiPhi->fill(_histMultiPhi->bin(0).xMid(), weight); break; case 413: _histMultiDStar2010Plus->fill(_histMultiDStar2010Plus->bin(0).xMid(), weight); break; case 225: _histMultiF2->fill(_histMultiF2->bin(0).xMid(), weight); break; case 315: _histMultiK2Star1430_0->fill(_histMultiK2Star1430_0->bin(0).xMid(), weight); break; case 2212: _histMultiP->fill(_histMultiP->bin(0).xMid(), weight); break; case 3122: _histMultiLambda0->fill(_histMultiLambda0->bin(0).xMid(), weight); break; case 3312: _histMultiXiMinus->fill(_histMultiXiMinus->bin(0).xMid(), weight); break; case 3334: _histMultiOmegaMinus->fill(_histMultiOmegaMinus->bin(0).xMid(), weight); break; case 2224: _histMultiDeltaPlusPlus->fill(_histMultiDeltaPlusPlus->bin(0).xMid(), weight); break; case 3114: _histMultiSigma1385Plus->fill(_histMultiSigma1385Plus->bin(0).xMid(), weight); break; case 3324: _histMultiXi1530_0->fill(_histMultiXi1530_0->bin(0).xMid(), weight); break; case 5122: _histMultiLambdaB0->fill(_histMultiLambdaB0->bin(0).xMid(), weight); break; } } } // Finalize void finalize() { // Normalize inclusive single particle distributions to the average number // of charged particles per event. 
    const double avgNumParts = _weightedTotalPartNum / _passedCutWeightSum;
    normalize(_histPtTIn, avgNumParts);
    normalize(_histPtTOut, avgNumParts);
    normalize(_histPtSIn, avgNumParts);
    normalize(_histPtSOut, avgNumParts);
    normalize(_histRapidityT, avgNumParts);
    normalize(_histRapidityS, avgNumParts);
    normalize(_histLogScaledMom, avgNumParts);
    normalize(_histScaledMom, avgNumParts);
    scale(_histEEC, 1.0/_passedCutWeightSum);
    scale(_histAEEC, 1.0/_passedCutWeightSum);
    scale(_histMultiCharged, 1.0/_passedCutWeightSum);
    scale(_histMultiPiPlus, 1.0/_passedCutWeightSum);
    scale(_histMultiPi0, 1.0/_passedCutWeightSum);
    scale(_histMultiKPlus, 1.0/_passedCutWeightSum);
    scale(_histMultiK0, 1.0/_passedCutWeightSum);
    scale(_histMultiEta, 1.0/_passedCutWeightSum);
    scale(_histMultiEtaPrime, 1.0/_passedCutWeightSum);
    scale(_histMultiDPlus, 1.0/_passedCutWeightSum);
    scale(_histMultiD0, 1.0/_passedCutWeightSum);
    scale(_histMultiBPlus0, 1.0/_passedCutWeightSum);
    scale(_histMultiF0, 1.0/_passedCutWeightSum);
    scale(_histMultiRho, 1.0/_passedCutWeightSum);
    scale(_histMultiKStar892Plus, 1.0/_passedCutWeightSum);
    scale(_histMultiKStar892_0, 1.0/_passedCutWeightSum);
    scale(_histMultiPhi, 1.0/_passedCutWeightSum);
    scale(_histMultiDStar2010Plus, 1.0/_passedCutWeightSum);
    scale(_histMultiF2, 1.0/_passedCutWeightSum);
    scale(_histMultiK2Star1430_0, 1.0/_passedCutWeightSum);
    scale(_histMultiP, 1.0/_passedCutWeightSum);
    scale(_histMultiLambda0, 1.0/_passedCutWeightSum);
    scale(_histMultiXiMinus, 1.0/_passedCutWeightSum);
    scale(_histMultiOmegaMinus, 1.0/_passedCutWeightSum);
    scale(_histMultiDeltaPlusPlus, 1.0/_passedCutWeightSum);
    scale(_histMultiSigma1385Plus, 1.0/_passedCutWeightSum);
    scale(_histMultiXi1530_0, 1.0/_passedCutWeightSum);
    scale(_histMultiLambdaB0, 1.0/_passedCutWeightSum);
    scale(_hist1MinusT, 1.0/_passedCutWeightSum);
    scale(_histTMajor, 1.0/_passedCutWeightSum);
    scale(_histTMinor, 1.0/_passedCutWeightSum);
    scale(_histOblateness, 1.0/_passedCutWeightSum);
    scale(_histSphericity,
          1.0/_passedCutWeightSum);
    scale(_histAplanarity, 1.0/_passedCutWeightSum);
    scale(_histPlanarity, 1.0/_passedCutWeightSum);
    scale(_histHemiMassD, 1.0/_passedCutWeightSum);
    scale(_histHemiMassH, 1.0/_passedCutWeightSum);
    scale(_histHemiMassL, 1.0/_passedCutWeightSum);
    scale(_histHemiBroadW, 1.0/_passedCutWeightSum);
    scale(_histHemiBroadN, 1.0/_passedCutWeightSum);
    scale(_histHemiBroadT, 1.0/_passedCutWeightSum);
    scale(_histHemiBroadD, 1.0/_passedCutWeightSum);
    scale(_histCParam, 1.0/_passedCutWeightSum);
    scale(_histDParam, 1.0/_passedCutWeightSum);
    scale(_histDiffRate2Durham, 1.0/_passedCut3WeightSum);
    scale(_histDiffRate2Jade, 1.0/_passedCut3WeightSum);
    scale(_histDiffRate3Durham, 1.0/_passedCut4WeightSum);
    scale(_histDiffRate3Jade, 1.0/_passedCut4WeightSum);
    scale(_histDiffRate4Durham, 1.0/_passedCut5WeightSum);
    scale(_histDiffRate4Jade, 1.0/_passedCut5WeightSum);
  }

  //@}


private:

  /// Store the weighted sums of numbers of charged / charged+neutral
  /// particles - used to calculate average number of particles for the
  /// inclusive single particle distributions' normalisations.
  double _weightedTotalPartNum;

  /// @name Sums of weights past various cuts
  //@{
  double _passedCutWeightSum;
  double _passedCut3WeightSum;
  double _passedCut4WeightSum;
  double _passedCut5WeightSum;
  //@}

  /// @name Histograms
  //@{
  Histo1DPtr _histPtTIn;
  Histo1DPtr _histPtTOut;
  Histo1DPtr _histPtSIn;
  Histo1DPtr _histPtSOut;
  Histo1DPtr _histRapidityT;
  Histo1DPtr _histRapidityS;
  Histo1DPtr _histScaledMom, _histLogScaledMom;
  Profile1DPtr _histPtTOutVsXp, _histPtVsXp;
  Histo1DPtr _hist1MinusT;
  Histo1DPtr _histTMajor;
  Histo1DPtr _histTMinor;
  Histo1DPtr _histOblateness;
  Histo1DPtr _histSphericity;
  Histo1DPtr _histAplanarity;
  Histo1DPtr _histPlanarity;
  Histo1DPtr _histCParam;
  Histo1DPtr _histDParam;
  Histo1DPtr _histHemiMassD;
  Histo1DPtr _histHemiMassH;
  Histo1DPtr _histHemiMassL;
  Histo1DPtr _histHemiBroadW;
  Histo1DPtr _histHemiBroadN;
  Histo1DPtr _histHemiBroadT;
  Histo1DPtr _histHemiBroadD;
  Histo1DPtr _histDiffRate2Durham;
  Histo1DPtr _histDiffRate2Jade;
  Histo1DPtr _histDiffRate3Durham;
  Histo1DPtr _histDiffRate3Jade;
  Histo1DPtr _histDiffRate4Durham;
  Histo1DPtr _histDiffRate4Jade;
  Histo1DPtr _histEEC, _histAEEC;
  Histo1DPtr _histMultiCharged;
  Histo1DPtr _histMultiPiPlus;
  Histo1DPtr _histMultiPi0;
  Histo1DPtr _histMultiKPlus;
  Histo1DPtr _histMultiK0;
  Histo1DPtr _histMultiEta;
  Histo1DPtr _histMultiEtaPrime;
  Histo1DPtr _histMultiDPlus;
  Histo1DPtr _histMultiD0;
  Histo1DPtr _histMultiBPlus0;
  Histo1DPtr _histMultiF0;
  Histo1DPtr _histMultiRho;
  Histo1DPtr _histMultiKStar892Plus;
  Histo1DPtr _histMultiKStar892_0;
  Histo1DPtr _histMultiPhi;
  Histo1DPtr _histMultiDStar2010Plus;
  Histo1DPtr _histMultiF2;
  Histo1DPtr _histMultiK2Star1430_0;
  Histo1DPtr _histMultiP;
  Histo1DPtr _histMultiLambda0;
  Histo1DPtr _histMultiXiMinus;
  Histo1DPtr _histMultiOmegaMinus;
  Histo1DPtr _histMultiDeltaPlusPlus;
  Histo1DPtr _histMultiSigma1385Plus;
  Histo1DPtr _histMultiXi1530_0;
  Histo1DPtr _histMultiLambdaB0;
  //@}

};

// The hook for the plugin system
DECLARE_RIVET_PLUGIN(DELPHI_1996_S3430090);

}

diff --git a/analyses/pluginLEP/DELPHI_1999_S3960137.cc b/analyses/pluginLEP/DELPHI_1999_S3960137.cc
--- a/analyses/pluginLEP/DELPHI_1999_S3960137.cc
+++ b/analyses/pluginLEP/DELPHI_1999_S3960137.cc
@@ -1,99 +1,99 @@
// -*- C++ -*-
#include "Rivet/Analysis.hh"
#include "Rivet/Projections/Beam.hh"
#include "Rivet/Projections/FinalState.hh"
#include "Rivet/Projections/ChargedFinalState.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

namespace Rivet {

  /// @brief DELPHI rho,f_0 and f_2 fragmentation function paper
  /// @author Peter Richardson
  class DELPHI_1999_S3960137 : public Analysis {
  public:

    /// Constructor
    DELPHI_1999_S3960137()
      : Analysis("DELPHI_1999_S3960137")
    {}

    /// @name Analysis methods
    //@{

    void init() {
      declare(Beam(), "Beams");
      declare(ChargedFinalState(), "FS");
-      declare(UnstableFinalState(), "UFS");
+      declare(UnstableParticles(), "UFS");
      _histXpRho = bookHisto1D( 1, 1, 1);
      _histXpf0  = bookHisto1D( 1, 1, 2);
      _histXpf2  = bookHisto1D( 1, 1, 3);
    }

    void analyze(const Event& e) {
      // First, veto on leptonic events by requiring at least 4 charged FS particles
      const FinalState& fs = apply<FinalState>(e, "FS");
      const size_t numParticles = fs.particles().size();

      // Even if we only generate hadronic events, we still need a cut on numCharged >= 2.
      if (numParticles < 2) {
        MSG_DEBUG("Failed leptonic event cut");
        vetoEvent;
      }
      MSG_DEBUG("Passed leptonic event cut");

      // Get event weight for histo filling
      const double weight = e.weight();

      // Get beams and average beam momentum
      const ParticlePair& beams = apply<Beam>(e, "Beams").beams();
      const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0;
      MSG_DEBUG("Avg beam momentum = " << meanBeamMom);

      // Final state of unstable particles to get particle spectra
-      const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS");
+      const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS");
      foreach (const Particle& p, ufs.particles()) {
        const int id = p.abspid();
        double xp = p.p3().mod()/meanBeamMom;
        switch (id) {
        case 9010221: _histXpf0->fill(xp, weight); break;
        case 225:     _histXpf2->fill(xp, weight); break;
        case 113:     _histXpRho->fill(xp, weight); break;
        }
      }
    }

    /// Finalize
    void finalize() {
      scale(_histXpf0 , 1./sumOfWeights());
      scale(_histXpf2 , 1./sumOfWeights());
      scale(_histXpRho, 1./sumOfWeights());
    }

    //@}

  private:

    Histo1DPtr _histXpf0;
    Histo1DPtr _histXpf2;
    Histo1DPtr _histXpRho;
    //@}

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(DELPHI_1999_S3960137);

}

diff --git a/analyses/pluginLEP/DELPHI_2011_I890503.cc b/analyses/pluginLEP/DELPHI_2011_I890503.cc
--- a/analyses/pluginLEP/DELPHI_2011_I890503.cc
+++ b/analyses/pluginLEP/DELPHI_2011_I890503.cc
@@ -1,85 +1,85 @@
// -*- C++ -*-
#include "Rivet/Analysis.hh"
#include "Rivet/Projections/Beam.hh"
#include "Rivet/Projections/FinalState.hh"
#include "Rivet/Projections/ChargedFinalState.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

namespace Rivet {

  class DELPHI_2011_I890503 : public Analysis {
  public:

    /// Constructor
    DELPHI_2011_I890503()
      : Analysis("DELPHI_2011_I890503")
    { }

    /// Book projections and histograms
    void init() {
      declare(Beam(), "Beams");
      declare(ChargedFinalState(), "FS");
-      declare(UnstableFinalState(), "UFS");
+      declare(UnstableParticles(),
        "UFS");
      _histXbweak = bookHisto1D(1, 1, 1);
      _histMeanXbweak = bookProfile1D(2, 1, 1);
    }

    void analyze(const Event& e) {
      // Even if we only generate hadronic events, we still need a cut on numCharged >= 2.
      if (apply<FinalState>(e, "FS").particles().size() < 2) {
        MSG_DEBUG("Failed ncharged cut");
        vetoEvent;
      }
      MSG_DEBUG("Passed ncharged cut");

      // Get event weight for histo filling
      const double weight = e.weight();

      // Get beams and average beam momentum
      const ParticlePair& beams = apply<Beam>(e, "Beams").beams();
      const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0;
      MSG_DEBUG("Avg beam momentum = " << meanBeamMom);

-      const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS");
+      const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS");
      // Get Bottom hadrons
      const Particles bhads = filter_select(ufs.particles(), isBottomHadron);
      for (const Particle& bhad : bhads) {
        // Check for weak decay, i.e. no more bottom present in children
        if (bhad.children(lastParticleWith(hasBottom)).empty()) {
          const double xp = bhad.E()/meanBeamMom;
          _histXbweak->fill(xp, weight);
          _histMeanXbweak->fill(_histMeanXbweak->bin(0).xMid(), xp, weight);
        }
      }
    }

    // Finalize
    void finalize() {
      normalize(_histXbweak);
    }

  private:

    Histo1DPtr _histXbweak;
    Profile1DPtr _histMeanXbweak;

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(DELPHI_2011_I890503);

}

diff --git a/analyses/pluginLEP/L3_1992_I336180.cc b/analyses/pluginLEP/L3_1992_I336180.cc
--- a/analyses/pluginLEP/L3_1992_I336180.cc
+++ b/analyses/pluginLEP/L3_1992_I336180.cc
@@ -1,90 +1,90 @@
// -*- C++ -*-
#include "Rivet/Analysis.hh"
#include "Rivet/Projections/FinalState.hh"
#include "Rivet/Projections/Beam.hh"
#include "Rivet/Projections/ChargedFinalState.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

namespace Rivet {

  /// @brief L3 inclusive eta production in hadronic Z0 decays
  /// @author Simone Amoroso
  class L3_1992_I336180 : public Analysis {
  public:

    /// Constructor
    DEFAULT_RIVET_ANALYSIS_CTOR(L3_1992_I336180);

    /// @name Analysis methods
    //@{

    /// Book histograms and initialise projections before the run
    void init() {
      // Initialise and register projections
      declare(Beam(), "Beams");
      declare(ChargedFinalState(), "FS");
-      declare(UnstableFinalState(), "UFS");
+      declare(UnstableParticles(), "UFS");

      // Book histograms
      _histXpEta   = bookHisto1D( 1, 1, 1);
      _histLnXpEta = bookHisto1D( 2, 1, 1);
    }

    /// Perform the per-event analysis
    void analyze(const Event& event) {
      // Even if we only generate hadronic events, we still need a cut on numCharged >= 2.
      const FinalState& fs = apply<FinalState>(event, "FS");
      if (fs.particles().size() < 2) {
        MSG_DEBUG("Failed ncharged cut");
        vetoEvent;
      }
      MSG_DEBUG("Passed ncharged cut");

      // Get event weight for histo filling
      const double weight = event.weight();

      // Get beams and average beam momentum
      const ParticlePair& beams = apply<Beam>(event, "Beams").beams();
      const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0;
      MSG_DEBUG("Avg beam momentum = " << meanBeamMom);

      // Final state of unstable particles to get particle spectra
-      const Particles& etas = apply<UnstableFinalState>(event, "UFS").particles(Cuts::abspid==PID::ETA);
+      const Particles& etas = apply<UnstableParticles>(event, "UFS").particles(Cuts::abspid==PID::ETA);
      foreach (const Particle& p, etas) {
        double xp = p.p3().mod()/meanBeamMom;
        MSG_DEBUG("Eta xp = " << xp);
        _histXpEta->fill(xp, weight);
        _histLnXpEta->fill(log(1./xp), weight);
      }
    }

    /// Normalise histograms etc., after the run
    void finalize() {
      scale(_histXpEta, 1./sumOfWeights());
      scale(_histLnXpEta, 1./sumOfWeights());
    }

    //@}

  private:

    Histo1DPtr _histXpEta;
    Histo1DPtr _histLnXpEta;

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(L3_1992_I336180);

}

diff --git a/analyses/pluginLEP/OPAL_1993_I342766.cc b/analyses/pluginLEP/OPAL_1993_I342766.cc
--- a/analyses/pluginLEP/OPAL_1993_I342766.cc
+++ b/analyses/pluginLEP/OPAL_1993_I342766.cc
@@ -1,91 +1,91 @@
// -*- C++ -*-
#include "Rivet/Analysis.hh"
#include "Rivet/Projections/FinalState.hh"
#include "Rivet/Projections/ChargedFinalState.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"
#include "Rivet/Projections/Beam.hh"

namespace Rivet {

  /// @brief A Measurement of K*+- (892) production in hadronic Z0 decays
  /// @author Simone Amoroso
  class OPAL_1993_I342766 : public Analysis {
  public:

    /// Constructor
    DEFAULT_RIVET_ANALYSIS_CTOR(OPAL_1993_I342766);

    /// @name Analysis methods
    //@{

    /// Book histograms and initialise projections before the run
    void init() {
      // Initialise and register projections
      declare(Beam(), "Beams");
      declare(ChargedFinalState(), "FS");
-      declare(UnstableFinalState(), "UFS");
+      declare(UnstableParticles(), "UFS");

      // Book histograms
      _histXeKStar892   = bookHisto1D( 1, 1, 1);
      _histMeanKStar892 = bookHisto1D( 2, 1, 1);
    }

    /// Perform the per-event analysis
    void analyze(const Event& event) {
      const FinalState& fs = apply<FinalState>(event, "FS");
      const size_t numParticles = fs.particles().size();

      // Even if we only generate hadronic events, we still need a cut on numCharged >= 2.
      if (numParticles < 2) {
        MSG_DEBUG("Failed leptonic event cut");
        vetoEvent;
      }
      MSG_DEBUG("Passed leptonic event cut");

      // Get event weight for histo filling
      const double weight = event.weight();

      // Get beams and average beam momentum
      const ParticlePair& beams = apply<Beam>(event, "Beams").beams();
      const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0;
      MSG_DEBUG("Avg beam momentum = " << meanBeamMom);

      // Final state of unstable particles to get particle spectra
-      const UnstableFinalState& ufs = apply<UnstableFinalState>(event, "UFS");
+      const UnstableParticles& ufs = apply<UnstableParticles>(event, "UFS");
      foreach (const Particle& p, ufs.particles(Cuts::abspid==323)) {
        double xp = p.p3().mod()/meanBeamMom;
        _histXeKStar892->fill(xp, weight);
        _histMeanKStar892->fill(_histMeanKStar892->bin(0).xMid(), weight);
      }
    }

    /// Normalise histograms etc., after the run
    void finalize() {
      scale(_histXeKStar892, 1./sumOfWeights());
      scale(_histMeanKStar892, 1./sumOfWeights());
    }

    //@}

  private:

    /// @name Histograms
    Histo1DPtr _histXeKStar892;
    Histo1DPtr _histMeanKStar892;

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(OPAL_1993_I342766);

}

diff --git a/analyses/pluginLEP/OPAL_1995_S3198391.cc b/analyses/pluginLEP/OPAL_1995_S3198391.cc
--- a/analyses/pluginLEP/OPAL_1995_S3198391.cc
+++ b/analyses/pluginLEP/OPAL_1995_S3198391.cc
@@ -1,84 +1,84 @@
// -*- C++ -*-
#include "Rivet/Analysis.hh"
#include "Rivet/Projections/Beam.hh"
#include "Rivet/Projections/FinalState.hh"
#include "Rivet/Projections/ChargedFinalState.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

namespace Rivet {

  /// @brief OPAL Delta++ fragmentation function paper
  /// @author Peter Richardson
  class OPAL_1995_S3198391 : public Analysis {
  public:

    /// Constructor
    OPAL_1995_S3198391()
      : Analysis("OPAL_1995_S3198391")
    {}

    /// @name Analysis methods
    //@{

    void init() {
      declare(Beam(), "Beams");
      declare(ChargedFinalState(), "FS");
-      declare(UnstableFinalState(), "UFS");
+      declare(UnstableParticles(), "UFS");
      _histXpDelta = bookHisto1D( 1, 1, 1);
    }

    void analyze(const Event& e) {
      // First, veto on leptonic events by requiring at least 4 charged FS particles
      const FinalState& fs = apply<FinalState>(e, "FS");
      const size_t numParticles = fs.particles().size();

      // Even if we only generate hadronic events, we still need a cut on numCharged >= 2.
      if (numParticles < 2) {
        MSG_DEBUG("Failed leptonic event cut");
        vetoEvent;
      }
      MSG_DEBUG("Passed leptonic event cut");

      // Get event weight for histo filling
      const double weight = e.weight();

      // Get beams and average beam momentum
      const ParticlePair& beams = apply<Beam>(e, "Beams").beams();
      const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0;
      MSG_DEBUG("Avg beam momentum = " << meanBeamMom);

      // Final state of unstable particles to get particle spectra
-      const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS");
+      const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS");
      foreach (const Particle& p, ufs.particles()) {
        if (p.abspid()==2224) {
          double xp = p.p3().mod()/meanBeamMom;
          _histXpDelta->fill(xp, weight);
        }
      }
    }

    /// Finalize
    void finalize() {
      scale(_histXpDelta, 1./sumOfWeights());
    }

    //@}

  private:

    Histo1DPtr _histXpDelta;
    //@}

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(OPAL_1995_S3198391);

}

diff --git a/analyses/pluginLEP/OPAL_1996_S3257789.cc b/analyses/pluginLEP/OPAL_1996_S3257789.cc
--- a/analyses/pluginLEP/OPAL_1996_S3257789.cc
+++ b/analyses/pluginLEP/OPAL_1996_S3257789.cc
@@ -1,97 +1,97 @@
// -*- C++ -*-
#include "Rivet/Analysis.hh"
#include "Rivet/Projections/Beam.hh"
#include "Rivet/Projections/FinalState.hh"
#include "Rivet/Projections/ChargedFinalState.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

namespace Rivet {

  /// @brief OPAL J/Psi fragmentation function paper
  /// @author Peter Richardson
  class OPAL_1996_S3257789 : public Analysis {
  public:

    /// Constructor
    OPAL_1996_S3257789()
      : Analysis("OPAL_1996_S3257789"),
        _weightSum(0.)
    {}

    /// @name Analysis methods
    //@{

    void init() {
      declare(Beam(), "Beams");
      declare(ChargedFinalState(), "FS");
-      declare(UnstableFinalState(), "UFS");
+      declare(UnstableParticles(), "UFS");
      _histXpJPsi   = bookHisto1D( 1, 1, 1);
      _multJPsi     = bookHisto1D( 2, 1, 1);
      _multPsiPrime = bookHisto1D( 2, 1, 2);
    }

    void analyze(const Event& e) {
      // First, veto on leptonic events by requiring at least 4 charged FS particles
      const FinalState& fs = apply<FinalState>(e, "FS");
      const size_t numParticles = fs.particles().size();

      // Even if we only generate hadronic events, we still need a cut on numCharged >= 2.
      if (numParticles < 2) {
        MSG_DEBUG("Failed leptonic event cut");
        vetoEvent;
      }
      MSG_DEBUG("Passed leptonic event cut");

      // Get event weight for histo filling
      const double weight = e.weight();

      // Get beams and average beam momentum
      const ParticlePair& beams = apply<Beam>(e, "Beams").beams();
      const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0;
      MSG_DEBUG("Avg beam momentum = " << meanBeamMom);

      // Final state of unstable particles to get particle spectra
-      const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS");
+      const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS");
      foreach (const Particle& p, ufs.particles()) {
        if (p.abspid()==443) {
          double xp = p.p3().mod()/meanBeamMom;
          _histXpJPsi->fill(xp, weight);
          _multJPsi->fill(91.2, weight);
          _weightSum += weight;
        }
        else if (p.abspid()==100443) {
          _multPsiPrime->fill(91.2, weight);
        }
      }
    }

    /// Finalize
    void finalize() {
      if (_weightSum>0.)
        scale(_histXpJPsi , 0.1/_weightSum);
      scale(_multJPsi    , 1./sumOfWeights());
      scale(_multPsiPrime, 1./sumOfWeights());
    }

    //@}

  private:

    double _weightSum;
    Histo1DPtr _histXpJPsi;
    Histo1DPtr _multJPsi;
    Histo1DPtr _multPsiPrime;
    //@}

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(OPAL_1996_S3257789);

}

diff --git a/analyses/pluginLEP/OPAL_1997_S3396100.cc b/analyses/pluginLEP/OPAL_1997_S3396100.cc
--- a/analyses/pluginLEP/OPAL_1997_S3396100.cc
+++ b/analyses/pluginLEP/OPAL_1997_S3396100.cc
@@ -1,163 +1,163 @@
// -*- C++ -*-
#include "Rivet/Analysis.hh"
#include "Rivet/Projections/Beam.hh"
#include "Rivet/Projections/FinalState.hh"
#include "Rivet/Projections/ChargedFinalState.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

namespace Rivet {

  /// @brief OPAL strange baryon paper
  /// @author Peter Richardson
  class OPAL_1997_S3396100 : public Analysis {
  public:

    /// Constructor
    OPAL_1997_S3396100()
      : Analysis("OPAL_1997_S3396100"),
        _weightedTotalNumLambda(0.), _weightedTotalNumXiMinus(0.),
        _weightedTotalNumSigma1385Plus(0.), _weightedTotalNumSigma1385Minus(0.),
        _weightedTotalNumXi1530(0.), _weightedTotalNumLambda1520(0.)
    {}

    /// @name Analysis methods
    //@{

    void init() {
      declare(Beam(), "Beams");
      declare(ChargedFinalState(), "FS");
-      declare(UnstableFinalState(), "UFS");
+      declare(UnstableParticles(), "UFS");
      _histXpLambda         = bookHisto1D( 1, 1, 1);
      _histXiLambda         = bookHisto1D( 2, 1, 1);
      _histXpXiMinus        = bookHisto1D( 3, 1, 1);
      _histXiXiMinus        = bookHisto1D( 4, 1, 1);
      _histXpSigma1385Plus  = bookHisto1D( 5, 1, 1);
      _histXiSigma1385Plus  = bookHisto1D( 6, 1, 1);
      _histXpSigma1385Minus = bookHisto1D( 7, 1, 1);
      _histXiSigma1385Minus = bookHisto1D( 8, 1, 1);
      _histXpXi1530         = bookHisto1D( 9, 1, 1);
      _histXiXi1530         = bookHisto1D(10, 1, 1);
      _histXpLambda1520     = bookHisto1D(11, 1, 1);
      _histXiLambda1520     = bookHisto1D(12, 1, 1);
    }

    void analyze(const Event& e) {
      // First, veto on leptonic events by requiring at least 4 charged FS particles
      const FinalState& fs = apply<FinalState>(e, "FS");
      const size_t numParticles = fs.particles().size();

      // Even if we only generate hadronic events, we still need a cut on numCharged >= 2.
      if (numParticles < 2) {
        MSG_DEBUG("Failed leptonic event cut");
        vetoEvent;
      }
      MSG_DEBUG("Passed leptonic event cut");

      // Get event weight for histo filling
      const double weight = e.weight();

      // Get beams and average beam momentum
      const ParticlePair& beams = apply<Beam>(e, "Beams").beams();
      const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0;
      MSG_DEBUG("Avg beam momentum = " << meanBeamMom);

      // Final state of unstable particles to get particle spectra
-      const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS");
+      const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS");
      foreach (const Particle& p, ufs.particles()) {
        const int id = p.abspid();
        double xp = p.p3().mod()/meanBeamMom;
        double xi = -log(xp);
        switch (id) {
        case 3312:
          _histXpXiMinus->fill(xp, weight);
          _histXiXiMinus->fill(xi, weight);
          _weightedTotalNumXiMinus += weight;
          break;
        case 3224:
          _histXpSigma1385Plus->fill(xp, weight);
          _histXiSigma1385Plus->fill(xi, weight);
          _weightedTotalNumSigma1385Plus += weight;
          break;
        case 3114:
          _histXpSigma1385Minus->fill(xp, weight);
          _histXiSigma1385Minus->fill(xi, weight);
          _weightedTotalNumSigma1385Minus += weight;
          break;
        case 3122:
          _histXpLambda->fill(xp, weight);
          _histXiLambda->fill(xi, weight);
          _weightedTotalNumLambda += weight;
          break;
        case 3324:
          _histXpXi1530->fill(xp, weight);
          _histXiXi1530->fill(xi, weight);
          _weightedTotalNumXi1530 += weight;
          break;
        case 3124:
          _histXpLambda1520->fill(xp, weight);
          _histXiLambda1520->fill(xi, weight);
          _weightedTotalNumLambda1520 += weight;
          break;
        }
      }
    }

    /// Finalize
    void finalize() {
      normalize(_histXpLambda        , _weightedTotalNumLambda        /sumOfWeights());
      normalize(_histXiLambda        , _weightedTotalNumLambda        /sumOfWeights());
      normalize(_histXpXiMinus       , _weightedTotalNumXiMinus       /sumOfWeights());
      normalize(_histXiXiMinus       , _weightedTotalNumXiMinus       /sumOfWeights());
      normalize(_histXpSigma1385Plus , _weightedTotalNumSigma1385Plus /sumOfWeights());
      normalize(_histXiSigma1385Plus , _weightedTotalNumSigma1385Plus /sumOfWeights());
      normalize(_histXpSigma1385Minus, _weightedTotalNumSigma1385Plus /sumOfWeights());
      normalize(_histXiSigma1385Minus, _weightedTotalNumSigma1385Plus /sumOfWeights());
      normalize(_histXpXi1530        , _weightedTotalNumXi1530        /sumOfWeights());
      normalize(_histXiXi1530        , _weightedTotalNumXi1530        /sumOfWeights());
      normalize(_histXpLambda1520    , _weightedTotalNumLambda1520    /sumOfWeights());
      normalize(_histXiLambda1520    , _weightedTotalNumLambda1520    /sumOfWeights());
    }

    //@}

  private:

    /// Store the weighted sums of numbers of charged / charged+neutral
    /// particles - used to calculate average number of particles for the
    /// inclusive single particle distributions' normalisations.
    double _weightedTotalNumLambda;
    double _weightedTotalNumXiMinus;
    double _weightedTotalNumSigma1385Plus;
    double _weightedTotalNumSigma1385Minus;
    double _weightedTotalNumXi1530;
    double _weightedTotalNumLambda1520;

    Histo1DPtr _histXpLambda        ;
    Histo1DPtr _histXiLambda        ;
    Histo1DPtr _histXpXiMinus       ;
    Histo1DPtr _histXiXiMinus       ;
    Histo1DPtr _histXpSigma1385Plus ;
    Histo1DPtr _histXiSigma1385Plus ;
    Histo1DPtr _histXpSigma1385Minus;
    Histo1DPtr _histXiSigma1385Minus;
    Histo1DPtr _histXpXi1530        ;
    Histo1DPtr _histXiXi1530        ;
    Histo1DPtr _histXpLambda1520    ;
    Histo1DPtr _histXiLambda1520    ;
    //@}

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(OPAL_1997_S3396100);

}

diff --git a/analyses/pluginLEP/OPAL_1997_S3608263.cc b/analyses/pluginLEP/OPAL_1997_S3608263.cc
--- a/analyses/pluginLEP/OPAL_1997_S3608263.cc
+++ b/analyses/pluginLEP/OPAL_1997_S3608263.cc
@@ -1,85 +1,85 @@
// -*- C++ -*-
#include "Rivet/Analysis.hh"
#include "Rivet/Projections/Beam.hh"
#include "Rivet/Projections/FinalState.hh"
#include "Rivet/Projections/ChargedFinalState.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

namespace Rivet {

  /// @brief OPAL K*0 fragmentation function paper
  /// @author Peter Richardson
  class OPAL_1997_S3608263 : public Analysis {
  public:

    /// Constructor
    OPAL_1997_S3608263()
      : Analysis("OPAL_1997_S3608263")
    {}

    /// @name Analysis methods
    //@{

    void init() {
      declare(Beam(), "Beams");
      declare(ChargedFinalState(), "FS");
-      declare(UnstableFinalState(), "UFS");
+      declare(UnstableParticles(), "UFS");
      _histXeK0 = bookHisto1D( 1, 1, 1);
    }

    void analyze(const Event& e) {
      // First, veto on leptonic events by requiring at least 4 charged FS particles
      const FinalState& fs = apply<FinalState>(e, "FS");
      const size_t numParticles = fs.particles().size();

      // Even if we only generate hadronic events, we still need a cut on numCharged >= 2.
      if (numParticles < 2) {
        MSG_DEBUG("Failed leptonic event cut");
        vetoEvent;
      }
      MSG_DEBUG("Passed leptonic event cut");

      // Get event weight for histo filling
      const double weight = e.weight();

      // Get beams and average beam momentum
      const ParticlePair& beams = apply<Beam>(e, "Beams").beams();
      const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0;
      MSG_DEBUG("Avg beam momentum = " << meanBeamMom);

      // Final state of unstable particles to get particle spectra
-      const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS");
+      const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS");
      foreach (const Particle& p, ufs.particles()) {
        const int id = p.abspid();
        if (id==313) {
          double xp = p.p3().mod()/meanBeamMom;
          _histXeK0->fill(xp, weight);
        }
      }
    }

    /// Finalize
    void finalize() {
      scale(_histXeK0, 1./sumOfWeights());
    }

    //@}

  private:

    Histo1DPtr _histXeK0;
    //@}

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(OPAL_1997_S3608263);

}

diff --git a/analyses/pluginLEP/OPAL_1998_S3702294.cc b/analyses/pluginLEP/OPAL_1998_S3702294.cc
--- a/analyses/pluginLEP/OPAL_1998_S3702294.cc
+++ b/analyses/pluginLEP/OPAL_1998_S3702294.cc
@@ -1,99 +1,99 @@
// -*- C++ -*-
#include "Rivet/Analysis.hh"
#include "Rivet/Projections/Beam.hh"
#include "Rivet/Projections/FinalState.hh"
#include "Rivet/Projections/ChargedFinalState.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

namespace Rivet {

  /// @brief OPAL f0,f2 and phi fragmentation function paper
  /// @author Peter Richardson
  class OPAL_1998_S3702294 : public Analysis {
  public:

    /// Constructor
    OPAL_1998_S3702294()
      : Analysis("OPAL_1998_S3702294")
    {}

    /// @name Analysis methods
    //@{

    void init() {
      declare(Beam(), "Beams");
      declare(ChargedFinalState(), "FS");
-      declare(UnstableFinalState(), "UFS");
+      declare(UnstableParticles(), "UFS");
      _histXpf0  = bookHisto1D( 2, 1, 1);
      _histXpf2  = bookHisto1D( 2, 1, 2);
      _histXpPhi = bookHisto1D( 2, 1, 3);
    }

    void analyze(const Event& e) {
      // First, veto on leptonic events by requiring at least 4 charged FS particles
      const FinalState& fs = apply<FinalState>(e, "FS");
      const size_t numParticles = fs.particles().size();

      // Even if we only generate hadronic events, we still need a cut on numCharged >= 2.
      if (numParticles < 2) {
        MSG_DEBUG("Failed leptonic event cut");
        vetoEvent;
      }
      MSG_DEBUG("Passed leptonic event cut");

      // Get event weight for histo filling
      const double weight = e.weight();

      // Get beams and average beam momentum
      const ParticlePair& beams = apply<Beam>(e, "Beams").beams();
      const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0;
      MSG_DEBUG("Avg beam momentum = " << meanBeamMom);

      // Final state of unstable particles to get particle spectra
-      const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS");
+      const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS");
      foreach (const Particle& p, ufs.particles()) {
        const int id = p.abspid();
        double xp = p.p3().mod()/meanBeamMom;
        switch (id) {
        case 9010221: _histXpf0->fill(xp, weight); break;
        case 225:     _histXpf2->fill(xp, weight); break;
        case 333:     _histXpPhi->fill(xp, weight); break;
        }
      }
    }

    /// Finalize
    void finalize() {
      scale(_histXpf0 , 1./sumOfWeights());
      scale(_histXpf2 , 1./sumOfWeights());
      scale(_histXpPhi, 1./sumOfWeights());
    }

    //@}

  private:

    Histo1DPtr _histXpf0;
    Histo1DPtr _histXpf2;
    Histo1DPtr _histXpPhi;
    //@}

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(OPAL_1998_S3702294);

}

diff --git a/analyses/pluginLEP/OPAL_1998_S3749908.cc b/analyses/pluginLEP/OPAL_1998_S3749908.cc
--- a/analyses/pluginLEP/OPAL_1998_S3749908.cc
+++ b/analyses/pluginLEP/OPAL_1998_S3749908.cc
@@ -1,152 +1,152 @@
// -*- C++ -*-
#include "Rivet/Analysis.hh"
#include "Rivet/Projections/Beam.hh"
#include "Rivet/Projections/FinalState.hh"
#include "Rivet/Projections/ChargedFinalState.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

namespace Rivet {

  /// @brief OPAL photon/light meson paper
  /// @author Peter Richardson
  class
  OPAL_1998_S3749908 : public Analysis {
  public:

    /// Constructor
    OPAL_1998_S3749908()
      : Analysis("OPAL_1998_S3749908")
    {}

    /// @name Analysis methods
    //@{

    void init() {
      declare(Beam(), "Beams");
      declare(ChargedFinalState(), "FS");
-      declare(UnstableFinalState(), "UFS");
+      declare(UnstableParticles(), "UFS");
      _histXePhoton   = bookHisto1D( 2, 1, 1);
      _histXiPhoton   = bookHisto1D( 3, 1, 1);
      _histXePi       = bookHisto1D( 4, 1, 1);
      _histXiPi       = bookHisto1D( 5, 1, 1);
      _histXeEta      = bookHisto1D( 6, 1, 1);
      _histXiEta      = bookHisto1D( 7, 1, 1);
      _histXeRho      = bookHisto1D( 8, 1, 1);
      _histXiRho      = bookHisto1D( 9, 1, 1);
      _histXeOmega    = bookHisto1D(10, 1, 1);
      _histXiOmega    = bookHisto1D(11, 1, 1);
      _histXeEtaPrime = bookHisto1D(12, 1, 1);
      _histXiEtaPrime = bookHisto1D(13, 1, 1);
      _histXeA0       = bookHisto1D(14, 1, 1);
      _histXiA0       = bookHisto1D(15, 1, 1);
    }

    void analyze(const Event& e) {
      // First, veto on leptonic events by requiring at least 4 charged FS particles
      const FinalState& fs = apply<FinalState>(e, "FS");
      const size_t numParticles = fs.particles().size();

      // Even if we only generate hadronic events, we still need a cut on numCharged >= 2.
      if (numParticles < 2) {
        MSG_DEBUG("Failed leptonic event cut");
        vetoEvent;
      }
      MSG_DEBUG("Passed leptonic event cut");

      // Get event weight for histo filling
      const double weight = e.weight();

      // Get beams and average beam momentum
      const ParticlePair& beams = apply<Beam>(e, "Beams").beams();
      const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0;
      MSG_DEBUG("Avg beam momentum = " << meanBeamMom);

      // Final state of unstable particles to get particle spectra
-      const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS");
+      const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS");
      foreach (const Particle& p, ufs.particles()) {
        const int id = p.abspid();
        double xi = -log(p.p3().mod()/meanBeamMom);
        double xE = p.E()/meanBeamMom;
        switch (id) {
        case 22: // Photons
          _histXePhoton->fill(xE, weight);
          _histXiPhoton->fill(xi, weight);
          break;
        case 111: // Neutral pions
          _histXePi->fill(xE, weight);
          _histXiPi->fill(xi, weight);
          break;
        case 221: // eta
          _histXeEta->fill(xE, weight);
          _histXiEta->fill(xi, weight);
          break;
        case 213: // Charged rho (770)
          _histXeRho->fill(xE, weight);
          _histXiRho->fill(xi, weight);
          break;
        case 223: // omega (782)
          _histXeOmega->fill(xE, weight);
          _histXiOmega->fill(xi, weight);
          break;
        case 331: // eta' (958)
          _histXeEtaPrime->fill(xE, weight);
          _histXiEtaPrime->fill(xi, weight);
          break;
        case 9000211: // Charged a_0 (980)
          _histXeA0->fill(xE, weight);
          _histXiA0->fill(xi, weight);
          break;
        }
      }
    }

    /// Finalize
    void finalize() {
      scale(_histXePhoton  , 1./sumOfWeights());
      scale(_histXiPhoton  , 1./sumOfWeights());
      scale(_histXePi      , 1./sumOfWeights());
      scale(_histXiPi      , 1./sumOfWeights());
      scale(_histXeEta     , 1./sumOfWeights());
      scale(_histXiEta     , 1./sumOfWeights());
      scale(_histXeRho     , 1./sumOfWeights());
      scale(_histXiRho     , 1./sumOfWeights());
      scale(_histXeOmega   , 1./sumOfWeights());
      scale(_histXiOmega   , 1./sumOfWeights());
      scale(_histXeEtaPrime, 1./sumOfWeights());
      scale(_histXiEtaPrime, 1./sumOfWeights());
      scale(_histXeA0      , 1./sumOfWeights());
      scale(_histXiA0      ,
1./sumOfWeights()); } //@} private: Histo1DPtr _histXePhoton ; Histo1DPtr _histXiPhoton ; Histo1DPtr _histXePi ; Histo1DPtr _histXiPi ; Histo1DPtr _histXeEta ; Histo1DPtr _histXiEta ; Histo1DPtr _histXeRho ; Histo1DPtr _histXiRho ; Histo1DPtr _histXeOmega ; Histo1DPtr _histXiOmega ; Histo1DPtr _histXeEtaPrime; Histo1DPtr _histXiEtaPrime; Histo1DPtr _histXeA0 ; Histo1DPtr _histXiA0 ; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(OPAL_1998_S3749908); } diff --git a/analyses/pluginLEP/OPAL_2000_S4418603.cc b/analyses/pluginLEP/OPAL_2000_S4418603.cc --- a/analyses/pluginLEP/OPAL_2000_S4418603.cc +++ b/analyses/pluginLEP/OPAL_2000_S4418603.cc @@ -1,86 +1,86 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief OPAL K0 fragmentation function paper /// @author Peter Richardson class OPAL_2000_S4418603 : public Analysis { public: /// Constructor OPAL_2000_S4418603() : Analysis("OPAL_2000_S4418603") {} /// @name Analysis methods //@{ void init() { declare(Beam(), "Beams"); declare(ChargedFinalState(), "FS"); - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); _histXeK0 = bookHisto1D( 3, 1, 1); } void analyze(const Event& e) { // First, veto on leptonic events by requiring at least 4 charged FS particles const FinalState& fs = apply(e, "FS"); const size_t numParticles = fs.particles().size(); // Even if we only generate hadronic events, we still need a cut on numCharged >= 2. 
if (numParticles < 2) { MSG_DEBUG("Failed leptonic event cut"); vetoEvent; } MSG_DEBUG("Passed leptonic event cut"); // Get event weight for histo filling const double weight = e.weight(); // Get beams and average beam momentum const ParticlePair& beams = apply(e, "Beams").beams(); const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0; MSG_DEBUG("Avg beam momentum = " << meanBeamMom); // Final state of unstable particles to get particle spectra - const UnstableFinalState& ufs = apply(e, "UFS"); + const UnstableParticles& ufs = apply(e, "UFS"); foreach (const Particle& p, ufs.particles()) { const int id = p.abspid(); if (id == PID::K0S || id == PID::K0L) { double xE = p.E()/meanBeamMom; _histXeK0->fill(xE, weight); } } } /// Finalize void finalize() { scale(_histXeK0, 1./sumOfWeights()); } //@} private: Histo1DPtr _histXeK0; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(OPAL_2000_S4418603); } diff --git a/analyses/pluginLEP/OPAL_2003_I599181.cc b/analyses/pluginLEP/OPAL_2003_I599181.cc --- a/analyses/pluginLEP/OPAL_2003_I599181.cc +++ b/analyses/pluginLEP/OPAL_2003_I599181.cc @@ -1,83 +1,83 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Projections/Beam.hh" namespace Rivet { /// @brief OPAL b-fragmentation measurement for weak B-hadron decays /// @author Simone Amoroso class OPAL_2003_I599181 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(OPAL_2003_I599181); /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { // Initialise and register projections declare(Beam(), "Beams"); - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); // Book histograms _histXbweak = bookHisto1D(1, 1, 1); _histMeanXbweak = bookProfile1D(2, 1, 1); } /// Perform the per-event analysis void analyze(const Event& event) { // 
Get event weight for histo filling const double weight = event.weight(); // Get beams and average beam momentum const ParticlePair& beams = apply(event, "Beams").beams(); const double meanBeamMom = ( beams.first.p3().mod() +beams.second.p3().mod() ) / 2.0; MSG_DEBUG("Avg beam momentum = " << meanBeamMom); - const UnstableFinalState& ufs = apply(event, "UFS"); + const UnstableParticles& ufs = apply(event, "UFS"); // Get Bottom hadrons const Particles bhads = filter_select(ufs.particles(), isBottomHadron); for (const Particle& bhad : bhads) { // Check for weak decay, i.e. no more bottom present in children if (bhad.children(lastParticleWith(hasBottom)).empty()) { const double xp = bhad.E()/meanBeamMom; _histXbweak->fill(xp, weight); _histMeanXbweak->fill(_histMeanXbweak->bin(0).xMid(), xp, weight); } } } /// Normalise histograms etc., after the run void finalize() { normalize(_histXbweak); } //@} private: Histo1DPtr _histXbweak; Profile1DPtr _histMeanXbweak; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(OPAL_2003_I599181); } diff --git a/analyses/pluginLEP/SLD_1999_S3743934.cc b/analyses/pluginLEP/SLD_1999_S3743934.cc --- a/analyses/pluginLEP/SLD_1999_S3743934.cc +++ b/analyses/pluginLEP/SLD_1999_S3743934.cc @@ -1,642 +1,642 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/FinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/InitialQuarks.hh" #include "Rivet/Projections/Thrust.hh" namespace Rivet { /// @brief SLD flavour-dependent fragmentation paper /// @author Peter Richardson class SLD_1999_S3743934 : public Analysis { public: /// Constructor SLD_1999_S3743934() : Analysis("SLD_1999_S3743934"), _SumOfudsWeights(0.), _SumOfcWeights(0.), _SumOfbWeights(0.), _multPiPlus(4,0.),_multKPlus(4,0.),_multK0(4,0.), _multKStar0(4,0.),_multPhi(4,0.), 
_multProton(4,0.),_multLambda(4,0.) { } /// @name Analysis methods //@{ void analyze(const Event& e) { // First, veto on leptonic events by requiring at least 4 charged FS particles const FinalState& fs = apply(e, "FS"); const size_t numParticles = fs.particles().size(); // Even if we only generate hadronic events, we still need a cut on numCharged >= 2. if (numParticles < 2) { MSG_DEBUG("Failed ncharged cut"); vetoEvent; } MSG_DEBUG("Passed ncharged cut"); // Get event weight for histo filling const double weight = e.weight(); // Get beams and average beam momentum const ParticlePair& beams = apply(e, "Beams").beams(); const double meanBeamMom = ( beams.first.p3().mod() + beams.second.p3().mod() ) / 2.0; MSG_DEBUG("Avg beam momentum = " << meanBeamMom); int flavour = 0; const InitialQuarks& iqf = apply(e, "IQF"); // If we only have two quarks (qqbar), just take the flavour. // If we have more than two quarks, look for the highest energetic q-qbar pair. /// @todo Can we make this based on hadron flavour instead? 
Particles quarks; if (iqf.particles().size() == 2) { flavour = iqf.particles().front().abspid(); quarks = iqf.particles(); } else { map<int, Particle> quarkmap; foreach (const Particle& p, iqf.particles()) { if (quarkmap.find(p.pid()) == quarkmap.end()) quarkmap[p.pid()] = p; else if (quarkmap[p.pid()].E() < p.E()) quarkmap[p.pid()] = p; } double maxenergy = 0.; for (int i = 1; i <= 5; ++i) { double energy(0.); if (quarkmap.find( i) != quarkmap.end()) energy += quarkmap[ i].E(); if (quarkmap.find(-i) != quarkmap.end()) energy += quarkmap[-i].E(); if (energy > maxenergy) { maxenergy = energy; flavour = i; } } if (quarkmap.find(flavour) != quarkmap.end()) quarks.push_back(quarkmap[flavour]); if (quarkmap.find(-flavour) != quarkmap.end()) quarks.push_back(quarkmap[-flavour]); } switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _SumOfudsWeights += weight; break; case PID::CQUARK: _SumOfcWeights += weight; break; case PID::BQUARK: _SumOfbWeights += weight; break; } // thrust axis for projections Vector3 axis = apply<Thrust>(e, "Thrust").thrustAxis(); double dot(0.); if (!quarks.empty()) { dot = quarks[0].p3().dot(axis); if (quarks[0].pid() < 0) dot *= -1; } foreach (const Particle& p, fs.particles()) { const double xp = p.p3().mod()/meanBeamMom; // if in quark or antiquark hemisphere bool quark = p.p3().dot(axis)*dot > 0.; _h_XpChargedN->fill(xp, weight); _temp_XpChargedN1->fill(xp, weight); _temp_XpChargedN2->fill(xp, weight); _temp_XpChargedN3->fill(xp, weight); int id = p.abspid(); // charged pions if (id == PID::PIPLUS) { _h_XpPiPlusN->fill(xp, weight); _multPiPlus[0] += weight; switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _multPiPlus[1] += weight; _h_XpPiPlusLight->fill(xp, weight); if( ( quark && p.pid()>0 ) || ( !quark && p.pid()<0 )) _h_RPiPlus->fill(xp, weight); else _h_RPiMinus->fill(xp, weight); break; case PID::CQUARK: _multPiPlus[2] += weight; _h_XpPiPlusCharm->fill(xp, weight); break; case PID::BQUARK: _multPiPlus[3] += weight;
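Editor's note: the event-flavour tag above scans flavours d..b and, per the comment, should keep the flavour whose most energetic q-qbar pair carries the largest summed energy. That selection can be sketched in isolation (the map-of-energies interface is a simplification of the `Particle` map used in the analysis):

```cpp
#include <cassert>
#include <map>

// Simplified sketch of the flavour choice: keys are PDG ids (+i quark,
// -i antiquark, i = 1..5 for d,u,s,c,b), values are the energy of the
// most energetic quark of that id. Returns 0 if no quarks are present.
inline int pickEventFlavour(const std::map<int, double>& quarkEnergy) {
  int flavour = 0;
  double maxEnergy = 0.;
  for (int i = 1; i <= 5; ++i) {
    double energy = 0.;
    std::map<int, double>::const_iterator q = quarkEnergy.find(i);
    if (q != quarkEnergy.end()) energy += q->second;
    std::map<int, double>::const_iterator qbar = quarkEnergy.find(-i);
    if (qbar != quarkEnergy.end()) energy += qbar->second;
    // Track the running maximum so the *highest-energy* pair wins.
    if (energy > maxEnergy) { maxEnergy = energy; flavour = i; }
  }
  return flavour;
}
```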
_h_XpPiPlusBottom->fill(xp, weight); break; } } else if (id == PID::KPLUS) { _h_XpKPlusN->fill(xp, weight); _multKPlus[0] += weight; switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _multKPlus[1] += weight; _temp_XpKPlusLight->fill(xp, weight); _h_XpKPlusLight->fill(xp, weight); if( ( quark && p.pid()>0 ) || ( !quark && p.pid()<0 )) _h_RKPlus->fill(xp, weight); else _h_RKMinus->fill(xp, weight); break; case PID::CQUARK: _multKPlus[2] += weight; _h_XpKPlusCharm->fill(xp, weight); _temp_XpKPlusCharm->fill(xp, weight); break; case PID::BQUARK: _multKPlus[3] += weight; _h_XpKPlusBottom->fill(xp, weight); break; } } else if (id == PID::PROTON) { _h_XpProtonN->fill(xp, weight); _multProton[0] += weight; switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _multProton[1] += weight; _temp_XpProtonLight->fill(xp, weight); _h_XpProtonLight->fill(xp, weight); if( ( quark && p.pid()>0 ) || ( !quark && p.pid()<0 )) _h_RProton->fill(xp, weight); else _h_RPBar ->fill(xp, weight); break; case PID::CQUARK: _multProton[2] += weight; _temp_XpProtonCharm->fill(xp, weight); _h_XpProtonCharm->fill(xp, weight); break; case PID::BQUARK: _multProton[3] += weight; _h_XpProtonBottom->fill(xp, weight); break; } } } - const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS"); + const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS"); foreach (const Particle& p, ufs.particles()) { const double xp = p.p3().mod()/meanBeamMom; // if in quark or antiquark hemisphere bool quark = p.p3().dot(axis)*dot>0.; int id = p.abspid(); if (id == PID::LAMBDA) { _multLambda[0] += weight; _h_XpLambdaN->fill(xp, weight); switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _multLambda[1] += weight; _h_XpLambdaLight->fill(xp, weight); if( ( quark && p.pid()>0 ) || ( !quark && p.pid()<0 )) _h_RLambda->fill(xp, weight); else _h_RLBar ->fill(xp, weight); break; case PID::CQUARK: _multLambda[2] += weight; _h_XpLambdaCharm->fill(xp, weight); break; case
PID::BQUARK: _multLambda[3] += weight; _h_XpLambdaBottom->fill(xp, weight); break; } } else if (id == 313) { _multKStar0[0] += weight; _h_XpKStar0N->fill(xp, weight); switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _multKStar0[1] += weight; _temp_XpKStar0Light->fill(xp, weight); _h_XpKStar0Light->fill(xp, weight); if ( ( quark && p.pid()>0 ) || ( !quark && p.pid()<0 )) _h_RKS0 ->fill(xp, weight); else _h_RKSBar0->fill(xp, weight); break; case PID::CQUARK: _multKStar0[2] += weight; _temp_XpKStar0Charm->fill(xp, weight); _h_XpKStar0Charm->fill(xp, weight); break; case PID::BQUARK: _multKStar0[3] += weight; _h_XpKStar0Bottom->fill(xp, weight); break; } } else if (id == 333) { _multPhi[0] += weight; _h_XpPhiN->fill(xp, weight); switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _multPhi[1] += weight; _h_XpPhiLight->fill(xp, weight); break; case PID::CQUARK: _multPhi[2] += weight; _h_XpPhiCharm->fill(xp, weight); break; case PID::BQUARK: _multPhi[3] += weight; _h_XpPhiBottom->fill(xp, weight); break; } } else if (id == PID::K0S || id == PID::K0L) { _multK0[0] += weight; _h_XpK0N->fill(xp, weight); switch (flavour) { case PID::DQUARK: case PID::UQUARK: case PID::SQUARK: _multK0[1] += weight; _h_XpK0Light->fill(xp, weight); break; case PID::CQUARK: _multK0[2] += weight; _h_XpK0Charm->fill(xp, weight); break; case PID::BQUARK: _multK0[3] += weight; _h_XpK0Bottom->fill(xp, weight); break; } } } } void init() { // Projections declare(Beam(), "Beams"); declare(ChargedFinalState(), "FS"); - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); declare(InitialQuarks(), "IQF"); declare(Thrust(FinalState()), "Thrust"); _temp_XpChargedN1 = bookHisto1D("TMP/XpChargedN1", refData( 1, 1, 1)); _temp_XpChargedN2 = bookHisto1D("TMP/XpChargedN2", refData( 2, 1, 1)); _temp_XpChargedN3 = bookHisto1D("TMP/XpChargedN3", refData( 3, 1, 1)); _h_XpPiPlusN = bookHisto1D( 1, 1, 2); _h_XpKPlusN = bookHisto1D( 2, 1, 2);
_h_XpProtonN = bookHisto1D( 3, 1, 2); _h_XpChargedN = bookHisto1D( 4, 1, 1); _h_XpK0N = bookHisto1D( 5, 1, 1); _h_XpLambdaN = bookHisto1D( 7, 1, 1); _h_XpKStar0N = bookHisto1D( 8, 1, 1); _h_XpPhiN = bookHisto1D( 9, 1, 1); _h_XpPiPlusLight = bookHisto1D(10, 1, 1); _h_XpPiPlusCharm = bookHisto1D(10, 1, 2); _h_XpPiPlusBottom = bookHisto1D(10, 1, 3); _h_XpKPlusLight = bookHisto1D(12, 1, 1); _h_XpKPlusCharm = bookHisto1D(12, 1, 2); _h_XpKPlusBottom = bookHisto1D(12, 1, 3); _h_XpKStar0Light = bookHisto1D(14, 1, 1); _h_XpKStar0Charm = bookHisto1D(14, 1, 2); _h_XpKStar0Bottom = bookHisto1D(14, 1, 3); _h_XpProtonLight = bookHisto1D(16, 1, 1); _h_XpProtonCharm = bookHisto1D(16, 1, 2); _h_XpProtonBottom = bookHisto1D(16, 1, 3); _h_XpLambdaLight = bookHisto1D(18, 1, 1); _h_XpLambdaCharm = bookHisto1D(18, 1, 2); _h_XpLambdaBottom = bookHisto1D(18, 1, 3); _h_XpK0Light = bookHisto1D(20, 1, 1); _h_XpK0Charm = bookHisto1D(20, 1, 2); _h_XpK0Bottom = bookHisto1D(20, 1, 3); _h_XpPhiLight = bookHisto1D(22, 1, 1); _h_XpPhiCharm = bookHisto1D(22, 1, 2); _h_XpPhiBottom = bookHisto1D(22, 1, 3); _temp_XpKPlusCharm = bookHisto1D("TMP/XpKPlusCharm", refData(13, 1, 1)); _temp_XpKPlusLight = bookHisto1D("TMP/XpKPlusLight", refData(13, 1, 1)); _temp_XpKStar0Charm = bookHisto1D("TMP/XpKStar0Charm", refData(15, 1, 1)); _temp_XpKStar0Light = bookHisto1D("TMP/XpKStar0Light", refData(15, 1, 1)); _temp_XpProtonCharm = bookHisto1D("TMP/XpProtonCharm", refData(17, 1, 1)); _temp_XpProtonLight = bookHisto1D("TMP/XpProtonLight", refData(17, 1, 1)); _h_RPiPlus = bookHisto1D( 26, 1, 1); _h_RPiMinus = bookHisto1D( 26, 1, 2); _h_RKS0 = bookHisto1D( 28, 1, 1); _h_RKSBar0 = bookHisto1D( 28, 1, 2); _h_RKPlus = bookHisto1D( 30, 1, 1); _h_RKMinus = bookHisto1D( 30, 1, 2); _h_RProton = bookHisto1D( 32, 1, 1); _h_RPBar = bookHisto1D( 32, 1, 2); _h_RLambda = bookHisto1D( 34, 1, 1); _h_RLBar = bookHisto1D( 34, 1, 2); _s_Xp_PiPl_Ch = bookScatter2D(1, 1, 1); _s_Xp_KPl_Ch = bookScatter2D(2, 1, 1); _s_Xp_Pr_Ch = 
bookScatter2D(3, 1, 1); _s_Xp_PiPlCh_PiPlLi = bookScatter2D(11, 1, 1); _s_Xp_PiPlBo_PiPlLi = bookScatter2D(11, 1, 2); _s_Xp_KPlCh_KPlLi = bookScatter2D(13, 1, 1); _s_Xp_KPlBo_KPlLi = bookScatter2D(13, 1, 2); _s_Xp_KS0Ch_KS0Li = bookScatter2D(15, 1, 1); _s_Xp_KS0Bo_KS0Li = bookScatter2D(15, 1, 2); _s_Xp_PrCh_PrLi = bookScatter2D(17, 1, 1); _s_Xp_PrBo_PrLi = bookScatter2D(17, 1, 2); _s_Xp_LaCh_LaLi = bookScatter2D(19, 1, 1); _s_Xp_LaBo_LaLi = bookScatter2D(19, 1, 2); _s_Xp_K0Ch_K0Li = bookScatter2D(21, 1, 1); _s_Xp_K0Bo_K0Li = bookScatter2D(21, 1, 2); _s_Xp_PhiCh_PhiLi = bookScatter2D(23, 1, 1); _s_Xp_PhiBo_PhiLi = bookScatter2D(23, 1, 2); _s_PiM_PiP = bookScatter2D(27, 1, 1); _s_KSBar0_KS0 = bookScatter2D(29, 1, 1); _s_KM_KP = bookScatter2D(31, 1, 1); _s_Pr_PBar = bookScatter2D(33, 1, 1); _s_Lam_LBar = bookScatter2D(35, 1, 1); } /// Finalize void finalize() { // Get the ratio plots sorted out first divide(_h_XpPiPlusN, _temp_XpChargedN1, _s_Xp_PiPl_Ch); divide(_h_XpKPlusN, _temp_XpChargedN2, _s_Xp_KPl_Ch); divide(_h_XpProtonN, _temp_XpChargedN3, _s_Xp_Pr_Ch); divide(_h_XpPiPlusCharm, _h_XpPiPlusLight, _s_Xp_PiPlCh_PiPlLi); _s_Xp_PiPlCh_PiPlLi->scale(1.,_SumOfudsWeights/_SumOfcWeights); divide(_h_XpPiPlusBottom, _h_XpPiPlusLight, _s_Xp_PiPlBo_PiPlLi); _s_Xp_PiPlBo_PiPlLi->scale(1.,_SumOfudsWeights/_SumOfbWeights); divide(_temp_XpKPlusCharm , _temp_XpKPlusLight, _s_Xp_KPlCh_KPlLi); _s_Xp_KPlCh_KPlLi->scale(1.,_SumOfudsWeights/_SumOfcWeights); divide(_h_XpKPlusBottom, _h_XpKPlusLight, _s_Xp_KPlBo_KPlLi); _s_Xp_KPlBo_KPlLi->scale(1.,_SumOfudsWeights/_SumOfbWeights); divide(_temp_XpKStar0Charm, _temp_XpKStar0Light, _s_Xp_KS0Ch_KS0Li); _s_Xp_KS0Ch_KS0Li->scale(1.,_SumOfudsWeights/_SumOfcWeights); divide(_h_XpKStar0Bottom, _h_XpKStar0Light, _s_Xp_KS0Bo_KS0Li); _s_Xp_KS0Bo_KS0Li->scale(1.,_SumOfudsWeights/_SumOfbWeights); divide(_temp_XpProtonCharm, _temp_XpProtonLight, _s_Xp_PrCh_PrLi); _s_Xp_PrCh_PrLi->scale(1.,_SumOfudsWeights/_SumOfcWeights); divide(_h_XpProtonBottom, 
_h_XpProtonLight, _s_Xp_PrBo_PrLi); _s_Xp_PrBo_PrLi->scale(1.,_SumOfudsWeights/_SumOfbWeights); divide(_h_XpLambdaCharm, _h_XpLambdaLight, _s_Xp_LaCh_LaLi); _s_Xp_LaCh_LaLi->scale(1.,_SumOfudsWeights/_SumOfcWeights); divide(_h_XpLambdaBottom, _h_XpLambdaLight, _s_Xp_LaBo_LaLi); _s_Xp_LaBo_LaLi->scale(1.,_SumOfudsWeights/_SumOfbWeights); divide(_h_XpK0Charm, _h_XpK0Light, _s_Xp_K0Ch_K0Li); _s_Xp_K0Ch_K0Li->scale(1.,_SumOfudsWeights/_SumOfcWeights); divide(_h_XpK0Bottom, _h_XpK0Light, _s_Xp_K0Bo_K0Li); _s_Xp_K0Bo_K0Li->scale(1.,_SumOfudsWeights/_SumOfbWeights); divide(_h_XpPhiCharm, _h_XpPhiLight, _s_Xp_PhiCh_PhiLi); _s_Xp_PhiCh_PhiLi->scale(1.,_SumOfudsWeights/_SumOfcWeights); divide(_h_XpPhiBottom, _h_XpPhiLight, _s_Xp_PhiBo_PhiLi); _s_Xp_PhiBo_PhiLi->scale(1.,_SumOfudsWeights/_SumOfbWeights); // Then the leading particles divide(*_h_RPiMinus - *_h_RPiPlus, *_h_RPiMinus + *_h_RPiPlus, _s_PiM_PiP); divide(*_h_RKSBar0 - *_h_RKS0, *_h_RKSBar0 + *_h_RKS0, _s_KSBar0_KS0); divide(*_h_RKMinus - *_h_RKPlus, *_h_RKMinus + *_h_RKPlus, _s_KM_KP); divide(*_h_RProton - *_h_RPBar, *_h_RProton + *_h_RPBar, _s_Pr_PBar); divide(*_h_RLambda - *_h_RLBar, *_h_RLambda + *_h_RLBar, _s_Lam_LBar); // Then the rest scale(_h_XpPiPlusN, 1/sumOfWeights()); scale(_h_XpKPlusN, 1/sumOfWeights()); scale(_h_XpProtonN, 1/sumOfWeights()); scale(_h_XpChargedN, 1/sumOfWeights()); scale(_h_XpK0N, 1/sumOfWeights()); scale(_h_XpLambdaN, 1/sumOfWeights()); scale(_h_XpKStar0N, 1/sumOfWeights()); scale(_h_XpPhiN, 1/sumOfWeights()); scale(_h_XpPiPlusLight, 1/_SumOfudsWeights); scale(_h_XpPiPlusCharm, 1/_SumOfcWeights); scale(_h_XpPiPlusBottom, 1/_SumOfbWeights); scale(_h_XpKPlusLight, 1/_SumOfudsWeights); scale(_h_XpKPlusCharm, 1/_SumOfcWeights); scale(_h_XpKPlusBottom, 1/_SumOfbWeights); scale(_h_XpKStar0Light, 1/_SumOfudsWeights); scale(_h_XpKStar0Charm, 1/_SumOfcWeights); scale(_h_XpKStar0Bottom, 1/_SumOfbWeights); scale(_h_XpProtonLight, 1/_SumOfudsWeights); scale(_h_XpProtonCharm, 1/_SumOfcWeights); 
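Editor's note: the leading-particle scatters built in finalize() above are bin-by-bin asymmetries of the form (N⁻ − N⁺)/(N⁻ + N⁺), e.g. `divide(*_h_RPiMinus - *_h_RPiPlus, *_h_RPiMinus + *_h_RPiPlus, ...)`. The scalar version of that ratio, as a hedged illustration:

```cpp
#include <cassert>
#include <cmath>

// One-bin version of the leading-particle asymmetry above. Returning 0
// for an empty bin is a choice made for this sketch; in the analysis the
// empty-bin handling is left to YODA's divide.
inline double asym(double nMinus, double nPlus) {
  const double sum = nMinus + nPlus;
  return (sum > 0.) ? (nMinus - nPlus) / sum : 0.;
}
```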
scale(_h_XpProtonBottom, 1/_SumOfbWeights); scale(_h_XpLambdaLight, 1/_SumOfudsWeights); scale(_h_XpLambdaCharm, 1/_SumOfcWeights); scale(_h_XpLambdaBottom, 1/_SumOfbWeights); scale(_h_XpK0Light, 1/_SumOfudsWeights); scale(_h_XpK0Charm, 1/_SumOfcWeights); scale(_h_XpK0Bottom, 1/_SumOfbWeights); scale(_h_XpPhiLight, 1/_SumOfudsWeights); scale(_h_XpPhiCharm , 1/_SumOfcWeights); scale(_h_XpPhiBottom, 1/_SumOfbWeights); scale(_h_RPiPlus, 1/_SumOfudsWeights); scale(_h_RPiMinus, 1/_SumOfudsWeights); scale(_h_RKS0, 1/_SumOfudsWeights); scale(_h_RKSBar0, 1/_SumOfudsWeights); scale(_h_RKPlus, 1/_SumOfudsWeights); scale(_h_RKMinus, 1/_SumOfudsWeights); scale(_h_RProton, 1/_SumOfudsWeights); scale(_h_RPBar, 1/_SumOfudsWeights); scale(_h_RLambda, 1/_SumOfudsWeights); scale(_h_RLBar, 1/_SumOfudsWeights); // Multiplicities double avgNumPartsAll, avgNumPartsLight,avgNumPartsCharm, avgNumPartsBottom; // pi+/- // all avgNumPartsAll = _multPiPlus[0]/sumOfWeights(); bookScatter2D(24, 1, 1, true)->point(0).setY(avgNumPartsAll); // light avgNumPartsLight = _multPiPlus[1]/_SumOfudsWeights; bookScatter2D(24, 1, 2, true)->point(0).setY(avgNumPartsLight); // charm avgNumPartsCharm = _multPiPlus[2]/_SumOfcWeights; bookScatter2D(24, 1, 3, true)->point(0).setY(avgNumPartsCharm); // bottom avgNumPartsBottom = _multPiPlus[3]/_SumOfbWeights; bookScatter2D(24, 1, 4, true)->point(0).setY(avgNumPartsBottom); // charm-light bookScatter2D(25, 1, 1, true)->point(0).setY(avgNumPartsCharm - avgNumPartsLight); // bottom-light bookScatter2D(25, 1, 2, true)->point(0).setY(avgNumPartsBottom - avgNumPartsLight); // K+/- // all avgNumPartsAll = _multKPlus[0]/sumOfWeights(); bookScatter2D(24, 2, 1, true)->point(0).setY(avgNumPartsAll); // light avgNumPartsLight = _multKPlus[1]/_SumOfudsWeights; bookScatter2D(24, 2, 2, true)->point(0).setY(avgNumPartsLight); // charm avgNumPartsCharm = _multKPlus[2]/_SumOfcWeights; bookScatter2D(24, 2, 3, true)->point(0).setY(avgNumPartsCharm); // bottom avgNumPartsBottom = 
_multKPlus[3]/_SumOfbWeights; bookScatter2D(24, 2, 4, true)->point(0).setY(avgNumPartsBottom); // charm-light bookScatter2D(25, 2, 1, true)->point(0).setY(avgNumPartsCharm - avgNumPartsLight); // bottom-light bookScatter2D(25, 2, 2, true)->point(0).setY(avgNumPartsBottom - avgNumPartsLight); // K0 // all avgNumPartsAll = _multK0[0]/sumOfWeights(); bookScatter2D(24, 3, 1, true)->point(0).setY(avgNumPartsAll); // light avgNumPartsLight = _multK0[1]/_SumOfudsWeights; bookScatter2D(24, 3, 2, true)->point(0).setY(avgNumPartsLight); // charm avgNumPartsCharm = _multK0[2]/_SumOfcWeights; bookScatter2D(24, 3, 3, true)->point(0).setY(avgNumPartsCharm); // bottom avgNumPartsBottom = _multK0[3]/_SumOfbWeights; bookScatter2D(24, 3, 4, true)->point(0).setY(avgNumPartsBottom); // charm-light bookScatter2D(25, 3, 1, true)->point(0).setY(avgNumPartsCharm - avgNumPartsLight); // bottom-light bookScatter2D(25, 3, 2, true)->point(0).setY(avgNumPartsBottom - avgNumPartsLight); // K*0 // all avgNumPartsAll = _multKStar0[0]/sumOfWeights(); bookScatter2D(24, 4, 1, true)->point(0).setY(avgNumPartsAll); // light avgNumPartsLight = _multKStar0[1]/_SumOfudsWeights; bookScatter2D(24, 4, 2, true)->point(0).setY(avgNumPartsLight); // charm avgNumPartsCharm = _multKStar0[2]/_SumOfcWeights; bookScatter2D(24, 4, 3, true)->point(0).setY(avgNumPartsCharm); // bottom avgNumPartsBottom = _multKStar0[3]/_SumOfbWeights; bookScatter2D(24, 4, 4, true)->point(0).setY(avgNumPartsBottom); // charm-light bookScatter2D(25, 4, 1, true)->point(0).setY(avgNumPartsCharm - avgNumPartsLight); // bottom-light bookScatter2D(25, 4, 2, true)->point(0).setY(avgNumPartsBottom - avgNumPartsLight); // phi // all avgNumPartsAll = _multPhi[0]/sumOfWeights(); bookScatter2D(24, 5, 1, true)->point(0).setY(avgNumPartsAll); // light avgNumPartsLight = _multPhi[1]/_SumOfudsWeights; bookScatter2D(24, 5, 2, true)->point(0).setY(avgNumPartsLight); // charm avgNumPartsCharm = _multPhi[2]/_SumOfcWeights; bookScatter2D(24, 5, 3, 
true)->point(0).setY(avgNumPartsCharm); // bottom avgNumPartsBottom = _multPhi[3]/_SumOfbWeights; bookScatter2D(24, 5, 4, true)->point(0).setY(avgNumPartsBottom); // charm-light bookScatter2D(25, 5, 1, true)->point(0).setY(avgNumPartsCharm - avgNumPartsLight); // bottom-light bookScatter2D(25, 5, 2, true)->point(0).setY(avgNumPartsBottom - avgNumPartsLight); // p // all avgNumPartsAll = _multProton[0]/sumOfWeights(); bookScatter2D(24, 6, 1, true)->point(0).setY(avgNumPartsAll); // light avgNumPartsLight = _multProton[1]/_SumOfudsWeights; bookScatter2D(24, 6, 2, true)->point(0).setY(avgNumPartsLight); // charm avgNumPartsCharm = _multProton[2]/_SumOfcWeights; bookScatter2D(24, 6, 3, true)->point(0).setY(avgNumPartsCharm); // bottom avgNumPartsBottom = _multProton[3]/_SumOfbWeights; bookScatter2D(24, 6, 4, true)->point(0).setY(avgNumPartsBottom); // charm-light bookScatter2D(25, 6, 1, true)->point(0).setY(avgNumPartsCharm - avgNumPartsLight); // bottom-light bookScatter2D(25, 6, 2, true)->point(0).setY(avgNumPartsBottom - avgNumPartsLight); // Lambda // all avgNumPartsAll = _multLambda[0]/sumOfWeights(); bookScatter2D(24, 7, 1, true)->point(0).setY(avgNumPartsAll); // light avgNumPartsLight = _multLambda[1]/_SumOfudsWeights; bookScatter2D(24, 7, 2, true)->point(0).setY(avgNumPartsLight); // charm avgNumPartsCharm = _multLambda[2]/_SumOfcWeights; bookScatter2D(24, 7, 3, true)->point(0).setY(avgNumPartsCharm); // bottom avgNumPartsBottom = _multLambda[3]/_SumOfbWeights; bookScatter2D(24, 7, 4, true)->point(0).setY(avgNumPartsBottom); // charm-light bookScatter2D(25, 7, 1, true)->point(0).setY(avgNumPartsCharm - avgNumPartsLight); // bottom-light bookScatter2D(25, 7, 2, true)->point(0).setY(avgNumPartsBottom - avgNumPartsLight); } //@} private: /// Store the weighted sums of numbers of charged / charged+neutral /// particles. Used to calculate average number of particles for the /// inclusive single particle distributions' normalisations. 
double _SumOfudsWeights, _SumOfcWeights, _SumOfbWeights; vector _multPiPlus, _multKPlus, _multK0, _multKStar0, _multPhi, _multProton, _multLambda; Histo1DPtr _h_XpPiPlusSig, _h_XpPiPlusN; Histo1DPtr _h_XpKPlusSig, _h_XpKPlusN; Histo1DPtr _h_XpProtonSig, _h_XpProtonN; Histo1DPtr _h_XpChargedN; Histo1DPtr _h_XpK0N, _h_XpLambdaN; Histo1DPtr _h_XpKStar0N, _h_XpPhiN; Histo1DPtr _h_XpPiPlusLight, _h_XpPiPlusCharm, _h_XpPiPlusBottom; Histo1DPtr _h_XpKPlusLight, _h_XpKPlusCharm, _h_XpKPlusBottom; Histo1DPtr _h_XpKStar0Light, _h_XpKStar0Charm, _h_XpKStar0Bottom; Histo1DPtr _h_XpProtonLight, _h_XpProtonCharm, _h_XpProtonBottom; Histo1DPtr _h_XpLambdaLight, _h_XpLambdaCharm, _h_XpLambdaBottom; Histo1DPtr _h_XpK0Light, _h_XpK0Charm, _h_XpK0Bottom; Histo1DPtr _h_XpPhiLight, _h_XpPhiCharm, _h_XpPhiBottom; Histo1DPtr _temp_XpChargedN1, _temp_XpChargedN2, _temp_XpChargedN3; Histo1DPtr _temp_XpKPlusCharm , _temp_XpKPlusLight; Histo1DPtr _temp_XpKStar0Charm, _temp_XpKStar0Light; Histo1DPtr _temp_XpProtonCharm, _temp_XpProtonLight; Histo1DPtr _h_RPiPlus, _h_RPiMinus; Histo1DPtr _h_RKS0, _h_RKSBar0; Histo1DPtr _h_RKPlus, _h_RKMinus; Histo1DPtr _h_RProton, _h_RPBar; Histo1DPtr _h_RLambda, _h_RLBar; Scatter2DPtr _s_Xp_PiPl_Ch, _s_Xp_KPl_Ch, _s_Xp_Pr_Ch; Scatter2DPtr _s_Xp_PiPlCh_PiPlLi, _s_Xp_PiPlBo_PiPlLi; Scatter2DPtr _s_Xp_KPlCh_KPlLi, _s_Xp_KPlBo_KPlLi; Scatter2DPtr _s_Xp_KS0Ch_KS0Li, _s_Xp_KS0Bo_KS0Li; Scatter2DPtr _s_Xp_PrCh_PrLi, _s_Xp_PrBo_PrLi; Scatter2DPtr _s_Xp_LaCh_LaLi, _s_Xp_LaBo_LaLi; Scatter2DPtr _s_Xp_K0Ch_K0Li, _s_Xp_K0Bo_K0Li; Scatter2DPtr _s_Xp_PhiCh_PhiLi, _s_Xp_PhiBo_PhiLi; Scatter2DPtr _s_PiM_PiP, _s_KSBar0_KS0, _s_KM_KP, _s_Pr_PBar, _s_Lam_LBar; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(SLD_1999_S3743934); } diff --git a/analyses/pluginLHCb/LHCB_2010_S8758301.cc b/analyses/pluginLHCb/LHCB_2010_S8758301.cc --- a/analyses/pluginLHCb/LHCB_2010_S8758301.cc +++ b/analyses/pluginLHCb/LHCB_2010_S8758301.cc @@ -1,341 +1,341 @@ // -*- C++ -*- #include 
"Rivet/Analysis.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Math/Constants.hh" #include "Rivet/Math/Units.hh" #include "HepMC/GenEvent.h" #include "HepMC/GenParticle.h" #include "HepMC/GenVertex.h" #include "HepMC/SimpleVector.h" namespace Rivet { using namespace HepMC; using namespace std; // Lifetime cut: longest living ancestor ctau < 10^-11 [m] namespace { const double MAX_CTAU = 1.0E-11; // [m] const double MIN_PT = 0.0001; // [GeV/c] } class LHCB_2010_S8758301 : public Analysis { public: /// @name Constructors etc. //@{ /// Constructor LHCB_2010_S8758301() : Analysis("LHCB_2010_S8758301"), sumKs0_30(0.0), sumKs0_35(0.0), sumKs0_40(0.0), sumKs0_badnull(0), sumKs0_badlft(0), sumKs0_all(0), sumKs0_outup(0), sumKs0_outdwn(0), sum_low_pt_loss(0), sum_high_pt_loss(0) { } //@} /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { MSG_DEBUG("Initializing analysis!"); fillMap(partLftMap); _h_K0s_pt_30 = bookHisto1D(1,1,1); _h_K0s_pt_35 = bookHisto1D(1,1,2); _h_K0s_pt_40 = bookHisto1D(1,1,3); _h_K0s_pt_y_30 = bookHisto1D(2,1,1); _h_K0s_pt_y_35 = bookHisto1D(2,1,2); _h_K0s_pt_y_40 = bookHisto1D(2,1,3); _h_K0s_pt_y_all = bookHisto1D(3,1,1); - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); } /// Perform the per-event analysis void analyze(const Event& event) { int id; double y, pT; const double weight = event.weight(); - const UnstableFinalState& ufs = apply(event, "UFS"); + const UnstableParticles& ufs = apply(event, "UFS"); double ancestor_lftime; foreach (const Particle& p, ufs.particles()) { id = p.pid(); if ((id != 310) && (id != -310)) continue; sumKs0_all ++; ancestor_lftime = 0.; const GenParticle* long_ancestor = getLongestLivedAncestor(p, ancestor_lftime); if ( !(long_ancestor) ) { sumKs0_badnull ++; continue; } if ( ancestor_lftime > MAX_CTAU ) { sumKs0_badlft ++; MSG_DEBUG("Ancestor " << 
long_ancestor->pdg_id() << ", ctau: " << ancestor_lftime << " [m]"); continue; } const FourMomentum& qmom = p.momentum(); y = 0.5 * log((qmom.E() + qmom.pz())/(qmom.E() - qmom.pz())); pT = sqrt((qmom.px() * qmom.px()) + (qmom.py() * qmom.py())); if (pT < MIN_PT) { sum_low_pt_loss ++; MSG_DEBUG("Small pT K^0_S: " << pT << " GeV/c."); } if (pT > 1.6) { sum_high_pt_loss ++; } if (y > 2.5 && y < 4.0) { _h_K0s_pt_y_all->fill(pT, weight); if (y > 2.5 && y < 3.0) { _h_K0s_pt_y_30->fill(pT, weight); _h_K0s_pt_30->fill(pT, weight); sumKs0_30 += weight; } else if (y > 3.0 && y < 3.5) { _h_K0s_pt_y_35->fill(pT, weight); _h_K0s_pt_35->fill(pT, weight); sumKs0_35 += weight; } else if (y > 3.5 && y < 4.0) { _h_K0s_pt_y_40->fill(pT, weight); _h_K0s_pt_40->fill(pT, weight); sumKs0_40 += weight; } } else if (y < 2.5) { sumKs0_outdwn ++; } else if (y > 4.0) { sumKs0_outup ++; } } } /// Normalise histograms etc., after the run void finalize() { MSG_DEBUG("Total number Ks0: " << sumKs0_all << endl << "Sum of weights: " << sumOfWeights() << endl << "Weight Ks0 (2.5 < y < 3.0): " << sumKs0_30 << endl << "Weight Ks0 (3.0 < y < 3.5): " << sumKs0_35 << endl << "Weight Ks0 (3.5 < y < 4.0): " << sumKs0_40 << endl << "Nb. unprompt Ks0 [null mother]: " << sumKs0_badnull << endl << "Nb. unprompt Ks0 [mother lifetime exceeded]: " << sumKs0_badlft << endl << "Nb. Ks0 (y > 4.0): " << sumKs0_outup << endl << "Nb. Ks0 (y < 2.5): " << sumKs0_outdwn << endl << "Nb. Ks0 (pT < " << (MIN_PT/MeV) << " MeV/c): " << sum_low_pt_loss << endl << "Nb. Ks0 (pT > 1.6 GeV/c): " << sum_high_pt_loss << endl << "Cross-section [mb]: " << crossSection()/millibarn << endl << "Nb. 
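Editor's note: the K0S kinematics in the hunk above are computed inline from the four-momentum components, using the standard definitions y = ½ ln((E+pz)/(E−pz)) and pT = sqrt(px² + py²). A standalone sketch:

```cpp
#include <cassert>
#include <cmath>

// Rapidity and transverse momentum from four-momentum components,
// matching the inline computation in analyze() above.
inline double rapidity(double E, double pz) {
  return 0.5 * std::log((E + pz) / (E - pz));
}
inline double transverseMom(double px, double py) {
  return std::sqrt(px * px + py * py);
}
```

A particle at rest in the longitudinal direction (pz = 0) has y = 0; the LHCb acceptance cut above keeps 2.5 < y < 4.0.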
events: " << numEvents()); // Compute cross-section; multiply by bin width for correct scaling // cross-section given by Rivet in pb double xsection_factor = crossSection()/sumOfWeights(); // Multiply bin width for correct scaling, xsection in mub scale(_h_K0s_pt_30, 0.2*xsection_factor/microbarn); scale(_h_K0s_pt_35, 0.2*xsection_factor/microbarn); scale(_h_K0s_pt_40, 0.2*xsection_factor/microbarn); // Divide by dy (rapidity window width), xsection in mb scale(_h_K0s_pt_y_30, xsection_factor/0.5/millibarn); scale(_h_K0s_pt_y_35, xsection_factor/0.5/millibarn); scale(_h_K0s_pt_y_40, xsection_factor/0.5/millibarn); scale(_h_K0s_pt_y_all, xsection_factor/1.5/millibarn); } //@} private: /// Get particle lifetime from hardcoded data double getLifeTime(int pid) { double lft = -1.0; if (pid < 0) pid = - pid; // Correct Pythia6 PIDs for f0(980), f0(1370) mesons if (pid == 10331) pid = 30221; if (pid == 10221) pid = 9010221; map::iterator pPartLft = partLftMap.find(pid); // search stable particle list if (pPartLft == partLftMap.end()) { if (pid <= 100 || pid == 990) return 0.0; for (unsigned int i=0; i < sizeof(stablePDGIds)/sizeof(unsigned int); i++ ) { if (pid == stablePDGIds[i]) { lft = 0.0; break; } } } else { lft = (*pPartLft).second; } if (lft < 0.0) MSG_ERROR("Could not determine lifetime for particle with PID " << pid << "... This K_s^0 will be considered unprompt!"); return lft; } const GenParticle* getLongestLivedAncestor(const Particle& p, double& lifeTime) { const GenParticle* ret = NULL; lifeTime = 1.; if (p.genParticle() == NULL) return NULL; const GenParticle* pmother = p.genParticle(); double longest_ctau = 0.; double mother_ctau; int mother_pid, n_inparts; const GenVertex* ivertex = pmother->production_vertex(); while (ivertex) { n_inparts = ivertex->particles_in_size(); if (n_inparts < 1) {ret = NULL; break;} // error: should never happen! 
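Editor's note: the finalize() normalisation above converts weighted yields into cross-sections via the per-event factor sigma/sum(w); the dσ/dy-style histograms are then divided by the rapidity window width (0.5 per sub-range, 1.5 for the full range). A scalar sketch of that factor (function name illustrative):

```cpp
#include <cassert>
#include <cmath>

// Sketch: cross-section per unit rapidity for one y-window, given the
// weighted yield in that window, the generator cross-section, the total
// sum of event weights, and the window width dy.
inline double xsecPerRapidity(double yieldSumW, double xsec,
                              double sumW, double dy) {
  return yieldSumW * (xsec / sumW) / dy;
}
```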
const GenVertex::particles_in_const_iterator iPart_invtx = ivertex->particles_in_const_begin(); pmother = (*iPart_invtx); // first mother particle mother_pid = pmother->pdg_id(); ivertex = pmother->production_vertex(); // get next vertex if ( (mother_pid == 2212) || (mother_pid <= 100) ) { if (ret == NULL) ret = pmother; continue; } mother_ctau = getLifeTime(mother_pid); if (mother_ctau < 0.) { ret = NULL; break; } // error: should never happen! if (mother_ctau > longest_ctau) { longest_ctau = mother_ctau; ret = pmother; } } if (ret) lifeTime = longest_ctau * c_light; return ret; } // Fill the PDG Id to Lifetime[seconds] map // Data was extracted from LHCb Particle Table using ParticleSvc bool fillMap(map<int, double> &m) { m[6] = 4.707703E-25; m[11] = 1.E+16; m[12] = 1.E+16; m[13] = 2.197019E-06; m[14] = 1.E+16; m[15] = 2.906E-13; m[16] = 1.E+16; m[22] = 1.E+16; m[23] = 2.637914E-25; m[24] = 3.075758E-25; m[25] = 9.4E-26; m[35] = 9.4E-26; m[36] = 9.4E-26; m[37] = 9.4E-26; m[84] = 3.335641E-13; m[85] = 1.290893E-12; m[111] = 8.4E-17; m[113] = 4.405704E-24; m[115] = 6.151516E-24; m[117] = 4.088275E-24; m[119] = 2.102914E-24; m[130] = 5.116E-08; m[150] = 1.525E-12; m[211] = 2.6033E-08; m[213] = 4.405704E-24; m[215] = 6.151516E-24; m[217] = 4.088275E-24; m[219] = 2.102914E-24; m[221] = 5.063171E-19; m[223] = 7.752794E-23; m[225] = 3.555982E-24; m[227] = 3.91793E-24; m[229] = 2.777267E-24; m[310] = 8.953E-11; m[313] = 1.308573E-23; m[315] = 6.038644E-24; m[317] = 4.139699E-24; m[319] = 3.324304E-24; m[321] = 1.238E-08; m[323] = 1.295693E-23; m[325] = 6.682357E-24; m[327] = 4.139699E-24; m[329] = 3.324304E-24; m[331] = 3.210791E-21; m[333] = 1.545099E-22; m[335] = 9.016605E-24; m[337] = 7.565657E-24; m[350] = 1.407125E-12; m[411] = 1.04E-12; m[413] = 6.856377E-21; m[415] = 1.778952E-23; m[421] = 4.101E-13; m[423] = 1.000003E-19; m[425] = 1.530726E-23; m[431] = 5.E-13; m[433] = 1.000003E-19; m[435] = 3.291061E-23; m[441] = 2.465214E-23; m[443] = 7.062363E-21; m[445] = 3.242425E-22;
m[510] = 1.525E-12; m[511] = 1.525E-12; m[513] = 1.000019E-19; m[515] = 1.31E-23; m[521] = 1.638E-12; m[523] = 1.000019E-19; m[525] = 1.31E-23; m[530] = 1.536875E-12; m[531] = 1.472E-12; m[533] = 1.E-19; m[535] = 1.31E-23; m[541] = 4.5E-13; m[553] = 1.218911E-20; m[1112] = 4.539394E-24; m[1114] = 5.578069E-24; m[1116] = 1.994582E-24; m[1118] = 2.269697E-24; m[1212] = 4.539394E-24; m[1214] = 5.723584E-24; m[1216] = 1.994582E-24; m[1218] = 1.316424E-24; m[2112] = 8.857E+02; m[2114] = 5.578069E-24; m[2116] = 4.388081E-24; m[2118] = 2.269697E-24; m[2122] = 4.539394E-24; m[2124] = 5.723584E-24; m[2126] = 1.994582E-24; m[2128] = 1.316424E-24; m[2212] = 1.E+16; m[2214] = 5.578069E-24; m[2216] = 4.388081E-24; m[2218] = 2.269697E-24; m[2222] = 4.539394E-24; m[2224] = 5.578069E-24; m[2226] = 1.994582E-24; m[2228] = 2.269697E-24; m[3112] = 1.479E-10; m[3114] = 1.670589E-23; m[3116] = 5.485102E-24; m[3118] = 3.656734E-24; m[3122] = 2.631E-10; m[3124] = 4.219309E-23; m[3126] = 8.227653E-24; m[3128] = 3.291061E-24; m[3212] = 7.4E-20; m[3214] = 1.828367E-23; m[3216] = 5.485102E-24; m[3218] = 3.656734E-24; m[3222] = 8.018E-11; m[3224] = 1.838582E-23; m[3226] = 5.485102E-24; m[3228] = 3.656734E-24; m[3312] = 1.639E-10; m[3314] = 6.648608E-23; m[3322] = 2.9E-10; m[3324] = 7.233101E-23; m[3334] = 8.21E-11; m[4112] = 2.991874E-22; m[4114] = 4.088274E-23; m[4122] = 2.E-13; m[4132] = 1.12E-13; m[4212] = 3.999999E-22; m[4214] = 3.291061E-22; m[4222] = 2.951624E-22; m[4224] = 4.417531E-23; m[4232] = 4.42E-13; m[4332] = 6.9E-14; m[4412] = 3.335641E-13; m[4422] = 3.335641E-13; m[4432] = 3.335641E-13; m[5112] = 1.E-19; m[5122] = 1.38E-12; m[5132] = 1.42E-12; m[5142] = 1.290893E-12; m[5212] = 1.E-19; m[5222] = 1.E-19; m[5232] = 1.42E-12; m[5242] = 1.290893E-12; m[5312] = 1.E-19; m[5322] = 1.E-19; m[5332] = 1.55E-12; m[5342] = 1.290893E-12; m[5442] = 1.290893E-12; m[5512] = 1.290893E-12; m[5522] = 1.290893E-12; m[5532] = 1.290893E-12; m[5542] = 1.290893E-12; m[10111] = 2.48382E-24; m[10113] = 
4.635297E-24; m[10115] = 2.54136E-24; m[10211] = 2.48382E-24; m[10213] = 4.635297E-24; m[10215] = 2.54136E-24; m[10223] = 1.828367E-24; m[10225] = 3.636531E-24; m[10311] = 2.437823E-24; m[10313] = 7.313469E-24; m[10315] = 3.538775E-24; m[10321] = 2.437823E-24; m[10323] = 7.313469E-24; m[10325] = 3.538775E-24; m[10331] = 4.804469E-24; m[10411] = 4.38E-24; m[10413] = 3.29E-23; m[10421] = 4.38E-24; m[10423] = 3.22653E-23; m[10431] = 6.5821E-22; m[10433] = 6.5821E-22; m[10441] = 6.453061E-23; m[10511] = 4.39E-24; m[10513] = 1.65E-23; m[10521] = 4.39E-24; m[10523] = 1.65E-23; m[10531] = 4.39E-24; m[10533] = 1.65E-23; m[11114] = 2.194041E-24; m[11116] = 1.828367E-24; m[11212] = 1.880606E-24; m[11216] = 1.828367E-24; m[12112] = 2.194041E-24; m[12114] = 2.194041E-24; m[12116] = 5.063171E-24; m[12126] = 1.828367E-24; m[12212] = 2.194041E-24; m[12214] = 2.194041E-24; m[12216] = 5.063171E-24; m[12224] = 2.194041E-24; m[12226] = 1.828367E-24; m[13112] = 6.582122E-24; m[13114] = 1.09702E-23; m[13116] = 5.485102E-24; m[13122] = 1.316424E-23; m[13124] = 1.09702E-23; m[13126] = 6.928549E-24; m[13212] = 6.582122E-24; m[13214] = 1.09702E-23; m[13216] = 5.485102E-24; m[13222] = 6.582122E-24; m[13224] = 1.09702E-23; m[13226] = 5.485102E-24; m[13312] = 4.135667E-22; m[13314] = 2.742551E-23; m[13324] = 2.742551E-23; m[14122] = 1.828367E-22; m[20022] = 1.E+16; m[20113] = 1.567172E-24; m[20213] = 1.567172E-24; m[20223] = 2.708692E-23; m[20313] = 3.782829E-24; m[20315] = 2.384827E-24; m[20323] = 3.782829E-24; m[20325] = 2.384827E-24; m[20333] = 1.198929E-23; m[20413] = 2.63E-24; m[20423] = 2.63E-24; m[20433] = 6.5821E-22; m[20443] = 7.395643E-22; m[20513] = 2.63E-24; m[20523] = 2.63E-24; m[20533] = 2.63E-24; m[21112] = 2.632849E-24; m[21114] = 3.291061E-24; m[21212] = 2.632849E-24; m[21214] = 6.582122E-24; m[22112] = 4.388081E-24; m[22114] = 3.291061E-24; m[22122] = 2.632849E-24; m[22124] = 6.582122E-24; m[22212] = 4.388081E-24; m[22214] = 3.291061E-24; m[22222] = 2.632849E-24; m[22224] = 
3.291061E-24; m[23112] = 7.313469E-24; m[23114] = 2.991874E-24; m[23122] = 4.388081E-24; m[23124] = 6.582122E-24; m[23126] = 3.291061E-24; m[23212] = 7.313469E-24; m[23214] = 2.991874E-24; m[23222] = 7.313469E-24; m[23224] = 2.991874E-24; m[30113] = 2.632849E-24; m[30213] = 2.632849E-24; m[30221] = 1.880606E-24; m[30223] = 2.089563E-24; m[30313] = 2.056913E-24; m[30323] = 2.056913E-24; m[30443] = 2.419898E-23; m[31114] = 1.880606E-24; m[31214] = 3.291061E-24; m[32112] = 3.989164E-24; m[32114] = 1.880606E-24; m[32124] = 3.291061E-24; m[32212] = 3.989164E-24; m[32214] = 1.880606E-24; m[32224] = 1.880606E-24; m[33122] = 1.880606E-23; m[42112] = 6.582122E-24; m[42212] = 6.582122E-24; m[43122] = 2.194041E-24; m[53122] = 4.388081E-24; m[100111] = 1.645531E-24; m[100113] = 1.64553E-24; m[100211] = 1.645531E-24; m[100213] = 1.64553E-24; m[100221] = 1.196749E-23; m[100223] = 3.061452E-24; m[100313] = 2.837122E-24; m[100323] = 2.837122E-24; m[100331] = 4.459432E-25; m[100333] = 4.388081E-24; m[100441] = 4.701516E-23; m[100443] = 2.076379E-21; m[100553] = 2.056913E-20; m[200553] = 3.242425E-20; m[300553] = 3.210791E-23; m[9000111] = 8.776163E-24; m[9000211] = 8.776163E-24; m[9000443] = 8.227652E-24; m[9000553] = 5.983747E-24; m[9010111] = 3.164482E-24; m[9010211] = 3.164482E-24; m[9010221] = 9.403031E-24; m[9010443] = 8.438618E-24; m[9010553] = 8.3318E-24; m[9020221] = 8.093281E-23; m[9020443] = 1.061633E-23; m[9030221] = 6.038644E-24; m[9042413] = 2.07634E-21; m[9050225] = 1.394517E-24; m[9060225] = 3.291061E-24; m[9080225] = 4.388081E-24; m[9090225] = 2.056913E-24; m[9910445] = 2.07634E-21; m[9920443] = 2.07634E-21; return true; } /// @name Histograms //@{ Histo1DPtr _h_K0s_pt_y_30; // histogram for 2.5 < y < 3.0 (d2sigma) Histo1DPtr _h_K0s_pt_y_35; // histogram for 3.0 < y < 3.5 (d2sigma) Histo1DPtr _h_K0s_pt_y_40; // histogram for 3.5 < y < 4.0 (d2sigma) Histo1DPtr _h_K0s_pt_30; // histogram for 2.5 < y < 3.0 (sigma) Histo1DPtr _h_K0s_pt_35; // histogram for 3.0 < y < 3.5 
(sigma) Histo1DPtr _h_K0s_pt_40; // histogram for 3.5 < y < 4.0 (sigma) Histo1DPtr _h_K0s_pt_y_all; // histogram for 2.5 < y < 4.0 (d2sigma) double sumKs0_30; // Sum of weights 2.5 < y < 3.0 double sumKs0_35; // Sum of weights 3.0 < y < 3.5 double sumKs0_40; // Sum of weights 3.5 < y < 4.0 // Various counters mainly for debugging and comparisons between different generators size_t sumKs0_badnull; // Nb of particles for which mother could not be identified size_t sumKs0_badlft; // Nb of mesons with long lived mothers size_t sumKs0_all; // Nb of all Ks0 generated size_t sumKs0_outup; // Nb of mesons with y > 4.0 size_t sumKs0_outdwn; // Nb of mesons with y < 2.5 size_t sum_low_pt_loss; // Nb of mesons with very low pT (indicates when units are mixed-up) size_t sum_high_pt_loss; // Nb of mesons with pT > 1.6 GeV/c // Map between PDG id and particle lifetimes in seconds std::map<int, double> partLftMap; // Set of PDG Ids for stable particles (PDG Id <= 100 are considered stable) static const int stablePDGIds[205]; //@} }; // Actual initialization according to ISO C++ requirements const int LHCB_2010_S8758301::stablePDGIds[205] = { 311, 543, 545, 551, 555, 557, 1103, 2101, 2103, 2203, 3101, 3103, 3201, 3203, 3303, 4101, 4103, 4124, 4201, 4203, 4301, 4303, 4312, 4314, 4322, 4324, 4334, 4403, 4414, 4424, 4434, 4444, 5101, 5103, 5114, 5201, 5203, 5214, 5224, 5301, 5303, 5314, 5324, 5334, 5401, 5403, 5412, 5414, 5422, 5424, 5432, 5434, 5444, 5503, 5514, 5524, 5534, 5544, 5554, 10022, 10333, 10335, 10443, 10541, 10543, 10551, 10553, 10555, 11112, 12118, 12122, 12218, 12222, 13316, 13326, 20543, 20553, 20555, 23314, 23324, 30343, 30353, 30363, 30553, 33314, 33324, 41214, 42124, 52114, 52214, 100311, 100315, 100321, 100325, 100411, 100413, 100421, 100423, 100551, 100555, 100557, 110551, 110553, 110555, 120553, 120555, 130553, 200551, 200555, 210551, 210553, 220553, 1000001, 1000002, 1000003, 1000004, 1000005, 1000006, 1000011, 1000012, 1000013, 1000014, 1000015, 1000016, 1000021, 1000022,
1000023, 1000024, 1000025, 1000035, 1000037, 1000039, 2000001, 2000002, 2000003, 2000004, 2000005, 2000006, 2000011, 2000012, 2000013, 2000014, 2000015, 2000016, 3000111, 3000113, 3000211, 3000213, 3000221, 3000223, 3000331, 3100021, 3100111, 3100113, 3200111, 3200113, 3300113, 3400113, 4000001, 4000002, 4000011, 4000012, 5000039, 9000221, 9900012, 9900014, 9900016, 9900023, 9900024, 9900041, 9900042}; // Hook for the plugin system DECLARE_RIVET_PLUGIN(LHCB_2010_S8758301); } diff --git a/analyses/pluginLHCb/LHCB_2011_I917009.cc b/analyses/pluginLHCb/LHCB_2011_I917009.cc --- a/analyses/pluginLHCb/LHCB_2011_I917009.cc +++ b/analyses/pluginLHCb/LHCB_2011_I917009.cc @@ -1,323 +1,323 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { class LHCB_2011_I917009 : public Analysis { public: /// @name Constructors etc. //@{ /// Constructor LHCB_2011_I917009() : Analysis("LHCB_2011_I917009"), rap_beam(0.0), pt_min(0.0), pt1_edge(0.65), pt2_edge(1.0), pt3_edge(2.5), rap_min(2.), rap_max(0.0), dsShift(0) { } //@} public: /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { int y_nbins = 4; fillMap(partLftMap); if (fuzzyEquals(sqrtS(), 0.9*TeV)) { rap_beam = 6.87; rap_max = 4.; pt_min = 0.25; } else if (fuzzyEquals(sqrtS(), 7*TeV)) { rap_beam = 8.92; rap_max = 4.5; pt_min = 0.15; y_nbins = 5; dsShift = 8; } else { MSG_ERROR("Incompatible beam energy!"); } // Create the sets of temporary histograms that will be used to make the ratios in the finalize() for (size_t i = 0; i < 12; ++i) _tmphistos[i] = YODA::Histo1D(y_nbins, rap_min, rap_max); for (size_t i = 12; i < 15; ++i) _tmphistos[i] = YODA::Histo1D(refData(dsShift+5, 1, 1)); for (size_t i = 15; i < 18; ++i) _tmphistos[i] = YODA::Histo1D(y_nbins, rap_beam - rap_max, rap_beam - rap_min); - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), 
"UFS"); } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); - const UnstableFinalState& ufs = apply<UnstableFinalState>(event, "UFS"); + const UnstableParticles& ufs = apply<UnstableParticles>(event, "UFS"); double ancestor_lftsum = 0.0; double y, pT; int id; int partIdx = -1; foreach (const Particle& p, ufs.particles()) { id = p.pid(); // continue if particle not a K0s nor (anti-)Lambda if ( (id == 310) || (id == -310) ) { partIdx = 2; } else if ( id == 3122 ) { partIdx = 1; } else if ( id == -3122 ) { partIdx = 0; } else { continue; } ancestor_lftsum = getMotherLifeTimeSum(p); // Lifetime cut: ctau sum of all particle ancestors < 10^-9 m according to the paper (see eq. 5) const double MAX_CTAU = 1.0E-9; // [m] if ( (ancestor_lftsum < 0.0) || (ancestor_lftsum > MAX_CTAU) ) continue; const FourMomentum& qmom = p.momentum(); y = log((qmom.E() + qmom.pz())/(qmom.E() - qmom.pz()))/2.; // skip this particle if it has too high or too low rapidity (extremely rare cases when E = +- pz) if ( std::isnan(y) || std::isinf(y) ) continue; y = fabs(y); if (!inRange(y, rap_min, rap_max)) continue; pT = sqrt((qmom.px() * qmom.px()) + (qmom.py() * qmom.py())); if (!inRange(pT, pt_min, pt3_edge)) continue; // Filling corresponding temporary histograms for pT intervals if (inRange(pT, pt_min, pt1_edge)) _tmphistos[partIdx*3].fill(y, weight); if (inRange(pT, pt1_edge, pt2_edge)) _tmphistos[partIdx*3+1].fill(y, weight); if (inRange(pT, pt2_edge, pt3_edge)) _tmphistos[partIdx*3+2].fill(y, weight); // Fill histo in rapidity for whole pT interval _tmphistos[partIdx+9].fill(y, weight); // Fill histo in pT for whole rapidity interval _tmphistos[partIdx+12].fill(pT, weight); // Fill histo in rapidity loss for whole pT interval _tmphistos[partIdx+15].fill(rap_beam - y, weight); } } // Generate the ratio histograms void finalize() { int dsId = dsShift + 1; for (size_t j = 0; j < 3; ++j) { /// @todo Compactify to two one-liners Scatter2DPtr s1 = bookScatter2D(dsId, 1, j+1);
divide(_tmphistos[j], _tmphistos[3+j], s1); Scatter2DPtr s2 = bookScatter2D(dsId+1, 1, j+1); divide(_tmphistos[j], _tmphistos[6+j], s2); } dsId += 2; for (size_t j = 3; j < 6; ++j) { /// @todo Compactify to two one-liners Scatter2DPtr s1 = bookScatter2D(dsId, 1, 1); divide(_tmphistos[3*j], _tmphistos[3*j+1], s1); dsId += 1; Scatter2DPtr s2 = bookScatter2D(dsId, 1, 1); divide(_tmphistos[3*j], _tmphistos[3*j+2], s2); dsId += 1; } } //@} private: // Get particle lifetime from hardcoded data double getLifeTime(int pid) { double lft = -1.0; if (pid < 0) pid = - pid; // Correct Pythia6 PIDs for f0(980), f0(1370) mesons if (pid == 10331) pid = 30221; if (pid == 10221) pid = 9010221; map<int, double>::iterator pPartLft = partLftMap.find(pid); // search stable particle list if (pPartLft == partLftMap.end()) { if (pid <= 100) return 0.0; for (size_t i=0; i < sizeof(stablePDGIds)/sizeof(unsigned int); i++) { if (pid == stablePDGIds[i]) { lft = 0.0; break; } } } else { lft = (*pPartLft).second; } if (lft < 0.0 && PID::isHadron(pid)) { MSG_ERROR("Could not determine lifetime for particle with PID " << pid << "... This V^0 will be considered unprompt!"); } return lft; } // Data members like post-cuts event weight counters go here const double getMotherLifeTimeSum(const Particle& p) { if (p.genParticle() == NULL) return -1.; double lftSum = 0.; double plft = 0.; const GenParticle* part = p.genParticle(); const GenVertex* ivtx = part->production_vertex(); while (ivtx) { if (ivtx->particles_in_size() < 1) { lftSum = -1.; break; }; const GenVertex::particles_in_const_iterator iPart_invtx = ivtx->particles_in_const_begin(); part = (*iPart_invtx); if ( !(part) ) { lftSum = -1.; break; }; ivtx = part->production_vertex(); if ( (part->pdg_id() == 2212) || !(ivtx) ) break; // reached beam plft = getLifeTime(part->pdg_id()); if (plft < 0.)
{ lftSum = -1.; break; }; lftSum += plft; }; return (lftSum * c_light); } /// @name Private variables //@{ // The rapidity of the beam according to the selected beam energy double rap_beam; // The edges of the intervals of transverse momentum double pt_min, pt1_edge, pt2_edge, pt3_edge; // The limits of the rapidity window double rap_min; double rap_max; // Indicates which set of histograms will be output to yoda file (according to beam energy) int dsShift; // Map between PDG id and particle lifetimes in seconds std::map<int, double> partLftMap; // Set of PDG Ids for stable particles (PDG Id <= 100 are considered stable) static const int stablePDGIds[205]; //@} /// @name Helper histograms //@{ /// Histograms are defined in the following order: anti-Lambda, Lambda and K0s. /// First 3 suites of 3 histograms correspond to each particle in bins of y for the 3 pT intervals. (9 histos) /// Next 3 histograms contain the particles in y bins for the whole pT interval (3 histos) /// Next 3 histograms contain the particles in y_loss bins for the whole pT interval (3 histos) /// Last 3 histograms contain the particles in pT bins for the whole rapidity (y) interval (3 histos) YODA::Histo1D _tmphistos[18]; //@} // Fill the PDG Id to Lifetime[seconds] map // Data was extracted from LHCb Particle Table through LHCb::ParticlePropertySvc bool fillMap(map<int, double>& m) { m[6] = 4.707703E-25; m[11] = 1.E+16; m[12] = 1.E+16; m[13] = 2.197019E-06; m[14] = 1.E+16; m[15] = 2.906E-13; m[16] = 1.E+16; m[22] = 1.E+16; m[23] = 2.637914E-25; m[24] = 3.075758E-25; m[25] = 9.4E-26; m[35] = 9.4E-26; m[36] = 9.4E-26; m[37] = 9.4E-26; m[84] = 3.335641E-13; m[85] = 1.290893E-12; m[111] = 8.4E-17; m[113] = 4.405704E-24; m[115] = 6.151516E-24; m[117] = 4.088275E-24; m[119] = 2.102914E-24; m[130] = 5.116E-08; m[150] = 1.525E-12; m[211] = 2.6033E-08; m[213] = 4.405704E-24; m[215] = 6.151516E-24; m[217] = 4.088275E-24; m[219] = 2.102914E-24; m[221] = 5.063171E-19; m[223] = 7.752794E-23; m[225] = 3.555982E-24; m[227] =
3.91793E-24; m[229] = 2.777267E-24; m[310] = 8.953E-11; m[313] = 1.308573E-23; m[315] = 6.038644E-24; m[317] = 4.139699E-24; m[319] = 3.324304E-24; m[321] = 1.238E-08; m[323] = 1.295693E-23; m[325] = 6.682357E-24; m[327] = 4.139699E-24; m[329] = 3.324304E-24; m[331] = 3.210791E-21; m[333] = 1.545099E-22; m[335] = 9.016605E-24; m[337] = 7.565657E-24; m[350] = 1.407125E-12; m[411] = 1.04E-12; m[413] = 6.856377E-21; m[415] = 1.778952E-23; m[421] = 4.101E-13; m[423] = 1.000003E-19; m[425] = 1.530726E-23; m[431] = 5.E-13; m[433] = 1.000003E-19; m[435] = 3.291061E-23; m[441] = 2.465214E-23; m[443] = 7.062363E-21; m[445] = 3.242425E-22; m[510] = 1.525E-12; m[511] = 1.525E-12; m[513] = 1.000019E-19; m[515] = 1.31E-23; m[521] = 1.638E-12; m[523] = 1.000019E-19; m[525] = 1.31E-23; m[530] = 1.536875E-12; m[531] = 1.472E-12; m[533] = 1.E-19; m[535] = 1.31E-23; m[541] = 4.5E-13; m[553] = 1.218911E-20; m[1112] = 4.539394E-24; m[1114] = 5.578069E-24; m[1116] = 1.994582E-24; m[1118] = 2.269697E-24; m[1212] = 4.539394E-24; m[1214] = 5.723584E-24; m[1216] = 1.994582E-24; m[1218] = 1.316424E-24; m[2112] = 8.857E+02; m[2114] = 5.578069E-24; m[2116] = 4.388081E-24; m[2118] = 2.269697E-24; m[2122] = 4.539394E-24; m[2124] = 5.723584E-24; m[2126] = 1.994582E-24; m[2128] = 1.316424E-24; m[2212] = 1.E+16; m[2214] = 5.578069E-24; m[2216] = 4.388081E-24; m[2218] = 2.269697E-24; m[2222] = 4.539394E-24; m[2224] = 5.578069E-24; m[2226] = 1.994582E-24; m[2228] = 2.269697E-24; m[3112] = 1.479E-10; m[3114] = 1.670589E-23; m[3116] = 5.485102E-24; m[3118] = 3.656734E-24; m[3122] = 2.631E-10; m[3124] = 4.219309E-23; m[3126] = 8.227653E-24; m[3128] = 3.291061E-24; m[3212] = 7.4E-20; m[3214] = 1.828367E-23; m[3216] = 5.485102E-24; m[3218] = 3.656734E-24; m[3222] = 8.018E-11; m[3224] = 1.838582E-23; m[3226] = 5.485102E-24; m[3228] = 3.656734E-24; m[3312] = 1.639E-10; m[3314] = 6.648608E-23; m[3322] = 2.9E-10; m[3324] = 7.233101E-23; m[3334] = 8.21E-11; m[4112] = 2.991874E-22; m[4114] = 4.088274E-23; 
m[4122] = 2.E-13; m[4132] = 1.12E-13; m[4212] = 3.999999E-22; m[4214] = 3.291061E-22; m[4222] = 2.951624E-22; m[4224] = 4.417531E-23; m[4232] = 4.42E-13; m[4332] = 6.9E-14; m[4412] = 3.335641E-13; m[4422] = 3.335641E-13; m[4432] = 3.335641E-13; m[5112] = 1.E-19; m[5122] = 1.38E-12; m[5132] = 1.42E-12; m[5142] = 1.290893E-12; m[5212] = 1.E-19; m[5222] = 1.E-19; m[5232] = 1.42E-12; m[5242] = 1.290893E-12; m[5312] = 1.E-19; m[5322] = 1.E-19; m[5332] = 1.55E-12; m[5342] = 1.290893E-12; m[5442] = 1.290893E-12; m[5512] = 1.290893E-12; m[5522] = 1.290893E-12; m[5532] = 1.290893E-12; m[5542] = 1.290893E-12; m[10111] = 2.48382E-24; m[10113] = 4.635297E-24; m[10115] = 2.54136E-24; m[10211] = 2.48382E-24; m[10213] = 4.635297E-24; m[10215] = 2.54136E-24; m[10223] = 1.828367E-24; m[10225] = 3.636531E-24; m[10311] = 2.437823E-24; m[10313] = 7.313469E-24; m[10315] = 3.538775E-24; m[10321] = 2.437823E-24; m[10323] = 7.313469E-24; m[10325] = 3.538775E-24; m[10331] = 4.804469E-24; m[10411] = 4.38E-24; m[10413] = 3.29E-23; m[10421] = 4.38E-24; m[10423] = 3.22653E-23; m[10431] = 6.5821E-22; m[10433] = 6.5821E-22; m[10441] = 6.453061E-23; m[10511] = 4.39E-24; m[10513] = 1.65E-23; m[10521] = 4.39E-24; m[10523] = 1.65E-23; m[10531] = 4.39E-24; m[10533] = 1.65E-23; m[11114] = 2.194041E-24; m[11116] = 1.828367E-24; m[11212] = 1.880606E-24; m[11216] = 1.828367E-24; m[12112] = 2.194041E-24; m[12114] = 2.194041E-24; m[12116] = 5.063171E-24; m[12126] = 1.828367E-24; m[12212] = 2.194041E-24; m[12214] = 2.194041E-24; m[12216] = 5.063171E-24; m[12224] = 2.194041E-24; m[12226] = 1.828367E-24; m[13112] = 6.582122E-24; m[13114] = 1.09702E-23; m[13116] = 5.485102E-24; m[13122] = 1.316424E-23; m[13124] = 1.09702E-23; m[13126] = 6.928549E-24; m[13212] = 6.582122E-24; m[13214] = 1.09702E-23; m[13216] = 5.485102E-24; m[13222] = 6.582122E-24; m[13224] = 1.09702E-23; m[13226] = 5.485102E-24; m[13314] = 2.742551E-23; m[13324] = 2.742551E-23; m[14122] = 1.828367E-22; m[20022] = 1.E+16; m[20113] = 
1.567172E-24; m[20213] = 1.567172E-24; m[20223] = 2.708692E-23; m[20313] = 3.782829E-24; m[20315] = 2.384827E-24; m[20323] = 3.782829E-24; m[20325] = 2.384827E-24; m[20333] = 1.198929E-23; m[20413] = 2.63E-24; m[20423] = 2.63E-24; m[20433] = 6.5821E-22; m[20443] = 7.395643E-22; m[20513] = 2.63E-24; m[20523] = 2.63E-24; m[20533] = 2.63E-24; m[21112] = 2.632849E-24; m[21114] = 3.291061E-24; m[21212] = 2.632849E-24; m[21214] = 6.582122E-24; m[22112] = 4.388081E-24; m[22114] = 3.291061E-24; m[22122] = 2.632849E-24; m[22124] = 6.582122E-24; m[22212] = 4.388081E-24; m[22214] = 3.291061E-24; m[22222] = 2.632849E-24; m[22224] = 3.291061E-24; m[23112] = 7.313469E-24; m[23114] = 2.991874E-24; m[23122] = 4.388081E-24; m[23124] = 6.582122E-24; m[23126] = 3.291061E-24; m[23212] = 7.313469E-24; m[23214] = 2.991874E-24; m[23222] = 7.313469E-24; m[23224] = 2.991874E-24; m[30113] = 2.632849E-24; m[30213] = 2.632849E-24; m[30221] = 1.880606E-24; m[30223] = 2.089563E-24; m[30313] = 2.056913E-24; m[30323] = 2.056913E-24; m[30443] = 2.419898E-23; m[31114] = 1.880606E-24; m[31214] = 3.291061E-24; m[32112] = 3.989164E-24; m[32114] = 1.880606E-24; m[32124] = 3.291061E-24; m[32212] = 3.989164E-24; m[32214] = 1.880606E-24; m[32224] = 1.880606E-24; m[33122] = 1.880606E-23; m[42112] = 6.582122E-24; m[42212] = 6.582122E-24; m[43122] = 2.194041E-24; m[53122] = 4.388081E-24; m[100111] = 1.645531E-24; m[100113] = 1.64553E-24; m[100211] = 1.645531E-24; m[100213] = 1.64553E-24; m[100221] = 1.196749E-23; m[100223] = 3.061452E-24; m[100313] = 2.837122E-24; m[100323] = 2.837122E-24; m[100331] = 4.459432E-25; m[100333] = 4.388081E-24; m[100441] = 4.701516E-23; m[100443] = 2.076379E-21; m[100553] = 2.056913E-20; m[200553] = 3.242425E-20; m[300553] = 3.210791E-23; m[9000111] = 8.776163E-24; m[9000211] = 8.776163E-24; m[9000443] = 8.227652E-24; m[9000553] = 5.983747E-24; m[9010111] = 3.164482E-24; m[9010211] = 3.164482E-24; m[9010221] = 9.403031E-24; m[9010443] = 8.438618E-24; m[9010553] = 8.3318E-24; 
m[9020443] = 1.061633E-23; m[9030221] = 6.038644E-24; m[9042413] = 2.07634E-21; m[9050225] = 1.394517E-24; m[9060225] = 3.291061E-24; m[9080225] = 4.388081E-24; m[9090225] = 2.056913E-24; m[9910445] = 2.07634E-21; m[9920443] = 2.07634E-21; return true; } }; const int LHCB_2011_I917009::stablePDGIds[205] = { 311, 543, 545, 551, 555, 557, 1103, 2101, 2103, 2203, 3101, 3103, 3201, 3203, 3303, 4101, 4103, 4124, 4201, 4203, 4301, 4303, 4312, 4314, 4322, 4324, 4334, 4403, 4414, 4424, 4434, 4444, 5101, 5103, 5114, 5201, 5203, 5214, 5224, 5301, 5303, 5314, 5324, 5334, 5401, 5403, 5412, 5414, 5422, 5424, 5432, 5434, 5444, 5503, 5514, 5524, 5534, 5544, 5554, 10022, 10333, 10335, 10443, 10541, 10543, 10551, 10553, 10555, 11112, 12118, 12122, 12218, 12222, 13316, 13326, 20543, 20553, 20555, 23314, 23324, 30343, 30353, 30363, 30553, 33314, 33324, 41214, 42124, 52114, 52214, 100311, 100315, 100321, 100325, 100411, 100413, 100421, 100423, 100551, 100555, 100557, 110551, 110553, 110555, 120553, 120555, 130553, 200551, 200555, 210551, 210553, 220553, 1000001, 1000002, 1000003, 1000004, 1000005, 1000006, 1000011, 1000012, 1000013, 1000014, 1000015, 1000016, 1000021, 1000022, 1000023, 1000024, 1000025, 1000035, 1000037, 1000039, 2000001, 2000002, 2000003, 2000004, 2000005, 2000006, 2000011, 2000012, 2000013, 2000014, 2000015, 2000016, 3000111, 3000113, 3000211, 3000213, 3000221, 3000223, 3000331, 3100021, 3100111, 3100113, 3200111, 3200113, 3300113, 3400113, 4000001, 4000002, 4000011, 4000012, 5000039, 9000221, 9900012, 9900014, 9900016, 9900023, 9900024, 9900041, 9900042 }; // Hook for the plugin system DECLARE_RIVET_PLUGIN(LHCB_2011_I917009); } diff --git a/analyses/pluginLHCb/LHCB_2011_I919315.cc b/analyses/pluginLHCb/LHCB_2011_I919315.cc --- a/analyses/pluginLHCb/LHCB_2011_I919315.cc +++ b/analyses/pluginLHCb/LHCB_2011_I919315.cc @@ -1,96 +1,96 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Tools/BinnedHistogram.hh" -#include 
"Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { class LHCB_2011_I919315 : public Analysis { public: /// @name Constructors etc. //@{ /// Constructor LHCB_2011_I919315() : Analysis("LHCB_2011_I919315") { } //@} public: /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); _h_Phi_pT_y.addHistogram( 2.44, 2.62, bookHisto1D(2, 1, 1)); _h_Phi_pT_y.addHistogram( 2.62, 2.80, bookHisto1D(2, 1, 2)); _h_Phi_pT_y.addHistogram( 2.80, 2.98, bookHisto1D(3, 1, 1)); _h_Phi_pT_y.addHistogram( 2.98, 3.16, bookHisto1D(3, 1, 2)); _h_Phi_pT_y.addHistogram( 3.16, 3.34, bookHisto1D(4, 1, 1)); _h_Phi_pT_y.addHistogram( 3.34, 3.52, bookHisto1D(4, 1, 2)); _h_Phi_pT_y.addHistogram( 3.52, 3.70, bookHisto1D(5, 1, 1)); _h_Phi_pT_y.addHistogram( 3.70, 3.88, bookHisto1D(5, 1, 2)); _h_Phi_pT_y.addHistogram( 3.88, 4.06, bookHisto1D(6, 1, 1)); _h_Phi_pT = bookHisto1D(7, 1, 1); _h_Phi_y = bookHisto1D(8, 1, 1); } /// Perform the per-event analysis void analyze (const Event& event) { const double weight = event.weight(); - const UnstableFinalState& ufs = apply<UnstableFinalState>(event, "UFS"); + const UnstableParticles& ufs = apply<UnstableParticles>(event, "UFS"); foreach (const Particle& p, ufs.particles()) { const PdgId id = p.abspid(); if (id == 333) { // id 333 = phi-meson double y = p.rapidity(); double pT = p.perp(); if (pT < 0.6*GeV || pT > 5.0*GeV || y < 2.44 || y > 4.06) { continue; } _h_Phi_y->fill (y, weight); _h_Phi_pT->fill (pT/MeV, weight); _h_Phi_pT_y.fill(y, pT/GeV, weight); } } } /// Normalise histograms etc., after the run void finalize() { double scale_factor = crossSectionPerEvent()/microbarn; scale (_h_Phi_y, scale_factor); scale (_h_Phi_pT, scale_factor); _h_Phi_pT_y.scale(scale_factor/1000., this); } //@} private: /// @name Histograms //@{ Histo1DPtr _h_Phi_y; Histo1DPtr _h_Phi_pT; BinnedHistogram<double> _h_Phi_pT_y; //@} }; //
The hook for the plugin system DECLARE_RIVET_PLUGIN(LHCB_2011_I919315); } //@} diff --git a/analyses/pluginLHCb/LHCB_2013_I1218996.cc b/analyses/pluginLHCb/LHCB_2013_I1218996.cc --- a/analyses/pluginLHCb/LHCB_2013_I1218996.cc +++ b/analyses/pluginLHCb/LHCB_2013_I1218996.cc @@ -1,138 +1,138 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Tools/BinnedHistogram.hh" #include "Rivet/Projections/FinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// LHCb prompt charm hadron pT and rapidity spectra class LHCB_2013_I1218996 : public Analysis { public: /// @name Constructors etc. //@{ /// Constructor LHCB_2013_I1218996() : Analysis("LHCB_2013_I1218996") { } //@} /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { /// Initialise and register projections - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); /// Book histograms _h_pdg411_Dplus_pT_y.addHistogram( 2.0, 2.5, bookHisto1D(3, 1, 1)); _h_pdg411_Dplus_pT_y.addHistogram( 2.5, 3.0, bookHisto1D(3, 1, 2)); _h_pdg411_Dplus_pT_y.addHistogram( 3.0, 3.5, bookHisto1D(3, 1, 3)); _h_pdg411_Dplus_pT_y.addHistogram( 3.5, 4.0, bookHisto1D(3, 1, 4)); _h_pdg411_Dplus_pT_y.addHistogram( 4.0, 4.5, bookHisto1D(3, 1, 5)); _h_pdg421_Dzero_pT_y.addHistogram( 2.0, 2.5, bookHisto1D(2, 1, 1)); _h_pdg421_Dzero_pT_y.addHistogram( 2.5, 3.0, bookHisto1D(2, 1, 2)); _h_pdg421_Dzero_pT_y.addHistogram( 3.0, 3.5, bookHisto1D(2, 1, 3)); _h_pdg421_Dzero_pT_y.addHistogram( 3.5, 4.0, bookHisto1D(2, 1, 4)); _h_pdg421_Dzero_pT_y.addHistogram( 4.0, 4.5, bookHisto1D(2, 1, 5)); _h_pdg431_Dsplus_pT_y.addHistogram( 2.0, 2.5, bookHisto1D(5, 1, 1)); _h_pdg431_Dsplus_pT_y.addHistogram( 2.5, 3.0, bookHisto1D(5, 1, 2)); _h_pdg431_Dsplus_pT_y.addHistogram( 3.0, 3.5, bookHisto1D(5, 1, 3)); _h_pdg431_Dsplus_pT_y.addHistogram( 3.5, 4.0, bookHisto1D(5, 1, 4)); 
_h_pdg431_Dsplus_pT_y.addHistogram( 4.0, 4.5, bookHisto1D(5, 1, 5)); _h_pdg413_Dstarplus_pT_y.addHistogram( 2.0, 2.5, bookHisto1D(4, 1, 1)); _h_pdg413_Dstarplus_pT_y.addHistogram( 2.5, 3.0, bookHisto1D(4, 1, 2)); _h_pdg413_Dstarplus_pT_y.addHistogram( 3.0, 3.5, bookHisto1D(4, 1, 3)); _h_pdg413_Dstarplus_pT_y.addHistogram( 3.5, 4.0, bookHisto1D(4, 1, 4)); _h_pdg413_Dstarplus_pT_y.addHistogram( 4.0, 4.5, bookHisto1D(4, 1, 5)); _h_pdg4122_Lambdac_pT = bookHisto1D(1, 1, 1); } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); /// @todo Use PrimaryHadrons to avoid double counting and automatically remove the contributions from unstable? - const UnstableFinalState &ufs = apply<UnstableFinalState>(event, "UFS"); + const UnstableParticles &ufs = apply<UnstableParticles>(event, "UFS"); foreach (const Particle& p, ufs.particles() ) { // We're only interested in charm hadrons if (!p.isHadron() || !p.hasCharm()) continue; // Kinematic acceptance const double y = p.absrap(); ///< Double analysis efficiency with a "two-sided LHCb" const double pT = p.pT(); // Fiducial acceptance of the measurements if (pT > 8.0*GeV || y < 2.0 || y > 4.5) continue; /// Experimental selection removes non-prompt charm hadrons: we ignore those from b decays if (p.fromBottom()) continue; switch (p.abspid()) { case 411: _h_pdg411_Dplus_pT_y.fill(y, pT/GeV, weight); break; case 421: _h_pdg421_Dzero_pT_y.fill(y, pT/GeV, weight); break; case 431: _h_pdg431_Dsplus_pT_y.fill(y, pT/GeV, weight); break; case 413: _h_pdg413_Dstarplus_pT_y.fill(y, pT/GeV, weight); break; case 4122: _h_pdg4122_Lambdac_pT->fill(pT/GeV, weight); break; } } } /// Normalise histograms etc., after the run void finalize() { const double scale_factor = 0.5 * crossSection()/microbarn / sumOfWeights(); /// Avoid the implicit division by the bin width in the BinnedHistogram::scale method.
foreach (Histo1DPtr h, _h_pdg411_Dplus_pT_y.getHistograms()) h->scaleW(scale_factor); foreach (Histo1DPtr h, _h_pdg421_Dzero_pT_y.getHistograms()) h->scaleW(scale_factor); foreach (Histo1DPtr h, _h_pdg431_Dsplus_pT_y.getHistograms()) h->scaleW(scale_factor); foreach (Histo1DPtr h, _h_pdg413_Dstarplus_pT_y.getHistograms()) h->scaleW(scale_factor); _h_pdg4122_Lambdac_pT->scaleW(scale_factor); } //@} private: /// @name Histograms //@{ BinnedHistogram<double> _h_pdg411_Dplus_pT_y; BinnedHistogram<double> _h_pdg421_Dzero_pT_y; BinnedHistogram<double> _h_pdg431_Dsplus_pT_y; BinnedHistogram<double> _h_pdg413_Dstarplus_pT_y; Histo1DPtr _h_pdg4122_Lambdac_pT; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(LHCB_2013_I1218996); } diff --git a/analyses/pluginLHCf/LHCF_2012_I1115479.cc b/analyses/pluginLHCf/LHCF_2012_I1115479.cc --- a/analyses/pluginLHCf/LHCF_2012_I1115479.cc +++ b/analyses/pluginLHCf/LHCF_2012_I1115479.cc @@ -1,65 +1,65 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Tools/BinnedHistogram.hh" namespace Rivet { class LHCF_2012_I1115479 : public Analysis { public: LHCF_2012_I1115479() : Analysis("LHCF_2012_I1115479") { } public: void init() { - declare(UnstableFinalState(),"UFS"); + declare(UnstableParticles(),"UFS"); _binnedHistos_y_pT.addHistogram( 8.9, 9.0, bookHisto1D(1, 1, 1)); _binnedHistos_y_pT.addHistogram( 9.0, 9.2, bookHisto1D(2, 1, 1)); _binnedHistos_y_pT.addHistogram( 9.2, 9.4, bookHisto1D(3, 1, 1)); _binnedHistos_y_pT.addHistogram( 9.4, 9.6, bookHisto1D(4, 1, 1)); _binnedHistos_y_pT.addHistogram( 9.6, 10.0, bookHisto1D(5, 1, 1)); _binnedHistos_y_pT.addHistogram(10.0, 11.0, bookHisto1D(6, 1, 1)); } void analyze(const Event& event) { - const UnstableFinalState& ufs = apply<UnstableFinalState>(event, "UFS"); + const UnstableParticles& ufs = apply<UnstableParticles>(event, "UFS"); const double weight = event.weight(); const double dphi = TWOPI; foreach (const Particle& p, ufs.particles()) {
if (p.pid() == 111) { double pT = p.pT(); double y = p.rapidity(); if (pT > 0.6*GeV) continue; const double scaled_weight = weight/(dphi*pT/GeV); _binnedHistos_y_pT.fill(y, pT/GeV, scaled_weight); } } } void finalize() { _binnedHistos_y_pT.scale( 1./sumOfWeights() , this); } private: BinnedHistogram _binnedHistos_y_pT; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(LHCF_2012_I1115479); } diff --git a/analyses/pluginLHCf/LHCF_2016_I1385877.cc b/analyses/pluginLHCf/LHCF_2016_I1385877.cc --- a/analyses/pluginLHCf/LHCF_2016_I1385877.cc +++ b/analyses/pluginLHCf/LHCF_2016_I1385877.cc @@ -1,230 +1,230 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" #include "Rivet/Tools/BinnedHistogram.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief Add a short analysis description here class LHCF_2016_I1385877 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(LHCF_2016_I1385877); //In case of some models there can be very small value pT but greater than 0. //In order to avoid unphysical behavior in the first bin a cutoff is needed //If you are sure the model does not have this problem you can set pt_cutoff to 0. 
const double pt_cutoff = 0.01; /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { // Initialise and register projections - addProjection(UnstableFinalState(), "UFS"); + addProjection(UnstableParticles(), "UFS"); addProjection(Beam(), "Beam"); // calculate beam rapidity const Particle bm1 = beams().first; const Particle bm2 = beams().second; _beam_rap_1 = bm1.rap(); _beam_rap_2 = bm2.rap(); MSG_INFO("Beam 1 : momentum " << bm1.pz() << " PID " << bm1.pid() << " rapidity " << bm1.rap() ); MSG_INFO("Beam 2 : momentum " << bm2.pz() << " PID " << bm2.pid() << " rapidity " << bm2.rap() ); const double _sqrts = sqrtS(); MSG_INFO("CM energy: " << _sqrts ); _beam_rap = _beam_rap_1; if(bm1.pid()==2212 && bm2.pid()==2212) { //p-p _pp_Pb = true; if( fuzzyEquals( _sqrts/GeV, 7000., 1E-3) ) { _p_pi0_rap_apT = bookProfile1D(1, 1, 2); _h_pi0_rap_pT.addHistogram( 8.8, 9.0, bookHisto1D(2, 1, 2)); _h_pi0_rap_pT.addHistogram( 9.0, 9.2, bookHisto1D(3, 1, 2)); _h_pi0_rap_pT.addHistogram( 9.2, 9.4, bookHisto1D(4, 1, 2)); _h_pi0_rap_pT.addHistogram( 9.4, 9.6, bookHisto1D(5, 1, 2)); _h_pi0_rap_pT.addHistogram( 9.6, 9.8, bookHisto1D(6, 1, 2)); _h_pi0_rap_pT.addHistogram( 9.8, 10.0, bookHisto1D(7, 1, 2)); _h_pi0_rap_pT.addHistogram( 10.0, 10.2, bookHisto1D(8, 1, 2)); _h_pi0_rap_pT.addHistogram( 10.2, 10.4, bookHisto1D(9, 1, 2)); _h_pi0_rap_pT.addHistogram( 10.4, 10.6, bookHisto1D(10, 1, 2)); _h_pi0_rap_pT.addHistogram( 10.6, 10.8, bookHisto1D(11, 1, 2)); _h_pi0_pT_pZ.addHistogram( 0.0, 0.2, bookHisto1D(12, 1, 2)); _h_pi0_pT_pZ.addHistogram( 0.2, 0.4, bookHisto1D(13, 1, 2)); _h_pi0_pT_pZ.addHistogram( 0.4, 0.6, bookHisto1D(14, 1, 2)); _h_pi0_pT_pZ.addHistogram( 0.6, 0.8, bookHisto1D(15, 1, 2)); _h_pi0_pT_pZ.addHistogram( 0.8, 1.0, bookHisto1D(16, 1, 2)); _h_pi0_rap = bookHisto1D(21, 1, 2); _p_pi0_raploss_apT = bookProfile1D(22, 1, 2); _h_pi0_raploss = bookHisto1D(23, 1, 2); } else if(fuzzyEquals( _sqrts/GeV, 2760., 1E-3) ){ 
_p_pi0_rap_apT = bookProfile1D(1, 1, 1); _h_pi0_rap_pT.addHistogram( 8.8, 9.0, bookHisto1D(2, 1, 1)); _h_pi0_rap_pT.addHistogram( 9.0, 9.2, bookHisto1D(3, 1, 1)); _h_pi0_rap_pT.addHistogram( 9.2, 9.4, bookHisto1D(4, 1, 1)); _h_pi0_rap_pT.addHistogram( 9.4, 9.6, bookHisto1D(5, 1, 1)); _h_pi0_rap_pT.addHistogram( 9.6, 9.8, bookHisto1D(6, 1, 1)); _h_pi0_pT_pZ.addHistogram( 0.0, 0.2, bookHisto1D(12, 1, 1)); _h_pi0_pT_pZ.addHistogram( 0.2, 0.4, bookHisto1D(13, 1, 1)); _h_pi0_rap = bookHisto1D(21, 1, 1); _p_pi0_raploss_apT = bookProfile1D(22, 1, 1); _h_pi0_raploss = bookHisto1D(23, 1, 1); }else{ MSG_INFO("p-p collisions : energy out of range!"); } } else if (bm1.pid()==2212 && bm2.pid()==1000822080){ //p-Pb _pp_Pb = false; if( fuzzyEquals( _sqrts/sqrt(208.)/GeV, 5020., 1E-3) ) { _p_pi0_rap_apT = bookProfile1D(1, 1, 3); _h_pi0_rap_pT.addHistogram( 8.8, 9.0, bookHisto1D(2, 1, 3)); _h_pi0_rap_pT.addHistogram( 9.0, 9.2, bookHisto1D(3, 1, 3)); _h_pi0_rap_pT.addHistogram( 9.2, 9.4, bookHisto1D(4, 1, 3)); _h_pi0_rap_pT.addHistogram( 9.4, 9.6, bookHisto1D(5, 1, 3)); _h_pi0_rap_pT.addHistogram( 9.6, 9.8, bookHisto1D(6, 1, 3)); _h_pi0_rap_pT.addHistogram( 9.8, 10.0, bookHisto1D(7, 1, 3)); _h_pi0_rap_pT.addHistogram( 10.0, 10.2, bookHisto1D(8, 1, 3)); _h_pi0_rap_pT.addHistogram( 10.2, 10.4, bookHisto1D(9, 1, 3)); _h_pi0_rap_pT.addHistogram( 10.4, 10.6, bookHisto1D(10, 1, 3)); _h_pi0_rap_pT.addHistogram( 10.6, 10.8, bookHisto1D(11, 1, 3)); _h_pi0_pT_pZ.addHistogram( 0.0, 0.2, bookHisto1D(12, 1, 3)); _h_pi0_pT_pZ.addHistogram( 0.2, 0.4, bookHisto1D(13, 1, 3)); _h_pi0_pT_pZ.addHistogram( 0.4, 0.6, bookHisto1D(14, 1, 3)); _h_pi0_pT_pZ.addHistogram( 0.6, 0.8, bookHisto1D(15, 1, 3)); _h_pi0_pT_pZ.addHistogram( 0.8, 1.0, bookHisto1D(16, 1, 3)); //_h_pi0_rap = bookHisto1D(21, 1, 3); _p_pi0_raploss_apT = bookProfile1D(22, 1, 3); //_h_pi0_raploss = bookHisto1D(23, 1, 3); }else{ MSG_INFO("p-Pb collisions : energy out of range!"); } } else { MSG_INFO("Beam PDGID out of range!"); } _nevt = 0.; 
} /// Perform the per-event analysis void analyze(const Event& event) { _nevt = _nevt + 1.; - const UnstableFinalState &ufs = applyProjection<UnstableFinalState>(event, "UFS"); + const UnstableParticles &ufs = applyProjection<UnstableParticles>(event, "UFS"); Particles ufs_particles = ufs.particles(); for (Particle& p: ufs_particles ) { // select neutral pion if(p.abspid() != 111) continue; if(p.pz()/GeV<0.) continue; if(p.pT()/GeV < pt_cutoff) continue; if(_pp_Pb) { //pp collisions const double pZ = p.pz()/GeV; const double pT = p.pT()/GeV; const double pT_MeV = p.pT()/MeV; const double en = p.E()/GeV; const double rap = p.rap(); const double raploss = _beam_rap_1 - p.rap(); //mitsuka-like _p_pi0_rap_apT->fill( rap , pT_MeV , 1.0 ); _h_pi0_rap_pT.fill( rap, pT , 1.0 / pT ); _h_pi0_pT_pZ.fill( pT, pZ , en / pT); _h_pi0_rap->fill( rap, 1.0 ); _p_pi0_raploss_apT->fill( raploss , pT_MeV , 1.0 ); _h_pi0_raploss->fill( raploss, 1.0 ); } else {//pPb collisions const double pZ = p.pz()/GeV; const double pT = p.pT()/GeV; const double pT_MeV = p.pT()/MeV; const double en = p.E()/GeV; const double rap = p.rap(); const double raploss = _beam_rap_1 - p.rap(); //mitsuka-like _p_pi0_rap_apT->fill( rap , pT_MeV , 1.0 ); _h_pi0_rap_pT.fill( rap, pT , 1.0 / pT ); _h_pi0_pT_pZ.fill( pT, pZ , en / pT); //_h_pi0_rap->fill( rap, 1.0 ); _p_pi0_raploss_apT->fill( raploss , pT_MeV , 1.0 ); //_h_pi0_raploss->fill( raploss, 1.0 ); } } } /// Normalise histograms etc., after the run void finalize() { const double inv_scale_factor = 1. / _nevt / (2.*PI); const double pt_bin_width = 0.2; for (Histo1DPtr h: _h_pi0_pT_pZ.getHistograms()){ if(h->path() == "/LHCF_2016_I1385877/d12-x01-y01" || h->path() == "/LHCF_2016_I1385877/d12-x01-y02" || h->path() == "/LHCF_2016_I1385877/d12-x01-y03") h->scaleW( inv_scale_factor / (pt_bin_width-pt_cutoff) ); else h->scaleW( inv_scale_factor / pt_bin_width ); } const double scale_factor = 1.
/ _nevt / (2.*PI); const double rap_bin_width = 0.2; for (Histo1DPtr h: _h_pi0_rap_pT.getHistograms()) { const int cutoff_bin = h->binIndexAt(pt_cutoff); if(cutoff_bin>=0) { // for(unsigned int ibin=0; ibinnumBins(); ++ibin) // cout << ibin << " " << h->bin(ibin).area() << endl; const double cutoff_wdt = h->bin(cutoff_bin).xMax()-h->bin(cutoff_bin).xMin(); h->bin(cutoff_bin).scaleW((cutoff_wdt)/(cutoff_wdt-pt_cutoff)); // for(unsigned int ibin=0; ibinnumBins(); ++ibin) // cout << ibin << " " << h->bin(ibin).area() << endl; } h->scaleW( scale_factor / rap_bin_width ); } if(_pp_Pb) { scale( _h_pi0_rap , 1. / _nevt ); scale( _h_pi0_raploss , 1. / _nevt ); } } //@} private: /// @name Histograms //@{ bool _pp_Pb; double _nevt; double _beam_rap; double _beam_rap_1; double _beam_rap_2; BinnedHistogram _h_pi0_pT_pZ; BinnedHistogram _h_pi0_rap_pT; Profile1DPtr _p_pi0_rap_apT; Histo1DPtr _h_pi0_rap; Profile1DPtr _p_pi0_raploss_apT; Histo1DPtr _h_pi0_raploss; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(LHCF_2016_I1385877); } diff --git a/analyses/pluginMC/MC_HFJETS.cc b/analyses/pluginMC/MC_HFJETS.cc --- a/analyses/pluginMC/MC_HFJETS.cc +++ b/analyses/pluginMC/MC_HFJETS.cc @@ -1,151 +1,151 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Projections/FinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Projections/PrimaryHadrons.hh" #include "Rivet/Projections/HeavyHadrons.hh" namespace Rivet { class MC_HFJETS : public Analysis { public: /// Constructor MC_HFJETS() : Analysis("MC_HFJETS") { } public: /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { FastJets fj(FinalState(-5, 5), FastJets::ANTIKT, 0.6); fj.useInvisibles(); declare(fj, "Jets"); declare(HeavyHadrons(Cuts::abseta < 5 && Cuts::pT > 500*MeV), "BCHadrons"); _h_ptCJetLead = bookHisto1D("ptCJetLead", 
linspace(5, 0, 20, false) + logspace(25, 20, 200)); _h_ptCHadrLead = bookHisto1D("ptCHadrLead", linspace(5, 0, 10, false) + logspace(25, 10, 200)); _h_ptFracC = bookHisto1D("ptfracC", 50, 0, 1.5); _h_eFracC = bookHisto1D("efracC", 50, 0, 1.5); _h_ptBJetLead = bookHisto1D("ptBJetLead", linspace(5, 0, 20, false) + logspace(25, 20, 200)); _h_ptBHadrLead = bookHisto1D("ptBHadrLead", linspace(5, 0, 10, false) + logspace(25, 10, 200)); _h_ptFracB = bookHisto1D("ptfracB", 50, 0, 1.5); _h_eFracB = bookHisto1D("efracB", 50, 0, 1.5); } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); // Get jets and heavy hadrons const Jets& jets = apply<FastJets>(event, "Jets").jetsByPt(); const Particles bhadrons = sortByPt(apply<HeavyHadrons>(event, "BCHadrons").bHadrons()); const Particles chadrons = sortByPt(apply<HeavyHadrons>(event, "BCHadrons").cHadrons()); MSG_DEBUG("# b hadrons = " << bhadrons.size() << ", # c hadrons = " << chadrons.size()); // Max HF hadron--jet axis dR to be regarded as a jet tag const double MAX_DR = 0.3; // Tag the leading b and c jets with a deltaR < 0.3 match // b-tagged jets are excluded from also being considered as c-tagged /// @todo Do this again with the ghost match?
MSG_DEBUG("Getting b/c-tags"); bool gotLeadingB = false, gotLeadingC = false; foreach (const Jet& j, jets) { if (!gotLeadingB) { FourMomentum leadBJet, leadBHadr; double dRmin = MAX_DR; foreach (const Particle& b, bhadrons) { const double dRcand = min(dRmin, deltaR(j, b)); if (dRcand < dRmin) { dRmin = dRcand; leadBJet = j.momentum(); leadBHadr = b.momentum(); MSG_DEBUG("New closest b-hadron jet tag candidate: dR = " << dRmin << " for jet pT = " << j.pT()/GeV << " GeV, " << " b hadron pT = " << b.pT()/GeV << " GeV, PID = " << b.pid()); } } if (dRmin < MAX_DR) { // A jet has been tagged, so fill the histos and break the loop _h_ptBJetLead->fill(leadBJet.pT()/GeV, weight); _h_ptBHadrLead->fill(leadBHadr.pT()/GeV, weight); _h_ptFracB->fill(leadBHadr.pT() / leadBJet.pT(), weight); _h_eFracB->fill(leadBHadr.E() / leadBJet.E(), weight); gotLeadingB = true; continue; // escape this loop iteration so the same jet isn't c-tagged } } if (!gotLeadingC) { FourMomentum leadCJet, leadCHadr; double dRmin = MAX_DR; foreach (const Particle& c, chadrons) { const double dRcand = min(dRmin, deltaR(j, c)); if (dRcand < dRmin) { dRmin = dRcand; leadCJet = j.momentum(); leadCHadr = c.momentum(); MSG_DEBUG("New closest c-hadron jet tag candidate: dR = " << dRmin << " for jet pT = " << j.pT()/GeV << " GeV, " << " c hadron pT = " << c.pT()/GeV << " GeV, PID = " << c.pid()); } } if (dRmin < MAX_DR) { // A jet has been tagged, so fill the histos and break the loop _h_ptCJetLead->fill(leadCJet.pT()/GeV, weight); _h_ptCHadrLead->fill(leadCHadr.pT()/GeV, weight); _h_ptFracC->fill(leadCHadr.pT() / leadCJet.pT(), weight); _h_eFracC->fill(leadCHadr.E() / leadCJet.E(), weight); gotLeadingC = true; } } // If we've found both a leading b and a leading c jet, break the loop over jets if (gotLeadingB && gotLeadingC) break; } } /// Normalise histograms etc., after the run void finalize() { normalize(_h_ptCJetLead); normalize(_h_ptCHadrLead); normalize(_h_ptFracC); normalize(_h_eFracC);
normalize(_h_ptBJetLead); normalize(_h_ptBHadrLead); normalize(_h_ptFracB); normalize(_h_eFracB); } //@} private: /// @name Histograms //@{ Histo1DPtr _h_ptCJetLead, _h_ptCHadrLead, _h_ptFracC, _h_eFracC; Histo1DPtr _h_ptBJetLead, _h_ptBHadrLead, _h_ptFracB, _h_eFracB; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(MC_HFJETS); } diff --git a/analyses/pluginMC/MC_IDENTIFIED.cc b/analyses/pluginMC/MC_IDENTIFIED.cc --- a/analyses/pluginMC/MC_IDENTIFIED.cc +++ b/analyses/pluginMC/MC_IDENTIFIED.cc @@ -1,104 +1,104 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// Generic analysis looking at various distributions of final state particles /// @todo Rename as MC_HADRONS class MC_IDENTIFIED : public Analysis { public: /// Constructor MC_IDENTIFIED() : Analysis("MC_IDENTIFIED") { } public: /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { // Projections const FinalState cnfs(Cuts::abseta < 5.0 && Cuts::pT > 500*MeV); declare(cnfs, "FS"); - declare(UnstableFinalState(Cuts::abseta < 5.0 && Cuts::pT > 500*MeV), "UFS"); + declare(UnstableParticles(Cuts::abseta < 5.0 && Cuts::pT > 500*MeV), "UFS"); // Histograms // @todo Choose E/pT ranged based on input energies... can't do anything about kin. 
cuts, though _histStablePIDs = bookHisto1D("MultsStablePIDs", 3335, -0.5, 3334.5); _histDecayedPIDs = bookHisto1D("MultsDecayedPIDs", 3335, -0.5, 3334.5); _histAllPIDs = bookHisto1D("MultsAllPIDs", 3335, -0.5, 3334.5); _histEtaPi = bookHisto1D("EtaPi", 25, 0, 5); _histEtaK = bookHisto1D("EtaK", 25, 0, 5); _histEtaLambda = bookHisto1D("EtaLambda", 25, 0, 5); } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); // Unphysical (debug) plotting of all PIDs in the event, physical or otherwise foreach (const GenParticle* gp, particles(event.genEvent())) { _histAllPIDs->fill(abs(gp->pdg_id()), weight); } // Charged + neutral final state PIDs const FinalState& cnfs = apply(event, "FS"); foreach (const Particle& p, cnfs.particles()) { _histStablePIDs->fill(p.abspid(), weight); } // Unstable PIDs and identified particle eta spectra - const UnstableFinalState& ufs = apply(event, "UFS"); + const UnstableParticles& ufs = apply(event, "UFS"); foreach (const Particle& p, ufs.particles()) { _histDecayedPIDs->fill(p.pid(), weight); const double eta_abs = p.abseta(); const PdgId pid = p.abspid(); //if (PID::isMeson(pid) && PID::hasStrange()) { if (pid == 211 || pid == 111) _histEtaPi->fill(eta_abs, weight); else if (pid == 321 || pid == 130 || pid == 310) _histEtaK->fill(eta_abs, weight); else if (pid == 3122) _histEtaLambda->fill(eta_abs, weight); } } /// Finalize void finalize() { scale(_histStablePIDs, 1/sumOfWeights()); scale(_histDecayedPIDs, 1/sumOfWeights()); scale(_histAllPIDs, 1/sumOfWeights()); scale(_histEtaPi, 1/sumOfWeights()); scale(_histEtaK, 1/sumOfWeights()); scale(_histEtaLambda, 1/sumOfWeights()); } //@} private: /// @name Histograms //@{ Histo1DPtr _histStablePIDs, _histDecayedPIDs, _histAllPIDs; Histo1DPtr _histEtaPi, _histEtaK, _histEtaLambda; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(MC_IDENTIFIED); } diff --git a/analyses/pluginMC/MC_VH2BB.cc b/analyses/pluginMC/MC_VH2BB.cc --- 
a/analyses/pluginMC/MC_VH2BB.cc +++ b/analyses/pluginMC/MC_VH2BB.cc @@ -1,262 +1,262 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ZFinder.hh" #include "Rivet/Projections/WFinder.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Projections/FastJets.hh" #include "Rivet/Math/LorentzTrans.hh" namespace Rivet { class MC_VH2BB : public Analysis { public: /// @name Constructors etc. //@{ /// Constructor MC_VH2BB() : Analysis("MC_VH2BB") { } //@} /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { FinalState fs; Cut cut = Cuts::abseta < 3.5 && Cuts::pT > 25*GeV; ZFinder zeefinder(fs, cut, PID::ELECTRON, 65*GeV, 115*GeV, 0.2); declare(zeefinder, "ZeeFinder"); ZFinder zmmfinder(fs, cut, PID::MUON, 65*GeV, 115*GeV, 0.2); declare(zmmfinder, "ZmmFinder"); WFinder wefinder(fs, cut, PID::ELECTRON, 60*GeV, 100*GeV, 25*GeV, 0.2); declare(wefinder, "WeFinder"); WFinder wmfinder(fs, cut, PID::MUON, 60*GeV, 100*GeV, 25*GeV, 0.2); declare(wmfinder, "WmFinder"); declare(fs, "FinalState"); declare(FastJets(fs, FastJets::ANTIKT, 0.4), "AntiKT04"); declare(FastJets(fs, FastJets::ANTIKT, 0.5), "AntiKT05"); declare(FastJets(fs, FastJets::ANTIKT, 0.6), "AntiKT06"); /// Book histograms _h_jet_bb_Delta_eta = bookHisto1D("jet_bb_Delta_eta", 50, 0, 4); _h_jet_bb_Delta_phi = bookHisto1D("jet_bb_Delta_phi", 50, 0, M_PI); _h_jet_bb_Delta_pT = bookHisto1D("jet_bb_Delta_pT", 50,0, 500); _h_jet_bb_Delta_R = bookHisto1D("jet_bb_Delta_R", 50, 0, 5); _h_jet_b_jet_eta = bookHisto1D("jet_b_jet_eta", 50, -4, 4); _h_jet_b_jet_multiplicity = bookHisto1D("jet_b_jet_multiplicity", 11, -0.5, 10.5); _h_jet_b_jet_phi = bookHisto1D("jet_b_jet_phi", 50, 0, 2.*M_PI); _h_jet_b_jet_pT = bookHisto1D("jet_b_jet_pT", 50, 0, 500); _h_jet_H_eta_using_bb = bookHisto1D("jet_H_eta_using_bb", 50, -4, 4); _h_jet_H_mass_using_bb = 
bookHisto1D("jet_H_mass_using_bb", 50, 50, 200); _h_jet_H_phi_using_bb = bookHisto1D("jet_H_phi_using_bb", 50, 0, 2.*M_PI); _h_jet_H_pT_using_bb = bookHisto1D("jet_H_pT_using_bb", 50, 0, 500); _h_jet_eta = bookHisto1D("jet_eta", 50, -4, 4); _h_jet_multiplicity = bookHisto1D("jet_multiplicity", 11, -0.5, 10.5); _h_jet_phi = bookHisto1D("jet_phi", 50, 0, 2.*M_PI); _h_jet_pT = bookHisto1D("jet_pT", 50, 0, 500); _h_jet_VBbb_Delta_eta = bookHisto1D("jet_VBbb_Delta_eta", 50, 0, 4); _h_jet_VBbb_Delta_phi = bookHisto1D("jet_VBbb_Delta_phi", 50, 0, M_PI); _h_jet_VBbb_Delta_pT = bookHisto1D("jet_VBbb_Delta_pT", 50, 0, 500); _h_jet_VBbb_Delta_R = bookHisto1D("jet_VBbb_Delta_R", 50, 0, 8); _h_VB_eta = bookHisto1D("VB_eta", 50, -4, 4); _h_VB_mass = bookHisto1D("VB_mass", 50, 60, 110); _h_Z_multiplicity = bookHisto1D("Z_multiplicity", 11, -0.5, 10.5); _h_W_multiplicity = bookHisto1D("W_multiplicity", 11, -0.5, 10.5); _h_VB_phi = bookHisto1D("VB_phi", 50, 0, 2.*M_PI); _h_VB_pT = bookHisto1D("VB_pT", 50, 0, 500); _h_jet_bVB_angle_Hframe = bookHisto1D("jet_bVB_angle_Hframe", 50, 0, M_PI); _h_jet_bVB_cosangle_Hframe = bookHisto1D("jet_bVB_cosangle_Hframe", 50, -1, 1); _h_jet_bb_angle_Hframe = bookHisto1D("jet_bb_angle_Hframe", 50, 0, M_PI); _h_jet_bb_cosangle_Hframe = bookHisto1D("jet_bb_cosangle_Hframe", 50, -1, 1); } /// Perform the per-event analysis void analyze(const Event& event) { const double weight = event.weight(); const double JETPTCUT = 30*GeV; const ZFinder& zeefinder = apply(event, "ZeeFinder"); const ZFinder& zmmfinder = apply(event, "ZmmFinder"); const WFinder& wefinder = apply(event, "WeFinder"); const WFinder& wmfinder = apply(event, "WmFinder"); const Particles vectorBosons = zeefinder.bosons() + zmmfinder.bosons() + wefinder.bosons() + wmfinder.bosons(); _h_Z_multiplicity->fill(zeefinder.bosons().size() + zmmfinder.bosons().size(), weight); _h_W_multiplicity->fill(wefinder.bosons().size() + wmfinder.bosons().size(), weight); const Jets jets = apply(event, 
"AntiKT04").jetsByPt(JETPTCUT); _h_jet_multiplicity->fill(jets.size(), weight); // Identify the b-jets Jets bjets; foreach (const Jet& jet, jets) { const double jetEta = jet.eta(); const double jetPhi = jet.phi(); const double jetPt = jet.pT(); _h_jet_eta->fill(jetEta, weight); _h_jet_phi->fill(jetPhi, weight); _h_jet_pT->fill(jetPt/GeV, weight); if (jet.bTagged() && jet.pT() > JETPTCUT) { bjets.push_back(jet); _h_jet_b_jet_eta->fill( jetEta , weight ); _h_jet_b_jet_phi->fill( jetPhi , weight ); _h_jet_b_jet_pT->fill( jetPt , weight ); } } _h_jet_b_jet_multiplicity->fill(bjets.size(), weight); // Plot vector boson properties foreach (const Particle& v, vectorBosons) { _h_VB_phi->fill(v.phi(), weight); _h_VB_pT->fill(v.pT(), weight); _h_VB_eta->fill(v.eta(), weight); _h_VB_mass->fill(v.mass(), weight); } // rest of analysis requires at least 1 b jets if(bjets.empty()) vetoEvent; // Construct Higgs candidates from pairs of b-jets for (size_t i = 0; i < bjets.size()-1; ++i) { for (size_t j = i+1; j < bjets.size(); ++j) { const Jet& jet1 = bjets[i]; const Jet& jet2 = bjets[j]; const double deltaEtaJJ = fabs(jet1.eta() - jet2.eta()); const double deltaPhiJJ = deltaPhi(jet1.momentum(), jet2.momentum()); const double deltaRJJ = deltaR(jet1.momentum(), jet2.momentum()); const double deltaPtJJ = fabs(jet1.pT() - jet2.pT()); _h_jet_bb_Delta_eta->fill(deltaEtaJJ, weight); _h_jet_bb_Delta_phi->fill(deltaPhiJJ, weight); _h_jet_bb_Delta_pT->fill(deltaPtJJ, weight); _h_jet_bb_Delta_R->fill(deltaRJJ, weight); const FourMomentum phiggs = jet1.momentum() + jet2.momentum(); _h_jet_H_eta_using_bb->fill(phiggs.eta(), weight); _h_jet_H_mass_using_bb->fill(phiggs.mass(), weight); _h_jet_H_phi_using_bb->fill(phiggs.phi(), weight); _h_jet_H_pT_using_bb->fill(phiggs.pT(), weight); foreach (const Particle& v, vectorBosons) { const double deltaEtaVH = fabs(phiggs.eta() - v.eta()); const double deltaPhiVH = deltaPhi(phiggs, v.momentum()); const double deltaRVH = deltaR(phiggs, v.momentum()); 
const double deltaPtVH = fabs(phiggs.pT() - v.pT()); _h_jet_VBbb_Delta_eta->fill(deltaEtaVH, weight); _h_jet_VBbb_Delta_phi->fill(deltaPhiVH, weight); _h_jet_VBbb_Delta_pT->fill(deltaPtVH, weight); _h_jet_VBbb_Delta_R->fill(deltaRVH, weight); // Calculate boost angles const vector<double> angles = boostAngles(jet1.momentum(), jet2.momentum(), v.momentum()); _h_jet_bVB_angle_Hframe->fill(angles[0], weight); _h_jet_bb_angle_Hframe->fill(angles[1], weight); _h_jet_bVB_cosangle_Hframe->fill(cos(angles[0]), weight); _h_jet_bb_cosangle_Hframe->fill(cos(angles[1]), weight); } } } /// Normalise histograms etc., after the run void finalize() { scale(_h_jet_bb_Delta_eta, crossSection()/sumOfWeights()); scale(_h_jet_bb_Delta_phi, crossSection()/sumOfWeights()); scale(_h_jet_bb_Delta_pT, crossSection()/sumOfWeights()); scale(_h_jet_bb_Delta_R, crossSection()/sumOfWeights()); scale(_h_jet_b_jet_eta, crossSection()/sumOfWeights()); scale(_h_jet_b_jet_multiplicity, crossSection()/sumOfWeights()); scale(_h_jet_b_jet_phi, crossSection()/sumOfWeights()); scale(_h_jet_b_jet_pT, crossSection()/sumOfWeights()); scale(_h_jet_H_eta_using_bb, crossSection()/sumOfWeights()); scale(_h_jet_H_mass_using_bb, crossSection()/sumOfWeights()); scale(_h_jet_H_phi_using_bb, crossSection()/sumOfWeights()); scale(_h_jet_H_pT_using_bb, crossSection()/sumOfWeights()); scale(_h_jet_eta, crossSection()/sumOfWeights()); scale(_h_jet_multiplicity, crossSection()/sumOfWeights()); scale(_h_jet_phi, crossSection()/sumOfWeights()); scale(_h_jet_pT, crossSection()/sumOfWeights()); scale(_h_jet_VBbb_Delta_eta, crossSection()/sumOfWeights()); scale(_h_jet_VBbb_Delta_phi, crossSection()/sumOfWeights()); scale(_h_jet_VBbb_Delta_pT, crossSection()/sumOfWeights()); scale(_h_jet_VBbb_Delta_R, crossSection()/sumOfWeights()); scale(_h_VB_eta, crossSection()/sumOfWeights()); scale(_h_VB_mass, crossSection()/sumOfWeights()); scale(_h_Z_multiplicity, crossSection()/sumOfWeights()); scale(_h_W_multiplicity,
crossSection()/sumOfWeights()); scale(_h_VB_phi, crossSection()/sumOfWeights()); scale(_h_VB_pT, crossSection()/sumOfWeights()); scale(_h_jet_bVB_angle_Hframe, crossSection()/sumOfWeights()); scale(_h_jet_bb_angle_Hframe, crossSection()/sumOfWeights()); scale(_h_jet_bVB_cosangle_Hframe, crossSection()/sumOfWeights()); scale(_h_jet_bb_cosangle_Hframe, crossSection()/sumOfWeights()); } /// This should take in the four-momenta of two b's (jets/hadrons) and a vector boson, for the process VB*->VBH with H->bb /// It should return the smallest angle between the virtual vector boson and one of the b's, in the rest frame of the Higgs boson. /// It should also return (as the second element of the vector) the angle between the b's, in the rest frame of the Higgs boson. vector<double> boostAngles(const FourMomentum& b1, const FourMomentum& b2, const FourMomentum& vb) { const FourMomentum higgsMomentum = b1 + b2; const FourMomentum virtualVBMomentum = higgsMomentum + vb; const LorentzTransform lt = LorentzTransform::mkFrameTransformFromBeta(higgsMomentum.betaVec()); const FourMomentum virtualVBMomentumBOOSTED = lt.transform(virtualVBMomentum); const FourMomentum b1BOOSTED = lt.transform(b1); const FourMomentum b2BOOSTED = lt.transform(b2); const double angle1 = b1BOOSTED.angle(virtualVBMomentumBOOSTED); const double angle2 = b2BOOSTED.angle(virtualVBMomentumBOOSTED); const double anglebb = b1BOOSTED.angle(b2BOOSTED); vector<double> rtn; rtn.push_back(angle1 < angle2 ?
angle1 : angle2); rtn.push_back(anglebb); return rtn; } //@} private: /// @name Histograms //@{ Histo1DPtr _h_Z_multiplicity, _h_W_multiplicity; Histo1DPtr _h_jet_bb_Delta_eta, _h_jet_bb_Delta_phi, _h_jet_bb_Delta_pT, _h_jet_bb_Delta_R; Histo1DPtr _h_jet_b_jet_eta, _h_jet_b_jet_multiplicity, _h_jet_b_jet_phi, _h_jet_b_jet_pT; Histo1DPtr _h_jet_H_eta_using_bb, _h_jet_H_mass_using_bb, _h_jet_H_phi_using_bb, _h_jet_H_pT_using_bb; Histo1DPtr _h_jet_eta, _h_jet_multiplicity, _h_jet_phi, _h_jet_pT; Histo1DPtr _h_jet_VBbb_Delta_eta, _h_jet_VBbb_Delta_phi, _h_jet_VBbb_Delta_pT, _h_jet_VBbb_Delta_R; Histo1DPtr _h_VB_eta, _h_VB_mass, _h_VB_phi, _h_VB_pT; Histo1DPtr _h_jet_bVB_angle_Hframe, _h_jet_bb_angle_Hframe, _h_jet_bVB_cosangle_Hframe, _h_jet_bb_cosangle_Hframe; //Histo1DPtr _h_jet_cuts_bb_deltaR_v_HpT; //@} }; // This global object acts as a hook for the plugin system DECLARE_RIVET_PLUGIN(MC_VH2BB); } diff --git a/analyses/pluginMisc/ARGUS_1993_S2653028.cc b/analyses/pluginMisc/ARGUS_1993_S2653028.cc --- a/analyses/pluginMisc/ARGUS_1993_S2653028.cc +++ b/analyses/pluginMisc/ARGUS_1993_S2653028.cc @@ -1,177 +1,177 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief BELLE pi+/-, K+/- and proton/antiproton spectrum at Upsilon(4S) /// @author Peter Richardson class ARGUS_1993_S2653028 : public Analysis { public: ARGUS_1993_S2653028() : Analysis("ARGUS_1993_S2653028"), _weightSum(0.) 
{ } void analyze(const Event& e) { const double weight = e.weight(); // Find the upsilons Particles upsilons; // First in unstable final state - const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS"); + const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS"); foreach (const Particle& p, ufs.particles()) { if (p.pid() == 300553) upsilons.push_back(p); } // Then in whole event if that failed if (upsilons.empty()) { foreach (const GenParticle* p, particles(e.genEvent())) { if (p->pdg_id() != 300553) continue; const GenVertex* pv = p->production_vertex(); bool passed = true; if (pv) { foreach (const GenParticle* pp, particles_in(pv)) { if ( p->pdg_id() == pp->pdg_id() ) { passed = false; break; } } } if (passed) upsilons.push_back(Particle(*p)); } } // Find an upsilon foreach (const Particle& p, upsilons) { _weightSum += weight; vector<const GenParticle*> pionsA,pionsB,protonsA,protonsB,kaons; // Find the decay products we want findDecayProducts(p.genParticle(), pionsA, pionsB, protonsA, protonsB, kaons); LorentzTransform cms_boost; if (p.p3().mod() > 1*MeV) cms_boost = LorentzTransform::mkFrameTransformFromBeta(p.momentum().betaVec()); for (size_t ix = 0; ix < pionsA.size(); ++ix) { FourMomentum ptemp(pionsA[ix]->momentum()); FourMomentum p2 = cms_boost.transform(ptemp); double pcm = cms_boost.transform(ptemp).vector3().mod(); _histPiA->fill(pcm,weight); } _multPiA->fill(10.58,double(pionsA.size())*weight); for (size_t ix = 0; ix < pionsB.size(); ++ix) { double pcm = cms_boost.transform(FourMomentum(pionsB[ix]->momentum())).vector3().mod(); _histPiB->fill(pcm,weight); } _multPiB->fill(10.58,double(pionsB.size())*weight); for (size_t ix = 0; ix < protonsA.size(); ++ix) { double pcm = cms_boost.transform(FourMomentum(protonsA[ix]->momentum())).vector3().mod(); _histpA->fill(pcm,weight); } _multpA->fill(10.58,double(protonsA.size())*weight); for (size_t ix = 0; ix < protonsB.size(); ++ix) { double pcm = cms_boost.transform(FourMomentum(protonsB[ix]->momentum())).vector3().mod(); _histpB->fill(pcm,weight);
} _multpB->fill(10.58,double(protonsB.size())*weight); for (size_t ix = 0 ;ix < kaons.size(); ++ix) { double pcm = cms_boost.transform(FourMomentum(kaons[ix]->momentum())).vector3().mod(); _histKA->fill(pcm,weight); _histKB->fill(pcm,weight); } _multK->fill(10.58,double(kaons.size())*weight); } } void finalize() { if (_weightSum > 0.) { scale(_histPiA, 1./_weightSum); scale(_histPiB, 1./_weightSum); scale(_histKA , 1./_weightSum); scale(_histKB , 1./_weightSum); scale(_histpA , 1./_weightSum); scale(_histpB , 1./_weightSum); scale(_multPiA, 1./_weightSum); scale(_multPiB, 1./_weightSum); scale(_multK , 1./_weightSum); scale(_multpA , 1./_weightSum); scale(_multpB , 1./_weightSum); } } void init() { - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); // spectra _histPiA = bookHisto1D(1, 1, 1); _histPiB = bookHisto1D(2, 1, 1); _histKA = bookHisto1D(3, 1, 1); _histKB = bookHisto1D(6, 1, 1); _histpA = bookHisto1D(4, 1, 1); _histpB = bookHisto1D(5, 1, 1); // multiplicities _multPiA = bookHisto1D( 7, 1, 1); _multPiB = bookHisto1D( 8, 1, 1); _multK = bookHisto1D( 9, 1, 1); _multpA = bookHisto1D(10, 1, 1); _multpB = bookHisto1D(11, 1, 1); } // init private: //@{ /// Count of weights double _weightSum; /// Spectra Histo1DPtr _histPiA, _histPiB, _histKA, _histKB, _histpA, _histpB; /// Multiplicities Histo1DPtr _multPiA, _multPiB, _multK, _multpA, _multpB; //@} void findDecayProducts(const GenParticle* p, vector<const GenParticle*>& pionsA, vector<const GenParticle*>& pionsB, vector<const GenParticle*>& protonsA, vector<const GenParticle*>& protonsB, vector<const GenParticle*>& kaons) { int parentId = p->pdg_id(); const GenVertex* dv = p->end_vertex(); /// @todo Use better looping for (GenVertex::particles_out_const_iterator pp = dv->particles_out_const_begin(); pp != dv->particles_out_const_end(); ++pp) { int id = abs((*pp)->pdg_id()); if (id == PID::PIPLUS) { if (parentId != PID::LAMBDA && parentId != PID::K0S) { pionsA.push_back(*pp); pionsB.push_back(*pp); } else pionsB.push_back(*pp); } else if (id == PID::PROTON) { if (parentId !=
PID::LAMBDA && parentId != PID::K0S) { protonsA.push_back(*pp); protonsB.push_back(*pp); } else protonsB.push_back(*pp); } else if (id == PID::KPLUS) { kaons.push_back(*pp); } else if ((*pp)->end_vertex()) findDecayProducts(*pp, pionsA, pionsB, protonsA, protonsB, kaons); } } }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ARGUS_1993_S2653028); } diff --git a/analyses/pluginMisc/ARGUS_1993_S2669951.cc b/analyses/pluginMisc/ARGUS_1993_S2669951.cc --- a/analyses/pluginMisc/ARGUS_1993_S2669951.cc +++ b/analyses/pluginMisc/ARGUS_1993_S2669951.cc @@ -1,192 +1,192 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief Production of the $\eta'(958)$ and $f_0(980)$ in $e^+e^-$ annihilation in the Upsilon region /// @author Peter Richardson class ARGUS_1993_S2669951 : public Analysis { public: ARGUS_1993_S2669951() : Analysis("ARGUS_1993_S2669951"), _count_etaPrime_highZ(2, 0.), _count_etaPrime_allZ(3, 0.), _count_f0(3, 0.), _weightSum_cont(0.), _weightSum_Ups1(0.), _weightSum_Ups2(0.) { } void init() { - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); _hist_cont_f0 = bookHisto1D(2, 1, 1); _hist_Ups1_f0 = bookHisto1D(3, 1, 1); _hist_Ups2_f0 = bookHisto1D(4, 1, 1); } void analyze(const Event& e) { // Find the Upsilons among the unstables - const UnstableFinalState& ufs = apply(e, "UFS"); + const UnstableParticles& ufs = apply(e, "UFS"); Particles upsilons; // First in unstable final state foreach (const Particle& p, ufs.particles()) if (p.pid() == 553 || p.pid() == 100553) upsilons.push_back(p); // Then in whole event if fails if (upsilons.empty()) { /// @todo Replace HepMC digging with Particle::descendents etc. 
calls foreach (const GenParticle* p, Rivet::particles(e.genEvent())) { if ( p->pdg_id() != 553 && p->pdg_id() != 100553 ) continue; // Discard it if its parent has the same PDG ID code (avoid duplicates) const GenVertex* pv = p->production_vertex(); bool passed = true; if (pv) { foreach (const GenParticle* pp, particles_in(pv)) { if ( p->pdg_id() == pp->pdg_id() ) { passed = false; break; } } } if (passed) upsilons.push_back(Particle(*p)); } } // Finding done, now fill counters const double weight = e.weight(); if (upsilons.empty()) { // Continuum MSG_DEBUG("No Upsilons found => continuum event"); _weightSum_cont += weight; unsigned int nEtaA(0), nEtaB(0), nf0(0); foreach (const Particle& p, ufs.particles()) { const int id = p.abspid(); const double xp = 2.*p.E()/sqrtS(); const double beta = p.p3().mod() / p.E(); if (id == 9010221) { _hist_cont_f0->fill(xp, weight/beta); nf0 += 1; } else if (id == 331) { if (xp > 0.35) nEtaA += 1; nEtaB += 1; } } _count_f0[2] += nf0*weight; _count_etaPrime_highZ[1] += nEtaA*weight; _count_etaPrime_allZ[2] += nEtaB*weight; } else { // Upsilon(s) found MSG_DEBUG("Upsilons found => resonance event"); foreach (const Particle& ups, upsilons) { const int parentId = ups.pid(); ((parentId == 553) ? _weightSum_Ups1 : _weightSum_Ups2) += weight; Particles unstable; // Find the decay products we want findDecayProducts(ups.genParticle(), unstable); LorentzTransform cms_boost; if (ups.p3().mod() > 1*MeV) cms_boost = LorentzTransform::mkFrameTransformFromBeta(ups.momentum().betaVec()); const double mass = ups.mass(); unsigned int nEtaA(0), nEtaB(0), nf0(0); foreach(const Particle& p, unstable) { const int id = p.abspid(); const FourMomentum p2 = cms_boost.transform(p.momentum()); const double xp = 2.*p2.E()/mass; const double beta = p2.p3().mod()/p2.E(); if (id == 9010221) { //< ? ((parentId == 553) ? _hist_Ups1_f0 : _hist_Ups2_f0)->fill(xp, weight/beta); nf0 += 1; } else if (id == 331) { //< ? 
if (xp > 0.35) nEtaA += 1; nEtaB += 1; } } if (parentId == 553) { _count_f0[0] += nf0*weight; _count_etaPrime_highZ[0] += nEtaA*weight; _count_etaPrime_allZ[0] += nEtaB*weight; } else { _count_f0[1] += nf0*weight; _count_etaPrime_allZ[1] += nEtaB*weight; } } } } void finalize() { // High-Z eta' multiplicity Scatter2DPtr s111 = bookScatter2D(1, 1, 1, true); if (_weightSum_Ups1 > 0) // Point at 9.460 s111->point(0).setY(_count_etaPrime_highZ[0] / _weightSum_Ups1, 0); if (_weightSum_cont > 0) // Point at 9.905 s111->point(1).setY(_count_etaPrime_highZ[1] / _weightSum_cont, 0); // All-Z eta' multiplicity Scatter2DPtr s112 = bookScatter2D(1, 1, 2, true); if (_weightSum_Ups1 > 0) // Point at 9.460 s112->point(0).setY(_count_etaPrime_allZ[0] / _weightSum_Ups1, 0); if (_weightSum_cont > 0) // Point at 9.905 s112->point(1).setY(_count_etaPrime_allZ[2] / _weightSum_cont, 0); if (_weightSum_Ups2 > 0) // Point at 10.02 s112->point(2).setY(_count_etaPrime_allZ[1] / _weightSum_Ups2, 0); // f0 multiplicity Scatter2DPtr s511 = bookScatter2D(5, 1, 1, true); if (_weightSum_Ups1 > 0) // Point at 9.46 s511->point(0).setY(_count_f0[0] / _weightSum_Ups1, 0); if (_weightSum_Ups2 > 0) // Point at 10.02 s511->point(1).setY(_count_f0[1] / _weightSum_Ups2, 0); if (_weightSum_cont > 0) // Point at 10.45 s511->point(2).setY(_count_f0[2] / _weightSum_cont, 0); // Scale histos if (_weightSum_cont > 0.) scale(_hist_cont_f0, 1./_weightSum_cont); if (_weightSum_Ups1 > 0.) scale(_hist_Ups1_f0, 1./_weightSum_Ups1); if (_weightSum_Ups2 > 0.) 
scale(_hist_Ups2_f0, 1./_weightSum_Ups2); } private: /// @name Counters //@{ vector _count_etaPrime_highZ, _count_etaPrime_allZ, _count_f0; double _weightSum_cont,_weightSum_Ups1,_weightSum_Ups2; //@} /// Histos Histo1DPtr _hist_cont_f0, _hist_Ups1_f0, _hist_Ups2_f0; /// Recursively walk the HepMC tree to find decay products of @a p void findDecayProducts(const GenParticle* p, Particles& unstable) { const GenVertex* dv = p->end_vertex(); /// @todo Use better looping for (GenVertex::particles_out_const_iterator pp = dv->particles_out_const_begin(); pp != dv->particles_out_const_end(); ++pp) { const int id = abs((*pp)->pdg_id()); if (id == 331 || id == 9010221) unstable.push_back(Particle(*pp)); else if ((*pp)->end_vertex()) findDecayProducts(*pp, unstable); } } }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(ARGUS_1993_S2669951); } diff --git a/analyses/pluginMisc/ARGUS_1993_S2789213.cc b/analyses/pluginMisc/ARGUS_1993_S2789213.cc --- a/analyses/pluginMisc/ARGUS_1993_S2789213.cc +++ b/analyses/pluginMisc/ARGUS_1993_S2789213.cc @@ -1,256 +1,256 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief ARGUS vector meson production /// @author Peter Richardson class ARGUS_1993_S2789213 : public Analysis { public: ARGUS_1993_S2789213() : Analysis("ARGUS_1993_S2789213"), _weightSum_cont(0.),_weightSum_Ups1(0.),_weightSum_Ups4(0.) 
    { }


    void init() {
-      declare(UnstableFinalState(), "UFS");
+      declare(UnstableParticles(), "UFS");
      _mult_cont_Omega     = bookHisto1D( 1, 1, 1);
      _mult_cont_Rho0      = bookHisto1D( 1, 1, 2);
      _mult_cont_KStar0    = bookHisto1D( 1, 1, 3);
      _mult_cont_KStarPlus = bookHisto1D( 1, 1, 4);
      _mult_cont_Phi       = bookHisto1D( 1, 1, 5);
      _mult_Ups1_Omega     = bookHisto1D( 2, 1, 1);
      _mult_Ups1_Rho0      = bookHisto1D( 2, 1, 2);
      _mult_Ups1_KStar0    = bookHisto1D( 2, 1, 3);
      _mult_Ups1_KStarPlus = bookHisto1D( 2, 1, 4);
      _mult_Ups1_Phi       = bookHisto1D( 2, 1, 5);
      _mult_Ups4_Omega     = bookHisto1D( 3, 1, 1);
      _mult_Ups4_Rho0      = bookHisto1D( 3, 1, 2);
      _mult_Ups4_KStar0    = bookHisto1D( 3, 1, 3);
      _mult_Ups4_KStarPlus = bookHisto1D( 3, 1, 4);
      _mult_Ups4_Phi       = bookHisto1D( 3, 1, 5);
      _hist_cont_KStarPlus = bookHisto1D( 4, 1, 1);
      _hist_Ups1_KStarPlus = bookHisto1D( 5, 1, 1);
      _hist_Ups4_KStarPlus = bookHisto1D( 6, 1, 1);
      _hist_cont_KStar0    = bookHisto1D( 7, 1, 1);
      _hist_Ups1_KStar0    = bookHisto1D( 8, 1, 1);
      _hist_Ups4_KStar0    = bookHisto1D( 9, 1, 1);
      _hist_cont_Rho0      = bookHisto1D(10, 1, 1);
      _hist_Ups1_Rho0      = bookHisto1D(11, 1, 1);
      _hist_Ups4_Rho0      = bookHisto1D(12, 1, 1);
      _hist_cont_Omega     = bookHisto1D(13, 1, 1);
      _hist_Ups1_Omega     = bookHisto1D(14, 1, 1);
    }


    void analyze(const Event& e) {
      const double weight = e.weight();

      // Find the upsilons
      Particles upsilons;
      // First in unstable final state
-      const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS");
+      const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS");
      foreach (const Particle& p, ufs.particles())
        if (p.pid() == 300553 || p.pid() == 553) upsilons.push_back(p);
      // Then in whole event if that failed
      if (upsilons.empty()) {
        foreach (const GenParticle* p, Rivet::particles(e.genEvent())) {
          if (p->pdg_id() != 300553 && p->pdg_id() != 553) continue;
          const GenVertex* pv = p->production_vertex();
          bool passed = true;
          if (pv) {
            foreach (const GenParticle* pp, particles_in(pv)) {
              if ( p->pdg_id() == pp->pdg_id() ) { passed = false; break; }
            }
          }
          if (passed) upsilons.push_back(Particle(*p));
        }
      }

      if (upsilons.empty()) { // continuum
        _weightSum_cont += weight;
        unsigned int nOmega(0), nRho0(0), nKStar0(0), nKStarPlus(0), nPhi(0);
        foreach (const Particle& p, ufs.particles()) {
          int id = p.abspid();
          double xp = 2.*p.E()/sqrtS();
          double beta = p.p3().mod()/p.E();
          if (id == 113) {
            _hist_cont_Rho0->fill(xp, weight/beta);
            ++nRho0;
          }
          else if (id == 313) {
            _hist_cont_KStar0->fill(xp, weight/beta);
            ++nKStar0;
          }
          else if (id == 223) {
            _hist_cont_Omega->fill(xp, weight/beta);
            ++nOmega;
          }
          else if (id == 323) {
            _hist_cont_KStarPlus->fill(xp, weight/beta);
            ++nKStarPlus;
          }
          else if (id == 333) {
            ++nPhi;
          }
        }
        /// @todo Replace with Counters and fill one-point Scatters at the end
        _mult_cont_Omega    ->fill(10.45, weight*nOmega    );
        _mult_cont_Rho0     ->fill(10.45, weight*nRho0     );
        _mult_cont_KStar0   ->fill(10.45, weight*nKStar0   );
        _mult_cont_KStarPlus->fill(10.45, weight*nKStarPlus);
        _mult_cont_Phi      ->fill(10.45, weight*nPhi      );

      } else { // found an upsilon
        foreach (const Particle& ups, upsilons) {
          const int parentId = ups.pid();
          (parentId == 553 ? _weightSum_Ups1 : _weightSum_Ups4) += weight;
          Particles unstable;
          // Find the decay products we want
          findDecayProducts(ups.genParticle(), unstable);
          /// @todo Update to new LT mk* functions
          LorentzTransform cms_boost;
          if (ups.p3().mod() > 0.001)
            cms_boost = LorentzTransform::mkFrameTransformFromBeta(ups.momentum().betaVec());
          double mass = ups.mass();
          unsigned int nOmega(0), nRho0(0), nKStar0(0), nKStarPlus(0), nPhi(0);
          foreach (const Particle& p, unstable) {
            int id = p.abspid();
            FourMomentum p2 = cms_boost.transform(p.momentum());
            double xp = 2.*p2.E()/mass;
            double beta = p2.p3().mod()/p2.E();
            if (id == 113) {
              if (parentId == 553) _hist_Ups1_Rho0->fill(xp, weight/beta);
              else                 _hist_Ups4_Rho0->fill(xp, weight/beta);
              ++nRho0;
            }
            else if (id == 313) {
              if (parentId == 553) _hist_Ups1_KStar0->fill(xp, weight/beta);
              else                 _hist_Ups4_KStar0->fill(xp, weight/beta);
              ++nKStar0;
            }
            else if (id == 223) {
              if (parentId == 553) _hist_Ups1_Omega->fill(xp, weight/beta);
              ++nOmega;
            }
            else if (id == 323) {
              if (parentId == 553) _hist_Ups1_KStarPlus->fill(xp, weight/beta);
              else                 _hist_Ups4_KStarPlus->fill(xp, weight/beta);
              ++nKStarPlus;
            }
            else if (id == 333) {
              ++nPhi;
            }
          }
          if (parentId == 553) {
            _mult_Ups1_Omega    ->fill(9.46, weight*nOmega    );
            _mult_Ups1_Rho0     ->fill(9.46, weight*nRho0     );
            _mult_Ups1_KStar0   ->fill(9.46, weight*nKStar0   );
            _mult_Ups1_KStarPlus->fill(9.46, weight*nKStarPlus);
            _mult_Ups1_Phi      ->fill(9.46, weight*nPhi      );
          }
          else {
            _mult_Ups4_Omega    ->fill(10.58, weight*nOmega    );
            _mult_Ups4_Rho0     ->fill(10.58, weight*nRho0     );
            _mult_Ups4_KStar0   ->fill(10.58, weight*nKStar0   );
            _mult_Ups4_KStarPlus->fill(10.58, weight*nKStarPlus);
            _mult_Ups4_Phi      ->fill(10.58, weight*nPhi      );
          }
        }
      }
    }


    void finalize() {
      if (_weightSum_cont > 0.) {
        /// @todo Replace with Counters and fill one-point Scatters at the end
        scale(_mult_cont_Omega    , 1./_weightSum_cont);
        scale(_mult_cont_Rho0     , 1./_weightSum_cont);
        scale(_mult_cont_KStar0   , 1./_weightSum_cont);
        scale(_mult_cont_KStarPlus, 1./_weightSum_cont);
        scale(_mult_cont_Phi      , 1./_weightSum_cont);
        scale(_hist_cont_KStarPlus, 1./_weightSum_cont);
        scale(_hist_cont_KStar0   , 1./_weightSum_cont);
        scale(_hist_cont_Rho0     , 1./_weightSum_cont);
        scale(_hist_cont_Omega    , 1./_weightSum_cont);
      }
      if (_weightSum_Ups1 > 0.) {
        /// @todo Replace with Counters and fill one-point Scatters at the end
        scale(_mult_Ups1_Omega    , 1./_weightSum_Ups1);
        scale(_mult_Ups1_Rho0     , 1./_weightSum_Ups1);
        scale(_mult_Ups1_KStar0   , 1./_weightSum_Ups1);
        scale(_mult_Ups1_KStarPlus, 1./_weightSum_Ups1);
        scale(_mult_Ups1_Phi      , 1./_weightSum_Ups1);
        scale(_hist_Ups1_KStarPlus, 1./_weightSum_Ups1);
        scale(_hist_Ups1_KStar0   , 1./_weightSum_Ups1);
        scale(_hist_Ups1_Rho0     , 1./_weightSum_Ups1);
        scale(_hist_Ups1_Omega    , 1./_weightSum_Ups1);
      }
      if (_weightSum_Ups4 > 0.) {
        /// @todo Replace with Counters and fill one-point Scatters at the end
        scale(_mult_Ups4_Omega    , 1./_weightSum_Ups4);
        scale(_mult_Ups4_Rho0     , 1./_weightSum_Ups4);
        scale(_mult_Ups4_KStar0   , 1./_weightSum_Ups4);
        scale(_mult_Ups4_KStarPlus, 1./_weightSum_Ups4);
        scale(_mult_Ups4_Phi      , 1./_weightSum_Ups4);
        scale(_hist_Ups4_KStarPlus, 1./_weightSum_Ups4);
        scale(_hist_Ups4_KStar0   , 1./_weightSum_Ups4);
        scale(_hist_Ups4_Rho0     , 1./_weightSum_Ups4);
      }
    }

  private:

    //@{
    Histo1DPtr _mult_cont_Omega, _mult_cont_Rho0, _mult_cont_KStar0, _mult_cont_KStarPlus, _mult_cont_Phi;
    Histo1DPtr _mult_Ups1_Omega, _mult_Ups1_Rho0, _mult_Ups1_KStar0, _mult_Ups1_KStarPlus, _mult_Ups1_Phi;
    Histo1DPtr _mult_Ups4_Omega, _mult_Ups4_Rho0, _mult_Ups4_KStar0, _mult_Ups4_KStarPlus, _mult_Ups4_Phi;
    Histo1DPtr _hist_cont_KStarPlus, _hist_Ups1_KStarPlus, _hist_Ups4_KStarPlus;
    Histo1DPtr _hist_cont_KStar0, _hist_Ups1_KStar0, _hist_Ups4_KStar0;
    Histo1DPtr _hist_cont_Rho0, _hist_Ups1_Rho0, _hist_Ups4_Rho0;
    Histo1DPtr _hist_cont_Omega, _hist_Ups1_Omega;
    double _weightSum_cont, _weightSum_Ups1, _weightSum_Ups4;
    //@}

    void findDecayProducts(const GenParticle* p, Particles& unstable) {
      const GenVertex* dv = p->end_vertex();
      /// @todo Use better looping
      for (GenVertex::particles_out_const_iterator pp = dv->particles_out_const_begin(); pp != dv->particles_out_const_end(); ++pp) {
        int id = abs((*pp)->pdg_id());
        if (id == 113 || id == 313 || id == 323 || id == 333 || id == 223) {
          unstable.push_back(Particle(*pp));
        }
        else if ((*pp)->end_vertex()) findDecayProducts(*pp, unstable);
      }
    }

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(ARGUS_1993_S2789213);

}
diff --git a/analyses/pluginMisc/BABAR_2003_I593379.cc b/analyses/pluginMisc/BABAR_2003_I593379.cc
--- a/analyses/pluginMisc/BABAR_2003_I593379.cc
+++ b/analyses/pluginMisc/BABAR_2003_I593379.cc
@@ -1,186 +1,186 @@
 // -*- C++ -*-
 #include "Rivet/Analysis.hh"
 #include "Rivet/Projections/Beam.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

namespace Rivet {

  /// @brief Babar charmonium spectra
  /// @author Peter Richardson
  class BABAR_2003_I593379 : public Analysis {
  public:

    BABAR_2003_I593379()
      : Analysis("BABAR_2003_I593379"), _weightSum(0.)
    { }


    void analyze(const Event& e) {
      const double weight = e.weight();

      // Find the charmonia
      Particles upsilons;
      // First in unstable final state
-      const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS");
+      const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS");
      foreach (const Particle& p, ufs.particles())
        if (p.pid() == 300553) upsilons.push_back(p);
      // Then in whole event if fails
      if (upsilons.empty()) {
        foreach (const GenParticle* p, Rivet::particles(e.genEvent())) {
          if (p->pdg_id() != 300553) continue;
          const GenVertex* pv = p->production_vertex();
          bool passed = true;
          if (pv) {
            foreach (const GenParticle* pp, particles_in(pv)) {
              if ( p->pdg_id() == pp->pdg_id() ) { passed = false; break; }
            }
          }
          if (passed) upsilons.push_back(Particle(*p));
        }
      }

      // Find upsilons
      foreach (const Particle& p, upsilons) {
        _weightSum += weight;
        // Find the charmonium resonances
        /// @todo Use Rivet::Particles
        vector<const GenParticle*> allJpsi, primaryJpsi, Psiprime, all_chi_c1, all_chi_c2, primary_chi_c1, primary_chi_c2;
        findDecayProducts(p.genParticle(), allJpsi, primaryJpsi, Psiprime, all_chi_c1, all_chi_c2, primary_chi_c1, primary_chi_c2);
        const LorentzTransform cms_boost = LorentzTransform::mkFrameTransformFromBeta(p.mom().betaVec());
        for (size_t i = 0; i < allJpsi.size(); i++) {
          const double pcm = cms_boost.transform(FourMomentum(allJpsi[i]->momentum())).p();
          _hist_all_Jpsi->fill(pcm, weight);
        }
        _mult_JPsi->fill(10.58, weight*double(allJpsi.size()));
        for (size_t i = 0; i < primaryJpsi.size(); i++) {
          const double pcm = cms_boost.transform(FourMomentum(primaryJpsi[i]->momentum())).p();
          _hist_primary_Jpsi->fill(pcm, weight);
        }
        _mult_JPsi_direct->fill(10.58, weight*double(primaryJpsi.size()));
        for (size_t i = 0; i < Psiprime.size(); i++) {
          const double pcm = cms_boost.transform(FourMomentum(Psiprime[i]->momentum())).p();
          _hist_Psi_prime->fill(pcm, weight);
        }
        _mult_Psi2S->fill(10.58, weight*double(Psiprime.size()));
        for (size_t i = 0; i < all_chi_c1.size(); i++) {
          const double pcm = cms_boost.transform(FourMomentum(all_chi_c1[i]->momentum())).p();
          _hist_chi_c1->fill(pcm, weight);
        }
        _mult_chi_c1->fill(10.58, weight*double(all_chi_c1.size()));
        _mult_chi_c1_direct->fill(10.58, weight*double(primary_chi_c1.size()));
        for (size_t i = 0; i < all_chi_c2.size(); i++) {
          const double pcm = cms_boost.transform(FourMomentum(all_chi_c2[i]->momentum())).p();
          _hist_chi_c2->fill(pcm, weight);
        }
        _mult_chi_c2->fill(10.58, weight*double(all_chi_c2.size()));
        _mult_chi_c2_direct->fill(10.58, weight*double(primary_chi_c2.size()));
      }
    } // analyze


    void finalize() {
      scale(_hist_all_Jpsi     , 0.5*0.1/_weightSum);
      scale(_hist_chi_c1       , 0.5*0.1/_weightSum);
      scale(_hist_chi_c2       , 0.5*0.1/_weightSum);
      scale(_hist_Psi_prime    , 0.5*0.1/_weightSum);
      scale(_hist_primary_Jpsi , 0.5*0.1/_weightSum);
      scale(_mult_JPsi         , 0.5*100./_weightSum);
      scale(_mult_JPsi_direct  , 0.5*100./_weightSum);
      scale(_mult_chi_c1       , 0.5*100./_weightSum);
      scale(_mult_chi_c1_direct, 0.5*100./_weightSum);
      scale(_mult_chi_c2       , 0.5*100./_weightSum);
      scale(_mult_chi_c2_direct, 0.5*100./_weightSum);
      scale(_mult_Psi2S        , 0.5*100./_weightSum);
    } // finalize


    void init() {
-      declare(UnstableFinalState(), "UFS");
+      declare(UnstableParticles(), "UFS");
      _mult_JPsi          = bookHisto1D(1, 1, 1);
      _mult_JPsi_direct   = bookHisto1D(1, 1, 2);
      _mult_chi_c1        = bookHisto1D(1, 1, 3);
      _mult_chi_c1_direct = bookHisto1D(1, 1, 4);
      _mult_chi_c2        = bookHisto1D(1, 1, 5);
      _mult_chi_c2_direct = bookHisto1D(1, 1, 6);
      _mult_Psi2S         = bookHisto1D(1, 1, 7);
      _hist_all_Jpsi      = bookHisto1D(6, 1, 1);
      _hist_chi_c1        = bookHisto1D(7, 1, 1);
      _hist_chi_c2        = bookHisto1D(7, 1, 2);
      _hist_Psi_prime     = bookHisto1D(8, 1, 1);
      _hist_primary_Jpsi  = bookHisto1D(10, 1, 1);
    } // init

  private:

    //@{
    // count of weights
    double _weightSum;
    /// Histograms
    Histo1DPtr _hist_all_Jpsi;
    Histo1DPtr _hist_chi_c1;
    Histo1DPtr _hist_chi_c2;
    Histo1DPtr _hist_Psi_prime;
    Histo1DPtr _hist_primary_Jpsi;
    Histo1DPtr _mult_JPsi;
    Histo1DPtr _mult_JPsi_direct;
    Histo1DPtr _mult_chi_c1;
    Histo1DPtr _mult_chi_c1_direct;
    Histo1DPtr _mult_chi_c2;
    Histo1DPtr _mult_chi_c2_direct;
    Histo1DPtr _mult_Psi2S;
    //@}

    void findDecayProducts(const GenParticle* p,
                           vector<const GenParticle*>& allJpsi, vector<const GenParticle*>& primaryJpsi,
                           vector<const GenParticle*>& Psiprime,
                           vector<const GenParticle*>& all_chi_c1, vector<const GenParticle*>& all_chi_c2,
                           vector<const GenParticle*>& primary_chi_c1, vector<const GenParticle*>& primary_chi_c2) {
      const GenVertex* dv = p->end_vertex();
      bool isOnium = false;
      /// @todo Use better looping
      for (GenVertex::particles_in_const_iterator pp = dv->particles_in_const_begin(); pp != dv->particles_in_const_end(); ++pp) {
        int id = (*pp)->pdg_id();
        id = id%1000;
        id -= id%10;
        id /= 10;
        if (id == 44) isOnium = true;
      }
      /// @todo Use better looping
      for (GenVertex::particles_out_const_iterator pp = dv->particles_out_const_begin(); pp != dv->particles_out_const_end(); ++pp) {
        int id = (*pp)->pdg_id();
        if (id == 100443) {
          Psiprime.push_back(*pp);
        }
        else if (id == 20443) {
          all_chi_c1.push_back(*pp);
          if (!isOnium) primary_chi_c1.push_back(*pp);
        }
        else if (id == 445) {
          all_chi_c2.push_back(*pp);
          if (!isOnium) primary_chi_c2.push_back(*pp);
        }
        else if (id == 443) {
          allJpsi.push_back(*pp);
          if (!isOnium) primaryJpsi.push_back(*pp);
        }
        if ((*pp)->end_vertex()) {
          findDecayProducts(*pp, allJpsi, primaryJpsi, Psiprime, all_chi_c1, all_chi_c2, primary_chi_c1, primary_chi_c2);
        }
      }
    }

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(BABAR_2003_I593379);

}
diff --git a/analyses/pluginMisc/BABAR_2005_S6181155.cc b/analyses/pluginMisc/BABAR_2005_S6181155.cc
--- a/analyses/pluginMisc/BABAR_2005_S6181155.cc
+++ b/analyses/pluginMisc/BABAR_2005_S6181155.cc
@@ -1,145 +1,145 @@
 // -*- C++ -*-
 #include "Rivet/Analysis.hh"
 #include "Rivet/Projections/Beam.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

namespace Rivet {

  /// @brief BABAR Xi_c baryons from fragmentation
  /// @author Peter Richardson
  class BABAR_2005_S6181155 : public Analysis {
  public:

    BABAR_2005_S6181155()
      : Analysis("BABAR_2005_S6181155")
    { }


    void init() {
      declare(Beam(), "Beams");
-      declare(UnstableFinalState(), "UFS");
+      declare(UnstableParticles(), "UFS");
      _histOnResonanceA = bookHisto1D(1,1,1);
      _histOnResonanceB = bookHisto1D(2,1,1);
      _histOffResonance = bookHisto1D(2,1,2);
      _sigma            = bookHisto1D(3,1,1);
      _histOnResonanceA_norm = bookHisto1D(4,1,1);
      _histOnResonanceB_norm = bookHisto1D(5,1,1);
      _histOffResonance_norm = bookHisto1D(5,1,2);
    }


    void analyze(const Event& e) {
      const double weight = e.weight();
      // Loop through unstable FS particles and look for charmed mesons/baryons
-      const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS");
+      const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS");
      const Beam beamproj = apply<Beam>(e, "Beams");
      const ParticlePair& beams = beamproj.beams();
      const FourMomentum mom_tot = beams.first.momentum() + beams.second.momentum();
      const LorentzTransform cms_boost = LorentzTransform::mkFrameTransformFromBeta(mom_tot.betaVec());
      const double s = sqr(beamproj.sqrtS());
      const bool onresonance = fuzzyEquals(beamproj.sqrtS()/GeV, 10.58, 2E-3);
      foreach (const Particle& p, ufs.particles()) {
        // 3-momentum in CMS frame
        const double mom = cms_boost.transform(p.momentum()).vector3().mod();
        // Only looking at Xi_c^0
        if (p.abspid() != 4132) continue;
        if (onresonance) {
          _histOnResonanceA_norm->fill(mom, weight);
          _histOnResonanceB_norm->fill(mom, weight);
        }
        else {
          _histOffResonance_norm->fill(mom, s/sqr(10.58)*weight);
        }
        MSG_DEBUG("mom = " << mom);
        // off-resonance cross section
        if (checkDecay(p.genParticle())) {
          if (onresonance) {
            _histOnResonanceA->fill(mom, weight);
            _histOnResonanceB->fill(mom, weight);
          }
          else {
            _histOffResonance->fill(mom, s/sqr(10.58)*weight);
            _sigma->fill(10.6, weight);
          }
        }
      }
    }


    void finalize() {
      scale(_histOnResonanceA, crossSection()/femtobarn/sumOfWeights());
      scale(_histOnResonanceB, crossSection()/femtobarn/sumOfWeights());
      scale(_histOffResonance, crossSection()/femtobarn/sumOfWeights());
      scale(_sigma           , crossSection()/femtobarn/sumOfWeights());
      normalize(_histOnResonanceA_norm);
      normalize(_histOnResonanceB_norm);
      normalize(_histOffResonance_norm);
    }

  private:

    //@{
    /// Histograms
    Histo1DPtr _histOnResonanceA;
    Histo1DPtr _histOnResonanceB;
    Histo1DPtr _histOffResonance;
    Histo1DPtr _sigma;
    Histo1DPtr _histOnResonanceA_norm;
    Histo1DPtr _histOnResonanceB_norm;
    Histo1DPtr _histOffResonance_norm;
    //@}

    bool checkDecay(const GenParticle* p) {
      unsigned int nstable = 0, npip = 0, npim = 0;
      unsigned int nXim = 0, nXip = 0;
      findDecayProducts(p, nstable, npip, npim, nXip, nXim);
      int id = p->pdg_id();
      // Xi_c
      if (id == 4132) {
        if (nstable == 2 && nXim == 1 && npip == 1) return true;
      }
      else if (id == -4132) {
        if (nstable == 2 && nXip == 1 && npim == 1) return true;
      }
      return false;
    }

    void findDecayProducts(const GenParticle* p,
                           unsigned int& nstable, unsigned int& npip, unsigned int& npim,
                           unsigned int& nXip, unsigned int& nXim) {
      const GenVertex* dv = p->end_vertex();
      /// @todo Use better looping
      for (GenVertex::particles_out_const_iterator pp = dv->particles_out_const_begin(); pp != dv->particles_out_const_end(); ++pp) {
        int id = (*pp)->pdg_id();
        if (id == 3312) {
          ++nXim;
          ++nstable;
        }
        else if (id == -3312) {
          ++nXip;
          ++nstable;
        }
        else if (id == 111 || id == 221) {
          ++nstable;
        }
        else if ((*pp)->end_vertex()) {
          findDecayProducts(*pp, nstable, npip, npim, nXip, nXim);
        }
        else {
          if (id != 22) ++nstable;
          if (id == 211) ++npip;
          else if (id == -211) ++npim;
        }
      }
    }

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(BABAR_2005_S6181155);

}
diff --git a/analyses/pluginMisc/BABAR_2007_S6895344.cc b/analyses/pluginMisc/BABAR_2007_S6895344.cc
--- a/analyses/pluginMisc/BABAR_2007_S6895344.cc
+++ b/analyses/pluginMisc/BABAR_2007_S6895344.cc
@@ -1,86 +1,86 @@
 // -*- C++ -*-
 #include "Rivet/Analysis.hh"
 #include "Rivet/Projections/Beam.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

namespace Rivet {

  /// @brief BABAR Lambda_c from fragmentation
  /// @author Peter Richardson
  class BABAR_2007_S6895344 : public Analysis {
  public:

    BABAR_2007_S6895344()
      : Analysis("BABAR_2007_S6895344")
    { }


    void init() {
      declare(Beam(), "Beams");
-      declare(UnstableFinalState(), "UFS");
+      declare(UnstableParticles(), "UFS");
      _histOff  = bookHisto1D(1,1,1);
      _sigmaOff = bookHisto1D(2,1,1);
      _histOn   = bookHisto1D(3,1,1);
      _sigmaOn  = bookHisto1D(4,1,1);
    }


    void analyze(const Event& e) {
      const double weight = e.weight();
      // Loop through unstable FS particles and look for charmed mesons/baryons
-      const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS");
+      const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS");
      const Beam beamproj = apply<Beam>(e, "Beams");
      const ParticlePair& beams = beamproj.beams();
      const FourMomentum mom_tot = beams.first.momentum() + beams.second.momentum();
      const LorentzTransform cms_boost = LorentzTransform::mkFrameTransformFromBeta(mom_tot.betaVec());
      const double s = sqr(beamproj.sqrtS());
      const bool onresonance = fuzzyEquals(beamproj.sqrtS(), 10.58, 2E-3);
      // Particle masses from PDGlive (accessed online 16. Nov. 2009).
      foreach (const Particle& p, ufs.particles()) {
        // Only looking at Lambda_c
        if (p.abspid() != 4122) continue;
        MSG_DEBUG("Lambda_c found");
        const double mH2 = 5.22780; // 2.28646^2
        const double mom = FourMomentum(cms_boost.transform(p.momentum())).p();
        const double xp = mom/sqrt(s/4.0 - mH2);
        if (onresonance) {
          _histOn ->fill(xp, weight);
          _sigmaOn->fill(10.58, weight);
        }
        else {
          _histOff ->fill(xp, weight);
          _sigmaOff->fill(10.54, weight);
        }
      }
    }


    void finalize() {
      scale(_sigmaOn , 1./sumOfWeights());
      scale(_sigmaOff, 1./sumOfWeights());
      scale(_histOn  , 1./sumOfWeights());
      scale(_histOff , 1./sumOfWeights());
    }

  private:

    //@{
    // Histograms for the continuum cross sections
    Histo1DPtr _sigmaOn;
    Histo1DPtr _sigmaOff;
    Histo1DPtr _histOn;
    Histo1DPtr _histOff;
    //@}

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(BABAR_2007_S6895344);

}
diff --git a/analyses/pluginMisc/BABAR_2007_S7266081.cc b/analyses/pluginMisc/BABAR_2007_S7266081.cc
--- a/analyses/pluginMisc/BABAR_2007_S7266081.cc
+++ b/analyses/pluginMisc/BABAR_2007_S7266081.cc
@@ -1,181 +1,181 @@
 // -*- C++ -*-
 #include
 #include "Rivet/Analysis.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

namespace Rivet {

  /// @brief BABAR tau lepton to three charged hadrons
  /// @author Peter Richardson
  class BABAR_2007_S7266081 : public Analysis {
  public:

    BABAR_2007_S7266081()
      : Analysis("BABAR_2007_S7266081"),
        _weight_total(0),
        _weight_pipipi(0), _weight_Kpipi(0), _weight_KpiK(0), _weight_KKK(0)
    { }


    void init() {
-      declare(UnstableFinalState(), "UFS");
+      declare(UnstableParticles(), "UFS");
      _hist_pipipi_pipipi = bookHisto1D( 1, 1, 1);
      _hist_pipipi_pipi   = bookHisto1D( 2, 1, 1);
      _hist_Kpipi_Kpipi   = bookHisto1D( 3, 1, 1);
      _hist_Kpipi_Kpi     = bookHisto1D( 4, 1, 1);
      _hist_Kpipi_pipi    = bookHisto1D( 5, 1, 1);
      _hist_KpiK_KpiK     = bookHisto1D( 6, 1, 1);
      _hist_KpiK_KK       = bookHisto1D( 7, 1, 1);
      _hist_KpiK_piK      = bookHisto1D( 8, 1, 1);
      _hist_KKK_KKK       = bookHisto1D( 9, 1, 1);
      _hist_KKK_KK        = bookHisto1D(10, 1, 1);
    }


    void analyze(const Event& e) {
      double weight = e.weight();
      // Find the taus
      Particles taus;
-      foreach (const Particle& p, apply<UnstableFinalState>(e, "UFS").particles(Cuts::pid==PID::TAU)) {
+      foreach (const Particle& p, apply<UnstableParticles>(e, "UFS").particles(Cuts::pid==PID::TAU)) {
        _weight_total += weight;
        Particles pip, pim, Kp, Km;
        unsigned int nstable = 0;
        // Get the boost to the rest frame
        LorentzTransform cms_boost;
        if (p.p3().mod() > 1*MeV)
          cms_boost = LorentzTransform::mkFrameTransformFromBeta(p.momentum().betaVec());
        // Find the decay products we want
        findDecayProducts(p.genParticle(), nstable, pip, pim, Kp, Km);
        if (p.pid() < 0) {
          swap(pip, pim);
          swap(Kp, Km);
        }
        if (nstable != 4) continue;
        // pipipi
        if (pim.size() == 2 && pip.size() == 1) {
          _weight_pipipi += weight;
          _hist_pipipi_pipipi->fill((pip[0].momentum()+pim[0].momentum()+pim[1].momentum()).mass(), weight);
          _hist_pipipi_pipi->fill((pip[0].momentum()+pim[0].momentum()).mass(), weight);
          _hist_pipipi_pipi->fill((pip[0].momentum()+pim[1].momentum()).mass(), weight);
        }
        else if (pim.size() == 1 && pip.size() == 1 && Km.size() == 1) {
          _weight_Kpipi += weight;
          _hist_Kpipi_Kpipi->fill((pim[0].momentum()+pip[0].momentum()+Km[0].momentum()).mass(), weight);
          _hist_Kpipi_Kpi->fill((pip[0].momentum()+Km[0].momentum()).mass(), weight);
          _hist_Kpipi_pipi->fill((pim[0].momentum()+pip[0].momentum()).mass(), weight);
        }
        else if (Kp.size() == 1 && Km.size() == 1 && pim.size() == 1) {
          _weight_KpiK += weight;
          _hist_KpiK_KpiK->fill((Kp[0].momentum()+Km[0].momentum()+pim[0].momentum()).mass(), weight);
          _hist_KpiK_KK->fill((Kp[0].momentum()+Km[0].momentum()).mass(), weight);
          _hist_KpiK_piK->fill((Kp[0].momentum()+pim[0].momentum()).mass(), weight);
        }
        else if (Kp.size() == 1 && Km.size() == 2) {
          _weight_KKK += weight;
          _hist_KKK_KKK->fill((Kp[0].momentum()+Km[0].momentum()+Km[1].momentum()).mass(), weight);
          _hist_KKK_KK->fill((Kp[0].momentum()+Km[0].momentum()).mass(), weight);
          _hist_KKK_KK->fill((Kp[0].momentum()+Km[1].momentum()).mass(), weight);
        }
      }
    }


    void finalize() {
      if (_weight_pipipi > 0.) {
        scale(_hist_pipipi_pipipi, 1.0/_weight_pipipi);
        scale(_hist_pipipi_pipi  , 0.5/_weight_pipipi);
      }
      if (_weight_Kpipi > 0.) {
        scale(_hist_Kpipi_Kpipi, 1.0/_weight_Kpipi);
        scale(_hist_Kpipi_Kpi  , 1.0/_weight_Kpipi);
        scale(_hist_Kpipi_pipi , 1.0/_weight_Kpipi);
      }
      if (_weight_KpiK > 0.) {
        scale(_hist_KpiK_KpiK, 1.0/_weight_KpiK);
        scale(_hist_KpiK_KK  , 1.0/_weight_KpiK);
        scale(_hist_KpiK_piK , 1.0/_weight_KpiK);
      }
      if (_weight_KKK > 0.) {
        scale(_hist_KKK_KKK, 1.0/_weight_KKK);
        scale(_hist_KKK_KK , 0.5/_weight_KKK);
      }
      /// @note Using autobooking for these scatters since their x values are not really obtainable from the MC data
      bookScatter2D(11, 1, 1, true)->point(0).setY(100*_weight_pipipi/_weight_total, 100*sqrt(_weight_pipipi)/_weight_total);
      bookScatter2D(12, 1, 1, true)->point(0).setY(100*_weight_Kpipi/_weight_total, 100*sqrt(_weight_Kpipi)/_weight_total);
      bookScatter2D(13, 1, 1, true)->point(0).setY(100*_weight_KpiK/_weight_total, 100*sqrt(_weight_KpiK)/_weight_total);
      bookScatter2D(14, 1, 1, true)->point(0).setY(100*_weight_KKK/_weight_total, 100*sqrt(_weight_KKK)/_weight_total);
    }

  private:

    //@{
    // Histograms
    Histo1DPtr _hist_pipipi_pipipi, _hist_pipipi_pipi;
    Histo1DPtr _hist_Kpipi_Kpipi, _hist_Kpipi_Kpi, _hist_Kpipi_pipi;
    Histo1DPtr _hist_KpiK_KpiK, _hist_KpiK_KK, _hist_KpiK_piK;
    Histo1DPtr _hist_KKK_KKK, _hist_KKK_KK;
    // Weights counters
    double _weight_total, _weight_pipipi, _weight_Kpipi, _weight_KpiK, _weight_KKK;
    //@}

    void findDecayProducts(const GenParticle* p,
                           unsigned int& nstable,
                           Particles& pip, Particles& pim, Particles& Kp, Particles& Km) {
      const GenVertex* dv = p->end_vertex();
      /// @todo Use better looping
      for (GenVertex::particles_out_const_iterator pp = dv->particles_out_const_begin(); pp != dv->particles_out_const_end(); ++pp) {
        int id = (*pp)->pdg_id();
        if (id == PID::PI0) ++nstable;
        else if (id == PID::K0S) ++nstable;
        else if (id == PID::PIPLUS) {
          pip.push_back(Particle(**pp));
          ++nstable;
        }
        else if (id == PID::PIMINUS) {
          pim.push_back(Particle(**pp));
          ++nstable;
        }
        else if (id == PID::KPLUS) {
          Kp.push_back(Particle(**pp));
          ++nstable;
        }
        else if (id == PID::KMINUS) {
          Km.push_back(Particle(**pp));
          ++nstable;
        }
        else if ((*pp)->end_vertex()) {
          findDecayProducts(*pp, nstable, pip, pim, Kp, Km);
        }
        else ++nstable;
      }
    }

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(BABAR_2007_S7266081);

}
diff --git a/analyses/pluginMisc/BABAR_2013_I1116411.cc b/analyses/pluginMisc/BABAR_2013_I1116411.cc
--- a/analyses/pluginMisc/BABAR_2013_I1116411.cc
+++ b/analyses/pluginMisc/BABAR_2013_I1116411.cc
@@ -1,84 +1,84 @@
 // -*- C++ -*-
 #include "Rivet/Analysis.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

namespace Rivet {

  /// @brief Add a short analysis description here
  class BABAR_2013_I1116411 : public Analysis {
  public:

    /// Constructor
    DEFAULT_RIVET_ANALYSIS_CTOR(BABAR_2013_I1116411);

    /// @name Analysis methods
    //@{

    /// Book histograms and initialise projections before the run
    void init() {
      // Initialise and register projections
-      declare(UnstableFinalState(), "UFS");
+      declare(UnstableParticles(), "UFS");
      // Book histograms
      _h_q2 = bookHisto1D(1, 1, 1);
    }

    // Calculate the Q2 using mother and daughter charged lepton
    double q2(const Particle& B) {
      const Particle chlept = filter_select(B.children(), Cuts::pid==PID::POSITRON || Cuts::pid==PID::ANTIMUON)[0];
      FourMomentum q = B.mom() - chlept.mom();
      return q*q;
    }

    // Check for explicit decay into pdgids
    bool isSemileptonicDecay(const Particle& mother, vector<int> ids) {
      // Trivial check to ignore any other decays but the one in question modulo photons
      const Particles children = mother.children(Cuts::pid!=PID::PHOTON);
      if (children.size() != ids.size()) return false;
      // Check for the explicit decay
      return all(ids, [&](int i){ return count(children, hasPID(i))==1; });
    }

    /// Perform the per-event analysis
    void analyze(const Event& event) {
      // Get B+ Mesons
-      foreach (const Particle& p, apply<UnstableFinalState>(event, "UFS").particles(Cuts::pid==PID::BPLUS)) {
+      foreach (const Particle& p, apply<UnstableParticles>(event, "UFS").particles(Cuts::pid==PID::BPLUS)) {
        if (isSemileptonicDecay(p, {PID::OMEGA, PID::POSITRON, PID::NU_E}) ||
            isSemileptonicDecay(p, {PID::OMEGA, PID::ANTIMUON, PID::NU_MU})) {
          _h_q2->fill(q2(p), event.weight());
        }
      }
    }

    /// Normalise histograms etc., after the run
    void finalize() {
      normalize(_h_q2, 1.21); // normalize to BF
    }

    //@}

  private:

    /// @name Histograms
    //@{
    Histo1DPtr _h_q2;
    //@}

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(BABAR_2013_I1116411);

}
diff --git a/analyses/pluginMisc/BABAR_2015_I1334693.cc b/analyses/pluginMisc/BABAR_2015_I1334693.cc
--- a/analyses/pluginMisc/BABAR_2015_I1334693.cc
+++ b/analyses/pluginMisc/BABAR_2015_I1334693.cc
@@ -1,83 +1,83 @@
 // -*- C++ -*-
 #include "Rivet/Analysis.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

namespace Rivet {

  /// @brief Add a short analysis description here
  class BABAR_2015_I1334693 : public Analysis {
  public:

    /// Constructor
    DEFAULT_RIVET_ANALYSIS_CTOR(BABAR_2015_I1334693);

    /// @name Analysis methods
    //@{

    /// Book histograms and initialise projections before the run
    void init() {
      // Initialise and register projections
-      declare(UnstableFinalState(), "UFS");
+      declare(UnstableParticles(), "UFS");
      // Book histograms
      _h_q2 = bookHisto1D(1, 1, 1);
    }

    // Calculate the Q2 using mother and daughter meson
    double q2(const Particle& B, int mesonID) {
      FourMomentum q = B.mom() - filter_select(B.children(), Cuts::pid==mesonID)[0];
      return q*q;
    }

    // Check for explicit decay into pdgids
    bool isSemileptonicDecay(const Particle& mother, vector<int> ids) {
      // Trivial check to ignore any other decays but the one in question modulo photons
      const Particles children = mother.children(Cuts::pid!=PID::PHOTON);
      if (children.size() != ids.size()) return false;
      // Check for the explicit decay
return all(ids, [&](int i){return count(children, hasPID(i))==1;}); } /// Perform the per-event analysis void analyze(const Event& event) { // Loop over D0 mesons - foreach(const Particle& p, apply<UnstableFinalState>(event, "UFS").particles(Cuts::pid==PID::D0)) { + foreach(const Particle& p, apply<UnstableParticles>(event, "UFS").particles(Cuts::pid==PID::D0)) { if (isSemileptonicDecay(p, {PID::PIMINUS, PID::POSITRON, PID::NU_E})) { _h_q2->fill(q2(p, PID::PIMINUS), event.weight()); } } } /// Normalise histograms etc., after the run void finalize() { normalize(_h_q2, 375.4); // normalize to data } //@} private: /// @name Histograms //@{ Histo1DPtr _h_q2; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(BABAR_2015_I1334693); } diff --git a/analyses/pluginMisc/BELLE_2001_S4598261.cc b/analyses/pluginMisc/BELLE_2001_S4598261.cc --- a/analyses/pluginMisc/BELLE_2001_S4598261.cc +++ b/analyses/pluginMisc/BELLE_2001_S4598261.cc @@ -1,106 +1,106 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief BELLE pi0 spectrum at Upsilon(4S) /// @author Peter Richardson class BELLE_2001_S4598261 : public Analysis { public: BELLE_2001_S4598261() : Analysis("BELLE_2001_S4598261"), _weightSum(0.)
{ } void init() { - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); _histdSigDp = bookHisto1D(1, 1, 1); // spectrum _histMult = bookHisto1D(2, 1, 1); // multiplicity } void analyze(const Event& e) { const double weight = e.weight(); // Find the upsilons Particles upsilons; // First in unstable final state - const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS"); + const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS"); foreach (const Particle& p, ufs.particles()) if (p.pid()==300553) upsilons.push_back(p); // Then in whole event if fails if (upsilons.empty()) { foreach (const GenParticle* p, Rivet::particles(e.genEvent())) { if (p->pdg_id() != 300553) continue; const GenVertex* pv = p->production_vertex(); bool passed = true; if (pv) { /// @todo Use better looping for (GenVertex::particles_in_const_iterator pp = pv->particles_in_const_begin() ; pp != pv->particles_in_const_end() ; ++pp) { if ( p->pdg_id() == (*pp)->pdg_id() ) { passed = false; break; } } } if (passed) upsilons.push_back(Particle(p)); } } // Find upsilons foreach (const Particle& p, upsilons) { _weightSum += weight; // Find the neutral pions from the decay vector<const GenParticle*> pions; findDecayProducts(p.genParticle(), pions); const LorentzTransform cms_boost = LorentzTransform::mkFrameTransformFromBeta(p.momentum().betaVec()); for (size_t ix=0; ix<pions.size(); ++ix) { const double pcm = cms_boost.transform(FourMomentum(pions[ix]->momentum())).p(); _histdSigDp->fill(pcm,weight); } _histMult->fill(0., pions.size()*weight); } } void finalize() { scale(_histdSigDp, 1./_weightSum); scale(_histMult , 1./_weightSum); } private: //@{ // count of weights double _weightSum; /// Histograms Histo1DPtr _histdSigDp; Histo1DPtr _histMult; //@} void findDecayProducts(const GenParticle* p, vector<const GenParticle*>& pions) { const GenVertex* dv = p->end_vertex(); /// @todo Use better looping for (GenVertex::particles_out_const_iterator pp = dv->particles_out_const_begin(); pp != dv->particles_out_const_end(); ++pp) { const int id = (*pp)->pdg_id(); if (id == 111) { pions.push_back(*pp); } else if ((*pp)->end_vertex())
findDecayProducts(*pp, pions); } } }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(BELLE_2001_S4598261); } diff --git a/analyses/pluginMisc/BELLE_2008_I786560.cc b/analyses/pluginMisc/BELLE_2008_I786560.cc --- a/analyses/pluginMisc/BELLE_2008_I786560.cc +++ b/analyses/pluginMisc/BELLE_2008_I786560.cc @@ -1,112 +1,112 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief BELLE tau lepton to pi pi /// @author Peter Richardson class BELLE_2008_I786560 : public Analysis { public: BELLE_2008_I786560() : Analysis("BELLE_2008_I786560"), _weight_total(0), _weight_pipi(0) { } void init() { - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); _hist_pipi = bookHisto1D( 1, 1, 1); } void analyze(const Event& e) { // Find the taus Particles taus; - const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS"); + const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS"); foreach (const Particle& p, ufs.particles()) { if (p.abspid() != PID::TAU) continue; _weight_total += 1.; Particles pip, pim, pi0; unsigned int nstable = 0; // get the boost to the rest frame LorentzTransform cms_boost; if (p.p3().mod() > 1*MeV) cms_boost = LorentzTransform::mkFrameTransformFromBeta(p.momentum().betaVec()); // find the decay products we want findDecayProducts(p.genParticle(), nstable, pip, pim, pi0); if (p.pid() < 0) { swap(pip, pim); } if (nstable != 3) continue; // pipi if (pim.size() == 1 && pi0.size() == 1) { _weight_pipi += 1.; _hist_pipi->fill((pi0[0].momentum()+pim[0].momentum()).mass2(),1.); } } } void finalize() { if (_weight_pipi > 0.)
scale(_hist_pipi, 1./_weight_pipi); } private: //@{ // Histograms Histo1DPtr _hist_pipi; // Weights counters double _weight_total, _weight_pipi; //@} void findDecayProducts(const GenParticle* p, unsigned int & nstable, Particles& pip, Particles& pim, Particles& pi0) { const GenVertex* dv = p->end_vertex(); /// @todo Use better looping for (GenVertex::particles_out_const_iterator pp = dv->particles_out_const_begin(); pp != dv->particles_out_const_end(); ++pp) { int id = (*pp)->pdg_id(); if (id == PID::PI0 ) { pi0.push_back(Particle(**pp)); ++nstable; } else if (id == PID::K0S) ++nstable; else if (id == PID::PIPLUS) { pip.push_back(Particle(**pp)); ++nstable; } else if (id == PID::PIMINUS) { pim.push_back(Particle(**pp)); ++nstable; } else if (id == PID::KPLUS) { ++nstable; } else if (id == PID::KMINUS) { ++nstable; } else if ((*pp)->end_vertex()) { findDecayProducts(*pp, nstable, pip, pim, pi0); } else ++nstable; } } }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(BELLE_2008_I786560); } diff --git a/analyses/pluginMisc/BELLE_2011_I878990.cc b/analyses/pluginMisc/BELLE_2011_I878990.cc --- a/analyses/pluginMisc/BELLE_2011_I878990.cc +++ b/analyses/pluginMisc/BELLE_2011_I878990.cc @@ -1,82 +1,82 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief Add a short analysis description here class BELLE_2011_I878990 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(BELLE_2011_I878990); /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { // Initialise and register projections - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); // Book histograms _h_q2 = bookHisto1D(1, 1, 1); } // Calculate the Q2 using mother and daughter meson double q2(const Particle& B, int mesonID) { FourMomentum q = B.mom() - filter_select(B.children(),
Cuts::pid==mesonID)[0]; return q*q; } // Check for explicit decay into pdgids bool isSemileptonicDecay(const Particle& mother, vector<int> ids) { // Trivial check to ignore any other decays but the one in question modulo photons const Particles children = mother.children(Cuts::pid!=PID::PHOTON); if (children.size()!=ids.size()) return false; // Check for the explicit decay return all(ids, [&](int i){return count(children, hasPID(i))==1;}); } /// Perform the per-event analysis void analyze(const Event& event) { // Loop over B0 mesons - foreach(const Particle& p, apply<UnstableFinalState>(event, "UFS").particles(Cuts::pid==PID::B0)) { + foreach(const Particle& p, apply<UnstableParticles>(event, "UFS").particles(Cuts::pid==PID::B0)) { if (isSemileptonicDecay(p, {PID::PIMINUS, PID::POSITRON, PID::NU_E}) || isSemileptonicDecay(p, {PID::PIMINUS, PID::ANTIMUON, PID::NU_MU})) { _h_q2->fill(q2(p, PID::PIMINUS), event.weight()); } } } /// Normalise histograms etc., after the run void finalize() { normalize(_h_q2, 3000.86); // normalize to BF*dQ2 } //@} private: /// @name Histograms //@{ Histo1DPtr _h_q2; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(BELLE_2011_I878990); } diff --git a/analyses/pluginMisc/BELLE_2013_I1238273.cc b/analyses/pluginMisc/BELLE_2013_I1238273.cc --- a/analyses/pluginMisc/BELLE_2013_I1238273.cc +++ b/analyses/pluginMisc/BELLE_2013_I1238273.cc @@ -1,114 +1,114 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief Add a short analysis description here class BELLE_2013_I1238273 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(BELLE_2013_I1238273); /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { // Initialise and register projections - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); // Book histograms _h_q2_B0bar_pi = bookHisto1D(1, 1, 1);
_h_q2_B0bar_rho = bookHisto1D(3, 1, 1); _h_q2_Bminus_pi = bookHisto1D(2, 1, 1); _h_q2_Bminus_rho = bookHisto1D(4, 1, 1); _h_q2_Bminus_omega = bookHisto1D(5, 1, 1); } // Calculate the Q2 using mother and daughter meson double q2(const Particle& B, int mesonID) { FourMomentum q = B.mom() - filter_select(B.children(), Cuts::pid==mesonID)[0]; return q*q; } // Check for explicit decay into pdgids bool isSemileptonicDecay(const Particle& mother, vector<int> ids) { // Trivial check to ignore any other decays but the one in question modulo photons const Particles children = mother.children(Cuts::pid!=PID::PHOTON); if (children.size()!=ids.size()) return false; // Check for the explicit decay return all(ids, [&](int i){return count(children, hasPID(i))==1;}); } /// Perform the per-event analysis void analyze(const Event& event) { // Loop over B0bar Mesons - foreach(const Particle& p, apply<UnstableFinalState>(event, "UFS").particles(Cuts::pid==PID::B0BAR)) { + foreach(const Particle& p, apply<UnstableParticles>(event, "UFS").particles(Cuts::pid==PID::B0BAR)) { if (isSemileptonicDecay(p, {PID::PIPLUS, PID::ELECTRON, PID::NU_EBAR}) || isSemileptonicDecay(p, {PID::PIPLUS, PID::MUON, PID::NU_MUBAR})) { _h_q2_B0bar_pi->fill(q2(p, PID::PIPLUS), event.weight()); } if (isSemileptonicDecay(p, {PID::RHOPLUS, PID::ELECTRON, PID::NU_EBAR}) || isSemileptonicDecay(p, {PID::RHOPLUS, PID::MUON, PID::NU_MUBAR})) { _h_q2_B0bar_rho->fill(q2(p, PID::RHOPLUS), event.weight()); } } // Loop over B- Mesons - foreach(const Particle& p, apply<UnstableFinalState>(event, "UFS").particles(Cuts::pid==PID::BMINUS)) { + foreach(const Particle& p, apply<UnstableParticles>(event, "UFS").particles(Cuts::pid==PID::BMINUS)) { if (isSemileptonicDecay(p, {PID::PI0, PID::ELECTRON, PID::NU_EBAR}) || isSemileptonicDecay(p, {PID::PI0, PID::MUON, PID::NU_MUBAR})) { _h_q2_Bminus_pi->fill(q2(p, PID::PI0), event.weight()); } if (isSemileptonicDecay(p, {PID::RHO0, PID::ELECTRON, PID::NU_EBAR}) || isSemileptonicDecay(p, {PID::RHO0, PID::MUON, PID::NU_MUBAR})) { _h_q2_Bminus_rho->fill(q2(p,PID::RHO0),
event.weight()); } if (isSemileptonicDecay(p, {PID::OMEGA, PID::ELECTRON, PID::NU_EBAR}) || isSemileptonicDecay(p, {PID::OMEGA, PID::MUON, PID::NU_MUBAR})) { _h_q2_Bminus_omega->fill(q2(p, PID::OMEGA), event.weight()); } } } /// Normalise histograms etc., after the run void finalize() { normalize(_h_q2_B0bar_pi , 298.8); // normalize to BF*dQ2 normalize(_h_q2_B0bar_rho , 1304.8); // normalize to BF*dQ2 normalize(_h_q2_Bminus_pi , 324.8); // normalize to BF*dQ2 normalize(_h_q2_Bminus_rho , 367.0); // normalize to BF*dQ2 normalize(_h_q2_Bminus_omega, 793.1); // normalize to BF*dQ2 } //@} private: /// @name Histograms //@{ Histo1DPtr _h_q2_B0bar_pi ; Histo1DPtr _h_q2_B0bar_rho ; Histo1DPtr _h_q2_Bminus_pi ; Histo1DPtr _h_q2_Bminus_rho ; Histo1DPtr _h_q2_Bminus_omega; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(BELLE_2013_I1238273); } diff --git a/analyses/pluginMisc/BELLE_2015_I1397632.cc b/analyses/pluginMisc/BELLE_2015_I1397632.cc --- a/analyses/pluginMisc/BELLE_2015_I1397632.cc +++ b/analyses/pluginMisc/BELLE_2015_I1397632.cc @@ -1,98 +1,98 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief Add a short analysis description here class BELLE_2015_I1397632 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(BELLE_2015_I1397632); /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { // Initialise and register projections - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); // Book histograms _h_B_Denu = bookHisto1D(1, 1, 1); _h_B_Dmunu = bookHisto1D(1, 1, 2); _h_B_Deplusnu = bookHisto1D(1, 1, 3); _h_B_Dmuplusnu = bookHisto1D(1, 1, 4); } // Check for explicit decay into pdgids bool isSemileptonicDecay(const Particle& mother, vector<int> ids) { // Trivial check to ignore any other decays but the one in question modulo photons const
Particles children = mother.children(Cuts::pid!=PID::PHOTON); if (children.size()!=ids.size()) return false; // Check for the explicit decay return all(ids, [&](int i){return count(children, hasPID(i))==1;}); } // Calculate the recoil w using mother and daughter meson double recoilW(const Particle& B, int mesonID) { // TODO why does that not work with const? Particle D = filter_select(B.children(), Cuts::pid==mesonID)[0]; FourMomentum q = B.mom() - D.mom(); return (B.mom()*B.mom() + D.mom()*D.mom() - q*q )/ (2. * sqrt(B.mom()*B.mom()) * sqrt(D.mom()*D.mom()) ); } /// Perform the per-event analysis void analyze(const Event& event) { // Get B0 Mesons - foreach(const Particle& p, apply<UnstableFinalState>(event, "UFS").particles(Cuts::pid==PID::B0)) { + foreach(const Particle& p, apply<UnstableParticles>(event, "UFS").particles(Cuts::pid==PID::B0)) { if (isSemileptonicDecay(p, {PID::DMINUS,PID::POSITRON,PID::NU_E})) _h_B_Denu->fill( recoilW(p, PID::DMINUS), event.weight()); if (isSemileptonicDecay(p, {PID::DMINUS,PID::ANTIMUON,PID::NU_MU})) _h_B_Dmunu->fill(recoilW(p, PID::DMINUS), event.weight()); } // Get B+ Mesons - foreach(const Particle& p, apply<UnstableFinalState>(event, "UFS").particles(Cuts::pid==PID::BPLUS)) { + foreach(const Particle& p, apply<UnstableParticles>(event, "UFS").particles(Cuts::pid==PID::BPLUS)) { if (isSemileptonicDecay(p, {PID::D0BAR,PID::POSITRON,PID::NU_E})) _h_B_Deplusnu->fill( recoilW(p, PID::D0BAR), event.weight()); if (isSemileptonicDecay(p, {PID::D0BAR,PID::ANTIMUON,PID::NU_MU})) _h_B_Dmuplusnu->fill(recoilW(p, PID::D0BAR), event.weight()); } } /// Normalise histograms etc., after the run void finalize() { normalize(_h_B_Denu); normalize(_h_B_Dmunu); normalize(_h_B_Deplusnu); normalize(_h_B_Dmuplusnu); } //@} private: /// @name Histograms //@{ Histo1DPtr _h_B_Denu; Histo1DPtr _h_B_Dmunu; Histo1DPtr _h_B_Deplusnu; Histo1DPtr _h_B_Dmuplusnu; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(BELLE_2015_I1397632); } diff --git a/analyses/pluginMisc/BELLE_2017_I1512299.cc
b/analyses/pluginMisc/BELLE_2017_I1512299.cc --- a/analyses/pluginMisc/BELLE_2017_I1512299.cc +++ b/analyses/pluginMisc/BELLE_2017_I1512299.cc @@ -1,166 +1,166 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief Add a short analysis description here class BELLE_2017_I1512299 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(BELLE_2017_I1512299); /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { // Initialise and register projections - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); // Book histograms _h_w = bookHisto1D(1, 1, 1); _h_costhv = bookHisto1D(2, 1, 1); _h_costhl = bookHisto1D(3, 1, 1); _h_chi = bookHisto1D(4, 1, 1); } /// Perform the per-event analysis bool analyzeDecay(Particle mother, vector<int> ids) { // There is no point in looking for decays with fewer particles than are to be analysed if (mother.children().size() == ids.size()) { bool decayfound = true; for (int id : ids) { if (!contains(mother, id)) decayfound = false; } return decayfound; } return false; } bool contains(Particle& mother, int id) { return any(mother.children(), HasPID(id)); } double recoilW(const Particle& mother) { FourMomentum lepton, neutrino, meson, q; foreach(const Particle& c, mother.children()) { if (c.isNeutrino()) neutrino=c.mom(); if (c.isLepton() &! c.isNeutrino()) lepton =c.mom(); if (c.isHadron()) meson=c.mom(); } q = lepton + neutrino; //no hadron before double mb2= mother.mom()*mother.mom(); double mD2 = meson*meson; return (mb2 + mD2 - q*q )/ (2.
* sqrt(mb2) * sqrt(mD2) ); } /// Perform the per-event analysis void analyze(const Event& event) { FourMomentum pl, pnu, pB, pD, pDs, ppi; // Iterate over B0bar mesons - for(const Particle& p : apply<UnstableFinalState>(event, "UFS").particles(Cuts::pid==-511)) { + for(const Particle& p : apply<UnstableParticles>(event, "UFS").particles(Cuts::pid==-511)) { pB = p.momentum(); // Find semileptonic decays if (analyzeDecay(p, {PID::DSTARPLUS,-12,11}) || analyzeDecay(p, {PID::DSTARPLUS,-14,13}) ) { _h_w->fill(recoilW(p), event.weight()); // Get the necessary momenta for the angles bool foundDdecay=false; for (const Particle c : p.children()) { if ( (c.pid() == PID::DSTARPLUS) && (analyzeDecay(c, {PID::PIPLUS, PID::D0}) || analyzeDecay(c, {PID::PI0, PID::DPLUS})) ) { foundDdecay=true; pDs = c.momentum(); for (const Particle dc : c.children()) { if (dc.hasCharm()) pD = dc.momentum(); else ppi = dc.momentum(); } } if (c.pid() == 11 || c.pid() == 13) pl = c.momentum(); if (c.pid() == -12 || c.pid() == -14) pnu = c.momentum(); } // This is the angle analysis if (foundDdecay) { // First boost all relevant momenta into the B-rest frame const LorentzTransform B_boost = LorentzTransform::mkFrameTransformFromBeta(pB.betaVec()); // Momenta in B rest frame: FourMomentum lv_brest_Dstar = B_boost.transform(pDs);//lab2brest(gp_Dstar.particle.p()); FourMomentum lv_brest_w = B_boost.transform(pB - pDs); //lab2brest(p_lv_w); FourMomentum lv_brest_D = B_boost.transform(pD); //lab2brest(gp_D.particle.p()); FourMomentum lv_brest_lep = B_boost.transform(pl); //lab2brest(gp_lep.p()); const LorentzTransform Ds_boost = LorentzTransform::mkFrameTransformFromBeta(pDs.betaVec()); FourMomentum lv_Dstarrest_D = Ds_boost.transform(lv_brest_D); const LorentzTransform W_boost = LorentzTransform::mkFrameTransformFromBeta((pB-pDs).betaVec()); FourMomentum lv_wrest_lep = W_boost.transform(lv_brest_lep); double cos_thetaV = cos(lv_brest_Dstar.p3().angle(lv_Dstarrest_D.p3())); _h_costhv->fill(cos_thetaV, event.weight()); double cos_thetaL =
cos(lv_brest_w.p3().angle(lv_wrest_lep.p3())); _h_costhl->fill(cos_thetaL, event.weight()); Vector3 LTrans = lv_wrest_lep.p3() - cos_thetaL*lv_wrest_lep.p3().perp()*lv_brest_w.p3().unit(); Vector3 VTrans = lv_Dstarrest_D.p3() - cos_thetaV*lv_Dstarrest_D.p3().perp()*lv_brest_Dstar.p3().unit(); float chi = atan2(LTrans.cross(VTrans).dot(lv_brest_w.p3().unit()), LTrans.dot(VTrans)); if(chi < 0) chi += TWOPI; _h_chi->fill(chi, event.weight()); //const LorentzTransform W_boost = LorentzTransform::mkFrameTransformFromBeta((pl+pnu).betaVec()); //const LorentzTransform D_boost = LorentzTransform::mkFrameTransformFromBeta((pD+ppi).betaVec()); //FourMomentum pl_t = FourMomentum(W_boost.transform(pl)); //FourMomentum pD_t = FourMomentum(D_boost.transform(pD)); //double thetal = (pl+pnu).angle(pl_t); //double thetav = (pD+ppi).angle(pD_t); //_h_costhv->fill(cos(thetav), event.weight()); //_h_costhl->fill(cos(thetal), event.weight()); } } } } //else if (analyzeDecay(p, {413,-14,13}) ) { //_h_w->fill(recoilW(p), event.weight()); //} /// Normalise histograms etc., after the run void finalize() { double GAMMA_B0 = 4.32e-13; // Total width in GeV, calculated from mean life time of 1.52 pico seconds double BR_B0_DSPLUS_ELL_NU = 0.0495; // Branching fraction from the same paper for B0bar to D*+ ell nu double NORM = GAMMA_B0 * BR_B0_DSPLUS_ELL_NU; // Normalise histos to partial width normalize(_h_w, NORM); normalize(_h_costhv, NORM); normalize(_h_costhl, NORM); normalize(_h_chi, NORM); } //@} /// @name Histograms //@{ Histo1DPtr _h_w; Histo1DPtr _h_costhv; Histo1DPtr _h_costhl; Histo1DPtr _h_chi; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(BELLE_2017_I1512299); } diff --git a/analyses/pluginMisc/CLEO_2004_S5809304.cc b/analyses/pluginMisc/CLEO_2004_S5809304.cc --- a/analyses/pluginMisc/CLEO_2004_S5809304.cc +++ b/analyses/pluginMisc/CLEO_2004_S5809304.cc @@ -1,165 +1,165 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/Beam.hh" -#include 
"Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief CLEO charmed mesons and baryons from fragmentation /// @author Peter Richardson class CLEO_2004_S5809304 : public Analysis { public: DEFAULT_RIVET_ANALYSIS_CTOR(CLEO_2004_S5809304); void init() { declare(Beam(), "Beams"); - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); // continuum cross sections _sigmaDPlus = bookHisto1D(1,1,1); _sigmaD0A = bookHisto1D(1,1,2); _sigmaD0B = bookHisto1D(1,1,3); _sigmaDStarPlusA = bookHisto1D(1,1,4); _sigmaDStarPlusB = bookHisto1D(1,1,5); _sigmaDStar0A = bookHisto1D(1,1,6); _sigmaDStar0B = bookHisto1D(1,1,7); // histograms for continuum data _histXpDplus = bookHisto1D(2, 1, 1); _histXpD0A = bookHisto1D(3, 1, 1); _histXpD0B = bookHisto1D(4, 1, 1); _histXpDStarPlusA = bookHisto1D(5, 1, 1); _histXpDStarPlusB = bookHisto1D(6, 1, 1); _histXpDStar0A = bookHisto1D(7, 1, 1); _histXpDStar0B = bookHisto1D(8, 1, 1); _histXpTotal = bookHisto1D(9, 1, 1); } void analyze(const Event& e) { const double weight = e.weight(); // Loop through unstable FS particles and look for charmed mesons/baryons - const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS"); + const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS"); const Beam beamproj = apply<Beam>(e, "Beams"); const ParticlePair& beams = beamproj.beams(); const FourMomentum mom_tot = beams.first.momentum() + beams.second.momentum(); LorentzTransform cms_boost; if (mom_tot.p3().mod() > 1*MeV) cms_boost = LorentzTransform::mkFrameTransformFromBeta(mom_tot.betaVec()); const double s = sqr(beamproj.sqrtS()); // Particle masses from PDGlive (accessed online 16. Nov. 2009).
for (const Particle& p : ufs.particles()) { double xp = 0.0; double mH2 = 0.0; // 3-momentum in CMS frame const double mom = cms_boost.transform(p.momentum()).vector3().mod(); const int pdgid = p.abspid(); MSG_DEBUG("pdgID = " << pdgid << " mom = " << mom); switch (pdgid) { case 421: MSG_DEBUG("D0 found"); mH2 = 3.47763; // 1.86484^2 xp = mom/sqrt(s/4.0 - mH2); _sigmaD0A->fill(10.6,weight); _sigmaD0B->fill(10.6,weight); _histXpD0A->fill(xp, weight); _histXpD0B->fill(xp, weight); _histXpTotal->fill(xp, weight); break; case 411: MSG_DEBUG("D+ found"); mH2 = 3.49547; // 1.86962^2 xp = mom/sqrt(s/4.0 - mH2); _sigmaDPlus->fill(10.6,weight); _histXpDplus->fill(xp, weight); _histXpTotal->fill(xp, weight); break; case 413: MSG_DEBUG("D*+ found"); mH2 = 4.04119; // 2.01027^2 xp = mom/sqrt(s/4.0 - mH2); _sigmaDStarPlusA->fill(10.6,weight); _sigmaDStarPlusB->fill(10.6,weight); _histXpDStarPlusA->fill(xp, weight); _histXpDStarPlusB->fill(xp, weight); _histXpTotal->fill(xp, weight); break; case 423: MSG_DEBUG("D*0 found"); mH2 = 4.02793; // 2.00697**2 xp = mom/sqrt(s/4.0 - mH2); _sigmaDStar0A->fill(10.6,weight); _sigmaDStar0B->fill(10.6,weight); _histXpDStar0A->fill(xp, weight); _histXpDStar0B->fill(xp, weight); _histXpTotal->fill(xp, weight); break; } } } void finalize() { scale(_sigmaDPlus , crossSection()/picobarn/sumOfWeights()); scale(_sigmaD0A , crossSection()/picobarn/sumOfWeights()); scale(_sigmaD0B , crossSection()/picobarn/sumOfWeights()); scale(_sigmaDStarPlusA, crossSection()/picobarn/sumOfWeights()); scale(_sigmaDStarPlusB, crossSection()/picobarn/sumOfWeights()); scale(_sigmaDStar0A , crossSection()/picobarn/sumOfWeights()); scale(_sigmaDStar0B , crossSection()/picobarn/sumOfWeights()); scale(_histXpDplus , crossSection()/picobarn/sumOfWeights()); scale(_histXpD0A , crossSection()/picobarn/sumOfWeights()); scale(_histXpD0B , crossSection()/picobarn/sumOfWeights()); scale(_histXpDStarPlusA, crossSection()/picobarn/sumOfWeights()); scale(_histXpDStarPlusB, 
crossSection()/picobarn/sumOfWeights()); scale(_histXpDStar0A , crossSection()/picobarn/sumOfWeights()); scale(_histXpDStar0B , crossSection()/picobarn/sumOfWeights()); scale(_histXpTotal , crossSection()/picobarn/sumOfWeights()/4.); } private: //@{ // Histograms for the continuum cross sections Histo1DPtr _sigmaDPlus ; Histo1DPtr _sigmaD0A ; Histo1DPtr _sigmaD0B ; Histo1DPtr _sigmaDStarPlusA; Histo1DPtr _sigmaDStarPlusB; Histo1DPtr _sigmaDStar0A ; Histo1DPtr _sigmaDStar0B ; // histograms for continuum data Histo1DPtr _histXpDplus ; Histo1DPtr _histXpD0A ; Histo1DPtr _histXpD0B ; Histo1DPtr _histXpDStarPlusA; Histo1DPtr _histXpDStarPlusB; Histo1DPtr _histXpDStar0A ; Histo1DPtr _histXpDStar0B ; Histo1DPtr _histXpTotal ; //@} }; DECLARE_RIVET_PLUGIN(CLEO_2004_S5809304); } diff --git a/analyses/pluginMisc/PDG_HADRON_MULTIPLICITIES.cc b/analyses/pluginMisc/PDG_HADRON_MULTIPLICITIES.cc --- a/analyses/pluginMisc/PDG_HADRON_MULTIPLICITIES.cc +++ b/analyses/pluginMisc/PDG_HADRON_MULTIPLICITIES.cc @@ -1,770 +1,770 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief Implementation of PDG hadron multiplicities /// @author Hendrik Hoeth class PDG_HADRON_MULTIPLICITIES : public Analysis { public: /// Constructor PDG_HADRON_MULTIPLICITIES() : Analysis("PDG_HADRON_MULTIPLICITIES") { } /// @name Analysis methods //@{ void analyze(const Event& e) { // First, veto on leptonic events by requiring at least 4 charged FS particles const FinalState& fs = apply<FinalState>(e, "FS"); const size_t numParticles = fs.particles().size(); // Even if we only generate hadronic events, we still need a cut on numCharged >= 2.
if (numParticles < 2) { MSG_DEBUG("Failed leptonic event cut"); vetoEvent; } MSG_DEBUG("Passed leptonic event cut"); // Get event weight for histo filling const double weight = e.weight(); MSG_DEBUG("sqrt(s) = " << sqrtS()/GeV << " GeV"); // Final state of unstable particles to get particle spectra - const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS"); + const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS"); if (sqrtS()/GeV >= 9.5 && sqrtS()/GeV <= 10.5) { foreach (const Particle& p, ufs.particles()) { const PdgId id = p.abspid(); switch (id) { case 211: _histMeanMultiPiPlus->fill(_histMeanMultiPiPlus->bin(0).xMid(), weight); break; case 111: _histMeanMultiPi0->fill(_histMeanMultiPi0->bin(0).xMid(), weight); break; case 321: _histMeanMultiKPlus->fill(_histMeanMultiKPlus->bin(0).xMid(), weight); break; case 130: case 310: _histMeanMultiK0->fill(_histMeanMultiK0->bin(0).xMid(), weight); break; case 221: _histMeanMultiEta->fill(_histMeanMultiEta->bin(0).xMid(), weight); break; case 331: _histMeanMultiEtaPrime->fill(_histMeanMultiEtaPrime->bin(0).xMid(), weight); break; case 411: _histMeanMultiDPlus->fill(_histMeanMultiDPlus->bin(0).xMid(), weight); break; case 421: _histMeanMultiD0->fill(_histMeanMultiD0->bin(0).xMid(), weight); break; case 431: _histMeanMultiDPlus_s->fill(_histMeanMultiDPlus_s->bin(0).xMid(), weight); break; case 9010221: _histMeanMultiF0_980->fill(_histMeanMultiF0_980->bin(0).xMid(), weight); break; case 113: _histMeanMultiRho770_0->fill(_histMeanMultiRho770_0->bin(0).xMid(), weight); break; case 223: _histMeanMultiOmega782->fill(_histMeanMultiOmega782->bin(0).xMid(), weight); break; case 323: _histMeanMultiKStar892Plus->fill(_histMeanMultiKStar892Plus->bin(0).xMid(), weight); break; case 313: _histMeanMultiKStar892_0->fill(_histMeanMultiKStar892_0->bin(0).xMid(), weight); break; case 333: _histMeanMultiPhi1020->fill(_histMeanMultiPhi1020->bin(0).xMid(), weight); break; case 413: _histMeanMultiDStar2010Plus->fill(_histMeanMultiDStar2010Plus->bin(0).xMid(),
weight); break; case 423: _histMeanMultiDStar2007_0->fill(_histMeanMultiDStar2007_0->bin(0).xMid(), weight); break; case 433: _histMeanMultiDStar_s2112Plus->fill(_histMeanMultiDStar_s2112Plus->bin(0).xMid(), weight); break; case 443: _histMeanMultiJPsi1S->fill(_histMeanMultiJPsi1S->bin(0).xMid(), weight); break; case 225: _histMeanMultiF2_1270->fill(_histMeanMultiF2_1270->bin(0).xMid(), weight); break; case 2212: _histMeanMultiP->fill(_histMeanMultiP->bin(0).xMid(), weight); break; case 3122: _histMeanMultiLambda->fill(_histMeanMultiLambda->bin(0).xMid(), weight); break; case 3212: _histMeanMultiSigma0->fill(_histMeanMultiSigma0->bin(0).xMid(), weight); break; case 3312: _histMeanMultiXiMinus->fill(_histMeanMultiXiMinus->bin(0).xMid(), weight); break; case 2224: _histMeanMultiDelta1232PlusPlus->fill(_histMeanMultiDelta1232PlusPlus->bin(0).xMid(), weight); break; case 3114: _histMeanMultiSigma1385Minus->fill(_histMeanMultiSigma1385Minus->bin(0).xMid(), weight); _histMeanMultiSigma1385PlusMinus->fill(_histMeanMultiSigma1385PlusMinus->bin(0).xMid(), weight); break; case 3224: _histMeanMultiSigma1385Plus->fill(_histMeanMultiSigma1385Plus->bin(0).xMid(), weight); _histMeanMultiSigma1385PlusMinus->fill(_histMeanMultiSigma1385PlusMinus->bin(0).xMid(), weight); break; case 3324: _histMeanMultiXi1530_0->fill(_histMeanMultiXi1530_0->bin(0).xMid(), weight); break; case 3334: _histMeanMultiOmegaMinus->fill(_histMeanMultiOmegaMinus->bin(0).xMid(), weight); break; case 4122: _histMeanMultiLambda_c_Plus->fill(_histMeanMultiLambda_c_Plus->bin(0).xMid(), weight); break; case 4222: case 4112: _histMeanMultiSigma_c_PlusPlus_0->fill(_histMeanMultiSigma_c_PlusPlus_0->bin(0).xMid(), weight); break; case 3124: _histMeanMultiLambda1520->fill(_histMeanMultiLambda1520->bin(0).xMid(), weight); break; } } } if (sqrtS()/GeV >= 29 && sqrtS()/GeV <= 35) { foreach (const Particle& p, ufs.particles()) { const PdgId id = p.abspid(); switch (id) { case 211: 
_histMeanMultiPiPlus->fill(_histMeanMultiPiPlus->bin(0).xMid(), weight); break; case 111: _histMeanMultiPi0->fill(_histMeanMultiPi0->bin(0).xMid(), weight); break; case 321: _histMeanMultiKPlus->fill(_histMeanMultiKPlus->bin(0).xMid(), weight); break; case 130: case 310: _histMeanMultiK0->fill(_histMeanMultiK0->bin(0).xMid(), weight); break; case 221: _histMeanMultiEta->fill(_histMeanMultiEta->bin(0).xMid(), weight); break; case 331: _histMeanMultiEtaPrime->fill(_histMeanMultiEtaPrime->bin(0).xMid(), weight); break; case 411: _histMeanMultiDPlus->fill(_histMeanMultiDPlus->bin(0).xMid(), weight); break; case 421: _histMeanMultiD0->fill(_histMeanMultiD0->bin(0).xMid(), weight); break; case 431: _histMeanMultiDPlus_s->fill(_histMeanMultiDPlus_s->bin(0).xMid(), weight); break; case 9010221: _histMeanMultiF0_980->fill(_histMeanMultiF0_980->bin(0).xMid(), weight); break; case 113: _histMeanMultiRho770_0->fill(_histMeanMultiRho770_0->bin(0).xMid(), weight); break; case 323: _histMeanMultiKStar892Plus->fill(_histMeanMultiKStar892Plus->bin(0).xMid(), weight); break; case 313: _histMeanMultiKStar892_0->fill(_histMeanMultiKStar892_0->bin(0).xMid(), weight); break; case 333: _histMeanMultiPhi1020->fill(_histMeanMultiPhi1020->bin(0).xMid(), weight); break; case 413: _histMeanMultiDStar2010Plus->fill(_histMeanMultiDStar2010Plus->bin(0).xMid(), weight); break; case 423: _histMeanMultiDStar2007_0->fill(_histMeanMultiDStar2007_0->bin(0).xMid(), weight); break; case 225: _histMeanMultiF2_1270->fill(_histMeanMultiF2_1270->bin(0).xMid(), weight); break; case 325: _histMeanMultiK2Star1430Plus->fill(_histMeanMultiK2Star1430Plus->bin(0).xMid(), weight); break; case 315: _histMeanMultiK2Star1430_0->fill(_histMeanMultiK2Star1430_0->bin(0).xMid(), weight); break; case 2212: _histMeanMultiP->fill(_histMeanMultiP->bin(0).xMid(), weight); break; case 3122: _histMeanMultiLambda->fill(_histMeanMultiLambda->bin(0).xMid(), weight); break; case 3312: 
_histMeanMultiXiMinus->fill(_histMeanMultiXiMinus->bin(0).xMid(), weight); break; case 3114: _histMeanMultiSigma1385Minus->fill(_histMeanMultiSigma1385Minus->bin(0).xMid(), weight); _histMeanMultiSigma1385PlusMinus->fill(_histMeanMultiSigma1385PlusMinus->bin(0).xMid(), weight); break; case 3224: _histMeanMultiSigma1385Plus->fill(_histMeanMultiSigma1385Plus->bin(0).xMid(), weight); _histMeanMultiSigma1385PlusMinus->fill(_histMeanMultiSigma1385PlusMinus->bin(0).xMid(), weight); break; case 3334: _histMeanMultiOmegaMinus->fill(_histMeanMultiOmegaMinus->bin(0).xMid(), weight); break; case 4122: _histMeanMultiLambda_c_Plus->fill(_histMeanMultiLambda_c_Plus->bin(0).xMid(), weight); break; } } } if (sqrtS()/GeV >= 89.5 && sqrtS()/GeV <= 91.8) { foreach (const Particle& p, ufs.particles()) { const PdgId id = p.abspid(); switch (id) { case 211: _histMeanMultiPiPlus->fill(_histMeanMultiPiPlus->bin(0).xMid(), weight); break; case 111: _histMeanMultiPi0->fill(_histMeanMultiPi0->bin(0).xMid(), weight); break; case 321: _histMeanMultiKPlus->fill(_histMeanMultiKPlus->bin(0).xMid(), weight); break; case 130: case 310: _histMeanMultiK0->fill(_histMeanMultiK0->bin(0).xMid(), weight); break; case 221: _histMeanMultiEta->fill(_histMeanMultiEta->bin(0).xMid(), weight); break; case 331: _histMeanMultiEtaPrime->fill(_histMeanMultiEtaPrime->bin(0).xMid(), weight); break; case 411: _histMeanMultiDPlus->fill(_histMeanMultiDPlus->bin(0).xMid(), weight); break; case 421: _histMeanMultiD0->fill(_histMeanMultiD0->bin(0).xMid(), weight); break; case 431: _histMeanMultiDPlus_s->fill(_histMeanMultiDPlus_s->bin(0).xMid(), weight); break; case 511: _histMeanMultiBPlus_B0_d->fill(_histMeanMultiBPlus_B0_d->bin(0).xMid(), weight); break; case 521: _histMeanMultiBPlus_B0_d->fill(_histMeanMultiBPlus_B0_d->bin(0).xMid(), weight); _histMeanMultiBPlus_u->fill(_histMeanMultiBPlus_u->bin(0).xMid(), weight); break; case 531: _histMeanMultiB0_s->fill(_histMeanMultiB0_s->bin(0).xMid(), weight); break; case 
9010221: _histMeanMultiF0_980->fill(_histMeanMultiF0_980->bin(0).xMid(), weight); break; case 9000211: _histMeanMultiA0_980Plus->fill(_histMeanMultiA0_980Plus->bin(0).xMid(), weight); break; case 113: _histMeanMultiRho770_0->fill(_histMeanMultiRho770_0->bin(0).xMid(), weight); break; case 213: _histMeanMultiRho770Plus->fill(_histMeanMultiRho770Plus->bin(0).xMid(), weight); break; case 223: _histMeanMultiOmega782->fill(_histMeanMultiOmega782->bin(0).xMid(), weight); break; case 323: _histMeanMultiKStar892Plus->fill(_histMeanMultiKStar892Plus->bin(0).xMid(), weight); break; case 313: _histMeanMultiKStar892_0->fill(_histMeanMultiKStar892_0->bin(0).xMid(), weight); break; case 333: _histMeanMultiPhi1020->fill(_histMeanMultiPhi1020->bin(0).xMid(), weight); break; case 413: _histMeanMultiDStar2010Plus->fill(_histMeanMultiDStar2010Plus->bin(0).xMid(), weight); break; case 433: _histMeanMultiDStar_s2112Plus->fill(_histMeanMultiDStar_s2112Plus->bin(0).xMid(), weight); break; case 513: case 523: case 533: _histMeanMultiBStar->fill(_histMeanMultiBStar->bin(0).xMid(), weight); break; case 443: _histMeanMultiJPsi1S->fill(_histMeanMultiJPsi1S->bin(0).xMid(), weight); break; case 100443: _histMeanMultiPsi2S->fill(_histMeanMultiPsi2S->bin(0).xMid(), weight); break; case 553: _histMeanMultiUpsilon1S->fill(_histMeanMultiUpsilon1S->bin(0).xMid(), weight); break; case 20223: _histMeanMultiF1_1285->fill(_histMeanMultiF1_1285->bin(0).xMid(), weight); break; case 20333: _histMeanMultiF1_1420->fill(_histMeanMultiF1_1420->bin(0).xMid(), weight); break; case 445: _histMeanMultiChi_c1_3510->fill(_histMeanMultiChi_c1_3510->bin(0).xMid(), weight); break; case 225: _histMeanMultiF2_1270->fill(_histMeanMultiF2_1270->bin(0).xMid(), weight); break; case 335: _histMeanMultiF2Prime1525->fill(_histMeanMultiF2Prime1525->bin(0).xMid(), weight); break; case 315: _histMeanMultiK2Star1430_0->fill(_histMeanMultiK2Star1430_0->bin(0).xMid(), weight); break; case 515: case 525: case 535: 
_histMeanMultiBStarStar->fill(_histMeanMultiBStarStar->bin(0).xMid(), weight); break; case 10433: case 20433: _histMeanMultiDs1Plus->fill(_histMeanMultiDs1Plus->bin(0).xMid(), weight); break; case 435: _histMeanMultiDs2Plus->fill(_histMeanMultiDs2Plus->bin(0).xMid(), weight); break; case 2212: _histMeanMultiP->fill(_histMeanMultiP->bin(0).xMid(), weight); break; case 3122: _histMeanMultiLambda->fill(_histMeanMultiLambda->bin(0).xMid(), weight); break; case 3212: _histMeanMultiSigma0->fill(_histMeanMultiSigma0->bin(0).xMid(), weight); break; case 3112: _histMeanMultiSigmaMinus->fill(_histMeanMultiSigmaMinus->bin(0).xMid(), weight); _histMeanMultiSigmaPlusMinus->fill(_histMeanMultiSigmaPlusMinus->bin(0).xMid(), weight); break; case 3222: _histMeanMultiSigmaPlus->fill(_histMeanMultiSigmaPlus->bin(0).xMid(), weight); _histMeanMultiSigmaPlusMinus->fill(_histMeanMultiSigmaPlusMinus->bin(0).xMid(), weight); break; case 3312: _histMeanMultiXiMinus->fill(_histMeanMultiXiMinus->bin(0).xMid(), weight); break; case 2224: _histMeanMultiDelta1232PlusPlus->fill(_histMeanMultiDelta1232PlusPlus->bin(0).xMid(), weight); break; case 3114: _histMeanMultiSigma1385Minus->fill(_histMeanMultiSigma1385Minus->bin(0).xMid(), weight); _histMeanMultiSigma1385PlusMinus->fill(_histMeanMultiSigma1385PlusMinus->bin(0).xMid(), weight); break; case 3224: _histMeanMultiSigma1385Plus->fill(_histMeanMultiSigma1385Plus->bin(0).xMid(), weight); _histMeanMultiSigma1385PlusMinus->fill(_histMeanMultiSigma1385PlusMinus->bin(0).xMid(), weight); break; case 3324: _histMeanMultiXi1530_0->fill(_histMeanMultiXi1530_0->bin(0).xMid(), weight); break; case 3334: _histMeanMultiOmegaMinus->fill(_histMeanMultiOmegaMinus->bin(0).xMid(), weight); break; case 4122: _histMeanMultiLambda_c_Plus->fill(_histMeanMultiLambda_c_Plus->bin(0).xMid(), weight); break; case 5122: _histMeanMultiLambda_b_0->fill(_histMeanMultiLambda_b_0->bin(0).xMid(), weight); break; case 3124: 
_histMeanMultiLambda1520->fill(_histMeanMultiLambda1520->bin(0).xMid(), weight); break;
          }
        }
      }
      if (sqrtS()/GeV >= 130 && sqrtS()/GeV <= 200) {
        foreach (const Particle& p, ufs.particles()) {
          const PdgId id = p.abspid();
          switch (id) {
          case 211:
            _histMeanMultiPiPlus->fill(_histMeanMultiPiPlus->bin(0).xMid(), weight); break;
          case 321:
            _histMeanMultiKPlus->fill(_histMeanMultiKPlus->bin(0).xMid(), weight); break;
          case 130:
          case 310:
            _histMeanMultiK0->fill(_histMeanMultiK0->bin(0).xMid(), weight); break;
          case 2212:
            _histMeanMultiP->fill(_histMeanMultiP->bin(0).xMid(), weight); break;
          case 3122:
            _histMeanMultiLambda->fill(_histMeanMultiLambda->bin(0).xMid(), weight); break;
          }
        }
      }
    }


    void init() {
      declare(ChargedFinalState(), "FS");
-     declare(UnstableFinalState(), "UFS");
+     declare(UnstableParticles(), "UFS");

      if (sqrtS()/GeV >= 9.5 && sqrtS()/GeV <= 10.5) {
        _histMeanMultiPiPlus          = bookHisto1D( 1, 1, 1);
        _histMeanMultiPi0             = bookHisto1D( 2, 1, 1);
        _histMeanMultiKPlus           = bookHisto1D( 3, 1, 1);
        _histMeanMultiK0              = bookHisto1D( 4, 1, 1);
        _histMeanMultiEta             = bookHisto1D( 5, 1, 1);
        _histMeanMultiEtaPrime        = bookHisto1D( 6, 1, 1);
        _histMeanMultiDPlus           = bookHisto1D( 7, 1, 1);
        _histMeanMultiD0              = bookHisto1D( 8, 1, 1);
        _histMeanMultiDPlus_s         = bookHisto1D( 9, 1, 1);
        _histMeanMultiF0_980          = bookHisto1D(13, 1, 1);
        _histMeanMultiRho770_0        = bookHisto1D(15, 1, 1);
        _histMeanMultiOmega782        = bookHisto1D(17, 1, 1);
        _histMeanMultiKStar892Plus    = bookHisto1D(18, 1, 1);
        _histMeanMultiKStar892_0      = bookHisto1D(19, 1, 1);
        _histMeanMultiPhi1020         = bookHisto1D(20, 1, 1);
        _histMeanMultiDStar2010Plus   = bookHisto1D(21, 1, 1);
        _histMeanMultiDStar2007_0     = bookHisto1D(22, 1, 1);
        _histMeanMultiDStar_s2112Plus = bookHisto1D(23, 1, 1);
        _histMeanMultiJPsi1S          = bookHisto1D(25, 1, 1);
        _histMeanMultiF2_1270         = bookHisto1D(31, 1, 1);
        _histMeanMultiP               = bookHisto1D(38, 1, 1);
        _histMeanMultiLambda          = bookHisto1D(39, 1, 1);
        _histMeanMultiSigma0          = bookHisto1D(40, 1, 1);
        _histMeanMultiXiMinus         = bookHisto1D(44, 1, 1);
_histMeanMultiDelta1232PlusPlus = bookHisto1D(45, 1, 1); _histMeanMultiSigma1385Minus = bookHisto1D(46, 1, 1); _histMeanMultiSigma1385Plus = bookHisto1D(47, 1, 1); _histMeanMultiSigma1385PlusMinus = bookHisto1D(48, 1, 1); _histMeanMultiXi1530_0 = bookHisto1D(49, 1, 1); _histMeanMultiOmegaMinus = bookHisto1D(50, 1, 1); _histMeanMultiLambda_c_Plus = bookHisto1D(51, 1, 1); _histMeanMultiSigma_c_PlusPlus_0 = bookHisto1D(53, 1, 1); _histMeanMultiLambda1520 = bookHisto1D(54, 1, 1); } if (sqrtS()/GeV >= 29 && sqrtS()/GeV <= 35) { _histMeanMultiPiPlus = bookHisto1D( 1, 1, 2); _histMeanMultiPi0 = bookHisto1D( 2, 1, 2); _histMeanMultiKPlus = bookHisto1D( 3, 1, 2); _histMeanMultiK0 = bookHisto1D( 4, 1, 2); _histMeanMultiEta = bookHisto1D( 5, 1, 2); _histMeanMultiEtaPrime = bookHisto1D( 6, 1, 2); _histMeanMultiDPlus = bookHisto1D( 7, 1, 2); _histMeanMultiD0 = bookHisto1D( 8, 1, 2); _histMeanMultiDPlus_s = bookHisto1D( 9, 1, 2); _histMeanMultiF0_980 = bookHisto1D(13, 1, 2); _histMeanMultiRho770_0 = bookHisto1D(15, 1, 2); _histMeanMultiKStar892Plus = bookHisto1D(18, 1, 2); _histMeanMultiKStar892_0 = bookHisto1D(19, 1, 2); _histMeanMultiPhi1020 = bookHisto1D(20, 1, 2); _histMeanMultiDStar2010Plus = bookHisto1D(21, 1, 2); _histMeanMultiDStar2007_0 = bookHisto1D(22, 1, 2); _histMeanMultiF2_1270 = bookHisto1D(31, 1, 2); _histMeanMultiK2Star1430Plus = bookHisto1D(33, 1, 1); _histMeanMultiK2Star1430_0 = bookHisto1D(34, 1, 1); _histMeanMultiP = bookHisto1D(38, 1, 2); _histMeanMultiLambda = bookHisto1D(39, 1, 2); _histMeanMultiXiMinus = bookHisto1D(44, 1, 2); _histMeanMultiSigma1385Minus = bookHisto1D(46, 1, 2); _histMeanMultiSigma1385Plus = bookHisto1D(47, 1, 2); _histMeanMultiSigma1385PlusMinus = bookHisto1D(48, 1, 2); _histMeanMultiOmegaMinus = bookHisto1D(50, 1, 2); _histMeanMultiLambda_c_Plus = bookHisto1D(51, 1, 2); } if (sqrtS()/GeV >= 89.5 && sqrtS()/GeV <= 91.8) { _histMeanMultiPiPlus = bookHisto1D( 1, 1, 3); _histMeanMultiPi0 = bookHisto1D( 2, 1, 3); _histMeanMultiKPlus = 
bookHisto1D( 3, 1, 3); _histMeanMultiK0 = bookHisto1D( 4, 1, 3); _histMeanMultiEta = bookHisto1D( 5, 1, 3); _histMeanMultiEtaPrime = bookHisto1D( 6, 1, 3); _histMeanMultiDPlus = bookHisto1D( 7, 1, 3); _histMeanMultiD0 = bookHisto1D( 8, 1, 3); _histMeanMultiDPlus_s = bookHisto1D( 9, 1, 3); _histMeanMultiBPlus_B0_d = bookHisto1D(10, 1, 1); _histMeanMultiBPlus_u = bookHisto1D(11, 1, 1); _histMeanMultiB0_s = bookHisto1D(12, 1, 1); _histMeanMultiF0_980 = bookHisto1D(13, 1, 3); _histMeanMultiA0_980Plus = bookHisto1D(14, 1, 1); _histMeanMultiRho770_0 = bookHisto1D(15, 1, 3); _histMeanMultiRho770Plus = bookHisto1D(16, 1, 1); _histMeanMultiOmega782 = bookHisto1D(17, 1, 2); _histMeanMultiKStar892Plus = bookHisto1D(18, 1, 3); _histMeanMultiKStar892_0 = bookHisto1D(19, 1, 3); _histMeanMultiPhi1020 = bookHisto1D(20, 1, 3); _histMeanMultiDStar2010Plus = bookHisto1D(21, 1, 3); _histMeanMultiDStar_s2112Plus = bookHisto1D(23, 1, 2); _histMeanMultiBStar = bookHisto1D(24, 1, 1); _histMeanMultiJPsi1S = bookHisto1D(25, 1, 2); _histMeanMultiPsi2S = bookHisto1D(26, 1, 1); _histMeanMultiUpsilon1S = bookHisto1D(27, 1, 1); _histMeanMultiF1_1285 = bookHisto1D(28, 1, 1); _histMeanMultiF1_1420 = bookHisto1D(29, 1, 1); _histMeanMultiChi_c1_3510 = bookHisto1D(30, 1, 1); _histMeanMultiF2_1270 = bookHisto1D(31, 1, 3); _histMeanMultiF2Prime1525 = bookHisto1D(32, 1, 1); _histMeanMultiK2Star1430_0 = bookHisto1D(34, 1, 2); _histMeanMultiBStarStar = bookHisto1D(35, 1, 1); _histMeanMultiDs1Plus = bookHisto1D(36, 1, 1); _histMeanMultiDs2Plus = bookHisto1D(37, 1, 1); _histMeanMultiP = bookHisto1D(38, 1, 3); _histMeanMultiLambda = bookHisto1D(39, 1, 3); _histMeanMultiSigma0 = bookHisto1D(40, 1, 2); _histMeanMultiSigmaMinus = bookHisto1D(41, 1, 1); _histMeanMultiSigmaPlus = bookHisto1D(42, 1, 1); _histMeanMultiSigmaPlusMinus = bookHisto1D(43, 1, 1); _histMeanMultiXiMinus = bookHisto1D(44, 1, 3); _histMeanMultiDelta1232PlusPlus = bookHisto1D(45, 1, 2); _histMeanMultiSigma1385Minus = bookHisto1D(46, 1, 3); 
_histMeanMultiSigma1385Plus = bookHisto1D(47, 1, 3); _histMeanMultiSigma1385PlusMinus = bookHisto1D(48, 1, 3); _histMeanMultiXi1530_0 = bookHisto1D(49, 1, 2); _histMeanMultiOmegaMinus = bookHisto1D(50, 1, 3); _histMeanMultiLambda_c_Plus = bookHisto1D(51, 1, 3); _histMeanMultiLambda_b_0 = bookHisto1D(52, 1, 1); _histMeanMultiLambda1520 = bookHisto1D(54, 1, 2); } if (sqrtS()/GeV >= 130 && sqrtS()/GeV <= 200) { _histMeanMultiPiPlus = bookHisto1D( 1, 1, 4); _histMeanMultiKPlus = bookHisto1D( 3, 1, 4); _histMeanMultiK0 = bookHisto1D( 4, 1, 4); _histMeanMultiP = bookHisto1D(38, 1, 4); _histMeanMultiLambda = bookHisto1D(39, 1, 4); } } // Finalize void finalize() { if (sqrtS()/GeV >= 9.5 && sqrtS()/GeV <= 10.5) { scale(_histMeanMultiPiPlus , 1.0/sumOfWeights()); scale(_histMeanMultiPi0 , 1.0/sumOfWeights()); scale(_histMeanMultiKPlus , 1.0/sumOfWeights()); scale(_histMeanMultiK0 , 1.0/sumOfWeights()); scale(_histMeanMultiEta , 1.0/sumOfWeights()); scale(_histMeanMultiEtaPrime , 1.0/sumOfWeights()); scale(_histMeanMultiDPlus , 1.0/sumOfWeights()); scale(_histMeanMultiD0 , 1.0/sumOfWeights()); scale(_histMeanMultiDPlus_s , 1.0/sumOfWeights()); scale(_histMeanMultiF0_980 , 1.0/sumOfWeights()); scale(_histMeanMultiRho770_0 , 1.0/sumOfWeights()); scale(_histMeanMultiOmega782 , 1.0/sumOfWeights()); scale(_histMeanMultiKStar892Plus , 1.0/sumOfWeights()); scale(_histMeanMultiKStar892_0 , 1.0/sumOfWeights()); scale(_histMeanMultiPhi1020 , 1.0/sumOfWeights()); scale(_histMeanMultiDStar2010Plus , 1.0/sumOfWeights()); scale(_histMeanMultiDStar2007_0 , 1.0/sumOfWeights()); scale(_histMeanMultiDStar_s2112Plus , 1.0/sumOfWeights()); scale(_histMeanMultiJPsi1S , 1.0/sumOfWeights()); scale(_histMeanMultiF2_1270 , 1.0/sumOfWeights()); scale(_histMeanMultiP , 1.0/sumOfWeights()); scale(_histMeanMultiLambda , 1.0/sumOfWeights()); scale(_histMeanMultiSigma0 , 1.0/sumOfWeights()); scale(_histMeanMultiXiMinus , 1.0/sumOfWeights()); scale(_histMeanMultiDelta1232PlusPlus , 1.0/sumOfWeights()); 
scale(_histMeanMultiSigma1385Minus , 1.0/sumOfWeights()); scale(_histMeanMultiSigma1385Plus , 1.0/sumOfWeights()); scale(_histMeanMultiSigma1385PlusMinus, 1.0/sumOfWeights()); scale(_histMeanMultiXi1530_0 , 1.0/sumOfWeights()); scale(_histMeanMultiOmegaMinus , 1.0/sumOfWeights()); scale(_histMeanMultiLambda_c_Plus , 1.0/sumOfWeights()); scale(_histMeanMultiSigma_c_PlusPlus_0, 1.0/sumOfWeights()); scale(_histMeanMultiLambda1520 , 1.0/sumOfWeights()); } if (sqrtS()/GeV >= 29 && sqrtS()/GeV <= 35) { scale(_histMeanMultiPiPlus , 5.0/sumOfWeights()); scale(_histMeanMultiPi0 , 5.0/sumOfWeights()); scale(_histMeanMultiKPlus , 5.0/sumOfWeights()); scale(_histMeanMultiK0 , 5.0/sumOfWeights()); scale(_histMeanMultiEta , 5.0/sumOfWeights()); scale(_histMeanMultiEtaPrime , 5.0/sumOfWeights()); scale(_histMeanMultiDPlus , 5.0/sumOfWeights()); scale(_histMeanMultiD0 , 5.0/sumOfWeights()); scale(_histMeanMultiDPlus_s , 5.0/sumOfWeights()); scale(_histMeanMultiF0_980 , 5.0/sumOfWeights()); scale(_histMeanMultiRho770_0 , 5.0/sumOfWeights()); scale(_histMeanMultiKStar892Plus , 5.0/sumOfWeights()); scale(_histMeanMultiKStar892_0 , 5.0/sumOfWeights()); scale(_histMeanMultiPhi1020 , 5.0/sumOfWeights()); scale(_histMeanMultiDStar2010Plus , 5.0/sumOfWeights()); scale(_histMeanMultiDStar2007_0 , 5.0/sumOfWeights()); scale(_histMeanMultiF2_1270 , 5.0/sumOfWeights()); scale(_histMeanMultiK2Star1430Plus , 5.0/sumOfWeights()); scale(_histMeanMultiK2Star1430_0 , 5.0/sumOfWeights()); scale(_histMeanMultiP , 5.0/sumOfWeights()); scale(_histMeanMultiLambda , 5.0/sumOfWeights()); scale(_histMeanMultiXiMinus , 5.0/sumOfWeights()); scale(_histMeanMultiSigma1385Minus , 5.0/sumOfWeights()); scale(_histMeanMultiSigma1385Plus , 5.0/sumOfWeights()); scale(_histMeanMultiSigma1385PlusMinus, 5.0/sumOfWeights()); scale(_histMeanMultiOmegaMinus , 5.0/sumOfWeights()); scale(_histMeanMultiLambda_c_Plus , 5.0/sumOfWeights()); } if (sqrtS()/GeV >= 89.5 && sqrtS()/GeV <= 91.8) { scale(_histMeanMultiPiPlus , 
1.0/sumOfWeights()); scale(_histMeanMultiPi0 , 1.0/sumOfWeights()); scale(_histMeanMultiKPlus , 1.0/sumOfWeights()); scale(_histMeanMultiK0 , 1.0/sumOfWeights()); scale(_histMeanMultiEta , 1.0/sumOfWeights()); scale(_histMeanMultiEtaPrime , 1.0/sumOfWeights()); scale(_histMeanMultiDPlus , 1.0/sumOfWeights()); scale(_histMeanMultiD0 , 1.0/sumOfWeights()); scale(_histMeanMultiDPlus_s , 1.0/sumOfWeights()); scale(_histMeanMultiBPlus_B0_d , 1.0/sumOfWeights()); scale(_histMeanMultiBPlus_u , 1.0/sumOfWeights()); scale(_histMeanMultiB0_s , 1.0/sumOfWeights()); scale(_histMeanMultiF0_980 , 1.0/sumOfWeights()); scale(_histMeanMultiA0_980Plus , 1.0/sumOfWeights()); scale(_histMeanMultiRho770_0 , 1.0/sumOfWeights()); scale(_histMeanMultiRho770Plus , 1.0/sumOfWeights()); scale(_histMeanMultiOmega782 , 1.0/sumOfWeights()); scale(_histMeanMultiKStar892Plus , 1.0/sumOfWeights()); scale(_histMeanMultiKStar892_0 , 1.0/sumOfWeights()); scale(_histMeanMultiPhi1020 , 1.0/sumOfWeights()); scale(_histMeanMultiDStar2010Plus , 1.0/sumOfWeights()); scale(_histMeanMultiDStar_s2112Plus , 1.0/sumOfWeights()); scale(_histMeanMultiBStar , 1.0/sumOfWeights()); scale(_histMeanMultiJPsi1S , 1.0/sumOfWeights()); scale(_histMeanMultiPsi2S , 1.0/sumOfWeights()); scale(_histMeanMultiUpsilon1S , 1.0/sumOfWeights()); scale(_histMeanMultiF1_1285 , 1.0/sumOfWeights()); scale(_histMeanMultiF1_1420 , 1.0/sumOfWeights()); scale(_histMeanMultiChi_c1_3510 , 1.0/sumOfWeights()); scale(_histMeanMultiF2_1270 , 1.0/sumOfWeights()); scale(_histMeanMultiF2Prime1525 , 1.0/sumOfWeights()); scale(_histMeanMultiK2Star1430_0 , 1.0/sumOfWeights()); scale(_histMeanMultiBStarStar , 1.0/sumOfWeights()); scale(_histMeanMultiDs1Plus , 1.0/sumOfWeights()); scale(_histMeanMultiDs2Plus , 1.0/sumOfWeights()); scale(_histMeanMultiP , 1.0/sumOfWeights()); scale(_histMeanMultiLambda , 1.0/sumOfWeights()); scale(_histMeanMultiSigma0 , 1.0/sumOfWeights()); scale(_histMeanMultiSigmaMinus , 1.0/sumOfWeights()); 
scale(_histMeanMultiSigmaPlus , 1.0/sumOfWeights()); scale(_histMeanMultiSigmaPlusMinus , 1.0/sumOfWeights()); scale(_histMeanMultiXiMinus , 1.0/sumOfWeights()); scale(_histMeanMultiDelta1232PlusPlus , 1.0/sumOfWeights()); scale(_histMeanMultiSigma1385Minus , 1.0/sumOfWeights()); scale(_histMeanMultiSigma1385Plus , 1.0/sumOfWeights()); scale(_histMeanMultiSigma1385PlusMinus, 1.0/sumOfWeights()); scale(_histMeanMultiXi1530_0 , 1.0/sumOfWeights()); scale(_histMeanMultiOmegaMinus , 1.0/sumOfWeights()); scale(_histMeanMultiLambda_c_Plus , 1.0/sumOfWeights()); scale(_histMeanMultiLambda_b_0 , 1.0/sumOfWeights()); scale(_histMeanMultiLambda1520 , 1.0/sumOfWeights()); } if (sqrtS()/GeV >= 130 && sqrtS()/GeV <= 200) { scale(_histMeanMultiPiPlus , 70.0/sumOfWeights()); scale(_histMeanMultiKPlus , 70.0/sumOfWeights()); scale(_histMeanMultiK0 , 70.0/sumOfWeights()); scale(_histMeanMultiP , 70.0/sumOfWeights()); scale(_histMeanMultiLambda , 70.0/sumOfWeights()); } } //@} private: Histo1DPtr _histMeanMultiPiPlus; Histo1DPtr _histMeanMultiPi0; Histo1DPtr _histMeanMultiKPlus; Histo1DPtr _histMeanMultiK0; Histo1DPtr _histMeanMultiEta; Histo1DPtr _histMeanMultiEtaPrime; Histo1DPtr _histMeanMultiDPlus; Histo1DPtr _histMeanMultiD0; Histo1DPtr _histMeanMultiDPlus_s; Histo1DPtr _histMeanMultiBPlus_B0_d; Histo1DPtr _histMeanMultiBPlus_u; Histo1DPtr _histMeanMultiB0_s; Histo1DPtr _histMeanMultiF0_980; Histo1DPtr _histMeanMultiA0_980Plus; Histo1DPtr _histMeanMultiRho770_0; Histo1DPtr _histMeanMultiRho770Plus; Histo1DPtr _histMeanMultiOmega782; Histo1DPtr _histMeanMultiKStar892Plus; Histo1DPtr _histMeanMultiKStar892_0; Histo1DPtr _histMeanMultiPhi1020; Histo1DPtr _histMeanMultiDStar2010Plus; Histo1DPtr _histMeanMultiDStar2007_0; Histo1DPtr _histMeanMultiDStar_s2112Plus; Histo1DPtr _histMeanMultiBStar; Histo1DPtr _histMeanMultiJPsi1S; Histo1DPtr _histMeanMultiPsi2S; Histo1DPtr _histMeanMultiUpsilon1S; Histo1DPtr _histMeanMultiF1_1285; Histo1DPtr _histMeanMultiF1_1420; Histo1DPtr 
_histMeanMultiChi_c1_3510;
    Histo1DPtr _histMeanMultiF2_1270;
    Histo1DPtr _histMeanMultiF2Prime1525;
    Histo1DPtr _histMeanMultiK2Star1430Plus;
    Histo1DPtr _histMeanMultiK2Star1430_0;
    Histo1DPtr _histMeanMultiBStarStar;
    Histo1DPtr _histMeanMultiDs1Plus;
    Histo1DPtr _histMeanMultiDs2Plus;
    Histo1DPtr _histMeanMultiP;
    Histo1DPtr _histMeanMultiLambda;
    Histo1DPtr _histMeanMultiSigma0;
    Histo1DPtr _histMeanMultiSigmaMinus;
    Histo1DPtr _histMeanMultiSigmaPlus;
    Histo1DPtr _histMeanMultiSigmaPlusMinus;
    Histo1DPtr _histMeanMultiXiMinus;
    Histo1DPtr _histMeanMultiDelta1232PlusPlus;
    Histo1DPtr _histMeanMultiSigma1385Minus;
    Histo1DPtr _histMeanMultiSigma1385Plus;
    Histo1DPtr _histMeanMultiSigma1385PlusMinus;
    Histo1DPtr _histMeanMultiXi1530_0;
    Histo1DPtr _histMeanMultiOmegaMinus;
    Histo1DPtr _histMeanMultiLambda_c_Plus;
    Histo1DPtr _histMeanMultiLambda_b_0;
    Histo1DPtr _histMeanMultiSigma_c_PlusPlus_0;
    Histo1DPtr _histMeanMultiLambda1520;
    //@}

  };


  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(PDG_HADRON_MULTIPLICITIES);

}
diff --git a/analyses/pluginMisc/PDG_HADRON_MULTIPLICITIES_RATIOS.cc b/analyses/pluginMisc/PDG_HADRON_MULTIPLICITIES_RATIOS.cc
--- a/analyses/pluginMisc/PDG_HADRON_MULTIPLICITIES_RATIOS.cc
+++ b/analyses/pluginMisc/PDG_HADRON_MULTIPLICITIES_RATIOS.cc
@@ -1,764 +1,764 @@
// -*- C++ -*-
#include "Rivet/Analysis.hh"
#include "Rivet/Projections/FinalState.hh"
#include "Rivet/Projections/ChargedFinalState.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

namespace Rivet {


  /// @brief Implementation of PDG hadron multiplicities as ratios to \f$ \pi^\pm \f$ multiplicity
  /// @author Holger Schulz
  class PDG_HADRON_MULTIPLICITIES_RATIOS : public Analysis {
  public:

    /// Constructor
    PDG_HADRON_MULTIPLICITIES_RATIOS()
      : Analysis("PDG_HADRON_MULTIPLICITIES_RATIOS")
    {
      _weightedTotalNumPiPlus = 0;
    }


    /// @name Analysis methods
    //@{

    void analyze(const Event& e) {
      // First, veto on leptonic events by requiring at least 4 charged
      // FS particles
      const FinalState& fs = apply<FinalState>(e, "FS");
      const size_t numParticles = fs.particles().size();

      // Even if we only generate hadronic events, we still need a cut on numCharged >= 2.
      if (numParticles < 2) {
        MSG_DEBUG("Failed leptonic event cut");
        vetoEvent;
      }
      MSG_DEBUG("Passed leptonic event cut");

      // Get event weight for histo filling
      const double weight = e.weight();

      MSG_DEBUG("sqrt(S) = " << sqrtS()/GeV << " GeV");

      // Final state of unstable particles to get particle spectra
-     const UnstableFinalState& ufs = apply<UnstableFinalState>(e, "UFS");
+     const UnstableParticles& ufs = apply<UnstableParticles>(e, "UFS");

      if (sqrtS()/GeV >= 9.5 && sqrtS()/GeV <= 10.5) {
        foreach (const Particle& p, ufs.particles()) {
          const PdgId id = p.abspid();
          switch (id) {
          case 211:
            _weightedTotalNumPiPlus += weight;
            break;
          case 111:
            _histMeanMultiPi0->fill(_histMeanMultiPi0->bin(0).xMid(), weight); break;
          case 321:
            _histMeanMultiKPlus->fill(_histMeanMultiKPlus->bin(0).xMid(), weight); break;
          case 130:
          case 310:
            _histMeanMultiK0->fill(_histMeanMultiK0->bin(0).xMid(), weight); break;
          case 221:
            _histMeanMultiEta->fill(_histMeanMultiEta->bin(0).xMid(), weight); break;
          case 331:
            _histMeanMultiEtaPrime->fill(_histMeanMultiEtaPrime->bin(0).xMid(), weight); break;
          case 411:
            _histMeanMultiDPlus->fill(_histMeanMultiDPlus->bin(0).xMid(), weight); break;
          case 421:
            _histMeanMultiD0->fill(_histMeanMultiD0->bin(0).xMid(), weight); break;
          case 431:
            _histMeanMultiDPlus_s->fill(_histMeanMultiDPlus_s->bin(0).xMid(), weight); break;
          case 9010221:
            _histMeanMultiF0_980->fill(_histMeanMultiF0_980->bin(0).xMid(), weight); break;
          case 113:
            _histMeanMultiRho770_0->fill(_histMeanMultiRho770_0->bin(0).xMid(), weight); break;
          case 223:
            _histMeanMultiOmega782->fill(_histMeanMultiOmega782->bin(0).xMid(), weight); break;
          case 323:
            _histMeanMultiKStar892Plus->fill(_histMeanMultiKStar892Plus->bin(0).xMid(), weight); break;
          case 313:
            _histMeanMultiKStar892_0->fill(_histMeanMultiKStar892_0->bin(0).xMid(), weight); break;
          case 333:
_histMeanMultiPhi1020->fill(_histMeanMultiPhi1020->bin(0).xMid(), weight); break; case 413: _histMeanMultiDStar2010Plus->fill(_histMeanMultiDStar2010Plus->bin(0).xMid(), weight); break; case 423: _histMeanMultiDStar2007_0->fill(_histMeanMultiDStar2007_0->bin(0).xMid(), weight); break; case 433: _histMeanMultiDStar_s2112Plus->fill(_histMeanMultiDStar_s2112Plus->bin(0).xMid(), weight); break; case 443: _histMeanMultiJPsi1S->fill(_histMeanMultiJPsi1S->bin(0).xMid(), weight); break; case 225: _histMeanMultiF2_1270->fill(_histMeanMultiF2_1270->bin(0).xMid(), weight); break; case 2212: _histMeanMultiP->fill(_histMeanMultiP->bin(0).xMid(), weight); break; case 3122: _histMeanMultiLambda->fill(_histMeanMultiLambda->bin(0).xMid(), weight); break; case 3212: _histMeanMultiSigma0->fill(_histMeanMultiSigma0->bin(0).xMid(), weight); break; case 3312: _histMeanMultiXiMinus->fill(_histMeanMultiXiMinus->bin(0).xMid(), weight); break; case 2224: _histMeanMultiDelta1232PlusPlus->fill(_histMeanMultiDelta1232PlusPlus->bin(0).xMid(), weight); break; case 3114: _histMeanMultiSigma1385Minus->fill(_histMeanMultiSigma1385Minus->bin(0).xMid(), weight); _histMeanMultiSigma1385PlusMinus->fill(_histMeanMultiSigma1385PlusMinus->bin(0).xMid(), weight); break; case 3224: _histMeanMultiSigma1385Plus->fill(_histMeanMultiSigma1385Plus->bin(0).xMid(), weight); _histMeanMultiSigma1385PlusMinus->fill(_histMeanMultiSigma1385PlusMinus->bin(0).xMid(), weight); break; case 3324: _histMeanMultiXi1530_0->fill(_histMeanMultiXi1530_0->bin(0).xMid(), weight); break; case 3334: _histMeanMultiOmegaMinus->fill(_histMeanMultiOmegaMinus->bin(0).xMid(), weight); break; case 4122: _histMeanMultiLambda_c_Plus->fill(_histMeanMultiLambda_c_Plus->bin(0).xMid(), weight); break; case 4222: case 4112: _histMeanMultiSigma_c_PlusPlus_0->fill(_histMeanMultiSigma_c_PlusPlus_0->bin(0).xMid(), weight); break; case 3124: _histMeanMultiLambda1520->fill(_histMeanMultiLambda1520->bin(0).xMid(), weight); break; } } } if (sqrtS()/GeV >= 
29 && sqrtS()/GeV <= 35) { foreach (const Particle& p, ufs.particles()) { const PdgId id = p.abspid(); switch (id) { case 211: _weightedTotalNumPiPlus += weight; break; case 111: _histMeanMultiPi0->fill(_histMeanMultiPi0->bin(0).xMid(), weight); break; case 321: _histMeanMultiKPlus->fill(_histMeanMultiKPlus->bin(0).xMid(), weight); break; case 130: case 310: _histMeanMultiK0->fill(_histMeanMultiK0->bin(0).xMid(), weight); break; case 221: _histMeanMultiEta->fill(_histMeanMultiEta->bin(0).xMid(), weight); break; case 331: _histMeanMultiEtaPrime->fill(_histMeanMultiEtaPrime->bin(0).xMid(), weight); break; case 411: _histMeanMultiDPlus->fill(_histMeanMultiDPlus->bin(0).xMid(), weight); break; case 421: _histMeanMultiD0->fill(_histMeanMultiD0->bin(0).xMid(), weight); break; case 431: _histMeanMultiDPlus_s->fill(_histMeanMultiDPlus_s->bin(0).xMid(), weight); break; case 9010221: _histMeanMultiF0_980->fill(_histMeanMultiF0_980->bin(0).xMid(), weight); break; case 113: _histMeanMultiRho770_0->fill(_histMeanMultiRho770_0->bin(0).xMid(), weight); break; case 323: _histMeanMultiKStar892Plus->fill(_histMeanMultiKStar892Plus->bin(0).xMid(), weight); break; case 313: _histMeanMultiKStar892_0->fill(_histMeanMultiKStar892_0->bin(0).xMid(), weight); break; case 333: _histMeanMultiPhi1020->fill(_histMeanMultiPhi1020->bin(0).xMid(), weight); break; case 413: _histMeanMultiDStar2010Plus->fill(_histMeanMultiDStar2010Plus->bin(0).xMid(), weight); break; case 423: _histMeanMultiDStar2007_0->fill(_histMeanMultiDStar2007_0->bin(0).xMid(), weight); break; case 225: _histMeanMultiF2_1270->fill(_histMeanMultiF2_1270->bin(0).xMid(), weight); break; case 325: _histMeanMultiK2Star1430Plus->fill(_histMeanMultiK2Star1430Plus->bin(0).xMid(), weight); break; case 315: _histMeanMultiK2Star1430_0->fill(_histMeanMultiK2Star1430_0->bin(0).xMid(), weight); break; case 2212: _histMeanMultiP->fill(_histMeanMultiP->bin(0).xMid(), weight); break; case 3122: 
_histMeanMultiLambda->fill(_histMeanMultiLambda->bin(0).xMid(), weight); break; case 3312: _histMeanMultiXiMinus->fill(_histMeanMultiXiMinus->bin(0).xMid(), weight); break; case 3114: _histMeanMultiSigma1385Minus->fill(_histMeanMultiSigma1385Minus->bin(0).xMid(), weight); _histMeanMultiSigma1385PlusMinus->fill(_histMeanMultiSigma1385PlusMinus->bin(0).xMid(), weight); break; case 3224: _histMeanMultiSigma1385Plus->fill(_histMeanMultiSigma1385Plus->bin(0).xMid(), weight); _histMeanMultiSigma1385PlusMinus->fill(_histMeanMultiSigma1385PlusMinus->bin(0).xMid(), weight); break; case 3334: _histMeanMultiOmegaMinus->fill(_histMeanMultiOmegaMinus->bin(0).xMid(), weight); break; case 4122: _histMeanMultiLambda_c_Plus->fill(_histMeanMultiLambda_c_Plus->bin(0).xMid(), weight); break; } } } if (sqrtS()/GeV >= 89.5 && sqrtS()/GeV <= 91.8) { foreach (const Particle& p, ufs.particles()) { const PdgId id = p.abspid(); switch (id) { case 211: _weightedTotalNumPiPlus += weight; break; case 111: _histMeanMultiPi0->fill(_histMeanMultiPi0->bin(0).xMid(), weight); break; case 321: _histMeanMultiKPlus->fill(_histMeanMultiKPlus->bin(0).xMid(), weight); break; case 130: case 310: _histMeanMultiK0->fill(_histMeanMultiK0->bin(0).xMid(), weight); break; case 221: _histMeanMultiEta->fill(_histMeanMultiEta->bin(0).xMid(), weight); break; case 331: _histMeanMultiEtaPrime->fill(_histMeanMultiEtaPrime->bin(0).xMid(), weight); break; case 411: _histMeanMultiDPlus->fill(_histMeanMultiDPlus->bin(0).xMid(), weight); break; case 421: _histMeanMultiD0->fill(_histMeanMultiD0->bin(0).xMid(), weight); break; case 431: _histMeanMultiDPlus_s->fill(_histMeanMultiDPlus_s->bin(0).xMid(), weight); break; case 511: _histMeanMultiBPlus_B0_d->fill(_histMeanMultiBPlus_B0_d->bin(0).xMid(), weight); break; case 521: _histMeanMultiBPlus_B0_d->fill(_histMeanMultiBPlus_B0_d->bin(0).xMid(), weight); _histMeanMultiBPlus_u->fill(_histMeanMultiBPlus_u->bin(0).xMid(), weight); break; case 531: 
_histMeanMultiB0_s->fill(_histMeanMultiB0_s->bin(0).xMid(), weight); break; case 9010221: _histMeanMultiF0_980->fill(_histMeanMultiF0_980->bin(0).xMid(), weight); break; case 9000211: _histMeanMultiA0_980Plus->fill(_histMeanMultiA0_980Plus->bin(0).xMid(), weight); break; case 113: _histMeanMultiRho770_0->fill(_histMeanMultiRho770_0->bin(0).xMid(), weight); break; case 213: _histMeanMultiRho770Plus->fill(_histMeanMultiRho770Plus->bin(0).xMid(), weight); break; case 223: _histMeanMultiOmega782->fill(_histMeanMultiOmega782->bin(0).xMid(), weight); break; case 323: _histMeanMultiKStar892Plus->fill(_histMeanMultiKStar892Plus->bin(0).xMid(), weight); break; case 313: _histMeanMultiKStar892_0->fill(_histMeanMultiKStar892_0->bin(0).xMid(), weight); break; case 333: _histMeanMultiPhi1020->fill(_histMeanMultiPhi1020->bin(0).xMid(), weight); break; case 413: _histMeanMultiDStar2010Plus->fill(_histMeanMultiDStar2010Plus->bin(0).xMid(), weight); break; case 433: _histMeanMultiDStar_s2112Plus->fill(_histMeanMultiDStar_s2112Plus->bin(0).xMid(), weight); break; case 513: case 523: case 533: _histMeanMultiBStar->fill(_histMeanMultiBStar->bin(0).xMid(), weight); break; case 443: _histMeanMultiJPsi1S->fill(_histMeanMultiJPsi1S->bin(0).xMid(), weight); break; case 100443: _histMeanMultiPsi2S->fill(_histMeanMultiPsi2S->bin(0).xMid(), weight); break; case 553: _histMeanMultiUpsilon1S->fill(_histMeanMultiUpsilon1S->bin(0).xMid(), weight); break; case 20223: _histMeanMultiF1_1285->fill(_histMeanMultiF1_1285->bin(0).xMid(), weight); break; case 20333: _histMeanMultiF1_1420->fill(_histMeanMultiF1_1420->bin(0).xMid(), weight); break; case 445: _histMeanMultiChi_c1_3510->fill(_histMeanMultiChi_c1_3510->bin(0).xMid(), weight); break; case 225: _histMeanMultiF2_1270->fill(_histMeanMultiF2_1270->bin(0).xMid(), weight); break; case 335: _histMeanMultiF2Prime1525->fill(_histMeanMultiF2Prime1525->bin(0).xMid(), weight); break; case 315: 
_histMeanMultiK2Star1430_0->fill(_histMeanMultiK2Star1430_0->bin(0).xMid(), weight); break; case 515: case 525: case 535: _histMeanMultiBStarStar->fill(_histMeanMultiBStarStar->bin(0).xMid(), weight); break; case 10433: case 20433: _histMeanMultiDs1Plus->fill(_histMeanMultiDs1Plus->bin(0).xMid(), weight); break; case 435: _histMeanMultiDs2Plus->fill(_histMeanMultiDs2Plus->bin(0).xMid(), weight); break; case 2212: _histMeanMultiP->fill(_histMeanMultiP->bin(0).xMid(), weight); break; case 3122: _histMeanMultiLambda->fill(_histMeanMultiLambda->bin(0).xMid(), weight); break; case 3212: _histMeanMultiSigma0->fill(_histMeanMultiSigma0->bin(0).xMid(), weight); break; case 3112: _histMeanMultiSigmaMinus->fill(_histMeanMultiSigmaMinus->bin(0).xMid(), weight); _histMeanMultiSigmaPlusMinus->fill(_histMeanMultiSigmaPlusMinus->bin(0).xMid(), weight); break; case 3222: _histMeanMultiSigmaPlus->fill(_histMeanMultiSigmaPlus->bin(0).xMid(), weight); _histMeanMultiSigmaPlusMinus->fill(_histMeanMultiSigmaPlusMinus->bin(0).xMid(), weight); break; case 3312: _histMeanMultiXiMinus->fill(_histMeanMultiXiMinus->bin(0).xMid(), weight); break; case 2224: _histMeanMultiDelta1232PlusPlus->fill(_histMeanMultiDelta1232PlusPlus->bin(0).xMid(), weight); break; case 3114: _histMeanMultiSigma1385Minus->fill(_histMeanMultiSigma1385Minus->bin(0).xMid(), weight); _histMeanMultiSigma1385PlusMinus->fill(_histMeanMultiSigma1385PlusMinus->bin(0).xMid(), weight); break; case 3224: _histMeanMultiSigma1385Plus->fill(_histMeanMultiSigma1385Plus->bin(0).xMid(), weight); _histMeanMultiSigma1385PlusMinus->fill(_histMeanMultiSigma1385PlusMinus->bin(0).xMid(), weight); break; case 3324: _histMeanMultiXi1530_0->fill(_histMeanMultiXi1530_0->bin(0).xMid(), weight); break; case 3334: _histMeanMultiOmegaMinus->fill(_histMeanMultiOmegaMinus->bin(0).xMid(), weight); break; case 4122: _histMeanMultiLambda_c_Plus->fill(_histMeanMultiLambda_c_Plus->bin(0).xMid(), weight); break; case 5122: 
_histMeanMultiLambda_b_0->fill(_histMeanMultiLambda_b_0->bin(0).xMid(), weight); break; case 3124: _histMeanMultiLambda1520->fill(_histMeanMultiLambda1520->bin(0).xMid(), weight); break; } } } if (sqrtS()/GeV >= 130 && sqrtS()/GeV <= 200) { foreach (const Particle& p, ufs.particles()) { const PdgId id = p.abspid(); switch (id) { case 211: _weightedTotalNumPiPlus += weight; break; case 321: _histMeanMultiKPlus->fill(_histMeanMultiKPlus->bin(0).xMid(), weight); break; case 130: case 310: _histMeanMultiK0->fill(_histMeanMultiK0->bin(0).xMid(), weight); break; case 2212: _histMeanMultiP->fill(_histMeanMultiP->bin(0).xMid(), weight); break; case 3122: _histMeanMultiLambda->fill(_histMeanMultiLambda->bin(0).xMid(), weight); break; } } } } void init() { declare(ChargedFinalState(), "FS"); - declare(UnstableFinalState(), "UFS"); + declare(UnstableParticles(), "UFS"); if (sqrtS()/GeV >= 9.5 && sqrtS()/GeV <= 10.5) { _histMeanMultiPi0 = bookHisto1D( 2, 1, 1); _histMeanMultiKPlus = bookHisto1D( 3, 1, 1); _histMeanMultiK0 = bookHisto1D( 4, 1, 1); _histMeanMultiEta = bookHisto1D( 5, 1, 1); _histMeanMultiEtaPrime = bookHisto1D( 6, 1, 1); _histMeanMultiDPlus = bookHisto1D( 7, 1, 1); _histMeanMultiD0 = bookHisto1D( 8, 1, 1); _histMeanMultiDPlus_s = bookHisto1D( 9, 1, 1); _histMeanMultiF0_980 = bookHisto1D(13, 1, 1); _histMeanMultiRho770_0 = bookHisto1D(15, 1, 1); _histMeanMultiOmega782 = bookHisto1D(17, 1, 1); _histMeanMultiKStar892Plus = bookHisto1D(18, 1, 1); _histMeanMultiKStar892_0 = bookHisto1D(19, 1, 1); _histMeanMultiPhi1020 = bookHisto1D(20, 1, 1); _histMeanMultiDStar2010Plus = bookHisto1D(21, 1, 1); _histMeanMultiDStar2007_0 = bookHisto1D(22, 1, 1); _histMeanMultiDStar_s2112Plus = bookHisto1D(23, 1, 1); _histMeanMultiJPsi1S = bookHisto1D(25, 1, 1); _histMeanMultiF2_1270 = bookHisto1D(31, 1, 1); _histMeanMultiP = bookHisto1D(38, 1, 1); _histMeanMultiLambda = bookHisto1D(39, 1, 1); _histMeanMultiSigma0 = bookHisto1D(40, 1, 1); _histMeanMultiXiMinus = bookHisto1D(44, 1, 1); 
_histMeanMultiDelta1232PlusPlus = bookHisto1D(45, 1, 1); _histMeanMultiSigma1385Minus = bookHisto1D(46, 1, 1); _histMeanMultiSigma1385Plus = bookHisto1D(47, 1, 1); _histMeanMultiSigma1385PlusMinus = bookHisto1D(48, 1, 1); _histMeanMultiXi1530_0 = bookHisto1D(49, 1, 1); _histMeanMultiOmegaMinus = bookHisto1D(50, 1, 1); _histMeanMultiLambda_c_Plus = bookHisto1D(51, 1, 1); _histMeanMultiSigma_c_PlusPlus_0 = bookHisto1D(53, 1, 1); _histMeanMultiLambda1520 = bookHisto1D(54, 1, 1); } if (sqrtS()/GeV >= 29 && sqrtS()/GeV <= 35) { _histMeanMultiPi0 = bookHisto1D( 2, 1, 2); _histMeanMultiKPlus = bookHisto1D( 3, 1, 2); _histMeanMultiK0 = bookHisto1D( 4, 1, 2); _histMeanMultiEta = bookHisto1D( 5, 1, 2); _histMeanMultiEtaPrime = bookHisto1D( 6, 1, 2); _histMeanMultiDPlus = bookHisto1D( 7, 1, 2); _histMeanMultiD0 = bookHisto1D( 8, 1, 2); _histMeanMultiDPlus_s = bookHisto1D( 9, 1, 2); _histMeanMultiF0_980 = bookHisto1D(13, 1, 2); _histMeanMultiRho770_0 = bookHisto1D(15, 1, 2); _histMeanMultiKStar892Plus = bookHisto1D(18, 1, 2); _histMeanMultiKStar892_0 = bookHisto1D(19, 1, 2); _histMeanMultiPhi1020 = bookHisto1D(20, 1, 2); _histMeanMultiDStar2010Plus = bookHisto1D(21, 1, 2); _histMeanMultiDStar2007_0 = bookHisto1D(22, 1, 2); _histMeanMultiF2_1270 = bookHisto1D(31, 1, 2); _histMeanMultiK2Star1430Plus = bookHisto1D(33, 1, 1); _histMeanMultiK2Star1430_0 = bookHisto1D(34, 1, 1); _histMeanMultiP = bookHisto1D(38, 1, 2); _histMeanMultiLambda = bookHisto1D(39, 1, 2); _histMeanMultiXiMinus = bookHisto1D(44, 1, 2); _histMeanMultiSigma1385Minus = bookHisto1D(46, 1, 2); _histMeanMultiSigma1385Plus = bookHisto1D(47, 1, 2); _histMeanMultiSigma1385PlusMinus = bookHisto1D(48, 1, 2); _histMeanMultiOmegaMinus = bookHisto1D(50, 1, 2); _histMeanMultiLambda_c_Plus = bookHisto1D(51, 1, 2); } if (sqrtS()/GeV >= 89.5 && sqrtS()/GeV <= 91.8) { _histMeanMultiPi0 = bookHisto1D( 2, 1, 3); _histMeanMultiKPlus = bookHisto1D( 3, 1, 3); _histMeanMultiK0 = bookHisto1D( 4, 1, 3); _histMeanMultiEta = 
bookHisto1D( 5, 1, 3); _histMeanMultiEtaPrime = bookHisto1D( 6, 1, 3); _histMeanMultiDPlus = bookHisto1D( 7, 1, 3); _histMeanMultiD0 = bookHisto1D( 8, 1, 3); _histMeanMultiDPlus_s = bookHisto1D( 9, 1, 3); _histMeanMultiBPlus_B0_d = bookHisto1D(10, 1, 1); _histMeanMultiBPlus_u = bookHisto1D(11, 1, 1); _histMeanMultiB0_s = bookHisto1D(12, 1, 1); _histMeanMultiF0_980 = bookHisto1D(13, 1, 3); _histMeanMultiA0_980Plus = bookHisto1D(14, 1, 1); _histMeanMultiRho770_0 = bookHisto1D(15, 1, 3); _histMeanMultiRho770Plus = bookHisto1D(16, 1, 1); _histMeanMultiOmega782 = bookHisto1D(17, 1, 2); _histMeanMultiKStar892Plus = bookHisto1D(18, 1, 3); _histMeanMultiKStar892_0 = bookHisto1D(19, 1, 3); _histMeanMultiPhi1020 = bookHisto1D(20, 1, 3); _histMeanMultiDStar2010Plus = bookHisto1D(21, 1, 3); _histMeanMultiDStar_s2112Plus = bookHisto1D(23, 1, 2); _histMeanMultiBStar = bookHisto1D(24, 1, 1); _histMeanMultiJPsi1S = bookHisto1D(25, 1, 2); _histMeanMultiPsi2S = bookHisto1D(26, 1, 1); _histMeanMultiUpsilon1S = bookHisto1D(27, 1, 1); _histMeanMultiF1_1285 = bookHisto1D(28, 1, 1); _histMeanMultiF1_1420 = bookHisto1D(29, 1, 1); _histMeanMultiChi_c1_3510 = bookHisto1D(30, 1, 1); _histMeanMultiF2_1270 = bookHisto1D(31, 1, 3); _histMeanMultiF2Prime1525 = bookHisto1D(32, 1, 1); _histMeanMultiK2Star1430_0 = bookHisto1D(34, 1, 2); _histMeanMultiBStarStar = bookHisto1D(35, 1, 1); _histMeanMultiDs1Plus = bookHisto1D(36, 1, 1); _histMeanMultiDs2Plus = bookHisto1D(37, 1, 1); _histMeanMultiP = bookHisto1D(38, 1, 3); _histMeanMultiLambda = bookHisto1D(39, 1, 3); _histMeanMultiSigma0 = bookHisto1D(40, 1, 2); _histMeanMultiSigmaMinus = bookHisto1D(41, 1, 1); _histMeanMultiSigmaPlus = bookHisto1D(42, 1, 1); _histMeanMultiSigmaPlusMinus = bookHisto1D(43, 1, 1); _histMeanMultiXiMinus = bookHisto1D(44, 1, 3); _histMeanMultiDelta1232PlusPlus = bookHisto1D(45, 1, 2); _histMeanMultiSigma1385Minus = bookHisto1D(46, 1, 3); _histMeanMultiSigma1385Plus = bookHisto1D(47, 1, 3); _histMeanMultiSigma1385PlusMinus = 
bookHisto1D(48, 1, 3); _histMeanMultiXi1530_0 = bookHisto1D(49, 1, 2); _histMeanMultiOmegaMinus = bookHisto1D(50, 1, 3); _histMeanMultiLambda_c_Plus = bookHisto1D(51, 1, 3); _histMeanMultiLambda_b_0 = bookHisto1D(52, 1, 1); _histMeanMultiLambda1520 = bookHisto1D(54, 1, 2); } if (sqrtS()/GeV >= 130 && sqrtS()/GeV <= 200) { _histMeanMultiKPlus = bookHisto1D( 3, 1, 4); _histMeanMultiK0 = bookHisto1D( 4, 1, 4); _histMeanMultiP = bookHisto1D(38, 1, 4); _histMeanMultiLambda = bookHisto1D(39, 1, 4); } } // Finalize void finalize() { if (sqrtS()/GeV >= 9.5 && sqrtS()/GeV <= 10.5) { scale(_histMeanMultiPi0 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiKPlus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiK0 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiEta , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiEtaPrime , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiDPlus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiD0 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiDPlus_s , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiF0_980 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiRho770_0 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiOmega782 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiKStar892Plus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiKStar892_0 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiPhi1020 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiDStar2010Plus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiDStar2007_0 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiDStar_s2112Plus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiJPsi1S , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiF2_1270 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiP , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiLambda , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiSigma0 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiXiMinus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiDelta1232PlusPlus , 
1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiSigma1385Minus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiSigma1385Plus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiSigma1385PlusMinus, 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiXi1530_0 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiOmegaMinus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiLambda_c_Plus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiSigma_c_PlusPlus_0, 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiLambda1520 , 1.0/_weightedTotalNumPiPlus); } if (sqrtS()/GeV >= 29 && sqrtS()/GeV <= 35) { scale(_histMeanMultiPi0 , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiKPlus , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiK0 , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiEta , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiEtaPrime , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiDPlus , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiD0 , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiDPlus_s , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiF0_980 , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiRho770_0 , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiKStar892Plus , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiKStar892_0 , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiPhi1020 , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiDStar2010Plus , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiDStar2007_0 , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiF2_1270 , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiK2Star1430Plus , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiK2Star1430_0 , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiP , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiLambda , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiXiMinus , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiSigma1385Minus , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiSigma1385Plus , 5.0/_weightedTotalNumPiPlus); 
scale(_histMeanMultiSigma1385PlusMinus, 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiOmegaMinus , 5.0/_weightedTotalNumPiPlus); scale(_histMeanMultiLambda_c_Plus , 5.0/_weightedTotalNumPiPlus); } if (sqrtS()/GeV >= 89.5 && sqrtS()/GeV <= 91.8) { scale(_histMeanMultiPi0 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiKPlus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiK0 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiEta , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiEtaPrime , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiDPlus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiD0 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiDPlus_s , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiBPlus_B0_d , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiBPlus_u , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiB0_s , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiF0_980 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiA0_980Plus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiRho770_0 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiRho770Plus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiOmega782 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiKStar892Plus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiKStar892_0 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiPhi1020 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiDStar2010Plus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiDStar_s2112Plus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiBStar , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiJPsi1S , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiPsi2S , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiUpsilon1S , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiF1_1285 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiF1_1420 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiChi_c1_3510 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiF2_1270 , 1.0/_weightedTotalNumPiPlus); 
scale(_histMeanMultiF2Prime1525 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiK2Star1430_0 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiBStarStar , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiDs1Plus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiDs2Plus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiP , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiLambda , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiSigma0 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiSigmaMinus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiSigmaPlus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiSigmaPlusMinus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiXiMinus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiDelta1232PlusPlus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiSigma1385Minus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiSigma1385Plus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiSigma1385PlusMinus, 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiXi1530_0 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiOmegaMinus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiLambda_c_Plus , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiLambda_b_0 , 1.0/_weightedTotalNumPiPlus); scale(_histMeanMultiLambda1520 , 1.0/_weightedTotalNumPiPlus); } if (sqrtS()/GeV >= 130 && sqrtS()/GeV <= 200) { scale(_histMeanMultiKPlus , 70.0/_weightedTotalNumPiPlus); scale(_histMeanMultiK0 , 70.0/_weightedTotalNumPiPlus); scale(_histMeanMultiP , 70.0/_weightedTotalNumPiPlus); scale(_histMeanMultiLambda , 70.0/_weightedTotalNumPiPlus); } } //@} private: double _weightedTotalNumPiPlus; Histo1DPtr _histMeanMultiPi0; Histo1DPtr _histMeanMultiKPlus; Histo1DPtr _histMeanMultiK0; Histo1DPtr _histMeanMultiEta; Histo1DPtr _histMeanMultiEtaPrime; Histo1DPtr _histMeanMultiDPlus; Histo1DPtr _histMeanMultiD0; Histo1DPtr _histMeanMultiDPlus_s; Histo1DPtr _histMeanMultiBPlus_B0_d; Histo1DPtr _histMeanMultiBPlus_u; Histo1DPtr 
_histMeanMultiB0_s; Histo1DPtr _histMeanMultiF0_980; Histo1DPtr _histMeanMultiA0_980Plus; Histo1DPtr _histMeanMultiRho770_0; Histo1DPtr _histMeanMultiRho770Plus; Histo1DPtr _histMeanMultiOmega782; Histo1DPtr _histMeanMultiKStar892Plus; Histo1DPtr _histMeanMultiKStar892_0; Histo1DPtr _histMeanMultiPhi1020; Histo1DPtr _histMeanMultiDStar2010Plus; Histo1DPtr _histMeanMultiDStar2007_0; Histo1DPtr _histMeanMultiDStar_s2112Plus; Histo1DPtr _histMeanMultiBStar; Histo1DPtr _histMeanMultiJPsi1S; Histo1DPtr _histMeanMultiPsi2S; Histo1DPtr _histMeanMultiUpsilon1S; Histo1DPtr _histMeanMultiF1_1285; Histo1DPtr _histMeanMultiF1_1420; Histo1DPtr _histMeanMultiChi_c1_3510; Histo1DPtr _histMeanMultiF2_1270; Histo1DPtr _histMeanMultiF2Prime1525; Histo1DPtr _histMeanMultiK2Star1430Plus; Histo1DPtr _histMeanMultiK2Star1430_0; Histo1DPtr _histMeanMultiBStarStar; Histo1DPtr _histMeanMultiDs1Plus; Histo1DPtr _histMeanMultiDs2Plus; Histo1DPtr _histMeanMultiP; Histo1DPtr _histMeanMultiLambda; Histo1DPtr _histMeanMultiSigma0; Histo1DPtr _histMeanMultiSigmaMinus; Histo1DPtr _histMeanMultiSigmaPlus; Histo1DPtr _histMeanMultiSigmaPlusMinus; Histo1DPtr _histMeanMultiXiMinus; Histo1DPtr _histMeanMultiDelta1232PlusPlus; Histo1DPtr _histMeanMultiSigma1385Minus; Histo1DPtr _histMeanMultiSigma1385Plus; Histo1DPtr _histMeanMultiSigma1385PlusMinus; Histo1DPtr _histMeanMultiXi1530_0; Histo1DPtr _histMeanMultiOmegaMinus; Histo1DPtr _histMeanMultiLambda_c_Plus; Histo1DPtr _histMeanMultiLambda_b_0; Histo1DPtr _histMeanMultiSigma_c_PlusPlus_0; Histo1DPtr _histMeanMultiLambda1520; //@} }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(PDG_HADRON_MULTIPLICITIES_RATIOS); } diff --git a/analyses/pluginRHIC/BRAHMS_2004_I647076.cc b/analyses/pluginRHIC/BRAHMS_2004_I647076.cc --- a/analyses/pluginRHIC/BRAHMS_2004_I647076.cc +++ b/analyses/pluginRHIC/BRAHMS_2004_I647076.cc @@ -1,217 +1,217 @@ // -*- C++ -*- #include "Rivet/Analysis.hh" #include "Rivet/Projections/SingleValueProjection.hh" #include 
"Rivet/Projections/ImpactParameterProjection.hh" #include "Rivet/Projections/FinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Projections/ChargedFinalState.hh" namespace Rivet { /// @brief BRAHMS Centrality projection. class BRAHMSCentrality : public SingleValueProjection { public: // Constructor BRAHMSCentrality() : SingleValueProjection() { // Using here the BRAHMS reaction centrality from eg. 1602.01183, which // might not be correct. declare(ChargedFinalState(Cuts::pT > 0.1*GeV && Cuts::abseta < 2.2), "ChargedFinalState"); } // Destructor virtual ~BRAHMSCentrality() {} // Clone on the heap. DEFAULT_RIVET_PROJ_CLONE(BRAHMSCentrality); protected: // Do the projection. Count the number of charged particles in // the specified range. virtual void project(const Event& e) { clear(); set(apply (e, "ChargedFinalState").particles().size()); } // Compare to another projection. virtual int compare(const Projection& p) const { // This projection is only used for the analysis below. return UNDEFINED; } }; /// @brief Brahms centrality calibration analysis based on the // BrahmsCentrality projection. No data is given for this // analysis, so one MUST do a calibration run. class BRAHMS_2004_CENTRALITY : public Analysis { public: // Constructor BRAHMS_2004_CENTRALITY() : Analysis("BRAHMS_2004_CENTRALITY") {} // Initialize the analysis void init() { declare(BRAHMSCentrality(),"Centrality"); declare(ImpactParameterProjection(), "IMP"); // The central multiplicity. mult = bookHisto1D("mult",450,0,4500); // Safeguard against filling preloaded histograms. done = (mult->numEntries() > 0); // The impact parameter. imp = bookHisto1D("mult_IMP",100,0,20); } // Analyse a single event void analyze(const Event& event) { if (done) return; // Fill impact parameter. imp->fill(apply(event,"IMP")(), event.weight()); // Fill multiplicity. 
mult->fill(apply(event,"Centrality")(), event.weight()); } // Finalize the analysis void finalize() { // Normalize the distributions, safeguarding against // yoda normalization error. if(mult->numEntries() > 0) mult->normalize(); if(imp->numEntries() > 0) imp->normalize(); } private: // Histograms. Histo1DPtr mult; Histo1DPtr imp; // Flag to test if we have preloaded histograms. bool done; }; // The hook for the plugin system DECLARE_RIVET_PLUGIN(BRAHMS_2004_CENTRALITY); /// @brief Brahms pT spectra for id particles (pi+, pi-, K+, K-) // in small bins of rapidity, 5% central collisions. // System: AuAu @ 200GeV/nn. class BRAHMS_2004_I647076 : public Analysis { public: /// Constructor DEFAULT_RIVET_ANALYSIS_CTOR(BRAHMS_2004_I647076); /// @name Analysis methods //@{ /// Book histograms and initialise projections before the run void init() { // Initialise and register projections // Centrality Projection. declareCentrality(BRAHMSCentrality(), "BRAHMS_2004_CENTRALITY","mult","BCEN"); // TODO: Feed down correction is unclear. declare(FinalState(Cuts::rap < 4 && Cuts::rap > -0.1 && Cuts::pT > 100*MeV), "FS"); // The measured rapidity intervals for pions. rapIntervalsPi = {{-0.1,0.},{0.,0.1},{0.4,0.6},{0.6,0.8},{0.8,1.0}, {1.0,1.2},{1.2,1.4},{2.1,2.3},{2.4,2.6},{3.0,3.1},{3.1,3.2},{3.2,3.3}, {3.3,3.4},{3.4,3.66}}; // The measured rapidity intervals for kaons. rapIntervalsK = {{-0.1,0.},{0.,0.1},{0.4,0.6},{0.6,0.8},{0.8,1.0}, {1.0,1.2},{2.0,2.2},{2.3,2.5},{2.9,3.0},{3.0,3.1},{3.1,3.2},{3.2,3.4}}; // Book histograms for (int i = 1, N = rapIntervalsPi.size(); i <= N; ++i) { piPlus.push_back(bookHisto1D(1, 1, i)); piMinus.push_back(bookHisto1D(1, 1, 14 + i)); } for (int i = 1, N = rapIntervalsK.size(); i <= N; ++i) { kPlus.push_back(bookHisto1D(2, 1, i)); kMinus.push_back(bookHisto1D(2, 1, 12 + i)); } // Counter for accepted sum of weights (centrality cut). 
      centSow = bookCounter("centSow");
    }

    /// Perform the per-event analysis
    void analyze(const Event& event) {
      const double w = event.weight();
      // Reject all non-central events. The paper does not speak of
      // any other event trigger, which in any case should matter
      // little for central events.
      if (apply<CentralityProjection>(event, "BCEN")() > 5.0) return;
      // Keep track of sum of weights.
      centSow->fill(w);
      const FinalState& fs = apply<FinalState>(event, "FS");
      // Loop over particles.
      for (const auto& p : fs.particles()) {
        const double y = p.rapidity();
        const double pT = p.pT();
        const int id = p.pid();
        // First pions.
        if (abs(id) == 211) {
          // Protect against feed-down from decaying K0S, Lambda and anti-Lambda
          if (p.hasAncestor(310) || p.hasAncestor(-310) ||
              p.hasAncestor(3122) || p.hasAncestor(-3122)) continue;
          for (int i = 0, N = rapIntervalsPi.size(); i < N; ++i) {
            if (y > rapIntervalsPi[i].first && y <= rapIntervalsPi[i].second) {
              const double dy = rapIntervalsPi[i].second - rapIntervalsPi[i].first;
              const double nWeight = w / (2.*M_PI*pT*dy);
              if (id == 211) piPlus[i]->fill(pT, nWeight);
              else piMinus[i]->fill(pT, nWeight);
              break;
            }
          }
        }
        // Then kaons.
        else if (abs(id) == 321) {
          for (int i = 0, N = rapIntervalsK.size(); i < N; ++i) {
            if (y > rapIntervalsK[i].first && y <= rapIntervalsK[i].second) {
              const double dy = rapIntervalsK[i].second - rapIntervalsK[i].first;
              const double nWeight = w / (2.*M_PI*pT*dy);
              if (id == 321) kPlus[i]->fill(pT, nWeight);
              else kMinus[i]->fill(pT, nWeight);
              break;
            }
          }
        }
      }
    }

    /// Normalise histograms etc., after the run
    void finalize() {
      // Normalize all histograms to per-event yields.
      for (int i = 0, N = rapIntervalsPi.size(); i < N; ++i) {
        piPlus[i]->scaleW(1./centSow->sumW());
        piMinus[i]->scaleW(1./centSow->sumW());
      }
      for (int i = 0, N = rapIntervalsK.size(); i < N; ++i) {
        kPlus[i]->scaleW(1./centSow->sumW());
        kMinus[i]->scaleW(1./centSow->sumW());
      }
    }

    //@}

    // The rapidity intervals.
    vector<pair<double,double> > rapIntervalsPi;
    vector<pair<double,double> > rapIntervalsK;

    /// @name Histograms
    //@{
    vector<Histo1DPtr> piPlus;
    vector<Histo1DPtr> piMinus;
    vector<Histo1DPtr> kPlus;
    vector<Histo1DPtr> kMinus;
    CounterPtr centSow;
    //@}

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(BRAHMS_2004_I647076);
}
diff --git a/analyses/pluginRHIC/STAR_2006_S6860818.cc b/analyses/pluginRHIC/STAR_2006_S6860818.cc
--- a/analyses/pluginRHIC/STAR_2006_S6860818.cc
+++ b/analyses/pluginRHIC/STAR_2006_S6860818.cc
@@ -1,193 +1,193 @@
// -*- C++ -*-
#include "Rivet/Analysis.hh"
#include "Rivet/Projections/ChargedFinalState.hh"
#include "Rivet/Projections/IdentifiedFinalState.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

namespace Rivet {

  /// @brief STAR strange particle spectra in pp at 200 GeV
  class STAR_2006_S6860818 : public Analysis {
  public:

    /// Constructor
    STAR_2006_S6860818()
      : Analysis("STAR_2006_S6860818"),
        _sumWeightSelected(0.0)
    {
      for (size_t i = 0; i < 4; i++) {
        _nBaryon[i] = 0;
        _nAntiBaryon[i] = 0;
        _nWeightedBaryon[i] = 0.;
        _nWeightedAntiBaryon[i] = 0.;
      }
    }

    /// Book projections and histograms
    void init() {
      ChargedFinalState bbc1(Cuts::etaIn(-5.0, -3.5)); // beam-beam-counter trigger
      ChargedFinalState bbc2(Cuts::etaIn( 3.5,  5.0)); // beam-beam-counter trigger
      declare(bbc1, "BBC1");
      declare(bbc2, "BBC2");

-      UnstableFinalState ufs(Cuts::abseta < 2.5);
+      UnstableParticles ufs(Cuts::abseta < 2.5);
      declare(ufs, "UFS");

      _h_pT_k0s       = bookHisto1D(1, 1, 1);
      _h_pT_kminus    = bookHisto1D(1, 2, 1);
      _h_pT_kplus     = bookHisto1D(1, 3, 1);
      _h_pT_lambda    = bookHisto1D(1, 4, 1);
      _h_pT_lambdabar = bookHisto1D(1, 5, 1);
      _h_pT_ximinus   = bookHisto1D(1, 6, 1);
      _h_pT_xiplus    = bookHisto1D(1, 7, 1);
      //_h_pT_omega   = bookHisto1D(1, 8, 1);
      _h_antibaryon_baryon_ratio = bookScatter2D(2, 1, 1);
      _h_lambar_lam = bookScatter2D(2, 2, 1);
      _h_xiplus_ximinus = bookScatter2D(2, 3, 1);
      _h_pT_vs_mass = bookProfile1D(3, 1, 1);
    }

    /// Do the analysis
    void analyze(const Event& event) {
      const ChargedFinalState& bbc1 = apply<ChargedFinalState>(event, "BBC1");
      const ChargedFinalState& bbc2 = apply<ChargedFinalState>(event, "BBC2");
      if (bbc1.size() < 1 || bbc2.size() < 1) {
        MSG_DEBUG("Failed beam-beam-counter trigger");
        vetoEvent;
      }

      const double weight = event.weight();

-      const UnstableFinalState& ufs = apply<UnstableFinalState>(event, "UFS");
+      const UnstableParticles& ufs = apply<UnstableParticles>(event, "UFS");
      foreach (const Particle& p, ufs.particles()) {
        if (p.absrap() < 0.5) {
          const PdgId pid = p.pid();
          const double pT = p.pT() / GeV;
          switch (abs(pid)) {
          case PID::PIPLUS:
            if (pid < 0) _h_pT_vs_mass->fill(0.1396, pT, weight);
            break;
          case PID::PROTON:
            if (pid < 0) _h_pT_vs_mass->fill(0.9383, pT, weight);
            if (pT > 0.4) {
              pid > 0 ? _nBaryon[0]++ : _nAntiBaryon[0]++;
              pid > 0 ? _nWeightedBaryon[0] += weight : _nWeightedAntiBaryon[0] += weight;
            }
            break;
          case PID::K0S:
            if (pT > 0.2) {
              _h_pT_k0s->fill(pT, weight/pT);
            }
            _h_pT_vs_mass->fill(0.5056, pT, weight);
            break;
          case PID::K0L:
            _h_pT_vs_mass->fill(0.5056, pT, weight);
            break;
          case 113: // rho0(770)
            _h_pT_vs_mass->fill(0.7755, pT, weight);
            break;
          case 313: // K0*(892)
            _h_pT_vs_mass->fill(0.8960, pT, weight);
            break;
          case 333: // phi(1020)
            _h_pT_vs_mass->fill(1.0190, pT, weight);
            break;
          case 3214: // Sigma(1385)
            _h_pT_vs_mass->fill(1.3840, pT, weight);
            break;
          case 3124: // Lambda(1520)
            _h_pT_vs_mass->fill(1.5200, pT, weight);
            break;
          case PID::KPLUS:
            if (pid < 0) _h_pT_vs_mass->fill(0.4856, pT, weight);
            if (pT > 0.2) {
              pid > 0 ? _h_pT_kplus->fill(pT, weight/pT) : _h_pT_kminus->fill(pT, weight/pT);
            }
            break;
          case PID::LAMBDA:
            pid > 0 ? _h_pT_vs_mass->fill(1.1050, pT, weight) : _h_pT_vs_mass->fill(1.1250, pT, weight);
            if (pT > 0.3) {
              pid > 0 ? _h_pT_lambda->fill(pT, weight/pT) : _h_pT_lambdabar->fill(pT, weight/pT);
              pid > 0 ? _nBaryon[1]++ : _nAntiBaryon[1]++;
              pid > 0 ? _nWeightedBaryon[1] += weight : _nWeightedAntiBaryon[1] += weight;
            }
            break;
          case PID::XIMINUS:
            pid > 0 ? _h_pT_vs_mass->fill(1.3120, pT, weight) : _h_pT_vs_mass->fill(1.3320, pT, weight);
            if (pT > 0.5) {
              pid > 0 ? _h_pT_ximinus->fill(pT, weight/pT) : _h_pT_xiplus->fill(pT, weight/pT);
              pid > 0 ? _nBaryon[2]++ : _nAntiBaryon[2]++;
              pid > 0 ? _nWeightedBaryon[2] += weight : _nWeightedAntiBaryon[2] += weight;
            }
            break;
          case PID::OMEGAMINUS:
            _h_pT_vs_mass->fill(1.6720, pT, weight);
            if (pT > 0.5) {
              //_h_pT_omega->fill(pT, weight/pT);
              pid > 0 ? _nBaryon[3]++ : _nAntiBaryon[3]++;
              pid > 0 ? _nWeightedBaryon[3] += weight : _nWeightedAntiBaryon[3] += weight;
            }
            break;
          }
        }
      }

      _sumWeightSelected += event.weight();
    }

    /// Finalize
    void finalize() {
      std::vector<Point2D> points;
      for (size_t i = 0; i < 4; i++) {
        if (_nWeightedBaryon[i] == 0 || _nWeightedAntiBaryon[i] == 0) {
          points.push_back(Point2D(i, 0, 0.5, 0));
        } else {
          double y = _nWeightedAntiBaryon[i]/_nWeightedBaryon[i];
          double dy = sqrt( 1./_nAntiBaryon[i] + 1./_nBaryon[i] );
          points.push_back(Point2D(i, y, 0.5, y*dy));
        }
      }
      _h_antibaryon_baryon_ratio->addPoints( points );

      divide(_h_pT_lambdabar, _h_pT_lambda, _h_lambar_lam);
      divide(_h_pT_xiplus, _h_pT_ximinus, _h_xiplus_ximinus);

      scale(_h_pT_k0s,       1./(2*M_PI*_sumWeightSelected));
      scale(_h_pT_kminus,    1./(2*M_PI*_sumWeightSelected));
      scale(_h_pT_kplus,     1./(2*M_PI*_sumWeightSelected));
      scale(_h_pT_lambda,    1./(2*M_PI*_sumWeightSelected));
      scale(_h_pT_lambdabar, 1./(2*M_PI*_sumWeightSelected));
      scale(_h_pT_ximinus,   1./(2*M_PI*_sumWeightSelected));
      scale(_h_pT_xiplus,    1./(2*M_PI*_sumWeightSelected));
      //scale(_h_pT_omega,   1./(2*M_PI*_sumWeightSelected));
      MSG_DEBUG("sumOfWeights() = " << sumOfWeights());
      MSG_DEBUG("_sumWeightSelected = " << _sumWeightSelected);
    }

  private:

    double _sumWeightSelected;
    int _nBaryon[4];
    int _nAntiBaryon[4];
    double _nWeightedBaryon[4];
    double _nWeightedAntiBaryon[4];

    Histo1DPtr _h_pT_k0s, _h_pT_kminus, _h_pT_kplus, _h_pT_lambda, _h_pT_lambdabar, _h_pT_ximinus, _h_pT_xiplus;
    //Histo1DPtr _h_pT_omega;
    Scatter2DPtr _h_antibaryon_baryon_ratio;
    Profile1DPtr _h_pT_vs_mass;
    Scatter2DPtr _h_lambar_lam;
    Scatter2DPtr _h_xiplus_ximinus;

  };

  // The hook for the plugin system
  DECLARE_RIVET_PLUGIN(STAR_2006_S6860818);
}
diff --git a/include/Rivet/Analysis.hh b/include/Rivet/Analysis.hh
--- a/include/Rivet/Analysis.hh
+++ b/include/Rivet/Analysis.hh
@@ -1,1239 +1,1244 @@
// -*- C++ -*-
#ifndef RIVET_Analysis_HH
#define RIVET_Analysis_HH

#include "Rivet/Config/RivetCommon.hh"
#include "Rivet/AnalysisInfo.hh"
#include "Rivet/Event.hh"
#include "Rivet/Projection.hh"
#include "Rivet/ProjectionApplier.hh"
#include "Rivet/ProjectionHandler.hh"
#include "Rivet/AnalysisLoader.hh"
#include "Rivet/Tools/Cuts.hh"
#include "Rivet/Tools/Logging.hh"
#include "Rivet/Tools/ParticleUtils.hh"
#include "Rivet/Tools/BinnedHistogram.hh"
#include "Rivet/Tools/RivetMT2.hh"
#include "Rivet/Tools/RivetYODA.hh"
#include "Rivet/Tools/Percentile.hh"
#include "Rivet/Projections/CentralityProjection.hh"

/// @def vetoEvent
/// Preprocessor define for vetoing events, including the log message and return.
#define vetoEvent \
  do { MSG_DEBUG("Vetoing event on line " << __LINE__ << " of " << __FILE__); return; } while(0)

namespace Rivet {

  // Forward declaration
  class AnalysisHandler;

  /// @brief This is the base class of all analysis classes in Rivet.
  ///
  /// There are three virtual functions which should be implemented in base classes:
  ///
  /// void init() is called by Rivet before a run is started. Here the
  /// analysis class should book necessary histograms. The needed
  /// projections should probably rather be constructed in the
  /// constructor.
  ///
  /// void analyze(const Event&) is called once for each event. Here the
  /// analysis class should apply the necessary Projections and fill the
  /// histograms.
  ///
  /// void finalize() is called after a run is finished. Here the analysis
  /// class should do whatever manipulations are necessary on the
  /// histograms. Writing the histograms to a file is, however, done by
  /// the Rivet class.
  class Analysis : public ProjectionApplier {

    /// The AnalysisHandler is a friend.
    friend class AnalysisHandler;

  public:

    /// @name Standard constructors and destructors.
    //@{

    // /// The default constructor.
    // Analysis();

    /// Constructor
    Analysis(const std::string& name);

    /// The destructor.
    virtual ~Analysis() {}

    //@}

  public:

    /// @name Main analysis methods
    //@{

    /// Initialize this analysis object. A concrete class should here
    /// book all necessary histograms. An overridden function must make
    /// sure it first calls the base class function.
    virtual void init() { }

    /// Analyze one event. A concrete class should here apply the
    /// necessary projections on the \a event and fill the relevant
    /// histograms. An overridden function must make sure it first calls
    /// the base class function.
    virtual void analyze(const Event& event) = 0;

    /// Finalize this analysis object. A concrete class should here make
    /// all necessary operations on the histograms. Writing the
    /// histograms to a file is, however, done by the Rivet class. An
    /// overridden function must make sure it first calls the base class
    /// function.
    virtual void finalize() { }

    //@}

  public:

    /// @name Metadata
    /// Metadata is used for querying from the command line and also for
    /// building web pages and the analysis pages in the Rivet manual.
    //@{

    /// Get the actual AnalysisInfo object in which all this metadata is stored.
    const AnalysisInfo& info() const {
      assert(_info && "No AnalysisInfo object :O");
      return *_info;
    }

    /// @brief Get the name of the analysis.
    ///
    /// By default this is computed by combining the results of the
    /// experiment, year and Spires ID metadata methods and you should
    /// only override it if there's a good reason why those won't
    /// work. If options has been set for this instance, a
    /// corresponding string is appended at the end.
    virtual std::string name() const {
      return ( (info().name().empty()) ? _defaultname : info().name() ) + _optstring;
    }

    // get name of reference data file, which could be different from plugin name
    virtual std::string getRefDataName() const {
      return (info().getRefDataName().empty()) ? _defaultname : info().getRefDataName();
    }

    // set name of reference data file, which could be different from plugin name
    virtual void setRefDataName(const std::string& ref_data="") {
      info().setRefDataName(!ref_data.empty() ? ref_data : name());
    }

    /// Get the Inspire ID code for this analysis.
    virtual std::string inspireId() const {
      return info().inspireId();
    }

    /// Get the SPIRES ID code for this analysis (~deprecated).
    virtual std::string spiresId() const {
      return info().spiresId();
    }

    /// @brief Names & emails of paper/analysis authors.
    ///
    /// Names and email of authors in 'NAME <EMAIL>' format. The first
    /// name in the list should be the primary contact person.
    virtual std::vector<std::string> authors() const {
      return info().authors();
    }

    /// @brief Get a short description of the analysis.
    ///
    /// Short (one sentence) description used as an index entry.
    /// Use @a description() to provide full descriptive paragraphs
    /// of analysis details.
    virtual std::string summary() const {
      return info().summary();
    }

    /// @brief Get a full description of the analysis.
    ///
    /// Full textual description of this analysis, what it is useful for,
    /// what experimental techniques are applied, etc. Should be treated
    /// as a chunk of restructuredText (http://docutils.sourceforge.net/rst.html),
    /// with equations to be rendered as LaTeX with amsmath operators.
    virtual std::string description() const {
      return info().description();
    }

    /// @brief Information about the events needed as input for this analysis.
    ///
    /// Event types, energies, kinematic cuts, particles to be considered
    /// stable, etc. etc. Should be treated as a restructuredText bullet list
    /// (http://docutils.sourceforge.net/rst.html)
    virtual std::string runInfo() const {
      return info().runInfo();
    }

    /// Experiment which performed and published this analysis.
    virtual std::string experiment() const {
      return info().experiment();
    }

    /// Collider on which the experiment ran.
    virtual std::string collider() const {
      return info().collider();
    }

    /// When the original experimental analysis was published.
    virtual std::string year() const {
      return info().year();
    }

    /// The luminosity in inverse femtobarn
    virtual std::string luminosityfb() const {
      return info().luminosityfb();
    }

    /// Journal, and preprint references.
    virtual std::vector<std::string> references() const {
      return info().references();
    }

    /// BibTeX citation key for this article.
    virtual std::string bibKey() const {
      return info().bibKey();
    }

    /// BibTeX citation entry for this article.
    virtual std::string bibTeX() const {
      return info().bibTeX();
    }

    /// Whether this analysis is trusted (in any way!)
    virtual std::string status() const {
      return (info().status().empty()) ? "UNVALIDATED" : info().status();
    }

    /// Any work to be done on this analysis.
    virtual std::vector<std::string> todos() const {
      return info().todos();
    }

    /// Return the allowed pairs of incoming beams required by this analysis.
    virtual const std::vector<PdgIdPair>& requiredBeams() const {
      return info().beams();
    }

    /// Declare the allowed pairs of incoming beams required by this analysis.
    virtual Analysis& setRequiredBeams(const std::vector<PdgIdPair>& requiredBeams) {
      info().setBeams(requiredBeams);
      return *this;
    }

    /// Sets of valid beam energy pairs, in GeV
    virtual const std::vector<std::pair<double, double>>& requiredEnergies() const {
      return info().energies();
    }

    /// Get vector of analysis keywords
    virtual const std::vector<std::string>& keywords() const {
      return info().keywords();
    }

    /// Declare the list of valid beam energy pairs, in GeV
    virtual Analysis& setRequiredEnergies(const std::vector<std::pair<double, double>>& requiredEnergies) {
      info().setEnergies(requiredEnergies);
      return *this;
    }

    /// Return true if this analysis needs to know the process cross-section.
    /// @todo Remove this and require HepMC >= 2.06
    bool needsCrossSection() const {
      return info().needsCrossSection();
    }

    /// Declare whether this analysis needs to know the process cross-section from the generator.
    /// @todo Remove this and require HepMC >= 2.06
    Analysis& setNeedsCrossSection(bool needed=true) {
      info().setNeedsCrossSection(needed);
      return *this;
    }

    //@}

    /// @name Internal metadata modifying methods
    //@{

    /// Get the actual AnalysisInfo object in which all this metadata is stored (non-const).
    AnalysisInfo& info() {
      assert(_info && "No AnalysisInfo object :O");
      return *_info;
    }

    //@}

    /// @name Run conditions
    //@{

    /// Incoming beams for this run
    const ParticlePair& beams() const;

    /// Incoming beam IDs for this run
    const PdgIdPair beamIds() const;

    /// Centre of mass energy for this run
    double sqrtS() const;

    /// Check if we are running rivet-merge.
    bool merging() const {
      return sqrtS() <= 0.0;
    }

    //@}

    /// @name Analysis / beam compatibility testing
    /// @todo Replace with beamsCompatible() with no args (calling beams() function internally)
    /// @todo Add beamsMatch() methods with same (shared-code?)
tolerance as in beamsCompatible() //@{ /// Check if analysis is compatible with the provided beam particle IDs and energies bool isCompatible(const ParticlePair& beams) const; /// Check if analysis is compatible with the provided beam particle IDs and energies bool isCompatible(PdgId beam1, PdgId beam2, double e1, double e2) const; /// Check if analysis is compatible with the provided beam particle IDs and energies bool isCompatible(const PdgIdPair& beams, const std::pair& energies) const; //@} /// Set the cross section from the generator Analysis& setCrossSection(double xs); //, double xserr=0.0); /// Access the controlling AnalysisHandler object. AnalysisHandler& handler() const { return *_analysishandler; } protected: /// Get a Log object based on the name() property of the calling analysis object. Log& getLog() const; /// Get the process cross-section in pb. Throws if this hasn't been set. double crossSection() const; /// Get the process cross-section per generated event in pb. Throws if this /// hasn't been set. double crossSectionPerEvent() const; /// @brief Get the number of events seen (via the analysis handler). /// /// @note Use in the finalize phase only. size_t numEvents() const; /// @brief Get the sum of event weights seen (via the analysis handler). /// /// @note Use in the finalize phase only. double sumW() const; /// Alias double sumOfWeights() const { return sumW(); } /// @brief Get the sum of squared event weights seen (via the analysis handler). /// /// @note Use in the finalize phase only. double sumW2() const; protected: /// @name Histogram paths //@{ /// Get the canonical histogram "directory" path for this analysis. const std::string histoDir() const; /// Get the canonical histogram path for the named histogram in this analysis. const std::string histoPath(const std::string& hname) const; /// Get the canonical histogram path for the numbered histogram in this analysis. 
const std::string histoPath(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) const; /// Get the internal histogram name for given d, x and y (cf. HepData) const std::string mkAxisCode(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) const; /// Alias /// @deprecated Prefer the "mk" form, consistent with other "making function" names const std::string makeAxisCode(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) const { return mkAxisCode(datasetId, xAxisId, yAxisId); } //@} /// @name Histogram reference data //@{ /// Get reference data for a named histo /// @todo SFINAE to ensure that the type inherits from YODA::AnalysisObject? template const T& refData(const string& hname) const { _cacheRefData(); MSG_TRACE("Using histo bin edges for " << name() << ":" << hname); if (!_refdata[hname]) { MSG_ERROR("Can't find reference histogram " << hname); throw Exception("Reference data " + hname + " not found."); } return dynamic_cast(*_refdata[hname]); } /// Get reference data for a numbered histo /// @todo SFINAE to ensure that the type inherits from YODA::AnalysisObject? template const T& refData(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) const { const string hname = makeAxisCode(datasetId, xAxisId, yAxisId); return refData(hname); } //@} /// @name Counter booking //@{ /// Book a counter. CounterPtr bookCounter(const std::string& name, const std::string& title=""); // const std::string& valtitle="" /// Book a counter, using a path generated from the dataset and axis ID codes /// /// The paper, dataset and x/y-axis IDs will be used to build the histo name in the HepData standard way. CounterPtr bookCounter(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId, const std::string& title=""); // const std::string& valtitle="" //@} /// @name 1D histogram booking //@{ /// Book a 1D histogram with @a nbins uniformly distributed across the range @a lower - @a upper . 
    Histo1DPtr bookHisto1D(const std::string& name,
                           size_t nbins, double lower, double upper,
                           const std::string& title="",
                           const std::string& xtitle="",
                           const std::string& ytitle="");

    /// Book a 1D histogram with non-uniform bins defined by the vector of bin edges @a binedges .
    Histo1DPtr bookHisto1D(const std::string& name,
                           const std::vector<double>& binedges,
                           const std::string& title="",
                           const std::string& xtitle="",
                           const std::string& ytitle="");

    /// Book a 1D histogram with non-uniform bins defined by the vector of bin edges @a binedges .
    Histo1DPtr bookHisto1D(const std::string& name,
                           const std::initializer_list<double>& binedges,
                           const std::string& title="",
                           const std::string& xtitle="",
                           const std::string& ytitle="");

    /// Book a 1D histogram with binning from a reference scatter.
    Histo1DPtr bookHisto1D(const std::string& name,
                           const Scatter2D& refscatter,
                           const std::string& title="",
                           const std::string& xtitle="",
                           const std::string& ytitle="");

    /// Book a 1D histogram, using the binnings in the reference data histogram.
    Histo1DPtr bookHisto1D(const std::string& name,
                           const std::string& title="",
                           const std::string& xtitle="",
                           const std::string& ytitle="");

    /// Book a 1D histogram, using the binnings in the reference data histogram.
    ///
    /// The paper, dataset and x/y-axis IDs will be used to build the histo name in the HepData standard way.
    Histo1DPtr bookHisto1D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId,
                           const std::string& title="",
                           const std::string& xtitle="",
                           const std::string& ytitle="");

    //@}


    /// @name 2D histogram booking
    //@{

    /// Book a 2D histogram with @a nxbins and @a nybins uniformly
    /// distributed across the ranges @a xlower - @a xupper and @a
    /// ylower - @a yupper respectively along the x- and y-axis.
    Histo2DPtr bookHisto2D(const std::string& name,
                           size_t nxbins, double xlower, double xupper,
                           size_t nybins, double ylower, double yupper,
                           const std::string& title="",
                           const std::string& xtitle="",
                           const std::string& ytitle="",
                           const std::string& ztitle="");

    /// Book a 2D histogram with non-uniform bins defined by the
    /// vectors of bin edges @a xbinedges and @a ybinedges.
    Histo2DPtr bookHisto2D(const std::string& name,
                           const std::vector<double>& xbinedges,
                           const std::vector<double>& ybinedges,
                           const std::string& title="",
                           const std::string& xtitle="",
                           const std::string& ytitle="",
                           const std::string& ztitle="");

    /// Book a 2D histogram with non-uniform bins defined by the
    /// vectors of bin edges @a xbinedges and @a ybinedges.
    Histo2DPtr bookHisto2D(const std::string& name,
                           const std::initializer_list<double>& xbinedges,
                           const std::initializer_list<double>& ybinedges,
                           const std::string& title="",
                           const std::string& xtitle="",
                           const std::string& ytitle="",
                           const std::string& ztitle="");

    /// Book a 2D histogram with binning from a reference scatter.
    Histo2DPtr bookHisto2D(const std::string& name,
                           const Scatter3D& refscatter,
                           const std::string& title="",
                           const std::string& xtitle="",
                           const std::string& ytitle="",
                           const std::string& ztitle="");

    /// Book a 2D histogram, using the binnings in the reference data histogram.
    Histo2DPtr bookHisto2D(const std::string& name,
                           const std::string& title="",
                           const std::string& xtitle="",
                           const std::string& ytitle="",
                           const std::string& ztitle="");

    /// Book a 2D histogram, using the binnings in the reference data histogram.
    ///
    /// The paper, dataset and x/y-axis IDs will be used to build the histo name in the HepData standard way.
    Histo2DPtr bookHisto2D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId,
                           const std::string& title="",
                           const std::string& xtitle="",
                           const std::string& ytitle="",
                           const std::string& ztitle="");

    //@}


    /// @name 1D profile histogram booking
    //@{

    /// Book a 1D profile histogram with @a nbins uniformly distributed across the range @a lower - @a upper .
    Profile1DPtr bookProfile1D(const std::string& name,
                               size_t nbins, double lower, double upper,
                               const std::string& title="",
                               const std::string& xtitle="",
                               const std::string& ytitle="");

    /// Book a 1D profile histogram with non-uniform bins defined by the vector of bin edges @a binedges .
    Profile1DPtr bookProfile1D(const std::string& name,
                               const std::vector<double>& binedges,
                               const std::string& title="",
                               const std::string& xtitle="",
                               const std::string& ytitle="");

    /// Book a 1D profile histogram with non-uniform bins defined by the vector of bin edges @a binedges .
    Profile1DPtr bookProfile1D(const std::string& name,
                               const std::initializer_list<double>& binedges,
                               const std::string& title="",
                               const std::string& xtitle="",
                               const std::string& ytitle="");

    /// Book a 1D profile histogram with binning from a reference scatter.
    Profile1DPtr bookProfile1D(const std::string& name,
                               const Scatter2D& refscatter,
                               const std::string& title="",
                               const std::string& xtitle="",
                               const std::string& ytitle="");

    /// Book a 1D profile histogram, using the binnings in the reference data histogram.
    Profile1DPtr bookProfile1D(const std::string& name,
                               const std::string& title="",
                               const std::string& xtitle="",
                               const std::string& ytitle="");

    /// Book a 1D profile histogram, using the binnings in the reference data histogram.
    ///
    /// The paper, dataset and x/y-axis IDs will be used to build the histo name in the HepData standard way.
    Profile1DPtr bookProfile1D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId,
                               const std::string& title="",
                               const std::string& xtitle="",
                               const std::string& ytitle="");

    //@}


    /// @name 2D profile histogram booking
    //@{

    /// Book a 2D profile histogram with @a nxbins and @a nybins uniformly
    /// distributed across the ranges @a xlower - @a xupper and @a ylower - @a
    /// yupper respectively along the x- and y-axis.
    Profile2DPtr bookProfile2D(const std::string& name,
                               size_t nxbins, double xlower, double xupper,
                               size_t nybins, double ylower, double yupper,
                               const std::string& title="",
                               const std::string& xtitle="",
                               const std::string& ytitle="",
                               const std::string& ztitle="");

    /// Book a 2D profile histogram with non-uniform bins defined by the vectors
    /// of bin edges @a xbinedges and @a ybinedges.
    Profile2DPtr bookProfile2D(const std::string& name,
                               const std::vector<double>& xbinedges,
                               const std::vector<double>& ybinedges,
                               const std::string& title="",
                               const std::string& xtitle="",
                               const std::string& ytitle="",
                               const std::string& ztitle="");

    /// Book a 2D profile histogram with non-uniform bins defined by the vectors
    /// of bin edges @a xbinedges and @a ybinedges.
    Profile2DPtr bookProfile2D(const std::string& name,
                               const std::initializer_list<double>& xbinedges,
                               const std::initializer_list<double>& ybinedges,
                               const std::string& title="",
                               const std::string& xtitle="",
                               const std::string& ytitle="",
                               const std::string& ztitle="");

    /// Book a 2D profile histogram with binning from a reference scatter.
    Profile2DPtr bookProfile2D(const std::string& name,
                               const Scatter3D& refscatter,
                               const std::string& title="",
                               const std::string& xtitle="",
                               const std::string& ytitle="",
                               const std::string& ztitle="");

    /// Book a 2D profile histogram, using the binnings in the reference data histogram.
    Profile2DPtr bookProfile2D(const std::string& name,
                               const std::string& title="",
                               const std::string& xtitle="",
                               const std::string& ytitle="",
                               const std::string& ztitle="");

    /// Book a 2D profile histogram, using the binnings in the reference data histogram.
    ///
    /// The paper, dataset and x/y-axis IDs will be used to build the histo name in the HepData standard way.
    Profile2DPtr bookProfile2D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId,
                               const std::string& title="",
                               const std::string& xtitle="",
                               const std::string& ytitle="",
                               const std::string& ztitle="");

    //@}


    /// @name 2D scatter booking
    //@{

    /// @brief Book a 2-dimensional data point set with the given name.
    ///
    /// @note Unlike histogram booking, scatter booking by default makes no
    /// attempt to use reference data to pre-fill the data object. If you want
    /// this, which is sometimes useful e.g. when the x-position is not really
    /// meaningful and can't be extracted from the data, then set the @a
    /// copy_pts parameter to true. This creates points to match the reference
    /// data's x values and errors, but with the y values and errors zeroed...
    /// assuming that there is a reference histo with the same name: if there
    /// isn't, an exception will be thrown.
    Scatter2DPtr bookScatter2D(const std::string& name,
                               bool copy_pts=false,
                               const std::string& title="",
                               const std::string& xtitle="",
                               const std::string& ytitle="");

    /// @brief Book a 2-dimensional data point set, using the binnings in the reference data histogram.
    ///
    /// The paper, dataset and x/y-axis IDs will be used to build the histo name in the HepData standard way.
    ///
    /// @note Unlike histogram booking, scatter booking by default makes no
    /// attempt to use reference data to pre-fill the data object. If you want
    /// this, which is sometimes useful e.g. when the x-position is not really
    /// meaningful and can't be extracted from the data, then set the @a
    /// copy_pts parameter to true. This creates points to match the reference
    /// data's x values and errors, but with the y values and errors zeroed.
    Scatter2DPtr bookScatter2D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId,
                               bool copy_pts=false,
                               const std::string& title="",
                               const std::string& xtitle="",
                               const std::string& ytitle="");

    /// @brief Book a 2-dimensional data point set with equally spaced x-points in a range.
    ///
    /// The y values and errors will be set to 0.
    Scatter2DPtr bookScatter2D(const std::string& name,
                               size_t npts, double lower, double upper,
                               const std::string& title="",
                               const std::string& xtitle="",
                               const std::string& ytitle="");

    /// @brief Book a 2-dimensional data point set based on provided contiguous "bin edges".
    ///
    /// The y values and errors will be set to 0.
    Scatter2DPtr bookScatter2D(const std::string& hname,
                               const std::vector<double>& binedges,
                               const std::string& title,
                               const std::string& xtitle,
                               const std::string& ytitle);

    /// Book a 2-dimensional data point set with x-points from an existing scatter and a new path.
    Scatter2DPtr bookScatter2D(const Scatter2DPtr scPtr,
                               const std::string& path,
                               const std::string& title="",
                               const std::string& xtitle="",
                               const std::string& ytitle="");

    //@}


  public:

    /// @name Accessing options for this Analysis instance.
    //@{

    /// Return the map of all options given to this analysis.
    const std::map<std::string, std::string>& options() {
      return _options;
    }

    /// Get an option for this analysis instance as a string.
    std::string getOption(std::string optname) {
      if ( _options.find(optname) != _options.end() )
        return _options.find(optname)->second;
      return "";
    }

    /// Get an option for this analysis instance converted to a
    /// specific type (given by the specified @a def value).
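Analysis options are read back with getOption, which converts the stored option string to the requested type via a std::stringstream round-trip, falling back to a caller-supplied default when the option is absent. A minimal self-contained sketch of that mechanism (plain stdlib, outside Rivet; the `g_options` map and `getOptionSketch` name are illustrative, not Rivet API):

```cpp
#include <map>
#include <sstream>
#include <string>

// Illustrative stand-in for the per-instance option store ("name=value" pairs).
static std::map<std::string, std::string> g_options = {
  {"LMODE", "EL"}, {"PTMIN", "20.5"}
};

// Convert a stored option string to T via a stringstream round-trip,
// as the getOption<T> template does; absent options yield the default.
template <typename T>
T getOptionSketch(const std::string& optname, T def) {
  auto it = g_options.find(optname);
  if (it == g_options.end()) return def;
  std::stringstream ss;
  ss << it->second;
  T ret;
  ss >> ret;
  return ret;
}
```

The same string can thus be read as a std::string, an int, or a double, depending on the default's type, which is how command-line options like "MyAnalysis:LMODE=EL" reach analysis code in a typed form.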
    template <typename T>
    T getOption(std::string optname, T def) {
      if (_options.find(optname) == _options.end()) return def;
      std::stringstream ss;
      ss << _options.find(optname)->second;
      T ret;
      ss >> ret;
      return ret;
    }

    //@}


    /// @brief Book a CentralityProjection
    ///
    /// Using a SingleValueProjection, @a proj, giving the value of an
    /// experimental observable to be used as a centrality estimator,
    /// book a CentralityProjection based on the experimentally
    /// measured percentiles of this observable (as given by the
    /// reference data for the @a calHistName histogram in the @a
    /// calAnaName analysis). If a preloaded file with the output of a
    /// run using the @a calAnaName analysis contains a valid
    /// generated @a calHistName histogram, it will be used as an
    /// optional percentile binning. Also, if this preloaded file
    /// contains a histogram with the name @a calHistName with an
    /// appended "_IMP", that histogram will be used to add an optional
    /// centrality percentile based on the generated impact
    /// parameter. If @a increasing is true, a low (high) value of @a proj
    /// is assumed to correspond to a more peripheral (central) event.
    const CentralityProjection& declareCentrality(const SingleValueProjection& proj,
                                                  string calAnaName, string calHistName,
                                                  const string projName,
                                                  bool increasing = false);

    /// @brief Book a Percentile wrapper around AnalysisObjects.
    ///
    /// Based on a previously registered CentralityProjection named @a
    /// projName, book one AnalysisObject for each @a centralityBin and
    /// name them according to the corresponding code in the @a ref
    /// vector.
    template <class T>
    Percentile<T> bookPercentile(string projName,
                                 vector<pair<float, float> > centralityBins,
                                 vector<tuple<int, int, int> > ref) {
      typedef typename ReferenceTraits<T>::RefT RefT;
      Percentile<T> pctl(this, projName);
      const int nCent = centralityBins.size();
      for (int iCent = 0; iCent < nCent; ++iCent) {
        const string axisCode = makeAxisCode(std::get<0>(ref[iCent]),
                                             std::get<1>(ref[iCent]),
                                             std::get<2>(ref[iCent]));
        const RefT& refscatter = refData<RefT>(axisCode);
        shared_ptr<T> ao = addOrGetCompatAO(make_shared<T>(refscatter, histoPath(axisCode)));
        CounterPtr cnt = addOrGetCompatAO(make_shared<Counter>(histoPath("TMP/COUNTER/" + axisCode)));
        pctl.add(ao, cnt, centralityBins[iCent]);
      }
      return pctl;
    }

    /// @brief Book Percentile wrappers around AnalysisObjects.
    ///
    /// Based on a previously registered CentralityProjection named @a
    /// projName, book one (or several) AnalysisObject(s) named
    /// according to @a ref, where the x-axis will be filled according
    /// to the percentile output(s) of the @a projName projection.
    template <class T>
    PercentileXaxis<T> bookPercentileXaxis(string projName,
                                           tuple<int, int, int> ref) {
      typedef typename ReferenceTraits<T>::RefT RefT;
      PercentileXaxis<T> pctl(this, projName);
      const string axisCode = makeAxisCode(std::get<0>(ref),
                                           std::get<1>(ref),
                                           std::get<2>(ref));
      const RefT& refscatter = refData<RefT>(axisCode);
      shared_ptr<T> ao = addOrGetCompatAO(make_shared<T>(refscatter, histoPath(axisCode)));
      pctl.add(proj, ao, make_shared<Counter>());
      return pctl;
    }


    /// @name Analysis object manipulation
    /// @todo Should really be protected: only public to keep BinnedHistogram happy for now...
    //@{

    /// Multiplicatively scale the given counter, @a cnt, by factor @a factor.
    void scale(CounterPtr cnt, double factor);

    /// Multiplicatively scale the given counters, @a cnts, by factor @a factor.
    /// @note Constness intentional, if weird, to allow passing rvalue refs of smart ptrs (argh)
    /// @todo Use SFINAE for a generic iterable of CounterPtrs
    void scale(const std::vector<CounterPtr>& cnts, double factor) {
      for (auto& c : cnts) scale(c, factor);
    }

    /// @todo YUCK!
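The scale and normalize helpers in this section simply broadcast a single-object operation over a container of histogram or counter pointers. A stand-alone sketch of that pattern, with a plain vector of bin contents standing in for a histogram (the `Bins` type and `scaleSketch` name are illustrative, not the Rivet types):

```cpp
#include <vector>

// Stand-in for a 1D histogram: just its bin contents.
using Bins = std::vector<double>;

// Scale one "histogram" in place by a multiplicative factor.
void scaleSketch(Bins& h, double factor) {
  for (double& b : h) b *= factor;
}

// Broadcast the single-object operation over a container, mirroring
// the scale(const std::vector<...>&, double) overloads above.
void scaleSketch(std::vector<Bins>& hs, double factor) {
  for (Bins& h : hs) scaleSketch(h, factor);
}
```

In a Rivet analysis this kind of scaling is typically done in finalize(), e.g. scaling each booked histogram by crossSection()/sumOfWeights() to convert event counts into a differential cross-section.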
    template <std::size_t array_size>
    void scale(const CounterPtr (&cnts)[array_size], double factor) {
      // for (size_t i = 0; i < std::extent<decltype(cnts)>::value; ++i) scale(cnts[i], factor);
      for (auto& c : cnts) scale(c, factor);
    }


    /// Normalize the given histogram, @a histo, to area = @a norm.
    void normalize(Histo1DPtr histo, double norm=1.0, bool includeoverflows=true);

    /// Normalize the given histograms, @a histos, to area = @a norm.
    /// @note Constness intentional, if weird, to allow passing rvalue refs of smart ptrs (argh)
    /// @todo Use SFINAE for a generic iterable of Histo1DPtrs
    void normalize(const std::vector<Histo1DPtr>& histos, double norm=1.0, bool includeoverflows=true) {
      for (auto& h : histos) normalize(h, norm, includeoverflows);
    }

    /// @todo YUCK!
    template <std::size_t array_size>
    void normalize(const Histo1DPtr (&histos)[array_size], double norm=1.0, bool includeoverflows=true) {
      for (auto& h : histos) normalize(h, norm, includeoverflows);
    }

    /// Multiplicatively scale the given histogram, @a histo, by factor @a factor.
    void scale(Histo1DPtr histo, double factor);

    /// Multiplicatively scale the given histograms, @a histos, by factor @a factor.
    /// @note Constness intentional, if weird, to allow passing rvalue refs of smart ptrs (argh)
    /// @todo Use SFINAE for a generic iterable of Histo1DPtrs
    void scale(const std::vector<Histo1DPtr>& histos, double factor) {
      for (auto& h : histos) scale(h, factor);
    }

    /// @todo YUCK!
    template <std::size_t array_size>
    void scale(const Histo1DPtr (&histos)[array_size], double factor) {
      for (auto& h : histos) scale(h, factor);
    }

    /// Normalize the given histogram, @a histo, to area = @a norm.
    void normalize(Histo2DPtr histo, double norm=1.0, bool includeoverflows=true);

    /// Normalize the given histograms, @a histos, to area = @a norm.
    /// @note Constness intentional, if weird, to allow passing rvalue refs of smart ptrs (argh)
    /// @todo Use SFINAE for a generic iterable of Histo2DPtrs
    void normalize(const std::vector<Histo2DPtr>& histos, double norm=1.0, bool includeoverflows=true) {
      for (auto& h : histos) normalize(h, norm, includeoverflows);
    }

    /// @todo YUCK!
    template <std::size_t array_size>
    void normalize(const Histo2DPtr (&histos)[array_size], double norm=1.0, bool includeoverflows=true) {
      for (auto& h : histos) normalize(h, norm, includeoverflows);
    }

    /// Multiplicatively scale the given histogram, @a histo, by factor @a factor.
    void scale(Histo2DPtr histo, double factor);

    /// Multiplicatively scale the given histograms, @a histos, by factor @a factor.
    /// @note Constness intentional, if weird, to allow passing rvalue refs of smart ptrs (argh)
    /// @todo Use SFINAE for a generic iterable of Histo2DPtrs
    void scale(const std::vector<Histo2DPtr>& histos, double factor) {
      for (auto& h : histos) scale(h, factor);
    }

    /// @todo YUCK!
    template <std::size_t array_size>
    void scale(const Histo2DPtr (&histos)[array_size], double factor) {
      for (auto& h : histos) scale(h, factor);
    }


    /// Helper for counter division.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void divide(CounterPtr c1, CounterPtr c2, Scatter1DPtr s) const;

    /// Helper for counter division with raw YODA objects.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void divide(const YODA::Counter& c1, const YODA::Counter& c2, Scatter1DPtr s) const;

    /// Helper for histogram division.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void divide(Histo1DPtr h1, Histo1DPtr h2, Scatter2DPtr s) const;

    /// Helper for histogram division with raw YODA objects.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void divide(const YODA::Histo1D& h1, const YODA::Histo1D& h2, Scatter2DPtr s) const;

    /// Helper for profile histogram division.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void divide(Profile1DPtr p1, Profile1DPtr p2, Scatter2DPtr s) const;

    /// Helper for profile histogram division with raw YODA objects.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void divide(const YODA::Profile1D& p1, const YODA::Profile1D& p2, Scatter2DPtr s) const;

    /// Helper for 2D histogram division.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void divide(Histo2DPtr h1, Histo2DPtr h2, Scatter3DPtr s) const;

    /// Helper for 2D histogram division with raw YODA objects.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void divide(const YODA::Histo2D& h1, const YODA::Histo2D& h2, Scatter3DPtr s) const;

    /// Helper for 2D profile histogram division.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void divide(Profile2DPtr p1, Profile2DPtr p2, Scatter3DPtr s) const;

    /// Helper for 2D profile histogram division with raw YODA objects.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void divide(const YODA::Profile2D& p1, const YODA::Profile2D& p2, Scatter3DPtr s) const;


    /// Helper for histogram efficiency calculation.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void efficiency(Histo1DPtr h1, Histo1DPtr h2, Scatter2DPtr s) const;

    /// Helper for histogram efficiency calculation with raw YODA objects.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void efficiency(const YODA::Histo1D& h1, const YODA::Histo1D& h2, Scatter2DPtr s) const;

    /// Helper for histogram asymmetry calculation.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void asymm(Histo1DPtr h1, Histo1DPtr h2, Scatter2DPtr s) const;

    /// Helper for histogram asymmetry calculation with raw YODA objects.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void asymm(const YODA::Histo1D& h1, const YODA::Histo1D& h2, Scatter2DPtr s) const;

    /// Helper for converting a differential histo to an integral one.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void integrate(Histo1DPtr h, Scatter2DPtr s) const;

    /// Helper for converting a differential histo to an integral one.
    ///
    /// @note Assigns to the (already registered) output scatter, @a s. Preserves the path information of the target.
    void integrate(const Histo1D& h, Scatter2DPtr s) const;

    //@}


  public:

    /// List of registered analysis data objects
    const vector<AnalysisObjectPtr>& analysisObjects() const {
      return _analysisobjects;
    }


  protected:

    /// @name Data object registration, retrieval, and removal
    //@{

    /// Register a data object in the histogram system
    void addAnalysisObject(AnalysisObjectPtr ao);

    /// Register a data object in the system and return its pointer,
    /// or, if an object with the same path is already there, check if it
    /// is compatible (e.g. same type and same binning) and return that
    /// object instead. Emits a warning if an incompatible object with
    /// the same name is found, and replaces it with the given data
    /// object.
template std::shared_ptr addOrGetCompatAO(std::shared_ptr aonew) { foreach (const AnalysisObjectPtr& ao, analysisObjects()) { if ( ao->path() == aonew->path() ) { std::shared_ptr aoold = dynamic_pointer_cast(ao); if ( aoold && bookingCompatible(aonew, aoold) ) { MSG_TRACE("Bound pre-existing data object " << aonew->path() << " for " << name()); return aoold; } else { MSG_WARNING("Found incompatible pre-existing data object with same path " << aonew->path() << " for " << name()); } } } MSG_TRACE("Registered " << aonew->annotation("Type") << " " << aonew->path() << " for " << name()); addAnalysisObject(aonew); return aonew; } /// Get a data object from the histogram system template const std::shared_ptr getAnalysisObject(const std::string& name) const { foreach (const AnalysisObjectPtr& ao, analysisObjects()) { if (ao->path() == histoPath(name)) return dynamic_pointer_cast(ao); } throw LookupError("Data object " + histoPath(name) + " not found"); } /// Get a data object from the histogram system (non-const) template std::shared_ptr getAnalysisObject(const std::string& name) { foreach (const AnalysisObjectPtr& ao, analysisObjects()) { if (ao->path() == histoPath(name)) return dynamic_pointer_cast(ao); } throw LookupError("Data object " + histoPath(name) + " not found"); } /// Unregister a data object from the histogram system (by name) void removeAnalysisObject(const std::string& path); /// Unregister a data object from the histogram system (by pointer) void removeAnalysisObject(AnalysisObjectPtr ao); /// Get all data object from the AnalysisHandler. vector getAllData(bool includeorphans) const; /// Get a data object from another analysis (e.g. preloaded /// calibration histogram). 
/// Get a data object from the histogram system (non-const) template std::shared_ptr getAnalysisObject(const std::string & ananame, const std::string& name) { std::string path = "/" + ananame + "/" + name; for ( AnalysisObjectPtr ao : getAllData(true) ) { if ( ao->path() == path ) return dynamic_pointer_cast(ao); } return std::shared_ptr(); } /// Get a named Histo1D object from the histogram system const Histo1DPtr getHisto1D(const std::string& name) const { return getAnalysisObject(name); } /// Get a named Histo1D object from the histogram system (non-const) Histo1DPtr getHisto1D(const std::string& name) { return getAnalysisObject(name); } /// Get a Histo1D object from the histogram system by axis ID codes (non-const) const Histo1DPtr getHisto1D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) const { return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId)); } /// Get a Histo1D object from the histogram system by axis ID codes (non-const) Histo1DPtr getHisto1D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) { return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId)); } /// Get a named Histo2D object from the histogram system const Histo2DPtr getHisto2D(const std::string& name) const { return getAnalysisObject(name); } /// Get a named Histo2D object from the histogram system (non-const) Histo2DPtr getHisto2D(const std::string& name) { return getAnalysisObject(name); } /// Get a Histo2D object from the histogram system by axis ID codes (non-const) const Histo2DPtr getHisto2D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) const { return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId)); } /// Get a Histo2D object from the histogram system by axis ID codes (non-const) Histo2DPtr getHisto2D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) { return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId)); } /// Get a named Profile1D object from the 
histogram system const Profile1DPtr getProfile1D(const std::string& name) const { return getAnalysisObject(name); } /// Get a named Profile1D object from the histogram system (non-const) Profile1DPtr getProfile1D(const std::string& name) { return getAnalysisObject(name); } /// Get a Profile1D object from the histogram system by axis ID codes const Profile1DPtr getProfile1D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) const { return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId)); } /// Get a Profile1D object from the histogram system by axis ID codes (non-const) Profile1DPtr getProfile1D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) { return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId)); } /// Get a named Profile2D object from the histogram system const Profile2DPtr getProfile2D(const std::string& name) const { return getAnalysisObject(name); } /// Get a named Profile2D object from the histogram system (non-const) Profile2DPtr getProfile2D(const std::string& name) { return getAnalysisObject(name); } /// Get a Profile2D object from the histogram system by axis ID codes const Profile2DPtr getProfile2D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) const { return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId)); } /// Get a Profile2D object from the histogram system by axis ID codes (non-const) Profile2DPtr getProfile2D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) { return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId)); } /// Get a named Scatter2D object from the histogram system const Scatter2DPtr getScatter2D(const std::string& name) const { return getAnalysisObject(name); } /// Get a named Scatter2D object from the histogram system (non-const) Scatter2DPtr getScatter2D(const std::string& name) { return getAnalysisObject(name); } /// Get a Scatter2D object from the histogram system by axis ID codes 
const Scatter2DPtr getScatter2D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) const { return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId)); } /// Get a Scatter2D object from the histogram system by axis ID codes (non-const) Scatter2DPtr getScatter2D(unsigned int datasetId, unsigned int xAxisId, unsigned int yAxisId) { return getAnalysisObject(makeAxisCode(datasetId, xAxisId, yAxisId)); } //@} private: /// Name passed to constructor (used to find .info analysis data file, and as a fallback) string _defaultname; /// Pointer to analysis metadata object unique_ptr _info; /// Storage of all plot objects /// @todo Make this a map for fast lookup by path? vector _analysisobjects; /// @name Cross-section variables //@{ double _crossSection; bool _gotCrossSection; //@} /// The controlling AnalysisHandler object. AnalysisHandler* _analysishandler; /// Collection of cached refdata to speed up many autobookings: the /// reference data file should only be read once. mutable std::map _refdata; - /// Options the (this instance of) the analysis - map _options; + /// Options for (this instance of) the analysis + map _options; /// The string of options. string _optstring; private: /// @name Utility functions //@{ /// Get the reference data for this paper and cache it. void _cacheRefData() const; //@} /// The assignment operator is private and must never be called. /// In fact, it should not even be implemented. Analysis& operator=(const Analysis&); }; } // Include definition of analysis plugin system so that analyses automatically see it when including Analysis.hh #include "Rivet/AnalysisBuilder.hh" /// @def DECLARE_RIVET_PLUGIN /// Preprocessor define to prettify the global-object plugin hook mechanism. 
#define DECLARE_RIVET_PLUGIN(clsname) Rivet::AnalysisBuilder plugin_ ## clsname /// @def DECLARE_ALIASED_RIVET_PLUGIN /// Preprocessor define to prettify the global-object plugin hook mechanism, with an extra alias name for this analysis. // #define DECLARE_ALIASED_RIVET_PLUGIN(clsname, alias) Rivet::AnalysisBuilder plugin_ ## clsname ## ( ## #alias ## ) #define DECLARE_ALIASED_RIVET_PLUGIN(clsname, alias) DECLARE_RIVET_PLUGIN(clsname)( #alias ) /// @def DEFAULT_RIVET_ANALYSIS_CONSTRUCTOR /// Preprocessor define to prettify the manky constructor with name string argument #define DEFAULT_RIVET_ANALYSIS_CONSTRUCTOR(clsname) clsname() : Analysis(# clsname) {} /// @def DEFAULT_RIVET_ANALYSIS_CTOR /// Slight abbreviation for DEFAULT_RIVET_ANALYSIS_CONSTRUCTOR #define DEFAULT_RIVET_ANALYSIS_CTOR(clsname) DEFAULT_RIVET_ANALYSIS_CONSTRUCTOR(clsname) #endif diff --git a/include/Rivet/Makefile.am b/include/Rivet/Makefile.am --- a/include/Rivet/Makefile.am +++ b/include/Rivet/Makefile.am @@ -1,196 +1,197 @@ ## Internal headers - not to be installed nobase_dist_noinst_HEADERS = ## Public headers - to be installed nobase_pkginclude_HEADERS = ## Rivet interface nobase_pkginclude_HEADERS += \ Rivet.hh \ Run.hh \ Event.hh \ ParticleBase.hh \ Particle.fhh Particle.hh \ Jet.fhh Jet.hh \ Projection.fhh Projection.hh \ ProjectionApplier.hh \ ProjectionHandler.hh \ Analysis.hh \ AnalysisHandler.hh \ AnalysisInfo.hh \ AnalysisBuilder.hh \ AnalysisLoader.hh ## Build config stuff nobase_pkginclude_HEADERS += \ Config/RivetCommon.hh \ Config/RivetConfig.hh \ Config/BuildOptions.hh ## Projections nobase_pkginclude_HEADERS += \ Projections/AliceCommon.hh \ Projections/AxesDefinition.hh \ Projections/Beam.hh \ Projections/BeamThrust.hh \ Projections/CentralEtHCM.hh \ Projections/CentralityProjection.hh \ Projections/ChargedFinalState.hh \ Projections/ChargedLeptons.hh \ Projections/ConstLossyFinalState.hh \ Projections/DirectFinalState.hh \ Projections/DISFinalState.hh \ 
Projections/DISKinematics.hh \ Projections/DISLepton.hh \ Projections/DressedLeptons.hh \ Projections/EventMixingFinalState.hh \ Projections/FastJets.hh \ Projections/PxConePlugin.hh \ Projections/FinalPartons.hh \ Projections/FinalState.hh \ Projections/FoxWolframMoments.hh \ Projections/FParameter.hh \ Projections/GeneratedPercentileProjection.hh \ Projections/HadronicFinalState.hh \ Projections/HeavyHadrons.hh \ Projections/Hemispheres.hh \ Projections/IdentifiedFinalState.hh \ Projections/ImpactParameterProjection.hh \ Projections/IndirectFinalState.hh \ Projections/InitialQuarks.hh \ Projections/InvMassFinalState.hh \ Projections/JetAlg.hh \ Projections/JetShape.hh \ Projections/LeadingParticlesFinalState.hh \ Projections/LossyFinalState.hh \ Projections/MergedFinalState.hh \ Projections/MissingMomentum.hh \ Projections/NeutralFinalState.hh \ Projections/NonHadronicFinalState.hh \ Projections/NonPromptFinalState.hh \ Projections/ParisiTensor.hh \ Projections/ParticleFinder.hh \ Projections/PartonicTops.hh \ Projections/PercentileProjection.hh \ Projections/PrimaryHadrons.hh \ Projections/PrimaryParticles.hh \ Projections/PromptFinalState.hh \ Projections/SingleValueProjection.hh \ Projections/SmearedParticles.hh \ Projections/SmearedJets.hh \ Projections/SmearedMET.hh \ Projections/Sphericity.hh \ Projections/Spherocity.hh \ Projections/TauFinder.hh \ Projections/Thrust.hh \ Projections/TriggerCDFRun0Run1.hh \ Projections/TriggerCDFRun2.hh \ Projections/TriggerProjection.hh \ Projections/TriggerUA5.hh \ Projections/UnstableFinalState.hh \ + Projections/UnstableParticles.hh \ Projections/UserCentEstimate.hh \ Projections/VetoedFinalState.hh \ Projections/VisibleFinalState.hh \ Projections/WFinder.hh \ Projections/ZFinder.hh ## Meta-projection convenience headers nobase_pkginclude_HEADERS += \ Projections/FinalStates.hh \ Projections/Smearing.hh ## Analysis base class headers # TODO: Move to Rivet/AnalysisTools header dir? 
nobase_pkginclude_HEADERS += \ Analyses/MC_Cent_pPb.hh \ Analyses/MC_ParticleAnalysis.hh \ Analyses/MC_JetAnalysis.hh \ Analyses/MC_JetSplittings.hh ## Tools nobase_pkginclude_HEADERS += \ Tools/AliceCommon.hh \ Tools/AtlasCommon.hh \ Tools/BeamConstraint.hh \ Tools/BinnedHistogram.hh \ Tools/CentralityBinner.hh \ Tools/Cmp.fhh \ Tools/Cmp.hh \ Tools/Correlators.hh \ Tools/Cutflow.hh \ Tools/Cuts.fhh \ Tools/Cuts.hh \ Tools/Exceptions.hh \ Tools/JetUtils.hh \ Tools/Logging.hh \ Tools/Random.hh \ Tools/ParticleBaseUtils.hh \ Tools/ParticleIdUtils.hh \ Tools/ParticleUtils.hh \ Tools/ParticleName.hh \ Tools/Percentile.hh \ Tools/PrettyPrint.hh \ Tools/RivetPaths.hh \ Tools/RivetSTL.hh \ Tools/RivetFastJet.hh \ Tools/RivetHepMC.hh \ Tools/RivetYODA.hh \ Tools/RivetMT2.hh \ Tools/SmearingFunctions.hh \ Tools/MomentumSmearingFunctions.hh \ Tools/ParticleSmearingFunctions.hh \ Tools/JetSmearingFunctions.hh \ Tools/TypeTraits.hh \ Tools/Utils.hh \ Tools/fjcontrib/AxesDefinition.hh \ Tools/fjcontrib/BottomUpSoftDrop.hh \ Tools/fjcontrib/EnergyCorrelator.hh \ Tools/fjcontrib/ExtraRecombiners.hh \ Tools/fjcontrib/IteratedSoftDrop.hh \ Tools/fjcontrib/MeasureDefinition.hh \ Tools/fjcontrib/ModifiedMassDropTagger.hh \ Tools/fjcontrib/Njettiness.hh \ Tools/fjcontrib/NjettinessPlugin.hh \ Tools/fjcontrib/Nsubjettiness.hh \ Tools/fjcontrib/Recluster.hh \ Tools/fjcontrib/RecursiveSoftDrop.hh \ Tools/fjcontrib/RecursiveSymmetryCutBase.hh \ Tools/fjcontrib/SoftDrop.hh \ Tools/fjcontrib/TauComponents.hh \ Tools/fjcontrib/XConePlugin.hh nobase_dist_noinst_HEADERS += \ Tools/osdir.hh ## Maths nobase_pkginclude_HEADERS += \ Math/Matrices.hh \ Math/Vector3.hh \ Math/VectorN.hh \ Math/MatrixN.hh \ Math/MatrixDiag.hh \ Math/MathHeader.hh \ Math/Vectors.hh \ Math/LorentzTrans.hh \ Math/Matrix3.hh \ Math/MathUtils.hh \ Math/Vector4.hh \ Math/Math.hh \ Math/Units.hh \ Math/Constants.hh \ Math/eigen/util.h \ Math/eigen/regressioninternal.h \ Math/eigen/regression.h \ Math/eigen/vector.h \ 
Math/eigen/ludecompositionbase.h \ Math/eigen/ludecomposition.h \ Math/eigen/linearsolver.h \ Math/eigen/linearsolverbase.h \ Math/eigen/matrix.h \ Math/eigen/vectorbase.h \ Math/eigen/projective.h \ Math/eigen/matrixbase.h diff --git a/include/Rivet/Math/Vector3.hh b/include/Rivet/Math/Vector3.hh --- a/include/Rivet/Math/Vector3.hh +++ b/include/Rivet/Math/Vector3.hh @@ -1,381 +1,381 @@ #ifndef RIVET_MATH_VECTOR3 #define RIVET_MATH_VECTOR3 #include "Rivet/Math/MathHeader.hh" #include "Rivet/Math/MathUtils.hh" #include "Rivet/Math/VectorN.hh" namespace Rivet { class Vector3; typedef Vector3 ThreeVector; class Matrix3; Vector3 multiply(const double, const Vector3&); Vector3 multiply(const Vector3&, const double); Vector3 add(const Vector3&, const Vector3&); Vector3 operator*(const double, const Vector3&); Vector3 operator*(const Vector3&, const double); Vector3 operator/(const Vector3&, const double); Vector3 operator+(const Vector3&, const Vector3&); Vector3 operator-(const Vector3&, const Vector3&); /// @brief Three-dimensional specialisation of Vector. 
class Vector3 : public Vector<3> { friend class Matrix3; friend Vector3 multiply(const double, const Vector3&); friend Vector3 multiply(const Vector3&, const double); friend Vector3 add(const Vector3&, const Vector3&); friend Vector3 subtract(const Vector3&, const Vector3&); public: Vector3() : Vector<3>() { } template Vector3(const V3& other) { this->setX(other.x()); this->setY(other.y()); this->setZ(other.z()); } Vector3(const Vector<3>& other) { this->setX(other.get(0)); this->setY(other.get(1)); this->setZ(other.get(2)); } Vector3(double x, double y, double z) { this->setX(x); this->setY(y); this->setZ(z); } ~Vector3() { } public: static Vector3 mkX() { return Vector3(1,0,0); } static Vector3 mkY() { return Vector3(0,1,0); } static Vector3 mkZ() { return Vector3(0,0,1); } public: double x() const { return get(0); } double y() const { return get(1); } double z() const { return get(2); } Vector3& setX(double x) { set(0, x); return *this; } Vector3& setY(double y) { set(1, y); return *this; } Vector3& setZ(double z) { set(2, z); return *this; } /// Dot-product with another vector double dot(const Vector3& v) const { return _vec.dot(v._vec); } /// Cross-product with another vector Vector3 cross(const Vector3& v) const { Vector3 result; result._vec = _vec.cross(v._vec); return result; } /// Angle in radians to another vector double angle(const Vector3& v) const { const double localDotOther = unit().dot(v.unit()); if (localDotOther > 1.0) return 0.0; if (localDotOther < -1.0) return M_PI; return acos(localDotOther); } - /// Unit-normalized version of this vector + /// Unit-normalized version of this vector. Vector3 unitVec() const { - /// @todo What to do in this situation? 
- if (isZero()) return *this; - else return *this * 1.0/this->mod(); + double md = mod(); + if ( md <= 0.0 ) return Vector3(); + else return *this * 1.0/md; } /// Synonym for unitVec Vector3 unit() const { return unitVec(); } /// Polar projection of this vector into the x-y plane Vector3 polarVec() const { Vector3 rtn = *this; rtn.setZ(0.); return rtn; } /// Synonym for polarVec Vector3 perpVec() const { return polarVec(); } /// Synonym for polarVec Vector3 rhoVec() const { return polarVec(); } /// Square of the polar radius double polarRadius2() const { return x()*x() + y()*y(); } /// Synonym for polarRadius2 double perp2() const { return polarRadius2(); } /// Synonym for polarRadius2 double rho2() const { return polarRadius2(); } /// Polar radius double polarRadius() const { return sqrt(polarRadius2()); } /// Synonym for polarRadius double perp() const { return polarRadius(); } /// Synonym for polarRadius double rho() const { return polarRadius(); } /// Angle subtended by the vector's projection in x-y and the x-axis. double azimuthalAngle(const PhiMapping mapping = ZERO_2PI) const { // If this is a null vector, return zero rather than let atan2 set an error state if (Rivet::isZero(mod2())) return 0.0; // Calculate the arctan and return in the requested range const double value = atan2( y(), x() ); return mapAngle(value, mapping); } /// Synonym for azimuthalAngle. double phi(const PhiMapping mapping = ZERO_2PI) const { return azimuthalAngle(mapping); } /// Angle subtended by the vector and the z-axis. double polarAngle() const { // Get number between [0,PI] const double polarangle = atan2(polarRadius(), z()); return mapAngle0ToPi(polarangle); } /// Synonym for polarAngle double theta() const { return polarAngle(); } /// Purely geometric approximation to rapidity /// /// Also invariant under z-boosts, equal to y for massless particles. 
/// /// @note A cut-off is applied such that |eta| < log(2/DBL_EPSILON) double pseudorapidity() const { const double epsilon = DBL_EPSILON; double m = mod(); if ( m == 0.0 ) return 0.0; double pt = max(epsilon*m, perp()); double rap = std::log((m + fabs(z()))/pt); return z() > 0.0 ? rap: -rap; } /// Synonym for pseudorapidity double eta() const { return pseudorapidity(); } /// Convenience shortcut for fabs(eta()) double abseta() const { return fabs(eta()); } public: Vector3& operator*=(const double a) { _vec = multiply(a, *this)._vec; return *this; } Vector3& operator/=(const double a) { _vec = multiply(1.0/a, *this)._vec; return *this; } Vector3& operator+=(const Vector3& v) { _vec = add(*this, v)._vec; return *this; } Vector3& operator-=(const Vector3& v) { _vec = subtract(*this, v)._vec; return *this; } Vector3 operator-() const { Vector3 rtn; rtn._vec = -_vec; return rtn; } }; inline double dot(const Vector3& a, const Vector3& b) { return a.dot(b); } inline Vector3 cross(const Vector3& a, const Vector3& b) { return a.cross(b); } inline Vector3 multiply(const double a, const Vector3& v) { Vector3 result; result._vec = a * v._vec; return result; } inline Vector3 multiply(const Vector3& v, const double a) { return multiply(a, v); } inline Vector3 operator*(const double a, const Vector3& v) { return multiply(a, v); } inline Vector3 operator*(const Vector3& v, const double a) { return multiply(a, v); } inline Vector3 operator/(const Vector3& v, const double a) { return multiply(1.0/a, v); } inline Vector3 add(const Vector3& a, const Vector3& b) { Vector3 result; result._vec = a._vec + b._vec; return result; } inline Vector3 subtract(const Vector3& a, const Vector3& b) { Vector3 result; result._vec = a._vec - b._vec; return result; } inline Vector3 operator+(const Vector3& a, const Vector3& b) { return add(a, b); } inline Vector3 operator-(const Vector3& a, const Vector3& b) { return subtract(a, b); } // More physicsy coordinates etc. 
/// Angle (in radians) between two 3-vectors. inline double angle(const Vector3& a, const Vector3& b) { return a.angle(b); } ///////////////////////////////////////////////////// /// @name \f$ |\Delta \eta| \f$ calculations from 3-vectors //@{ /// Calculate the difference in pseudorapidity between two spatial vectors. inline double deltaEta(const Vector3& a, const Vector3& b) { return deltaEta(a.pseudorapidity(), b.pseudorapidity()); } /// Calculate the difference in pseudorapidity between two spatial vectors. inline double deltaEta(const Vector3& v, double eta2) { return deltaEta(v.pseudorapidity(), eta2); } /// Calculate the difference in pseudorapidity between two spatial vectors. inline double deltaEta(double eta1, const Vector3& v) { return deltaEta(eta1, v.pseudorapidity()); } //@} /// @name \f$ \Delta \phi \f$ calculations from 3-vectors //@{ /// Calculate the difference in azimuthal angle between two spatial vectors. inline double deltaPhi(const Vector3& a, const Vector3& b, bool sign=false) { return deltaPhi(a.azimuthalAngle(), b.azimuthalAngle(), sign); } /// Calculate the difference in azimuthal angle between two spatial vectors. inline double deltaPhi(const Vector3& v, double phi2, bool sign=false) { return deltaPhi(v.azimuthalAngle(), phi2, sign); } /// Calculate the difference in azimuthal angle between two spatial vectors. inline double deltaPhi(double phi1, const Vector3& v, bool sign=false) { return deltaPhi(phi1, v.azimuthalAngle(), sign); } //@} /// @name \f$ \Delta R \f$ calculations from 3-vectors //@{ /// Calculate the 2D rapidity-azimuthal ("eta-phi") distance between two spatial vectors. inline double deltaR2(const Vector3& a, const Vector3& b) { return deltaR2(a.pseudorapidity(), a.azimuthalAngle(), b.pseudorapidity(), b.azimuthalAngle()); } /// Calculate the 2D rapidity-azimuthal ("eta-phi") distance between two spatial vectors. 
inline double deltaR(const Vector3& a, const Vector3& b) { return sqrt(deltaR2(a,b)); } /// Calculate the 2D rapidity-azimuthal ("eta-phi") distance between two spatial vectors. inline double deltaR2(const Vector3& v, double eta2, double phi2) { return deltaR2(v.pseudorapidity(), v.azimuthalAngle(), eta2, phi2); } /// Calculate the 2D rapidity-azimuthal ("eta-phi") distance between two spatial vectors. inline double deltaR(const Vector3& v, double eta2, double phi2) { return sqrt(deltaR2(v, eta2, phi2)); } /// Calculate the 2D rapidity-azimuthal ("eta-phi") distance between two spatial vectors. inline double deltaR2(double eta1, double phi1, const Vector3& v) { return deltaR2(eta1, phi1, v.pseudorapidity(), v.azimuthalAngle()); } /// Calculate the 2D rapidity-azimuthal ("eta-phi") distance between two spatial vectors. inline double deltaR(double eta1, double phi1, const Vector3& v) { return sqrt(deltaR2(eta1, phi1, v)); } //@} /// @name Typedefs of vector types to short names /// @todo Switch canonical and alias names //@{ //typedef Vector3 V3; //< generic typedef Vector3 X3; //< spatial //@} } #endif diff --git a/include/Rivet/Projections/DISKinematics.hh b/include/Rivet/Projections/DISKinematics.hh --- a/include/Rivet/Projections/DISKinematics.hh +++ b/include/Rivet/Projections/DISKinematics.hh @@ -1,124 +1,126 @@ // -*- C++ -*- #ifndef RIVET_DISKinematics_HH #define RIVET_DISKinematics_HH #include "Rivet/Particle.hh" #include "Rivet/Event.hh" #include "Rivet/Projection.hh" #include "Rivet/Projections/DISLepton.hh" #include "Rivet/Projections/Beam.hh" namespace Rivet { /// @brief Get the DIS kinematic variables and relevant boosts for an event. class DISKinematics : public Projection { public: /// The default constructor. 
- DISKinematics() + DISKinematics(const DISLepton & lepton = DISLepton(), + const std::map & opts = + std::map()) : _theQ2(-1.0), _theW2(-1.0), _theX(-1.0), _theY(-1.0), _theS(-1.0) { setName("DISKinematics"); //addPdgIdPair(ANY, hadid); addProjection(Beam(), "Beam"); - addProjection(DISLepton(), "Lepton"); + addProjection(lepton, "Lepton"); } /// Clone on the heap. DEFAULT_RIVET_PROJ_CLONE(DISKinematics); protected: /// Perform the projection operation on the supplied event. virtual void project(const Event& e); /// Compare with other projections. virtual int compare(const Projection& p) const; public: /// The \f$Q^2\f$. double Q2() const { return _theQ2; } /// The \f$W^2\f$. double W2() const { return _theW2; } /// The Bjorken \f$x\f$. double x() const { return _theX; } /// The inelasticity \f$y\f$ double y() const { return _theY; } /// The centre of mass energy \f$s\f$ double s() const { return _theS; } /// The LorentzRotation needed to boost a particle to the hadronic CM frame. const LorentzTransform& boostHCM() const { return _hcm; } /// The LorentzRotation needed to boost a particle to the hadronic Breit frame. const LorentzTransform& boostBreit() const { return _breit; } /// The incoming hadron beam particle const Particle& beamHadron() const { return _inHadron; } /// The incoming lepton beam particle const Particle& beamLepton() const { return _inLepton; } /// The scattered DIS lepton const Particle& scatteredLepton() const { return _outLepton; } /// @brief 1/-1 multiplier indicating (respectively) whether the event has conventional orientation or not /// /// Conventional DIS orientation has the hadron travelling in the +z direction const int orientation() const { return sign(_inHadron.pz()); } private: /// The \f$Q^2\f$. double _theQ2; /// The \f$W^2\f$. double _theW2; /// The Bjorken \f$x\f$. 
double _theX; /// The inelasticity \f$y\f$ double _theY; /// The centre of mass energy \f$s\f$ double _theS; /// Incoming and outgoing DIS particles Particle _inHadron, _inLepton, _outLepton; /// The LorentzRotation needed to boost a particle to the hadronic CM frame. LorentzTransform _hcm; /// The LorentzRotation needed to boost a particle to the hadronic Breit frame. LorentzTransform _breit; }; } #endif diff --git a/include/Rivet/Projections/DISLepton.hh b/include/Rivet/Projections/DISLepton.hh --- a/include/Rivet/Projections/DISLepton.hh +++ b/include/Rivet/Projections/DISLepton.hh @@ -1,69 +1,111 @@ // -*- C++ -*- #ifndef RIVET_DISLepton_HH #define RIVET_DISLepton_HH #include "Rivet/Projections/Beam.hh" #include "Rivet/Projections/PromptFinalState.hh" +#include "Rivet/Projections/HadronicFinalState.hh" +#include "Rivet/Projections/DressedLeptons.hh" #include "Rivet/Particle.hh" #include "Rivet/Event.hh" namespace Rivet { /// @brief Get the incoming and outgoing leptons in a DIS event. class DISLepton : public Projection { public: /// @name Constructors. //@{ - DISLepton(){ + /// Default constructor taking general options. The recognised + /// options are: LMODE, taking the values "prompt", "any" and + /// "dressed"; DressDR, giving a delta-R cone radius within which photon + /// momenta are added to the lepton candidates for LMODE=dressed; + /// and IsolDR, giving a delta-R cone within which no hadrons are + /// allowed around a lepton candidate. 
+ DISLepton(const std::map & opts = + std::map()): _isolDR(0.0) { setName("DISLepton"); addProjection(Beam(), "Beam"); - addProjection(PromptFinalState(), "FS"); + addProjection(HadronicFinalState(), "IFS"); + + auto isol = opts.find("IsolDR"); + if ( isol != opts.end() ) _isolDR = std::stod(isol->second); + + double dressdr = 0.0; + auto dress = opts.find("DressDR"); + if ( dress != opts.end() ) + dressdr = std::stod(dress->second); + + auto lmode = opts.find("LMODE"); + if ( lmode != opts.end() && lmode->second == "any" ) + addProjection(FinalState(), "LFS"); + else if ( lmode != opts.end() && lmode->second == "dressed" ) + addProjection(DressedLeptons(dressdr), "LFS"); + else + addProjection(PromptFinalState(), "LFS"); } + /// Constructor taking the following arguments: a final state + /// projection defining which lepton candidates to consider; a + /// beam projection defining the momenta of the incoming lepton + /// beam; and a final state projection defining the particles not + /// allowed within a delta-R of @a isolationcut of a lepton + /// candidate. + DISLepton(const FinalState & leptoncandidates, + const Beam & beamproj = Beam(), + const FinalState & isolationfs = FinalState(), + double isolationcut = 0.0): _isolDR(isolationcut) { + addProjection(leptoncandidates, "LFS"); + addProjection(isolationfs, "IFS"); + addProjection(beamproj, "Beam"); + } + + + /// Clone on the heap. DEFAULT_RIVET_PROJ_CLONE(DISLepton); //@} protected: /// Perform the projection operation on the supplied event. virtual void project(const Event& e); /// Compare with other projections. 
virtual int compare(const Projection& p) const; public: /// The incoming lepton const Particle& in() const { return _incoming; } /// The outgoing lepton const Particle& out() const { return _outgoing; } /// Sign of the incoming lepton pz component int pzSign() const { return sign(_incoming.pz()); } private: /// The incoming lepton Particle _incoming; /// The outgoing lepton Particle _outgoing; - // /// The charge sign of the DIS current - // double _charge; + /// If larger than zero, an isolation cut around the lepton is required. + double _isolDR; }; } #endif diff --git a/include/Rivet/Projections/FinalStates.hh b/include/Rivet/Projections/FinalStates.hh --- a/include/Rivet/Projections/FinalStates.hh +++ b/include/Rivet/Projections/FinalStates.hh @@ -1,17 +1,16 @@ // -*- C++ -*- #ifndef RIVET_FinalStates_HH #define RIVET_FinalStates_HH /// @file FinalStates.hh Convenience include of all FinalState projection headers #include "Rivet/Projections/FinalState.hh" #include "Rivet/Projections/ChargedFinalState.hh" #include "Rivet/Projections/NeutralFinalState.hh" #include "Rivet/Projections/IdentifiedFinalState.hh" #include "Rivet/Projections/VetoedFinalState.hh" #include "Rivet/Projections/PromptFinalState.hh" #include "Rivet/Projections/NonPromptFinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" #include "Rivet/Projections/VisibleFinalState.hh" #endif diff --git a/include/Rivet/Projections/HeavyHadrons.hh b/include/Rivet/Projections/HeavyHadrons.hh --- a/include/Rivet/Projections/HeavyHadrons.hh +++ b/include/Rivet/Projections/HeavyHadrons.hh @@ -1,112 +1,112 @@ // -*- C++ -*- #ifndef RIVET_HeavyHadrons_HH #define RIVET_HeavyHadrons_HH #include "Rivet/Projections/FinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Particle.hh" #include "Rivet/Event.hh" namespace Rivet { /// @brief Project out the last pre-decay b and c hadrons. 
/// /// This currently defines a c-hadron as one which contains a @a c quark and /// @a{not} a @a b quark. /// /// @todo This assumes that the heavy hadrons are unstable... should we also look for stable ones in case the decays are disabled? class HeavyHadrons : public FinalState { public: /// @name Constructors and destructors. //@{ /// Constructor with specification of the minimum and maximum pseudorapidity /// \f$ \eta \f$ and the min \f$ p_T \f$ (in GeV). HeavyHadrons(const Cut& c=Cuts::open()) { setName("HeavyHadrons"); - addProjection(UnstableFinalState(c), "UFS"); + addProjection(UnstableParticles(c), "UFS"); } /// Clone on the heap. DEFAULT_RIVET_PROJ_CLONE(HeavyHadrons); //@} /// @name b hadron accessors //@{ /// Get all weakly decaying b hadrons (return by reference) const Particles& bHadrons() const { return _theBs; } /// Get weakly decaying b hadrons with a Cut applied (return by value) Particles bHadrons(const Cut& c) const { return filter_select(bHadrons(), c); } /// Get weakly decaying b hadrons with a pTmin cut (return by value) /// @deprecated Prefer bHadrons(Cuts::pT > x) Particles bHadrons(double ptmin) const { return bHadrons(Cuts::pT > ptmin); } /// Get weakly decaying b hadrons with a general filter function applied (return by value) Particles bHadrons(const ParticleSelector& s) const { return filter_select(bHadrons(), s); } //@} /// @name b hadron accessors //@{ /// Get all weakly decaying c hadrons (return by reference) const Particles& cHadrons() const { return _theCs; } /// Get weakly decaying c hadrons with a Cut applied (return by value) Particles cHadrons(const Cut& c) const { return filter_select(cHadrons(), c); } /// Get weakly decaying c hadrons with a pTmin cut (return by value) /// @deprecated Prefer cHadrons(Cuts::pT > x) Particles cHadrons(double ptmin) const { return cHadrons(Cuts::pT > ptmin); } /// Get weakly decaying c hadrons with a general filter function applied (return by value) Particles cHadrons(const ParticleSelector& 
s) const { return filter_select(cHadrons(), s); } //@} protected: /// Apply the projection to the event. virtual void project(const Event& e); /// Compare projections (only difference is in UFS definition) virtual int compare(const Projection& p) const { return mkNamedPCmp(p, "UFS"); } /// b and c hadron containers Particles _theBs, _theCs; }; } #endif diff --git a/include/Rivet/Projections/PrimaryHadrons.hh b/include/Rivet/Projections/PrimaryHadrons.hh --- a/include/Rivet/Projections/PrimaryHadrons.hh +++ b/include/Rivet/Projections/PrimaryHadrons.hh @@ -1,55 +1,55 @@ // -*- C++ -*- #ifndef RIVET_PrimaryHadrons_HH #define RIVET_PrimaryHadrons_HH #include "Rivet/Projections/FinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" #include "Rivet/Particle.hh" #include "Rivet/Event.hh" namespace Rivet { /// @brief Project out the first hadrons from hadronisation. /// /// @todo Also be able to return taus? Prefer a separate tau finder. /// @todo This assumes that the primary hadrons are unstable... should we also look for stable primary hadrons? class PrimaryHadrons : public FinalState { public: /// @name Constructors and destructors. //@{ /// Constructor with cuts argument PrimaryHadrons(const Cut& c=Cuts::open()) { setName("PrimaryHadrons"); - addProjection(UnstableFinalState(c), "UFS"); + addProjection(UnstableParticles(c), "UFS"); } /// Constructor with specification of the minimum and maximum pseudorapidity /// \f$ \eta \f$ and the min \f$ p_T \f$ (in GeV). PrimaryHadrons(double mineta, double maxeta, double minpt=0.0*GeV) { setName("PrimaryHadrons"); - addProjection(UnstableFinalState(Cuts::etaIn(mineta, maxeta) && Cuts::pT > minpt), "UFS"); + addProjection(UnstableParticles(Cuts::etaIn(mineta, maxeta) && Cuts::pT > minpt), "UFS"); } /// Clone on the heap. DEFAULT_RIVET_PROJ_CLONE(PrimaryHadrons); //@} /// Apply the projection to the event. 
virtual void project(const Event& e); // /// Compare projections. // int compare(const Projection& p) const; }; } #endif diff --git a/include/Rivet/Projections/TauFinder.hh b/include/Rivet/Projections/TauFinder.hh --- a/include/Rivet/Projections/TauFinder.hh +++ b/include/Rivet/Projections/TauFinder.hh @@ -1,67 +1,67 @@ #ifndef RIVET_TauFinder_HH #define RIVET_TauFinder_HH #include "Rivet/Projections/FinalState.hh" -#include "Rivet/Projections/UnstableFinalState.hh" +#include "Rivet/Projections/UnstableParticles.hh" namespace Rivet { /// @brief Convenience finder of unstable taus /// /// @todo Convert to a general ParticleFinder, since it's not a true final state? Needs some care... class TauFinder : public FinalState { public: enum DecayType { ANY=0, LEPTONIC=1, HADRONIC }; static bool isHadronic(const Particle& tau) { assert(tau.abspid() == PID::TAU); return any(tau.stableDescendants(), isHadron); } static bool isLeptonic(const Particle& tau) { return !isHadronic(tau); } TauFinder(DecayType decaytype, const Cut& cut=Cuts::open()) { /// @todo What about directness/promptness? setName("TauFinder"); _dectype = decaytype; - addProjection(UnstableFinalState(cut), "UFS"); + addProjection(UnstableParticles(cut), "UFS"); } /// Clone on the heap. DEFAULT_RIVET_PROJ_CLONE(TauFinder); const Particles& taus() const { return _theParticles; } protected: /// Apply the projection on the supplied event. void project(const Event& e); /// Compare with other projections. 
virtual int compare(const Projection& p) const; private: /// The decaytype enum DecayType _dectype; }; /// @todo Make this the canonical name in future using Taus = TauFinder; } #endif diff --git a/include/Rivet/Projections/UnstableFinalState.hh b/include/Rivet/Projections/UnstableFinalState.hh --- a/include/Rivet/Projections/UnstableFinalState.hh +++ b/include/Rivet/Projections/UnstableFinalState.hh @@ -1,59 +1,8 @@ // -*- C++ -*- #ifndef RIVET_UnstableFinalState_HH #define RIVET_UnstableFinalState_HH -#include "Rivet/Projections/FinalState.hh" - -namespace Rivet { - - - /// @brief Project out all physical-but-decayed particles in an event. - /// - /// The particles returned by are unique unstable particles, such as hadrons - /// which are decayed by the generator. If, for example, you set Ks and Lambda - /// particles stable in the generator, they will not be returned. Also, you - /// should be aware that all unstable particles in a decay chain are returned: - /// if you are looking for something like the number of B hadrons in an event - /// and there is a decay chain from e.g. B** -> B, you will count both B - /// mesons unless you are careful to check for ancestor/descendent relations - /// between the particles. Duplicate particles in the event record, i.e. those - /// which differ only in bookkeeping details or photon emissions, are stripped - /// from the returned particles collection. - /// - /// @todo Rename header, with fallback - /// @todo Convert to a general ParticleFinder since this is explicitly not a final state... but needs care - /// @todo Make TauFinder inherit/use - class UnstableParticles : public FinalState { - public: - - /// @name Standard constructors and destructors. - //@{ - - /// Cut-based / default constructor - UnstableParticles(const Cut& c=Cuts::open()) - : FinalState(c) - { - setName("UnstableParticles"); - } - - /// Clone on the heap. 
- DEFAULT_RIVET_PROJ_CLONE(UnstableParticles); - - //@} - - protected: - - /// Apply the projection to the event. - virtual void project(const Event& e); - - }; - - - // Backward compatibility alias - using UnstableFinalState = UnstableParticles; - - -} - +#include "Rivet/Projections/UnstableParticles.hh" +#pragma message "UnstableFinalState.hh is deprecated. Please use UnstableParticles.hh instead" #endif diff --git a/include/Rivet/Projections/UnstableParticles.hh b/include/Rivet/Projections/UnstableParticles.hh new file mode 100644 --- /dev/null +++ b/include/Rivet/Projections/UnstableParticles.hh @@ -0,0 +1,58 @@ +// -*- C++ -*- +#ifndef RIVET_UnstableParticles_HH +#define RIVET_UnstableParticles_HH + +#include "Rivet/Projections/FinalState.hh" + +namespace Rivet { + + + /// @brief Project out all physical-but-decayed particles in an event. + /// + /// The particles returned are unique unstable particles, such as hadrons + /// which are decayed by the generator. If, for example, you set Ks and Lambda + /// particles stable in the generator, they will not be returned. Also, you + /// should be aware that all unstable particles in a decay chain are returned: + /// if you are looking for something like the number of B hadrons in an event + /// and there is a decay chain from e.g. B** -> B, you will count both B + /// mesons unless you are careful to check for ancestor/descendant relations + /// between the particles. Duplicate particles in the event record, i.e. those + /// which differ only in bookkeeping details or photon emissions, are stripped + /// from the returned particles collection. + /// + /// @todo Convert to a general ParticleFinder since this is explicitly not a final state... but needs care + /// @todo Add a FIRST/LAST/ANY enum to specify the mode for uniquifying replica chains (default = LAST) + class UnstableParticles : public FinalState { + public: + + /// @name Standard constructors and destructors.
+ //@{ + + /// Cut-based / default constructor + UnstableParticles(const Cut& c=Cuts::open()) + : FinalState(c) + { + setName("UnstableParticles"); + } + + /// Clone on the heap. + DEFAULT_RIVET_PROJ_CLONE(UnstableParticles); + + //@} + + protected: + + /// Apply the projection to the event. + virtual void project(const Event& e); + + }; + + + // Backward compatibility alias + using UnstableFinalState = UnstableParticles; + + +} + + +#endif diff --git a/m4/libtool.m4 b/m4/libtool.m4 --- a/m4/libtool.m4 +++ b/m4/libtool.m4 @@ -1,7982 +1,8387 @@ # libtool.m4 - Configure libtool for the host system. -*-Autoconf-*- # -# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, -# 2006, 2007, 2008, 2009, 2010, 2011 Free Software -# Foundation, Inc. +# Copyright (C) 1996-2001, 2003-2015 Free Software Foundation, Inc. # Written by Gordon Matzigkeit, 1996 # # This file is free software; the Free Software Foundation gives # unlimited permission to copy and/or distribute it, with or without # modifications, as long as this notice is preserved. m4_define([_LT_COPYING], [dnl -# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, -# 2006, 2007, 2008, 2009, 2010, 2011 Free Software -# Foundation, Inc. -# Written by Gordon Matzigkeit, 1996 +# Copyright (C) 2014 Free Software Foundation, Inc. +# This is free software; see the source for copying conditions. There is NO +# warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. + +# GNU Libtool is free software; you can redistribute it and/or modify +# it under the terms of the GNU General Public License as published by +# the Free Software Foundation; either version 2 of of the License, or +# (at your option) any later version. # -# This file is part of GNU Libtool. 
+# As a special exception to the GNU General Public License, if you +# distribute this file as part of a program or library that is built +# using GNU Libtool, you may include this file under the same +# distribution terms that you use for the rest of that program. # -# GNU Libtool is free software; you can redistribute it and/or -# modify it under the terms of the GNU General Public License as -# published by the Free Software Foundation; either version 2 of -# the License, or (at your option) any later version. -# -# As a special exception to the GNU General Public License, -# if you distribute this file as part of a program or library that -# is built using GNU Libtool, you may include this file under the -# same distribution terms that you use for the rest of that program. -# -# GNU Libtool is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of +# GNU Libtool is distributed in the hope that it will be useful, but +# WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License -# along with GNU Libtool; see the file COPYING. If not, a copy -# can be downloaded from http://www.gnu.org/licenses/gpl.html, or -# obtained by writing to the Free Software Foundation, Inc., -# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. +# along with this program. If not, see . ]) -# serial 57 LT_INIT +# serial 58 LT_INIT # LT_PREREQ(VERSION) # ------------------ # Complain and exit if this libtool version is less that VERSION. 
m4_defun([LT_PREREQ], [m4_if(m4_version_compare(m4_defn([LT_PACKAGE_VERSION]), [$1]), -1, [m4_default([$3], [m4_fatal([Libtool version $1 or higher is required], 63)])], [$2])]) # _LT_CHECK_BUILDDIR # ------------------ # Complain if the absolute build directory name contains unusual characters m4_defun([_LT_CHECK_BUILDDIR], [case `pwd` in *\ * | *\ *) AC_MSG_WARN([Libtool does not cope well with whitespace in `pwd`]) ;; esac ]) # LT_INIT([OPTIONS]) # ------------------ AC_DEFUN([LT_INIT], -[AC_PREREQ([2.58])dnl We use AC_INCLUDES_DEFAULT +[AC_PREREQ([2.62])dnl We use AC_PATH_PROGS_FEATURE_CHECK AC_REQUIRE([AC_CONFIG_AUX_DIR_DEFAULT])dnl AC_BEFORE([$0], [LT_LANG])dnl AC_BEFORE([$0], [LT_OUTPUT])dnl AC_BEFORE([$0], [LTDL_INIT])dnl m4_require([_LT_CHECK_BUILDDIR])dnl dnl Autoconf doesn't catch unexpanded LT_ macros by default: m4_pattern_forbid([^_?LT_[A-Z_]+$])dnl m4_pattern_allow([^(_LT_EOF|LT_DLGLOBAL|LT_DLLAZY_OR_NOW|LT_MULTI_MODULE)$])dnl dnl aclocal doesn't pull ltoptions.m4, ltsugar.m4, or ltversion.m4 dnl unless we require an AC_DEFUNed macro: AC_REQUIRE([LTOPTIONS_VERSION])dnl AC_REQUIRE([LTSUGAR_VERSION])dnl AC_REQUIRE([LTVERSION_VERSION])dnl AC_REQUIRE([LTOBSOLETE_VERSION])dnl m4_require([_LT_PROG_LTMAIN])dnl _LT_SHELL_INIT([SHELL=${CONFIG_SHELL-/bin/sh}]) dnl Parse OPTIONS _LT_SET_OPTIONS([$0], [$1]) # This can be used to rebuild libtool when needed -LIBTOOL_DEPS="$ltmain" +LIBTOOL_DEPS=$ltmain # Always use our own libtool. LIBTOOL='$(SHELL) $(top_builddir)/libtool' AC_SUBST(LIBTOOL)dnl _LT_SETUP # Only expand once: m4_define([LT_INIT]) ])# LT_INIT # Old names: AU_ALIAS([AC_PROG_LIBTOOL], [LT_INIT]) AU_ALIAS([AM_PROG_LIBTOOL], [LT_INIT]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_PROG_LIBTOOL], []) dnl AC_DEFUN([AM_PROG_LIBTOOL], []) +# _LT_PREPARE_CC_BASENAME +# ----------------------- +m4_defun([_LT_PREPARE_CC_BASENAME], [ +# Calculate cc_basename. Skip known compiler wrappers and cross-prefix. 
+func_cc_basename () +{ + for cc_temp in @S|@*""; do + case $cc_temp in + compile | *[[\\/]]compile | ccache | *[[\\/]]ccache ) ;; + distcc | *[[\\/]]distcc | purify | *[[\\/]]purify ) ;; + \-*) ;; + *) break;; + esac + done + func_cc_basename_result=`$ECHO "$cc_temp" | $SED "s%.*/%%; s%^$host_alias-%%"` +} +])# _LT_PREPARE_CC_BASENAME + + # _LT_CC_BASENAME(CC) # ------------------- -# Calculate cc_basename. Skip known compiler wrappers and cross-prefix. +# It would be clearer to call AC_REQUIREs from _LT_PREPARE_CC_BASENAME, +# but that macro is also expanded into generated libtool script, which +# arranges for $SED and $ECHO to be set by different means. m4_defun([_LT_CC_BASENAME], -[for cc_temp in $1""; do - case $cc_temp in - compile | *[[\\/]]compile | ccache | *[[\\/]]ccache ) ;; - distcc | *[[\\/]]distcc | purify | *[[\\/]]purify ) ;; - \-*) ;; - *) break;; - esac -done -cc_basename=`$ECHO "$cc_temp" | $SED "s%.*/%%; s%^$host_alias-%%"` +[m4_require([_LT_PREPARE_CC_BASENAME])dnl +AC_REQUIRE([_LT_DECL_SED])dnl +AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH])dnl +func_cc_basename $1 +cc_basename=$func_cc_basename_result ]) # _LT_FILEUTILS_DEFAULTS # ---------------------- # It is okay to use these file commands and assume they have been set -# sensibly after `m4_require([_LT_FILEUTILS_DEFAULTS])'. +# sensibly after 'm4_require([_LT_FILEUTILS_DEFAULTS])'. 
m4_defun([_LT_FILEUTILS_DEFAULTS], [: ${CP="cp -f"} : ${MV="mv -f"} : ${RM="rm -f"} ])# _LT_FILEUTILS_DEFAULTS # _LT_SETUP # --------- m4_defun([_LT_SETUP], [AC_REQUIRE([AC_CANONICAL_HOST])dnl AC_REQUIRE([AC_CANONICAL_BUILD])dnl AC_REQUIRE([_LT_PREPARE_SED_QUOTE_VARS])dnl AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH])dnl _LT_DECL([], [PATH_SEPARATOR], [1], [The PATH separator for the build system])dnl dnl _LT_DECL([], [host_alias], [0], [The host system])dnl _LT_DECL([], [host], [0])dnl _LT_DECL([], [host_os], [0])dnl dnl _LT_DECL([], [build_alias], [0], [The build system])dnl _LT_DECL([], [build], [0])dnl _LT_DECL([], [build_os], [0])dnl dnl AC_REQUIRE([AC_PROG_CC])dnl AC_REQUIRE([LT_PATH_LD])dnl AC_REQUIRE([LT_PATH_NM])dnl dnl AC_REQUIRE([AC_PROG_LN_S])dnl test -z "$LN_S" && LN_S="ln -s" _LT_DECL([], [LN_S], [1], [Whether we need soft or hard links])dnl dnl AC_REQUIRE([LT_CMD_MAX_LEN])dnl _LT_DECL([objext], [ac_objext], [0], [Object file suffix (normally "o")])dnl _LT_DECL([], [exeext], [0], [Executable file suffix (normally "")])dnl dnl m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_CHECK_SHELL_FEATURES])dnl m4_require([_LT_PATH_CONVERSION_FUNCTIONS])dnl m4_require([_LT_CMD_RELOAD])dnl m4_require([_LT_CHECK_MAGIC_METHOD])dnl m4_require([_LT_CHECK_SHAREDLIB_FROM_LINKLIB])dnl m4_require([_LT_CMD_OLD_ARCHIVE])dnl m4_require([_LT_CMD_GLOBAL_SYMBOLS])dnl m4_require([_LT_WITH_SYSROOT])dnl +m4_require([_LT_CMD_TRUNCATE])dnl _LT_CONFIG_LIBTOOL_INIT([ -# See if we are running on zsh, and set the options which allow our +# See if we are running on zsh, and set the options that allow our # commands through without removal of \ escapes INIT. 
-if test -n "\${ZSH_VERSION+set}" ; then +if test -n "\${ZSH_VERSION+set}"; then setopt NO_GLOB_SUBST fi ]) -if test -n "${ZSH_VERSION+set}" ; then +if test -n "${ZSH_VERSION+set}"; then setopt NO_GLOB_SUBST fi _LT_CHECK_OBJDIR m4_require([_LT_TAG_COMPILER])dnl case $host_os in aix3*) # AIX sometimes has problems with the GCC collect2 program. For some # reason, if we set the COLLECT_NAMES environment variable, the problems # vanish in a puff of smoke. - if test "X${COLLECT_NAMES+set}" != Xset; then + if test set != "${COLLECT_NAMES+set}"; then COLLECT_NAMES= export COLLECT_NAMES fi ;; esac # Global variables: ofile=libtool can_build_shared=yes -# All known linkers require a `.a' archive for static linking (except MSVC, +# All known linkers require a '.a' archive for static linking (except MSVC, # which needs '.lib'). libext=a -with_gnu_ld="$lt_cv_prog_gnu_ld" - -old_CC="$CC" -old_CFLAGS="$CFLAGS" +with_gnu_ld=$lt_cv_prog_gnu_ld + +old_CC=$CC +old_CFLAGS=$CFLAGS # Set sane defaults for various variables test -z "$CC" && CC=cc test -z "$LTCC" && LTCC=$CC test -z "$LTCFLAGS" && LTCFLAGS=$CFLAGS test -z "$LD" && LD=ld test -z "$ac_objext" && ac_objext=o _LT_CC_BASENAME([$compiler]) # Only perform the check for file, if the check method requires it test -z "$MAGIC_CMD" && MAGIC_CMD=file case $deplibs_check_method in file_magic*) if test "$file_magic_cmd" = '$MAGIC_CMD'; then _LT_PATH_MAGIC fi ;; esac # Use C for the default configuration in the libtool script LT_SUPPORTED_TAG([CC]) _LT_LANG_C_CONFIG _LT_LANG_DEFAULT_CONFIG _LT_CONFIG_COMMANDS ])# _LT_SETUP # _LT_PREPARE_SED_QUOTE_VARS # -------------------------- # Define a few sed substitution that help us do robust quoting. m4_defun([_LT_PREPARE_SED_QUOTE_VARS], [# Backslashify metacharacters that are still active within # double-quoted strings. sed_quote_subst='s/\([["`$\\]]\)/\\\1/g' # Same as above, but do not quote variable references. 
double_quote_subst='s/\([["`\\]]\)/\\\1/g' # Sed substitution to delay expansion of an escaped shell variable in a # double_quote_subst'ed string. delay_variable_subst='s/\\\\\\\\\\\$/\\\\\\$/g' # Sed substitution to delay expansion of an escaped single quote. delay_single_quote_subst='s/'\''/'\'\\\\\\\'\''/g' # Sed substitution to avoid accidental globbing in evaled expressions no_glob_subst='s/\*/\\\*/g' ]) # _LT_PROG_LTMAIN # --------------- -# Note that this code is called both from `configure', and `config.status' +# Note that this code is called both from 'configure', and 'config.status' # now that we use AC_CONFIG_COMMANDS to generate libtool. Notably, -# `config.status' has no value for ac_aux_dir unless we are using Automake, +# 'config.status' has no value for ac_aux_dir unless we are using Automake, # so we pass a copy along to make sure it has a sensible value anyway. m4_defun([_LT_PROG_LTMAIN], [m4_ifdef([AC_REQUIRE_AUX_FILE], [AC_REQUIRE_AUX_FILE([ltmain.sh])])dnl _LT_CONFIG_LIBTOOL_INIT([ac_aux_dir='$ac_aux_dir']) -ltmain="$ac_aux_dir/ltmain.sh" +ltmain=$ac_aux_dir/ltmain.sh ])# _LT_PROG_LTMAIN ## ------------------------------------- ## ## Accumulate code for creating libtool. ## ## ------------------------------------- ## # So that we can recreate a full libtool script including additional # tags, we accumulate the chunks of code to send to AC_CONFIG_COMMANDS -# in macros and then make a single call at the end using the `libtool' +# in macros and then make a single call at the end using the 'libtool' # label. # _LT_CONFIG_LIBTOOL_INIT([INIT-COMMANDS]) # ---------------------------------------- # Register INIT-COMMANDS to be passed to AC_CONFIG_COMMANDS later. m4_define([_LT_CONFIG_LIBTOOL_INIT], [m4_ifval([$1], [m4_append([_LT_OUTPUT_LIBTOOL_INIT], [$1 ])])]) # Initialize. m4_define([_LT_OUTPUT_LIBTOOL_INIT]) # _LT_CONFIG_LIBTOOL([COMMANDS]) # ------------------------------ # Register COMMANDS to be passed to AC_CONFIG_COMMANDS later. 
m4_define([_LT_CONFIG_LIBTOOL], [m4_ifval([$1], [m4_append([_LT_OUTPUT_LIBTOOL_COMMANDS], [$1 ])])]) # Initialize. m4_define([_LT_OUTPUT_LIBTOOL_COMMANDS]) # _LT_CONFIG_SAVE_COMMANDS([COMMANDS], [INIT_COMMANDS]) # ----------------------------------------------------- m4_defun([_LT_CONFIG_SAVE_COMMANDS], [_LT_CONFIG_LIBTOOL([$1]) _LT_CONFIG_LIBTOOL_INIT([$2]) ]) # _LT_FORMAT_COMMENT([COMMENT]) # ----------------------------- # Add leading comment marks to the start of each line, and a trailing # full-stop to the whole comment if one is not present already. m4_define([_LT_FORMAT_COMMENT], [m4_ifval([$1], [ m4_bpatsubst([m4_bpatsubst([$1], [^ *], [# ])], [['`$\]], [\\\&])]m4_bmatch([$1], [[!?.]$], [], [.]) )]) ## ------------------------ ## ## FIXME: Eliminate VARNAME ## ## ------------------------ ## # _LT_DECL([CONFIGNAME], VARNAME, VALUE, [DESCRIPTION], [IS-TAGGED?]) # ------------------------------------------------------------------- # CONFIGNAME is the name given to the value in the libtool script. # VARNAME is the (base) name used in the configure script. # VALUE may be 0, 1 or 2 for a computed quote escaped value based on # VARNAME. Any other value will be used directly. 
m4_define([_LT_DECL], [lt_if_append_uniq([lt_decl_varnames], [$2], [, ], [lt_dict_add_subkey([lt_decl_dict], [$2], [libtool_name], [m4_ifval([$1], [$1], [$2])]) lt_dict_add_subkey([lt_decl_dict], [$2], [value], [$3]) m4_ifval([$4], [lt_dict_add_subkey([lt_decl_dict], [$2], [description], [$4])]) lt_dict_add_subkey([lt_decl_dict], [$2], [tagged?], [m4_ifval([$5], [yes], [no])])]) ]) # _LT_TAGDECL([CONFIGNAME], VARNAME, VALUE, [DESCRIPTION]) # -------------------------------------------------------- m4_define([_LT_TAGDECL], [_LT_DECL([$1], [$2], [$3], [$4], [yes])]) # lt_decl_tag_varnames([SEPARATOR], [VARNAME1...]) # ------------------------------------------------ m4_define([lt_decl_tag_varnames], [_lt_decl_filter([tagged?], [yes], $@)]) # _lt_decl_filter(SUBKEY, VALUE, [SEPARATOR], [VARNAME1..]) # --------------------------------------------------------- m4_define([_lt_decl_filter], [m4_case([$#], [0], [m4_fatal([$0: too few arguments: $#])], [1], [m4_fatal([$0: too few arguments: $#: $1])], [2], [lt_dict_filter([lt_decl_dict], [$1], [$2], [], lt_decl_varnames)], [3], [lt_dict_filter([lt_decl_dict], [$1], [$2], [$3], lt_decl_varnames)], [lt_dict_filter([lt_decl_dict], $@)])[]dnl ]) # lt_decl_quote_varnames([SEPARATOR], [VARNAME1...]) # -------------------------------------------------- m4_define([lt_decl_quote_varnames], [_lt_decl_filter([value], [1], $@)]) # lt_decl_dquote_varnames([SEPARATOR], [VARNAME1...]) # --------------------------------------------------- m4_define([lt_decl_dquote_varnames], [_lt_decl_filter([value], [2], $@)]) # lt_decl_varnames_tagged([SEPARATOR], [VARNAME1...]) # --------------------------------------------------- m4_define([lt_decl_varnames_tagged], [m4_assert([$# <= 2])dnl _$0(m4_quote(m4_default([$1], [[, ]])), m4_ifval([$2], [[$2]], [m4_dquote(lt_decl_tag_varnames)]), m4_split(m4_normalize(m4_quote(_LT_TAGS)), [ ]))]) m4_define([_lt_decl_varnames_tagged], [m4_ifval([$3], [lt_combine([$1], [$2], [_], $3)])]) # 
lt_decl_all_varnames([SEPARATOR], [VARNAME1...]) # ------------------------------------------------ m4_define([lt_decl_all_varnames], [_$0(m4_quote(m4_default([$1], [[, ]])), m4_if([$2], [], m4_quote(lt_decl_varnames), m4_quote(m4_shift($@))))[]dnl ]) m4_define([_lt_decl_all_varnames], [lt_join($@, lt_decl_varnames_tagged([$1], lt_decl_tag_varnames([[, ]], m4_shift($@))))dnl ]) # _LT_CONFIG_STATUS_DECLARE([VARNAME]) # ------------------------------------ -# Quote a variable value, and forward it to `config.status' so that its -# declaration there will have the same value as in `configure'. VARNAME +# Quote a variable value, and forward it to 'config.status' so that its +# declaration there will have the same value as in 'configure'. VARNAME # must have a single quote delimited value for this to work. m4_define([_LT_CONFIG_STATUS_DECLARE], [$1='`$ECHO "$][$1" | $SED "$delay_single_quote_subst"`']) # _LT_CONFIG_STATUS_DECLARATIONS # ------------------------------ # We delimit libtool config variables with single quotes, so when # we write them to config.status, we have to be sure to quote all # embedded single quotes properly. In configure, this macro expands # each variable declared with _LT_DECL (and _LT_TAGDECL) into: # # ='`$ECHO "$" | $SED "$delay_single_quote_subst"`' m4_defun([_LT_CONFIG_STATUS_DECLARATIONS], [m4_foreach([_lt_var], m4_quote(lt_decl_all_varnames), [m4_n([_LT_CONFIG_STATUS_DECLARE(_lt_var)])])]) # _LT_LIBTOOL_TAGS # ---------------- # Output comment and list of tags supported by the script m4_defun([_LT_LIBTOOL_TAGS], [_LT_FORMAT_COMMENT([The names of the tagged configurations supported by this script])dnl -available_tags="_LT_TAGS"dnl +available_tags='_LT_TAGS'dnl ]) # _LT_LIBTOOL_DECLARE(VARNAME, [TAG]) # ----------------------------------- # Extract the dictionary values for VARNAME (optionally with TAG) and # expand to a commented shell variable setting: # # # Some comment about what VAR is for. 
# visible_name=$lt_internal_name m4_define([_LT_LIBTOOL_DECLARE], [_LT_FORMAT_COMMENT(m4_quote(lt_dict_fetch([lt_decl_dict], [$1], [description])))[]dnl m4_pushdef([_libtool_name], m4_quote(lt_dict_fetch([lt_decl_dict], [$1], [libtool_name])))[]dnl m4_case(m4_quote(lt_dict_fetch([lt_decl_dict], [$1], [value])), [0], [_libtool_name=[$]$1], [1], [_libtool_name=$lt_[]$1], [2], [_libtool_name=$lt_[]$1], [_libtool_name=lt_dict_fetch([lt_decl_dict], [$1], [value])])[]dnl m4_ifval([$2], [_$2])[]m4_popdef([_libtool_name])[]dnl ]) # _LT_LIBTOOL_CONFIG_VARS # ----------------------- # Produce commented declarations of non-tagged libtool config variables -# suitable for insertion in the LIBTOOL CONFIG section of the `libtool' +# suitable for insertion in the LIBTOOL CONFIG section of the 'libtool' # script. Tagged libtool config variables (even for the LIBTOOL CONFIG # section) are produced by _LT_LIBTOOL_TAG_VARS. m4_defun([_LT_LIBTOOL_CONFIG_VARS], [m4_foreach([_lt_var], m4_quote(_lt_decl_filter([tagged?], [no], [], lt_decl_varnames)), [m4_n([_LT_LIBTOOL_DECLARE(_lt_var)])])]) # _LT_LIBTOOL_TAG_VARS(TAG) # ------------------------- m4_define([_LT_LIBTOOL_TAG_VARS], [m4_foreach([_lt_var], m4_quote(lt_decl_tag_varnames), [m4_n([_LT_LIBTOOL_DECLARE(_lt_var, [$1])])])]) # _LT_TAGVAR(VARNAME, [TAGNAME]) # ------------------------------ m4_define([_LT_TAGVAR], [m4_ifval([$2], [$1_$2], [$1])]) # _LT_CONFIG_COMMANDS # ------------------- # Send accumulated output to $CONFIG_STATUS. Thanks to the lists of # variables for single and double quote escaping we saved from calls # to _LT_DECL, we can put quote escaped variables declarations -# into `config.status', and then the shell code to quote escape them in -# for loops in `config.status'. Finally, any additional code accumulated +# into 'config.status', and then the shell code to quote escape them in +# for loops in 'config.status'. Finally, any additional code accumulated # from calls to _LT_CONFIG_LIBTOOL_INIT is expanded. 
m4_defun([_LT_CONFIG_COMMANDS], [AC_PROVIDE_IFELSE([LT_OUTPUT], dnl If the libtool generation code has been placed in $CONFIG_LT, dnl instead of duplicating it all over again into config.status, dnl then we will have config.status run $CONFIG_LT later, so it dnl needs to know what name is stored there: [AC_CONFIG_COMMANDS([libtool], [$SHELL $CONFIG_LT || AS_EXIT(1)], [CONFIG_LT='$CONFIG_LT'])], dnl If the libtool generation code is destined for config.status, dnl expand the accumulated commands and init code now: [AC_CONFIG_COMMANDS([libtool], [_LT_OUTPUT_LIBTOOL_COMMANDS], [_LT_OUTPUT_LIBTOOL_COMMANDS_INIT])]) ])#_LT_CONFIG_COMMANDS # Initialize. m4_define([_LT_OUTPUT_LIBTOOL_COMMANDS_INIT], [ # The HP-UX ksh and POSIX shell print the target directory to stdout # if CDPATH is set. (unset CDPATH) >/dev/null 2>&1 && unset CDPATH sed_quote_subst='$sed_quote_subst' double_quote_subst='$double_quote_subst' delay_variable_subst='$delay_variable_subst' _LT_CONFIG_STATUS_DECLARATIONS LTCC='$LTCC' LTCFLAGS='$LTCFLAGS' compiler='$compiler_DEFAULT' # A function that is used when there is no print builtin or printf. func_fallback_echo () { eval 'cat <<_LTECHO_EOF \$[]1 _LTECHO_EOF' } # Quote evaled strings. for var in lt_decl_all_varnames([[ \ ]], lt_decl_quote_varnames); do case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in *[[\\\\\\\`\\"\\\$]]*) - eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED \\"\\\$sed_quote_subst\\"\\\`\\\\\\"" + eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED \\"\\\$sed_quote_subst\\"\\\`\\\\\\"" ## exclude from sc_prohibit_nested_quotes ;; *) eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\"" ;; esac done # Double-quote double-evaled strings. 
for var in lt_decl_all_varnames([[ \ ]], lt_decl_dquote_varnames); do case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in *[[\\\\\\\`\\"\\\$]]*) - eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED -e \\"\\\$double_quote_subst\\" -e \\"\\\$sed_quote_subst\\" -e \\"\\\$delay_variable_subst\\"\\\`\\\\\\"" + eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED -e \\"\\\$double_quote_subst\\" -e \\"\\\$sed_quote_subst\\" -e \\"\\\$delay_variable_subst\\"\\\`\\\\\\"" ## exclude from sc_prohibit_nested_quotes ;; *) eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\"" ;; esac done _LT_OUTPUT_LIBTOOL_INIT ]) # _LT_GENERATED_FILE_INIT(FILE, [COMMENT]) # ------------------------------------ # Generate a child script FILE with all initialization necessary to # reuse the environment learned by the parent script, and make the # file executable. If COMMENT is supplied, it is inserted after the -# `#!' sequence but before initialization text begins. After this +# '#!' sequence but before initialization text begins. After this # macro, additional text can be appended to FILE to form the body of # the child script. The macro ends with non-zero status if the # file could not be fully written (such as if the disk is full). m4_ifdef([AS_INIT_GENERATED], [m4_defun([_LT_GENERATED_FILE_INIT],[AS_INIT_GENERATED($@)])], [m4_defun([_LT_GENERATED_FILE_INIT], [m4_require([AS_PREPARE])]dnl [m4_pushdef([AS_MESSAGE_LOG_FD])]dnl [lt_write_fail=0 cat >$1 <<_ASEOF || lt_write_fail=1 #! $SHELL # Generated by $as_me. $2 SHELL=\${CONFIG_SHELL-$SHELL} export SHELL _ASEOF cat >>$1 <<\_ASEOF || lt_write_fail=1 AS_SHELL_SANITIZE _AS_PREPARE exec AS_MESSAGE_FD>&1 _ASEOF -test $lt_write_fail = 0 && chmod +x $1[]dnl +test 0 = "$lt_write_fail" && chmod +x $1[]dnl m4_popdef([AS_MESSAGE_LOG_FD])])])# _LT_GENERATED_FILE_INIT # LT_OUTPUT # --------- # This macro allows early generation of the libtool script (before # AC_OUTPUT is called), incase it is used in configure for compilation # tests. 
AC_DEFUN([LT_OUTPUT], [: ${CONFIG_LT=./config.lt} AC_MSG_NOTICE([creating $CONFIG_LT]) _LT_GENERATED_FILE_INIT(["$CONFIG_LT"], [# Run this file to recreate a libtool stub with the current configuration.]) cat >>"$CONFIG_LT" <<\_LTEOF lt_cl_silent=false exec AS_MESSAGE_LOG_FD>>config.log { echo AS_BOX([Running $as_me.]) } >&AS_MESSAGE_LOG_FD lt_cl_help="\ -\`$as_me' creates a local libtool stub from the current configuration, +'$as_me' creates a local libtool stub from the current configuration, for use in further configure time tests before the real libtool is generated. Usage: $[0] [[OPTIONS]] -h, --help print this help, then exit -V, --version print version number, then exit -q, --quiet do not print progress messages -d, --debug don't remove temporary files Report bugs to ." lt_cl_version="\ m4_ifset([AC_PACKAGE_NAME], [AC_PACKAGE_NAME ])config.lt[]dnl m4_ifset([AC_PACKAGE_VERSION], [ AC_PACKAGE_VERSION]) configured by $[0], generated by m4_PACKAGE_STRING. Copyright (C) 2011 Free Software Foundation, Inc. This config.lt script is free software; the Free Software Foundation gives unlimited permision to copy, distribute and modify it." 
-while test $[#] != 0 +while test 0 != $[#] do case $[1] in --version | --v* | -V ) echo "$lt_cl_version"; exit 0 ;; --help | --h* | -h ) echo "$lt_cl_help"; exit 0 ;; --debug | --d* | -d ) debug=: ;; --quiet | --q* | --silent | --s* | -q ) lt_cl_silent=: ;; -*) AC_MSG_ERROR([unrecognized option: $[1] -Try \`$[0] --help' for more information.]) ;; +Try '$[0] --help' for more information.]) ;; *) AC_MSG_ERROR([unrecognized argument: $[1] -Try \`$[0] --help' for more information.]) ;; +Try '$[0] --help' for more information.]) ;; esac shift done if $lt_cl_silent; then exec AS_MESSAGE_FD>/dev/null fi _LTEOF cat >>"$CONFIG_LT" <<_LTEOF _LT_OUTPUT_LIBTOOL_COMMANDS_INIT _LTEOF cat >>"$CONFIG_LT" <<\_LTEOF AC_MSG_NOTICE([creating $ofile]) _LT_OUTPUT_LIBTOOL_COMMANDS AS_EXIT(0) _LTEOF chmod +x "$CONFIG_LT" # configure is writing to config.log, but config.lt does its own redirection, # appending to config.log, which fails on DOS, as config.log is still kept # open by configure. Here we exec the FD to /dev/null, effectively closing # config.log, so it can be properly (re)opened and appended to by config.lt. lt_cl_success=: -test "$silent" = yes && +test yes = "$silent" && lt_config_lt_args="$lt_config_lt_args --quiet" exec AS_MESSAGE_LOG_FD>/dev/null $SHELL "$CONFIG_LT" $lt_config_lt_args || lt_cl_success=false exec AS_MESSAGE_LOG_FD>>config.log $lt_cl_success || AS_EXIT(1) ])# LT_OUTPUT # _LT_CONFIG(TAG) # --------------- # If TAG is the built-in tag, create an initial libtool script with a # default configuration from the untagged config vars. Otherwise add code # to config.status for appending the configuration named by TAG from the # matching tagged config vars. 
m4_defun([_LT_CONFIG], [m4_require([_LT_FILEUTILS_DEFAULTS])dnl _LT_CONFIG_SAVE_COMMANDS([ m4_define([_LT_TAG], m4_if([$1], [], [C], [$1]))dnl m4_if(_LT_TAG, [C], [ - # See if we are running on zsh, and set the options which allow our + # See if we are running on zsh, and set the options that allow our # commands through without removal of \ escapes. - if test -n "${ZSH_VERSION+set}" ; then + if test -n "${ZSH_VERSION+set}"; then setopt NO_GLOB_SUBST fi - cfgfile="${ofile}T" + cfgfile=${ofile}T trap "$RM \"$cfgfile\"; exit 1" 1 2 15 $RM "$cfgfile" cat <<_LT_EOF >> "$cfgfile" #! $SHELL - -# `$ECHO "$ofile" | sed 's%^.*/%%'` - Provide generalized library-building support services. -# Generated automatically by $as_me ($PACKAGE$TIMESTAMP) $VERSION -# Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`: +# Generated automatically by $as_me ($PACKAGE) $VERSION # NOTE: Changes made to this file will be lost: look at ltmain.sh. -# + +# Provide generalized library-building support services. +# Written by Gordon Matzigkeit, 1996 + _LT_COPYING _LT_LIBTOOL_TAGS +# Configured defaults for sys_lib_dlsearch_path munging. +: \${LT_SYS_LIBRARY_PATH="$configure_time_lt_sys_library_path"} + # ### BEGIN LIBTOOL CONFIG _LT_LIBTOOL_CONFIG_VARS _LT_LIBTOOL_TAG_VARS # ### END LIBTOOL CONFIG _LT_EOF + cat <<'_LT_EOF' >> "$cfgfile" + +# ### BEGIN FUNCTIONS SHARED WITH CONFIGURE + +_LT_PREPARE_MUNGE_PATH_LIST +_LT_PREPARE_CC_BASENAME + +# ### END FUNCTIONS SHARED WITH CONFIGURE + +_LT_EOF + case $host_os in aix3*) cat <<\_LT_EOF >> "$cfgfile" # AIX sometimes has problems with the GCC collect2 program. For some # reason, if we set the COLLECT_NAMES environment variable, the problems # vanish in a puff of smoke. 
-if test "X${COLLECT_NAMES+set}" != Xset; then +if test set != "${COLLECT_NAMES+set}"; then COLLECT_NAMES= export COLLECT_NAMES fi _LT_EOF ;; esac _LT_PROG_LTMAIN # We use sed instead of cat because bash on DJGPP gets confused if # if finds mixed CR/LF and LF-only lines. Since sed operates in # text mode, it properly converts lines to CR/LF. This bash problem # is reportedly fixed, but why not run on old versions too? sed '$q' "$ltmain" >> "$cfgfile" \ || (rm -f "$cfgfile"; exit 1) - _LT_PROG_REPLACE_SHELLFNS - mv -f "$cfgfile" "$ofile" || (rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile") chmod +x "$ofile" ], [cat <<_LT_EOF >> "$ofile" dnl Unfortunately we have to use $1 here, since _LT_TAG is not expanded dnl in a comment (ie after a #). # ### BEGIN LIBTOOL TAG CONFIG: $1 _LT_LIBTOOL_TAG_VARS(_LT_TAG) # ### END LIBTOOL TAG CONFIG: $1 _LT_EOF ])dnl /m4_if ], [m4_if([$1], [], [ PACKAGE='$PACKAGE' VERSION='$VERSION' - TIMESTAMP='$TIMESTAMP' RM='$RM' ofile='$ofile'], []) ])dnl /_LT_CONFIG_SAVE_COMMANDS ])# _LT_CONFIG # LT_SUPPORTED_TAG(TAG) # --------------------- # Trace this macro to discover what tags are supported by the libtool # --tag option, using: # autoconf --trace 'LT_SUPPORTED_TAG:$1' AC_DEFUN([LT_SUPPORTED_TAG], []) # C support is built-in for now m4_define([_LT_LANG_C_enabled], []) m4_define([_LT_TAGS], []) # LT_LANG(LANG) # ------------- # Enable libtool support for the given language if not already enabled. 
AC_DEFUN([LT_LANG], [AC_BEFORE([$0], [LT_OUTPUT])dnl m4_case([$1], [C], [_LT_LANG(C)], [C++], [_LT_LANG(CXX)], [Go], [_LT_LANG(GO)], [Java], [_LT_LANG(GCJ)], [Fortran 77], [_LT_LANG(F77)], [Fortran], [_LT_LANG(FC)], [Windows Resource], [_LT_LANG(RC)], [m4_ifdef([_LT_LANG_]$1[_CONFIG], [_LT_LANG($1)], [m4_fatal([$0: unsupported language: "$1"])])])dnl ])# LT_LANG # _LT_LANG(LANGNAME) # ------------------ m4_defun([_LT_LANG], [m4_ifdef([_LT_LANG_]$1[_enabled], [], [LT_SUPPORTED_TAG([$1])dnl m4_append([_LT_TAGS], [$1 ])dnl m4_define([_LT_LANG_]$1[_enabled], [])dnl _LT_LANG_$1_CONFIG($1)])dnl ])# _LT_LANG m4_ifndef([AC_PROG_GO], [ ############################################################ # NOTE: This macro has been submitted for inclusion into # # GNU Autoconf as AC_PROG_GO. When it is available in # # a released version of Autoconf we should remove this # # macro and use it instead. # ############################################################ m4_defun([AC_PROG_GO], [AC_LANG_PUSH(Go)dnl AC_ARG_VAR([GOC], [Go compiler command])dnl AC_ARG_VAR([GOFLAGS], [Go compiler flags])dnl _AC_ARG_VAR_LDFLAGS()dnl AC_CHECK_TOOL(GOC, gccgo) if test -z "$GOC"; then if test -n "$ac_tool_prefix"; then AC_CHECK_PROG(GOC, [${ac_tool_prefix}gccgo], [${ac_tool_prefix}gccgo]) fi fi if test -z "$GOC"; then AC_CHECK_PROG(GOC, gccgo, gccgo, false) fi ])#m4_defun ])#m4_ifndef # _LT_LANG_DEFAULT_CONFIG # ----------------------- m4_defun([_LT_LANG_DEFAULT_CONFIG], [AC_PROVIDE_IFELSE([AC_PROG_CXX], [LT_LANG(CXX)], [m4_define([AC_PROG_CXX], defn([AC_PROG_CXX])[LT_LANG(CXX)])]) AC_PROVIDE_IFELSE([AC_PROG_F77], [LT_LANG(F77)], [m4_define([AC_PROG_F77], defn([AC_PROG_F77])[LT_LANG(F77)])]) AC_PROVIDE_IFELSE([AC_PROG_FC], [LT_LANG(FC)], [m4_define([AC_PROG_FC], defn([AC_PROG_FC])[LT_LANG(FC)])]) dnl The call to [A][M_PROG_GCJ] is quoted like that to stop aclocal dnl pulling things in needlessly. 
AC_PROVIDE_IFELSE([AC_PROG_GCJ], [LT_LANG(GCJ)], [AC_PROVIDE_IFELSE([A][M_PROG_GCJ], [LT_LANG(GCJ)], [AC_PROVIDE_IFELSE([LT_PROG_GCJ], [LT_LANG(GCJ)], [m4_ifdef([AC_PROG_GCJ], [m4_define([AC_PROG_GCJ], defn([AC_PROG_GCJ])[LT_LANG(GCJ)])]) m4_ifdef([A][M_PROG_GCJ], [m4_define([A][M_PROG_GCJ], defn([A][M_PROG_GCJ])[LT_LANG(GCJ)])]) m4_ifdef([LT_PROG_GCJ], [m4_define([LT_PROG_GCJ], defn([LT_PROG_GCJ])[LT_LANG(GCJ)])])])])]) AC_PROVIDE_IFELSE([AC_PROG_GO], [LT_LANG(GO)], [m4_define([AC_PROG_GO], defn([AC_PROG_GO])[LT_LANG(GO)])]) AC_PROVIDE_IFELSE([LT_PROG_RC], [LT_LANG(RC)], [m4_define([LT_PROG_RC], defn([LT_PROG_RC])[LT_LANG(RC)])]) ])# _LT_LANG_DEFAULT_CONFIG # Obsolete macros: AU_DEFUN([AC_LIBTOOL_CXX], [LT_LANG(C++)]) AU_DEFUN([AC_LIBTOOL_F77], [LT_LANG(Fortran 77)]) AU_DEFUN([AC_LIBTOOL_FC], [LT_LANG(Fortran)]) AU_DEFUN([AC_LIBTOOL_GCJ], [LT_LANG(Java)]) AU_DEFUN([AC_LIBTOOL_RC], [LT_LANG(Windows Resource)]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_CXX], []) dnl AC_DEFUN([AC_LIBTOOL_F77], []) dnl AC_DEFUN([AC_LIBTOOL_FC], []) dnl AC_DEFUN([AC_LIBTOOL_GCJ], []) dnl AC_DEFUN([AC_LIBTOOL_RC], []) # _LT_TAG_COMPILER # ---------------- m4_defun([_LT_TAG_COMPILER], [AC_REQUIRE([AC_PROG_CC])dnl _LT_DECL([LTCC], [CC], [1], [A C compiler])dnl _LT_DECL([LTCFLAGS], [CFLAGS], [1], [LTCC compiler flags])dnl _LT_TAGDECL([CC], [compiler], [1], [A language specific compiler])dnl _LT_TAGDECL([with_gcc], [GCC], [0], [Is the compiler the GNU compiler?])dnl # If no C compiler was specified, use CC. LTCC=${LTCC-"$CC"} # If no C compiler flags were specified, use CFLAGS. LTCFLAGS=${LTCFLAGS-"$CFLAGS"} # Allow CC to be a program name with arguments. compiler=$CC ])# _LT_TAG_COMPILER # _LT_COMPILER_BOILERPLATE # ------------------------ # Check for compiler boilerplate output or warnings with # the simple compiler test code. 
m4_defun([_LT_COMPILER_BOILERPLATE], [m4_require([_LT_DECL_SED])dnl ac_outfile=conftest.$ac_objext echo "$lt_simple_compile_test_code" >conftest.$ac_ext eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_compiler_boilerplate=`cat conftest.err` $RM conftest* ])# _LT_COMPILER_BOILERPLATE # _LT_LINKER_BOILERPLATE # ---------------------- # Check for linker boilerplate output or warnings with # the simple link test code. m4_defun([_LT_LINKER_BOILERPLATE], [m4_require([_LT_DECL_SED])dnl ac_outfile=conftest.$ac_objext echo "$lt_simple_link_test_code" >conftest.$ac_ext eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err _lt_linker_boilerplate=`cat conftest.err` $RM -r conftest* ])# _LT_LINKER_BOILERPLATE # _LT_REQUIRED_DARWIN_CHECKS # ------------------------- m4_defun_once([_LT_REQUIRED_DARWIN_CHECKS],[ case $host_os in rhapsody* | darwin*) AC_CHECK_TOOL([DSYMUTIL], [dsymutil], [:]) AC_CHECK_TOOL([NMEDIT], [nmedit], [:]) AC_CHECK_TOOL([LIPO], [lipo], [:]) AC_CHECK_TOOL([OTOOL], [otool], [:]) AC_CHECK_TOOL([OTOOL64], [otool64], [:]) _LT_DECL([], [DSYMUTIL], [1], [Tool to manipulate archived DWARF debug symbol files on Mac OS X]) _LT_DECL([], [NMEDIT], [1], [Tool to change global to local symbols on Mac OS X]) _LT_DECL([], [LIPO], [1], [Tool to manipulate fat objects and archives on Mac OS X]) _LT_DECL([], [OTOOL], [1], [ldd/readelf like tool for Mach-O binaries on Mac OS X]) _LT_DECL([], [OTOOL64], [1], [ldd/readelf like tool for 64 bit Mach-O binaries on Mac OS X 10.4]) AC_CACHE_CHECK([for -single_module linker flag],[lt_cv_apple_cc_single_mod], [lt_cv_apple_cc_single_mod=no - if test -z "${LT_MULTI_MODULE}"; then + if test -z "$LT_MULTI_MODULE"; then # By default we will add the -single_module flag. You can override # by either setting the environment variable LT_MULTI_MODULE # non-empty at configure time, or by adding -multi_module to the # link flags. 
rm -rf libconftest.dylib* echo "int foo(void){return 1;}" > conftest.c echo "$LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \ -dynamiclib -Wl,-single_module conftest.c" >&AS_MESSAGE_LOG_FD $LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \ -dynamiclib -Wl,-single_module conftest.c 2>conftest.err _lt_result=$? # If there is a non-empty error log, and "single_module" # appears in it, assume the flag caused a linker warning if test -s conftest.err && $GREP single_module conftest.err; then cat conftest.err >&AS_MESSAGE_LOG_FD # Otherwise, if the output was created with a 0 exit code from # the compiler, it worked. - elif test -f libconftest.dylib && test $_lt_result -eq 0; then + elif test -f libconftest.dylib && test 0 = "$_lt_result"; then lt_cv_apple_cc_single_mod=yes else cat conftest.err >&AS_MESSAGE_LOG_FD fi rm -rf libconftest.dylib* rm -f conftest.* fi]) AC_CACHE_CHECK([for -exported_symbols_list linker flag], [lt_cv_ld_exported_symbols_list], [lt_cv_ld_exported_symbols_list=no save_LDFLAGS=$LDFLAGS echo "_main" > conftest.sym LDFLAGS="$LDFLAGS -Wl,-exported_symbols_list,conftest.sym" AC_LINK_IFELSE([AC_LANG_PROGRAM([],[])], [lt_cv_ld_exported_symbols_list=yes], [lt_cv_ld_exported_symbols_list=no]) - LDFLAGS="$save_LDFLAGS" + LDFLAGS=$save_LDFLAGS ]) AC_CACHE_CHECK([for -force_load linker flag],[lt_cv_ld_force_load], [lt_cv_ld_force_load=no cat > conftest.c << _LT_EOF int forced_loaded() { return 2;} _LT_EOF echo "$LTCC $LTCFLAGS -c -o conftest.o conftest.c" >&AS_MESSAGE_LOG_FD $LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&AS_MESSAGE_LOG_FD echo "$AR cru libconftest.a conftest.o" >&AS_MESSAGE_LOG_FD $AR cru libconftest.a conftest.o 2>&AS_MESSAGE_LOG_FD echo "$RANLIB libconftest.a" >&AS_MESSAGE_LOG_FD $RANLIB libconftest.a 2>&AS_MESSAGE_LOG_FD cat > conftest.c << _LT_EOF int main() { return 0;} _LT_EOF echo "$LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a" >&AS_MESSAGE_LOG_FD $LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c 
-Wl,-force_load,./libconftest.a 2>conftest.err _lt_result=$? if test -s conftest.err && $GREP force_load conftest.err; then cat conftest.err >&AS_MESSAGE_LOG_FD - elif test -f conftest && test $_lt_result -eq 0 && $GREP forced_load conftest >/dev/null 2>&1 ; then + elif test -f conftest && test 0 = "$_lt_result" && $GREP forced_load conftest >/dev/null 2>&1; then lt_cv_ld_force_load=yes else cat conftest.err >&AS_MESSAGE_LOG_FD fi rm -f conftest.err libconftest.a conftest conftest.c rm -rf conftest.dSYM ]) case $host_os in rhapsody* | darwin1.[[012]]) - _lt_dar_allow_undefined='${wl}-undefined ${wl}suppress' ;; + _lt_dar_allow_undefined='$wl-undefined ${wl}suppress' ;; darwin1.*) - _lt_dar_allow_undefined='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' ;; + _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;; darwin*) # darwin 5.x on # if running on 10.5 or later, the deployment target defaults # to the OS version, if on x86, and 10.4, the deployment # target defaults to 10.4. Don't you love it? 
case ${MACOSX_DEPLOYMENT_TARGET-10.0},$host in 10.0,*86*-darwin8*|10.0,*-darwin[[91]]*) - _lt_dar_allow_undefined='${wl}-undefined ${wl}dynamic_lookup' ;; - 10.[[012]]*) - _lt_dar_allow_undefined='${wl}-flat_namespace ${wl}-undefined ${wl}suppress' ;; + _lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;; + 10.[[012]][[,.]]*) + _lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;; 10.*) - _lt_dar_allow_undefined='${wl}-undefined ${wl}dynamic_lookup' ;; + _lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;; esac ;; esac - if test "$lt_cv_apple_cc_single_mod" = "yes"; then + if test yes = "$lt_cv_apple_cc_single_mod"; then _lt_dar_single_mod='$single_module' fi - if test "$lt_cv_ld_exported_symbols_list" = "yes"; then - _lt_dar_export_syms=' ${wl}-exported_symbols_list,$output_objdir/${libname}-symbols.expsym' + if test yes = "$lt_cv_ld_exported_symbols_list"; then + _lt_dar_export_syms=' $wl-exported_symbols_list,$output_objdir/$libname-symbols.expsym' else - _lt_dar_export_syms='~$NMEDIT -s $output_objdir/${libname}-symbols.expsym ${lib}' + _lt_dar_export_syms='~$NMEDIT -s $output_objdir/$libname-symbols.expsym $lib' fi - if test "$DSYMUTIL" != ":" && test "$lt_cv_ld_force_load" = "no"; then + if test : != "$DSYMUTIL" && test no = "$lt_cv_ld_force_load"; then _lt_dsymutil='~$DSYMUTIL $lib || :' else _lt_dsymutil= fi ;; esac ]) # _LT_DARWIN_LINKER_FEATURES([TAG]) # --------------------------------- # Checks for linker and compiler features on darwin m4_defun([_LT_DARWIN_LINKER_FEATURES], [ m4_require([_LT_REQUIRED_DARWIN_CHECKS]) _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_automatic, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=unsupported - if test "$lt_cv_ld_force_load" = "yes"; then - _LT_TAGVAR(whole_archive_flag_spec, $1)='`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience ${wl}-force_load,$conv\"; done; func_echo_all 
\"$new_convenience\"`' + if test yes = "$lt_cv_ld_force_load"; then + _LT_TAGVAR(whole_archive_flag_spec, $1)='`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience $wl-force_load,$conv\"; done; func_echo_all \"$new_convenience\"`' m4_case([$1], [F77], [_LT_TAGVAR(compiler_needs_object, $1)=yes], [FC], [_LT_TAGVAR(compiler_needs_object, $1)=yes]) else _LT_TAGVAR(whole_archive_flag_spec, $1)='' fi _LT_TAGVAR(link_all_deplibs, $1)=yes - _LT_TAGVAR(allow_undefined_flag, $1)="$_lt_dar_allow_undefined" + _LT_TAGVAR(allow_undefined_flag, $1)=$_lt_dar_allow_undefined case $cc_basename in - ifort*) _lt_dar_can_shared=yes ;; + ifort*|nagfor*) _lt_dar_can_shared=yes ;; *) _lt_dar_can_shared=$GCC ;; esac - if test "$_lt_dar_can_shared" = "yes"; then + if test yes = "$_lt_dar_can_shared"; then output_verbose_link_cmd=func_echo_all - _LT_TAGVAR(archive_cmds, $1)="\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod${_lt_dsymutil}" - _LT_TAGVAR(module_cmds, $1)="\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags${_lt_dsymutil}" - _LT_TAGVAR(archive_expsym_cmds, $1)="sed 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring ${_lt_dar_single_mod}${_lt_dar_export_syms}${_lt_dsymutil}" - _LT_TAGVAR(module_expsym_cmds, $1)="sed -e 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags${_lt_dar_export_syms}${_lt_dsymutil}" + _LT_TAGVAR(archive_cmds, $1)="\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod$_lt_dsymutil" + _LT_TAGVAR(module_cmds, $1)="\$CC \$allow_undefined_flag -o \$lib 
-bundle \$libobjs \$deplibs \$compiler_flags$_lt_dsymutil" + _LT_TAGVAR(archive_expsym_cmds, $1)="sed 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod$_lt_dar_export_syms$_lt_dsymutil" + _LT_TAGVAR(module_expsym_cmds, $1)="sed -e 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags$_lt_dar_export_syms$_lt_dsymutil" m4_if([$1], [CXX], -[ if test "$lt_cv_apple_cc_single_mod" != "yes"; then - _LT_TAGVAR(archive_cmds, $1)="\$CC -r -keep_private_externs -nostdlib -o \${lib}-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \${lib}-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring${_lt_dsymutil}" - _LT_TAGVAR(archive_expsym_cmds, $1)="sed 's,^,_,' < \$export_symbols > \$output_objdir/\${libname}-symbols.expsym~\$CC -r -keep_private_externs -nostdlib -o \${lib}-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \${lib}-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring${_lt_dar_export_syms}${_lt_dsymutil}" +[ if test yes != "$lt_cv_apple_cc_single_mod"; then + _LT_TAGVAR(archive_cmds, $1)="\$CC -r -keep_private_externs -nostdlib -o \$lib-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$lib-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring$_lt_dsymutil" + _LT_TAGVAR(archive_expsym_cmds, $1)="sed 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC -r -keep_private_externs -nostdlib -o \$lib-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$lib-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring$_lt_dar_export_syms$_lt_dsymutil" fi ],[]) else _LT_TAGVAR(ld_shlibs, $1)=no fi ]) # 
_LT_SYS_MODULE_PATH_AIX([TAGNAME]) # ---------------------------------- # Links a minimal program and checks the executable # for the system default hardcoded library path. In most cases, # this is /usr/lib:/lib, but when the MPI compilers are used # the locations of the communication and MPI libs are included too. # If we don't find anything, use the default library path according # to the AIX ld manual. # Store the results from the different compilers for each TAGNAME. # Allow overriding them for all tags through lt_cv_aix_libpath. m4_defun([_LT_SYS_MODULE_PATH_AIX], [m4_require([_LT_DECL_SED])dnl -if test "${lt_cv_aix_libpath+set}" = set; then +if test set = "${lt_cv_aix_libpath+set}"; then aix_libpath=$lt_cv_aix_libpath else AC_CACHE_VAL([_LT_TAGVAR([lt_cv_aix_libpath_], [$1])], [AC_LINK_IFELSE([AC_LANG_PROGRAM],[ lt_aix_libpath_sed='[ /Import File Strings/,/^$/ { /^0/ { s/^0 *\([^ ]*\) *$/\1/ p } }]' _LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` # Check for a 64-bit object if we didn't find anything. if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then _LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"` fi],[]) if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then - _LT_TAGVAR([lt_cv_aix_libpath_], [$1])="/usr/lib:/lib" + _LT_TAGVAR([lt_cv_aix_libpath_], [$1])=/usr/lib:/lib fi ]) aix_libpath=$_LT_TAGVAR([lt_cv_aix_libpath_], [$1]) fi ])# _LT_SYS_MODULE_PATH_AIX # _LT_SHELL_INIT(ARG) # ------------------- m4_define([_LT_SHELL_INIT], [m4_divert_text([M4SH-INIT], [$1 ])])# _LT_SHELL_INIT # _LT_PROG_ECHO_BACKSLASH # ----------------------- # Find how we can fake an echo command that does not interpret backslash. # In particular, with Autoconf 2.60 or later we add some code to the start -# of the generated configure script which will find a shell with a builtin -# printf (which we can use as an echo command).
+# of the generated configure script that will find a shell with a builtin +# printf (that we can use as an echo command). m4_defun([_LT_PROG_ECHO_BACKSLASH], [ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO AC_MSG_CHECKING([how to print strings]) # Test print first, because it will be a builtin if present. if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \ test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then ECHO='print -r --' elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then ECHO='printf %s\n' else # Use this function as a fallback that always works. func_fallback_echo () { eval 'cat <<_LTECHO_EOF $[]1 _LTECHO_EOF' } ECHO='func_fallback_echo' fi # func_echo_all arg... # Invoke $ECHO with all args, space-separated. func_echo_all () { - $ECHO "$*" + $ECHO "$*" } -case "$ECHO" in +case $ECHO in printf*) AC_MSG_RESULT([printf]) ;; print*) AC_MSG_RESULT([print -r]) ;; *) AC_MSG_RESULT([cat]) ;; esac m4_ifdef([_AS_DETECT_SUGGESTED], [_AS_DETECT_SUGGESTED([ test -n "${ZSH_VERSION+set}${BASH_VERSION+set}" || ( ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\' ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO PATH=/empty FPATH=/empty; export PATH FPATH test "X`printf %s $ECHO`" = "X$ECHO" \ || test "X`print -r -- $ECHO`" = "X$ECHO" )])]) _LT_DECL([], [SHELL], [1], [Shell to use when invoking shell scripts]) _LT_DECL([], [ECHO], [1], [An echo program that protects backslashes]) ])# _LT_PROG_ECHO_BACKSLASH # _LT_WITH_SYSROOT # ---------------- AC_DEFUN([_LT_WITH_SYSROOT], [AC_MSG_CHECKING([for sysroot]) AC_ARG_WITH([sysroot], -[ --with-sysroot[=DIR] Search for dependent libraries within DIR - (or the compiler's sysroot if not specified).], +[AS_HELP_STRING([--with-sysroot@<:@=DIR@:>@], + [Search for dependent libraries within DIR (or the compiler's 
sysroot + if not specified).])], [], [with_sysroot=no]) dnl lt_sysroot will always be passed unquoted. We quote it here dnl in case the user passed a directory name. lt_sysroot= -case ${with_sysroot} in #( +case $with_sysroot in #( yes) - if test "$GCC" = yes; then + if test yes = "$GCC"; then lt_sysroot=`$CC --print-sysroot 2>/dev/null` fi ;; #( /*) lt_sysroot=`echo "$with_sysroot" | sed -e "$sed_quote_subst"` ;; #( no|'') ;; #( *) - AC_MSG_RESULT([${with_sysroot}]) + AC_MSG_RESULT([$with_sysroot]) AC_MSG_ERROR([The sysroot must be an absolute path.]) ;; esac AC_MSG_RESULT([${lt_sysroot:-no}]) _LT_DECL([], [lt_sysroot], [0], [The root where to search for ]dnl -[dependent libraries, and in which our libraries should be installed.])]) +[dependent libraries, and where our libraries should be installed.])]) # _LT_ENABLE_LOCK # --------------- m4_defun([_LT_ENABLE_LOCK], [AC_ARG_ENABLE([libtool-lock], [AS_HELP_STRING([--disable-libtool-lock], [avoid locking (might break parallel builds)])]) -test "x$enable_libtool_lock" != xno && enable_libtool_lock=yes +test no = "$enable_libtool_lock" || enable_libtool_lock=yes # Some flags need to be propagated to the compiler or linker for good # libtool support. case $host in ia64-*-hpux*) - # Find out which ABI we are using. + # Find out what ABI is being produced by ac_compile, and set mode + # options accordingly. echo 'int i;' > conftest.$ac_ext if AC_TRY_EVAL(ac_compile); then case `/usr/bin/file conftest.$ac_objext` in *ELF-32*) - HPUX_IA64_MODE="32" + HPUX_IA64_MODE=32 ;; *ELF-64*) - HPUX_IA64_MODE="64" + HPUX_IA64_MODE=64 ;; esac fi rm -rf conftest* ;; *-*-irix6*) - # Find out which ABI we are using. + # Find out what ABI is being produced by ac_compile, and set linker + # options accordingly. 
echo '[#]line '$LINENO' "configure"' > conftest.$ac_ext if AC_TRY_EVAL(ac_compile); then - if test "$lt_cv_prog_gnu_ld" = yes; then + if test yes = "$lt_cv_prog_gnu_ld"; then case `/usr/bin/file conftest.$ac_objext` in *32-bit*) LD="${LD-ld} -melf32bsmip" ;; *N32*) LD="${LD-ld} -melf32bmipn32" ;; *64-bit*) LD="${LD-ld} -melf64bmip" ;; esac else case `/usr/bin/file conftest.$ac_objext` in *32-bit*) LD="${LD-ld} -32" ;; *N32*) LD="${LD-ld} -n32" ;; *64-bit*) LD="${LD-ld} -64" ;; esac fi fi rm -rf conftest* ;; -x86_64-*kfreebsd*-gnu|x86_64-*linux*|ppc*-*linux*|powerpc*-*linux*| \ +mips64*-*linux*) + # Find out what ABI is being produced by ac_compile, and set linker + # options accordingly. + echo '[#]line '$LINENO' "configure"' > conftest.$ac_ext + if AC_TRY_EVAL(ac_compile); then + emul=elf + case `/usr/bin/file conftest.$ac_objext` in + *32-bit*) + emul="${emul}32" + ;; + *64-bit*) + emul="${emul}64" + ;; + esac + case `/usr/bin/file conftest.$ac_objext` in + *MSB*) + emul="${emul}btsmip" + ;; + *LSB*) + emul="${emul}ltsmip" + ;; + esac + case `/usr/bin/file conftest.$ac_objext` in + *N32*) + emul="${emul}n32" + ;; + esac + LD="${LD-ld} -m $emul" + fi + rm -rf conftest* + ;; + +x86_64-*kfreebsd*-gnu|x86_64-*linux*|powerpc*-*linux*| \ s390*-*linux*|s390*-*tpf*|sparc*-*linux*) - # Find out which ABI we are using. + # Find out what ABI is being produced by ac_compile, and set linker + # options accordingly. Note that the listed cases only cover the + # situations where additional linker options are needed (such as when + # doing 32-bit compilation for a host where ld defaults to 64-bit, or + # vice versa); the common cases where no linker options are needed do + # not appear in the list. 
echo 'int i;' > conftest.$ac_ext if AC_TRY_EVAL(ac_compile); then case `/usr/bin/file conftest.o` in *32-bit*) case $host in x86_64-*kfreebsd*-gnu) LD="${LD-ld} -m elf_i386_fbsd" ;; x86_64-*linux*) - LD="${LD-ld} -m elf_i386" + case `/usr/bin/file conftest.o` in + *x86-64*) + LD="${LD-ld} -m elf32_x86_64" + ;; + *) + LD="${LD-ld} -m elf_i386" + ;; + esac ;; - ppc64-*linux*|powerpc64-*linux*) + powerpc64le-*linux*) + LD="${LD-ld} -m elf32lppclinux" + ;; + powerpc64-*linux*) LD="${LD-ld} -m elf32ppclinux" ;; s390x-*linux*) LD="${LD-ld} -m elf_s390" ;; sparc64-*linux*) LD="${LD-ld} -m elf32_sparc" ;; esac ;; *64-bit*) case $host in x86_64-*kfreebsd*-gnu) LD="${LD-ld} -m elf_x86_64_fbsd" ;; x86_64-*linux*) LD="${LD-ld} -m elf_x86_64" ;; - ppc*-*linux*|powerpc*-*linux*) + powerpcle-*linux*) + LD="${LD-ld} -m elf64lppc" + ;; + powerpc-*linux*) LD="${LD-ld} -m elf64ppc" ;; s390*-*linux*|s390*-*tpf*) LD="${LD-ld} -m elf64_s390" ;; sparc*-*linux*) LD="${LD-ld} -m elf64_sparc" ;; esac ;; esac fi rm -rf conftest* ;; *-*-sco3.2v5*) # On SCO OpenServer 5, we need -belf to get full-featured binaries. - SAVE_CFLAGS="$CFLAGS" + SAVE_CFLAGS=$CFLAGS CFLAGS="$CFLAGS -belf" AC_CACHE_CHECK([whether the C compiler needs -belf], lt_cv_cc_needs_belf, [AC_LANG_PUSH(C) AC_LINK_IFELSE([AC_LANG_PROGRAM([[]],[[]])],[lt_cv_cc_needs_belf=yes],[lt_cv_cc_needs_belf=no]) AC_LANG_POP]) - if test x"$lt_cv_cc_needs_belf" != x"yes"; then + if test yes != "$lt_cv_cc_needs_belf"; then # this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf - CFLAGS="$SAVE_CFLAGS" + CFLAGS=$SAVE_CFLAGS fi ;; *-*solaris*) - # Find out which ABI we are using. + # Find out what ABI is being produced by ac_compile, and set linker + # options accordingly. 
echo 'int i;' > conftest.$ac_ext if AC_TRY_EVAL(ac_compile); then case `/usr/bin/file conftest.o` in *64-bit*) case $lt_cv_prog_gnu_ld in yes*) case $host in - i?86-*-solaris*) + i?86-*-solaris*|x86_64-*-solaris*) LD="${LD-ld} -m elf_x86_64" ;; sparc*-*-solaris*) LD="${LD-ld} -m elf64_sparc" ;; esac # GNU ld 2.21 introduced _sol2 emulations. Use them if available. if ${LD-ld} -V | grep _sol2 >/dev/null 2>&1; then - LD="${LD-ld}_sol2" + LD=${LD-ld}_sol2 fi ;; *) if ${LD-ld} -64 -r -o conftest2.o conftest.o >/dev/null 2>&1; then LD="${LD-ld} -64" fi ;; esac ;; esac fi rm -rf conftest* ;; esac -need_locks="$enable_libtool_lock" +need_locks=$enable_libtool_lock ])# _LT_ENABLE_LOCK # _LT_PROG_AR # ----------- m4_defun([_LT_PROG_AR], [AC_CHECK_TOOLS(AR, [ar], false) : ${AR=ar} : ${AR_FLAGS=cru} _LT_DECL([], [AR], [1], [The archiver]) _LT_DECL([], [AR_FLAGS], [1], [Flags to create an archive]) AC_CACHE_CHECK([for archiver @FILE support], [lt_cv_ar_at_file], [lt_cv_ar_at_file=no AC_COMPILE_IFELSE([AC_LANG_PROGRAM], [echo conftest.$ac_objext > conftest.lst lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&AS_MESSAGE_LOG_FD' AC_TRY_EVAL([lt_ar_try]) - if test "$ac_status" -eq 0; then + if test 0 -eq "$ac_status"; then # Ensure the archiver fails upon bogus file names. 
rm -f conftest.$ac_objext libconftest.a AC_TRY_EVAL([lt_ar_try]) - if test "$ac_status" -ne 0; then + if test 0 -ne "$ac_status"; then lt_cv_ar_at_file=@ fi fi rm -f conftest.* libconftest.a ]) ]) -if test "x$lt_cv_ar_at_file" = xno; then +if test no = "$lt_cv_ar_at_file"; then archiver_list_spec= else archiver_list_spec=$lt_cv_ar_at_file fi _LT_DECL([], [archiver_list_spec], [1], [How to feed a file listing to the archiver]) ])# _LT_PROG_AR # _LT_CMD_OLD_ARCHIVE # ------------------- m4_defun([_LT_CMD_OLD_ARCHIVE], [_LT_PROG_AR AC_CHECK_TOOL(STRIP, strip, :) test -z "$STRIP" && STRIP=: _LT_DECL([], [STRIP], [1], [A symbol stripping program]) AC_CHECK_TOOL(RANLIB, ranlib, :) test -z "$RANLIB" && RANLIB=: _LT_DECL([], [RANLIB], [1], [Commands used to install an old-style archive]) # Determine commands to create old-style static archives. old_archive_cmds='$AR $AR_FLAGS $oldlib$oldobjs' old_postinstall_cmds='chmod 644 $oldlib' old_postuninstall_cmds= if test -n "$RANLIB"; then case $host_os in - openbsd*) + bitrig* | openbsd*) old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB -t \$tool_oldlib" ;; *) old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB \$tool_oldlib" ;; esac old_archive_cmds="$old_archive_cmds~\$RANLIB \$tool_oldlib" fi case $host_os in darwin*) lock_old_archive_extraction=yes ;; *) lock_old_archive_extraction=no ;; esac _LT_DECL([], [old_postinstall_cmds], [2]) _LT_DECL([], [old_postuninstall_cmds], [2]) _LT_TAGDECL([], [old_archive_cmds], [2], [Commands used to build an old-style archive]) _LT_DECL([], [lock_old_archive_extraction], [0], [Whether to use a lock for old archive extraction]) ])# _LT_CMD_OLD_ARCHIVE # _LT_COMPILER_OPTION(MESSAGE, VARIABLE-NAME, FLAGS, # [OUTPUT-FILE], [ACTION-SUCCESS], [ACTION-FAILURE]) # ---------------------------------------------------------------- # Check whether the given compiler option works AC_DEFUN([_LT_COMPILER_OPTION], [m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_DECL_SED])dnl 
AC_CACHE_CHECK([$1], [$2], [$2=no m4_if([$4], , [ac_outfile=conftest.$ac_objext], [ac_outfile=$4]) echo "$lt_simple_compile_test_code" > conftest.$ac_ext - lt_compiler_flag="$3" + lt_compiler_flag="$3" ## exclude from sc_useless_quotes_in_assignment # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. # The option is referenced via a variable to avoid confusing sed. lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [[^ ]]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&AS_MESSAGE_LOG_FD) (eval "$lt_compile" 2>conftest.err) ac_status=$? cat conftest.err >&AS_MESSAGE_LOG_FD echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD if (exit $ac_status) && test -s "$ac_outfile"; then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings other than the usual output. $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if test ! 
-s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then $2=yes fi fi $RM conftest* ]) -if test x"[$]$2" = xyes; then +if test yes = "[$]$2"; then m4_if([$5], , :, [$5]) else m4_if([$6], , :, [$6]) fi ])# _LT_COMPILER_OPTION # Old name: AU_ALIAS([AC_LIBTOOL_COMPILER_OPTION], [_LT_COMPILER_OPTION]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_COMPILER_OPTION], []) # _LT_LINKER_OPTION(MESSAGE, VARIABLE-NAME, FLAGS, # [ACTION-SUCCESS], [ACTION-FAILURE]) # ---------------------------------------------------- # Check whether the given linker option works AC_DEFUN([_LT_LINKER_OPTION], [m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_DECL_SED])dnl AC_CACHE_CHECK([$1], [$2], [$2=no - save_LDFLAGS="$LDFLAGS" + save_LDFLAGS=$LDFLAGS LDFLAGS="$LDFLAGS $3" echo "$lt_simple_link_test_code" > conftest.$ac_ext if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then # The linker can only warn and ignore the option if not recognized # So say no if there are warnings if test -s conftest.err; then # Append any errors to the config.log. 
cat conftest.err 1>&AS_MESSAGE_LOG_FD $ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp $SED '/^$/d; /^ *+/d' conftest.err >conftest.er2 if diff conftest.exp conftest.er2 >/dev/null; then $2=yes fi else $2=yes fi fi $RM -r conftest* - LDFLAGS="$save_LDFLAGS" + LDFLAGS=$save_LDFLAGS ]) -if test x"[$]$2" = xyes; then +if test yes = "[$]$2"; then m4_if([$4], , :, [$4]) else m4_if([$5], , :, [$5]) fi ])# _LT_LINKER_OPTION # Old name: AU_ALIAS([AC_LIBTOOL_LINKER_OPTION], [_LT_LINKER_OPTION]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_LINKER_OPTION], []) # LT_CMD_MAX_LEN #--------------- AC_DEFUN([LT_CMD_MAX_LEN], [AC_REQUIRE([AC_CANONICAL_HOST])dnl # find the maximum length of command line arguments AC_MSG_CHECKING([the maximum length of command line arguments]) AC_CACHE_VAL([lt_cv_sys_max_cmd_len], [dnl i=0 - teststring="ABCD" + teststring=ABCD case $build_os in msdosdjgpp*) # On DJGPP, this test can blow up pretty badly due to problems in libc # (any single argument exceeding 2000 bytes causes a buffer overrun # during glob expansion). Even if it were fixed, the result of this # check would be larger than it should be. lt_cv_sys_max_cmd_len=12288; # 12K is about right ;; gnu*) # Under GNU Hurd, this test is not required because there is # no limit to the length of command line arguments. # Libtool will interpret -1 as no limit whatsoever lt_cv_sys_max_cmd_len=-1; ;; cygwin* | mingw* | cegcc*) # On Win9x/ME, this test blows up -- it succeeds, but takes # about 5 minutes as the teststring grows exponentially. # Worse, since 9x/ME are not pre-emptively multitasking, # you end up with a "frozen" computer, even though with patience # the test eventually succeeds (with a max line length of 256k). # Instead, let's just punt: use the minimum linelength reported by # all of the supported platforms: 8192 (on NT/2K/XP). lt_cv_sys_max_cmd_len=8192; ;; mint*) # On MiNT this can take a long time and run out of memory. 
lt_cv_sys_max_cmd_len=8192; ;; amigaos*) # On AmigaOS with pdksh, this test takes hours, literally. # So we just punt and use a minimum line length of 8192. lt_cv_sys_max_cmd_len=8192; ;; - netbsd* | freebsd* | openbsd* | darwin* | dragonfly*) + bitrig* | darwin* | dragonfly* | freebsd* | netbsd* | openbsd*) # This has been around since 386BSD, at least. Likely further. if test -x /sbin/sysctl; then lt_cv_sys_max_cmd_len=`/sbin/sysctl -n kern.argmax` elif test -x /usr/sbin/sysctl; then lt_cv_sys_max_cmd_len=`/usr/sbin/sysctl -n kern.argmax` else lt_cv_sys_max_cmd_len=65536 # usable default for all BSDs fi # And add a safety zone lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` ;; interix*) # We know the value 262144 and hardcode it with a safety zone (like BSD) lt_cv_sys_max_cmd_len=196608 ;; os2*) # The test takes a long time on OS/2. lt_cv_sys_max_cmd_len=8192 ;; osf*) # Dr. Hans Ekkehard Plesser reports seeing a kernel panic running configure # due to this test when exec_disable_arg_limit is 1 on Tru64. It is not # nice to cause kernel panics so lets avoid the loop below. # First set a reasonable default. lt_cv_sys_max_cmd_len=16384 # if test -x /sbin/sysconfig; then case `/sbin/sysconfig -q proc exec_disable_arg_limit` in *1*) lt_cv_sys_max_cmd_len=-1 ;; esac fi ;; sco3.2v5*) lt_cv_sys_max_cmd_len=102400 ;; sysv5* | sco5v6* | sysv4.2uw2*) kargmax=`grep ARG_MAX /etc/conf/cf.d/stune 2>/dev/null` if test -n "$kargmax"; then lt_cv_sys_max_cmd_len=`echo $kargmax | sed 's/.*[[ ]]//'` else lt_cv_sys_max_cmd_len=32768 fi ;; *) lt_cv_sys_max_cmd_len=`(getconf ARG_MAX) 2> /dev/null` - if test -n "$lt_cv_sys_max_cmd_len"; then + if test -n "$lt_cv_sys_max_cmd_len" && \ + test undefined != "$lt_cv_sys_max_cmd_len"; then lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4` lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3` else # Make teststring a little bigger before we do anything with it. 
# a 1K string should be a reasonable start. - for i in 1 2 3 4 5 6 7 8 ; do + for i in 1 2 3 4 5 6 7 8; do teststring=$teststring$teststring done SHELL=${SHELL-${CONFIG_SHELL-/bin/sh}} # If test is not a shell built-in, we'll probably end up computing a # maximum length that is only half of the actual maximum length, but # we can't tell. - while { test "X"`env echo "$teststring$teststring" 2>/dev/null` \ + while { test X`env echo "$teststring$teststring" 2>/dev/null` \ = "X$teststring$teststring"; } >/dev/null 2>&1 && - test $i != 17 # 1/2 MB should be enough + test 17 != "$i" # 1/2 MB should be enough do i=`expr $i + 1` teststring=$teststring$teststring done # Only check the string length outside the loop. lt_cv_sys_max_cmd_len=`expr "X$teststring" : ".*" 2>&1` teststring= # Add a significant safety factor because C++ compilers can tack on # massive amounts of additional arguments before passing them to the # linker. It appears as though 1/2 is a usable value. lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 2` fi ;; esac ]) -if test -n $lt_cv_sys_max_cmd_len ; then +if test -n "$lt_cv_sys_max_cmd_len"; then AC_MSG_RESULT($lt_cv_sys_max_cmd_len) else AC_MSG_RESULT(none) fi max_cmd_len=$lt_cv_sys_max_cmd_len _LT_DECL([], [max_cmd_len], [0], [What is the maximum length of a command?]) ])# LT_CMD_MAX_LEN # Old name: AU_ALIAS([AC_LIBTOOL_SYS_MAX_CMD_LEN], [LT_CMD_MAX_LEN]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_SYS_MAX_CMD_LEN], []) # _LT_HEADER_DLFCN # ---------------- m4_defun([_LT_HEADER_DLFCN], [AC_CHECK_HEADERS([dlfcn.h], [], [], [AC_INCLUDES_DEFAULT])dnl ])# _LT_HEADER_DLFCN # _LT_TRY_DLOPEN_SELF (ACTION-IF-TRUE, ACTION-IF-TRUE-W-USCORE, # ACTION-IF-FALSE, ACTION-IF-CROSS-COMPILING) # ---------------------------------------------------------------- m4_defun([_LT_TRY_DLOPEN_SELF], [m4_require([_LT_HEADER_DLFCN])dnl -if test "$cross_compiling" = yes; then : +if test yes = "$cross_compiling"; then : [$4] else lt_dlunknown=0; 
lt_dlno_uscore=1; lt_dlneed_uscore=2 lt_status=$lt_dlunknown cat > conftest.$ac_ext <<_LT_EOF [#line $LINENO "configure" #include "confdefs.h" #if HAVE_DLFCN_H #include #endif #include #ifdef RTLD_GLOBAL # define LT_DLGLOBAL RTLD_GLOBAL #else # ifdef DL_GLOBAL # define LT_DLGLOBAL DL_GLOBAL # else # define LT_DLGLOBAL 0 # endif #endif /* We may have to define LT_DLLAZY_OR_NOW in the command line if we find out it does not work in some platform. */ #ifndef LT_DLLAZY_OR_NOW # ifdef RTLD_LAZY # define LT_DLLAZY_OR_NOW RTLD_LAZY # else # ifdef DL_LAZY # define LT_DLLAZY_OR_NOW DL_LAZY # else # ifdef RTLD_NOW # define LT_DLLAZY_OR_NOW RTLD_NOW # else # ifdef DL_NOW # define LT_DLLAZY_OR_NOW DL_NOW # else # define LT_DLLAZY_OR_NOW 0 # endif # endif # endif # endif #endif -/* When -fvisbility=hidden is used, assume the code has been annotated +/* When -fvisibility=hidden is used, assume the code has been annotated correspondingly for the symbols needed. */ -#if defined(__GNUC__) && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3)) +#if defined __GNUC__ && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3)) int fnord () __attribute__((visibility("default"))); #endif int fnord () { return 42; } int main () { void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW); int status = $lt_dlunknown; if (self) { if (dlsym (self,"fnord")) status = $lt_dlno_uscore; else { if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore; else puts (dlerror ()); } /* dlclose (self); */ } else puts (dlerror ()); return status; }] _LT_EOF - if AC_TRY_EVAL(ac_link) && test -s conftest${ac_exeext} 2>/dev/null; then + if AC_TRY_EVAL(ac_link) && test -s "conftest$ac_exeext" 2>/dev/null; then (./conftest; exit; ) >&AS_MESSAGE_LOG_FD 2>/dev/null lt_status=$? 
    case x$lt_status in
      x$lt_dlno_uscore) $1 ;;
      x$lt_dlneed_uscore) $2 ;;
      x$lt_dlunknown|x*) $3 ;;
    esac
  else :
    # compilation failed
    $3
  fi
fi
rm -fr conftest*
])# _LT_TRY_DLOPEN_SELF


# LT_SYS_DLOPEN_SELF
# ------------------
AC_DEFUN([LT_SYS_DLOPEN_SELF],
[m4_require([_LT_HEADER_DLFCN])dnl
-if test "x$enable_dlopen" != xyes; then
+if test yes != "$enable_dlopen"; then
  enable_dlopen=unknown
  enable_dlopen_self=unknown
  enable_dlopen_self_static=unknown
else
  lt_cv_dlopen=no
  lt_cv_dlopen_libs=

  case $host_os in
  beos*)
-    lt_cv_dlopen="load_add_on"
+    lt_cv_dlopen=load_add_on
    lt_cv_dlopen_libs=
    lt_cv_dlopen_self=yes
    ;;

  mingw* | pw32* | cegcc*)
-    lt_cv_dlopen="LoadLibrary"
+    lt_cv_dlopen=LoadLibrary
    lt_cv_dlopen_libs=
    ;;

  cygwin*)
-    lt_cv_dlopen="dlopen"
+    lt_cv_dlopen=dlopen
    lt_cv_dlopen_libs=
    ;;

  darwin*)
-  # if libdl is installed we need to link against it
+    # if libdl is installed we need to link against it
    AC_CHECK_LIB([dl], [dlopen],
-		[lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl"],[
-    lt_cv_dlopen="dyld"
+		[lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-ldl],[
+    lt_cv_dlopen=dyld
    lt_cv_dlopen_libs=
    lt_cv_dlopen_self=yes
    ])
    ;;

+  tpf*)
+    # Don't try to run any link tests for TPF.  We know it's impossible
+    # because TPF is a cross-compiler, and we know how we open DSOs.
+    lt_cv_dlopen=dlopen
+    lt_cv_dlopen_libs=
+    lt_cv_dlopen_self=no
+    ;;
+
  *)
    AC_CHECK_FUNC([shl_load],
-	  [lt_cv_dlopen="shl_load"],
+	  [lt_cv_dlopen=shl_load],
      [AC_CHECK_LIB([dld], [shl_load],
-	    [lt_cv_dlopen="shl_load" lt_cv_dlopen_libs="-ldld"],
+	    [lt_cv_dlopen=shl_load lt_cv_dlopen_libs=-ldld],
	[AC_CHECK_FUNC([dlopen],
-	      [lt_cv_dlopen="dlopen"],
+	      [lt_cv_dlopen=dlopen],
	  [AC_CHECK_LIB([dl], [dlopen],
-		[lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-ldl"],
+		[lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-ldl],
	    [AC_CHECK_LIB([svld], [dlopen],
-		  [lt_cv_dlopen="dlopen" lt_cv_dlopen_libs="-lsvld"],
+		  [lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-lsvld],
	      [AC_CHECK_LIB([dld], [dld_link],
-		    [lt_cv_dlopen="dld_link" lt_cv_dlopen_libs="-ldld"])
+		    [lt_cv_dlopen=dld_link lt_cv_dlopen_libs=-ldld])
	      ])
	    ])
	  ])
	])
      ])
    ;;
  esac

-  if test "x$lt_cv_dlopen" != xno; then
+  if test no = "$lt_cv_dlopen"; then
+    enable_dlopen=no
+  else
    enable_dlopen=yes
-  else
-    enable_dlopen=no
  fi

  case $lt_cv_dlopen in
  dlopen)
-    save_CPPFLAGS="$CPPFLAGS"
-    test "x$ac_cv_header_dlfcn_h" = xyes && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H"
-
-    save_LDFLAGS="$LDFLAGS"
+    save_CPPFLAGS=$CPPFLAGS
+    test yes = "$ac_cv_header_dlfcn_h" && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H"
+
+    save_LDFLAGS=$LDFLAGS
    wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\"

-    save_LIBS="$LIBS"
+    save_LIBS=$LIBS
    LIBS="$lt_cv_dlopen_libs $LIBS"

    AC_CACHE_CHECK([whether a program can dlopen itself],
	  lt_cv_dlopen_self, [dnl
	  _LT_TRY_DLOPEN_SELF(
	    lt_cv_dlopen_self=yes, lt_cv_dlopen_self=yes,
	    lt_cv_dlopen_self=no, lt_cv_dlopen_self=cross)
    ])

-    if test "x$lt_cv_dlopen_self" = xyes; then
+    if test yes = "$lt_cv_dlopen_self"; then
      wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $lt_prog_compiler_static\"
      AC_CACHE_CHECK([whether a statically linked program can dlopen itself],
	  lt_cv_dlopen_self_static, [dnl
	  _LT_TRY_DLOPEN_SELF(
	    lt_cv_dlopen_self_static=yes, lt_cv_dlopen_self_static=yes,
	    lt_cv_dlopen_self_static=no, lt_cv_dlopen_self_static=cross)
      ])
fi - CPPFLAGS="$save_CPPFLAGS" - LDFLAGS="$save_LDFLAGS" - LIBS="$save_LIBS" + CPPFLAGS=$save_CPPFLAGS + LDFLAGS=$save_LDFLAGS + LIBS=$save_LIBS ;; esac case $lt_cv_dlopen_self in yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;; *) enable_dlopen_self=unknown ;; esac case $lt_cv_dlopen_self_static in yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;; *) enable_dlopen_self_static=unknown ;; esac fi _LT_DECL([dlopen_support], [enable_dlopen], [0], [Whether dlopen is supported]) _LT_DECL([dlopen_self], [enable_dlopen_self], [0], [Whether dlopen of programs is supported]) _LT_DECL([dlopen_self_static], [enable_dlopen_self_static], [0], [Whether dlopen of statically linked programs is supported]) ])# LT_SYS_DLOPEN_SELF # Old name: AU_ALIAS([AC_LIBTOOL_DLOPEN_SELF], [LT_SYS_DLOPEN_SELF]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_LIBTOOL_DLOPEN_SELF], []) # _LT_COMPILER_C_O([TAGNAME]) # --------------------------- # Check to see if options -c and -o are simultaneously supported by compiler. # This macro does not hard code the compiler like AC_PROG_CC_C_O. m4_defun([_LT_COMPILER_C_O], [m4_require([_LT_DECL_SED])dnl m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_TAG_COMPILER])dnl AC_CACHE_CHECK([if $compiler supports -c -o file.$ac_objext], [_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)], [_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)=no $RM -r conftest 2>/dev/null mkdir conftest cd conftest mkdir out echo "$lt_simple_compile_test_code" > conftest.$ac_ext lt_compiler_flag="-o out/conftest2.$ac_objext" # Insert the option either (1) after the last *FLAGS variable, or # (2) before a word containing "conftest.", or (3) at the end. # Note that $ac_compile itself does not contain backslashes and begins # with a dollar sign (not a hyphen), so the echo should work correctly. 
lt_compile=`echo "$ac_compile" | $SED \ -e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \ -e 's: [[^ ]]*conftest\.: $lt_compiler_flag&:; t' \ -e 's:$: $lt_compiler_flag:'` (eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&AS_MESSAGE_LOG_FD) (eval "$lt_compile" 2>out/conftest.err) ac_status=$? cat out/conftest.err >&AS_MESSAGE_LOG_FD echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD if (exit $ac_status) && test -s out/conftest2.$ac_objext then # The compiler can only warn and ignore the option if not recognized # So say no if there are warnings $ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp $SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2 if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then _LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)=yes fi fi chmod u+w . 2>&AS_MESSAGE_LOG_FD $RM conftest* # SGI C++ compiler will create directory out/ii_files/ for # template instantiation test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files $RM out/* && rmdir out cd .. 
  $RM -r conftest
  $RM conftest*
])
_LT_TAGDECL([compiler_c_o], [lt_cv_prog_compiler_c_o], [1],
	[Does compiler simultaneously support -c and -o options?])
])# _LT_COMPILER_C_O


# _LT_COMPILER_FILE_LOCKS([TAGNAME])
# ----------------------------------
# Check to see if we can do hard links to lock some files if needed
m4_defun([_LT_COMPILER_FILE_LOCKS],
[m4_require([_LT_ENABLE_LOCK])dnl
m4_require([_LT_FILEUTILS_DEFAULTS])dnl
_LT_COMPILER_C_O([$1])

-hard_links="nottested"
-if test "$_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)" = no && test "$need_locks" != no; then
+hard_links=nottested
+if test no = "$_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)" && test no != "$need_locks"; then
  # do not overwrite the value of need_locks provided by the user
  AC_MSG_CHECKING([if we can lock with hard links])
  hard_links=yes
  $RM conftest*
  ln conftest.a conftest.b 2>/dev/null && hard_links=no
  touch conftest.a
  ln conftest.a conftest.b 2>&5 || hard_links=no
  ln conftest.a conftest.b 2>/dev/null && hard_links=no
  AC_MSG_RESULT([$hard_links])
-  if test "$hard_links" = no; then
-    AC_MSG_WARN([`$CC' does not support `-c -o', so `make -j' may be unsafe])
+  if test no = "$hard_links"; then
+    AC_MSG_WARN(['$CC' does not support '-c -o', so 'make -j' may be unsafe])
    need_locks=warn
  fi
else
  need_locks=no
fi
_LT_DECL([], [need_locks], [1], [Must we lock files when doing compilation?])
])# _LT_COMPILER_FILE_LOCKS


# _LT_CHECK_OBJDIR
# ----------------
m4_defun([_LT_CHECK_OBJDIR],
[AC_CACHE_CHECK([for objdir], [lt_cv_objdir],
[rm -f .libs 2>/dev/null
mkdir .libs 2>/dev/null
if test -d .libs; then
  lt_cv_objdir=.libs
else
  # MS-DOS does not allow filenames that begin with a dot.
lt_cv_objdir=_libs fi rmdir .libs 2>/dev/null]) objdir=$lt_cv_objdir _LT_DECL([], [objdir], [0], [The name of the directory that contains temporary libtool files])dnl m4_pattern_allow([LT_OBJDIR])dnl -AC_DEFINE_UNQUOTED(LT_OBJDIR, "$lt_cv_objdir/", - [Define to the sub-directory in which libtool stores uninstalled libraries.]) +AC_DEFINE_UNQUOTED([LT_OBJDIR], "$lt_cv_objdir/", + [Define to the sub-directory where libtool stores uninstalled libraries.]) ])# _LT_CHECK_OBJDIR # _LT_LINKER_HARDCODE_LIBPATH([TAGNAME]) # -------------------------------------- # Check hardcoding attributes. m4_defun([_LT_LINKER_HARDCODE_LIBPATH], [AC_MSG_CHECKING([how to hardcode library paths into programs]) _LT_TAGVAR(hardcode_action, $1)= if test -n "$_LT_TAGVAR(hardcode_libdir_flag_spec, $1)" || test -n "$_LT_TAGVAR(runpath_var, $1)" || - test "X$_LT_TAGVAR(hardcode_automatic, $1)" = "Xyes" ; then + test yes = "$_LT_TAGVAR(hardcode_automatic, $1)"; then # We can hardcode non-existent directories. - if test "$_LT_TAGVAR(hardcode_direct, $1)" != no && + if test no != "$_LT_TAGVAR(hardcode_direct, $1)" && # If the only mechanism to avoid hardcoding is shlibpath_var, we # have to relink, otherwise we might link with an installed library # when we should be linking with a yet-to-be-installed one - ## test "$_LT_TAGVAR(hardcode_shlibpath_var, $1)" != no && - test "$_LT_TAGVAR(hardcode_minus_L, $1)" != no; then + ## test no != "$_LT_TAGVAR(hardcode_shlibpath_var, $1)" && + test no != "$_LT_TAGVAR(hardcode_minus_L, $1)"; then # Linking always hardcodes the temporary library directory. _LT_TAGVAR(hardcode_action, $1)=relink else # We can link without hardcoding, and we can hardcode nonexisting dirs. _LT_TAGVAR(hardcode_action, $1)=immediate fi else # We cannot hardcode anything, or else we can only hardcode existing # directories. 
  _LT_TAGVAR(hardcode_action, $1)=unsupported
fi
AC_MSG_RESULT([$_LT_TAGVAR(hardcode_action, $1)])

-if test "$_LT_TAGVAR(hardcode_action, $1)" = relink ||
-   test "$_LT_TAGVAR(inherit_rpath, $1)" = yes; then
+if test relink = "$_LT_TAGVAR(hardcode_action, $1)" ||
+   test yes = "$_LT_TAGVAR(inherit_rpath, $1)"; then
  # Fast installation is not supported
  enable_fast_install=no
-elif test "$shlibpath_overrides_runpath" = yes ||
-     test "$enable_shared" = no; then
+elif test yes = "$shlibpath_overrides_runpath" ||
+     test no = "$enable_shared"; then
  # Fast installation is not necessary
  enable_fast_install=needless
fi
_LT_TAGDECL([], [hardcode_action], [0],
    [How to hardcode a shared library path into an executable])
])# _LT_LINKER_HARDCODE_LIBPATH


# _LT_CMD_STRIPLIB
# ----------------
m4_defun([_LT_CMD_STRIPLIB],
[m4_require([_LT_DECL_EGREP])
striplib=
old_striplib=
AC_MSG_CHECKING([whether stripping libraries is possible])
if test -n "$STRIP" && $STRIP -V 2>&1 | $GREP "GNU strip" >/dev/null; then
  test -z "$old_striplib" && old_striplib="$STRIP --strip-debug"
  test -z "$striplib" && striplib="$STRIP --strip-unneeded"
  AC_MSG_RESULT([yes])
else
# FIXME - insert some real tests, host_os isn't really good enough
  case $host_os in
  darwin*)
-    if test -n "$STRIP" ; then
+    if test -n "$STRIP"; then
      striplib="$STRIP -x"
      old_striplib="$STRIP -S"
      AC_MSG_RESULT([yes])
    else
      AC_MSG_RESULT([no])
    fi
    ;;
  *)
    AC_MSG_RESULT([no])
    ;;
  esac
fi
_LT_DECL([], [old_striplib], [1], [Commands to strip libraries])
_LT_DECL([], [striplib], [1])
])# _LT_CMD_STRIPLIB


+# _LT_PREPARE_MUNGE_PATH_LIST
+# ---------------------------
+# Make sure func_munge_path_list() is defined correctly.
+m4_defun([_LT_PREPARE_MUNGE_PATH_LIST],
+[[# func_munge_path_list VARIABLE PATH
+# -----------------------------------
+# VARIABLE is name of variable containing _space_ separated list of
+# directories to be munged by the contents of PATH, which is string
+# having a format:
+# "DIR[:DIR]:"
+#       string "DIR[ DIR]" will be prepended to VARIABLE
+# ":DIR[:DIR]"
+#       string "DIR[ DIR]" will be appended to VARIABLE
+# "DIRP[:DIRP]::[DIRA:]DIRA"
+#       string "DIRP[ DIRP]" will be prepended to VARIABLE and string
+#       "DIRA[ DIRA]" will be appended to VARIABLE
+# "DIR[:DIR]"
+#       VARIABLE will be replaced by "DIR[ DIR]"
+func_munge_path_list ()
+{
+    case x@S|@2 in
+    x)
+        ;;
+    *:)
+        eval @S|@1=\"`$ECHO @S|@2 | $SED 's/:/ /g'` \@S|@@S|@1\"
+        ;;
+    x:*)
+        eval @S|@1=\"\@S|@@S|@1 `$ECHO @S|@2 | $SED 's/:/ /g'`\"
+        ;;
+    *::*)
+        eval @S|@1=\"\@S|@@S|@1\ `$ECHO @S|@2 | $SED -e 's/.*:://' -e 's/:/ /g'`\"
+        eval @S|@1=\"`$ECHO @S|@2 | $SED -e 's/::.*//' -e 's/:/ /g'`\ \@S|@@S|@1\"
+        ;;
+    *)
+        eval @S|@1=\"`$ECHO @S|@2 | $SED 's/:/ /g'`\"
+        ;;
+    esac
+}
+]])# _LT_PREPARE_PATH_LIST
+
+
# _LT_SYS_DYNAMIC_LINKER([TAG])
# -----------------------------
# PORTME Fill in your ld.so characteristics
m4_defun([_LT_SYS_DYNAMIC_LINKER],
[AC_REQUIRE([AC_CANONICAL_HOST])dnl
m4_require([_LT_DECL_EGREP])dnl
m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_DECL_OBJDUMP])dnl
m4_require([_LT_DECL_SED])dnl
m4_require([_LT_CHECK_SHELL_FEATURES])dnl
+m4_require([_LT_PREPARE_MUNGE_PATH_LIST])dnl
AC_MSG_CHECKING([dynamic linker characteristics])
m4_if([$1], [], [
-if test "$GCC" = yes; then
+if test yes = "$GCC"; then
  case $host_os in
-    darwin*) lt_awk_arg="/^libraries:/,/LR/" ;;
-    *) lt_awk_arg="/^libraries:/" ;;
+    darwin*) lt_awk_arg='/^libraries:/,/LR/' ;;
+    *) lt_awk_arg='/^libraries:/' ;;
  esac
  case $host_os in
-    mingw* | cegcc*) lt_sed_strip_eq="s,=\([[A-Za-z]]:\),\1,g" ;;
-    *) lt_sed_strip_eq="s,=/,/,g" ;;
+    mingw* | cegcc*) lt_sed_strip_eq='s|=\([[A-Za-z]]:\)|\1|g' ;;
+    *) lt_sed_strip_eq='s|=/|/|g'
;; esac lt_search_path_spec=`$CC -print-search-dirs | awk $lt_awk_arg | $SED -e "s/^libraries://" -e $lt_sed_strip_eq` case $lt_search_path_spec in *\;*) # if the path contains ";" then we assume it to be the separator # otherwise default to the standard path separator (i.e. ":") - it is # assumed that no part of a normal pathname contains ";" but that should # okay in the real world where ";" in dirpaths is itself problematic. lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED 's/;/ /g'` ;; *) lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED "s/$PATH_SEPARATOR/ /g"` ;; esac # Ok, now we have the path, separated by spaces, we can step through it - # and add multilib dir if necessary. + # and add multilib dir if necessary... lt_tmp_lt_search_path_spec= - lt_multi_os_dir=`$CC $CPPFLAGS $CFLAGS $LDFLAGS -print-multi-os-directory 2>/dev/null` + lt_multi_os_dir=/`$CC $CPPFLAGS $CFLAGS $LDFLAGS -print-multi-os-directory 2>/dev/null` + # ...but if some path component already ends with the multilib dir we assume + # that all is fine and trust -print-search-dirs as is (GCC 4.2? or newer). 
+ case "$lt_multi_os_dir; $lt_search_path_spec " in + "/; "* | "/.; "* | "/./; "* | *"$lt_multi_os_dir "* | *"$lt_multi_os_dir/ "*) + lt_multi_os_dir= + ;; + esac for lt_sys_path in $lt_search_path_spec; do - if test -d "$lt_sys_path/$lt_multi_os_dir"; then - lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path/$lt_multi_os_dir" - else + if test -d "$lt_sys_path$lt_multi_os_dir"; then + lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path$lt_multi_os_dir" + elif test -n "$lt_multi_os_dir"; then test -d "$lt_sys_path" && \ lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path" fi done lt_search_path_spec=`$ECHO "$lt_tmp_lt_search_path_spec" | awk ' -BEGIN {RS=" "; FS="/|\n";} { - lt_foo=""; - lt_count=0; +BEGIN {RS = " "; FS = "/|\n";} { + lt_foo = ""; + lt_count = 0; for (lt_i = NF; lt_i > 0; lt_i--) { if ($lt_i != "" && $lt_i != ".") { if ($lt_i == "..") { lt_count++; } else { if (lt_count == 0) { - lt_foo="/" $lt_i lt_foo; + lt_foo = "/" $lt_i lt_foo; } else { lt_count--; } } } } if (lt_foo != "") { lt_freq[[lt_foo]]++; } if (lt_freq[[lt_foo]] == 1) { print lt_foo; } }'` # AWK program above erroneously prepends '/' to C:/dos/paths # for these hosts. 
  case $host_os in
  mingw* | cegcc*)
    lt_search_path_spec=`$ECHO "$lt_search_path_spec" |\
-	   $SED 's,/\([[A-Za-z]]:\),\1,g'` ;;
+	   $SED 's|/\([[A-Za-z]]:\)|\1|g'` ;;
  esac
  sys_lib_search_path_spec=`$ECHO "$lt_search_path_spec" | $lt_NL2SP`
else
  sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib"
fi])
library_names_spec=
libname_spec='lib$name'
soname_spec=
-shrext_cmds=".so"
+shrext_cmds=.so
postinstall_cmds=
postuninstall_cmds=
finish_cmds=
finish_eval=
shlibpath_var=
shlibpath_overrides_runpath=unknown
version_type=none
dynamic_linker="$host_os ld.so"
sys_lib_dlsearch_path_spec="/lib /usr/lib"
need_lib_prefix=unknown
hardcode_into_libs=no

# when you set need_version to no, make sure it does not cause -set_version
# flags to be left without arguments
need_version=unknown

+AC_ARG_VAR([LT_SYS_LIBRARY_PATH],
+[User-defined run-time library search path.])
+
case $host_os in
aix3*)
  version_type=linux # correct to gnu/linux during the next big refactor
-  library_names_spec='${libname}${release}${shared_ext}$versuffix $libname.a'
+  library_names_spec='$libname$release$shared_ext$versuffix $libname.a'
  shlibpath_var=LIBPATH

  # AIX 3 has no versioning support, so we append a major version to the name.
-  soname_spec='${libname}${release}${shared_ext}$major'
+  soname_spec='$libname$release$shared_ext$major'
  ;;

aix[[4-9]]*)
  version_type=linux # correct to gnu/linux during the next big refactor
  need_lib_prefix=no
  need_version=no
  hardcode_into_libs=yes
-  if test "$host_cpu" = ia64; then
+  if test ia64 = "$host_cpu"; then
    # AIX 5 supports IA64
-    library_names_spec='${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext}$versuffix $libname${shared_ext}'
+    library_names_spec='$libname$release$shared_ext$major $libname$release$shared_ext$versuffix $libname$shared_ext'
    shlibpath_var=LD_LIBRARY_PATH
  else
    # With GCC up to 2.95.x, collect2 would create an import file
    # for dependence libraries.  The import file would start with
-    # the line `#! .'.  This would cause the generated library to
-    # depend on `.', always an invalid library.  This was fixed in
+    # the line '#! .'.  This would cause the generated library to
+    # depend on '.', always an invalid library.  This was fixed in
    # development snapshots of GCC prior to 3.0.
    case $host_os in
      aix4 | aix4.[[01]] | aix4.[[01]].*)
      if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)'
	   echo ' yes '
-	   echo '#endif'; } | ${CC} -E - | $GREP yes > /dev/null; then
+	   echo '#endif'; } | $CC -E - | $GREP yes > /dev/null; then
	:
      else
	can_build_shared=no
      fi
      ;;
    esac
-    # AIX (on Power*) has no versioning support, so currently we can not hardcode correct
+    # Using Import Files as archive members, it is possible to support
+    # filename-based versioning of shared library archives on AIX. While
+    # this would work for both with and without runtime linking, it will
+    # prevent static linking of such archives. So we do filename-based
+    # shared library versioning with .so extension only, which is used
+    # when both runtime linking and shared linking is enabled.
+    # Unfortunately, runtime linking may impact performance, so we do
+    # not want this to be the default eventually. Also, we use the
+    # versioned .so libs for executables only if there is the -brtl
+    # linker flag in LDFLAGS as well, or --with-aix-soname=svr4 only.
+    # To allow for filename-based versioning support, we need to create
+    # libNAME.so.V as an archive file, containing:
+    # *) an Import File, referring to the versioned filename of the
+    # archive as well as the shared archive member, telling the
+    # bitwidth (32 or 64) of that shared object, and providing the
+    # list of exported symbols of that shared object, eventually
+    # decorated with the 'weak' keyword
+    # *) the shared object with the F_LOADONLY flag set, to really avoid
+    # it being seen by the linker.
+ # At run time we better use the real file rather than another symlink, + # but for link time we create the symlink libNAME.so -> libNAME.so.V + + case $with_aix_soname,$aix_use_runtimelinking in + # AIX (on Power*) has no versioning support, so currently we cannot hardcode correct # soname into executable. Probably we can add versioning support to # collect2, so additional links can be useful in future. - if test "$aix_use_runtimelinking" = yes; then + aix,yes) # traditional libtool + dynamic_linker='AIX unversionable lib.so' # If using run time linking (on AIX 4.2 or later) use lib.so # instead of lib.a to let people know that these are not # typical AIX shared libraries. - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' - else + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' + ;; + aix,no) # traditional AIX only + dynamic_linker='AIX lib.a[(]lib.so.V[)]' # We preserve .a as extension for shared libraries through AIX4.2 # and later when we are not doing run time linking. - library_names_spec='${libname}${release}.a $libname.a' - soname_spec='${libname}${release}${shared_ext}$major' - fi + library_names_spec='$libname$release.a $libname.a' + soname_spec='$libname$release$shared_ext$major' + ;; + svr4,*) # full svr4 only + dynamic_linker="AIX lib.so.V[(]$shared_archive_member_spec.o[)]" + library_names_spec='$libname$release$shared_ext$major $libname$shared_ext' + # We do not specify a path in Import Files, so LIBPATH fires. 
+ shlibpath_overrides_runpath=yes + ;; + *,yes) # both, prefer svr4 + dynamic_linker="AIX lib.so.V[(]$shared_archive_member_spec.o[)], lib.a[(]lib.so.V[)]" + library_names_spec='$libname$release$shared_ext$major $libname$shared_ext' + # unpreferred sharedlib libNAME.a needs extra handling + postinstall_cmds='test -n "$linkname" || linkname="$realname"~func_stripname "" ".so" "$linkname"~$install_shared_prog "$dir/$func_stripname_result.$libext" "$destdir/$func_stripname_result.$libext"~test -z "$tstripme" || test -z "$striplib" || $striplib "$destdir/$func_stripname_result.$libext"' + postuninstall_cmds='for n in $library_names $old_library; do :; done~func_stripname "" ".so" "$n"~test "$func_stripname_result" = "$n" || func_append rmfiles " $odir/$func_stripname_result.$libext"' + # We do not specify a path in Import Files, so LIBPATH fires. + shlibpath_overrides_runpath=yes + ;; + *,no) # both, prefer aix + dynamic_linker="AIX lib.a[(]lib.so.V[)], lib.so.V[(]$shared_archive_member_spec.o[)]" + library_names_spec='$libname$release.a $libname.a' + soname_spec='$libname$release$shared_ext$major' + # unpreferred sharedlib libNAME.so.V and symlink libNAME.so need extra handling + postinstall_cmds='test -z "$dlname" || $install_shared_prog $dir/$dlname $destdir/$dlname~test -z "$tstripme" || test -z "$striplib" || $striplib $destdir/$dlname~test -n "$linkname" || linkname=$realname~func_stripname "" ".a" "$linkname"~(cd "$destdir" && $LN_S -f $dlname $func_stripname_result.so)' + postuninstall_cmds='test -z "$dlname" || func_append rmfiles " $odir/$dlname"~for n in $old_library $library_names; do :; done~func_stripname "" ".a" "$n"~func_append rmfiles " $odir/$func_stripname_result.so"' + ;; + esac shlibpath_var=LIBPATH fi ;; amigaos*) case $host_cpu in powerpc) # Since July 2007 AmigaOS4 officially supports .so libraries. # When compiling the executable, add -use-dynld -Lsobjs: to the compileline. 
- library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' ;; m68k) library_names_spec='$libname.ixlibrary $libname.a' # Create ${libname}_ixlibrary.a entries in /sys/libs. - finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`func_echo_all "$lib" | $SED '\''s%^.*/\([[^/]]*\)\.ixlibrary$%\1%'\''`; test $RM /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done' + finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`func_echo_all "$lib" | $SED '\''s%^.*/\([[^/]]*\)\.ixlibrary$%\1%'\''`; $RM /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done' ;; esac ;; beos*) - library_names_spec='${libname}${shared_ext}' + library_names_spec='$libname$shared_ext' dynamic_linker="$host_os ld.so" shlibpath_var=LIBRARY_PATH ;; bsdi[[45]]*) version_type=linux # correct to gnu/linux during the next big refactor need_version=no - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' - soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' + soname_spec='$libname$release$shared_ext$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib" sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib" # the default ld.so.conf also contains /usr/contrib/lib and # /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow # libtool to hard-code these 
into programs ;; cygwin* | mingw* | pw32* | cegcc*) version_type=windows - shrext_cmds=".dll" + shrext_cmds=.dll need_version=no need_lib_prefix=no case $GCC,$cc_basename in yes,*) # gcc library_names_spec='$libname.dll.a' # DLL is installed to $(libdir)/../bin by postinstall_cmds - postinstall_cmds='base_file=`basename \${file}`~ - dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~ + postinstall_cmds='base_file=`basename \$file`~ + dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname~ chmod a+x \$dldir/$dlname~ if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then eval '\''$striplib \$dldir/$dlname'\'' || exit \$?; fi' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes case $host_os in cygwin*) # Cygwin DLLs use 'cyg' prefix rather than 'lib' - soname_spec='`echo ${libname} | sed -e 's/^lib/cyg/'``echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}' + soname_spec='`echo $libname | sed -e 's/^lib/cyg/'``echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext' m4_if([$1], [],[ sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/lib/w32api"]) ;; mingw* | cegcc*) # MinGW DLLs use traditional 'lib' prefix - soname_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}' + soname_spec='$libname`echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext' ;; pw32*) # pw32 DLLs use 'pw' prefix rather than 'lib' - library_names_spec='`echo ${libname} | sed -e 's/^lib/pw/'``echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}' + library_names_spec='`echo $libname | sed -e 's/^lib/pw/'``echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext' ;; esac dynamic_linker='Win32 ld.exe' ;; *,cl*) # Native MSVC libname_spec='$name' - 
soname_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext}' - library_names_spec='${libname}.dll.lib' + soname_spec='$libname`echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext' + library_names_spec='$libname.dll.lib' case $build_os in mingw*) sys_lib_search_path_spec= lt_save_ifs=$IFS IFS=';' for lt_path in $LIB do IFS=$lt_save_ifs # Let DOS variable expansion print the short 8.3 style file name. lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"` sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path" done IFS=$lt_save_ifs # Convert to MSYS style. sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([[a-zA-Z]]\\):| /\\1|g' -e 's|^ ||'` ;; cygwin*) # Convert to unix form, then to dos form, then back to unix form # but this time dos style (no spaces!) so that the unix form looks # like /cygdrive/c/PROGRA~1:/cygdr... sys_lib_search_path_spec=`cygpath --path --unix "$LIB"` sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null` sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` ;; *) - sys_lib_search_path_spec="$LIB" + sys_lib_search_path_spec=$LIB if $ECHO "$sys_lib_search_path_spec" | [$GREP ';[c-zC-Z]:/' >/dev/null]; then # It is most probably a Windows format PATH. sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'` else sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"` fi # FIXME: find the short name or the path components, as spaces are # common. (e.g. "Program Files" -> "PROGRA~1") ;; esac # DLL is installed to $(libdir)/../bin by postinstall_cmds - postinstall_cmds='base_file=`basename \${file}`~ - dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\${base_file}'\''i; echo \$dlname'\''`~ + postinstall_cmds='base_file=`basename \$file`~ + dlpath=`$SHELL 2>&1 -c '\''. 
$dir/'\''\$base_file'\''i; echo \$dlname'\''`~ dldir=$destdir/`dirname \$dlpath`~ test -d \$dldir || mkdir -p \$dldir~ $install_prog $dir/$dlname \$dldir/$dlname' postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~ dlpath=$dir/\$dldll~ $RM \$dlpath' shlibpath_overrides_runpath=yes dynamic_linker='Win32 link.exe' ;; *) # Assume MSVC wrapper - library_names_spec='${libname}`echo ${release} | $SED -e 's/[[.]]/-/g'`${versuffix}${shared_ext} $libname.lib' + library_names_spec='$libname`echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext $libname.lib' dynamic_linker='Win32 ld.exe' ;; esac # FIXME: first we should search . and the directory the executable is in shlibpath_var=PATH ;; darwin* | rhapsody*) dynamic_linker="$host_os dyld" version_type=darwin need_lib_prefix=no need_version=no - library_names_spec='${libname}${release}${major}$shared_ext ${libname}$shared_ext' - soname_spec='${libname}${release}${major}$shared_ext' + library_names_spec='$libname$release$major$shared_ext $libname$shared_ext' + soname_spec='$libname$release$major$shared_ext' shlibpath_overrides_runpath=yes shlibpath_var=DYLD_LIBRARY_PATH shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`' m4_if([$1], [],[ sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/local/lib"]) sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib' ;; dgux*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname$shared_ext' - soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' + soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH ;; freebsd* | dragonfly*) # DragonFly does not have aout. When/if they implement a new # versioning mechanism, adjust this. 
if test -x /usr/bin/objformat; then objformat=`/usr/bin/objformat` else case $host_os in freebsd[[23]].*) objformat=aout ;; *) objformat=elf ;; esac fi version_type=freebsd-$objformat case $version_type in freebsd-elf*) - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' + soname_spec='$libname$release$shared_ext$major' need_version=no need_lib_prefix=no ;; freebsd-*) - library_names_spec='${libname}${release}${shared_ext}$versuffix $libname${shared_ext}$versuffix' + library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' need_version=yes ;; esac shlibpath_var=LD_LIBRARY_PATH case $host_os in freebsd2.*) shlibpath_overrides_runpath=yes ;; freebsd3.[[01]]* | freebsdelf3.[[01]]*) shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; freebsd3.[[2-9]]* | freebsdelf3.[[2-9]]* | \ freebsd4.[[0-5]] | freebsdelf4.[[0-5]] | freebsd4.1.1 | freebsdelf4.1.1) shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; *) # from 4.6 on, and DragonFly shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; esac ;; -gnu*) - version_type=linux # correct to gnu/linux during the next big refactor - need_lib_prefix=no - need_version=no - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}' - soname_spec='${libname}${release}${shared_ext}$major' - shlibpath_var=LD_LIBRARY_PATH - shlibpath_overrides_runpath=no - hardcode_into_libs=yes - ;; - haiku*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no dynamic_linker="$host_os runtime_loader" - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}${major} ${libname}${shared_ext}' - soname_spec='${libname}${release}${shared_ext}$major' + 
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' + soname_spec='$libname$release$shared_ext$major' shlibpath_var=LIBRARY_PATH - shlibpath_overrides_runpath=yes + shlibpath_overrides_runpath=no sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib' hardcode_into_libs=yes ;; hpux9* | hpux10* | hpux11*) # Give a soname corresponding to the major version so that dld.sl refuses to # link against other versions. version_type=sunos need_lib_prefix=no need_version=no case $host_cpu in ia64*) shrext_cmds='.so' hardcode_into_libs=yes dynamic_linker="$host_os dld.so" shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' - soname_spec='${libname}${release}${shared_ext}$major' - if test "X$HPUX_IA64_MODE" = X32; then + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' + soname_spec='$libname$release$shared_ext$major' + if test 32 = "$HPUX_IA64_MODE"; then sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib" + sys_lib_dlsearch_path_spec=/usr/lib/hpux32 else sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64" + sys_lib_dlsearch_path_spec=/usr/lib/hpux64 fi - sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; hppa*64*) shrext_cmds='.sl' hardcode_into_libs=yes dynamic_linker="$host_os dld.sl" shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH shlibpath_overrides_runpath=yes # Unless +noenvvar is specified. 
- library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' - soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' + soname_spec='$libname$release$shared_ext$major' sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64" sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; *) shrext_cmds='.sl' dynamic_linker="$host_os dld.sl" shlibpath_var=SHLIB_PATH shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' - soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' + soname_spec='$libname$release$shared_ext$major' ;; esac # HP-UX runs *really* slowly unless shared libraries are mode 555, ... 
postinstall_cmds='chmod 555 $lib' # or fails outright, so override atomically: install_override_mode=555 ;; interix[[3-9]]*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' - soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' + soname_spec='$libname$release$shared_ext$major' dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; irix5* | irix6* | nonstopux*) case $host_os in nonstopux*) version_type=nonstopux ;; *) - if test "$lt_cv_prog_gnu_ld" = yes; then + if test yes = "$lt_cv_prog_gnu_ld"; then version_type=linux # correct to gnu/linux during the next big refactor else version_type=irix fi ;; esac need_lib_prefix=no need_version=no - soname_spec='${libname}${release}${shared_ext}$major' - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${release}${shared_ext} $libname${shared_ext}' + soname_spec='$libname$release$shared_ext$major' + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$release$shared_ext $libname$shared_ext' case $host_os in irix5* | nonstopux*) libsuff= shlibsuff= ;; *) case $LD in # libtool.m4 will add one of these switches to LD *-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ") libsuff= shlibsuff= libmagic=32-bit;; *-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ") libsuff=32 shlibsuff=N32 libmagic=N32;; *-64|*"-64 "|*-melf64bmip|*"-melf64bmip ") libsuff=64 shlibsuff=64 libmagic=64-bit;; *) libsuff= shlibsuff= libmagic=never-match;; esac ;; esac shlibpath_var=LD_LIBRARY${shlibsuff}_PATH shlibpath_overrides_runpath=no - 
sys_lib_search_path_spec="/usr/lib${libsuff} /lib${libsuff} /usr/local/lib${libsuff}" - sys_lib_dlsearch_path_spec="/usr/lib${libsuff} /lib${libsuff}" + sys_lib_search_path_spec="/usr/lib$libsuff /lib$libsuff /usr/local/lib$libsuff" + sys_lib_dlsearch_path_spec="/usr/lib$libsuff /lib$libsuff" hardcode_into_libs=yes ;; # No shared lib support for Linux oldld, aout, or coff. linux*oldld* | linux*aout* | linux*coff*) dynamic_linker=no ;; +linux*android*) + version_type=none # Android doesn't support versioned libraries. + need_lib_prefix=no + need_version=no + library_names_spec='$libname$release$shared_ext' + soname_spec='$libname$release$shared_ext' + finish_cmds= + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=yes + + # This implies no fast_install, which is unacceptable. + # Some rework will be needed to allow for fast_install + # before this can be enabled. + hardcode_into_libs=yes + + dynamic_linker='Android linker' + # Don't embed -rpath directories since the linker doesn't support them. + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' + ;; + # This must be glibc/ELF. 
-linux* | k*bsd*-gnu | kopensolaris*-gnu) +linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' - soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' + soname_spec='$libname$release$shared_ext$major' finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no # Some binutils ld are patched to set DT_RUNPATH AC_CACHE_VAL([lt_cv_shlibpath_overrides_runpath], [lt_cv_shlibpath_overrides_runpath=no save_LDFLAGS=$LDFLAGS save_libdir=$libdir eval "libdir=/foo; wl=\"$_LT_TAGVAR(lt_prog_compiler_wl, $1)\"; \ LDFLAGS=\"\$LDFLAGS $_LT_TAGVAR(hardcode_libdir_flag_spec, $1)\"" AC_LINK_IFELSE([AC_LANG_PROGRAM([],[])], [AS_IF([ ($OBJDUMP -p conftest$ac_exeext) 2>/dev/null | grep "RUNPATH.*$libdir" >/dev/null], [lt_cv_shlibpath_overrides_runpath=yes])]) LDFLAGS=$save_LDFLAGS libdir=$save_libdir ]) shlibpath_overrides_runpath=$lt_cv_shlibpath_overrides_runpath # This implies no fast_install, which is unacceptable. # Some rework will be needed to allow for fast_install # before this can be enabled. hardcode_into_libs=yes - # Append ld.so.conf contents to the search path + # Ideally, we could use ldconfig to report *all* directories which are + # searched for libraries, however this is still not possible. Aside from not + # being certain /sbin/ldconfig is available, command + # 'ldconfig -N -X -v | grep ^/' on 64bit Fedora does not report /usr/lib64, + # even though it is searched at run-time. Try to do the best guess by + # appending ld.so.conf contents (and includes) to the search path. 
if test -f /etc/ld.so.conf; then lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \[$]2)); skip = 1; } { if (!skip) print \[$]0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;s/"//g;/^$/d' | tr '\n' ' '` sys_lib_dlsearch_path_spec="/lib /usr/lib $lt_ld_extra" fi # We used to test for /lib/ld.so.1 and disable shared libraries on # powerpc, because MkLinux only supported shared libraries with the # GNU dynamic linker. Since this was broken with cross compilers, # most powerpc-linux boxes support dynamic linking these days and # people can always --disable-shared, the test was removed, and we # assume the GNU/Linux dynamic linker is in use. dynamic_linker='GNU/Linux ld.so' ;; +netbsdelf*-gnu) + version_type=linux + need_lib_prefix=no + need_version=no + library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' + soname_spec='${libname}${release}${shared_ext}$major' + shlibpath_var=LD_LIBRARY_PATH + shlibpath_overrides_runpath=no + hardcode_into_libs=yes + dynamic_linker='NetBSD ld.elf_so' + ;; + netbsd*) version_type=sunos need_lib_prefix=no need_version=no if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' + library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' dynamic_linker='NetBSD (a.out) ld.so' else - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}' - soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' + soname_spec='$libname$release$shared_ext$major' dynamic_linker='NetBSD ld.elf_so' fi shlibpath_var=LD_LIBRARY_PATH 
shlibpath_overrides_runpath=yes hardcode_into_libs=yes ;; newsos6) version_type=linux # correct to gnu/linux during the next big refactor - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes ;; *nto* | *qnx*) version_type=qnx need_lib_prefix=no need_version=no - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' - soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' + soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes dynamic_linker='ldqnx.so' ;; -openbsd*) +openbsd* | bitrig*) version_type=sunos - sys_lib_dlsearch_path_spec="/usr/lib" + sys_lib_dlsearch_path_spec=/usr/lib need_lib_prefix=no - # Some older versions of OpenBSD (3.3 at least) *do* need versioned libs. 
- case $host_os in - openbsd3.3 | openbsd3.3.*) need_version=yes ;; - *) need_version=no ;; - esac - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' + if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then + need_version=no + else + need_version=yes + fi + library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir' shlibpath_var=LD_LIBRARY_PATH - if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then - case $host_os in - openbsd2.[[89]] | openbsd2.[[89]].*) - shlibpath_overrides_runpath=no - ;; - *) - shlibpath_overrides_runpath=yes - ;; - esac - else - shlibpath_overrides_runpath=yes - fi + shlibpath_overrides_runpath=yes ;; os2*) libname_spec='$name' - shrext_cmds=".dll" + version_type=windows + shrext_cmds=.dll + need_version=no need_lib_prefix=no - library_names_spec='$libname${shared_ext} $libname.a' + # OS/2 can only load a DLL with a base name of 8 characters or less. + soname_spec='`test -n "$os2dllname" && libname="$os2dllname"; + v=$($ECHO $release$versuffix | tr -d .-); + n=$($ECHO $libname | cut -b -$((8 - ${#v})) | tr . _); + $ECHO $n$v`$shared_ext' + library_names_spec='${libname}_dll.$libext' dynamic_linker='OS/2 ld.exe' - shlibpath_var=LIBPATH + shlibpath_var=BEGINLIBPATH + sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib" + sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec + postinstall_cmds='base_file=`basename \$file`~ + dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; $ECHO \$dlname'\''`~ + dldir=$destdir/`dirname \$dlpath`~ + test -d \$dldir || mkdir -p \$dldir~ + $install_prog $dir/$dlname \$dldir/$dlname~ + chmod a+x \$dldir/$dlname~ + if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then + eval '\''$striplib \$dldir/$dlname'\'' || exit \$?; + fi' + postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. 
$file; $ECHO \$dlname'\''`~ + dlpath=$dir/\$dldll~ + $RM \$dlpath' ;; osf3* | osf4* | osf5*) version_type=osf need_lib_prefix=no need_version=no - soname_spec='${libname}${release}${shared_ext}$major' - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + soname_spec='$libname$release$shared_ext$major' + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' shlibpath_var=LD_LIBRARY_PATH sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib" - sys_lib_dlsearch_path_spec="$sys_lib_search_path_spec" + sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec ;; rdos*) dynamic_linker=no ;; solaris*) version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' - soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' + soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes # ldd complains unless libraries are executable postinstall_cmds='chmod +x $lib' ;; sunos4*) version_type=sunos - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${shared_ext}$versuffix' + library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix' finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes - if test "$with_gnu_ld" = yes; then + if test yes = "$with_gnu_ld"; then need_lib_prefix=no fi need_version=yes ;; sysv4 | sysv4.3*) version_type=linux # correct to gnu/linux during the next big refactor - 
library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' - soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' + soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH case $host_vendor in sni) shlibpath_overrides_runpath=no need_lib_prefix=no runpath_var=LD_RUN_PATH ;; siemens) need_lib_prefix=no ;; motorola) need_lib_prefix=no need_version=no shlibpath_overrides_runpath=no sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib' ;; esac ;; sysv4*MP*) - if test -d /usr/nec ;then + if test -d /usr/nec; then version_type=linux # correct to gnu/linux during the next big refactor - library_names_spec='$libname${shared_ext}.$versuffix $libname${shared_ext}.$major $libname${shared_ext}' - soname_spec='$libname${shared_ext}.$major' + library_names_spec='$libname$shared_ext.$versuffix $libname$shared_ext.$major $libname$shared_ext' + soname_spec='$libname$shared_ext.$major' shlibpath_var=LD_LIBRARY_PATH fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) - version_type=freebsd-elf + version_type=sco need_lib_prefix=no need_version=no - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext} $libname${shared_ext}' - soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext $libname$shared_ext' + soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=yes hardcode_into_libs=yes - if test "$with_gnu_ld" = yes; then + if test yes = "$with_gnu_ld"; then sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib' else sys_lib_search_path_spec='/usr/ccs/lib /usr/lib' case $host_os in sco3.2v5*) sys_lib_search_path_spec="$sys_lib_search_path_spec /lib" ;; esac 
fi sys_lib_dlsearch_path_spec='/usr/lib' ;; tpf*) # TPF is a cross-target only. Preferred cross-host = GNU/Linux. version_type=linux # correct to gnu/linux during the next big refactor need_lib_prefix=no need_version=no - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' shlibpath_var=LD_LIBRARY_PATH shlibpath_overrides_runpath=no hardcode_into_libs=yes ;; uts4*) version_type=linux # correct to gnu/linux during the next big refactor - library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major $libname${shared_ext}' - soname_spec='${libname}${release}${shared_ext}$major' + library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext' + soname_spec='$libname$release$shared_ext$major' shlibpath_var=LD_LIBRARY_PATH ;; *) dynamic_linker=no ;; esac AC_MSG_RESULT([$dynamic_linker]) -test "$dynamic_linker" = no && can_build_shared=no +test no = "$dynamic_linker" && can_build_shared=no variables_saved_for_relink="PATH $shlibpath_var $runpath_var" -if test "$GCC" = yes; then +if test yes = "$GCC"; then variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH" fi -if test "${lt_cv_sys_lib_search_path_spec+set}" = set; then - sys_lib_search_path_spec="$lt_cv_sys_lib_search_path_spec" +if test set = "${lt_cv_sys_lib_search_path_spec+set}"; then + sys_lib_search_path_spec=$lt_cv_sys_lib_search_path_spec fi -if test "${lt_cv_sys_lib_dlsearch_path_spec+set}" = set; then - sys_lib_dlsearch_path_spec="$lt_cv_sys_lib_dlsearch_path_spec" + +if test set = "${lt_cv_sys_lib_dlsearch_path_spec+set}"; then + sys_lib_dlsearch_path_spec=$lt_cv_sys_lib_dlsearch_path_spec fi +# remember unaugmented sys_lib_dlsearch_path content for libtool script decls... 
+configure_time_dlsearch_path=$sys_lib_dlsearch_path_spec + +# ... but it needs LT_SYS_LIBRARY_PATH munging for other configure-time code +func_munge_path_list sys_lib_dlsearch_path_spec "$LT_SYS_LIBRARY_PATH" + +# to be used as default LT_SYS_LIBRARY_PATH value in generated libtool +configure_time_lt_sys_library_path=$LT_SYS_LIBRARY_PATH + _LT_DECL([], [variables_saved_for_relink], [1], [Variables whose values should be saved in libtool wrapper scripts and restored at link time]) _LT_DECL([], [need_lib_prefix], [0], [Do we need the "lib" prefix for modules?]) _LT_DECL([], [need_version], [0], [Do we need a version for libraries?]) _LT_DECL([], [version_type], [0], [Library versioning type]) _LT_DECL([], [runpath_var], [0], [Shared library runtime path variable]) _LT_DECL([], [shlibpath_var], [0],[Shared library path variable]) _LT_DECL([], [shlibpath_overrides_runpath], [0], [Is shlibpath searched before the hard-coded library search path?]) _LT_DECL([], [libname_spec], [1], [Format of library name prefix]) _LT_DECL([], [library_names_spec], [1], [[List of archive names. First name is the real one, the rest are links. 
The last name is the one that the linker finds with -lNAME]]) _LT_DECL([], [soname_spec], [1], [[The coded name of the library, if different from the real name]]) _LT_DECL([], [install_override_mode], [1], [Permission mode override for installation of shared libraries]) _LT_DECL([], [postinstall_cmds], [2], [Command to use after installation of a shared archive]) _LT_DECL([], [postuninstall_cmds], [2], [Command to use after uninstallation of a shared archive]) _LT_DECL([], [finish_cmds], [2], [Commands used to finish a libtool library installation in a directory]) _LT_DECL([], [finish_eval], [1], [[As "finish_cmds", except a single script fragment to be evaled but not shown]]) _LT_DECL([], [hardcode_into_libs], [0], [Whether we should hardcode library paths into libraries]) _LT_DECL([], [sys_lib_search_path_spec], [2], [Compile-time system search path for libraries]) -_LT_DECL([], [sys_lib_dlsearch_path_spec], [2], - [Run-time system search path for libraries]) +_LT_DECL([sys_lib_dlsearch_path_spec], [configure_time_dlsearch_path], [2], + [Detected run-time system search path for libraries]) +_LT_DECL([], [configure_time_lt_sys_library_path], [2], + [Explicit LT_SYS_LIBRARY_PATH set during ./configure time]) ])# _LT_SYS_DYNAMIC_LINKER # _LT_PATH_TOOL_PREFIX(TOOL) # -------------------------- -# find a file program which can recognize shared library +# find a file program that can recognize shared library AC_DEFUN([_LT_PATH_TOOL_PREFIX], [m4_require([_LT_DECL_EGREP])dnl AC_MSG_CHECKING([for $1]) AC_CACHE_VAL(lt_cv_path_MAGIC_CMD, [case $MAGIC_CMD in [[\\/*] | ?:[\\/]*]) - lt_cv_path_MAGIC_CMD="$MAGIC_CMD" # Let the user override the test with a path. + lt_cv_path_MAGIC_CMD=$MAGIC_CMD # Let the user override the test with a path. ;; *) - lt_save_MAGIC_CMD="$MAGIC_CMD" - lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR + lt_save_MAGIC_CMD=$MAGIC_CMD + lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR dnl $ac_dummy forces splitting on constant user-supplied paths. 
dnl POSIX.2 word splitting is done only on the output of word expansions, dnl not every word. This closes a longstanding sh security hole. ac_dummy="m4_if([$2], , $PATH, [$2])" for ac_dir in $ac_dummy; do - IFS="$lt_save_ifs" + IFS=$lt_save_ifs test -z "$ac_dir" && ac_dir=. - if test -f $ac_dir/$1; then - lt_cv_path_MAGIC_CMD="$ac_dir/$1" + if test -f "$ac_dir/$1"; then + lt_cv_path_MAGIC_CMD=$ac_dir/"$1" if test -n "$file_magic_test_file"; then case $deplibs_check_method in "file_magic "*) file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"` - MAGIC_CMD="$lt_cv_path_MAGIC_CMD" + MAGIC_CMD=$lt_cv_path_MAGIC_CMD if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null | $EGREP "$file_magic_regex" > /dev/null; then : else cat <<_LT_EOF 1>&2 *** Warning: the command libtool uses to detect shared libraries, *** $file_magic_cmd, produces output that libtool cannot recognize. *** The result is that libtool may fail to recognize shared libraries *** as such. This will affect the creation of libtool libraries that *** depend on shared libraries, but programs linked with such libtool *** libraries will work regardless of this problem. 
Nevertheless, you *** may want to report the problem to your system manager and/or to *** bug-libtool@gnu.org _LT_EOF fi ;; esac fi break fi done - IFS="$lt_save_ifs" - MAGIC_CMD="$lt_save_MAGIC_CMD" + IFS=$lt_save_ifs + MAGIC_CMD=$lt_save_MAGIC_CMD ;; esac]) -MAGIC_CMD="$lt_cv_path_MAGIC_CMD" +MAGIC_CMD=$lt_cv_path_MAGIC_CMD if test -n "$MAGIC_CMD"; then AC_MSG_RESULT($MAGIC_CMD) else AC_MSG_RESULT(no) fi _LT_DECL([], [MAGIC_CMD], [0], [Used to examine libraries when file_magic_cmd begins with "file"])dnl ])# _LT_PATH_TOOL_PREFIX # Old name: AU_ALIAS([AC_PATH_TOOL_PREFIX], [_LT_PATH_TOOL_PREFIX]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_PATH_TOOL_PREFIX], []) # _LT_PATH_MAGIC # -------------- -# find a file program which can recognize a shared library +# find a file program that can recognize a shared library m4_defun([_LT_PATH_MAGIC], [_LT_PATH_TOOL_PREFIX(${ac_tool_prefix}file, /usr/bin$PATH_SEPARATOR$PATH) if test -z "$lt_cv_path_MAGIC_CMD"; then if test -n "$ac_tool_prefix"; then _LT_PATH_TOOL_PREFIX(file, /usr/bin$PATH_SEPARATOR$PATH) else MAGIC_CMD=: fi fi ])# _LT_PATH_MAGIC # LT_PATH_LD # ---------- # find the pathname to the GNU or non-GNU linker AC_DEFUN([LT_PATH_LD], [AC_REQUIRE([AC_PROG_CC])dnl AC_REQUIRE([AC_CANONICAL_HOST])dnl AC_REQUIRE([AC_CANONICAL_BUILD])dnl m4_require([_LT_DECL_SED])dnl m4_require([_LT_DECL_EGREP])dnl m4_require([_LT_PROG_ECHO_BACKSLASH])dnl AC_ARG_WITH([gnu-ld], [AS_HELP_STRING([--with-gnu-ld], [assume the C compiler uses GNU ld @<:@default=no@:>@])], - [test "$withval" = no || with_gnu_ld=yes], + [test no = "$withval" || with_gnu_ld=yes], [with_gnu_ld=no])dnl ac_prog=ld -if test "$GCC" = yes; then +if test yes = "$GCC"; then # Check if gcc -print-prog-name=ld gives a path. 
AC_MSG_CHECKING([for ld used by $CC]) case $host in *-*-mingw*) - # gcc leaves a trailing carriage return which upsets mingw + # gcc leaves a trailing carriage return, which upsets mingw ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;; *) ac_prog=`($CC -print-prog-name=ld) 2>&5` ;; esac case $ac_prog in # Accept absolute paths. [[\\/]]* | ?:[[\\/]]*) re_direlt='/[[^/]][[^/]]*/\.\./' # Canonicalize the pathname of ld ac_prog=`$ECHO "$ac_prog"| $SED 's%\\\\%/%g'` while $ECHO "$ac_prog" | $GREP "$re_direlt" > /dev/null 2>&1; do ac_prog=`$ECHO $ac_prog| $SED "s%$re_direlt%/%"` done - test -z "$LD" && LD="$ac_prog" + test -z "$LD" && LD=$ac_prog ;; "") # If it fails, then pretend we aren't using GCC. ac_prog=ld ;; *) # If it is relative, then search for the first ld in PATH. with_gnu_ld=unknown ;; esac -elif test "$with_gnu_ld" = yes; then +elif test yes = "$with_gnu_ld"; then AC_MSG_CHECKING([for GNU ld]) else AC_MSG_CHECKING([for non-GNU ld]) fi AC_CACHE_VAL(lt_cv_path_LD, [if test -z "$LD"; then - lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR + lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR for ac_dir in $PATH; do - IFS="$lt_save_ifs" + IFS=$lt_save_ifs test -z "$ac_dir" && ac_dir=. if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then - lt_cv_path_LD="$ac_dir/$ac_prog" + lt_cv_path_LD=$ac_dir/$ac_prog # Check to see if the program is GNU ld. I'd rather use --version, # but apparently some variants of GNU ld only accept -v. # Break only if it was the GNU/non-GNU ld that we prefer. 
case `"$lt_cv_path_LD" -v 2>&1 &1 conftest.i +cat conftest.i conftest.i >conftest2.i +: ${lt_DD:=$DD} +AC_PATH_PROGS_FEATURE_CHECK([lt_DD], [dd], +[if "$ac_path_lt_DD" bs=32 count=1 <conftest2.i >conftest.out 2>/dev/null; then + cmp -s conftest.i conftest.out \ + && ac_cv_path_lt_DD="$ac_path_lt_DD" ac_path_lt_DD_found=: +fi]) +rm -f conftest.i conftest2.i conftest.out]) +])# _LT_PATH_DD + + +# _LT_CMD_TRUNCATE +# ---------------- +# find command to truncate a binary pipe +m4_defun([_LT_CMD_TRUNCATE], +[m4_require([_LT_PATH_DD]) +AC_CACHE_CHECK([how to truncate binary pipes], [lt_cv_truncate_bin], +[printf 0123456789abcdef0123456789abcdef >conftest.i +cat conftest.i conftest.i >conftest2.i +lt_cv_truncate_bin= +if "$ac_cv_path_lt_DD" bs=32 count=1 <conftest2.i >conftest.out 2>/dev/null; then + cmp -s conftest.i conftest.out \ + && lt_cv_truncate_bin="$ac_cv_path_lt_DD bs=4096 count=1" +fi +rm -f conftest.i conftest2.i conftest.out +test -z "$lt_cv_truncate_bin" && lt_cv_truncate_bin="$SED -e 4q"]) +_LT_DECL([lt_truncate_bin], [lt_cv_truncate_bin], [1], + [Command to truncate a binary pipe]) +])# _LT_CMD_TRUNCATE + + # _LT_CHECK_MAGIC_METHOD # ---------------------- # how to check for library dependencies # -- PORTME fill in with the dynamic library characteristics m4_defun([_LT_CHECK_MAGIC_METHOD], [m4_require([_LT_DECL_EGREP]) m4_require([_LT_DECL_OBJDUMP]) AC_CACHE_CHECK([how to recognize dependent libraries], lt_cv_deplibs_check_method, [lt_cv_file_magic_cmd='$MAGIC_CMD' lt_cv_file_magic_test_file= lt_cv_deplibs_check_method='unknown' # Need to set the preceding variable on all platforms that support # interlibrary dependencies. # 'none' -- dependencies not supported. -# `unknown' -- same as none, but documents that we really don't know. +# 'unknown' -- same as none, but documents that we really don't know. # 'pass_all' -- all dependencies passed with no checks. # 'test_compile' -- check by making test program. 
# 'file_magic [[regex]]' -- check by looking for files in library path -# which responds to the $file_magic_cmd with a given extended regex. -# If you have `file' or equivalent on your system and you're not sure -# whether `pass_all' will *always* work, you probably want this one. +# that responds to the $file_magic_cmd with a given extended regex. +# If you have 'file' or equivalent on your system and you're not sure +# whether 'pass_all' will *always* work, you probably want this one. case $host_os in aix[[4-9]]*) lt_cv_deplibs_check_method=pass_all ;; beos*) lt_cv_deplibs_check_method=pass_all ;; bsdi[[45]]*) lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (shared object|dynamic lib)' lt_cv_file_magic_cmd='/usr/bin/file -L' lt_cv_file_magic_test_file=/shlib/libc.so ;; cygwin*) # func_win32_libid is a shell function defined in ltmain.sh lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL' lt_cv_file_magic_cmd='func_win32_libid' ;; mingw* | pw32*) # Base MSYS/MinGW do not provide the 'file' command needed by # func_win32_libid shell function, so use a weaker test based on 'objdump', # unless we find 'file', for example because we are cross-compiling. - # func_win32_libid assumes BSD nm, so disallow it if using MS dumpbin. - if ( test "$lt_cv_nm_interface" = "BSD nm" && file / ) >/dev/null 2>&1; then + if ( file / ) >/dev/null 2>&1; then lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL' lt_cv_file_magic_cmd='func_win32_libid' else # Keep this pattern in sync with the one in func_win32_libid. lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' lt_cv_file_magic_cmd='$OBJDUMP -f' fi ;; cegcc*) # use the weaker test based on 'objdump'. See mingw*. lt_cv_deplibs_check_method='file_magic file format pe-arm-.*little(.*architecture: arm)?' 
lt_cv_file_magic_cmd='$OBJDUMP -f' ;; darwin* | rhapsody*) lt_cv_deplibs_check_method=pass_all ;; freebsd* | dragonfly*) if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then case $host_cpu in i*86 ) # Not sure whether the presence of OpenBSD here was a mistake. # Let's accept both of them until this is cleared up. lt_cv_deplibs_check_method='file_magic (FreeBSD|OpenBSD|DragonFly)/i[[3-9]]86 (compact )?demand paged shared library' lt_cv_file_magic_cmd=/usr/bin/file lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*` ;; esac else lt_cv_deplibs_check_method=pass_all fi ;; -gnu*) - lt_cv_deplibs_check_method=pass_all - ;; - haiku*) lt_cv_deplibs_check_method=pass_all ;; hpux10.20* | hpux11*) lt_cv_file_magic_cmd=/usr/bin/file case $host_cpu in ia64*) lt_cv_deplibs_check_method='file_magic (s[[0-9]][[0-9]][[0-9]]|ELF-[[0-9]][[0-9]]) shared object file - IA64' lt_cv_file_magic_test_file=/usr/lib/hpux32/libc.so ;; hppa*64*) [lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF[ -][0-9][0-9])(-bit)?( [LM]SB)? shared object( file)?[, -]* PA-RISC [0-9]\.[0-9]'] lt_cv_file_magic_test_file=/usr/lib/pa20_64/libc.sl ;; *) lt_cv_deplibs_check_method='file_magic (s[[0-9]][[0-9]][[0-9]]|PA-RISC[[0-9]]\.[[0-9]]) shared library' lt_cv_file_magic_test_file=/usr/lib/libc.sl ;; esac ;; interix[[3-9]]*) # PIC code is broken on Interix 3.x, that's why |\.a not |_pic\.a here lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so|\.a)$' ;; irix5* | irix6* | nonstopux*) case $LD in *-32|*"-32 ") libmagic=32-bit;; *-n32|*"-n32 ") libmagic=N32;; *-64|*"-64 ") libmagic=64-bit;; *) libmagic=never-match;; esac lt_cv_deplibs_check_method=pass_all ;; # This must be glibc/ELF. 
-linux* | k*bsd*-gnu | kopensolaris*-gnu) +linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) lt_cv_deplibs_check_method=pass_all ;; -netbsd*) +netbsd* | netbsdelf*-gnu) if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|_pic\.a)$' else lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so|_pic\.a)$' fi ;; newos6*) lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (executable|dynamic lib)' lt_cv_file_magic_cmd=/usr/bin/file lt_cv_file_magic_test_file=/usr/lib/libnls.so ;; *nto* | *qnx*) lt_cv_deplibs_check_method=pass_all ;; -openbsd*) - if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then +openbsd* | bitrig*) + if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|\.so|_pic\.a)$' else lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|_pic\.a)$' fi ;; osf3* | osf4* | osf5*) lt_cv_deplibs_check_method=pass_all ;; rdos*) lt_cv_deplibs_check_method=pass_all ;; solaris*) lt_cv_deplibs_check_method=pass_all ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*) lt_cv_deplibs_check_method=pass_all ;; sysv4 | sysv4.3*) case $host_vendor in motorola) lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (shared object|dynamic lib) M[[0-9]][[0-9]]* Version [[0-9]]' lt_cv_file_magic_test_file=`echo /usr/lib/libc.so*` ;; ncr) lt_cv_deplibs_check_method=pass_all ;; sequent) lt_cv_file_magic_cmd='/bin/file' lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB (shared object|dynamic lib )' ;; sni) lt_cv_file_magic_cmd='/bin/file' lt_cv_deplibs_check_method="file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB dynamic lib" lt_cv_file_magic_test_file=/lib/libc.so ;; siemens) lt_cv_deplibs_check_method=pass_all ;; pc) lt_cv_deplibs_check_method=pass_all ;; 
esac ;; tpf*) lt_cv_deplibs_check_method=pass_all ;; +os2*) + lt_cv_deplibs_check_method=pass_all + ;; esac ]) file_magic_glob= want_nocaseglob=no if test "$build" = "$host"; then case $host_os in mingw* | pw32*) if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then want_nocaseglob=yes else file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[[\1]]\/[[\1]]\/g;/g"` fi ;; esac fi file_magic_cmd=$lt_cv_file_magic_cmd deplibs_check_method=$lt_cv_deplibs_check_method test -z "$deplibs_check_method" && deplibs_check_method=unknown _LT_DECL([], [deplibs_check_method], [1], [Method to check whether dependent libraries are shared objects]) _LT_DECL([], [file_magic_cmd], [1], [Command to use when deplibs_check_method = "file_magic"]) _LT_DECL([], [file_magic_glob], [1], [How to find potential files when deplibs_check_method = "file_magic"]) _LT_DECL([], [want_nocaseglob], [1], [Find potential files using nocaseglob when deplibs_check_method = "file_magic"]) ])# _LT_CHECK_MAGIC_METHOD # LT_PATH_NM # ---------- # find the pathname to a BSD- or MS-compatible name lister AC_DEFUN([LT_PATH_NM], [AC_REQUIRE([AC_PROG_CC])dnl AC_CACHE_CHECK([for BSD- or MS-compatible name lister (nm)], lt_cv_path_NM, [if test -n "$NM"; then # Let the user override the test. - lt_cv_path_NM="$NM" + lt_cv_path_NM=$NM else - lt_nm_to_check="${ac_tool_prefix}nm" + lt_nm_to_check=${ac_tool_prefix}nm if test -n "$ac_tool_prefix" && test "$build" = "$host"; then lt_nm_to_check="$lt_nm_to_check nm" fi for lt_tmp_nm in $lt_nm_to_check; do - lt_save_ifs="$IFS"; IFS=$PATH_SEPARATOR + lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR for ac_dir in $PATH /usr/ccs/bin/elf /usr/ccs/bin /usr/ucb /bin; do - IFS="$lt_save_ifs" + IFS=$lt_save_ifs test -z "$ac_dir" && ac_dir=. 
- tmp_nm="$ac_dir/$lt_tmp_nm" - if test -f "$tmp_nm" || test -f "$tmp_nm$ac_exeext" ; then + tmp_nm=$ac_dir/$lt_tmp_nm + if test -f "$tmp_nm" || test -f "$tmp_nm$ac_exeext"; then # Check to see if the nm accepts a BSD-compat flag. - # Adding the `sed 1q' prevents false positives on HP-UX, which says: + # Adding the 'sed 1q' prevents false positives on HP-UX, which says: # nm: unknown option "B" ignored # Tru64's nm complains that /dev/null is an invalid object file - case `"$tmp_nm" -B /dev/null 2>&1 | sed '1q'` in - */dev/null* | *'Invalid file or object type'*) + # MSYS converts /dev/null to NUL, MinGW nm treats NUL as empty + case $build_os in + mingw*) lt_bad_file=conftest.nm/nofile ;; + *) lt_bad_file=/dev/null ;; + esac + case `"$tmp_nm" -B $lt_bad_file 2>&1 | sed '1q'` in + *$lt_bad_file* | *'Invalid file or object type'*) lt_cv_path_NM="$tmp_nm -B" - break + break 2 ;; *) case `"$tmp_nm" -p /dev/null 2>&1 | sed '1q'` in */dev/null*) lt_cv_path_NM="$tmp_nm -p" - break + break 2 ;; *) lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but continue # so that we can try to find one that supports BSD flags ;; esac ;; esac fi done - IFS="$lt_save_ifs" + IFS=$lt_save_ifs done : ${lt_cv_path_NM=no} fi]) -if test "$lt_cv_path_NM" != "no"; then - NM="$lt_cv_path_NM" +if test no != "$lt_cv_path_NM"; then + NM=$lt_cv_path_NM else # Didn't find any BSD compatible name lister, look for dumpbin. if test -n "$DUMPBIN"; then : # Let the user override the test. 
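Editorial aside (not part of the patch): the hunk above swaps `/dev/null` for a deliberately nonexistent `conftest.nm/nofile` on MSYS, because MSYS rewrites `/dev/null` to `NUL` and MinGW `nm` treats `NUL` as a valid empty file. The classification logic — "the tool echoed the bad file name back, or complained about an invalid object, so it understood `-B`" — can be sketched as a small shell function (`classify_nm_probe` is an illustrative name of ours, not a libtool function):

```shell
# Classify the first line of `nm -B $bad_file` output, as the patched
# LT_PATH_NM does. $1 = first output line, $2 = the deliberately bad file.
classify_nm_probe() {
  case $1 in
    *"$2"* | *'Invalid file or object type'*)
      echo 'BSD-compatible (-B)' ;;   # nm rejected the bad file: -B works
    *)
      echo 'try -p next' ;;           # unclear: fall through to `nm -p`
  esac
}

classify_nm_probe 'nm: conftest.nm/nofile: No such file' conftest.nm/nofile
# prints: BSD-compatible (-B)
```

The `break` → `break 2` change in the same hunk then exits both the directory loop and the `$lt_nm_to_check` loop as soon as one candidate is classified.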
else AC_CHECK_TOOLS(DUMPBIN, [dumpbin "link -dump"], :) - case `$DUMPBIN -symbols /dev/null 2>&1 | sed '1q'` in + case `$DUMPBIN -symbols -headers /dev/null 2>&1 | sed '1q'` in *COFF*) - DUMPBIN="$DUMPBIN -symbols" + DUMPBIN="$DUMPBIN -symbols -headers" ;; *) DUMPBIN=: ;; esac fi AC_SUBST([DUMPBIN]) - if test "$DUMPBIN" != ":"; then - NM="$DUMPBIN" + if test : != "$DUMPBIN"; then + NM=$DUMPBIN fi fi test -z "$NM" && NM=nm AC_SUBST([NM]) _LT_DECL([], [NM], [1], [A BSD- or MS-compatible name lister])dnl AC_CACHE_CHECK([the name lister ($NM) interface], [lt_cv_nm_interface], [lt_cv_nm_interface="BSD nm" echo "int some_variable = 0;" > conftest.$ac_ext (eval echo "\"\$as_me:$LINENO: $ac_compile\"" >&AS_MESSAGE_LOG_FD) (eval "$ac_compile" 2>conftest.err) cat conftest.err >&AS_MESSAGE_LOG_FD (eval echo "\"\$as_me:$LINENO: $NM \\\"conftest.$ac_objext\\\"\"" >&AS_MESSAGE_LOG_FD) (eval "$NM \"conftest.$ac_objext\"" 2>conftest.err > conftest.out) cat conftest.err >&AS_MESSAGE_LOG_FD (eval echo "\"\$as_me:$LINENO: output\"" >&AS_MESSAGE_LOG_FD) cat conftest.out >&AS_MESSAGE_LOG_FD if $GREP 'External.*some_variable' conftest.out > /dev/null; then lt_cv_nm_interface="MS dumpbin" fi rm -f conftest*]) ])# LT_PATH_NM # Old names: AU_ALIAS([AM_PROG_NM], [LT_PATH_NM]) AU_ALIAS([AC_PROG_NM], [LT_PATH_NM]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AM_PROG_NM], []) dnl AC_DEFUN([AC_PROG_NM], []) # _LT_CHECK_SHAREDLIB_FROM_LINKLIB # -------------------------------- # how to determine the name of the shared library # associated with a specific link library. 
# -- PORTME fill in with the dynamic library characteristics m4_defun([_LT_CHECK_SHAREDLIB_FROM_LINKLIB], [m4_require([_LT_DECL_EGREP]) m4_require([_LT_DECL_OBJDUMP]) m4_require([_LT_DECL_DLLTOOL]) AC_CACHE_CHECK([how to associate runtime and link libraries], lt_cv_sharedlib_from_linklib_cmd, [lt_cv_sharedlib_from_linklib_cmd='unknown' case $host_os in cygwin* | mingw* | pw32* | cegcc*) - # two different shell functions defined in ltmain.sh - # decide which to use based on capabilities of $DLLTOOL + # two different shell functions defined in ltmain.sh; + # decide which one to use based on capabilities of $DLLTOOL case `$DLLTOOL --help 2>&1` in *--identify-strict*) lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib ;; *) lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback ;; esac ;; *) # fallback: assume linklib IS sharedlib - lt_cv_sharedlib_from_linklib_cmd="$ECHO" + lt_cv_sharedlib_from_linklib_cmd=$ECHO ;; esac ]) sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO _LT_DECL([], [sharedlib_from_linklib_cmd], [1], [Command to associate shared and link libraries]) ])# _LT_CHECK_SHAREDLIB_FROM_LINKLIB # _LT_PATH_MANIFEST_TOOL # ---------------------- # locate the manifest tool m4_defun([_LT_PATH_MANIFEST_TOOL], [AC_CHECK_TOOL(MANIFEST_TOOL, mt, :) test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt AC_CACHE_CHECK([if $MANIFEST_TOOL is a manifest tool], [lt_cv_path_mainfest_tool], [lt_cv_path_mainfest_tool=no echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&AS_MESSAGE_LOG_FD $MANIFEST_TOOL '-?' 
2>conftest.err > conftest.out cat conftest.err >&AS_MESSAGE_LOG_FD if $GREP 'Manifest Tool' conftest.out > /dev/null; then lt_cv_path_mainfest_tool=yes fi rm -f conftest*]) -if test "x$lt_cv_path_mainfest_tool" != xyes; then +if test yes != "$lt_cv_path_mainfest_tool"; then MANIFEST_TOOL=: fi _LT_DECL([], [MANIFEST_TOOL], [1], [Manifest tool])dnl ])# _LT_PATH_MANIFEST_TOOL +# _LT_DLL_DEF_P([FILE]) +# --------------------- +# True iff FILE is a Windows DLL '.def' file. +# Keep in sync with func_dll_def_p in the libtool script +AC_DEFUN([_LT_DLL_DEF_P], +[dnl + test DEF = "`$SED -n dnl + -e '\''s/^[[ ]]*//'\'' dnl Strip leading whitespace + -e '\''/^\(;.*\)*$/d'\'' dnl Delete empty lines and comments + -e '\''s/^\(EXPORTS\|LIBRARY\)\([[ ]].*\)*$/DEF/p'\'' dnl + -e q dnl Only consider the first "real" line + $1`" dnl +])# _LT_DLL_DEF_P + + # LT_LIB_M # -------- # check for math library AC_DEFUN([LT_LIB_M], [AC_REQUIRE([AC_CANONICAL_HOST])dnl LIBM= case $host in *-*-beos* | *-*-cegcc* | *-*-cygwin* | *-*-haiku* | *-*-pw32* | *-*-darwin*) # These system don't have libm, or don't need it ;; *-ncr-sysv4.3*) - AC_CHECK_LIB(mw, _mwvalidcheckl, LIBM="-lmw") + AC_CHECK_LIB(mw, _mwvalidcheckl, LIBM=-lmw) AC_CHECK_LIB(m, cos, LIBM="$LIBM -lm") ;; *) - AC_CHECK_LIB(m, cos, LIBM="-lm") + AC_CHECK_LIB(m, cos, LIBM=-lm) ;; esac AC_SUBST([LIBM]) ])# LT_LIB_M # Old name: AU_ALIAS([AC_CHECK_LIBM], [LT_LIB_M]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([AC_CHECK_LIBM], []) # _LT_COMPILER_NO_RTTI([TAGNAME]) # ------------------------------- m4_defun([_LT_COMPILER_NO_RTTI], [m4_require([_LT_TAG_COMPILER])dnl _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)= -if test "$GCC" = yes; then +if test yes = "$GCC"; then case $cc_basename in nvcc*) _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -Xcompiler -fno-builtin' ;; *) _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -fno-builtin' ;; esac _LT_COMPILER_OPTION([if $compiler supports -fno-rtti -fno-exceptions], 
lt_cv_prog_compiler_rtti_exceptions, [-fno-rtti -fno-exceptions], [], [_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)="$_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1) -fno-rtti -fno-exceptions"]) fi _LT_TAGDECL([no_builtin_flag], [lt_prog_compiler_no_builtin_flag], [1], [Compiler flag to turn off builtin functions]) ])# _LT_COMPILER_NO_RTTI # _LT_CMD_GLOBAL_SYMBOLS # ---------------------- m4_defun([_LT_CMD_GLOBAL_SYMBOLS], [AC_REQUIRE([AC_CANONICAL_HOST])dnl AC_REQUIRE([AC_PROG_CC])dnl AC_REQUIRE([AC_PROG_AWK])dnl AC_REQUIRE([LT_PATH_NM])dnl AC_REQUIRE([LT_PATH_LD])dnl m4_require([_LT_DECL_SED])dnl m4_require([_LT_DECL_EGREP])dnl m4_require([_LT_TAG_COMPILER])dnl # Check for command to grab the raw symbol name followed by C symbol from nm. AC_MSG_CHECKING([command to parse $NM output from $compiler object]) AC_CACHE_VAL([lt_cv_sys_global_symbol_pipe], [ # These are sane defaults that work on at least a few old systems. # [They come from Ultrix. What could be older than Ultrix?!! ;)] # Character class describing NM global symbol codes. symcode='[[BCDEGRST]]' # Regexp to match symbols that can be accessed directly from C. sympat='\([[_A-Za-z]][[_A-Za-z0-9]]*\)' # Define system-specific variables. case $host_os in aix*) symcode='[[BCDT]]' ;; cygwin* | mingw* | pw32* | cegcc*) symcode='[[ABCDGISTW]]' ;; hpux*) - if test "$host_cpu" = ia64; then + if test ia64 = "$host_cpu"; then symcode='[[ABCDEGRST]]' fi ;; irix* | nonstopux*) symcode='[[BCDEGRST]]' ;; osf*) symcode='[[BCDEGQRST]]' ;; solaris*) symcode='[[BDRT]]' ;; sco3.2v5*) symcode='[[DT]]' ;; sysv4.2uw2*) symcode='[[DT]]' ;; sysv5* | sco5v6* | unixware* | OpenUNIX*) symcode='[[ABDT]]' ;; sysv4) symcode='[[DFNSTU]]' ;; esac # If we're using GNU nm, then use its standard symbol codes. case `$NM -V 2>&1` in *GNU* | *'with BFD'*) symcode='[[ABCDGIRSTW]]' ;; esac +if test "$lt_cv_nm_interface" = "MS dumpbin"; then + # Gets list of data symbols to import. 
+ lt_cv_sys_global_symbol_to_import="sed -n -e 's/^I .* \(.*\)$/\1/p'" + # Adjust the below global symbol transforms to fixup imported variables. + lt_cdecl_hook=" -e 's/^I .* \(.*\)$/extern __declspec(dllimport) char \1;/p'" + lt_c_name_hook=" -e 's/^I .* \(.*\)$/ {\"\1\", (void *) 0},/p'" + lt_c_name_lib_hook="\ + -e 's/^I .* \(lib.*\)$/ {\"\1\", (void *) 0},/p'\ + -e 's/^I .* \(.*\)$/ {\"lib\1\", (void *) 0},/p'" +else + # Disable hooks by default. + lt_cv_sys_global_symbol_to_import= + lt_cdecl_hook= + lt_c_name_hook= + lt_c_name_lib_hook= +fi + # Transform an extracted symbol line into a proper C declaration. # Some systems (esp. on ia64) link data and code symbols differently, # so use this general approach. -lt_cv_sys_global_symbol_to_cdecl="sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' -e 's/^$symcode* .* \(.*\)$/extern char \1;/p'" +lt_cv_sys_global_symbol_to_cdecl="sed -n"\ +$lt_cdecl_hook\ +" -e 's/^T .* \(.*\)$/extern int \1();/p'"\ +" -e 's/^$symcode$symcode* .* \(.*\)$/extern char \1;/p'" # Transform an extracted symbol line into symbol name and symbol address -lt_cv_sys_global_symbol_to_c_name_address="sed -n -e 's/^: \([[^ ]]*\)[[ ]]*$/ {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/ {\"\2\", (void *) \&\2},/p'" -lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n -e 's/^: \([[^ ]]*\)[[ ]]*$/ {\\\"\1\\\", (void *) 0},/p' -e 's/^$symcode* \([[^ ]]*\) \(lib[[^ ]]*\)$/ {\"\2\", (void *) \&\2},/p' -e 's/^$symcode* \([[^ ]]*\) \([[^ ]]*\)$/ {\"lib\2\", (void *) \&\2},/p'" +lt_cv_sys_global_symbol_to_c_name_address="sed -n"\ +$lt_c_name_hook\ +" -e 's/^: \(.*\) .*$/ {\"\1\", (void *) 0},/p'"\ +" -e 's/^$symcode$symcode* .* \(.*\)$/ {\"\1\", (void *) \&\1},/p'" + +# Transform an extracted symbol line into symbol name with lib prefix and +# symbol address. 
+lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n"\ +$lt_c_name_lib_hook\ +" -e 's/^: \(.*\) .*$/ {\"\1\", (void *) 0},/p'"\ +" -e 's/^$symcode$symcode* .* \(lib.*\)$/ {\"\1\", (void *) \&\1},/p'"\ +" -e 's/^$symcode$symcode* .* \(.*\)$/ {\"lib\1\", (void *) \&\1},/p'" # Handle CRLF in mingw tool chain opt_cr= case $build_os in mingw*) opt_cr=`$ECHO 'x\{0,1\}' | tr x '\015'` # option cr in regexp ;; esac # Try without a prefix underscore, then with it. for ac_symprfx in "" "_"; do # Transform symcode, sympat, and symprfx into a raw symbol and a C symbol. symxfrm="\\1 $ac_symprfx\\2 \\2" # Write the raw and C identifiers. if test "$lt_cv_nm_interface" = "MS dumpbin"; then - # Fake it for dumpbin and say T for any non-static function - # and D for any global variable. + # Fake it for dumpbin and say T for any non-static function, + # D for any global variable and I for any imported variable. # Also find C++ and __fastcall symbols from MSVC++, # which start with @ or ?. lt_cv_sys_global_symbol_pipe="$AWK ['"\ " {last_section=section; section=\$ 3};"\ " /^COFF SYMBOL TABLE/{for(i in hide) delete hide[i]};"\ " /Section length .*#relocs.*(pick any)/{hide[last_section]=1};"\ +" /^ *Symbol name *: /{split(\$ 0,sn,\":\"); si=substr(sn[2],2)};"\ +" /^ *Type *: code/{print \"T\",si,substr(si,length(prfx))};"\ +" /^ *Type *: data/{print \"I\",si,substr(si,length(prfx))};"\ " \$ 0!~/External *\|/{next};"\ " / 0+ UNDEF /{next}; / UNDEF \([^|]\)*()/{next};"\ " {if(hide[section]) next};"\ -" {f=0}; \$ 0~/\(\).*\|/{f=1}; {printf f ? 
\"T \" : \"D \"};"\ -" {split(\$ 0, a, /\||\r/); split(a[2], s)};"\ -" s[1]~/^[@?]/{print s[1], s[1]; next};"\ -" s[1]~prfx {split(s[1],t,\"@\"); print t[1], substr(t[1],length(prfx))}"\ +" {f=\"D\"}; \$ 0~/\(\).*\|/{f=\"T\"};"\ +" {split(\$ 0,a,/\||\r/); split(a[2],s)};"\ +" s[1]~/^[@?]/{print f,s[1],s[1]; next};"\ +" s[1]~prfx {split(s[1],t,\"@\"); print f,t[1],substr(t[1],length(prfx))}"\ " ' prfx=^$ac_symprfx]" else lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[[ ]]\($symcode$symcode*\)[[ ]][[ ]]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'" fi lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'" # Check to see that the pipe works correctly. pipe_works=no rm -f conftest* cat > conftest.$ac_ext <<_LT_EOF #ifdef __cplusplus extern "C" { #endif char nm_test_var; void nm_test_func(void); void nm_test_func(void){} #ifdef __cplusplus } #endif int main(){nm_test_var='a';nm_test_func();return(0);} _LT_EOF if AC_TRY_EVAL(ac_compile); then # Now try to grab the symbols. nlist=conftest.nm if AC_TRY_EVAL(NM conftest.$ac_objext \| "$lt_cv_sys_global_symbol_pipe" \> $nlist) && test -s "$nlist"; then # Try sorting and uniquifying the output. if sort "$nlist" | uniq > "$nlist"T; then mv -f "$nlist"T "$nlist" else rm -f "$nlist"T fi # Make sure that we snagged all the symbols we need. if $GREP ' nm_test_var$' "$nlist" >/dev/null; then if $GREP ' nm_test_func$' "$nlist" >/dev/null; then cat <<_LT_EOF > conftest.$ac_ext /* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests. */ -#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE) -/* DATA imports from DLLs on WIN32 con't be const, because runtime +#if defined _WIN32 || defined __CYGWIN__ || defined _WIN32_WCE +/* DATA imports from DLLs on WIN32 can't be const, because runtime relocations are performed -- see ld's documentation on pseudo-relocs. 
*/ # define LT@&t@_DLSYM_CONST -#elif defined(__osf__) +#elif defined __osf__ /* This system does not cope well with relocations in const data. */ # define LT@&t@_DLSYM_CONST #else # define LT@&t@_DLSYM_CONST const #endif #ifdef __cplusplus extern "C" { #endif _LT_EOF # Now generate the symbol file. eval "$lt_cv_sys_global_symbol_to_cdecl"' < "$nlist" | $GREP -v main >> conftest.$ac_ext' cat <<_LT_EOF >> conftest.$ac_ext /* The mapping between symbol names and symbols. */ LT@&t@_DLSYM_CONST struct { const char *name; void *address; } lt__PROGRAM__LTX_preloaded_symbols[[]] = { { "@PROGRAM@", (void *) 0 }, _LT_EOF - $SED "s/^$symcode$symcode* \(.*\) \(.*\)$/ {\"\2\", (void *) \&\2},/" < "$nlist" | $GREP -v main >> conftest.$ac_ext + $SED "s/^$symcode$symcode* .* \(.*\)$/ {\"\1\", (void *) \&\1},/" < "$nlist" | $GREP -v main >> conftest.$ac_ext cat <<\_LT_EOF >> conftest.$ac_ext {0, (void *) 0} }; /* This works around a problem in FreeBSD linker */ #ifdef FREEBSD_WORKAROUND static const void *lt_preloaded_setup() { return lt__PROGRAM__LTX_preloaded_symbols; } #endif #ifdef __cplusplus } #endif _LT_EOF # Now try linking the two files. mv conftest.$ac_objext conftstm.$ac_objext lt_globsym_save_LIBS=$LIBS lt_globsym_save_CFLAGS=$CFLAGS - LIBS="conftstm.$ac_objext" + LIBS=conftstm.$ac_objext CFLAGS="$CFLAGS$_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)" - if AC_TRY_EVAL(ac_link) && test -s conftest${ac_exeext}; then + if AC_TRY_EVAL(ac_link) && test -s conftest$ac_exeext; then pipe_works=yes fi LIBS=$lt_globsym_save_LIBS CFLAGS=$lt_globsym_save_CFLAGS else echo "cannot find nm_test_func in $nlist" >&AS_MESSAGE_LOG_FD fi else echo "cannot find nm_test_var in $nlist" >&AS_MESSAGE_LOG_FD fi else echo "cannot run $lt_cv_sys_global_symbol_pipe" >&AS_MESSAGE_LOG_FD fi else echo "$progname: failed program was:" >&AS_MESSAGE_LOG_FD cat conftest.$ac_ext >&5 fi rm -rf conftest* conftst* # Do not use the global_symbol_pipe unless it works. 
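Editorial aside (not part of the patch): after this change the global symbol pipe emits `TYPE raw-symbol C-symbol` triples (with the new `I` type for imported variables), and the `to_cdecl` transform consumes the third field. Assuming pipe output already in the new format, the non-dumpbin sed script reduces to this sketch:

```shell
# Sketch of the updated global_symbol_to_cdecl transform: functions (T)
# become extern prototypes, other global symbols extern char declarations.
symcode='[BCDEGRST]'
printf '%s\n' \
  'T _nm_test_func nm_test_func' \
  'B _nm_test_var nm_test_var' |
sed -n -e 's/^T .* \(.*\)$/extern int \1();/p' \
       -e "s/^$symcode$symcode* .* \(.*\)\$/extern char \1;/p"
# prints:
#   extern int nm_test_func();
#   extern char nm_test_var;
```

Note the `T` line is not caught twice: once the first expression has rewritten the pattern space to `extern int …`, the second expression's `^$symcode$symcode*` anchor no longer matches.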
- if test "$pipe_works" = yes; then + if test yes = "$pipe_works"; then break else lt_cv_sys_global_symbol_pipe= fi done ]) if test -z "$lt_cv_sys_global_symbol_pipe"; then lt_cv_sys_global_symbol_to_cdecl= fi if test -z "$lt_cv_sys_global_symbol_pipe$lt_cv_sys_global_symbol_to_cdecl"; then AC_MSG_RESULT(failed) else AC_MSG_RESULT(ok) fi # Response file support. if test "$lt_cv_nm_interface" = "MS dumpbin"; then nm_file_list_spec='@' elif $NM --help 2>/dev/null | grep '[[@]]FILE' >/dev/null; then nm_file_list_spec='@' fi _LT_DECL([global_symbol_pipe], [lt_cv_sys_global_symbol_pipe], [1], [Take the output of nm and produce a listing of raw symbols and C names]) _LT_DECL([global_symbol_to_cdecl], [lt_cv_sys_global_symbol_to_cdecl], [1], [Transform the output of nm in a proper C declaration]) +_LT_DECL([global_symbol_to_import], [lt_cv_sys_global_symbol_to_import], [1], + [Transform the output of nm into a list of symbols to manually relocate]) _LT_DECL([global_symbol_to_c_name_address], [lt_cv_sys_global_symbol_to_c_name_address], [1], [Transform the output of nm in a C name address pair]) _LT_DECL([global_symbol_to_c_name_address_lib_prefix], [lt_cv_sys_global_symbol_to_c_name_address_lib_prefix], [1], [Transform the output of nm in a C name address pair when lib prefix is needed]) +_LT_DECL([nm_interface], [lt_cv_nm_interface], [1], + [The name lister interface]) _LT_DECL([], [nm_file_list_spec], [1], [Specify filename containing input files for $NM]) ]) # _LT_CMD_GLOBAL_SYMBOLS # _LT_COMPILER_PIC([TAGNAME]) # --------------------------- m4_defun([_LT_COMPILER_PIC], [m4_require([_LT_TAG_COMPILER])dnl _LT_TAGVAR(lt_prog_compiler_wl, $1)= _LT_TAGVAR(lt_prog_compiler_pic, $1)= _LT_TAGVAR(lt_prog_compiler_static, $1)= m4_if([$1], [CXX], [ # C++ specific cases for pic, static, wl, etc. 
- if test "$GXX" = yes; then + if test yes = "$GXX"; then _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' case $host_os in aix*) # All AIX code is PIC. - if test "$host_cpu" = ia64; then + if test ia64 = "$host_cpu"; then # AIX 5 now supports IA64 processor _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' fi + _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; m68k) # FIXME: we need at least 68020 code to build shared libraries, but - # adding the `-m68020' flag to GCC prevents building anything better, - # like `-m68040'. + # adding the '-m68020' flag to GCC prevents building anything better, + # like '-m68040'. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-m68020 -resident32 -malways-restore-a4' ;; esac ;; beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*) # PIC is the default for these OSes. ;; mingw* | cygwin* | os2* | pw32* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). # Although the cygwin gcc ignores -fPIC, still need this for old-style # (--disable-auto-import) libraries m4_if([$1], [GCJ], [], [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT']) + case $host_os in + os2*) + _LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-static' + ;; + esac ;; darwin* | rhapsody*) # PIC is the default on this platform # Common symbols not allowed in MH_DYLIB files _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common' ;; *djgpp*) # DJGPP does not support shared libraries at all _LT_TAGVAR(lt_prog_compiler_pic, $1)= ;; haiku*) # PIC is the default for Haiku. # The "-static" flag exists, but is broken. _LT_TAGVAR(lt_prog_compiler_static, $1)= ;; interix[[3-9]]*) # Interix 3.x gcc -fpic/-fPIC options generate broken code. # Instead, we relocate shared libraries at runtime. 
;; sysv4*MP*) if test -d /usr/nec; then _LT_TAGVAR(lt_prog_compiler_pic, $1)=-Kconform_pic fi ;; hpux*) # PIC is the default for 64-bit PA HP-UX, but not for 32-bit # PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag # sets the default TLS model and affects inlining. case $host_cpu in hppa*64*) ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; esac ;; *qnx* | *nto*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared' ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; esac else case $host_os in aix[[4-9]]*) # All AIX code is PIC. - if test "$host_cpu" = ia64; then + if test ia64 = "$host_cpu"; then # AIX 5 now supports IA64 processor _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' else _LT_TAGVAR(lt_prog_compiler_static, $1)='-bnso -bI:/lib/syscalls.exp' fi ;; chorus*) case $cc_basename in cxch68*) # Green Hills C++ Compiler # _LT_TAGVAR(lt_prog_compiler_static, $1)="--no_auto_instantiation -u __main -u __premain -u _abort -r $COOL_DIR/lib/libOrb.a $MVME_DIR/lib/CC/libC.a $MVME_DIR/lib/classix/libcx.s.a" ;; esac ;; mingw* | cygwin* | os2* | pw32* | cegcc*) # This hack is so that the source file can tell whether it is being # built for inclusion in a dll (and should export symbols for example). 
m4_if([$1], [GCJ], [], [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT']) ;; dgux*) case $cc_basename in ec++*) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' ;; ghcx*) # Green Hills C++ Compiler _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic' ;; *) ;; esac ;; freebsd* | dragonfly*) # FreeBSD uses GNU C++ ;; hpux9* | hpux10* | hpux11*) case $cc_basename in CC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' - _LT_TAGVAR(lt_prog_compiler_static, $1)='${wl}-a ${wl}archive' - if test "$host_cpu" != ia64; then + _LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-a ${wl}archive' + if test ia64 != "$host_cpu"; then _LT_TAGVAR(lt_prog_compiler_pic, $1)='+Z' fi ;; aCC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' - _LT_TAGVAR(lt_prog_compiler_static, $1)='${wl}-a ${wl}archive' + _LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-a ${wl}archive' case $host_cpu in hppa*64*|ia64*) # +Z the default ;; *) _LT_TAGVAR(lt_prog_compiler_pic, $1)='+Z' ;; esac ;; *) ;; esac ;; interix*) # This is c89, which is MS Visual C++ (no shared libs) # Anyone wants to do a port? ;; irix5* | irix6* | nonstopux*) case $cc_basename in CC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' # CC pic flag -KPIC is the default. ;; *) ;; esac ;; - linux* | k*bsd*-gnu | kopensolaris*-gnu) + linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) case $cc_basename in KCC*) # KAI C++ Compiler _LT_TAGVAR(lt_prog_compiler_wl, $1)='--backend -Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; ecpc* ) - # old Intel C++ for x86_64 which still supported -KPIC. + # old Intel C++ for x86_64, which still supported -KPIC. _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' ;; icpc* ) # Intel C++, used to be incompatible with GCC. # ICC 10 doesn't accept -KPIC any more. 
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' ;; pgCC* | pgcpp*) # Portland Group C++ compiler _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fpic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; cxx*) # Compaq C++ # Make sure the PIC flag is empty. It appears that all Alpha # Linux and Compaq Tru64 Unix objects are PIC. _LT_TAGVAR(lt_prog_compiler_pic, $1)= _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; xlc* | xlC* | bgxl[[cC]]* | mpixl[[cC]]*) # IBM XL 8.0, 9.0 on PPC and BlueGene _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-qpic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-qstaticlink' ;; *) case `$CC -V 2>&1 | sed 5q` in *Sun\ C*) # Sun C++ 5.9 _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ' ;; esac ;; esac ;; lynxos*) ;; m88k*) ;; mvs*) case $cc_basename in cxx*) _LT_TAGVAR(lt_prog_compiler_pic, $1)='-W c,exportall' ;; *) ;; esac ;; - netbsd*) + netbsd* | netbsdelf*-gnu) ;; *qnx* | *nto*) # QNX uses GNU C++, but need to define -shared option too, otherwise # it will coredump. _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared' ;; osf3* | osf4* | osf5*) case $cc_basename in KCC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='--backend -Wl,' ;; RCC*) # Rational C++ 2.4.1 _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic' ;; cxx*) # Digital/Compaq C++ _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' # Make sure the PIC flag is empty. It appears that all Alpha # Linux and Compaq Tru64 Unix objects are PIC. 
_LT_TAGVAR(lt_prog_compiler_pic, $1)= _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared' ;; *) ;; esac ;; psos*) ;; solaris*) case $cc_basename in CC* | sunCC*) # Sun C++ 4.2, 5.x and Centerline C++ _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ' ;; gcx*) # Green Hills C++ Compiler _LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC' ;; *) ;; esac ;; sunos4*) case $cc_basename in CC*) # Sun C++ 4.x _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; lcc*) # Lucid _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic' ;; *) ;; esac ;; sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*) case $cc_basename in CC*) _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' ;; esac ;; tandem*) case $cc_basename in NCC*) # NonStop-UX NCC 3.20 _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC' ;; *) ;; esac ;; vxworks*) ;; *) _LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no ;; esac fi ], [ - if test "$GCC" = yes; then + if test yes = "$GCC"; then _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,' _LT_TAGVAR(lt_prog_compiler_static, $1)='-static' case $host_os in aix*) # All AIX code is PIC. - if test "$host_cpu" = ia64; then + if test ia64 = "$host_cpu"; then # AIX 5 now supports IA64 processor _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic' fi + _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC' ;; m68k) # FIXME: we need at least 68020 code to build shared libraries, but - # adding the `-m68020' flag to GCC prevents building anything better, - # like `-m68040'. + # adding the '-m68020' flag to GCC prevents building anything better, + # like '-m68040'. 
            _LT_TAGVAR(lt_prog_compiler_pic, $1)='-m68020 -resident32 -malways-restore-a4'
        ;;
      esac
      ;;

    beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*)
      # PIC is the default for these OSes.
      ;;

    mingw* | cygwin* | pw32* | os2* | cegcc*)
      # This hack is so that the source file can tell whether it is being
      # built for inclusion in a dll (and should export symbols for example).
      # Although the cygwin gcc ignores -fPIC, still need this for old-style
      # (--disable-auto-import) libraries
      m4_if([$1], [GCJ], [], [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT'])
+      case $host_os in
+      os2*)
+        _LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-static'
+        ;;
+      esac
      ;;

    darwin* | rhapsody*)
      # PIC is the default on this platform
      # Common symbols not allowed in MH_DYLIB files
      _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common'
      ;;

    haiku*)
      # PIC is the default for Haiku.
      # The "-static" flag exists, but is broken.
      _LT_TAGVAR(lt_prog_compiler_static, $1)=
      ;;

    hpux*)
      # PIC is the default for 64-bit PA HP-UX, but not for 32-bit
      # PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag
      # sets the default TLS model and affects inlining.
      case $host_cpu in
      hppa*64*)
        # +Z the default
        ;;
      *)
        _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
        ;;
      esac
      ;;

    interix[[3-9]]*)
      # Interix 3.x gcc -fpic/-fPIC options generate broken code.
      # Instead, we relocate shared libraries at runtime.
      ;;

    msdosdjgpp*)
      # Just because we use GCC doesn't mean we suddenly get shared libraries
      # on systems that don't support them.
      _LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no
      enable_shared=no
      ;;

    *nto* | *qnx*)
      # QNX uses GNU C++, but need to define -shared option too, otherwise
      # it will coredump.
      _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared'
      ;;

    sysv4*MP*)
      if test -d /usr/nec; then
        _LT_TAGVAR(lt_prog_compiler_pic, $1)=-Kconform_pic
      fi
      ;;

    *)
      _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
      ;;
    esac

    case $cc_basename in
    nvcc*) # Cuda Compiler Driver 2.2
      _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Xlinker '
      if test -n "$_LT_TAGVAR(lt_prog_compiler_pic, $1)"; then
        _LT_TAGVAR(lt_prog_compiler_pic, $1)="-Xcompiler $_LT_TAGVAR(lt_prog_compiler_pic, $1)"
      fi
      ;;
    esac
  else
    # PORTME Check for flag to pass linker flags through the system compiler.
    case $host_os in
    aix*)
      _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
-      if test "$host_cpu" = ia64; then
+      if test ia64 = "$host_cpu"; then
        # AIX 5 now supports IA64 processor
        _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
      else
        _LT_TAGVAR(lt_prog_compiler_static, $1)='-bnso -bI:/lib/syscalls.exp'
      fi
      ;;

+    darwin* | rhapsody*)
+      # PIC is the default on this platform
+      # Common symbols not allowed in MH_DYLIB files
+      _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common'
+      case $cc_basename in
+      nagfor*)
+        # NAG Fortran compiler
+        _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,-Wl,,'
+        _LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC'
+        _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
+        ;;
+      esac
+      ;;
+
    mingw* | cygwin* | pw32* | os2* | cegcc*)
      # This hack is so that the source file can tell whether it is being
      # built for inclusion in a dll (and should export symbols for example).
      m4_if([$1], [GCJ], [], [_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT'])
+      case $host_os in
+      os2*)
+        _LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-static'
+        ;;
+      esac
      ;;

    hpux9* | hpux10* | hpux11*)
      _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
      # PIC is the default for IA64 HP-UX and 64-bit HP-UX, but
      # not for PA HP-UX.
      case $host_cpu in
      hppa*64*|ia64*)
        # +Z the default
        ;;
      *)
        _LT_TAGVAR(lt_prog_compiler_pic, $1)='+Z'
        ;;
      esac
      # Is there a better lt_prog_compiler_static that works with the bundled CC?
-      _LT_TAGVAR(lt_prog_compiler_static, $1)='${wl}-a ${wl}archive'
+      _LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-a ${wl}archive'
      ;;

    irix5* | irix6* | nonstopux*)
      _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
      # PIC (with -KPIC) is the default.
      _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
      ;;

-    linux* | k*bsd*-gnu | kopensolaris*-gnu)
+    linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*)
      case $cc_basename in
-      # old Intel for x86_64 which still supported -KPIC.
+      # old Intel for x86_64, which still supported -KPIC.
      ecc*)
        _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
        _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
        _LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
        ;;
      # icc used to be incompatible with GCC.
      # ICC 10 doesn't accept -KPIC any more.
      icc* | ifort*)
        _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
        _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
        _LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
        ;;
      # Lahey Fortran 8.1.
      lf95*)
        _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
        _LT_TAGVAR(lt_prog_compiler_pic, $1)='--shared'
        _LT_TAGVAR(lt_prog_compiler_static, $1)='--static'
        ;;
      nagfor*)
        # NAG Fortran compiler
        _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,-Wl,,'
        _LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC'
        _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
        ;;
+      tcc*)
+        # Fabrice Bellard et al's Tiny C Compiler
+        _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
+        _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
+        _LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
+        ;;
      pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
        # Portland Group compilers (*not* the Pentium gcc compiler,
        # which looks to be a dead project)
        _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
        _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fpic'
        _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
        ;;
      ccc*)
        _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
        # All Alpha code is PIC.
        _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
        ;;
      xl* | bgxl* | bgf* | mpixl*)
        # IBM XL C 8.0/Fortran 10.1, 11.1 on PPC and BlueGene
        _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
        _LT_TAGVAR(lt_prog_compiler_pic, $1)='-qpic'
        _LT_TAGVAR(lt_prog_compiler_static, $1)='-qstaticlink'
        ;;
      *)
        case `$CC -V 2>&1 | sed 5q` in
        *Sun\ Ceres\ Fortran* | *Sun*Fortran*\ [[1-7]].* | *Sun*Fortran*\ 8.[[0-3]]*)
          # Sun Fortran 8.3 passes all unrecognized flags to the linker
          _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
          _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
          _LT_TAGVAR(lt_prog_compiler_wl, $1)=''
          ;;
        *Sun\ F* | *Sun*Fortran*)
          _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
          _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
          _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld '
          ;;
        *Sun\ C*)
          # Sun C 5.9
          _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
          _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
          _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
          ;;
        *Intel*\ [[CF]]*Compiler*)
          _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
          _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
          _LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
          ;;
        *Portland\ Group*)
          _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
          _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fpic'
          _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
          ;;
        esac
        ;;
      esac
      ;;

    newsos6)
      _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
      _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
      ;;

    *nto* | *qnx*)
      # QNX uses GNU C++, but need to define -shared option too, otherwise
      # it will coredump.
      _LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared'
      ;;

    osf3* | osf4* | osf5*)
      _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
      # All OSF/1 code is PIC.
      _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
      ;;

    rdos*)
      _LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
      ;;

    solaris*)
      _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
      _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
      case $cc_basename in
      f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
        _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ';;
      *)
        _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,';;
      esac
      ;;

    sunos4*)
      _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld '
      _LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC'
      _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
      ;;

    sysv4 | sysv4.2uw2* | sysv4.3*)
      _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
      _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
      _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
      ;;

    sysv4*MP*)
-      if test -d /usr/nec ;then
+      if test -d /usr/nec; then
        _LT_TAGVAR(lt_prog_compiler_pic, $1)='-Kconform_pic'
        _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
      fi
      ;;

    sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*)
      _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
      _LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
      _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
      ;;

    unicos*)
      _LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
      _LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no
      ;;

    uts4*)
      _LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic'
      _LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
      ;;

    *)
      _LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no
      ;;
    esac
  fi
])
case $host_os in
-  # For platforms which do not support PIC, -DPIC is meaningless:
+  # For platforms that do not support PIC, -DPIC is meaningless:
  *djgpp*)
    _LT_TAGVAR(lt_prog_compiler_pic, $1)=
    ;;
  *)
    _LT_TAGVAR(lt_prog_compiler_pic, $1)="$_LT_TAGVAR(lt_prog_compiler_pic, $1)@&t@m4_if([$1],[],[ -DPIC],[m4_if([$1],[CXX],[ -DPIC],[])])"
    ;;
esac

AC_CACHE_CHECK([for $compiler option to produce PIC],
  [_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)],
  [_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_prog_compiler_pic, $1)])
_LT_TAGVAR(lt_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)

#
# Check to make sure the PIC flag actually works.
#
if test -n "$_LT_TAGVAR(lt_prog_compiler_pic, $1)"; then
  _LT_COMPILER_OPTION([if $compiler PIC flag $_LT_TAGVAR(lt_prog_compiler_pic, $1) works],
    [_LT_TAGVAR(lt_cv_prog_compiler_pic_works, $1)],
    [$_LT_TAGVAR(lt_prog_compiler_pic, $1)@&t@m4_if([$1],[],[ -DPIC],[m4_if([$1],[CXX],[ -DPIC],[])])], [],
    [case $_LT_TAGVAR(lt_prog_compiler_pic, $1) in
     "" | " "*) ;;
     *) _LT_TAGVAR(lt_prog_compiler_pic, $1)=" $_LT_TAGVAR(lt_prog_compiler_pic, $1)" ;;
     esac],
    [_LT_TAGVAR(lt_prog_compiler_pic, $1)=
     _LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no])
fi
_LT_TAGDECL([pic_flag], [lt_prog_compiler_pic], [1],
        [Additional compiler flags for building library objects])

_LT_TAGDECL([wl], [lt_prog_compiler_wl], [1],
        [How to pass a linker flag through the compiler])
#
# Check to make sure the static flag actually works.
#
wl=$_LT_TAGVAR(lt_prog_compiler_wl, $1) eval lt_tmp_static_flag=\"$_LT_TAGVAR(lt_prog_compiler_static, $1)\"
_LT_LINKER_OPTION([if $compiler static flag $lt_tmp_static_flag works],
  _LT_TAGVAR(lt_cv_prog_compiler_static_works, $1),
  $lt_tmp_static_flag,
  [],
  [_LT_TAGVAR(lt_prog_compiler_static, $1)=])
_LT_TAGDECL([link_static_flag], [lt_prog_compiler_static], [1],
        [Compiler flag to prevent dynamic linking])
])# _LT_COMPILER_PIC


# _LT_LINKER_SHLIBS([TAGNAME])
# ----------------------------
# See if the linker supports building shared libraries.
m4_defun([_LT_LINKER_SHLIBS],
[AC_REQUIRE([LT_PATH_LD])dnl
AC_REQUIRE([LT_PATH_NM])dnl
m4_require([_LT_PATH_MANIFEST_TOOL])dnl
m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_DECL_EGREP])dnl
m4_require([_LT_DECL_SED])dnl
m4_require([_LT_CMD_GLOBAL_SYMBOLS])dnl
m4_require([_LT_TAG_COMPILER])dnl
AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries])
m4_if([$1], [CXX], [
  _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
  _LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*']
  case $host_os in
  aix[[4-9]]*)
    # If we're using GNU nm, then we don't want the "-C" option.
-    # -C means demangle to AIX nm, but means don't demangle with GNU nm
-    # Also, AIX nm treats weak defined symbols like other global defined
-    # symbols, whereas GNU nm marks them as "W".
+    # -C means demangle to GNU nm, but means don't demangle to AIX nm.
+    # Without the "-l" option, or with the "-B" option, AIX nm treats
+    # weak defined symbols like other global defined symbols, whereas
+    # GNU nm marks them as "W".
+    # While the 'weak' keyword is ignored in the Export File, we need
+    # it in the Import File for the 'aix-soname' feature, so we have
+    # to replace the "-B" option with "-P" for AIX nm.
    if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then
-      _LT_TAGVAR(export_symbols_cmds, $1)='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && ([substr](\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols'
+      _LT_TAGVAR(export_symbols_cmds, $1)='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && ([substr](\$ 3,1,1) != ".")) { if (\$ 2 == "W") { print \$ 3 " weak" } else { print \$ 3 } } }'\'' | sort -u > $export_symbols'
    else
-      _LT_TAGVAR(export_symbols_cmds, $1)='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B")) && ([substr](\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols'
+      _LT_TAGVAR(export_symbols_cmds, $1)='`func_echo_all $NM | $SED -e '\''s/B\([[^B]]*\)$/P\1/'\''` -PCpgl $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) && ([substr](\$ 1,1,1) != ".")) { if ((\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) { print \$ 1 " weak" } else { print \$ 1 } } }'\'' | sort -u > $export_symbols'
    fi
    ;;
  pw32*)
-    _LT_TAGVAR(export_symbols_cmds, $1)="$ltdll_cmds"
+    _LT_TAGVAR(export_symbols_cmds, $1)=$ltdll_cmds
    ;;
  cygwin* | mingw* | cegcc*)
    case $cc_basename in
    cl*)
      _LT_TAGVAR(exclude_expsyms, $1)='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*'
      ;;
    *)
      _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
      _LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname']
      ;;
    esac
    ;;
+  linux* | k*bsd*-gnu | gnu*)
+    _LT_TAGVAR(link_all_deplibs, $1)=no
+    ;;
  *)
    _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
    ;;
  esac
], [
  runpath_var=
  _LT_TAGVAR(allow_undefined_flag, $1)=
  _LT_TAGVAR(always_export_symbols, $1)=no
  _LT_TAGVAR(archive_cmds, $1)=
  _LT_TAGVAR(archive_expsym_cmds, $1)=
  _LT_TAGVAR(compiler_needs_object, $1)=no
  _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no
  _LT_TAGVAR(export_dynamic_flag_spec, $1)=
  _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
  _LT_TAGVAR(hardcode_automatic, $1)=no
  _LT_TAGVAR(hardcode_direct, $1)=no
  _LT_TAGVAR(hardcode_direct_absolute, $1)=no
  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=
  _LT_TAGVAR(hardcode_libdir_separator, $1)=
  _LT_TAGVAR(hardcode_minus_L, $1)=no
  _LT_TAGVAR(hardcode_shlibpath_var, $1)=unsupported
  _LT_TAGVAR(inherit_rpath, $1)=no
  _LT_TAGVAR(link_all_deplibs, $1)=unknown
  _LT_TAGVAR(module_cmds, $1)=
  _LT_TAGVAR(module_expsym_cmds, $1)=
  _LT_TAGVAR(old_archive_from_new_cmds, $1)=
  _LT_TAGVAR(old_archive_from_expsyms_cmds, $1)=
  _LT_TAGVAR(thread_safe_flag_spec, $1)=
  _LT_TAGVAR(whole_archive_flag_spec, $1)=
  # include_expsyms should be a list of space-separated symbols to be *always*
  # included in the symbol list
  _LT_TAGVAR(include_expsyms, $1)=
  # exclude_expsyms can be an extended regexp of symbols to exclude
-  # it will be wrapped by ` (' and `)$', so one must not match beginning or
-  # end of line. Example: `a|bc|.*d.*' will exclude the symbols `a' and `bc',
-  # as well as any symbol that contains `d'.
+  # it will be wrapped by ' (' and ')$', so one must not match beginning or
+  # end of line. Example: 'a|bc|.*d.*' will exclude the symbols 'a' and 'bc',
+  # as well as any symbol that contains 'd'.
  _LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*']
  # Although _GLOBAL_OFFSET_TABLE_ is a valid symbol C name, most a.out
  # platforms (ab)use it in PIC code, but their linkers get confused if
  # the symbol is explicitly referenced. Since portable code cannot
  # rely on this symbol name, it's probably fine to never include it in
  # preloaded symbol tables.
  # Exclude shared library initialization/finalization symbols.
dnl Note also adjust exclude_expsyms for C++ above.
  extract_expsyms_cmds=

  case $host_os in
  cygwin* | mingw* | pw32* | cegcc*)
    # FIXME: the MSVC++ port hasn't been tested in a loooong time
    # When not using gcc, we currently assume that we are using
    # Microsoft Visual C++.
-    if test "$GCC" != yes; then
+    if test yes != "$GCC"; then
      with_gnu_ld=no
    fi
    ;;
  interix*)
    # we just hope/assume this is gcc and not c89 (= MSVC++)
    with_gnu_ld=yes
    ;;
-  openbsd*)
+  openbsd* | bitrig*)
    with_gnu_ld=no
    ;;
+  linux* | k*bsd*-gnu | gnu*)
+    _LT_TAGVAR(link_all_deplibs, $1)=no
+    ;;
  esac

  _LT_TAGVAR(ld_shlibs, $1)=yes

  # On some targets, GNU ld is compatible enough with the native linker
  # that we're better off using the native interface for both.
  lt_use_gnu_ld_interface=no
-  if test "$with_gnu_ld" = yes; then
+  if test yes = "$with_gnu_ld"; then
    case $host_os in
      aix*)
        # The AIX port of GNU ld has always aspired to compatibility
        # with the native linker. However, as the warning in the GNU ld
        # block says, versions before 2.19.5* couldn't really create working
        # shared libraries, regardless of the interface used.
        case `$LD -v 2>&1` in
          *\ \(GNU\ Binutils\)\ 2.19.5*) ;;
          *\ \(GNU\ Binutils\)\ 2.[[2-9]]*) ;;
          *\ \(GNU\ Binutils\)\ [[3-9]]*) ;;
          *)
            lt_use_gnu_ld_interface=yes
            ;;
        esac
        ;;
      *)
        lt_use_gnu_ld_interface=yes
        ;;
    esac
  fi

-  if test "$lt_use_gnu_ld_interface" = yes; then
+  if test yes = "$lt_use_gnu_ld_interface"; then
    # If archive_cmds runs LD, not CC, wlarc should be empty
-    wlarc='${wl}'
+    wlarc='$wl'

    # Set some defaults for GNU ld with shared library support. These
    # are reset later if shared libraries are not supported. Putting them
    # here allows them to be overridden if necessary.
    runpath_var=LD_RUN_PATH
-    _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
-    _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic'
+    _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
+    _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic'
    # ancient GNU ld didn't support --whole-archive et. al.
    if $LD --help 2>&1 | $GREP 'no-whole-archive' > /dev/null; then
-      _LT_TAGVAR(whole_archive_flag_spec, $1)="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive'
+      _LT_TAGVAR(whole_archive_flag_spec, $1)=$wlarc'--whole-archive$convenience '$wlarc'--no-whole-archive'
    else
      _LT_TAGVAR(whole_archive_flag_spec, $1)=
    fi
    supports_anon_versioning=no
-    case `$LD -v 2>&1` in
+    case `$LD -v | $SED -e 's/([^)]\+)\s\+//' 2>&1` in
      *GNU\ gold*) supports_anon_versioning=yes ;;
      *\ [[01]].* | *\ 2.[[0-9]].* | *\ 2.10.*) ;; # catch versions < 2.11
      *\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ...
      *\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ...
      *\ 2.11.*) ;; # other 2.11 versions
      *) supports_anon_versioning=yes ;;
    esac

    # See if GNU ld supports shared libraries.
    case $host_os in
    aix[[3-9]]*)
      # On AIX/PPC, the GNU linker is very broken
-      if test "$host_cpu" != ia64; then
+      if test ia64 != "$host_cpu"; then
        _LT_TAGVAR(ld_shlibs, $1)=no
        cat <<_LT_EOF 1>&2

*** Warning: the GNU linker, at least up to release 2.19, is reported
*** to be unable to reliably create shared libraries on AIX.
*** Therefore, libtool is disabling shared libraries support. If you
*** really care for shared libraries, you may want to install binutils
*** 2.20 or above, or modify your PATH so that a non-GNU linker is found.
*** You will then need to restart the configuration process.
_LT_EOF
      fi
      ;;

    amigaos*)
      case $host_cpu in
      powerpc)
            # see comment about AmigaOS4 .so support
-            _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+            _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
            _LT_TAGVAR(archive_expsym_cmds, $1)=''
        ;;
      m68k)
            _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)'
            _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
            _LT_TAGVAR(hardcode_minus_L, $1)=yes
        ;;
      esac
      ;;

    beos*)
      if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
        _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
        # Joseph Beckenbach says some releases of gcc
        # support --undefined. This deserves some investigation. FIXME
-        _LT_TAGVAR(archive_cmds, $1)='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+        _LT_TAGVAR(archive_cmds, $1)='$CC -nostart $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
      else
        _LT_TAGVAR(ld_shlibs, $1)=no
      fi
      ;;

    cygwin* | mingw* | pw32* | cegcc*)
      # _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless,
      # as there is no search path for DLLs.
      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
-      _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-all-symbols'
+      _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-all-symbols'
      _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
      _LT_TAGVAR(always_export_symbols, $1)=no
      _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
      _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
      _LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname']

      if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
-        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
-        # If the export-symbols file already is a .def file (1st line
-        # is EXPORTS), use it as is; otherwise, prepend...
-        _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-          cp $export_symbols $output_objdir/$soname.def;
-        else
-          echo EXPORTS > $output_objdir/$soname.def;
-          cat $export_symbols >> $output_objdir/$soname.def;
-        fi~
-        $CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
+        # If the export-symbols file already is a .def file, use it as
+        # is; otherwise, prepend EXPORTS...
+        _LT_TAGVAR(archive_expsym_cmds, $1)='if _LT_DLL_DEF_P([$export_symbols]); then
+          cp $export_symbols $output_objdir/$soname.def;
+        else
+          echo EXPORTS > $output_objdir/$soname.def;
+          cat $export_symbols >> $output_objdir/$soname.def;
+        fi~
+        $CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
      else
        _LT_TAGVAR(ld_shlibs, $1)=no
      fi
      ;;

    haiku*)
-      _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+      _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
      _LT_TAGVAR(link_all_deplibs, $1)=yes
      ;;

+    os2*)
+      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
+      _LT_TAGVAR(hardcode_minus_L, $1)=yes
+      _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
+      shrext_cmds=.dll
+      _LT_TAGVAR(archive_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~
+        $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~
+        $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~
+        $ECHO EXPORTS >> $output_objdir/$libname.def~
+        emxexp $libobjs | $SED /"_DLL_InitTerm"/d >> $output_objdir/$libname.def~
+        $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~
+        emximp -o $lib $output_objdir/$libname.def'
+      _LT_TAGVAR(archive_expsym_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~
+        $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~
+        $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~
+        $ECHO EXPORTS >> $output_objdir/$libname.def~
+        prefix_cmds="$SED"~
+        if test EXPORTS = "`$SED 1q $export_symbols`"; then
+          prefix_cmds="$prefix_cmds -e 1d";
+        fi~
+        prefix_cmds="$prefix_cmds -e \"s/^\(.*\)$/_\1/g\""~
+        cat $export_symbols | $prefix_cmds >> $output_objdir/$libname.def~
+        $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~
+        emximp -o $lib $output_objdir/$libname.def'
+      _LT_TAGVAR(old_archive_From_new_cmds, $1)='emximp -o $output_objdir/${libname}_dll.a $output_objdir/$libname.def'
+      _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
+      ;;

    interix[[3-9]]*)
      _LT_TAGVAR(hardcode_direct, $1)=no
      _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
-      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir'
-      _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E'
+      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
+      _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
      # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc.
      # Instead, shared libraries are loaded at an image base (0x10000000 by
      # default) and relocated if they conflict, which is a slow very memory
      # consuming and fragmenting process. To avoid this, we pick a random,
      # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link
      # time. Moving up from 0x10000000 also allows more sbrk(2) space.
-      _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib'
-      _LT_TAGVAR(archive_expsym_cmds, $1)='sed "s,^,_," $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--retain-symbols-file,$output_objdir/$soname.expsym ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib'
+      _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib'
+      _LT_TAGVAR(archive_expsym_cmds, $1)='sed "s|^|_|" $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--retain-symbols-file,$output_objdir/$soname.expsym $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib'
      ;;

    gnu* | linux* | tpf* | k*bsd*-gnu | kopensolaris*-gnu)
      tmp_diet=no
-      if test "$host_os" = linux-dietlibc; then
+      if test linux-dietlibc = "$host_os"; then
        case $cc_basename in
          diet\ *) tmp_diet=yes;;   # linux-dietlibc with static linking (!diet-dyn)
        esac
      fi
      if $LD --help 2>&1 | $EGREP ': supported targets:.* elf' > /dev/null \
-         && test "$tmp_diet" = no
+         && test no = "$tmp_diet"
      then
        tmp_addflag=' $pic_flag'
        tmp_sharedflag='-shared'
        case $cc_basename,$host_cpu in
        pgcc*)                          # Portland Group C compiler
-          _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive'
+          _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
          tmp_addflag=' $pic_flag'
          ;;
        pgf77* | pgf90* | pgf95* | pgfortran*)
                                        # Portland Group f77 and f90 compilers
-          _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive'
+          _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
          tmp_addflag=' $pic_flag -Mnomain'
          ;;
        ecc*,ia64* | icc*,ia64*)        # Intel C compiler on ia64
          tmp_addflag=' -i_dynamic'
          ;;
        efc*,ia64* | ifort*,ia64*)      # Intel Fortran compiler on ia64
          tmp_addflag=' -i_dynamic -nofor_main'
          ;;
        ifc* | ifort*)                  # Intel Fortran compiler
          tmp_addflag=' -nofor_main'
          ;;
        lf95*)                          # Lahey Fortran 8.1
          _LT_TAGVAR(whole_archive_flag_spec, $1)=
          tmp_sharedflag='--shared'
          ;;
+        nagfor*)                        # NAGFOR 5.3
+          tmp_sharedflag='-Wl,-shared'
+          ;;
        xl[[cC]]* | bgxl[[cC]]* | mpixl[[cC]]*) # IBM XL C 8.0 on PPC (deal with xlf below)
          tmp_sharedflag='-qmkshrobj'
          tmp_addflag=
          ;;
        nvcc*)  # Cuda Compiler Driver 2.2
-          _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive'
+          _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
          _LT_TAGVAR(compiler_needs_object, $1)=yes
          ;;
        esac
        case `$CC -V 2>&1 | sed 5q` in
        *Sun\ C*)                       # Sun C 5.9
-          _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive'
+          _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
          _LT_TAGVAR(compiler_needs_object, $1)=yes
          tmp_sharedflag='-G' ;;
        *Sun\ F*)                       # Sun Fortran 8.3
          tmp_sharedflag='-G' ;;
        esac
-        _LT_TAGVAR(archive_cmds, $1)='$CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-
-        if test "x$supports_anon_versioning" = xyes; then
+        _LT_TAGVAR(archive_cmds, $1)='$CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
+
+        if test yes = "$supports_anon_versioning"; then
          _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~
-            cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
-            echo "local: *; };" >> $output_objdir/$libname.ver~
-            $CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-version-script ${wl}$output_objdir/$libname.ver -o $lib'
+            cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+            echo "local: *; };" >> $output_objdir/$libname.ver~
+            $CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-version-script $wl$output_objdir/$libname.ver -o $lib'
        fi

        case $cc_basename in
+        tcc*)
+          _LT_TAGVAR(export_dynamic_flag_spec, $1)='-rdynamic'
+          ;;
        xlf* | bgf* | bgxlf* | mpixlf*)
          # IBM XL Fortran 10.1 on PPC cannot create shared libs itself
          _LT_TAGVAR(whole_archive_flag_spec, $1)='--whole-archive$convenience --no-whole-archive'
-          _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
+          _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
          _LT_TAGVAR(archive_cmds, $1)='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
-          if test "x$supports_anon_versioning" = xyes; then
+          if test yes = "$supports_anon_versioning"; then
            _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~
-              cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
-              echo "local: *; };" >> $output_objdir/$libname.ver~
-              $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
+              cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+              echo "local: *; };" >> $output_objdir/$libname.ver~
+              $LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
          fi
          ;;
        esac
      else
        _LT_TAGVAR(ld_shlibs, $1)=no
      fi
      ;;

-    netbsd*)
+    netbsd* | netbsdelf*-gnu)
      if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then
        _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
        wlarc=
      else
-        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-        _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+        _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
+        _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
      fi
      ;;

    solaris*)
      if $LD -v 2>&1 | $GREP 'BFD 2\.8' > /dev/null; then
        _LT_TAGVAR(ld_shlibs, $1)=no
        cat <<_LT_EOF 1>&2

*** Warning: The releases 2.8.* of the GNU linker cannot reliably
*** create shared libraries on Solaris systems. Therefore, libtool
*** is disabling shared libraries support. We urge you to upgrade GNU
*** binutils to release 2.9.1 or newer. Another option is to modify
*** your PATH or compiler configuration so that the native linker is
*** used, and then restart.
_LT_EOF elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then - _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' - _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' + _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX*) case `$LD -v 2>&1` in *\ [[01]].* | *\ 2.[[0-9]].* | *\ 2.1[[0-5]].*) _LT_TAGVAR(ld_shlibs, $1)=no cat <<_LT_EOF 1>&2 -*** Warning: Releases of the GNU linker prior to 2.16.91.0.3 can not +*** Warning: Releases of the GNU linker prior to 2.16.91.0.3 cannot *** reliably create shared libraries on SCO systems. Therefore, libtool *** is disabling shared libraries support. We urge you to upgrade GNU *** binutils to release 2.16.91.0.3 or newer. Another option is to modify *** your PATH or compiler configuration so that the native linker is *** used, and then restart. _LT_EOF ;; *) # For security reasons, it is highly recommended that you always # use absolute paths for naming shared libraries, and exclude the # DT_RUNPATH tag from executables and libraries. But doing so # requires that you compile everything twice, which is a pain. 
if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' - _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' - _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' + _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac ;; sunos4*) _LT_TAGVAR(archive_cmds, $1)='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags' wlarc= _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; *) if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then - _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' - _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' + _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac - if test "$_LT_TAGVAR(ld_shlibs, $1)" = no; then + if test no = "$_LT_TAGVAR(ld_shlibs, $1)"; then runpath_var= _LT_TAGVAR(hardcode_libdir_flag_spec, $1)= _LT_TAGVAR(export_dynamic_flag_spec, $1)= _LT_TAGVAR(whole_archive_flag_spec, $1)= fi else # PORTME fill in a description of your 
system's linker (not GNU ld) case $host_os in aix3*) _LT_TAGVAR(allow_undefined_flag, $1)=unsupported _LT_TAGVAR(always_export_symbols, $1)=yes _LT_TAGVAR(archive_expsym_cmds, $1)='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname' # Note: this linker hardcodes the directories in LIBPATH if there # are no directories specified by -L. _LT_TAGVAR(hardcode_minus_L, $1)=yes - if test "$GCC" = yes && test -z "$lt_prog_compiler_static"; then + if test yes = "$GCC" && test -z "$lt_prog_compiler_static"; then # Neither direct hardcoding nor static linking is supported with a # broken collect2. _LT_TAGVAR(hardcode_direct, $1)=unsupported fi ;; aix[[4-9]]*) - if test "$host_cpu" = ia64; then + if test ia64 = "$host_cpu"; then # On IA64, the linker does run time linking by default, so we don't # have to do anything special. aix_use_runtimelinking=no exp_sym_flag='-Bexport' - no_entry_flag="" + no_entry_flag= else # If we're using GNU nm, then we don't want the "-C" option. - # -C means demangle to AIX nm, but means don't demangle with GNU nm - # Also, AIX nm treats weak defined symbols like other global - # defined symbols, whereas GNU nm marks them as "W". + # -C means demangle to GNU nm, but means don't demangle to AIX nm. + # Without the "-l" option, or with the "-B" option, AIX nm treats + # weak defined symbols like other global defined symbols, whereas + # GNU nm marks them as "W". + # While the 'weak' keyword is ignored in the Export File, we need + # it in the Import File for the 'aix-soname' feature, so we have + # to replace the "-B" option with "-P" for AIX nm. 
if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then - _LT_TAGVAR(export_symbols_cmds, $1)='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && ([substr](\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols' + _LT_TAGVAR(export_symbols_cmds, $1)='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && ([substr](\$ 3,1,1) != ".")) { if (\$ 2 == "W") { print \$ 3 " weak" } else { print \$ 3 } } }'\'' | sort -u > $export_symbols' else - _LT_TAGVAR(export_symbols_cmds, $1)='$NM -BCpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B")) && ([substr](\$ 3,1,1) != ".")) { print \$ 3 } }'\'' | sort -u > $export_symbols' + _LT_TAGVAR(export_symbols_cmds, $1)='`func_echo_all $NM | $SED -e '\''s/B\([[^B]]*\)$/P\1/'\''` -PCpgl $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) && ([substr](\$ 1,1,1) != ".")) { if ((\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) { print \$ 1 " weak" } else { print \$ 1 } } }'\'' | sort -u > $export_symbols' fi aix_use_runtimelinking=no # Test if we are trying to use run time linking or normal # AIX style linking. If -brtl is somewhere in LDFLAGS, we - # need to do runtime linking. + # have runtime linking enabled, and use it for executables. 
+ # For shared libraries, we enable/disable runtime linking + # depending on the kind of the shared library created - + # when "with_aix_soname,aix_use_runtimelinking" is: + # "aix,no" lib.a(lib.so.V) shared, rtl:no, for executables + # "aix,yes" lib.so shared, rtl:yes, for executables + # lib.a static archive + # "both,no" lib.so.V(shr.o) shared, rtl:yes + # lib.a(lib.so.V) shared, rtl:no, for executables + # "both,yes" lib.so.V(shr.o) shared, rtl:yes, for executables + # lib.a(lib.so.V) shared, rtl:no + # "svr4,*" lib.so.V(shr.o) shared, rtl:yes, for executables + # lib.a static archive case $host_os in aix4.[[23]]|aix4.[[23]].*|aix[[5-9]]*) for ld_flag in $LDFLAGS; do - if (test $ld_flag = "-brtl" || test $ld_flag = "-Wl,-brtl"); then + if (test x-brtl = "x$ld_flag" || test x-Wl,-brtl = "x$ld_flag"); then aix_use_runtimelinking=yes break fi done + if test svr4,no = "$with_aix_soname,$aix_use_runtimelinking"; then + # With aix-soname=svr4, we create the lib.so.V shared archives only, + # so we don't have lib.a shared libs to link our executables. + # We have to force runtime linking in this case. + aix_use_runtimelinking=yes + LDFLAGS="$LDFLAGS -Wl,-brtl" + fi ;; esac exp_sym_flag='-bexport' no_entry_flag='-bnoentry' fi # When large executables or shared objects are built, AIX ld can # have problems creating the table of contents. If linking a library # or program results in "error TOC overflow" add -mminimal-toc to # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. 
_LT_TAGVAR(archive_cmds, $1)='' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_direct_absolute, $1)=yes _LT_TAGVAR(hardcode_libdir_separator, $1)=':' _LT_TAGVAR(link_all_deplibs, $1)=yes - _LT_TAGVAR(file_list_spec, $1)='${wl}-f,' - - if test "$GCC" = yes; then + _LT_TAGVAR(file_list_spec, $1)='$wl-f,' + case $with_aix_soname,$aix_use_runtimelinking in + aix,*) ;; # traditional, no import file + svr4,* | *,yes) # use import file + # The Import File defines what to hardcode. + _LT_TAGVAR(hardcode_direct, $1)=no + _LT_TAGVAR(hardcode_direct_absolute, $1)=no + ;; + esac + + if test yes = "$GCC"; then case $host_os in aix4.[[012]]|aix4.[[012]].*) # We only want to do this on AIX 4.2 and lower, the check # below for broken collect2 doesn't work under 4.3+ - collect2name=`${CC} -print-prog-name=collect2` + collect2name=`$CC -print-prog-name=collect2` if test -f "$collect2name" && strings "$collect2name" | $GREP resolve_lib_name >/dev/null then # We have reworked collect2 : else # We have old collect2 _LT_TAGVAR(hardcode_direct, $1)=unsupported # It fails to find uninstalled libraries when the uninstalled # path is not listed in the libpath. Setting hardcode_minus_L # to unsupported forces relinking _LT_TAGVAR(hardcode_minus_L, $1)=yes _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)= fi ;; esac shared_flag='-shared' - if test "$aix_use_runtimelinking" = yes; then - shared_flag="$shared_flag "'${wl}-G' + if test yes = "$aix_use_runtimelinking"; then + shared_flag="$shared_flag "'$wl-G' fi + # Need to ensure runtime linking is disabled for the traditional + # shared library, or the linker may eventually find shared libraries + # /with/ Import File - we do not want to mix them. + shared_flag_aix='-shared' + shared_flag_svr4='-shared $wl-G' else # not using gcc - if test "$host_cpu" = ia64; then + if test ia64 = "$host_cpu"; then # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release # chokes on -Wl,-G. 
The following line is correct: shared_flag='-G' else - if test "$aix_use_runtimelinking" = yes; then - shared_flag='${wl}-G' + if test yes = "$aix_use_runtimelinking"; then + shared_flag='$wl-G' else - shared_flag='${wl}-bM:SRE' + shared_flag='$wl-bM:SRE' fi + shared_flag_aix='$wl-bM:SRE' + shared_flag_svr4='$wl-G' fi fi - _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-bexpall' + _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-bexpall' # It seems that -bexpall does not export symbols beginning with # underscore (_), so it is better to generate a list of symbols to export. _LT_TAGVAR(always_export_symbols, $1)=yes - if test "$aix_use_runtimelinking" = yes; then + if test aix,yes = "$with_aix_soname,$aix_use_runtimelinking"; then # Warning - without using the other runtime loading flags (-brtl), # -berok will link without error, but may produce a broken library. _LT_TAGVAR(allow_undefined_flag, $1)='-berok' # Determine the default libpath from the value encoded in an # empty executable. _LT_SYS_MODULE_PATH_AIX([$1]) - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath" - _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag" + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-blibpath:$libdir:'"$aix_libpath" + _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs $wl'$no_entry_flag' $compiler_flags `if test -n "$allow_undefined_flag"; then func_echo_all "$wl$allow_undefined_flag"; else :; fi` $wl'$exp_sym_flag:\$export_symbols' '$shared_flag else - if test "$host_cpu" = ia64; then - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R $libdir:/usr/lib:/lib' + if test ia64 = "$host_cpu"; then + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R $libdir:/usr/lib:/lib' 
_LT_TAGVAR(allow_undefined_flag, $1)="-z nodefs" - _LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$exp_sym_flag:\$export_symbols" + _LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\$wl$no_entry_flag"' $compiler_flags $wl$allow_undefined_flag '"\$wl$exp_sym_flag:\$export_symbols" else # Determine the default libpath from the value encoded in an # empty executable. _LT_SYS_MODULE_PATH_AIX([$1]) - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath" + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-blibpath:$libdir:'"$aix_libpath" # Warning - without using the other run time loading flags, # -berok will link without error, but may produce a broken library. - _LT_TAGVAR(no_undefined_flag, $1)=' ${wl}-bernotok' - _LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-berok' - if test "$with_gnu_ld" = yes; then + _LT_TAGVAR(no_undefined_flag, $1)=' $wl-bernotok' + _LT_TAGVAR(allow_undefined_flag, $1)=' $wl-berok' + if test yes = "$with_gnu_ld"; then # We only use this code for GNU lds that support --whole-archive. - _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive$convenience ${wl}--no-whole-archive' + _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive$convenience $wl--no-whole-archive' else # Exported symbols can be pulled into shared objects from archives _LT_TAGVAR(whole_archive_flag_spec, $1)='$convenience' fi _LT_TAGVAR(archive_cmds_need_lc, $1)=yes - # This is similar to how AIX traditionally builds its shared libraries. 
- _LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs ${wl}-bnoentry $compiler_flags ${wl}-bE:$export_symbols${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname' + _LT_TAGVAR(archive_expsym_cmds, $1)='$RM -r $output_objdir/$realname.d~$MKDIR $output_objdir/$realname.d' + # -brtl affects multiple linker settings, -berok does not and is overridden later + compiler_flags_filtered='`func_echo_all "$compiler_flags " | $SED -e "s%-brtl\\([[, ]]\\)%-berok\\1%g"`' + if test svr4 != "$with_aix_soname"; then + # This is similar to how AIX traditionally builds its shared libraries. + _LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$CC '$shared_flag_aix' -o $output_objdir/$realname.d/$soname $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$realname.d/$soname' + fi + if test aix != "$with_aix_soname"; then + _LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$CC '$shared_flag_svr4' -o $output_objdir/$realname.d/$shared_archive_member_spec.o $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$STRIP -e $output_objdir/$realname.d/$shared_archive_member_spec.o~( func_echo_all "#! 
$soname($shared_archive_member_spec.o)"; if test shr_64 = "$shared_archive_member_spec"; then func_echo_all "# 64"; else func_echo_all "# 32"; fi; cat $export_symbols ) > $output_objdir/$realname.d/$shared_archive_member_spec.imp~$AR $AR_FLAGS $output_objdir/$soname $output_objdir/$realname.d/$shared_archive_member_spec.o $output_objdir/$realname.d/$shared_archive_member_spec.imp' + else + # used by -dlpreopen to get the symbols + _LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$MV $output_objdir/$realname.d/$soname $output_objdir' + fi + _LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$RM -r $output_objdir/$realname.d' fi fi ;; amigaos*) case $host_cpu in powerpc) # see comment about AmigaOS4 .so support - _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='' ;; m68k) _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_minus_L, $1)=yes ;; esac ;; bsdi[[45]]*) _LT_TAGVAR(export_dynamic_flag_spec, $1)=-rdynamic ;; cygwin* | mingw* | pw32* | cegcc*) # When not using gcc, we currently assume that we are using # Microsoft Visual C++. # hardcode_libdir_flag_spec is actually meaningless, as there is # no search path for DLLs. 
case $cc_basename in cl*) # Native MSVC _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' ' _LT_TAGVAR(allow_undefined_flag, $1)=unsupported _LT_TAGVAR(always_export_symbols, $1)=yes _LT_TAGVAR(file_list_spec, $1)='@' # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. - shrext_cmds=".dll" + shrext_cmds=.dll # FIXME: Setting linknames here is a bad hack. - _LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames=' - _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then - sed -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp; - else - sed -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp; - fi~ - $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~ - linknames=' + _LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~linknames=' + _LT_TAGVAR(archive_expsym_cmds, $1)='if _LT_DLL_DEF_P([$export_symbols]); then + cp "$export_symbols" "$output_objdir/$soname.def"; + echo "$tool_output_objdir$soname.def" > "$output_objdir/$soname.exp"; + else + $SED -e '\''s/^/-link -EXPORT:/'\'' < $export_symbols > $output_objdir/$soname.exp; + fi~ + $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~ + linknames=' # The linker will not automatically build a static lib if we build a DLL. 
# _LT_TAGVAR(old_archive_from_new_cmds, $1)='true' _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes _LT_TAGVAR(exclude_expsyms, $1)='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*' _LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1,DATA/'\'' | $SED -e '\''/^[[AITW]][[ ]]/s/.*[[ ]]//'\'' | sort | uniq > $export_symbols' # Don't use ranlib _LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib' _LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~ - lt_tool_outputfile="@TOOL_OUTPUT@"~ - case $lt_outputfile in - *.exe|*.EXE) ;; - *) - lt_outputfile="$lt_outputfile.exe" - lt_tool_outputfile="$lt_tool_outputfile.exe" - ;; - esac~ - if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then - $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1; - $RM "$lt_outputfile.manifest"; - fi' + lt_tool_outputfile="@TOOL_OUTPUT@"~ + case $lt_outputfile in + *.exe|*.EXE) ;; + *) + lt_outputfile=$lt_outputfile.exe + lt_tool_outputfile=$lt_tool_outputfile.exe + ;; + esac~ + if test : != "$MANIFEST_TOOL" && test -f "$lt_outputfile.manifest"; then + $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1; + $RM "$lt_outputfile.manifest"; + fi' ;; *) # Assume MSVC wrapper _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' ' _LT_TAGVAR(allow_undefined_flag, $1)=unsupported # Tell ltmain to make .lib files, not .a files. libext=lib # Tell ltmain to make .dll files, not .so files. - shrext_cmds=".dll" + shrext_cmds=.dll # FIXME: Setting linknames here is a bad hack. _LT_TAGVAR(archive_cmds, $1)='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames=' # The linker will automatically build a .lib file if we build a DLL. _LT_TAGVAR(old_archive_from_new_cmds, $1)='true' # FIXME: Should let the user specify the lib program. 
_LT_TAGVAR(old_archive_cmds, $1)='lib -OUT:$oldlib$oldobjs$old_deplibs' _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes ;; esac ;; darwin* | rhapsody*) _LT_DARWIN_LINKER_FEATURES($1) ;; dgux*) _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; # FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor # support. Future versions do this automatically, but an explicit c++rt0.o # does not break anything, and helps significantly (at the cost of a little # extra space). freebsd2.2*) _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; # Unfortunately, older versions of FreeBSD 2 do not have this feature. freebsd2.*) _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_minus_L, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; # FreeBSD 3 and greater uses gcc -shared to do shared libraries. 
freebsd* | dragonfly*) _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; hpux9*) - if test "$GCC" = yes; then - _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' + if test yes = "$GCC"; then + _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared $pic_flag $wl+b $wl$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib' else - _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' + _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib' fi - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir' + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: _LT_TAGVAR(hardcode_direct, $1)=yes # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. 
_LT_TAGVAR(hardcode_minus_L, $1)=yes - _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' + _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E' ;; hpux10*) - if test "$GCC" = yes && test "$with_gnu_ld" = no; then - _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' + if test yes,no = "$GCC,$with_gnu_ld"; then + _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags' else _LT_TAGVAR(archive_cmds, $1)='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags' fi - if test "$with_gnu_ld" = no; then - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir' + if test no = "$with_gnu_ld"; then + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_direct_absolute, $1)=yes - _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' + _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E' # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. 
_LT_TAGVAR(hardcode_minus_L, $1)=yes fi ;; hpux11*) - if test "$GCC" = yes && test "$with_gnu_ld" = no; then + if test yes,no = "$GCC,$with_gnu_ld"; then case $host_cpu in hppa*64*) - _LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl+h $wl$soname -o $lib $libobjs $deplibs $compiler_flags' ;; ia64*) - _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $wl+h $wl$soname $wl+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' ;; *) - _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags' ;; esac else case $host_cpu in hppa*64*) - _LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' + _LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname -o $lib $libobjs $deplibs $compiler_flags' ;; ia64*) - _LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' + _LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags' ;; *) m4_if($1, [], [ # Older versions of the 11.00 compiler do not understand -b yet # (HP92453-01 A.11.01.20 doesn't, HP92453-01 B.11.X.35175-35176.GP does) _LT_LINKER_OPTION([if $CC understands -b], _LT_TAGVAR(lt_cv_prog_compiler__b, $1), [-b], - [_LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags'], + [_LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags'], [_LT_TAGVAR(archive_cmds, 
$1)='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'])], - [_LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $libobjs $deplibs $compiler_flags']) + [_LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags']) ;; esac fi - if test "$with_gnu_ld" = no; then - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir' + if test no = "$with_gnu_ld"; then + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: case $host_cpu in hppa*64*|ia64*) _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; *) _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_direct_absolute, $1)=yes - _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' + _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E' # hardcode_minus_L: Not really in the search PATH, # but as the default location of the library. _LT_TAGVAR(hardcode_minus_L, $1)=yes ;; esac fi ;; irix5* | irix6* | nonstopux*) - if test "$GCC" = yes; then - _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + if test yes = "$GCC"; then + _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib' # Try to use the -exported_symbol ld option, if it does not # work, assume that -exports_file does not work either and # implicitly export all symbols. # This should be the same for all languages, so no per-tag cache variable. 
AC_CACHE_CHECK([whether the $host_os linker accepts -exported_symbol], [lt_cv_irix_exported_symbol], - [save_LDFLAGS="$LDFLAGS" - LDFLAGS="$LDFLAGS -shared ${wl}-exported_symbol ${wl}foo ${wl}-update_registry ${wl}/dev/null" + [save_LDFLAGS=$LDFLAGS + LDFLAGS="$LDFLAGS -shared $wl-exported_symbol ${wl}foo $wl-update_registry $wl/dev/null" AC_LINK_IFELSE( [AC_LANG_SOURCE( [AC_LANG_CASE([C], [[int foo (void) { return 0; }]], [C++], [[int foo (void) { return 0; }]], [Fortran 77], [[ subroutine foo end]], [Fortran], [[ subroutine foo end]])])], [lt_cv_irix_exported_symbol=yes], [lt_cv_irix_exported_symbol=no]) - LDFLAGS="$save_LDFLAGS"]) - if test "$lt_cv_irix_exported_symbol" = yes; then - _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations ${wl}-exports_file ${wl}$export_symbols -o $lib' + LDFLAGS=$save_LDFLAGS]) + if test yes = "$lt_cv_irix_exported_symbol"; then + _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations $wl-exports_file $wl$export_symbols -o $lib' fi + _LT_TAGVAR(link_all_deplibs, $1)=no else - _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' - _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -exports_file $export_symbols -o $lib' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && 
func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib' + _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -exports_file $export_symbols -o $lib' fi _LT_TAGVAR(archive_cmds_need_lc, $1)='no' - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: _LT_TAGVAR(inherit_rpath, $1)=yes _LT_TAGVAR(link_all_deplibs, $1)=yes ;; - netbsd*) + linux*) + case $cc_basename in + tcc*) + # Fabrice Bellard et al's Tiny C Compiler + _LT_TAGVAR(ld_shlibs, $1)=yes + _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' + ;; + esac + ;; + + netbsd* | netbsdelf*-gnu) if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out else _LT_TAGVAR(archive_cmds, $1)='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF fi _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; newsos6) _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_direct, $1)=yes - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; *nto* | *qnx*) ;; - openbsd*) + openbsd* | bitrig*) if test -f /usr/libexec/ld.so; then _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=yes - if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then + if 
test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' - _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags ${wl}-retain-symbols-file,$export_symbols' - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' - _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' + _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags $wl-retain-symbols-file,$export_symbols' + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir' + _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E' else - case $host_os in - openbsd[[01]].* | openbsd2.[[0-7]] | openbsd2.[[0-7]].*) - _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' - ;; - *) - _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' - ;; - esac + _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags' + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir' fi else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; os2*) _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_minus_L, $1)=yes _LT_TAGVAR(allow_undefined_flag, $1)=unsupported - _LT_TAGVAR(archive_cmds, $1)='$ECHO "LIBRARY $libname INITINSTANCE" > $output_objdir/$libname.def~$ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~echo DATA >> $output_objdir/$libname.def~echo " SINGLE NONSHARED" >> $output_objdir/$libname.def~echo EXPORTS >> $output_objdir/$libname.def~emxexp $libobjs >> $output_objdir/$libname.def~$CC -Zdll -Zcrtdll -o $lib $libobjs $deplibs $compiler_flags $output_objdir/$libname.def' - _LT_TAGVAR(old_archive_from_new_cmds, $1)='emximp -o $output_objdir/$libname.a $output_objdir/$libname.def' + 
shrext_cmds=.dll + _LT_TAGVAR(archive_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ + $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ + $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ + $ECHO EXPORTS >> $output_objdir/$libname.def~ + emxexp $libobjs | $SED /"_DLL_InitTerm"/d >> $output_objdir/$libname.def~ + $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ + emximp -o $lib $output_objdir/$libname.def' + _LT_TAGVAR(archive_expsym_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ + $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ + $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ + $ECHO EXPORTS >> $output_objdir/$libname.def~ + prefix_cmds="$SED"~ + if test EXPORTS = "`$SED 1q $export_symbols`"; then + prefix_cmds="$prefix_cmds -e 1d"; + fi~ + prefix_cmds="$prefix_cmds -e \"s/^\(.*\)$/_\1/g\""~ + cat $export_symbols | $prefix_cmds >> $output_objdir/$libname.def~ + $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ + emximp -o $lib $output_objdir/$libname.def' + _LT_TAGVAR(old_archive_From_new_cmds, $1)='emximp -o $output_objdir/${libname}_dll.a $output_objdir/$libname.def' + _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes ;; osf3*) - if test "$GCC" = yes; then - _LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*' - _LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + if test yes = "$GCC"; then + _LT_TAGVAR(allow_undefined_flag, $1)=' $wl-expect_unresolved $wl\*' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $libobjs 
$deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib' else _LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*' - _LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib' fi _LT_TAGVAR(archive_cmds_need_lc, $1)='no' - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: ;; osf4* | osf5*) # as osf3* with the addition of -msym flag - if test "$GCC" = yes; then - _LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*' - _LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $pic_flag $libobjs $deplibs $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' + if test yes = "$GCC"; then + _LT_TAGVAR(allow_undefined_flag, $1)=' $wl-expect_unresolved $wl\*' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $pic_flag $libobjs $deplibs $compiler_flags $wl-msym $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib' + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir' else _LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*' - 
_LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $libobjs $deplibs $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib' _LT_TAGVAR(archive_expsym_cmds, $1)='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; printf "%s\\n" "-hidden">> $lib.exp~ - $CC -shared${allow_undefined_flag} ${wl}-input ${wl}$lib.exp $compiler_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib~$RM $lib.exp' + $CC -shared$allow_undefined_flag $wl-input $wl$lib.exp $compiler_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib~$RM $lib.exp' # Both c and cxx compiler support -rpath directly _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir' fi _LT_TAGVAR(archive_cmds_need_lc, $1)='no' _LT_TAGVAR(hardcode_libdir_separator, $1)=: ;; solaris*) _LT_TAGVAR(no_undefined_flag, $1)=' -z defs' - if test "$GCC" = yes; then - wlarc='${wl}' - _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-h ${wl}$soname -o $lib $libobjs $deplibs $compiler_flags' + if test yes = "$GCC"; then + wlarc='$wl' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $wl-z ${wl}text $wl-h $wl$soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ - $CC -shared $pic_flag ${wl}-z ${wl}text ${wl}-M ${wl}$lib.exp ${wl}-h ${wl}$soname -o $lib $libobjs 
$deplibs $compiler_flags~$RM $lib.exp' + $CC -shared $pic_flag $wl-z ${wl}text $wl-M $wl$lib.exp $wl-h $wl$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp' else case `$CC -V 2>&1` in *"Compilers 5.0"*) wlarc='' - _LT_TAGVAR(archive_cmds, $1)='$LD -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $linker_flags' + _LT_TAGVAR(archive_cmds, $1)='$LD -G$allow_undefined_flag -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ - $LD -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$RM $lib.exp' + $LD -G$allow_undefined_flag -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$RM $lib.exp' ;; *) - wlarc='${wl}' - _LT_TAGVAR(archive_cmds, $1)='$CC -G${allow_undefined_flag} -h $soname -o $lib $libobjs $deplibs $compiler_flags' + wlarc='$wl' + _LT_TAGVAR(archive_cmds, $1)='$CC -G$allow_undefined_flag -h $soname -o $lib $libobjs $deplibs $compiler_flags' _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~ - $CC -G${allow_undefined_flag} -M $lib.exp -h $soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp' + $CC -G$allow_undefined_flag -M $lib.exp -h $soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp' ;; esac fi _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no case $host_os in solaris2.[[0-5]] | solaris2.[[0-5]].*) ;; *) # The compiler driver will combine and reorder linker options, - # but understands `-z linker_flag'. GCC discards it without `$wl', + # but understands '-z linker_flag'. GCC discards it without '$wl', # but is careful enough not to reorder. # Supported since Solaris 2.6 (maybe 2.5.1?) 
- if test "$GCC" = yes; then - _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}-z ${wl}allextract$convenience ${wl}-z ${wl}defaultextract' + if test yes = "$GCC"; then + _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl-z ${wl}allextract$convenience $wl-z ${wl}defaultextract' else _LT_TAGVAR(whole_archive_flag_spec, $1)='-z allextract$convenience -z defaultextract' fi ;; esac _LT_TAGVAR(link_all_deplibs, $1)=yes ;; sunos4*) - if test "x$host_vendor" = xsequent; then + if test sequent = "$host_vendor"; then # Use $CC to link under sequent, because it throws in some extra .o # files that make .init and .fini sections work. - _LT_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h $soname -o $lib $libobjs $deplibs $compiler_flags' + _LT_TAGVAR(archive_cmds, $1)='$CC -G $wl-h $soname -o $lib $libobjs $deplibs $compiler_flags' else _LT_TAGVAR(archive_cmds, $1)='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags' fi _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_minus_L, $1)=yes _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; sysv4) case $host_vendor in sni) _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_direct, $1)=yes # is this really true??? ;; siemens) ## LD is ld it makes a PLAMLIB ## CC just makes a GrossModule. 
_LT_TAGVAR(archive_cmds, $1)='$LD -G -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(reload_cmds, $1)='$CC -r -o $output$reload_objs' _LT_TAGVAR(hardcode_direct, $1)=no ;; motorola) _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_direct, $1)=no #Motorola manual says yes, but my tests say they lie ;; esac runpath_var='LD_RUN_PATH' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; sysv4.3*) _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no _LT_TAGVAR(export_dynamic_flag_spec, $1)='-Bexport' ;; sysv4*MP*) if test -d /usr/nec; then _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no runpath_var=LD_RUN_PATH hardcode_runpath_var=yes _LT_TAGVAR(ld_shlibs, $1)=yes fi ;; sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[[01]].[[10]]* | unixware7* | sco3.2v5.0.[[024]]*) - _LT_TAGVAR(no_undefined_flag, $1)='${wl}-z,text' + _LT_TAGVAR(no_undefined_flag, $1)='$wl-z,text' _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no runpath_var='LD_RUN_PATH' - if test "$GCC" = yes; then - _LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' - _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + if test yes = "$GCC"; then + _LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' else - _LT_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' - _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + _LT_TAGVAR(archive_cmds, 
$1)='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' fi ;; sysv5* | sco3.2v5* | sco5v6*) - # Note: We can NOT use -z defs as we might desire, because we do not + # Note: We CANNOT use -z defs as we might desire, because we do not # link with -lc, and that would cause any symbols used from libc to # always be unresolved, which means just about no library would # ever link correctly. If we're not using GNU ld we use -z text # though, which does catch some bad symbols but isn't as heavy-handed # as -z defs. - _LT_TAGVAR(no_undefined_flag, $1)='${wl}-z,text' - _LT_TAGVAR(allow_undefined_flag, $1)='${wl}-z,nodefs' + _LT_TAGVAR(no_undefined_flag, $1)='$wl-z,text' + _LT_TAGVAR(allow_undefined_flag, $1)='$wl-z,nodefs' _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R,$libdir' + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R,$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=':' _LT_TAGVAR(link_all_deplibs, $1)=yes - _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-Bexport' + _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-Bexport' runpath_var='LD_RUN_PATH' - if test "$GCC" = yes; then - _LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' - _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + if test yes = "$GCC"; then + _LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' else - _LT_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' - _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G 
${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + _LT_TAGVAR(archive_cmds, $1)='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' + _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags' fi ;; uts4*) _LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags' _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; *) _LT_TAGVAR(ld_shlibs, $1)=no ;; esac - if test x$host_vendor = xsni; then + if test sni = "$host_vendor"; then case $host in sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*) - _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-Blargedynsym' + _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-Blargedynsym' ;; esac fi fi ]) AC_MSG_RESULT([$_LT_TAGVAR(ld_shlibs, $1)]) -test "$_LT_TAGVAR(ld_shlibs, $1)" = no && can_build_shared=no +test no = "$_LT_TAGVAR(ld_shlibs, $1)" && can_build_shared=no _LT_TAGVAR(with_gnu_ld, $1)=$with_gnu_ld _LT_DECL([], [libext], [0], [Old archive suffix (normally "a")])dnl _LT_DECL([], [shrext_cmds], [1], [Shared library suffix (normally ".so")])dnl _LT_DECL([], [extract_expsyms_cmds], [2], [The commands to extract the exported symbol list from a shared archive]) # # Do we need to explicitly link libc? # case "x$_LT_TAGVAR(archive_cmds_need_lc, $1)" in x|xyes) # Assume -lc should be added _LT_TAGVAR(archive_cmds_need_lc, $1)=yes - if test "$enable_shared" = yes && test "$GCC" = yes; then + if test yes,yes = "$GCC,$enable_shared"; then case $_LT_TAGVAR(archive_cmds, $1) in *'~'*) # FIXME: we may have to deal with multi-command sequences. ;; '$CC '*) # Test whether the compiler implicitly links with -lc since on some # systems, -lgcc has to come before -lc. If gcc already passes -lc # to ld, don't add -lc before -lgcc. 
AC_CACHE_CHECK([whether -lc should be explicitly linked in], [lt_cv_]_LT_TAGVAR(archive_cmds_need_lc, $1), [$RM conftest* echo "$lt_simple_compile_test_code" > conftest.$ac_ext if AC_TRY_EVAL(ac_compile) 2>conftest.err; then soname=conftest lib=conftest libobjs=conftest.$ac_objext deplibs= wl=$_LT_TAGVAR(lt_prog_compiler_wl, $1) pic_flag=$_LT_TAGVAR(lt_prog_compiler_pic, $1) compiler_flags=-v linker_flags=-v verstring= output_objdir=. libname=conftest lt_save_allow_undefined_flag=$_LT_TAGVAR(allow_undefined_flag, $1) _LT_TAGVAR(allow_undefined_flag, $1)= if AC_TRY_EVAL(_LT_TAGVAR(archive_cmds, $1) 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1) then lt_cv_[]_LT_TAGVAR(archive_cmds_need_lc, $1)=no else lt_cv_[]_LT_TAGVAR(archive_cmds_need_lc, $1)=yes fi _LT_TAGVAR(allow_undefined_flag, $1)=$lt_save_allow_undefined_flag else cat conftest.err 1>&5 fi $RM conftest* ]) _LT_TAGVAR(archive_cmds_need_lc, $1)=$lt_cv_[]_LT_TAGVAR(archive_cmds_need_lc, $1) ;; esac fi ;; esac _LT_TAGDECL([build_libtool_need_lc], [archive_cmds_need_lc], [0], [Whether or not to add -lc for building shared libraries]) _LT_TAGDECL([allow_libtool_libs_with_static_runtimes], [enable_shared_with_static_runtimes], [0], [Whether or not to disallow shared libs when runtime libs are static]) _LT_TAGDECL([], [export_dynamic_flag_spec], [1], [Compiler flag to allow reflexive dlopens]) _LT_TAGDECL([], [whole_archive_flag_spec], [1], [Compiler flag to generate shared objects directly from archives]) _LT_TAGDECL([], [compiler_needs_object], [1], [Whether the compiler copes with passing no objects directly]) _LT_TAGDECL([], [old_archive_from_new_cmds], [2], [Create an old-style archive from a shared archive]) _LT_TAGDECL([], [old_archive_from_expsyms_cmds], [2], [Create a temporary old-style archive to link instead of a shared archive]) _LT_TAGDECL([], [archive_cmds], [2], [Commands used to build a shared archive]) _LT_TAGDECL([], [archive_expsym_cmds], [2]) _LT_TAGDECL([], [module_cmds], [2], [Commands used to 
build a loadable module if different from building a shared archive.]) _LT_TAGDECL([], [module_expsym_cmds], [2]) _LT_TAGDECL([], [with_gnu_ld], [1], [Whether we are building with GNU ld or not]) _LT_TAGDECL([], [allow_undefined_flag], [1], [Flag that allows shared libraries with undefined symbols to be built]) _LT_TAGDECL([], [no_undefined_flag], [1], [Flag that enforces no undefined symbols]) _LT_TAGDECL([], [hardcode_libdir_flag_spec], [1], [Flag to hardcode $libdir into a binary during linking. This must work even if $libdir does not exist]) _LT_TAGDECL([], [hardcode_libdir_separator], [1], [Whether we need a single "-rpath" flag with a separated argument]) _LT_TAGDECL([], [hardcode_direct], [0], - [Set to "yes" if using DIR/libNAME${shared_ext} during linking hardcodes + [Set to "yes" if using DIR/libNAME$shared_ext during linking hardcodes DIR into the resulting binary]) _LT_TAGDECL([], [hardcode_direct_absolute], [0], - [Set to "yes" if using DIR/libNAME${shared_ext} during linking hardcodes + [Set to "yes" if using DIR/libNAME$shared_ext during linking hardcodes DIR into the resulting binary and the resulting library dependency is - "absolute", i.e impossible to change by setting ${shlibpath_var} if the + "absolute", i.e impossible to change by setting $shlibpath_var if the library is relocated]) _LT_TAGDECL([], [hardcode_minus_L], [0], [Set to "yes" if using the -LDIR flag during linking hardcodes DIR into the resulting binary]) _LT_TAGDECL([], [hardcode_shlibpath_var], [0], [Set to "yes" if using SHLIBPATH_VAR=DIR during linking hardcodes DIR into the resulting binary]) _LT_TAGDECL([], [hardcode_automatic], [0], [Set to "yes" if building a shared library automatically hardcodes DIR into the library and all subsequent libraries and executables linked against it]) _LT_TAGDECL([], [inherit_rpath], [0], [Set to yes if linker adds runtime paths of dependent libraries to runtime path list]) _LT_TAGDECL([], [link_all_deplibs], [0], [Whether libtool must link a 
program against all its dependency libraries]) _LT_TAGDECL([], [always_export_symbols], [0], [Set to "yes" if exported symbols are required]) _LT_TAGDECL([], [export_symbols_cmds], [2], [The commands to list exported symbols]) _LT_TAGDECL([], [exclude_expsyms], [1], [Symbols that should not be listed in the preloaded symbols]) _LT_TAGDECL([], [include_expsyms], [1], [Symbols that must always be exported]) _LT_TAGDECL([], [prelink_cmds], [2], [Commands necessary for linking programs (against libraries) with templates]) _LT_TAGDECL([], [postlink_cmds], [2], [Commands necessary for finishing linking programs]) _LT_TAGDECL([], [file_list_spec], [1], [Specify filename containing input files]) dnl FIXME: Not yet implemented dnl _LT_TAGDECL([], [thread_safe_flag_spec], [1], dnl [Compiler flag to generate thread safe objects]) ])# _LT_LINKER_SHLIBS # _LT_LANG_C_CONFIG([TAG]) # ------------------------ # Ensure that the configuration variables for a C compiler are suitably # defined. These variables are subsequently used by _LT_CONFIG to write -# the compiler configuration to `libtool'. +# the compiler configuration to 'libtool'. m4_defun([_LT_LANG_C_CONFIG], [m4_require([_LT_DECL_EGREP])dnl -lt_save_CC="$CC" +lt_save_CC=$CC AC_LANG_PUSH(C) # Source file extension for C test sources. ac_ext=c # Object file extension for compiled C test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # Code to be used in simple compile tests lt_simple_compile_test_code="int some_variable = 0;" # Code to be used in simple link tests lt_simple_link_test_code='int main(){return(0);}' _LT_TAG_COMPILER # Save the default compiler, since it gets overwritten when the other # tags are being tested, and _LT_TAGVAR(compiler, []) is a NOP. 
compiler_DEFAULT=$CC # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... if test -n "$compiler"; then _LT_COMPILER_NO_RTTI($1) _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_SYS_DYNAMIC_LINKER($1) _LT_LINKER_HARDCODE_LIBPATH($1) LT_SYS_DLOPEN_SELF _LT_CMD_STRIPLIB - # Report which library types will actually be built + # Report what library types will actually be built AC_MSG_CHECKING([if libtool supports shared libraries]) AC_MSG_RESULT([$can_build_shared]) AC_MSG_CHECKING([whether to build shared libraries]) - test "$can_build_shared" = "no" && enable_shared=no + test no = "$can_build_shared" && enable_shared=no # On AIX, shared libraries and static libraries use the same namespace, and # are all built from PIC. case $host_os in aix3*) - test "$enable_shared" = yes && enable_static=no + test yes = "$enable_shared" && enable_static=no if test -n "$RANLIB"; then archive_cmds="$archive_cmds~\$RANLIB \$lib" postinstall_cmds='$RANLIB $lib' fi ;; aix[[4-9]]*) - if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then - test "$enable_shared" = yes && enable_static=no + if test ia64 != "$host_cpu"; then + case $enable_shared,$with_aix_soname,$aix_use_runtimelinking in + yes,aix,yes) ;; # shared object as lib.so file only + yes,svr4,*) ;; # shared object as lib.so archive member only + yes,*) enable_static=no ;; # shared object in lib.a archive as well + esac fi ;; esac AC_MSG_RESULT([$enable_shared]) AC_MSG_CHECKING([whether to build static libraries]) # Make sure either enable_shared or enable_static is yes. 
- test "$enable_shared" = yes || enable_static=yes + test yes = "$enable_shared" || enable_static=yes AC_MSG_RESULT([$enable_static]) _LT_CONFIG($1) fi AC_LANG_POP -CC="$lt_save_CC" +CC=$lt_save_CC ])# _LT_LANG_C_CONFIG # _LT_LANG_CXX_CONFIG([TAG]) # -------------------------- # Ensure that the configuration variables for a C++ compiler are suitably # defined. These variables are subsequently used by _LT_CONFIG to write -# the compiler configuration to `libtool'. +# the compiler configuration to 'libtool'. m4_defun([_LT_LANG_CXX_CONFIG], [m4_require([_LT_FILEUTILS_DEFAULTS])dnl m4_require([_LT_DECL_EGREP])dnl m4_require([_LT_PATH_MANIFEST_TOOL])dnl -if test -n "$CXX" && ( test "X$CXX" != "Xno" && - ( (test "X$CXX" = "Xg++" && `g++ -v >/dev/null 2>&1` ) || - (test "X$CXX" != "Xg++"))) ; then +if test -n "$CXX" && ( test no != "$CXX" && + ( (test g++ = "$CXX" && `g++ -v >/dev/null 2>&1` ) || + (test g++ != "$CXX"))); then AC_PROG_CXXCPP else _lt_caught_CXX_error=yes fi AC_LANG_PUSH(C++) _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(allow_undefined_flag, $1)= _LT_TAGVAR(always_export_symbols, $1)=no _LT_TAGVAR(archive_expsym_cmds, $1)= _LT_TAGVAR(compiler_needs_object, $1)=no _LT_TAGVAR(export_dynamic_flag_spec, $1)= _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)= _LT_TAGVAR(hardcode_libdir_separator, $1)= _LT_TAGVAR(hardcode_minus_L, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=unsupported _LT_TAGVAR(hardcode_automatic, $1)=no _LT_TAGVAR(inherit_rpath, $1)=no _LT_TAGVAR(module_cmds, $1)= _LT_TAGVAR(module_expsym_cmds, $1)= _LT_TAGVAR(link_all_deplibs, $1)=unknown _LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds _LT_TAGVAR(reload_flag, $1)=$reload_flag _LT_TAGVAR(reload_cmds, $1)=$reload_cmds _LT_TAGVAR(no_undefined_flag, $1)= _LT_TAGVAR(whole_archive_flag_spec, $1)= _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no # Source file extension for C++ test sources. 
ac_ext=cpp # Object file extension for compiled C++ test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # No sense in running all these tests if we already determined that # the CXX compiler isn't working. Some variables (like enable_shared) # are currently assumed to apply to all compilers on this platform, # and will be corrupted by setting them based on a non-working compiler. -if test "$_lt_caught_CXX_error" != yes; then +if test yes != "$_lt_caught_CXX_error"; then # Code to be used in simple compile tests lt_simple_compile_test_code="int some_variable = 0;" # Code to be used in simple link tests lt_simple_link_test_code='int main(int, char *[[]]) { return(0); }' # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. lt_save_CC=$CC lt_save_CFLAGS=$CFLAGS lt_save_LD=$LD lt_save_GCC=$GCC GCC=$GXX lt_save_with_gnu_ld=$with_gnu_ld lt_save_path_LD=$lt_cv_path_LD if test -n "${lt_cv_prog_gnu_ldcxx+set}"; then lt_cv_prog_gnu_ld=$lt_cv_prog_gnu_ldcxx else $as_unset lt_cv_prog_gnu_ld fi if test -n "${lt_cv_path_LDCXX+set}"; then lt_cv_path_LD=$lt_cv_path_LDCXX else $as_unset lt_cv_path_LD fi test -z "${LDCXX+set}" || LD=$LDCXX CC=${CXX-"c++"} CFLAGS=$CXXFLAGS compiler=$CC _LT_TAGVAR(compiler, $1)=$CC _LT_CC_BASENAME([$compiler]) if test -n "$compiler"; then # We don't want -fno-exception when compiling C++ code, so set the # no_builtin_flag separately - if test "$GXX" = yes; then + if test yes = "$GXX"; then _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -fno-builtin' else _LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)= fi - if test "$GXX" = yes; then + if test yes = "$GXX"; then # Set up default GNU C++ configuration LT_PATH_LD # Check if GNU C++ uses GNU ld as the underlying linker, since the # archiving commands below assume that GNU ld is being used. 
- if test "$with_gnu_ld" = yes; then - _LT_TAGVAR(archive_cmds, $1)='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib' - _LT_TAGVAR(archive_expsym_cmds, $1)='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib' - - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' - _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic' + if test yes = "$with_gnu_ld"; then + _LT_TAGVAR(archive_cmds, $1)='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib' + _LT_TAGVAR(archive_expsym_cmds, $1)='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib' + + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir' + _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic' # If archive_cmds runs LD, not CC, wlarc should be empty # XXX I think wlarc can be eliminated in ltcf-cxx, but I need to # investigate it a little bit more. (MM) - wlarc='${wl}' + wlarc='$wl' # ancient GNU ld didn't support --whole-archive et. al. if eval "`$CC -print-prog-name=ld` --help 2>&1" | $GREP 'no-whole-archive' > /dev/null; then - _LT_TAGVAR(whole_archive_flag_spec, $1)="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive' + _LT_TAGVAR(whole_archive_flag_spec, $1)=$wlarc'--whole-archive$convenience '$wlarc'--no-whole-archive' else _LT_TAGVAR(whole_archive_flag_spec, $1)= fi else with_gnu_ld=no wlarc= # A generic and very simple default shared library creation # command for GNU C++ for the case where it uses the native # linker, instead of GNU ld. 
If possible, this setting should # overridden to take advantage of the native linker features on # the platform it is being used on. _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib' fi # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"' else GXX=no with_gnu_ld=no wlarc= fi # PORTME: fill in a description of your system's C++ link characteristics AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries]) _LT_TAGVAR(ld_shlibs, $1)=yes case $host_os in aix3*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; aix[[4-9]]*) - if test "$host_cpu" = ia64; then + if test ia64 = "$host_cpu"; then # On IA64, the linker does run time linking by default, so we don't # have to do anything special. aix_use_runtimelinking=no exp_sym_flag='-Bexport' - no_entry_flag="" + no_entry_flag= else aix_use_runtimelinking=no # Test if we are trying to use run time linking or normal # AIX style linking. If -brtl is somewhere in LDFLAGS, we - # need to do runtime linking. + # have runtime linking enabled, and use it for executables. 
+ # For shared libraries, we enable/disable runtime linking + # depending on the kind of the shared library created - + # when "with_aix_soname,aix_use_runtimelinking" is: + # "aix,no" lib.a(lib.so.V) shared, rtl:no, for executables + # "aix,yes" lib.so shared, rtl:yes, for executables + # lib.a static archive + # "both,no" lib.so.V(shr.o) shared, rtl:yes + # lib.a(lib.so.V) shared, rtl:no, for executables + # "both,yes" lib.so.V(shr.o) shared, rtl:yes, for executables + # lib.a(lib.so.V) shared, rtl:no + # "svr4,*" lib.so.V(shr.o) shared, rtl:yes, for executables + # lib.a static archive case $host_os in aix4.[[23]]|aix4.[[23]].*|aix[[5-9]]*) for ld_flag in $LDFLAGS; do case $ld_flag in *-brtl*) aix_use_runtimelinking=yes break ;; esac done + if test svr4,no = "$with_aix_soname,$aix_use_runtimelinking"; then + # With aix-soname=svr4, we create the lib.so.V shared archives only, + # so we don't have lib.a shared libs to link our executables. + # We have to force runtime linking in this case. + aix_use_runtimelinking=yes + LDFLAGS="$LDFLAGS -Wl,-brtl" + fi ;; esac exp_sym_flag='-bexport' no_entry_flag='-bnoentry' fi # When large executables or shared objects are built, AIX ld can # have problems creating the table of contents. If linking a library # or program results in "error TOC overflow" add -mminimal-toc to # CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not # enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS. _LT_TAGVAR(archive_cmds, $1)='' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_direct_absolute, $1)=yes _LT_TAGVAR(hardcode_libdir_separator, $1)=':' _LT_TAGVAR(link_all_deplibs, $1)=yes - _LT_TAGVAR(file_list_spec, $1)='${wl}-f,' - - if test "$GXX" = yes; then + _LT_TAGVAR(file_list_spec, $1)='$wl-f,' + case $with_aix_soname,$aix_use_runtimelinking in + aix,*) ;; # no import file + svr4,* | *,yes) # use import file + # The Import File defines what to hardcode. 
+          _LT_TAGVAR(hardcode_direct, $1)=no
+          _LT_TAGVAR(hardcode_direct_absolute, $1)=no
+          ;;
+        esac
+
+        if test yes = "$GXX"; then
          case $host_os in aix4.[[012]]|aix4.[[012]].*)
            # We only want to do this on AIX 4.2 and lower, the check
            # below for broken collect2 doesn't work under 4.3+
-            collect2name=`${CC} -print-prog-name=collect2`
+            collect2name=`$CC -print-prog-name=collect2`
            if test -f "$collect2name" &&
               strings "$collect2name" | $GREP resolve_lib_name >/dev/null
            then
              # We have reworked collect2
              :
            else
              # We have old collect2
              _LT_TAGVAR(hardcode_direct, $1)=unsupported
              # It fails to find uninstalled libraries when the uninstalled
              # path is not listed in the libpath. Setting hardcode_minus_L
              # to unsupported forces relinking
              _LT_TAGVAR(hardcode_minus_L, $1)=yes
              _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
              _LT_TAGVAR(hardcode_libdir_separator, $1)=
            fi
          esac
          shared_flag='-shared'
-          if test "$aix_use_runtimelinking" = yes; then
-            shared_flag="$shared_flag "'${wl}-G'
+          if test yes = "$aix_use_runtimelinking"; then
+            shared_flag=$shared_flag' $wl-G'
          fi
+          # Need to ensure runtime linking is disabled for the traditional
+          # shared library, or the linker may eventually find shared libraries
+          # /with/ Import File - we do not want to mix them.
+          shared_flag_aix='-shared'
+          shared_flag_svr4='-shared $wl-G'
        else
          # not using gcc
-          if test "$host_cpu" = ia64; then
+          if test ia64 = "$host_cpu"; then
            # VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release
            # chokes on -Wl,-G. The following line is correct:
            shared_flag='-G'
          else
-            if test "$aix_use_runtimelinking" = yes; then
-              shared_flag='${wl}-G'
+            if test yes = "$aix_use_runtimelinking"; then
+              shared_flag='$wl-G'
            else
-              shared_flag='${wl}-bM:SRE'
+              shared_flag='$wl-bM:SRE'
            fi
+            shared_flag_aix='$wl-bM:SRE'
+            shared_flag_svr4='$wl-G'
          fi
        fi

-        _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-bexpall'
+        _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-bexpall'
        # It seems that -bexpall does not export symbols beginning with
        # underscore (_), so it is better to generate a list of symbols to
        # export.
        _LT_TAGVAR(always_export_symbols, $1)=yes
-        if test "$aix_use_runtimelinking" = yes; then
+        if test aix,yes = "$with_aix_soname,$aix_use_runtimelinking"; then
          # Warning - without using the other runtime loading flags (-brtl),
          # -berok will link without error, but may produce a broken library.
-          _LT_TAGVAR(allow_undefined_flag, $1)='-berok'
+          # The "-G" linker flag allows undefined symbols.
+          _LT_TAGVAR(no_undefined_flag, $1)='-bernotok'
          # Determine the default libpath from the value encoded in an empty
          # executable.
          _LT_SYS_MODULE_PATH_AIX([$1])
-          _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
-
-          _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags `if test "x${allow_undefined_flag}" != "x"; then func_echo_all "${wl}${allow_undefined_flag}"; else :; fi` '"\${wl}$exp_sym_flag:\$export_symbols $shared_flag"
+          _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-blibpath:$libdir:'"$aix_libpath"
+
+          _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs $wl'$no_entry_flag' $compiler_flags `if test -n "$allow_undefined_flag"; then func_echo_all "$wl$allow_undefined_flag"; else :; fi` $wl'$exp_sym_flag:\$export_symbols' '$shared_flag
        else
-          if test "$host_cpu" = ia64; then
-            _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R $libdir:/usr/lib:/lib'
+          if test ia64 = "$host_cpu"; then
+            _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R $libdir:/usr/lib:/lib'
            _LT_TAGVAR(allow_undefined_flag, $1)="-z nodefs"
-            _LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\${wl}$no_entry_flag"' $compiler_flags ${wl}${allow_undefined_flag} '"\${wl}$exp_sym_flag:\$export_symbols"
+            _LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\$wl$no_entry_flag"' $compiler_flags $wl$allow_undefined_flag '"\$wl$exp_sym_flag:\$export_symbols"
          else
            # Determine the default libpath from the value encoded in an
            # empty executable.
            _LT_SYS_MODULE_PATH_AIX([$1])
-            _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-blibpath:$libdir:'"$aix_libpath"
+            _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-blibpath:$libdir:'"$aix_libpath"
            # Warning - without using the other run time loading flags,
            # -berok will link without error, but may produce a broken library.
-            _LT_TAGVAR(no_undefined_flag, $1)=' ${wl}-bernotok'
-            _LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-berok'
-            if test "$with_gnu_ld" = yes; then
+            _LT_TAGVAR(no_undefined_flag, $1)=' $wl-bernotok'
+            _LT_TAGVAR(allow_undefined_flag, $1)=' $wl-berok'
+            if test yes = "$with_gnu_ld"; then
              # We only use this code for GNU lds that support --whole-archive.
-              _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive$convenience ${wl}--no-whole-archive'
+              _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive$convenience $wl--no-whole-archive'
            else
              # Exported symbols can be pulled into shared objects from archives
              _LT_TAGVAR(whole_archive_flag_spec, $1)='$convenience'
            fi
            _LT_TAGVAR(archive_cmds_need_lc, $1)=yes
-            # This is similar to how AIX traditionally builds its shared
-            # libraries.
-            _LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs ${wl}-bnoentry $compiler_flags ${wl}-bE:$export_symbols${allow_undefined_flag}~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$soname'
+            _LT_TAGVAR(archive_expsym_cmds, $1)='$RM -r $output_objdir/$realname.d~$MKDIR $output_objdir/$realname.d'
+            # -brtl affects multiple linker settings, -berok does not and is overridden later
+            compiler_flags_filtered='`func_echo_all "$compiler_flags " | $SED -e "s%-brtl\\([[, ]]\\)%-berok\\1%g"`'
+            if test svr4 != "$with_aix_soname"; then
+              # This is similar to how AIX traditionally builds its shared
+              # libraries. Need -bnortl late, we may have -brtl in LDFLAGS.
+              _LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$CC '$shared_flag_aix' -o $output_objdir/$realname.d/$soname $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$realname.d/$soname'
+            fi
+            if test aix != "$with_aix_soname"; then
+              _LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$CC '$shared_flag_svr4' -o $output_objdir/$realname.d/$shared_archive_member_spec.o $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$STRIP -e $output_objdir/$realname.d/$shared_archive_member_spec.o~( func_echo_all "#! $soname($shared_archive_member_spec.o)"; if test shr_64 = "$shared_archive_member_spec"; then func_echo_all "# 64"; else func_echo_all "# 32"; fi; cat $export_symbols ) > $output_objdir/$realname.d/$shared_archive_member_spec.imp~$AR $AR_FLAGS $output_objdir/$soname $output_objdir/$realname.d/$shared_archive_member_spec.o $output_objdir/$realname.d/$shared_archive_member_spec.imp'
+            else
+              # used by -dlpreopen to get the symbols
+              _LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$MV $output_objdir/$realname.d/$soname $output_objdir'
+            fi
+            _LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$RM -r $output_objdir/$realname.d'
          fi
        fi
        ;;

      beos*)
        if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
          _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
          # Joseph Beckenbach says some releases of gcc
          # support --undefined. This deserves some investigation. FIXME
-          _LT_TAGVAR(archive_cmds, $1)='$CC -nostart $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
+          _LT_TAGVAR(archive_cmds, $1)='$CC -nostart $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
        else
          _LT_TAGVAR(ld_shlibs, $1)=no
        fi
        ;;

      chorus*)
        case $cc_basename in
          *)
          # FIXME: insert proper C++ library support
          _LT_TAGVAR(ld_shlibs, $1)=no
          ;;
        esac
        ;;

      cygwin* | mingw* | pw32* | cegcc*)
        case $GXX,$cc_basename in
        ,cl* | no,cl*)
          # Native MSVC
          # hardcode_libdir_flag_spec is actually meaningless, as there is
          # no search path for DLLs.
          _LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
          _LT_TAGVAR(allow_undefined_flag, $1)=unsupported
          _LT_TAGVAR(always_export_symbols, $1)=yes
          _LT_TAGVAR(file_list_spec, $1)='@'
          # Tell ltmain to make .lib files, not .a files.
          libext=lib
          # Tell ltmain to make .dll files, not .so files.
-          shrext_cmds=".dll"
+          shrext_cmds=.dll
          # FIXME: Setting linknames here is a bad hack.
-          _LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-dll~linknames='
-          _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then
-            $SED -n -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' -e '1\\\!p' < $export_symbols > $output_objdir/$soname.exp;
-          else
-            $SED -e 's/\\\\\\\(.*\\\\\\\)/-link\\\ -EXPORT:\\\\\\\1/' < $export_symbols > $output_objdir/$soname.exp;
-          fi~
-          $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
-          linknames='
+          _LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~linknames='
+          _LT_TAGVAR(archive_expsym_cmds, $1)='if _LT_DLL_DEF_P([$export_symbols]); then
+            cp "$export_symbols" "$output_objdir/$soname.def";
+            echo "$tool_output_objdir$soname.def" > "$output_objdir/$soname.exp";
+          else
+            $SED -e '\''s/^/-link -EXPORT:/'\'' < $export_symbols >
$output_objdir/$soname.exp; + fi~ + $CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~ + linknames=' # The linker will not automatically build a static lib if we build a DLL. # _LT_TAGVAR(old_archive_from_new_cmds, $1)='true' _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes # Don't use ranlib _LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib' _LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~ - lt_tool_outputfile="@TOOL_OUTPUT@"~ - case $lt_outputfile in - *.exe|*.EXE) ;; - *) - lt_outputfile="$lt_outputfile.exe" - lt_tool_outputfile="$lt_tool_outputfile.exe" - ;; - esac~ - func_to_tool_file "$lt_outputfile"~ - if test "$MANIFEST_TOOL" != ":" && test -f "$lt_outputfile.manifest"; then - $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1; - $RM "$lt_outputfile.manifest"; - fi' + lt_tool_outputfile="@TOOL_OUTPUT@"~ + case $lt_outputfile in + *.exe|*.EXE) ;; + *) + lt_outputfile=$lt_outputfile.exe + lt_tool_outputfile=$lt_tool_outputfile.exe + ;; + esac~ + func_to_tool_file "$lt_outputfile"~ + if test : != "$MANIFEST_TOOL" && test -f "$lt_outputfile.manifest"; then + $MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1; + $RM "$lt_outputfile.manifest"; + fi' ;; *) # g++ # _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless, # as there is no search path for DLLs. 
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' - _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-all-symbols' + _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-all-symbols' _LT_TAGVAR(allow_undefined_flag, $1)=unsupported _LT_TAGVAR(always_export_symbols, $1)=no _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then - _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' - # If the export-symbols file already is a .def file (1st line - # is EXPORTS), use it as is; otherwise, prepend... - _LT_TAGVAR(archive_expsym_cmds, $1)='if test "x`$SED 1q $export_symbols`" = xEXPORTS; then - cp $export_symbols $output_objdir/$soname.def; - else - echo EXPORTS > $output_objdir/$soname.def; - cat $export_symbols >> $output_objdir/$soname.def; - fi~ - $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname ${wl}--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' + # If the export-symbols file already is a .def file, use it as + # is; otherwise, prepend EXPORTS... 
+ _LT_TAGVAR(archive_expsym_cmds, $1)='if _LT_DLL_DEF_P([$export_symbols]); then + cp $export_symbols $output_objdir/$soname.def; + else + echo EXPORTS > $output_objdir/$soname.def; + cat $export_symbols >> $output_objdir/$soname.def; + fi~ + $CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib' else _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac ;; darwin* | rhapsody*) _LT_DARWIN_LINKER_FEATURES($1) ;; + os2*) + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir' + _LT_TAGVAR(hardcode_minus_L, $1)=yes + _LT_TAGVAR(allow_undefined_flag, $1)=unsupported + shrext_cmds=.dll + _LT_TAGVAR(archive_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ + $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ + $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ + $ECHO EXPORTS >> $output_objdir/$libname.def~ + emxexp $libobjs | $SED /"_DLL_InitTerm"/d >> $output_objdir/$libname.def~ + $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ + emximp -o $lib $output_objdir/$libname.def' + _LT_TAGVAR(archive_expsym_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~ + $ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~ + $ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~ + $ECHO EXPORTS >> $output_objdir/$libname.def~ + prefix_cmds="$SED"~ + if test EXPORTS = "`$SED 1q $export_symbols`"; then + prefix_cmds="$prefix_cmds -e 1d"; + fi~ + prefix_cmds="$prefix_cmds -e \"s/^\(.*\)$/_\1/g\""~ + cat $export_symbols | $prefix_cmds >> $output_objdir/$libname.def~ + $CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~ + emximp -o $lib $output_objdir/$libname.def' + 
_LT_TAGVAR(old_archive_From_new_cmds, $1)='emximp -o $output_objdir/${libname}_dll.a $output_objdir/$libname.def' + _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes + ;; + dgux*) case $cc_basename in ec++*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; ghcx*) # Green Hills C++ Compiler # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; *) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; esac ;; freebsd2.*) # C++ shared libraries reported to be fairly broken before # switch to ELF _LT_TAGVAR(ld_shlibs, $1)=no ;; freebsd-elf*) _LT_TAGVAR(archive_cmds_need_lc, $1)=no ;; freebsd* | dragonfly*) # FreeBSD 3 and later use GNU C++ and GNU ld with standard ELF # conventions _LT_TAGVAR(ld_shlibs, $1)=yes ;; - gnu*) - ;; - haiku*) - _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib' _LT_TAGVAR(link_all_deplibs, $1)=yes ;; hpux9*) - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir' + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: - _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' + _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E' _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_minus_L, $1)=yes # Not in the search PATH, # but as the default # location of the library. 
case $cc_basename in CC*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; aCC*) - _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -b ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' + _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -b $wl+b $wl$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. - output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $EGREP "\-L"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' + output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $EGREP "\-L"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) - if test "$GXX" = yes; then - _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag ${wl}+b ${wl}$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test $output_objdir/$soname = $lib || mv $output_objdir/$soname $lib' + if test yes = "$GXX"; then + _LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag $wl+b $wl$install_libdir -o $output_objdir/$soname $predep_objects $libobjs 
$deplibs $postdep_objects $compiler_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib' else # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac ;; hpux10*|hpux11*) - if test $with_gnu_ld = no; then - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}+b ${wl}$libdir' + if test no = "$with_gnu_ld"; then + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: case $host_cpu in hppa*64*|ia64*) ;; *) - _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' + _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E' ;; esac fi case $host_cpu in hppa*64*|ia64*) _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no ;; *) _LT_TAGVAR(hardcode_direct, $1)=yes _LT_TAGVAR(hardcode_direct_absolute, $1)=yes _LT_TAGVAR(hardcode_minus_L, $1)=yes # Not in the search PATH, # but as the default # location of the library. ;; esac case $cc_basename in CC*) # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no ;; aCC*) case $host_cpu in hppa*64*) - _LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + _LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; ia64*) - _LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + _LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; *) - _LT_TAGVAR(archive_cmds, $1)='$CC -b ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + _LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; esac 
# Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. # # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. - output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $GREP "\-L"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' + output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $GREP "\-L"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' ;; *) - if test "$GXX" = yes; then - if test $with_gnu_ld = no; then + if test yes = "$GXX"; then + if test no = "$with_gnu_ld"; then case $host_cpu in hppa*64*) - _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC ${wl}+h ${wl}$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC $wl+h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; ia64*) - _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag $wl+h $wl$soname $wl+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; *) - _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag ${wl}+h ${wl}$soname ${wl}+b ${wl}$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag $wl+h $wl$soname $wl+b $wl$install_libdir -o 
$lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags' ;; esac fi else # FIXME: insert proper C++ library support _LT_TAGVAR(ld_shlibs, $1)=no fi ;; esac ;; interix[[3-9]]*) _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_shlibpath_var, $1)=no - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' - _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E' + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir' + _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E' # Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc. # Instead, shared libraries are loaded at an image base (0x10000000 by # default) and relocated if they conflict, which is a slow very memory # consuming and fragmenting process. To avoid this, we pick a random, # 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link # time. Moving up from 0x10000000 also allows more sbrk(2) space. - _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' - _LT_TAGVAR(archive_expsym_cmds, $1)='sed "s,^,_," $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags ${wl}-h,$soname ${wl}--retain-symbols-file,$output_objdir/$soname.expsym ${wl}--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' + _LT_TAGVAR(archive_expsym_cmds, $1)='sed "s|^|_|" $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--retain-symbols-file,$output_objdir/$soname.expsym $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib' ;; irix5* | irix6*) case $cc_basename in CC*) # SGI C++ - _LT_TAGVAR(archive_cmds, $1)='$CC 
-shared -all -multigot $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared -all -multigot $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib' # Archives containing C++ object files must be created using # "CC -ar", where "CC" is the IRIX C++ compiler. This is # necessary to make sure instantiated templates are included # in the archive. _LT_TAGVAR(old_archive_cmds, $1)='$CC -ar -WR,-u -o $oldlib $oldobjs' ;; *) - if test "$GXX" = yes; then - if test "$with_gnu_ld" = no; then - _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib' + if test yes = "$GXX"; then + if test no = "$with_gnu_ld"; then + _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib' else - _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` -o $lib' + _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` -o $lib' fi fi _LT_TAGVAR(link_all_deplibs, 
$1)=yes ;; esac - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir' + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir' _LT_TAGVAR(hardcode_libdir_separator, $1)=: _LT_TAGVAR(inherit_rpath, $1)=yes ;; - linux* | k*bsd*-gnu | kopensolaris*-gnu) + linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*) case $cc_basename in KCC*) # Kuck and Associates, Inc. (KAI) C++ Compiler # KCC will only create a shared library if the output file # ends with ".so" (or ".sl" for HP-UX), so rename the library # to its proper name (with version) after linking. - _LT_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' - _LT_TAGVAR(archive_expsym_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib ${wl}-retain-symbols-file,$export_symbols; mv \$templib $lib' + _LT_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\$tempext\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib' + _LT_TAGVAR(archive_expsym_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\$tempext\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib $wl-retain-symbols-file,$export_symbols; mv \$templib $lib' # Commands to make compiler produce verbose output that lists # what "hidden" libraries, object files and flags are used when # linking a shared library. 
# # There doesn't appear to be a way to prevent this compiler from # explicitly linking system object files so we need to strip them # from the output so that they don't get included in the library # dependencies. - output_verbose_link_cmd='templist=`$CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 | $GREP "ld"`; rm -f libconftest$shared_ext; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' - - _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir' - _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic' + output_verbose_link_cmd='templist=`$CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 | $GREP "ld"`; rm -f libconftest$shared_ext; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"' + + _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir' + _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic' # Archives containing C++ object files must be created using # "CC -Bstatic", where "CC" is the KAI C++ compiler. _LT_TAGVAR(old_archive_cmds, $1)='$CC -Bstatic -o $oldlib $oldobjs' ;; icpc* | ecpc* ) # Intel C++ with_gnu_ld=yes # version 8.0 and above of icpc choke on multiply defined symbols # if we add $predep_objects and $postdep_objects, however 7.1 and # earlier do not add the objects themselves. 
	case `$CC -V 2>&1` in
	  *"Version 7."*)
-	    _LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib'
-	    _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+	    _LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib'
+	    _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
	    ;;
	  *)  # Version 8.0 or newer
	    tmp_idyn=
	    case $host_cpu in
	      ia64*) tmp_idyn=' -i_dynamic';;
	    esac
-	    _LT_TAGVAR(archive_cmds, $1)='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-	    _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-retain-symbols-file $wl$export_symbols -o $lib'
+	    _LT_TAGVAR(archive_cmds, $1)='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
+	    _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
	    ;;
	esac
	_LT_TAGVAR(archive_cmds_need_lc, $1)=no
-	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir'
-	_LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic'
-	_LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive$convenience ${wl}--no-whole-archive'
+	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
+	_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic'
+	_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive$convenience $wl--no-whole-archive'
	;;
      pgCC* | pgcpp*)
	# Portland Group C++ compiler
	case `$CC -V` in
	*pgCC\ [[1-5]].* | *pgcpp\ [[1-5]].*)
	  _LT_TAGVAR(prelink_cmds, $1)='tpldir=Template.dir~
-		rm -rf $tpldir~
-		$CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~
-		compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"'
+		rm -rf $tpldir~
+		$CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~
+		compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"'
	  _LT_TAGVAR(old_archive_cmds, $1)='tpldir=Template.dir~
-		rm -rf $tpldir~
-		$CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~
-		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~
-		$RANLIB $oldlib'
+		rm -rf $tpldir~
+		$CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~
+		$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~
+		$RANLIB $oldlib'
	  _LT_TAGVAR(archive_cmds, $1)='tpldir=Template.dir~
-		rm -rf $tpldir~
-		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
-		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
+		rm -rf $tpldir~
+		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
+		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib'
	  _LT_TAGVAR(archive_expsym_cmds, $1)='tpldir=Template.dir~
-		rm -rf $tpldir~
-		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
-		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
+		rm -rf $tpldir~
+		$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
+		$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
	  ;;
	*) # Version 6 and above use weak symbols
-	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname -o $lib'
-	  _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname ${wl}-retain-symbols-file ${wl}$export_symbols -o $lib'
+	  _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib'
+	  _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
	  ;;
	esac

-	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}--rpath ${wl}$libdir'
-	_LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic'
-	_LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive'
+	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl--rpath $wl$libdir'
+	_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic'
+	_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
	;;
      cxx*)
	# Compaq C++
-	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib'
-	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $wl$soname -o $lib ${wl}-retain-symbols-file $wl$export_symbols'
+	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib'
+	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib $wl-retain-symbols-file $wl$export_symbols'

	runpath_var=LD_RUN_PATH
	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir'
	_LT_TAGVAR(hardcode_libdir_separator, $1)=:

	# Commands to make compiler produce verbose output that lists
	# what "hidden" libraries, object files and flags are used when
	# linking a shared library.
	#
	# There doesn't appear to be a way to prevent this compiler from
	# explicitly linking system object files so we need to strip them
	# from the output so that they don't get included in the library
	# dependencies.
-	output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld .*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "X$list" | $Xsed'
+	output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld .*$\)/\1/"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "X$list" | $Xsed'
	;;
      xl* | mpixl* | bgxl*)
	# IBM XL 8.0 on PPC, with GNU ld
-	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
-	_LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}--export-dynamic'
-	_LT_TAGVAR(archive_cmds, $1)='$CC -qmkshrobj $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname -o $lib'
-	if test "x$supports_anon_versioning" = xyes; then
+	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
+	_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic'
+	_LT_TAGVAR(archive_cmds, $1)='$CC -qmkshrobj $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
+	if test yes = "$supports_anon_versioning"; then
	  _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~
-	    cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
-	    echo "local: *; };" >> $output_objdir/$libname.ver~
-	    $CC -qmkshrobj $libobjs $deplibs $compiler_flags ${wl}-soname $wl$soname ${wl}-version-script ${wl}$output_objdir/$libname.ver -o $lib'
+	    cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
+	    echo "local: *; };" >> $output_objdir/$libname.ver~
+	    $CC -qmkshrobj $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-version-script $wl$output_objdir/$libname.ver -o $lib'
	fi
	;;
      *)
	case `$CC -V 2>&1 | sed 5q` in
	*Sun\ C*)
	  # Sun C++ 5.9
	  _LT_TAGVAR(no_undefined_flag, $1)=' -zdefs'
-	  _LT_TAGVAR(archive_cmds, $1)='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
-	  _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-retain-symbols-file ${wl}$export_symbols'
+	  _LT_TAGVAR(archive_cmds, $1)='$CC -G$allow_undefined_flag -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+	  _LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G$allow_undefined_flag -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-retain-symbols-file $wl$export_symbols'
	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
-	  _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` ${wl}--no-whole-archive'
+	  _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
	  _LT_TAGVAR(compiler_needs_object, $1)=yes

	  # Not sure whether something based on
	  # $CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1
	  # would be better.
	  output_verbose_link_cmd='func_echo_all'

	  # Archives containing C++ object files must be created using
	  # "CC -xar", where "CC" is the Sun C++ compiler.  This is
	  # necessary to make sure instantiated templates are included
	  # in the archive.
	  _LT_TAGVAR(old_archive_cmds, $1)='$CC -xar -o $oldlib $oldobjs'
	  ;;
	esac
	;;
    esac
    ;;

  lynxos*)
    # FIXME: insert proper C++ library support
    _LT_TAGVAR(ld_shlibs, $1)=no
    ;;

  m88k*)
    # FIXME: insert proper C++ library support
    _LT_TAGVAR(ld_shlibs, $1)=no
    ;;

  mvs*)
    case $cc_basename in
      cxx*)
	# FIXME: insert proper C++ library support
	_LT_TAGVAR(ld_shlibs, $1)=no
	;;
      *)
	# FIXME: insert proper C++ library support
	_LT_TAGVAR(ld_shlibs, $1)=no
	;;
    esac
    ;;

  netbsd*)
    if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then
      _LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $predep_objects $libobjs $deplibs $postdep_objects $linker_flags'
      wlarc=
      _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
      _LT_TAGVAR(hardcode_direct, $1)=yes
      _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
    fi
    # Workaround some broken pre-1.5 toolchains
    output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP conftest.$objext | $SED -e "s:-lgcc -lc -lgcc::"'
    ;;

  *nto* | *qnx*)
    _LT_TAGVAR(ld_shlibs, $1)=yes
    ;;

-  openbsd2*)
-    # C++ shared libraries are fairly broken
-    _LT_TAGVAR(ld_shlibs, $1)=no
-    ;;
-
-  openbsd*)
+  openbsd* | bitrig*)
    if test -f /usr/libexec/ld.so; then
      _LT_TAGVAR(hardcode_direct, $1)=yes
      _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
      _LT_TAGVAR(hardcode_direct_absolute, $1)=yes
      _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib'
-     _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir'
-     if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`" || test "$host_os-$host_cpu" = "openbsd2.8-powerpc"; then
-	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-retain-symbols-file,$export_symbols -o $lib'
-	_LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-E'
-	_LT_TAGVAR(whole_archive_flag_spec, $1)="$wlarc"'--whole-archive$convenience '"$wlarc"'--no-whole-archive'
+     _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
+     if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`"; then
+	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-retain-symbols-file,$export_symbols -o $lib'
+	_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
+	_LT_TAGVAR(whole_archive_flag_spec, $1)=$wlarc'--whole-archive$convenience '$wlarc'--no-whole-archive'
      fi
      output_verbose_link_cmd=func_echo_all
    else
      _LT_TAGVAR(ld_shlibs, $1)=no
    fi
    ;;

  osf3* | osf4* | osf5*)
    case $cc_basename in
      KCC*)
	# Kuck and Associates, Inc. (KAI) C++ Compiler

	# KCC will only create a shared library if the output file
	# ends with ".so" (or ".sl" for HP-UX), so rename the library
	# to its proper name (with version) after linking.
-	_LT_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo "$lib" | $SED -e "s/\${tempext}\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib'
-
-	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath,$libdir'
+	_LT_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo "$lib" | $SED -e "s/\$tempext\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib'
+
+	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
	_LT_TAGVAR(hardcode_libdir_separator, $1)=:

	# Archives containing C++ object files must be created using
	# the KAI C++ compiler.
	case $host in
	  osf3*) _LT_TAGVAR(old_archive_cmds, $1)='$CC -Bstatic -o $oldlib $oldobjs' ;;
	  *) _LT_TAGVAR(old_archive_cmds, $1)='$CC -o $oldlib $oldobjs' ;;
	esac
	;;
      RCC*)
	# Rational C++ 2.4.1
	# FIXME: insert proper C++ library support
	_LT_TAGVAR(ld_shlibs, $1)=no
	;;
      cxx*)
	case $host in
	  osf3*)
-	    _LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*'
-	    _LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname $soname `test -n "$verstring" && func_echo_all "${wl}-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
-	    _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
+	    _LT_TAGVAR(allow_undefined_flag, $1)=' $wl-expect_unresolved $wl\*'
+	    _LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $soname `test -n "$verstring" && func_echo_all "$wl-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib'
+	    _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
	    ;;
	  *)
	    _LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*'
-	    _LT_TAGVAR(archive_cmds, $1)='$CC -shared${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib'
+	    _LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib'
	    _LT_TAGVAR(archive_expsym_cmds, $1)='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done~
-	      echo "-hidden">> $lib.exp~
-	      $CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname ${wl}-input ${wl}$lib.exp `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry ${output_objdir}/so_locations -o $lib~
-	      $RM $lib.exp'
+	      echo "-hidden">> $lib.exp~
+	      $CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname $wl-input $wl$lib.exp `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib~
+	      $RM $lib.exp'
	    _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir'
	    ;;
	esac

	_LT_TAGVAR(hardcode_libdir_separator, $1)=:

	# Commands to make compiler produce verbose output that lists
	# what "hidden" libraries, object files and flags are used when
	# linking a shared library.
	#
	# There doesn't appear to be a way to prevent this compiler from
	# explicitly linking system object files so we need to strip them
	# from the output so that they don't get included in the library
	# dependencies.
-	output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld" | $GREP -v "ld:"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld.*$\)/\1/"`; list=""; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"'
+	output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld" | $GREP -v "ld:"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld.*$\)/\1/"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"'
	;;
      *)
-	if test "$GXX" = yes && test "$with_gnu_ld" = no; then
-	  _LT_TAGVAR(allow_undefined_flag, $1)=' ${wl}-expect_unresolved ${wl}\*'
+	if test yes,no = "$GXX,$with_gnu_ld"; then
+	  _LT_TAGVAR(allow_undefined_flag, $1)=' $wl-expect_unresolved $wl\*'
	  case $host in
	    osf3*)
-	      _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+	      _LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib'
	      ;;
	    *)
-	      _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib ${allow_undefined_flag} $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-msym ${wl}-soname ${wl}$soname `test -n "$verstring" && func_echo_all "${wl}-set_version ${wl}$verstring"` ${wl}-update_registry ${wl}${output_objdir}/so_locations -o $lib'
+	      _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-msym $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib'
	      ;;
	  esac

-	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-rpath ${wl}$libdir'
+	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
	  _LT_TAGVAR(hardcode_libdir_separator, $1)=:

	  # Commands to make compiler produce verbose output that lists
	  # what "hidden" libraries, object files and flags are used when
	  # linking a shared library.
	  output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"'

	else
	  # FIXME: insert proper C++ library support
	  _LT_TAGVAR(ld_shlibs, $1)=no
	fi
	;;
    esac
    ;;

  psos*)
    # FIXME: insert proper C++ library support
    _LT_TAGVAR(ld_shlibs, $1)=no
    ;;

  sunos4*)
    case $cc_basename in
      CC*)
	# Sun C++ 4.x
	# FIXME: insert proper C++ library support
	_LT_TAGVAR(ld_shlibs, $1)=no
	;;
      lcc*)
	# Lucid
	# FIXME: insert proper C++ library support
	_LT_TAGVAR(ld_shlibs, $1)=no
	;;
      *)
	# FIXME: insert proper C++ library support
	_LT_TAGVAR(ld_shlibs, $1)=no
	;;
    esac
    ;;

  solaris*)
    case $cc_basename in
      CC* | sunCC*)
	# Sun C++ 4.2, 5.x and Centerline C++
	_LT_TAGVAR(archive_cmds_need_lc,$1)=yes
	_LT_TAGVAR(no_undefined_flag, $1)=' -zdefs'
-	_LT_TAGVAR(archive_cmds, $1)='$CC -G${allow_undefined_flag} -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
+	_LT_TAGVAR(archive_cmds, $1)='$CC -G$allow_undefined_flag -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
	_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
-	  $CC -G${allow_undefined_flag} ${wl}-M ${wl}$lib.exp -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
+	  $CC -G$allow_undefined_flag $wl-M $wl$lib.exp -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'

	_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
	_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
	case $host_os in
	  solaris2.[[0-5]] | solaris2.[[0-5]].*) ;;
	  *)
	    # The compiler driver will combine and reorder linker options,
-	    # but understands `-z linker_flag'.
+	    # but understands '-z linker_flag'.
	    # Supported since Solaris 2.6 (maybe 2.5.1?)
	    _LT_TAGVAR(whole_archive_flag_spec, $1)='-z allextract$convenience -z defaultextract'
	    ;;
	esac
	_LT_TAGVAR(link_all_deplibs, $1)=yes

	output_verbose_link_cmd='func_echo_all'

	# Archives containing C++ object files must be created using
	# "CC -xar", where "CC" is the Sun C++ compiler.  This is
	# necessary to make sure instantiated templates are included
	# in the archive.
	_LT_TAGVAR(old_archive_cmds, $1)='$CC -xar -o $oldlib $oldobjs'
	;;
      gcx*)
	# Green Hills C++ Compiler
-	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
+	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-h $wl$soname -o $lib'

	# The C++ compiler must be used to create the archive.
	_LT_TAGVAR(old_archive_cmds, $1)='$CC $LDFLAGS -archive -o $oldlib $oldobjs'
	;;
      *)
	# GNU C++ compiler with Solaris linker
-	if test "$GXX" = yes && test "$with_gnu_ld" = no; then
-	  _LT_TAGVAR(no_undefined_flag, $1)=' ${wl}-z ${wl}defs'
+	if test yes,no = "$GXX,$with_gnu_ld"; then
+	  _LT_TAGVAR(no_undefined_flag, $1)=' $wl-z ${wl}defs'
	  if $CC --version | $GREP -v '^2\.7' > /dev/null; then
-	    _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
+	    _LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-h $wl$soname -o $lib'
	    _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
-	      $CC -shared $pic_flag -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
+	      $CC -shared $pic_flag -nostdlib $wl-M $wl$lib.exp $wl-h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'

	    # Commands to make compiler produce verbose output that lists
	    # what "hidden" libraries, object files and flags are used when
	    # linking a shared library.
	    output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"'
	  else
-	    # g++ 2.7 appears to require `-G' NOT `-shared' on this
+	    # g++ 2.7 appears to require '-G' NOT '-shared' on this
	    # platform.
-	    _LT_TAGVAR(archive_cmds, $1)='$CC -G -nostdlib $LDFLAGS $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags ${wl}-h $wl$soname -o $lib'
+	    _LT_TAGVAR(archive_cmds, $1)='$CC -G -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-h $wl$soname -o $lib'
	    _LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
-	      $CC -G -nostdlib ${wl}-M $wl$lib.exp -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
+	      $CC -G -nostdlib $wl-M $wl$lib.exp $wl-h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'

	    # Commands to make compiler produce verbose output that lists
	    # what "hidden" libraries, object files and flags are used when
	    # linking a shared library.
	    output_verbose_link_cmd='$CC -G $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"'
	  fi

-	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R $wl$libdir'
+	  _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R $wl$libdir'
	  case $host_os in
	    solaris2.[[0-5]] | solaris2.[[0-5]].*) ;;
	    *)
-	      _LT_TAGVAR(whole_archive_flag_spec, $1)='${wl}-z ${wl}allextract$convenience ${wl}-z ${wl}defaultextract'
+	      _LT_TAGVAR(whole_archive_flag_spec, $1)='$wl-z ${wl}allextract$convenience $wl-z ${wl}defaultextract'
	      ;;
	  esac
	fi
	;;
    esac
    ;;

  sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[[01]].[[10]]* | unixware7* | sco3.2v5.0.[[024]]*)
-   _LT_TAGVAR(no_undefined_flag, $1)='${wl}-z,text'
+   _LT_TAGVAR(no_undefined_flag, $1)='$wl-z,text'
    _LT_TAGVAR(archive_cmds_need_lc, $1)=no
    _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
    runpath_var='LD_RUN_PATH'

    case $cc_basename in
      CC*)
-	_LT_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
-	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
+	_LT_TAGVAR(archive_cmds, $1)='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
+	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
	;;
      *)
-	_LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
-	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
+	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
+	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
	;;
    esac
    ;;

  sysv5* | sco3.2v5* | sco5v6*)
-   # Note: We can NOT use -z defs as we might desire, because we do not
+   # Note: We CANNOT use -z defs as we might desire, because we do not
    # link with -lc, and that would cause any symbols used from libc to
    # always be unresolved, which means just about no library would
    # ever link correctly.  If we're not using GNU ld we use -z text
    # though, which does catch some bad symbols but isn't as heavy-handed
    # as -z defs.
-   _LT_TAGVAR(no_undefined_flag, $1)='${wl}-z,text'
-   _LT_TAGVAR(allow_undefined_flag, $1)='${wl}-z,nodefs'
+   _LT_TAGVAR(no_undefined_flag, $1)='$wl-z,text'
+   _LT_TAGVAR(allow_undefined_flag, $1)='$wl-z,nodefs'
    _LT_TAGVAR(archive_cmds_need_lc, $1)=no
    _LT_TAGVAR(hardcode_shlibpath_var, $1)=no
-   _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='${wl}-R,$libdir'
+   _LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R,$libdir'
    _LT_TAGVAR(hardcode_libdir_separator, $1)=':'
    _LT_TAGVAR(link_all_deplibs, $1)=yes
-   _LT_TAGVAR(export_dynamic_flag_spec, $1)='${wl}-Bexport'
+   _LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-Bexport'
    runpath_var='LD_RUN_PATH'

    case $cc_basename in
      CC*)
-	_LT_TAGVAR(archive_cmds, $1)='$CC -G ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
-	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
+	_LT_TAGVAR(archive_cmds, $1)='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
+	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
	_LT_TAGVAR(old_archive_cmds, $1)='$CC -Tprelink_objects $oldobjs~
-	  '"$_LT_TAGVAR(old_archive_cmds, $1)"
+	  '"$_LT_TAGVAR(old_archive_cmds, $1)"
	_LT_TAGVAR(reload_cmds, $1)='$CC -Tprelink_objects $reload_objs~
-	  '"$_LT_TAGVAR(reload_cmds, $1)"
+	  '"$_LT_TAGVAR(reload_cmds, $1)"
	;;
      *)
-	_LT_TAGVAR(archive_cmds, $1)='$CC -shared ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
-	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared ${wl}-Bexport:$export_symbols ${wl}-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
+	_LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
+	_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
	;;
    esac
    ;;

  tandem*)
    case $cc_basename in
      NCC*)
	# NonStop-UX NCC 3.20
	# FIXME: insert proper C++ library support
	_LT_TAGVAR(ld_shlibs, $1)=no
	;;
      *)
	# FIXME: insert proper C++ library support
	_LT_TAGVAR(ld_shlibs, $1)=no
	;;
    esac
    ;;

  vxworks*)
    # FIXME: insert proper C++ library support
    _LT_TAGVAR(ld_shlibs, $1)=no
    ;;

  *)
    # FIXME: insert proper C++ library support
    _LT_TAGVAR(ld_shlibs, $1)=no
    ;;
esac

AC_MSG_RESULT([$_LT_TAGVAR(ld_shlibs, $1)])
-test "$_LT_TAGVAR(ld_shlibs, $1)" = no && can_build_shared=no
-
-_LT_TAGVAR(GCC, $1)="$GXX"
-_LT_TAGVAR(LD, $1)="$LD"
+test no = "$_LT_TAGVAR(ld_shlibs, $1)" && can_build_shared=no
+
+_LT_TAGVAR(GCC, $1)=$GXX
+_LT_TAGVAR(LD, $1)=$LD

## CAVEAT EMPTOR:
## There is no encapsulation within the following macros, do not change
## the running order or otherwise move them around unless you know exactly
## what you are doing...
_LT_SYS_HIDDEN_LIBDEPS($1)
_LT_COMPILER_PIC($1)
_LT_COMPILER_C_O($1)
_LT_COMPILER_FILE_LOCKS($1)
_LT_LINKER_SHLIBS($1)
_LT_SYS_DYNAMIC_LINKER($1)
_LT_LINKER_HARDCODE_LIBPATH($1)

_LT_CONFIG($1)
fi # test -n "$compiler"

CC=$lt_save_CC
CFLAGS=$lt_save_CFLAGS
LDCXX=$LD
LD=$lt_save_LD
GCC=$lt_save_GCC
with_gnu_ld=$lt_save_with_gnu_ld
lt_cv_path_LDCXX=$lt_cv_path_LD
lt_cv_path_LD=$lt_save_path_LD
lt_cv_prog_gnu_ldcxx=$lt_cv_prog_gnu_ld
lt_cv_prog_gnu_ld=$lt_save_with_gnu_ld
-fi # test "$_lt_caught_CXX_error" != yes
+fi # test yes != "$_lt_caught_CXX_error"

AC_LANG_POP
])# _LT_LANG_CXX_CONFIG


# _LT_FUNC_STRIPNAME_CNF
# ----------------------
# func_stripname_cnf prefix suffix name
# strip PREFIX and SUFFIX off of NAME.
# PREFIX and SUFFIX must not contain globbing or regex special
# characters, hashes, percent signs, but SUFFIX may contain a leading
# dot (in which case that matches only a dot).
#
# This function is identical to the (non-XSI) version of func_stripname,
# except this one can be used by m4 code that may be executed by configure,
# rather than the libtool script.
m4_defun([_LT_FUNC_STRIPNAME_CNF],[dnl
AC_REQUIRE([_LT_DECL_SED])
AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH])
func_stripname_cnf ()
{
- case ${2} in
- .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;;
- *)  func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;;
+ case @S|@2 in
+ .*) func_stripname_result=`$ECHO "@S|@3" | $SED "s%^@S|@1%%; s%\\\\@S|@2\$%%"`;;
+ *)  func_stripname_result=`$ECHO "@S|@3" | $SED "s%^@S|@1%%; s%@S|@2\$%%"`;;
  esac
} # func_stripname_cnf
])# _LT_FUNC_STRIPNAME_CNF

+
# _LT_SYS_HIDDEN_LIBDEPS([TAGNAME])
# ---------------------------------
# Figure out "hidden" library dependencies from verbose
# compiler output when linking a shared library.
# Parse the compiler output and extract the necessary
# objects, libraries and library flags.
m4_defun([_LT_SYS_HIDDEN_LIBDEPS],
[m4_require([_LT_FILEUTILS_DEFAULTS])dnl
AC_REQUIRE([_LT_FUNC_STRIPNAME_CNF])dnl
# Dependencies to place before and after the object being linked:
_LT_TAGVAR(predep_objects, $1)=
_LT_TAGVAR(postdep_objects, $1)=
_LT_TAGVAR(predeps, $1)=
_LT_TAGVAR(postdeps, $1)=
_LT_TAGVAR(compiler_lib_search_path, $1)=

dnl we can't use the lt_simple_compile_test_code here,
dnl because it contains code intended for an executable,
dnl not a library.  It's possible we should let each
dnl tag define a new lt_????_link_test_code variable,
dnl but it's only used here...
m4_if([$1], [], [cat > conftest.$ac_ext <<_LT_EOF
int a;
void foo (void) { a = 0; }
_LT_EOF
], [$1], [CXX], [cat > conftest.$ac_ext <<_LT_EOF
class Foo
{
public:
  Foo (void) { a = 0; }
private:
  int a;
};
_LT_EOF
], [$1], [F77], [cat > conftest.$ac_ext <<_LT_EOF
      subroutine foo
      implicit none
      integer*4 a
      a=0
      return
      end
_LT_EOF
], [$1], [FC], [cat > conftest.$ac_ext <<_LT_EOF
      subroutine foo
      implicit none
      integer a
      a=0
      return
      end
_LT_EOF
], [$1], [GCJ], [cat > conftest.$ac_ext <<_LT_EOF
public class foo {
  private int a;
  public void bar (void) {
    a = 0;
  }
};
_LT_EOF
], [$1], [GO], [cat > conftest.$ac_ext <<_LT_EOF
package foo
func foo() {
}
_LT_EOF
])

_lt_libdeps_save_CFLAGS=$CFLAGS
case "$CC $CFLAGS " in #(
*\ -flto*\ *) CFLAGS="$CFLAGS -fno-lto" ;;
*\ -fwhopr*\ *) CFLAGS="$CFLAGS -fno-whopr" ;;
*\ -fuse-linker-plugin*\ *) CFLAGS="$CFLAGS -fno-use-linker-plugin" ;;
esac

dnl Parse the compiler output and extract the necessary
dnl objects, libraries and library flags.
if AC_TRY_EVAL(ac_compile); then
  # Parse the compiler output and extract the necessary
  # objects, libraries and library flags.

  # Sentinel used to keep track of whether or not we are before
  # the conftest object file.
  pre_test_object_deps_done=no

  for p in `eval "$output_verbose_link_cmd"`; do
-   case ${prev}${p} in
+   case $prev$p in
    -L* | -R* | -l*)
       # Some compilers place space between "-{L,R}" and the path.
       # Remove the space.
-      if test $p = "-L" ||
-         test $p = "-R"; then
+      if test x-L = "$p" ||
+         test x-R = "$p"; then
	 prev=$p
	 continue
       fi

       # Expand the sysroot to ease extracting the directories later.
       if test -z "$prev"; then
	 case $p in
	 -L*) func_stripname_cnf '-L' '' "$p"; prev=-L; p=$func_stripname_result ;;
	 -R*) func_stripname_cnf '-R' '' "$p"; prev=-R; p=$func_stripname_result ;;
	 -l*) func_stripname_cnf '-l' '' "$p"; prev=-l; p=$func_stripname_result ;;
	 esac
       fi
       case $p in
       =*) func_stripname_cnf '=' '' "$p"; p=$lt_sysroot$func_stripname_result ;;
       esac
-      if test "$pre_test_object_deps_done" = no; then
-	 case ${prev} in
+      if test no = "$pre_test_object_deps_done"; then
+	 case $prev in
	 -L | -R)
	   # Internal compiler library paths should come after those
	   # provided the user.  The postdeps already come after the
	   # user supplied libs so there is no need to process them.
	   if test -z "$_LT_TAGVAR(compiler_lib_search_path, $1)"; then
-	     _LT_TAGVAR(compiler_lib_search_path, $1)="${prev}${p}"
+	     _LT_TAGVAR(compiler_lib_search_path, $1)=$prev$p
	   else
-	     _LT_TAGVAR(compiler_lib_search_path, $1)="${_LT_TAGVAR(compiler_lib_search_path, $1)} ${prev}${p}"
+	     _LT_TAGVAR(compiler_lib_search_path, $1)="${_LT_TAGVAR(compiler_lib_search_path, $1)} $prev$p"
	   fi
	   ;;
	 # The "-l" case would never come before the object being
	 # linked, so don't bother handling this case.
	 esac
       else
	 if test -z "$_LT_TAGVAR(postdeps, $1)"; then
-	   _LT_TAGVAR(postdeps, $1)="${prev}${p}"
+	   _LT_TAGVAR(postdeps, $1)=$prev$p
	 else
-	   _LT_TAGVAR(postdeps, $1)="${_LT_TAGVAR(postdeps, $1)} ${prev}${p}"
+	   _LT_TAGVAR(postdeps, $1)="${_LT_TAGVAR(postdeps, $1)} $prev$p"
	 fi
       fi
       prev=
       ;;

    *.lto.$objext) ;; # Ignore GCC LTO objects
    *.$objext)
       # This assumes that the test object file only shows up
       # once in the compiler output.
       if test "$p" = "conftest.$objext"; then
	 pre_test_object_deps_done=yes
	 continue
       fi

-      if test "$pre_test_object_deps_done" = no; then
+      if test no = "$pre_test_object_deps_done"; then
	 if test -z "$_LT_TAGVAR(predep_objects, $1)"; then
-	   _LT_TAGVAR(predep_objects, $1)="$p"
+	   _LT_TAGVAR(predep_objects, $1)=$p
	 else
	   _LT_TAGVAR(predep_objects, $1)="$_LT_TAGVAR(predep_objects, $1) $p"
	 fi
       else
	 if test -z "$_LT_TAGVAR(postdep_objects, $1)"; then
-	   _LT_TAGVAR(postdep_objects, $1)="$p"
+	   _LT_TAGVAR(postdep_objects, $1)=$p
	 else
	   _LT_TAGVAR(postdep_objects, $1)="$_LT_TAGVAR(postdep_objects, $1) $p"
	 fi
       fi
       ;;

    *) ;; # Ignore the rest.

    esac
  done

  # Clean up.
  rm -f a.out a.exe
else
  echo "libtool.m4: error: problem compiling $1 test program"
fi

$RM -f confest.$objext
CFLAGS=$_lt_libdeps_save_CFLAGS

# PORTME: override above test on systems where it is broken
m4_if([$1], [CXX],
[case $host_os in
interix[[3-9]]*)
  # Interix 3.5 installs completely hosed .la files for C++, so rather than
  # hack all around it, let's just trust "g++" to DTRT.
  _LT_TAGVAR(predep_objects,$1)=
  _LT_TAGVAR(postdep_objects,$1)=
  _LT_TAGVAR(postdeps,$1)=
  ;;
-
-linux*)
-  case `$CC -V 2>&1 | sed 5q` in
-  *Sun\ C*)
-    # Sun C++ 5.9
-
-    # The more standards-conforming stlport4 library is
-    # incompatible with the Cstd library. Avoid specifying
-    # it if it's in CXXFLAGS. Ignore libCrun as
-    # -library=stlport4 depends on it.
-    case " $CXX $CXXFLAGS " in
-    *" -library=stlport4 "*)
-      solaris_use_stlport4=yes
-      ;;
-    esac
-
-    if test "$solaris_use_stlport4" != yes; then
-      _LT_TAGVAR(postdeps,$1)='-library=Cstd -library=Crun'
-    fi
-    ;;
-  esac
-  ;;
-
-solaris*)
-  case $cc_basename in
-  CC* | sunCC*)
-    # The more standards-conforming stlport4 library is
-    # incompatible with the Cstd library. Avoid specifying
-    # it if it's in CXXFLAGS. Ignore libCrun as
-    # -library=stlport4 depends on it.
- case " $CXX $CXXFLAGS " in - *" -library=stlport4 "*) - solaris_use_stlport4=yes - ;; - esac - - # Adding this requires a known-good setup of shared libraries for - # Sun compiler versions before 5.6, else PIC objects from an old - # archive will be linked into the output, leading to subtle bugs. - if test "$solaris_use_stlport4" != yes; then - _LT_TAGVAR(postdeps,$1)='-library=Cstd -library=Crun' - fi - ;; - esac - ;; esac ]) case " $_LT_TAGVAR(postdeps, $1) " in *" -lc "*) _LT_TAGVAR(archive_cmds_need_lc, $1)=no ;; esac _LT_TAGVAR(compiler_lib_search_dirs, $1)= if test -n "${_LT_TAGVAR(compiler_lib_search_path, $1)}"; then - _LT_TAGVAR(compiler_lib_search_dirs, $1)=`echo " ${_LT_TAGVAR(compiler_lib_search_path, $1)}" | ${SED} -e 's! -L! !g' -e 's!^ !!'` + _LT_TAGVAR(compiler_lib_search_dirs, $1)=`echo " ${_LT_TAGVAR(compiler_lib_search_path, $1)}" | $SED -e 's! -L! !g' -e 's!^ !!'` fi _LT_TAGDECL([], [compiler_lib_search_dirs], [1], [The directories searched by this compiler when creating a shared library]) _LT_TAGDECL([], [predep_objects], [1], [Dependencies to place before and after the objects being linked to create a shared library]) _LT_TAGDECL([], [postdep_objects], [1]) _LT_TAGDECL([], [predeps], [1]) _LT_TAGDECL([], [postdeps], [1]) _LT_TAGDECL([], [compiler_lib_search_path], [1], [The library search path used internally by the compiler when linking a shared library]) ])# _LT_SYS_HIDDEN_LIBDEPS # _LT_LANG_F77_CONFIG([TAG]) # -------------------------- # Ensure that the configuration variables for a Fortran 77 compiler are # suitably defined. These variables are subsequently used by _LT_CONFIG -# to write the compiler configuration to `libtool'. +# to write the compiler configuration to 'libtool'. 
m4_defun([_LT_LANG_F77_CONFIG], [AC_LANG_PUSH(Fortran 77) -if test -z "$F77" || test "X$F77" = "Xno"; then +if test -z "$F77" || test no = "$F77"; then _lt_disable_F77=yes fi _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(allow_undefined_flag, $1)= _LT_TAGVAR(always_export_symbols, $1)=no _LT_TAGVAR(archive_expsym_cmds, $1)= _LT_TAGVAR(export_dynamic_flag_spec, $1)= _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)= _LT_TAGVAR(hardcode_libdir_separator, $1)= _LT_TAGVAR(hardcode_minus_L, $1)=no _LT_TAGVAR(hardcode_automatic, $1)=no _LT_TAGVAR(inherit_rpath, $1)=no _LT_TAGVAR(module_cmds, $1)= _LT_TAGVAR(module_expsym_cmds, $1)= _LT_TAGVAR(link_all_deplibs, $1)=unknown _LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds _LT_TAGVAR(reload_flag, $1)=$reload_flag _LT_TAGVAR(reload_cmds, $1)=$reload_cmds _LT_TAGVAR(no_undefined_flag, $1)= _LT_TAGVAR(whole_archive_flag_spec, $1)= _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no # Source file extension for f77 test sources. ac_ext=f # Object file extension for compiled f77 test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # No sense in running all these tests if we already determined that # the F77 compiler isn't working. Some variables (like enable_shared) # are currently assumed to apply to all compilers on this platform, # and will be corrupted by setting them based on a non-working compiler. -if test "$_lt_disable_F77" != yes; then +if test yes != "$_lt_disable_F77"; then # Code to be used in simple compile tests lt_simple_compile_test_code="\ subroutine t return end " # Code to be used in simple link tests lt_simple_link_test_code="\ program t end " # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. 
- lt_save_CC="$CC" + lt_save_CC=$CC lt_save_GCC=$GCC lt_save_CFLAGS=$CFLAGS CC=${F77-"f77"} CFLAGS=$FFLAGS compiler=$CC _LT_TAGVAR(compiler, $1)=$CC _LT_CC_BASENAME([$compiler]) GCC=$G77 if test -n "$compiler"; then AC_MSG_CHECKING([if libtool supports shared libraries]) AC_MSG_RESULT([$can_build_shared]) AC_MSG_CHECKING([whether to build shared libraries]) - test "$can_build_shared" = "no" && enable_shared=no + test no = "$can_build_shared" && enable_shared=no # On AIX, shared libraries and static libraries use the same namespace, and # are all built from PIC. case $host_os in aix3*) - test "$enable_shared" = yes && enable_static=no + test yes = "$enable_shared" && enable_static=no if test -n "$RANLIB"; then archive_cmds="$archive_cmds~\$RANLIB \$lib" postinstall_cmds='$RANLIB $lib' fi ;; aix[[4-9]]*) - if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then - test "$enable_shared" = yes && enable_static=no + if test ia64 != "$host_cpu"; then + case $enable_shared,$with_aix_soname,$aix_use_runtimelinking in + yes,aix,yes) ;; # shared object as lib.so file only + yes,svr4,*) ;; # shared object as lib.so archive member only + yes,*) enable_static=no ;; # shared object in lib.a archive as well + esac fi ;; esac AC_MSG_RESULT([$enable_shared]) AC_MSG_CHECKING([whether to build static libraries]) # Make sure either enable_shared or enable_static is yes. - test "$enable_shared" = yes || enable_static=yes + test yes = "$enable_shared" || enable_static=yes AC_MSG_RESULT([$enable_static]) - _LT_TAGVAR(GCC, $1)="$G77" - _LT_TAGVAR(LD, $1)="$LD" + _LT_TAGVAR(GCC, $1)=$G77 + _LT_TAGVAR(LD, $1)=$LD ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... 
_LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_SYS_DYNAMIC_LINKER($1) _LT_LINKER_HARDCODE_LIBPATH($1) _LT_CONFIG($1) fi # test -n "$compiler" GCC=$lt_save_GCC - CC="$lt_save_CC" - CFLAGS="$lt_save_CFLAGS" -fi # test "$_lt_disable_F77" != yes + CC=$lt_save_CC + CFLAGS=$lt_save_CFLAGS +fi # test yes != "$_lt_disable_F77" AC_LANG_POP ])# _LT_LANG_F77_CONFIG # _LT_LANG_FC_CONFIG([TAG]) # ------------------------- # Ensure that the configuration variables for a Fortran compiler are # suitably defined. These variables are subsequently used by _LT_CONFIG -# to write the compiler configuration to `libtool'. +# to write the compiler configuration to 'libtool'. m4_defun([_LT_LANG_FC_CONFIG], [AC_LANG_PUSH(Fortran) -if test -z "$FC" || test "X$FC" = "Xno"; then +if test -z "$FC" || test no = "$FC"; then _lt_disable_FC=yes fi _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(allow_undefined_flag, $1)= _LT_TAGVAR(always_export_symbols, $1)=no _LT_TAGVAR(archive_expsym_cmds, $1)= _LT_TAGVAR(export_dynamic_flag_spec, $1)= _LT_TAGVAR(hardcode_direct, $1)=no _LT_TAGVAR(hardcode_direct_absolute, $1)=no _LT_TAGVAR(hardcode_libdir_flag_spec, $1)= _LT_TAGVAR(hardcode_libdir_separator, $1)= _LT_TAGVAR(hardcode_minus_L, $1)=no _LT_TAGVAR(hardcode_automatic, $1)=no _LT_TAGVAR(inherit_rpath, $1)=no _LT_TAGVAR(module_cmds, $1)= _LT_TAGVAR(module_expsym_cmds, $1)= _LT_TAGVAR(link_all_deplibs, $1)=unknown _LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds _LT_TAGVAR(reload_flag, $1)=$reload_flag _LT_TAGVAR(reload_cmds, $1)=$reload_cmds _LT_TAGVAR(no_undefined_flag, $1)= _LT_TAGVAR(whole_archive_flag_spec, $1)= _LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no # Source file extension for fc test sources. ac_ext=${ac_fc_srcext-f} # Object file extension for compiled fc test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # No sense in running all these tests if we already determined that # the FC compiler isn't working. 
Some variables (like enable_shared) # are currently assumed to apply to all compilers on this platform, # and will be corrupted by setting them based on a non-working compiler. -if test "$_lt_disable_FC" != yes; then +if test yes != "$_lt_disable_FC"; then # Code to be used in simple compile tests lt_simple_compile_test_code="\ subroutine t return end " # Code to be used in simple link tests lt_simple_link_test_code="\ program t end " # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. - lt_save_CC="$CC" + lt_save_CC=$CC lt_save_GCC=$GCC lt_save_CFLAGS=$CFLAGS CC=${FC-"f95"} CFLAGS=$FCFLAGS compiler=$CC GCC=$ac_cv_fc_compiler_gnu _LT_TAGVAR(compiler, $1)=$CC _LT_CC_BASENAME([$compiler]) if test -n "$compiler"; then AC_MSG_CHECKING([if libtool supports shared libraries]) AC_MSG_RESULT([$can_build_shared]) AC_MSG_CHECKING([whether to build shared libraries]) - test "$can_build_shared" = "no" && enable_shared=no + test no = "$can_build_shared" && enable_shared=no # On AIX, shared libraries and static libraries use the same namespace, and # are all built from PIC. 
case $host_os in aix3*) - test "$enable_shared" = yes && enable_static=no + test yes = "$enable_shared" && enable_static=no if test -n "$RANLIB"; then archive_cmds="$archive_cmds~\$RANLIB \$lib" postinstall_cmds='$RANLIB $lib' fi ;; aix[[4-9]]*) - if test "$host_cpu" != ia64 && test "$aix_use_runtimelinking" = no ; then - test "$enable_shared" = yes && enable_static=no + if test ia64 != "$host_cpu"; then + case $enable_shared,$with_aix_soname,$aix_use_runtimelinking in + yes,aix,yes) ;; # shared object as lib.so file only + yes,svr4,*) ;; # shared object as lib.so archive member only + yes,*) enable_static=no ;; # shared object in lib.a archive as well + esac fi ;; esac AC_MSG_RESULT([$enable_shared]) AC_MSG_CHECKING([whether to build static libraries]) # Make sure either enable_shared or enable_static is yes. - test "$enable_shared" = yes || enable_static=yes + test yes = "$enable_shared" || enable_static=yes AC_MSG_RESULT([$enable_static]) - _LT_TAGVAR(GCC, $1)="$ac_cv_fc_compiler_gnu" - _LT_TAGVAR(LD, $1)="$LD" + _LT_TAGVAR(GCC, $1)=$ac_cv_fc_compiler_gnu + _LT_TAGVAR(LD, $1)=$LD ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... _LT_SYS_HIDDEN_LIBDEPS($1) _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_SYS_DYNAMIC_LINKER($1) _LT_LINKER_HARDCODE_LIBPATH($1) _LT_CONFIG($1) fi # test -n "$compiler" GCC=$lt_save_GCC CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS -fi # test "$_lt_disable_FC" != yes +fi # test yes != "$_lt_disable_FC" AC_LANG_POP ])# _LT_LANG_FC_CONFIG # _LT_LANG_GCJ_CONFIG([TAG]) # -------------------------- # Ensure that the configuration variables for the GNU Java Compiler compiler # are suitably defined. These variables are subsequently used by _LT_CONFIG -# to write the compiler configuration to `libtool'. +# to write the compiler configuration to 'libtool'. 
m4_defun([_LT_LANG_GCJ_CONFIG], [AC_REQUIRE([LT_PROG_GCJ])dnl AC_LANG_SAVE # Source file extension for Java test sources. ac_ext=java # Object file extension for compiled Java test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # Code to be used in simple compile tests lt_simple_compile_test_code="class foo {}" # Code to be used in simple link tests lt_simple_link_test_code='public class conftest { public static void main(String[[]] argv) {}; }' # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. lt_save_CC=$CC lt_save_CFLAGS=$CFLAGS lt_save_GCC=$GCC GCC=yes CC=${GCJ-"gcj"} CFLAGS=$GCJFLAGS compiler=$CC _LT_TAGVAR(compiler, $1)=$CC -_LT_TAGVAR(LD, $1)="$LD" +_LT_TAGVAR(LD, $1)=$LD _LT_CC_BASENAME([$compiler]) # GCJ did not exist at the time GCC didn't implicitly link libc in. _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds _LT_TAGVAR(reload_flag, $1)=$reload_flag _LT_TAGVAR(reload_cmds, $1)=$reload_cmds ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... if test -n "$compiler"; then _LT_COMPILER_NO_RTTI($1) _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_LINKER_HARDCODE_LIBPATH($1) _LT_CONFIG($1) fi AC_LANG_RESTORE GCC=$lt_save_GCC CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS ])# _LT_LANG_GCJ_CONFIG # _LT_LANG_GO_CONFIG([TAG]) # -------------------------- # Ensure that the configuration variables for the GNU Go compiler # are suitably defined. These variables are subsequently used by _LT_CONFIG -# to write the compiler configuration to `libtool'. +# to write the compiler configuration to 'libtool'. 
m4_defun([_LT_LANG_GO_CONFIG], [AC_REQUIRE([LT_PROG_GO])dnl AC_LANG_SAVE # Source file extension for Go test sources. ac_ext=go # Object file extension for compiled Go test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # Code to be used in simple compile tests lt_simple_compile_test_code="package main; func main() { }" # Code to be used in simple link tests lt_simple_link_test_code='package main; func main() { }' # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. lt_save_CC=$CC lt_save_CFLAGS=$CFLAGS lt_save_GCC=$GCC GCC=yes CC=${GOC-"gccgo"} CFLAGS=$GOFLAGS compiler=$CC _LT_TAGVAR(compiler, $1)=$CC -_LT_TAGVAR(LD, $1)="$LD" +_LT_TAGVAR(LD, $1)=$LD _LT_CC_BASENAME([$compiler]) # Go did not exist at the time GCC didn't implicitly link libc in. _LT_TAGVAR(archive_cmds_need_lc, $1)=no _LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds _LT_TAGVAR(reload_flag, $1)=$reload_flag _LT_TAGVAR(reload_cmds, $1)=$reload_cmds ## CAVEAT EMPTOR: ## There is no encapsulation within the following macros, do not change ## the running order or otherwise move them around unless you know exactly ## what you are doing... if test -n "$compiler"; then _LT_COMPILER_NO_RTTI($1) _LT_COMPILER_PIC($1) _LT_COMPILER_C_O($1) _LT_COMPILER_FILE_LOCKS($1) _LT_LINKER_SHLIBS($1) _LT_LINKER_HARDCODE_LIBPATH($1) _LT_CONFIG($1) fi AC_LANG_RESTORE GCC=$lt_save_GCC CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS ])# _LT_LANG_GO_CONFIG # _LT_LANG_RC_CONFIG([TAG]) # ------------------------- # Ensure that the configuration variables for the Windows resource compiler # are suitably defined. These variables are subsequently used by _LT_CONFIG -# to write the compiler configuration to `libtool'. +# to write the compiler configuration to 'libtool'. 
m4_defun([_LT_LANG_RC_CONFIG], [AC_REQUIRE([LT_PROG_RC])dnl AC_LANG_SAVE # Source file extension for RC test sources. ac_ext=rc # Object file extension for compiled RC test sources. objext=o _LT_TAGVAR(objext, $1)=$objext # Code to be used in simple compile tests lt_simple_compile_test_code='sample MENU { MENUITEM "&Soup", 100, CHECKED }' # Code to be used in simple link tests -lt_simple_link_test_code="$lt_simple_compile_test_code" +lt_simple_link_test_code=$lt_simple_compile_test_code # ltmain only uses $CC for tagged configurations so make sure $CC is set. _LT_TAG_COMPILER # save warnings/boilerplate of simple test code _LT_COMPILER_BOILERPLATE _LT_LINKER_BOILERPLATE # Allow CC to be a program name with arguments. -lt_save_CC="$CC" +lt_save_CC=$CC lt_save_CFLAGS=$CFLAGS lt_save_GCC=$GCC GCC= CC=${RC-"windres"} CFLAGS= compiler=$CC _LT_TAGVAR(compiler, $1)=$CC _LT_CC_BASENAME([$compiler]) _LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)=yes if test -n "$compiler"; then : _LT_CONFIG($1) fi GCC=$lt_save_GCC AC_LANG_RESTORE CC=$lt_save_CC CFLAGS=$lt_save_CFLAGS ])# _LT_LANG_RC_CONFIG # LT_PROG_GCJ # ----------- AC_DEFUN([LT_PROG_GCJ], [m4_ifdef([AC_PROG_GCJ], [AC_PROG_GCJ], [m4_ifdef([A][M_PROG_GCJ], [A][M_PROG_GCJ], [AC_CHECK_TOOL(GCJ, gcj,) - test "x${GCJFLAGS+set}" = xset || GCJFLAGS="-g -O2" + test set = "${GCJFLAGS+set}" || GCJFLAGS="-g -O2" AC_SUBST(GCJFLAGS)])])[]dnl ]) # Old name: AU_ALIAS([LT_AC_PROG_GCJ], [LT_PROG_GCJ]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([LT_AC_PROG_GCJ], []) # LT_PROG_GO # ---------- AC_DEFUN([LT_PROG_GO], [AC_CHECK_TOOL(GOC, gccgo,) ]) # LT_PROG_RC # ---------- AC_DEFUN([LT_PROG_RC], [AC_CHECK_TOOL(RC, windres,) ]) # Old name: AU_ALIAS([LT_AC_PROG_RC], [LT_PROG_RC]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([LT_AC_PROG_RC], []) # _LT_DECL_EGREP # -------------- # If we don't have a new enough Autoconf to choose the best grep # available, choose the one first in the user's PATH. 
m4_defun([_LT_DECL_EGREP], [AC_REQUIRE([AC_PROG_EGREP])dnl AC_REQUIRE([AC_PROG_FGREP])dnl test -z "$GREP" && GREP=grep _LT_DECL([], [GREP], [1], [A grep program that handles long lines]) _LT_DECL([], [EGREP], [1], [An ERE matcher]) _LT_DECL([], [FGREP], [1], [A literal string matcher]) dnl Non-bleeding-edge autoconf doesn't subst GREP, so do it here too AC_SUBST([GREP]) ]) # _LT_DECL_OBJDUMP # -------------- # If we don't have a new enough Autoconf to choose the best objdump # available, choose the one first in the user's PATH. m4_defun([_LT_DECL_OBJDUMP], [AC_CHECK_TOOL(OBJDUMP, objdump, false) test -z "$OBJDUMP" && OBJDUMP=objdump _LT_DECL([], [OBJDUMP], [1], [An object symbol dumper]) AC_SUBST([OBJDUMP]) ]) # _LT_DECL_DLLTOOL # ---------------- # Ensure DLLTOOL variable is set. m4_defun([_LT_DECL_DLLTOOL], [AC_CHECK_TOOL(DLLTOOL, dlltool, false) test -z "$DLLTOOL" && DLLTOOL=dlltool _LT_DECL([], [DLLTOOL], [1], [DLL creation program]) AC_SUBST([DLLTOOL]) ]) # _LT_DECL_SED # ------------ # Check for a fully-functional sed program, that truncates # as few characters as possible. Prefer GNU sed if found. m4_defun([_LT_DECL_SED], [AC_PROG_SED test -z "$SED" && SED=sed Xsed="$SED -e 1s/^X//" _LT_DECL([], [SED], [1], [A sed program that does not truncate output]) _LT_DECL([], [Xsed], ["\$SED -e 1s/^X//"], [Sed that helps us avoid accidentally triggering echo(1) options like -n]) ])# _LT_DECL_SED m4_ifndef([AC_PROG_SED], [ ############################################################ # NOTE: This macro has been submitted for inclusion into # # GNU Autoconf as AC_PROG_SED. When it is available in # # a released version of Autoconf we should remove this # # macro and use it instead. # ############################################################ m4_defun([AC_PROG_SED], [AC_MSG_CHECKING([for a sed that does not truncate output]) AC_CACHE_VAL(lt_cv_path_SED, [# Loop through the user's path and test for sed and gsed. 
# Then use that list of sed's as ones to test for truncation. as_save_IFS=$IFS; IFS=$PATH_SEPARATOR for as_dir in $PATH do IFS=$as_save_IFS test -z "$as_dir" && as_dir=. for lt_ac_prog in sed gsed; do for ac_exec_ext in '' $ac_executable_extensions; do if $as_executable_p "$as_dir/$lt_ac_prog$ac_exec_ext"; then lt_ac_sed_list="$lt_ac_sed_list $as_dir/$lt_ac_prog$ac_exec_ext" fi done done done IFS=$as_save_IFS lt_ac_max=0 lt_ac_count=0 # Add /usr/xpg4/bin/sed as it is typically found on Solaris # along with /bin/sed that truncates output. for lt_ac_sed in $lt_ac_sed_list /usr/xpg4/bin/sed; do - test ! -f $lt_ac_sed && continue + test ! -f "$lt_ac_sed" && continue cat /dev/null > conftest.in lt_ac_count=0 echo $ECHO_N "0123456789$ECHO_C" >conftest.in # Check for GNU sed and select it if it is found. if "$lt_ac_sed" --version 2>&1 < /dev/null | grep 'GNU' > /dev/null; then lt_cv_path_SED=$lt_ac_sed break fi while true; do cat conftest.in conftest.in >conftest.tmp mv conftest.tmp conftest.in cp conftest.in conftest.nl echo >>conftest.nl $lt_ac_sed -e 's/a$//' < conftest.nl >conftest.out || break cmp -s conftest.out conftest.nl || break # 10000 chars as input seems more than enough - test $lt_ac_count -gt 10 && break + test 10 -lt "$lt_ac_count" && break lt_ac_count=`expr $lt_ac_count + 1` - if test $lt_ac_count -gt $lt_ac_max; then + if test "$lt_ac_count" -gt "$lt_ac_max"; then lt_ac_max=$lt_ac_count lt_cv_path_SED=$lt_ac_sed fi done done ]) SED=$lt_cv_path_SED AC_SUBST([SED]) AC_MSG_RESULT([$SED]) ])#AC_PROG_SED ])#m4_ifndef # Old name: AU_ALIAS([LT_AC_PROG_SED], [AC_PROG_SED]) dnl aclocal-1.4 backwards compatibility: dnl AC_DEFUN([LT_AC_PROG_SED], []) # _LT_CHECK_SHELL_FEATURES # ------------------------ # Find out whether the shell is Bourne or XSI compatible, # or has some other useful features. 
m4_defun([_LT_CHECK_SHELL_FEATURES], -[AC_MSG_CHECKING([whether the shell understands some XSI constructs]) -# Try some XSI features -xsi_shell=no -( _lt_dummy="a/b/c" - test "${_lt_dummy##*/},${_lt_dummy%/*},${_lt_dummy#??}"${_lt_dummy%"$_lt_dummy"}, \ - = c,a/b,b/c, \ - && eval 'test $(( 1 + 1 )) -eq 2 \ - && test "${#_lt_dummy}" -eq 5' ) >/dev/null 2>&1 \ - && xsi_shell=yes -AC_MSG_RESULT([$xsi_shell]) -_LT_CONFIG_LIBTOOL_INIT([xsi_shell='$xsi_shell']) - -AC_MSG_CHECKING([whether the shell understands "+="]) -lt_shell_append=no -( foo=bar; set foo baz; eval "$[1]+=\$[2]" && test "$foo" = barbaz ) \ - >/dev/null 2>&1 \ - && lt_shell_append=yes -AC_MSG_RESULT([$lt_shell_append]) -_LT_CONFIG_LIBTOOL_INIT([lt_shell_append='$lt_shell_append']) - -if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then +[if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then lt_unset=unset else lt_unset=false fi _LT_DECL([], [lt_unset], [0], [whether the shell understands "unset"])dnl # test EBCDIC or ASCII case `echo X|tr X '\101'` in A) # ASCII based system # \n is not interpreted correctly by Solaris 8 /usr/ucb/tr lt_SP2NL='tr \040 \012' lt_NL2SP='tr \015\012 \040\040' ;; *) # EBCDIC based system lt_SP2NL='tr \100 \n' lt_NL2SP='tr \r\n \100\100' ;; esac _LT_DECL([SP2NL], [lt_SP2NL], [1], [turn spaces into newlines])dnl _LT_DECL([NL2SP], [lt_NL2SP], [1], [turn newlines into spaces])dnl ])# _LT_CHECK_SHELL_FEATURES -# _LT_PROG_FUNCTION_REPLACE (FUNCNAME, REPLACEMENT-BODY) -# ------------------------------------------------------ -# In `$cfgfile', look for function FUNCNAME delimited by `^FUNCNAME ()$' and -# '^} FUNCNAME ', and replace its body with REPLACEMENT-BODY. 
-m4_defun([_LT_PROG_FUNCTION_REPLACE], -[dnl { -sed -e '/^$1 ()$/,/^} # $1 /c\ -$1 ()\ -{\ -m4_bpatsubsts([$2], [$], [\\], [^\([ ]\)], [\\\1]) -} # Extended-shell $1 implementation' "$cfgfile" > $cfgfile.tmp \ - && mv -f "$cfgfile.tmp" "$cfgfile" \ - || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") -test 0 -eq $? || _lt_function_replace_fail=: -]) - - -# _LT_PROG_REPLACE_SHELLFNS -# ------------------------- -# Replace existing portable implementations of several shell functions with -# equivalent extended shell implementations where those features are available.. -m4_defun([_LT_PROG_REPLACE_SHELLFNS], -[if test x"$xsi_shell" = xyes; then - _LT_PROG_FUNCTION_REPLACE([func_dirname], [dnl - case ${1} in - */*) func_dirname_result="${1%/*}${2}" ;; - * ) func_dirname_result="${3}" ;; - esac]) - - _LT_PROG_FUNCTION_REPLACE([func_basename], [dnl - func_basename_result="${1##*/}"]) - - _LT_PROG_FUNCTION_REPLACE([func_dirname_and_basename], [dnl - case ${1} in - */*) func_dirname_result="${1%/*}${2}" ;; - * ) func_dirname_result="${3}" ;; - esac - func_basename_result="${1##*/}"]) - - _LT_PROG_FUNCTION_REPLACE([func_stripname], [dnl - # pdksh 5.2.14 does not do ${X%$Y} correctly if both X and Y are - # positional parameters, so assign one to ordinary parameter first. 
- func_stripname_result=${3} - func_stripname_result=${func_stripname_result#"${1}"} - func_stripname_result=${func_stripname_result%"${2}"}]) - - _LT_PROG_FUNCTION_REPLACE([func_split_long_opt], [dnl - func_split_long_opt_name=${1%%=*} - func_split_long_opt_arg=${1#*=}]) - - _LT_PROG_FUNCTION_REPLACE([func_split_short_opt], [dnl - func_split_short_opt_arg=${1#??} - func_split_short_opt_name=${1%"$func_split_short_opt_arg"}]) - - _LT_PROG_FUNCTION_REPLACE([func_lo2o], [dnl - case ${1} in - *.lo) func_lo2o_result=${1%.lo}.${objext} ;; - *) func_lo2o_result=${1} ;; - esac]) - - _LT_PROG_FUNCTION_REPLACE([func_xform], [ func_xform_result=${1%.*}.lo]) - - _LT_PROG_FUNCTION_REPLACE([func_arith], [ func_arith_result=$(( $[*] ))]) - - _LT_PROG_FUNCTION_REPLACE([func_len], [ func_len_result=${#1}]) -fi - -if test x"$lt_shell_append" = xyes; then - _LT_PROG_FUNCTION_REPLACE([func_append], [ eval "${1}+=\\${2}"]) - - _LT_PROG_FUNCTION_REPLACE([func_append_quoted], [dnl - func_quote_for_eval "${2}" -dnl m4 expansion turns \\\\ into \\, and then the shell eval turns that into \ - eval "${1}+=\\\\ \\$func_quote_for_eval_result"]) - - # Save a `func_append' function call where possible by direct use of '+=' - sed -e 's%func_append \([[a-zA-Z_]]\{1,\}\) "%\1+="%g' $cfgfile > $cfgfile.tmp \ - && mv -f "$cfgfile.tmp" "$cfgfile" \ - || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") - test 0 -eq $? || _lt_function_replace_fail=: -else - # Save a `func_append' function call even when '+=' is not available - sed -e 's%func_append \([[a-zA-Z_]]\{1,\}\) "%\1="$\1%g' $cfgfile > $cfgfile.tmp \ - && mv -f "$cfgfile.tmp" "$cfgfile" \ - || (rm -f "$cfgfile" && cp "$cfgfile.tmp" "$cfgfile" && rm -f "$cfgfile.tmp") - test 0 -eq $? 
|| _lt_function_replace_fail=: -fi - -if test x"$_lt_function_replace_fail" = x":"; then - AC_MSG_WARN([Unable to substitute extended shell functions in $ofile]) -fi -]) - # _LT_PATH_CONVERSION_FUNCTIONS # ----------------------------- -# Determine which file name conversion functions should be used by +# Determine what file name conversion functions should be used by # func_to_host_file (and, implicitly, by func_to_host_path). These are needed # for certain cross-compile configurations and native mingw. m4_defun([_LT_PATH_CONVERSION_FUNCTIONS], [AC_REQUIRE([AC_CANONICAL_HOST])dnl AC_REQUIRE([AC_CANONICAL_BUILD])dnl AC_MSG_CHECKING([how to convert $build file names to $host format]) AC_CACHE_VAL(lt_cv_to_host_file_cmd, [case $host in *-*-mingw* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32 ;; *-*-cygwin* ) lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32 ;; * ) # otherwise, assume *nix lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32 ;; esac ;; *-*-cygwin* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin ;; *-*-cygwin* ) lt_cv_to_host_file_cmd=func_convert_file_noop ;; * ) # otherwise, assume *nix lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin ;; esac ;; * ) # unhandled hosts (and "normal" native builds) lt_cv_to_host_file_cmd=func_convert_file_noop ;; esac ]) to_host_file_cmd=$lt_cv_to_host_file_cmd AC_MSG_RESULT([$lt_cv_to_host_file_cmd]) _LT_DECL([to_host_file_cmd], [lt_cv_to_host_file_cmd], [0], [convert $build file names to $host format])dnl AC_MSG_CHECKING([how to convert $build file names to toolchain format]) AC_CACHE_VAL(lt_cv_to_tool_file_cmd, [#assume ordinary cross tools, or native build. 
lt_cv_to_tool_file_cmd=func_convert_file_noop case $host in *-*-mingw* ) case $build in *-*-mingw* ) # actually msys lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32 ;; esac ;; esac ]) to_tool_file_cmd=$lt_cv_to_tool_file_cmd AC_MSG_RESULT([$lt_cv_to_tool_file_cmd]) _LT_DECL([to_tool_file_cmd], [lt_cv_to_tool_file_cmd], [0], [convert $build files to toolchain format])dnl ])# _LT_PATH_CONVERSION_FUNCTIONS diff --git a/src/Projections/DISLepton.cc b/src/Projections/DISLepton.cc --- a/src/Projections/DISLepton.cc +++ b/src/Projections/DISLepton.cc @@ -1,74 +1,69 @@ // -*- C++ -*- #include "Rivet/Projections/DISLepton.hh" namespace Rivet { int DISLepton::compare(const Projection& p) const { const DISLepton& other = pcast(p); - return mkNamedPCmp(other, "Beam") || mkNamedPCmp(other, "FS"); + return mkNamedPCmp(other, "Beam") || mkNamedPCmp(other, "LFS") || + mkNamedPCmp(other, "IFS"); } void DISLepton::project(const Event& e) { // Find incoming lepton beam const ParticlePair& inc = applyProjection(e, "Beam").beams(); bool firstIsLepton = PID::isLepton(inc.first.pid()); bool secondIsLepton = PID::isLepton(inc.second.pid()); if (firstIsLepton && !secondIsLepton) { _incoming = inc.first; } else if (!firstIsLepton && secondIsLepton) { _incoming = inc.second; } else { throw Error("DISLepton could not find the correct beam"); } - // // Find outgoing scattered lepton via HepMC graph - // /// @todo Evidence that this doesn't work with Sherpa... 
FIX - // const GenParticle* current_l = _incoming.genParticle(); - // bool found_next_vertex = true; - // while (found_next_vertex) { - // found_next_vertex = false; - // if (!current_l->end_vertex()) break; - // // Get lists of outgoing particles consistent with a neutral (gamma/Z) or charged (W) DIS current - // /// @todo Avoid loops - // vector out_n, out_c; - // for (const GenParticle* pp : particles_out(current_l, HepMC::children)) { - // if (current_l->pdg_id() == pp->pdg_id()) out_n.push_back(pp); - // if (std::abs(std::abs(current_l->pdg_id()) - std::abs(pp->pdg_id())) == 1) out_c.push_back(pp); - // } - // if (out_n.empty() && out_c.empty()) { - // MSG_WARNING("No lepton in the new vertex"); - // break; - // } - // if (out_c.size() + out_n.size() > 1) { - // MSG_WARNING("More than one lepton in the new vertex"); - // break; - // } - // current_l = out_c.empty() ? out_n.front() : out_c.front(); - // found_next_vertex = true; - // } - // if (current_l != nullptr) { - // _outgoing = Particle(current_l); - // MSG_DEBUG("Found DIS lepton from event-record structure"); - // return; - // } + // If no graph-connected scattered lepton, use the hardest + // (preferably same-flavour) prompt FS lepton in the event. 
-    // If no graph-connected scattered lepton, use the hardest (preferably same-flavour) prompt FS lepton in the event
-    /// @todo Specify the charged or neutral current being searched for in the DISLepton constructor/API, and remove the guesswork
-    const Particles fsleptons = applyProjection<FinalState>(e, "FS").particles(isLepton, cmpMomByE);
-    const Particles sfleptons = filter_select(fsleptons, Cuts::pid == _incoming.pid());
-    MSG_DEBUG("SF leptons = " << sfleptons.size() << ", all leptons = " << fsleptons.size());
-    if (!sfleptons.empty()) {
+    const Particles fsleptons =
+      applyProjection<FinalState>(e, "LFS").particles(isLepton, cmpMomByE);
+    Particles sfleptons =
+      filter_select(fsleptons, Cuts::pid == _incoming.pid());
+    MSG_DEBUG("SF leptons = " << sfleptons.size() << ", all leptons = "
+              << fsleptons.size());
+    if ( sfleptons.empty() ) sfleptons = fsleptons;
+
+    if ( _isolDR > 0.0 ) {
+      const Particles & other =
+        applyProjection<FinalState>(e, "IFS").particles();
+      while (!sfleptons.empty()) {
+        bool skip = false;
+        Particle testlepton = sfleptons.front();
+        for ( auto p: other ) {
+          if ( skip ) break;
+          if ( deltaR(p, testlepton) < _isolDR ) skip = true;
+          for ( auto c : testlepton.constituents() ) {
+            if ( c.genParticle() == p.genParticle() ) {
+              skip = false;
+              break;
+            }
+          }
+        }
+        if ( !skip ) break;
+        sfleptons.erase(sfleptons.begin());
+      }
+    }
+
+    if ( !sfleptons.empty() ) {
       _outgoing = sfleptons.front();
-    } else if (!fsleptons.empty()) {
-      _outgoing = fsleptons.front();
     } else {
       throw Error("Could not find the scattered lepton");
     }
   }

}
diff --git a/src/Projections/TauFinder.cc b/src/Projections/TauFinder.cc
--- a/src/Projections/TauFinder.cc
+++ b/src/Projections/TauFinder.cc
@@ -1,28 +1,27 @@
 // -*- C++ -*-
 #include "Rivet/Projections/TauFinder.hh"
-#include "Rivet/Projections/UnstableFinalState.hh"

 namespace Rivet {


   void TauFinder::project(const Event& e) {
     _theParticles.clear();
-    const UnstableFinalState& ufs = applyProjection<UnstableFinalState>(e, "UFS");
+    const auto& ufs = applyProjection<UnstableParticles>(e, "UFS");
     for (const Particle& p : ufs.particles()) {
       if (p.abspid() != PID::TAU) continue;
       if (_dectype == ANY ||
           (_dectype == LEPTONIC && isLeptonic(p)) ||
           (_dectype == HADRONIC && isHadronic(p)) ) _theParticles.push_back(p);
     }
   }


   int TauFinder::compare(const Projection& p) const {
     const PCmp fscmp = mkNamedPCmp(p, "UFS");
     if (fscmp != EQUIVALENT) return fscmp;
-    const TauFinder& other = dynamic_cast<const TauFinder&>(p);
+    const auto& other = dynamic_cast<const TauFinder&>(p);
     return cmp(_dectype, other._dectype);
   }


}
diff --git a/src/Projections/UnstableFinalState.cc b/src/Projections/UnstableFinalState.cc
--- a/src/Projections/UnstableFinalState.cc
+++ b/src/Projections/UnstableFinalState.cc
@@ -1,74 +1,72 @@
 // -*- C++ -*-
-#include "Rivet/Projections/UnstableFinalState.hh"
+#include "Rivet/Projections/UnstableParticles.hh"

 namespace Rivet {


-  /// @todo Add a FIRST/LAST/ANY enum to specify the mode for uniquifying replica chains (default = LAST)
-
-
-  void UnstableFinalState::project(const Event& e) {
+  void UnstableParticles::project(const Event& e) {
     _theParticles.clear();

     /// @todo Replace PID veto list with PID:: functions?
     vector<long> vetoIds;
     vetoIds += 22; // status 2 photons don't count!
     vetoIds += 110;
     vetoIds += 990;
     vetoIds += 9990; // Reggeons
-    //vetoIds += 9902210; // something weird from PYTHIA6

     for (const GenParticle* p : Rivet::particles(e.genEvent())) {
+      const Particle rp(p);
       const int st = p->status();
       bool passed =
         (st == 1 || (st == 2 && !contains(vetoIds, abs(p->pdg_id())))) &&
         !PID::isParton(p->pdg_id()) && ///< Always veto partons
         !p->is_beam() && // Filter beam particles
-        _cuts->accept(p->momentum());
+        _cuts->accept(rp);

       // Avoid double counting by re-marking as unpassed if ID == (any) parent ID
       const GenVertex* pv = p->production_vertex();
       // if (passed && pv) {
       //   for (GenVertex::particles_in_const_iterator pp = pv->particles_in_const_begin(); pp != pv->particles_in_const_end(); ++pp) {
       //     if (p->pdg_id() == (*pp)->pdg_id() && (*pp)->status() == 2) {
       //       passed = false;
       //       break;
       //     }
       //   }
       // }
       //
+      // Avoid double counting by re-marking as unpassed if ID == any child ID
       const GenVertex* dv = p->end_vertex();
       if (passed && dv) {
         for (GenParticle* pp : particles_out(dv)) {
           if (p->pdg_id() == pp->pdg_id() && pp->status() == 2) {
             passed = false;
             break;
           }
         }
       }

       // Add to output particles collection
-      if (passed) _theParticles.push_back(Particle(*p));
+      if (passed) _theParticles.push_back(rp);

       // Log parents and children
       if (getLog().isActive(Log::TRACE)) {
         MSG_TRACE("ID = " << p->pdg_id() << ", status = " << st
                   << ", pT = " << p->momentum().perp()
                   << ", eta = " << p->momentum().eta()
                   << ": result = " << std::boolalpha << passed);
         if (pv) {
           for (GenVertex::particles_in_const_iterator pp = pv->particles_in_const_begin(); pp != pv->particles_in_const_end(); ++pp) {
             MSG_TRACE("  parent ID = " << (*pp)->pdg_id());
           }
         }
         if (dv) {
           for (GenVertex::particles_out_const_iterator pp = dv->particles_out_const_begin(); pp != dv->particles_out_const_end(); ++pp) {
             MSG_TRACE("  child ID = " << (*pp)->pdg_id());
           }
         }
       }
     }

     MSG_DEBUG("Number of unstable final-state particles = " << _theParticles.size());
   }

}
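The DISLepton change above selects the hardest same-flavour lepton and, when an isolation radius is set, vetoes candidates with another particle closer than _isolDR in the (eta, phi) plane. The selection logic can be sketched in a self-contained form as follows; note that `Particle`, `deltaPhi`, `deltaR`, and `selectIsolated` here are toy stand-ins, not the real Rivet classes or API, and the sketch omits the constituent check present in the actual code.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Toy stand-in for a Rivet-like particle: PDG ID plus (eta, phi, pt).
struct Particle {
  int pid;             // PDG ID (0 used as a "no particle" sentinel here)
  double eta, phi, pt;
};

// Azimuthal separation wrapped into [0, pi].
double deltaPhi(double a, double b) {
  const double PI = std::acos(-1.0);
  double d = std::fmod(std::fabs(a - b), 2 * PI);
  return d > PI ? 2 * PI - d : d;
}

// DeltaR separation in the (eta, phi) plane.
double deltaR(const Particle& a, const Particle& b) {
  const double de = a.eta - b.eta;
  const double dp = deltaPhi(a.phi, b.phi);
  return std::sqrt(de * de + dp * dp);
}

// Pick the hardest candidate with no other particle closer than isolDR,
// mirroring the shape of the DISLepton loop (minus the constituent check).
Particle selectIsolated(std::vector<Particle> cands,
                        const std::vector<Particle>& others,
                        double isolDR) {
  std::sort(cands.begin(), cands.end(),
            [](const Particle& a, const Particle& b) { return a.pt > b.pt; });
  for (const Particle& c : cands) {
    bool isolated = true;
    for (const Particle& o : others) {
      if (deltaR(c, o) < isolDR) { isolated = false; break; }
    }
    if (isolated) return c;  // hardest isolated candidate wins
  }
  return Particle{0, 0.0, 0.0, 0.0};  // nothing survived the isolation cut
}
```

The same ordering-then-veto structure applies in the real projection, except that candidates there are pre-filtered by `Cuts::pid` against the incoming beam lepton, which is exactly what the Particle-level `Cut` constructor argument mentioned in the ChangeLog enables.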