2019-05-30 Christian Gutschow <chris.g@cern.ch>
* implement --skip-weights flag to ignore variation weights
* make rivet-mkanalysis more intuitive for new users
* interface change: remove title/axis-label options from book methods
* pull some Py3 script updates
2019-05-29 Christian Gutschow <chris.g@cern.ch>
* Bypass *proj == *proj using reinterpret_cast to uintptr_t
* remove unused variable from Spherocity
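The self-comparison bypass above can be illustrated with a minimal standalone sketch (hypothetical `sameObject` helper, not the Rivet Projection API): two references are compared by address, cast to `uintptr_t`, to short-circuit any overloaded deep comparison.

```cpp
#include <cassert>
#include <cstdint>

// Sketch: cheap identity check before an expensive deep compare.
// Casting the addresses to uintptr_t avoids invoking any overloaded
// operator== defined on the objects themselves.
template <typename T>
bool sameObject(const T& a, const T& b) {
  return reinterpret_cast<std::uintptr_t>(&a) ==
         reinterpret_cast<std::uintptr_t>(&b);
}
```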
2019-05-15 Christian Gutschow <chris.g@cern.ch>
* Clean up logic around CmpState
2018-08-30 Andy Buckley <andy.buckley@cern.ch>
* Update embedded yaml-cpp to v0.6.0.
2018-08-29 Andy Buckley <andy.buckley@cern.ch>
* Change default plugin library installation to $prefix/lib/Rivet, since they're non-linkable.
2018-08-14 Andy Buckley <andy.buckley@cern.ch>
* Version 2.6.1 release.
2018-08-08 Andy Buckley <andy.buckley@cern.ch>
* Add a RIVET_RANDOM_SEED variable to fix the smearing random-seed engine for validation comparisons.
2018-07-19 Andy Buckley <andy.buckley@cern.ch>
* Merge in ATLAS_2017_I1604029 (ttbar+gamma), ATLAS_2017_I1626105
(dileptonic ttbar), ATLAS_2017_I1644367 (triphotons), and
ATLAS_2017_I1645627 (photon + jets).
* Postpone Particles enhancement for now, since the required C++11 isn't supported on lxplus7 = CentOS7.
* Add MC_DILEPTON analysis.
2018-07-10 Andy Buckley <andy.buckley@cern.ch>
* Fix HepData tarball download handling: StringIO is *not* safe anymore
2018-07-08 Andy Buckley <andy.buckley@cern.ch>
* Add LorentzTransform factory functions direct from FourMomentum, and operator()s
2018-06-20 Andy Buckley <andy.buckley@cern.ch>
* Add FinalState(fs, cut) augmenting constructor, and PrevFS projection machinery. Validated for an abscharge > 0 cut.
* Add hasProjection() methods to ProjectionHandler and ProjectionApplier.
* Clone MC_GENERIC as MC_FSPARTICLES and deprecate the badly-named original.
* Fix Spires -> Inspire ID for CMS_2017_I1518399.
2018-06-04 Andy Buckley <andy.buckley@cern.ch>
* Fix installation of (In)DirectFinalState.hh
2018-05-31 Andy Buckley <andy.buckley@cern.ch>
* Add init-time setting of a single weight-vector index from the
RIVET_WEIGHT_INDEX environment variable. To be removed in v3, but
really we should have done this years ago... and we don't know how
long the handover will be.
2018-05-22 Neil Warrack <neil.warrack@cern.ch>
* Include 'unphysical' photon parents in PartonicTops' veto of prompt leptons from photon conversions.
2018-05-20 Andy Buckley <andy.buckley@cern.ch>
* Make Particles and Jets into actual specialisations of
std::vector rather than typedefs, and update surrounding classes
to use them. The specialisations can implicitly cast to vectors of
FourMomentum (and maybe Pseudojet).
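A minimal standalone sketch of that pattern (hypothetical `Part`/`P4` stand-ins, not Rivet's actual Particle/FourMomentum classes): a vector subclass whose elements cast to a momentum type, so the whole container can implicitly convert.

```cpp
#include <vector>

// Hypothetical stand-ins for FourMomentum and Particle.
struct P4 { double px, py, pz, E; };
struct Part {
  P4 mom;
  int pid;
  operator P4() const { return mom; }  // each element casts to a 4-momentum
};

// A specialised vector that implicitly converts to vector<P4>,
// mirroring the Particles -> vector<FourMomentum> cast described above.
struct Parts : std::vector<Part> {
  using std::vector<Part>::vector;
  operator std::vector<P4>() const {
    return std::vector<P4>(begin(), end());  // element-wise conversion
  }
};
```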
2018-05-18 Andy Buckley <andy.buckley@cern.ch>
* Make CmpAnaHandle::operator() const, for GCC 8 (thanks to CMS)
2018-05-07 Andy Buckley <andy.buckley@cern.ch>
* CMS_2016_I1421646.cc: Add patch from CMS to veto if leading jets
outside |y| < 2.5, rather than only considering jets in that
acceptance. Thanks to CMS and Markus Seidel.
2018-04-27 Andy Buckley <andy.buckley@cern.ch>
* Tidy keywords and luminosity entries, and add both to BSM search .info files.
* Add Luminosity_fb and Keywords placeholders in mkanalysis output.
2018-04-26 Andy Buckley <andy.buckley@cern.ch>
* Add pairMass and pairPt functions.
* Add (i)discardIfAnyDeltaRLess and (i)discardIfAnyDeltaPhiLess functions.
* Add normalize() methods to Cutflow and Cutflows.
* Add DirectFinalState and IndirectFinalState alias headers, for forward compatibility. 'Prompt' is confusing.
2018-04-24 Andy Buckley <andy.buckley@cern.ch>
* Add initializer_list overload for binIndex. Needed for other util functions operating on vectors.
* Fix function signature bug in isMT2 overload.
* Add isSameSign, isOppSign, isSameFlav, isOppFlav, and isOSSF etc. functions on PIDs and Particles.
2018-03-27 Andy Buckley <andy.buckley@cern.ch>
* Add RatioPlotLogY key to make-plots. Thanks to Antonin Maire.
2018-02-22 Andy Buckley <andy.buckley@cern.ch>
* Adding boolean operator syntactic sugar for composition of bool functors.
* Copy & paste error fixes in implementation of BoolJetAND,OR,NOT.
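The idea of the boolean syntactic sugar can be sketched in isolation (an `IntCut` alias for illustration only; the real code composes Particle/Jet functors): overloaded `&&`, `||`, and `!` build new predicates from existing ones without writing lambdas at the call site.

```cpp
#include <functional>

using IntCut = std::function<bool(int)>;

// Syntactic sugar: compose predicates with &&, || and ! directly.
inline IntCut operator&&(IntCut a, IntCut b) {
  return [a, b](int x) { return a(x) && b(x); };
}
inline IntCut operator||(IntCut a, IntCut b) {
  return [a, b](int x) { return a(x) || b(x); };
}
inline IntCut operator!(IntCut a) {
  return [a](int x) { return !a(x); };
}
```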
2018-02-01 Andy Buckley <andy.buckley@cern.ch>
* Make the project() and compare() methods of projections public.
* Fix a serious bug in the SmearedParticles and SmearedJets compare methods.
* Add string representations and streamability to the Cut objects, for debugging.
2018-01-08 Andy Buckley <andy.buckley@cern.ch>
* Add highlighted source to HTML analysis metadata listings.
2017-12-21 Andy Buckley <andy.buckley@cern.ch>
* Version 2.6.0 release.
2017-12-20 Andy Buckley <andy.buckley@cern.ch>
* Typo fix in TOTEM_2012_I1220862 data -- thanks to Anton Karneyeu.
2017-12-19 Andy Buckley <andy.buckley@cern.ch>
* Adding contributed analyses: 1 ALICE, 6 ATLAS, 1 CMS.
* Fix bugged PID codes in MC_PRINTEVENT.
2017-12-13 Andy Buckley <andy.buckley@cern.ch>
* Protect Run methods and rivet script against being told to run from a missing or unreadable file.
2017-12-11 Andy Buckley <andy.buckley@cern.ch>
* Replace manual event count & weight handling with a YODA Counter object.
2017-11-28 Andy Buckley <andy.buckley@cern.ch>
* Providing neater & more YODA-consistent sumW and sumW2 methods on AnalysisHandler and Analysis.
* Fix to Python version check for >= 2.7.10 (patch submitted to GNU)
2017-11-17 Andy Buckley <andy.buckley@cern.ch>
* Various improvements to DISKinematics, DISLepton, and the ZEUS 2001 analysis.
2017-11-06 Andy Buckley <andy.buckley@cern.ch>
* Extend AOPath regex to allow dots and underscores in weight names.
2017-10-27 Andy Buckley <andy.buckley@cern.ch>
* Add energy to the list of cuts (both as Cuts::E and Cuts::energy)
* Add missing pT (rather than Et) functions to SmearedMET,
although they are just copies of the MET functions for now.
2017-10-26 Chris Pollard <cspollard@gmail.com>
* update the way crossSection() works; remove
setNeedsCrossSection()
2017-10-09 Andy Buckley <andy.buckley@cern.ch>
* Embed zstr and enable transparent reading of gzipped HepMC streams.
2017-10-03 Andy Buckley <andy.buckley@cern.ch>
* Use Lester MT2 bisection header, and expose a few more mT2 function signatures.
2017-09-26 Andy Buckley <andy.buckley@cern.ch>
* Use generic YODA read and write functions -- enables zipped yoda.gz output.
* Add ChargedLeptons enum and mode argument to ZFinder and WFinder
constructors, to allow control over whether the selected charged
leptons are prompt. This is mostly cosmetic/for symmetry in the
case of ZFinder, since the same can be achieved by passing a
PromptFinalState as the fs argument, but for WFinder it's
essential since passing a prompt final state screws up the MET
calculation. Both are slightly different in the treatment of the
lepton dressing, although conventionally this is an area where
only prompt photons are used.
2017-09-25 Andy Buckley <andy.buckley@cern.ch>
* Add deltaR2 functions for squared distances.
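A standalone version of the idea (free functions with illustrative signatures, not Rivet's exact API): the squared eta-phi distance skips the `sqrt`, which is enough for ordering and threshold comparisons.

```cpp
#include <cmath>

static const double PI = 3.14159265358979323846;

// Wrap an azimuthal difference into [-pi, pi].
double deltaPhi(double phi1, double phi2) {
  return std::remainder(phi1 - phi2, 2 * PI);
}

// Squared eta-phi distance: avoids the sqrt of deltaR when only
// ordering or cut comparisons are needed.
double deltaR2(double eta1, double phi1, double eta2, double phi2) {
  const double deta = eta1 - eta2;
  const double dphi = deltaPhi(phi1, phi2);
  return deta * deta + dphi * dphi;
}
```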
2017-09-10 Andy Buckley <andy.buckley@cern.ch>
* Add white backgrounds to make-plots main and ratio plot frames.
2017-09-05 Andy Buckley <andy.buckley@cern.ch>
* Add CMS_2016_PAS_TOP_15_006 jet multiplicity in lepton+jets ttbar at 8 TeV analysis.
* Add CMS_2017_I1467451 Higgs -> WW -> emu + MET in 8 TeV pp analysis.
* Add ATLAS_2017_I1609448 Z->ll + pTmiss analysis.
* Add vectorMissingEt/Pt and vectorMET/MPT convenience methods to MissingMomentum.
* Add ATLAS_2017_I1598613 J/psi + mu analysis.
* Add CMS SUSY 0-lepton search CMS_2017_I1594909 (unofficial implementation, validated vs. published cutflows)
2017-09-04 Andy Buckley <andy.buckley@cern.ch>
* Change license explicitly to GPLv3, cf. MCnet3 agreement.
* Add a better jet smearing resolution parametrisation, based on GAMBIT code from Matthias Danninger.
2017-08-16 Andy Buckley <andy.buckley@cern.ch>
* Protect make-plots against NaNs in error band values (patch from Dmitry Kalinkin).
2017-07-20 Andy Buckley <andy.buckley@cern.ch>
* Add sumPt, sumP4, sumP3 utility functions.
* Record truth particles as constituents of SmearedParticles output.
* Rename UnstableFinalState -> UnstableParticles, and convert
ZFinder to be a general ParticleFinder rather than FinalState.
2017-07-19 Andy Buckley <andy.buckley@cern.ch>
* Add implicit cast from FourVector & FourMomentum to Vector3, and tweak mT implementation.
* Add rawParticles() to ParticleFinder, and update DressedLeptons, WFinder, ZFinder and VetoedFinalState to cope.
* Add isCharged() and isChargedLepton() to Particle.
* Add constituents() and rawConstituents() to Particle.
* Add support for specifying bin edges as braced initializer lists rather than explicit vector<double>.
2017-07-18 Andy Buckley <andy.buckley@cern.ch>
* Enable methods for booking of Histo2D and Profile2D from Scatter3D reference data.
* Remove IsRef annotation from autobooked histogram objects.
2017-07-17 Andy Buckley <andy.buckley@cern.ch>
* Add pair-smearing to SmearedJets.
2017-07-08 Andy Buckley <andy.buckley@cern.ch>
* Add Event::centrality(), for non-HepMC access to the generator
value if one has been recorded -- otherwise -1.
2017-06-28 Andy Buckley <andy.buckley@cern.ch>
* Split the smearing functions into separate header files for
generic/momentum, Particle, Jet, and experiment-specific smearings
& efficiencies.
2017-06-27 Andy Buckley <andy.buckley@cern.ch>
* Add 'JetFinder' alias for JetAlg, by analogy with ParticleFinder.
2017-06-26 Andy Buckley <andy.buckley@cern.ch>
* Convert SmearedParticles to a more general list of combined
efficiency+smearing functions, with extra constructors and some
variadic template cleverness to allow implicit conversions from
single-operation eff and smearing function. Yay for C++11 ;-)
This work based on a macro-based version of combined eff/smear
functions by Karl Nordstrom -- thanks!
* Add *EffFn, *SmearFn, and *EffSmearFn types to SmearingFunctions.hh.
2017-06-23 Andy Buckley <andy.buckley@cern.ch>
* Add portable OpenMP enabling flags to AM_CXXFLAGS.
2017-06-22 Andy Buckley <andy.buckley@cern.ch>
* Fix the smearing random number seed and make it thread-specific
if OpenMP is available (not yet in the build system).
* Remove the UNUSED macro and find an alternative solution for the
cases where it was used, since there was a risk of macro clashes
with embedding codes.
* Add a -o output directory option to make-plots.
* Vector4.hh: Add mT2(vec,vec) functions.
2017-06-21 Andy Buckley <andy.buckley@cern.ch>
* Add a full set of in-range kinematics functors: ptInRange,
(abs)etaInRange, (abs)phiInRange, deltaRInRange, deltaPhiInRange,
deltaEtaInRange, deltaRapInRange.
* Add a convenience JET_BTAG_EFFS functor with several constructors to handle mistag rates.
* Add const efficiency functors operating on Particle, Jet, and FourMomentum.
* Add const-efficiency constructor variants for SmearedParticles.
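The in-range functor pattern can be sketched as follows (hypothetical `PtInRange`/`MockJet` names; the real functors cover eta, phi, deltaR, etc. as listed above): the functor stores the bounds and tests any object exposing the relevant accessor.

```cpp
// Illustrative functor in the spirit of ptInRange: holds the bounds
// and tests any object exposing a pt() accessor (half-open interval).
struct PtInRange {
  double lo, hi;
  template <typename T>
  bool operator()(const T& obj) const {
    return obj.pt() >= lo && obj.pt() < hi;
  }
};

// Hypothetical minimal jet type for demonstration only.
struct MockJet {
  double ptval;
  double pt() const { return ptval; }
};
```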
2017-06-21 Jon Butterworth <jmb@hep.ucl.ac.uk>
* Fix normalisations in CMS_2016_I1454211.
* Fix analysis name in ref histo paths for ATLAS_2017_I1591327.
2017-06-18 Andy Buckley <andy.buckley@cern.ch>
* Move all standard plugin files into subdirs of src/Analyses,
with some custom make rules driving rivet-buildplugin.
2017-06-18 David Grellscheid <david.grellscheid@durham.ac.uk>
* Parallelise rivet-buildplugin, with source-file cat'ing and use
of a temporary Makefile.
2017-06-18 Holger Schulz <holger.schulz@durham.ac.uk>
* Version 2.5.4 release!
2017-06-17 Holger Schulz <holger.schulz@durham.ac.uk>
* Fix 8 TeV DY (ATLAS_2016_I1467454), EL/MU bits were missing.
* Add 13 TeV DY (ATLAS_2017_I1514251) and mark
ATLAS_2015_CONF_2015_041 obsolete
* Add missing install statement for ATLAS_2016_I1448301.yoda/plot/info leading to
segfault
2017-06-09 Andy Buckley <andy.buckley@cern.ch>
* Slight improvements to Particle constructors.
* Improvement to Beam projection: before falling back to barcodes
1 & 2, try a manual search for status=4 particles. Based on a
patch from Andrii Verbytskyi.
2017-06-05 Andy Buckley <andy.buckley@cern.ch>
* Add CMS_2016_I1430892: dilepton channel ttbar charge asymmetry analysis.
* Add CMS_2016_I1413748: dilepton channel ttbar spin correlations and polarisation analysis.
* Add CMS_2017_I1518399: leading jet mass for boosted top quarks at 8 TeV.
* Add convenience constructors for ChargedLeptons projection.
2017-06-03 Andy Buckley <andy.buckley@cern.ch>
* Add FinalState and Cut (optional) constructor arguments and
usage to DISFinalState. Thanks to Andrii Verbytskyi for the idea
and initial patch.
2017-05-23 Andy Buckley <andy.buckley@cern.ch>
* Add ATLAS_2016_I1448301, Z/gamma cross section measurement at 8 TeV.
* Add ATLAS_2016_I1426515, WW production at 8 TeV.
2017-05-19 Holger Schulz <holger.schulz@durham.ac.uk>
* Add BELLE measurement of semileptonic B0bar -> D*+ ell nu decays. I
took the liberty of correcting the data in the sense that the bin
widths are taken into account in the normalisation. BELLE_2017_I1512299.
This is a nice analysis as it probes both the hadronic and the leptonic
side of the decay, so it is very valuable for model building, and of
course it is rare as it is an unfolded B measurement.
2017-05-17 Holger Schulz <holger.schulz@durham.ac.uk>
* Add ALEPH measurement of hadronic tau decays, ALEPH_2014_I1267648.
* Add ALEPH dimuon invariant mass (OS and SS) analysis, ALEPH_2016_I1492968
* The latter needed the GENKTEE FastJet algorithm, so I added that to FastJets
* Protection against logspace exception in histobooking of
MC_JetAnalysis
* Fix compiler complaints about uninitialised variable in OPAL_2004.
2017-05-16 Holger Schulz <holger.schulz@durham.ac.uk>
* Tidy ALEPH_1999 charm fragmentation analysis and normalise to data
integral. Added DSTARPLUS and DSTARMINUS to PID.
2017-05-16 Andy Buckley <andy.buckley@cern.ch>
* Add ATLAS_2016_CONF_2016_092, inclusive jet cross sections using early 13 TeV data.
* Add ATLAS_2017_I1591327, isolated diphoton + X differential cross-sections.
* Add ATLAS_2017_I1589844, ATLAS_2017_I1589844_EL, ATLAS_2017_I1589844_MU: kT splittings in Z events at 8 TeV.
* Add ATLAS_2017_I1509919, track-based underlying event at 13 TeV in ATLAS.
* Add ATLAS_2016_I1492320_2l2j and ATLAS_2016_I1492320_3l, the WWW cross-section at 8 TeV.
2017-05-12 Andy Buckley <andy.buckley@cern.ch>
* Add ATLAS_2016_I1449082, charge asymmetry in top quark pair production in dilepton channel.
* Add ATLAS_2015_I1394865, inclusive 4-lepton/ZZ lineshape.
2017-05-11 Andy Buckley <andy.buckley@cern.ch>
* Add ATLAS_2013_I1234228, high-mass Drell-Yan at 7 TeV.
2017-05-10 Andy Buckley <andy.buckley@cern.ch>
* Add CMS_2017_I1519995, search for new physics with dijet angular distributions in proton-proton collisions at sqrt(s) = 13 TeV.
* Add CMS_2017_I1511284, inclusive energy spectrum in the very forward direction in proton-proton collisions at 13 TeV.
* Add CMS_2016_I1486238, studies of 2 b-jet + 2 jet production in proton-proton collisions at 7 TeV.
* Add CMS_2016_I1454211, boosted ttbar in pp collisions at sqrtS = 8 TeV.
* Add CMS_2016_I1421646, CMS azimuthal decorrelations at 8 TeV.
2017-05-09 Andy Buckley <andy.buckley@cern.ch>
* Add CMS_2015_I1380605, per-event yield of the highest transverse
momentum charged particle and charged-particle jet.
* Add CMS_2015_I1370682_PARTON, a partonic-top version of the CMS
7 TeV pseudotop ttbar differential cross-section analysis.
* Adding EHS_1988_I265504 from Felix Riehn: charged-particle
production in K+ p, pi+ p and pp interactions at 250 GeV/c.
* Fix ALICE_2012_I1116147 for pi0 and Lambda feed-down.
2017-05-08 Andy Buckley <andy.buckley@cern.ch>
* Add protection against leptons from QED FSR photon conversions
in assigning PartonicTop decay modes. Thanks to Markus Seidel for
the report and suggested fix.
* Reimplement FastJets methods in terms of new static helper functions.
* Add new mkClusterInputs, mkJet and mkJets static methods to
FastJets, to help with direct calls to FastJet where particle
lookup for constituents and ghost tags are required.
* Fix Doxygen config and Makefile target to allow working with
out-of-source builds. Thanks to Christian Holm Christensen.
* Improve DISLepton for HERA analyses: thanks to Andrii Verbytskyi for the patch!
2017-03-30 Andy Buckley <andy.buckley@cern.ch>
* Replace non-template Analysis::refData functions with C++11 default T=Scatter2D.
2017-03-29 Andy Buckley <andy.buckley@cern.ch>
* Allow yes/no and true/false values for LogX, etc. plot options.
* Add --errs as an alias for --mc-errs to rivet-mkhtml and rivet-cmphistos.
2017-03-08 Peter Richardson <peter.richardson@durham.ac.uk>
* Added 6 analyses AMY_1990_I295160, HRS_1986_I18502, JADE_1983_I190818,
PLUTO_1980_I154270, TASSO_1989_I277658, TPC_1987_I235694 for charged multiplicity
in e+e- at centre-of-mass energies below the Z pole
* Added 2 analyses for charged multiplicity at the Z pole DELPHI_1991_I301657,
OPAL_1992_I321190
* Updated ALEPH_1991_S2435284 to plot the average charged multiplicity
* Added analyses OPAL_2004_I631361, OPAL_2004_I631361_qq, OPAL_2004_I648738 for
gluon jets in e+e-, most need fictitious e+e- > g g process
2017-03-29 Andy Buckley <andy.buckley@cern.ch>
* Add Cut and functor selection args to HeavyHadrons accessor methods.
2017-03-03 Andy Buckley <andy.buckley@cern.ch>
* bin/rivet-mkanalysis: Add FastJets.hh include by default -- it's almost always used.
2017-03-02 Andy Buckley <andy.buckley@cern.ch>
* src/Analyses/CMS_2016_I1473674.cc: Patch from CMS to use partonic tops.
* src/Analyses/CMS_2015_I1370682.cc: Patch to inline jet finding from CMS.
2017-03-01 Andy Buckley <andy.buckley@cern.ch>
* Convert DressedLeptons use of fromDecay to instead veto photons
that match fromHadron() || fromHadronicTau() -- meaning that
electrons and muons from leptonic taus will now be dressed.
* Move Particle and Jet std::function aliases to .fhh files, and
replace many uses of templates for functor arguments with
ParticleSelector meta-types instead.
* Move the canonical implementations of hasAncestorWith, etc. and
isLastWith, etc. from ParticleUtils.hh into Particle.
* Disable the event-to-event beam consistency check if the
ignore-beams mode is active.
2017-02-27 Andy Buckley <andy.buckley@cern.ch>
* Add BoolParticleAND, BoolJetOR, etc. functor combiners to
Tools/ParticleUtils.hh and Tools/JetUtils.hh.
2017-02-24 Andy Buckley <andy.buckley@cern.ch>
* Mark ATLAS_2016_CONF_2016_078 and CMS_2016_PAS_SUS_16_14
analyses as validated, since their cutflows match the documentation.
2017-02-22 Andy Buckley <andy.buckley@cern.ch>
* Add aggregate signal regions to CMS_2016_PAS_SUS_16_14.
2017-02-18 Andy Buckley <andy.buckley@cern.ch>
* Add getEnvParam function, for neater use of environment variable
parameters with a required default.
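A sketch of the env-var-with-default idea (the name `getEnvParamSketch` and the exact signature are illustrative, not necessarily Rivet's): read the variable if set, parse it to the requested type, and otherwise fall back to the supplied default.

```cpp
#include <cstdlib>
#include <sstream>
#include <string>

// Sketch: typed environment-variable lookup with a required default.
// If the variable is unset or unparseable, return the fallback.
template <typename T>
T getEnvParamSketch(const std::string& name, const T& fallback) {
  const char* val = std::getenv(name.c_str());
  if (!val) return fallback;
  std::istringstream ss(val);
  T rtn;
  ss >> rtn;
  return ss.fail() ? fallback : rtn;
}
```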
2017-02-05 Andy Buckley <andy.buckley@cern.ch>
* Add HasBTag and HasCTag jet functors, with lower-case aliases.
2017-01-18 Andy Buckley <andy.buckley@cern.ch>
* Use std::function in functor-expanded method signatures on JetAlg.
2017-01-16 Andy Buckley <andy.buckley@cern.ch>
* Convert FinalState particles() accessors to use std::function
rather than a template arg for sorting, and add filtering functor
support -- including a mix of filtering and sorting functors. Yay
for C++11!
* Add ParticleEffFilter and JetEffFilter constructors from a double (encoding constant efficiency).
* Add Vector3::abseta()
2016-12-13 Andy Buckley <andy.buckley@cern.ch>
* Version 2.5.3 release.
2016-12-12 Holger Schulz <holger.schulz@cern.ch>
* Add cut in BZ calculation in OPAL 4 jet analysis. Paper is not clear
about treatment of parallel vectors, leads to division by zero and
nan-fill and subsequent YODA RangeError (OPAL_2001_S4553896)
2016-12-12 Andy Buckley <andy.buckley@cern.ch>
* Fix bugs in SmearedJets treatment of b & c tagging rates.
* Adding ATLAS_2016_I1467454 analysis (high-mass Drell-Yan at 8 TeV)
* Tweak to 'convert' call to improve the thumbnail quality from rivet-mkhtml/make-plots.
2016-12-07 Andy Buckley <andy.buckley@cern.ch>
* Require Cython 0.24 or later.
2016-12-02 Andy Buckley <andy.buckley@cern.ch>
* Adding L3_2004_I652683 (LEP 1 & 2 event shapes) and
LHCB_2014_I1262703 (Z+jet at 7 TeV).
* Adding leading dijet mass plots to MC_JetAnalysis (and all
derived classes). Thanks to Chris Gutschow!
* Adding CMS_2012_I1298807 (ZZ cross-section at 8 TeV),
CMS_2016_I1459051 (inclusive jet cross-sections at 13 TeV) and
CMS_PAS_FSQ_12_020 (preliminary 7 TeV leading-track underlying
event).
* Adding CDF_2015_1388868 (ppbar underlying event at 300, 900, and 1960 GeV)
* Adding ATLAS_2016_I1467230 (13 TeV min bias),
ATLAS_2016_I1468167 (13 TeV inelastic pp cross-section), and
ATLAS_2016_I1479760 (7 TeV pp double-parton scattering with 4
jets).
2016-12-01 Andy Buckley <andy.buckley@cern.ch>
* Adding ALICE_2012_I1116147 (eta and pi0 pTs and ratio) and
ATLAS_2011_I929691 (7 TeV jet frag)
2016-11-30 Andy Buckley <andy.buckley@cern.ch>
* Fix bash bugs in rivet-buildplugin, including fixing the --cmd mode.
2016-11-28 Andy Buckley <andy.buckley@cern.ch>
* Add LHC Run 2 BSM analyses ATLAS_2016_CONF_2016_037 (3-lepton
and same-sign 2-lepton), ATLAS_2016_CONF_2016_054 (1-lepton +
jets), ATLAS_2016_CONF_2016_078 (ICHEP jets + MET),
ATLAS_2016_CONF_2016_094 (1-lepton + many jets), CMS_2013_I1223519
(alphaT + b-jets), and CMS_2016_PAS_SUS_16_14 (jets + MET).
* Provide convenience reversed-argument versions of apply and
declare methods, to allow presentational choice of declare syntax
in situations where the projection argument is very long, and
reduce requirements on the user's memory since this is one
situation in Rivet where there is no 'most natural' ordering
choice.
2016-11-24 Andy Buckley <andy.buckley@cern.ch>
* Adding pTvec() function to 4-vectors and ParticleBase.
* Fix --pwd option of the rivet script
2016-11-21 Andy Buckley <andy.buckley@cern.ch>
* Add weights and scaling to Cutflow/s.
2016-11-19 Andy Buckley <andy.buckley@cern.ch>
* Add Et(const ParticleBase&) unbound function.
2016-11-18 Andy Buckley <andy.buckley@cern.ch>
* Fix missing YAML quote mark in rivet-mkanalysis.
2016-11-15 Andy Buckley <andy.buckley@cern.ch>
* Fix constness requirements on ifilter_select() and Particle/JetEffFilter::operator().
* src/Analyses/ATLAS_2016_I1458270.cc: Fix inverted particle efficiency filtering.
2016-10-24 Andy Buckley <andy.buckley@cern.ch>
* Add rough ATLAS and CMS photon reco efficiency functions from
Delphes (ATLAS and CMS versions are identical, hmmm)
2016-10-12 Andy Buckley <andy.buckley@cern.ch>
* Tidying/fixing make-plots custom z-ticks code. Thanks to Dmitry Kalinkin.
2016-10-03 Holger Schulz <holger.schulz@cern.ch>
* Fix SpiresID -> InspireID in some analyses (show-analysis pointed to
non-existing web page)
2016-09-29 Holger Schulz <holger.schulz@cern.ch>
* Add Luminosity_fb to AnalysisInfo
* Added some keywords and Lumi to ATLAS_2016_I1458270
2016-09-28 Andy Buckley <andy.buckley@cern.ch>
* Merge the ATLAS and CMS from-Delphes electron and muon tracking
efficiency functions into generic trkeff functions -- this is how
it should be.
* Fix return type typo in Jet::bTagged(FN) templated method.
* Add eta and pT cuts to ATLAS truth b-jet definition.
* Use rounding rather than truncation in Cutflow percentage efficiency printing.
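The rounding-vs-truncation distinction in that last item is small but visible in printed cutflows; a minimal sketch (hypothetical helper names):

```cpp
#include <cmath>

// Percentage efficiency rounded to the nearest integer: 996/1000
// should print as 100%, not 99%.
int pctRounded(double passed, double total) {
  return static_cast<int>(std::lround(100.0 * passed / total));
}

// The old truncating behaviour, for comparison.
int pctTruncated(double passed, double total) {
  return static_cast<int>(100.0 * passed / total);
}
```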
2016-09-28 Frank Siegert <frank.siegert@cern.ch>
* make-plots bugfix in y-axis labels for RatioPlotMode=deviation
2016-09-27 Andy Buckley <andy.buckley@cern.ch>
* Add vector and scalar pT (rather than Et) to MissingMomentum.
2016-09-27 Holger Schulz <holger.schulz@cern.ch>
* Analysis keyword machinery
* rivet -a @semileptonic
* rivet -a @semileptonic@^bdecays -a @semileptonic@^ddecays
2016-09-22 Holger Schulz <holger.schulz@cern.ch>
* Release version 2.5.2
2016-09-21 Andy Buckley <andy.buckley@cern.ch>
* Add a requirement to DressedLeptons that the FinalState passed
as 'bareleptons' will be filtered to only contain charged leptons,
if that is not already the case. Thanks to Markus Seidel for the
suggestion.
2016-09-21 Holger Schulz <holger.schulz@cern.ch>
* Add Simone Amoroso's plugin for hadron spectra (ALEPH_1995_I382179)
* Add Simone Amoroso's plugin for hadron spectra (OPAL_1993_I342766)
2016-09-20 Holger Schulz <holger.schulz@cern.ch>
* Add CMS ttbar analysis from contrib, mark validated (CMS_2016_I1473674)
* Extend rivet-mkhtml --booklet to also work with pdfmerge
2016-09-20 Andy Buckley <andy.buckley@cern.ch>
* Fix make-plots automatic YMax calculation, which had a typo from code cleaning (mea culpa!).
* Fix ChargedLeptons projection, which failed to exclude neutrinos!!! Thanks to Markus Seidel.
* Add templated FN filtering arg versions of the Jet::*Tags() and
Jet::*Tagged() functions.
2016-09-18 Andy Buckley <andy.buckley@cern.ch>
* Add CMS partonic top analysis (CMS_2015_I1397174)
2016-09-18 Holger Schulz <holger.schulz@cern.ch>
* Add L3 xp analysis of eta mesons, thanks Simone (L3_1992_I336180)
* Add D0 1.8 TeV jet shapes analysis, thanks Simone (D0_1995_I398175)
2016-09-17 Andy Buckley <andy.buckley@cern.ch>
* Add has{Ancestor,Parent,Child,Descendant}With functions and
HasParticle{Ancestor,Parent,Child,Descendant}With functors.
2016-09-16 Holger Schulz <holger.schulz@cern.ch>
* Add ATLAS 8TeV ttbar analysis from contrib (ATLAS_2015_I1404878)
2016-09-16 Andy Buckley <andy.buckley@cern.ch>
* Add particles(GenParticlePtr) to RivetHepMC.hh
* Add hasParent, hasParentWith, and hasAncestorWith to Particle.
2016-09-15 Holger Schulz <holger.schulz@cern.ch>
* Add ATLAS 8TeV dijet analysis from contrib (ATLAS_2015_I1393758)
* Add ATLAS 8TeV 'number of tracks in jets' analysis from contrib (ATLAS_2016_I1419070)
* Add ATLAS 8TeV g->H->WW->enumunu analysis from contrib (ATLAS_2016_I1444991)
2016-09-14 Holger Schulz <holger.schulz@cern.ch>
* Explicit std::toupper and std::tolower to make clang happy
2016-09-14 Andy Buckley <andy.buckley@cern.ch>
* Add ATLAS Run 2 0-lepton SUSY and monojet search papers (ATLAS_2016_I1452559, ATLAS_2016_I1458270)
2016-09-13 Andy Buckley <andy.buckley@cern.ch>
* Add experimental Cutflow and Cutflows objects for BSM cut tracking.
* Add 'direct' versions of any, all, none to Utils.hh, with an
implicit bool() transforming function.
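These 'direct' variants can be sketched generically (illustrative names `anyOf`/`allOf`/`noneOf`, not necessarily the Utils.hh spellings): no predicate argument, each element is simply tested through its bool() conversion.

```cpp
#include <algorithm>
#include <iterator>

// 'Direct' any/all/none: each element is tested via bool() conversion,
// so e.g. a container of ints treats non-zero as true.
template <typename C>
bool anyOf(const C& c) {
  return std::any_of(std::begin(c), std::end(c),
                     [](const auto& x) { return bool(x); });
}
template <typename C>
bool allOf(const C& c) {
  return std::all_of(std::begin(c), std::end(c),
                     [](const auto& x) { return bool(x); });
}
template <typename C>
bool noneOf(const C& c) { return !anyOf(c); }
```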
2016-09-13 Holger Schulz <holger.schulz@cern.ch>
* Add and mark validated B+ to omega analysis (BABAR_2013_I1116411)
* Add and mark validated D0 to pi- analysis (BABAR_2015_I1334693)
* Add a few more particle names and use PID names in recently added
analyses
* Add Simone's OPAL b-frag analysis (OPAL_2003_I599181) after some
cleanup and heavy usage of new features
* Restructured DELPHI_2011_I890503 in the same manner --- picks up
a few more B-hadrons now (e.g. 20523 and such)
* Clean up and add ATLAS 8TeV MinBias (from contrib ATLAS_2016_I1426695)
2016-09-12 Andy Buckley <andy.buckley@cern.ch>
* Add a static constexpr DBL_NAN to Utils.hh for convenience, and
move some utils stuff out of MathHeader.hh
2016-09-12 Holger Schulz <holger.schulz@cern.ch>
* Add count function to Tools/Utils.h
* Add and mark validated B0bar and Bminus-decay to pi analysis (BELLE_2013_I1238273)
* Add and mark validated B0-decay analysis (BELLE_2011_I878990)
* Add and mark validated B to D decay analysis (BELLE_2011_I878990)
2016-09-08 Andy Buckley <andy.buckley@cern.ch>
* Add C-array version of multi-target Analysis::scale() and normalize(), and fix (semantic) constness.
* Add == and != operators for cuts applied to integers.
* Add missing delta{Phi,Eta,Rap}{Gtr,Less} functors to ParticleBaseUtils.hh
2016-09-07 Andy Buckley <andy.buckley@cern.ch>
* Add templated functor filtering args to the Particle parent/child/descendent methods.
2016-09-06 Andy Buckley <andy.buckley@cern.ch>
* Add ATLAS Run 1 medium and tight electron ID efficiency functions.
* Update configure scripts to use newer (Py3-safe) Python testing macros.
2016-09-02 Andy Buckley <andy.buckley@cern.ch>
* Add isFirstWith(out), isLastWith(out) functions, and functor
wrappers, using Cut and templated function/functor args.
* Add Particle::parent() method.
* Add using import/typedef of HepMC *Ptr types (useful step for HepMC 2.07 and 3.00).
* Various typo fixes (and canonical renaming) in ParticleBaseUtils functor collection.
* Add ATLAS MV2c10 and MV2c20 b-tagging effs to SmearingFunctions.hh collection.
2016-09-01 Andy Buckley <andy.buckley@cern.ch>
* Add a PartonicTops projection.
* Add overloaded versions of the Event::allParticles() method with
selection Cut or templated selection function arguments.
2016-08-25 Andy Buckley <andy.buckley@cern.ch>
* Add rapidity scheme arg to DeltaR functor constructors.
2016-08-23 Andy Buckley <andy.buckley@cern.ch>
* Provide an Analysis::bookCounter(d,x,y, title) function, for
convenience and making the mkanalysis template valid.
* Improve container utils functions, and provide combined
remove_if+erase filter_* functions for both select- and
discard-type selector functions.
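The combined remove_if+erase helpers are essentially the classic erase-remove idiom in both polarities; a sketch with illustrative names (`filter_select` keeps matches, `filter_discard` drops them):

```cpp
#include <algorithm>
#include <vector>

// Select-type filter: keep elements passing the predicate.
template <typename T, typename FN>
std::vector<T>& filter_select(std::vector<T>& v, FN keep) {
  v.erase(std::remove_if(v.begin(), v.end(),
                         [&](const T& x) { return !keep(x); }),
          v.end());
  return v;
}

// Discard-type counterpart: drop elements passing the predicate.
template <typename T, typename FN>
std::vector<T>& filter_discard(std::vector<T>& v, FN drop) {
  v.erase(std::remove_if(v.begin(), v.end(), drop), v.end());
  return v;
}
```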
2016-08-22 Holger Schulz <holger.schulz@cern.ch>
* Bugfix in rivet-mkhtml (NoneType: ana.spiresID() --> spiresid)
* Added <numeric> include to Rivet/Tools/Utils.h to make gcc6 happy
2016-08-22 Andy Buckley <andy.buckley@cern.ch>
* Add efffilt() functions and Particle/JetEffFilt functors to SmearingFunctions.hh
2016-08-20 Andy Buckley <andy.buckley@cern.ch>
* Adding filterBy methods for Particle and Jet which accept
generic boolean functions as well as the Cut specialisation.
2016-08-18 Andy Buckley <andy.buckley@cern.ch>
* Add a Jet::particles(Cut&) method, for inline filtering of jet constituents.
* Add 'conjugate' behaviours to container head and tail functions
via negative length arg values.
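The 'conjugate' head/tail behaviour can be sketched as follows (free functions with illustrative names, assuming the semantics described above): a positive n takes that many elements, a negative n takes all but the last (or first) |n|.

```cpp
#include <algorithm>
#include <vector>

// head(v, n): first n elements; with n < 0, all but the last |n|.
template <typename T>
std::vector<T> head(const std::vector<T>& v, long n) {
  const long size = static_cast<long>(v.size());
  const long len = (n >= 0) ? std::min(n, size) : std::max(size + n, 0L);
  return std::vector<T>(v.begin(), v.begin() + len);
}

// tail(v, n): last n elements; with n < 0, all but the first |n|.
template <typename T>
std::vector<T> tail(const std::vector<T>& v, long n) {
  const long size = static_cast<long>(v.size());
  const long len = (n >= 0) ? std::min(n, size) : std::max(size + n, 0L);
  return std::vector<T>(v.end() - len, v.end());
}
```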
2016-08-15 Andy Buckley <andy.buckley@cern.ch>
* Add convenience headers for including all final-state and
smearing projections, to save user typing.
2016-08-12 Andy Buckley <andy.buckley@cern.ch>
* Add standard MET functions for ATLAS R1 (and currently copies
for R2 and CMS).
* Add lots of vector/container helpers for e.g. container slicing,
summing, and min/max calculation.
* Adapt SmearedMET to take *two* arguments, since SET is typically
used to calculate MET resolution.
* Adding functors for computing vector & ParticleBase differences
w.r.t. another vector.
2016-08-12 Holger Schulz <holger.schulz@cern.ch>
* Implemented a few more cuts in prompt photon analysis
(CDF_1993_S2742446) but to no avail, the rise of the data towards
larger costheta values cannot be reproduced --- maybe this is a
candidate for more scrutiny and using the boosting machinery such that
the c.m. cuts can be done in a non-approximate way
2016-08-11 Holger Schulz <holger.schulz@cern.ch>
* Rename CDF_2009_S8383952 to CDF_2009_I856131 due to invalid Spires
entry.
* Add InspireID to all analyses known by their Spires keys
2016-08-09 Holger Schulz <holger.schulz@cern.ch>
* Release 2.5.1
2016-08-08 Andy Buckley <andy.buckley@cern.ch>
* Add a simple MC_MET analysis for out-of-the-box MET distribution testing.
2016-08-08 Holger Schulz <holger.schulz@cern.ch>
* Add DELPHI_2011_I890503 b-quark fragmentation function measurement,
which supersedes DELPHI_2002_069_CONF_603. The latter is marked
OBSOLETE.
2016-08-05 Holger Schulz <holger.schulz@cern.ch>
* Use Jet mass and energy smearing in CDF_1997_... six-jet analysis,
mark validated.
* Mark CDF_2001_S4563131 validated
* D0_1996_S3214044 --- cut on jet Et rather than pT, fix filling of
costheta and theta plots, mark validated. Concerning the jet
algorithm, I tried the fastjet/D0RunIConePlugin.hh implementation,
but that really does not help.
* D0_1996_S3324664 --- fix normalisations, sorting jets properly now,
cleanup and mark validated.
2016-08-04 Holger Schulz <holger.schulz@cern.ch>
* Use Jet mass and energy smearing in CDF_1996_S310 ... jet properties
analysis. Clean up the analysis and mark it validated. Added some more run info.
The same for CDF_1996_S334... (pretty much the same cuts, different
observables).
* Minor fixes in SmearedJets projection
2016-08-03 Andy Buckley <andy.buckley@cern.ch>
* Protect SmearedJets against loss of tagging information if a
momentum smearing function is used (rather than a dedicated Jet
smearing fn) via implicit casts.
2016-08-02 Andy Buckley <andy.buckley@cern.ch>
* Add SmearedMET projection, wrapping MissingMomentum.
* Include base truth-level projections in SmearedParticles/Jets compare() methods.
2016-07-29 Andy Buckley <andy.buckley@cern.ch>
* Rename TOTEM_2012_002 to proper TOTEM_2012_I1220862 name.
* Remove conditional building of obsolete, preliminary and
unvalidated analyses. Now always built, since there are sufficient
warnings.
2016-07-28 Holger Schulz <holger.schulz@cern.ch>
* Mark D0_2000... W pT analysis validated
* Mark LHCB_2011_S919... phi meson analysis validated
2016-07-25 Andy Buckley <andy.buckley@cern.ch>
* Add unbound accessors for momentum properties of ParticleBase objects.
* Add Rivet/Tools/ParticleBaseUtils.hh to collect tools like functors for particle & jet filtering.
* Add vector<ptr> versions of Analysis::scale() and ::normalize(), for batched scaling.
* Add Analysis::scale() and Analysis::divide() methods for Counter types.
* Utils.hh: add a generic sum() function for containers, and use auto in loop to support arrays.
* Set data path as well as lib path in scripts with --pwd option, and use abs path to $PWD.
* Add setAnalysisDataPaths and addAnalysisDataPath to RivetPaths.hh/cc and Python.
* Pass absolutized RIVET_DATA_PATH from rivet-mkhtml to rivet-cmphistos.
2016-07-24 Holger Schulz <holger.schulz@cern.ch>
* Mark CDF_2008_S77... b jet shapes validated
* Added protection against a low-stats YODA exception
in finalize for that analysis
2016-07-22 Andy Buckley <andy.buckley@cern.ch>
* Fix newly introduced bug in make-plots which led to data point
markers being skipped for all but the last bin.
2016-07-21 Andy Buckley <andy.buckley@cern.ch>
* Add pid, abspid, charge, abscharge, charge3, and abscharge3 Cut
enums, handled by Particle cut targets.
* Add abscharge() and abscharge3() methods to Particle.
* Add optional Cut and duplicate-removal flags to Particle children & descendants methods.
* Add unbound versions of Particle is* and from* methods, for easier functor use.
* Add Particle::isPrompt() as a member rather than unbound function.
* Add protections against -ve mass from numerical precision errors in smearing functions.
2016-07-20 Andy Buckley <andy.buckley@cern.ch>
* Move several internal system headers into the include/Rivet/Tools directory.
* Fix median-computing safety logic in ATLAS_2010_S8914702 and
tidy this and @todo markers in several similar analyses.
* Add to_str/toString and stream functions for Particle, and a bit
of Particle util function reorganisation.
* Add isStrange/Charm/Bottom PID and Particle functions.
* Add RangeError exception throwing from MathUtils.hh stats
functions if given empty/mismatched datasets.
* Add Rivet/Tools/PrettyPrint.hh, based on https://louisdx.github.io/cxx-prettyprint/
* Allow use of path regex group references in .plot file keyed values.
2016-07-20 Holger Schulz <holger.schulz@cern.ch>
* Fix the --nskip behaviour on the main rivet script.
2016-07-07 Andy Buckley <andy.buckley@cern.ch>
* Release version 2.5.0
2016-07-01 Andy Buckley <andy.buckley@cern.ch>
* Fix pandoc interface flag version detection.
2016-06-28 Andy Buckley <andy.buckley@cern.ch>
* Release version 2.4.3
* Add ATLAS_2016_I1468168 early ttbar fully leptonic fiducial
cross-section analysis at 13 TeV.
2016-06-21 Andy Buckley <andy.buckley@cern.ch>
* Add ATLAS_2016_I1457605 inclusive photon analysis at 8 TeV.
2016-06-15 Andy Buckley <andy.buckley@cern.ch>
* Add a --show-bibtex option to the rivet script, for convenient
outputting of a BibTeX db for the used analyses.
2016-06-14 Andy Buckley <andy.buckley@cern.ch>
* Add and rename 4-vector boost calculation methods: new methods
beta, betaVec, gamma & gammaVec are now preferred to the
deprecated boostVector method.
2016-06-13 Andy Buckley <andy.buckley@cern.ch>
* Add and use projection handling methods declare(proj, pname) and
apply<PROJ>(evt, pname) rather than the longer and explicitly
'projectiony' addProjection & applyProjection.
* Start using the DEFAULT_RIVET_ANALYSIS_CTOR macro (newly created
preferred alias to long-present DEFAULT_RIVET_ANA_CONSTRUCTOR)
* Add a DEFAULT_RIVET_PROJ_CLONE macro for implementing the
clone() method boiler-plate code in projections.
2016-06-10 Andy Buckley <andy.buckley@cern.ch>
* Add a NonPromptFinalState projection, and tweak the
PromptFinalState and unbound Particle functions a little in
response. May need some more finessing.
* Add user-facing aliases to ProjectionApplier add, get, and apply
methods... the templated versions of which can now be called
without using the word 'projection', which makes the function
names a bit shorter and pithier, and reduces semantic repetition.
2016-06-10 Andy Buckley <andy.buckley@cern.ch>
* Adding ATLAS_2015_I1397635 Wt at 8 TeV analysis.
* Adding ATLAS_2015_I1390114 tt+b(b) at 8 TeV analysis.
2016-06-09 Andy Buckley <andy.buckley@cern.ch>
* Downgrade some non-fatal error messages from ERROR to WARNING
status, because *sigh* ATLAS's software treats any appearance of
the word 'ERROR' in its log file as a reason to report the job as
failed (facepalm).
2016-06-07 Andy Buckley <andy.buckley@cern.ch>
* Adding ATLAS 13 TeV minimum bias analysis, ATLAS_2016_I1419652.
2016-05-30 Andy Buckley <andy.buckley@cern.ch>
* pyext/rivet/util.py: Add pandoc --wrap/--no-wrap CLI detection
and batch conversion.
* bin/rivet: add -o as a more standard 'output' option flag alias to -H.
2016-05-23 Andy Buckley <andy.buckley@cern.ch>
* Remove the last ref-data bin from table 16 of
ATLAS_2010_S8918562, due to data corruption. The corresponding
HepData record will be amended by ATLAS.
2016-05-12 Holger Schulz <holger.schulz@durham.ac.uk>
* Mark ATLAS_2012_I1082009 as validated after exhaustive tests with
Pythia8 and Sherpa in inclusive QCD mode.
2016-05-11 Andy Buckley <andy.buckley@cern.ch>
* Specialise return error codes from the rivet script.
2016-05-11 Andy Buckley <andy.buckley@cern.ch>
* Add Event::allParticles() to provide neater (but not *helpful*)
access to Rivet-wrapped versions of the raw particles in the
Event::genEvent() record, and hence reduce HepMC digging.
2016-05-05 Andy Buckley <andy.buckley@cern.ch>
* Version 2.4.2 release!
* Update SLD_2002_S4869273 ref data to match publication erratum,
now updated in HepData. Thanks to Peter Skands for the report and
Mike Whalley / Graeme Watt for the quick fix and heads-up.
2016-04-27 Andy Buckley <andy.buckley@cern.ch>
* Add CMS_2014_I1305624 event shapes analysis, with standalone
variable calculation struct embedded in an unnamed namespace.
2016-04-19 Andy Buckley <andy.buckley@cern.ch>
* Various clean-ups and fixes in ATLAS analyses using isolated
photons with median pT density correction.
2016-04-18 Andy Buckley <andy.buckley@cern.ch>
* Add transformBy(LT) methods to Particle and Jet.
* Add mkObjectTransform and mkFrameTransform factory methods to LorentzTransform.
2016-04-17 Andy Buckley <andy.buckley@cern.ch>
* Add null GenVertex protection in Particle children & descendants methods.
2016-04-15 Andy Buckley <andy.buckley@cern.ch>
* Add ATLAS_2015_I1397637, ATLAS 8 TeV boosted top cross-section vs. pT
2016-04-14 Andy Buckley <andy.buckley@cern.ch>
* Add a --no-histos argument to the rivet script.
2016-04-13 Andy Buckley <andy.buckley@cern.ch>
* Add ATLAS_2015_I1351916 (8 TeV Z FB asymmetry) and
ATLAS_2015_I1408516 (8 TeV Z phi* and pT) analyses, and their _EL,
_MU variants.
2016-04-12 Andy Buckley <andy.buckley@cern.ch>
* Patch PID utils for ordering issues in baryon decoding.
2016-04-11 Andy Buckley <andy.buckley@cern.ch>
* Actually implement ZEUS_2001_S4815815... only 10 years late!
2016-04-08 Andy Buckley <andy.buckley@cern.ch>
* Add a --guess-prefix flag to rivet-config, cf. fastjet-config.
* Add RIVET_DATA_PATH variable and related functions in C++ and
Python as a common first-fallback for RIVET_REF_PATH,
RIVET_INFO_PATH, and RIVET_PLOT_PATH.
* Add --pwd options to rivet-mkhtml and rivet-cmphistos
2016-04-07 Andy Buckley <andy.buckley@cern.ch>
* Remove implicit conventional event rotation for HERA -- this
needs to be done explicitly from now.
* Add comBoost functions and methods to Beam.hh, and tidy
LorentzTransformation.
* Restructure Beam projection functions for beam particle and
sqrtS extraction, and add asqrtS functions.
* Rename and improve PID and Particle Z, A, lambda functions ->
nuclZ, nuclA, nuclNlambda.
2016-04-05 Andy Buckley <andy.buckley@cern.ch>
* Improve binIndex function, with an optional argument to allow
overflow lookup, and add it to testMath.
* Adding setPE, setPM, setPtEtaPhiM, etc. methods and
corresponding mk* static methods to FourMomentum, as well as
adding more convenience aliases and vector attributes for
completeness. Coordinate conversion functions taken from
HEPUtils::P4. New attrs also mapped to ParticleBase.
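The overflow-lookup option on binIndex described above can be sketched as follows. This is a hypothetical Python illustration of the idea, not the Rivet C++ implementation in MathUtils.hh; the name and signature are assumptions:

```python
import bisect

def bin_index(edges, x, allow_overflow=False):
    """Return the index of the bin containing x, given ascending bin edges.

    Returns -1 for out-of-range x unless allow_overflow is True, in
    which case values below/above the range map to the first/last bin.
    """
    if x < edges[0]:
        return 0 if allow_overflow else -1
    if x >= edges[-1]:
        return len(edges) - 2 if allow_overflow else -1
    # bisect_right finds the first edge strictly greater than x
    return bisect.bisect_right(edges, x) - 1
```

With edges [0, 1, 2, 4], a value of 2.5 lands in bin 2, while 5 gives -1 normally but the last bin (index 2) with overflow lookup enabled.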
2016-03-29 Andy Buckley <andy.buckley@cern.ch>
* ALEPH_1996_S3196992.cc, ATLAS_2010_S8914702.cc,
ATLAS_2011_I921594.cc, ATLAS_2011_S9120807.cc,
ATLAS_2012_I1093738.cc, ATLAS_2012_I1199269.cc,
ATLAS_2013_I1217867.cc, ATLAS_2013_I1244522.cc,
ATLAS_2013_I1263495.cc, ATLAS_2014_I1307756.cc,
ATLAS_2015_I1364361.cc, CDF_2008_S7540469.cc,
CMS_2015_I1370682.cc, MC_JetSplittings.cc, STAR_2006_S6870392.cc:
Updates for new FastJets interface, and other cleaning.
* Deprecate 'standalone' FastJets constructors -- they are
misleading.
* More improvements around jets, including unbound conversion and
filtering routines between collections of Particles, Jets, and
PseudoJets.
* Place 'Cut' forward declaration in a new Cuts.fhh header.
* Adding a Cuts::OPEN extern const (a bit more standard- and
constant-looking than Cuts::open())
2016-03-28 Andy Buckley <andy.buckley@cern.ch>
* Improvements to FastJets constructors, including specification
of optional AreaDefinition as a constructor arg, disabling dodgy
no-FS constructors which I suspect don't work properly in the
brave new world of automatic ghost tagging, using a bit of
judicious constructor delegation, and completing/exposing use of
shared_ptr for internal memory management.
2016-03-26 Andy Buckley <andy.buckley@cern.ch>
* Remove Rivet/Tools/RivetBoost.hh and Boost references from
rivet-config, rivet-buildplugin, and configure.ac. It's gone ;-)
* Replace Boost assign usage with C++11 brace initialisers. All
Boost use is gone from Rivet!
* Replace Boost lexical_cast and string algorithms.
2016-03-25 Andy Buckley <andy.buckley@cern.ch>
* Bug-fix in semi-leptonic top selection of CMS_2015_I1370682.
2016-03-12 Andy Buckley <andy.buckley@cern.ch>
* Allow multi-line major tick labels on make-plots linear x and y
axes. Linebreaks are indicated by \n in the .dat file.
2016-03-09 Andy Buckley <andy.buckley@cern.ch>
* Release 2.4.1
2016-03-03 Andy Buckley <andy.buckley@cern.ch>
* Add a --nskip flag to the rivet command-line tool, to allow
processing to begin in the middle of an event file (useful for
batched processing of large files, in combination with --nevts)
2016-03-03 Holger Schulz <holger.schulz@durham.ac.uk>
* Add ATLAS 7 TeV event shapes in Z+jets analysis (ATLAS_2016_I1424838)
2016-02-29 Andy Buckley <andy.buckley@cern.ch>
* Update make-plots to use multiprocessing rather than threading.
* Add FastJets::trimJet method, thanks to James Monk for the
suggestion and patch.
* Add new preferred name PID::charge3 in place of
PID::threeCharge, and also convenience PID::abscharge and
PID::abscharge3 functions -- all derived from changes in external
HEPUtils.
* Add analyze(const GenEvent*) and analysis(string&) methods to
AnalysisHandler, plus some docstring improvements.
2016-02-23 Andy Buckley <andy.buckley@cern.ch>
* New ATLAS_2015_I1394679 analysis.
* New MC_HHJETS analysis from Andreas Papaefstathiou.
* Ref data updates for ATLAS_2013_I1219109, ATLAS_2014_I1312627,
and ATLAS_2014_I1319490.
* Add automatic output paging to 'rivet --show-analyses'
2016-02-16 Andy Buckley <andy.buckley@cern.ch>
* Apply cross-section unit fixes and plot styling improvements to
ATLAS_2013_I1217863 analyses, thanks to Christian Gutschow.
* Fix to rivet-cmphistos to avoid overwriting RatioPlotYLabel if
already set via e.g. the PLOT pseudo-file. Thanks to Johann Felix
v. Soden-Fraunhofen.
2016-02-15 Andy Buckley <andy.buckley@cern.ch>
* Add Analysis::bookCounter and some machinery in rivet-cmphistos
to avoid getting tripped up by unplottable (for now) data types.
* Add --font and --format options to rivet-mkhtml and make-plots,
to replace the individual flags used for that purpose. Not fully
cleaned up, but a necessary step.
* Add new plot styling options to rivet-cmphistos and
rivet-mkhtml. Thanks to Gavin Hesketh.
* Modify rivet-cmphistos and rivet-mkhtml to apply plot hiding if
*any* path component is hidden by an underscore prefix, as
implemented in AOPath, plus other tidying using new AOPath
methods.
* Add pyext/rivet/aopaths.py, containing AOPath object for central
& standard decoding of Rivet-standard analysis object path
structures.
2016-02-12 Andy Buckley <andy.buckley@cern.ch>
* Update ParticleIdUtils.hh (i.e. PID:: functions) to use the
functions from the latest version of MCUtils' PIDUtils.h.
2016-01-15 Andy Buckley <andy.buckley@cern.ch>
* Change rivet-cmphistos path matching logic from match to search
(user can add explicit ^ marker if they want match semantics).
2015-12-20 Andy Buckley <andy.buckley@cern.ch>
* Improve linspace (and hence also logspace) precision errors by
using multiplication rather than repeated addition to build edge
list (thanks to Holger Schulz for the suggestion).
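The edge-building change above can be illustrated with a standalone sketch (not the actual Rivet implementation): computing each edge as start + i*step keeps rounding error bounded per edge, whereas repeated addition lets it accumulate across the whole list.

```python
def linspace(start, end, nbins):
    """Return nbins+1 bin edges from start to end.

    Each edge is computed as start + i*step rather than by repeatedly
    adding step, so floating-point error does not accumulate; the last
    edge is pinned to end exactly.
    """
    step = (end - start) / nbins
    edges = [start + i * step for i in range(nbins)]
    edges.append(end)  # guarantee an exact endpoint
    return edges
```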
2015-12-15 Andy Buckley <andy.buckley@cern.ch>
* Add cmphistos and make-plots machinery for handling 'suffix'
variations on plot paths, currently just by plotting every line,
with the variations in a 70% faded tint.
* Add Beam::pv() method for finding the beam interaction primary
vertex 4-position.
* Add a new Particle::setMomentum(E,x,y,z) method, and an origin
position member which is automatically populated from the
GenParticle, with access methods corresponding to the momentum
ones.
2015-12-10 Andy Buckley <andy.buckley@cern.ch>
* make-plots: improve custom tick attribute handling, allowing
empty lists. Also, any whitespace now counts as a tick separator
-- explicit whitespace in labels should be done via ~ or similar
LaTeX markup.
2015-12-04 Andy Buckley <andy.buckley@cern.ch>
* Pro-actively use -m/-M arguments when initially loading
histograms in mkhtml, *before* passing them to cmphistos.
2015-12-03 Andy Buckley <andy.buckley@cern.ch>
* Move contains() and has_key() functions on STL containers from std to Rivet namespaces.
* Adding IsRef attributes to all YODA refdata files; this will be
used to replace the /REF prefix in Rivet v3 onwards. The migration
has also removed leading # characters from BEGIN/END blocks, as
per YODA format evolution: new YODA versions as required by
current Rivet releases are able to read both the old and new
formats.
2015-12-02 Andy Buckley <andy.buckley@cern.ch>
* Add handling of a command-line PLOT 'file' argument to rivet-mkhtml, cf. rivet-cmphistos.
* Improvements to rivet-mkhtml behaviour re. consistency with
rivet-cmphistos in how multi-part histo paths are decomposed into

analysis-name + histo name, and removal of 'NONE' strings.
2015-11-30 Andy Buckley <andy.buckley@cern.ch>
* Relax rivet/plotinfo.py pattern matching on .plot file
components, to allow leading whitespace and around = signs, and to
make the leading # optional on BEGIN/END blocks.
2015-11-26 Andy Buckley <andy.buckley@cern.ch>
* Write out intermediate histogram files by default, with event interval of 10k.
2015-11-25 Andy Buckley <andy.buckley@cern.ch>
* Protect make-plots against lock-up due to partial pstricks command when there are no data points.
2015-11-17 Andy Buckley <andy.buckley@cern.ch>
* rivet-cmphistos: Use a ratio label that doesn't mention 'data' when plotting MC vs. MC.
2015-11-12 Andy Buckley <andy.buckley@cern.ch>
* Tweak plot and subplot sizing defaults in make-plots so the
total canvas is always the same size by default.
2015-11-10 Andy Buckley <andy.buckley@cern.ch>
* Handle 2D histograms better in rivet-cmphistos (since they can't be overlaid)
2015-11-05 Andy Buckley <andy.buckley@cern.ch>
* Allow comma-separated analysis name lists to be passed to a single -a/--analysis/--analyses option.
* Convert namespace-global const variables to be static, to suppress compiler warnings.
* Use standard MAX_DBL and MAX_INT macros as a source for MAXDOUBLE and MAXINT, to suppress GCC5 warnings.
2015-11-04 Holger Schulz <holger.schulz@durham.ac.uk>
* Adding LHCB inelastic xsection measurement (LHCB_2015_I1333223)
* Adding ATLAS colour flow in ttbar->semileptonic measurement (ATLAS_2015_I1376945)
2015-10-07 Chris Pollard <cpollard@cern.ch>
* Release 2.4.0
2015-10-06 Holger Schulz <holger.schulz@durham.ac.uk>
* Adding CMS_2015_I1327224 dijet analysis (Mjj>2 TeV)
2015-10-03 Holger Schulz <holger.schulz@durham.ac.uk>
* Adding CMS_2015_I1346843 Z+gamma
2015-09-30 Andy Buckley <andy.buckley@cern.ch>
* Important improvement in FourVector & FourMomentum: new reverse()
method to return a 4-vector in which only the spatial component
has been inverted cf. operator- which flips the t/E component as
well.
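The reverse()-vs-operator- distinction can be shown with a toy 4-vector (illustrative only, not the actual FourVector/FourMomentum classes):

```python
from dataclasses import dataclass

@dataclass
class FourVec:
    t: float  # time/energy component
    x: float
    y: float
    z: float

    def reverse(self):
        # Flip only the spatial components, keeping t/E unchanged
        return FourVec(self.t, -self.x, -self.y, -self.z)

    def __neg__(self):
        # operator- flips every component, including t/E
        return FourVec(-self.t, -self.x, -self.y, -self.z)
```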
2015-09-28 Holger Schulz <holger.schulz@durham.ac.uk>
* Adding D0_2000_I503361 ZPT at 1800 GeV
2015-09-29 Chris Pollard <cpollard@cern.ch>
* Adding ATLAS_2015_CONF_2015_041
2015-09-29 Chris Pollard <cpollard@cern.ch>
* Adding ATLAS_2015_I1387176
2015-09-29 Chris Pollard <cpollard@cern.ch>
* Adding ATLAS_2014_I1327229
2015-09-28 Chris Pollard <cpollard@cern.ch>
* Adding ATLAS_2014_I1326641
2015-09-28 Holger Schulz <holger.schulz@durham.ac.uk>
* Adding CMS_2013_I1122847 FB asymmetry in DY analysis
2015-09-28 Andy Buckley <andy.buckley@cern.ch>
* Adding CMS_2015_I1385107 LHA pp 2.76 TeV track-jet underlying event.
2015-09-27 Andy Buckley <andy.buckley@cern.ch>
* Adding CMS_2015_I1384119 LHC Run 2 minimum bias dN/deta with no B field.
2015-09-25 Andy Buckley <andy.buckley@cern.ch>
* Adding TOTEM_2014_I1328627 forward charged density in eta analysis.
2015-09-23 Andy Buckley <andy.buckley@cern.ch>
* Add CMS_2015_I1310737 Z+jets analysis.
* Allow running MC_{W,Z}INC, MC_{W,Z}JETS as separate bare lepton
analyses.
2015-09-23 Andy Buckley <andy.buckley@cern.ch>
* FastJets now allows use of FastJet pure ghosts, by excluding
them from the constituents of Rivet Jet objects. Thanks to James
Monk for raising the issue and providing a patch.
2015-09-15 Andy Buckley <andy.buckley@cern.ch>
* More MissingMomentum changes: add optional 'mass target'
argument when retrieving the vector sum as a 4-momentum, with the
mass defaulting to 0 rather than sqrt(sum(E)^2 - sum(p)^2).
* Require Boost 1.55 for robust compilation, as pointed out by
Andrii Verbytskyi.
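The 'mass target' idea above amounts to setting the energy of the summed vector from its 3-momentum and a chosen mass, rather than keeping the (usually meaningless) invariant mass of the raw sum. A standalone sketch, with a hypothetical function name:

```python
import math

def missing_momentum_4vec(px, py, pz, mass_target=0.0):
    """Build a 4-momentum (E, px, py, pz) from a summed 3-momentum,
    with the energy set so the invariant mass equals mass_target
    (default 0, i.e. a light-like vector)."""
    e = math.sqrt(px*px + py*py + pz*pz + mass_target**2)
    return (e, px, py, pz)
```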
2015-09-10 Andy Buckley <andy.buckley@cern.ch>
* Allow access to MissingMomentum projection via WFinder.
* Adding extra methods to MissingMomentum, to make it more user-friendly.
2015-09-09 Andy Buckley <andy.buckley@cern.ch>
* Fix factor of 2 in LHCB_2013_I1218996 normalisation, thanks to
Felix Riehn for the report.
2015-08-20 Frank Siegert <frank.siegert@cern.ch>
* Add function to ZFinder to retrieve all fiducial dressed
leptons, e.g. to allow vetoing on a third one (proposed by
Christian Gutschow).
2015-08-18 Andy Buckley <andy.buckley@cern.ch>
* Rename xs and counter AOs to start with underscores, and modify
rivet-cmphistos to skip AOs whose basenames start with _.
2015-08-17 Andy Buckley <andy.buckley@cern.ch>
* Add writing out of cross-section and total event counter by
default. Need to add some name protection to avoid them being
plotted.
2015-08-16 Andy Buckley <andy.buckley@cern.ch>
* Add templated versions of Analysis::refData() to use data types
other than Scatter2DPtr, and convert the cached ref data store to
generic AnalysisObjectPtrs to make it possible.
2015-07-29 Andy Buckley <andy.buckley@cern.ch>
* Add optional Cut arguments to all the Jet tag methods.
* Add exception handling and pre-emptive testing for a
non-writeable output directory (based on patch from Lukas
Heinrich).
2015-07-24 Andy Buckley <andy.buckley@cern.ch>
* Version 2.3.0 release.
2015-07-02 Holger Schulz <holger.schulz@durham.ac.uk>
* Tidy up ATLAS higgs combination analysis.
* Add ALICE kaon, pion analysis (ALICE_2015_I1357424)
* Add ALICE strange baryon analysis (ALICE_2014_I1300380)
* Add CDF ZpT measurement in Z->ee events analysis (CDF_2012_I1124333)
* Add validated ATLAS W+charm measurement (ATLAS_2014_I1282447)
* Add validated CMS jet and dijet analysis (CMS_2013_I1208923)
2015-07-01 Andy Buckley <andy.buckley@cern.ch>
* Define a private virtual operator= on Projection, to block
'sliced' accidental value copies of derived class instances.
* Add new muon-in-jet options to FastJet constructors, pass that
and invisibles enums correctly to JetAlg, tweak the default
strategies, and add a FastJets constructor from a
fastjet::JetDefinition (while deprecating the plugin-by-reference
constructor).
2015-07-01 Holger Schulz <holger.schulz@durham.ac.uk>
* Add D0 phi* measurement (D0_2015_I1324946).
* Remove WUD and MC_PHOTONJETUE analyses
* Don't abort ATLAS_2015_I1364361 if there is no stable Higgs;
print a warning instead and veto the event
2015-07-01 Andy Buckley <andy.buckley@cern.ch>
* Add all, none, from-decay muon filtering options to JetAlg and
FastJets.
* Rename NONPROMPT_INVISIBLES to DECAY_INVISIBLES for clarity &
extensibility.
* Remove FastJets::ySubJet, splitJet, and filterJet methods --
they're BDRS-paper-specific and you can now use the FastJet
objects directly to do this and much more.
* Adding InvisiblesStrategy to JetAlg, using it rather than a bool
in the useInvisibles method, and updating FastJets to use this
approach for its particle filtering and to optionally use the enum
in the constructor arguments. The new default invisibles-using
behaviour is to still exclude _prompt_ invisibles, and the default
is still to exclude them all. Only one analysis
(src/Analyses/STAR_2006_S6870392.cc) required updating, since it
was the only one to be using the FastJets legacy seed_threshold
constructor argument.
* Adding isVisible method to Particle, taken from
VisibleFinalState (which now uses this).
2015-06-30 Andy Buckley <andy.buckley@cern.ch>
* Marking many old & superseded ATLAS analyses as obsolete.
* Adding cmpMomByMass and cmpMomByAscMass sorting functors.
* Bump version to 2.3.0 and require YODA > 1.4.0 (current head at
time of development).
2015-06-08 Andy Buckley <andy.buckley@cern.ch>
* Add handling of -m/-M flags on rivet-cmphistos and rivet-mkhtml,
moving current rivet-mkhtml -m/-M to -a/-A (for analysis name
pattern matching). Requires YODA head (will be YODA 1.3.2 of
1.4.0).
* src/Analyses/ATLAS_2015_I1364361.cc: Now use the built-in prompt
photon selecting functions.
* Tweak legend positions in MC_JETS .plot file.
* Add a bit more debug output from ZFinder and WFinder.
2015-05-24 Holger Schulz <holger.schulz@durham.ac.uk>
* Normalisation discussion concerning ATLAS_2014_I1325553
is resolved. Changed YLabel accordingly.
2015-05-19 Holger Schulz <holger.schulz@durham.ac.uk>
* Add (preliminary) ATLAS combined Higgs analysis
(ATLAS_2015_I1364361). Data will be updated and more
histos added as soon as paper is published in journal.
For now using data taken from the public resource
https://atlas.web.cern.ch/Atlas/GROUPS/PHYSICS/PAPERS/HIGG-2014-11/
2015-05-19 Peter Richardson <peter.richardson@durham.ac.uk>
* Fix ATLAS_2014_I1325553: the normalisation of histograms was wrong
by a factor of two (|y| vs y problem)
2015-05-01 Andy Buckley <andy.buckley@cern.ch>
* Fix MC_HJETS/HINC/HKTSPLITTINGS analyses to (ab)use the ZFinder
with a mass range of 115-135 GeV and a mass target of 125 GeV (was
previously 115-125 and mass target of mZ)
2015-04-30 Andy Buckley <andy.buckley@cern.ch>
* Removing uses of boost::assign::list_of, preferring the existing
comma-based assign override for now, for C++11 compatibility.
* Convert MC_Z* analysis finalize methods to use scale() rather than normalize().
2015-04-01 Holger Schulz <holger.schulz@durham.ac.uk>
* Add CMS 7 TeV rapidity gap analysis (CMS_2015_I1356998).
* Remove FinalState Projection.
2015-03-30 Holger Schulz <holger.schulz@durham.ac.uk>
* Add ATLAS 7 TeV photon + jets analysis (ATLAS_2013_I1244522).
2015-03-26 Andy Buckley <andy.buckley@cern.ch>
* Updates for HepMC 2.07 interface constness improvements.
2015-03-25 Holger Schulz <holger.schulz@durham.ac.uk>
* Add ATLAS double parton scattering in W+2j analysis (ATLAS_2013_I1216670).
2015-03-24 Andy Buckley <andy.buckley@cern.ch>
* 2.2.1 release!
2015-03-23 Holger Schulz <holger.schulz@durham.ac.uk>
* Add ATLAS differential Higgs analysis (ATLAS_2014_I1306615).
2015-03-19 Chris Pollard <cpollard@cern.ch>
* Add ATLAS V+gamma analyses (ATLAS_2013_I1217863)
2015-03-20 Andy Buckley <andy.buckley@cern.ch>
* Adding ATLAS R-jets analysis i.e. ratios of W+jets and Z+jets
observables (ATLAS_2014_I1312627 and _EL, _MU variants)
* include/Rivet/Tools/ParticleUtils.hh: Adding same/oppSign and
same/opp/diffCharge functions, operating on two Particles.
* include/Rivet/Tools/ParticleUtils.hh: Adding HasAbsPID functor
and removing optional abs arg from HasPID.
2015-03-19 Andy Buckley <andy.buckley@cern.ch>
* Mark ATLAS_2012_I1083318 as VALIDATED and fix d25-x01-y02 ref data.
2015-03-19 Chris Pollard <cpollard@cern.ch>
* Add ATLAS W and Z angular analyses (ATLAS_2011_I928289)
2015-03-19 Andy Buckley <andy.buckley@cern.ch>
* Add LHCb charged particle multiplicities and densities analysis (LHCB_2014_I1281685)
* Add LHCb Z y and phi* analysis (LHCB_2012_I1208102)
2015-03-19 Holger Schulz <holger.schulz@durham.ac.uk>
* Add ATLAS dijet analysis (ATLAS_2014_I1325553).
* Add ATLAS Z pT analysis (ATLAS_2014_I1300647).
* Add ATLAS low-mass Drell-Yan analysis (ATLAS_2014_I1288706).
* Add ATLAS gap fractions analysis (ATLAS_2014_I1307243).
2015-03-18 Andy Buckley <andy.buckley@cern.ch>
* Adding CMS_2014_I1298810 and CMS_2014_I1303894 analyses.
2015-03-18 Holger Schulz <holger.schulz@durham.ac.uk>
* Add PDG_TAUS analysis which makes use of the TauFinder.
* Add ATLAS 'traditional' Underlying Event in Z->mumu analysis (ATLAS_2014_I1315949).
2015-03-18 Andy Buckley <andy.buckley@cern.ch>
* Change UnstableFinalState duplicate resolution to use the last
in a chain rather than the first.
2015-03-17 Holger Schulz <holger.schulz@durham.ac.uk>
* Update TauFinder to use decaytype (can be HADRONIC, LEPTONIC or
ANY); in FastJet.cc, set TauFinder mode to hadronic for tau-tagging
2015-03-16 Chris Pollard <cpollard@cern.ch>
* Removed fuzzyEquals() from Vector3::angle()
2015-03-16 Andy Buckley <andy.buckley@cern.ch>
* Adding Cuts-based constructor to PrimaryHadrons.
* Adding missing compare() method to HeavyHadrons projection.
2015-03-15 Chris Pollard <cpollard@cern.ch>
* Adding FinalPartons projection which selects the quarks and
gluons immediately before hadronization
2015-03-05 Andy Buckley <andy.buckley@cern.ch>
* Adding Cuts-based constructors and other tidying in UnstableFinalState and HeavyHadrons
2015-03-03 Andy Buckley <andy.buckley@cern.ch>
* Add support for a PLOT meta-file argument to rivet-cmphistos.
2015-02-27 Andy Buckley <andy.buckley@cern.ch>
* Improved time reporting.
2015-02-24 Andy Buckley <andy.buckley@cern.ch>
* Add Particle::fromHadron and Particle::fromPromptTau, and add a
boolean 'prompt' argument to Particle::fromTau.
* Fix WFinder use-transverse-mass property setting. Thanks to Christian Gutschow.
2015-02-04 Andy Buckley <andy.buckley@cern.ch>
* Add more protection against math domain errors with log axes.
* Add some protection against nan-valued points and error bars in make-plots.
2015-02-03 Andy Buckley <andy.buckley@cern.ch>
* Converting 'bitwise' to 'logical' Cuts combinations in all analyses.
2015-02-02 Andy Buckley <andy.buckley@cern.ch>
* Use vector MET rather than scalar VET (doh...) in WFinder
cut. Thanks to Ines Ochoa for the bug report.
* Updating and tidying analyses with deprecation warnings.
* Adding more Cuts/FS constructors for Charged,Neutral,UnstableFinalState.
* Add &&, || and ! operators for without-parens-warnings Cut
combining. Note these don't short-circuit, but this is ok since
Cut comparisons don't have side-effects.
* Add absetaIn, absrapIn Cut range definitions.
* Updating use of sorted particle/jet access methods and cmp
functors in projections and analyses.
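The point about non-short-circuiting Cut combination being safe can be demonstrated with a toy predicate-combining class (illustrative, not the Rivet Cut implementation; Python lacks overloadable &&/||, so &, | and ~ stand in for them): the overloaded operators always evaluate both operands, which is harmless precisely because applying a cut has no side effects.

```python
class Cut:
    """Toy side-effect-free predicate supporting &, | and ~ combination.

    Unlike Python's `and`/`or`, the overloaded operators build a
    combined predicate that evaluates both operands eagerly -- safe
    here because applying a Cut never mutates anything.
    """
    def __init__(self, fn):
        self.fn = fn
    def __call__(self, x):
        return self.fn(x)
    def __and__(self, other):
        return Cut(lambda x: self.fn(x) and other.fn(x))
    def __or__(self, other):
        return Cut(lambda x: self.fn(x) or other.fn(x))
    def __invert__(self):
        return Cut(lambda x: not self.fn(x))

# Hypothetical usage on dict-based "particles"
pt_cut = Cut(lambda p: p["pt"] > 20)
eta_cut = Cut(lambda p: abs(p["eta"]) < 2.5)
selection = pt_cut & ~eta_cut
```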
2014-12-09 Andy Buckley <andy.buckley@cern.ch>
* Adding a --cmd arg to rivet-buildplugin to allow the output
paths to be sed'ed (to help deal with naive Grid
distribution). For example:
BUILDROOT=`rivet-config --prefix`
rivet-buildplugin PHOTONS.cc --cmd | sed -e "s:$BUILDROOT:$SITEROOT:g"
2014-11-26 Andy Buckley <andy.buckley@cern.ch>
* Interface improvements in DressedLeptons constructor.
* Adding DEPRECATED macro to throw compiler deprecation warnings when using deprecated features.
2014-11-25 Andy Buckley <andy.buckley@cern.ch>
* Adding Cut-based constructors, and various constructors with
lists of PDG codes to IdentifiedFinalState.
2014-11-20 Andy Buckley <andy.buckley@cern.ch>
* Analysis updates (ATLAS, CMS, CDF, D0) to apply the changes below.
* Adding JetAlg jets(Cut, Sorter) methods, and other interface
improvements for cut and sorted ParticleBase retrieval from JetAlg
and ParticleFinder projections. Some old many-doubles versions
removed, syntactic sugar sorting methods deprecated.
* Adding Cuts::Et and Cuts::ptIn, Cuts::etIn, Cuts::massIn.
* Moving FastJet includes, conversions, uses etc. into Tools/RivetFastJet.hh
2014-10-07 Andy Buckley <andy.buckley@cern.ch>
* Fix a bug in the isCharmHadron(pid) function and remove isStrange* functions.
2014-09-30 Andy Buckley <andy.buckley@cern.ch>
* 2.2.0 release!
* Mark Jet::containsBottom and Jet::containsCharm as deprecated
methods: use the new methods. Analyses updated.
* Add Jet::bTagged(), Jet::cTagged() and Jet::tauTagged() as
ghost-assoc-based replacements for the 'contains' tagging methods.
2014-09-17 Andy Buckley <andy.buckley@cern.ch>
* Adding support for 1D and 3D YODA scatters, and helper methods
for calling the efficiency, asymm and 2D histo divide functions.
2014-09-12 Andy Buckley <andy.buckley@cern.ch>
* Adding 5 new ATLAS analyses:
ATLAS_2011_I921594: Inclusive isolated prompt photon analysis with full 2010 LHC data
ATLAS_2013_I1263495: Inclusive isolated prompt photon analysis with 2011 LHC data
ATLAS_2014_I1279489: Measurements of electroweak production of dijets + $Z$ boson, and distributions sensitive to vector boson fusion
ATLAS_2014_I1282441: The differential production cross section of the $\phi(1020)$ meson in $\sqrt{s}=7$ TeV $pp$ collisions measured with the ATLAS detector
ATLAS_2014_I1298811: Leading jet underlying event at 7 TeV in ATLAS
* Adding a median(vector<NUM>) function and fixing the other stats
functions to operate on vector<NUM> rather than vector<int>.
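A minimal version of such a median helper (a sketch of the generic idea only; the actual Rivet function is a C++ template over numeric vector types):

```python
def median(values):
    """Median of a non-empty sequence of numbers.

    Sorts a copy; averages the two middle elements for even lengths.
    """
    if not values:
        raise ValueError("median of empty dataset")
    s = sorted(values)
    n = len(s)
    mid = n // 2
    if n % 2 == 1:
        return s[mid]
    return 0.5 * (s[mid - 1] + s[mid])
```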
2014-09-03 Andy Buckley <andy.buckley@cern.ch>
* Fix wrong behaviour of LorentzTransform with a null boost vector
-- thanks to Michael Grosse.
2014-08-26 Andy Buckley <andy.buckley@cern.ch>
* Add calc() methods to Hemispheres as requested, to allow it to
be used with Jet or FourMomentum inputs outside the normal
projection system.
2014-08-17 Andy Buckley <andy.buckley@cern.ch>
* Improvements to the particles methods on
ParticleFinder/FinalState, in particular adding the range of cuts
arguments cf. JetAlg (and tweaking the sorted jets equivalent) and
returning as a copy rather than a reference if cut/sorted to avoid
accidentally messing up the cached copy.
* Creating ParticleFinder projection base class, and moving
Particles-accessing methods from FinalState into it.
* Adding basic forms of MC_ELECTRONS, MC_MUONS, and MC_TAUS
analyses.
2014-08-15 Andy Buckley <andy.buckley@cern.ch>
* Version bump to 2.2.0beta1 for use at BOOST and MCnet school.
2014-08-13 Andy Buckley <andy.buckley@cern.ch>
* New analyses:
ATLAS_2014_I1268975 (high mass dijet cross-section at 7 TeV)
ATLAS_2014_I1304688 (jet multiplicity and pT at 7 TeV)
ATLAS_2014_I1307756 (scalar diphoton resonance search at 8 TeV -- no histograms!)
CMSTOTEM_2014_I1294140 (charged particle pseudorapidity at 8 TeV)
2014-08-09 Andy Buckley <andy.buckley@cern.ch>
* Adding PromptFinalState, based on code submitted by Alex
Grohsjean and Will Bell. Thanks!
2014-08-06 Andy Buckley <andy.buckley@cern.ch>
* Adding MC_HFJETS and MC_JETTAGS analyses.
2014-08-05 Andy Buckley <andy.buckley@cern.ch>
* Update all analyses to use the xMin/Max/Mid, xMean, xWidth,
etc. methods on YODA classes rather than the deprecated lowEdge
etc.
* Merge new HasPID functor from Holger Schulz into
Rivet/Tools/ParticleUtils.hh, mainly for use with the any()
function in Rivet/Tools/Utils.hh
2014-08-04 Andy Buckley <andy.buckley@cern.ch>
* Add ghost tagging of charms, bottoms and taus to FastJets, and
tag info accessors to Jet.
* Add constructors from and cast operators to FastJet's PseudoJet
object from Particle and Jet.
* Convert inRange to not use fuzzy comparisons on closed
intervals, providing old version as fuzzyInRange.
2014-07-30 Andy Buckley <andy.buckley@cern.ch>
* Remove classifier functions accepting a Particle from the PID
inner namespace.
2014-07-29 Andy Buckley <andy.buckley@cern.ch>
* MC_JetAnalysis.cc: re-enable +- ratios for eta and y, now that
YODA divide doesn't throw an exception.
* ATLAS_2012_I1093734: fix a loop index error which led to the
first bin value being unfilled for half the dphi plots.
* Fix accidental passing of a GenParticle pointer as a PID code
int in HeavyHadrons.cc. Effect limited to incorrect deductions
about excited HF decay chains and should be small. Thanks to
Tomasz Przedzinski for finding and reporting the issue during
HepMC3 design work!
2014-07-23 Andy Buckley <andy.buckley@cern.ch>
* Fix to logspace: make sure that start and end values are exact,
not the result of exp(log(x)).
2014-07-16 Andy Buckley <andy.buckley@cern.ch>
* Fix setting of library paths for doc building: Python can't
influence the dynamic loader in its own process by setting an
environment variable because the loader only looks at the variable
once, when it starts.
2014-07-02 Andy Buckley <andy.buckley@cern.ch>
* rivet-cmphistos now uses the generic yoda.read() function rather
than readYODA() -- AIDA files can also be compared and plotted
directly now.
2014-06-24 Andy Buckley <andy.buckley@cern.ch>
* Add stupid missing <string> include and std:: prefix in Rivet.hh
2014-06-20 Holger Schulz <hschulz@physik.hu-berlin.de>
* bin/make-plots: Automatic generation of minor xtick labels if LogX is requested
but data resides e.g. in [200, 700]. Fixes m_12 plots of, e.g.
ATLAS_2010_S8817804
2014-06-17 David Grellscheid <David.Grellscheid@durham.ac.uk>
* pyext/rivet/Makefile.am: 'make distcheck' and out-of-source
builds should work now.
2014-06-10 Andy Buckley <andy.buckley@cern.ch>
* Fix use of the install command for bash completion installation on Macs.
2014-06-07 Andy Buckley <andy.buckley@cern.ch>
* Removing direct includes of MathUtils.hh and others from analysis code files.
2014-06-02 Andy Buckley <andy.buckley@cern.ch>
* Rivet 2.1.2 release!
2014-05-30 Andy Buckley <andy.buckley@cern.ch>
* Using Particle absrap(), abseta() and abspid() where automatic conversion was feasible.
* Adding a few extra kinematics mappings to ParticleBase.
* Adding p3() accessors to the 3-momentum on FourMomentum, Particle, and Jet.
* Using Jet and Particle kinematics methods directly (without momentum()) where possible.
* More tweaks to make-plots 2D histo parsing behaviour.
2014-05-30 Holger Schulz <hschulz@physik.hu-berlin.de>
* Actually fill the XQ 2D histo; add .plot decorations.
* Have make-plots produce colourmaps using YODA_3D_SCATTER
objects. Remove the grid in colourmaps.
* Some tweaks for the SFM analysis, trying to contact Martin Wunsch
who did the unfolding back then.
2014-05-29 Holger Schulz <hschulz@physik.hu-berlin.de>
* Re-enable 2D histo in MC_PDFS
2014-05-28 Andy Buckley <andy.buckley@cern.ch>
* Updating analysis and project routines to use Particle::pid() by
preference to Particle::pdgId(), and Particle::abspid() by
preference to abs(Particle::pdgId()), etc.
* Adding interfacing of smart pointer types and booking etc. for
YODA 2D histograms and profiles.
* Improving ParticleIdUtils and ParticleUtils functions based on
merging of improved function collections from MCUtils, and
dropping the compiled ParticleIdUtils.cc file.
2014-05-27 Andy Buckley <andy.buckley@cern.ch>
* Adding CMS_2012_I1090423 (dijet angular distributions),
CMS_2013_I1256943 (Zbb xsec and angular correlations),
CMS_2013_I1261026 (jet and UE properties vs. Nch) and
D0_2000_I499943 (bbbar production xsec and angular correlations).
2014-05-26 Andy Buckley <andy.buckley@cern.ch>
* Fixing a bug in plot file handling, and adding a texpand()
routine to rivet.util, to be used to expand some 'standard'
physics TeX macros.
* Adding ATLAS_2012_I1124167 (min bias event shapes),
ATLAS_2012_I1203852 (ZZ cross-section), and ATLAS_2013_I1190187
(WW cross-section) analyses.
2014-05-16 Andy Buckley <andy.buckley@cern.ch>
* Adding any(iterable, fn) and all(iterable, fn) template functions for convenience.
2014-05-15 Holger Schulz <holger.schulz@cern.ch>
* Fix some bugs in identified hadron PIDs in OPAL_1998_S3749908.
2014-05-13 Andy Buckley <andy.buckley@cern.ch>
* Writing out [UNVALIDATED], [PRELIMINARY], etc. in the
--list-analyses output if analysis is not VALIDATED.
2014-05-12 Andy Buckley <andy.buckley@cern.ch>
* Adding CMS_2013_I1265659 colour coherence analysis.
2014-05-07 Andy Buckley <andy.buckley@cern.ch>
* Bug fixes in CMS_2013_I1209721 from Giulio Lenzi.
* Fixing compiler warnings from clang, including one which
indicated a misapplied cut bug in CDF_2006_S6653332.
2014-05-05 Andy Buckley <andy.buckley@cern.ch>
* Fix missing abs() in Particle::abspid()!!!!
2014-04-14 Andy Buckley <andy.buckley@cern.ch>
* Adding the namespace protection workaround for Boost described
at http://www.boost.org/doc/libs/1_55_0/doc/html/foreach.html
2014-04-13 Andy Buckley <andy.buckley@cern.ch>
* Adding a rivet.pc template file and installation rule for pkg-config to use.
* Updating data/refdata/ALEPH_2001_S4656318.yoda to corrected version in HepData.
2014-03-27 Andy Buckley <andy.buckley@cern.ch>
* Flattening PNG output of make-plots (i.e. no transparency) and other tweaks.
2014-03-23 Andy Buckley <andy.buckley@cern.ch>
* Renaming the internal meta-particle class in DressedLeptons (and
exposed in the W/ZFinders) from ClusteredLepton to DressedLepton
for consistency with the change in name of its containing class.
* Removing need for cmake and unportable yaml-cpp trickery by
using libtool to build an embedded symbol-mangled copy of yaml-cpp
rather than trying to mangle and build direct from the tarball.
2014-03-10 Andy Buckley <andy.buckley@cern.ch>
* Rivet 2.1.1 release.
2014-03-07 Andy Buckley <andy.buckley@cern.ch>
* Adding ATLAS multilepton search (no ref data file), ATLAS_2012_I1204447.
2014-03-05 Andy Buckley <andy.buckley@cern.ch>
* Also renaming Breit-Wigner functions to cdfBW, invcdfBW and bwspace.
* Renaming index_between() to the more Rivety binIndex(), since that's the only real use of such a function... plus a bit of SFINAE type relaxation trickery.
2014-03-04 Andy Buckley <andy.buckley@cern.ch>
* Adding programmatic access to final histograms via AnalysisHandler::getData().
* Adding CMS 4 jet correlations analysis, CMS_2013_I1273574.
* Adding CMS W + 2 jet double parton scattering analysis, CMS_2013_I1272853.
* Adding ATLAS isolated diphoton measurement, ATLAS_2012_I1199269.
* Improving the index_between function so the numeric types don't
have to exactly match.
* Adding better momentum comparison functors and sortBy, sortByX
functions to use them easily on containers of Particle, Jet, and
FourMomentum.
2014-02-10 Andy Buckley <andy.buckley@cern.ch>
* Removing duplicate and unused ParticleBase sorting functors.
* Removing unused HT increment and units in ATLAS_2012_I1180197 (unvalidated SUSY).
* Fixing photon isolation logic bug in CMS_2013_I1258128 (Z rapidity).
* Replacing internal uses of #include Rivet/Rivet.hh with
Rivet/Config/RivetCommon.hh, removing the MAXRAPIDITY const, and
repurposing Rivet/Rivet.hh as a convenience include for external
API users.
* Adding isStable, children, allDescendants, stableDescendants,
and flightLength functions to Particle.
* Replacing Particle and Jet deltaX functions with generic ones on
ParticleBase, and adding deltaRap variants.
* Adding a Jet.fhh forward declaration header, including fastjet::PseudoJet.
* Adding a RivetCommon.hh header to allow Rivet.hh to be used externally.
* Fixing HeavyHadrons to apply pT cuts if specified.
2014-02-06 Andy Buckley <andy.buckley@cern.ch>
* 2.1.0 release!
2014-02-05 Andy Buckley <andy.buckley@cern.ch>
* Protect against invalid prefix value if the --prefix configure option is unused.
2014-02-03 Andy Buckley <andy.buckley@cern.ch>
* Adding the ATLAS_2012_I1093734 fwd-bwd / azimuthal minbias correlations analysis.
* Adding the LHCB_2013_I1208105 forward energy flow analysis.
2014-01-31 Andy Buckley <andy.buckley@cern.ch>
* Checking the YODA minimum version in the configure script.
* Fixing the JADE_OPAL analysis ycut values to the midpoints,
thanks to information from Christoph Pahl / Stefan Kluth.
2014-01-29 Andy Buckley <andy.buckley@cern.ch>
* Removing unused/overrestrictive Isolation* headers.
2014-01-27 Andy Buckley <andy.buckley@cern.ch>
* Re-bundling yaml-cpp, now built as a mangled static lib based on
the LHAPDF6 experience.
* Throw a UserError rather than an assert if AnalysisHandler::init
is called more than once.
2014-01-25 David Grellscheid <david.grellscheid@durham.ac.uk>
* src/Core/Cuts.cc: New Cuts machinery, already used in FinalState.
Old-style "mineta, maxeta, minpt" constructors kept around for ease of
transition. Minimal set of convenience functions available, like EtaIn(),
should be expanded as needed.
2014-01-22 Andy Buckley <andy.buckley@cern.ch>
* configure.ac: Remove opportunistic C++11 build, until this
becomes mandatory (in version 2.2.0?). Anyone who wants C++11 can
explicitly set the CXXFLAGS (and DYLDFLAGS for pre-Mavericks Macs)
2014-01-21 Leif Lonnblad <Leif.Lonnblad@thep.lu.se>
* src/Core/Analysis.cc: Fixed bug in Analysis::isCompatible where
an 'abs' was left out when checking that beam energies do not
differ by more than 1 GeV.
* src/Analyses/CMS_2011_S8978280.cc: Fixed checking of beam energy
and booking corresponding histograms.
2013-12-19 Andy Buckley <andy.buckley@cern.ch>
* Adding pid() and abspid() methods to Particle.
* Adding hasCharm and hasBottom methods to Particle.
* Adding a sorting functor arg version of the ZFinder::constituents() method.
* Adding pTmin cut accessors to HeavyHadrons.
* Tweak to the WFinder constructor to place the target W (trans) mass argument last.
2013-12-18 Andy Buckley <andy.buckley@cern.ch>
* Adding a GenParticle* cast operator to Particle, removing the
Particle and Jet copies of the momentum cmp functors, and general
tidying/improvement/unification of the momentum properties of jets
and particles.
2013-12-17 Andy Buckley <andy.buckley@cern.ch>
* Using SFINAE techniques to improve the math util functions.
* Adding isNeutrino to ParticleIdUtils, and
isHadron/isMeson/isBaryon/isLepton/isNeutrino methods to Particle.
* Adding a FourMomentum cast operator to ParticleBase, so that
Particle and Jet objects can be used directly as FourMomentums.
2013-12-16 Andy Buckley <andy@duality>
* LeptonClusters renamed to DressedLeptons.
* Adding singular particle accessor functions to WFinder and ZFinder.
* Removing ClusteredPhotons and converting ATLAS_2010_S8919674.
2013-12-12 Andy Buckley <andy.buckley@cern.ch>
* Fixing a problem with --disable-analyses (thanks to David Hall)
* Require FastJet version 3.
* Bumped version to 2.1.0a0
* Adding -DNDEBUG to the default build flags, unless in --enable-debug mode.
* Adding a special treatment of RIVET_*_PATH variables: if they
end in :: the default search paths will not be appended. Used
primarily to restrict the doc builds to look only inside the build
dirs, but potentially also useful in other special circumstances.
* Adding a definition of exec_prefix to rivet-buildplugin.
* Adding -DNDEBUG to the default non-debug build flags.
2013-11-27 Andy Buckley <andy.buckley@cern.ch>
* Removing accidentally still-present no-as-needed linker flag from rivet-config.
* Lots of analysis clean-up and migration to use new features and W/Z finder APIs.
* More momentum method forwarding on ParticleBase and adding
abseta(), absrap() etc. functions.
* Adding the DEFAULT_RIVET_ANA_CONSTRUCTOR cosmetic macro.
* Adding deltaRap() etc. function variations
* Adding no-decay photon clustering option to WFinder and ZFinder,
and replacing opaque bool args with enums.
* Adding an option for ignoring photons from hadron/tau decays in LeptonClusters.
2013-11-22 Andy Buckley <andy.buckley@cern.ch>
* Adding Particle::fromBottom/Charm/Tau() members. LHCb were
already mocking this up, so it seemed sensible to add it to the
interface as a more popular (and even less dangerous) version of
hasAncestor().
* Adding an empty() member to the JetAlg interface.
2013-11-07 Andy Buckley <andy.buckley@cern.ch>
* Adding the GSL lib path to the library path in the env scripts
and the rivet-config --ldflags output.
2013-10-25 Andy Buckley <andy.buckley@cern.ch>
* 2.0.0 release!!!!!!
2013-10-24 Andy Buckley <andy.buckley@cern.ch>
* Supporting zsh completion via bash completion compatibility.
2013-10-22 Andy Buckley <andy.buckley@cern.ch>
* Updating the manual to describe YODA rather than AIDA and the new rivet-cmphistos script.
* bin/make-plots: Adding paths to error messages in histogram combination.
* CDF_2005_S6217184: fixes to low stats errors and final scatter plot binning.
2013-10-21 Andy Buckley <andy.buckley@cern.ch>
* Several small fixes in jet shape analyses, SFM_1984, etc. found
in the last H++ validation run.
2013-10-18 Andy Buckley <andy.buckley@cern.ch>
* Updates to configure and the rivetenv scripts to try harder to discover YODA.
2013-09-26 Andy Buckley <andy.buckley@cern.ch>
* Now bundling Cython-generated files in the tarballs, so Cython
is not a build requirement for non-developers.
2013-09-24 Andy Buckley <andy.buckley@cern.ch>
* Removing unnecessary uses of a momentum() indirection when
accessing kinematic variables.
* Clean-up in Jet, Particle, and ParticleBase, in particular
splitting PID functions on Particle from those on PID codes, and
adding convenience kinematic functions to ParticleBase.
2013-09-23 Andy Buckley <andy.buckley@cern.ch>
* Add the -avoid-version flag to libtool.
* Final analysis histogramming issues resolved.
2013-08-16 Andy Buckley <andy.buckley@cern.ch>
* Adding a ConnectBins flag in make-plots, to decide whether to
connect adjacent, gapless bins with a vertical line. Enabled by
default (good for the step-histo default look of MC lines), but
now rivet-cmphistos disables it for the reference data.
2013-08-14 Andy Buckley <andy.buckley@cern.ch>
* Making 2.0.0beta3 -- just a few analysis migration issues
remaining, but it's worth making another beta since there
were lots of framework fixes/improvements.
2013-08-11 Andy Buckley <andy.buckley@cern.ch>
* ARGUS_1993_S2669951 also fixed using scatter autobooking.
* Fixing remaining issues with booking in BABAR_2007_S7266081
using the feature below (far nicer than hard-coding).
* Adding a copy_pts param to some Analysis::bookScatter2D methods:
pre-setting the points with x values is sometimes genuinely
useful.
2013-07-26 Andy Buckley <andy.buckley@cern.ch>
* Removed the (officially) obsolete CDF 2008 LEADINGJETS and
NOTE_9351 underlying event analyses -- superseded by the proper
versions of these analyses based on the final combined paper.
* Removed the semi-demo Multiplicity projection -- only the
EXAMPLE analysis and the trivial ALEPH_1991_S2435284 needed
adaptation.
2013-07-24 Andy Buckley <andy.buckley@cern.ch>
* Adding a rejection of histo paths containing /TMP/ from the
writeData function. Use this to handle booked temporary
histograms... for now.
2013-07-23 Andy Buckley <andy.buckley@cern.ch>
* Make rivet-cmphistos _not_ draw a ratio plot if there is only one line.
* Improvements and fixes to HepData lookup with rivet-mkanalysis.
2013-07-22 Andy Buckley <andy.buckley@cern.ch>
* Add -std=c++11 or -std=c++0x to the Rivet compiler flags if supported.
* Various fixes to analyses with non-zero numerical diffs.
2013-06-12 Andy Buckley <andy.buckley@cern.ch>
* Adding a new HeavyHadrons projection.
* Adding optional extra include_end args to logspace() and linspace().
2013-06-11 Andy Buckley <andy.buckley@cern.ch>
* Moving Rivet/RivetXXX.hh tools headers into Rivet/Tools/.
* Adding PrimaryHadrons projection.
* Adding particles_in/out functions on GenParticle to RivetHepMC.
* Moved STL extensions from Utils.hh to RivetSTL.hh and tidying.
* Tidying, improving, extending, and documenting in RivetSTL.hh.
* Adding a #include of Logging.hh into Projection.hh, and removing
unnecessary #includes from all Projection headers.
2013-06-10 Andy Buckley <andy.buckley@cern.ch>
* Moving htmlify() and detex() Python functions into rivet.util.
* Add HepData URL for Inspire ID lookup to the rivet script.
* Fix analyses' info files which accidentally listed the Inspire
ID under the SpiresID metadata key.
2013-06-07 Andy Buckley <andy.buckley@cern.ch>
* Updating mk-analysis-html script to produce MathJax output
* Adding a version of Analysis::removeAnalysisObject() which takes
an AO pointer as its argument.
* bin/rivet: Adding pandoc-based conversion of TeX summary and
description strings to plain text on the terminal output.
* Add MathJax to rivet-mkhtml output, set up so the .info entries should render ok.
* Mark the OPAL 1993 analysis as UNVALIDATED: from the H++
benchmark runs it looks nothing like the data, and there are some
outstanding ambiguities.
2013-06-06 Andy Buckley <andy.buckley@cern.ch>
* Releasing 2.0.0b2 beta version.
2013-06-05 Andy Buckley <andy.buckley@cern.ch>
* Renaming Analysis::add() etc. to very explicit
addAnalysisObject(), sorting out shared_pointer polymorphism
issues via the Boost dynamic_pointer_cast, and adding a full set
of getHisto1D(), etc. explicitly named and typed accessors,
including ones with HepData dataset/axis ID signatures.
* Adding histo booking from an explicit reference Scatter2D (and
more placeholders for 2D histos / 3D scatters) and rewriting
existing autobooking to use this.
* Converting inappropriate uses of size_t to unsigned int in Analysis.
* Moving Analysis::addPlot to Analysis::add() (or reg()?) and
adding get() and remove() (or unreg()?)
* Fixing attempted abstraction of import fallbacks in rivet.util.import_ET().
* Removing broken attempt at histoDir() caching which led to all
histograms being registered under the same analysis name.
2013-06-04 Andy Buckley <andy.buckley@cern.ch>
* Updating the Cython version requirement to 0.18
* Adding Analysis::integrate() functions and tidying the Analysis.hh file a bit.
2013-06-03 Andy Buckley <andy.buckley@cern.ch>
* Adding explicit protection against using inf/nan scalefactors in
ATLAS_2011_S9131140 and H1_2000_S4129130.
* Making Analysis::scale noisily complain about invalid
scalefactors.
2013-05-31 Andy Buckley <andy.buckley@cern.ch>
* Reducing the TeX main memory to ~500MB. Turns out that it *can*
be too large with new versions of TeXLive!
2013-05-30 Andy Buckley <andy.buckley@cern.ch>
* Reverting bookScatter2D behaviour to never look at ref data, and
updating a few affected analyses. This should fix some bugs with
doubled datapoints introduced by the previous behaviour+addPoint.
* Adding a couple of minor Utils.hh and MathUtils.hh features.
2013-05-29 Andy Buckley <andy.buckley@cern.ch>
* Removing Constraints.hh header.
* Minor bugfixes and improvements in Scatter2D booking and MC_JetAnalysis.
2013-05-28 Andy Buckley <andy.buckley@cern.ch>
* Removing defunct HistoFormat.hh and HistoHandler.{hh,cc}
2013-05-27 Andy Buckley <andy.buckley@cern.ch>
* Removing includes of Logging.hh, RivetYODA.hh, and
ParticleIdUtils.hh from analyses (and adding an include of
ParticleIdUtils.hh to Analysis.hh)
* Removing now-unused .fhh files.
* Removing lots of unnecessary .fhh includes from core classes:
everything still compiling ok. A good opportunity to tidy this up
before the release.
* Moving the rivet-completion script from the data dir to bin (the
completion is for scripts in bin, and this makes development
easier).
* Updating bash completion scripts for YODA format and
compare-histos -> rivet-cmphistos.
2013-05-23 Andy Buckley <andy.buckley@cern.ch>
* Adding Doxy comments and a couple of useful convenience functions to Utils.hh.
* Final tweaks to ATLAS ttbar jet veto analysis (checked logic with Kiran Joshi).
2013-05-15 Andy Buckley <andy.buckley@cern.ch>
* Many 1.0 -> weight bugfixes in ATLAS_2011_I945498.
* yaml-cpp v3 support re-introduced in .info parsing.
* Lots of analysis clean-ups for YODA TODO issues.
2013-05-13 Andy Buckley <andy.buckley@cern.ch>
* Analysis histo booking improvements for Scatter2D, placeholders
for 2D histos, and general tidying.
2013-05-12 Andy Buckley <andy.buckley@cern.ch>
* Adding configure-time differentiation between yaml-cpp API versions 3 and 5.
2013-05-07 Andy Buckley <andy.buckley@cern.ch>
* Converting info file reading to use the yaml-cpp 0.5.x API.
2013-04-12 Andy Buckley <andy.buckley@cern.ch>
* Tagging as 2.0.0b1
2013-04-04 Andy Buckley <andy.buckley@cern.ch>
* Removing bundling of yaml-cpp: it needs to be installed by the
user / bootstrap script from now on.
2013-04-03 Andy Buckley <andy.buckley@cern.ch>
* Removing svn:external m4 directory, and converting Boost
detection to use better boost.m4 macros.
2013-03-22 Andy Buckley <andy.buckley@cern.ch>
* Moving PID consts to the PID namespace and corresponding code
updates and opportunistic clean-ups.
* Adding Particle::fromDecay() method.
2013-03-09 Andy Buckley <andy.buckley@cern.ch>
* Version bump to 2.0.0b1 in anticipation of first beta release.
* Adding many more 'popular' particle ID code named-consts and
aliases, and updating the RapScheme enum with ETA -> ETARAP, and
fixing affected analyses (plus other opportunistic tidying / minor
bug-fixing).
* Fixing a symbol misnaming in ATLAS_2012_I1119557.
2013-03-07 Andy Buckley <andy.buckley@cern.ch>
* Renaming existing uses of ParticleVector to the new 'Particles' type.
* Updating util classes, projections, and analyses to deal with
the HepMC return value changes.
* Adding new Particle(const GenParticle*) constructor.
* Converting Particle::genParticle() to return a const pointer
rather than a reference, for the same reason as below (+
consistency within Rivet and with the HepMC pointer-centric coding
design).
* Converting Event to use a different implementation of original
and modified GenParticles, and to manage the memory in a more
future-proof way. Event::genParticle() now returns a const pointer
rather than a reference, to signal that the user is leaving the
happy pastures of 'normal' Rivet behind.
* Adding a Particles typedef by analogy to Jets, and in preference
to the cumbersome ParticleVector.
* bin/: Lots of tidying/pruning of messy/defunct scripts.
* Creating spiresbib, util, and plotinfo rivet python module
submodules: this eliminates lighthisto and the standalone
spiresbib modules. Util contains convenience functions for Python
version testing, clean ElementTree import, and process renaming,
for primary use by the rivet-* scripts.
* Removing defunct scripts that have been replaced/obsoleted by YODA.
2013-03-06 Andy Buckley <andy.buckley@cern.ch>
* Fixing doc build so that the reference histos and titles are
~correctly documented. We may want to truncate some of the lists!
2013-03-06 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added ATLAS_2012_I1125575 analysis
* Converted rivet-mkhtml to yoda
* Introduced rivet-cmphistos as yoda based replacement for compare-histos
2013-03-05 Andy Buckley <andy.buckley@cern.ch>
* Replacing all AIDA ref data with YODA versions.
* Fixing the histograms entries in the documentation to be
tolerant to plotinfo loading failures.
* Making the findDatafile() function primarily find YODA data
files, then fall back to AIDA. The ref data loader will use the
appropriate YODA format reader.
2013-02-05 David Grellscheid <David.Grellscheid@durham.ac.uk>
* include/Rivet/Math/MathUtils.hh: added BWspace bin edge method
to give equal-area Breit-Wigner bins
2013-02-01 Andy Buckley <andy.buckley@cern.ch>
* Adding an element to the PhiMapping enum and a new mapAngle(angle, mapping) function.
* Fixes to Vector3::azimuthalAngle and Vector3::polarAngle calculation (using the mapAngle functions).
2013-01-25 Frank Siegert <frank.siegert@cern.ch>
* Split MC_*JETS analyses into three separate bits:
MC_*INC (inclusive properties)
MC_*JETS (jet properties)
MC_*KTSPLITTINGS (kT splitting scales).
2013-01-22 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Fix TeX variable in the rivetenv scripts, especially for csh
2012-12-21 Andy Buckley <andy.buckley@cern.ch>
* Version 1.8.2 release!
2012-12-20 Andy Buckley <andy.buckley@cern.ch>
* Adding ATLAS_2012_I1119557 analysis (from Roman Lysak and Lily Asquith).
2012-12-18 Andy Buckley <andy.buckley@cern.ch>
* Adding TOTEM_2012_002 analysis, from Sercan Sen.
2012-12-18 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added CMS_2011_I954992 analysis
2012-12-17 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added CMS_2012_I1193338 analysis
* Fixed xi cut in ATLAS_2011_I894867
2012-12-17 Andy Buckley <andy.buckley@cern.ch>
* Adding analysis descriptions to the HTML analysis page ToC.
2012-12-14 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added CMS_2012_PAS_FWD_11_003 analysis
* Added LHCB_2012_I1119400 analysis
2012-12-12 Andy Buckley <andy.buckley@cern.ch>
* Correction to jet acceptance in CMS_2011_S9120041, from Sercan Sen: thanks!
2012-12-12 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added CMS_2012_PAS_QCD_11_010 analysis
2012-12-07 Andy Buckley <andy.buckley@cern.ch>
* Version number bump to 1.8.2 -- release approaching.
* Rewrite of ALICE_2012_I1181770 analysis to make it a bit more sane and acceptable.
* Adding a note on FourVector and FourMomentum that operator- and
operator-= invert both the space and time components: use of -=
can result in a vector with negative energy.
* Adding particlesByRapidity and particlesByAbsRapidity to FinalState.
2012-12-07 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added ALICE_2012_I1181770 analysis
* Bump version to 1.8.2
2012-12-06 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added ATLAS_2012_I1188891 analysis
* Added ATLAS_2012_I1118269 analysis
* Added CMS_2012_I1184941 analysis
* Added LHCB_2010_I867355 analysis
* Added TGraphErrors support to root2flat
2012-11-27 Andy Buckley <andy.buckley@cern.ch>
* Converting CMS_2012_I1102908 analysis to use YODA.
* Adding XLabel and YLabel setting in histo/profile/scatter booking.
2012-11-27 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Fix make-plots png creation for SL5
2012-11-23 Peter Richardson <peter.richardson@durham.ac.uk>
* Added ATLAS_2012_CONF_2012_153 4-lepton SUSY search
2012-11-17 Andy Buckley <andy.buckley@cern.ch>
* Adding MC_PHOTONS by Steve Lloyd and AB, for testing general
unisolated photon properties, especially those associated with
charged leptons (e and mu).
2012-11-16 Andy Buckley <andy.buckley@cern.ch>
* Adding MC_PRINTEVENT, a convenient (but verbose!) analysis for
printing out event details to stdout.
2012-11-15 Andy Buckley <andy.buckley@cern.ch>
* Removing the long-unused/defunct autopackage system.
2012-11-15 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added LHCF_2012_I1115479 analysis
* Added ATLAS_2011_I894867 analysis
2012-11-14 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added CMS_2012_I1102908 analysis
2012-11-14 Andy Buckley <andy.buckley@cern.ch>
* Converting the argument order of logspace, clarifying the
arguments, updating affected code, and removing Analysis::logBinEdges.
* Merging updates from the AIDA maintenance branch up to r4002
(latest revision for next merges is r4009).
2012-11-11 Andy Buckley <andy.buckley@cern.ch>
* include/Math/: Various numerical fixes to Vector3::angle and
changing the 4 vector mass treatment to permit spacelike
virtualities (in some cases even the fuzzy isZero assert check was
being violated). The angle check allows a clean-up of some
workaround code in MC_VH2BB.
2012-10-15 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added CMS_2012_I1107658 analysis
2012-10-11 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added CDF_2012_NOTE10874 analysis
2012-10-04 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added ATLAS_2012_I1183818 analysis
2012-08-09 Andy Buckley <andy.buckley@cern.ch>
* Fixing aida2root command-line help message and converting to TH*
rather than TGraph by default.
2012-07-24 Andy Buckley <andy.buckley@cern.ch>
* Improvements/migrations to rivet-mkhtml, rivet-mkanalysis, and
rivet-buildplugin.
2012-07-17 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Cleanup and multiple fixes in CMS_2011_S9120041
* Bugfixes in ALEPH_2004_S5765862 and ATLAS_2010_CONF_2010_049
(thanks to Anil Pratap)
2012-07-17 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Add CMS_2012_I1087342
2012-07-12 Andy Buckley <andy.buckley@cern.ch>
* Fix rivet-mkanalysis a bit for YODA compatibility.
2012-07-05 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Version 1.8.1!
2012-07-05 Holger Schulz <holger.schulz@physik.hu-berlin.de>
* Add ATLAS_2011_I945498
2012-07-03 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Bugfix for transverse mass (thanks to Gavin Hesketh)
2012-06-29 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Merge YODA branch into trunk. YODA is alive!!!!!!
2012-06-26 Holger Schulz <holger.schulz@physik.hu-berlin.de>
* Add ATLAS_2012_I1091481
2012-06-20 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added D0_2011_I895662: 3-jet mass
2012-04-24 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Fixed a few bugs in rivet-rmgaps
* Added new TOTEM dN/deta analysis
2012-03-19 Andy Buckley <andy.buckley@cern.ch>
* Version 1.8.0!
* src/Projections/UnstableFinalState.cc: Fix compiler warning.
* Version bump for testing: 1.8.0beta1.
* src/Core/AnalysisInfo.cc: Add printout of YAML parser exception error messages to aid debugging.
* bin/Makefile.am: Attempt to fix rivet-nopy build on SLC5.
* src/Analyses/LHCB_2010_S8758301.cc: Add two missing entries to the PDGID -> lifetime map.
* src/Projections/UnstableFinalState.cc: Extend list of vetoed particles to include reggeons.
2012-03-16 Andy Buckley <andy.buckley@cern.ch>
* Version change to 1.8.0beta0 -- nearly ready for long-awaited release!
* pyext/setup.py.in: Adding handling for the YAML library: fix for
Genser build from Anton Karneyeu.
* src/Analyses/LHCB_2011_I917009.cc: Hiding lifetime-lookup error
message if the offending particle is not a hadron.
* include/Rivet/Math/MathHeader.hh: Using unnamespaced std::isnan
and std::isinf as standard.
2012-03-16 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Improve default plot behaviour for 2D histograms
2012-03-15 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Make ATLAS_2012_I1084540 less verbose, and general code
cleanup of that analysis.
* New-style plugin hook in ATLAS_2011_I926145,
ATLAS_2011_I944826 and ATLAS_2012_I1084540
* Fix compiler warnings in ATLAS_2011_I944826 and CMS_2011_S8973270
* CMS_2011_S8941262: Weights are double, not int.
* Disable inRange() tests in test/testMath.cc until we have a proper
fix for the compiler warnings we see on SL5.
2012-03-07 Andy Buckley <andy.buckley@cern.ch>
* Marking ATLAS_2011_I919017 as VALIDATED (this should have
happened a long time ago) and adding more references.
2012-02-28 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* lighthisto.py: Caching for re.compile(). This speeds up aida2flat
and flat2aida by more than an order of magnitude.
2012-02-27 Andy Buckley <andy.buckley@cern.ch>
* doc/mk-analysis-html: Adding more LaTeX/text -> HTML conversion
replacements, including better <,> handling.
2012-02-26 Andy Buckley <andy.buckley@cern.ch>
* Adding CMS_2011_S8973270, CMS_2011_S8941262, CMS_2011_S9215166,
CMS_QCD_10_024, from CMS.
* Adding LHCB_2011_I917009 analysis, from Alex Grecu.
* src/Core/Analysis.cc, include/Rivet/Analysis.hh: Add a numeric-arg version of histoPath().
2012-02-24 Holger Schulz <holger.schulz@physik.hu-berlin.de>
* Adding ATLAS Ks/Lambda analysis.
2012-02-20 Andy Buckley <andy.buckley@cern.ch>
* src/Analyses/ATLAS_2011_I925932.cc: Using new overflow-aware
normalize() in place of counters and scale(..., 1/count)
2012-02-14 Andy Buckley <andy.buckley@cern.ch>
* Splitting MC_GENERIC to put the PDF and PID plotting into
MC_PDFS and MC_IDENTIFIED respectively.
* Renaming MC_LEADINGJETS to MC_LEADJETUE.
2012-02-14 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* DELPHI_1996_S3430090 and ALEPH_1996_S3486095:
fix rapidity vs {Thrust,Sphericity}-axis.
2012-02-14 Andy Buckley <andy.buckley@cern.ch>
* bin/compare-histos: Don't attempt to remove bins from MC histos
where they aren't found in the ref file, if the ref file is not
expt data, or if the new --no-rmgapbins arg is given.
* bin/rivet: Remove the conversion of requested analysis names to
upper-case: mixed-case analysis names will now work.
2012-02-14 Frank Siegert <frank.siegert@cern.ch>
* Bugfixes and improvements for MC_TTBAR:
- Avoid assert failure with logspace starting at 0.0
- Ignore charged lepton in jet finding (otherwise jet multi is always
+1).
- Add some dR/deta/dphi distributions as noted in TODO
- Change pT plots to logspace as well (to avoid low-stat high pT bins)
2012-02-10 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* rivet-mkhtml -c option now has the semantics of a .plot
file. The contents are appended to the dat output by
compare-histos.
2012-02-09 David Grellscheid <david.grellscheid@durham.ac.uk>
* Fixed broken UnstableFS behaviour
2012-01-25 Frank Siegert <frank.siegert@cern.ch>
* Improvements in make-plots:
- Add PlotTickLabels and RatioPlotTickLabels options (cf.
make-plots.txt)
- Make ErrorBars and ErrorBands non-exclusive (and change
their order, such that Bars are on top of Bands)
2012-01-25 Holger Schulz <holger.schulz@physik.hu-berlin.de>
* Add ATLAS diffractive gap analysis
2012-01-23 Andy Buckley <andy.buckley@cern.ch>
* bin/rivet: When using --list-analyses, the analysis summary is
now printed out when log level is <= INFO, rather than < INFO.
The effect on command line behaviour is that useful identifying
info is now printed by default when using --list-analyses, rather
than requiring --list-analyses -v. To get the old behaviour,
e.g. if using the output of rivet --list-analyses for scripting,
now use --list-analyses -q.
2012-01-22 Andy Buckley <andy.buckley@cern.ch>
* Tidying lighthisto, including fixing the order in which +- error
values are passed to the Bin constructor in fromFlatHisto.
2012-01-16 Frank Siegert <frank.siegert@cern.ch>
* Bugfix in ATLAS_2012_I1083318: Include non-signal neutrinos in
jet clustering.
* Add first version of ATLAS_2012_I1083318 (W+jets). Still
UNVALIDATED until final happiness with validation plots arises and
data is in HepData.
* Bugfix in ATLAS_2010_S8919674: Really use neutrino with highest
pT for Etmiss. Doesn't seem to make very much difference, but is
more correct in principle.
2012-01-16 Peter Richardson <peter.richardson@durham.ac.uk>
* Fixes to ATLAS_2011_S9225137 to include reference data
2012-01-13 Holger Schulz <holger.schulz@physik.hu-berlin.de>
* Add ATLAS inclusive lepton analysis
2012-01-12 Hendrik Hoeth <hendrik.hoeth@durham.ac.uk>
* Font selection support in rivet-mkhtml
2012-01-11 Peter Richardson <peter.richardson@durham.ac.uk>
* Added pi0 to list of particles.
2012-01-11 Andy Buckley <andy.buckley@cern.ch>
* Removing references to Boost random numbers.
2011-12-30 Andy Buckley <andy.buckley@cern.ch>
* Adding a placeholder rivet-which script (not currently
installed).
* Tweaking to avoid a very time-consuming debug printout in
compare-histos with the -v flag, and modifying the Rivet env vars
in rivet-mkhtml before calling compare-histos to eliminate
problems induced by relative paths (i.e. "." does not mean the
same thing when the directory is changed within the script).
2011-12-12 Andy Buckley <andy.buckley@cern.ch>
* Adding a command line completion function for rivet-mkhtml.
2011-12-12 Frank Siegert <frank.siegert@cern.ch>
* Fix for factor of 2.0 in normalisation of CMS_2011_S9086218
* Add --ignore-missing option to rivet-mkhtml to ignore non-existing
AIDA files.
2011-12-06 Andy Buckley <andy.buckley@cern.ch>
* Include underflow and overflow bins in the normalisation when
calling Analysis::normalise(h)
2011-11-23 Andy Buckley <andy.buckley@cern.ch>
* Bumping version to 1.8.0alpha0 since the Jet interface changes
are quite a major break with backward compatibility (although the
vast majority of analyses should be unaffected).
* Removing crufty legacy stuff from the Jet class -- there is
never any ambiguity between whether Particle or FourMomentum
objects are the constituents now, and the jet 4-momentum is set
explicitly by the jet alg so that e.g. there is no mismatch if the
FastJet pt recombination scheme is used.
* Adding default do-nothing implementations of Analysis::init()
and Analysis::finalize(), since it is possible for analysis
implementations to not need to do anything in these methods, and
forcing analysis authors to write do-nothing boilerplate code is
not "the Rivet way"!
2011-11-19 Andy Buckley <andy.buckley@cern.ch>
* Adding variant constructors to FastJets with a more natural
Plugin* argument, and decrufting the constructor implementations a
bit.
* bin/rivet: Adding a more helpful error message if the rivet
module can't be loaded, grouping the option parser options,
removing the -A option (this just doesn't seem useful anymore),
and providing a --pwd option as a shortcut to append "." to the
search path.
2011-11-18 Andy Buckley <andy.buckley@cern.ch>
* Adding a guide to compiling a new analysis template to the
output message of rivet-mkanalysis.
2011-11-11 Andy Buckley <andy.buckley@cern.ch>
* Version 1.7.0 release!
* Protecting the OPAL 2004 analysis against NaNs in the
hemispheres projection -- I can't track the origin of these and
suspect some occasional memory corruption.
2011-11-09 Andy Buckley <andy@insectnation.org>
* Renaming source files for EXAMPLE and
PDG_HADRON_MULTIPLICITIES(_RATIOS) analyses to match the analysis
names.
* Cosmetic fixes in ATLAS_2011_S9212183 SUSY analysis.
* Adding new ATLAS W pT analysis from Elena Yatsenko (slightly adapted).
2011-10-20 Frank Siegert <frank.siegert@cern.ch>
* Extend API of W/ZFinder to allow for specification of input final
state in which to search for leptons/photons.
2011-10-19 Andy Buckley <andy@insectnation.org>
* Adding new version of LHCB_2010_S8758301, based on submission
from Alex Grecu. There is some slightly dodgy-looking GenParticle*
fiddling going on, but apparently it's necessary (and hopefully robust).
2011-10-17 Andy Buckley <andy@insectnation.org>
* bin/rivet-nopy linker line tweak to make compilation work with
GCC 4.6 (-lHepMC has to be explicitly added for some reason).
2011-10-13 Frank Siegert <frank.siegert@cern.ch>
* Add four CMS QCD analyses kindly provided by CMS.
2011-10-12 Andy Buckley <andy@insectnation.org>
* Adding a separate test program for non-matrix/vector math
functions, and adding a new set of int/float literal arg tests for
the inRange functions in it.
* Adding a jet multiplicity plot for jets with pT > 30 GeV to
MC_TTBAR.
2011-10-11 Andy Buckley <andy@insectnation.org>
* Removing SVertex.
2011-10-11 James Monk <jmonk@cern.ch>
* root2flat was missing the first bin (plus spurious last bin)
* My version of bash does not understand the pipe syntax |& in rivet-buildplugin
2011-09-30 James Monk <jmonk@cern.ch>
* Fix bug in ATLAS_2010_S8817804 that misidentified the akt4 jets
as akt6
2011-09-29 Andy Buckley <andy@insectnation.org>
* Converting FinalStateHCM to a slightly more general
DISFinalState.
2011-09-26 Andy Buckley <andy@insectnation.org>
* Adding a default libname argument to rivet-buildplugin. If the
first argument doesn't have a .so library suffix, then use
RivetAnalysis.so as the default.
2011-09-19 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* make-plots: Fixing regex for \physicscoor. Adding "FrameColor"
option.
2011-09-17 Andy Buckley <andy@insectnation.org>
* Improving interactive metadata printout, by not printing
headings for missing info.
* Bumping the release number to 1.7.0alpha0, since with these
SPIRES/Inspire changes and the MissingMomentum API change we need
more than a minor release.
* Updating the mkanalysis, BibTeX-grabbing and other places that
care about analysis SPIRES IDs to also be able to handle the new
Inspire system record IDs. The missing link is getting to HepData
from an Inspire code...
* Using the .info file rather than an in-code declaration to
specify that an analysis needs cross-section information.
* Adding Inspire support to the AnalysisInfo and Analysis
interfaces. Maybe we can find a way to combine the two,
e.g. return the SPIRES code prefixed with an "S" if no Inspire ID
is available...
2011-09-17 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added ALICE_2011_S8909580 (strange particle production at 900 GeV)
* Feed-down correction in ALICE_2011_S8945144
2011-09-16 Andy Buckley <andy@insectnation.org>
* Adding ATLAS track jet analysis, modified from the version
provided by Seth Zenz: ATLAS_2011_I919017. Note that this analysis
is currently using the Inspire ID rather than the Spires one:
we're clearly going to have to update the API to handle Inspire
codes, so might as well start now...
2011-09-14 Andy Buckley <andy@insectnation.org>
* Adding the ATLAS Z pT measurement at 7 TeV (ATLAS_2011_S9131140)
and an MC analysis for VH->bb events (MC_VH2BB).
2011-09-12 Andy Buckley <andy@insectnation.org>
* Removing uses of getLog, cout, cerr, and endl from all standard
analyses and projections, except in a very few special cases.
2011-09-10 Andy Buckley <andy@insectnation.org>
* Changing the behaviour and interface of the MissingMomentum
projection to calculate vector ET correctly. This was previously
calculated according to the common definition of -E*sin(theta) of
the summed visible 4-momentum in the event, but that is incorrect
because the timelike term grows monotonically. Instead, transverse
2-vectors of size ET need to be constructed for each visible
particle, and vector-summed in the transverse plane.
The rewrite of this behaviour made it opportune to make an API
improvement: the previous method names scalarET/vectorET() have
been renamed to scalar/vectorEt() to better match the Rivet
FourMomentum::Et() method, and MissingMomentum::vectorEt() now
returns a Vector3 rather than a double so that the transverse
missing Et direction is also available.
Only one data analysis has been affected by this change in
behaviour: the D0_2004_S5992206 dijet delta(phi) analysis. It's
expected that this change will not be very significant, as it is
a *veto* on significant missing ET to reduce non-QCD
contributions. MC studies using this analysis ~always run with QCD
events only, so these contributions should be small. The analysis
efficiency may have been greatly improved, as fewer events will
now fail the missing ET veto cut.
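The corrected construction described above (per-particle transverse 2-vectors of size ET, vector-summed in the transverse plane) can be illustrated with a standalone sketch. This is illustrative Python only, not the actual MissingMomentum code, and the (px, py, pz, E) tuple convention is an assumption of the sketch:

```python
import math

def vector_et(particles):
    """Vector-sum per-particle transverse 2-vectors of magnitude Et.

    particles: iterable of (px, py, pz, E) tuples.
    Returns (sum_x, sum_y); its magnitude is the missing-ET estimate.
    """
    sx = sy = 0.0
    for px, py, pz, e in particles:
        pt = math.hypot(px, py)
        if pt == 0.0:
            continue  # purely longitudinal particle: no transverse component
        p = math.sqrt(pt * pt + pz * pz)
        et = e * pt / p  # per-particle Et = E * sin(theta)
        sx += et * px / pt
        sy += et * py / pt
    return sx, sy
```

Two back-to-back particles cancel exactly in this sum, whereas the old -E*sin(theta) of the *summed* four-momentum would not, because the timelike term of the sum grows monotonically.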
* Add sorting of the ParticleVector returned by the ChargedLeptons
projection.
* configure.ac: Adding a check to make sure that no-one tries to
install into --prefix=$PWD.
2011-09-04 Andy Buckley <andy@insectnation.org>
* lighthisto fixes from Christian Roehr.
2011-08-26 Andy Buckley <andy@insectnation.org>
* Removing deprecated features: the setBeams(...) method on
Analysis, the MaxRapidity constant, the split(...) function, the
default init() method from AnalysisHandler and its test, and the
deprecated TotalVisibleMomentum and PVertex projections.
2011-08-23 Andy Buckley <andy@insectnation.org>
* Adding a new DECLARE_RIVET_PLUGIN wrapper macro to hide the
details of the plugin hook system from analysis authors. Migration
of all analyses and the rivet-mkanalysis script to use this as the
standard plugin hook syntax.
* Also call the --cflags option on root-config when using the
--root option with rivet-buildplugin (thanks to Richard Corke for
the report)
2011-08-23 Frank Siegert <frank.siegert@cern.ch>
* Added ATLAS_2011_S9126244
* Added ATLAS_2011_S9128077
2011-08-23 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added ALICE_2011_S8945144
* Remove obsolete setBeams() from the analyses
* Update CMS_2011_S8957746 reference data to the official numbers
* Use Inspire rather than Spires.
2011-08-19 Frank Siegert <frank.siegert@cern.ch>
* More NLO parton level generator friendliness: Don't crash or fail when
there are no beam particles.
* Add --ignore-beams option to skip compatibility check.
2011-08-09 David Mallows <dave.mallows@gmail.com>
* Fix aida2flat to ignore empty dataPointSet
2011-08-07 Andy Buckley <andy@insectnation.org>
* Adding TEXINPUTS and LATEXINPUTS prepend definitions to the
variables provided by rivetenv.(c)sh. A manual setting of these
variables that didn't include the Rivet TEXMFHOME path was
breaking make-plots on lxplus, presumably since the system LaTeX
packages are so old there.
2011-08-02 Frank Siegert <frank.siegert@cern.ch>
* Version 1.6.0 release!
2011-08-01 Frank Siegert <frank.siegert@cern.ch>
* Overhaul of the WFinder and ZFinder projections, including a change
of interface. This solves potential problems with leptons which are not
W/Z constituents being excluded from the RemainingFinalState.
2011-07-29 Andy Buckley <andy@insectnation.org>
* Version 1.5.2 release!
* New version of aida2root from James Monk.
2011-07-29 Frank Siegert <frank.siegert@cern.ch>
* Fix implementation of --config file option in make-plots.
2011-07-27 David Mallows <dave.mallows@gmail.com>
* Updated MC_TTBAR.plot to reflect updated analysis.
2011-07-25 Andy Buckley <andy@insectnation.org>
* Adding a useTransverseMass flag method and implementation to
InvMassFinalState, and using it in the WFinder, after feedback
from Gavin Hesketh. This was the neatest way I could do it :S Some
other tidying up happened along the way.
* Adding transverse mass massT and massT2 methods and functions
for FourMomentum.
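The conventional transverse-mass definition behind such methods, in the massless approximation ET = pT, is mT^2 = (ET1 + ET2)^2 - |pTvec1 + pTvec2|^2. A hypothetical standalone sketch (not the FourMomentum code itself):

```python
import math

def mass_t(pt1, phi1, pt2, phi2):
    """Transverse mass of two massless objects given (pT, phi) pairs."""
    # Transverse-plane vector sum of the two momenta
    ex = pt1 * math.cos(phi1) + pt2 * math.cos(phi2)
    ey = pt1 * math.sin(phi1) + pt2 * math.sin(phi2)
    mt2 = (pt1 + pt2) ** 2 - (ex * ex + ey * ey)
    return math.sqrt(max(mt2, 0.0))  # clamp tiny negative rounding errors
```

For back-to-back objects of equal pT this reduces to mT = 2*pT, the Jacobian-peak endpoint familiar from W-mass fits.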
2011-07-22 Frank Siegert <frank.siegert@cern.ch>
* Added ATLAS_2011_S9120807
* Add two more observables to MC_DIPHOTON and make its isolation cut
more LHC-like
* Add linear photon pT histo to MC_PHOTONJETS
2011-07-20 Andy Buckley <andy@insectnation.org>
* Making MC_TTBAR work with semileptonic ttbar events and generally
tidying the code.
2011-07-19 Andy Buckley <andy@insectnation.org>
* Version bump to 1.5.2.b01 in preparation for a release in the
very near future.
2011-07-18 David Mallows <dave.mallows@gmail.com>
* Replaced MC_TTBAR: Added t,tbar reconstruction. Not yet working.
2011-07-18 Andy Buckley <andy@insectnation.org>
* bin/rivet-buildplugin.in: Pass the AM_CXXFLAGS
variable (including the warning flags) to the C++ compiler when
building user analysis plugins.
* include/LWH/DataPointSet.h: Fix accidental setting of errorMinus
= scalefactor * error_Plus_. Thanks to Anton Karneyeu for the bug
report!
2011-07-18 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added CMS_2011_S8884919 (charged hadron multiplicity in NSD
events corrected to pT>=0).
* Added CMS_2010_S8656010 and CMS_2010_S8547297 (charged
hadron pT and eta in NSD events)
* Added CMS_2011_S8968497 (chi_dijet)
* Added CMS_2011_S8978280 (strangeness)
2011-07-13 Andy Buckley <andy@insectnation.org>
* Rivet PDF manual updates, to not spread disinformation about
bootstrapping a Genser repo.
2011-07-12 Andy Buckley <andy@insectnation.org>
* bin/make-plots: Protect property reading against unstripped \r
characters from DOS newlines.
* bin/rivet-mkhtml: Add a -M unmatch regex flag (note that these
are matching the analysis path rather than individual histos on
this script), and speed up the initial analysis identification and
selection by avoiding loops of regex comparisons for repeats of
strings which have already been analysed.
* bin/compare-histos: remove the completely (?) unused histogram
list, and add -m and -M regex flags, as for aida2flat and
flat2aida.
2011-06-30 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* fix fromFlat() in lighthisto: It would ignore histogram paths
before.
* flat2aida: preserve histogram order from .dat files
2011-06-27 Andy Buckley <andy@insectnation.org>
* pyext/setup.py.in: Use CXXFLAGS and LDFLAGS safely in the Python
extension build, and improve the use of build/src directory
arguments.
2011-06-23 Andy Buckley <andy@insectnation.org>
* Adding a tentative rivet-updateanalyses script, based on
lhapdf-getdata, which will download new analyses as requested. We
could change our analysis-providing behaviour a bit to allow this
sort of delivery mechanism to be used as the normal way of getting
analysis updates without us having to make a whole new Rivet
release. It is nice to be able to identify analyses with releases,
though, for tracking whether bugs have been addressed.
2011-06-10 Frank Siegert <frank.siegert@cern.ch>
* Bugfixes in WFinder.
2011-06-10 Andy Buckley <andy@insectnation.org>
* Adding \physicsxcoor and \physicsycoor treatment to make-plots.
2011-06-06 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Allow for negative cross-sections. NLO tools need this.
* make-plots: For RatioPlotMode=deviation also consider the MC
uncertainties, not just data.
2011-06-04 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Add support for goodness-of-fit calculations to make-plots.
The results are shown in the legend, and one histogram can
be selected to determine the color of the plot margin. See
the documentation for more details.
2011-06-04 Andy Buckley <andy@insectnation.org>
* Adding auto conversion of Histogram2D to DataPointSets in the
AnalysisHandler _normalizeTree method.
2011-06-03 Andy Buckley <andy@insectnation.org>
* Adding a file-weight feature to the Run object, which will
optionally rescale the weights in the provided HepMC files. This
should be useful for e.g. running on multiple differently-weighted
AlpGen HepMC files/streams. The new functionality is used by the
rivet command via an optional weight appended to the filename with
a colon delimiter, e.g. "rivet fifo1.hepmc fifo2.hepmc:2.31"
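The optional colon-delimited weight could be split from the filename roughly like this (a hypothetical parsing sketch, not the actual rivet script code):

```python
def parse_weighted_filename(arg):
    """Split 'events.hepmc:WEIGHT' into (path, weight); weight defaults to 1.0."""
    path, sep, tail = arg.rpartition(":")
    if sep:
        try:
            return path, float(tail)
        except ValueError:
            pass  # trailing part is not a number: treat the whole arg as a path
    return arg, 1.0
```

Using rpartition means only the last colon is considered, so paths containing colons still parse sensibly.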
2011-06-01 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Add BeamThrust projection
2011-05-31 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Fix LIBS for fastjet-3.0
* Add basic infrastructure for Taylor plots in make-plots
* Fix OPAL_2004_S6132243: They are using charged+neutral.
* Release 1.5.1
2011-05-22 Andy Buckley <andy@insectnation.org>
* Adding plots of stable and decayed PID multiplicities to
MC_GENERIC (useful for sanity-checking generator setups).
* Removing actually-unused ProjectionApplier.fhh forward
declaration header.
2011-05-20 Andy Buckley <andy@insectnation.org>
* Removing import of ipython shell from rivet-rescale, having just
seen it throw a multi-coloured warning message on a student's
lxplus Rivet session!
* Adding support for the compare-histos --no-ratio flag when using
rivet-mkhtml. Adding --rel-ratio, --linear, etc. is an exercise
for the enthusiast ;-)
2011-05-10 Andy Buckley <andy@insectnation.org>
* Internal minor changes to the ProjectionHandler and
ProjectionApplier interfaces, in particular changing the
ProjectionHandler::create() function to be called getInstance and
to return a reference rather than a pointer. The reference change
is to make way for an improved singleton implementation, which
cannot yet be used due to a bug in projection memory
management. The code of the improved singleton is available, but
commented out, in ProjectionManager.hh to allow for easier
migration and to avoid branching.
2011-05-08 Andy Buckley <andy@insectnation.org>
* Extending flat2aida to be able to read from and write to
stdin/out as for aida2flat, and also eliminating the internal
histo parsing representation in favour of the one in
lighthisto. lighthisto's fromFlat also needed a bit of an
overhaul: it has been extended to parse each histo's chunk of
text (including BEGIN and END lines) in fromFlatHisto, and for
fromFlat to parse a collection of histos from a file, in keeping
with the behaviour of fromDPS/fromAIDA. Merging into Professor is
now needed.
* Extending aida2flat to have a better usage message, to accept
input from stdin for command chaining via pipes, and to be a bit
more sensibly internally structured (although it also now has to
hold all histos in memory before writing out -- that shouldn't be
a problem for anything other than truly huge histo files).
2011-05-04 Andy Buckley <andy@insectnation.org>
* compare-histos: If using --mc-errs style, prefer dotted and
dashdotted line styles to dashed, since dashes are often too long
to be distinguishable from solid lines. Even better might be to
always use a solid line for MC errs style, and to add more colours.
* rivet-mkhtml: use a no-mc-errors drawing style by default,
to match the behaviour of compare-histos, which it calls. The
--no-mc-errs option has been replaced with an --mc-errs option.
2011-05-04 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Ignore duplicate files in compare-histos.
2011-04-25 Andy Buckley <andy@insectnation.org>
* Adding some hadron-specific N and sumET vs. |eta| plots to MC_GENERIC.
* Re-adding an explicit attempt to get the beam particles, since
HepMC's IO_HERWIG seems to not always set them even though it's
meant to.
2011-04-19 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added ATLAS_2011_S9002537 W asymmetry analysis
2011-04-14 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* deltaR, deltaPhi, deltaEta now available in all combinations of
FourVector, FourMomentum, Vector3, doubles. They also accept jets
and particles as arguments now.
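The underlying angular distances are the standard ones; a minimal self-contained sketch of the core formulae (illustrative Python, not the Rivet overload set):

```python
import math

def delta_phi(phi1, phi2):
    """Absolute azimuthal separation, mapped into [0, pi]."""
    d = abs(phi1 - phi2) % (2.0 * math.pi)
    return d if d <= math.pi else 2.0 * math.pi - d

def delta_r(eta1, phi1, eta2, phi2):
    """deltaR = sqrt(deltaEta^2 + deltaPhi^2)."""
    return math.hypot(eta1 - eta2, delta_phi(phi1, phi2))
```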
2011-04-13 David Grellscheid <david.grellscheid@durham.ac.uk>
* added ATLAS 8983313: 0-lepton BSM
2011-04-01 Andy Buckley <andy@insectnation.org>
* bin/rivet-mkanalysis: Don't try to download SPIRES or HepData
info if it's not a standard analysis (i.e. if the SPIRES ID is not
known), and make the default .info file validly parseable by YAML,
which was an unfortunate gotcha for anyone writing a first
analysis.
2011-03-31 Andy Buckley <andy@insectnation.org>
* bin/compare-histos: Write more appropriate ratio plot labels
when not comparing to data, and use the default make-plots labels
when comparing to data.
* bin/rivet-mkhtml: Adding a timestamp to the generated pages, and
a -t/--title option to allow setting the main HTML page title on
the command line: otherwise it becomes impossible to tell these
pages apart when you have a lot of them, except by URL!
2011-03-24 Andy Buckley <andy@insectnation.org>
* bin/aida2flat: Adding a -M option to *exclude* histograms whose
paths match a regex. Writing a negative lookahead regex with
positive matching was far too awkward!
2011-03-18 Leif Lonnblad <Leif.Lonnblad@thep.lu.se>
* src/Core/AnalysisHandler.cc (AnalysisHandler::removeAnalysis):
Fixed strange shared pointer assignment that caused seg-fault.
2011-03-13 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* filling of functions works now in a more intuitive way (I hope).
2011-03-09 Andy Buckley <andy@insectnation.org>
* Version 1.5.0 release!
2011-03-08 Andy Buckley <andy@insectnation.org>
* Adding some extra checks for external packages in make-plots.
2011-03-07 Andy Buckley <andy@insectnation.org>
* Changing the accuracy of the beam energy checking to 1%, to make
the UI a bit more forgiving. It's still best to specify exactly the right
energy of course!
2011-03-01 Andy Buckley <andy@insectnation.org>
* Adding --no-plottitle to compare-histos (+ completion).
* Fixing segfaults in UA1_1990_S2044935 and UA5_1982_S875503.
* Bump ABI version numbers for 1.5.0 release.
* Use AnalysisInfo for storage of the NeedsCrossSection analysis flag.
* Allow field setting in AnalysisInfo.
2011-02-27 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Support LineStyle=dashdotted in make-plots
* New command line option --style for compare-histos. Options are
"default", "bw" and "talk".
* cleaner uninstall
2011-02-26 Andy Buckley <andy@insectnation.org>
* Changing internal storage and return type of Particle::pdgId()
to PdgId, and adding Particle::energy().
* Renaming Analysis::energies() as Analysis::requiredEnergies().
* Adding beam energies into beam consistency checking:
Analysis::isCompatible methods now also require the beam energies
to be provided.
* Removing long-deprecated AnalysisHandler::init() constructor and
AnalysisHandler::removeIncompatibleAnalyses() methods.
2011-02-25 Andy Buckley <andy@insectnation.org>
* Adding --disable-obsolete, which takes its value from the value
of --disable-preliminary by default.
* Replacing RivetUnvalidated and RivetPreliminary plugin libraries
with optionally-configured analysis contents in the
experiment-specific plugin libraries. This avoids issues with
making libraries rebuild consistently when sources were reassigned
between libraries.
2011-02-24 Andy Buckley <andy@insectnation.org>
* Changing analysis plugin registration to fall back through
available paths rather than have RIVET_ANALYSIS_PATH totally
override the built-in paths. The first analysis hook of a given
name to be found is now the one that's used: any duplicates found
will be warned about but unused. getAnalysisLibPaths() now returns
*all* the search paths, in keeping with the new search behaviour.
2011-02-22 Andy Buckley <andy@insectnation.org>
* Moving the definition of the MSG_* macros into the Logging.hh
header. They can't be used everywhere, though, as they depend on
the existence of a this->getLog() method in the call scope. This
move makes them available in e.g. AnalysisHandler and other bits
of framework other than projections and analyses.
* Adding a gentle print-out from the Rivet AnalysisHandler if
preliminary analyses are being used, and strengthening the current
warning if unvalidated analyses are used.
* Adding documentation about the validation "process" and
the (un)validated and preliminary analysis statuses.
* Adding the new RivetPreliminary analysis library, and the
corresponding --disable-preliminary configure flag. Analyses in
this library are subject to change names, histograms, reference
data values, etc. between releases: make sure you check any
dependences on these analyses when upgrading Rivet.
* Change the Python script ref data search behaviours to use Rivet
ref data by default where available, rather than requiring a -R
option. Where relevant, -R is still a valid option, to avoid
breaking legacy scripts, and there is a new --no-rivet-refs option
to turn the default searching *off*.
* Add the prepending and appending optional arguments to the path
searching functions. This will make it easier to combine the
search functions with user-supplied paths in Python scripts.
* Make make-plots killable!
* Adding Rivet version to top of run printout.
* Adding Run::crossSection() and printing out the cross-section in
pb at the end of a Rivet run.
2011-02-22 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Make lighthisto.py aware of 2D histograms
* Adding published versions of the CDF_2008 leading jets and DY
analyses, and marking the preliminary ones as "OBSOLETE".
2011-02-21 Andy Buckley <andy@insectnation.org>
* Adding PDF documentation for path searching and .info/.plot
files, and tidying overfull lines.
* Removing unneeded const declarations from various return by
value path and internal binning functions. Should not affect ABI
compatibility but will force recompilation of external packages
using the RivetPaths.hh and Utils.hh headers.
* Adding findAnalysis*File(fname) functions, to be used by Rivet
scripts and external programs to find files known to Rivet
according to Rivet's (newly standard) lookup rule.
* Changing search path function behaviour to always return *all*
search directories rather than overriding the built-in locations
if the environment variables are set.
2011-02-20 Andy Buckley <andy@insectnation.org>
* Adding the ATLAS 2011 transverse jet shapes analysis.
2011-02-18 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Support for transparency in make-plots
2011-02-18 Frank Siegert <frank.siegert@cern.ch>
* Added ATLAS prompt photon analysis ATLAS_2010_S8914702
2011-02-10 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Simple NOOP constructor for Thrust projection
* Add CMS event shape analysis. Data read off the plots. We
will get the final numbers when the paper is accepted by
the journal.
2011-02-10 Frank Siegert <frank.siegert@cern.ch>
* Add final version of ATLAS dijet azimuthal decorrelation
2011-02-10 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* remove ATLAS conf note analyses for which we have final data
* reshuffle histograms in ATLAS minbias analysis to match HepData
* small pT-cut fix in ATLAS track based UE analysis
2011-01-31 Andy Buckley <andy@insectnation.org>
* Doc tweaks and adding cmp-by-|p| functions for Jets, to match
those added by Hendrik for Particles.
* Don't sum photons around muons in the D0 2010 Z pT analysis.
2011-01-27 Andy Buckley <andy@insectnation.org>
* Adding ATLAS 2010 min bias and underlying event analyses and data.
2011-01-23 Andy Buckley <andy@insectnation.org>
* Make make-plots write out PDF rather than PS by default.
2011-01-12 Andy Buckley <andy@insectnation.org>
* Fix several rendering and comparison bugs in rivet-mkhtml.
* Allow make-plots to write into an existing directory, at the
user's own risk.
* Make rivet-mkhtml produce PDF-based output rather than PS by
default (most people want PDF these days). Can we do the same
change of default for make-plots?
* Add getAnalysisPlotPaths() function, and use it in compare-histos
* Use proper .info file search path function internally in AnalysisInfo::make.
2011-01-11 Andy Buckley <andy@insectnation.org>
* Clean up ATLAS dijet analysis.
2010-12-30 Andy Buckley <andy@insectnation.org>
* Adding a run timeout option, and small bug-fixes to the event
timeout handling, and making first event timeout work nicely with
the run timeout. Run timeout is intended to be used in conjunction
with timed batch token expiry, of the type that likes to make 0
byte AIDA files on LCG when Grid proxies time out.
2010-12-21 Andy Buckley <andy@insectnation.org>
* Fix the cuts in the CDF 1994 colour coherence analysis.
2010-12-19 Andy Buckley <andy@insectnation.org>
* Fixing CDF midpoint cone jet algorithm default construction to
have an overlap threshold of 0.5 rather than 0.75. This was
recommended by the FastJet manual, and noticed while adding the
ATLAS and CMS cones.
* Adding ATLAS and CMS old iterative cones as "official" FastJets
constructor options (they could always have been used by explicit
instantiation and attachment of a Fastjet plugin object).
* Removing defunct and unused ClosestJetShape projection.
2010-12-16 Andy Buckley <andy@insectnation.org>
* bin/compare-histos, pyext/lighthisto.py: Take ref paths from
rivet module API rather than getting the environment by hand.
* pyext/lighthisto.py: Only read .plot info from the first
matching file (speed-up compare-histos).
2010-12-14 Andy Buckley <andy@insectnation.org>
* Augmenting the physics vector functionality to make FourMomentum
support maths operators with the correct return type (FourMomentum
rather than FourVector).
2010-12-11 Andy Buckley <andy@insectnation.org>
* Adding a --event-timeout option to control the event timeout,
adding it to the completion script, and making sure that the init
time check is turned OFF once successful!
* Adding a 3600-second timeout for initialising an event file. If
it takes longer than (or anywhere close to) this long, chances are
that the event source is inactive for some reason (perhaps
accidentally unspecified and stdin is not active, or the event
generator has died at the other end of the pipe). The reason for
not making it something shorter is that e.g. Herwig++ or Sherpa
can have long initialisation times to set up the MPI handler or to
run the matrix element integration. A timeout after an hour is
still better than a batch job which runs for two days before you
realise that you forgot to generate any events!
2010-12-10 Andy Buckley <andy@insectnation.org>
* Fixing unbooked-histo segfault in UA1_1990_S2044935 at 63 GeV.
2010-12-08 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Fixes in ATLAS_2010_CONF_083, declaring it validated
* Added ATLAS_2010_CONF_046, only two plots are implemented.
The paper will be out soon, and we don't need the other plots
right now. Data is read off the plots in the note.
* New option "SmoothLine" for HISTOGRAM sections in make-plots
* Changed CustomTicks to CustomMajorTicks and added CustomMinorTicks
in make-plots.
2010-12-07 Andy Buckley <andy@insectnation.org>
* Update the documentation to explain this latest bump to path
lookup behaviours.
* Various improvements to existing path lookups. In particular,
the analysis lib path locations are added to the info and ref
paths to avoid having to set three variables when you have all
three file types in the same personal plugin directory.
* Adding setAnalysisLibPaths and addAnalysisLibPath
functions. rivet --analysis-path{,-append} now use these and work
correctly. Hurrah!
* Add --show-analyses as an alias for --show-analysis, following a
comment at the ATLAS tutorial.
2010-12-07 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Change LegendXPos behaviour in make-plots. Now the top left
corner of the legend is used as anchor point.
2010-12-03 Andy Buckley <andy@insectnation.org>
* 1.4.0 release.
* Add bin-skipping to compare-histos to avoid one use of
rivet-rmgaps (it's still needed for non-plotting post-processing
like Professor).
2010-12-03 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Fix normalisation issues in UA5 and ALEPH analyses
2010-11-27 Andy Buckley <andy@insectnation.org>
* MathUtils.hh: Adding fuzzyGtrEquals and fuzzyLessEquals, and
tidying up the math utils collection a bit.
* CDF 1994 colour coherence analysis overhauled and
correction/norm factors fixed. Moved to VALIDATED status.
* Adding programmable completion for aida2flat and flat2aida.
* Improvements to programmable completion using the neat _filedir
completion shell function which I just discovered.
2010-11-26 Andy Buckley <andy@insectnation.org>
* Tweak to floating point inRange to use fuzzyEquals for CLOSED
interval equality comparisons.
* Some BibTeX generation improvements, and fixing the ATLAS dijet
BibTeX key.
* Resolution upgrade in PNG make-plots output.
* CDF_2005_S6217184.cc, CDF_2008_S7782535.cc: Updates to use the
new per-jet JetAlg interface (and some other fixes).
* JetAlg.cc: Changed the interface on request to return per-jet
rather than per-event jet shapes, with an extra jet index argument.
* MathUtils.hh: Adding index_between(...) function, which is handy
for working out which bin a value falls in, given a set of bin edges.
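The fuzzy floating-point comparisons and the index_between(...) bin-lookup helper described in the entries above can be sketched in plain C++ roughly as follows (illustrative names, signatures and tolerances only, not the actual Rivet MathUtils code):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Tolerant floating-point equality: a and b agree to within a relative
// tolerance, with an absolute fallback for values near zero.
inline bool fuzzyEquals(double a, double b, double tol = 1e-5) {
  const double absavg = (std::fabs(a) + std::fabs(b)) / 2.0;
  return std::fabs(a - b) <= tol * std::max(1.0, absavg);
}

// Closed-interval range check using fuzzyEquals at the edges, so a value
// sitting exactly on (or a rounding error away from) a boundary still
// counts as inside.
inline bool inRange(double x, double lo, double hi) {
  return (x > lo && x < hi) || fuzzyEquals(x, lo) || fuzzyEquals(x, hi);
}

// Index of the bin (between consecutive edges) that x falls into,
// or -1 if x lies outside the edge range.
inline int index_between(double x, const std::vector<double>& edges) {
  for (size_t i = 0; i + 1 < edges.size(); ++i) {
    if (x >= edges[i] && x < edges[i+1]) return static_cast<int>(i);
  }
  return -1;
}
```

The closed-interval behaviour is the point of the inRange tweak mentioned above: a plain `<=` comparison can misclassify a value that is one ulp off a bin edge.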
2010-11-25 Andy Buckley <andy@insectnation.org>
* Cmp.hh: Adding ASC/DESC (and ANTISORTED) as preferred
non-EQUIVALENT enum value synonyms over misleading
SORTED/UNSORTED.
* Change of rapidity scheme enum name to RapScheme
* Reworking JetShape a bit further: constructor args now avoid
inconsistencies (it was previously possible to define incompatible
range-ends and interval). Internal binning implementation also
reworked to use a vector of bin edges: the bin details are
available via the interface. The general jet pT cuts can be
applied via the JetShape constructor.
* MathUtils.hh: Adding linspace and logspace utility
functions. Useful for defining binnings.
* Adding more general cuts on jet pT and (pseudo)rapidity.
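The linspace/logspace utilities mentioned above can be sketched like this (a minimal illustration of the idea, not the actual Rivet signatures):

```cpp
#include <cmath>
#include <vector>

// Evenly spaced bin edges: n bins means n+1 edges from lo to hi inclusive.
inline std::vector<double> linspace(size_t nbins, double lo, double hi) {
  std::vector<double> edges(nbins + 1);
  for (size_t i = 0; i <= nbins; ++i)
    edges[i] = lo + (hi - lo) * double(i) / double(nbins);
  return edges;
}

// Log-spaced bin edges: linear spacing in log(x), handy for steeply
// falling spectra. Requires lo, hi > 0.
inline std::vector<double> logspace(size_t nbins, double lo, double hi) {
  std::vector<double> edges = linspace(nbins, std::log(lo), std::log(hi));
  for (double& e : edges) e = std::exp(e);
  return edges;
}
```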
2010-11-11 Andy Buckley <andy@insectnation.org>
* Adding special handling of FourMomentum::mass() for computed
zero-mass vectors for which mass2 can go (very slightly) negative
due to numerical precision.
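The issue above is that m^2 = E^2 - |p|^2 can come out very slightly negative for an analytically massless vector, purely from floating-point cancellation, and sqrt of that is NaN. A guard along these lines (hypothetical standalone helper, not the actual FourMomentum code) shows the idea:

```cpp
#include <cmath>

// Clamp a precision-level negative mass-squared to zero before taking the
// square root; a genuinely negative m2 (beyond rounding error) still falls
// through to sqrt and signals a real problem via NaN.
inline double safeMass(double E, double px, double py, double pz) {
  const double m2 = E*E - (px*px + py*py + pz*pz);
  if (m2 < 0 && std::fabs(m2) < 1e-6 * E*E) return 0.0;
  return std::sqrt(m2);
}
```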
2010-11-10 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Adding ATLAS-CONF-2010-083 conference note. Data is read from plots.
When I run Pythia 6 the bins close to pi/2 are higher than in the note,
so I call this "unvalidated". But then ... the note doesn't specify
a tune or even just a version for the generators in the plots. Not even
if they used Pythia 6 or Pythia 8. Probably 6, since they mention AGILe.
2010-11-10 Andy Buckley <andy@insectnation.org>
* Adding a JetAlg::useInvisibles(bool) mechanism to allow ATLAS
jet studies to include neutrinos. Anyone who chooses to use this
mechanism had better be careful to remove hard neutrinos manually
in the provided FinalState object.
2010-11-09 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Adding ATLAS-CONF-2010-049 conference note. Data is read from plots.
Fragmentation functions look good, but I can't reproduce the MC lines
(or even the relative differences between them) in the jet cross-section
plots. So consider those unvalidated for now. Oh, and it seems ATLAS
screwed up the error bands in their ratio plots, too. They are
upside-down.
2010-11-07 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Adding ATLAS-CONF-2010-081 conference note. Data is read from plots.
2010-11-06 Andy Buckley <andy@insectnation.org>
* Deprecating the old JetShape projection and renaming to
ClosestJetShape: the algorithm has a tenuous relationship with
that actually used in the CDF (and ATLAS) jet shape analyses. CDF
analyses to be migrated to the new JetShape projection... and some
of that projection's features, design elements, etc. to be
finished off: we may as well take this opportunity to clear up
what was one of our nastiest pieces of code.
2010-11-05 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Adding ATLAS-CONF-2010-031 conference note. Data is read from plots.
2010-10-29 Andy Buckley <andy@insectnation.org>
* Making rivet-buildplugin use the same C++ compiler and CXXFLAGS
variable as used for the Rivet system build.
* Fixing NeutralFinalState projection to, erm, actually select
neutral particles (by Hendrik).
* Allow passing a general FinalState reference to the JetShape
projection, rather than requiring a VetoedFS.
2010-10-07 Andy Buckley <andy@insectnation.org>
* Adding a --with-root flag to rivet-buildplugin to add
root-config --libs flags to the plugin build command.
2010-09-24 Andy Buckley <andy@insectnation.org>
* Releasing as Rivet 1.3.0.
* Bundling underscore.sty to fix problems with running make-plots
on dat files generated by compare-histos from AIDA files with
underscores in their names.
2010-09-16 Andy Buckley <andy@insectnation.org>
* Fix error in N_effective definition for weighted profile errors.
2010-08-18 Andy Buckley <andy@insectnation.org>
* Adding MC_GENERIC analysis. NB. Frank Siegert also added MC_HJETS.
2010-08-03 Andy Buckley <andy@insectnation.org>
* Fixing compare-histos treatment of what is now a ref file, and
speeding things up... again. What a mess!
2010-08-02 Andy Buckley <andy@insectnation.org>
* Adding rivet-nopy: a super-simple Rivet C++ command line
interface which avoids Python to make profiling and debugging
easier.
* Adding graceful exception handling to the AnalysisHandler event
loop methods.
* Changing compare-histos behaviour to always show plots for which
there is at least one MC histo. The default behaviour should now
be the correct one in 99% of use cases.
2010-07-30 Andy Buckley <andy@insectnation.org>
* Merging in a fix for shared_ptrs not being compared for
insertion into a set based on raw pointer value.
2010-07-16 Andy Buckley <andy@insectnation.org>
* Adding an explicit library dependency declaration on libHepMC,
and hence removing the -lHepMC from the rivet-config --libs
output.
2010-07-14 Andy Buckley <andy@insectnation.org>
* Adding a manual section on use of Rivet (and AGILe) as
libraries, and how to use the -config scripts to aid compilation.
* FastJets projection now allows setting of a jet area definition,
plus a hacky mapping for getting the area-enabled cluster
sequence. Requested by Pavel Starovoitov & Paolo Francavilla.
* Lots of script updates in last two weeks!
2010-06-30 Andy Buckley <andy@insectnation.org>
* Minimising amount of Log class mapped into SWIG.
* Making Python ext build checks fail with error rather than
warning if it has been requested (or, rather, not explicitly
disabled).
2010-06-28 Andy Buckley <andy@insectnation.org>
* Converting rivet Python module to be a package, with the dlopen
flag setting etc. done around the SWIG generated core wrapper
module (rivet.rivetwrap).
* Requiring Python >= 2.4.0 in rivet scripts (and adding a Python
version checker function to rivet module)
* Adding --epspng option to make-plots (and converting to use subprocess.Popen).
2010-06-27 Andy Buckley <andy@insectnation.org>
* Converting JADE_OPAL analysis to use the fastjet
exclusive_ymerge_*max* function, rather than just
exclusive_ymerge: everything looks good now. It seems that fastjet
>= 2.4.2 is needed for this to work properly.
2010-06-24 Andy Buckley <andy@insectnation.org>
* Making rivet-buildplugin look in its own bin directory when
trying to find rivet-config.
2010-06-23 Andy Buckley <andy@insectnation.org>
* Adding protection and warning about numerical precision issues
in jet mass calculation/histogramming to the MC_JetAnalysis
analysis.
* Numerical precision improvement in calculation of
Vector4::mass2.
* Adding relative scale ratio plot flag to compare-histos
* Extended command completion to rivet-config, compare-histos, and
make-plots.
* Providing protected log messaging macros,
MSG_{TRACE,DEBUG,INFO,WARNING,ERROR} cf. Athena.
* Adding environment-aware functions for Rivet search path list access.
2010-06-21 Andy Buckley <andy@insectnation.org>
* Using .info file beam ID and energy info in HTML and LaTeX documentation.
* Using .info file beam ID and energy info in command-line printout.
* Fixing a couple of references to temporary variables in the
analysis beam info, which had been introduced during refactoring:
have reinstated reference-type returns as the more efficient
solution. This should not affect API compatibility.
* Making SWIG configure-time check include testing for
incompatibilities with the C++ compiler (re. the recurring _const_
char* literals issue).
* Various tweaks to scripts: make-plots and compare-histos
processes are now renamed (on Linux), rivet-config is avoided when
computing the Rivet version, and RIVET_REF_PATH is also set using
the rivet --analysis-path* flags. compare-histos now uses multiple
ref data paths for .aida file globbing.
* Hendrik changed VetoedFinalState comparison to always return
UNDEFINED if vetoing on the results of other FS projections is
being used. This is the only simple way to avoid problems
emanating from the remainingFinalState thing.
2010-06-19 Andy Buckley <andy@insectnation.org>
* Adding --analysis-path and --analysis-path-append command-line
flags to the rivet script, as a "persistent" way to set or extend
the RIVET_ANALYSIS_PATH variable.
* Changing -Q/-V script verbosity arguments to more standard
-q/-v, after Hendrik moaned about it ;)
* Small fix to TinyXML operator precedence: removes a warning,
and I think fixes a small bug.
* Adding plotinfo entries for new jet rapidity and jet mass plots
in MC_JetAnalysis derivatives.
* Moving MC_JetAnalysis base class into a new
libRivetAnalysisTools library, with analysis base class and helper
headers to be stored in the reinstated Rivet/Analyses include
directory.
2010-06-08 Andy Buckley <andy@insectnation.org>
* Removing check for CEDARSTD #define guard, since we no longer
compile against AGILe and don't have to be careful about
duplication.
* Moving crappy closest approach and decay significance functions
from Utils into SVertex, which is the only place they have ever
been used (and is itself almost entirely pointless).
* Overhauling particle ID <-> name system to clear up ambiguities
between enums, ints, particles and beams. There are no more enums,
although the names are still available as const static ints, and
names are now obtained via a singleton class which wraps an STL
map for name/ID lookups in both directions.
2010-05-18 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Fixing factor-of-2 bug in the error calculation when scaling
histograms.
* Fixing D0_2001_S4674421 analysis.
2010-05-11 Andy Buckley <andy@insectnation.org>
* Replacing TotalVisibleMomentum with MissingMomentum in analyses
and WFinder. Using vector ET rather than scalar ET in some places.
2010-05-07 Andy Buckley <andy@insectnation.org>
* Revamping the AnalysisHandler constructor and data writing, with
some LWH/AIDA mangling to bypass the stupid AIDA idea of having to
specify the sole output file and format when making the data
tree. Preferred AnalysisHandler constructor now takes only one arg
-- the runname -- and there is a new AH.writeData(outfile) method
to replace AH.commitData(). Doing this now to begin migration to
more flexible histogramming in the long term.
2010-04-21 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Fixing LaTeX problems (make-plots) on ancient machines, like lxplus.
2010-04-29 Andy Buckley <andy@insectnation.org>
* Fixing (I hope!) the treatment of weighted profile bin errors in LWH.
2010-04-21 Andy Buckley <andy@insectnation.org>
* Removing defunct and unused KtJets and Configuration classes.
* Hiding some internal details from Doxygen.
* Add @brief Doxygen comments to all analyses, projections and
core classes which were missing them.
2010-04-21 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* remove obsolete reference to InitialQuarks from DELPHI_2002
* fix normalisation in CDF_2000_S4155203
2010-04-20 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* bin/make-plots: real support for 2-dim histograms plotted as
colormaps, updated the documentation accordingly.
* fix misleading help comment in configure.ac
2010-04-08 Andy Buckley <andy@insectnation.org>
* bin/root2flat: Adding this little helper script, minimally
modified from one which Holger Schulz made for internal use in
ATLAS.
2010-04-05 Andy Buckley <andy@insectnation.org>
* Using spiresbib in rivet-mkanalysis: analysis templates made
with rivet-mkanalysis will now contain a SPIRES-dumped BibTeX key
and entry if possible!
* Adding BibKey and BibTeX entries to analysis metadata files, and
updating doc build to use them rather than the time-consuming
SPIRES screen-scraping. Added SPIRES BibTeX dumps to all analysis
metadata using new (uninstalled & unpackaged) doc/get-spires-data
script hack.
* Updating metadata files to add Energies, Beams and PtCuts
entries to all of them.
* Adding ToDo, NeedsCrossSection, and better treatment of Beams
and Energies entries in metadata files and in AnalysisInfo and
Analysis interfaces.
2010-04-03 Andy Buckley <andy@insectnation.org>
* Frank Siegert: Update of rivet-mkhtml to conform to improved
compare-histos.
* Frank Siegert: LWH output in precision-8 scientific notation, to
solve a binning precision problem... the first time we've noticed a
problem!
* Improved treatment of data/reference datasets and labels in
compare-histos.
* Rewrite of rivet-mkanalysis in Python to make way for neat
additions.
* Improving SWIG tests, since once again the user's build system
must include SWIG (no test to check that it's a 'good SWIG', since
the meaning of that depends on which compiler is being used and we
hope that the user system is consistent... evidence from Finkified
Macs and bloody SLC5 notwithstanding).
2010-03-23 Andy Buckley <andy@insectnation.org>
* Tag as patch release 1.2.1.
2010-03-22 Andy Buckley <andy@insectnation.org>
* General tidying of return arguments and intentionally unused
parameters to keep -Wextra happy (some complaints remain from
TinyXML, FastJet, and HepMC).
* Some extra bug fixes: in FastJets projection with explicit
plugin argument, removing muon veto cut on FoxWolframMoments.
* Adding UNUSED macro to help with places where compiler warnings
can't be helped.
* Turning on -Wextra warnings, and fixing some violations.
2010-03-21 Andy Buckley <andy@insectnation.org>
* Adding MissingMomentum projection, as replacement for ~all uses
of now-deprecated TotalVisibleMomentum projection.
* Fixing bug with TotalVisibleMomentum projection usage in MC_SUSY
analysis.
* Frank Siegert fixed major bug in pTmin param passing to FastJets
projection. D'oh: requires patch release.
2010-03-02 Andy Buckley <andy@insectnation.org>
* Tagging for 1.2.0 release... at last!
2010-03-01 Andy Buckley <andy@insectnation.org>
* Updates to manual, manual generation scripts, analysis info etc.
* Add HepData URL to metadata print-out with rivet --show-analysis
* Fix average Et plot in UA1 analysis to only apply to the tracker
acceptance (but to include neutral particle contributions,
i.e. the region of the calorimeter in the tracker acceptance).
* Use Et rather than pT in filling the scalar Et measure in
TotalVisibleMomentum.
* Fixes to UA1 normalisation (which is rather funny in the paper).
2010-02-26 Andy Buckley <andy@insectnation.org>
* Update WFinder to not place cuts and other restrictions on the
neutrino.
2010-02-11 Andy Buckley <andy@insectnation.org>
* Change analysis loader behaviour to use ONLY RIVET_ANALYSIS_PATH
locations if set, otherwise use ONLY the standard Rivet analysis
install path. Should only impact users of personal plugin
analyses, who should now explicitly set RIVET_ANALYSIS_PATH to
load their analysis... and who can now create personal versions of
standard analyses without getting an error message about duplicate
loading.
2010-01-15 Andy Buckley <andy@insectnation.org>
* Add tests for "stable" heavy flavour hadrons in jets (rather
than just testing for c/b hadrons in the ancestor lists of stable
jet constituents)
2009-12-23 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* New option "RatioPlotMode=deviation" in make-plots.
2009-12-14 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* New option "MainPlot" in make-plots. For people who only want
the ratio plot and nothing else.
* New option "ConnectGaps" in make-plots. Set to 1 if you
want to connect gaps in histograms with a line when ErrorBars=0.
Works both in PLOT and in HISTOGRAM sections.
* Eliminated global variables for coordinates in make-plots and
enabled multithreading.
2009-12-14 Andy Buckley <andy@insectnation.org>
* AnalysisHandler::execute now calls AnalysisHandler::init(event)
if it has not yet been initialised.
* Adding more beam configuration features to Beam and
AnalysisHandler: the setRunBeams(...) methods on the latter now
allows a beam configuration for the run to be specified without
using the Run class.
2009-12-11 Andy Buckley <andy@insectnation.org>
* Removing use of PVertex from few remaining analyses. Still used
by SVertex, which is itself hardly used and could maybe be
removed...
2009-12-10 Andy Buckley <andy@insectnation.org>
* Updating JADE_OPAL to do the histo booking in init(), since
sqrtS() is now available at that stage.
* Renaming and slightly re-engineering all MC_*_* analyses to not
be collider-specific (now the Analysis::sqrtS/beams()) methods
mean that histograms can be dynamically binned.
* Creating RivetUnvalidated.so plugin library for unvalidated
analyses. Unvalidated analyses now need to be explicitly enabled
with a --enable-unvalidated flag on the configure script.
* Various min bias analyses updated and validated.
2009-12-10 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Propagate SPECIAL and HISTOGRAM sections from .plot files
through compare-histos
* STAR_2006_S6860818: <pT> vs particle mass, validate analysis
2009-12-04 Andy Buckley <andy@insectnation.org>
* Use scaling rather than normalising in DELPHI_1996: this is
generally desirable, since normalising to 1 for 1/sig dsig/dx
observables isn't correct if any events fall outside the histo
bounds.
* Many fixes to OPAL_2004.
* Improved Hemispheres interface to remove unnecessary consts on
returned doubles, and to also return non-squared versions
of (scaled) hemisphere masses.
* Add "make pyclean" make target at the top level to make it
easier for developers to clean their Python module build when the
API is extended.
* Identify use of unvalidated analyses with a warning message at
runtime.
* Providing Analysis::sqrtS() and Analysis::beams(), and making
sure they're available by the time the init methods are called.
2009-12-02 Andy Buckley <andy@insectnation.org>
* Adding passing of first event sqrt(s) and beams to analysis handler.
* Restructuring running to only use one HepMC input file (no-one
was using multiple ones, right?) and to break down the Run class
to cleanly separate the init and event loop phases. End of file is
now neater.
2009-12-01 Andy Buckley <andy@insectnation.org>
* Adding parsing of beam types and pairs of energies from YAML.
2009-12-01 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Fixing trigger efficiency in CDF_2009_S8233977
2009-11-30 Andy Buckley <andy@insectnation.org>
* Using shared pointers to make I/O object memory management
neater and less error-prone.
* Adding crossSectionPerEvent() method [==
crossSection()/sumOfWeights()] to Analysis. Useful for histogram
scaling since numerator of sumW_passed/sumW_total (to calculate
pass-cuts xsec) is cancelled by dividing histo by sumW_passed.
* Clean-up of Particle class and provision of inline PID::
functions which take a Particle as an argument to avoid having to
explicitly call the Particle::pdgId() method.
2009-11-30 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Fixing division by zero in Profile1D bin errors for
bins with just a single entry.
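The single-entry problem above arises because a profile bin's error involves dividing by something like N_eff - 1, which vanishes for one entry. A guard of the following shape illustrates the fix (a sketch with assumed moment inputs and error convention, not the actual Profile1D code):

```cpp
#include <cmath>

// Error on a profile bin's mean y-value from the bin's weighted moments.
// For a bin holding effectively a single entry the spread is undefined
// and the naive formula divides by zero, so report zero error instead.
inline double profileBinError(double sumW, double sumW2,
                              double sumWY, double sumWY2) {
  const double nEff = sumW * sumW / sumW2;          // effective entry count
  if (nEff <= 1.0) return 0.0;                      // single-entry guard
  const double mean = sumWY / sumW;
  const double var  = sumWY2 / sumW - mean * mean;  // weighted variance of y
  return std::sqrt(var > 0 ? var : 0.0) / std::sqrt(nEff - 1.0);
}
```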
2009-11-24 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* First working version of STAR_2006_S6860818
2009-11-23 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Adding missing CDF_2001_S4751469 plots to uemerge
* New "ShowZero" option in make-plots
* Improving lots of plot defaults
* Fixing typos / non-existing bins in CDF_2001_S4751469 and
CDF_2004_S5839831 reference data
2009-11-19 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Fixing our compare() for doubles.
2009-11-17 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Zeroth version of STAR_2006_S6860818 analysis (identified
strange particles). Not working yet for unstable particles.
2009-11-11 Andy Buckley <andy@insectnation.org>
* Adding separate jet-oriented and photon-oriented observables to
MC PHOTONJETUE analysis.
* Bug fix in MC leading jets analysis, and general tidying of
leading jet analyses to insert units, etc. (should not affect any
current results)
2009-11-10 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Fixing last issues in STAR_2006_S6500200 and setting it to
VALIDATED.
* Normalise STAR_2006_S6870392 to cross-section
2009-11-09 Andy Buckley <andy@insectnation.org>
* Overhaul of jet caching and ParticleBase interface.
* Adding lists of analyses' histograms (obtained by scanning the
plot info files) to the LaTeX documentation.
2009-11-07 Andy Buckley <andy@insectnation.org>
* Adding checking system to ensure that Projections aren't
registered before the init phase of analyses.
* Now that the ProjHandler isn't full of defunct pointers (which
tend to coincidentally point to *new* Projection pointers rather
than undefined memory, hence it wasn't noticed until recently!),
use of a duplicate projection name is now banned with a helpful
message at runtime.
* (Huge) overhaul of ProjectionHandler system to use shared_ptr:
projections are now deleted much more efficiently, naturally
cleaning themselves out of the central repository as they go out
of scope.
2009-11-06 Andy Buckley <andy@insectnation.org>
* Adding Cmp<double> specialisation, using fuzzyEquals().
2009-11-05 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Fixing histogram division code.
2009-11-04 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* New analysis STAR_2006_S6500200 (pion and proton pT spectra in
pp collisions at 200 GeV). It is still unclear if they used a cut
in rapidity or pseudorapidity, thus the analysis is declared
"UNDER DEVELOPMENT" and "DO NOT USE".
* Fixing compare() in NeutralFinalState and MergedFinalState
2009-11-04 Andy Buckley <andy@insectnation.org>
* Adding consistence checking on beam ID and sqrt(s) vs. those
from first event.
2009-11-03 Andy Buckley <andy@insectnation.org>
* Adding more assertion checks to linear algebra testing.
2009-11-02 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Fixing normalisation issue with stacked histograms in
make-plots.
2009-10-30 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* CDF_2009_S8233977: Updating data and axes labels to match final
paper. Normalise to cross-section instead of data.
2009-10-23 Andy Buckley <andy@insectnation.org>
* Fixing Cheese-3 plot in CDF 2004... at last!
2009-10-23 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Fix muon veto in CDF_1994_S2952106, CDF_2005_S6217184,
CDF_2008_S7782535, and D0_2004_S5992206
2009-10-19 Andy Buckley <andy@insectnation.org>
* Adding analysis info files for MC SUSY and PHOTONJETUE analyses.
* Adding MC UE analysis in photon+jet events.
2009-10-19 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Adding new NeutralFinalState projection. Note that this final
state takes E_T instead of p_T as argument (makes more sense for
neutral particles). The compare() method does not yet work as
expected (E_T comparison still missing).
* Adding new MergedFinalState projection. This merges two final
states, removing duplicate particles. Duplicates are identified by
looking at the genParticle(), so users need to take care of any
manually added particles themselves.
* Fixing most open issues with the STAR_2009_UE_HELEN analysis.
There is only one question left, regarding the away region.
* Set the default split-merge value for SISCone in our FastJets
projection to the recommended (but not Fastjet-default!) value of
0.75.
2009-10-17 Andy Buckley <andy@insectnation.org>
* Adding parsing of units in cross-sections passed to the "-x"
flag, i.e. "-x 101 mb" is parsed internally into 1.01e11 pb.
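The unit arithmetic above (1 mb = 1e9 pb, so "101 mb" becomes 1.01e11 pb) can be sketched with a small standalone parser; the function name and the fallback behaviour here are illustrative, not the actual Rivet implementation:

```cpp
#include <cmath>
#include <map>
#include <sstream>
#include <string>

// Parse a cross-section string like "101 mb" into picobarns.
inline double xsecToPb(const std::string& s) {
  static const std::map<std::string, double> toPb = {
    {"fb", 1e-3}, {"pb", 1.0}, {"nb", 1e3},
    {"ub", 1e6},  {"mb", 1e9}, {"b",  1e12}
  };
  std::istringstream iss(s);
  double value = 0.0;
  std::string unit;
  iss >> value >> unit;
  if (unit.empty()) return value;  // no unit given: assume pb
  const auto it = toPb.find(unit);
  return it != toPb.end() ? value * it->second : NAN;  // unknown unit
}
```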
2009-10-16 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Disabling DELPHI_2003_WUD_03_11 in the Makefiles, since I don't
trust the data.
* Getting STAR_2009_UE_HELEN to work.
2009-10-04 Andy Buckley <andy@insectnation.org>
* Adding triggers and other tidying to (still unvalidated)
UA1_1990 analysis.
* Fixing definition of UA5 trigger to not be intrinsically
different for pp and ppbar: this is corrected for (although it
takes some reading to work this out) in the 1982 paper, which I
think is the only one to compare the two modes.
* Moving projection setup and registration into init() method for
remaining analyses.
* Adding trigger implementations as projections for CDF Runs 0 &
1, and for UA5.
2009-10-01 Andy Buckley <andy.buckley@cern.ch>
* Moving projection setup and registration into init() method for
analyses from ALEPH, CDF and the MC_ group.
* Adding generic SUSY validation analysis, based on plots used in
ATLAS Herwig++ validation.
* Adding sorted particle accessors to FinalState (cf. JetAlg).
2009-09-29 Andy Buckley <andy@insectnation.org>
* Adding optional use of args as regex match expressions with
-l/--list-analyses.
2009-09-03 Andy Buckley <andy.buckley@cern.ch>
* Passing GSL include path to compiler, since its absence was
breaking builds on systems with no GSL installation in a standard
location (such as SLC5, for some mysterious reason!)
* Removing lib extension passing to compiler from the configure
script, because Macs and Linux now both use .so extensions for the
plugin analysis modules.
2009-09-02 Andy Buckley <andy@insectnation.org>
* Adding analysis info file path search with RIVET_DATA_PATH
variable (and using this to fix doc build.)
* Improvements to AnalysisLoader path search.
* Moving analysis sources back into single directory, after a
proletarian uprising ;)
2009-09-01 Andy Buckley <andy@insectnation.org>
* Adding WFinder and WAnalysis, based on Z proj and analysis, with
some tidying of the Z code.
* ClusteredPhotons now uses an IdentifiedFS to pick the photons to
be looped over, and only clusters photons around *charged* signal
particles.
2009-08-31 Andy Buckley <andy@insectnation.org>
* Splitting analyses by directory, to make it easier to disable
building of particular analysis group plugin libs.
* Removing/merging headers for all analyses except for the special
MC_JetAnalysis base class.
* Exit with an error message if addProjection is used twice from
the same parent with distinct projections.
2009-08-28 Andy Buckley <andy@insectnation.org>
* Changed naming convention for analysis plugin libraries, since
the loader has changed so much: they must now *start* with the
word "Rivet" (i.e. no lib prefix).
* Split standard plugin analyses into several plugin libraries:
these will eventually move into separate subdirs for extra build
convenience.
* Started merging analysis headers into the source files, now that
we can (the plugin hooks previously forbade this).
* Replacement of analysis loader system with a new one based on
ideas from ThePEG, which uses dlopen-time instantiation of
templated global variables to reduce boilerplate plugin hooks to
one line in analyses.
2009-07-14 Frank Siegert <frank.siegert@durham.ac.uk>
* Replacing in-source histo-booking metadata with .plot files.
2009-07-14 Andy Buckley <andy@insectnation.org>
* Making Python wrapper files copy into place based on bundled
versions for each active HepMC interface (2.3, 2.4 & 2.5), using a
new HepMC version detector test in configure.
* Adding YAML metadata files and parser, removing same metadata
from the analysis classes' source headers.
2009-07-07 Andy Buckley <andy@insectnation.org>
* Adding Jet::hadronicEnergy()
* Adding VisibleFinalState and automatically using it in JetAlg
projections.
* Adding YAML parser for new metadata (and eventually ref data)
files.
2009-07-02 Andy Buckley <andy@insectnation.org>
* Adding Jet::neutralEnergy() (and Jet::totalEnergy() for
convenience/symmetry).
2009-06-25 Andy Buckley <andy@insectnation.org>
* Tidying and small efficiency improvements in CDF_2008_S7541902
W+jets analysis (remove unneeded second stage of jet storing,
sorting the jets twice, using foreach, etc.).
2009-06-24 Andy Buckley <andy@insectnation.org>
* Fixing Jet's containsBottom and containsCharm methods, since B
hadrons are not necessarily to be found in the final
state. Discovered at the same time that HepMC::GenParticle defines
a massively unhelpful copy constructor that actually loses the
tree information; it would be better to hide it entirely!
* Adding RivetHepMC.hh, which defines container-type accessors to
HepMC particles and vertices, making it possible to use Boost
foreach and hence avoiding the usual huge boilerplate for-loops.
2009-06-11 Andy Buckley <andy@insectnation.org>
* Adding --disable-pdfmanual option, to make the bootstrap a bit
more robust.
* Re-enabling D0IL in FastJets: adding 10^-10 to the pTmin removes
the numerical instability!
* Fixing CDF_2004 min/max cone analysis to use calo jets for the
leading jet Et binning. Thanks to Markus Warsinsky
for (re)discovering this bug: I was sure it had been fixed. I'm
optimistic that this will fix the main distributions, although
Swiss Cheese "minus 3" is still likely to be broken. Early tests
look okay, but it'll take more stats before we can remove the "do
not trust" sign.
2009-06-10 Andy Buckley <andy@insectnation.org>
* Providing "calc" methods so that Thrust and Sphericity
projections can be used as calculators without having to use the
projecting/caching system.
2009-06-09 Andy Buckley <andy@insectnation.org>
* 1.1.3 release!
* More doc building and SWIG robustness tweaks.
2009-06-07 Andy Buckley <andy@insectnation.org>
* Make doc build from metadata work even before the library is
installed.
2009-06-07 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Fix phi rotation in CDF_2008_LEADINGJETS.
2009-06-07 Andy Buckley <andy@insectnation.org>
* Disabling D0 IL midpoint cone (using CDF midpoint instead),
since there seems to be a crashing bug in FastJet's
implementation: we can't release that way, since ~no D0 analyses
will run.
2009-06-03 Andy Buckley <andy@insectnation.org>
* Putting SWIG-generated source files under SVN control to make
life easier for people who we advise to check out the SVN head
version, but who don't have a sufficiently modern copy of SWIG to
regenerate the wrappers themselves.
* Adding the --disable-analyses option, for people who just want
to use Rivet as a framework for their own analyses.
* Enabling HepMC cross-section reading, now that HepMC 2.5.0 has
been released.
2009-05-23 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Using gsl-config to locate libgsl
* Fix the paths for linking such that our own libraries are found
before any system libraries, e.g. for the case that there is an
outdated fastjet version installed on the system while we want to
use our own up-to-date version.
* Change dmerge to ymerge in the e+e- analyses using JADE or
DURHAM from fastjet. That's what it is called in fastjet-2.4 now.
2009-05-18 Andy Buckley <andy@insectnation.org>
* Adding use of gsl-config in configure script.
2009-05-16 Andy Buckley <andy@insectnation.org>
* Removing argument to vetoEvent macro, since no weight
subtraction is now needed. It's now just an annotated return, with
built-in debug log message.
* Adding an "open" FinalState, which is only calculated once per
event, then used by all other FSes, avoiding the loop over
non-status 1 particles.
2009-05-15 Andy Buckley <andy@insectnation.org>
* Removing incorrect setting of DPS x-errs in CDF_2008 jet shape
analysis: the DPS autobooking already gets this bit right.
* Using Jet rather than FastJet::PseudoJet where possible, as it
means that the phi ranges match up nicely between Particle and the
Jet object. The FastJet objects are only really needed if you want
to do detailed things like look at split/merge scales for
e.g. diff jet rates or "y-split" analyses.
* Tidying and debugging CDF jet shape analyses and jet shape
plugin... ongoing, but I think I've found at least one real bug,
plus a lot of stuff that can be done a lot more nicely.
* Fully removing deprecated math functions and updating affected
analyses.
2009-05-14 Andy Buckley <andy@insectnation.org>
* Removing redundant rotation in DISKinematics... this was a
legacy of Peter using theta rather than pi-theta in his rotation.
* Adding convenience phi, rho, eta, theta, and perp,perp2 methods
to the 3 and 4 vector classes.
2009-05-12 Andy Buckley <andy@insectnation.org>
* Adding event auto-rotation for events with one proton... more
complete approach?
2009-05-09 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Renaming CDF_2008_NOTE_9337 to CDF_2009_S8233977.
* Numerous small bug fixes in ALEPH_1996_S3486095.
* Adding data for one of the Rick-Field-style STAR UE analyses.
2009-05-08 Andy Buckley <andy@insectnation.org>
* Adding rivet-mkanalysis script, to make generating new analysis
source templates easier.
2009-05-07 Andy Buckley <andy@insectnation.org>
* Adding null vector check to Vector3::azimuthalAngle().
* Fixing definition of HCM/Breit frames in DISKinematics, and
adding asserts to check that the transformation is doing what it
should.
2009-05-05 Andy Buckley <andy@insectnation.org>
* Removing eta and Et cuts from CDF 2000 Z pT analysis, based on
our reading of the paper, and converting most of the analysis to a
call of the ZFinder projection.
2009-05-05 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Support non-default seed_threshold in CDF cone jet algorithms.
* New analyses STAR_2006_S6870392 and STAR_2008_S7993412. In
STAR_2008_S7993412 only the first distribution is filled at the
moment. STAR_2006_S6870392 is normalised to data instead of the
Monte Carlo cross-section, since we don't have that available in
the HepMC stream yet.
2009-05-05 Andy Buckley <andy@insectnation.org>
* Changing Event wrapper to copy whole GenEvents rather than
pointers, use std units if supported in HepMC, and run a
placeholder function for event auto-orientation.
2009-04-28 Andy Buckley <andy@insectnation.org>
* Removing inclusion of IsolationTools header by analyses that
aren't actually using the isolation tools... which is all of
them. Leaving the isolation tools in place for now, as there might
still be use cases for them and there's quite a lot of code there
that deserves a second chance to be used!
2009-04-24 Andy Buckley <andy@insectnation.org>
* Deleting Rivet implementations of TrackJet and D0ILConeJets: the
code from these has now been incorporated into FastJet 2.4.0.
* Removed all mentions of the FastJet JADE patch and the HAVE_JADE
preprocessor macro.
* Bug fix to D0_2008_S6879055 to ensure that cuts compare to both
electron and positron momenta (was just comparing against
electrons, twice, probably thanks to the miracle of cut and
paste).
* Converting all D0 IL Cone jets to use FastJets. Involved tidying
D0_2004 jet azimuthal decorrelation analysis and D0_2008_S6879055
as part of migration away from using the getLorentzJets method,
and removing the D0ILConeJets header from quite a few analyses
that weren't using it at all.
* Updating CDF 2001 to use FastJets in place of TrackJet, and
adding axis labels to its plots.
* Note that ZEUS_2001_S4815815 uses the wrong jet definition: it
should be a cone but currently uses kT.
* Fixing CDF_2005_S6217184 to use correct (midpoint, R=0.7) jet
definition. That this was using a kT definition with R=1.0 was
only made obvious when the default FastJets constructor was
removed.
* Removing FastJets default constructor: since there are now
several good (IRC safe) jet definitions available, there is no
obvious safe default and analyses should have to specify which
they use.
* Moving FastJets constructors into implementation file to reduce
recompilation dependencies, and adding new plugins.
* Ensuring that axis labels actually get written to the output
data file.
2009-04-22 Andy Buckley <andy@insectnation.org>
* Adding explicit FastJet CDF jet alg overlap_threshold
constructor param values, since the default value from 2.3.x is
now removed in version 2.4.0.
* Removing use of HepMC ThreeVector::mag method (in one place
only) since this has been removed in version 2.5.0b.
* Fix to hepmc.i (included by rivet.i) to ignore new HepMC 2.5.0b
GenEvent stream operator.
2009-04-21 Andy Buckley <andy@insectnation.org>
* Dependency on FastJet now requires version 2.4.0 or later. Jade
algorithm is now native.
* Moving all analysis constructors and Projection headers from the
analysis header files into their .cc implementation files, cutting
header dependencies.
* Removing AIDA headers: now using LWH headers only, with
enhancement to use axis labels. This facility is now used by the
histo booking routines, and calling the booking function versions
which don't specify axis labels will result in a runtime warning.
2009-04-07 Andy Buckley <andy@insectnation.org>
* Adding $(DESTDIR) prefix to call to Python module "setup.py
install"
* Moving HepMC SWIG mappings into Python Rivet module for now:
seems to work around the SL type-mapping bug.
2009-04-03 Andy Buckley <andy@insectnation.org>
* Adding MC analysis for LHC UE: higher-pT replica of Tevatron
2008 leading jets study.
* Adding CDF_1990 pseudorapidity analysis.
* Moving CDF_2001 constructor into implementation file.
* Cleaning up CDF_2008_LEADINGJETS a bit, e.g. using foreach
loops.
* Adding function interface for specifying axis labels in histo
bookings. Currently has no effect, since AIDA doesn't seem to have
a mechanism for axis labels. It really is a piece of crap.
2009-03-18 Andy Buckley <andy@insectnation.org>
* Adding docs "make upload" and other tweaks to make the doc files
fit in with the Rivet website.
* Improving LaTex docs to show email addresses in printable form
and to group analyses by collider or other metadata.
* Adding doc script to include run info in LaTeX docs, and to make
HTML docs.
* Removing WZandh projection, which wasn't generator independent
and whose sole usage was already replaced by ZFinder.
* Improvements to constructors of ZFinder and InvMassFS.
* Changing ExampleTree to use real FS-based Z finding.
2009-03-16 Andy Buckley <andy@insectnation.org>
* Allow the -H histo file spec to give a full name if wanted. If
it doesn't end in the desired extension, it will be added.
* Adding --runname option (and API elements) to choose a run name
to be prepended as a "top level directory" in histo paths. An
empty value results in no extra TLD.
2009-03-06 Andy Buckley <andy@insectnation.org>
* Adding R=0.2 photon clustering to the electrons in the CDF 2000
Z pT analysis.
2009-03-04 Andy Buckley <andy@insectnation.org>
* Fixing use of fastjet-config to not use the user's PATH
variable.
* Fixing SWIG type table for HepMC object interchange.
2009-02-20 Andy Buckley <andy@insectnation.org>
* Adding use of new metadata in command line analysis querying
with the rivet command, and in building the PDF Rivet manual.
* Adding extended metadata methods to the Analysis interface and
the Python wrapper. All standard analyses comply with this new
interface.
2009-02-19 Andy Buckley <andy@insectnation.org>
* Adding usefully-scoped config headers, a Rivet::version()
function which uses them, and installing the generated headers to
fix "external" builds against an installed copy of Rivet. The
version() function has been added to the Python wrapper.
2009-02-05 Andy Buckley <andy@insectnation.org>
* Removing ROOT dependency and linking. Woo! There's no need for
this now, because the front-end accepts no histo format switch and
we'll just use aida2root for output conversions. Simpler this way,
and it avoids about half of our compilation bug reports from 32/64
bit ROOT build confusions.
2009-02-04 Andy Buckley <andy@insectnation.org>
* Adding automatic generation of LaTeX manual entries for the
standard analyses.
2009-01-20 Andy Buckley <andy@insectnation.org>
* Removing RivetGun and TCLAP source files!
2009-01-19 Andy Buckley <andy@insectnation.org>
* Added psyco Python optimiser to rivet, make-plots and
compare-histos.
* bin/aida2root: Added "-" -> "_" mangling, following requests.
2009-01-17 Andy Buckley <andy@insectnation.org>
* 1.1.2 release.
2009-01-15 Andy Buckley <andy@insectnation.org>
* Converting Python build system to bundle SWIG output in tarball.
2009-01-14 Andy Buckley <andy@insectnation.org>
* Converting AIDA/LWH divide function to return a DPS so that bin
width factors don't get all screwed up. Analyses adapted to use
the new division operation (a DPS/DPS divide would also be
nice... but can wait for YODA).
2009-01-06 Andy Buckley <andy@insectnation.org>
* bin/make-plots: Added --png option for making PNG output files,
using 'convert' (after making a PDF --- it's a bit messy)
* bin/make-plots: Added --eps option for output filtering through
ps2eps.
2009-01-05 Andy Buckley <andy@insectnation.org>
* Python: reworking Python extension build to use distutils and
newer m4 Python macros. Probably breaks distcheck but is otherwise
more robust and platform independent (i.e. it should now work on
Macs).
2008-12-19 Andy Buckley <andy@insectnation.org>
* make-plots: Multi-threaded make-plots and cleaned up the LaTeX
building a bit (necessary to remove the implicit single global
state).
2008-12-18 Andy Buckley <andy@insectnation.org>
* make-plots: Made LaTeX run in no-stop mode.
* compare-histos: Updated to use a nicer labelling syntax on the
command line and to successfully build MC-MC plots.
2008-12-16 Andy Buckley <andy@insectnation.org>
* Made LWH bin edge comparisons safe against numerical errors.
* Added Particle comparison functions for sorting.
* Removing most bad things from ExampleTree and tidying up. Marked
WZandh projection for removal.
2008-12-03 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added the two missing observables to the CDF_2008_NOTE_9337 analysis,
i.e. track pT and sum(ET). There is a small difference between our MC
output and the MC plots of the analysis' author; we're still waiting
for the author's comments.
2008-12-02 Andy Buckley <andy@insectnation.org>
* Overloading use of a std::set in the interface, since the
version of SWIG on Sci Linux doesn't have a predefined mapping for
STL sets.
2008-12-02 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Fixed uemerge. The output was seriously broken by a single line
of debug information in fillAbove(). Also changed uemerge output
to exponential notation.
* Unified ref and mc histos in compare-histos. Histos with one
bin are plotted linear. Option for disabling the ratio plot.
Several fixes for labels, legends, output directories, ...
* Changed rivetgun's fallback directory for parameter files to
$PREFIX/share/AGILe, since that's where the steering files now are.
* Running aida2flat in split mode now produces make-plots compatible
dat-files for direct plotting.
2008-11-28 Andy Buckley <andy@insectnation.org>
* Replaced binreloc with an upgraded and symbol-independent copy.
2008-11-25 Andy Buckley <andy@insectnation.org>
* Added searching of $RIVET_REF_PATH for AIDA reference data
files.
2008-11-24 Andy Buckley <andy@insectnation.org>
* Removing "get"s and other obfuscated syntax from
ProjectionApplier (Projection and Analysis) interfaces.
2008-11-21 Andy Buckley <andy@insectnation.org>
* Using new "global" Jet and V4 sorting functors in
TrackJet. Looks like there was a sorting direction problem before...
* Verbose mode with --list-analyses now shows descriptions as well
as analysis names.
* Moved data/Rivet to data/refdata and moved data/RivetGun
contents to AGILe (since generator steering is no longer a Rivet
thing).
* Added unchecked ratio plots to D0 Run II jet + photon analysis.
* Added D0 inclusive photon analysis.
* Added D0 Z rapidity analysis.
* Tidied up constructor interface and projection chain
implementation of InvMassFinalState.
* Added ~complete set of Jet and FourMomentum sorting functors.
2008-11-20 Andy Buckley <andy@insectnation.org>
* Added IdentifiedFinalState.
* Moved a lot of TrackJet and Jet code into .cc files.
* Fixed a caching bug in Jet: cache flag resets should never be
conditional, since they are then sensitive to initialisation
errors.
* Added quark enum values to ParticleName.
* Rationalised JetAlg interfaces somewhat, with "size()" and
"jets()" methods in the interface.
* Added D0 W charge asymmetry and D0 inclusive jets analyses.
2008-11-18 Andy Buckley <andy@insectnation.org>
* Adding D0 inclusive Z pT shape analysis.
* Added D0 Z + jet pT and photon + jet pT spectrum analyses.
* Lots of tidying up of particle, event, particle name etc.
* Now the first event is used to detect the beam type and remove
incompatible analyses.
2008-11-17 Andy Buckley <andy@insectnation.org>
* Added bash completion for rivetgun.
* Starting to provide stand-alone call methods on projections so
they can be used without the caching infrastructure. This could
also be handy for unit testing.
* Adding functionality (sorting function and built-in sorting
schemes) to the JetAlg interface.
2008-11-10 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Fix floating point number output format in aida2flat and flat2aida
* Added CDF_2002_S4796047: CDF Run-I charged multiplicity distribution
* Renamed CDF_2008_MINBIAS to CDF_2008_NOTE_9337, since the
note is publicly available now.
2008-11-10 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added DELPHI_2003_WUD_03_11: Delphi 4-jet angular distributions.
There is still a problem with the analysis, so don't use it yet.
But I need to commit the code because my laptop is broken ...
2008-11-06 Andy Buckley <andy@insectnation.org>
* Code review: lots of tidying up of projections and analyses.
* Fixes for compatibility with the LLVM C & C++ compiler.
* Change of Particle interface to remove "get"-prefixed method
names.
2008-11-05 Andy Buckley <andy@insectnation.org>
* Adding ability to query analysis metadata from the command line.
* Example of a plugin analysis now in plugindemo, with a make check
test to make sure that the plugin analysis is recognised by the
command line "rivet" tool.
* GCC 4.3 fix to mat-vec tests.
2008-11-04 Andy Buckley <andy@insectnation.org>
* Adding native logger control from Python interface.
2008-11-03 Andy Buckley <andy@insectnation.org>
* Adding bash_completion for rivet executable.
2008-10-31 Andy Buckley <andy@insectnation.org>
* Clean-up of histo titles and analysis code review.
* Added momentum construction functions from FastJet PseudoJets.
2008-10-28 Andy Buckley <andy@insectnation.org>
* Auto-booking of histograms with a name, rather than the HepData
ID 3-tuple is now possible.
* Fix in CDF 2001 pT spectra to get the normalisations to depend
on the pT_lead cutoff.
2008-10-23 Andy Buckley <andy@insectnation.org>
* rivet handles signals neatly, as for rivetgun, so that premature
killing of the analysis process will still result in an analysis
file.
* rivet now accepts cross-section as a command line argument and,
if it is missing and is required, will prompt the user for it
interactively.
2008-10-22 Andy Buckley <andy@insectnation.org>
* rivet (Python interface) now can list analyses, check when
adding analyses that the given names are valid, specify histo file
name, and provide sensibly graded event number logging.
2008-10-20 Andy Buckley <andy@insectnation.org>
* Corrections to CDF 2004 analysis based on correspondence with
Joey Huston. Min bias distributions now use the whole event within |eta| < 0.7,
and Cheese plots aren't filled at all if there are insufficient
jets (and the correct ETlead is used).
2008-10-08 Andy Buckley <andy@insectnation.org>
* Added AnalysisHandler::commitData() method, to allow the Python
interface to write out a histo file without having to know
anything about the histogramming API.
* Reduced SWIG interface file to just map a subset of Analysis and
AnalysisHandler functionality. This will be the basis for a new
command line interface.
2008-10-06 Andy Buckley <andy@insectnation.org>
* Converted FastJets plugin to use a Boost shared_pointer to the
cached ClusterSequence. The nullness of the pointer is now used to
indicate an empty tracks (and hence jets) set. Once FastJet
natively supports empty CSeqs, we can rewrite this a bit more
neatly and ditch the shared_ptr.
2008-10-02 Andy Buckley <andy@insectnation.org>
* The CDF_2004 (Acosta) data file now includes the full range of
pT for the min bias data at both 630 and 1800 GeV. Previously,
only the small low-pT insert plot had been entered into HepData.
2008-09-30 Andy Buckley <andy@insectnation.org>
* Lots of updates to CDF_2004 (Acosta) UE analysis, including
sorting jets by E rather than Et, and factorising transverse cone
code into a function so that it can be called with a random
"leading jet" in min bias mode. Min bias histos are now being
trial-filled just with tracks in the transverse cones, since the
paper is very unclear on this.
* Discovered a serious caching problem in FastJets projection when
an empty tracks vector is passed from the
FinalState. Unfortunately, FastJet provides no API way to solve
the problem, so we'll have to report this upstream. For now, we're
solving this for CDF_2004 by explicitly vetoing events with no
tracks.
* Added Doxygen to the build with target "dox"
* Moved detection of whether cross-section information is needed
into the AnalysisHandler, with dynamic computation by scanning
contained analyses.
* Improved robustness of event reading to detect properly when the
input file is smaller than expected.
2008-09-29 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* New analysis: CDF_2000_S4155203
2008-09-23 Andy Buckley <andy@insectnation.org>
* rivetgun can now be built and run without AGILe. Based on a
patch by Frank Siegert.
2008-09-23 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Some preliminary numbers for the CDF_2008_LEADINGJETS analysis
(only the transverse region and not all observables, but all we have for now).
2008-09-17 Andy Buckley <andy@insectnation.org>
* Breaking up the mammoth "generate" function, to make Python
mapping easier, among other reasons.
* Added if-zero-return-zero checking to angle mapping functions,
to avoid problems where 1e-19 gets mapped on to 2 pi and then
fails boundary asserts.
* Added HistoHandler singleton class, which will be a central
repository for holding analyses' histogram objects to be accessed
via a user-chosen name.
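The zero-guard idea in the angle-mapping entry above can be sketched as follows (Python for illustration; the function name and exact guards are assumptions, loosely modelled on Rivet's mapAngle helpers):

```python
import math

def map_angle_0_to_2pi(angle):
    """Map an angle into [0, 2*pi), guarding the degenerate cases.

    Without the guards, a tiny value such as -1e-19 wraps to a result
    that rounds to exactly 2*pi, which then fails boundary asserts.
    """
    if angle == 0.0:
        return 0.0
    mapped = math.fmod(angle, 2 * math.pi)
    if mapped < 0:
        mapped += 2 * math.pi
    # Adding 2*pi to a tiny negative remainder can round to exactly 2*pi
    if mapped == 2 * math.pi:
        mapped = 0.0
    assert 0.0 <= mapped < 2 * math.pi
    return mapped
```
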
2008-08-26 Andy Buckley <andy@insectnation.org>
* Allowing rivet-config to return combined flags.
2008-08-14 Andy Buckley <andy@insectnation.org>
* Fixed some g++ 4.3 compilation bugs, including "vector" not
being a valid name for a method which returns a physics vector,
since it clashes with std::vector (which we globally import). Took
the opportunity to rationalise the Jet interface a bit, since
"particle" was used to mean "FourMomentum", and "Particle" types
required a call to "getFullParticle". I removed the "gets" at the
same time, as part of our gradual migration to a coherent naming
policy.
2008-08-11 Andy Buckley <andy@insectnation.org>
* Tidying of FastJets and added new data files from HepData.
2008-08-10 James Monk <jmonk@hep.ucl.ac.uk>
* FastJets now uses user_index property of fastjet::PseudoJet to
reconstruct PID information in jet contents.
2008-08-07 Andy Buckley <andy@insectnation.org>
* Reworking of param file and command line parsing. Tab characters
are now handled by the parser, in a way equivalent to spaces.
2008-08-06 Andy Buckley <andy@insectnation.org>
* Added extra histos and filling to Acosta analysis - all HepData
histos should now be filled, depending on sqrt{s}. Also trialling
use of LaTeX math mode in titles.
2008-08-05 Andy Buckley <andy@insectnation.org>
* More data files for CDF analyses (2 x 2008, 1 x 1994), and moved
the RivetGun AtlasPythia6.params file to more standard
fpythia-atlas.params (and added to the install list).
2008-08-04 Andy Buckley <andy@insectnation.org>
* Reduced size of available options blocks in RivetGun help text
by removing "~" negating variants (which are hardly ever used in
practice) and restricting beam particles to
PROTON, ANTIPROTON, ELECTRON and POSITRON.
* Fixed Et sorting in Acosta analysis.
2008-08-01 Andy Buckley <andy@insectnation.org>
* Added AIDA headers to the install list, since
external (plugin-type) analyses need them to be present for
compilation to succeed.
2008-07-29 Andy Buckley <andy@insectnation.org>
* Fixed missing ROOT compile flags for libRivet.
* Added command line repetition to logging.
2008-07-29 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Included the missing numbers and three more observables
in the CDF_2008_NOTE_9351 analysis.
2008-07-29 Andy Buckley <andy@insectnation.org>
* Fixed wrong flags on rivet-config
2008-07-28 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Renamed CDF_2008_DRELLYAN to CDF_2008_NOTE_9351. Updated
numbers and cuts to the final version of this CDF note.
2008-07-28 Andy Buckley <andy@insectnation.org>
* Fixed polar angle calculation to use atan2.
* Added "mk" prefixes and x/setX convention to math classes.
2008-07-28 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Fixed definition of FourMomentum::pT (it had been returning pT2)
2008-07-27 Andy Buckley <andy@insectnation.org>
* Added better tests for Boost headers.
* Added testing for -ansi, -pedantic and -Wall compiler flags.
2008-07-25 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* updated DELPHI_2002_069_CONF_603 according to information
from the author
2008-07-17 Andy Buckley <andy@insectnation.org>
* Improvements to aida2flat: now can produce one output file per
histo, and there is a -g "gnuplot mode" which comments out the
YODA/make_plot headers to make the format readable by gnuplot.
* Import boost::assign namespace contents into the Rivet namespace
--- provides very useful intuitive collection initialising
functions.
2008-07-15 Andy Buckley <andy.buckley@dur.ac.uk>
* Fixed missing namespace in vector/matrix testing.
* Removed Boost headers: now a system dependency.
* Fixed polarRadius infinite loop.
2008-07-09 Andy Buckley <andy@insectnation.org>
* Fixed definitions of mapAngleMPiToPi, etc. and used them to fix
the Jet::getPhi method.
* Trialling use of "foreach" loop in CDF_2004: it works! Very nice.
2008-07-08 Andy Buckley <andy@insectnation.org>
* Removed accidental reference to an "FS" projection in
FinalStateHCM's compare method. rivetgun -A now works again.
* Added TASSO, SLD and D0_2008 reference data. The TASSO and SLD
papers aren't installed or included in the tarball since there are
currently no plans to implement these analyses.
* Added Rivet namespacing to vector, matrix etc. classes. This
required some re-writing and the opportunity was taken to move
some canonical function definitions inside the classes and to
improve the header structure of the Math area.
2008-07-07 Andy Buckley <andy@insectnation.org>
* Added Rivet namespace to Units.hh and Constants.hh.
* Added Doxygen "@brief" flags to analyses.
* Added "RIVET_" namespacing to all header guards.
* Merged Giulio Lenzi's isolation/vetoing/invmass projections and
D0 2008 analysis.
2008-06-23 Jon Butterworth <J.Butterworth@ucl.ac.uk>
* Modified FastJet to fix ysplit and split and filter.
* Modified ExampleTree to show how to call them.
2008-06-19 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added first version of the CDF_2008_DRELLYAN analysis described on
http://www-cdf.fnal.gov/physics/new/qcd/abstracts/UEinDY_08.html
There is a small difference between the analysis and this
implementation, but it's too small to be visible.
The fpythia-cdfdrellyan.params parameter file is for this analysis.
* Added first version of the CDF_2008_MINBIAS analysis described on
http://www-cdf.fnal.gov/physics/new/qcd/abstracts/minbias_08.html
The .aida file is read from the plots on the web and will change.
I'm still discussing some open questions about the analysis with
the author.
2008-06-18 Jon Butterworth <J.Butterworth@ucl.ac.uk>
* Added First versions of splitJet and filterJet methods to
fastjet.cc. Not yet tested, buyer beware.
2008-06-18 Andy Buckley <andy@insectnation.org>
* Added extra sorted Jets and Pseudojets methods to FastJets, and
added ptmin argument to the JetAlg getJets() method, requiring a
change to TrackJet.
2008-06-13 Andy Buckley <andy@insectnation.org>
* Fixed processing of "RG" parameters to ensure that invalid
iterators are never used.
2008-06-10 Andy Buckley <andy@insectnation.org>
* Updated AIDA reference files, changing "/HepData" root path to
"/REF". Still missing a couple of reference files due to upstream
problems with the HepData records.
2008-06-09 Andy Buckley <andy@insectnation.org>
* rivetgun now handles termination signals (SIGTERM, SIGINT and
SIGHUP) gracefully, finishing the event loop and finalising
histograms. This means that histograms will always get written
out, even if not all the requested events have been generated.
2008-06-04 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added DELPHI_2002_069_CONF_603 analysis
2008-05-30 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added InitialQuarks projection
* Added OPAL_1998_S3780481 analysis
2008-05-29 Andy Buckley <andy@insectnation.org>
* distcheck compatibility fixes and autotools tweaks.
2008-05-28 Andy Buckley <andy@insectnation.org>
* Converted FastJet to use Boost smart_ptr for its plugin
handling, to solve double-delete errors stemming from the heap
cloning of projections.
* Added (a subset of) Boost headers, particularly the smart
pointers.
2008-05-24 Andy Buckley <andy@insectnation.org>
* Added autopackage spec files.
* Merged these changes into the trunk.
* Added a registerClonedProjection(...) method to
ProjectionHandler: this is needed so that cloned projections will
have valid pointer entries in the ProjectHandler repository.
* Added clone() methods to all projections (need to use this,
since the templated "new PROJ(proj)" approach to cloning can't
handle object polymorphism).
2008-05-19 Andy Buckley <andy@insectnation.org>
* Moved projection-applying functions into ProjectionApplier base
class (from which Projection and Analysis both derive).
* Added Rivet-specific exceptions in place of std::runtime_error.
* Removed unused HepML reference files.
* Added error handling for requested analyses with wrong case
convention / missing name.
2008-05-15 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* New analysis PDG_Hadron_Multiplicities
* flat2aida converter
2008-05-15 Andy Buckley <andy@insectnation.org>
* Removed unused mysterious Perl scripts!
* Added RivetGun.HepMC logging of HepMC event details.
2008-05-14 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* New analysis DELPHI_1995_S3137023. This analysis contains
the xp spectra of Xi+- and Sigma(1385)+-.
2008-05-13 Andy Buckley <andy@insectnation.org>
* Improved logging interface: log levels are now integers (for
cross-library compatibility), and level setting also applies to
existing loggers.
2008-05-09 Andy Buckley <andy@insectnation.org>
* Improvements to robustness of ROOT checks.
* Added --version flag on config scripts and rivetgun.
2008-05-06 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* New UnstableFinalState projection which selects all hadrons,
leptons and real photons including unstable particles.
* In the DELPHI_1996_S3430090 analysis the multiplicities for
pi+/pi- and pi0 are filled, using the UnstableFinalState projection.
2008-05-06 Andy Buckley <andy@insectnation.org>
* FastJets projection now protects against the case where no
particles exist in the final state (where FastJet throws an
exception).
* AIDA file writing is now separated from the
AnalysisHandler::finalize method... API users can choose what to
do with the histo objects, be that writing out or further
processing.
2008-04-29 Andy Buckley <andy@insectnation.org>
* Increased default tolerances in floating point comparisons as
they were overly stringent and valid f.p. precision errors were
being treated as significant.
* Implemented remainder of Acosta UE analysis.
* Added proper getEtSum() to Jet.
* Added Et2() member and function to FourMomentum.
* Added aida2flat conversion script.
* Fixed ambiguity in TrackJet algorithm as to how the iteration
continues when tracks are merged into jets in the inner loop.
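The relative-tolerance comparison described in the first item above can be sketched like this (Python for illustration; Rivet's actual helper is C++ and the default tolerance here is an assumption):

```python
def fuzzy_equals(a, b, tolerance=1e-5):
    """Compare two floats up to a relative tolerance.

    A relative (rather than absolute) tolerance avoids treating ordinary
    floating-point rounding as significant, which is the failure mode the
    overly-stringent defaults produced.
    """
    if a == 0 and b == 0:
        return True
    absavg = 0.5 * (abs(a) + abs(b))
    return abs(a - b) < tolerance * absavg
```
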
2008-04-28 Andy Buckley <andy@insectnation.org>
* Merged in major "ProjectionHandler" branch. Projections are now
all stored centrally in the singleton ProjectionHandler container,
rather than as member pointers in projections and analyses. This
also affects the applyProjection mechanism, which is now available
as a templated method on Analysis and Projection. Still a few
wrinkles need to be worked out.
* The branch changes required a comprehensive review of all
existing projections and analyses: lots of tidying up of these
classes, as well as the auxiliary code like math utils, has taken
place. Too much to list and track, unfortunately!
2008-03-28 Andy Buckley <buckley@pc54.hep.ucl.ac.uk>
* Started second CDF UE analysis ("Acosta"): histograms defined.
* Fixed anomalous factor of 2 in LWH conversion from Profile1D
to DataPointSet.
* Added pT distribution histos to CDF 2001 UE analysis.
2008-03-26 Andy Buckley <andy@insectnation.org>
* Removed charged+neutral versions of histograms and projections
from DELPHI analysis since they just duplicate the more robust
charged-only measurements and aren't really of interest for
tuning.
2008-03-10 Andy Buckley <andy@insectnation.org>
* Profile histograms now use error computation with proper
weighting, as described here:
http://en.wikipedia.org/wiki/Weighted_average
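One way to realise the weighted error computation above is the effective-entries form (a sketch; the entry only says "proper weighting", so the exact formula is an assumption):

```python
import math

def weighted_profile_bin(values, weights):
    """Weighted mean and its standard error for one profile-histogram bin.

    Uses N_eff = (sum w)^2 / sum w^2 to propagate non-unit weights, which
    reduces to the ordinary standard error when all weights are equal.
    """
    sumw = sum(weights)
    sumw2 = sum(w * w for w in weights)
    mean = sum(w * y for w, y in zip(weights, values)) / sumw
    var = sum(w * (y - mean) ** 2 for w, y in zip(weights, values)) / sumw
    n_eff = sumw ** 2 / sumw2
    return mean, math.sqrt(var / n_eff)
```
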
2008-02-28 Andy Buckley <andy@insectnation.org>
* Added --enable-jade flag for Professor studies with patched
FastJet.
* Minor fixes to LCG tag generator and gfilt m4 macros.
* Fixed projection slicing issues with Field UE analysis.
* Added Analysis::vetoEvent(e) function, which keeps track of the
correction to the sum of weights due to event vetoing in analysis
classes.
2008-02-26 Andy Buckley <andy@insectnation.org>
* Vector<N> and derived classes now initialise to have zeroed
components when the no-arg constructor is used.
* Added Analysis::scale() function to scale 1D
histograms. Analysis::normalize() uses it internally, and the
DELPHI (A)EEC, whose histo weights are not pure event weights, is
normalised using scale(h, 1/sumEventWeights).
2008-02-21 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Added EEC and AEEC to the DELPHI_1996_S3430090 analysis. The
normalisation of these histograms is still broken (ticket #163).
2008-02-19 Hendrik Hoeth <hendrik.hoeth@cern.ch>
* Many fixes to the DELPHI_1996_S3430090 analysis: bugfix in the
calculation of eigenvalues/eigenvectors in MatrixDiag.hh for the
sphericity, rewrite of Thrust/Major/Minor, fixed scaled momentum,
hemisphere masses, normalisation in single particle events,
final state slicing problems in the projections for Thrust,
Sphericity and Hemispheres.
2008-02-08 Andy Buckley <andy@insectnation.org>
* Applied fixes and extensions to DIS classes, based on
submissions by Dan Traynor.
2008-02-06 Andy Buckley <andy@insectnation.org>
* Made projection pointers used for cut combining into const
pointers. Required some redefinition of the Projection* comparison
operator.
* Temporarily added FinalState member to ChargedFinalState to stop
projection lifetime crash.
2008-02-01 Andy Buckley <andy@insectnation.org>
* Fixed another misplaced factor of bin width in the
Analysis::normalize() method.
2008-01-30 Andy Buckley <andy@insectnation.org>
* Fixed the conversion of IHistogram1D to DPS, both via the
explicit Analysis::normalize() method and the implicit
AnalysisHandler::treeNormalize() route. The root of the problem is
the AIDA choice of the word "height" to represent the sum of
weights in a bin: i.e. the bin width is not taken into account
either in computing bin height or error.
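Numerically, the "height = sum of weights" convention means the conversion to a differential data point must divide out the bin width explicitly. A minimal sketch (hypothetical helper, not the LWH code):

```python
def bin_to_dps_point(sumw, sumw2, width):
    """Convert a bin whose 'height' is the raw sum of weights into a
    width-normalised (differential) value and error."""
    value = sumw / width           # height is NOT width-normalised
    error = sumw2 ** 0.5 / width   # weight-based error, also per unit width
    return value, error
```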
2008-01-22 Andy Buckley <andy@insectnation.org>
* Beam projection now uses HepMC GenEvent::beam_particles() method
to get the beam particles. This is more portable and robust for
C++ generators, and equivalent to the existing "first two" method
for Fortran generators.
2008-01-17 Andy Buckley <andy@insectnation.org>
* Added angle range fix to pseudorapidity function (thanks to
Piergiulio Lenzi).
2008-01-10 Andy Buckley <andy@insectnation.org>
* Changed autobooking plot codes to use zero-padding (gets the
order right in JAS, file browser, ROOT etc.). Also changed the
'ds' part to 'd' for consistency. HepData's AIDA output has been
correspondingly updated, as have the bundled data files.
2008-01-04 Andy Buckley <andy@insectnation.org>
* Tidied up JetShape projection a bit, including making the
constructor params const references. This seems to have sorted the
runtime segfault in the CDF_2005 analysis.
* Added caching of the analysis bin edges from the AIDA file -
each analysis object will now only read its reference file once,
which massively speeds up the rivetgun startup time for analyses
with large numbers of autobooked histos (e.g. the
DELPHI_1996_S3430090 analysis).
2008-01-02 Andy Buckley <andy@insectnation.org>
* CDF_2001_S4751469 now uses the LossyFinalState projection, with
an 8% loss rate.
* Added LossyFinalState and HadronicFinalState, and fixed a
"polarity" bug in the charged final state projection (it was
keeping only the *uncharged* particles).
* Now using isatty(1) to determine whether or not color escapes
can be used. Also removed --color argument, since it can't have an
effect (TCLAP doesn't do position-based flag toggling).
* Made Python extension build optional (and disabled by default).
2008-01-01 Andy Buckley <andy@insectnation.org>
* Removed some unwanted DEBUG statements, and lowered the level of
some infrastructure DEBUGs to TRACE level.
* Added bash color escapes to the logger system.
2007-12-21 Leif Lonnblad <Leif.Lonnblad@thep.lu.se>
* include/LWH/ManagedObject.h: Fixed infinite loop in
encodeForXML cf. ticket #135.
2007-12-20 Andy Buckley <andy@insectnation.org>
* Removed HepPID, HepPDT and Boost dependencies.
* Fixed XML entity encoding in LWH. Updated CDF_2007_S7057202
analysis to not do its own XML encoding of titles.
2007-12-19 Andy Buckley <andy@insectnation.org>
* Changed units header to set GeV = 1 (HepMC convention) and using
units in CDF UE analysis.
2007-12-15 Andy Buckley <andy@insectnation.org>
* Introduced analysis metadata methods for all analyses (and made
them part of the Analysis interface).
2007-12-11 Andy Buckley <andy@insectnation.org>
* Added JetAlg base projection for TrackJet, FastJet etc.
2007-12-06 Andy Buckley <andy@insectnation.org>
* Added checking for Boost library, and the standard Boost test
program for shared_ptr.
* Got basic Python interface running - required some tweaking
since Python and Rivet's uses of dlopen collide (another
RTLD_GLOBAL issue - see
http://muttley.hates-software.com/2006/01/25/c37456e6.html )
2007-12-05 Andy Buckley <andy@insectnation.org>
* Replaced all use of KtJets projection with FastJets
projection. KtJets projection disabled but left undeleted for
now. CLHEP and KtJet libraries removed from configure searches and
Makefile flags.
2007-12-04 Andy Buckley <andy@insectnation.org>
* Param file loading now falls back to the share/RivetGun
directory if a local file can't be found and the provided name has
no directory separators in it.
* Converted TrackJet projection to update the jet centroid with
each particle added, using pT weighting in the eta and phi
averaging.
2007-12-03 Andy Buckley <andy@insectnation.org>
* Merged all command line handling functions into one large parse
function, since only one executable now needs them. This removes a
few awkward memory leaks.
* Removed rivet executable - HepMC file reading functionality will
move into rivetgun.
* Now using HepMC IO_GenEvent format (IO_Ascii and
IO_ExtendedAscii are deprecated). Now requires HepMC >= 2.3.0.
* Added forward declarations of GSL diagonalisation routines,
eliminating need for GSL headers to be installed on build machine.
2007-11-27 Andy Buckley <andy@insectnation.org>
* Removed charge differentiation from Multiplicity projection (use
CFS proj) and updated ExampleAnalysis to produce more useful numbers.
* Introduced binreloc for runtime path determination.
* Fixed several bugs in FinalState, ChargedFinalState, TrackJet
and Field analysis.
* Completed move to new analysis naming scheme.
2007-11-26 Andy Buckley <andy@insectnation.org>
* Removed conditional HAVE_FASTJET bits: FastJet is now compulsory.
* Merging appropriate RivetGun parts into Rivet. RivetGun currently broken.
2007-11-23 Andy Buckley <andy@insectnation.org>
* Renaming analyses to Spires-ID scheme: currently of form
S<SpiresID>, to become <Expt>_<YYYY>_<SpiresID>.
2007-11-20 Andy Buckley <andy@insectnation.org>
* Merged replacement vectors, matrices and boosts into trunk.
2007-11-15 Leif Lonnblad <Leif.Lonnblad@thep.lu.se>
* src/Analysis.cc, include/Rivet/Analysis.hh: Introduced normalize
function. See ticket #126.
2007-10-31 Andy Buckley <andy@insectnation.org>
* Tagging as 1.0b2 for HERA-LHC meeting.
2007-10-25 Andy Buckley <andy@insectnation.org>
* Added AxesDefinition base interface to Sphericity and Thrust,
used by Hemispheres.
* Exposed BinaryCut class, improved its interface and fixed a few
bugs. It's now used by VetoedFinalState for momentum cuts.
* Removed extra output from autobooking AIDA reader.
* Added automatic DPS booking.
2007-10-12 Andy Buckley <andy@insectnation.org>
* Improved a few features of the build system
2007-10-09 James Monk
* Fixed dylib dlopen on Mac OS X.
2007-10-05 Andy Buckley <andy@insectnation.org>
* Added new reference files.
2007-10-03 Andy Buckley <andy@insectnation.org>
* Fixed bug in configure.ac which led to explicit CXX setting
being ignored.
* Including Logging.hh in Projection.hh, hence new transitive
dependency on Logging.hh being installed. Since this is the normal
behaviour, I don't think this is a problem.
* Fixed segfaulting bug due to use of addProjection() in
locally-scoped contained projections. This isn't a proper fix,
since the whole framework should be designed to avoid the
possibility of bugs like this.
* Added newly built HepML and AIDA reference files for current
analyses.
2007-10-02 Andy Buckley <andy@insectnation.org>
* Fixed possible null-pointer dereference in Particle copy
constructor and copy assignment: this removes one of two blocker
segfaults, the other of which is related to the copy-assignment of
the TotalVisMomentum projection in the ExampleTree analysis.
2007-10-01 Andy Buckley <andy@insectnation.org>
* Fixed portable path to Rivet share directory.
2007-09-28 Andy Buckley <andy@insectnation.org>
* Added more functionality to the rivet-config script: now has
libdir, includedir, cppflags, ldflags and ldlibs options.
2007-09-26 Andy Buckley <andy@insectnation.org>
* Added the analysis library closer function to the
AnalysisHandler finalize() method, and also moved the analysis
delete loop into AnalysisHandler::finalize() so as not to try
deleting objects whose libraries have already closed.
* Replaced the RivetPaths.cc.in method for portable paths with
something using -D defines - much simpler!
2007-09-21 Lars Sonnenschein <sonne@mail.cern.ch>
* Added HepEx0505013 analysis and JetShape projection (some fixes
by AB.)
* Added GetLorentzJets member function to D0 RunII cone jet projection
2007-09-21 Andy Buckley <andy@insectnation.org>
* Fixed lots of bugs and bad practice in HepEx0505013 (to make it
compile-able!)
* Downclassed the log messages from the Test analysis to DEBUG
level.
* Added isEmpty() method to final state projection.
* Added testing for empty final state and useful debug log
messages to sphericity projection.
2007-09-20 Andy Buckley <andy@insectnation.org>
* Added Hemispheres projection, which calculates event hemisphere
masses and broadenings.
2007-09-19 Andy Buckley <andy@insectnation.org>
* Added an explicit copy assignment operator to Particle: the
absence of one of these was responsible for the double-delete
error.
* Added a "fuzzy equals" utility function for float/double types
to Utils.hh (which already contains a variety of handy little
functions).
* Removed deprecated Beam::operator().
* Added ChargedFinalState projection and de-pointered the
contained FinalState projection in VetoedFinalState.
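The "fuzzy equals" utility mentioned in this entry compares floats up to a relative tolerance. A minimal sketch of the idea (not the exact Utils.hh code):

```python
def fuzzy_equals(a, b, tolerance=1e-6):
    """Relative-tolerance float comparison, with an exact-zero
    shortcut so that 0 == 0 still compares equal."""
    if a == 0.0 and b == 0.0:
        return True
    # |a - b| must be small relative to the magnitude of the inputs
    return abs(a - b) <= tolerance * max(abs(a), abs(b))
```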
2007-09-18 Andy Buckley <andy@insectnation.org>
* Major bug fixes to the regularised version of the sphericity
projection (and hence the Parisi tensor projection). Don't trust
C & D param results from any previous version!
* Added extra methods to thrust and sphericity projections to get
the oblateness and the sphericity basis (currently returns dummy
axes since I can't yet work out how to get the similarity
transform eigenvectors from CLHEP)
2007-09-14 Andy Buckley <andy@insectnation.org>
* Merged in a branch of pluggable analysis mechanisms.
2007-06-25 Jon Butterworth <jmb@hep.ucl.ac.uk>
* Fixed some bugs in the root output for DataPoint.h
2007-06-25 Andy Buckley <andy@insectnation.org>
* include/Rivet/**/Makefile.am: No longer installing headers for
"internal" functionality.
* include/Rivet/Projections/*.hh: Removed the private restrictions
on copy-assignment operators.
2007-06-18 Leif Lonnblad <Leif.Lonnblad@thep.lu.se>
* include/LWH/Tree.h: Fixed minor bug in listObjectNames.
* include/LWH/DataPointSet.h: Fixed setCoordinate functions so
that they resize the vector of DataPoints if it initially was
empty.
* include/LWH/DataPoint.h: Added constructor taking a vector of
measurements.
2007-06-16 Leif Lonnblad <Leif.Lonnblad@thep.lu.se>
* include/LWH/Tree.h: Implemented the listObjectNames and ls
functions.
* include/Rivet/Projections/FinalStateHCM.hh,
include/Rivet/Projections/VetoedFinalState.hh: removed
_theParticles and corresponding access function. Use base class
variable instead.
* include/Rivet/Projections/FinalState.hh: Made _theParticles
protected.
2007-06-13 Leif Lonnblad <Leif.Lonnblad@thep.lu.se>
* src/Projections/FinalStateHCM.cc,
src/Projections/DISKinematics.cc: Equality checks using
GenParticle::operator== changed to check for pointer equality.
* include/Rivet/Analysis/HepEx9506012.hh: Uses modified DISLepton
projection.
* include/Rivet/Particle.hh: Added member function to check if a
GenParticle is associated.
* include/Rivet/Projections/DISLepton.hh,
src/Projections/DISLepton.cc: Fixed bug in projection. Introduced
final state projection to limit searching for scattered
lepton. Still not properly tested.
2007-06-08 Leif Lonnblad <Leif.Lonnblad@thep.lu.se>
* include/Rivet/Projections/PVertex.hh,
src/Projections/PVertex.cc: Fixed the projection to simply get the
signal_process_vertex from the GenEvent. This is the way it should
work. If the GenEvent does not have a signal_process_vertex
properly set up in this way, the problem is with the class that
fills the GenEvent.
2007-06-06 Jon Butterworth <jmb@hep.ucl.ac.uk>
* Merged TotalVisibleMomentum and CalMET
* Added pT ranges to Vetoed final state projection
2007-05-27 Jon Butterworth <jmb@hep.ucl.ac.uk>
* Fixed initialization of VetoedFinalStateProjection in ExampleTree
2007-05-27 Leif Lonnblad <Leif.Lonnblad@thep.lu.se>
* include/Rivet/Projections/KtJets.*: Make sure the KtEvent is
deleted properly.
2007-05-26 Jon Butterworth <jmb@hep.ucl.ac.uk>
* Added leptons to the ExampleTree.
* Added TotalVisibleEnergy projection, and added output to ExampleTree.
2007-05-25 Jon Butterworth <jmb@hep.ucl.ac.uk>
* Added a charged lepton projection
2007-05-23 Andy Buckley <andy@insectnation.org>
* src/Analysis/HepEx0409040.cc: Changed range of the histograms to
the "pi" range rather than the "128" range.
* src/Analysis/Analysis.cc: Fixed a bug in the AIDA path building.
Histogram auto-booking now works.
2007-05-23 Leif Lonnblad <Leif.Lonnblad@thep.lu.se>
* src/Analysis/HepEx9506012.cc: Now uses the histogram booking
function in the Analysis class.
2007-05-23 Jon Butterworth <jmb@hep.ucl.ac.uk>
* Fixed bug in PRD65092002 (was failing on zero jets)
2007-05-23 Andy Buckley <andy@insectnation.org>
* Added (but haven't properly tested) a VetoedFinalState projection.
* Added normalize() method for AIDA 1D histograms.
* Added configure checking for Mac OS X version, and setting the
development target flag accordingly.
2007-05-22 Andy Buckley <andy@insectnation.org>
* Added an ostream method for AnalysisName enums.
* Converted Analyses and Projections to use projection lists, cuts
and beam constraints.
* Added beam pair combining to the BeamPair sets of Projections
by finding set meta-intersections.
* Added methods to Cuts, Analysis and Projection to make Cut
definition easier.
* Fixed default fall-through in cut handling switch statement and
now using -numeric_limits<double>::max() rather than min()
* Added more control of logging presentation via static flag
methods on Log.
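The min()/max() fix above is easy to trip over: for floating-point types, numeric_limits<double>::min() is the smallest *positive* normal value, not the most negative one, so a "minus infinity"-like sentinel must use -max(). Python's sys.float_info has the same convention, which this sketch uses to illustrate the point:

```python
import sys

# sys.float_info.min is the smallest POSITIVE normal double,
# analogous to numeric_limits<double>::min() in C++ ...
smallest_positive = sys.float_info.min
# ... so the most negative finite double is -max, not min
most_negative = -sys.float_info.max

assert smallest_positive > 0.0
assert most_negative < -1e300
```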
2007-05-13 Andy Buckley <andy@insectnation.org>
* Added self-consistency checking mechanisms for Cuts and Beam
* Re-implemented the cut-handling part of RivetInfo as a Cuts class.
* Changed names of Analysis and Projection name() and handler()
methods to getName() and getHandler() to be more consistent with
the rest of the public method names in those classes.
2007-05-02 Andy Buckley <andy@insectnation.org>
* Added auto-booking of histogram bins from AIDA XML files. The
AIDA files are located via a C++ function which is generated from
RivetPaths.cc.in by running configure.
2007-04-18 Andy Buckley <andy@insectnation.org>
* Added a preliminary version of the Rick Field UE analysis, under
the name PRD65092002.
2007-04-19 Leif Lonnblad <Leif.Lonnblad@thep.lu.se>
* src/Analysis/HepEx0409040.cc: The reason this did not compile
under gcc-4 is that some iterators into a vector were wrongly
assumed to be pointers and were initialized to 0 and later compared
to 0. I've changed this to initialize to end() of the
corresponding vector and to compare with the same end() later.
2007-04-05 Andy Buckley <andy@insectnation.org>
* Lots of name changes in anticipation of the MCNet
school. RivetHandler is now AnalysisHandler (since that's what it
does!), BeamParticle has become ParticleName, and RivetInfo has
been split into Cut and BeamConstraint portions.
* Added BeamConstraint mechanism, which can be used to determine
if an analysis is compatible with the beams being used in the
generator. The ParticleName includes an "ANY" wildcard for this
purpose.
2006-03-19 Andy Buckley <andy@insectnation.org>
* Added "rivet" executable which can read in HepMC ASCII dump
files and apply Rivet analyses on the events.
2007-02-24 Leif Lonnblad <Leif.Lonnblad@thep.lu.se>
* src/Projections/KtJets.cc: Added comparison of member variables
in compare() function
* all: Merged changes from polymorphic-projections branch into
trunk
2007-02-17 Leif Lonnblad <Leif.Lonnblad@thep.lu.se>
* all: projections and analysis handlers: All projections which
use other projections now have a pointer rather than a copy of
those projections, to allow for polymorphism. The constructors have
also been changed to require the used projections themselves,
rather than the arguments needed to construct them.
2007-02-17 Leif Lonnblad <Leif.Lonnblad@thep.lu.se>
* src/Projections/FinalState.cc,
include/Rivet/Projections/FinalState.icc (Rivet),
include/Rivet/Projections/FinalState.hh: Added cut in transverse
momentum on the particles to be included in the final state.
2007-02-06 Leif Lonnblad <Leif.Lonnblad@thep.lu.se>
* include/LWH/HistogramFactory.h: Fixed divide-by-zero in divide
function. Also fixed bug in error calculation in divide
function. Introduced checkBin function to make sure two histograms
are equal even if they have variable bin widths.
* include/LWH/Histogram1D.h: In normalize(double), do not do anything
if the sum of the bins are zero to avoid dividing by zero.
2007-01-20 Leif Lonnblad <Leif.Lonnblad@thep.lu.se>
* src/Test/testLWH.cc: Modified to output files using the Tree.
* configure.ac: Removed AC_CONFIG_AUX_DIR([include/Rivet/Config])
since the directory does not exist anymore.
2006-12-21 Andy Buckley <andy@insectnation.org>
* Rivet will now conditionally install the AIDA and LWH headers if
it can't find them when configure'ing.
* Started integrating Leif's LWH package to fulfill the AIDA
duties.
* Replaced multitude of CLHEP wrapper headers with a single
RivetCLHEP.h header.
2006-11-20 Andy Buckley <andy@insectnation.org>
* Introduced log4cpp logging.
* Added analysis enum, which can be used as input to an analysis
factory by Rivet users.
2006-11-02 Andy Buckley <andy@insectnation.org>
* Yet more, almost pointless, administrative moving around of
things with the intention of making the structure a bit
better-defined:
* The RivetInfo and RivetHandler classes have been
moved from src/Analysis into src as they are really the main Rivet
interface classes. The Rivet.h header has also been moved into the
"header root".
* The build of a single shared library in lib has been disabled,
with the library being built instead in src.
2006-10-14 Andy Buckley <andy@insectnation.org>
* Introduced a minimal subset of the Sherpa math tools, such as
Vector{3,4}D, Matrix, etc. The intention is to eventually cut the
dependency on CLHEP.
2006-07-28 Andy Buckley <andy@insectnation.org>
* Moving things around: all sources now in directories under src
2006-06-04 Leif Lonnblad <Leif.Lonnblad@thep.lu.se>
* Analysis/Examples/HZ95108.*: Now uses CentralEtHCM. Also set GeV
units on the relevant histograms.
* Projections/CentralEtHCM.*: Making a special class just to get
out one number - the summed Et in the central rapidity bin - may
seem like overkill. But in case someone else might need it...
2006-06-03 Leif Lonnblad <Leif.Lonnblad@thep.lu.se>
* Analysis/Examples/HZ95108.*: Added the hz95108 energy flow
analysis from HZtool.
* Projections/DISLepton.*: Since many HERA measurements do not
care if we have electron or positron beam, it is now possible to
specify lepton or anti-lepton.
* Projections/Event.*: Added member and access function for the
weight of an event (taken from the GenEvent object's weights()[0]).
* Analysis/RivetHandler.*: Now depends explicitly on the AIDA
interface. An AIDA analysis factory must be specified in the
constructor, where a tree and histogram factory is automatically
created. Added access functions to the relevant AIDA objects.
* Analysis/AnalysisBase.*: Added access to the RivetHandler and
its AIDA factories.
2005-12-27 Leif Lonnblad <Leif.Lonnblad@thep.lu.se>
* configure.ac: Added -I$THEPEGPATH/include to AM_CPPFLAGS.
* Config/Rivet.h: Added some std includes and using std::
declaration.
* Analysis/RivetInfo.*: Fixed some bugs. The RivetInfo facility
now works, although it has not been thoroughly tested.
* Analysis/Examples/TestMultiplicity.*: Re-introduced
FinalStateHCM for testing purposes but commented it away again.
* .: Made a number of changes to implement handling of RivetInfo
objects.
diff --git a/bin/Makefile.am b/bin/Makefile.am
--- a/bin/Makefile.am
+++ b/bin/Makefile.am
@@ -1,34 +1,36 @@
bin_SCRIPTS = rivet-config rivet-buildplugin
dist_bin_SCRIPTS = make-plots make-pgfplots
EXTRA_DIST =
RIVETPROGS = \
rivet \
rivet-mkanalysis \
- rivet-findid rivet-which \
+ rivet-findid rivet-which \
+ rivet-diffhepdata \
+ rivet-diffhepdata-all \
rivet-cmphistos rivet-mkhtml
if ENABLE_PYEXT
dist_bin_SCRIPTS += $(RIVETPROGS)
else
EXTRA_DIST += $(RIVETPROGS)
endif
## bash completion
if ENABLE_PYEXT
dist_pkgdata_DATA = rivet-completion
bashcomp_dir = $(DESTDIR)$(prefix)/etc/bash_completion.d
install-data-local:
if [[ -d "$(bashcomp_dir)" && -w "$(bashcomp_dir)" ]]; then \
$(install_sh_DATA) rivet-completion $(bashcomp_dir)/; fi
uninstall-local:
rm -f $(bashcomp_dir)/rivet-completion
else
EXTRA_DIST += rivet-completion
endif
## No-Python Rivet program
noinst_PROGRAMS = rivet-nopy
rivet_nopy_SOURCES = rivet-nopy.cc
rivet_nopy_CPPFLAGS = -I$(top_srcdir)/include $(AM_CPPFLAGS)
rivet_nopy_LDFLAGS = -L$(HEPMCLIBPATH)
rivet_nopy_LDADD = $(top_builddir)/src/libRivet.la -lHepMC
diff --git a/bin/make-pgfplots b/bin/make-pgfplots
--- a/bin/make-pgfplots
+++ b/bin/make-pgfplots
@@ -1,2802 +1,2819 @@
#! /usr/bin/env python
"""\
-Usage: %prog [options] file.dat [file2.dat ...]
+%(prog)s [options] file.dat [file2.dat ...]
TODO
* Optimise output for e.g. lots of same-height bins in a row
* Add a RatioFullRange directive to show the full range of error bars + MC envelope in the ratio
* Tidy LaTeX-writing code -- faster to compile one doc only, then split it?
* Handle boolean values flexibly (yes, no, true, false, etc. as well as 1, 0)
"""
from __future__ import print_function
##
## This program is copyright by Christian Gutschow <chris.g@cern.ch> and
## the Rivet team https://rivet.hepforge.org. It may be used
## for scientific and private purposes. Patches are welcome, but please don't
## redistribute changed versions yourself.
##
## Check the Python version
import sys
if sys.version_info[:3] < (2,6,0):
print("make-plots requires Python version >= 2.6.0... exiting")
sys.exit(1)
## Try to rename the process on Linux
try:
import ctypes
libc = ctypes.cdll.LoadLibrary('libc.so.6')
libc.prctl(15, 'make-plots', 0, 0, 0)
except Exception as e:
pass
import os, logging, re
import tempfile
import getopt
import string
import copy
from math import *
## Regex patterns
pat_begin_block = re.compile(r'^#+\s*BEGIN ([A-Z0-9_]+) ?(\S+)?')
pat_end_block = re.compile('^#+\s*END ([A-Z0-9_]+)')
pat_comment = re.compile('^#|^\s*$')
pat_property = re.compile('^(\w+?)=(.*)$')
+pat_property_opt = re.compile('^ReplaceOption\[(\w+=\w+)\]=(.*)$')
pat_path_property = re.compile('^(\S+?)::(\w+?)=(.*)$')
+pat_options = re.compile(r"((?::\w+=[^:/]+)+)")
def fuzzyeq(a, b, tolerance=1e-6):
"Fuzzy equality comparison function for floats, with given fractional tolerance"
# if type(a) is not float or type(a) is not float:
# print(a, b)
if (a == 0 and abs(b) < 1e-12) or (b == 0 and abs(a) < 1e-12):
return True
return 2.0*abs(a-b) < tolerance * abs(a+b)
def inrange(x, a, b):
return x >= a and x < b
def floatify(x):
if type(x) is str:
x = x.split()
if not hasattr(x, "__len__"):
x = [x]
x = [float(a) for a in x]
return x[0] if len(x) == 1 else x
def floatpair(x):
if type(x) is str:
x = x.split()
if hasattr(x, "__len__"):
assert len(x) == 2
return [float(a) for a in x]
return [float(x), float(x)]
def is_end_marker(line, blockname):
m = pat_end_block.match(line)
return m and m.group(1) == blockname
def is_comment(line):
return pat_comment.match(line) is not None
def checkColor(line):
if '[RGB]' in line:
# e.g. '{[RGB]{1,2,3}}'
if line[0] == '{' and line[-1] == '}': line = line[1:-1]
# i.e. '[RGB]{1,2,3}'
composition = line.split('{')[1][:-1]
# e.g. '1,2,3'
line = '{rgb,255:red,%s;green,%s;blue,%s}' % tuple(composition.split(','))
return line
class Described(object):
"Inherited functionality for objects holding a 'props' dictionary"
def __init__(self):
pass
def has_attr(self, key):
return key in self.props
def set_attr(self, key, val):
self.props[key] = val
def attr(self, key, default=None):
return self.props.get(key, default)
def attr_bool(self, key, default=None):
x = self.attr(key, default)
if str(x).lower() in ["1", "true", "yes", "on"]: return True
if str(x).lower() in ["0", "false", "no", "off"]: return False
return None
def attr_int(self, key, default=None):
x = self.attr(key, default)
try:
x = int(x)
except:
x = None
return x
def attr_float(self, key, default=None):
x = self.attr(key, default)
try:
x = float(x)
except:
x = None
return x
#def doGrid(self, pre = ''):
# grid_major = self.attr_bool(pre + 'Grid', False) or self.attr_bool(pre + 'GridMajor', False)
# grid_minor = self.attr_bool(pre + 'GridMinor', False)
# if grid_major and grid_minor: return 'both'
# elif grid_major: return 'major'
# elif grid_minor: return 'minor'
def ratio_names(self, skipFirst = False):
offset = 1 if skipFirst else 0
return [ ('RatioPlot%s' % (str(i) if i else ''), i) for i in range(skipFirst, 9) ]
def legend_names(self, skipFirst = False):
offset = 1 if skipFirst else 0
return [ ('Legend%s' % (str(i) if i else ''), i) for i in range(skipFirst, 9) ]
class InputData(Described):
def __init__(self, filename):
self.filename=filename
if not self.filename.endswith(".dat"):
self.filename += ".dat"
self.props = {}
self.histos = {}
self.ratios = {}
self.special = {}
self.functions = {}
# not sure what this is good for ... yet
self.histomangler = {}
self.normalised = False
+ self.props['_OptSubs'] = { }
self.props['is2dim'] = False
# analyse input dat file
f = open(self.filename)
for line in f:
m = pat_begin_block.match(line)
if m:
name, path = m.group(1,2)
if path is None and name != 'PLOT':
raise Exception('BEGIN sections need a path name.')
if name == 'PLOT':
self.read_input(f);
elif name == 'SPECIAL':
self.special[path] = Special(f)
elif name == 'HISTOGRAM' or name == 'HISTOGRAM2D':
self.histos[path] = Histogram(f, p=path)
self.props['is2dim'] = self.histos[path].is2dim
self.histos[path].zlog = self.attr_bool('LogZ')
if self.attr_bool('Is3D', 0):
self.histos[path].zlog = False
if not self.histos[path].getName() == '':
newname = self.histos[path].getName()
self.histos[newname] = copy.deepcopy(self.histos[path])
del self.histos[path]
elif name == 'HISTO1D':
self.histos[path] = Histo1D(f, p=path)
if not self.histos[path].getName() == '':
newname = self.histos[path].getName()
self.histos[newname] = copy.deepcopy(self.histos[path])
del self.histos[path]
elif name == 'HISTO2D':
self.histos[path] = Histo2D(f, p=path)
self.props['is2dim'] = True
self.histos[path].zlog = self.attr_bool('LogZ')
if self.attr_bool('Is3D', 0):
self.histos[path].zlog = False
if not self.histos[path].getName() == '':
newname = self.histos[path].getName()
self.histos[newname] = copy.deepcopy(self.histos[path])
del self.histos[path]
elif name == 'HISTOGRAMMANGLER':
self.histomangler[path] = PlotFunction(f)
elif name == 'COUNTER':
self.histos[path] = Counter(f, p=path)
elif name == 'VALUE':
self.histos[path] = Value(f, p=path)
elif name == 'FUNCTION':
self.functions[path] = Function(f)
f.close()
- self.apply_config_files(opts.CONFIGFILES)
+ self.apply_config_files(args.CONFIGFILES)
self.props.setdefault('PlotSizeX', 10.)
self.props.setdefault('PlotSizeY', 6.)
if self.props['is2dim']:
self.props['PlotSizeX'] -= 1.7
self.props['PlotSizeY'] = 10.
self.props['RatioPlot'] = '0'
if self.props.get('PlotSize', '') != '':
plotsizes = self.props['PlotSize'].split(',')
self.props['PlotSizeX'] = float(plotsizes[0])
self.props['PlotSizeY'] = float(plotsizes[1])
if len(plotsizes) == 3:
self.props['RatioPlotSizeY'] = float(plotsizes[2])
del self.props['PlotSize']
self.props['RatioPlotSizeY'] = 0. # default is no ratio
if self.attr('MainPlot') == '0':
## has RatioPlot, but no MainPlot
self.props['PlotSizeY'] = 0. # size of MainPlot
self.props['RatioPlot'] = '1' #< don't allow both to be zero!
if self.attr_bool('RatioPlot'):
if self.has_attr('RatioPlotYSize') and self.attr('RatioPlotYSize') != '':
self.props['RatioPlotSizeY'] = self.attr_float('RatioPlotYSize')
else:
self.props['RatioPlotSizeY'] = 6. if self.attr_bool('MainPlot') else 3.
if self.props['is2dim']: self.props['RatioPlotSizeY'] *= 2.
for rname, _ in self.ratio_names(True):
if self.attr_bool(rname, False):
if self.props.get(rname+'YSize') != '':
self.props[rname+'SizeY'] = self.attr_float(rname+'YSize')
else:
self.props[rname+'SizeY'] = 3. if self.attr('MainPlot') == '0' else 6.
if self.props['is2dim']: self.props[rname+'SizeY'] *= 2.
## Ensure numbers, not strings
self.props['PlotSizeX'] = float(self.props['PlotSizeX'])
self.props['PlotSizeY'] = float(self.props['PlotSizeY'])
self.props['RatioPlotSizeY'] = float(self.props['RatioPlotSizeY'])
# self.props['TopMargin'] = float(self.props['TopMargin'])
# self.props['BottomMargin'] = float(self.props['BottomMargin'])
self.props['LogX'] = self.attr_bool('LogX', 0)
self.props['LogY'] = self.attr_bool('LogY', 0)
self.props['LogZ'] = self.attr_bool('LogZ', 0)
for rname, _ in self.ratio_names(True):
self.props[rname+'LogY'] = self.attr_bool(rname+'LogY', 0)
self.props[rname+'LogZ'] = self.attr_bool(rname+'LogZ', 0)
if self.has_attr('Rebin'):
for key in self.histos:
self.histos[key].props['Rebin'] = self.props['Rebin']
if self.has_attr('ConnectBins'):
for key in self.histos:
self.histos[key].props['ConnectBins'] = self.props['ConnectBins']
self.histo_sorting('DrawOnly')
for curves, _ in self.ratio_names():
if self.has_attr(curves+'DrawOnly'):
self.histo_sorting(curves+'DrawOnly')
else:
self.props[curves+'DrawOnly'] = self.props['DrawOnly']
## Inherit various values from histograms if not explicitly set
#for k in ['LogX', 'LogY', 'LogZ',
# 'XLabel', 'YLabel', 'ZLabel',
# 'XCustomMajorTicks', 'YCustomMajorTicks', 'ZCustomMajorTicks']:
# self.inherit_from_histos(k)
@property
def is2dim(self):
return self.attr_bool("is2dim", False)
@is2dim.setter
def is2dim(self, val):
self.set_attr("is2dim", val)
@property
def drawonly(self):
x = self.attr("DrawOnly")
if type(x) is str:
self.drawonly = x #< use setter to listify
return x if x else []
@drawonly.setter
def drawonly(self, val):
if type(val) is str:
val = val.strip().split()
self.set_attr("DrawOnly", val)
@property
def stacklist(self):
x = self.attr("Stack")
if type(x) is str:
self.stacklist = x #< use setter to listify
return x if x else []
@stacklist.setter
def stacklist(self, val):
if type(val) is str:
val = val.strip().split()
self.set_attr("Stack", val)
@property
def plotorder(self):
x = self.attr("PlotOrder")
if type(x) is str:
self.plotorder = x #< use setter to listify
return x if x else []
@plotorder.setter
def plotorder(self, val):
if type(val) is str:
val = val.strip().split()
self.set_attr("PlotOrder", val)
@property
def plotsizex(self):
return self.attr_float("PlotSizeX")
@plotsizex.setter
def plotsizex(self, val):
self.set_attr("PlotSizeX", val)
@property
def plotsizey(self):
return self.attr_float("PlotSizeY")
@plotsizey.setter
def plotsizey(self, val):
self.set_attr("PlotSizeY", val)
@property
def plotsize(self):
return [self.plotsizex, self.plotsizey]
@plotsize.setter
def plotsize(self, val):
if type(val) is str:
val = [float(x) for x in val.split(",")]
assert len(val) == 2
self.plotsizex = val[0]
self.plotsizey = val[1]
@property
def ratiosizey(self):
return self.attr_float("RatioPlotSizeY")
@ratiosizey.setter
def ratiosizey(self, val):
self.set_attr("RatioPlotSizeY", val)
@property
def scale(self):
return self.attr_float("Scale")
@scale.setter
def scale(self, val):
self.set_attr("Scale", val)
@property
def xmin(self):
return self.attr_float("XMin")
@xmin.setter
def xmin(self, val):
self.set_attr("XMin", val)
@property
def xmax(self):
return self.attr_float("XMax")
@xmax.setter
def xmax(self, val):
self.set_attr("XMax", val)
@property
def xrange(self):
return [self.xmin, self.xmax]
@xrange.setter
def xrange(self, val):
if type(val) is str:
val = [float(x) for x in val.split(",")]
assert len(val) == 2
self.xmin = val[0]
self.xmax = val[1]
@property
def ymin(self):
return self.attr_float("YMin")
@ymin.setter
def ymin(self, val):
self.set_attr("YMin", val)
@property
def ymax(self):
return self.attr_float("YMax")
@ymax.setter
def ymax(self, val):
self.set_attr("YMax", val)
@property
def yrange(self):
return [self.ymin, self.ymax]
@yrange.setter
def yrange(self, val):
if type(val) is str:
val = [float(y) for y in val.split(",")]
assert len(val) == 2
self.ymin = val[0]
self.ymax = val[1]
# TODO: add more rw properties for plotsize(x,y), ratiosize(y),
# show_mainplot, show_ratioplot, show_legend, log(x,y,z), rebin,
# drawonly, legendonly, plotorder, stack,
# label(x,y,z), majorticks(x,y,z), minorticks(x,y,z),
# min(x,y,z), max(x,y,z), range(x,y,z)
def getLegendPos(self, prefix = ''):
xpos = self.attr_float(prefix+'LegendXPos', 0.95 if self.getLegendAlign() == 'right' else 0.53)
ypos = self.attr_float(prefix+'LegendYPos', 0.93)
return (xpos, ypos)
def getLegendAlign(self, prefix = ''):
la = self.attr(prefix+'LegendAlign', 'left')
if la == 'l': return 'left'
elif la == 'c': return 'center'
elif la == 'r': return 'right'
else: return la
#def inherit_from_histos(self, k):
# """Note: this will inherit the key from a random histogram:
# only use if you're sure all histograms have this key!"""
# if k not in self.props:
# h = list(self.histos.values())[0]
# if k in h.props:
# self.props[k] = h.props[k]
def read_input(self, f):
for line in f:
if is_end_marker(line, 'PLOT'):
break
elif is_comment(line):
continue
m = pat_property.match(line)
- if m:
+ m_opt = pat_property_opt.match(line)
+ if m_opt:
+ opt_old, opt_new = m_opt.group(1,2)
+ self.props['_OptSubs'][opt_old.strip()] = opt_new.strip()
+ elif m:
prop, value = m.group(1,2)
+ prop = prop.strip()
+ value = value.strip()
if prop in self.props:
logging.debug("Overwriting property %s = %s -> %s" % (prop, self.props[prop], value))
## Use strip here to deal with DOS newlines containing \r
self.props[prop.strip()] = value.strip()
def apply_config_files(self, conffiles):
"""Use config file to overwrite cosmetic properties."""
if conffiles is not None:
for filename in conffiles:
cf = open(filename, 'r')
lines = cf.readlines()
for i in range(len(lines)):
## First evaluate PLOT sections
m = pat_begin_block.match(lines[i])
if m and m.group(1) == 'PLOT' and re.match(m.group(2),self.filename):
while i<len(lines)-1:
i = i+1
if is_end_marker(lines[i], 'PLOT'):
break
elif is_comment(lines[i]):
continue
m = pat_property.match(lines[i])
if m:
prop, value = m.group(1,2)
if prop in self.props:
logging.debug("Overwriting from conffile property %s = %s -> %s" % (prop, self.props[prop], value))
## Use strip here to deal with DOS newlines containing \r
self.props[prop.strip()] = value.strip()
elif is_comment(lines[i]):
continue
else:
## Then evaluate path-based settings, e.g. for HISTOGRAMs
m = pat_path_property.match(lines[i])
if m:
regex, prop, value = m.group(1,2,3)
for obj_dict in [self.special, self.histos, self.functions]:
for path, obj in obj_dict.items():
if re.match(regex, path):
## Use strip here to deal with DOS newlines containing \r
obj.props.update({prop.strip() : value.strip()})
cf.close()
def histo_sorting(self, curves):
"""Determine in what order to draw curves."""
histoordermap = {}
histolist = self.histos.keys()
if self.has_attr(curves):
histolist = filter(self.histos.keys().count, self.attr(curves).strip().split())
for histo in histolist:
order = 0
if self.histos[histo].has_attr('PlotOrder'):
order = int(self.histos[histo].attr('PlotOrder'))
if not order in histoordermap:
histoordermap[order] = []
histoordermap[order].append(histo)
sortedhistolist = []
for i in sorted(histoordermap.keys()):
sortedhistolist.extend(histoordermap[i])
self.props[curves] = sortedhistolist
class Plot(object):
def __init__(self): #, inputdata):
self.customCols = {}
def panel_header(self, **kwargs):
out = ''
out += ('\\begin{axis}[\n')
out += ('at={(0,%4.3fcm)},\n' % kwargs['PanelOffset'])
out += ('xmode=%s,\n' % kwargs['Xmode'])
out += ('ymode=%s,\n' % kwargs['Ymode'])
#if kwargs['Zmode']: out += ('zmode=log,\n')
out += ('scale only axis=true,\n')
out += ('scaled ticks=false,\n')
out += ('clip marker paths=true,\n')
out += ('axis on top,\n')
out += ('axis line style={line width=0.3pt},\n')
out += ('height=%scm,\n' % kwargs['PanelHeight'])
out += ('width=%scm,\n' % kwargs['PanelWidth'])
out += ('xmin=%s,\n' % kwargs['Xmin'])
out += ('xmax=%s,\n' % kwargs['Xmax'])
out += ('ymin=%s,\n' % kwargs['Ymin'])
out += ('ymax=%s,\n' % kwargs['Ymax'])
if kwargs['is2D']:
out += ('zmin=%s,\n' % kwargs['Zmin'])
out += ('zmax=%s,\n' % kwargs['Zmax'])
#out += ('legend style={\n')
#out += (' draw=none, fill=none, anchor = north west,\n')
#out += (' at={(%4.3f,%4.3f)},\n' % kwargs['LegendPos'])
#out += ('},\n')
#out += ('legend cell align=%s,\n' % kwargs['LegendAlign'])
#out += ('legend image post style={sharp plot, -},\n')
if kwargs['is2D']:
if kwargs['is3D']:
hrotate = 45 + kwargs['HRotate']
vrotate = 30 + kwargs['VRotate']
out += ('view={%i}{%s}, zticklabel pos=right,\n' % (hrotate, vrotate))
else:
out += ('view={0}{90}, colorbar,\n')
out += ('colormap/%s,\n' % kwargs['ColorMap'])
if not kwargs['is3D'] and kwargs['Zmode']:
out += ('colorbar style={yticklabel=$\\,10^{\\pgfmathprintnumber{\\tick}}$},\n')
#if kwargs['Grid']:
# out += ('grid=%s,\n' % kwargs['Grid'])
for axis, label in kwargs['Labels'].iteritems():
out += ('%s={%s},\n' % (axis.lower(), label))
if kwargs['XLabelSep'] != None:
if not kwargs['is3D']:
out += ('xlabel style={at={(1,0)},below left,yshift={-%4.3fcm}},\n' % kwargs['XLabelSep'])
out += ('xticklabel shift=%4.3fcm,\n' % kwargs['XTickShift'])
else:
out += ('xticklabels={,,},\n')
if kwargs['YLabelSep'] != None:
if not kwargs['is3D']:
out += ('ylabel style={at={(0,1)},left,yshift={%4.3fcm}},\n' % kwargs['YLabelSep'])
out += ('yticklabel shift=%4.3fcm,\n' % kwargs['YTickShift'])
out += ('major tick length={%4.3fcm},\n' % kwargs['MajorTickLength'])
out += ('minor tick length={%4.3fcm},\n' % kwargs['MinorTickLength'])
# check if 'number of minor tick divisions' is specified
for axis, nticks in kwargs['MinorTicks'].iteritems():
if nticks:
out += ('minor %s tick num=%i,\n' % (axis.lower(), nticks))
# check if actual major/minor tick divisions have been specified
out += ('max space between ticks=20,\n')
for axis, tickinfo in kwargs['CustomTicks'].iteritems():
majorlabels, majorticks, minorticks = tickinfo
if len(minorticks):
out += ('minor %stick={%s},\n' % (axis.lower(), ','.join(minorticks)))
if len(majorticks):
if float(majorticks[0]) > float(kwargs['%smin' % axis]):
majorticks = [ str(2 * float(majorticks[0]) - float(majorticks[1])) ] + majorticks
if len(majorlabels):
majorlabels = [ '.' ] + majorlabels # dummy label
if float(majorticks[-1]) < float(kwargs['%smax' % axis]):
majorticks.append(str(2 * float(majorticks[-1]) - float(majorticks[-2])))
if len(majorlabels):
majorlabels.append('.') # dummy label
out += ('%stick={%s},\n' % (axis.lower(), ','.join(majorticks)))
if kwargs['NeedsXLabels'] and len(majorlabels):
out += ('%sticklabels={{%s}},\n' % (axis.lower(), '},{'.join(majorlabels)))
out += ('every %s tick/.style={black},\n' % axis.lower())
out += (']\n')
return out
def panel_footer(self):
out = ''
out += ('\\end{axis}\n')
return out
def set_normalisation(self, inputdata):
if inputdata.normalised:
return
for method in ['NormalizeToIntegral', 'NormalizeToSum']:
if inputdata.has_attr(method):
for key in inputdata.props['DrawOnly']:
if not inputdata.histos[key].has_attr(method):
inputdata.histos[key].props[method] = inputdata.props[method]
if inputdata.has_attr('Scale'):
for key in inputdata.props['DrawOnly']:
inputdata.histos[key].props['Scale'] = inputdata.attr_float('Scale')
for key in inputdata.histos.keys():
inputdata.histos[key].mangle_input()
inputdata.normalised = True
def stack_histograms(self, inputdata):
if inputdata.has_attr('Stack'):
stackhists = [h for h in inputdata.attr('Stack').strip().split() if h in inputdata.histos]
previous = ''
for key in stackhists:
if previous != '':
inputdata.histos[key].add(inputdata.histos[previous])
previous = key
def set_histo_options(self, inputdata):
if inputdata.has_attr('ConnectGaps'):
for key in inputdata.histos.keys():
if not inputdata.histos[key].has_attr('ConnectGaps'):
inputdata.histos[key].props['ConnectGaps'] = inputdata.props['ConnectGaps']
# Counter and Value only have dummy x-axis, ticks wouldn't make sense here, so suppress them:
if 'Value object' in str(inputdata.histos) or 'Counter object' in str(inputdata.histos):
inputdata.props['XCustomMajorTicks'] = ''
inputdata.props['XCustomMinorTicks'] = ''
def set_borders(self, inputdata):
self.set_xmax(inputdata)
self.set_xmin(inputdata)
self.set_ymax(inputdata)
self.set_ymin(inputdata)
self.set_zmax(inputdata)
self.set_zmin(inputdata)
inputdata.props['Borders'] = (self.xmin, self.xmax, self.ymin, self.ymax, self.zmin, self.zmax)
def set_xmin(self, inputdata):
self.xmin = inputdata.xmin
if self.xmin is None:
xmins = [inputdata.histos[h].getXMin() for h in inputdata.props['DrawOnly']]
self.xmin = min(xmins) if xmins else 0.0
def set_xmax(self,inputdata):
self.xmax = inputdata.xmax
if self.xmax is None:
xmaxs = [inputdata.histos[h].getXMax() for h in inputdata.props['DrawOnly']]
self.xmax = max(xmaxs) if xmaxs else 1.0
def set_ymin(self,inputdata):
if inputdata.ymin is not None:
self.ymin = inputdata.ymin
else:
ymins = [inputdata.histos[i].getYMin(self.xmin, self.xmax, inputdata.props['LogY']) for i in inputdata.attr('DrawOnly')]
minymin = min(ymins) if ymins else 0.0
if inputdata.props['is2dim']:
self.ymin = minymin
else:
showzero = inputdata.attr_bool("ShowZero", True)
if showzero:
self.ymin = 0. if minymin > -1e-4 else 1.1*minymin
else:
self.ymin = 1.1*minymin if minymin < -1e-4 else 0 if minymin < 1e-4 else 0.9*minymin
if inputdata.props['LogY']:
ymins = [ymin for ymin in ymins if ymin > 0.0]
if not ymins:
if self.ymax == 0:
self.ymax = 1
ymins.append(2e-7*self.ymax)
minymin = min(ymins)
- fullrange = opts.FULL_RANGE
+ fullrange = args.FULL_RANGE
if inputdata.has_attr('FullRange'):
fullrange = inputdata.attr_bool('FullRange')
self.ymin = minymin/1.7 if fullrange else max(minymin/1.7, 2e-7*self.ymax)
if self.ymin == self.ymax:
self.ymin -= 1
self.ymax += 1
def set_ymax(self,inputdata):
if inputdata.has_attr('YMax'):
self.ymax = inputdata.attr_float('YMax')
else:
ymaxs = [inputdata.histos[h].getYMax(self.xmin, self.xmax) for h in inputdata.attr('DrawOnly')]
self.ymax = max(ymaxs) if ymaxs else 1.0
if not inputdata.is2dim:
self.ymax *= (1.7 if inputdata.attr_bool('LogY') else 1.1)
def set_zmin(self,inputdata):
if inputdata.has_attr('ZMin'):
self.zmin = inputdata.attr_float('ZMin')
else:
zmins = [inputdata.histos[i].getZMin(self.xmin, self.xmax, self.ymin, self.ymax) for i in inputdata.attr('DrawOnly')]
minzmin = min(zmins) if zmins else 0.0
self.zmin = minzmin
if zmins:
showzero = inputdata.attr_bool('ShowZero', True)
if showzero:
self.zmin = 0 if minzmin > -1e-4 else 1.1*minzmin
else:
self.zmin = 1.1*minzmin if minzmin < -1e-4 else 0. if minzmin < 1e-4 else 0.9*minzmin
if inputdata.attr_bool('LogZ', False):
zmins = [zmin for zmin in zmins if zmin > 0]
if not zmins:
if self.zmax == 0:
self.zmax = 1
zmins.append(2e-7*self.zmax)
minzmin = min(zmins)
- fullrange = inputdata.attr_bool("FullRange", opts.FULL_RANGE)
+ fullrange = inputdata.attr_bool("FullRange", args.FULL_RANGE)
self.zmin = minzmin/1.7 if fullrange else max(minzmin/1.7, 2e-7*self.zmax)
if self.zmin == self.zmax:
self.zmin -= 1
self.zmax += 1
def set_zmax(self,inputdata):
self.zmax = inputdata.attr_float('ZMax')
if self.zmax is None:
zmaxs = [inputdata.histos[h].getZMax(self.xmin, self.xmax, self.ymin, self.ymax) for h in inputdata.attr('DrawOnly')]
self.zmax = max(zmaxs) if zmaxs else 1.0
def getTicks(self, inputdata, axis):
majorticks = []; majorlabels = []
ticktype = '%sCustomMajorTicks' % axis
if inputdata.attr(ticktype):
ticks = inputdata.attr(ticktype).strip().split()
if not len(ticks) % 2:
for i in range(0,len(ticks),2):
majorticks.append(ticks[i])
majorlabels.append(ticks[i+1])
minorticks = []
ticktype = '%sCustomMinorTicks' % axis
if inputdata.attr(ticktype):
- ticks = inputdata.attr(ticktype).strip().split()
+ ticks = inputdata.attr(ticktype).strip().split()
for val in ticks:
minorticks.append(val)
return (majorlabels, majorticks, minorticks)
def draw(self):
pass
def write_header(self,inputdata):
inputdata.props.setdefault('TopMargin', 0.8)
inputdata.props.setdefault('LeftMargin', 1.4)
inputdata.props.setdefault('BottomMargin', 0.75)
inputdata.props.setdefault('RightMargin', 0.35)
- if inputdata.attr('is2dim'):
+ if inputdata.attr('is2dim'):
inputdata.props['RightMargin'] += 1.8
- papersizex = inputdata.attr_float('PlotSizeX') + 0.1
+ papersizex = inputdata.attr_float('PlotSizeX') + 0.1
papersizex += inputdata.attr_float('LeftMargin') + inputdata.attr_float('RightMargin')
papersizey = inputdata.attr_float('PlotSizeY') + 0.2
papersizey += inputdata.attr_float('TopMargin') + inputdata.attr_float('BottomMargin')
for rname, _ in inputdata.ratio_names():
if inputdata.has_attr(rname+'SizeY'):
papersizey += inputdata.attr_float(rname+'SizeY')
#
out = ""
out += '\\documentclass{article}\n'
- if opts.OUTPUT_FONT == "MINION":
+ if args.OUTPUT_FONT == "MINION":
out += ('\\usepackage{minion}\n')
- elif opts.OUTPUT_FONT == "PALATINO_OSF":
+ elif args.OUTPUT_FONT == "PALATINO_OSF":
out += ('\\usepackage[osf,sc]{mathpazo}\n')
- elif opts.OUTPUT_FONT == "PALATINO":
+ elif args.OUTPUT_FONT == "PALATINO":
out += ('\\usepackage{mathpazo}\n')
- elif opts.OUTPUT_FONT == "TIMES":
+ elif args.OUTPUT_FONT == "TIMES":
out += ('\\usepackage{mathptmx}\n')
- elif opts.OUTPUT_FONT == "HELVETICA":
+ elif args.OUTPUT_FONT == "HELVETICA":
out += ('\\renewcommand{\\familydefault}{\\sfdefault}\n')
out += ('\\usepackage{sfmath}\n')
out += ('\\usepackage{helvet}\n')
out += ('\\usepackage[symbolgreek]{mathastext}\n')
- for pkg in opts.LATEXPKGS:
+ for pkg in args.LATEXPKGS:
out += ('\\usepackage{%s}\n' % pkg)
out += ('\\usepackage[dvipsnames]{xcolor}\n')
out += ('\\selectcolormodel{rgb}\n')
out += ('\\definecolor{red}{HTML}{EE3311}\n') # (Google uses 'DC3912')
out += ('\\definecolor{blue}{HTML}{3366FF}\n')
out += ('\\definecolor{green}{HTML}{109618}\n')
out += ('\\definecolor{orange}{HTML}{FF9900}\n')
out += ('\\definecolor{lilac}{HTML}{990099}\n')
out += ('\\usepackage{amsmath}\n')
out += ('\\usepackage{amssymb}\n')
out += ('\\usepackage{relsize}\n')
out += ('\\usepackage{graphicx}\n')
out += ('\\usepackage[dvips,\n')
out += (' left=%4.3fcm,\n' % (inputdata.attr_float('LeftMargin')-0.45))
out += (' right=0cm,\n')
out += (' top=%4.3fcm,\n' % (inputdata.attr_float('TopMargin')-0.70))
out += (' bottom=0cm,\n')
out += (' paperwidth=%scm,paperheight=%scm\n' % (papersizex, papersizey))
out += (']{geometry}\n')
if inputdata.has_attr('DefineColor'):
out += ('% user defined colours\n')
for color in inputdata.attr('DefineColor').split('\t'):
out += ('%s\n' % color)
col_count = 0
for obj in inputdata.histos:
for col in inputdata.histos[obj].customCols:
if col in self.customCols:
# already seen, look up name
inputdata.histos[obj].customCols[col] = self.customCols[col]
elif ']{' in col:
colname = 'MyColour%i' % col_count # assign custom name
inputdata.histos[obj].customCols[col] = colname
self.customCols[col] = colname
col_count += 1
# remove outer {...} if present
while col[0] == '{' and col[-1] == '}': col = col[1:-1]
model, specs = tuple(col[1:-1].split(']{'))
out += ('\\definecolor{%s}{%s}{%s}\n' % (colname, model, specs))
out += ('\\usepackage{pgfplots}\n')
out += ('\\usepgfplotslibrary{fillbetween}\n')
#out += ('\\usetikzlibrary{positioning,shapes.geometric,patterns}\n')
out += ('\\usetikzlibrary{patterns}\n')
out += ('\\pgfplotsset{ compat=1.16,\n')
out += (' title style={at={(0,1)},right,yshift={0.10cm}},\n')
out += ('}\n')
out += ('\\begin{document}\n')
out += ('\\pagestyle{empty}\n')
out += ('\\begin{tikzpicture}[\n')
out += (' inner sep=0,\n')
out += (' trim axis left = %4.3f,\n' % (inputdata.attr_float('LeftMargin') + 0.1))
out += (' trim axis right,\n')
out += (' baseline,')
out += (' hatch distance/.store in=\\hatchdistance,\n')
out += (' hatch distance=8pt,\n')
out += (' hatch thickness/.store in=\\hatchthickness,\n')
out += (' hatch thickness=1pt,\n')
out += (']\n')
out += ('\\makeatletter\n')
out += ('\\pgfdeclarepatternformonly[\\hatchdistance,\\hatchthickness]{diagonal hatch}\n')
out += ('{\\pgfqpoint{0pt}{0pt}}\n')
out += ('{\\pgfqpoint{\\hatchdistance}{\\hatchdistance}}\n')
out += ('{\\pgfpoint{\\hatchdistance-1pt}{\\hatchdistance-1pt}}%\n')
out += ('{\n')
out += (' \\pgfsetcolor{\\tikz@pattern@color}\n')
out += (' \\pgfsetlinewidth{\\hatchthickness}\n')
out += (' \\pgfpathmoveto{\\pgfqpoint{0pt}{0pt}}\n')
out += (' \\pgfpathlineto{\\pgfqpoint{\\hatchdistance}{\\hatchdistance}}\n')
out += (' \\pgfusepath{stroke}\n')
out += ('}\n')
out += ('\\pgfdeclarepatternformonly[\\hatchdistance,\\hatchthickness]{antidiagonal hatch}\n')
out += ('{\\pgfqpoint{0pt}{0pt}}\n')
out += ('{\\pgfqpoint{\\hatchdistance}{\\hatchdistance}}\n')
out += ('{\\pgfpoint{\\hatchdistance-1pt}{\\hatchdistance-1pt}}%\n')
out += ('{\n')
out += (' \\pgfsetcolor{\\tikz@pattern@color}\n')
out += (' \\pgfsetlinewidth{\\hatchthickness}\n')
out += (' \\pgfpathmoveto{\\pgfqpoint{0pt}{\\hatchdistance}}\n')
out += (' \\pgfpathlineto{\\pgfqpoint{\\hatchdistance}{0pt}}\n')
out += (' \\pgfusepath{stroke}\n')
out += ('}\n')
out += ('\\pgfdeclarepatternformonly[\\hatchdistance,\\hatchthickness]{cross hatch}\n')
out += ('{\\pgfqpoint{0pt}{0pt}}\n')
out += ('{\\pgfqpoint{\\hatchdistance}{\\hatchdistance}}\n')
out += ('{\\pgfpoint{\\hatchdistance-1pt}{\\hatchdistance-1pt}}%\n')
out += ('{\n')
out += (' \\pgfsetcolor{\\tikz@pattern@color}\n')
out += (' \\pgfsetlinewidth{\\hatchthickness}\n')
out += (' \\pgfpathmoveto{\\pgfqpoint{0pt}{0pt}}\n')
out += (' \\pgfpathlineto{\\pgfqpoint{\\hatchdistance}{\\hatchdistance}}\n')
out += (' \\pgfusepath{stroke}\n')
out += (' \\pgfsetcolor{\\tikz@pattern@color}\n')
out += (' \\pgfsetlinewidth{\\hatchthickness}\n')
out += (' \\pgfpathmoveto{\\pgfqpoint{0pt}{\\hatchdistance}}\n')
out += (' \\pgfpathlineto{\\pgfqpoint{\\hatchdistance}{0pt}}\n')
out += (' \\pgfusepath{stroke}\n')
out += ('}\n')
out += ('\\makeatother\n')
if inputdata.attr_bool('is2dim'):
colorseries = '{hsb}{grad}[rgb]{0,0,1}{-.700,0,0}'
if inputdata.attr('ColorSeries', ''):
colorseries = inputdata.attr('ColorSeries')
out += ('\\definecolorseries{gradientcolors}%s\n' % colorseries)
out += ('\\resetcolorseries[130]{gradientcolors}\n')
return out
def write_footer(self):
out = ""
out += ('\\end{tikzpicture}\n')
out += ('\\end{document}\n')
return out
class MainPlot(Plot):
def __init__(self, inputdata):
self.name = 'MainPlot'
inputdata.props['PlotStage'] = 'MainPlot'
self.set_normalisation(inputdata)
self.stack_histograms(inputdata)
do_gof = inputdata.props.get('GofLegend', '0') == '1' or inputdata.props.get('GofFrame', '') != ''
do_taylor = inputdata.props.get('TaylorPlot', '0') == '1'
if do_gof and not do_taylor:
self.calculate_gof(inputdata)
self.set_histo_options(inputdata)
self.set_borders(inputdata)
self.yoffset = inputdata.props['PlotSizeY']
def draw(self, inputdata):
out = ""
out += ('\n%\n% MainPlot\n%\n')
offset = 0.
for rname, i in inputdata.ratio_names():
if inputdata.has_attr(rname+'SizeY'):
offset += inputdata.attr_float(rname+'SizeY')
labels = self.getLabels(inputdata)
out += self.panel_header(
PanelOffset = offset,
Xmode = 'log' if inputdata.attr_bool('LogX') else 'normal',
Ymode = 'log' if inputdata.attr_bool('LogY') else 'normal',
Zmode = inputdata.attr_bool('LogZ'),
PanelHeight = inputdata.props['PlotSizeY'],
PanelWidth = inputdata.props['PlotSizeX'],
Xmin = self.xmin, Xmax = self.xmax,
Ymin = self.ymin, Ymax = self.ymax,
Zmin = self.zmin, Zmax = self.zmax,
Labels = { l : inputdata.attr(l) for l in labels if inputdata.has_attr(l) },
XLabelSep = inputdata.attr_float('XLabelSep', 0.7) if 'XLabel' in labels else None,
YLabelSep = inputdata.attr_float('YLabelSep', 1.2) if 'YLabel' in labels else None,
XTickShift = inputdata.attr_float('XTickShift', 0.1) if 'XLabel' in labels else None,
YTickShift = inputdata.attr_float('YTickShift', 0.1) if 'YLabel' in labels else None,
MajorTickLength = inputdata.attr_float('MajorTickLength', 0.30),
MinorTickLength = inputdata.attr_float('MinorTickLength', 0.15),
MinorTicks = { axis : inputdata.attr_int('%sMinorTickMarks' % axis, 4) for axis in ['X', 'Y', 'Z'] },
CustomTicks = { axis : self.getTicks(inputdata, axis) for axis in ['X', 'Y', 'Z'] },
NeedsXLabels = self.needsXLabel(inputdata),
#Grid = inputdata.doGrid(),
is2D = inputdata.is2dim,
is3D = inputdata.attr_bool('Is3D', 0),
HRotate = inputdata.attr_int('HRotate', 0),
VRotate = inputdata.attr_int('VRotate', 0),
ColorMap = inputdata.attr('ColorMap', 'jet'),
#LegendAlign = inputdata.getLegendAlign(),
#LegendPos = inputdata.getLegendPos(),
)
out += self.plot_object(inputdata)
out += self.panel_footer()
return out
def plot_object(self, inputdata):
out = ""
if inputdata.attr_bool('DrawSpecialFirst', False):
for s in inputdata.special.values():
out += s.draw(inputdata)
if inputdata.attr_bool('DrawFunctionFirst', False):
for f in inputdata.functions.values():
out += f.draw(inputdata, inputdata.props['Borders'][0], inputdata.props['Borders'][1])
for key in inputdata.props['DrawOnly']:
#add_legend = inputdata.attr_bool('Legend')
out += inputdata.histos[key].draw() #add_legend)
if not inputdata.attr_bool('DrawSpecialFirst', False):
for s in inputdata.special.values():
out += s.draw(inputdata)
if not inputdata.attr_bool('DrawFunctionFirst', False):
for f in inputdata.functions.values():
out += f.draw(inputdata, inputdata.props['Borders'][0], inputdata.props['Borders'][1])
for lname, i in inputdata.legend_names():
if inputdata.attr_bool(lname, False):
legend = Legend(inputdata.props,inputdata.histos,inputdata.functions, lname, i)
out += legend.draw()
return out
def needsXLabel(self, inputdata):
if inputdata.attr('PlotTickLabels') == '0':
return False
# only draw the x-axis label if there are no ratio panels
drawlabels = not any([ inputdata.attr_bool(rname) for rname, _ in inputdata.ratio_names() ])
return drawlabels
def getLabels(self, inputdata):
labels = ['Title', 'YLabel']
if self.needsXLabel(inputdata):
labels.append('XLabel')
if inputdata.props['is2dim']:
labels.append('ZLabel')
return labels
def calculate_gof(self, inputdata):
refdata = inputdata.props.get('GofReference')
if refdata is None:
refdata = inputdata.props.get('RatioPlotReference')
if refdata is None:
inputdata.props['GofLegend'] = '0'
inputdata.props['GofFrame'] = ''
return
def pickcolor(gof):
color = None
colordefs = {}
for i in inputdata.props.setdefault('GofFrameColor', '0:green 3:yellow 6:red!70').strip().split():
foo = i.split(':')
if len(foo) != 2:
continue
colordefs[float(foo[0])] = foo[1]
for col in sorted(colordefs.keys()):
if gof >= col:
color=colordefs[col]
return color
inputdata.props.setdefault('GofLegend', '0')
inputdata.props.setdefault('GofFrame', '')
inputdata.props.setdefault('FrameColor', None)
for key in inputdata.props['DrawOnly']:
if key == refdata:
continue
if inputdata.props['GofLegend'] != '1' and key != inputdata.props['GofFrame']:
continue
if inputdata.props.get('GofType', 'chi2') != 'chi2':
return
gof = inputdata.histos[key].getChi2(inputdata.histos[refdata])
if key == inputdata.props['GofFrame'] and inputdata.props['FrameColor'] is None:
inputdata.props['FrameColor'] = pickcolor(gof)
if inputdata.histos[key].props.setdefault('Title', '') != '':
inputdata.histos[key].props['Title'] += ', '
inputdata.histos[key].props['Title'] += '$\\chi^2/n=%1.2f$' % gof
class TaylorPlot(Plot):
def __init__(self, inputdata):
self.refdata = inputdata.props['TaylorPlotReference']
self.calculate_taylorcoordinates(inputdata)
def calculate_taylorcoordinates(self,inputdata):
foo = inputdata.props['DrawOnly'].pop(inputdata.props['DrawOnly'].index(self.refdata))
inputdata.props['DrawOnly'].append(foo)
for i in inputdata.props['DrawOnly']:
print(i)
print('meanbinval = ', inputdata.histos[i].getMeanBinValue())
print('sigmabinval = ', inputdata.histos[i].getSigmaBinValue())
print('chi2/nbins = ', inputdata.histos[i].getChi2(inputdata.histos[self.refdata]))
print('correlation = ', inputdata.histos[i].getCorrelation(inputdata.histos[self.refdata]))
print('distance = ', inputdata.histos[i].getRMSdistance(inputdata.histos[self.refdata]))
class RatioPlot(Plot):
def __init__(self, inputdata, i):
self.number = i
self.name='RatioPlot%s' % (str(i) if i else '')
# initialise histograms even when no main plot
self.set_normalisation(inputdata)
self.refdata = inputdata.props[self.name+'Reference']
if not inputdata.histos.has_key(self.refdata):
print('ERROR: %sReference=%s not found in:' % (self.name,self.refdata))
for i in inputdata.histos.keys():
print(' ', i)
sys.exit(1)
if not inputdata.has_attr('RatioPlotYOffset'):
inputdata.props['RatioPlotYOffset'] = inputdata.props['PlotSizeY']
if not inputdata.has_attr(self.name + 'SameStyle'):
inputdata.props[self.name+'SameStyle'] = '1'
self.yoffset = inputdata.props['RatioPlotYOffset'] + inputdata.props[self.name+'SizeY']
inputdata.props['PlotStage'] = self.name
inputdata.props['RatioPlotYOffset'] = self.yoffset
inputdata.props['PlotSizeY'] = inputdata.props[self.name+'SizeY']
inputdata.props['LogY'] = inputdata.props.get(self.name+"LogY", False)
# TODO: It'd be nice if this weren't so MC-specific
rpmode = inputdata.props.get(self.name+'Mode', "mcdata")
if rpmode=='deviation':
inputdata.props['YLabel']='$(\\text{MC}-\\text{data})$'
inputdata.props['YMin']=-2.99
inputdata.props['YMax']=2.99
elif rpmode=='delta':
inputdata.props['YLabel']='\\delta'
inputdata.props['YMin']=-0.5
inputdata.props['YMax']=0.5
elif rpmode=='deltapercent':
inputdata.props['YLabel']='\\delta\\;[\\%]'
inputdata.props['YMin']=-50.
inputdata.props['YMax']=50.
elif rpmode=='datamc':
inputdata.props['YLabel']='Data/MC'
inputdata.props['YMin']=0.5
inputdata.props['YMax']=1.5
else:
inputdata.props['YLabel'] = 'MC/Data'
inputdata.props['YMin'] = 0.5
inputdata.props['YMax'] = 1.5
if inputdata.has_attr(self.name+'YLabel'):
inputdata.props['YLabel'] = inputdata.props[self.name+'YLabel']
if inputdata.has_attr(self.name+'YMin'):
inputdata.props['YMin'] = inputdata.props[self.name+'YMin']
if inputdata.has_attr(self.name+'YMax'):
inputdata.props['YMax'] = inputdata.props[self.name+'YMax']
if inputdata.has_attr(self.name+'YLabelSep'):
inputdata.props['YLabelSep'] = inputdata.props[self.name+'YLabelSep']
if not inputdata.has_attr(self.name+'ErrorBandColor'):
inputdata.props[self.name+'ErrorBandColor'] = 'yellow'
if inputdata.props[self.name+'SameStyle']=='0':
inputdata.histos[self.refdata].props['ErrorBandColor'] = inputdata.props[self.name+'ErrorBandColor']
inputdata.histos[self.refdata].props['ErrorBandOpacity'] = inputdata.props[self.name+'ErrorBandOpacity']
inputdata.histos[self.refdata].props['ErrorBands'] = '1'
inputdata.histos[self.refdata].props['ErrorBars'] = '0'
- inputdata.histos[self.refdata].props['ErrorTubes'] = '0'
+ inputdata.histos[self.refdata].props['ErrorTubes'] = '0'
inputdata.histos[self.refdata].props['LineStyle'] = 'solid'
inputdata.histos[self.refdata].props['LineColor'] = 'black'
inputdata.histos[self.refdata].props['LineWidth'] = '0.3pt'
inputdata.histos[self.refdata].props['MarkerStyle'] = ''
inputdata.histos[self.refdata].props['ConnectGaps'] = '1'
self.calculate_ratios(inputdata)
self.set_borders(inputdata)
def draw(self, inputdata):
out = ''
out += ('\n%\n% RatioPlot\n%\n')
offset = 0.
for rname, i in inputdata.ratio_names():
if i > self.number and inputdata.has_attr(rname+'SizeY'):
offset += inputdata.attr_float(rname+'SizeY')
labels = self.getLabels(inputdata)
out += self.panel_header(
PanelOffset = offset,
Xmode = 'log' if inputdata.attr_bool('LogX') else 'normal',
Ymode = 'log' if inputdata.attr_bool('LogY') else 'normal',
Zmode = inputdata.attr_bool('LogZ'),
PanelHeight = inputdata.props['PlotSizeY'],
PanelWidth = inputdata.props['PlotSizeX'],
Xmin = self.xmin, Xmax = self.xmax,
Ymin = self.ymin, Ymax = self.ymax,
Zmin = self.zmin, Zmax = self.zmax,
Labels = { l : inputdata.attr(l) for l in labels if inputdata.has_attr(l) },
XLabelSep = inputdata.attr_float('XLabelSep', 0.7) if 'XLabel' in labels else None,
YLabelSep = inputdata.attr_float('YLabelSep', 1.2) if 'YLabel' in labels else None,
XTickShift = inputdata.attr_float('XTickShift', 0.1) if 'XLabel' in labels else None,
YTickShift = inputdata.attr_float('YTickShift', 0.1) if 'YLabel' in labels else None,
MajorTickLength = inputdata.attr_float('MajorTickLength', 0.30),
MinorTickLength = inputdata.attr_float('MinorTickLength', 0.15),
MinorTicks = { axis : self.getMinorTickMarks(inputdata, axis) for axis in ['X', 'Y', 'Z'] },
CustomTicks = { axis : self.getTicks(inputdata, axis) for axis in ['X', 'Y', 'Z'] },
NeedsXLabels = self.needsXLabel(inputdata),
#Grid = inputdata.doGrid(self.name),
is2D = inputdata.is2dim,
is3D = inputdata.attr_bool('Is3D', 0),
HRotate = inputdata.attr_int('HRotate', 0),
VRotate = inputdata.attr_int('VRotate', 0),
ColorMap = inputdata.attr('ColorMap', 'jet'),
#LegendAlign = inputdata.getLegendAlign(self.name),
#LegendPos = inputdata.getLegendPos(self.name),
)
out += self.add_object(inputdata)
for lname, i in inputdata.legend_names():
if inputdata.attr_bool(self.name + lname, False):
legend = Legend(inputdata.props,inputdata.histos,inputdata.functions, self.name + lname, i)
out += legend.draw()
out += self.panel_footer()
return out
def calculate_ratios(self, inputdata):
inputdata.ratios = {}
inputdata.ratios = copy.deepcopy(inputdata.histos)
name = inputdata.attr(self.name+'DrawOnly').pop(inputdata.attr(self.name+'DrawOnly').index(self.refdata))
reffirst = inputdata.attr(self.name+'DrawReferenceFirst') != '0'
if reffirst and inputdata.histos[self.refdata].attr_bool('ErrorBands'):
inputdata.props[self.name+'DrawOnly'].insert(0, name)
else:
inputdata.props[self.name+'DrawOnly'].append(name)
rpmode = inputdata.props.get(self.name+'Mode', 'mcdata')
for i in inputdata.props[self.name+'DrawOnly']: # + [ self.refdata ]:
if i != self.refdata:
if rpmode == 'deviation':
inputdata.ratios[i].deviation(inputdata.ratios[self.refdata])
elif rpmode == 'delta':
inputdata.ratios[i].delta(inputdata.ratios[self.refdata])
elif rpmode == 'deltapercent':
inputdata.ratios[i].deltapercent(inputdata.ratios[self.refdata])
elif rpmode == 'datamc':
inputdata.ratios[i].dividereverse(inputdata.ratios[self.refdata])
inputdata.ratios[i].props['ErrorBars'] = '1'
else:
inputdata.ratios[i].divide(inputdata.ratios[self.refdata])
if rpmode == 'deviation':
inputdata.ratios[self.refdata].deviation(inputdata.ratios[self.refdata])
elif rpmode == 'delta':
inputdata.ratios[self.refdata].delta(inputdata.ratios[self.refdata])
elif rpmode == 'deltapercent':
inputdata.ratios[self.refdata].deltapercent(inputdata.ratios[self.refdata])
elif rpmode == 'datamc':
inputdata.ratios[self.refdata].dividereverse(inputdata.ratios[self.refdata])
else:
inputdata.ratios[self.refdata].divide(inputdata.ratios[self.refdata])
def add_object(self, inputdata):
out = ""
if inputdata.attr_bool('DrawSpecialFirst', False):
for s in inputdata.special.values():
out += s.draw(inputdata)
if inputdata.attr_bool('DrawFunctionFirst'):
for i in inputdata.functions.keys():
out += inputdata.functions[i].draw(inputdata, inputdata.props['Borders'][0], inputdata.props['Borders'][1])
for key in inputdata.props[self.name+'DrawOnly']:
if inputdata.has_attr(self.name+'Mode') and inputdata.attr(self.name+'Mode') == 'datamc':
if key == self.refdata: continue
#add_legend = inputdata.attr_bool(self.name+'Legend')
out += inputdata.ratios[key].draw() #add_legend)
if not inputdata.attr_bool('DrawFunctionFirst'):
for i in inputdata.functions.keys():
out += inputdata.functions[i].draw(inputdata, inputdata.props['Borders'][0], inputdata.props['Borders'][1])
if not inputdata.attr_bool('DrawSpecialFirst', False):
for s in inputdata.special.values():
out += s.draw(inputdata)
return out
def getMinorTickMarks(self, inputdata, axis):
tag = '%sMinorTickMarks' % axis
if inputdata.has_attr(self.name + tag):
return inputdata.attr_int(self.name + tag)
return inputdata.attr_int(tag, 4)
def needsXLabel(self, inputdata):
# only plot x label if it's the last ratio panel
ratios = [ i for rname, i in inputdata.ratio_names() if inputdata.attr_bool(rname, True) and inputdata.attr(rname + 'Reference', False) ]
return ratios[-1] == self.number
def getLabels(self, inputdata):
labels = ['YLabel']
drawtitle = inputdata.has_attr('MainPlot') and not inputdata.attr_bool('MainPlot')
if drawtitle and not any([inputdata.attr_bool(rname) for rname, i in inputdata.ratio_names() if i < self.number]):
labels.append('Title')
if self.needsXLabel(inputdata):
labels.append('XLabel')
return labels
class Legend(Described):
def __init__(self, props, histos, functions, name, number):
self.name = name
self.number = number
self.histos = histos
self.functions = functions
self.props = props
def draw(self):
legendordermap = {}
legendlist = self.props['DrawOnly'] + list(self.functions.keys())
if self.name + 'Only' in self.props:
legendlist = []
for legend in self.props[self.name+'Only'].strip().split():
if legend in self.histos or legend in self.functions:
legendlist.append(legend)
for legend in legendlist:
order = 0
if legend in self.histos and 'LegendOrder' in self.histos[legend].props:
order = int(self.histos[legend].props['LegendOrder'])
if legend in self.functions and 'LegendOrder' in self.functions[legend].props:
order = int(self.functions[legend].props['LegendOrder'])
if not order in legendordermap:
legendordermap[order] = []
legendordermap[order].append(legend)
orderedlegendlist=[]
for i in sorted(legendordermap.keys()):
orderedlegendlist.extend(legendordermap[i])
if self.props['is2dim']:
return self.draw_2dlegend(orderedlegendlist)
out = ""
out += '\n%\n% Legend\n%\n'
talign = 'right' if self.getLegendAlign() == 'left' else 'left'
posalign = 'left' if talign == 'right' else 'right'
legx = float(self.getLegendXPos()); legy = float(self.getLegendYPos())
ypos = legy -0.05*6/self.props['PlotSizeY']
        if self.name+'Title' in self.props:
for i in self.props[self.name+'Title'].strip().split('\\\\'):
out += ('\\node[black, inner sep=0, align=%s, %s,\n' % (posalign, talign))
out += ('] at (rel axis cs: %4.3f,%4.3f) {%s};\n' % (legx, ypos, i))
ypos -= 0.075*6/self.props['PlotSizeY']
offset = self.attr_float(self.name+'EntryOffset', 0.)
separation = self.attr_float(self.name+'EntrySeparation', 0.)
hline = True; vline = True
        if self.name+'HorizontalLine' in self.props:
            hline = self.props[self.name+'HorizontalLine'] != '0'
        if self.name+'VerticalLine' in self.props:
            vline = self.props[self.name+'VerticalLine'] != '0'
rel_xpos_sign = 1.0
if self.getLegendAlign() == 'right':
rel_xpos_sign = -1.0
xwidth = self.getLegendIconWidth()
xpos1 = legx -0.02*rel_xpos_sign-0.08*xwidth*rel_xpos_sign
xpos2 = legx -0.02*rel_xpos_sign
xposc = legx -0.02*rel_xpos_sign-0.04*xwidth*rel_xpos_sign
xpostext = 0.1*rel_xpos_sign
for i in orderedlegendlist:
            if i in self.histos:
                drawobject = self.histos[i]
            elif i in self.functions:
                drawobject = self.functions[i]
            else:
                continue
title = drawobject.getTitle()
            mopts = pat_options.search(drawobject.path)
            if mopts and not self.props.get("RemoveOptions", 0):
                opts = list(mopts.groups())[0].lstrip(':').split(":")
                for opt in opts:
                    if opt in self.props['_OptSubs']:
                        title += ' %s' % self.props['_OptSubs'][opt]
                    else:
                        title += ' [%s]' % opt
            if title == '':
                continue
            titlelines = title.strip().split('\\\\')
ypos -= 0.075*6/self.props['PlotSizeY']*separation
boxtop = 0.045*(6./self.props['PlotSizeY'])
boxbottom = 0.
lineheight = 0.5*(boxtop-boxbottom)
xico = xpostext + xposc
xhi = xpostext + xpos2
xlo = xpostext + xpos1
yhi = ypos + lineheight
ylo = ypos - lineheight
            xleg = legx + xpostext
# options set -> lineopts
setup = ('%s,\n' % drawobject.getLineColor())
linewidth = drawobject.getLineWidth()
try:
float(linewidth)
linewidth += 'cm'
except ValueError:
pass
setup += ('draw opacity=%s,\n' % drawobject.getLineOpacity())
if drawobject.getErrorBands():
out += ('\\fill[\n')
out += (' fill=none, fill opacity=%s,\n' % drawobject.getErrorBandOpacity())
if drawobject.getPatternFill():
out += ('fill=%s,\n' % drawobject.getErrorBandFillColor())
out += ('] (rel axis cs: %4.3f, %4.3f) rectangle (rel axis cs: %4.3f, %4.3f);\n' % (xlo,ylo,xhi,yhi))
if drawobject.getPattern() != '':
out += ('\\fill[\n')
out += ('pattern = %s,\n' % drawobject.getPattern())
if drawobject.getErroBandHatchDistance() != "":
out += ('hatch distance = %s,\n' % drawobject.getErroBandHatchDistance())
if drawobject.getPatternColor() != '':
out += ('pattern color = %s,\n' % drawobject.getPatternColor())
out += ('] (rel axis cs: %4.3f, %4.3f) rectangle (rel axis cs: %4.3f, %4.3f);\n' % (xlo,ylo,xhi,yhi))
if drawobject.getFillBorder():
out += ('\\draw[line style=solid, thin, %s] (rel axis cs: %4.3f,%4.3f)' % (setup, xlo, yhi))
out += ('-- (rel axis cs: %4.3f, %4.3f);\n' % (xhi, yhi))
out += ('\\draw[line style=solid, thin, %s] (rel axis cs: %4.3f,%4.3f)' % (setup, xlo, ylo))
out += ('-- (rel axis cs: %4.3f, %4.3f);\n' % (xhi, ylo))
setup += ('line width={%s},\n' % linewidth)
setup += ('style={%s},\n' % drawobject.getLineStyle())
if drawobject.getLineDash():
setup += ('dash pattern=%s,\n' % drawobject.getLineDash())
if drawobject.getErrorBars() and vline:
out += ('\\draw[%s] (rel axis cs: %4.3f,%4.3f)' % (setup, xico, ylo))
out += (' -- (rel axis cs: %4.3f, %4.3f);\n' % (xico, yhi))
if hline:
out += ('\\draw[%s] (rel axis cs: %4.3f,%4.3f)' % (setup, xlo, ypos))
out += ('-- (rel axis cs: %4.3f, %4.3f);\n' % (xhi, ypos))
if drawobject.getMarkerStyle() != 'none':
setup += ('mark options={\n')
setup += (' %s, fill color=%s,\n' % (drawobject.getMarkerColor(), drawobject.getMarkerColor()))
setup += (' mark size={%s}, scale=%s,\n' % (drawobject.getMarkerSize(), drawobject.getMarkerScale()))
setup += ('},\n')
out += ('\\draw[mark=*, %s] plot coordinates {\n' % setup)
out += ('(rel axis cs: %4.3f,%4.3f)};\n' % (xico, ypos))
ypos -= 0.075*6/self.props['PlotSizeY']*offset
for i in titlelines:
out += ('\\node[black, inner sep=0, align=%s, %s,\n' % (posalign, talign))
out += ('] at (rel axis cs: %4.3f,%4.3f) {%s};\n' % (xleg, ypos, i))
ypos -= 0.075*6/self.props['PlotSizeY']
if 'CustomLegend' in self.props:
for i in self.props['CustomLegend'].strip().split('\\\\'):
out += ('\\node[black, inner sep=0, align=%s, %s,\n' % (posalign, talign))
out += ('] at (rel axis cs: %4.3f, %4.3f) {%s};\n' % (xleg, ypos, i))
ypos -= 0.075*6/self.props['PlotSizeY']
return out
    def draw_2dlegend(self, orderedlegendlist):
        histos = ""
        for i in range(len(orderedlegendlist)):
            if orderedlegendlist[i] in self.histos:
                drawobject = self.histos[orderedlegendlist[i]]
            elif orderedlegendlist[i] in self.functions:
                drawobject = self.functions[orderedlegendlist[i]]
            else:
                continue
            title = drawobject.getTitle()
            if title == '':
                continue
            histos += title.strip().split('\\\\')[0]
            if i != len(orderedlegendlist) - 1:
                histos += ', '
        out = ('\\node[black, inner sep=0, align=left]\n')
        out += ('at (rel axis cs: 1,1) {\\normalsize{%s}};\n' % histos)
        return out
def getLegendXPos(self):
return self.props.get(self.name+'XPos', '0.95' if self.getLegendAlign() == 'right' else '0.53')
def getLegendYPos(self):
return self.props.get(self.name+'YPos', '0.93')
def getLegendAlign(self):
la = self.props.get(self.name+'Align', 'left')
if la == 'l': return 'left'
elif la == 'c': return 'center'
elif la == 'r': return 'right'
else: return la
def getLegendIconWidth(self):
return float(self.props.get(self.name+'IconWidth', '1.0'))
class PlotFunction(object):
def __init__(self, f):
self.props = {}
self.read_input(f)
def read_input(self, f):
self.code='def histomangler(x):\n'
iscode=False
for line in f:
if is_end_marker(line, 'HISTOGRAMMANGLER'):
break
elif is_comment(line):
continue
else:
m = pat_property.match(line)
if iscode:
self.code+=' '+line
elif m:
prop, value = m.group(1,2)
if prop=='Code':
iscode=True
else:
self.props[prop] = value
if not iscode:
print('++++++++++ ERROR: No code in function')
        else:
            # compile and exec with explicit dicts so the generated
            # function is reliably retrievable on both Python 2 and 3
            ns = {}
            exec(compile(self.code, '<string>', 'exec'), globals(), ns)
            self.histomangler = ns['histomangler']
def transform(self, x):
return self.histomangler(x)
class Special(Described):
def __init__(self, f):
self.props = {}
self.data = []
self.read_input(f)
        if 'Location' not in self.props:
            self.props['Location'] = 'MainPlot'
        self.props['Location'] = self.props['Location'].split('\t')
        if 'Coordinates' not in self.props:
            self.props['Coordinates'] = 'Relative'
def read_input(self, f):
for line in f:
if is_end_marker(line, 'SPECIAL'):
break
elif is_comment(line):
continue
else:
line = line.rstrip()
m = pat_property.match(line)
if m:
prop, value = m.group(1,2)
self.props[prop] = value
else:
self.data.append(line)
def draw(self, inputdata):
drawme = False
for i in self.props['Location']:
if i in inputdata.props['PlotStage']:
drawme = True
break
if not drawme:
return ""
out = ""
out += ('\n%\n% Special\n%\n')
out += ('\\pgfplotsset{\n')
out += (' after end axis/.append code={\n')
for l in self.data:
cs = 'axis cs:'
if self.props['Coordinates'].lower() == 'relative':
cs = 'rel ' + cs
atpos = l.index('at')
cspos = l[atpos:].index('(') + 1
l = l[:atpos+cspos] + cs + l[atpos+cspos:]
out += ' %s%s\n' % (l, ';' if l[-1:] != ';' else '')
out += (' }\n}\n')
return out
class DrawableObject(Described):
def __init__(self, f):
pass
def getName(self):
return self.props.get('Name', '')
def getTitle(self):
return self.props.get('Title', '')
def getLineStyle(self):
if 'LineStyle' in self.props:
## I normally like there to be "only one way to do it", but providing
## this dashdotted/dotdashed synonym just seems humane ;-)
if self.props['LineStyle'] in ('dashdotted', 'dotdashed'):
self.props['LineStyle']='dashed'
self.props['LineDash']='3pt 3pt .8pt 3pt'
return self.props['LineStyle']
else:
return 'solid'
def getLineDash(self):
pattern = self.props.get('LineDash', '')
if pattern:
# converting this into pgfplots syntax
# "3pt 3pt .8pt 3pt" becomes
# "on 3pt off 3pt on .8pt off 3pt"
for i, val in enumerate(pattern.split(' ')):
if i == 0:
pattern = 'on %s' % val
else:
pattern += ' %s %s' % ('on' if i % 2 == 0 else 'off', val)
return pattern
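The comment above documents the `LineDash` conversion; the same transformation as a standalone function (hypothetical name, for illustration only):

```python
def to_pgf_dash(pattern):
    """Convert a space-separated dash list such as "3pt 3pt .8pt 3pt"
    into pgfplots on/off syntax: "on 3pt off 3pt on .8pt off 3pt"."""
    words = []
    for i, val in enumerate(pattern.split()):
        # even positions are drawn segments, odd positions are gaps
        words.append('%s %s' % ('on' if i % 2 == 0 else 'off', val))
    return ' '.join(words)
```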
def getLineWidth(self):
return self.props.get("LineWidth", "0.8pt")
def getColor(self, col):
if col in self.customCols:
return self.customCols[col]
return col
def getLineColor(self):
return self.getColor(self.props.get("LineColor", "black"))
def getLineOpacity(self):
return self.props.get("LineOpacity", "1.0")
def getFillOpacity(self):
return self.props.get("FillOpacity", "1.0")
def getFillBorder(self):
return self.attr_bool("FillBorder", "1")
def getHatchColor(self):
return self.getColor(self.props.get("HatchColor", self.getErrorBandColor()))
def getMarkerStyle(self):
return self.props.get("MarkerStyle", "*" if self.getErrorBars() else "none")
def getMarkerSize(self):
return self.props.get("MarkerSize", "1.5pt")
def getMarkerScale(self):
return self.props.get("MarkerScale", "1")
def getMarkerColor(self):
return self.getColor(self.props.get("MarkerColor", "black"))
def getErrorMarkStyle(self):
return self.props.get("ErrorMarkStyle", "none")
def getErrorBars(self):
return bool(int(self.props.get("ErrorBars", "0")))
def getErrorBands(self):
return bool(int(self.props.get("ErrorBands", "0")))
def getErrorBandColor(self):
return self.getColor(self.props.get("ErrorBandColor", self.getLineColor()))
def getErrorBandStyle(self):
return self.props.get("ErrorBandStyle", "solid")
def getPattern(self):
return self.props.get("ErrorBandPattern", "")
def getPatternColor(self):
return self.getColor(self.props.get("ErrorBandPatternColor", ""))
def getPatternFill(self):
return bool(int(self.props.get("ErrorBandFill", self.getPattern() == "")))
def getErrorBandFillColor(self):
return self.getColor(self.props.get("ErrorBandFillColor", self.getErrorBandColor()))
def getErroBandHatchDistance(self):
return self.props.get("ErrorBandHatchDistance", "")
def getErrorBandOpacity(self):
return self.props.get("ErrorBandOpacity", "1.0")
def removeXerrors(self):
return bool(int(self.props.get("RemoveXerrors", "0")))
def getSmoothLine(self):
return bool(int(self.props.get("SmoothLine", "0")))
def getShader(self):
return self.props.get('Shader', 'flat')
def makecurve(self, setup, metadata, data = None): #, legendName = None):
out = ''
out += ('\\addplot%s+[\n' % ('3' if self.is2dim else ''))
out += setup
out += (']\n')
out += metadata
out += ('{\n')
if data:
out += data
out += ('};\n')
#if legendName:
# out += ('\\addlegendentry[\n')
# out += (' image style={yellow, mark=*},\n')
# out += (']{%s};\n' % legendName)
return out
def addcurve(self, points): #, legendName):
        setup = ''
#setup += ('color=%s,\n' % self.getLineColor())
setup += ('%s,\n' % self.getLineColor())
linewidth = self.getLineWidth()
try:
float(linewidth)
linewidth += 'cm'
except ValueError:
pass
setup += ('line width={%s},\n' % linewidth)
setup += ('style={%s},\n' % self.getLineStyle())
setup += ('mark=%s,\n' % self.getMarkerStyle())
if self.getLineDash():
setup += ('dash pattern=%s,\n' % self.getLineDash())
if self.getSmoothLine():
setup += ('smooth,\n')
#if not legendName:
# setup += ('forget plot,\n')
if self.getMarkerStyle() != 'none':
setup += ('mark options={\n')
setup += (' %s, fill color=%s,\n' % (self.getMarkerColor(), self.getMarkerColor()))
setup += (' mark size={%s}, scale=%s,\n' % (self.getMarkerSize(), self.getMarkerScale()))
setup += ('},\n')
if self.getErrorBars():
setup += ('only marks,\n')
setup += ('error bars/.cd,\n')
setup += ('x dir=both,x explicit,\n')
setup += ('y dir=both,y explicit,\n')
setup += ('error bar style={%s, line width={%s}},\n' % (self.getLineColor(), linewidth))
setup += ('error mark=%s,\n' % self.getErrorMarkStyle())
if self.getErrorMarkStyle() != 'none':
setup += ('error mark options={line width=%s},\n' % linewidth)
metadata = 'coordinates\n'
if self.getErrorBars():
metadata = 'table [x error plus=ex+, x error minus=ex-, y error plus=ey+, y error minus=ey-]\n'
return self.makecurve(setup, metadata, points) #, legendName)
def makeband(self, points):
setup = ''
setup += ('%s,\n' % self.getErrorBandColor())
setup += ('fill opacity=%s,\n' % self.getErrorBandOpacity())
setup += ('forget plot,\n')
setup += ('solid,\n')
setup += ('draw = %s,\n' % self.getErrorBandColor())
if self.getPatternFill():
setup += ('fill=%s,\n' % self.getErrorBandFillColor())
if self.getPattern() != '':
if self.getPatternFill():
setup += ('postaction={\n')
setup += ('pattern = %s,\n' % self.getPattern())
if self.getErroBandHatchDistance() != "":
setup += ('hatch distance = %s,\n' % self.getErroBandHatchDistance())
if self.getPatternColor() != '':
setup += ('pattern color = %s,\n' % self.getPatternColor())
if self.getPatternFill():
setup += ('fill opacity=1,\n')
setup += ('},\n')
aux = 'draw=none, no markers, forget plot, name path=%s\n'
env = 'table [x=x, y=%s]\n'
out = ''
out += self.makecurve(aux % 'pluserr', env % 'y+', points)
out += self.makecurve(aux % 'minuserr', env % 'y-', points)
out += self.makecurve(setup, 'fill between [of=pluserr and minuserr]\n')
return out
def make2dee(self, points, zlog, zmin, zmax):
setup = 'mark=none, surf, shader=%s,\n' % self.getShader()
metadata = 'table [x index=0, y index=1, z index=2]\n'
setup += 'restrict z to domain=%s:%s,\n' % (log10(zmin) if zlog else zmin, log10(zmax) if zlog else zmax)
if zlog:
metadata = 'table [x index=0, y index=1, z expr=log10(\\thisrowno{2})]\n'
return self.makecurve(setup, metadata, points)
class Function(DrawableObject):
def __init__(self, f):
self.props = {}
self.read_input(f)
        if 'Location' not in self.props:
            self.props['Location'] = 'MainPlot'
        self.props['Location'] = self.props['Location'].split('\t')
def read_input(self, f):
self.code='def plotfunction(x):\n'
iscode=False
for line in f:
if is_end_marker(line, 'FUNCTION'):
break
elif is_comment(line):
continue
else:
m = pat_property.match(line)
if iscode:
self.code+=' '+line
elif m:
prop, value = m.group(1,2)
if prop=='Code':
iscode=True
else:
self.props[prop] = value
if not iscode:
print('++++++++++ ERROR: No code in function')
        else:
            # compile and exec with explicit dicts so the generated
            # function is reliably retrievable on both Python 2 and 3
            ns = {}
            exec(compile(self.code, '<string>', 'exec'), globals(), ns)
            self.plotfunction = ns['plotfunction']
def draw(self, inputdata, xmin, xmax):
drawme = False
for key in self.attr('Location'):
if key in inputdata.attr('PlotStage'):
drawme = True
break
if not drawme:
return ''
if self.has_attr('XMin') and self.attr('XMin'):
xmin = self.attr_float('XMin')
        if self.has_attr('FunctionXMin') and self.attr('FunctionXMin'):
xmin = max(xmin, self.attr_float('FunctionXMin'))
if self.has_attr('XMax') and self.attr('XMax'):
xmax = self.attr_float('XMax')
if self.has_attr('FunctionXMax') and self.attr('FunctionXMax'):
xmax = min(xmax, self.attr_float('FunctionXMax'))
        xmin, xmax = min(xmin, xmax), max(xmin, xmax)  # ensure xmin <= xmax
# TODO: Space sample points logarithmically if LogX=1
points = ''
xsteps = 500.
if self.has_attr('XSteps') and self.attr('XSteps'):
xsteps = self.attr_float('XSteps')
dx = (xmax - xmin) / xsteps
x = xmin - dx
while x < (xmax+2*dx):
y = self.plotfunction(x)
points += ('(%s,%s)\n' % (x, y))
x += dx
        setup = ''
setup += ('%s,\n' % self.getLineColor())
linewidth = self.getLineWidth()
try:
float(linewidth)
linewidth += 'cm'
except ValueError:
pass
setup += ('line width={%s},\n' % linewidth)
setup += ('style={%s},\n' % self.getLineStyle())
setup += ('smooth, mark=none,\n')
if self.getLineDash():
setup += ('dash pattern=%s,\n' % self.getLineDash())
metadata = 'coordinates\n'
return self.makecurve(setup, metadata, points)
class BinData(object):
"""\
Store bin edge and value+error(s) data for a 1D or 2D bin.
TODO: generalise/alias the attr names to avoid mention of x and y
"""
def __init__(self, low, high, val, err):
#print("@", low, high, val, err)
self.low = floatify(low)
self.high = floatify(high)
self.val = float(val)
self.err = floatpair(err)
@property
def is2D(self):
return hasattr(self.low, "__len__") and hasattr(self.high, "__len__")
@property
def isValid(self):
invalid_val = (isnan(self.val) or isnan(self.err[0]) or isnan(self.err[1]))
if invalid_val:
return False
if self.is2D:
invalid_low = any(isnan(x) for x in self.low)
invalid_high = any(isnan(x) for x in self.high)
else:
invalid_low, invalid_high = isnan(self.low), isnan(self.high)
return not (invalid_low or invalid_high)
@property
def xmin(self):
return self.low
@xmin.setter
def xmin(self,x):
self.low = x
@property
def xmax(self):
return self.high
@xmax.setter
def xmax(self,x):
self.high = x
@property
def xmid(self):
# TODO: Generalise to 2D
return 0.5 * (self.xmin + self.xmax)
@property
def xwidth(self):
# TODO: Generalise to 2D
assert self.xmin <= self.xmax
return self.xmax - self.xmin
@property
def y(self):
return self.val
@y.setter
def y(self, x):
self.val = x
@property
def ey(self):
return self.err
@ey.setter
def ey(self, x):
self.err = x
@property
def ymin(self):
return self.y - self.ey[0]
@property
def ymax(self):
return self.y + self.ey[1]
def __getitem__(self, key):
"dict-like access for backward compatibility"
        if key == "LowEdge":
return self.xmin
        elif key in ("UpEdge", "HighEdge"):
return self.xmax
elif key == "Content":
return self.y
elif key == "Errors":
return self.ey
class Histogram(DrawableObject, Described):
def __init__(self, f, p=None):
self.props = {}
self.customCols = {}
self.is2dim = False
self.zlog = False
self.data = []
self.read_input_data(f)
self.sigmabinvalue = None
self.meanbinvalue = None
self.path = p
def read_input_data(self, f):
for line in f:
if is_end_marker(line, 'HISTOGRAM'):
break
elif is_comment(line):
continue
else:
line = line.rstrip()
m = pat_property.match(line)
if m:
prop, value = m.group(1,2)
self.props[prop] = value
if 'Color' in prop and '{' in value:
self.customCols[value] = value
else:
## Detect symm errs
linearray = line.split()
if len(linearray) == 4:
self.data.append(BinData(*linearray))
## Detect asymm errs
elif len(linearray) == 5:
self.data.append(BinData(linearray[0], linearray[1], linearray[2], [linearray[3],linearray[4]]))
## Detect two-dimensionality
elif len(linearray) in [6,7]:
self.is2dim = True
# If asymm z error, use the max or average of +- error
err = float(linearray[5])
if len(linearray) == 7:
                            if self.attr_bool("ShowMaxZErr", True):
err = max(err, float(linearray[6]))
else:
err = 0.5 * (err + float(linearray[6]))
self.data.append(BinData([linearray[0], linearray[2]], [linearray[1], linearray[3]], linearray[4], err))
## Unknown histo format
else:
raise RuntimeError("Unknown HISTOGRAM data line format with %d entries" % len(linearray))
def mangle_input(self):
norm2int = self.attr_bool("NormalizeToIntegral", False)
norm2sum = self.attr_bool("NormalizeToSum", False)
if norm2int or norm2sum:
if norm2int and norm2sum:
print("Cannot normalise to integral and to sum at the same time. Will normalise to the integral.")
foo = 0.0
for point in self.data:
if norm2int: foo += point.val*point.xwidth
else: foo += point.val
if foo != 0:
for point in self.data:
point.val /= foo
point.err[0] /= foo
point.err[1] /= foo
scale = self.attr_float('Scale', 1.0)
if scale != 1.0:
# TODO: change to "in self.data"?
for point in self.data:
point.val *= scale
point.err[0] *= scale
point.err[1] *= scale
if self.attr_float("ScaleError", 0.0):
scale = self.attr_float("ScaleError")
for point in self.data:
point.err[0] *= scale
point.err[1] *= scale
if self.attr_float('Shift', 0.0):
shift = self.attr_float("Shift")
for point in self.data:
point.val += shift
if self.has_attr('Rebin') and self.attr('Rebin') != '':
rawrebins = self.attr('Rebin').strip().split('\t')
rebins = []
maxindex = len(self.data)-1
if len(rawrebins) % 2 == 1:
rebins.append({'Start': self.data[0].xmin,
'Rebin': int(rawrebins[0])})
rawrebins.pop(0)
for i in range(0,len(rawrebins),2):
if float(rawrebins[i])<self.data[maxindex].xmin:
rebins.append({'Start': float(rawrebins[i]),
'Rebin': int(rawrebins[i+1])})
if (rebins[0]['Start'] > self.data[0].xmin):
rebins.insert(0,{'Start': self.data[0].xmin,
'Rebin': 1})
errortype = self.attr("ErrorType", "stat")
newdata = []
lower = self.getBin(rebins[0]['Start'])
for k in range(0,len(rebins),1):
rebin = rebins[k]['Rebin']
upper = maxindex
end = 1
if (k<len(rebins)-1):
upper = self.getBin(rebins[k+1]['Start'])
end = 0
for i in range(lower,(upper/rebin)*rebin+end,rebin):
foo=0.
barl=0.
baru=0.
for j in range(rebin):
if i+j>maxindex:
break
binwidth = self.data[i+j].xwidth
foo += self.data[i+j].val * binwidth
if errortype=="stat":
barl += (binwidth * self.data[i+j].err[0])**2
baru += (binwidth * self.data[i+j].err[1])**2
elif errortype == "env":
barl += self.data[i+j].ymin * binwidth
baru += self.data[i+j].ymax * binwidth
else:
logging.error("Rebinning for ErrorType not implemented.")
sys.exit(1)
upedge = min(i+rebin-1,maxindex)
newbinwidth=self.data[upedge].xmax-self.data[i].xmin
newcentral=foo/newbinwidth
if errortype=="stat":
newerror=[sqrt(barl)/newbinwidth,sqrt(baru)/newbinwidth]
elif errortype=="env":
newerror=[(foo-barl)/newbinwidth,(baru-foo)/newbinwidth]
newdata.append(BinData(self.data[i].xmin, self.data[i+rebin-1].xmax, newcentral, newerror))
                lower = (upper // rebin) * rebin + (upper % rebin)
self.data=newdata
def add(self, name):
if len(self.data) != len(name.data):
print('+++ Error in Histogram.add() for %s: different numbers of bins' % self.path)
for i, b in enumerate(self.data):
if fuzzyeq(b.xmin, name.data[i].xmin) and fuzzyeq(b.xmax, name.data[i].xmax):
b.val += name.data[i].val
b.err[0] = sqrt(b.err[0]**2 + name.data[i].err[0]**2)
b.err[1] = sqrt(b.err[1]**2 + name.data[i].err[1]**2)
else:
print('+++ Error in Histogram.add() for %s: binning of histograms differs' % self.path)
def divide(self, name):
#print(name.path, self.path)
if len(self.data) != len(name.data):
print('+++ Error in Histogram.divide() for %s: different numbers of bins' % self.path)
for i,b in enumerate(self.data):
if fuzzyeq(b.xmin, name.data[i].xmin) and fuzzyeq(b.xmax, name.data[i].xmax):
try:
b.err[0] /= name.data[i].val
except ZeroDivisionError:
b.err[0] = 0.
try:
b.err[1] /= name.data[i].val
except ZeroDivisionError:
b.err[1] = 0.
try:
b.val /= name.data[i].val
except ZeroDivisionError:
b.val = 1.
# b.err[0] = sqrt(b.err[0]**2 + name.data[i].err[0]**2)
# b.err[1] = sqrt(b.err[1]**2 + name.data[i].err[1]**2)
else:
print('+++ Error in Histogram.divide() for %s: binning of histograms differs' % self.path)
def dividereverse(self, name):
if len(self.data) != len(name.data):
print('+++ Error in Histogram.dividereverse() for %s: different numbers of bins' % self.path)
for i, b in enumerate(self.data):
if fuzzyeq(b.xmin, name.data[i].xmin) and fuzzyeq(b.xmax, name.data[i].xmax):
try:
b.err[0] = name.data[i].err[0]/b.val
except ZeroDivisionError:
b.err[0] = 0.
try:
b.err[1] = name.data[i].err[1]/b.val
except ZeroDivisionError:
b.err[1] = 0.
try:
b.val = name.data[i].val/b.val
except ZeroDivisionError:
b.val = 1.
else:
                print('+++ Error in Histogram.dividereverse() for %s: binning of histograms differs' % self.path)
def deviation(self, name):
if len(self.data) != len(name.data):
print('+++ Error in Histogram.deviation() for %s: different numbers of bins' % self.path)
for i, b in enumerate(self.data):
if fuzzyeq(b.xmin, name.data[i].xmin) and fuzzyeq(b.xmax, name.data[i].xmax):
b.val -= name.data[i].val
try:
b.val /= 0.5*sqrt((name.data[i].err[0] + name.data[i].err[1])**2 + (b.err[0] + b.err[1])**2)
except ZeroDivisionError:
b.val = 0.0
try:
b.err[0] /= name.data[i].err[0]
except ZeroDivisionError:
b.err[0] = 0.0
try:
b.err[1] /= name.data[i].err[1]
except ZeroDivisionError:
b.err[1] = 0.0
else:
print('+++ Error in Histogram.deviation() for %s: binning of histograms differs' % self.path)
def delta(self,name):
self.divide(name)
for point in self.data:
point.val -= 1.
def deltapercent(self,name):
self.delta(name)
for point in self.data:
point.val *= 100.
point.err[0] *= 100.
point.err[1] *= 100.
def getBin(self,x):
        if x < self.data[0].xmin or x > self.data[-1].xmax:
print('+++ Error in Histogram.getBin(): x out of range')
return float('nan')
        for i in range(1, len(self.data)):  # include the last bin in the scan
if x<self.data[i].xmin:
return i-1
return len(self.data)-1
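The linear scan in getBin can equivalently be written with the standard-library bisect module; a sketch over a plain list of ascending bin edges (hypothetical `find_bin` helper; it returns None out of range where getBin returns NaN):

```python
import bisect

def find_bin(edges, x):
    """Index of the bin containing x, for contiguous bins described by
    their lower edges plus the final upper edge."""
    if x < edges[0] or x > edges[-1]:
        return None
    # insertion point minus one is the bin index; clamp x == last edge
    return min(bisect.bisect_right(edges, x) - 1, len(edges) - 2)

bins = [0.0, 1.0, 2.0, 4.0]  # three bins: [0,1), [1,2), [2,4]
```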
def getChi2(self, name):
chi2 = 0.
for i, b in enumerate(self.data):
if fuzzyeq(b.xmin, name.data[i].xmin) and fuzzyeq(b.xmax, name.data[i].xmax):
try:
denom = 0.25*(b.err[0] + b.err[1])**2 + 0.25*(name.data[i].err[0] + name.data[i].err[1])**2
chi2 += (self.data[i].val - name.data[i].val) ** 2 / denom
except ZeroDivisionError:
pass
else:
print('+++ Error in Histogram.getChi2() for %s: binning of histograms differs' % self.path)
return chi2 / len(self.data)
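getChi2 above averages a per-bin chi2 whose denominator symmetrises the (down, up) errors of both histograms as ((err_down + err_up)/2)^2. The same computation on plain lists (a sketch with a hypothetical name, not the class API):

```python
def reduced_chi2(vals_a, errs_a, vals_b, errs_b):
    """Average chi2 between two binned value sets; errs_* are
    (down, up) pairs, symmetrised as in getChi2 above."""
    chi2 = 0.0
    for va, ea, vb, eb in zip(vals_a, errs_a, vals_b, errs_b):
        denom = 0.25 * (ea[0] + ea[1]) ** 2 + 0.25 * (eb[0] + eb[1]) ** 2
        if denom:  # skip bins with vanishing combined error
            chi2 += (va - vb) ** 2 / denom
    return chi2 / len(vals_a)
```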
def getSigmaBinValue(self):
if self.sigmabinvalue is None:
self.sigmabinvalue = 0.
sumofweights = 0.
for point in self.data:
if self.is2dim:
binwidth = abs((point.xmax[0] - point.xmin[0])*(point.xmax[1] - point.xmin[1]))
else:
binwidth = abs(point.xmax - point.xmin)
self.sigmabinvalue += binwidth*(point.val - self.getMeanBinValue()) ** 2
sumofweights += binwidth
self.sigmabinvalue = sqrt(self.sigmabinvalue / sumofweights)
return self.sigmabinvalue
def getMeanBinValue(self):
        if self.meanbinvalue is None:
self.meanbinvalue = 0.
sumofweights = 0.
for point in self.data:
if self.is2dim:
binwidth = abs((point.xmax[0] - point.xmin[0])*(point.xmax[1] - point.xmin[1]))
else:
binwidth = abs(point.xmax - point.xmin)
self.meanbinvalue += binwidth*point.val
sumofweights += binwidth
self.meanbinvalue /= sumofweights
return self.meanbinvalue
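getSigmaBinValue and getMeanBinValue above compute a bin-width-weighted standard deviation and mean of the bin contents; on plain (width, value) lists this reduces to (hypothetical helper name, for illustration):

```python
from math import sqrt

def weighted_mean_sigma(widths, values):
    """Bin-width-weighted mean and standard deviation of bin values."""
    wsum = float(sum(widths))
    mean = sum(w * v for w, v in zip(widths, values)) / wsum
    var = sum(w * (v - mean) ** 2 for w, v in zip(widths, values)) / wsum
    return mean, sqrt(var)

m, s = weighted_mean_sigma([1.0, 1.0], [1.0, 3.0])
```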
def getCorrelation(self, name):
correlation = 0.
sumofweights = 0.
for i, b in enumerate(self.data):
if fuzzyeq(b.xmin, name.data[i].xmin) and fuzzyeq(b.xmax, name.data[i].xmax):
if self.is2dim:
binwidth = abs( (b.xmax[0] - b.xmin[0]) * (b.xmax[1] - b.xmin[1]) )
else:
binwidth = abs(b.xmax - b.xmin)
correlation += binwidth * (b.val - self.getMeanBinValue()) * (name.data[i].val - name.getMeanBinValue())
sumofweights += binwidth
else:
                print('+++ Error in Histogram.getCorrelation() for %s: binning of histograms differs' % self.path)
correlation /= sumofweights
try:
correlation /= self.getSigmaBinValue()*name.getSigmaBinValue()
except ZeroDivisionError:
correlation = 0
return correlation
def getRMSdistance(self,name):
distance = 0.
sumofweights = 0.
for i, b in enumerate(self.data):
if fuzzyeq(b.xmin, name.data[i].xmin) and fuzzyeq(b.xmax, name.data[i].xmax):
if self.is2dim:
binwidth = abs( (b.xmax[0] - b.xmin[0]) * (b.xmax[1] - b.xmin[1]) )
else:
binwidth = abs(b.xmax - b.xmin)
distance += binwidth * ( (b.val - self.getMeanBinValue()) - (name.data[i].val - name.getMeanBinValue())) ** 2
sumofweights += binwidth
else:
print('+++ Error in Histogram.getRMSdistance() for %s: binning of histograms differs' % self.path)
distance = sqrt(distance/sumofweights)
return distance
def draw(self): #, addLegend = None):
out = ''
seen_nan = False
if any(b.isValid for b in self.data):
out += "% START DATA\n"
#legendName = self.getTitle() if addLegend else ''
if self.is2dim:
points = ''; rowlo = ''; rowhi = ''
thisx = None; thisy = None
xinit = True; yinit = None
zmin = min([ b.val for b in self.data if b.val ])
zmax = max([ b.val for b in self.data if b.val ])
for b in self.data:
                if thisx is None or thisx != b.high[0]:
                    if thisx is not None:
points += '%s\n%s\n' % (rowlo, rowhi)
rowlo = ''; rowhi = ''
thisx = b.high[0]
rowlo += ('%s %s %s\n' % (1.001 * b.low[0], 1.001 * b.low[1], b.val))
rowlo += ('%s %s %s\n' % (1.001 * b.low[0], 0.999 * b.high[1], b.val))
rowhi += ('%s %s %s\n' % (0.999 * b.high[0], 1.001 * b.low[1], b.val))
rowhi += ('%s %s %s\n' % (0.999 * b.high[0], 0.999 * b.high[1], b.val))
points += '%s\n%s\n' % (rowlo, rowhi)
points += '\n'
out += self.make2dee(points, self.zlog, zmin, zmax)
else:
if self.getErrorBands():
self.props['SmoothLine'] = 0
points = ('x y- y+\n')
for b in self.data:
if isnan(b.val) or isnan(b.err[0]) or isnan(b.err[1]):
seen_nan = True
continue
points += ('%s %s %s\n' % (b.xmin, b.val - b.err[0], b.val + b.err[1]))
points += ('%s %s %s\n' % (b.xmax, b.val - b.err[0], b.val + b.err[1]))
out += self.makeband(points)
points = ''
if self.getErrorBars():
self.props['SmoothLine'] = 0
points += ('x y ex- ex+ ey- ey+\n')
for b in self.data:
if isnan(b.val):
seen_nan = True
continue
if self.getErrorBars():
if isnan(b.err[0]) or isnan(b.err[1]):
seen_nan = True
continue
if b.val == 0. and b.err == [0.,0.]:
continue
points += ('%s %s ' % (b.xmid, b.val))
if self.removeXerrors():
points += ('0 0 ')
else:
points += ('%s %s ' % (b.xmid - b.xmin, b.xmax - b.xmid))
points += ('%s %s' % (b.err[0], b.err[1]))
elif self.getSmoothLine():
points += ('(%s,%s)' % (b.xmid, b.val))
else:
points += ('(%s,%s) (%s,%s)' % (b.xmin, b.val, b.xmax, b.val))
points += ('\n')
out += self.addcurve(points) #, legendName)
out += "% END DATA\n"
else:
print("WARNING: No valid bin value/errors/edges to plot!")
out += "% NO DATA!\n"
if seen_nan:
            print("WARNING: NaN-valued value or error bar!")
return out
#def addLegend(self):
# out = ''
# if self.getTitle():
# out += ('\\addlegendentry{%s}\n' % self.getTitle())
# return out
def getXMin(self):
if not self.data:
return 0
elif self.is2dim:
return min(b.low[0] for b in self.data)
else:
return min(b.xmin for b in self.data)
def getXMax(self):
if not self.data:
return 1
elif self.is2dim:
return max(b.high[0] for b in self.data)
else:
return max(b.xmax for b in self.data)
def getYMin(self, xmin, xmax, logy):
if not self.data:
return 0
elif self.is2dim:
return min(b.low[1] for b in self.data)
else:
yvalues = []
for b in self.data:
if (b.xmax > xmin or b.xmin >= xmin) and (b.xmin < xmax or b.xmax <= xmax):
foo = b.val
if self.getErrorBars() or self.getErrorBands():
foo -= b.err[0]
if not isnan(foo) and (not logy or foo > 0):
yvalues.append(foo)
return min(yvalues) if yvalues else self.data[0].val
def getYMax(self, xmin, xmax):
if not self.data:
return 1
elif self.is2dim:
return max(b.high[1] for b in self.data)
else:
yvalues = []
for b in self.data:
if (b.xmax > xmin or b.xmin >= xmin) and (b.xmin < xmax or b.xmax <= xmax):
foo = b.val
if self.getErrorBars() or self.getErrorBands():
foo += b.err[1]
if not isnan(foo): # and (not logy or foo > 0):
yvalues.append(foo)
return max(yvalues) if yvalues else self.data[0].val
def getZMin(self, xmin, xmax, ymin, ymax):
if not self.is2dim:
return 0
zvalues = []
for b in self.data:
if (b.xmax[0] > xmin and b.xmin[0] < xmax) and (b.xmax[1] > ymin and b.xmin[1] < ymax):
zvalues.append(b.val)
return min(zvalues)
def getZMax(self, xmin, xmax, ymin, ymax):
if not self.is2dim:
return 0
zvalues = []
for b in self.data:
if (b.xmax[0] > xmin and b.xmin[0] < xmax) and (b.xmax[1] > ymin and b.xmin[1] < ymax):
zvalues.append(b.val)
return max(zvalues)
class Value(Histogram):
def read_input_data(self, f):
for line in f:
if is_end_marker(line, 'VALUE'):
break
elif is_comment(line):
continue
else:
line = line.rstrip()
m = pat_property.match(line)
if m:
prop, value = m.group(1,2)
self.props[prop] = value
else:
linearray = line.split()
if len(linearray) == 3:
self.data.append(BinData(0.0, 1.0, linearray[0], [ linearray[1], linearray[2] ])) # dummy x-values
else:
raise Exception('Value does not have the expected number of columns. ' + line)
# TODO: specialise draw() here
class Counter(Histogram):
def read_input_data(self, f):
for line in f:
if is_end_marker(line, 'COUNTER'):
break
elif is_comment(line):
continue
else:
line = line.rstrip()
m = pat_property.match(line)
if m:
prop, value = m.group(1,2)
self.props[prop] = value
else:
linearray = line.split()
if len(linearray) == 2:
self.data.append(BinData(0.0, 1.0, linearray[0], [ linearray[1], linearray[1] ])) # dummy x-values
else:
raise Exception('Counter does not have the expected number of columns. ' + line)
# TODO: specialise draw() here
class Histo1D(Histogram):
def read_input_data(self, f):
for line in f:
if is_end_marker(line, 'HISTO1D'):
break
elif is_comment(line):
continue
else:
line = line.rstrip()
m = pat_property.match(line)
if m:
prop, value = m.group(1,2)
self.props[prop] = value
if 'Color' in prop and '{' in value:
self.customCols[value] = value
else:
linearray = line.split()
## Detect symm errs
# TODO: Not sure what the 8-param version is for... auto-compatibility with YODA format?
if len(linearray) in [4,8]:
self.data.append(BinData(linearray[0], linearray[1], linearray[2], linearray[3]))
## Detect asymm errs
elif len(linearray) == 5:
self.data.append(BinData(linearray[0], linearray[1], linearray[2], [linearray[3],linearray[4]]))
else:
raise Exception('Histo1D does not have the expected number of columns. ' + line)
# TODO: specialise draw() here
class Histo2D(Histogram):
def read_input_data(self, f):
self.is2dim = True #< Should really be done in a constructor, but this is easier for now...
for line in f:
if is_end_marker(line, 'HISTO2D'):
break
elif is_comment(line):
continue
else:
line = line.rstrip()
m = pat_property.match(line)
if m:
prop, value = m.group(1,2)
self.props[prop] = value
if 'Color' in prop and '{' in value:
self.customCols[value] = value
else:
linearray = line.split()
if len(linearray) in [6,7]:
# If asymm z error, use the max or average of +- error
err = float(linearray[5])
if len(linearray) == 7:
if self.props.get("ShowMaxZErr", 1):
err = max(err, float(linearray[6]))
else:
err = 0.5 * (err + float(linearray[6]))
self.data.append(BinData([linearray[0], linearray[2]], [linearray[1], linearray[3]], float(linearray[4]), err))
else:
raise Exception('Histo2D does not have the expected number of columns. '+line)
# TODO: specialise draw() here
####################
def try_cmd(args):
"Run the given command + args and return True/False if it succeeds or not"
import subprocess
try:
subprocess.check_output(args, stderr=subprocess.STDOUT)
return True
except Exception:
return False
def have_cmd(cmd):
return try_cmd(["which", cmd])
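As a side note, the `try_cmd`/`have_cmd` helpers above shell out to `which`; the same PATH lookup is available in the standard library via `shutil.which`, which avoids spawning a subprocess. A minimal sketch (the `have_cmd_stdlib` name is illustrative, not from this script):

```python
import shutil

def have_cmd_stdlib(cmd):
    "Return True if `cmd` is found on the current PATH."
    return shutil.which(cmd) is not None
```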
import shutil, subprocess
def process_datfile(datfile):
global opts
if not os.access(datfile, os.R_OK):
raise Exception("Could not read data file '%s'" % datfile)
datpath = os.path.abspath(datfile)
datfile = os.path.basename(datpath)
datdir = os.path.dirname(datpath)
- outdir = opts.OUTPUT_DIR if opts.OUTPUT_DIR else datdir
+ outdir = args.OUTPUT_DIR if args.OUTPUT_DIR else datdir
filename = datfile.replace('.dat','')
## Create a temporary directory
# cwd = os.getcwd()
tempdir = tempfile.mkdtemp('.make-plots')
tempdatpath = os.path.join(tempdir, datfile)
shutil.copy(datpath, tempdir)
- if opts.NO_CLEANUP:
+ if args.NO_CLEANUP:
logging.info('Keeping temp-files in %s' % tempdir)
## Make TeX file
inputdata = InputData(datpath)
if inputdata.attr_bool('IgnorePlot', False):
return
texpath = os.path.join(tempdir, '%s.tex' % filename)
texfile = open(texpath, 'w')
p = Plot()
texfile.write(p.write_header(inputdata))
if inputdata.attr_bool("MainPlot", True):
mp = MainPlot(inputdata)
texfile.write(mp.draw(inputdata))
if not inputdata.attr_bool("is2dim", False):
for rname, i in inputdata.ratio_names():
if inputdata.attr_bool(rname, True) and inputdata.attr(rname + 'Reference', False):
rp = RatioPlot(inputdata, i)
texfile.write(rp.draw(inputdata))
#for s in inputdata.special.values():
# texfile.write(p.write_special(inputdata))
texfile.write(p.write_footer())
texfile.close()
- if opts.OUTPUT_FORMAT != ["TEX"]:
+ if args.OUTPUT_FORMAT != ["TEX"]:
## Check for the required programs
latexavailable = have_cmd("latex")
dvipsavailable = have_cmd("dvips")
convertavailable = have_cmd("convert")
ps2pnmavailable = have_cmd("ps2pnm")
pnm2pngavailable = have_cmd("pnm2png")
# TODO: It'd be nice to be able to control the size of the PNG between thumb and full-size...
# currently defaults (and is used below) to a size suitable for thumbnails
def mkpngcmd(infile, outfile, outsize=450, density=300):
if convertavailable:
pngcmd = ["convert",
"-flatten",
"-density", str(density),
infile,
"-quality", "100",
"-resize", "{size:d}x{size:d}>".format(size=outsize),
#"-sharpen", "0x1.0",
outfile]
#logging.debug(" ".join(pngcmd))
#pngproc = subprocess.Popen(pngcmd, stdout=subprocess.PIPE, cwd=tempdir)
#pngproc.wait()
return pngcmd
else:
raise Exception("Required PNG maker program (convert) not found")
# elif ps2pnmavailable and pnm2pngavailable:
# pstopnm = "pstopnm -stdout -xsize=461 -ysize=422 -xborder=0.01 -yborder=0.01 -portrait " + infile
# p1 = subprocess.Popen(pstopnm.split(), stdout=subprocess.PIPE, stderr=open("/dev/null", "w"), cwd=tempdir)
# p2 = subprocess.Popen(["pnmtopng"], stdin=p1.stdout, stdout=open("%s/%s.png" % (tempdir, outfile), "w"), stderr=open("/dev/null", "w"), cwd=tempdir)
# p2.wait()
# else:
# raise Exception("Required PNG maker programs (convert, or ps2pnm and pnm2png) not found")
## Run LaTeX (in no-stop mode)
logging.debug(os.listdir(tempdir))
texcmd = ["latex", r"\scrollmode\input", texpath]
logging.debug("TeX command: " + " ".join(texcmd))
texproc = subprocess.Popen(texcmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, cwd=tempdir)
logging.debug(texproc.communicate()[0])
logging.debug(os.listdir(tempdir))
## Run dvips
dvcmd = ["dvips", filename]
if not logging.getLogger().isEnabledFor(logging.DEBUG):
dvcmd.append("-q")
## Handle Minion Font
- if opts.OUTPUT_FONT == "MINION":
+ if args.OUTPUT_FONT == "MINION":
dvcmd.append('-Pminion')
## Choose format
# TODO: Rationalise... this is a mess! Maybe we can use tex2pix?
- if "PS" in opts.OUTPUT_FORMAT:
+ if "PS" in args.OUTPUT_FORMAT:
dvcmd += ["-o", "%s.ps" % filename]
logging.debug(" ".join(dvcmd))
dvproc = subprocess.Popen(dvcmd, stdout=subprocess.PIPE, cwd=tempdir)
dvproc.wait()
- if "PDF" in opts.OUTPUT_FORMAT:
+ if "PDF" in args.OUTPUT_FORMAT:
dvcmd.append("-f")
logging.debug(" ".join(dvcmd))
dvproc = subprocess.Popen(dvcmd, stdout=subprocess.PIPE, cwd=tempdir)
cnvproc = subprocess.Popen(["ps2pdf", "-"], stdin=dvproc.stdout, stdout=subprocess.PIPE, cwd=tempdir)
f = open(os.path.join(tempdir, "%s.pdf" % filename), "wb")
f.write(cnvproc.communicate()[0])
f.close()
- if "EPS" in opts.OUTPUT_FORMAT:
+ if "EPS" in args.OUTPUT_FORMAT:
dvcmd.append("-f")
logging.debug(" ".join(dvcmd))
dvproc = subprocess.Popen(dvcmd, stdout=subprocess.PIPE, cwd=tempdir)
cnvproc = subprocess.Popen(["ps2eps"], stdin=dvproc.stdout, stderr=subprocess.PIPE, stdout=subprocess.PIPE, cwd=tempdir)
f = open(os.path.join(tempdir, "%s.eps" % filename), "wb")
f.write(cnvproc.communicate()[0])
f.close()
- if "PNG" in opts.OUTPUT_FORMAT:
+ if "PNG" in args.OUTPUT_FORMAT:
dvcmd.append("-f")
logging.debug(" ".join(dvcmd))
dvproc = subprocess.Popen(dvcmd, stdout=subprocess.PIPE, cwd=tempdir)
#pngcmd = ["convert", "-flatten", "-density", "110", "-", "-quality", "100", "-sharpen", "0x1.0", "%s.png" % filename]
pngcmd = mkpngcmd("-", "%s.png" % filename)
logging.debug(" ".join(pngcmd))
pngproc = subprocess.Popen(pngcmd, stdin=dvproc.stdout, stdout=subprocess.PIPE, cwd=tempdir)
pngproc.wait()
logging.debug(os.listdir(tempdir))
## Copy results back to main dir
- for fmt in opts.OUTPUT_FORMAT:
+ for fmt in args.OUTPUT_FORMAT:
outname = "%s.%s" % (filename, fmt.lower())
outpath = os.path.join(tempdir, outname)
if os.path.exists(outpath):
shutil.copy(outpath, outdir)
else:
logging.error("No output file '%s' from processing %s" % (outname, datfile))
## Clean up
- if not opts.NO_CLEANUP:
+ if not args.NO_CLEANUP:
shutil.rmtree(tempdir, ignore_errors=True)
####################
if __name__ == '__main__':
## Try to rename the process on Linux
try:
import ctypes
libc = ctypes.cdll.LoadLibrary('libc.so.6')
libc.prctl(15, 'make-plots', 0, 0, 0)
except Exception:
pass
## Try to use Psyco optimiser
try:
import psyco
psyco.full()
except ImportError:
pass
## Find number of (virtual) processing units
import multiprocessing
try:
numcores = multiprocessing.cpu_count()
except Exception:
numcores = 1
## Parse command line options
- from optparse import OptionParser, OptionGroup
- parser = OptionParser(usage=__doc__)
- parser.add_option("-j", "-n", "--num-threads", dest="NUM_THREADS", type="int",
- default=numcores, help="max number of threads to be used [%s]" % numcores)
- parser.add_option("-o", "--outdir", dest="OUTPUT_DIR", default=None,
- help="choose the output directory (default = .dat dir)")
- parser.add_option("--font", dest="OUTPUT_FONT", choices="palatino,cm,times,helvetica,minion".split(","),
- default="palatino", help="choose the font to be used in the plots")
- parser.add_option("-f", "--format", dest="OUTPUT_FORMAT", default="PDF",
- help="choose plot format, perhaps multiple comma-separated formats e.g. 'pdf' or 'tex,pdf,png' (default = PDF).")
- parser.add_option("--no-cleanup", dest="NO_CLEANUP", action="store_true", default=False,
- help="keep temporary directory and print its filename.")
- parser.add_option("--no-subproc", dest="NO_SUBPROC", action="store_true", default=False,
- help="don't use subprocesses to render the plots in parallel -- useful for debugging.")
- parser.add_option("--full-range", dest="FULL_RANGE", action="store_true", default=False,
- help="plot full y range in log-y plots.")
- parser.add_option("-c", "--config", dest="CONFIGFILES", action="append", default=None,
- help="plot config file to be used. Overrides internal config blocks.")
- verbgroup = OptionGroup(parser, "Verbosity control")
- verbgroup.add_option("-v", "--verbose", action="store_const", const=logging.DEBUG, dest="LOGLEVEL",
- default=logging.INFO, help="print debug (very verbose) messages")
- verbgroup.add_option("-q", "--quiet", action="store_const", const=logging.WARNING, dest="LOGLEVEL",
- default=logging.INFO, help="be very quiet")
- parser.add_option_group(verbgroup)
- opts, args = parser.parse_args()
+ import argparse
+ parser = argparse.ArgumentParser(usage=__doc__)
+ parser.add_argument("DATFILES", nargs="+", help=".dat files to plot")
+ parser.add_argument("-j", "-n", "--num-threads", dest="NUM_THREADS", type=int,
+ default=numcores, help="max number of threads to be used [%s]" % numcores)
+ parser.add_argument("-o", "--outdir", dest="OUTPUT_DIR", default=None,
+ help="choose the output directory (default = .dat dir)")
+ parser.add_argument("--font", dest="OUTPUT_FONT", choices="palatino,cm,times,helvetica,minion".split(","),
+ default="palatino", help="choose the font to be used in the plots")
+ parser.add_argument("-f", "--format", dest="OUTPUT_FORMAT", default="PDF",
+ help="choose plot format, perhaps multiple comma-separated formats e.g. 'pdf' or 'tex,pdf,png' (default = PDF).")
+ parser.add_argument("--no-cleanup", dest="NO_CLEANUP", action="store_true", default=False,
+ help="keep temporary directory and print its filename.")
+ parser.add_argument("--no-subproc", dest="NO_SUBPROC", action="store_true", default=False,
+ help="don't use subprocesses to render the plots in parallel -- useful for debugging.")
+ parser.add_argument("--full-range", dest="FULL_RANGE", action="store_true", default=False,
+ help="plot full y range in log-y plots.")
+ parser.add_argument("-c", "--config", dest="CONFIGFILES", action="append", default=None,
+ help="plot config file to be used. Overrides internal config blocks.")
+ verbgroup = parser.add_argument_group("Verbosity control")
+ verbgroup.add_argument("-v", "--verbose", action="store_const", const=logging.DEBUG, dest="LOGLEVEL",
+ default=logging.INFO, help="print debug (very verbose) messages")
+ verbgroup.add_argument("-q", "--quiet", action="store_const", const=logging.WARNING, dest="LOGLEVEL",
+ default=logging.INFO, help="be very quiet")
+ args = parser.parse_args()
## Tweak the parsed arguments
- logging.basicConfig(level=opts.LOGLEVEL, format="%(message)s")
- opts.OUTPUT_FONT = opts.OUTPUT_FONT.upper()
- opts.OUTPUT_FORMAT = opts.OUTPUT_FORMAT.upper().split(",")
- if opts.NUM_THREADS == 1:
- opts.NO_SUBPROC = True
+ logging.basicConfig(level=args.LOGLEVEL, format="%(message)s")
+ args.OUTPUT_FONT = args.OUTPUT_FONT.upper()
+ args.OUTPUT_FORMAT = args.OUTPUT_FORMAT.upper().split(",")
+ if args.NUM_THREADS == 1:
+ args.NO_SUBPROC = True
## Check for no args
- if len(args) == 0:
+ if len(args.DATFILES) == 0:
logging.error(parser.format_usage())
sys.exit(2)
## Check that the files exist
- for f in args:
+ for f in args.DATFILES:
if not os.access(f, os.R_OK):
print("Error: cannot read from %s" % f)
sys.exit(1)
## Test for external programs (kpsewhich, latex, dvips, ps2pdf/ps2eps, and convert)
- opts.LATEXPKGS = []
- if opts.OUTPUT_FORMAT != ["TEX"]:
+ args.LATEXPKGS = []
+ if args.OUTPUT_FORMAT != ["TEX"]:
try:
## latex
if not have_cmd("latex"):
logging.error("ERROR: required program 'latex' could not be found. Exiting...")
sys.exit(1)
## dvips
if not have_cmd("dvips"):
logging.error("ERROR: required program 'dvips' could not be found. Exiting...")
sys.exit(1)
## ps2pdf / ps2eps
- if "PDF" in opts.OUTPUT_FORMAT:
+ if "PDF" in args.OUTPUT_FORMAT:
if not have_cmd("ps2pdf"):
logging.error("ERROR: required program 'ps2pdf' (for PDF output) could not be found. Exiting...")
sys.exit(1)
- elif "EPS" in opts.OUTPUT_FORMAT:
+ elif "EPS" in args.OUTPUT_FORMAT:
if not have_cmd("ps2eps"):
logging.error("ERROR: required program 'ps2eps' (for EPS output) could not be found. Exiting...")
sys.exit(1)
## PNG output converter
- if "PNG" in opts.OUTPUT_FORMAT:
+ if "PNG" in args.OUTPUT_FORMAT:
if not have_cmd("convert"):
logging.error("ERROR: required program 'convert' (for PNG output) could not be found. Exiting...")
sys.exit(1)
## kpsewhich: required for LaTeX package testing
if not have_cmd("kpsewhich"):
logging.warning("WARNING: required program 'kpsewhich' (for LaTeX package checks) could not be found")
else:
## Check minion font
- if opts.OUTPUT_FONT == "MINION":
+ if args.OUTPUT_FONT == "MINION":
p = subprocess.Popen(["kpsewhich", "minion.sty"], stdout=subprocess.PIPE)
p.wait()
if p.returncode != 0:
logging.warning('Warning: Using "--minion" requires minion.sty to be installed. Ignoring it.')
- opts.OUTPUT_FONT = "PALATINO"
+ args.OUTPUT_FONT = "PALATINO"
## Check for HEP LaTeX packages
# TODO: remove HEP-specifics/non-standards?
for pkg in ["hepnames", "hepunits", "underscore"]:
p = subprocess.Popen(["kpsewhich", "%s.sty" % pkg], stdout=subprocess.PIPE)
p.wait()
if p.returncode == 0:
- opts.LATEXPKGS.append(pkg)
+ args.LATEXPKGS.append(pkg)
## Check for Palatino old style figures and small caps
- if opts.OUTPUT_FONT == "PALATINO":
+ if args.OUTPUT_FONT == "PALATINO":
p = subprocess.Popen(["kpsewhich", "ot1pplx.fd"], stdout=subprocess.PIPE)
p.wait()
if p.returncode == 0:
- opts.OUTPUT_FONT = "PALATINO_OSF"
+ args.OUTPUT_FONT = "PALATINO_OSF"
except Exception as e:
logging.warning("Problem while testing for external programs (%s). I'm going to try and continue without testing, but don't hold your breath..." % e)
def init_worker():
import signal
signal.signal(signal.SIGINT, signal.SIG_IGN)
## Run rendering jobs
- datfiles = args
+ datfiles = args.DATFILES
plotword = "plots" if len(datfiles) > 1 else "plot"
logging.info("Making %d %s" % (len(datfiles), plotword))
- if opts.NO_SUBPROC:
+ if args.NO_SUBPROC:
init_worker()
for i, df in enumerate(datfiles):
logging.info("Plotting %s (%d/%d remaining)" % (df, len(datfiles)-i, len(datfiles)))
process_datfile(df)
else:
- pool = multiprocessing.Pool(opts.NUM_THREADS, init_worker)
+ pool = multiprocessing.Pool(args.NUM_THREADS, init_worker)
try:
for i, _ in enumerate(pool.imap(process_datfile, datfiles)):
logging.info("Plotting %s (%d/%d remaining)" % (datfiles[i], len(datfiles)-i, len(datfiles)))
pool.close()
except KeyboardInterrupt:
print("Caught KeyboardInterrupt, terminating workers")
pool.terminate()
except ValueError as e:
print(e)
print("Perhaps your .dat file is corrupt?")
pool.terminate()
pool.join()
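The bulk of this diff replaces optparse with argparse. A minimal, self-contained sketch of the migration pattern applied above (argument values are examples only): the positional .dat files become an explicit `DATFILES` argument, the option `dest` names are kept, and `parse_args()` returns a single namespace (`args`) instead of optparse's `(opts, args)` pair.

```python
import argparse

parser = argparse.ArgumentParser()
# Positional files, previously the leftover "args" list from optparse:
parser.add_argument("DATFILES", nargs="+", help=".dat files to plot")
parser.add_argument("-f", "--format", dest="OUTPUT_FORMAT", default="PDF")
args = parser.parse_args(["a.dat", "b.dat", "-f", "tex,pdf"])
# Post-processing as done in the script: uppercase and split the format list.
args.OUTPUT_FORMAT = args.OUTPUT_FORMAT.upper().split(",")
```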
diff --git a/bin/make-plots-fast b/bin/make-plots-fast
--- a/bin/make-plots-fast
+++ b/bin/make-plots-fast
@@ -1,2852 +1,2852 @@
#! /usr/bin/env python
"""\
-Usage: %prog [options] file.dat [file2.dat ...]
+%(prog)s [options] file.dat [file2.dat ...]
TODO
* Optimise output for e.g. lots of same-height bins in a row
* Add a RatioFullRange directive to show the full range of error bars + MC envelope in the ratio
* Tidy LaTeX-writing code -- faster to compile one doc only, then split it?
* Handle boolean values flexibly (yes, no, true, false, etc. as well as 1, 0)
"""
from __future__ import print_function
##
## This program is copyright by Hendrik Hoeth <hoeth@linta.de> and
## the Rivet team https://rivet.hepforge.org. It may be used
## for scientific and private purposes. Patches are welcome, but please don't
## redistribute changed versions yourself.
##
## Check the Python version
import sys
if sys.version_info[:3] < (2,6,0):
print("make-plots requires Python version >= 2.6.0... exiting")
sys.exit(1)
## Try to rename the process on Linux
try:
import ctypes
libc = ctypes.cdll.LoadLibrary('libc.so.6')
libc.prctl(15, 'make-plots', 0, 0, 0)
except Exception as e:
pass
import os, logging, re
import tempfile
import getopt
import string
from math import *
## Regex patterns
pat_begin_block = re.compile(r'^#+\s*BEGIN ([A-Z0-9_]+) ?(\S+)?')
pat_end_block = re.compile(r'^#+\s*END ([A-Z0-9_]+)')
pat_comment = re.compile(r'^#|^\s*$')
pat_property = re.compile(r'^(\w+?)=(.*)$')
pat_path_property = re.compile(r'^(\S+?)::(\w+?)=(.*)$')
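For illustration, the two property patterns above match plain `key=value` lines inside PLOT blocks and path-scoped `pathregex::key=value` lines in config files (the example keys and paths below are invented):

```python
import re

pat_property = re.compile(r'^(\w+?)=(.*)$')
pat_path_property = re.compile(r'^(\S+?)::(\w+?)=(.*)$')

# A plain property line, as found in a PLOT block:
m = pat_property.match("Title=Dijet mass")
prop, value = m.group(1, 2)
# A path-scoped property line, as found in a config file:
pm = pat_path_property.match("/ATLAS.*::LineColor=red")
regex, pprop, pvalue = pm.group(1, 2, 3)
```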
def fuzzyeq(a, b, tolerance=1e-6):
"Fuzzy equality comparison function for floats, with given fractional tolerance"
# if type(a) is not float or type(a) is not float:
# print(a, b)
if (a == 0 and abs(b) < 1e-12) or (b == 0 and abs(a) < 1e-12):
return True
return 2.0*abs(a-b)/abs(a+b) < tolerance
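The fractional comparison in `fuzzyeq` behaves much like the standard library's `math.isclose` with `rel_tol` (a rough analogy, not an exact equivalence: `fuzzyeq` tests `2|a-b|/|a+b| < tol` against the mean magnitude, `isclose` tests `|a-b| <= rel_tol*max(|a|,|b|)`, and `fuzzyeq` additionally special-cases values near zero):

```python
from math import isclose

# For nearby same-sign values the two criteria agree up to a factor
# of about 2 in the tolerance.
a, b = 1.0, 1.0 + 5e-7
near = isclose(a, b, rel_tol=1e-6)   # comparable to fuzzyeq(a, b, 1e-6)
far = isclose(1.0, 1.1, rel_tol=1e-6)
```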
def inrange(x, a, b):
return x >= a and x < b
def floatify(x):
if type(x) is str:
x = x.split()
if not hasattr(x, "__len__"):
x = [x]
x = [float(a) for a in x]
return x[0] if len(x) == 1 else x
def floatpair(x):
if type(x) is str:
x = x.split()
if hasattr(x, "__len__"):
assert len(x) == 2
return [float(a) for a in x]
return [float(x), float(x)]
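The two coercion helpers above accept either scalars or whitespace-separated strings: `floatify` returns a single float for one value and a list otherwise, while `floatpair` always yields a two-element list, duplicating a lone scalar. Self-contained copies with example behaviour:

```python
def floatify(x):
    if type(x) is str:
        x = x.split()
    if not hasattr(x, "__len__"):
        x = [x]
    x = [float(a) for a in x]
    return x[0] if len(x) == 1 else x

def floatpair(x):
    if type(x) is str:
        x = x.split()
    if hasattr(x, "__len__"):
        assert len(x) == 2
        return [float(a) for a in x]
    return [float(x), float(x)]
```

For example, `floatify("1 2")` gives `[1.0, 2.0]` while `floatpair(2)` gives `[2.0, 2.0]`.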
def is_end_marker(line, blockname):
m = pat_end_block.match(line)
return m and m.group(1) == blockname
def is_comment(line):
return pat_comment.match(line) is not None
class Described(object):
"Inherited functionality for objects holding a 'description' dictionary"
def __init__(self):
pass
def has_attr(self, key):
return key in self.description
def set_attr(self, key, val):
self.description[key] = val
def attr(self, key, default=None):
return self.description.get(key, default)
def attr_bool(self, key, default=None):
x = self.attr(key, default)
if x is None: return None
if str(x).lower() in ["1", "true", "yes", "on"]: return True
if str(x).lower() in ["0", "false", "no", "off"]: return False
return None
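The boolean convention used by `attr_bool` can be sketched standalone: `1`/`true`/`yes`/`on` and `0`/`false`/`no`/`off` are accepted case-insensitively, and anything else maps to `None` (the `parse_plot_bool` name is illustrative, not from the script):

```python
def parse_plot_bool(x):
    "Map a .dat-file boolean string to True/False, or None if unrecognised."
    if x is None:
        return None
    s = str(x).lower()
    if s in ("1", "true", "yes", "on"):
        return True
    if s in ("0", "false", "no", "off"):
        return False
    return None
```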
def attr_int(self, key, default=None):
x = self.attr(key, default)
try:
x = int(x)
except (TypeError, ValueError):
x = None
return x
def attr_float(self, key, default=None):
x = self.attr(key, default)
try:
x = float(x)
except (TypeError, ValueError):
x = None
return x
class InputData(Described):
def __init__(self, filename):
self.filename = filename
if not self.filename.endswith(".dat"):
self.filename += ".dat"
self.histos = {}
self.special = {}
self.functions = {}
self.description = {}
self.pathdescriptions = []
self.is2dim = False
f = open(self.filename)
for line in f:
m = pat_begin_block.match(line)
if m:
name, path = m.group(1,2)
if path is None and name != 'PLOT':
raise Exception('BEGIN sections need a path name.')
## Pass the reading of the block to separate functions
if name == 'PLOT':
self.read_input(f)
elif name == 'SPECIAL':
self.special[path] = Special(f)
elif name == 'HISTOGRAM' or name == 'HISTOGRAM2D':
self.histos[path] = Histogram(f, p=path)
# self.histos[path].path = path
self.description['is2dim'] = self.histos[path].is2dim
elif name == 'HISTO1D':
self.histos[path] = Histo1D(f, p=path)
elif name == 'HISTO2D':
self.histos[path] = Histo2D(f, p=path)
self.description['is2dim'] = True
elif name == 'COUNTER':
self.histos[path] = Counter(f, p=path)
elif name == 'VALUE':
self.histos[path] = Value(f, p=path)
elif name == 'FUNCTION':
self.functions[path] = Function(f)
# elif is_comment(line):
# continue
# else:
# self.read_path_based_input(line)
f.close()
- self.apply_config_files(opts.CONFIGFILES)
+ self.apply_config_files(args.CONFIGFILES)
## Plot (and subplot) sizing
# TODO: Use attr functions and bools properly
self.description.setdefault('PlotSizeX', 10.)
if self.description['is2dim']:
self.description['PlotSizeX'] -= 1.7
self.description['MainPlot'] = '1'
self.description['RatioPlot'] = '0'
if self.description.get('PlotSize', '') != '':
plotsizes = self.description['PlotSize'].split(',')
self.description['PlotSizeX'] = float(plotsizes[0])
self.description['PlotSizeY'] = float(plotsizes[1])
if len(plotsizes) == 3:
self.description['RatioPlotSizeY'] = float(plotsizes[2])
del self.description['PlotSize']
if self.description.get('MainPlot', '1') == '0':
## Ratio, no main
self.description['RatioPlot'] = '1' #< don't allow both to be zero!
self.description['PlotSizeY'] = 0.
self.description.setdefault('RatioPlotSizeY', 9.)
else:
if self.description.get('RatioPlot', '0') == '1':
## Main and ratio
self.description.setdefault('PlotSizeY', 6.)
self.description.setdefault('RatioPlotSizeY', self.description.get('RatioPlotYSize', 3.))
else:
## Main, no ratio
self.description.setdefault('PlotSizeY', self.description.get('PlotYSize', 9.))
self.description['RatioPlotSizeY'] = 0.
## Ensure numbers, not strings
self.description['PlotSizeX'] = float(self.description['PlotSizeX'])
self.description['PlotSizeY'] = float(self.description['PlotSizeY'])
self.description['RatioPlotSizeY'] = float(self.description['RatioPlotSizeY'])
# self.description['TopMargin'] = float(self.description['TopMargin'])
# self.description['BottomMargin'] = float(self.description['BottomMargin'])
self.description['LogX'] = str(self.description.get('LogX', 0)) in ["1", "yes", "true"]
self.description['LogY'] = str(self.description.get('LogY', 0)) in ["1", "yes", "true"]
self.description['LogZ'] = str(self.description.get('LogZ', 0)) in ["1", "yes", "true"]
if 'Rebin' in self.description:
for i in self.histos:
self.histos[i].description['Rebin'] = self.description['Rebin']
histoordermap = {}
histolist = list(self.histos.keys())
if 'DrawOnly' in self.description:
histolist = list(filter(list(self.histos.keys()).count, self.description['DrawOnly'].strip().split()))
for histo in histolist:
order = 0
if 'PlotOrder' in self.histos[histo].description:
order = int(self.histos[histo].description['PlotOrder'])
if order not in histoordermap:
histoordermap[order] = []
histoordermap[order].append(histo)
sortedhistolist = []
for i in sorted(histoordermap.keys()):
sortedhistolist.extend(histoordermap[i])
self.description['DrawOnly'] = sortedhistolist
## Inherit various values from histograms if not explicitly set
for k in ['LogX', 'LogY', 'LogZ',
'XLabel', 'YLabel', 'ZLabel',
'XCustomMajorTicks', 'YCustomMajorTicks', 'ZCustomMajorTicks']:
self.inherit_from_histos(k)
return
@property
def is2dim(self):
return self.attr_bool("is2dim", False)
@is2dim.setter
def is2dim(self, val):
self.set_attr("is2dim", val)
@property
def drawonly(self):
x = self.attr("DrawOnly")
if type(x) is str:
self.drawonly = x #< use setter to listify
return x if x else []
@drawonly.setter
def drawonly(self, val):
if type(val) is str:
val = val.strip().split()
self.set_attr("DrawOnly", val)
@property
def stacklist(self):
x = self.attr("Stack")
if type(x) is str:
self.stacklist = x #< use setter to listify
return x if x else []
@stacklist.setter
def stacklist(self, val):
if type(val) is str:
val = val.strip().split()
self.set_attr("Stack", val)
@property
def plotorder(self):
x = self.attr("PlotOrder")
if type(x) is str:
self.plotorder = x #< use setter to listify
return x if x else []
@plotorder.setter
def plotorder(self, val):
if type(val) is str:
val = val.strip().split()
self.set_attr("PlotOrder", val)
@property
def plotsizex(self):
return self.attr_float("PlotSizeX")
@plotsizex.setter
def plotsizex(self, val):
self.set_attr("PlotSizeX", val)
@property
def plotsizey(self):
return self.attr_float("PlotSizeY")
@plotsizey.setter
def plotsizey(self, val):
self.set_attr("PlotSizeY", val)
@property
def plotsize(self):
return [self.plotsizex, self.plotsizey]
@plotsize.setter
def plotsize(self, val):
if type(val) is str:
val = [float(x) for x in val.split(",")]
assert len(val) == 2
self.plotsizex = val[0]
self.plotsizey = val[1]
@property
def ratiosizey(self):
return self.attr_float("RatioPlotSizeY")
@ratiosizey.setter
def ratiosizey(self, val):
self.set_attr("RatioPlotSizeY", val)
@property
def scale(self):
return self.attr_float("Scale")
@scale.setter
def scale(self, val):
self.set_attr("Scale", val)
@property
def xmin(self):
return self.attr_float("XMin")
@xmin.setter
def xmin(self, val):
self.set_attr("XMin", val)
@property
def xmax(self):
return self.attr_float("XMax")
@xmax.setter
def xmax(self, val):
self.set_attr("XMax", val)
@property
def xrange(self):
return [self.xmin, self.xmax]
@xrange.setter
def xrange(self, val):
if type(val) is str:
val = [float(x) for x in val.split(",")]
assert len(val) == 2
self.xmin = val[0]
self.xmax = val[1]
@property
def ymin(self):
return self.attr_float("YMin")
@ymin.setter
def ymin(self, val):
self.set_attr("YMin", val)
@property
def ymax(self):
return self.attr_float("YMax")
@ymax.setter
def ymax(self, val):
self.set_attr("YMax", val)
@property
def yrange(self):
return [self.ymin, self.ymax]
@yrange.setter
def yrange(self, val):
if type(val) is str:
val = [float(y) for y in val.split(",")]
assert len(val) == 2
self.ymin = val[0]
self.ymax = val[1]
# TODO: add more rw properties for plotsize(x,y), ratiosize(y),
# show_mainplot, show_ratioplot, show_legend, log(x,y,z), rebin,
# drawonly, legendonly, plotorder, stack,
# label(x,y,z), majorticks(x,y,z), minorticks(x,y,z),
# min(x,y,z), max(x,y,z), range(x,y,z)
def inherit_from_histos(self, k):
"""Note: this will inherit the key from a random histogram:
only use if you're sure all histograms have this key!"""
if k not in self.description:
h = list(self.histos.values())[0]
if k in h.description:
self.description[k] = h.description[k]
def read_input(self, f):
for line in f:
if is_end_marker(line, 'PLOT'):
break
elif is_comment(line):
continue
m = pat_property.match(line)
if m:
prop, value = m.group(1,2)
if prop in self.description:
logging.debug("Overwriting property %s = %s -> %s" % (prop, self.description[prop], value))
## Use strip here to deal with DOS newlines containing \r
self.description[prop.strip()] = value.strip()
def apply_config_files(self, conffiles):
if conffiles is not None:
for filename in conffiles:
cf = open(filename,'r')
lines = cf.readlines()
for i in range(0, len(lines)):
## First evaluate PLOT sections
m = pat_begin_block.match(lines[i])
if m and m.group(1) == 'PLOT' and re.match(m.group(2),self.filename):
while i<len(lines)-1:
i = i+1
if is_end_marker(lines[i], 'PLOT'):
break
elif is_comment(lines[i]):
continue
m = pat_property.match(lines[i])
if m:
prop, value = m.group(1,2)
if prop in self.description:
logging.debug("Overwriting from conffile property %s = %s -> %s" % (prop, self.description[prop], value))
## Use strip here to deal with DOS newlines containing \r
self.description[prop.strip()] = value.strip()
elif is_comment(lines[i]):
continue
else:
## Then evaluate path-based settings, e.g. for HISTOGRAMs
m = pat_path_property.match(lines[i])
if m:
regex, prop, value = m.group(1,2,3)
for obj_dict in [self.special, self.histos, self.functions]:
for path, obj in obj_dict.items():
if re.match(regex, path):
## Use strip here to deal with DOS newlines containing \r
obj.description.update({prop.strip() : value.strip()})
cf.close()
class Plot(object):
def __init__(self, inputdata):
pass
def set_normalization(self,inputdata):
for method in ['NormalizeToIntegral', 'NormalizeToSum']:
if method in inputdata.description:
for i in inputdata.drawonly:
if not inputdata.histos[i].has_attr(method):
inputdata.histos[i].set_attr(method, inputdata.attr(method))
if inputdata.scale:
for i in inputdata.drawonly:
inputdata.histos[i].scale = inputdata.scale
for i in inputdata.drawonly:
inputdata.histos[i].mangle_input()
def stack_histograms(self,inputdata):
if 'Stack' in inputdata.description:
stackhists = [h for h in inputdata.attr('Stack').strip().split() if h in inputdata.histos]
previous = ''
for i in stackhists:
if previous != '':
inputdata.histos[i].add(inputdata.histos[previous])
previous = i
def set_histo_options(self,inputdata):
if 'ConnectGaps' in inputdata.description:
for i in inputdata.histos.keys():
if 'ConnectGaps' not in inputdata.histos[i].description:
inputdata.histos[i].description['ConnectGaps'] = inputdata.description['ConnectGaps']
# Counter and Value only have dummy x-axis, ticks wouldn't make sense here, so suppress them:
if any(isinstance(h, (Value, Counter)) for h in inputdata.histos.values()):
inputdata.description['XCustomMajorTicks'] = ''
inputdata.description['XCustomMinorTicks'] = ''
def set_borders(self, inputdata):
self.set_xmax(inputdata)
self.set_xmin(inputdata)
self.set_ymax(inputdata)
self.set_ymin(inputdata)
self.set_zmax(inputdata)
self.set_zmin(inputdata)
inputdata.description['Borders'] = (self.xmin, self.xmax, self.ymin, self.ymax, self.zmin, self.zmax)
def set_xmin(self, inputdata):
self.xmin = inputdata.xmin
if self.xmin is None:
xmins = [inputdata.histos[h].getXMin() for h in inputdata.description['DrawOnly']]
self.xmin = min(xmins) if xmins else 0.0
def set_xmax(self,inputdata):
self.xmax = inputdata.xmax
if self.xmax is None:
xmaxs = [inputdata.histos[h].getXMax() for h in inputdata.description['DrawOnly']]
self.xmax = max(xmaxs) if xmaxs else 1.0
def set_ymin(self,inputdata):
if inputdata.ymin is not None:
self.ymin = inputdata.ymin
else:
ymins = [inputdata.histos[i].getYMin(self.xmin, self.xmax, inputdata.description['LogY']) for i in inputdata.attr('DrawOnly')]
minymin = min(ymins) if ymins else 0.0
if inputdata.description['is2dim']:
self.ymin = minymin
else:
showzero = inputdata.attr_bool("ShowZero", True)
if showzero:
self.ymin = 0. if minymin > -1e-4 else 1.1*minymin
else:
self.ymin = 1.1*minymin if minymin < -1e-4 else 0 if minymin < 1e-4 else 0.9*minymin
if inputdata.description['LogY']:
ymins = [ymin for ymin in ymins if ymin > 0.0]
if not ymins:
if self.ymax == 0:
self.ymax = 1
ymins.append(2e-7*self.ymax)
minymin = min(ymins)
fullrange = args.FULL_RANGE
if inputdata.has_attr('FullRange'):
fullrange = inputdata.attr_bool('FullRange')
self.ymin = minymin/1.7 if fullrange else max(minymin/1.7, 2e-7*self.ymax)
if self.ymin == self.ymax:
self.ymin -= 1
self.ymax += 1
def set_ymax(self,inputdata):
if inputdata.has_attr('YMax'):
self.ymax = inputdata.attr_float('YMax')
else:
ymaxs = [inputdata.histos[h].getYMax(self.xmin, self.xmax) for h in inputdata.attr('DrawOnly')]
self.ymax = max(ymaxs) if ymaxs else 1.0
if not inputdata.is2dim:
self.ymax *= (1.7 if inputdata.attr_bool('LogY') else 1.1)
def set_zmin(self,inputdata):
if inputdata.has_attr('ZMin'):
self.zmin = inputdata.attr_float('ZMin')
else:
zmins = [inputdata.histos[i].getZMin(self.xmin, self.xmax, self.ymin, self.ymax) for i in inputdata.attr('DrawOnly')]
minzmin = min(zmins) if zmins else 0.0
self.zmin = minzmin
if zmins:
showzero = inputdata.attr_bool('ShowZero', True)
if showzero:
self.zmin = 0 if minzmin > -1e-4 else 1.1*minzmin
else:
self.zmin = 1.1*minzmin if minzmin < -1e-4 else 0. if minzmin < 1e-4 else 0.9*minzmin
if inputdata.attr_bool('LogZ', False):
zmins = [zmin for zmin in zmins if zmin > 0]
if not zmins:
if self.zmax == 0:
self.zmax = 1
zmins.append(2e-7*self.zmax)
minzmin = min(zmins)
fullrange = inputdata.attr_bool("FullRange", args.FULL_RANGE)
self.zmin = minzmin/1.7 if fullrange else max(minzmin/1.7, 2e-7*self.zmax)
if self.zmin == self.zmax:
self.zmin -= 1
self.zmax += 1
def set_zmax(self,inputdata):
self.zmax = inputdata.attr_float('ZMax')
if self.zmax is None:
zmaxs = [inputdata.histos[h].getZMax(self.xmin, self.xmax, self.ymin, self.ymax) for h in inputdata.attr('DrawOnly')]
self.zmax = max(zmaxs) if zmaxs else 1.0
def draw(self):
pass
def write_header(self,inputdata):
out = '\\begin{multipage}\n'
out += '\\begin{pspicture}(0,0)(0,0)\n'
out += '\\psset{xunit=%scm}\n' %(inputdata.description['PlotSizeX'])
if inputdata.description['is2dim']:
colorseries = inputdata.description.get('ColorSeries', '') or '{hsb}{grad}[rgb]{0,0,1}{-.700,0,0}'
out += ('\\definecolorseries{gradientcolors}%s\n' % colorseries)
out += ('\\resetcolorseries[130]{gradientcolors}\n')
return out
def write_footer(self):
out = '\\end{pspicture}\n'
out += '\\end{multipage}\n'
out += '%\n%\n'
return out
class MainPlot(Plot):
def __init__(self, inputdata):
self.set_normalization(inputdata)
self.stack_histograms(inputdata)
do_gof = inputdata.description.get('GofLegend', '0') == '1' or inputdata.description.get('GofFrame', '') != ''
do_taylor = inputdata.description.get('TaylorPlot', '0') == '1'
if do_gof and not do_taylor:
self.calculate_gof(inputdata)
self.set_histo_options(inputdata)
self.set_borders(inputdata)
self.yoffset = inputdata.description['PlotSizeY']
self.coors = Coordinates(inputdata)
def draw(self, inputdata):
out = ""
out += ('\n%\n% MainPlot\n%\n')
out += ('\\psset{yunit=%scm}\n' %(self.yoffset))
out += ('\\rput(0,-1){%\n')
out += ('\\psset{yunit=%scm}\n' %(inputdata.description['PlotSizeY']))
out += self._draw(inputdata)
out += ('}\n')
return out
def _draw(self, inputdata):
out = ""
## Draw a white background first
# TODO: Allow specifying in-frame bg color
out += '\n'
out += '\\psframe[linewidth=0pt,linestyle=none,fillstyle=solid,fillcolor=white,dimen=middle](0,0)(1,1)\n'
out += '\n'
# TODO: do this more compactly, e.g. by assigning sorting keys!
if inputdata.attr_bool('DrawSpecialFirst', False):
for s in inputdata.special.values():
out += s.draw(self.coors)
if inputdata.attr_bool('DrawFunctionFirst', False):
for f in inputdata.functions.values():
out += f.draw(self.coors)
for i in inputdata.description['DrawOnly']:
out += inputdata.histos[i].draw(self.coors)
else:
for i in inputdata.description['DrawOnly']:
out += inputdata.histos[i].draw(self.coors)
for f in inputdata.functions.values():
out += f.draw(self.coors)
else:
if inputdata.attr_bool('DrawFunctionFirst', False):
for f in inputdata.functions.values():
out += f.draw(self.coors)
for i in inputdata.description['DrawOnly']:
out += inputdata.histos[i].draw(self.coors)
else:
for i in inputdata.description['DrawOnly']:
out += inputdata.histos[i].draw(self.coors)
for f in inputdata.functions.values():
out += f.draw(self.coors)
for s in inputdata.special.values():
out += s.draw(self.coors)
if inputdata.attr_bool('Legend', False):
legend = Legend(inputdata.description,inputdata.histos,inputdata.functions)
out += legend.draw()
if inputdata.description['is2dim']:
colorscale = ColorScale(inputdata.description, self.coors)
out += colorscale.draw()
frame = Frame()
out += frame.draw(inputdata)
xcustommajortickmarks = inputdata.attr_int('XMajorTickMarks', -1)
xcustomminortickmarks = inputdata.attr_int('XMinorTickMarks', -1)
xcustommajorticks = xcustomminorticks = None
if inputdata.attr('XCustomMajorTicks'):
xcustommajorticks = []
x_label_pairs = inputdata.attr('XCustomMajorTicks').strip().split() #'\t')
if len(x_label_pairs) % 2 == 0:
for i in range(0, len(x_label_pairs), 2):
xcustommajorticks.append({'Value': float(x_label_pairs[i]), 'Label': x_label_pairs[i+1]})
else:
print("Warning: XCustomMajorTicks requires an even number of alternating pos/label entries")
if inputdata.attr('XCustomMinorTicks'):
xs = inputdata.attr('XCustomMinorTicks').strip().split() #'\t')
xcustomminorticks = [{'Value': float(x)} for x in xs]
xticks = XTicks(inputdata.description, self.coors)
drawxlabels = inputdata.attr_bool('PlotXTickLabels', True) and not inputdata.attr_bool('RatioPlot', False)
out += xticks.draw(custommajortickmarks=xcustommajortickmarks,
customminortickmarks=xcustomminortickmarks,
custommajorticks=xcustommajorticks,
customminorticks=xcustomminorticks,
drawlabels=drawxlabels)
ycustommajortickmarks = inputdata.attr_int('YMajorTickMarks', -1)
ycustomminortickmarks = inputdata.attr_int('YMinorTickMarks', -1)
ycustommajorticks = ycustomminorticks = None
if 'YCustomMajorTicks' in inputdata.description:
ycustommajorticks = []
y_label_pairs = inputdata.description['YCustomMajorTicks'].strip().split() #'\t')
if len(y_label_pairs) % 2 == 0:
for i in range(0, len(y_label_pairs), 2):
ycustommajorticks.append({'Value': float(y_label_pairs[i]), 'Label': y_label_pairs[i+1]})
else:
print("Warning: YCustomMajorTicks requires an even number of alternating pos/label entries")
if inputdata.has_attr('YCustomMinorTicks'):
ys = inputdata.attr('YCustomMinorTicks').strip().split() #'\t')
ycustomminorticks = [{'Value': float(y)} for y in ys]
yticks = YTicks(inputdata.description, self.coors)
drawylabels = inputdata.attr_bool('PlotYTickLabels', True)
out += yticks.draw(custommajortickmarks=ycustommajortickmarks,
customminortickmarks=ycustomminortickmarks,
custommajorticks=ycustommajorticks,
customminorticks=ycustomminorticks,
drawlabels=drawylabels)
labels = Labels(inputdata.description)
if inputdata.attr_bool('RatioPlot', False):
olab = labels.draw(['Title','YLabel'])
else:
if not inputdata.description['is2dim']:
olab = labels.draw(['Title','XLabel','YLabel'])
else:
olab = labels.draw(['Title','XLabel','YLabel','ZLabel'])
out += olab
return out
def calculate_gof(self, inputdata):
refdata = inputdata.description.get('GofReference')
if refdata is None:
refdata = inputdata.description.get('RatioPlotReference')
if refdata is None:
inputdata.description['GofLegend'] = '0'
inputdata.description['GofFrame'] = ''
return
def pickcolor(gof):
color = None
colordefs = {}
for i in inputdata.description.setdefault('GofFrameColor', '0:green 3:yellow 6:red!70').strip().split():
foo = i.split(':')
if len(foo) != 2:
continue
colordefs[float(foo[0])] = foo[1]
for i in sorted(colordefs.keys()):
if gof>=i:
color=colordefs[i]
return color
inputdata.description.setdefault('GofLegend', '0')
inputdata.description.setdefault('GofFrame', '')
inputdata.description.setdefault('FrameColor', None)
for i in inputdata.description['DrawOnly']:
if i == refdata:
continue
if inputdata.description['GofLegend']!='1' and i!=inputdata.description['GofFrame']:
continue
if inputdata.description.get('GofType', "chi2") != 'chi2':
return
gof = inputdata.histos[i].getChi2(inputdata.histos[refdata])
if i == inputdata.description['GofFrame'] and inputdata.description['FrameColor'] is None:
inputdata.description['FrameColor'] = pickcolor(gof)
if inputdata.histos[i].description.setdefault('Title', '') != '':
inputdata.histos[i].description['Title'] += ', '
inputdata.histos[i].description['Title'] += '$\\chi^2/n={%1.2f}$' % gof
class TaylorPlot(Plot):
def __init__(self, inputdata):
self.refdata = inputdata.description['TaylorPlotReference']
self.calculate_taylorcoordinates(inputdata)
def calculate_taylorcoordinates(self,inputdata):
foo = inputdata.description['DrawOnly'].pop(inputdata.description['DrawOnly'].index(self.refdata))
inputdata.description['DrawOnly'].append(foo)
for i in inputdata.description['DrawOnly']:
print(i)
print('meanbinval = ', inputdata.histos[i].getMeanBinValue())
print('sigmabinval = ', inputdata.histos[i].getSigmaBinValue())
print('chi2/nbins = ', inputdata.histos[i].getChi2(inputdata.histos[self.refdata]))
print('correlation = ', inputdata.histos[i].getCorrelation(inputdata.histos[self.refdata]))
print('distance = ', inputdata.histos[i].getRMSdistance(inputdata.histos[self.refdata]))
class RatioPlot(Plot):
def __init__(self, inputdata):
self.refdata = inputdata.description['RatioPlotReference']
self.yoffset = inputdata.description['PlotSizeY'] + inputdata.description['RatioPlotSizeY']
inputdata.description['RatioPlotStage'] = True
inputdata.description['PlotSizeY'] = inputdata.description['RatioPlotSizeY']
inputdata.description['LogY'] = False # TODO: actually, log ratio plots could be useful...
# TODO: It'd be nice if this wasn't so MC-specific
rpmode = inputdata.description.get('RatioPlotMode', "mcdata")
if rpmode == 'deviation':
inputdata.description['YLabel'] = '$(\\text{MC}-\\text{data})$'
inputdata.description['YMin'] = -3.5
inputdata.description['YMax'] = 3.5
elif rpmode == 'datamc':
inputdata.description['YLabel'] = 'Data/MC'
inputdata.description['YMin'] = 0.5
inputdata.description['YMax'] = 1.5
else:
inputdata.description['YLabel'] = 'MC/Data'
inputdata.description['YMin'] = 0.5
inputdata.description['YMax'] = 1.5
if 'RatioPlotYLabel' in inputdata.description:
inputdata.description['YLabel'] = inputdata.description['RatioPlotYLabel']
inputdata.description['YLabel']='\\rput(-%s,0){%s}'%(0.5*inputdata.description['PlotSizeY']/inputdata.description['PlotSizeX'],inputdata.description['YLabel'])
if 'RatioPlotYMin' in inputdata.description:
inputdata.description['YMin'] = inputdata.description['RatioPlotYMin']
if 'RatioPlotYMax' in inputdata.description:
inputdata.description['YMax'] = inputdata.description['RatioPlotYMax']
if 'RatioPlotErrorBandColor' not in inputdata.description:
inputdata.description['RatioPlotErrorBandColor'] = 'yellow'
if inputdata.description.get('RatioPlotSameStyle', '0') == '0':
inputdata.histos[self.refdata].description['ErrorBandColor'] = inputdata.description['RatioPlotErrorBandColor']
inputdata.histos[self.refdata].description['ErrorBands'] = '1'
inputdata.histos[self.refdata].description['ErrorBars'] = '0'
inputdata.histos[self.refdata].description['LineStyle'] = 'solid'
inputdata.histos[self.refdata].description['LineColor'] = 'black'
inputdata.histos[self.refdata].description['LineWidth'] = '0.3pt'
inputdata.histos[self.refdata].description['PolyMarker'] = ''
inputdata.histos[self.refdata].description['ConnectGaps'] = '1'
self.calculate_ratios(inputdata)
self.set_borders(inputdata)
self.coors = Coordinates(inputdata)
def draw(self, inputdata):
out = ""
out += ('\n%\n% RatioPlot\n%\n')
out += ('\\psset{yunit=%scm}\n' %(self.yoffset))
out += ('\\rput(0,-1){%\n')
out += ('\\psset{yunit=%scm}\n' %(inputdata.description['PlotSizeY']))
out += self._draw(inputdata)
out += ('}\n')
return out
def calculate_ratios(self, inputdata):
foo = inputdata.description['DrawOnly'].pop(inputdata.description['DrawOnly'].index(self.refdata))
if inputdata.histos[self.refdata].description.get('ErrorBands', '0') == '1':
inputdata.description['DrawOnly'].insert(0,foo)
else:
inputdata.description['DrawOnly'].append(foo)
rpmode = inputdata.description.get('RatioPlotMode', "mcdata")
for i in inputdata.description['DrawOnly']:
if i != self.refdata:
if rpmode == 'deviation':
inputdata.histos[i].deviation(inputdata.histos[self.refdata])
elif rpmode == 'datamc':
inputdata.histos[i].dividereverse(inputdata.histos[self.refdata])
inputdata.histos[i].description['ErrorBars'] = '1'
else:
inputdata.histos[i].divide(inputdata.histos[self.refdata])
if rpmode == 'deviation':
inputdata.histos[self.refdata].deviation(inputdata.histos[self.refdata])
elif rpmode == 'datamc':
inputdata.histos[self.refdata].dividereverse(inputdata.histos[self.refdata])
else:
inputdata.histos[self.refdata].divide(inputdata.histos[self.refdata])
def _draw(self, inputdata):
out = ""
## Draw a white background first
# TODO: Allow specifying in-frame bg color
out += '\n'
out += '\\psframe[linewidth=0pt,linestyle=none,fillstyle=solid,fillcolor=white,dimen=middle](0,0)(1,1)\n'
out += '\n'
for i in inputdata.description['DrawOnly']:
if inputdata.description.get('RatioPlotMode', 'mcdata') == 'datamc':
if i != self.refdata:
out += inputdata.histos[i].draw(self.coors)
else:
out += inputdata.histos[i].draw(self.coors)
frame = Frame()
out += frame.draw(inputdata)
# TODO: so much duplication with MainPlot... yuck!
if inputdata.description.get('XMajorTickMarks', '') != '':
xcustommajortickmarks = int(inputdata.description['XMajorTickMarks'])
else:
xcustommajortickmarks = -1
if inputdata.description.get('XMinorTickMarks', '') != '':
xcustomminortickmarks = int(inputdata.description['XMinorTickMarks'])
else:
xcustomminortickmarks = -1
xcustommajorticks = None
if 'XCustomMajorTicks' in inputdata.description: # and inputdata.description['XCustomMajorTicks']!='':
xcustommajorticks = []
tickstr = inputdata.description['XCustomMajorTicks'].strip().split() #'\t')
if not len(tickstr) % 2:
for i in range(0, len(tickstr), 2):
xcustommajorticks.append({'Value': float(tickstr[i]), 'Label': tickstr[i+1]})
xcustomminorticks = None
if 'XCustomMinorTicks' in inputdata.description: # and inputdata.description['XCustomMinorTicks']!='':
xcustomminorticks = []
tickstr = inputdata.description['XCustomMinorTicks'].strip().split() #'\t')
for i in range(len(tickstr)):
xcustomminorticks.append({'Value': float(tickstr[i])})
xticks = XTicks(inputdata.description, self.coors)
drawlabels = inputdata.description.get('RatioPlotTickLabels', '1') == '1'
out += xticks.draw(custommajortickmarks=xcustommajortickmarks,
customminortickmarks=xcustomminortickmarks,
custommajorticks=xcustommajorticks,
customminorticks=xcustomminorticks,
drawlabels=drawlabels)
ycustommajortickmarks = inputdata.attr('YMajorTickMarks', '')
ycustommajortickmarks = int(ycustommajortickmarks) if ycustommajortickmarks else -1
ycustomminortickmarks = inputdata.attr('YMinorTickMarks', '')
ycustomminortickmarks = int(ycustomminortickmarks) if ycustomminortickmarks else -1
ycustommajorticks = None
if 'YCustomMajorTicks' in inputdata.description:
ycustommajorticks = []
tickstr = inputdata.description['YCustomMajorTicks'].strip().split() #'\t')
if not len(tickstr) % 2:
for i in range(0, len(tickstr), 2):
ycustommajorticks.append({'Value': float(tickstr[i]), 'Label': tickstr[i+1]})
ycustomminorticks = None
if 'YCustomMinorTicks' in inputdata.description:
ycustomminorticks = []
tickstr = inputdata.description['YCustomMinorTicks'].strip().split() #'\t')
for i in range(len(tickstr)):
ycustomminorticks.append({'Value': float(tickstr[i])})
yticks = YTicks(inputdata.description, self.coors)
out += yticks.draw(custommajortickmarks=ycustommajortickmarks,
customminortickmarks=ycustomminortickmarks,
custommajorticks=ycustommajorticks,
customminorticks=ycustomminorticks)
if not inputdata.attr_bool('MainPlot', True) and inputdata.attr_bool('Legend', False):
legend = Legend(inputdata.description, inputdata.histos, inputdata.functions)
out += legend.draw()
labels = Labels(inputdata.description)
lnames = ['XLabel','YLabel']
if not inputdata.attr_bool('MainPlot', True):
lnames.append("Title")
out += labels.draw(lnames)
return out
class Legend(Described):
def __init__(self, description, histos, functions):
self.histos = histos
self.functions = functions
self.description = description
def draw(self):
out = ""
out += '\n%\n% Legend\n%\n'
out += '\\rput[tr](%s,%s){%%\n' % (self.getLegendXPos(), self.getLegendYPos())
ypos = -0.05*6/self.description['PlotSizeY']
legendordermap = {}
legendlist = self.description['DrawOnly'] + list(self.functions.keys())
if 'LegendOnly' in self.description:
legendlist = []
for legend in self.description['LegendOnly'].strip().split():
if legend in self.histos or legend in self.functions:
legendlist.append(legend)
for legend in legendlist:
order = 0
if legend in self.histos and 'LegendOrder' in self.histos[legend].description:
order = int(self.histos[legend].description['LegendOrder'])
if legend in self.functions and 'LegendOrder' in self.functions[legend].description:
order = int(self.functions[legend].description['LegendOrder'])
if not order in legendordermap:
legendordermap[order] = []
legendordermap[order].append(legend)
foo = []
for i in sorted(legendordermap.keys()):
foo.extend(legendordermap[i])
rel_xpos_sign = 1.0
if self.getLegendAlign() == 'r':
rel_xpos_sign = -1.0
xpos1 = -0.10*rel_xpos_sign
xpos2 = -0.02*rel_xpos_sign
for i in foo:
if i in self.histos:
drawobject = self.histos[i]
elif i in self.functions:
drawobject = self.functions[i]
else:
continue
title = drawobject.getTitle()
if title == '':
continue
else:
out += ('\\rput[B%s](%s,%s){%s}\n' %(self.getLegendAlign(),rel_xpos_sign*0.1,ypos,title))
out += ('\\rput[B%s](%s,%s){%s\n' %(self.getLegendAlign(),rel_xpos_sign*0.1,ypos,'%'))
if drawobject.getErrorBands():
out += ('\\psframe[linewidth=0pt,linestyle=none,fillstyle=solid,fillcolor=%s,opacity=%s]' %(drawobject.getErrorBandColor(),drawobject.getErrorBandOpacity()))
out += ('(%s, 0.033)(%s, 0.001)\n' %(xpos1, xpos2))
out += ('\\psline[linestyle=' + drawobject.getLineStyle() \
+ ', linecolor=' + drawobject.getLineColor() \
+ ', linewidth=' + drawobject.getLineWidth() \
+ ', strokeopacity=' + drawobject.getLineOpacity() \
+ ', opacity=' + drawobject.getFillOpacity())
if drawobject.getLineDash() != '':
out += (', dash=' + drawobject.getLineDash())
if drawobject.getFillStyle()!='none':
out += (', fillstyle=' + drawobject.getFillStyle() \
+ ', fillcolor=' + drawobject.getFillColor() \
+ ', hatchcolor=' + drawobject.getHatchColor() \
+ ']{C-C}(%s, 0.030)(%s, 0.030)(%s, 0.004)(%s, 0.004)(%s, 0.030)\n' \
%(xpos1, xpos2, xpos2, xpos1, xpos1))
else:
out += ('](%s, 0.016)(%s, 0.016)\n' %(xpos1, xpos2))
if drawobject.getPolyMarker() != '':
out += (' \\psdot[dotstyle=' + drawobject.getPolyMarker() \
+ ', dotsize=' + drawobject.getDotSize() \
+ ', dotscale=' + drawobject.getDotScale() \
+ ', linecolor=' + drawobject.getLineColor() \
+ ', linewidth=' + drawobject.getLineWidth() \
+ ', linestyle=' + drawobject.getLineStyle() \
+ ', fillstyle=' + drawobject.getFillStyle() \
+ ', fillcolor=' + drawobject.getFillColor() \
+ ', strokeopacity=' + drawobject.getLineOpacity() \
+ ', opacity=' + drawobject.getFillOpacity() \
+ ', hatchcolor=' + drawobject.getHatchColor())
if drawobject.getFillStyle()!='none':
out += ('](%s, 0.028)\n' % (rel_xpos_sign*-0.06))
else:
out += ('](%s, 0.016)\n' % (rel_xpos_sign*-0.06))
out += ('}\n')
ypos -= 0.075*6/self.description['PlotSizeY']
if 'CustomLegend' in self.description:
for i in self.description['CustomLegend'].strip().split('\\\\'):
out += ('\\rput[B%s](%s,%s){%s}\n' %(self.getLegendAlign(),rel_xpos_sign*0.1,ypos,i))
ypos -= 0.075*6/self.description['PlotSizeY']
out += ('}\n')
return out
def getLegendXPos(self):
return self.description.get('LegendXPos', '0.95' if self.getLegendAlign() == 'r' else '0.53')
def getLegendYPos(self):
return self.description.get('LegendYPos', '0.93')
def getLegendAlign(self):
return self.description.get('LegendAlign', 'l')
class ColorScale(Described):
def __init__(self, description, coors):
self.description = description
self.coors = coors
def draw(self):
out = ''
out += '\n%\n% ColorScale\n%\n'
out += '\\rput(1,0){\n'
out += ' \\psset{xunit=4mm}\n'
out += ' \\rput(0.5,0){\n'
out += ' \\psset{yunit=0.0076923, linestyle=none, fillstyle=solid}\n'
out += ' \\multido{\\ic=0+1,\\id=1+1}{130}{\n'
out += ' \\psframe[fillcolor={gradientcolors!![\\ic]},dimen=inner,linewidth=0.1pt](0, \\ic)(1, \\id)\n'
out += ' }\n'
out += ' }\n'
out += ' \\rput(0.5,0){\n'
out += ' \\psframe[linewidth=0.3pt,dimen=middle](0,0)(1,1)\n'
zcustommajortickmarks = self.attr_int('ZMajorTickMarks', -1)
zcustomminortickmarks = self.attr_int('ZMinorTickMarks', -1)
zcustommajorticks = zcustomminorticks = None
if self.attr('ZCustomMajorTicks'):
zcustommajorticks = []
z_label_pairs = self.attr('ZCustomMajorTicks').strip().split() #'\t')
if len(z_label_pairs) % 2 == 0:
for i in range(0, len(z_label_pairs), 2):
zcustommajorticks.append({'Value': float(z_label_pairs[i]), 'Label': z_label_pairs[i+1]})
else:
print("Warning: ZCustomMajorTicks requires an even number of alternating pos/label entries")
if self.attr('ZCustomMinorTicks'):
zs = self.attr('ZCustomMinorTicks').strip().split() #'\t')
zcustomminorticks = [{'Value': float(z)} for z in zs]
drawzlabels = self.attr_bool('PlotZTickLabels', True)
zticks = ZTicks(self.description, self.coors)
out += zticks.draw(custommajortickmarks=zcustommajortickmarks,\
customminortickmarks=zcustomminortickmarks,\
custommajorticks=zcustommajorticks,\
customminorticks=zcustomminorticks,
drawlabels=drawzlabels)
out += ' }\n'
out += '}\n'
return out
class Labels(Described):
def __init__(self, description):
self.description = description
def draw(self, axis=[]):
out = ""
out += ('\n%\n% Labels\n%\n')
if 'Title' in self.description and (axis.count('Title') or axis==[]):
out += ('\\rput(0,1){\\rput[lB](0, 1.7\\labelsep){\\normalsize '+self.description['Title']+'}}\n')
if 'XLabel' in self.description and (axis.count('XLabel') or axis==[]):
xlabelsep = 4.7
if 'XLabelSep' in self.description:
xlabelsep=float(self.description['XLabelSep'])
out += ('\\rput(1,0){\\rput[rB](0,-%4.3f\\labelsep){\\normalsize '%(xlabelsep) +self.description['XLabel']+'}}\n')
if 'YLabel' in self.description and (axis.count('YLabel') or axis==[]):
ylabelsep = 6.5
if 'YLabelSep' in self.description:
ylabelsep=float(self.description['YLabelSep'])
out += ('\\rput(0,1){\\rput[rB]{90}(-%4.3f\\labelsep,0){\\normalsize '%(ylabelsep) +self.description['YLabel']+'}}\n')
if 'ZLabel' in self.description and (axis.count('ZLabel') or axis==[]):
zlabelsep = 5.3
if 'ZLabelSep' in self.description:
zlabelsep=float(self.description['ZLabelSep'])
out += ('\\rput(1,1){\\rput(%4.3f\\labelsep,0){\\psset{xunit=4mm}\\rput[lB]{270}(1.5,0){\\normalsize '%(zlabelsep) +self.description['ZLabel']+'}}}\n')
return out
class Special(Described):
def __init__(self, f):
self.description = {}
self.data = []
self.read_input(f)
def read_input(self, f):
for line in f:
if is_end_marker(line, 'SPECIAL'):
break
elif is_comment(line):
continue
else:
self.data.append(line)
def draw(self, coors):
out = ""
out += ('\n%\n% Special\n%\n')
import re
regex = re.compile(r'^(.*?)(\\physics[xy]?coor)\(\s?([0-9\.eE+-]+)\s?,\s?([0-9\.eE+-]+)\s?\)(.*)')
# TODO: More precise number string matching, something like this:
# num = r"-?[0-9]*(?:\.[0-9]*)(?:[eE][+-]?\d+]"
# regex = re.compile(r'^(.*?)(\\physics[xy]?coor)\(\s?(' + num + ')\s?,\s?(' + num + ')\s?\)(.*)')
for l in self.data:
while regex.search(l):
match = regex.search(l)
xcoor, ycoor = float(match.group(3)), float(match.group(4))
if match.group(2)[1:] in ["physicscoor", "physicsxcoor"]:
xcoor = coors.phys2frameX(xcoor)
if match.group(2)[1:] in ["physicscoor", "physicsycoor"]:
ycoor = coors.phys2frameY(ycoor)
line = "%s(%f, %f)%s" % (match.group(1), xcoor, ycoor, match.group(5))
l = line
out += l + "\n"
return out
class DrawableObject(Described):
def __init__(self, f):
pass
def getTitle(self):
return self.description.get("Title", "")
def getLineStyle(self):
if 'LineStyle' in self.description:
## I normally like there to be "only one way to do it", but providing
## this dashdotted/dotdashed synonym just seems humane ;-)
if self.description['LineStyle'] in ('dashdotted', 'dotdashed'):
self.description['LineStyle']='dashed'
self.description['LineDash']='3pt 3pt .8pt 3pt'
return self.description['LineStyle']
else:
return 'solid'
def getLineDash(self):
if 'LineDash' in self.description:
# Check if LineStyle=='dashdotted' before returning something
self.getLineStyle()
return self.description['LineDash']
else:
return ''
def getLineWidth(self):
return self.description.get("LineWidth", "0.8pt")
def getLineColor(self):
return self.description.get("LineColor", "black")
def getLineOpacity(self):
return self.description.get("LineOpacity", "1.0")
def getFillColor(self):
return self.description.get("FillColor", "white")
def getFillOpacity(self):
return self.description.get("FillOpacity", "1.0")
def getHatchColor(self):
return self.description.get("HatchColor", "black")
def getFillStyle(self):
return self.description.get("FillStyle", "none")
def getPolyMarker(self):
return self.description.get("PolyMarker", "")
def getDotSize(self):
return self.description.get("DotSize", "2pt 2")
def getDotScale(self):
return self.description.get("DotScale", "1")
def getErrorBars(self):
return bool(int(self.description.get("ErrorBars", "0")))
def getErrorBands(self):
return bool(int(self.description.get("ErrorBands", "0")))
def getErrorBandColor(self):
return self.description.get("ErrorBandColor", "yellow")
def getErrorBandOpacity(self):
return self.description.get("ErrorBandOpacity", "1.0")
def getSmoothLine(self):
return bool(int(self.description.get("SmoothLine", "0")))
def startclip(self):
return '\\psclip{\\psframe[linewidth=0, linestyle=none](0,0)(1,1)}\n'
def stopclip(self):
return '\\endpsclip\n'
def startpsset(self):
out = ""
out += ('\\psset{linecolor='+self.getLineColor()+'}\n')
out += ('\\psset{linewidth='+self.getLineWidth()+'}\n')
out += ('\\psset{linestyle='+self.getLineStyle()+'}\n')
out += ('\\psset{fillstyle='+self.getFillStyle()+'}\n')
out += ('\\psset{fillcolor='+self.getFillColor()+'}\n')
out += ('\\psset{hatchcolor='+self.getHatchColor()+'}\n')
out += ('\\psset{strokeopacity='+self.getLineOpacity()+'}\n')
out += ('\\psset{opacity='+self.getFillOpacity()+'}\n')
if self.getLineDash()!='':
out += ('\\psset{dash='+self.getLineDash()+'}\n')
return out
def stoppsset(self):
out = ""
out += ('\\psset{linecolor=black}\n')
out += ('\\psset{linewidth=0.8pt}\n')
out += ('\\psset{linestyle=solid}\n')
out += ('\\psset{fillstyle=none}\n')
out += ('\\psset{fillcolor=white}\n')
out += ('\\psset{hatchcolor=black}\n')
out += ('\\psset{strokeopacity=1.0}\n')
out += ('\\psset{opacity=1.0}\n')
return out
class Function(DrawableObject, Described):
def __init__(self, f):
self.description = {}
self.read_input(f)
def read_input(self, f):
self.code='def plotfunction(x):\n'
iscode=False
for line in f:
if is_end_marker(line, 'FUNCTION'):
break
elif is_comment(line):
continue
else:
m = pat_property.match(line)
if iscode:
self.code+=' '+line
elif m:
prop, value = m.group(1,2)
if prop=='Code':
iscode=True
else:
self.description[prop] = value
if not iscode:
print('++++++++++ ERROR: No code in function')
else:
foo = compile(self.code, '<string>', 'exec')
## In Python 3, exec() in a function scope can't bind local names,
## so run the compiled code in an explicit namespace dict
namespace = {}
exec(foo, namespace)
self.plotfunction = namespace['plotfunction']
def draw(self,coors):
out = ""
out += self.startclip()
out += self.startpsset()
xmin = coors.xmin()
if 'XMin' in self.description and self.description['XMin']:
xmin = float(self.description['XMin'])
xmax=coors.xmax()
if 'XMax' in self.description and self.description['XMax']:
xmax=float(self.description['XMax'])
# TODO: Space sample points logarithmically if LogX=1
dx = (xmax-xmin)/500.
x = xmin-dx
out += '\\pscurve'
if 'FillStyle' in self.description and self.description['FillStyle']!='none':
out += '(%s,%s)\n' % (coors.strphys2frameX(xmin),coors.strphys2frameY(coors.ymin()))
while x < (xmax+2*dx):
y = self.plotfunction(x)
out += ('(%s,%s)\n' % (coors.strphys2frameX(x), coors.strphys2frameY(y)))
x += dx
if 'FillStyle' in self.description and self.description['FillStyle']!='none':
out += '(%s,%s)\n' % (coors.strphys2frameX(xmax),coors.strphys2frameY(coors.ymin()))
out += self.stoppsset()
out += self.stopclip()
return out
class BinData(object):
"""\
Store bin edge and value+error(s) data for a 1D or 2D bin.
TODO: generalise/alias the attr names to avoid mention of x and y
"""
def __init__(self, low, high, val, err):
#print("@", low, high, val, err)
self.low = floatify(low)
self.high = floatify(high)
self.val = float(val)
self.err = floatpair(err)
@property
def is2D(self):
return hasattr(self.low, "__len__") and hasattr(self.high, "__len__")
@property
def isValid(self):
invalid_val = (isnan(self.val) or isnan(self.err[0]) or isnan(self.err[1]))
if invalid_val:
return False
if self.is2D:
invalid_low = any(isnan(x) for x in self.low)
invalid_high = any(isnan(x) for x in self.high)
else:
invalid_low, invalid_high = isnan(self.low), isnan(self.high)
return not (invalid_low or invalid_high)
@property
def xmin(self):
return self.low
@xmin.setter
def xmin(self,x):
self.low = x
@property
def xmax(self):
return self.high
@xmax.setter
def xmax(self,x):
self.high = x
@property
def xmid(self):
# TODO: Generalise to 2D
return (self.xmin + self.xmax) / 2.0
@property
def xwidth(self):
# TODO: Generalise to 2D
assert self.xmin <= self.xmax
return self.xmax - self.xmin
@property
def y(self):
return self.val
@y.setter
def y(self, x):
self.val = x
@property
def ey(self):
return self.err
@ey.setter
def ey(self, x):
self.err = x
@property
def ymin(self):
return self.y - self.ey[0]
@property
def ymax(self):
return self.y + self.ey[1]
def __getitem__(self, key):
"dict-like access for backward compatibility"
if key == "LowEdge":
return self.xmin
elif key in ("UpEdge", "HighEdge"):
return self.xmax
elif key == "Content":
return self.y
elif key == "Errors":
return self.ey
class Histogram(DrawableObject, Described):
def __init__(self, f, p=None):
self.description = {}
self.is2dim = False
self.data = []
self.read_input_data(f)
self.sigmabinvalue = None
self.meanbinvalue = None
self.path = p
def read_input_data(self, f):
for line in f:
if is_end_marker(line, 'HISTOGRAM'):
break
elif is_comment(line):
continue
else:
line = line.rstrip()
m = pat_property.match(line)
if m:
prop, value = m.group(1,2)
self.description[prop] = value
else:
## Detect symm errs
linearray = line.split()
if len(linearray) == 4:
self.data.append(BinData(*linearray))
## Detect asymm errs
elif len(linearray) == 5:
self.data.append(BinData(linearray[0], linearray[1], linearray[2], [linearray[3],linearray[4]]))
## Detect two-dimensionality
elif len(linearray) in [6,7]:
self.is2dim = True
# If asymm z error, use the max or average of +- error
err = float(linearray[5])
if len(linearray) == 7:
if self.attr_bool("ShowMaxZErr", True):
err = max(err, float(linearray[6]))
else:
err = 0.5 * (err + float(linearray[6]))
self.data.append(BinData([linearray[0], linearray[2]], [linearray[1], linearray[3]], linearray[4], err))
## Unknown histo format
else:
raise RuntimeError("Unknown HISTOGRAM data line format with %d entries" % len(linearray))
def mangle_input(self):
norm2int = self.attr_bool("NormalizeToIntegral", False)
norm2sum = self.attr_bool("NormalizeToSum", False)
if norm2int or norm2sum:
if norm2int and norm2sum:
print("Can't normalize to Integral and to Sum at the same time. Will normalize to the sum.")
foo = 0
for b in self.data:
if norm2sum:
foo += b.val
else:
foo += b.val*(b.xmax-b.xmin)
if foo != 0:
for b in self.data:
b.val /= foo
b.err[0] /= foo
b.err[1] /= foo
scale = self.attr_float('Scale', 1.0)
if scale != 1.0:
for b in self.data:
b.val *= scale
b.err[0] *= scale
b.err[1] *= scale
rebin = self.attr_int("Rebin", 1)
if rebin > 1:
errortype = self.attr("ErrorType", "stat")
newdata = []
for i in range(0, (len(self.data)//rebin)*rebin, rebin):
foo = 0.
barl = 0.
baru = 0.
for j in range(rebin):
binwidth = self.data[i+j].xwidth
foo += self.data[i+j].val * binwidth
if errortype == "stat":
barl += (binwidth * self.data[i+j].err[0])**2
baru += (binwidth * self.data[i+j].err[1])**2
elif errortype == "env":
barl += self.data[i+j].ymin * binwidth
baru += self.data[i+j].ymax * binwidth
else:
logging.error("Rebinning for ErrorType not implemented.")
sys.exit(1)
newbinwidth = self.data[i+rebin-1].xmax - self.data[i].xmin
newcentral = foo/newbinwidth
if errortype == "stat":
newerror = [sqrt(barl)/newbinwidth, sqrt(baru)/newbinwidth]
elif errortype == "env":
newerror = [(foo-barl)/newbinwidth, (baru-foo)/newbinwidth]
newdata.append(BinData(self.data[i].xmin, self.data[i+rebin-1].xmax, newcentral, newerror))
self.data = newdata
def add(self, name):
if len(self.data) != len(name.data):
print('+++ Error in Histogram.add() for %s: different numbers of bins' % self.path)
return
for i in range(len(self.data)):
if fuzzyeq(self.data[i].xmin, name.data[i].xmin) and \
fuzzyeq(self.data[i].xmax, name.data[i].xmax):
self.data[i].val += name.data[i].val
self.data[i].err[0] = sqrt(self.data[i].err[0]**2 + name.data[i].err[0]**2)
self.data[i].err[1] = sqrt(self.data[i].err[1]**2 + name.data[i].err[1]**2)
else:
print('+++ Error in Histogram.add() for %s: binning of histograms differs' % self.path)
def divide(self, name):
#print(name.path, self.path)
if len(self.data) != len(name.data):
print('+++ Error in Histogram.divide() for %s: different numbers of bins' % self.path)
return
for i in range(len(self.data)):
if fuzzyeq(self.data[i].xmin, name.data[i].xmin) and \
fuzzyeq(self.data[i].xmax, name.data[i].xmax):
try:
self.data[i].err[0] /= name.data[i].val
except ZeroDivisionError:
self.data[i].err[0]=0.
try:
self.data[i].err[1] /= name.data[i].val
except ZeroDivisionError:
self.data[i].err[1]=0.
try:
self.data[i].val /= name.data[i].val
except ZeroDivisionError:
self.data[i].val=1.
# self.data[i].err[0] = sqrt(self.data[i].err[0]**2 + name.data[i].err[0]**2)
# self.data[i].err[1] = sqrt(self.data[i].err[1]**2 + name.data[i].err[1]**2)
else:
print('+++ Error in Histogram.divide() for %s: binning of histograms differs' % self.path)
def dividereverse(self, name):
if len(self.data) != len(name.data):
print('+++ Error in Histogram.dividereverse() for %s: different numbers of bins' % self.path)
return
for i in range(len(self.data)):
if fuzzyeq(self.data[i].xmin, name.data[i].xmin) and \
fuzzyeq(self.data[i].xmax, name.data[i].xmax):
try:
self.data[i].err[0] = name.data[i].err[0]/self.data[i].val
except ZeroDivisionError:
self.data[i].err[0]=0.
try:
self.data[i].err[1] = name.data[i].err[1]/self.data[i].val
except ZeroDivisionError:
self.data[i].err[1]=0.
try:
self.data[i].val = name.data[i].val/self.data[i].val
except ZeroDivisionError:
self.data[i].val=1.
else:
print('+++ Error in Histogram.dividereverse(): binning of histograms differs')
def deviation(self, name):
if len(self.data) != len(name.data):
print('+++ Error in Histogram.deviation() for %s: different numbers of bins' % self.path)
return
for i in range(len(self.data)):
if fuzzyeq(self.data[i].xmin, name.data[i].xmin) and \
fuzzyeq(self.data[i].xmax, name.data[i].xmax):
self.data[i].val -= name.data[i].val
try:
self.data[i].val /= 0.5*sqrt((name.data[i].err[0] + name.data[i].err[1])**2 + \
(self.data[i].err[0] + self.data[i].err[1])**2)
except ZeroDivisionError:
self.data[i].val = 0.0
try:
self.data[i].err[0] /= name.data[i].err[0]
except ZeroDivisionError:
self.data[i].err[0] = 0.0
try:
self.data[i].err[1] /= name.data[i].err[1]
except ZeroDivisionError:
self.data[i].err[1] = 0.0
else:
print('+++ Error in Histogram.deviation() for %s: binning of histograms differs' % self.path)
def getChi2(self, name):
chi2 = 0.
for i in range(len(self.data)):
if fuzzyeq(self.data[i].xmin, name.data[i].xmin) and \
fuzzyeq(self.data[i].xmax, name.data[i].xmax):
try:
chi2 += (self.data[i].val-name.data[i].val)**2/((0.5*self.data[i].err[0]+0.5*self.data[i].err[1])**2 + (0.5*name.data[i].err[0]+0.5*name.data[i].err[1])**2)
except ZeroDivisionError:
pass
else:
print('+++ Error in Histogram.getChi2() for %s: binning of histograms differs' % self.path)
return chi2/len(self.data) if self.data else 0.
def getSigmaBinValue(self):
if self.sigmabinvalue is None:
self.sigmabinvalue = 0.
sumofweights = 0.
for i in range(len(self.data)):
if self.is2dim:
binwidth = abs( (self.data[i].xmax[0] - self.data[i].xmin[0])
*(self.data[i].xmax[1] - self.data[i].xmin[1]))
else:
binwidth = abs(self.data[i].xmax - self.data[i].xmin)
self.sigmabinvalue += binwidth*(self.data[i].val-self.getMeanBinValue())**2
sumofweights += binwidth
self.sigmabinvalue = sqrt(self.sigmabinvalue/sumofweights)
return self.sigmabinvalue
def getMeanBinValue(self):
if self.meanbinvalue is None:
self.meanbinvalue = 0.
sumofweights = 0.
for i in range(len(self.data)):
if self.is2dim:
binwidth = abs( (self.data[i].xmax[0] - self.data[i].xmin[0])
*(self.data[i].xmax[1] - self.data[i].xmin[1]))
else:
binwidth = abs(self.data[i].xmax - self.data[i].xmin)
self.meanbinvalue += binwidth*self.data[i].val
sumofweights += binwidth
self.meanbinvalue /= sumofweights
return self.meanbinvalue
def getCorrelation(self, name):
correlation = 0.
sumofweights = 0.
for i in range(len(self.data)):
if fuzzyeq(self.data[i].xmin, name.data[i].xmin) and \
fuzzyeq(self.data[i].xmax, name.data[i].xmax):
if self.is2dim:
binwidth = abs( (self.data[i].xmax[0] - self.data[i].xmin[0])
* (self.data[i].xmax[1] - self.data[i].xmin[1]) )
else:
binwidth = abs(self.data[i].xmax - self.data[i].xmin)
correlation += binwidth * ( self.data[i].val - self.getMeanBinValue() ) \
* ( name.data[i].val - name.getMeanBinValue() )
sumofweights += binwidth
else:
print('+++ Error in Histogram.getCorrelation() for %s: binning of histograms differs' % self.path)
correlation /= sumofweights
try:
correlation /= self.getSigmaBinValue()*name.getSigmaBinValue()
except ZeroDivisionError:
correlation = 0
return correlation
def getRMSdistance(self,name):
distance = 0.
sumofweights = 0.
for i in range(len(self.data)):
if fuzzyeq(self.data[i].xmin, name.data[i].xmin) and \
fuzzyeq(self.data[i].xmax, name.data[i].xmax):
if self.is2dim:
binwidth = abs( (self.data[i].xmax[0] - self.data[i].xmin[0])
* (self.data[i].xmax[1] - self.data[i].xmin[1]) )
else:
binwidth = abs(self.data[i].xmax - self.data[i].xmin)
distance += binwidth * ( (self.data[i].val - self.getMeanBinValue())
-(name.data[i].val - name.getMeanBinValue()))**2
sumofweights += binwidth
else:
print('+++ Error in Histogram.getRMSdistance() for %s: binning of histograms differs' % self.path)
distance = sqrt(distance/sumofweights)
return distance
def draw(self,coors):
seen_nan = False
out = ""
out += self.startclip()
out += self.startpsset()
if any(b.isValid for b in self.data):
out += "% START DATA\n"
if self.is2dim:
for b in self.data:
out += ('\\psframe')
color = int(129*coors.phys2frameZ(b.val))
if b.val > coors.zmax():
color = 129
if b.val < coors.zmin():
color = 0
if b.val <= coors.zmin():
out += ('[linewidth=0pt, linestyle=none, fillstyle=solid, fillcolor=white]')
else:
out += ('[linewidth=0pt, linestyle=none, fillstyle=solid, fillcolor={gradientcolors!!['+str(color)+']}]')
out += ('(' + coors.strphys2frameX(b.low[0]) + ', ' \
+ coors.strphys2frameY(b.low[1]) + ')(' \
+ coors.strphys2frameX(b.high[0]) + ', ' \
+ coors.strphys2frameY(b.high[1]) + ')\n')
else:
if self.getErrorBands():
self.description['SmoothLine'] = 0
for b in self.data:
if isnan(b.val) or isnan(b.err[0]) or isnan(b.err[1]):
seen_nan = True
continue
out += ('\\psframe[dimen=inner,linewidth=0pt,linestyle=none,fillstyle=solid,fillcolor=%s,opacity=%s]' % (self.getErrorBandColor(),self.getErrorBandOpacity()))
out += ('(' + coors.strphys2frameX(b.xmin) + ', ' \
+ coors.strphys2frameY(b.val - b.err[0]) + ')(' \
+ coors.strphys2frameX(b.xmax) + ', ' \
+ coors.strphys2frameY(b.val + b.err[1]) + ')\n')
if self.getErrorBars():
for b in self.data:
if isnan(b.val) or isnan(b.err[0]) or isnan(b.err[1]):
seen_nan = True
continue
if b.val == 0. and b.err == [0.,0.]:
continue
out += ('\\psline')
out += ('(' + coors.strphys2frameX(b.xmin) + ', ' \
+ coors.strphys2frameY(b.val) + ')(' \
+ coors.strphys2frameX(b.xmax) + ', ' \
+ coors.strphys2frameY(b.val) + ')\n')
out += ('\\psline')
bincenter = coors.strphys2frameX(.5*(b.xmin+b.xmax))
out += ('(' + bincenter + ', ' \
+ coors.strphys2frameY(b.val-b.err[0]) + ')(' \
+ bincenter + ', ' \
+ coors.strphys2frameY(b.val+b.err[1]) + ')\n')
if self.getSmoothLine():
out += '\\psbezier'
else:
out += '\\psline'
if self.getFillStyle() != 'none': # make sure that filled areas go all the way down to the x-axis
if coors.phys2frameX(self.data[0].xmin) > 1e-4:
out += '(' + coors.strphys2frameX(self.data[0].xmin) + ', -0.1)\n'
else:
out += '(-0.1, -0.1)\n'
for i, b in enumerate(self.data):
if isnan(b.val):
seen_nan = True
continue
if self.getSmoothLine():
out += ('(' + coors.strphys2frameX(0.5*(b.xmin+b.xmax)) + ', ' \
+ coors.strphys2frameY(b.val) + ')\n')
else:
out += ('(' + coors.strphys2frameX(b.xmin) + ', ' \
+ coors.strphys2frameY(b.val) + ')(' \
+ coors.strphys2frameX(b.xmax) + ', ' \
+ coors.strphys2frameY(b.val) + ')\n')
## Join/separate data points, with vertical/diagonal lines
if i+1 < len(self.data): #< If this is not the last point
if self.description.get('ConnectBins', '1') != '1':
out += ('\\psline')
else:
## If bins are joined, but there is a gap in binning, choose whether to fill the gap
if (abs(coors.phys2frameX(b.xmax) - coors.phys2frameX(self.data[i+1].xmin)) > 1e-4):
if self.description.get('ConnectGaps', '0') != '1':
out += ('\\psline')
# TODO: Perhaps use a new dashed line to fill the gap?
if self.getFillStyle() != 'none': # make sure that filled areas go all the way down to the x-axis
if (coors.phys2frameX(self.data[-1].xmax) < 1-1e-4):
out += '(' + coors.strphys2frameX(self.data[-1].xmax) + ', -0.1)\n'
else:
out += '(1.1, -0.1)\n'
#
if self.getPolyMarker() != '':
for b in self.data:
if isnan(b.val):
seen_nan = True
continue
if b.val == 0. and b.err == [0.,0.]:
continue
out += ('\\psdot[dotstyle=%s,dotsize=%s,dotscale=%s](' % (self.getPolyMarker(),self.getDotSize(),self.getDotScale()) \
+ coors.strphys2frameX(.5*(b.xmin+b.xmax)) + ', ' \
+ coors.strphys2frameY(b.val) + ')\n')
out += "% END DATA\n"
else:
print("WARNING: No valid bin value/errors/edges to plot!")
out += "% NO DATA!\n"
out += self.stoppsset()
out += self.stopclip()
if seen_nan:
print("WARNING: NaN-valued value or error bar!")
return out
# def is2dimensional(self):
# return self.is2dim
def getXMin(self):
if not self.data:
return 0
elif self.is2dim:
return min(b.low[0] for b in self.data)
else:
return min(b.xmin for b in self.data)
def getXMax(self):
if not self.data:
return 1
elif self.is2dim:
return max(b.high[0] for b in self.data)
else:
return max(b.xmax for b in self.data)
def getYMin(self, xmin, xmax, logy):
if not self.data:
return 0
elif self.is2dim:
return min(b.low[1] for b in self.data)
else:
yvalues = []
for b in self.data:
if (b.xmax > xmin or b.xmin >= xmin) and (b.xmin < xmax or b.xmax <= xmax):
foo = b.val
if self.getErrorBars() or self.getErrorBands():
foo -= b.err[0]
if not isnan(foo) and (not logy or foo > 0):
yvalues.append(foo)
return min(yvalues) if yvalues else self.data[0].val
def getYMax(self, xmin, xmax):
if not self.data:
return 1
elif self.is2dim:
return max(b.high[1] for b in self.data)
else:
yvalues = []
for b in self.data:
if (b.xmax > xmin or b.xmin >= xmin) and (b.xmin < xmax or b.xmax <= xmax):
foo = b.val
if self.getErrorBars() or self.getErrorBands():
foo += b.err[1]
if not isnan(foo): # and (not logy or foo > 0):
yvalues.append(foo)
return max(yvalues) if yvalues else self.data[0].val
def getZMin(self, xmin, xmax, ymin, ymax):
if not self.is2dim:
return 0
zvalues = []
for b in self.data:
if (b.xmax[0] > xmin and b.xmin[0] < xmax) and (b.xmax[1] > ymin and b.xmin[1] < ymax):
zvalues.append(b.val)
return min(zvalues) if zvalues else 0
def getZMax(self, xmin, xmax, ymin, ymax):
if not self.is2dim:
return 0
zvalues = []
for b in self.data:
if (b.xmax[0] > xmin and b.xmin[0] < xmax) and (b.xmax[1] > ymin and b.xmin[1] < ymax):
zvalues.append(b.val)
return max(zvalues) if zvalues else 1
class Value(Histogram):
def read_input_data(self, f):
for line in f:
if is_end_marker(line, 'VALUE'):
break
elif is_comment(line):
continue
else:
line = line.rstrip()
m = pat_property.match(line)
if m:
prop, value = m.group(1,2)
self.description[prop] = value
else:
linearray = line.split()
if len(linearray) == 3:
self.data.append(BinData(0.0, 1.0, linearray[0], [ linearray[1], linearray[2] ])) # dummy x-values
else:
raise Exception('Value does not have the expected number of columns. ' + line)
# TODO: specialise draw() here
class Counter(Histogram):
def read_input_data(self, f):
for line in f:
if is_end_marker(line, 'COUNTER'):
break
elif is_comment(line):
continue
else:
line = line.rstrip()
m = pat_property.match(line)
if m:
prop, value = m.group(1,2)
self.description[prop] = value
else:
linearray = line.split()
if len(linearray) == 2:
self.data.append(BinData(0.0, 1.0, linearray[0], [ linearray[1], linearray[1] ])) # dummy x-values
else:
raise Exception('Counter does not have the expected number of columns. ' + line)
# TODO: specialise draw() here
class Histo1D(Histogram):
def read_input_data(self, f):
for line in f:
if is_end_marker(line, 'HISTO1D'):
break
elif is_comment(line):
continue
else:
line = line.rstrip()
m = pat_property.match(line)
if m:
prop, value = m.group(1,2)
self.description[prop] = value
else:
linearray = line.split()
## Detect symm errs
# TODO: Not sure what the 8-param version is for... auto-compatibility with YODA format?
if len(linearray) in [4,8]:
self.data.append(BinData(linearray[0], linearray[1], linearray[2], linearray[3]))
## Detect asymm errs
elif len(linearray) == 5:
self.data.append(BinData(linearray[0], linearray[1], linearray[2], [linearray[3],linearray[4]]))
else:
raise Exception('Histo1D does not have the expected number of columns. ' + line)
# TODO: specialise draw() here
class Histo2D(Histogram):
def read_input_data(self, f):
self.is2dim = True #< Should really be done in a constructor, but this is easier for now...
for line in f:
if is_end_marker(line, 'HISTO2D'):
break
elif is_comment(line):
continue
else:
line = line.rstrip()
m = pat_property.match(line)
if m:
prop, value = m.group(1,2)
self.description[prop] = value
else:
linearray = line.split()
if len(linearray) in [6,7]:
# If asymm z error, use the max or average of +- error
err = float(linearray[5])
if len(linearray) == 7:
if self.description.get("ShowMaxZErr", 1):
err = max(err, float(linearray[6]))
else:
err = 0.5 * (err + float(linearray[6]))
self.data.append(BinData([linearray[0], linearray[2]], [linearray[1], linearray[3]], float(linearray[4]), err))
else:
raise Exception('Histo2D does not have the expected number of columns. '+line)
# TODO: specialise draw() here
#############################
class Frame(object):
def __init__(self):
self.framelinewidth = '0.3pt'
def draw(self,inputdata):
out = ('\n%\n% Frame\n%\n')
if inputdata.description.get('FrameColor') is not None:
color = inputdata.description['FrameColor']
# We want to draw this frame only once, so set it to False for next time:
inputdata.description['FrameColor']=None
# Calculate how high and wide the overall plot is
height = [0,0]
width = inputdata.attr('PlotSizeX')
if inputdata.attr_bool('RatioPlot', False):
height[1] = -inputdata.description['RatioPlotSizeY']
if not inputdata.attr_bool('MainPlot', True):
height[0] = inputdata.description['PlotSizeY']
else:
height[0] = -height[1]
height[1] = 0
# Get the margin widths
left = inputdata.description['LeftMargin']+0.1
right = inputdata.description['RightMargin']+0.1
top = inputdata.description['TopMargin']+0.1
bottom = inputdata.description['BottomMargin']+0.1
#
out += ('\\rput(0,1){\\psline[linewidth=%scm,linecolor=%s](%scm,%scm)(%scm,%scm)}\n' %(top, color, -left, top/2, width+right, top/2))
out += ('\\rput(0,%scm){\\psline[linewidth=%scm,linecolor=%s](%scm,%scm)(%scm,%scm)}\n' %(height[1], bottom, color, -left, -bottom/2, width+right, -bottom/2))
out += ('\\rput(0,0){\\psline[linewidth=%scm,linecolor=%s](%scm,%scm)(%scm,%scm)}\n' %(left, color, -left/2, height[1]-0.05, -left/2, height[0]+0.05))
out += ('\\rput(1,0){\\psline[linewidth=%scm,linecolor=%s](%scm,%scm)(%scm,%scm)}\n' %(right, color, right/2, height[1]-0.05, right/2, height[0]+0.05))
out += ('\\psframe[linewidth='+self.framelinewidth+',dimen=middle](0,0)(1,1)\n')
return out
class Ticks(object):
def __init__(self, description, coors):
self.majorticklinewidth = '0.3pt'
self.minorticklinewidth = '0.3pt'
self.majorticklength = '9pt'
self.minorticklength = '4pt'
self.description = description
self.coors = coors
def draw_ticks(self, vmin, vmax, plotlog=False, custommajorticks=None, customminorticks=None, custommajortickmarks=-1, customminortickmarks=-1, drawlabels=True, twosided=False):
if vmax <= vmin:
raise Exception("Cannot place tick marks. Inconsistent min=%s and max=%s" % (vmin,vmax))
out = ""
if plotlog:
if vmin <= 0 or vmax <= 0:
raise Exception("Cannot place log axis min or max tick <= 0")
if custommajorticks is None:
x = int(log10(vmin))
n_labels = 0
while x < log10(vmax) + 1:
if 10**x >= vmin:
ticklabel = 10**x
if ticklabel > vmin and ticklabel < vmax:
out += self.draw_majortick(ticklabel,twosided)
if drawlabels:
out += self.draw_majorticklabel(ticklabel)
n_labels += 1
if ticklabel == vmin or ticklabel == vmax:
if drawlabels:
out += self.draw_majorticklabel(ticklabel)
n_labels+=1
for i in range(2,10):
ticklabel = i*10**(x-1)
if ticklabel > vmin and ticklabel < vmax:
out += self.draw_minortick(ticklabel,twosided)
if drawlabels and n_labels == 0:
if (i+1)*10**(x-1) < vmax: # some special care for the last minor tick
out += self.draw_minorticklabel(ticklabel)
else:
out += self.draw_minorticklabel(ticklabel, last=True)
x += 1
else:
print("Warning: custom major ticks not currently supported on log axes -- please contact the developers to request!")
elif custommajorticks is not None or customminorticks is not None:
if custommajorticks:
for tick in custommajorticks:
value = tick['Value']
label = tick['Label']
if value >= vmin and value <= vmax:
out += self.draw_majortick(value,twosided)
if drawlabels:
out += self.draw_majorticklabel(value, label=label)
if customminorticks:
for tick in customminorticks:
value = tick['Value']
if value >= vmin and value <= vmax:
out += self.draw_minortick(value,twosided)
else:
vrange = vmax - vmin
if isnan(vrange):
vrange, vmin, vmax = 1, 1, 2
digits = int(log10(vrange))+1
if vrange <= 1:
digits -= 1
foo = int(vrange/(10**(digits-1)))
if foo/9. > 0.5:
tickmarks = 10
elif foo/9. > 0.2:
tickmarks = 5
elif foo/9. > 0.1:
tickmarks = 2
if custommajortickmarks > -1:
if custommajortickmarks not in [1, 2, 5, 10, 20]:
print('+++ Error in Ticks.draw_ticks(): MajorTickMarks must be in [1, 2, 5, 10, 20]')
else:
tickmarks = custommajortickmarks
if tickmarks == 2 or tickmarks == 20:
minortickmarks = 3
else:
minortickmarks = 4
if customminortickmarks > -1:
minortickmarks = customminortickmarks
#
x = 0
while x > vmin*10**digits:
x -= tickmarks*100**(digits-1)
while x <= vmax*10**digits:
if x >= vmin*10**digits - tickmarks*100**(digits-1):
ticklabel = 1.*x/10**digits
if int(ticklabel) == ticklabel:
ticklabel = int(ticklabel)
if float(ticklabel-vmin)/vrange >= -1e-5:
if abs(ticklabel-vmin)/vrange > 1e-5 and abs(ticklabel-vmax)/vrange > 1e-5:
out += self.draw_majortick(ticklabel,twosided)
if drawlabels:
out += self.draw_majorticklabel(ticklabel)
xminor = x
for i in range(minortickmarks):
xminor += 1.*tickmarks*100**(digits-1)/(minortickmarks+1)
ticklabel = 1.*xminor/10**digits
if ticklabel > vmin and ticklabel < vmax:
if abs(ticklabel-vmin)/vrange > 1e-5 and abs(ticklabel-vmax)/vrange > 1e-5:
out += self.draw_minortick(ticklabel,twosided)
x += tickmarks*100**(digits-1)
return out
def draw(self):
pass
def draw_minortick(self, ticklabel, twosided):
pass
def draw_majortick(self, ticklabel, twosided):
pass
def draw_majorticklabel(self, ticklabel):
pass
def draw_minorticklabel(self, value, label='', last=False):
return ''
def get_ticklabel(self, value, plotlog=False, minor=False, lastminor=False):
label=''
prefix = ''
if plotlog:
bar = int(log10(value))
if bar < 0:
sign='-'
else:
sign='\\,'
if minor: # The power of ten is only to be added to the last minor tick label
if lastminor:
label = str(int(value/(10**bar))) + "\\cdot" + '10$^{'+sign+'\\text{'+str(abs(bar))+'}}$'
else:
label = str(int(value/(10**bar))) # The naked prefactor
else:
if bar==0:
label = '1'
else:
label = '10$^{'+sign+'\\text{'+str(abs(bar))+'}}$'
else:
if fabs(value) < 1e-10:
value = 0
label = str(value)
if "e" in label:
a, b = label.split("e")
astr = "%2.1f" % float(a)
bstr = str(int(b))
label = "\\smaller{%s $\\!\\cdot 10^{%s} $}" % (astr, bstr)
return label
class XTicks(Ticks):
def draw(self, custommajorticks=None, customminorticks=None, custommajortickmarks=-1, customminortickmarks=-1,drawlabels=True):
twosided = bool(int(self.description.get('XTwosidedTicks', '0')))
out = ""
out += ('\n%\n% X-Ticks\n%\n')
out += ('\\def\\majortickmarkx{\\psline[linewidth='+self.majorticklinewidth+'](0,0)(0,'+self.majorticklength+')}%\n')
out += ('\\def\\minortickmarkx{\\psline[linewidth='+self.minorticklinewidth+'](0,0)(0,'+self.minorticklength+')}%\n')
uselog = self.description['LogX'] and (self.coors.xmin() > 0 and self.coors.xmax() > 0)
out += self.draw_ticks(self.coors.xmin(), self.coors.xmax(),\
plotlog=uselog,\
custommajorticks=custommajorticks,\
customminorticks=customminorticks,\
custommajortickmarks=custommajortickmarks,\
customminortickmarks=customminortickmarks,\
drawlabels=drawlabels,\
twosided=twosided)
return out
def draw_minortick(self, ticklabel, twosided):
out = ''
out += '\\rput('+self.coors.strphys2frameX(ticklabel)+', 0){\\minortickmarkx}\n'
if twosided:
out += '\\rput{180}('+self.coors.strphys2frameX(ticklabel)+', 1){\\minortickmarkx}\n'
return out
def draw_minorticklabel(self, value, label='', last=False):
if not label:
label=self.get_ticklabel(value, int(self.description['LogX']), minor=True, lastminor=last)
if last: # Some more indentation for the last minor label
return ('\\rput('+self.coors.strphys2frameX(value)+', 0){\\rput[B](1.9\\labelsep,-2.3\\labelsep){\\strut{}'+label+'}}\n')
else:
return ('\\rput('+self.coors.strphys2frameX(value)+', 0){\\rput[B](0,-2.3\\labelsep){\\strut{}'+label+'}}\n')
def draw_majortick(self, ticklabel, twosided):
out = ''
out += '\\rput('+self.coors.strphys2frameX(ticklabel)+', 0){\\majortickmarkx}\n'
if twosided:
out += '\\rput{180}('+self.coors.strphys2frameX(ticklabel)+', 1){\\majortickmarkx}\n'
return out
def draw_majorticklabel(self, value, label=''):
if not label:
label = self.get_ticklabel(value, int(self.description['LogX']) and self.coors.xmin() > 0 and self.coors.xmax() > 0)
labelparts = label.split("\\n")
labelcode = label if len(labelparts) == 1 else ("\\shortstack{" + "\\\\ ".join(labelparts) + "}")
rtn = "\\rput(" + self.coors.strphys2frameX(value) + ", 0){\\rput[t](0,-\\labelsep){" + labelcode + "}}\n"
return rtn
class YTicks(Ticks):
def draw(self, custommajorticks=None, customminorticks=None, custommajortickmarks=-1, customminortickmarks=-1, drawlabels=True):
twosided = bool(int(self.description.get('YTwosidedTicks', '0')))
out = ""
out += ('\n%\n% Y-Ticks\n%\n')
out += ('\\def\\majortickmarky{\\psline[linewidth=%s](0,0)(%s,0)}%%\n' % (self.majorticklinewidth, self.majorticklength))
out += ('\\def\\minortickmarky{\\psline[linewidth=%s](0,0)(%s,0)}%%\n' % (self.minorticklinewidth, self.minorticklength))
uselog = self.description['LogY'] and self.coors.ymin() > 0 and self.coors.ymax() > 0
out += self.draw_ticks(self.coors.ymin(), self.coors.ymax(),
plotlog=uselog,
custommajorticks=custommajorticks,
customminorticks=customminorticks,
custommajortickmarks=custommajortickmarks,
customminortickmarks=customminortickmarks,
twosided=twosided,
drawlabels=drawlabels)
return out
def draw_minortick(self, ticklabel, twosided):
out = ''
out += '\\rput(0, '+self.coors.strphys2frameY(ticklabel)+'){\\minortickmarky}\n'
if twosided:
out += '\\rput{180}(1, '+self.coors.strphys2frameY(ticklabel)+'){\\minortickmarky}\n'
return out
def draw_majortick(self, ticklabel, twosided):
out = ''
out += '\\rput(0, '+self.coors.strphys2frameY(ticklabel)+'){\\majortickmarky}\n'
if twosided:
out += '\\rput{180}(1, '+self.coors.strphys2frameY(ticklabel)+'){\\majortickmarky}\n'
return out
def draw_majorticklabel(self, value, label=''):
if not label:
label = self.get_ticklabel(value, int(self.description['LogY']) and self.coors.ymin() > 0 and self.coors.ymax() > 0)
if self.description.get('RatioPlotMode', 'mcdata') == 'deviation' and self.description.get('RatioPlotStage'):
rtn = '\\uput[180]{0}(0, '+self.coors.strphys2frameY(value)+'){\\strut{}'+label+'\\,$\\sigma$}\n'
else:
labelparts = label.split("\\n")
labelcode = label if len(labelparts) == 1 else ("\\shortstack{" + "\\\\ ".join(labelparts) + "}")
rtn = "\\rput(0, " + self.coors.strphys2frameY(value) + "){\\rput[r](-\\labelsep,0){" + labelcode + "}}\n"
return rtn
class ZTicks(Ticks):
def __init__(self, description, coors):
self.majorticklinewidth = '0.3pt'
self.minorticklinewidth = '0.3pt'
self.majorticklength = '6pt'
self.minorticklength = '2.6pt'
self.description = description
self.coors = coors
def draw(self, custommajorticks=None, customminorticks=None, custommajortickmarks=-1, customminortickmarks=-1, drawlabels=True):
out = ""
out += ('\n%\n% Z-Ticks\n%\n')
out += ('\\def\\majortickmarkz{\\psline[linewidth='+self.majorticklinewidth+'](0,0)('+self.majorticklength+',0)}%\n')
out += ('\\def\\minortickmarkz{\\psline[linewidth='+self.minorticklinewidth+'](0,0)('+self.minorticklength+',0)}%\n')
out += self.draw_ticks(self.coors.zmin(), self.coors.zmax(),\
plotlog=self.description['LogZ'],\
custommajorticks=custommajorticks,\
customminorticks=customminorticks,\
custommajortickmarks=custommajortickmarks,\
customminortickmarks=customminortickmarks,\
twosided=False,\
drawlabels=drawlabels)
return out
def draw_minortick(self, ticklabel, twosided):
return '\\rput{180}(1, '+self.coors.strphys2frameZ(ticklabel)+'){\\minortickmarkz}\n'
def draw_majortick(self, ticklabel, twosided):
return '\\rput{180}(1, '+self.coors.strphys2frameZ(ticklabel)+'){\\majortickmarkz}\n'
def draw_majorticklabel(self, value, label=''):
if label=='':
label = self.get_ticklabel(value, int(self.description['LogZ']))
if self.description.get('RatioPlotMode', "mcdata") == 'deviation' and self.description.get('RatioPlotStage'):
return ('\\uput[0]{0}(1, '+self.coors.strphys2frameZ(value)+'){\\strut{}'+label+'\\,$\\sigma$}\n')
else:
return ('\\uput[0]{0}(1, '+self.coors.strphys2frameZ(value)+'){\\strut{}'+label+'}\n')
class Coordinates(object):
def __init__(self, inputdata):
self.description = inputdata.description
def phys2frameX(self, x):
if self.description['LogX']:
if x>0:
result = 1.*(log10(x)-log10(self.xmin()))/(log10(self.xmax())-log10(self.xmin()))
else:
return -10
else:
result = 1.*(x-self.xmin())/(self.xmax()-self.xmin())
if (fabs(result) < 1e-4):
return 0
else:
return min(max(result,-10),10)
def phys2frameY(self, y):
if self.description['LogY']:
if y > 0 and self.ymin() > 0 and self.ymax() > 0:
result = 1.*(log10(y)-log10(self.ymin()))/(log10(self.ymax())-log10(self.ymin()))
else:
return -10
else:
result = 1.*(y-self.ymin())/(self.ymax()-self.ymin())
if (fabs(result) < 1e-4):
return 0
else:
return min(max(result,-10),10)
def phys2frameZ(self, z):
if self.description['LogZ']:
if z>0:
result = 1.*(log10(z)-log10(self.zmin()))/(log10(self.zmax())-log10(self.zmin()))
else:
return -10
else:
result = 1.*(z-self.zmin())/(self.zmax()-self.zmin())
if (fabs(result) < 1e-4):
return 0
else:
return min(max(result,-10),10)
# TODO: Add frame2phys functions (to allow linear function sampling in the frame space rather than the physical space)
def strphys2frameX(self, x):
return str(self.phys2frameX(x))
def strphys2frameY(self, y):
return str(self.phys2frameY(y))
def strphys2frameZ(self, z):
return str(self.phys2frameZ(z))
def xmin(self):
return self.description['Borders'][0]
def xmax(self):
return self.description['Borders'][1]
def ymin(self):
return self.description['Borders'][2]
def ymax(self):
return self.description['Borders'][3]
def zmin(self):
return self.description['Borders'][4]
def zmax(self):
return self.description['Borders'][5]
####################
import shutil, subprocess
def try_cmd(args):
"Run the given command + args and return True if it succeeds, else False"
try:
subprocess.check_output(args, stderr=subprocess.STDOUT)
return True
except Exception:
return False
def have_cmd(cmd):
return try_cmd(["which", cmd])
####################
if __name__ == '__main__':
## Try to rename the process on Linux
try:
import ctypes
libc = ctypes.cdll.LoadLibrary('libc.so.6')
libc.prctl(15, b'make-plots', 0, 0, 0)
except Exception:
pass
## Try to use Psyco optimiser
try:
import psyco
psyco.full()
except ImportError:
pass
## Find number of (virtual) processing units
import multiprocessing
try:
numcores = multiprocessing.cpu_count()
except NotImplementedError:
numcores = 1
## Parse command line options
- from optparse import OptionParser, OptionGroup
- parser = OptionParser(usage=__doc__)
- parser.add_option("-j", "-n", "--num-threads", dest="NUM_THREADS", type="int",
- default=numcores, help="max number of threads to be used [%s]" % numcores)
- parser.add_option("-o", "--outdir", dest="OUTPUT_DIR", default=None,
- help="choose the output directory (default = .dat dir)")
- parser.add_option("--font", dest="OUTPUT_FONT", choices="palatino,cm,times,helvetica,minion".split(","),
- default="palatino", help="choose the font to be used in the plots")
- parser.add_option("--palatino", dest="OUTPUT_FONT", action="store_const", const="palatino", default="palatino",
- help="use Palatino as font (default). DEPRECATED: Use --font")
- parser.add_option("--cm", dest="OUTPUT_FONT", action="store_const", const="cm", default="palatino",
- help="use Computer Modern as font. DEPRECATED: Use --font")
- parser.add_option("--times", dest="OUTPUT_FONT", action="store_const", const="times", default="palatino",
- help="use Times as font. DEPRECATED: Use --font")
- parser.add_option("--minion", dest="OUTPUT_FONT", action="store_const", const="minion", default="palatino",
- help="use Adobe Minion Pro as font. Note: You need to set TEXMFHOME first. DEPRECATED: Use --font")
- parser.add_option("--helvetica", dest="OUTPUT_FONT", action="store_const", const="helvetica", default="palatino",
- help="use Helvetica as font. DEPRECATED: Use --font")
- parser.add_option("-f", "--format", dest="OUTPUT_FORMAT", default="PDF",
- help="choose plot format, perhaps multiple comma-separated formats e.g. 'pdf' or 'tex,pdf,png' (default = PDF).")
- parser.add_option("--ps", dest="OUTPUT_FORMAT", action="store_const", const="PS", default="PDF",
- help="create PostScript output (default). DEPRECATED")
- parser.add_option("--pdf", dest="OUTPUT_FORMAT", action="store_const", const="PDF", default="PDF",
- help="create PDF output. DEPRECATED")
- parser.add_option("--eps", dest="OUTPUT_FORMAT", action="store_const", const="EPS", default="PDF",
- help="create Encapsulated PostScript output. DEPRECATED")
- parser.add_option("--png", dest="OUTPUT_FORMAT", action="store_const", const="PNG", default="PDF",
- help="create PNG output. DEPRECATED")
- parser.add_option("--pspng", dest="OUTPUT_FORMAT", action="store_const", const="PS,PNG", default="PDF",
- help="create PS and PNG output. DEPRECATED")
- parser.add_option("--pdfpng", dest="OUTPUT_FORMAT", action="store_const", const="PDF,PNG", default="PDF",
- help="create PDF and PNG output. DEPRECATED")
- parser.add_option("--epspng", dest="OUTPUT_FORMAT", action="store_const", const="EPS,PNG", default="PDF",
- help="create EPS and PNG output. DEPRECATED")
- parser.add_option("--tex", dest="OUTPUT_FORMAT", action="store_const", const="TEX", default="PDF",
- help="create TeX/LaTeX output.")
- parser.add_option("--no-cleanup", dest="NO_CLEANUP", action="store_true", default=False,
- help="keep temporary directory and print its filename.")
- parser.add_option("--no-subproc", dest="NO_SUBPROC", action="store_true", default=False,
- help="don't use subprocesses to render the plots in parallel -- useful for debugging.")
- parser.add_option("--full-range", dest="FULL_RANGE", action="store_true", default=False,
- help="plot full y range in log-y plots.")
- parser.add_option("-c", "--config", dest="CONFIGFILES", action="append", default=None,
- help="plot config file to be used. Overrides internal config blocks.")
- verbgroup = OptionGroup(parser, "Verbosity control")
- verbgroup.add_option("-v", "--verbose", action="store_const", const=logging.DEBUG, dest="LOGLEVEL",
+ import argparse
+ parser = argparse.ArgumentParser(usage=__doc__)
+ parser.add_argument("DATFILES", nargs="+", help=".dat files to plot")
+ parser.add_argument("-j", "-n", "--num-threads", dest="NUM_THREADS", type=int,
+ default=numcores, help="max number of threads to be used [%s]" % numcores)
+ parser.add_argument("-o", "--outdir", dest="OUTPUT_DIR", default=None,
+ help="choose the output directory (default = .dat dir)")
+ parser.add_argument("--font", dest="OUTPUT_FONT", choices="palatino,cm,times,helvetica,minion".split(","),
+ default="palatino", help="choose the font to be used in the plots")
+ parser.add_argument("--palatino", dest="OUTPUT_FONT", action="store_const", const="palatino", default="palatino",
+ help="use Palatino as font (default). DEPRECATED: Use --font")
+ parser.add_argument("--cm", dest="OUTPUT_FONT", action="store_const", const="cm", default="palatino",
+ help="use Computer Modern as font. DEPRECATED: Use --font")
+ parser.add_argument("--times", dest="OUTPUT_FONT", action="store_const", const="times", default="palatino",
+ help="use Times as font. DEPRECATED: Use --font")
+ parser.add_argument("--minion", dest="OUTPUT_FONT", action="store_const", const="minion", default="palatino",
+ help="use Adobe Minion Pro as font. Note: You need to set TEXMFHOME first. DEPRECATED: Use --font")
+ parser.add_argument("--helvetica", dest="OUTPUT_FONT", action="store_const", const="helvetica", default="palatino",
+ help="use Helvetica as font. DEPRECATED: Use --font")
+ parser.add_argument("-f", "--format", dest="OUTPUT_FORMAT", default="PDF",
+ help="choose plot format, perhaps multiple comma-separated formats e.g. 'pdf' or 'tex,pdf,png' (default = PDF).")
+ parser.add_argument("--ps", dest="OUTPUT_FORMAT", action="store_const", const="PS", default="PDF",
+ help="create PostScript output (default). DEPRECATED")
+ parser.add_argument("--pdf", dest="OUTPUT_FORMAT", action="store_const", const="PDF", default="PDF",
+ help="create PDF output. DEPRECATED")
+ parser.add_argument("--eps", dest="OUTPUT_FORMAT", action="store_const", const="EPS", default="PDF",
+ help="create Encapsulated PostScript output. DEPRECATED")
+ parser.add_argument("--png", dest="OUTPUT_FORMAT", action="store_const", const="PNG", default="PDF",
+ help="create PNG output. DEPRECATED")
+ parser.add_argument("--pspng", dest="OUTPUT_FORMAT", action="store_const", const="PS,PNG", default="PDF",
+ help="create PS and PNG output. DEPRECATED")
+ parser.add_argument("--pdfpng", dest="OUTPUT_FORMAT", action="store_const", const="PDF,PNG", default="PDF",
+ help="create PDF and PNG output. DEPRECATED")
+ parser.add_argument("--epspng", dest="OUTPUT_FORMAT", action="store_const", const="EPS,PNG", default="PDF",
+ help="create EPS and PNG output. DEPRECATED")
+ parser.add_argument("--tex", dest="OUTPUT_FORMAT", action="store_const", const="TEX", default="PDF",
+ help="create TeX/LaTeX output.")
+ parser.add_argument("--no-cleanup", dest="NO_CLEANUP", action="store_true", default=False,
+ help="keep temporary directory and print its filename.")
+ parser.add_argument("--no-subproc", dest="NO_SUBPROC", action="store_true", default=False,
+ help="don't use subprocesses to render the plots in parallel -- useful for debugging.")
+ parser.add_argument("--full-range", dest="FULL_RANGE", action="store_true", default=False,
+ help="plot full y range in log-y plots.")
+ parser.add_argument("-c", "--config", dest="CONFIGFILES", action="append", default=None,
+ help="plot config file to be used. Overrides internal config blocks.")
+ verbgroup = parser.add_argument_group("Verbosity control")
+ verbgroup.add_argument("-v", "--verbose", action="store_const", const=logging.DEBUG, dest="LOGLEVEL",
default=logging.INFO, help="print debug (very verbose) messages")
- verbgroup.add_option("-q", "--quiet", action="store_const", const=logging.WARNING, dest="LOGLEVEL",
+ verbgroup.add_argument("-q", "--quiet", action="store_const", const=logging.WARNING, dest="LOGLEVEL",
default=logging.INFO, help="be very quiet")
- parser.add_option_group(verbgroup)
- opts, args = parser.parse_args()
+ args = parser.parse_args()
## Tweak the opts output
- logging.basicConfig(level=opts.LOGLEVEL, format="%(message)s")
- opts.OUTPUT_FONT = opts.OUTPUT_FONT.upper()
- opts.OUTPUT_FORMAT = opts.OUTPUT_FORMAT.upper().split(",")
- if opts.NUM_THREADS == 1:
- opts.NO_SUBPROC = True
+ logging.basicConfig(level=args.LOGLEVEL, format="%(message)s")
+ args.OUTPUT_FONT = args.OUTPUT_FONT.upper()
+ args.OUTPUT_FORMAT = args.OUTPUT_FORMAT.upper().split(",")
+ if args.NUM_THREADS == 1:
+ args.NO_SUBPROC = True
## Check for no args
- if len(args) == 0:
+ if len(args.DATFILES) == 0:
-        logging.error(parser.get_usage())
+        logging.error(parser.format_usage())
sys.exit(2)
## Check that the files exist
- for f in args:
+ for f in args.DATFILES:
if not os.access(f, os.R_OK):
print("Error: cannot read from %s" % f)
sys.exit(1)
## Test for external programs (kpsewhich, latex, dvips, ps2pdf/ps2eps, and convert)
- opts.LATEXPKGS = []
- if opts.OUTPUT_FORMAT != ["TEX"]:
+ args.LATEXPKGS = []
+ if args.OUTPUT_FORMAT != ["TEX"]:
try:
## latex
if not have_cmd("pdflatex"):
logging.error("ERROR: required program 'latex' could not be found. Exiting...")
sys.exit(1)
# ## dvips
# if not have_cmd("dvips"):
# logging.error("ERROR: required program 'dvips' could not be found. Exiting...")
# sys.exit(1)
# ## ps2pdf / ps2eps
- # if "PDF" in opts.OUTPUT_FORMAT:
+ # if "PDF" in args.OUTPUT_FORMAT:
# if not have_cmd("ps2pdf"):
# logging.error("ERROR: required program 'ps2pdf' (for PDF output) could not be found. Exiting...")
# sys.exit(1)
- # elif "EPS" in opts.OUTPUT_FORMAT:
+ # elif "EPS" in args.OUTPUT_FORMAT:
# if not have_cmd("ps2eps"):
# logging.error("ERROR: required program 'ps2eps' (for EPS output) could not be found. Exiting...")
# sys.exit(1)
## PNG output converter
- if "PNG" in opts.OUTPUT_FORMAT:
+ if "PNG" in args.OUTPUT_FORMAT:
if not have_cmd("convert"):
logging.error("ERROR: required program 'convert' (for PNG output) could not be found. Exiting...")
sys.exit(1)
## kpsewhich: required for LaTeX package testing
if not have_cmd("kpsewhich"):
logging.warning("WARNING: required program 'kpsewhich' (for LaTeX package checks) could not be found")
else:
## Check minion font
- if opts.OUTPUT_FONT == "MINION":
+ if args.OUTPUT_FONT == "MINION":
p = subprocess.Popen(["kpsewhich", "minion.sty"], stdout=subprocess.PIPE)
p.wait()
if p.returncode != 0:
logging.warning('Warning: Using "--minion" requires minion.sty to be installed. Ignoring it.')
- opts.OUTPUT_FONT = "PALATINO"
+ args.OUTPUT_FONT = "PALATINO"
## Check for HEP LaTeX packages
# TODO: remove HEP-specifics/non-standards?
for pkg in ["hepnames", "hepunits", "underscore"]:
p = subprocess.Popen(["kpsewhich", "%s.sty" % pkg], stdout=subprocess.PIPE)
p.wait()
if p.returncode == 0:
- opts.LATEXPKGS.append(pkg)
+ args.LATEXPKGS.append(pkg)
## Check for Palatino old style figures and small caps
- if opts.OUTPUT_FONT == "PALATINO":
+ if args.OUTPUT_FONT == "PALATINO":
p = subprocess.Popen(["kpsewhich", "ot1pplx.fd"], stdout=subprocess.PIPE)
p.wait()
if p.returncode == 0:
- opts.OUTPUT_FONT = "PALATINO_OSF"
+ args.OUTPUT_FONT = "PALATINO_OSF"
except Exception as e:
logging.warning("Problem while testing for external packages. I'm going to try and continue without testing, but don't hold your breath...")
# def init_worker():
# import signal
# signal.signal(signal.SIGINT, signal.SIG_IGN)
## Run rendering jobs
- datfiles = args
+ datfiles = args.DATFILES
plotword = "plots" if len(datfiles) > 1 else "plot"
logging.info("Making %d %s" % (len(datfiles), plotword))
## Create a temporary directory
tempdir = tempfile.mkdtemp('.make-plots')
- if opts.NO_CLEANUP:
+ if args.NO_CLEANUP:
logging.info('Keeping temp-files in %s' % tempdir)
## Create TeX file
texpath = os.path.join(tempdir, 'plots.tex')
texfile = open(texpath, 'w')
# if inputdata.description.get('LeftMargin', '') != '':
# inputdata.description['LeftMargin'] = float(inputdata.description['LeftMargin'])
# else:
# inputdata.description['LeftMargin'] = 1.4
# if inputdata.description.get('RightMargin', '') != '':
# inputdata.description['RightMargin'] = float(inputdata.description['RightMargin'])
# else:
# inputdata.description['RightMargin'] = 0.35
# if inputdata.description.get('TopMargin', '') != '':
# inputdata.description['TopMargin'] = float(inputdata.description['TopMargin'])
# else:
# inputdata.description['TopMargin'] = 0.65
# if inputdata.description.get('BottomMargin', '') != '':
# inputdata.description['BottomMargin'] = float(inputdata.description['BottomMargin'])
# else:
# inputdata.description['BottomMargin'] = 0.95
# if inputdata.description['is2dim']:
# inputdata.description['RightMargin'] += 1.7
# papersizex = inputdata.description['PlotSizeX'] + 0.1 + \
# inputdata.description['LeftMargin'] + inputdata.description['RightMargin']
# papersizey = inputdata.description['PlotSizeY'] + inputdata.description['RatioPlotSizeY'] + 0.1 + \
# inputdata.description['TopMargin'] + inputdata.description['BottomMargin']
out = ""
# out += '\\documentclass{article}\n'
# out += '\\documentclass[pstricks,multi]{standalone}\n'
out += '\\documentclass[multi=multipage,border=5]{standalone}\n'
- if opts.OUTPUT_FONT == "MINION":
+ if args.OUTPUT_FONT == "MINION":
out += ('\\usepackage{minion}\n')
- elif opts.OUTPUT_FONT == "PALATINO_OSF":
+ elif args.OUTPUT_FONT == "PALATINO_OSF":
out += ('\\usepackage[osf,sc]{mathpazo}\n')
- elif opts.OUTPUT_FONT == "PALATINO":
+ elif args.OUTPUT_FONT == "PALATINO":
out += ('\\usepackage{mathpazo}\n')
- elif opts.OUTPUT_FONT == "TIMES":
+ elif args.OUTPUT_FONT == "TIMES":
out += ('\\usepackage{mathptmx}\n')
- elif opts.OUTPUT_FONT == "HELVETICA":
+ elif args.OUTPUT_FONT == "HELVETICA":
out += ('\\renewcommand{\\familydefault}{\\sfdefault}\n')
out += ('\\usepackage{sfmath}\n')
out += ('\\usepackage{helvet}\n')
out += ('\\usepackage[symbolgreek]{mathastext}\n')
- for pkg in opts.LATEXPKGS:
+ for pkg in args.LATEXPKGS:
out += ('\\usepackage{%s}\n' % pkg)
out += ('\\usepackage{pst-all}\n')
out += ('\\usepackage{xcolor}\n')
out += ('\\selectcolormodel{rgb}\n')
out += ('\\definecolor{red}{HTML}{EE3311}\n') # (Google uses 'DC3912')
out += ('\\definecolor{blue}{HTML}{3366FF}')
out += ('\\definecolor{green}{HTML}{109618}')
out += ('\\definecolor{orange}{HTML}{FF9900}')
out += ('\\definecolor{lilac}{HTML}{990099}')
out += ('\\usepackage{amsmath}\n')
out += ('\\usepackage{amssymb}\n')
out += ('\\usepackage{relsize}\n')
# out += ('\\usepackage[dvips,\n')
# out += (' left=%4.3fcm, right=0cm,\n' % (inputdata.description['LeftMargin']-0.45,))
# out += (' top=%4.3fcm, bottom=0cm,\n' % (inputdata.description['TopMargin']-0.30,))
# out += (' paperwidth=%scm,paperheight=%scm\n' % (papersizex,papersizey))
# out += (']{geometry}\n')
# out += ('\\usepackage[pdfcrop={--margins 10}]{auto-pst-pdf}\n')
out += ('\\usepackage{auto-pst-pdf}\n')
out += '\n'
out += ('\\begin{document}\n')
#out += ('\\pagestyle{empty}\n')
out += ('\\SpecialCoor\n')
texfile.write(out)
## Process each datfile into the TeX doc
filenames = []
for i, datfile in enumerate(datfiles):
if os.path.splitext(datfile)[1] != ".dat":
raise Exception("Data file '%s' is not a make-plots .dat file" % datfile)
if not os.access(datfile, os.R_OK):
raise Exception("Could not read data file '%s'" % datfile)
## Get std paths
datpath = os.path.abspath(datfile)
datfile = os.path.basename(datpath)
datdir = os.path.dirname(datpath)
- outdir = opts.OUTPUT_DIR if opts.OUTPUT_DIR else datdir
+ outdir = args.OUTPUT_DIR if args.OUTPUT_DIR else datdir
filename = datfile.replace('.dat','')
filenames.append(filename)
## Copy datfile into tempdir
cwd = os.getcwd()
tempdatpath = os.path.join(tempdir, datfile)
shutil.copy(datpath, tempdir)
## Append TeX to file
inputdata = InputData(datpath)
p = Plot(inputdata)
texfile.write("\n\n")
texfile.write(p.write_header(inputdata))
if inputdata.attr_bool("MainPlot", True):
mp = MainPlot(inputdata)
texfile.write(mp.draw(inputdata))
if not inputdata.attr_bool("is2dim", False) and inputdata.attr_bool("RatioPlot", True) and inputdata.attr("RatioPlotReference"): # is not None:
rp = RatioPlot(inputdata)
texfile.write(rp.draw(inputdata))
texfile.write(p.write_footer())
texfile.write('\\end{document}\n')
texfile.close()
- if opts.OUTPUT_FORMAT != ["TEX"]:
+ if args.OUTPUT_FORMAT != ["TEX"]:
## Change into the temp dir
os.chdir(tempdir)
## Check for the required programs
pdflatexavailable = have_cmd("pdflatex")
pdftkavailable = have_cmd("pdftk")
convertavailable = have_cmd("convert")
def mkpdf(infile, outfile=None):
"Run pdfLaTeX (in non-stop mode)"
if not pdflatexavailable:
raise Exception("Required pdflatex not found")
#logging.debug(os.listdir("."))
texcmd = ["pdflatex", "-shell-escape", "\scrollmode\input", texpath]
logging.debug("TeX command: " + " ".join(texcmd))
texproc = subprocess.Popen(texcmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT) #, cwd=tempdir)
logging.debug(texproc.communicate()[0])
#texproc.wait()
rawoutfile = infile.replace(".tex", ".pdf")
if outfile:
logging.debug("Pre-move temp dir contents = ", os.listdir("."))
shutil.move(rawoutfile, outfile)
else:
outfile = rawoutfile
logging.debug("Temp dir contents = ", os.listdir("."))
return outfile
def splitpdf(infile, outfiles=None):
"Split a PDF into a PDF per page, and rename if target names are given"
if not pdftkavailable:
raise Exception("Required PDF splitter (pdftk) not found")
ptkcmd = ["pdftk", "plots.pdf", "burst", "output", "plots-%d.pdf"]
logging.debug("PDF split command = " + " ".join(ptkcmd))
ptkproc = subprocess.Popen(ptkcmd, stdout=subprocess.PIPE) #, cwd=tempdir)
ptkproc.wait()
from glob import glob
# picsfile = os.path.join(tempdir, "plots-pics.pdf")
picsfile = "plots-pics.pdf"
if os.path.exists(picsfile):
os.remove(picsfile)
#print ['{0:0>10}'.format(os.path.basename(x).replace("plots-", "")) for x in glob("plots-*.pdf")]
rawoutfiles = sorted(glob("plots-*.pdf"), key=lambda x: '{0:0>10}'.format(x.replace("plots-", "")))
if outfiles:
#print len(rawoutfiles), rawoutfiles
#print len(outfiles), outfiles
assert len(rawoutfiles) == len(outfiles)
logging.debug("Pre-move temp dir contents = ", os.listdir("."))
for (tmp, final) in zip(rawoutfiles, outfiles):
shutil.move(tmp, final)
else:
outfiles = rawoutfiles
logging.debug("Temp dir contents = ", os.listdir("."))
return outfiles
def mkpng(infile, outfile=None, density=100):
"Convert a PDF to PNG format"
if not convertavailable:
raise Exception("Required PNG maker program (convert) not found")
if not outfile:
outfile = infile.replace(".pdf", ".png")
pngcmd = ["convert", "-flatten", "-density", str(density), infile, "-quality", "100", "-sharpen", "0x1.0", outfile]
logging.debug("PDF -> PNG command = " + " ".join(pngcmd))
pngproc = subprocess.Popen(pngcmd, stdout=subprocess.PIPE) #, cwd=tempdir)
pngproc.wait()
logging.debug("Temp dir contents = ", os.listdir("."))
return outfile
## Make the aggregated PDF and split it to the correct names
pdf = mkpdf("plots.tex")
pdfs = splitpdf(pdf, [x+".pdf" for x in filenames])
## Convert the PDFs to PNGs if requested
- if "PNG" in opts.OUTPUT_FORMAT:
+ if "PNG" in args.OUTPUT_FORMAT:
for p in pdfs:
mkpng(p)
## Copy results back to main dir
logging.debug("Temp dir contents = ", os.listdir(tempdir))
- for fmt in opts.OUTPUT_FORMAT:
+ for fmt in args.OUTPUT_FORMAT:
for filename in filenames:
outname = "%s.%s" % (filename, fmt.lower())
outpath = os.path.join(tempdir, outname)
if os.path.exists(outpath):
shutil.copy(outpath, outdir)
else:
logging.error("No output file '%s'" % outname) # from processing %s" % (outname, datfile))
## Clean up
- if not opts.NO_CLEANUP:
+ if not args.NO_CLEANUP:
shutil.rmtree(tempdir, ignore_errors=True)
- # if opts.NO_SUBPROC:
+ # if args.NO_SUBPROC:
# init_worker()
# for i, df in enumerate(datfiles):
# logging.info("Plotting %s (%d/%d remaining)" % (df, len(datfiles)-i, len(datfiles)))
# process_datfile(df)
# else:
- # pool = multiprocessing.Pool(opts.NUM_THREADS, init_worker)
+ # pool = multiprocessing.Pool(args.NUM_THREADS, init_worker)
# try:
# for i, _ in enumerate(pool.imap(process_datfile, datfiles)):
# logging.info("Plotting %s (%d/%d remaining)" % (datfiles[i], len(datfiles)-i, len(datfiles)))
# pool.close()
# except KeyboardInterrupt:
# print "Caught KeyboardInterrupt, terminating workers"
# pool.terminate()
# pool.join()
diff --git a/bin/rivet-findid b/bin/rivet-findid
--- a/bin/rivet-findid
+++ b/bin/rivet-findid
@@ -1,191 +1,187 @@
#! /usr/bin/env python
-"""%prog ID [ID ...]
+"""%(prog)s ID [ID ...]
-%prog -- paper ID lookup helper for Rivet
+%(prog)s -- paper ID lookup helper for Rivet
Looks up the Rivet analysis and other ID formats matching the given ID.
Arguments:
ID A paper ID in one of the following formats
- arXiv: yymm.nnnnn
- arXiv: foo-bar/yymmnnn
- SPIRES: [S]nnnnnnn
- Inspire: [I]nnnnnn[n]"""
from __future__ import print_function
import rivet, sys, os, re
rivet.util.check_python_version()
rivet.util.set_process_name(os.path.basename(__file__))
def main():
## Handle command line args
- import optparse
- op = optparse.OptionParser(usage=__doc__)
- opts, args = op.parse_args()
- if not args:
- op.print_help()
- exit(1)
-
+ import argparse
+ parser = argparse.ArgumentParser(usage=__doc__)
+ parser.add_argument("IDCODES", nargs="+", help="IDs to look up")
+ args = parser.parse_args()
## Set up some variables before the loop over args
arxiv_pattern = re.compile('^\d\d[01]\d\.\d{4,5}$|^(hep-(ex|ph|th)|nucl-ex)/\d\d[01]\d{4}$')
spires_pattern = re.compile('^(S|I)?(\d{6}\d?)$')
-
## Loop over requested IDs
- for N, id in enumerate(args):
+ for N, id in enumerate(args.IDCODES):
a_match = arxiv_pattern.match(id)
s_match = spires_pattern.match(id)
RESULT = {}
if a_match:
RESULT = try_arxiv(id)
elif s_match:
prefix = s_match.group(1)
number = s_match.group(2)
if prefix == 'S' and len(number) == 7:
RESULT = try_spires(number)
elif prefix == 'I':
RESULT = try_inspire(number)
else:
if len(number) == 7:
RESULT = try_spires(number)
RESULT.update( try_inspire(number) )
else:
sys.stderr.write('error Pattern %s does not match any known ID pattern.\n' % id)
continue
rivet_candidates = []
if 'inspire' in RESULT:
rivet_candidates += try_rivet('I'+RESULT['inspire'])
if not rivet_candidates and 'spires' in RESULT:
rivet_candidates += try_rivet('S'+RESULT['spires'])
if rivet_candidates:
RESULT['rivet'] = rivet_candidates[0]
if N > 0:
print()
output(RESULT)
def output(result):
if not result.get('title'):
return
print('title %s' % result['title'])
ar = result.get('arxiv')
if ar:
print('arxiv %s' % ar)
print('arxiv_url http://arxiv.org/abs/%s' % ar)
sp = result.get('spires')
if sp:
print('spires %s' % sp)
insp = result.get('inspire')
if insp:
print('inspire %s' % insp)
print('inspire_url http://inspirehep.net/record/%s' % insp)
tex = result.get('bibtex')
if tex:
print('bibtex %s' % tex)
riv = result.get('rivet')
if riv:
print('rivet %s' % riv)
def try_arxiv(id):
url = 'http://inspirehep.net/search?p=eprint+%s&of=xm' % id
ret = _search_inspire(url)
if ret.get('arxiv') == id:
return ret
else:
return {}
def try_spires(id):
url = 'http://inspirehep.net/search?p=key+%s&of=xm' % id
ret = _search_inspire(url)
if ret.get('spires') == id:
return ret
else:
return {}
def try_inspire(id):
url = 'http://inspirehep.net/record/%s/export/xm' % id
ret = _search_inspire(url)
if ret.get('inspire') == id:
return ret
else:
return {}
def try_rivet(id):
id = re.compile(id)
import rivet
ALL_ANALYSES = rivet.AnalysisLoader.analysisNames()
return filter(id.search, ALL_ANALYSES)
def _search_inspire(url):
result = {}
try:
from urllib.request import urlopen
except ImportError:
from urllib2 import urlopen
urlstream = urlopen(url)
ET = rivet.util.import_ET()
tree = ET.parse(urlstream)
for i in tree.getiterator('{http://www.loc.gov/MARC21/slim}controlfield'):
if i.get('tag') == '001':
result['inspire'] = i.text
for i in tree.getiterator('{http://www.loc.gov/MARC21/slim}datafield'):
if i.get('tag') == '035':
entries = {}
for c in i.getchildren():
for k,v in c.items():
if k=='code':
entries[v] = c.text
if entries.get('9') == 'SPIRESTeX':
result['bibtex'] = entries['a']
if i.get('tag') == '037':
entries = {}
for c in i.getchildren():
for k,v in c.items():
if k=='code':
entries[v] = c.text
if entries.get('9') == 'arXiv':
result['arxiv'] = entries['a'].replace('arXiv:','')
elif i.get('tag') == '970':
for c in i.getchildren():
if c.text[:7] == 'SPIRES-':
result['spires'] = c.text[7:]
elif i.get('tag') == '245':
for c in i.getchildren():
result['title'] = c.text
return result
if __name__ == "__main__":
main()
diff --git a/bin/rivet-mkanalysis b/bin/rivet-mkanalysis
--- a/bin/rivet-mkanalysis
+++ b/bin/rivet-mkanalysis
@@ -1,375 +1,375 @@
#! /usr/bin/env python
"""\
-%prog: make templates of analysis source files for Rivet
+%(prog)s: make templates of analysis source files for Rivet
-Usage: %prog [--help|-h] [--srcroot=<srcrootdir>] <analysisname>
+Usage: %(prog)s [--help|-h] [--srcroot=<srcrootdir>] <analysisname>
Without the --srcroot flag, the analysis files will be created in the current
directory.
"""
import rivet, sys, os
rivet.util.check_python_version()
rivet.util.set_process_name(os.path.basename(__file__))
import logging
-
## Handle command line
-from optparse import OptionParser
-parser = OptionParser(usage=__doc__)
-parser.add_option("--srcroot", metavar="DIR", dest="SRCROOT", default=None,
- help="install the templates into the Rivet source tree (rooted " +
- "at directory DIR) rather than just creating all in the current dir")
-parser.add_option("-q", "--quiet", dest="LOGLEVEL", default=logging.INFO,
- action="store_const", const=logging.WARNING, help="only write out warning and error messages")
-parser.add_option("-v", "--verbose", dest="LOGLEVEL", default=logging.INFO,
- action="store_const", const=logging.DEBUG, help="provide extra debugging messages")
-parser.add_option("-i", "--inline-info", dest="INLINE", action="store_true",
- default=False, help="Put analysis info into source file instead of separate data file.")
-opts, args = parser.parse_args()
-logging.basicConfig(format="%(msg)s", level=opts.LOGLEVEL)
-ANANAMES = args
+import argparse
+parser = argparse.ArgumentParser(usage=__doc__)
+parser.add_argument("ANANAMES", nargs="+", help="names of analyses to make")
+parser.add_argument("--srcroot", metavar="DIR", dest="SRCROOT", default=None,
+ help="install the templates into the Rivet source tree (rooted " +
+ "at directory DIR) rather than just creating all in the current dir")
+parser.add_argument("-q", "--quiet", dest="LOGLEVEL", default=logging.INFO,
+ action="store_const", const=logging.WARNING, help="only write out warning and error messages")
+parser.add_argument("-v", "--verbose", dest="LOGLEVEL", default=logging.INFO,
+ action="store_const", const=logging.DEBUG, help="provide extra debugging messages")
+parser.add_argument("-i", "--inline-info", dest="INLINE", action="store_true",
+ default=False, help="Put analysis info into source file instead of separate data file.")
+args = parser.parse_args()
+logging.basicConfig(format="%(message)s", level=args.LOGLEVEL)
+ANANAMES = args.ANANAMES
## Work out installation paths
-ANAROOT = os.path.abspath(opts.SRCROOT or os.getcwd())
+ANAROOT = os.path.abspath(args.SRCROOT or os.getcwd())
if not os.access(ANAROOT, os.W_OK):
logging.error("Can't write to source root directory %s" % ANAROOT)
sys.exit(1)
ANASRCDIR = os.getcwd()
ANAINFODIR = os.getcwd()
ANAPLOTDIR = os.getcwd()
-if opts.SRCROOT:
+if args.SRCROOT:
ANASRCDIR = os.path.join(ANAROOT, "src/Analyses")
ANAINFODIR = os.path.join(ANAROOT, "data/anainfo")
ANAPLOTDIR = os.path.join(ANAROOT, "data/plotinfo")
if not (os.path.exists(ANASRCDIR) and os.path.exists(ANAINFODIR) and os.path.exists(ANAPLOTDIR)):
logging.error("Rivet analysis dirs do not exist under %s" % ANAROOT)
sys.exit(1)
if not (os.access(ANASRCDIR, os.W_OK) and os.access(ANAINFODIR, os.W_OK) and os.access(ANAPLOTDIR, os.W_OK)):
logging.error("Can't write to Rivet analysis dirs under %s" % ANAROOT)
sys.exit(1)
## Check for disallowed characters in analysis names
import string
allowedchars = string.ascii_letters + string.digits + "_"
all_ok = True
for ananame in ANANAMES:
for c in ananame:
if c not in allowedchars:
logging.error("Analysis name '%s' contains disallowed character '%s'!" % (ananame, c))
all_ok = False
break
if not all_ok:
logging.error("Exiting... please ensure that all analysis names are valid")
sys.exit(1)
## Now make each analysis
for ANANAME in ANANAMES:
logging.info("Writing templates for %s to %s" % (ANANAME, ANAROOT))
## Extract some metadata from the name if it matches the standard pattern
import re
re_stdana = re.compile(r"^(\w+)_(\d{4})_(I|S)(\d+)$")
match = re_stdana.match(ANANAME)
STDANA = False
ANAEXPT = "<Insert the experiment name>"
ANACOLLIDER = "<Insert the collider name>"
ANAYEAR = "<Insert year of publication>"
INSPIRE_SPIRES = 'I'
ANAINSPIREID = "<Insert the Inspire ID>"
if match:
STDANA = True
ANAEXPT = match.group(1)
if ANAEXPT.upper() in ("ALICE", "ATLAS", "CMS", "LHCB"):
ANACOLLIDER = "LHC"
elif ANAEXPT.upper() in ("CDF", "D0"):
ANACOLLIDER = "Tevatron"
elif ANAEXPT.upper() == "BABAR":
ANACOLLIDER = "PEP-II"
elif ANAEXPT.upper() == "BELLE":
ANACOLLIDER = "KEKB"
ANAYEAR = match.group(2)
INSPIRE_SPIRES = match.group(3)
ANAINSPIREID = match.group(4)
if INSPIRE_SPIRES == "I":
ANAREFREPO = "Inspire"
else:
ANAREFREPO = "Spires"
KEYWORDS = {
"ANANAME" : ANANAME,
"ANAEXPT" : ANAEXPT,
"ANACOLLIDER" : ANACOLLIDER,
"ANAYEAR" : ANAYEAR,
"ANAREFREPO" : ANAREFREPO,
"ANAINSPIREID" : ANAINSPIREID
}
## Try to get bib info from SPIRES
ANABIBKEY = ""
ANABIBTEX = ""
bibkey, bibtex = None, None
if STDANA:
try:
logging.info("Getting Inspire/SPIRES biblio data for '%s'" % ANANAME)
bibkey, bibtex = rivet.spiresbib.get_bibtex_from_repo(INSPIRE_SPIRES, ANAINSPIREID)
except Exception as e:
logging.error("Inspire/SPIRES oops: %s" % e)
if bibkey and bibtex:
ANABIBKEY = bibkey
ANABIBTEX = bibtex
KEYWORDS["ANABIBKEY"] = ANABIBKEY
KEYWORDS["ANABIBTEX"] = ANABIBTEX
## Try to download YODA data file from HepData
if STDANA:
try:
try:
from urllib.request import urlretrieve
except:
from urllib import urlretrieve
#
hdurl = None
if INSPIRE_SPIRES == "I":
- hdurl = "http://www.hepdata.net/record/ins%s?format=yoda" % ANAINSPIREID
+ hdurl = "http://www.hepdata.net/record/ins%s?format=yoda&rivet=%s" % (ANAINSPIREID, ANANAME)
if not hdurl:
raise Exception("Couldn't identify a URL for getting reference data from HepData")
logging.info("Getting data file from HepData at %s" % hdurl)
tmpfile = urlretrieve(hdurl)[0]
#
import tarfile, shutil
tar = tarfile.open(tmpfile, mode="r")
fnames = tar.getnames()
if len(fnames) > 1:
logging.warning("Found more than one file in downloaded archive. Treating first as canonical")
tar.extractall()
shutil.move(fnames[0], "%s.yoda" % ANANAME)
except Exception as e:
logging.warning("Problem encountered retrieving from HepData: %s" % hdurl)
logging.warning("No reference data file written")
logging.debug("HepData oops: %s: %s" % (str(type(e)), e))
INLINEMETHODS = ""
- if opts.INLINE:
+ if args.INLINE:
KEYWORDS["ANAREFREPO_LOWER"] = KEYWORDS["ANAREFREPO"].lower()
INLINEMETHODS = """
public:
string experiment() const { return "%(ANAEXPT)s"; }
string year() const { return "%(ANAYEAR)s"; }
string %(ANAREFREPO_LOWER)sId() const { return "%(ANAINSPIREID)s"; }
string collider() const { return ""; }
string summary() const { return ""; }
string description() const { return ""; }
string runInfo() const { return ""; }
string bibKey() const { return "%(ANABIBKEY)s"; }
string bibTeX() const { return "%(ANABIBTEX)s"; }
string status() const { return "UNVALIDATED"; }
vector<string> authors() const { return vector<string>(); }
vector<string> references() const { return vector<string>(); }
vector<std::string> todos() const { return vector<string>(); }
""" % KEYWORDS
del KEYWORDS["ANAREFREPO_LOWER"]
KEYWORDS["INLINEMETHODS"] = INLINEMETHODS
if ANANAME.startswith("MC_"):
HISTOBOOKING = """\
book(_h["XXXX"], "myh1", 20, 0.0, 100.0);
book(_h["YYYY"], "myh2", logspace(20, 1e-2, 1e3));
book(_h["ZZZZ"], "myh3", {0.0, 1.0, 2.0, 4.0, 8.0, 16.0});
book(_p["AAAA"], "myp", 20, 0.0, 100.0);
book(_c["BBBB"], "myc");""" % KEYWORDS
else:
HISTOBOOKING = """\
// specify custom binning
book(_h["XXXX"], "myh1", 20, 0.0, 100.0);
book(_h["YYYY"], "myh2", logspace(20, 1e-2, 1e3));
book(_h["ZZZZ"], "myh3", {0.0, 1.0, 2.0, 4.0, 8.0, 16.0});
// take binning from reference data using HEPData ID (digits in "d01-x01-y01" etc.)
book(_h["AAAA"], 1, 1, 1);
book(_p["BBBB"], 2, 1, 1);
book(_c["CCCC"], 3, 1, 1);""" % KEYWORDS
KEYWORDS["HISTOBOOKING"] = HISTOBOOKING
ANASRCFILE = os.path.join(ANASRCDIR, ANANAME+".cc")
logging.debug("Writing implementation template to %s" % ANASRCFILE)
f = open(ANASRCFILE, "w")
src = '''\
// -*- C++ -*-
#include "Rivet/Analysis.hh"
#include "Rivet/Projections/FinalState.hh"
#include "Rivet/Projections/FastJets.hh"
#include "Rivet/Projections/DressedLeptons.hh"
#include "Rivet/Projections/MissingMomentum.hh"
#include "Rivet/Projections/PromptFinalState.hh"
namespace Rivet {
/// @brief Add a short analysis description here
class %(ANANAME)s : public Analysis {
public:
/// Constructor
DEFAULT_RIVET_ANALYSIS_CTOR(%(ANANAME)s);
/// @name Analysis methods
//@{
/// Book histograms and initialise projections before the run
void init() {
// Initialise and register projections
// the basic final-state projection:
// all final-state particles within
// the given eta acceptance
const FinalState fs(Cuts::abseta < 4.9);
// the final-state particles declared above are clustered using FastJet with
// the anti-kT algorithm and a jet-radius parameter 0.4
// muons and neutrinos are excluded from the clustering
FastJets jetfs(fs, FastJets::ANTIKT, 0.4, JetAlg::Muons::NONE, JetAlg::Invisibles::NONE);
declare(jetfs, "jets");
// FinalState of prompt photons and bare muons and electrons in the event
PromptFinalState photons(Cuts::abspid == PID::PHOTON);
PromptFinalState bare_leps(Cuts::abspid == PID::MUON || Cuts::abspid == PID::ELECTRON);
// dress the prompt bare leptons with prompt photons within dR < 0.1
// apply some fiducial cuts on the dressed leptons
Cut lepton_cuts = Cuts::abseta < 2.5 && Cuts::pT > 20*GeV;
DressedLeptons dressed_leps(photons, bare_leps, 0.1, lepton_cuts);
declare(dressed_leps, "leptons");
// missing momentum
declare(MissingMomentum(fs), "MET");
// Book histograms
%(HISTOBOOKING)s
}
/// Perform the per-event analysis
void analyze(const Event& event) {
/// @todo Do the event by event analysis here
// retrieve dressed leptons, sorted by pT
vector<DressedLepton> leptons = apply<DressedLeptons>(event, "leptons").dressedLeptons();
// retrieve clustered jets, sorted by pT, with a minimum pT cut
Jets jets = apply<FastJets>(event, "jets").jetsByPt(Cuts::pT > 30*GeV);
// remove all jets within dR < 0.2 of a dressed lepton
idiscardIfAnyDeltaRLess(jets, leptons, 0.2);
// select jets ghost-associated to B-hadrons with a certain fiducial selection
Jets bjets = filter_select(jets, [](const Jet& jet) {
return jet.bTagged(Cuts::pT > 5*GeV && Cuts::abseta < 2.5);
});
// veto event if there are no b-jets
if (bjets.empty()) vetoEvent;
// apply a missing-momentum cut
if (apply<MissingMomentum>(event, "MET").missingPt() < 30*GeV) vetoEvent;
// fill histogram with leading b-jet pT
_h["XXXX"]->fill(bjets[0].pT()/GeV);
}
/// Normalise histograms etc., after the run
void finalize() {
normalize(_h["YYYY"]); // normalize to unity
scale(_h["ZZZZ"], crossSection()/picobarn/sumOfWeights()); // norm to cross section
}
//@}
/// @name Histograms
//@{
map<string, Histo1DPtr> _h;
map<string, Profile1DPtr> _p;
map<string, CounterPtr> _c;
//@}
%(INLINEMETHODS)s
};
// The hook for the plugin system
DECLARE_RIVET_PLUGIN(%(ANANAME)s);
}
''' % KEYWORDS
f.write(src)
f.close()
ANAPLOTFILE = os.path.join(ANAPLOTDIR, ANANAME+".plot")
logging.debug("Writing plot template to %s" % ANAPLOTFILE)
f = open(ANAPLOTFILE, "w")
src = '''\
BEGIN PLOT /%(ANANAME)s/d01-x01-y01
Title=[Insert title for histogram d01-x01-y01 here]
XLabel=[Insert $x$-axis label for histogram d01-x01-y01 here]
YLabel=[Insert $y$-axis label for histogram d01-x01-y01 here]
# + any additional plot settings you might like, see make-plots documentation
END PLOT
# ... add more histograms as you need them ...
''' % KEYWORDS
f.write(src)
f.close()
- if opts.INLINE:
+ if args.INLINE:
sys.exit(0)
ANAINFOFILE = os.path.join(ANAINFODIR, ANANAME+".info")
logging.debug("Writing info template to %s" % ANAINFOFILE)
f = open(ANAINFOFILE, "w")
src = """\
Name: %(ANANAME)s
Year: %(ANAYEAR)s
Summary: <Insert short %(ANANAME)s description>
Experiment: %(ANAEXPT)s
Collider: %(ANACOLLIDER)s
%(ANAREFREPO)sID: %(ANAINSPIREID)s
Status: UNVALIDATED
Authors:
- Your Name <your@email.address>
#References:
#- '<Example: Eur.Phys.J. C76 (2016) no.7, 392>'
#- '<Example: DOI:10.1140/epjc/s10052-016-4184-8>'
#- '<Example: arXiv:1605.03814>'
RunInfo: <Describe event types, cuts, and other general generator config tips.>
#Beams: <Insert beam pair(s), e.g. [p+, p+] or [[p-, e-], [p-, e+]]>
#Energies: <Run energies or beam energy pairs in GeV, e.g. [13000] or [[8.0, 3.5]] or [630, 1800]. Order pairs to match "Beams">
#Luminosity_fb: <Insert integrated luminosity, in inverse fb>
Description:
'<A fairly long description, including what is measured
and if possible what it is useful for in terms of MC validation
and tuning. Use LaTeX for maths like $\pT > 50\;\GeV$.>'
Keywords: []
BibKey: %(ANABIBKEY)s
BibTeX: '%(ANABIBTEX)s'
ToDo:
- Implement the analysis, test it, remove this ToDo, and mark as VALIDATED :-)
""" % KEYWORDS
f.write(src)
f.close()
logging.info("Use e.g. 'rivet-buildplugin Rivet%s.so %s.cc' to compile the plugin" % (ANANAME, ANANAME))
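The script above fills the embedded `.cc`, `.plot`, and `.info` templates via Python's dict-based `%`-formatting, expanding placeholders like `%(ANANAME)s` from the `KEYWORDS` dict. A minimal sketch of that technique (the template text and key values here are illustrative, not the script's actual `KEYWORDS`):

```python
# Dict-based %-formatting, as used for the analysis templates above:
# each %(KEY)s placeholder is looked up in the supplied mapping.
template = '''\
// -*- C++ -*-
// Analysis: %(ANANAME)s (%(ANAYEAR)s)
'''

keywords = {"ANANAME": "EXAMPLE_2019_I123456", "ANAYEAR": "2019"}  # illustrative values
src = template % keywords
print(src)
```

Because the lookup is by key, the same `KEYWORDS` dict can be reused across all three templates, and unused keys are simply ignored.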
diff --git a/bin/rivet-which b/bin/rivet-which
--- a/bin/rivet-which
+++ b/bin/rivet-which
@@ -1,31 +1,32 @@
#! /usr/bin/env python
"""
Path searching tool for files associated with the Rivet analysis toolkit.
TODO:
* Add auto-categorising of the searches based on file extension
* Add a switch to return all or just the first match
* Add switches to force searching in a particular file category (libs, info, ref data, plot files)
* Add partial name / regex searching? (Requires extending the Rivet library)
"""
from __future__ import print_function
-import rivet, sys, os
+import rivet, os
rivet.util.check_python_version()
rivet.util.set_process_name(os.path.basename(__file__))
-import sys, os, optparse
-op = optparse.OptionParser()
-ops, args = op.parse_args()
+import argparse
+parser = argparse.ArgumentParser(description=__doc__)
+parser.add_argument("ANALYSES", nargs="+", help="analyses to search for")
+args = parser.parse_args()
# print rivet.findAnalysisPlotFile()
# print rivet.findAnalysisLibFile()
# print rivet.findAnalysisInfoFile()
# print rivet.findAnalysisRefFile()
-for a in args:
+for a in args.ANALYSES:
try:
print(rivet.findAnalysisRefFile(a))
except Exception:
print("No match for '%s'" % a)
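The diff above migrates `rivet-which` from `optparse` to `argparse`, gathering the positional analysis names into a list via `nargs="+"`. A self-contained sketch of that pattern (the argument name mirrors the diff; the `rivet` lookup calls are omitted, and an explicit argv list replaces `sys.argv` so the sketch runs standalone):

```python
import argparse

# Positional arguments collected into a list, as in the rivet-which rewrite.
parser = argparse.ArgumentParser(description="Path searching tool (sketch)")
parser.add_argument("ANALYSES", nargs="+", help="analyses to search for")

# Parse an explicit list instead of sys.argv for demonstration purposes.
args = parser.parse_args(["ATLAS_2017_I1604029", "MC_DILEPTON"])
for a in args.ANALYSES:
    print(a)
```

With `nargs="+"`, argparse also enforces that at least one analysis name is given, reporting a usage error otherwise, which the old `optparse` version did not do.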
diff --git a/m4/libtool.m4 b/m4/libtool.m4
--- a/m4/libtool.m4
+++ b/m4/libtool.m4
@@ -1,8369 +1,8387 @@
# libtool.m4 - Configure libtool for the host system. -*-Autoconf-*-
#
# Copyright (C) 1996-2001, 2003-2015 Free Software Foundation, Inc.
# Written by Gordon Matzigkeit, 1996
#
# This file is free software; the Free Software Foundation gives
# unlimited permission to copy and/or distribute it, with or without
# modifications, as long as this notice is preserved.
m4_define([_LT_COPYING], [dnl
# Copyright (C) 2014 Free Software Foundation, Inc.
# This is free software; see the source for copying conditions. There is NO
# warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
# GNU Libtool is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# As a special exception to the GNU General Public License, if you
# distribute this file as part of a program or library that is built
# using GNU Libtool, you may include this file under the same
# distribution terms that you use for the rest of that program.
#
# GNU Libtool is distributed in the hope that it will be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
])
# serial 58 LT_INIT
# LT_PREREQ(VERSION)
# ------------------
# Complain and exit if this libtool version is less than VERSION.
m4_defun([LT_PREREQ],
[m4_if(m4_version_compare(m4_defn([LT_PACKAGE_VERSION]), [$1]), -1,
[m4_default([$3],
[m4_fatal([Libtool version $1 or higher is required],
63)])],
[$2])])
# _LT_CHECK_BUILDDIR
# ------------------
# Complain if the absolute build directory name contains unusual characters
m4_defun([_LT_CHECK_BUILDDIR],
[case `pwd` in
*\ * | *\ *)
AC_MSG_WARN([Libtool does not cope well with whitespace in `pwd`]) ;;
esac
])
# LT_INIT([OPTIONS])
# ------------------
AC_DEFUN([LT_INIT],
[AC_PREREQ([2.62])dnl We use AC_PATH_PROGS_FEATURE_CHECK
AC_REQUIRE([AC_CONFIG_AUX_DIR_DEFAULT])dnl
AC_BEFORE([$0], [LT_LANG])dnl
AC_BEFORE([$0], [LT_OUTPUT])dnl
AC_BEFORE([$0], [LTDL_INIT])dnl
m4_require([_LT_CHECK_BUILDDIR])dnl
dnl Autoconf doesn't catch unexpanded LT_ macros by default:
m4_pattern_forbid([^_?LT_[A-Z_]+$])dnl
m4_pattern_allow([^(_LT_EOF|LT_DLGLOBAL|LT_DLLAZY_OR_NOW|LT_MULTI_MODULE)$])dnl
dnl aclocal doesn't pull ltoptions.m4, ltsugar.m4, or ltversion.m4
dnl unless we require an AC_DEFUNed macro:
AC_REQUIRE([LTOPTIONS_VERSION])dnl
AC_REQUIRE([LTSUGAR_VERSION])dnl
AC_REQUIRE([LTVERSION_VERSION])dnl
AC_REQUIRE([LTOBSOLETE_VERSION])dnl
m4_require([_LT_PROG_LTMAIN])dnl
_LT_SHELL_INIT([SHELL=${CONFIG_SHELL-/bin/sh}])
dnl Parse OPTIONS
_LT_SET_OPTIONS([$0], [$1])
# This can be used to rebuild libtool when needed
LIBTOOL_DEPS=$ltmain
# Always use our own libtool.
LIBTOOL='$(SHELL) $(top_builddir)/libtool'
AC_SUBST(LIBTOOL)dnl
_LT_SETUP
# Only expand once:
m4_define([LT_INIT])
])# LT_INIT
# Old names:
AU_ALIAS([AC_PROG_LIBTOOL], [LT_INIT])
AU_ALIAS([AM_PROG_LIBTOOL], [LT_INIT])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_PROG_LIBTOOL], [])
dnl AC_DEFUN([AM_PROG_LIBTOOL], [])
# _LT_PREPARE_CC_BASENAME
# -----------------------
m4_defun([_LT_PREPARE_CC_BASENAME], [
# Calculate cc_basename. Skip known compiler wrappers and cross-prefix.
func_cc_basename ()
{
for cc_temp in @S|@*""; do
case $cc_temp in
compile | *[[\\/]]compile | ccache | *[[\\/]]ccache ) ;;
distcc | *[[\\/]]distcc | purify | *[[\\/]]purify ) ;;
\-*) ;;
*) break;;
esac
done
func_cc_basename_result=`$ECHO "$cc_temp" | $SED "s%.*/%%; s%^$host_alias-%%"`
}
])# _LT_PREPARE_CC_BASENAME
# _LT_CC_BASENAME(CC)
# -------------------
# It would be clearer to call AC_REQUIREs from _LT_PREPARE_CC_BASENAME,
# but that macro is also expanded into generated libtool script, which
# arranges for $SED and $ECHO to be set by different means.
m4_defun([_LT_CC_BASENAME],
[m4_require([_LT_PREPARE_CC_BASENAME])dnl
AC_REQUIRE([_LT_DECL_SED])dnl
AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH])dnl
func_cc_basename $1
cc_basename=$func_cc_basename_result
])
# _LT_FILEUTILS_DEFAULTS
# ----------------------
# It is okay to use these file commands and assume they have been set
# sensibly after 'm4_require([_LT_FILEUTILS_DEFAULTS])'.
m4_defun([_LT_FILEUTILS_DEFAULTS],
[: ${CP="cp -f"}
: ${MV="mv -f"}
: ${RM="rm -f"}
])# _LT_FILEUTILS_DEFAULTS
# _LT_SETUP
# ---------
m4_defun([_LT_SETUP],
[AC_REQUIRE([AC_CANONICAL_HOST])dnl
AC_REQUIRE([AC_CANONICAL_BUILD])dnl
AC_REQUIRE([_LT_PREPARE_SED_QUOTE_VARS])dnl
AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH])dnl
_LT_DECL([], [PATH_SEPARATOR], [1], [The PATH separator for the build system])dnl
dnl
_LT_DECL([], [host_alias], [0], [The host system])dnl
_LT_DECL([], [host], [0])dnl
_LT_DECL([], [host_os], [0])dnl
dnl
_LT_DECL([], [build_alias], [0], [The build system])dnl
_LT_DECL([], [build], [0])dnl
_LT_DECL([], [build_os], [0])dnl
dnl
AC_REQUIRE([AC_PROG_CC])dnl
AC_REQUIRE([LT_PATH_LD])dnl
AC_REQUIRE([LT_PATH_NM])dnl
dnl
AC_REQUIRE([AC_PROG_LN_S])dnl
test -z "$LN_S" && LN_S="ln -s"
_LT_DECL([], [LN_S], [1], [Whether we need soft or hard links])dnl
dnl
AC_REQUIRE([LT_CMD_MAX_LEN])dnl
_LT_DECL([objext], [ac_objext], [0], [Object file suffix (normally "o")])dnl
_LT_DECL([], [exeext], [0], [Executable file suffix (normally "")])dnl
dnl
m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_CHECK_SHELL_FEATURES])dnl
m4_require([_LT_PATH_CONVERSION_FUNCTIONS])dnl
m4_require([_LT_CMD_RELOAD])dnl
m4_require([_LT_CHECK_MAGIC_METHOD])dnl
m4_require([_LT_CHECK_SHAREDLIB_FROM_LINKLIB])dnl
m4_require([_LT_CMD_OLD_ARCHIVE])dnl
m4_require([_LT_CMD_GLOBAL_SYMBOLS])dnl
m4_require([_LT_WITH_SYSROOT])dnl
m4_require([_LT_CMD_TRUNCATE])dnl
_LT_CONFIG_LIBTOOL_INIT([
# See if we are running on zsh, and set the options that allow our
# commands through without removal of \ escapes INIT.
if test -n "\${ZSH_VERSION+set}"; then
setopt NO_GLOB_SUBST
fi
])
if test -n "${ZSH_VERSION+set}"; then
setopt NO_GLOB_SUBST
fi
_LT_CHECK_OBJDIR
m4_require([_LT_TAG_COMPILER])dnl
case $host_os in
aix3*)
# AIX sometimes has problems with the GCC collect2 program. For some
# reason, if we set the COLLECT_NAMES environment variable, the problems
# vanish in a puff of smoke.
if test set != "${COLLECT_NAMES+set}"; then
COLLECT_NAMES=
export COLLECT_NAMES
fi
;;
esac
# Global variables:
ofile=libtool
can_build_shared=yes
# All known linkers require a '.a' archive for static linking (except MSVC,
# which needs '.lib').
libext=a
with_gnu_ld=$lt_cv_prog_gnu_ld
old_CC=$CC
old_CFLAGS=$CFLAGS
# Set sane defaults for various variables
test -z "$CC" && CC=cc
test -z "$LTCC" && LTCC=$CC
test -z "$LTCFLAGS" && LTCFLAGS=$CFLAGS
test -z "$LD" && LD=ld
test -z "$ac_objext" && ac_objext=o
_LT_CC_BASENAME([$compiler])
# Only perform the check for file, if the check method requires it
test -z "$MAGIC_CMD" && MAGIC_CMD=file
case $deplibs_check_method in
file_magic*)
if test "$file_magic_cmd" = '$MAGIC_CMD'; then
_LT_PATH_MAGIC
fi
;;
esac
# Use C for the default configuration in the libtool script
LT_SUPPORTED_TAG([CC])
_LT_LANG_C_CONFIG
_LT_LANG_DEFAULT_CONFIG
_LT_CONFIG_COMMANDS
])# _LT_SETUP
# _LT_PREPARE_SED_QUOTE_VARS
# --------------------------
# Define a few sed substitutions that help us do robust quoting.
m4_defun([_LT_PREPARE_SED_QUOTE_VARS],
[# Backslashify metacharacters that are still active within
# double-quoted strings.
sed_quote_subst='s/\([["`$\\]]\)/\\\1/g'
# Same as above, but do not quote variable references.
double_quote_subst='s/\([["`\\]]\)/\\\1/g'
# Sed substitution to delay expansion of an escaped shell variable in a
# double_quote_subst'ed string.
delay_variable_subst='s/\\\\\\\\\\\$/\\\\\\$/g'
# Sed substitution to delay expansion of an escaped single quote.
delay_single_quote_subst='s/'\''/'\'\\\\\\\'\''/g'
# Sed substitution to avoid accidental globbing in evaled expressions
no_glob_subst='s/\*/\\\*/g'
])
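The `sed_quote_subst` expression above backslash-escapes exactly the characters that remain active inside double-quoted shell strings (`"`, `` ` ``, `$`, `\`). Its effect can be mimicked in Python to see what it does to a sample string (a sketch for illustration only, not part of libtool):

```python
import re

# Equivalent of: sed_quote_subst='s/\(["`$\\]\)/\\\1/g'
# i.e. prefix each of " ` $ \ with a single backslash.
def sed_quote_subst(s):
    return re.sub(r'(["`$\\])', r'\\\1', s)

print(sed_quote_subst('echo "$HOME" `pwd`'))  # → echo \"\$HOME\" \`pwd\`
```

After this substitution the string can be safely embedded between double quotes in generated shell code without the shell expanding variables or command substitutions.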
# _LT_PROG_LTMAIN
# ---------------
# Note that this code is called both from 'configure', and 'config.status'
# now that we use AC_CONFIG_COMMANDS to generate libtool. Notably,
# 'config.status' has no value for ac_aux_dir unless we are using Automake,
# so we pass a copy along to make sure it has a sensible value anyway.
m4_defun([_LT_PROG_LTMAIN],
[m4_ifdef([AC_REQUIRE_AUX_FILE], [AC_REQUIRE_AUX_FILE([ltmain.sh])])dnl
_LT_CONFIG_LIBTOOL_INIT([ac_aux_dir='$ac_aux_dir'])
ltmain=$ac_aux_dir/ltmain.sh
])# _LT_PROG_LTMAIN
## ------------------------------------- ##
## Accumulate code for creating libtool. ##
## ------------------------------------- ##
# So that we can recreate a full libtool script including additional
# tags, we accumulate the chunks of code to send to AC_CONFIG_COMMANDS
# in macros and then make a single call at the end using the 'libtool'
# label.
# _LT_CONFIG_LIBTOOL_INIT([INIT-COMMANDS])
# ----------------------------------------
# Register INIT-COMMANDS to be passed to AC_CONFIG_COMMANDS later.
m4_define([_LT_CONFIG_LIBTOOL_INIT],
[m4_ifval([$1],
[m4_append([_LT_OUTPUT_LIBTOOL_INIT],
[$1
])])])
# Initialize.
m4_define([_LT_OUTPUT_LIBTOOL_INIT])
# _LT_CONFIG_LIBTOOL([COMMANDS])
# ------------------------------
# Register COMMANDS to be passed to AC_CONFIG_COMMANDS later.
m4_define([_LT_CONFIG_LIBTOOL],
[m4_ifval([$1],
[m4_append([_LT_OUTPUT_LIBTOOL_COMMANDS],
[$1
])])])
# Initialize.
m4_define([_LT_OUTPUT_LIBTOOL_COMMANDS])
# _LT_CONFIG_SAVE_COMMANDS([COMMANDS], [INIT_COMMANDS])
# -----------------------------------------------------
m4_defun([_LT_CONFIG_SAVE_COMMANDS],
[_LT_CONFIG_LIBTOOL([$1])
_LT_CONFIG_LIBTOOL_INIT([$2])
])
# _LT_FORMAT_COMMENT([COMMENT])
# -----------------------------
# Add leading comment marks to the start of each line, and a trailing
# full-stop to the whole comment if one is not present already.
m4_define([_LT_FORMAT_COMMENT],
[m4_ifval([$1], [
m4_bpatsubst([m4_bpatsubst([$1], [^ *], [# ])],
[['`$\]], [\\\&])]m4_bmatch([$1], [[!?.]$], [], [.])
)])
## ------------------------ ##
## FIXME: Eliminate VARNAME ##
## ------------------------ ##
# _LT_DECL([CONFIGNAME], VARNAME, VALUE, [DESCRIPTION], [IS-TAGGED?])
# -------------------------------------------------------------------
# CONFIGNAME is the name given to the value in the libtool script.
# VARNAME is the (base) name used in the configure script.
# VALUE may be 0, 1 or 2 for a computed quote escaped value based on
# VARNAME. Any other value will be used directly.
m4_define([_LT_DECL],
[lt_if_append_uniq([lt_decl_varnames], [$2], [, ],
[lt_dict_add_subkey([lt_decl_dict], [$2], [libtool_name],
[m4_ifval([$1], [$1], [$2])])
lt_dict_add_subkey([lt_decl_dict], [$2], [value], [$3])
m4_ifval([$4],
[lt_dict_add_subkey([lt_decl_dict], [$2], [description], [$4])])
lt_dict_add_subkey([lt_decl_dict], [$2],
[tagged?], [m4_ifval([$5], [yes], [no])])])
])
# _LT_TAGDECL([CONFIGNAME], VARNAME, VALUE, [DESCRIPTION])
# --------------------------------------------------------
m4_define([_LT_TAGDECL], [_LT_DECL([$1], [$2], [$3], [$4], [yes])])
# lt_decl_tag_varnames([SEPARATOR], [VARNAME1...])
# ------------------------------------------------
m4_define([lt_decl_tag_varnames],
[_lt_decl_filter([tagged?], [yes], $@)])
# _lt_decl_filter(SUBKEY, VALUE, [SEPARATOR], [VARNAME1..])
# ---------------------------------------------------------
m4_define([_lt_decl_filter],
[m4_case([$#],
[0], [m4_fatal([$0: too few arguments: $#])],
[1], [m4_fatal([$0: too few arguments: $#: $1])],
[2], [lt_dict_filter([lt_decl_dict], [$1], [$2], [], lt_decl_varnames)],
[3], [lt_dict_filter([lt_decl_dict], [$1], [$2], [$3], lt_decl_varnames)],
[lt_dict_filter([lt_decl_dict], $@)])[]dnl
])
# lt_decl_quote_varnames([SEPARATOR], [VARNAME1...])
# --------------------------------------------------
m4_define([lt_decl_quote_varnames],
[_lt_decl_filter([value], [1], $@)])
# lt_decl_dquote_varnames([SEPARATOR], [VARNAME1...])
# ---------------------------------------------------
m4_define([lt_decl_dquote_varnames],
[_lt_decl_filter([value], [2], $@)])
# lt_decl_varnames_tagged([SEPARATOR], [VARNAME1...])
# ---------------------------------------------------
m4_define([lt_decl_varnames_tagged],
[m4_assert([$# <= 2])dnl
_$0(m4_quote(m4_default([$1], [[, ]])),
m4_ifval([$2], [[$2]], [m4_dquote(lt_decl_tag_varnames)]),
m4_split(m4_normalize(m4_quote(_LT_TAGS)), [ ]))])
m4_define([_lt_decl_varnames_tagged],
[m4_ifval([$3], [lt_combine([$1], [$2], [_], $3)])])
# lt_decl_all_varnames([SEPARATOR], [VARNAME1...])
# ------------------------------------------------
m4_define([lt_decl_all_varnames],
[_$0(m4_quote(m4_default([$1], [[, ]])),
m4_if([$2], [],
m4_quote(lt_decl_varnames),
m4_quote(m4_shift($@))))[]dnl
])
m4_define([_lt_decl_all_varnames],
[lt_join($@, lt_decl_varnames_tagged([$1],
lt_decl_tag_varnames([[, ]], m4_shift($@))))dnl
])
# _LT_CONFIG_STATUS_DECLARE([VARNAME])
# ------------------------------------
# Quote a variable value, and forward it to 'config.status' so that its
# declaration there will have the same value as in 'configure'. VARNAME
# must have a single quote delimited value for this to work.
m4_define([_LT_CONFIG_STATUS_DECLARE],
[$1='`$ECHO "$][$1" | $SED "$delay_single_quote_subst"`'])
# _LT_CONFIG_STATUS_DECLARATIONS
# ------------------------------
# We delimit libtool config variables with single quotes, so when
# we write them to config.status, we have to be sure to quote all
# embedded single quotes properly. In configure, this macro expands
# each variable declared with _LT_DECL (and _LT_TAGDECL) into:
#
# <var>='`$ECHO "$<var>" | $SED "$delay_single_quote_subst"`'
m4_defun([_LT_CONFIG_STATUS_DECLARATIONS],
[m4_foreach([_lt_var], m4_quote(lt_decl_all_varnames),
[m4_n([_LT_CONFIG_STATUS_DECLARE(_lt_var)])])])
# _LT_LIBTOOL_TAGS
# ----------------
# Output comment and list of tags supported by the script
m4_defun([_LT_LIBTOOL_TAGS],
[_LT_FORMAT_COMMENT([The names of the tagged configurations supported by this script])dnl
available_tags='_LT_TAGS'dnl
])
# _LT_LIBTOOL_DECLARE(VARNAME, [TAG])
# -----------------------------------
# Extract the dictionary values for VARNAME (optionally with TAG) and
# expand to a commented shell variable setting:
#
# # Some comment about what VAR is for.
# visible_name=$lt_internal_name
m4_define([_LT_LIBTOOL_DECLARE],
[_LT_FORMAT_COMMENT(m4_quote(lt_dict_fetch([lt_decl_dict], [$1],
[description])))[]dnl
m4_pushdef([_libtool_name],
m4_quote(lt_dict_fetch([lt_decl_dict], [$1], [libtool_name])))[]dnl
m4_case(m4_quote(lt_dict_fetch([lt_decl_dict], [$1], [value])),
[0], [_libtool_name=[$]$1],
[1], [_libtool_name=$lt_[]$1],
[2], [_libtool_name=$lt_[]$1],
[_libtool_name=lt_dict_fetch([lt_decl_dict], [$1], [value])])[]dnl
m4_ifval([$2], [_$2])[]m4_popdef([_libtool_name])[]dnl
])
# _LT_LIBTOOL_CONFIG_VARS
# -----------------------
# Produce commented declarations of non-tagged libtool config variables
# suitable for insertion in the LIBTOOL CONFIG section of the 'libtool'
# script. Tagged libtool config variables (even for the LIBTOOL CONFIG
# section) are produced by _LT_LIBTOOL_TAG_VARS.
m4_defun([_LT_LIBTOOL_CONFIG_VARS],
[m4_foreach([_lt_var],
m4_quote(_lt_decl_filter([tagged?], [no], [], lt_decl_varnames)),
[m4_n([_LT_LIBTOOL_DECLARE(_lt_var)])])])
# _LT_LIBTOOL_TAG_VARS(TAG)
# -------------------------
m4_define([_LT_LIBTOOL_TAG_VARS],
[m4_foreach([_lt_var], m4_quote(lt_decl_tag_varnames),
[m4_n([_LT_LIBTOOL_DECLARE(_lt_var, [$1])])])])
# _LT_TAGVAR(VARNAME, [TAGNAME])
# ------------------------------
m4_define([_LT_TAGVAR], [m4_ifval([$2], [$1_$2], [$1])])
# _LT_CONFIG_COMMANDS
# -------------------
# Send accumulated output to $CONFIG_STATUS. Thanks to the lists of
# variables for single and double quote escaping we saved from calls
# to _LT_DECL, we can put quote escaped variables declarations
# into 'config.status', and then the shell code to quote escape them in
# for loops in 'config.status'. Finally, any additional code accumulated
# from calls to _LT_CONFIG_LIBTOOL_INIT is expanded.
m4_defun([_LT_CONFIG_COMMANDS],
[AC_PROVIDE_IFELSE([LT_OUTPUT],
dnl If the libtool generation code has been placed in $CONFIG_LT,
dnl instead of duplicating it all over again into config.status,
dnl then we will have config.status run $CONFIG_LT later, so it
dnl needs to know what name is stored there:
[AC_CONFIG_COMMANDS([libtool],
[$SHELL $CONFIG_LT || AS_EXIT(1)], [CONFIG_LT='$CONFIG_LT'])],
dnl If the libtool generation code is destined for config.status,
dnl expand the accumulated commands and init code now:
[AC_CONFIG_COMMANDS([libtool],
[_LT_OUTPUT_LIBTOOL_COMMANDS], [_LT_OUTPUT_LIBTOOL_COMMANDS_INIT])])
])#_LT_CONFIG_COMMANDS
# Initialize.
m4_define([_LT_OUTPUT_LIBTOOL_COMMANDS_INIT],
[
# The HP-UX ksh and POSIX shell print the target directory to stdout
# if CDPATH is set.
(unset CDPATH) >/dev/null 2>&1 && unset CDPATH
sed_quote_subst='$sed_quote_subst'
double_quote_subst='$double_quote_subst'
delay_variable_subst='$delay_variable_subst'
_LT_CONFIG_STATUS_DECLARATIONS
LTCC='$LTCC'
LTCFLAGS='$LTCFLAGS'
compiler='$compiler_DEFAULT'
# A function that is used when there is no print builtin or printf.
func_fallback_echo ()
{
eval 'cat <<_LTECHO_EOF
\$[]1
_LTECHO_EOF'
}
# Quote evaled strings.
for var in lt_decl_all_varnames([[ \
]], lt_decl_quote_varnames); do
case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in
*[[\\\\\\\`\\"\\\$]]*)
eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED \\"\\\$sed_quote_subst\\"\\\`\\\\\\"" ## exclude from sc_prohibit_nested_quotes
;;
*)
eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\""
;;
esac
done
# Double-quote double-evaled strings.
for var in lt_decl_all_varnames([[ \
]], lt_decl_dquote_varnames); do
case \`eval \\\\\$ECHO \\\\""\\\\\$\$var"\\\\"\` in
*[[\\\\\\\`\\"\\\$]]*)
eval "lt_\$var=\\\\\\"\\\`\\\$ECHO \\"\\\$\$var\\" | \\\$SED -e \\"\\\$double_quote_subst\\" -e \\"\\\$sed_quote_subst\\" -e \\"\\\$delay_variable_subst\\"\\\`\\\\\\"" ## exclude from sc_prohibit_nested_quotes
;;
*)
eval "lt_\$var=\\\\\\"\\\$\$var\\\\\\""
;;
esac
done
_LT_OUTPUT_LIBTOOL_INIT
])
# _LT_GENERATED_FILE_INIT(FILE, [COMMENT])
# ------------------------------------
# Generate a child script FILE with all initialization necessary to
# reuse the environment learned by the parent script, and make the
# file executable. If COMMENT is supplied, it is inserted after the
# '#!' sequence but before initialization text begins. After this
# macro, additional text can be appended to FILE to form the body of
# the child script. The macro ends with non-zero status if the
# file could not be fully written (such as if the disk is full).
m4_ifdef([AS_INIT_GENERATED],
[m4_defun([_LT_GENERATED_FILE_INIT],[AS_INIT_GENERATED($@)])],
[m4_defun([_LT_GENERATED_FILE_INIT],
[m4_require([AS_PREPARE])]dnl
[m4_pushdef([AS_MESSAGE_LOG_FD])]dnl
[lt_write_fail=0
cat >$1 <<_ASEOF || lt_write_fail=1
#! $SHELL
# Generated by $as_me.
$2
SHELL=\${CONFIG_SHELL-$SHELL}
export SHELL
_ASEOF
cat >>$1 <<\_ASEOF || lt_write_fail=1
AS_SHELL_SANITIZE
_AS_PREPARE
exec AS_MESSAGE_FD>&1
_ASEOF
test 0 = "$lt_write_fail" && chmod +x $1[]dnl
m4_popdef([AS_MESSAGE_LOG_FD])])])# _LT_GENERATED_FILE_INIT
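`_LT_GENERATED_FILE_INIT` above writes a child script consisting of a `#!` line, an optional comment, and a body, then marks it executable, failing with non-zero status if the write cannot complete. The same generate-then-mark-executable flow can be sketched in Python (paths, comment text, and the helper name are illustrative):

```python
import os
import stat
import tempfile

# Write a child script (shebang, optional comment, body), then chmod +x,
# mirroring the _LT_GENERATED_FILE_INIT pattern above.
def write_child_script(path, comment, body):
    with open(path, "w") as f:
        f.write("#! /bin/sh\n")
        if comment:
            f.write("# %s\n" % comment)
        f.write(body)
    mode = os.stat(path).st_mode
    os.chmod(path, mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

path = os.path.join(tempfile.mkdtemp(), "config.lt")
write_child_script(path, "Run this file to recreate a libtool stub.", "echo ok\n")
```

Writing the whole file before flipping the executable bit avoids a window in which a partially written script could be run.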
# LT_OUTPUT
# ---------
# This macro allows early generation of the libtool script (before
# AC_OUTPUT is called), in case it is used in configure for compilation
# tests.
AC_DEFUN([LT_OUTPUT],
[: ${CONFIG_LT=./config.lt}
AC_MSG_NOTICE([creating $CONFIG_LT])
_LT_GENERATED_FILE_INIT(["$CONFIG_LT"],
[# Run this file to recreate a libtool stub with the current configuration.])
cat >>"$CONFIG_LT" <<\_LTEOF
lt_cl_silent=false
exec AS_MESSAGE_LOG_FD>>config.log
{
echo
AS_BOX([Running $as_me.])
} >&AS_MESSAGE_LOG_FD
lt_cl_help="\
'$as_me' creates a local libtool stub from the current configuration,
for use in further configure time tests before the real libtool is
generated.
Usage: $[0] [[OPTIONS]]
-h, --help print this help, then exit
-V, --version print version number, then exit
-q, --quiet do not print progress messages
-d, --debug don't remove temporary files
Report bugs to <bug-libtool@gnu.org>."
lt_cl_version="\
m4_ifset([AC_PACKAGE_NAME], [AC_PACKAGE_NAME ])config.lt[]dnl
m4_ifset([AC_PACKAGE_VERSION], [ AC_PACKAGE_VERSION])
configured by $[0], generated by m4_PACKAGE_STRING.
Copyright (C) 2011 Free Software Foundation, Inc.
This config.lt script is free software; the Free Software Foundation
gives unlimited permission to copy, distribute and modify it."
while test 0 != $[#]
do
case $[1] in
--version | --v* | -V )
echo "$lt_cl_version"; exit 0 ;;
--help | --h* | -h )
echo "$lt_cl_help"; exit 0 ;;
--debug | --d* | -d )
debug=: ;;
--quiet | --q* | --silent | --s* | -q )
lt_cl_silent=: ;;
-*) AC_MSG_ERROR([unrecognized option: $[1]
Try '$[0] --help' for more information.]) ;;
*) AC_MSG_ERROR([unrecognized argument: $[1]
Try '$[0] --help' for more information.]) ;;
esac
shift
done
if $lt_cl_silent; then
exec AS_MESSAGE_FD>/dev/null
fi
_LTEOF
cat >>"$CONFIG_LT" <<_LTEOF
_LT_OUTPUT_LIBTOOL_COMMANDS_INIT
_LTEOF
cat >>"$CONFIG_LT" <<\_LTEOF
AC_MSG_NOTICE([creating $ofile])
_LT_OUTPUT_LIBTOOL_COMMANDS
AS_EXIT(0)
_LTEOF
chmod +x "$CONFIG_LT"
# configure is writing to config.log, but config.lt does its own redirection,
# appending to config.log, which fails on DOS, as config.log is still kept
# open by configure. Here we exec the FD to /dev/null, effectively closing
# config.log, so it can be properly (re)opened and appended to by config.lt.
lt_cl_success=:
test yes = "$silent" &&
lt_config_lt_args="$lt_config_lt_args --quiet"
exec AS_MESSAGE_LOG_FD>/dev/null
$SHELL "$CONFIG_LT" $lt_config_lt_args || lt_cl_success=false
exec AS_MESSAGE_LOG_FD>>config.log
$lt_cl_success || AS_EXIT(1)
])# LT_OUTPUT
# _LT_CONFIG(TAG)
# ---------------
# If TAG is the built-in tag, create an initial libtool script with a
# default configuration from the untagged config vars. Otherwise add code
# to config.status for appending the configuration named by TAG from the
# matching tagged config vars.
m4_defun([_LT_CONFIG],
[m4_require([_LT_FILEUTILS_DEFAULTS])dnl
_LT_CONFIG_SAVE_COMMANDS([
m4_define([_LT_TAG], m4_if([$1], [], [C], [$1]))dnl
m4_if(_LT_TAG, [C], [
# See if we are running on zsh, and set the options that allow our
# commands through without removal of \ escapes.
if test -n "${ZSH_VERSION+set}"; then
setopt NO_GLOB_SUBST
fi
cfgfile=${ofile}T
trap "$RM \"$cfgfile\"; exit 1" 1 2 15
$RM "$cfgfile"
cat <<_LT_EOF >> "$cfgfile"
#! $SHELL
# Generated automatically by $as_me ($PACKAGE) $VERSION
-# Libtool was configured on host `(hostname || uname -n) 2>/dev/null | sed 1q`:
# NOTE: Changes made to this file will be lost: look at ltmain.sh.
# Provide generalized library-building support services.
# Written by Gordon Matzigkeit, 1996
_LT_COPYING
_LT_LIBTOOL_TAGS
# Configured defaults for sys_lib_dlsearch_path munging.
: \${LT_SYS_LIBRARY_PATH="$configure_time_lt_sys_library_path"}
# ### BEGIN LIBTOOL CONFIG
_LT_LIBTOOL_CONFIG_VARS
_LT_LIBTOOL_TAG_VARS
# ### END LIBTOOL CONFIG
_LT_EOF
cat <<'_LT_EOF' >> "$cfgfile"
# ### BEGIN FUNCTIONS SHARED WITH CONFIGURE
_LT_PREPARE_MUNGE_PATH_LIST
_LT_PREPARE_CC_BASENAME
# ### END FUNCTIONS SHARED WITH CONFIGURE
_LT_EOF
case $host_os in
aix3*)
cat <<\_LT_EOF >> "$cfgfile"
# AIX sometimes has problems with the GCC collect2 program. For some
# reason, if we set the COLLECT_NAMES environment variable, the problems
# vanish in a puff of smoke.
if test set != "${COLLECT_NAMES+set}"; then
COLLECT_NAMES=
export COLLECT_NAMES
fi
_LT_EOF
;;
esac
_LT_PROG_LTMAIN
# We use sed instead of cat because bash on DJGPP gets confused if
# it finds mixed CR/LF and LF-only lines. Since sed operates in
# text mode, it properly converts lines to CR/LF. This bash problem
# is reportedly fixed, but why not run on old versions too?
sed '$q' "$ltmain" >> "$cfgfile" \
|| (rm -f "$cfgfile"; exit 1)
mv -f "$cfgfile" "$ofile" ||
(rm -f "$ofile" && cp "$cfgfile" "$ofile" && rm -f "$cfgfile")
chmod +x "$ofile"
],
[cat <<_LT_EOF >> "$ofile"
dnl Unfortunately we have to use $1 here, since _LT_TAG is not expanded
dnl in a comment (ie after a #).
# ### BEGIN LIBTOOL TAG CONFIG: $1
_LT_LIBTOOL_TAG_VARS(_LT_TAG)
# ### END LIBTOOL TAG CONFIG: $1
_LT_EOF
])dnl /m4_if
],
[m4_if([$1], [], [
PACKAGE='$PACKAGE'
VERSION='$VERSION'
RM='$RM'
ofile='$ofile'], [])
])dnl /_LT_CONFIG_SAVE_COMMANDS
])# _LT_CONFIG
# LT_SUPPORTED_TAG(TAG)
# ---------------------
# Trace this macro to discover what tags are supported by the libtool
# --tag option, using:
# autoconf --trace 'LT_SUPPORTED_TAG:$1'
AC_DEFUN([LT_SUPPORTED_TAG], [])
# C support is built-in for now
m4_define([_LT_LANG_C_enabled], [])
m4_define([_LT_TAGS], [])
# LT_LANG(LANG)
# -------------
# Enable libtool support for the given language if not already enabled.
AC_DEFUN([LT_LANG],
[AC_BEFORE([$0], [LT_OUTPUT])dnl
m4_case([$1],
[C], [_LT_LANG(C)],
[C++], [_LT_LANG(CXX)],
[Go], [_LT_LANG(GO)],
[Java], [_LT_LANG(GCJ)],
[Fortran 77], [_LT_LANG(F77)],
[Fortran], [_LT_LANG(FC)],
[Windows Resource], [_LT_LANG(RC)],
[m4_ifdef([_LT_LANG_]$1[_CONFIG],
[_LT_LANG($1)],
[m4_fatal([$0: unsupported language: "$1"])])])dnl
])# LT_LANG
# _LT_LANG(LANGNAME)
# ------------------
m4_defun([_LT_LANG],
[m4_ifdef([_LT_LANG_]$1[_enabled], [],
[LT_SUPPORTED_TAG([$1])dnl
m4_append([_LT_TAGS], [$1 ])dnl
m4_define([_LT_LANG_]$1[_enabled], [])dnl
_LT_LANG_$1_CONFIG($1)])dnl
])# _LT_LANG
m4_ifndef([AC_PROG_GO], [
############################################################
# NOTE: This macro has been submitted for inclusion into #
# GNU Autoconf as AC_PROG_GO. When it is available in #
# a released version of Autoconf we should remove this #
# macro and use it instead. #
############################################################
m4_defun([AC_PROG_GO],
[AC_LANG_PUSH(Go)dnl
AC_ARG_VAR([GOC], [Go compiler command])dnl
AC_ARG_VAR([GOFLAGS], [Go compiler flags])dnl
_AC_ARG_VAR_LDFLAGS()dnl
AC_CHECK_TOOL(GOC, gccgo)
if test -z "$GOC"; then
if test -n "$ac_tool_prefix"; then
AC_CHECK_PROG(GOC, [${ac_tool_prefix}gccgo], [${ac_tool_prefix}gccgo])
fi
fi
if test -z "$GOC"; then
AC_CHECK_PROG(GOC, gccgo, gccgo, false)
fi
])#m4_defun
])#m4_ifndef
# _LT_LANG_DEFAULT_CONFIG
# -----------------------
m4_defun([_LT_LANG_DEFAULT_CONFIG],
[AC_PROVIDE_IFELSE([AC_PROG_CXX],
[LT_LANG(CXX)],
[m4_define([AC_PROG_CXX], defn([AC_PROG_CXX])[LT_LANG(CXX)])])
AC_PROVIDE_IFELSE([AC_PROG_F77],
[LT_LANG(F77)],
[m4_define([AC_PROG_F77], defn([AC_PROG_F77])[LT_LANG(F77)])])
AC_PROVIDE_IFELSE([AC_PROG_FC],
[LT_LANG(FC)],
[m4_define([AC_PROG_FC], defn([AC_PROG_FC])[LT_LANG(FC)])])
dnl The call to [A][M_PROG_GCJ] is quoted like that to stop aclocal
dnl pulling things in needlessly.
AC_PROVIDE_IFELSE([AC_PROG_GCJ],
[LT_LANG(GCJ)],
[AC_PROVIDE_IFELSE([A][M_PROG_GCJ],
[LT_LANG(GCJ)],
[AC_PROVIDE_IFELSE([LT_PROG_GCJ],
[LT_LANG(GCJ)],
[m4_ifdef([AC_PROG_GCJ],
[m4_define([AC_PROG_GCJ], defn([AC_PROG_GCJ])[LT_LANG(GCJ)])])
m4_ifdef([A][M_PROG_GCJ],
[m4_define([A][M_PROG_GCJ], defn([A][M_PROG_GCJ])[LT_LANG(GCJ)])])
m4_ifdef([LT_PROG_GCJ],
[m4_define([LT_PROG_GCJ], defn([LT_PROG_GCJ])[LT_LANG(GCJ)])])])])])
AC_PROVIDE_IFELSE([AC_PROG_GO],
[LT_LANG(GO)],
[m4_define([AC_PROG_GO], defn([AC_PROG_GO])[LT_LANG(GO)])])
AC_PROVIDE_IFELSE([LT_PROG_RC],
[LT_LANG(RC)],
[m4_define([LT_PROG_RC], defn([LT_PROG_RC])[LT_LANG(RC)])])
])# _LT_LANG_DEFAULT_CONFIG
# Obsolete macros:
AU_DEFUN([AC_LIBTOOL_CXX], [LT_LANG(C++)])
AU_DEFUN([AC_LIBTOOL_F77], [LT_LANG(Fortran 77)])
AU_DEFUN([AC_LIBTOOL_FC], [LT_LANG(Fortran)])
AU_DEFUN([AC_LIBTOOL_GCJ], [LT_LANG(Java)])
AU_DEFUN([AC_LIBTOOL_RC], [LT_LANG(Windows Resource)])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_LIBTOOL_CXX], [])
dnl AC_DEFUN([AC_LIBTOOL_F77], [])
dnl AC_DEFUN([AC_LIBTOOL_FC], [])
dnl AC_DEFUN([AC_LIBTOOL_GCJ], [])
dnl AC_DEFUN([AC_LIBTOOL_RC], [])
# _LT_TAG_COMPILER
# ----------------
m4_defun([_LT_TAG_COMPILER],
[AC_REQUIRE([AC_PROG_CC])dnl
_LT_DECL([LTCC], [CC], [1], [A C compiler])dnl
_LT_DECL([LTCFLAGS], [CFLAGS], [1], [LTCC compiler flags])dnl
_LT_TAGDECL([CC], [compiler], [1], [A language specific compiler])dnl
_LT_TAGDECL([with_gcc], [GCC], [0], [Is the compiler the GNU compiler?])dnl
# If no C compiler was specified, use CC.
LTCC=${LTCC-"$CC"}
# If no C compiler flags were specified, use CFLAGS.
LTCFLAGS=${LTCFLAGS-"$CFLAGS"}
# Allow CC to be a program name with arguments.
compiler=$CC
])# _LT_TAG_COMPILER
# _LT_COMPILER_BOILERPLATE
# ------------------------
# Check for compiler boilerplate output or warnings with
# the simple compiler test code.
m4_defun([_LT_COMPILER_BOILERPLATE],
[m4_require([_LT_DECL_SED])dnl
ac_outfile=conftest.$ac_objext
echo "$lt_simple_compile_test_code" >conftest.$ac_ext
eval "$ac_compile" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err
_lt_compiler_boilerplate=`cat conftest.err`
$RM conftest*
])# _LT_COMPILER_BOILERPLATE
# _LT_LINKER_BOILERPLATE
# ----------------------
# Check for linker boilerplate output or warnings with
# the simple link test code.
m4_defun([_LT_LINKER_BOILERPLATE],
[m4_require([_LT_DECL_SED])dnl
ac_outfile=conftest.$ac_objext
echo "$lt_simple_link_test_code" >conftest.$ac_ext
eval "$ac_link" 2>&1 >/dev/null | $SED '/^$/d; /^ *+/d' >conftest.err
_lt_linker_boilerplate=`cat conftest.err`
$RM -r conftest*
])# _LT_LINKER_BOILERPLATE
# _LT_REQUIRED_DARWIN_CHECKS
# -------------------------
m4_defun_once([_LT_REQUIRED_DARWIN_CHECKS],[
case $host_os in
rhapsody* | darwin*)
AC_CHECK_TOOL([DSYMUTIL], [dsymutil], [:])
AC_CHECK_TOOL([NMEDIT], [nmedit], [:])
AC_CHECK_TOOL([LIPO], [lipo], [:])
AC_CHECK_TOOL([OTOOL], [otool], [:])
AC_CHECK_TOOL([OTOOL64], [otool64], [:])
_LT_DECL([], [DSYMUTIL], [1],
[Tool to manipulate archived DWARF debug symbol files on Mac OS X])
_LT_DECL([], [NMEDIT], [1],
[Tool to change global to local symbols on Mac OS X])
_LT_DECL([], [LIPO], [1],
[Tool to manipulate fat objects and archives on Mac OS X])
_LT_DECL([], [OTOOL], [1],
[ldd/readelf-like tool for Mach-O binaries on Mac OS X])
_LT_DECL([], [OTOOL64], [1],
[ldd/readelf-like tool for 64-bit Mach-O binaries on Mac OS X 10.4])
AC_CACHE_CHECK([for -single_module linker flag],[lt_cv_apple_cc_single_mod],
[lt_cv_apple_cc_single_mod=no
if test -z "$LT_MULTI_MODULE"; then
# By default we will add the -single_module flag. You can override
# by either setting the environment variable LT_MULTI_MODULE
# non-empty at configure time, or by adding -multi_module to the
# link flags.
rm -rf libconftest.dylib*
echo "int foo(void){return 1;}" > conftest.c
echo "$LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \
-dynamiclib -Wl,-single_module conftest.c" >&AS_MESSAGE_LOG_FD
$LTCC $LTCFLAGS $LDFLAGS -o libconftest.dylib \
-dynamiclib -Wl,-single_module conftest.c 2>conftest.err
_lt_result=$?
# If there is a non-empty error log, and "single_module"
# appears in it, assume the flag caused a linker warning
if test -s conftest.err && $GREP single_module conftest.err; then
cat conftest.err >&AS_MESSAGE_LOG_FD
# Otherwise, if the output was created with a 0 exit code from
# the compiler, it worked.
elif test -f libconftest.dylib && test 0 = "$_lt_result"; then
lt_cv_apple_cc_single_mod=yes
else
cat conftest.err >&AS_MESSAGE_LOG_FD
fi
rm -rf libconftest.dylib*
rm -f conftest.*
fi])
AC_CACHE_CHECK([for -exported_symbols_list linker flag],
[lt_cv_ld_exported_symbols_list],
[lt_cv_ld_exported_symbols_list=no
save_LDFLAGS=$LDFLAGS
echo "_main" > conftest.sym
LDFLAGS="$LDFLAGS -Wl,-exported_symbols_list,conftest.sym"
AC_LINK_IFELSE([AC_LANG_PROGRAM([],[])],
[lt_cv_ld_exported_symbols_list=yes],
[lt_cv_ld_exported_symbols_list=no])
LDFLAGS=$save_LDFLAGS
])
AC_CACHE_CHECK([for -force_load linker flag],[lt_cv_ld_force_load],
[lt_cv_ld_force_load=no
cat > conftest.c << _LT_EOF
int forced_loaded() { return 2;}
_LT_EOF
echo "$LTCC $LTCFLAGS -c -o conftest.o conftest.c" >&AS_MESSAGE_LOG_FD
$LTCC $LTCFLAGS -c -o conftest.o conftest.c 2>&AS_MESSAGE_LOG_FD
echo "$AR cru libconftest.a conftest.o" >&AS_MESSAGE_LOG_FD
$AR cru libconftest.a conftest.o 2>&AS_MESSAGE_LOG_FD
echo "$RANLIB libconftest.a" >&AS_MESSAGE_LOG_FD
$RANLIB libconftest.a 2>&AS_MESSAGE_LOG_FD
cat > conftest.c << _LT_EOF
int main() { return 0;}
_LT_EOF
echo "$LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a" >&AS_MESSAGE_LOG_FD
$LTCC $LTCFLAGS $LDFLAGS -o conftest conftest.c -Wl,-force_load,./libconftest.a 2>conftest.err
_lt_result=$?
if test -s conftest.err && $GREP force_load conftest.err; then
cat conftest.err >&AS_MESSAGE_LOG_FD
elif test -f conftest && test 0 = "$_lt_result" && $GREP forced_load conftest >/dev/null 2>&1; then
lt_cv_ld_force_load=yes
else
cat conftest.err >&AS_MESSAGE_LOG_FD
fi
rm -f conftest.err libconftest.a conftest conftest.c
rm -rf conftest.dSYM
])
case $host_os in
rhapsody* | darwin1.[[012]])
_lt_dar_allow_undefined='$wl-undefined ${wl}suppress' ;;
darwin1.*)
_lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
darwin*) # darwin 5.x on
# If running on 10.5 or later, the deployment target defaults
# to the OS version when on x86; on 10.4 the deployment
# target defaults to 10.4. Don't you love it?
case ${MACOSX_DEPLOYMENT_TARGET-10.0},$host in
10.0,*86*-darwin8*|10.0,*-darwin[[91]]*)
_lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;;
10.[[012]][[,.]]*)
_lt_dar_allow_undefined='$wl-flat_namespace $wl-undefined ${wl}suppress' ;;
10.*)
_lt_dar_allow_undefined='$wl-undefined ${wl}dynamic_lookup' ;;
esac
;;
esac
if test yes = "$lt_cv_apple_cc_single_mod"; then
_lt_dar_single_mod='$single_module'
fi
if test yes = "$lt_cv_ld_exported_symbols_list"; then
_lt_dar_export_syms=' $wl-exported_symbols_list,$output_objdir/$libname-symbols.expsym'
else
_lt_dar_export_syms='~$NMEDIT -s $output_objdir/$libname-symbols.expsym $lib'
fi
if test : != "$DSYMUTIL" && test no = "$lt_cv_ld_force_load"; then
_lt_dsymutil='~$DSYMUTIL $lib || :'
else
_lt_dsymutil=
fi
;;
esac
])
# _LT_DARWIN_LINKER_FEATURES([TAG])
# ---------------------------------
# Checks for linker and compiler features on darwin
m4_defun([_LT_DARWIN_LINKER_FEATURES],
[
m4_require([_LT_REQUIRED_DARWIN_CHECKS])
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_automatic, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=unsupported
if test yes = "$lt_cv_ld_force_load"; then
_LT_TAGVAR(whole_archive_flag_spec, $1)='`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience $wl-force_load,$conv\"; done; func_echo_all \"$new_convenience\"`'
m4_case([$1], [F77], [_LT_TAGVAR(compiler_needs_object, $1)=yes],
[FC], [_LT_TAGVAR(compiler_needs_object, $1)=yes])
else
_LT_TAGVAR(whole_archive_flag_spec, $1)=''
fi
_LT_TAGVAR(link_all_deplibs, $1)=yes
_LT_TAGVAR(allow_undefined_flag, $1)=$_lt_dar_allow_undefined
case $cc_basename in
ifort*|nagfor*) _lt_dar_can_shared=yes ;;
*) _lt_dar_can_shared=$GCC ;;
esac
if test yes = "$_lt_dar_can_shared"; then
output_verbose_link_cmd=func_echo_all
_LT_TAGVAR(archive_cmds, $1)="\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod$_lt_dsymutil"
_LT_TAGVAR(module_cmds, $1)="\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags$_lt_dsymutil"
_LT_TAGVAR(archive_expsym_cmds, $1)="sed 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$libobjs \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring $_lt_dar_single_mod$_lt_dar_export_syms$_lt_dsymutil"
_LT_TAGVAR(module_expsym_cmds, $1)="sed -e 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC \$allow_undefined_flag -o \$lib -bundle \$libobjs \$deplibs \$compiler_flags$_lt_dar_export_syms$_lt_dsymutil"
m4_if([$1], [CXX],
[ if test yes != "$lt_cv_apple_cc_single_mod"; then
_LT_TAGVAR(archive_cmds, $1)="\$CC -r -keep_private_externs -nostdlib -o \$lib-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$lib-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring$_lt_dsymutil"
_LT_TAGVAR(archive_expsym_cmds, $1)="sed 's|^|_|' < \$export_symbols > \$output_objdir/\$libname-symbols.expsym~\$CC -r -keep_private_externs -nostdlib -o \$lib-master.o \$libobjs~\$CC -dynamiclib \$allow_undefined_flag -o \$lib \$lib-master.o \$deplibs \$compiler_flags -install_name \$rpath/\$soname \$verstring$_lt_dar_export_syms$_lt_dsymutil"
fi
],[])
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
])
# _LT_SYS_MODULE_PATH_AIX([TAGNAME])
# ----------------------------------
# Links a minimal program and checks the executable
# for the system default hardcoded library path. In most cases,
# this is /usr/lib:/lib, but when the MPI compilers are used
# the location of the communication and MPI libs are included too.
# If we don't find anything, use the default library path according
# to the aix ld manual.
# Store the results from the different compilers for each TAGNAME.
# Allow overriding them for all tags through lt_cv_aix_libpath.
m4_defun([_LT_SYS_MODULE_PATH_AIX],
[m4_require([_LT_DECL_SED])dnl
if test set = "${lt_cv_aix_libpath+set}"; then
aix_libpath=$lt_cv_aix_libpath
else
AC_CACHE_VAL([_LT_TAGVAR([lt_cv_aix_libpath_], [$1])],
[AC_LINK_IFELSE([AC_LANG_PROGRAM],[
lt_aix_libpath_sed='[
/Import File Strings/,/^$/ {
/^0/ {
s/^0 *\([^ ]*\) *$/\1/
p
}
}]'
_LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -H conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
# Check for a 64-bit object if we didn't find anything.
if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then
_LT_TAGVAR([lt_cv_aix_libpath_], [$1])=`dump -HX64 conftest$ac_exeext 2>/dev/null | $SED -n -e "$lt_aix_libpath_sed"`
fi],[])
if test -z "$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])"; then
_LT_TAGVAR([lt_cv_aix_libpath_], [$1])=/usr/lib:/lib
fi
])
aix_libpath=$_LT_TAGVAR([lt_cv_aix_libpath_], [$1])
fi
])# _LT_SYS_MODULE_PATH_AIX
# _LT_SHELL_INIT(ARG)
# -------------------
m4_define([_LT_SHELL_INIT],
[m4_divert_text([M4SH-INIT], [$1
])])# _LT_SHELL_INIT
# _LT_PROG_ECHO_BACKSLASH
# -----------------------
# Find how we can fake an echo command that does not interpret backslash.
# In particular, with Autoconf 2.60 or later we add some code to the start
# of the generated configure script that will find a shell with a builtin
# printf (that we can use as an echo command).
m4_defun([_LT_PROG_ECHO_BACKSLASH],
[ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\'
ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO
ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
AC_MSG_CHECKING([how to print strings])
# Test print first, because it will be a builtin if present.
if test "X`( print -r -- -n ) 2>/dev/null`" = X-n && \
test "X`print -r -- $ECHO 2>/dev/null`" = "X$ECHO"; then
ECHO='print -r --'
elif test "X`printf %s $ECHO 2>/dev/null`" = "X$ECHO"; then
ECHO='printf %s\n'
else
# Use this function as a fallback that always works.
func_fallback_echo ()
{
eval 'cat <<_LTECHO_EOF
$[]1
_LTECHO_EOF'
}
ECHO='func_fallback_echo'
fi
# func_echo_all arg...
# Invoke $ECHO with all args, space-separated.
func_echo_all ()
{
$ECHO "$*"
}
case $ECHO in
printf*) AC_MSG_RESULT([printf]) ;;
print*) AC_MSG_RESULT([print -r]) ;;
*) AC_MSG_RESULT([cat]) ;;
esac
m4_ifdef([_AS_DETECT_SUGGESTED],
[_AS_DETECT_SUGGESTED([
test -n "${ZSH_VERSION+set}${BASH_VERSION+set}" || (
ECHO='\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\\'
ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO
ECHO=$ECHO$ECHO$ECHO$ECHO$ECHO$ECHO
PATH=/empty FPATH=/empty; export PATH FPATH
test "X`printf %s $ECHO`" = "X$ECHO" \
|| test "X`print -r -- $ECHO`" = "X$ECHO" )])])
_LT_DECL([], [SHELL], [1], [Shell to use when invoking shell scripts])
_LT_DECL([], [ECHO], [1], [An echo program that protects backslashes])
])# _LT_PROG_ECHO_BACKSLASH
# _LT_WITH_SYSROOT
# ----------------
AC_DEFUN([_LT_WITH_SYSROOT],
[AC_MSG_CHECKING([for sysroot])
AC_ARG_WITH([sysroot],
[AS_HELP_STRING([--with-sysroot@<:@=DIR@:>@],
[Search for dependent libraries within DIR (or the compiler's sysroot
if not specified).])],
[], [with_sysroot=no])
dnl lt_sysroot will always be passed unquoted. We quote it here
dnl in case the user passed a directory name.
lt_sysroot=
case $with_sysroot in #(
yes)
if test yes = "$GCC"; then
lt_sysroot=`$CC --print-sysroot 2>/dev/null`
fi
;; #(
/*)
lt_sysroot=`echo "$with_sysroot" | sed -e "$sed_quote_subst"`
;; #(
no|'')
;; #(
*)
AC_MSG_RESULT([$with_sysroot])
AC_MSG_ERROR([The sysroot must be an absolute path.])
;;
esac
AC_MSG_RESULT([${lt_sysroot:-no}])
_LT_DECL([], [lt_sysroot], [0], [The root where to search for ]dnl
[dependent libraries, and where our libraries should be installed.])])
# _LT_ENABLE_LOCK
# ---------------
m4_defun([_LT_ENABLE_LOCK],
[AC_ARG_ENABLE([libtool-lock],
[AS_HELP_STRING([--disable-libtool-lock],
[avoid locking (might break parallel builds)])])
test no = "$enable_libtool_lock" || enable_libtool_lock=yes
# Some flags need to be propagated to the compiler or linker for good
# libtool support.
case $host in
ia64-*-hpux*)
# Find out what ABI is being produced by ac_compile, and set mode
# options accordingly.
echo 'int i;' > conftest.$ac_ext
if AC_TRY_EVAL(ac_compile); then
case `/usr/bin/file conftest.$ac_objext` in
*ELF-32*)
HPUX_IA64_MODE=32
;;
*ELF-64*)
HPUX_IA64_MODE=64
;;
esac
fi
rm -rf conftest*
;;
*-*-irix6*)
# Find out what ABI is being produced by ac_compile, and set linker
# options accordingly.
echo '[#]line '$LINENO' "configure"' > conftest.$ac_ext
if AC_TRY_EVAL(ac_compile); then
if test yes = "$lt_cv_prog_gnu_ld"; then
case `/usr/bin/file conftest.$ac_objext` in
*32-bit*)
LD="${LD-ld} -melf32bsmip"
;;
*N32*)
LD="${LD-ld} -melf32bmipn32"
;;
*64-bit*)
LD="${LD-ld} -melf64bmip"
;;
esac
else
case `/usr/bin/file conftest.$ac_objext` in
*32-bit*)
LD="${LD-ld} -32"
;;
*N32*)
LD="${LD-ld} -n32"
;;
*64-bit*)
LD="${LD-ld} -64"
;;
esac
fi
fi
rm -rf conftest*
;;
mips64*-*linux*)
# Find out what ABI is being produced by ac_compile, and set linker
# options accordingly.
echo '[#]line '$LINENO' "configure"' > conftest.$ac_ext
if AC_TRY_EVAL(ac_compile); then
emul=elf
case `/usr/bin/file conftest.$ac_objext` in
*32-bit*)
emul="${emul}32"
;;
*64-bit*)
emul="${emul}64"
;;
esac
case `/usr/bin/file conftest.$ac_objext` in
*MSB*)
emul="${emul}btsmip"
;;
*LSB*)
emul="${emul}ltsmip"
;;
esac
case `/usr/bin/file conftest.$ac_objext` in
*N32*)
emul="${emul}n32"
;;
esac
LD="${LD-ld} -m $emul"
fi
rm -rf conftest*
;;
x86_64-*kfreebsd*-gnu|x86_64-*linux*|powerpc*-*linux*| \
s390*-*linux*|s390*-*tpf*|sparc*-*linux*)
# Find out what ABI is being produced by ac_compile, and set linker
# options accordingly. Note that the listed cases only cover the
# situations where additional linker options are needed (such as when
# doing 32-bit compilation for a host where ld defaults to 64-bit, or
# vice versa); the common cases where no linker options are needed do
# not appear in the list.
echo 'int i;' > conftest.$ac_ext
if AC_TRY_EVAL(ac_compile); then
case `/usr/bin/file conftest.o` in
*32-bit*)
case $host in
x86_64-*kfreebsd*-gnu)
LD="${LD-ld} -m elf_i386_fbsd"
;;
x86_64-*linux*)
case `/usr/bin/file conftest.o` in
*x86-64*)
LD="${LD-ld} -m elf32_x86_64"
;;
*)
LD="${LD-ld} -m elf_i386"
;;
esac
;;
powerpc64le-*linux*)
LD="${LD-ld} -m elf32lppclinux"
;;
powerpc64-*linux*)
LD="${LD-ld} -m elf32ppclinux"
;;
s390x-*linux*)
LD="${LD-ld} -m elf_s390"
;;
sparc64-*linux*)
LD="${LD-ld} -m elf32_sparc"
;;
esac
;;
*64-bit*)
case $host in
x86_64-*kfreebsd*-gnu)
LD="${LD-ld} -m elf_x86_64_fbsd"
;;
x86_64-*linux*)
LD="${LD-ld} -m elf_x86_64"
;;
powerpcle-*linux*)
LD="${LD-ld} -m elf64lppc"
;;
powerpc-*linux*)
LD="${LD-ld} -m elf64ppc"
;;
s390*-*linux*|s390*-*tpf*)
LD="${LD-ld} -m elf64_s390"
;;
sparc*-*linux*)
LD="${LD-ld} -m elf64_sparc"
;;
esac
;;
esac
fi
rm -rf conftest*
;;
*-*-sco3.2v5*)
# On SCO OpenServer 5, we need -belf to get full-featured binaries.
SAVE_CFLAGS=$CFLAGS
CFLAGS="$CFLAGS -belf"
AC_CACHE_CHECK([whether the C compiler needs -belf], lt_cv_cc_needs_belf,
[AC_LANG_PUSH(C)
AC_LINK_IFELSE([AC_LANG_PROGRAM([[]],[[]])],[lt_cv_cc_needs_belf=yes],[lt_cv_cc_needs_belf=no])
AC_LANG_POP])
if test yes != "$lt_cv_cc_needs_belf"; then
# this is probably gcc 2.8.0, egcs 1.0 or newer; no need for -belf
CFLAGS=$SAVE_CFLAGS
fi
;;
*-*solaris*)
# Find out what ABI is being produced by ac_compile, and set linker
# options accordingly.
echo 'int i;' > conftest.$ac_ext
if AC_TRY_EVAL(ac_compile); then
case `/usr/bin/file conftest.o` in
*64-bit*)
case $lt_cv_prog_gnu_ld in
yes*)
case $host in
i?86-*-solaris*|x86_64-*-solaris*)
LD="${LD-ld} -m elf_x86_64"
;;
sparc*-*-solaris*)
LD="${LD-ld} -m elf64_sparc"
;;
esac
# GNU ld 2.21 introduced _sol2 emulations. Use them if available.
if ${LD-ld} -V | grep _sol2 >/dev/null 2>&1; then
LD=${LD-ld}_sol2
fi
;;
*)
if ${LD-ld} -64 -r -o conftest2.o conftest.o >/dev/null 2>&1; then
LD="${LD-ld} -64"
fi
;;
esac
;;
esac
fi
rm -rf conftest*
;;
esac
need_locks=$enable_libtool_lock
])# _LT_ENABLE_LOCK
# _LT_PROG_AR
# -----------
m4_defun([_LT_PROG_AR],
[AC_CHECK_TOOLS(AR, [ar], false)
: ${AR=ar}
: ${AR_FLAGS=cru}
_LT_DECL([], [AR], [1], [The archiver])
_LT_DECL([], [AR_FLAGS], [1], [Flags to create an archive])
AC_CACHE_CHECK([for archiver @FILE support], [lt_cv_ar_at_file],
[lt_cv_ar_at_file=no
AC_COMPILE_IFELSE([AC_LANG_PROGRAM],
[echo conftest.$ac_objext > conftest.lst
lt_ar_try='$AR $AR_FLAGS libconftest.a @conftest.lst >&AS_MESSAGE_LOG_FD'
AC_TRY_EVAL([lt_ar_try])
if test 0 -eq "$ac_status"; then
# Ensure the archiver fails upon bogus file names.
rm -f conftest.$ac_objext libconftest.a
AC_TRY_EVAL([lt_ar_try])
if test 0 -ne "$ac_status"; then
lt_cv_ar_at_file=@
fi
fi
rm -f conftest.* libconftest.a
])
])
if test no = "$lt_cv_ar_at_file"; then
archiver_list_spec=
else
archiver_list_spec=$lt_cv_ar_at_file
fi
_LT_DECL([], [archiver_list_spec], [1],
[How to feed a file listing to the archiver])
])# _LT_PROG_AR
# _LT_CMD_OLD_ARCHIVE
# -------------------
m4_defun([_LT_CMD_OLD_ARCHIVE],
[_LT_PROG_AR
AC_CHECK_TOOL(STRIP, strip, :)
test -z "$STRIP" && STRIP=:
_LT_DECL([], [STRIP], [1], [A symbol stripping program])
AC_CHECK_TOOL(RANLIB, ranlib, :)
test -z "$RANLIB" && RANLIB=:
_LT_DECL([], [RANLIB], [1],
[Commands used to install an old-style archive])
# Determine commands to create old-style static archives.
old_archive_cmds='$AR $AR_FLAGS $oldlib$oldobjs'
old_postinstall_cmds='chmod 644 $oldlib'
old_postuninstall_cmds=
if test -n "$RANLIB"; then
case $host_os in
bitrig* | openbsd*)
old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB -t \$tool_oldlib"
;;
*)
old_postinstall_cmds="$old_postinstall_cmds~\$RANLIB \$tool_oldlib"
;;
esac
old_archive_cmds="$old_archive_cmds~\$RANLIB \$tool_oldlib"
fi
case $host_os in
darwin*)
lock_old_archive_extraction=yes ;;
*)
lock_old_archive_extraction=no ;;
esac
_LT_DECL([], [old_postinstall_cmds], [2])
_LT_DECL([], [old_postuninstall_cmds], [2])
_LT_TAGDECL([], [old_archive_cmds], [2],
[Commands used to build an old-style archive])
_LT_DECL([], [lock_old_archive_extraction], [0],
[Whether to use a lock for old archive extraction])
])# _LT_CMD_OLD_ARCHIVE
# _LT_COMPILER_OPTION(MESSAGE, VARIABLE-NAME, FLAGS,
# [OUTPUT-FILE], [ACTION-SUCCESS], [ACTION-FAILURE])
# ----------------------------------------------------------------
# Check whether the given compiler option works
AC_DEFUN([_LT_COMPILER_OPTION],
[m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_DECL_SED])dnl
AC_CACHE_CHECK([$1], [$2],
[$2=no
m4_if([$4], , [ac_outfile=conftest.$ac_objext], [ac_outfile=$4])
echo "$lt_simple_compile_test_code" > conftest.$ac_ext
lt_compiler_flag="$3" ## exclude from sc_useless_quotes_in_assignment
# Insert the option either (1) after the last *FLAGS variable, or
# (2) before a word containing "conftest.", or (3) at the end.
# Note that $ac_compile itself does not contain backslashes and begins
# with a dollar sign (not a hyphen), so the echo should work correctly.
# The option is referenced via a variable to avoid confusing sed.
lt_compile=`echo "$ac_compile" | $SED \
-e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
-e 's: [[^ ]]*conftest\.: $lt_compiler_flag&:; t' \
-e 's:$: $lt_compiler_flag:'`
(eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&AS_MESSAGE_LOG_FD)
(eval "$lt_compile" 2>conftest.err)
ac_status=$?
cat conftest.err >&AS_MESSAGE_LOG_FD
echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD
if (exit $ac_status) && test -s "$ac_outfile"; then
# The compiler can only warn and ignore the option if not recognized
# So say no if there are warnings other than the usual output.
$ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' >conftest.exp
$SED '/^$/d; /^ *+/d' conftest.err >conftest.er2
if test ! -s conftest.er2 || diff conftest.exp conftest.er2 >/dev/null; then
$2=yes
fi
fi
$RM conftest*
])
if test yes = "[$]$2"; then
m4_if([$5], , :, [$5])
else
m4_if([$6], , :, [$6])
fi
])# _LT_COMPILER_OPTION
# Old name:
AU_ALIAS([AC_LIBTOOL_COMPILER_OPTION], [_LT_COMPILER_OPTION])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_LIBTOOL_COMPILER_OPTION], [])
# _LT_LINKER_OPTION(MESSAGE, VARIABLE-NAME, FLAGS,
# [ACTION-SUCCESS], [ACTION-FAILURE])
# ----------------------------------------------------
# Check whether the given linker option works
AC_DEFUN([_LT_LINKER_OPTION],
[m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_DECL_SED])dnl
AC_CACHE_CHECK([$1], [$2],
[$2=no
save_LDFLAGS=$LDFLAGS
LDFLAGS="$LDFLAGS $3"
echo "$lt_simple_link_test_code" > conftest.$ac_ext
if (eval $ac_link 2>conftest.err) && test -s conftest$ac_exeext; then
# The linker can only warn and ignore the option if not recognized
# So say no if there are warnings
if test -s conftest.err; then
# Append any errors to the config.log.
cat conftest.err 1>&AS_MESSAGE_LOG_FD
$ECHO "$_lt_linker_boilerplate" | $SED '/^$/d' > conftest.exp
$SED '/^$/d; /^ *+/d' conftest.err >conftest.er2
if diff conftest.exp conftest.er2 >/dev/null; then
$2=yes
fi
else
$2=yes
fi
fi
$RM -r conftest*
LDFLAGS=$save_LDFLAGS
])
if test yes = "[$]$2"; then
m4_if([$4], , :, [$4])
else
m4_if([$5], , :, [$5])
fi
])# _LT_LINKER_OPTION
# Old name:
AU_ALIAS([AC_LIBTOOL_LINKER_OPTION], [_LT_LINKER_OPTION])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_LIBTOOL_LINKER_OPTION], [])
# LT_CMD_MAX_LEN
#---------------
AC_DEFUN([LT_CMD_MAX_LEN],
[AC_REQUIRE([AC_CANONICAL_HOST])dnl
# find the maximum length of command line arguments
AC_MSG_CHECKING([the maximum length of command line arguments])
AC_CACHE_VAL([lt_cv_sys_max_cmd_len], [dnl
i=0
teststring=ABCD
case $build_os in
msdosdjgpp*)
# On DJGPP, this test can blow up pretty badly due to problems in libc
# (any single argument exceeding 2000 bytes causes a buffer overrun
# during glob expansion). Even if it were fixed, the result of this
# check would be larger than it should be.
lt_cv_sys_max_cmd_len=12288; # 12K is about right
;;
gnu*)
# Under GNU Hurd, this test is not required because there is
# no limit to the length of command line arguments.
# Libtool will interpret -1 as no limit whatsoever
lt_cv_sys_max_cmd_len=-1;
;;
cygwin* | mingw* | cegcc*)
# On Win9x/ME, this test blows up -- it succeeds, but takes
# about 5 minutes as the teststring grows exponentially.
# Worse, since 9x/ME are not pre-emptively multitasking,
# you end up with a "frozen" computer, even though with patience
# the test eventually succeeds (with a max line length of 256k).
# Instead, let's just punt: use the minimum linelength reported by
# all of the supported platforms: 8192 (on NT/2K/XP).
lt_cv_sys_max_cmd_len=8192;
;;
mint*)
# On MiNT this can take a long time and run out of memory.
lt_cv_sys_max_cmd_len=8192;
;;
amigaos*)
# On AmigaOS with pdksh, this test takes hours, literally.
# So we just punt and use a minimum line length of 8192.
lt_cv_sys_max_cmd_len=8192;
;;
bitrig* | darwin* | dragonfly* | freebsd* | netbsd* | openbsd*)
# This has been around since 386BSD, at least. Likely further.
if test -x /sbin/sysctl; then
lt_cv_sys_max_cmd_len=`/sbin/sysctl -n kern.argmax`
elif test -x /usr/sbin/sysctl; then
lt_cv_sys_max_cmd_len=`/usr/sbin/sysctl -n kern.argmax`
else
lt_cv_sys_max_cmd_len=65536 # usable default for all BSDs
fi
# And add a safety zone
lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4`
lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3`
;;
interix*)
# We know the value 262144 and hardcode it with a safety zone (like BSD)
lt_cv_sys_max_cmd_len=196608
;;
os2*)
# The test takes a long time on OS/2.
lt_cv_sys_max_cmd_len=8192
;;
osf*)
# Dr. Hans Ekkehard Plesser reports seeing a kernel panic running configure
# due to this test when exec_disable_arg_limit is 1 on Tru64. It is not
# nice to cause kernel panics, so let's avoid the loop below.
# First set a reasonable default.
lt_cv_sys_max_cmd_len=16384
#
if test -x /sbin/sysconfig; then
case `/sbin/sysconfig -q proc exec_disable_arg_limit` in
*1*) lt_cv_sys_max_cmd_len=-1 ;;
esac
fi
;;
sco3.2v5*)
lt_cv_sys_max_cmd_len=102400
;;
sysv5* | sco5v6* | sysv4.2uw2*)
kargmax=`grep ARG_MAX /etc/conf/cf.d/stune 2>/dev/null`
if test -n "$kargmax"; then
lt_cv_sys_max_cmd_len=`echo $kargmax | sed 's/.*[[ ]]//'`
else
lt_cv_sys_max_cmd_len=32768
fi
;;
*)
lt_cv_sys_max_cmd_len=`(getconf ARG_MAX) 2> /dev/null`
if test -n "$lt_cv_sys_max_cmd_len" && \
test undefined != "$lt_cv_sys_max_cmd_len"; then
lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 4`
lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \* 3`
else
# Make teststring a little bigger before we do anything with it.
# a 1K string should be a reasonable start.
for i in 1 2 3 4 5 6 7 8; do
teststring=$teststring$teststring
done
SHELL=${SHELL-${CONFIG_SHELL-/bin/sh}}
# If test is not a shell built-in, we'll probably end up computing a
# maximum length that is only half of the actual maximum length, but
# we can't tell.
while { test X`env echo "$teststring$teststring" 2>/dev/null` \
= "X$teststring$teststring"; } >/dev/null 2>&1 &&
test 17 != "$i" # 1/2 MB should be enough
do
i=`expr $i + 1`
teststring=$teststring$teststring
done
# Only check the string length outside the loop.
lt_cv_sys_max_cmd_len=`expr "X$teststring" : ".*" 2>&1`
teststring=
# Add a significant safety factor because C++ compilers can tack on
# massive amounts of additional arguments before passing them to the
# linker. It appears as though 1/2 is a usable value.
lt_cv_sys_max_cmd_len=`expr $lt_cv_sys_max_cmd_len \/ 2`
fi
;;
esac
])
if test -n "$lt_cv_sys_max_cmd_len"; then
AC_MSG_RESULT($lt_cv_sys_max_cmd_len)
else
AC_MSG_RESULT(none)
fi
max_cmd_len=$lt_cv_sys_max_cmd_len
_LT_DECL([], [max_cmd_len], [0],
[What is the maximum length of a command?])
])# LT_CMD_MAX_LEN
# Old name:
AU_ALIAS([AC_LIBTOOL_SYS_MAX_CMD_LEN], [LT_CMD_MAX_LEN])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_LIBTOOL_SYS_MAX_CMD_LEN], [])
# _LT_HEADER_DLFCN
# ----------------
m4_defun([_LT_HEADER_DLFCN],
[AC_CHECK_HEADERS([dlfcn.h], [], [], [AC_INCLUDES_DEFAULT])dnl
])# _LT_HEADER_DLFCN
# _LT_TRY_DLOPEN_SELF (ACTION-IF-TRUE, ACTION-IF-TRUE-W-USCORE,
# ACTION-IF-FALSE, ACTION-IF-CROSS-COMPILING)
# ----------------------------------------------------------------
m4_defun([_LT_TRY_DLOPEN_SELF],
[m4_require([_LT_HEADER_DLFCN])dnl
if test yes = "$cross_compiling"; then :
[$4]
else
lt_dlunknown=0; lt_dlno_uscore=1; lt_dlneed_uscore=2
lt_status=$lt_dlunknown
cat > conftest.$ac_ext <<_LT_EOF
[#line $LINENO "configure"
#include "confdefs.h"
#if HAVE_DLFCN_H
#include <dlfcn.h>
#endif
#include <stdio.h>
#ifdef RTLD_GLOBAL
# define LT_DLGLOBAL RTLD_GLOBAL
#else
# ifdef DL_GLOBAL
# define LT_DLGLOBAL DL_GLOBAL
# else
# define LT_DLGLOBAL 0
# endif
#endif
/* We may have to define LT_DLLAZY_OR_NOW in the command line if we
find out it does not work in some platform. */
#ifndef LT_DLLAZY_OR_NOW
# ifdef RTLD_LAZY
# define LT_DLLAZY_OR_NOW RTLD_LAZY
# else
# ifdef DL_LAZY
# define LT_DLLAZY_OR_NOW DL_LAZY
# else
# ifdef RTLD_NOW
# define LT_DLLAZY_OR_NOW RTLD_NOW
# else
# ifdef DL_NOW
# define LT_DLLAZY_OR_NOW DL_NOW
# else
# define LT_DLLAZY_OR_NOW 0
# endif
# endif
# endif
# endif
#endif
/* When -fvisibility=hidden is used, assume the code has been annotated
correspondingly for the symbols needed. */
#if defined __GNUC__ && (((__GNUC__ == 3) && (__GNUC_MINOR__ >= 3)) || (__GNUC__ > 3))
int fnord () __attribute__((visibility("default")));
#endif
int fnord () { return 42; }
int main ()
{
void *self = dlopen (0, LT_DLGLOBAL|LT_DLLAZY_OR_NOW);
int status = $lt_dlunknown;
if (self)
{
if (dlsym (self,"fnord")) status = $lt_dlno_uscore;
else
{
if (dlsym( self,"_fnord")) status = $lt_dlneed_uscore;
else puts (dlerror ());
}
/* dlclose (self); */
}
else
puts (dlerror ());
return status;
}]
_LT_EOF
if AC_TRY_EVAL(ac_link) && test -s "conftest$ac_exeext" 2>/dev/null; then
(./conftest; exit; ) >&AS_MESSAGE_LOG_FD 2>/dev/null
lt_status=$?
case x$lt_status in
x$lt_dlno_uscore) $1 ;;
x$lt_dlneed_uscore) $2 ;;
x$lt_dlunknown|x*) $3 ;;
esac
else :
# compilation failed
$3
fi
fi
rm -fr conftest*
])# _LT_TRY_DLOPEN_SELF
# LT_SYS_DLOPEN_SELF
# ------------------
AC_DEFUN([LT_SYS_DLOPEN_SELF],
[m4_require([_LT_HEADER_DLFCN])dnl
if test yes != "$enable_dlopen"; then
enable_dlopen=unknown
enable_dlopen_self=unknown
enable_dlopen_self_static=unknown
else
lt_cv_dlopen=no
lt_cv_dlopen_libs=
case $host_os in
beos*)
lt_cv_dlopen=load_add_on
lt_cv_dlopen_libs=
lt_cv_dlopen_self=yes
;;
mingw* | pw32* | cegcc*)
lt_cv_dlopen=LoadLibrary
lt_cv_dlopen_libs=
;;
cygwin*)
lt_cv_dlopen=dlopen
lt_cv_dlopen_libs=
;;
darwin*)
# if libdl is installed we need to link against it
AC_CHECK_LIB([dl], [dlopen],
[lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-ldl],[
lt_cv_dlopen=dyld
lt_cv_dlopen_libs=
lt_cv_dlopen_self=yes
])
;;
tpf*)
# Don't try to run any link tests for TPF. We know it's impossible
# because TPF is a cross-compiler, and we know how we open DSOs.
lt_cv_dlopen=dlopen
lt_cv_dlopen_libs=
lt_cv_dlopen_self=no
;;
*)
AC_CHECK_FUNC([shl_load],
[lt_cv_dlopen=shl_load],
[AC_CHECK_LIB([dld], [shl_load],
[lt_cv_dlopen=shl_load lt_cv_dlopen_libs=-ldld],
[AC_CHECK_FUNC([dlopen],
[lt_cv_dlopen=dlopen],
[AC_CHECK_LIB([dl], [dlopen],
[lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-ldl],
[AC_CHECK_LIB([svld], [dlopen],
[lt_cv_dlopen=dlopen lt_cv_dlopen_libs=-lsvld],
[AC_CHECK_LIB([dld], [dld_link],
[lt_cv_dlopen=dld_link lt_cv_dlopen_libs=-ldld])
])
])
])
])
])
;;
esac
if test no = "$lt_cv_dlopen"; then
enable_dlopen=no
else
enable_dlopen=yes
fi
case $lt_cv_dlopen in
dlopen)
save_CPPFLAGS=$CPPFLAGS
test yes = "$ac_cv_header_dlfcn_h" && CPPFLAGS="$CPPFLAGS -DHAVE_DLFCN_H"
save_LDFLAGS=$LDFLAGS
wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $export_dynamic_flag_spec\"
save_LIBS=$LIBS
LIBS="$lt_cv_dlopen_libs $LIBS"
AC_CACHE_CHECK([whether a program can dlopen itself],
lt_cv_dlopen_self, [dnl
_LT_TRY_DLOPEN_SELF(
lt_cv_dlopen_self=yes, lt_cv_dlopen_self=yes,
lt_cv_dlopen_self=no, lt_cv_dlopen_self=cross)
])
if test yes = "$lt_cv_dlopen_self"; then
wl=$lt_prog_compiler_wl eval LDFLAGS=\"\$LDFLAGS $lt_prog_compiler_static\"
AC_CACHE_CHECK([whether a statically linked program can dlopen itself],
lt_cv_dlopen_self_static, [dnl
_LT_TRY_DLOPEN_SELF(
lt_cv_dlopen_self_static=yes, lt_cv_dlopen_self_static=yes,
lt_cv_dlopen_self_static=no, lt_cv_dlopen_self_static=cross)
])
fi
CPPFLAGS=$save_CPPFLAGS
LDFLAGS=$save_LDFLAGS
LIBS=$save_LIBS
;;
esac
case $lt_cv_dlopen_self in
yes|no) enable_dlopen_self=$lt_cv_dlopen_self ;;
*) enable_dlopen_self=unknown ;;
esac
case $lt_cv_dlopen_self_static in
yes|no) enable_dlopen_self_static=$lt_cv_dlopen_self_static ;;
*) enable_dlopen_self_static=unknown ;;
esac
fi
_LT_DECL([dlopen_support], [enable_dlopen], [0],
[Whether dlopen is supported])
_LT_DECL([dlopen_self], [enable_dlopen_self], [0],
[Whether dlopen of programs is supported])
_LT_DECL([dlopen_self_static], [enable_dlopen_self_static], [0],
[Whether dlopen of statically linked programs is supported])
])# LT_SYS_DLOPEN_SELF
# Old name:
AU_ALIAS([AC_LIBTOOL_DLOPEN_SELF], [LT_SYS_DLOPEN_SELF])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_LIBTOOL_DLOPEN_SELF], [])
# _LT_COMPILER_C_O([TAGNAME])
# ---------------------------
# Check to see if options -c and -o are simultaneously supported by compiler.
# This macro does not hard code the compiler like AC_PROG_CC_C_O.
m4_defun([_LT_COMPILER_C_O],
[m4_require([_LT_DECL_SED])dnl
m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_TAG_COMPILER])dnl
AC_CACHE_CHECK([if $compiler supports -c -o file.$ac_objext],
[_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)],
[_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)=no
$RM -r conftest 2>/dev/null
mkdir conftest
cd conftest
mkdir out
echo "$lt_simple_compile_test_code" > conftest.$ac_ext
lt_compiler_flag="-o out/conftest2.$ac_objext"
# Insert the option either (1) after the last *FLAGS variable, or
# (2) before a word containing "conftest.", or (3) at the end.
# Note that $ac_compile itself does not contain backslashes and begins
# with a dollar sign (not a hyphen), so the echo should work correctly.
lt_compile=`echo "$ac_compile" | $SED \
-e 's:.*FLAGS}\{0,1\} :&$lt_compiler_flag :; t' \
-e 's: [[^ ]]*conftest\.: $lt_compiler_flag&:; t' \
-e 's:$: $lt_compiler_flag:'`
(eval echo "\"\$as_me:$LINENO: $lt_compile\"" >&AS_MESSAGE_LOG_FD)
(eval "$lt_compile" 2>out/conftest.err)
ac_status=$?
cat out/conftest.err >&AS_MESSAGE_LOG_FD
echo "$as_me:$LINENO: \$? = $ac_status" >&AS_MESSAGE_LOG_FD
if (exit $ac_status) && test -s out/conftest2.$ac_objext
then
# The compiler can only warn and ignore the option if not recognized
# So say no if there are warnings
$ECHO "$_lt_compiler_boilerplate" | $SED '/^$/d' > out/conftest.exp
$SED '/^$/d; /^ *+/d' out/conftest.err >out/conftest.er2
if test ! -s out/conftest.er2 || diff out/conftest.exp out/conftest.er2 >/dev/null; then
_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)=yes
fi
fi
chmod u+w . 2>&AS_MESSAGE_LOG_FD
$RM conftest*
# SGI C++ compiler will create directory out/ii_files/ for
# template instantiation
test -d out/ii_files && $RM out/ii_files/* && rmdir out/ii_files
$RM out/* && rmdir out
cd ..
$RM -r conftest
$RM conftest*
])
_LT_TAGDECL([compiler_c_o], [lt_cv_prog_compiler_c_o], [1],
[Does compiler simultaneously support -c and -o options?])
])# _LT_COMPILER_C_O
# _LT_COMPILER_FILE_LOCKS([TAGNAME])
# ----------------------------------
# Check to see if we can do hard links to lock some files if needed
m4_defun([_LT_COMPILER_FILE_LOCKS],
[m4_require([_LT_ENABLE_LOCK])dnl
m4_require([_LT_FILEUTILS_DEFAULTS])dnl
_LT_COMPILER_C_O([$1])
hard_links=nottested
if test no = "$_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)" && test no != "$need_locks"; then
# do not overwrite the value of need_locks provided by the user
AC_MSG_CHECKING([if we can lock with hard links])
hard_links=yes
$RM conftest*
ln conftest.a conftest.b 2>/dev/null && hard_links=no
touch conftest.a
ln conftest.a conftest.b 2>&5 || hard_links=no
ln conftest.a conftest.b 2>/dev/null && hard_links=no
AC_MSG_RESULT([$hard_links])
if test no = "$hard_links"; then
AC_MSG_WARN(['$CC' does not support '-c -o', so 'make -j' may be unsafe])
need_locks=warn
fi
else
need_locks=no
fi
_LT_DECL([], [need_locks], [1], [Must we lock files when doing compilation?])
])# _LT_COMPILER_FILE_LOCKS
# _LT_CHECK_OBJDIR
# ----------------
m4_defun([_LT_CHECK_OBJDIR],
[AC_CACHE_CHECK([for objdir], [lt_cv_objdir],
[rm -f .libs 2>/dev/null
mkdir .libs 2>/dev/null
if test -d .libs; then
lt_cv_objdir=.libs
else
# MS-DOS does not allow filenames that begin with a dot.
lt_cv_objdir=_libs
fi
rmdir .libs 2>/dev/null])
objdir=$lt_cv_objdir
_LT_DECL([], [objdir], [0],
[The name of the directory that contains temporary libtool files])dnl
m4_pattern_allow([LT_OBJDIR])dnl
AC_DEFINE_UNQUOTED([LT_OBJDIR], "$lt_cv_objdir/",
[Define to the sub-directory where libtool stores uninstalled libraries.])
])# _LT_CHECK_OBJDIR
# _LT_LINKER_HARDCODE_LIBPATH([TAGNAME])
# --------------------------------------
# Check hardcoding attributes.
m4_defun([_LT_LINKER_HARDCODE_LIBPATH],
[AC_MSG_CHECKING([how to hardcode library paths into programs])
_LT_TAGVAR(hardcode_action, $1)=
if test -n "$_LT_TAGVAR(hardcode_libdir_flag_spec, $1)" ||
test -n "$_LT_TAGVAR(runpath_var, $1)" ||
test yes = "$_LT_TAGVAR(hardcode_automatic, $1)"; then
# We can hardcode non-existent directories.
if test no != "$_LT_TAGVAR(hardcode_direct, $1)" &&
# If the only mechanism to avoid hardcoding is shlibpath_var, we
# have to relink, otherwise we might link with an installed library
# when we should be linking with a yet-to-be-installed one
## test no != "$_LT_TAGVAR(hardcode_shlibpath_var, $1)" &&
test no != "$_LT_TAGVAR(hardcode_minus_L, $1)"; then
# Linking always hardcodes the temporary library directory.
_LT_TAGVAR(hardcode_action, $1)=relink
else
# We can link without hardcoding, and we can hardcode nonexisting dirs.
_LT_TAGVAR(hardcode_action, $1)=immediate
fi
else
# We cannot hardcode anything, or else we can only hardcode existing
# directories.
_LT_TAGVAR(hardcode_action, $1)=unsupported
fi
AC_MSG_RESULT([$_LT_TAGVAR(hardcode_action, $1)])
if test relink = "$_LT_TAGVAR(hardcode_action, $1)" ||
test yes = "$_LT_TAGVAR(inherit_rpath, $1)"; then
# Fast installation is not supported
enable_fast_install=no
elif test yes = "$shlibpath_overrides_runpath" ||
test no = "$enable_shared"; then
# Fast installation is not necessary
enable_fast_install=needless
fi
_LT_TAGDECL([], [hardcode_action], [0],
[How to hardcode a shared library path into an executable])
])# _LT_LINKER_HARDCODE_LIBPATH
# _LT_CMD_STRIPLIB
# ----------------
m4_defun([_LT_CMD_STRIPLIB],
[m4_require([_LT_DECL_EGREP])
striplib=
old_striplib=
AC_MSG_CHECKING([whether stripping libraries is possible])
if test -n "$STRIP" && $STRIP -V 2>&1 | $GREP "GNU strip" >/dev/null; then
test -z "$old_striplib" && old_striplib="$STRIP --strip-debug"
test -z "$striplib" && striplib="$STRIP --strip-unneeded"
AC_MSG_RESULT([yes])
else
# FIXME - insert some real tests, host_os isn't really good enough
case $host_os in
darwin*)
if test -n "$STRIP"; then
striplib="$STRIP -x"
old_striplib="$STRIP -S"
AC_MSG_RESULT([yes])
else
AC_MSG_RESULT([no])
fi
;;
*)
AC_MSG_RESULT([no])
;;
esac
fi
_LT_DECL([], [old_striplib], [1], [Commands to strip libraries])
_LT_DECL([], [striplib], [1])
])# _LT_CMD_STRIPLIB
# _LT_PREPARE_MUNGE_PATH_LIST
# ---------------------------
# Make sure func_munge_path_list() is defined correctly.
m4_defun([_LT_PREPARE_MUNGE_PATH_LIST],
[[# func_munge_path_list VARIABLE PATH
# -----------------------------------
# VARIABLE is name of variable containing _space_ separated list of
# directories to be munged by the contents of PATH, which is string
# having a format:
# "DIR[:DIR]:"
# string "DIR[ DIR]" will be prepended to VARIABLE
# ":DIR[:DIR]"
# string "DIR[ DIR]" will be appended to VARIABLE
# "DIRP[:DIRP]::[DIRA:]DIRA"
# string "DIRP[ DIRP]" will be prepended to VARIABLE and string
# "DIRA[ DIRA]" will be appended to VARIABLE
# "DIR[:DIR]"
# VARIABLE will be replaced by "DIR[ DIR]"
func_munge_path_list ()
{
case x@S|@2 in
x)
;;
*:)
eval @S|@1=\"`$ECHO @S|@2 | $SED 's/:/ /g'` \@S|@@S|@1\"
;;
x:*)
eval @S|@1=\"\@S|@@S|@1 `$ECHO @S|@2 | $SED 's/:/ /g'`\"
;;
*::*)
eval @S|@1=\"\@S|@@S|@1\ `$ECHO @S|@2 | $SED -e 's/.*:://' -e 's/:/ /g'`\"
eval @S|@1=\"`$ECHO @S|@2 | $SED -e 's/::.*//' -e 's/:/ /g'`\ \@S|@@S|@1\"
;;
*)
eval @S|@1=\"`$ECHO @S|@2 | $SED 's/:/ /g'`\"
;;
esac
}
]])# _LT_PREPARE_MUNGE_PATH_LIST
# _LT_SYS_DYNAMIC_LINKER([TAG])
# -----------------------------
# PORTME Fill in your ld.so characteristics
m4_defun([_LT_SYS_DYNAMIC_LINKER],
[AC_REQUIRE([AC_CANONICAL_HOST])dnl
m4_require([_LT_DECL_EGREP])dnl
m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_DECL_OBJDUMP])dnl
m4_require([_LT_DECL_SED])dnl
m4_require([_LT_CHECK_SHELL_FEATURES])dnl
m4_require([_LT_PREPARE_MUNGE_PATH_LIST])dnl
AC_MSG_CHECKING([dynamic linker characteristics])
m4_if([$1],
[], [
if test yes = "$GCC"; then
case $host_os in
darwin*) lt_awk_arg='/^libraries:/,/LR/' ;;
*) lt_awk_arg='/^libraries:/' ;;
esac
case $host_os in
mingw* | cegcc*) lt_sed_strip_eq='s|=\([[A-Za-z]]:\)|\1|g' ;;
*) lt_sed_strip_eq='s|=/|/|g' ;;
esac
lt_search_path_spec=`$CC -print-search-dirs | awk $lt_awk_arg | $SED -e "s/^libraries://" -e $lt_sed_strip_eq`
case $lt_search_path_spec in
*\;*)
# if the path contains ";" then we assume it to be the separator
# otherwise default to the standard path separator (i.e. ":") - it is
# assumed that no part of a normal pathname contains ";" but that should be
# okay in the real world where ";" in dirpaths is itself problematic.
lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED 's/;/ /g'`
;;
*)
lt_search_path_spec=`$ECHO "$lt_search_path_spec" | $SED "s/$PATH_SEPARATOR/ /g"`
;;
esac
# Ok, now we have the path, separated by spaces, we can step through it
# and add multilib dir if necessary...
lt_tmp_lt_search_path_spec=
lt_multi_os_dir=/`$CC $CPPFLAGS $CFLAGS $LDFLAGS -print-multi-os-directory 2>/dev/null`
# ...but if some path component already ends with the multilib dir we assume
# that all is fine and trust -print-search-dirs as is (GCC 4.2? or newer).
case "$lt_multi_os_dir; $lt_search_path_spec " in
"/; "* | "/.; "* | "/./; "* | *"$lt_multi_os_dir "* | *"$lt_multi_os_dir/ "*)
lt_multi_os_dir=
;;
esac
for lt_sys_path in $lt_search_path_spec; do
if test -d "$lt_sys_path$lt_multi_os_dir"; then
lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path$lt_multi_os_dir"
elif test -n "$lt_multi_os_dir"; then
test -d "$lt_sys_path" && \
lt_tmp_lt_search_path_spec="$lt_tmp_lt_search_path_spec $lt_sys_path"
fi
done
lt_search_path_spec=`$ECHO "$lt_tmp_lt_search_path_spec" | awk '
BEGIN {RS = " "; FS = "/|\n";} {
lt_foo = "";
lt_count = 0;
for (lt_i = NF; lt_i > 0; lt_i--) {
if ($lt_i != "" && $lt_i != ".") {
if ($lt_i == "..") {
lt_count++;
} else {
if (lt_count == 0) {
lt_foo = "/" $lt_i lt_foo;
} else {
lt_count--;
}
}
}
}
if (lt_foo != "") { lt_freq[[lt_foo]]++; }
if (lt_freq[[lt_foo]] == 1) { print lt_foo; }
}'`
# AWK program above erroneously prepends '/' to C:/dos/paths
# for these hosts.
case $host_os in
mingw* | cegcc*) lt_search_path_spec=`$ECHO "$lt_search_path_spec" |\
$SED 's|/\([[A-Za-z]]:\)|\1|g'` ;;
esac
sys_lib_search_path_spec=`$ECHO "$lt_search_path_spec" | $lt_NL2SP`
else
sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib"
fi])
library_names_spec=
libname_spec='lib$name'
soname_spec=
shrext_cmds=.so
postinstall_cmds=
postuninstall_cmds=
finish_cmds=
finish_eval=
shlibpath_var=
shlibpath_overrides_runpath=unknown
version_type=none
dynamic_linker="$host_os ld.so"
sys_lib_dlsearch_path_spec="/lib /usr/lib"
need_lib_prefix=unknown
hardcode_into_libs=no
# when you set need_version to no, make sure it does not cause -set_version
# flags to be left without arguments
need_version=unknown
AC_ARG_VAR([LT_SYS_LIBRARY_PATH],
[User-defined run-time library search path.])
case $host_os in
aix3*)
version_type=linux # correct to gnu/linux during the next big refactor
library_names_spec='$libname$release$shared_ext$versuffix $libname.a'
shlibpath_var=LIBPATH
# AIX 3 has no versioning support, so we append a major version to the name.
soname_spec='$libname$release$shared_ext$major'
;;
aix[[4-9]]*)
version_type=linux # correct to gnu/linux during the next big refactor
need_lib_prefix=no
need_version=no
hardcode_into_libs=yes
if test ia64 = "$host_cpu"; then
# AIX 5 supports IA64
library_names_spec='$libname$release$shared_ext$major $libname$release$shared_ext$versuffix $libname$shared_ext'
shlibpath_var=LD_LIBRARY_PATH
else
# With GCC up to 2.95.x, collect2 would create an import file
# for dependence libraries. The import file would start with
# the line '#! .'. This would cause the generated library to
# depend on '.', always an invalid library. This was fixed in
# development snapshots of GCC prior to 3.0.
case $host_os in
aix4 | aix4.[[01]] | aix4.[[01]].*)
if { echo '#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 97)'
echo ' yes '
echo '#endif'; } | $CC -E - | $GREP yes > /dev/null; then
:
else
can_build_shared=no
fi
;;
esac
# Using Import Files as archive members, it is possible to support
# filename-based versioning of shared library archives on AIX. While
# this would work both with and without runtime linking, it will
# prevent static linking of such archives. So we do filename-based
# shared library versioning with .so extension only, which is used
# when both runtime linking and shared linking are enabled.
# Unfortunately, runtime linking may impact performance, so we do
# not want this to be the default eventually. Also, we use the
# versioned .so libs for executables only if there is the -brtl
# linker flag in LDFLAGS as well, or --with-aix-soname=svr4 only.
# To allow for filename-based versioning support, we need to create
# libNAME.so.V as an archive file, containing:
# *) an Import File, referring to the versioned filename of the
# archive as well as the shared archive member, telling the
# bitwidth (32 or 64) of that shared object, and providing the
# list of exported symbols of that shared object, eventually
# decorated with the 'weak' keyword
# *) the shared object with the F_LOADONLY flag set, to really avoid
# it being seen by the linker.
# At run time we better use the real file rather than another symlink,
# but for link time we create the symlink libNAME.so -> libNAME.so.V
case $with_aix_soname,$aix_use_runtimelinking in
# AIX (on Power*) has no versioning support, so currently we cannot hardcode the
# correct soname into the executable. Probably we can add versioning support to
# collect2, so additional links can be useful in the future.
aix,yes) # traditional libtool
dynamic_linker='AIX unversionable lib.so'
# If using run time linking (on AIX 4.2 or later) use lib<name>.so
# instead of lib<name>.a to let people know that these are not
# typical AIX shared libraries.
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
;;
aix,no) # traditional AIX only
dynamic_linker='AIX lib.a[(]lib.so.V[)]'
# We preserve .a as extension for shared libraries through AIX4.2
# and later when we are not doing run time linking.
library_names_spec='$libname$release.a $libname.a'
soname_spec='$libname$release$shared_ext$major'
;;
svr4,*) # full svr4 only
dynamic_linker="AIX lib.so.V[(]$shared_archive_member_spec.o[)]"
library_names_spec='$libname$release$shared_ext$major $libname$shared_ext'
# We do not specify a path in Import Files, so LIBPATH fires.
shlibpath_overrides_runpath=yes
;;
*,yes) # both, prefer svr4
dynamic_linker="AIX lib.so.V[(]$shared_archive_member_spec.o[)], lib.a[(]lib.so.V[)]"
library_names_spec='$libname$release$shared_ext$major $libname$shared_ext'
# unpreferred sharedlib libNAME.a needs extra handling
postinstall_cmds='test -n "$linkname" || linkname="$realname"~func_stripname "" ".so" "$linkname"~$install_shared_prog "$dir/$func_stripname_result.$libext" "$destdir/$func_stripname_result.$libext"~test -z "$tstripme" || test -z "$striplib" || $striplib "$destdir/$func_stripname_result.$libext"'
postuninstall_cmds='for n in $library_names $old_library; do :; done~func_stripname "" ".so" "$n"~test "$func_stripname_result" = "$n" || func_append rmfiles " $odir/$func_stripname_result.$libext"'
# We do not specify a path in Import Files, so LIBPATH fires.
shlibpath_overrides_runpath=yes
;;
*,no) # both, prefer aix
dynamic_linker="AIX lib.a[(]lib.so.V[)], lib.so.V[(]$shared_archive_member_spec.o[)]"
library_names_spec='$libname$release.a $libname.a'
soname_spec='$libname$release$shared_ext$major'
# unpreferred sharedlib libNAME.so.V and symlink libNAME.so need extra handling
postinstall_cmds='test -z "$dlname" || $install_shared_prog $dir/$dlname $destdir/$dlname~test -z "$tstripme" || test -z "$striplib" || $striplib $destdir/$dlname~test -n "$linkname" || linkname=$realname~func_stripname "" ".a" "$linkname"~(cd "$destdir" && $LN_S -f $dlname $func_stripname_result.so)'
postuninstall_cmds='test -z "$dlname" || func_append rmfiles " $odir/$dlname"~for n in $old_library $library_names; do :; done~func_stripname "" ".a" "$n"~func_append rmfiles " $odir/$func_stripname_result.so"'
;;
esac
shlibpath_var=LIBPATH
fi
;;
amigaos*)
case $host_cpu in
powerpc)
# Since July 2007 AmigaOS4 officially supports .so libraries.
# When compiling the executable, add -use-dynld -Lsobjs: to the compile line.
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
;;
m68k)
library_names_spec='$libname.ixlibrary $libname.a'
# Create ${libname}_ixlibrary.a entries in /sys/libs.
finish_eval='for lib in `ls $libdir/*.ixlibrary 2>/dev/null`; do libname=`func_echo_all "$lib" | $SED '\''s%^.*/\([[^/]]*\)\.ixlibrary$%\1%'\''`; $RM /sys/libs/${libname}_ixlibrary.a; $show "cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a"; cd /sys/libs && $LN_S $lib ${libname}_ixlibrary.a || exit 1; done'
;;
esac
;;
beos*)
library_names_spec='$libname$shared_ext'
dynamic_linker="$host_os ld.so"
shlibpath_var=LIBRARY_PATH
;;
bsdi[[45]]*)
version_type=linux # correct to gnu/linux during the next big refactor
need_version=no
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
finish_cmds='PATH="\$PATH:/sbin" ldconfig $libdir'
shlibpath_var=LD_LIBRARY_PATH
sys_lib_search_path_spec="/shlib /usr/lib /usr/X11/lib /usr/contrib/lib /lib /usr/local/lib"
sys_lib_dlsearch_path_spec="/shlib /usr/lib /usr/local/lib"
# the default ld.so.conf also contains /usr/contrib/lib and
# /usr/X11R6/lib (/usr/X11 is a link to /usr/X11R6), but let us allow
# libtool to hard-code these into programs
;;
cygwin* | mingw* | pw32* | cegcc*)
version_type=windows
shrext_cmds=.dll
need_version=no
need_lib_prefix=no
case $GCC,$cc_basename in
yes,*)
# gcc
library_names_spec='$libname.dll.a'
# DLL is installed to $(libdir)/../bin by postinstall_cmds
postinstall_cmds='base_file=`basename \$file`~
dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; echo \$dlname'\''`~
dldir=$destdir/`dirname \$dlpath`~
test -d \$dldir || mkdir -p \$dldir~
$install_prog $dir/$dlname \$dldir/$dlname~
chmod a+x \$dldir/$dlname~
if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then
eval '\''$striplib \$dldir/$dlname'\'' || exit \$?;
fi'
postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
dlpath=$dir/\$dldll~
$RM \$dlpath'
shlibpath_overrides_runpath=yes
case $host_os in
cygwin*)
# Cygwin DLLs use 'cyg' prefix rather than 'lib'
soname_spec='`echo $libname | sed -e 's/^lib/cyg/'``echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext'
m4_if([$1], [],[
sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/lib/w32api"])
;;
mingw* | cegcc*)
# MinGW DLLs use traditional 'lib' prefix
soname_spec='$libname`echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext'
;;
pw32*)
# pw32 DLLs use 'pw' prefix rather than 'lib'
library_names_spec='`echo $libname | sed -e 's/^lib/pw/'``echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext'
;;
esac
dynamic_linker='Win32 ld.exe'
;;
*,cl*)
# Native MSVC
libname_spec='$name'
soname_spec='$libname`echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext'
library_names_spec='$libname.dll.lib'
case $build_os in
mingw*)
sys_lib_search_path_spec=
lt_save_ifs=$IFS
IFS=';'
for lt_path in $LIB
do
IFS=$lt_save_ifs
# Let DOS variable expansion print the short 8.3 style file name.
lt_path=`cd "$lt_path" 2>/dev/null && cmd //C "for %i in (".") do @echo %~si"`
sys_lib_search_path_spec="$sys_lib_search_path_spec $lt_path"
done
IFS=$lt_save_ifs
# Convert to MSYS style.
sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | sed -e 's|\\\\|/|g' -e 's| \\([[a-zA-Z]]\\):| /\\1|g' -e 's|^ ||'`
;;
cygwin*)
# Convert to unix form, then to dos form, then back to unix form
# but this time dos style (no spaces!) so that the unix form looks
# like /cygdrive/c/PROGRA~1:/cygdr...
sys_lib_search_path_spec=`cygpath --path --unix "$LIB"`
sys_lib_search_path_spec=`cygpath --path --dos "$sys_lib_search_path_spec" 2>/dev/null`
sys_lib_search_path_spec=`cygpath --path --unix "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
;;
*)
sys_lib_search_path_spec=$LIB
if $ECHO "$sys_lib_search_path_spec" | [$GREP ';[c-zC-Z]:/' >/dev/null]; then
# It is most probably a Windows format PATH.
sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e 's/;/ /g'`
else
sys_lib_search_path_spec=`$ECHO "$sys_lib_search_path_spec" | $SED -e "s/$PATH_SEPARATOR/ /g"`
fi
# FIXME: find the short name or the path components, as spaces are
# common. (e.g. "Program Files" -> "PROGRA~1")
;;
esac
# DLL is installed to $(libdir)/../bin by postinstall_cmds
postinstall_cmds='base_file=`basename \$file`~
dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; echo \$dlname'\''`~
dldir=$destdir/`dirname \$dlpath`~
test -d \$dldir || mkdir -p \$dldir~
$install_prog $dir/$dlname \$dldir/$dlname'
postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; echo \$dlname'\''`~
dlpath=$dir/\$dldll~
$RM \$dlpath'
shlibpath_overrides_runpath=yes
dynamic_linker='Win32 link.exe'
;;
*)
# Assume MSVC wrapper
library_names_spec='$libname`echo $release | $SED -e 's/[[.]]/-/g'`$versuffix$shared_ext $libname.lib'
dynamic_linker='Win32 ld.exe'
;;
esac
# FIXME: first we should search . and the directory the executable is in
shlibpath_var=PATH
;;
darwin* | rhapsody*)
dynamic_linker="$host_os dyld"
version_type=darwin
need_lib_prefix=no
need_version=no
library_names_spec='$libname$release$major$shared_ext $libname$shared_ext'
soname_spec='$libname$release$major$shared_ext'
shlibpath_overrides_runpath=yes
shlibpath_var=DYLD_LIBRARY_PATH
shrext_cmds='`test .$module = .yes && echo .so || echo .dylib`'
m4_if([$1], [],[
sys_lib_search_path_spec="$sys_lib_search_path_spec /usr/local/lib"])
sys_lib_dlsearch_path_spec='/usr/local/lib /lib /usr/lib'
;;
dgux*)
version_type=linux # correct to gnu/linux during the next big refactor
need_lib_prefix=no
need_version=no
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
shlibpath_var=LD_LIBRARY_PATH
;;
freebsd* | dragonfly*)
# DragonFly does not have aout. When/if they implement a new
# versioning mechanism, adjust this.
if test -x /usr/bin/objformat; then
objformat=`/usr/bin/objformat`
else
case $host_os in
freebsd[[23]].*) objformat=aout ;;
*) objformat=elf ;;
esac
fi
version_type=freebsd-$objformat
case $version_type in
freebsd-elf*)
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
need_version=no
need_lib_prefix=no
;;
freebsd-*)
library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix'
need_version=yes
;;
esac
shlibpath_var=LD_LIBRARY_PATH
case $host_os in
freebsd2.*)
shlibpath_overrides_runpath=yes
;;
freebsd3.[[01]]* | freebsdelf3.[[01]]*)
shlibpath_overrides_runpath=yes
hardcode_into_libs=yes
;;
freebsd3.[[2-9]]* | freebsdelf3.[[2-9]]* | \
freebsd4.[[0-5]] | freebsdelf4.[[0-5]] | freebsd4.1.1 | freebsdelf4.1.1)
shlibpath_overrides_runpath=no
hardcode_into_libs=yes
;;
*) # from 4.6 on, and DragonFly
shlibpath_overrides_runpath=yes
hardcode_into_libs=yes
;;
esac
;;
haiku*)
version_type=linux # correct to gnu/linux during the next big refactor
need_lib_prefix=no
need_version=no
dynamic_linker="$host_os runtime_loader"
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
shlibpath_var=LIBRARY_PATH
shlibpath_overrides_runpath=no
sys_lib_dlsearch_path_spec='/boot/home/config/lib /boot/common/lib /boot/system/lib'
hardcode_into_libs=yes
;;
hpux9* | hpux10* | hpux11*)
# Give a soname corresponding to the major version so that dld.sl refuses to
# link against other versions.
version_type=sunos
need_lib_prefix=no
need_version=no
case $host_cpu in
ia64*)
shrext_cmds='.so'
hardcode_into_libs=yes
dynamic_linker="$host_os dld.so"
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=yes # Unless +noenvvar is specified.
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
if test 32 = "$HPUX_IA64_MODE"; then
sys_lib_search_path_spec="/usr/lib/hpux32 /usr/local/lib/hpux32 /usr/local/lib"
sys_lib_dlsearch_path_spec=/usr/lib/hpux32
else
sys_lib_search_path_spec="/usr/lib/hpux64 /usr/local/lib/hpux64"
sys_lib_dlsearch_path_spec=/usr/lib/hpux64
fi
;;
hppa*64*)
shrext_cmds='.sl'
hardcode_into_libs=yes
dynamic_linker="$host_os dld.sl"
shlibpath_var=LD_LIBRARY_PATH # How should we handle SHLIB_PATH
shlibpath_overrides_runpath=yes # Unless +noenvvar is specified.
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
sys_lib_search_path_spec="/usr/lib/pa20_64 /usr/ccs/lib/pa20_64"
sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
;;
*)
shrext_cmds='.sl'
dynamic_linker="$host_os dld.sl"
shlibpath_var=SHLIB_PATH
shlibpath_overrides_runpath=no # +s is required to enable SHLIB_PATH
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
;;
esac
# HP-UX runs *really* slowly unless shared libraries are mode 555, ...
postinstall_cmds='chmod 555 $lib'
# or fails outright, so override atomically:
install_override_mode=555
;;
interix[[3-9]]*)
version_type=linux # correct to gnu/linux during the next big refactor
need_lib_prefix=no
need_version=no
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
dynamic_linker='Interix 3.x ld.so.1 (PE, like ELF)'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=no
hardcode_into_libs=yes
;;
irix5* | irix6* | nonstopux*)
case $host_os in
nonstopux*) version_type=nonstopux ;;
*)
if test yes = "$lt_cv_prog_gnu_ld"; then
version_type=linux # correct to gnu/linux during the next big refactor
else
version_type=irix
fi ;;
esac
need_lib_prefix=no
need_version=no
soname_spec='$libname$release$shared_ext$major'
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$release$shared_ext $libname$shared_ext'
case $host_os in
irix5* | nonstopux*)
libsuff= shlibsuff=
;;
*)
case $LD in # libtool.m4 will add one of these switches to LD
*-32|*"-32 "|*-melf32bsmip|*"-melf32bsmip ")
libsuff= shlibsuff= libmagic=32-bit;;
*-n32|*"-n32 "|*-melf32bmipn32|*"-melf32bmipn32 ")
libsuff=32 shlibsuff=N32 libmagic=N32;;
*-64|*"-64 "|*-melf64bmip|*"-melf64bmip ")
libsuff=64 shlibsuff=64 libmagic=64-bit;;
*) libsuff= shlibsuff= libmagic=never-match;;
esac
;;
esac
shlibpath_var=LD_LIBRARY${shlibsuff}_PATH
shlibpath_overrides_runpath=no
sys_lib_search_path_spec="/usr/lib$libsuff /lib$libsuff /usr/local/lib$libsuff"
sys_lib_dlsearch_path_spec="/usr/lib$libsuff /lib$libsuff"
hardcode_into_libs=yes
;;
# No shared lib support for Linux oldld, aout, or coff.
linux*oldld* | linux*aout* | linux*coff*)
dynamic_linker=no
;;
linux*android*)
version_type=none # Android doesn't support versioned libraries.
need_lib_prefix=no
need_version=no
library_names_spec='$libname$release$shared_ext'
soname_spec='$libname$release$shared_ext'
finish_cmds=
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=yes
# This implies no fast_install, which is unacceptable.
# Some rework will be needed to allow for fast_install
# before this can be enabled.
hardcode_into_libs=yes
dynamic_linker='Android linker'
# Don't embed -rpath directories since the linker doesn't support them.
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
;;
# This must be glibc/ELF.
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*)
version_type=linux # correct to gnu/linux during the next big refactor
need_lib_prefix=no
need_version=no
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
finish_cmds='PATH="\$PATH:/sbin" ldconfig -n $libdir'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=no
# Some binutils ld are patched to set DT_RUNPATH
AC_CACHE_VAL([lt_cv_shlibpath_overrides_runpath],
[lt_cv_shlibpath_overrides_runpath=no
save_LDFLAGS=$LDFLAGS
save_libdir=$libdir
eval "libdir=/foo; wl=\"$_LT_TAGVAR(lt_prog_compiler_wl, $1)\"; \
LDFLAGS=\"\$LDFLAGS $_LT_TAGVAR(hardcode_libdir_flag_spec, $1)\""
AC_LINK_IFELSE([AC_LANG_PROGRAM([],[])],
[AS_IF([ ($OBJDUMP -p conftest$ac_exeext) 2>/dev/null | grep "RUNPATH.*$libdir" >/dev/null],
[lt_cv_shlibpath_overrides_runpath=yes])])
LDFLAGS=$save_LDFLAGS
libdir=$save_libdir
])
shlibpath_overrides_runpath=$lt_cv_shlibpath_overrides_runpath
# This implies no fast_install, which is unacceptable.
# Some rework will be needed to allow for fast_install
# before this can be enabled.
hardcode_into_libs=yes
# Ideally, we could use ldconfig to report *all* directories which are
# searched for libraries; however, this is still not possible. Aside from not
# being certain /sbin/ldconfig is available, command
# 'ldconfig -N -X -v | grep ^/' on 64bit Fedora does not report /usr/lib64,
# even though it is searched at run-time. Try to do the best guess by
# appending ld.so.conf contents (and includes) to the search path.
if test -f /etc/ld.so.conf; then
lt_ld_extra=`awk '/^include / { system(sprintf("cd /etc; cat %s 2>/dev/null", \[$]2)); skip = 1; } { if (!skip) print \[$]0; skip = 0; }' < /etc/ld.so.conf | $SED -e 's/#.*//;/^[ ]*hwcap[ ]/d;s/[:, ]/ /g;s/=[^=]*$//;s/=[^= ]* / /g;s/"//g;/^$/d' | tr '\n' ' '`
sys_lib_dlsearch_path_spec="/lib /usr/lib $lt_ld_extra"
fi
# We used to test for /lib/ld.so.1 and disable shared libraries on
# powerpc, because MkLinux only supported shared libraries with the
# GNU dynamic linker. Since this was broken with cross compilers,
# most powerpc-linux boxes support dynamic linking these days and
# people can always --disable-shared, the test was removed, and we
# assume the GNU/Linux dynamic linker is in use.
dynamic_linker='GNU/Linux ld.so'
;;
netbsdelf*-gnu)
version_type=linux
need_lib_prefix=no
need_version=no
library_names_spec='${libname}${release}${shared_ext}$versuffix ${libname}${release}${shared_ext}$major ${libname}${shared_ext}'
soname_spec='${libname}${release}${shared_ext}$major'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=no
hardcode_into_libs=yes
dynamic_linker='NetBSD ld.elf_so'
;;

netbsd*)
version_type=sunos
need_lib_prefix=no
need_version=no
if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then
library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix'
finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir'
dynamic_linker='NetBSD (a.out) ld.so'
else
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
dynamic_linker='NetBSD ld.elf_so'
fi
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=yes
hardcode_into_libs=yes
;;
newsos6)
version_type=linux # correct to gnu/linux during the next big refactor
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=yes
;;
*nto* | *qnx*)
version_type=qnx
need_lib_prefix=no
need_version=no
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=no
hardcode_into_libs=yes
dynamic_linker='ldqnx.so'
;;
openbsd* | bitrig*)
version_type=sunos
sys_lib_dlsearch_path_spec=/usr/lib
need_lib_prefix=no
if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then
need_version=no
else
need_version=yes
fi
library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix'
finish_cmds='PATH="\$PATH:/sbin" ldconfig -m $libdir'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=yes
;;
os2*)
libname_spec='$name'
version_type=windows
shrext_cmds=.dll
need_version=no
need_lib_prefix=no
# OS/2 can only load a DLL with a base name of 8 characters or less.
soname_spec='`test -n "$os2dllname" && libname="$os2dllname";
v=$($ECHO $release$versuffix | tr -d .-);
n=$($ECHO $libname | cut -b -$((8 - ${#v})) | tr . _);
$ECHO $n$v`$shared_ext'
library_names_spec='${libname}_dll.$libext'
dynamic_linker='OS/2 ld.exe'
shlibpath_var=BEGINLIBPATH
sys_lib_search_path_spec="/lib /usr/lib /usr/local/lib"
sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
postinstall_cmds='base_file=`basename \$file`~
dlpath=`$SHELL 2>&1 -c '\''. $dir/'\''\$base_file'\''i; $ECHO \$dlname'\''`~
dldir=$destdir/`dirname \$dlpath`~
test -d \$dldir || mkdir -p \$dldir~
$install_prog $dir/$dlname \$dldir/$dlname~
chmod a+x \$dldir/$dlname~
if test -n '\''$stripme'\'' && test -n '\''$striplib'\''; then
eval '\''$striplib \$dldir/$dlname'\'' || exit \$?;
fi'
postuninstall_cmds='dldll=`$SHELL 2>&1 -c '\''. $file; $ECHO \$dlname'\''`~
dlpath=$dir/\$dldll~
$RM \$dlpath'
;;
osf3* | osf4* | osf5*)
version_type=osf
need_lib_prefix=no
need_version=no
soname_spec='$libname$release$shared_ext$major'
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
shlibpath_var=LD_LIBRARY_PATH
sys_lib_search_path_spec="/usr/shlib /usr/ccs/lib /usr/lib/cmplrs/cc /usr/lib /usr/local/lib /var/shlib"
sys_lib_dlsearch_path_spec=$sys_lib_search_path_spec
;;
rdos*)
dynamic_linker=no
;;
solaris*)
version_type=linux # correct to gnu/linux during the next big refactor
need_lib_prefix=no
need_version=no
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=yes
hardcode_into_libs=yes
# ldd complains unless libraries are executable
postinstall_cmds='chmod +x $lib'
;;
sunos4*)
version_type=sunos
library_names_spec='$libname$release$shared_ext$versuffix $libname$shared_ext$versuffix'
finish_cmds='PATH="\$PATH:/usr/etc" ldconfig $libdir'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=yes
if test yes = "$with_gnu_ld"; then
need_lib_prefix=no
fi
need_version=yes
;;
sysv4 | sysv4.3*)
version_type=linux # correct to gnu/linux during the next big refactor
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
shlibpath_var=LD_LIBRARY_PATH
case $host_vendor in
sni)
shlibpath_overrides_runpath=no
need_lib_prefix=no
runpath_var=LD_RUN_PATH
;;
siemens)
need_lib_prefix=no
;;
motorola)
need_lib_prefix=no
need_version=no
shlibpath_overrides_runpath=no
sys_lib_search_path_spec='/lib /usr/lib /usr/ccs/lib'
;;
esac
;;
sysv4*MP*)
if test -d /usr/nec; then
version_type=linux # correct to gnu/linux during the next big refactor
library_names_spec='$libname$shared_ext.$versuffix $libname$shared_ext.$major $libname$shared_ext'
soname_spec='$libname$shared_ext.$major'
shlibpath_var=LD_LIBRARY_PATH
fi
;;
sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*)
version_type=sco
need_lib_prefix=no
need_version=no
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=yes
hardcode_into_libs=yes
if test yes = "$with_gnu_ld"; then
sys_lib_search_path_spec='/usr/local/lib /usr/gnu/lib /usr/ccs/lib /usr/lib /lib'
else
sys_lib_search_path_spec='/usr/ccs/lib /usr/lib'
case $host_os in
sco3.2v5*)
sys_lib_search_path_spec="$sys_lib_search_path_spec /lib"
;;
esac
fi
sys_lib_dlsearch_path_spec='/usr/lib'
;;
tpf*)
# TPF is a cross-target only. Preferred cross-host = GNU/Linux.
version_type=linux # correct to gnu/linux during the next big refactor
need_lib_prefix=no
need_version=no
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
shlibpath_var=LD_LIBRARY_PATH
shlibpath_overrides_runpath=no
hardcode_into_libs=yes
;;
uts4*)
version_type=linux # correct to gnu/linux during the next big refactor
library_names_spec='$libname$release$shared_ext$versuffix $libname$release$shared_ext$major $libname$shared_ext'
soname_spec='$libname$release$shared_ext$major'
shlibpath_var=LD_LIBRARY_PATH
;;
*)
dynamic_linker=no
;;
esac
AC_MSG_RESULT([$dynamic_linker])
test no = "$dynamic_linker" && can_build_shared=no
variables_saved_for_relink="PATH $shlibpath_var $runpath_var"
if test yes = "$GCC"; then
variables_saved_for_relink="$variables_saved_for_relink GCC_EXEC_PREFIX COMPILER_PATH LIBRARY_PATH"
fi
if test set = "${lt_cv_sys_lib_search_path_spec+set}"; then
sys_lib_search_path_spec=$lt_cv_sys_lib_search_path_spec
fi
if test set = "${lt_cv_sys_lib_dlsearch_path_spec+set}"; then
sys_lib_dlsearch_path_spec=$lt_cv_sys_lib_dlsearch_path_spec
fi
# remember unaugmented sys_lib_dlsearch_path content for libtool script decls...
configure_time_dlsearch_path=$sys_lib_dlsearch_path_spec
# ... but it needs LT_SYS_LIBRARY_PATH munging for other configure-time code
func_munge_path_list sys_lib_dlsearch_path_spec "$LT_SYS_LIBRARY_PATH"
# to be used as default LT_SYS_LIBRARY_PATH value in generated libtool
configure_time_lt_sys_library_path=$LT_SYS_LIBRARY_PATH
_LT_DECL([], [variables_saved_for_relink], [1],
[Variables whose values should be saved in libtool wrapper scripts and
restored at link time])
_LT_DECL([], [need_lib_prefix], [0],
[Do we need the "lib" prefix for modules?])
_LT_DECL([], [need_version], [0], [Do we need a version for libraries?])
_LT_DECL([], [version_type], [0], [Library versioning type])
_LT_DECL([], [runpath_var], [0], [Shared library runtime path variable])
_LT_DECL([], [shlibpath_var], [0],[Shared library path variable])
_LT_DECL([], [shlibpath_overrides_runpath], [0],
[Is shlibpath searched before the hard-coded library search path?])
_LT_DECL([], [libname_spec], [1], [Format of library name prefix])
_LT_DECL([], [library_names_spec], [1],
[[List of archive names. First name is the real one, the rest are links.
The last name is the one that the linker finds with -lNAME]])
_LT_DECL([], [soname_spec], [1],
[[The coded name of the library, if different from the real name]])
_LT_DECL([], [install_override_mode], [1],
[Permission mode override for installation of shared libraries])
_LT_DECL([], [postinstall_cmds], [2],
[Command to use after installation of a shared archive])
_LT_DECL([], [postuninstall_cmds], [2],
[Command to use after uninstallation of a shared archive])
_LT_DECL([], [finish_cmds], [2],
[Commands used to finish a libtool library installation in a directory])
_LT_DECL([], [finish_eval], [1],
[[As "finish_cmds", except a single script fragment to be evaled but
not shown]])
_LT_DECL([], [hardcode_into_libs], [0],
[Whether we should hardcode library paths into libraries])
_LT_DECL([], [sys_lib_search_path_spec], [2],
[Compile-time system search path for libraries])
_LT_DECL([sys_lib_dlsearch_path_spec], [configure_time_dlsearch_path], [2],
[Detected run-time system search path for libraries])
_LT_DECL([], [configure_time_lt_sys_library_path], [2],
[Explicit LT_SYS_LIBRARY_PATH set during ./configure time])
])# _LT_SYS_DYNAMIC_LINKER
# _LT_PATH_TOOL_PREFIX(TOOL)
# --------------------------
# find a file program that can recognize a shared library
AC_DEFUN([_LT_PATH_TOOL_PREFIX],
[m4_require([_LT_DECL_EGREP])dnl
AC_MSG_CHECKING([for $1])
AC_CACHE_VAL(lt_cv_path_MAGIC_CMD,
[case $MAGIC_CMD in
[[\\/*] | ?:[\\/]*])
lt_cv_path_MAGIC_CMD=$MAGIC_CMD # Let the user override the test with a path.
;;
*)
lt_save_MAGIC_CMD=$MAGIC_CMD
lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR
dnl $ac_dummy forces splitting on constant user-supplied paths.
dnl POSIX.2 word splitting is done only on the output of word expansions,
dnl not every word. This closes a longstanding sh security hole.
ac_dummy="m4_if([$2], , $PATH, [$2])"
for ac_dir in $ac_dummy; do
IFS=$lt_save_ifs
test -z "$ac_dir" && ac_dir=.
if test -f "$ac_dir/$1"; then
lt_cv_path_MAGIC_CMD=$ac_dir/"$1"
if test -n "$file_magic_test_file"; then
case $deplibs_check_method in
"file_magic "*)
file_magic_regex=`expr "$deplibs_check_method" : "file_magic \(.*\)"`
MAGIC_CMD=$lt_cv_path_MAGIC_CMD
if eval $file_magic_cmd \$file_magic_test_file 2> /dev/null |
$EGREP "$file_magic_regex" > /dev/null; then
:
else
cat <<_LT_EOF 1>&2
*** Warning: the command libtool uses to detect shared libraries,
*** $file_magic_cmd, produces output that libtool cannot recognize.
*** The result is that libtool may fail to recognize shared libraries
*** as such. This will affect the creation of libtool libraries that
*** depend on shared libraries, but programs linked with such libtool
*** libraries will work regardless of this problem. Nevertheless, you
*** may want to report the problem to your system manager and/or to
*** bug-libtool@gnu.org
_LT_EOF
fi ;;
esac
fi
break
fi
done
IFS=$lt_save_ifs
MAGIC_CMD=$lt_save_MAGIC_CMD
;;
esac])
MAGIC_CMD=$lt_cv_path_MAGIC_CMD
if test -n "$MAGIC_CMD"; then
AC_MSG_RESULT($MAGIC_CMD)
else
AC_MSG_RESULT(no)
fi
_LT_DECL([], [MAGIC_CMD], [0],
[Used to examine libraries when file_magic_cmd begins with "file"])dnl
])# _LT_PATH_TOOL_PREFIX
# Old name:
AU_ALIAS([AC_PATH_TOOL_PREFIX], [_LT_PATH_TOOL_PREFIX])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_PATH_TOOL_PREFIX], [])
# _LT_PATH_MAGIC
# --------------
# find a file program that can recognize a shared library
m4_defun([_LT_PATH_MAGIC],
[_LT_PATH_TOOL_PREFIX(${ac_tool_prefix}file, /usr/bin$PATH_SEPARATOR$PATH)
if test -z "$lt_cv_path_MAGIC_CMD"; then
if test -n "$ac_tool_prefix"; then
_LT_PATH_TOOL_PREFIX(file, /usr/bin$PATH_SEPARATOR$PATH)
else
MAGIC_CMD=:
fi
fi
])# _LT_PATH_MAGIC
# LT_PATH_LD
# ----------
# find the pathname to the GNU or non-GNU linker
AC_DEFUN([LT_PATH_LD],
[AC_REQUIRE([AC_PROG_CC])dnl
AC_REQUIRE([AC_CANONICAL_HOST])dnl
AC_REQUIRE([AC_CANONICAL_BUILD])dnl
m4_require([_LT_DECL_SED])dnl
m4_require([_LT_DECL_EGREP])dnl
m4_require([_LT_PROG_ECHO_BACKSLASH])dnl
AC_ARG_WITH([gnu-ld],
[AS_HELP_STRING([--with-gnu-ld],
[assume the C compiler uses GNU ld @<:@default=no@:>@])],
[test no = "$withval" || with_gnu_ld=yes],
[with_gnu_ld=no])dnl
ac_prog=ld
if test yes = "$GCC"; then
# Check if gcc -print-prog-name=ld gives a path.
AC_MSG_CHECKING([for ld used by $CC])
case $host in
*-*-mingw*)
# gcc leaves a trailing carriage return, which upsets mingw
ac_prog=`($CC -print-prog-name=ld) 2>&5 | tr -d '\015'` ;;
*)
ac_prog=`($CC -print-prog-name=ld) 2>&5` ;;
esac
case $ac_prog in
# Accept absolute paths.
[[\\/]]* | ?:[[\\/]]*)
re_direlt='/[[^/]][[^/]]*/\.\./'
# Canonicalize the pathname of ld
ac_prog=`$ECHO "$ac_prog"| $SED 's%\\\\%/%g'`
while $ECHO "$ac_prog" | $GREP "$re_direlt" > /dev/null 2>&1; do
ac_prog=`$ECHO $ac_prog| $SED "s%$re_direlt%/%"`
done
test -z "$LD" && LD=$ac_prog
;;
"")
# If it fails, then pretend we aren't using GCC.
ac_prog=ld
;;
*)
# If it is relative, then search for the first ld in PATH.
with_gnu_ld=unknown
;;
esac
elif test yes = "$with_gnu_ld"; then
AC_MSG_CHECKING([for GNU ld])
else
AC_MSG_CHECKING([for non-GNU ld])
fi
AC_CACHE_VAL(lt_cv_path_LD,
[if test -z "$LD"; then
lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR
for ac_dir in $PATH; do
IFS=$lt_save_ifs
test -z "$ac_dir" && ac_dir=.
if test -f "$ac_dir/$ac_prog" || test -f "$ac_dir/$ac_prog$ac_exeext"; then
lt_cv_path_LD=$ac_dir/$ac_prog
# Check to see if the program is GNU ld. I'd rather use --version,
# but apparently some variants of GNU ld only accept -v.
# Break only if it was the GNU/non-GNU ld that we prefer.
case `"$lt_cv_path_LD" -v 2>&1 </dev/null` in
*GNU* | *'with BFD'*)
test no != "$with_gnu_ld" && break
;;
*)
test yes != "$with_gnu_ld" && break
;;
esac
fi
done
IFS=$lt_save_ifs
else
lt_cv_path_LD=$LD # Let the user override the test with a path.
fi])
LD=$lt_cv_path_LD
if test -n "$LD"; then
AC_MSG_RESULT($LD)
else
AC_MSG_RESULT(no)
fi
test -z "$LD" && AC_MSG_ERROR([no acceptable ld found in \$PATH])
_LT_PATH_LD_GNU
AC_SUBST([LD])
_LT_TAGDECL([], [LD], [1], [The linker used to build libraries])
])# LT_PATH_LD
# Old names:
AU_ALIAS([AM_PROG_LD], [LT_PATH_LD])
AU_ALIAS([AC_PROG_LD], [LT_PATH_LD])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AM_PROG_LD], [])
dnl AC_DEFUN([AC_PROG_LD], [])
# _LT_PATH_LD_GNU
# ---------------
m4_defun([_LT_PATH_LD_GNU],
[AC_CACHE_CHECK([if the linker ($LD) is GNU ld], lt_cv_prog_gnu_ld,
[# I'd rather use --version here, but apparently some GNU lds only accept -v.
case `$LD -v 2>&1 </dev/null` in
*GNU* | *'with BFD'*)
lt_cv_prog_gnu_ld=yes
;;
*)
lt_cv_prog_gnu_ld=no
;;
esac])
with_gnu_ld=$lt_cv_prog_gnu_ld
])# _LT_PATH_LD_GNU
# _LT_CMD_RELOAD
# --------------
# find reload flag for linker
# -- PORTME Some linkers may need a different reload flag.
m4_defun([_LT_CMD_RELOAD],
[AC_CACHE_CHECK([for $LD option to reload object files],
lt_cv_ld_reload_flag,
[lt_cv_ld_reload_flag='-r'])
reload_flag=$lt_cv_ld_reload_flag
case $reload_flag in
"" | " "*) ;;
*) reload_flag=" $reload_flag" ;;
esac
reload_cmds='$LD$reload_flag -o $output$reload_objs'
case $host_os in
cygwin* | mingw* | pw32* | cegcc*)
if test yes != "$GCC"; then
reload_cmds=false
fi
;;
darwin*)
if test yes = "$GCC"; then
reload_cmds='$LTCC $LTCFLAGS -nostdlib $wl-r -o $output$reload_objs'
else
reload_cmds='$LD$reload_flag -o $output$reload_objs'
fi
;;
esac
_LT_TAGDECL([], [reload_flag], [1], [How to create reloadable object files])dnl
_LT_TAGDECL([], [reload_cmds], [2])dnl
])# _LT_CMD_RELOAD
# _LT_PATH_DD
# -----------
# find a working dd
m4_defun([_LT_PATH_DD],
[AC_CACHE_CHECK([for a working dd], [ac_cv_path_lt_DD],
[printf 0123456789abcdef0123456789abcdef >conftest.i
cat conftest.i conftest.i >conftest2.i
: ${lt_DD:=$DD}
AC_PATH_PROGS_FEATURE_CHECK([lt_DD], [dd],
[if "$ac_path_lt_DD" bs=32 count=1 <conftest2.i >conftest.out 2>/dev/null; then
cmp -s conftest.i conftest.out \
&& ac_cv_path_lt_DD="$ac_path_lt_DD" ac_path_lt_DD_found=:
fi])
rm -f conftest.i conftest2.i conftest.out])
])# _LT_PATH_DD
# _LT_CMD_TRUNCATE
# ----------------
# find command to truncate a binary pipe
m4_defun([_LT_CMD_TRUNCATE],
[m4_require([_LT_PATH_DD])
AC_CACHE_CHECK([how to truncate binary pipes], [lt_cv_truncate_bin],
[printf 0123456789abcdef0123456789abcdef >conftest.i
cat conftest.i conftest.i >conftest2.i
lt_cv_truncate_bin=
if "$ac_cv_path_lt_DD" bs=32 count=1 <conftest2.i >conftest.out 2>/dev/null; then
cmp -s conftest.i conftest.out \
&& lt_cv_truncate_bin="$ac_cv_path_lt_DD bs=4096 count=1"
fi
rm -f conftest.i conftest2.i conftest.out
test -z "$lt_cv_truncate_bin" && lt_cv_truncate_bin="$SED -e 4q"])
_LT_DECL([lt_truncate_bin], [lt_cv_truncate_bin], [1],
[Command to truncate a binary pipe])
])# _LT_CMD_TRUNCATE
# _LT_CHECK_MAGIC_METHOD
# ----------------------
# how to check for library dependencies
# -- PORTME fill in with the dynamic library characteristics
m4_defun([_LT_CHECK_MAGIC_METHOD],
[m4_require([_LT_DECL_EGREP])
m4_require([_LT_DECL_OBJDUMP])
AC_CACHE_CHECK([how to recognize dependent libraries],
lt_cv_deplibs_check_method,
[lt_cv_file_magic_cmd='$MAGIC_CMD'
lt_cv_file_magic_test_file=
lt_cv_deplibs_check_method='unknown'
# Need to set the preceding variable on all platforms that support
# interlibrary dependencies.
# 'none' -- dependencies not supported.
# 'unknown' -- same as none, but documents that we really don't know.
# 'pass_all' -- all dependencies passed with no checks.
# 'test_compile' -- check by making test program.
# 'file_magic [[regex]]' -- check by looking for files in library path
# that respond to the $file_magic_cmd with a given extended regex.
# If you have 'file' or equivalent on your system and you're not sure
# whether 'pass_all' will *always* work, you probably want this one.
case $host_os in
aix[[4-9]]*)
lt_cv_deplibs_check_method=pass_all
;;
beos*)
lt_cv_deplibs_check_method=pass_all
;;
bsdi[[45]]*)
lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (shared object|dynamic lib)'
lt_cv_file_magic_cmd='/usr/bin/file -L'
lt_cv_file_magic_test_file=/shlib/libc.so
;;
cygwin*)
# func_win32_libid is a shell function defined in ltmain.sh
lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
lt_cv_file_magic_cmd='func_win32_libid'
;;
mingw* | pw32*)
# Base MSYS/MinGW do not provide the 'file' command needed by
# func_win32_libid shell function, so use a weaker test based on 'objdump',
# unless we find 'file', for example because we are cross-compiling.
if ( file / ) >/dev/null 2>&1; then
lt_cv_deplibs_check_method='file_magic ^x86 archive import|^x86 DLL'
lt_cv_file_magic_cmd='func_win32_libid'
else
# Keep this pattern in sync with the one in func_win32_libid.
lt_cv_deplibs_check_method='file_magic file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)'
lt_cv_file_magic_cmd='$OBJDUMP -f'
fi
;;
cegcc*)
# use the weaker test based on 'objdump'. See mingw*.
lt_cv_deplibs_check_method='file_magic file format pe-arm-.*little(.*architecture: arm)?'
lt_cv_file_magic_cmd='$OBJDUMP -f'
;;
darwin* | rhapsody*)
lt_cv_deplibs_check_method=pass_all
;;
freebsd* | dragonfly*)
if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then
case $host_cpu in
i*86 )
# Not sure whether the presence of OpenBSD here was a mistake.
# Let's accept both of them until this is cleared up.
lt_cv_deplibs_check_method='file_magic (FreeBSD|OpenBSD|DragonFly)/i[[3-9]]86 (compact )?demand paged shared library'
lt_cv_file_magic_cmd=/usr/bin/file
lt_cv_file_magic_test_file=`echo /usr/lib/libc.so.*`
;;
esac
else
lt_cv_deplibs_check_method=pass_all
fi
;;
haiku*)
lt_cv_deplibs_check_method=pass_all
;;
hpux10.20* | hpux11*)
lt_cv_file_magic_cmd=/usr/bin/file
case $host_cpu in
ia64*)
lt_cv_deplibs_check_method='file_magic (s[[0-9]][[0-9]][[0-9]]|ELF-[[0-9]][[0-9]]) shared object file - IA64'
lt_cv_file_magic_test_file=/usr/lib/hpux32/libc.so
;;
hppa*64*)
[lt_cv_deplibs_check_method='file_magic (s[0-9][0-9][0-9]|ELF[ -][0-9][0-9])(-bit)?( [LM]SB)? shared object( file)?[, -]* PA-RISC [0-9]\.[0-9]']
lt_cv_file_magic_test_file=/usr/lib/pa20_64/libc.sl
;;
*)
lt_cv_deplibs_check_method='file_magic (s[[0-9]][[0-9]][[0-9]]|PA-RISC[[0-9]]\.[[0-9]]) shared library'
lt_cv_file_magic_test_file=/usr/lib/libc.sl
;;
esac
;;
interix[[3-9]]*)
# PIC code is broken on Interix 3.x, that's why |\.a not |_pic\.a here
lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so|\.a)$'
;;
irix5* | irix6* | nonstopux*)
case $LD in
*-32|*"-32 ") libmagic=32-bit;;
*-n32|*"-n32 ") libmagic=N32;;
*-64|*"-64 ") libmagic=64-bit;;
*) libmagic=never-match;;
esac
lt_cv_deplibs_check_method=pass_all
;;
# This must be glibc/ELF.
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*)
lt_cv_deplibs_check_method=pass_all
;;
-netbsd*)
+netbsd* | netbsdelf*-gnu)
if echo __ELF__ | $CC -E - | $GREP __ELF__ > /dev/null; then
lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|_pic\.a)$'
else
lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so|_pic\.a)$'
fi
;;
newos6*)
lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (executable|dynamic lib)'
lt_cv_file_magic_cmd=/usr/bin/file
lt_cv_file_magic_test_file=/usr/lib/libnls.so
;;
*nto* | *qnx*)
lt_cv_deplibs_check_method=pass_all
;;
openbsd* | bitrig*)
if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then
lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|\.so|_pic\.a)$'
else
lt_cv_deplibs_check_method='match_pattern /lib[[^/]]+(\.so\.[[0-9]]+\.[[0-9]]+|_pic\.a)$'
fi
;;
osf3* | osf4* | osf5*)
lt_cv_deplibs_check_method=pass_all
;;
rdos*)
lt_cv_deplibs_check_method=pass_all
;;
solaris*)
lt_cv_deplibs_check_method=pass_all
;;
sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX* | sysv4*uw2*)
lt_cv_deplibs_check_method=pass_all
;;
sysv4 | sysv4.3*)
case $host_vendor in
motorola)
lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[ML]]SB (shared object|dynamic lib) M[[0-9]][[0-9]]* Version [[0-9]]'
lt_cv_file_magic_test_file=`echo /usr/lib/libc.so*`
;;
ncr)
lt_cv_deplibs_check_method=pass_all
;;
sequent)
lt_cv_file_magic_cmd='/bin/file'
lt_cv_deplibs_check_method='file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB (shared object|dynamic lib )'
;;
sni)
lt_cv_file_magic_cmd='/bin/file'
lt_cv_deplibs_check_method="file_magic ELF [[0-9]][[0-9]]*-bit [[LM]]SB dynamic lib"
lt_cv_file_magic_test_file=/lib/libc.so
;;
siemens)
lt_cv_deplibs_check_method=pass_all
;;
pc)
lt_cv_deplibs_check_method=pass_all
;;
esac
;;
tpf*)
lt_cv_deplibs_check_method=pass_all
;;
os2*)
lt_cv_deplibs_check_method=pass_all
;;
esac
])
file_magic_glob=
want_nocaseglob=no
if test "$build" = "$host"; then
case $host_os in
mingw* | pw32*)
if ( shopt | grep nocaseglob ) >/dev/null 2>&1; then
want_nocaseglob=yes
else
file_magic_glob=`echo aAbBcCdDeEfFgGhHiIjJkKlLmMnNoOpPqQrRsStTuUvVwWxXyYzZ | $SED -e "s/\(..\)/s\/[[\1]]\/[[\1]]\/g;/g"`
fi
;;
esac
fi
file_magic_cmd=$lt_cv_file_magic_cmd
deplibs_check_method=$lt_cv_deplibs_check_method
test -z "$deplibs_check_method" && deplibs_check_method=unknown
_LT_DECL([], [deplibs_check_method], [1],
[Method to check whether dependent libraries are shared objects])
_LT_DECL([], [file_magic_cmd], [1],
[Command to use when deplibs_check_method = "file_magic"])
_LT_DECL([], [file_magic_glob], [1],
[How to find potential files when deplibs_check_method = "file_magic"])
_LT_DECL([], [want_nocaseglob], [1],
[Find potential files using nocaseglob when deplibs_check_method = "file_magic"])
])# _LT_CHECK_MAGIC_METHOD
# LT_PATH_NM
# ----------
# find the pathname to a BSD- or MS-compatible name lister
AC_DEFUN([LT_PATH_NM],
[AC_REQUIRE([AC_PROG_CC])dnl
AC_CACHE_CHECK([for BSD- or MS-compatible name lister (nm)], lt_cv_path_NM,
[if test -n "$NM"; then
# Let the user override the test.
lt_cv_path_NM=$NM
else
lt_nm_to_check=${ac_tool_prefix}nm
if test -n "$ac_tool_prefix" && test "$build" = "$host"; then
lt_nm_to_check="$lt_nm_to_check nm"
fi
for lt_tmp_nm in $lt_nm_to_check; do
lt_save_ifs=$IFS; IFS=$PATH_SEPARATOR
for ac_dir in $PATH /usr/ccs/bin/elf /usr/ccs/bin /usr/ucb /bin; do
IFS=$lt_save_ifs
test -z "$ac_dir" && ac_dir=.
tmp_nm=$ac_dir/$lt_tmp_nm
if test -f "$tmp_nm" || test -f "$tmp_nm$ac_exeext"; then
# Check to see if the nm accepts a BSD-compat flag.
# Adding the 'sed 1q' prevents false positives on HP-UX, which says:
# nm: unknown option "B" ignored
# Tru64's nm complains that /dev/null is an invalid object file
# MSYS converts /dev/null to NUL, MinGW nm treats NUL as empty
case $build_os in
mingw*) lt_bad_file=conftest.nm/nofile ;;
*) lt_bad_file=/dev/null ;;
esac
case `"$tmp_nm" -B $lt_bad_file 2>&1 | sed '1q'` in
*$lt_bad_file* | *'Invalid file or object type'*)
lt_cv_path_NM="$tmp_nm -B"
break 2
;;
*)
case `"$tmp_nm" -p /dev/null 2>&1 | sed '1q'` in
*/dev/null*)
lt_cv_path_NM="$tmp_nm -p"
break 2
;;
*)
lt_cv_path_NM=${lt_cv_path_NM="$tmp_nm"} # keep the first match, but
continue # so that we can try to find one that supports BSD flags
;;
esac
;;
esac
fi
done
IFS=$lt_save_ifs
done
: ${lt_cv_path_NM=no}
fi])
if test no != "$lt_cv_path_NM"; then
NM=$lt_cv_path_NM
else
# Didn't find any BSD compatible name lister, look for dumpbin.
if test -n "$DUMPBIN"; then :
# Let the user override the test.
else
AC_CHECK_TOOLS(DUMPBIN, [dumpbin "link -dump"], :)
case `$DUMPBIN -symbols -headers /dev/null 2>&1 | sed '1q'` in
*COFF*)
DUMPBIN="$DUMPBIN -symbols -headers"
;;
*)
DUMPBIN=:
;;
esac
fi
AC_SUBST([DUMPBIN])
if test : != "$DUMPBIN"; then
NM=$DUMPBIN
fi
fi
test -z "$NM" && NM=nm
AC_SUBST([NM])
_LT_DECL([], [NM], [1], [A BSD- or MS-compatible name lister])dnl
AC_CACHE_CHECK([the name lister ($NM) interface], [lt_cv_nm_interface],
[lt_cv_nm_interface="BSD nm"
echo "int some_variable = 0;" > conftest.$ac_ext
(eval echo "\"\$as_me:$LINENO: $ac_compile\"" >&AS_MESSAGE_LOG_FD)
(eval "$ac_compile" 2>conftest.err)
cat conftest.err >&AS_MESSAGE_LOG_FD
(eval echo "\"\$as_me:$LINENO: $NM \\\"conftest.$ac_objext\\\"\"" >&AS_MESSAGE_LOG_FD)
(eval "$NM \"conftest.$ac_objext\"" 2>conftest.err > conftest.out)
cat conftest.err >&AS_MESSAGE_LOG_FD
(eval echo "\"\$as_me:$LINENO: output\"" >&AS_MESSAGE_LOG_FD)
cat conftest.out >&AS_MESSAGE_LOG_FD
if $GREP 'External.*some_variable' conftest.out > /dev/null; then
lt_cv_nm_interface="MS dumpbin"
fi
rm -f conftest*])
])# LT_PATH_NM
# Old names:
AU_ALIAS([AM_PROG_NM], [LT_PATH_NM])
AU_ALIAS([AC_PROG_NM], [LT_PATH_NM])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AM_PROG_NM], [])
dnl AC_DEFUN([AC_PROG_NM], [])
# _LT_CHECK_SHAREDLIB_FROM_LINKLIB
# --------------------------------
# how to determine the name of the shared library
# associated with a specific link library.
# -- PORTME fill in with the dynamic library characteristics
m4_defun([_LT_CHECK_SHAREDLIB_FROM_LINKLIB],
[m4_require([_LT_DECL_EGREP])
m4_require([_LT_DECL_OBJDUMP])
m4_require([_LT_DECL_DLLTOOL])
AC_CACHE_CHECK([how to associate runtime and link libraries],
lt_cv_sharedlib_from_linklib_cmd,
[lt_cv_sharedlib_from_linklib_cmd='unknown'
case $host_os in
cygwin* | mingw* | pw32* | cegcc*)
# two different shell functions defined in ltmain.sh;
# decide which one to use based on capabilities of $DLLTOOL
case `$DLLTOOL --help 2>&1` in
*--identify-strict*)
lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib
;;
*)
lt_cv_sharedlib_from_linklib_cmd=func_cygming_dll_for_implib_fallback
;;
esac
;;
*)
# fallback: assume linklib IS sharedlib
lt_cv_sharedlib_from_linklib_cmd=$ECHO
;;
esac
])
sharedlib_from_linklib_cmd=$lt_cv_sharedlib_from_linklib_cmd
test -z "$sharedlib_from_linklib_cmd" && sharedlib_from_linklib_cmd=$ECHO
_LT_DECL([], [sharedlib_from_linklib_cmd], [1],
[Command to associate shared and link libraries])
])# _LT_CHECK_SHAREDLIB_FROM_LINKLIB
# _LT_PATH_MANIFEST_TOOL
# ----------------------
# locate the manifest tool
m4_defun([_LT_PATH_MANIFEST_TOOL],
[AC_CHECK_TOOL(MANIFEST_TOOL, mt, :)
test -z "$MANIFEST_TOOL" && MANIFEST_TOOL=mt
AC_CACHE_CHECK([if $MANIFEST_TOOL is a manifest tool], [lt_cv_path_mainfest_tool],
[lt_cv_path_mainfest_tool=no
echo "$as_me:$LINENO: $MANIFEST_TOOL '-?'" >&AS_MESSAGE_LOG_FD
$MANIFEST_TOOL '-?' 2>conftest.err > conftest.out
cat conftest.err >&AS_MESSAGE_LOG_FD
if $GREP 'Manifest Tool' conftest.out > /dev/null; then
lt_cv_path_mainfest_tool=yes
fi
rm -f conftest*])
if test yes != "$lt_cv_path_mainfest_tool"; then
MANIFEST_TOOL=:
fi
_LT_DECL([], [MANIFEST_TOOL], [1], [Manifest tool])dnl
])# _LT_PATH_MANIFEST_TOOL
# _LT_DLL_DEF_P([FILE])
# ---------------------
# True iff FILE is a Windows DLL '.def' file.
# Keep in sync with func_dll_def_p in the libtool script
AC_DEFUN([_LT_DLL_DEF_P],
[dnl
test DEF = "`$SED -n dnl
-e '\''s/^[[ ]]*//'\'' dnl Strip leading whitespace
-e '\''/^\(;.*\)*$/d'\'' dnl Delete empty lines and comments
-e '\''s/^\(EXPORTS\|LIBRARY\)\([[ ]].*\)*$/DEF/p'\'' dnl
-e q dnl Only consider the first "real" line
$1`" dnl
])# _LT_DLL_DEF_P
# LT_LIB_M
# --------
# check for math library
AC_DEFUN([LT_LIB_M],
[AC_REQUIRE([AC_CANONICAL_HOST])dnl
LIBM=
case $host in
*-*-beos* | *-*-cegcc* | *-*-cygwin* | *-*-haiku* | *-*-pw32* | *-*-darwin*)
# These systems don't have libm, or don't need it
;;
*-ncr-sysv4.3*)
AC_CHECK_LIB(mw, _mwvalidcheckl, LIBM=-lmw)
AC_CHECK_LIB(m, cos, LIBM="$LIBM -lm")
;;
*)
AC_CHECK_LIB(m, cos, LIBM=-lm)
;;
esac
AC_SUBST([LIBM])
])# LT_LIB_M
# Old name:
AU_ALIAS([AC_CHECK_LIBM], [LT_LIB_M])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([AC_CHECK_LIBM], [])
# _LT_COMPILER_NO_RTTI([TAGNAME])
# -------------------------------
m4_defun([_LT_COMPILER_NO_RTTI],
[m4_require([_LT_TAG_COMPILER])dnl
_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=
if test yes = "$GCC"; then
case $cc_basename in
nvcc*)
_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -Xcompiler -fno-builtin' ;;
*)
_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -fno-builtin' ;;
esac
_LT_COMPILER_OPTION([if $compiler supports -fno-rtti -fno-exceptions],
lt_cv_prog_compiler_rtti_exceptions,
[-fno-rtti -fno-exceptions], [],
[_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)="$_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1) -fno-rtti -fno-exceptions"])
fi
_LT_TAGDECL([no_builtin_flag], [lt_prog_compiler_no_builtin_flag], [1],
[Compiler flag to turn off builtin functions])
])# _LT_COMPILER_NO_RTTI
# _LT_CMD_GLOBAL_SYMBOLS
# ----------------------
m4_defun([_LT_CMD_GLOBAL_SYMBOLS],
[AC_REQUIRE([AC_CANONICAL_HOST])dnl
AC_REQUIRE([AC_PROG_CC])dnl
AC_REQUIRE([AC_PROG_AWK])dnl
AC_REQUIRE([LT_PATH_NM])dnl
AC_REQUIRE([LT_PATH_LD])dnl
m4_require([_LT_DECL_SED])dnl
m4_require([_LT_DECL_EGREP])dnl
m4_require([_LT_TAG_COMPILER])dnl
# Check for command to grab the raw symbol name followed by C symbol from nm.
AC_MSG_CHECKING([command to parse $NM output from $compiler object])
AC_CACHE_VAL([lt_cv_sys_global_symbol_pipe],
[
# These are sane defaults that work on at least a few old systems.
# [They come from Ultrix. What could be older than Ultrix?!! ;)]
# Character class describing NM global symbol codes.
symcode='[[BCDEGRST]]'
# Regexp to match symbols that can be accessed directly from C.
sympat='\([[_A-Za-z]][[_A-Za-z0-9]]*\)'
# Define system-specific variables.
case $host_os in
aix*)
symcode='[[BCDT]]'
;;
cygwin* | mingw* | pw32* | cegcc*)
symcode='[[ABCDGISTW]]'
;;
hpux*)
if test ia64 = "$host_cpu"; then
symcode='[[ABCDEGRST]]'
fi
;;
irix* | nonstopux*)
symcode='[[BCDEGRST]]'
;;
osf*)
symcode='[[BCDEGQRST]]'
;;
solaris*)
symcode='[[BDRT]]'
;;
sco3.2v5*)
symcode='[[DT]]'
;;
sysv4.2uw2*)
symcode='[[DT]]'
;;
sysv5* | sco5v6* | unixware* | OpenUNIX*)
symcode='[[ABDT]]'
;;
sysv4)
symcode='[[DFNSTU]]'
;;
esac
# If we're using GNU nm, then use its standard symbol codes.
case `$NM -V 2>&1` in
*GNU* | *'with BFD'*)
symcode='[[ABCDGIRSTW]]' ;;
esac
if test "$lt_cv_nm_interface" = "MS dumpbin"; then
# Gets list of data symbols to import.
lt_cv_sys_global_symbol_to_import="sed -n -e 's/^I .* \(.*\)$/\1/p'"
# Adjust the below global symbol transforms to fixup imported variables.
lt_cdecl_hook=" -e 's/^I .* \(.*\)$/extern __declspec(dllimport) char \1;/p'"
lt_c_name_hook=" -e 's/^I .* \(.*\)$/ {\"\1\", (void *) 0},/p'"
lt_c_name_lib_hook="\
-e 's/^I .* \(lib.*\)$/ {\"\1\", (void *) 0},/p'\
-e 's/^I .* \(.*\)$/ {\"lib\1\", (void *) 0},/p'"
else
# Disable hooks by default.
lt_cv_sys_global_symbol_to_import=
lt_cdecl_hook=
lt_c_name_hook=
lt_c_name_lib_hook=
fi
# Transform an extracted symbol line into a proper C declaration.
# Some systems (esp. on ia64) link data and code symbols differently,
# so use this general approach.
lt_cv_sys_global_symbol_to_cdecl="sed -n"\
$lt_cdecl_hook\
" -e 's/^T .* \(.*\)$/extern int \1();/p'"\
" -e 's/^$symcode$symcode* .* \(.*\)$/extern char \1;/p'"
# Transform an extracted symbol line into symbol name and symbol address
lt_cv_sys_global_symbol_to_c_name_address="sed -n"\
$lt_c_name_hook\
" -e 's/^: \(.*\) .*$/ {\"\1\", (void *) 0},/p'"\
" -e 's/^$symcode$symcode* .* \(.*\)$/ {\"\1\", (void *) \&\1},/p'"
# Transform an extracted symbol line into symbol name with lib prefix and
# symbol address.
lt_cv_sys_global_symbol_to_c_name_address_lib_prefix="sed -n"\
$lt_c_name_lib_hook\
" -e 's/^: \(.*\) .*$/ {\"\1\", (void *) 0},/p'"\
" -e 's/^$symcode$symcode* .* \(lib.*\)$/ {\"\1\", (void *) \&\1},/p'"\
" -e 's/^$symcode$symcode* .* \(.*\)$/ {\"lib\1\", (void *) \&\1},/p'"
# Handle CRLF in mingw tool chain
opt_cr=
case $build_os in
mingw*)
opt_cr=`$ECHO 'x\{0,1\}' | tr x '\015'` # option cr in regexp
;;
esac
# Try without a prefix underscore, then with it.
for ac_symprfx in "" "_"; do
# Transform symcode, sympat, and symprfx into a raw symbol and a C symbol.
symxfrm="\\1 $ac_symprfx\\2 \\2"
# Write the raw and C identifiers.
if test "$lt_cv_nm_interface" = "MS dumpbin"; then
# Fake it for dumpbin and say T for any non-static function,
# D for any global variable and I for any imported variable.
# Also find C++ and __fastcall symbols from MSVC++,
# which start with @ or ?.
lt_cv_sys_global_symbol_pipe="$AWK ['"\
" {last_section=section; section=\$ 3};"\
" /^COFF SYMBOL TABLE/{for(i in hide) delete hide[i]};"\
" /Section length .*#relocs.*(pick any)/{hide[last_section]=1};"\
" /^ *Symbol name *: /{split(\$ 0,sn,\":\"); si=substr(sn[2],2)};"\
" /^ *Type *: code/{print \"T\",si,substr(si,length(prfx))};"\
" /^ *Type *: data/{print \"I\",si,substr(si,length(prfx))};"\
" \$ 0!~/External *\|/{next};"\
" / 0+ UNDEF /{next}; / UNDEF \([^|]\)*()/{next};"\
" {if(hide[section]) next};"\
" {f=\"D\"}; \$ 0~/\(\).*\|/{f=\"T\"};"\
" {split(\$ 0,a,/\||\r/); split(a[2],s)};"\
" s[1]~/^[@?]/{print f,s[1],s[1]; next};"\
" s[1]~prfx {split(s[1],t,\"@\"); print f,t[1],substr(t[1],length(prfx))}"\
" ' prfx=^$ac_symprfx]"
else
lt_cv_sys_global_symbol_pipe="sed -n -e 's/^.*[[ ]]\($symcode$symcode*\)[[ ]][[ ]]*$ac_symprfx$sympat$opt_cr$/$symxfrm/p'"
fi
lt_cv_sys_global_symbol_pipe="$lt_cv_sys_global_symbol_pipe | sed '/ __gnu_lto/d'"
# Check to see that the pipe works correctly.
pipe_works=no
rm -f conftest*
cat > conftest.$ac_ext <<_LT_EOF
#ifdef __cplusplus
extern "C" {
#endif
char nm_test_var;
void nm_test_func(void);
void nm_test_func(void){}
#ifdef __cplusplus
}
#endif
int main(){nm_test_var='a';nm_test_func();return(0);}
_LT_EOF
if AC_TRY_EVAL(ac_compile); then
# Now try to grab the symbols.
nlist=conftest.nm
if AC_TRY_EVAL(NM conftest.$ac_objext \| "$lt_cv_sys_global_symbol_pipe" \> $nlist) && test -s "$nlist"; then
# Try sorting and uniquifying the output.
if sort "$nlist" | uniq > "$nlist"T; then
mv -f "$nlist"T "$nlist"
else
rm -f "$nlist"T
fi
# Make sure that we snagged all the symbols we need.
if $GREP ' nm_test_var$' "$nlist" >/dev/null; then
if $GREP ' nm_test_func$' "$nlist" >/dev/null; then
cat <<_LT_EOF > conftest.$ac_ext
/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests. */
#if defined _WIN32 || defined __CYGWIN__ || defined _WIN32_WCE
/* DATA imports from DLLs on WIN32 can't be const, because runtime
relocations are performed -- see ld's documentation on pseudo-relocs. */
# define LT@&t@_DLSYM_CONST
#elif defined __osf__
/* This system does not cope well with relocations in const data. */
# define LT@&t@_DLSYM_CONST
#else
# define LT@&t@_DLSYM_CONST const
#endif
#ifdef __cplusplus
extern "C" {
#endif
_LT_EOF
# Now generate the symbol file.
eval "$lt_cv_sys_global_symbol_to_cdecl"' < "$nlist" | $GREP -v main >> conftest.$ac_ext'
cat <<_LT_EOF >> conftest.$ac_ext
/* The mapping between symbol names and symbols. */
LT@&t@_DLSYM_CONST struct {
const char *name;
void *address;
}
lt__PROGRAM__LTX_preloaded_symbols[[]] =
{
{ "@PROGRAM@", (void *) 0 },
_LT_EOF
$SED "s/^$symcode$symcode* .* \(.*\)$/ {\"\1\", (void *) \&\1},/" < "$nlist" | $GREP -v main >> conftest.$ac_ext
cat <<\_LT_EOF >> conftest.$ac_ext
{0, (void *) 0}
};
/* This works around a problem in the FreeBSD linker. */
#ifdef FREEBSD_WORKAROUND
static const void *lt_preloaded_setup() {
return lt__PROGRAM__LTX_preloaded_symbols;
}
#endif
#ifdef __cplusplus
}
#endif
_LT_EOF
# Now try linking the two files.
mv conftest.$ac_objext conftstm.$ac_objext
lt_globsym_save_LIBS=$LIBS
lt_globsym_save_CFLAGS=$CFLAGS
LIBS=conftstm.$ac_objext
CFLAGS="$CFLAGS$_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)"
if AC_TRY_EVAL(ac_link) && test -s conftest$ac_exeext; then
pipe_works=yes
fi
LIBS=$lt_globsym_save_LIBS
CFLAGS=$lt_globsym_save_CFLAGS
else
echo "cannot find nm_test_func in $nlist" >&AS_MESSAGE_LOG_FD
fi
else
echo "cannot find nm_test_var in $nlist" >&AS_MESSAGE_LOG_FD
fi
else
echo "cannot run $lt_cv_sys_global_symbol_pipe" >&AS_MESSAGE_LOG_FD
fi
else
echo "$progname: failed program was:" >&AS_MESSAGE_LOG_FD
cat conftest.$ac_ext >&5
fi
rm -rf conftest* conftst*
# Do not use the global_symbol_pipe unless it works.
if test yes = "$pipe_works"; then
break
else
lt_cv_sys_global_symbol_pipe=
fi
done
])
if test -z "$lt_cv_sys_global_symbol_pipe"; then
lt_cv_sys_global_symbol_to_cdecl=
fi
if test -z "$lt_cv_sys_global_symbol_pipe$lt_cv_sys_global_symbol_to_cdecl"; then
AC_MSG_RESULT(failed)
else
AC_MSG_RESULT(ok)
fi
# Response file support.
if test "$lt_cv_nm_interface" = "MS dumpbin"; then
nm_file_list_spec='@'
elif $NM --help 2>/dev/null | grep '[[@]]FILE' >/dev/null; then
nm_file_list_spec='@'
fi
_LT_DECL([global_symbol_pipe], [lt_cv_sys_global_symbol_pipe], [1],
[Take the output of nm and produce a listing of raw symbols and C names])
_LT_DECL([global_symbol_to_cdecl], [lt_cv_sys_global_symbol_to_cdecl], [1],
 [Transform the output of nm into a proper C declaration])
_LT_DECL([global_symbol_to_import], [lt_cv_sys_global_symbol_to_import], [1],
[Transform the output of nm into a list of symbols to manually relocate])
_LT_DECL([global_symbol_to_c_name_address],
[lt_cv_sys_global_symbol_to_c_name_address], [1],
 [Transform the output of nm into a C name/address pair])
_LT_DECL([global_symbol_to_c_name_address_lib_prefix],
[lt_cv_sys_global_symbol_to_c_name_address_lib_prefix], [1],
 [Transform the output of nm into a C name/address pair when a lib prefix is needed])
_LT_DECL([nm_interface], [lt_cv_nm_interface], [1],
[The name lister interface])
_LT_DECL([], [nm_file_list_spec], [1],
[Specify filename containing input files for $NM])
]) # _LT_CMD_GLOBAL_SYMBOLS
# _LT_COMPILER_PIC([TAGNAME])
# ---------------------------
m4_defun([_LT_COMPILER_PIC],
[m4_require([_LT_TAG_COMPILER])dnl
_LT_TAGVAR(lt_prog_compiler_wl, $1)=
_LT_TAGVAR(lt_prog_compiler_pic, $1)=
_LT_TAGVAR(lt_prog_compiler_static, $1)=
m4_if([$1], [CXX], [
# C++ specific cases for pic, static, wl, etc.
if test yes = "$GXX"; then
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
case $host_os in
aix*)
# All AIX code is PIC.
if test ia64 = "$host_cpu"; then
# AIX 5 now supports IA64 processor
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
fi
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
;;
amigaos*)
case $host_cpu in
powerpc)
# see comment about AmigaOS4 .so support
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
;;
m68k)
# FIXME: we need at least 68020 code to build shared libraries, but
# adding the '-m68020' flag to GCC prevents building anything better,
# like '-m68040'.
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-m68020 -resident32 -malways-restore-a4'
;;
esac
;;
beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*)
# PIC is the default for these OSes.
;;
mingw* | cygwin* | os2* | pw32* | cegcc*)
# This hack is so that the source file can tell whether it is being
# built for inclusion in a dll (and should export symbols for example).
 # Although the cygwin gcc ignores -fPIC, we still need this for old-style
# (--disable-auto-import) libraries
m4_if([$1], [GCJ], [],
[_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT'])
case $host_os in
os2*)
_LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-static'
;;
esac
;;
darwin* | rhapsody*)
# PIC is the default on this platform
# Common symbols not allowed in MH_DYLIB files
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common'
;;
*djgpp*)
# DJGPP does not support shared libraries at all
_LT_TAGVAR(lt_prog_compiler_pic, $1)=
;;
haiku*)
# PIC is the default for Haiku.
# The "-static" flag exists, but is broken.
_LT_TAGVAR(lt_prog_compiler_static, $1)=
;;
interix[[3-9]]*)
# Interix 3.x gcc -fpic/-fPIC options generate broken code.
# Instead, we relocate shared libraries at runtime.
;;
sysv4*MP*)
if test -d /usr/nec; then
_LT_TAGVAR(lt_prog_compiler_pic, $1)=-Kconform_pic
fi
;;
hpux*)
# PIC is the default for 64-bit PA HP-UX, but not for 32-bit
# PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag
# sets the default TLS model and affects inlining.
case $host_cpu in
hppa*64*)
;;
*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
;;
esac
;;
*qnx* | *nto*)
 # QNX uses GNU C++, but we need to define the -shared option too, otherwise
 # it will core dump.
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared'
;;
*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
;;
esac
else
case $host_os in
aix[[4-9]]*)
# All AIX code is PIC.
if test ia64 = "$host_cpu"; then
# AIX 5 now supports IA64 processor
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
else
_LT_TAGVAR(lt_prog_compiler_static, $1)='-bnso -bI:/lib/syscalls.exp'
fi
;;
chorus*)
case $cc_basename in
cxch68*)
# Green Hills C++ Compiler
# _LT_TAGVAR(lt_prog_compiler_static, $1)="--no_auto_instantiation -u __main -u __premain -u _abort -r $COOL_DIR/lib/libOrb.a $MVME_DIR/lib/CC/libC.a $MVME_DIR/lib/classix/libcx.s.a"
;;
esac
;;
mingw* | cygwin* | os2* | pw32* | cegcc*)
# This hack is so that the source file can tell whether it is being
# built for inclusion in a dll (and should export symbols for example).
m4_if([$1], [GCJ], [],
[_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT'])
;;
dgux*)
case $cc_basename in
ec++*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
;;
ghcx*)
# Green Hills C++ Compiler
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic'
;;
*)
;;
esac
;;
freebsd* | dragonfly*)
# FreeBSD uses GNU C++
;;
hpux9* | hpux10* | hpux11*)
case $cc_basename in
CC*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-a ${wl}archive'
if test ia64 != "$host_cpu"; then
_LT_TAGVAR(lt_prog_compiler_pic, $1)='+Z'
fi
;;
aCC*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-a ${wl}archive'
case $host_cpu in
hppa*64*|ia64*)
# +Z the default
;;
*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='+Z'
;;
esac
;;
*)
;;
esac
;;
interix*)
 # This is c89, which is MS Visual C++ (no shared libs).
 # Anyone want to do a port?
;;
irix5* | irix6* | nonstopux*)
case $cc_basename in
CC*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
# CC pic flag -KPIC is the default.
;;
*)
;;
esac
;;
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*)
case $cc_basename in
KCC*)
# KAI C++ Compiler
_LT_TAGVAR(lt_prog_compiler_wl, $1)='--backend -Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
;;
ecpc* )
# old Intel C++ for x86_64, which still supported -KPIC.
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
;;
icpc* )
# Intel C++, used to be incompatible with GCC.
# ICC 10 doesn't accept -KPIC any more.
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
;;
pgCC* | pgcpp*)
# Portland Group C++ compiler
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fpic'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
cxx*)
# Compaq C++
# Make sure the PIC flag is empty. It appears that all Alpha
# Linux and Compaq Tru64 Unix objects are PIC.
_LT_TAGVAR(lt_prog_compiler_pic, $1)=
_LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
;;
xlc* | xlC* | bgxl[[cC]]* | mpixl[[cC]]*)
# IBM XL 8.0, 9.0 on PPC and BlueGene
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-qpic'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-qstaticlink'
;;
*)
case `$CC -V 2>&1 | sed 5q` in
*Sun\ C*)
# Sun C++ 5.9
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld '
;;
esac
;;
esac
;;
lynxos*)
;;
m88k*)
;;
mvs*)
case $cc_basename in
cxx*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-W c,exportall'
;;
*)
;;
esac
;;
- netbsd*)
+ netbsd* | netbsdelf*-gnu)
;;
*qnx* | *nto*)
 # QNX uses GNU C++, but we need to define the -shared option too, otherwise
 # it will core dump.
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared'
;;
osf3* | osf4* | osf5*)
case $cc_basename in
KCC*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='--backend -Wl,'
;;
RCC*)
# Rational C++ 2.4.1
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic'
;;
cxx*)
# Digital/Compaq C++
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
# Make sure the PIC flag is empty. It appears that all Alpha
# Linux and Compaq Tru64 Unix objects are PIC.
_LT_TAGVAR(lt_prog_compiler_pic, $1)=
_LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
;;
*)
;;
esac
;;
psos*)
;;
solaris*)
case $cc_basename in
CC* | sunCC*)
# Sun C++ 4.2, 5.x and Centerline C++
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld '
;;
gcx*)
# Green Hills C++ Compiler
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC'
;;
*)
;;
esac
;;
sunos4*)
case $cc_basename in
CC*)
# Sun C++ 4.x
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
lcc*)
# Lucid
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic'
;;
*)
;;
esac
;;
sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*)
case $cc_basename in
CC*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
esac
;;
tandem*)
case $cc_basename in
NCC*)
# NonStop-UX NCC 3.20
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
;;
*)
;;
esac
;;
vxworks*)
;;
*)
_LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no
;;
esac
fi
],
[
if test yes = "$GCC"; then
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
case $host_os in
aix*)
# All AIX code is PIC.
if test ia64 = "$host_cpu"; then
# AIX 5 now supports IA64 processor
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
fi
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
;;
amigaos*)
case $host_cpu in
powerpc)
# see comment about AmigaOS4 .so support
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
;;
m68k)
# FIXME: we need at least 68020 code to build shared libraries, but
# adding the '-m68020' flag to GCC prevents building anything better,
# like '-m68040'.
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-m68020 -resident32 -malways-restore-a4'
;;
esac
;;
beos* | irix5* | irix6* | nonstopux* | osf3* | osf4* | osf5*)
# PIC is the default for these OSes.
;;
mingw* | cygwin* | pw32* | os2* | cegcc*)
# This hack is so that the source file can tell whether it is being
# built for inclusion in a dll (and should export symbols for example).
 # Although the cygwin gcc ignores -fPIC, we still need this for old-style
# (--disable-auto-import) libraries
m4_if([$1], [GCJ], [],
[_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT'])
case $host_os in
os2*)
_LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-static'
;;
esac
;;
darwin* | rhapsody*)
# PIC is the default on this platform
# Common symbols not allowed in MH_DYLIB files
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common'
;;
haiku*)
# PIC is the default for Haiku.
# The "-static" flag exists, but is broken.
_LT_TAGVAR(lt_prog_compiler_static, $1)=
;;
hpux*)
# PIC is the default for 64-bit PA HP-UX, but not for 32-bit
# PA HP-UX. On IA64 HP-UX, PIC is the default but the pic flag
# sets the default TLS model and affects inlining.
case $host_cpu in
hppa*64*)
# +Z the default
;;
*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
;;
esac
;;
interix[[3-9]]*)
# Interix 3.x gcc -fpic/-fPIC options generate broken code.
# Instead, we relocate shared libraries at runtime.
;;
msdosdjgpp*)
# Just because we use GCC doesn't mean we suddenly get shared libraries
# on systems that don't support them.
_LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no
enable_shared=no
;;
*nto* | *qnx*)
 # QNX uses GNU C++, but we need to define the -shared option too, otherwise
 # it will core dump.
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared'
;;
sysv4*MP*)
if test -d /usr/nec; then
_LT_TAGVAR(lt_prog_compiler_pic, $1)=-Kconform_pic
fi
;;
*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
;;
esac
case $cc_basename in
nvcc*) # Cuda Compiler Driver 2.2
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Xlinker '
if test -n "$_LT_TAGVAR(lt_prog_compiler_pic, $1)"; then
_LT_TAGVAR(lt_prog_compiler_pic, $1)="-Xcompiler $_LT_TAGVAR(lt_prog_compiler_pic, $1)"
fi
;;
esac
else
# PORTME Check for flag to pass linker flags through the system compiler.
case $host_os in
aix*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
if test ia64 = "$host_cpu"; then
# AIX 5 now supports IA64 processor
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
else
_LT_TAGVAR(lt_prog_compiler_static, $1)='-bnso -bI:/lib/syscalls.exp'
fi
;;
darwin* | rhapsody*)
# PIC is the default on this platform
# Common symbols not allowed in MH_DYLIB files
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fno-common'
case $cc_basename in
nagfor*)
# NAG Fortran compiler
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,-Wl,,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
esac
;;
mingw* | cygwin* | pw32* | os2* | cegcc*)
# This hack is so that the source file can tell whether it is being
# built for inclusion in a dll (and should export symbols for example).
m4_if([$1], [GCJ], [],
[_LT_TAGVAR(lt_prog_compiler_pic, $1)='-DDLL_EXPORT'])
case $host_os in
os2*)
_LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-static'
;;
esac
;;
hpux9* | hpux10* | hpux11*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
# PIC is the default for IA64 HP-UX and 64-bit HP-UX, but
# not for PA HP-UX.
case $host_cpu in
hppa*64*|ia64*)
# +Z the default
;;
*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='+Z'
;;
esac
# Is there a better lt_prog_compiler_static that works with the bundled CC?
_LT_TAGVAR(lt_prog_compiler_static, $1)='$wl-a ${wl}archive'
;;
irix5* | irix6* | nonstopux*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
# PIC (with -KPIC) is the default.
_LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
;;
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*)
case $cc_basename in
# old Intel for x86_64, which still supported -KPIC.
ecc*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
;;
# icc used to be incompatible with GCC.
# ICC 10 doesn't accept -KPIC any more.
icc* | ifort*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
;;
# Lahey Fortran 8.1.
lf95*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='--shared'
_LT_TAGVAR(lt_prog_compiler_static, $1)='--static'
;;
nagfor*)
# NAG Fortran compiler
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,-Wl,,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
tcc*)
 # Fabrice Bellard et al.'s Tiny C Compiler
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
;;
pgcc* | pgf77* | pgf90* | pgf95* | pgfortran*)
# Portland Group compilers (*not* the Pentium gcc compiler,
# which looks to be a dead project)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fpic'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
ccc*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
# All Alpha code is PIC.
_LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
;;
xl* | bgxl* | bgf* | mpixl*)
# IBM XL C 8.0/Fortran 10.1, 11.1 on PPC and BlueGene
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-qpic'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-qstaticlink'
;;
*)
case `$CC -V 2>&1 | sed 5q` in
*Sun\ Ceres\ Fortran* | *Sun*Fortran*\ [[1-7]].* | *Sun*Fortran*\ 8.[[0-3]]*)
# Sun Fortran 8.3 passes all unrecognized flags to the linker
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
_LT_TAGVAR(lt_prog_compiler_wl, $1)=''
;;
*Sun\ F* | *Sun*Fortran*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld '
;;
*Sun\ C*)
# Sun C 5.9
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
;;
*Intel*\ [[CF]]*Compiler*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-static'
;;
*Portland\ Group*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fpic'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
esac
;;
esac
;;
newsos6)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
*nto* | *qnx*)
 # QNX uses GNU C++, but we need to define the -shared option too, otherwise
 # it will core dump.
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-fPIC -shared'
;;
osf3* | osf4* | osf5*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
# All OSF/1 code is PIC.
_LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
;;
rdos*)
_LT_TAGVAR(lt_prog_compiler_static, $1)='-non_shared'
;;
solaris*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
case $cc_basename in
f77* | f90* | f95* | sunf77* | sunf90* | sunf95*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld ';;
*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,';;
esac
;;
sunos4*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Qoption ld '
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-PIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
sysv4 | sysv4.2uw2* | sysv4.3*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
sysv4*MP*)
if test -d /usr/nec; then
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-Kconform_pic'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
fi
;;
sysv5* | unixware* | sco3.2v5* | sco5v6* | OpenUNIX*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-KPIC'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
unicos*)
_LT_TAGVAR(lt_prog_compiler_wl, $1)='-Wl,'
_LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no
;;
uts4*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)='-pic'
_LT_TAGVAR(lt_prog_compiler_static, $1)='-Bstatic'
;;
*)
_LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no
;;
esac
fi
])
case $host_os in
# For platforms that do not support PIC, -DPIC is meaningless:
*djgpp*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)=
;;
*)
_LT_TAGVAR(lt_prog_compiler_pic, $1)="$_LT_TAGVAR(lt_prog_compiler_pic, $1)@&t@m4_if([$1],[],[ -DPIC],[m4_if([$1],[CXX],[ -DPIC],[])])"
;;
esac
AC_CACHE_CHECK([for $compiler option to produce PIC],
[_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)],
[_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_prog_compiler_pic, $1)])
_LT_TAGVAR(lt_prog_compiler_pic, $1)=$_LT_TAGVAR(lt_cv_prog_compiler_pic, $1)
#
# Check to make sure the PIC flag actually works.
#
if test -n "$_LT_TAGVAR(lt_prog_compiler_pic, $1)"; then
_LT_COMPILER_OPTION([if $compiler PIC flag $_LT_TAGVAR(lt_prog_compiler_pic, $1) works],
[_LT_TAGVAR(lt_cv_prog_compiler_pic_works, $1)],
[$_LT_TAGVAR(lt_prog_compiler_pic, $1)@&t@m4_if([$1],[],[ -DPIC],[m4_if([$1],[CXX],[ -DPIC],[])])], [],
[case $_LT_TAGVAR(lt_prog_compiler_pic, $1) in
"" | " "*) ;;
*) _LT_TAGVAR(lt_prog_compiler_pic, $1)=" $_LT_TAGVAR(lt_prog_compiler_pic, $1)" ;;
esac],
[_LT_TAGVAR(lt_prog_compiler_pic, $1)=
_LT_TAGVAR(lt_prog_compiler_can_build_shared, $1)=no])
fi
_LT_TAGDECL([pic_flag], [lt_prog_compiler_pic], [1],
[Additional compiler flags for building library objects])
_LT_TAGDECL([wl], [lt_prog_compiler_wl], [1],
[How to pass a linker flag through the compiler])
#
# Check to make sure the static flag actually works.
#
wl=$_LT_TAGVAR(lt_prog_compiler_wl, $1) eval lt_tmp_static_flag=\"$_LT_TAGVAR(lt_prog_compiler_static, $1)\"
_LT_LINKER_OPTION([if $compiler static flag $lt_tmp_static_flag works],
_LT_TAGVAR(lt_cv_prog_compiler_static_works, $1),
$lt_tmp_static_flag,
[],
[_LT_TAGVAR(lt_prog_compiler_static, $1)=])
_LT_TAGDECL([link_static_flag], [lt_prog_compiler_static], [1],
[Compiler flag to prevent dynamic linking])
])# _LT_COMPILER_PIC
# _LT_LINKER_SHLIBS([TAGNAME])
# ----------------------------
# See if the linker supports building shared libraries.
m4_defun([_LT_LINKER_SHLIBS],
[AC_REQUIRE([LT_PATH_LD])dnl
AC_REQUIRE([LT_PATH_NM])dnl
m4_require([_LT_PATH_MANIFEST_TOOL])dnl
m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_DECL_EGREP])dnl
m4_require([_LT_DECL_SED])dnl
m4_require([_LT_CMD_GLOBAL_SYMBOLS])dnl
m4_require([_LT_TAG_COMPILER])dnl
AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries])
m4_if([$1], [CXX], [
_LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
_LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*']
case $host_os in
aix[[4-9]]*)
# If we're using GNU nm, then we don't want the "-C" option.
 # To GNU nm, -C means demangle; to AIX nm, it means don't demangle.
# Without the "-l" option, or with the "-B" option, AIX nm treats
# weak defined symbols like other global defined symbols, whereas
# GNU nm marks them as "W".
# While the 'weak' keyword is ignored in the Export File, we need
# it in the Import File for the 'aix-soname' feature, so we have
# to replace the "-B" option with "-P" for AIX nm.
if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then
_LT_TAGVAR(export_symbols_cmds, $1)='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && ([substr](\$ 3,1,1) != ".")) { if (\$ 2 == "W") { print \$ 3 " weak" } else { print \$ 3 } } }'\'' | sort -u > $export_symbols'
else
_LT_TAGVAR(export_symbols_cmds, $1)='`func_echo_all $NM | $SED -e '\''s/B\([[^B]]*\)$/P\1/'\''` -PCpgl $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) && ([substr](\$ 1,1,1) != ".")) { if ((\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) { print \$ 1 " weak" } else { print \$ 1 } } }'\'' | sort -u > $export_symbols'
fi
;;
pw32*)
_LT_TAGVAR(export_symbols_cmds, $1)=$ltdll_cmds
;;
cygwin* | mingw* | cegcc*)
case $cc_basename in
cl*)
_LT_TAGVAR(exclude_expsyms, $1)='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*'
;;
*)
_LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
_LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname']
;;
esac
;;
+ linux* | k*bsd*-gnu | gnu*)
+ _LT_TAGVAR(link_all_deplibs, $1)=no
+ ;;
*)
_LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
;;
esac
], [
runpath_var=
_LT_TAGVAR(allow_undefined_flag, $1)=
_LT_TAGVAR(always_export_symbols, $1)=no
_LT_TAGVAR(archive_cmds, $1)=
_LT_TAGVAR(archive_expsym_cmds, $1)=
_LT_TAGVAR(compiler_needs_object, $1)=no
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no
_LT_TAGVAR(export_dynamic_flag_spec, $1)=
_LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED '\''s/.* //'\'' | sort | uniq > $export_symbols'
_LT_TAGVAR(hardcode_automatic, $1)=no
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_direct_absolute, $1)=no
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=
_LT_TAGVAR(hardcode_libdir_separator, $1)=
_LT_TAGVAR(hardcode_minus_L, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=unsupported
_LT_TAGVAR(inherit_rpath, $1)=no
_LT_TAGVAR(link_all_deplibs, $1)=unknown
_LT_TAGVAR(module_cmds, $1)=
_LT_TAGVAR(module_expsym_cmds, $1)=
_LT_TAGVAR(old_archive_from_new_cmds, $1)=
_LT_TAGVAR(old_archive_from_expsyms_cmds, $1)=
_LT_TAGVAR(thread_safe_flag_spec, $1)=
_LT_TAGVAR(whole_archive_flag_spec, $1)=
# include_expsyms should be a list of space-separated symbols to be *always*
# included in the symbol list
_LT_TAGVAR(include_expsyms, $1)=
# exclude_expsyms can be an extended regexp of symbols to exclude
# it will be wrapped by ' (' and ')$', so one must not match beginning or
# end of line. Example: 'a|bc|.*d.*' will exclude the symbols 'a' and 'bc',
# as well as any symbol that contains 'd'.
_LT_TAGVAR(exclude_expsyms, $1)=['_GLOBAL_OFFSET_TABLE_|_GLOBAL__F[ID]_.*']
# Although _GLOBAL_OFFSET_TABLE_ is a valid symbol C name, most a.out
# platforms (ab)use it in PIC code, but their linkers get confused if
# the symbol is explicitly referenced. Since portable code cannot
# rely on this symbol name, it's probably fine to never include it in
# preloaded symbol tables.
# Exclude shared library initialization/finalization symbols.
dnl Note also adjust exclude_expsyms for C++ above.
extract_expsyms_cmds=
case $host_os in
cygwin* | mingw* | pw32* | cegcc*)
# FIXME: the MSVC++ port hasn't been tested in a loooong time
# When not using gcc, we currently assume that we are using
# Microsoft Visual C++.
if test yes != "$GCC"; then
with_gnu_ld=no
fi
;;
interix*)
# we just hope/assume this is gcc and not c89 (= MSVC++)
with_gnu_ld=yes
;;
openbsd* | bitrig*)
with_gnu_ld=no
;;
+ linux* | k*bsd*-gnu | gnu*)
+ _LT_TAGVAR(link_all_deplibs, $1)=no
+ ;;
esac
_LT_TAGVAR(ld_shlibs, $1)=yes
# On some targets, GNU ld is compatible enough with the native linker
# that we're better off using the native interface for both.
lt_use_gnu_ld_interface=no
if test yes = "$with_gnu_ld"; then
case $host_os in
aix*)
# The AIX port of GNU ld has always aspired to compatibility
# with the native linker. However, as the warning in the GNU ld
# block says, versions before 2.19.5* couldn't really create working
# shared libraries, regardless of the interface used.
case `$LD -v 2>&1` in
*\ \(GNU\ Binutils\)\ 2.19.5*) ;;
*\ \(GNU\ Binutils\)\ 2.[[2-9]]*) ;;
*\ \(GNU\ Binutils\)\ [[3-9]]*) ;;
*)
lt_use_gnu_ld_interface=yes
;;
esac
;;
*)
lt_use_gnu_ld_interface=yes
;;
esac
fi
if test yes = "$lt_use_gnu_ld_interface"; then
# If archive_cmds runs LD, not CC, wlarc should be empty
wlarc='$wl'
# Set some defaults for GNU ld with shared library support. These
# are reset later if shared libraries are not supported. Putting them
# here allows them to be overridden if necessary.
runpath_var=LD_RUN_PATH
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic'
 # Ancient GNU ld didn't support --whole-archive et al.
if $LD --help 2>&1 | $GREP 'no-whole-archive' > /dev/null; then
_LT_TAGVAR(whole_archive_flag_spec, $1)=$wlarc'--whole-archive$convenience '$wlarc'--no-whole-archive'
else
_LT_TAGVAR(whole_archive_flag_spec, $1)=
fi
supports_anon_versioning=no
case `$LD -v | $SED -e 's/([^)]\+)\s\+//' 2>&1` in
*GNU\ gold*) supports_anon_versioning=yes ;;
*\ [[01]].* | *\ 2.[[0-9]].* | *\ 2.10.*) ;; # catch versions < 2.11
*\ 2.11.93.0.2\ *) supports_anon_versioning=yes ;; # RH7.3 ...
*\ 2.11.92.0.12\ *) supports_anon_versioning=yes ;; # Mandrake 8.2 ...
*\ 2.11.*) ;; # other 2.11 versions
*) supports_anon_versioning=yes ;;
esac
# See if GNU ld supports shared libraries.
case $host_os in
aix[[3-9]]*)
# On AIX/PPC, the GNU linker is very broken
if test ia64 != "$host_cpu"; then
_LT_TAGVAR(ld_shlibs, $1)=no
cat <<_LT_EOF 1>&2
*** Warning: the GNU linker, at least up to release 2.19, is reported
*** to be unable to reliably create shared libraries on AIX.
*** Therefore, libtool is disabling shared libraries support. If you
*** really care for shared libraries, you may want to install binutils
*** 2.20 or above, or modify your PATH so that a non-GNU linker is found.
*** You will then need to restart the configuration process.
_LT_EOF
fi
;;
amigaos*)
case $host_cpu in
powerpc)
# see comment about AmigaOS4 .so support
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)=''
;;
m68k)
_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_minus_L, $1)=yes
;;
esac
;;
beos*)
if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
# Joseph Beckenbach <jrb3@best.com> says some releases of gcc
# support --undefined. This deserves some investigation. FIXME
_LT_TAGVAR(archive_cmds, $1)='$CC -nostart $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
cygwin* | mingw* | pw32* | cegcc*)
# _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless,
# as there is no search path for DLLs.
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-all-symbols'
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
_LT_TAGVAR(always_export_symbols, $1)=no
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
_LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1 DATA/;s/^.*[[ ]]__nm__\([[^ ]]*\)[[ ]][[^ ]]*/\1 DATA/;/^I[[ ]]/d;/^[[AITW]][[ ]]/s/.* //'\'' | sort | uniq > $export_symbols'
_LT_TAGVAR(exclude_expsyms, $1)=['[_]+GLOBAL_OFFSET_TABLE_|[_]+GLOBAL__[FID]_.*|[_]+head_[A-Za-z0-9_]+_dll|[A-Za-z0-9_]+_dll_iname']
if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
# If the export-symbols file already is a .def file, use it as
# is; otherwise, prepend EXPORTS...
_LT_TAGVAR(archive_expsym_cmds, $1)='if _LT_DLL_DEF_P([$export_symbols]); then
cp $export_symbols $output_objdir/$soname.def;
else
echo EXPORTS > $output_objdir/$soname.def;
cat $export_symbols >> $output_objdir/$soname.def;
fi~
$CC -shared $output_objdir/$soname.def $libobjs $deplibs $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
haiku*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(link_all_deplibs, $1)=yes
;;
os2*)
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_minus_L, $1)=yes
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
shrext_cmds=.dll
_LT_TAGVAR(archive_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~
$ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~
$ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~
$ECHO EXPORTS >> $output_objdir/$libname.def~
emxexp $libobjs | $SED /"_DLL_InitTerm"/d >> $output_objdir/$libname.def~
$CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~
emximp -o $lib $output_objdir/$libname.def'
_LT_TAGVAR(archive_expsym_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~
$ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~
$ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~
$ECHO EXPORTS >> $output_objdir/$libname.def~
prefix_cmds="$SED"~
if test EXPORTS = "`$SED 1q $export_symbols`"; then
prefix_cmds="$prefix_cmds -e 1d";
fi~
prefix_cmds="$prefix_cmds -e \"s/^\(.*\)$/_\1/g\""~
cat $export_symbols | $prefix_cmds >> $output_objdir/$libname.def~
$CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~
emximp -o $lib $output_objdir/$libname.def'
_LT_TAGVAR(old_archive_From_new_cmds, $1)='emximp -o $output_objdir/${libname}_dll.a $output_objdir/$libname.def'
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
;;
interix[[3-9]]*)
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
# Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc.
# Instead, shared libraries are loaded at an image base (0x10000000 by
# default) and relocated if they conflict, which is a slow, very
# memory-consuming and fragmenting process. To avoid this, we pick a random,
# 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000 at link
# time. Moving up from 0x10000000 also allows more sbrk(2) space.
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='sed "s|^|_|" $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--retain-symbols-file,$output_objdir/$soname.expsym $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib'
;;
gnu* | linux* | tpf* | k*bsd*-gnu | kopensolaris*-gnu)
tmp_diet=no
if test linux-dietlibc = "$host_os"; then
case $cc_basename in
diet\ *) tmp_diet=yes;; # linux-dietlibc with static linking (!diet-dyn)
esac
fi
if $LD --help 2>&1 | $EGREP ': supported targets:.* elf' > /dev/null \
&& test no = "$tmp_diet"
then
tmp_addflag=' $pic_flag'
tmp_sharedflag='-shared'
case $cc_basename,$host_cpu in
pgcc*) # Portland Group C compiler
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
tmp_addflag=' $pic_flag'
;;
pgf77* | pgf90* | pgf95* | pgfortran*)
# Portland Group f77 and f90 compilers
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
tmp_addflag=' $pic_flag -Mnomain' ;;
ecc*,ia64* | icc*,ia64*) # Intel C compiler on ia64
tmp_addflag=' -i_dynamic' ;;
efc*,ia64* | ifort*,ia64*) # Intel Fortran compiler on ia64
tmp_addflag=' -i_dynamic -nofor_main' ;;
ifc* | ifort*) # Intel Fortran compiler
tmp_addflag=' -nofor_main' ;;
lf95*) # Lahey Fortran 8.1
_LT_TAGVAR(whole_archive_flag_spec, $1)=
tmp_sharedflag='--shared' ;;
nagfor*) # NAGFOR 5.3
tmp_sharedflag='-Wl,-shared' ;;
xl[[cC]]* | bgxl[[cC]]* | mpixl[[cC]]*) # IBM XL C 8.0 on PPC (deal with xlf below)
tmp_sharedflag='-qmkshrobj'
tmp_addflag= ;;
nvcc*) # Cuda Compiler Driver 2.2
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
_LT_TAGVAR(compiler_needs_object, $1)=yes
;;
esac
case `$CC -V 2>&1 | sed 5q` in
*Sun\ C*) # Sun C 5.9
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
_LT_TAGVAR(compiler_needs_object, $1)=yes
tmp_sharedflag='-G' ;;
*Sun\ F*) # Sun Fortran 8.3
tmp_sharedflag='-G' ;;
esac
_LT_TAGVAR(archive_cmds, $1)='$CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
if test yes = "$supports_anon_versioning"; then
_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~
cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
echo "local: *; };" >> $output_objdir/$libname.ver~
$CC '"$tmp_sharedflag""$tmp_addflag"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-version-script $wl$output_objdir/$libname.ver -o $lib'
fi
case $cc_basename in
tcc*)
_LT_TAGVAR(export_dynamic_flag_spec, $1)='-rdynamic'
;;
xlf* | bgf* | bgxlf* | mpixlf*)
# IBM XL Fortran 10.1 on PPC cannot create shared libs itself
_LT_TAGVAR(whole_archive_flag_spec, $1)='--whole-archive$convenience --no-whole-archive'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(archive_cmds, $1)='$LD -shared $libobjs $deplibs $linker_flags -soname $soname -o $lib'
if test yes = "$supports_anon_versioning"; then
_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~
cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
echo "local: *; };" >> $output_objdir/$libname.ver~
$LD -shared $libobjs $deplibs $linker_flags -soname $soname -version-script $output_objdir/$libname.ver -o $lib'
fi
;;
esac
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
- netbsd*)
+ netbsd* | netbsdelf*-gnu)
if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then
_LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable $libobjs $deplibs $linker_flags -o $lib'
wlarc=
else
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
fi
;;
solaris*)
if $LD -v 2>&1 | $GREP 'BFD 2\.8' > /dev/null; then
_LT_TAGVAR(ld_shlibs, $1)=no
cat <<_LT_EOF 1>&2
*** Warning: The releases 2.8.* of the GNU linker cannot reliably
*** create shared libraries on Solaris systems. Therefore, libtool
*** is disabling shared libraries support. We urge you to upgrade GNU
*** binutils to release 2.9.1 or newer. Another option is to modify
*** your PATH or compiler configuration so that the native linker is
*** used, and then restart.
_LT_EOF
elif $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
sysv5* | sco3.2v5* | sco5v6* | unixware* | OpenUNIX*)
case `$LD -v 2>&1` in
*\ [[01]].* | *\ 2.[[0-9]].* | *\ 2.1[[0-5]].*)
_LT_TAGVAR(ld_shlibs, $1)=no
cat <<_LT_EOF 1>&2
*** Warning: Releases of the GNU linker prior to 2.16.91.0.3 cannot
*** reliably create shared libraries on SCO systems. Therefore, libtool
*** is disabling shared libraries support. We urge you to upgrade GNU
*** binutils to release 2.16.91.0.3 or newer. Another option is to modify
*** your PATH or compiler configuration so that the native linker is
*** used, and then restart.
_LT_EOF
;;
*)
# For security reasons, it is highly recommended that you always
# use absolute paths for naming shared libraries, and exclude the
# DT_RUNPATH tag from executables and libraries. But doing so
# requires that you compile everything twice, which is a pain.
if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
esac
;;
sunos4*)
_LT_TAGVAR(archive_cmds, $1)='$LD -assert pure-text -Bshareable -o $lib $libobjs $deplibs $linker_flags'
wlarc=
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
*)
if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
esac
if test no = "$_LT_TAGVAR(ld_shlibs, $1)"; then
runpath_var=
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=
_LT_TAGVAR(export_dynamic_flag_spec, $1)=
_LT_TAGVAR(whole_archive_flag_spec, $1)=
fi
else
# PORTME fill in a description of your system's linker (not GNU ld)
case $host_os in
aix3*)
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
_LT_TAGVAR(always_export_symbols, $1)=yes
_LT_TAGVAR(archive_expsym_cmds, $1)='$LD -o $output_objdir/$soname $libobjs $deplibs $linker_flags -bE:$export_symbols -T512 -H512 -bM:SRE~$AR $AR_FLAGS $lib $output_objdir/$soname'
# Note: this linker hardcodes the directories in LIBPATH if there
# are no directories specified by -L.
_LT_TAGVAR(hardcode_minus_L, $1)=yes
if test yes = "$GCC" && test -z "$lt_prog_compiler_static"; then
# Neither direct hardcoding nor static linking is supported with a
# broken collect2.
_LT_TAGVAR(hardcode_direct, $1)=unsupported
fi
;;
aix[[4-9]]*)
if test ia64 = "$host_cpu"; then
# On IA64, the linker does run time linking by default, so we don't
# have to do anything special.
aix_use_runtimelinking=no
exp_sym_flag='-Bexport'
no_entry_flag=
else
# If we're using GNU nm, then we don't want the "-C" option.
# To GNU nm, -C means demangle; to AIX nm, it means don't demangle.
# Without the "-l" option, or with the "-B" option, AIX nm treats
# weak defined symbols like other global defined symbols, whereas
# GNU nm marks them as "W".
# While the 'weak' keyword is ignored in the Export File, we need
# it in the Import File for the 'aix-soname' feature, so we have
# to replace the "-B" option with "-P" for AIX nm.
if $NM -V 2>&1 | $GREP 'GNU' > /dev/null; then
_LT_TAGVAR(export_symbols_cmds, $1)='$NM -Bpg $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W")) && ([substr](\$ 3,1,1) != ".")) { if (\$ 2 == "W") { print \$ 3 " weak" } else { print \$ 3 } } }'\'' | sort -u > $export_symbols'
else
_LT_TAGVAR(export_symbols_cmds, $1)='`func_echo_all $NM | $SED -e '\''s/B\([[^B]]*\)$/P\1/'\''` -PCpgl $libobjs $convenience | awk '\''{ if (((\$ 2 == "T") || (\$ 2 == "D") || (\$ 2 == "B") || (\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) && ([substr](\$ 1,1,1) != ".")) { if ((\$ 2 == "W") || (\$ 2 == "V") || (\$ 2 == "Z")) { print \$ 1 " weak" } else { print \$ 1 } } }'\'' | sort -u > $export_symbols'
fi
aix_use_runtimelinking=no
# Test if we are trying to use run time linking or normal
# AIX style linking. If -brtl is somewhere in LDFLAGS, we
# have runtime linking enabled, and use it for executables.
# For shared libraries, we enable/disable runtime linking
# depending on the kind of the shared library created -
# when "with_aix_soname,aix_use_runtimelinking" is:
# "aix,no" lib.a(lib.so.V) shared, rtl:no, for executables
# "aix,yes" lib.so shared, rtl:yes, for executables
# lib.a static archive
# "both,no" lib.so.V(shr.o) shared, rtl:yes
# lib.a(lib.so.V) shared, rtl:no, for executables
# "both,yes" lib.so.V(shr.o) shared, rtl:yes, for executables
# lib.a(lib.so.V) shared, rtl:no
# "svr4,*" lib.so.V(shr.o) shared, rtl:yes, for executables
# lib.a static archive
case $host_os in aix4.[[23]]|aix4.[[23]].*|aix[[5-9]]*)
for ld_flag in $LDFLAGS; do
if (test x-brtl = "x$ld_flag" || test x-Wl,-brtl = "x$ld_flag"); then
aix_use_runtimelinking=yes
break
fi
done
if test svr4,no = "$with_aix_soname,$aix_use_runtimelinking"; then
# With aix-soname=svr4, we create the lib.so.V shared archives only,
# so we don't have lib.a shared libs to link our executables.
# We have to force runtime linking in this case.
aix_use_runtimelinking=yes
LDFLAGS="$LDFLAGS -Wl,-brtl"
fi
;;
esac
exp_sym_flag='-bexport'
no_entry_flag='-bnoentry'
fi
# When large executables or shared objects are built, AIX ld can
# have problems creating the table of contents. If linking a library
# or program results in "error TOC overflow" add -mminimal-toc to
# CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not
# enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS.
_LT_TAGVAR(archive_cmds, $1)=''
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_direct_absolute, $1)=yes
_LT_TAGVAR(hardcode_libdir_separator, $1)=':'
_LT_TAGVAR(link_all_deplibs, $1)=yes
_LT_TAGVAR(file_list_spec, $1)='$wl-f,'
case $with_aix_soname,$aix_use_runtimelinking in
aix,*) ;; # traditional, no import file
svr4,* | *,yes) # use import file
# The Import File defines what to hardcode.
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_direct_absolute, $1)=no
;;
esac
if test yes = "$GCC"; then
case $host_os in aix4.[[012]]|aix4.[[012]].*)
# We only want to do this on AIX 4.2 and lower, the check
# below for broken collect2 doesn't work under 4.3+
collect2name=`$CC -print-prog-name=collect2`
if test -f "$collect2name" &&
strings "$collect2name" | $GREP resolve_lib_name >/dev/null
then
# We have reworked collect2
:
else
# We have old collect2
_LT_TAGVAR(hardcode_direct, $1)=unsupported
# It fails to find uninstalled libraries when the uninstalled
# path is not listed in the libpath. Setting hardcode_minus_L
# to unsupported forces relinking
_LT_TAGVAR(hardcode_minus_L, $1)=yes
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=
fi
;;
esac
shared_flag='-shared'
if test yes = "$aix_use_runtimelinking"; then
shared_flag="$shared_flag "'$wl-G'
fi
# Need to ensure runtime linking is disabled for the traditional
# shared library, or the linker may eventually find shared libraries
# /with/ Import File - we do not want to mix them.
shared_flag_aix='-shared'
shared_flag_svr4='-shared $wl-G'
else
# not using gcc
if test ia64 = "$host_cpu"; then
# VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release
# chokes on -Wl,-G. The following line is correct:
shared_flag='-G'
else
if test yes = "$aix_use_runtimelinking"; then
shared_flag='$wl-G'
else
shared_flag='$wl-bM:SRE'
fi
shared_flag_aix='$wl-bM:SRE'
shared_flag_svr4='$wl-G'
fi
fi
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-bexpall'
# It seems that -bexpall does not export symbols beginning with
# underscore (_), so it is better to generate a list of symbols to export.
_LT_TAGVAR(always_export_symbols, $1)=yes
if test aix,yes = "$with_aix_soname,$aix_use_runtimelinking"; then
# Warning - without using the other runtime loading flags (-brtl),
# -berok will link without error, but may produce a broken library.
_LT_TAGVAR(allow_undefined_flag, $1)='-berok'
# Determine the default libpath from the value encoded in an
# empty executable.
_LT_SYS_MODULE_PATH_AIX([$1])
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-blibpath:$libdir:'"$aix_libpath"
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs $wl'$no_entry_flag' $compiler_flags `if test -n "$allow_undefined_flag"; then func_echo_all "$wl$allow_undefined_flag"; else :; fi` $wl'$exp_sym_flag:\$export_symbols' '$shared_flag
else
if test ia64 = "$host_cpu"; then
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R $libdir:/usr/lib:/lib'
_LT_TAGVAR(allow_undefined_flag, $1)="-z nodefs"
_LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\$wl$no_entry_flag"' $compiler_flags $wl$allow_undefined_flag '"\$wl$exp_sym_flag:\$export_symbols"
else
# Determine the default libpath from the value encoded in an
# empty executable.
_LT_SYS_MODULE_PATH_AIX([$1])
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-blibpath:$libdir:'"$aix_libpath"
# Warning - without using the other run time loading flags,
# -berok will link without error, but may produce a broken library.
_LT_TAGVAR(no_undefined_flag, $1)=' $wl-bernotok'
_LT_TAGVAR(allow_undefined_flag, $1)=' $wl-berok'
if test yes = "$with_gnu_ld"; then
# We only use this code for GNU lds that support --whole-archive.
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive$convenience $wl--no-whole-archive'
else
# Exported symbols can be pulled into shared objects from archives
_LT_TAGVAR(whole_archive_flag_spec, $1)='$convenience'
fi
_LT_TAGVAR(archive_cmds_need_lc, $1)=yes
_LT_TAGVAR(archive_expsym_cmds, $1)='$RM -r $output_objdir/$realname.d~$MKDIR $output_objdir/$realname.d'
# -brtl affects multiple linker settings, -berok does not and is overridden later
compiler_flags_filtered='`func_echo_all "$compiler_flags " | $SED -e "s%-brtl\\([[, ]]\\)%-berok\\1%g"`'
if test svr4 != "$with_aix_soname"; then
# This is similar to how AIX traditionally builds its shared libraries.
_LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$CC '$shared_flag_aix' -o $output_objdir/$realname.d/$soname $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$realname.d/$soname'
fi
if test aix != "$with_aix_soname"; then
_LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$CC '$shared_flag_svr4' -o $output_objdir/$realname.d/$shared_archive_member_spec.o $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$STRIP -e $output_objdir/$realname.d/$shared_archive_member_spec.o~( func_echo_all "#! $soname($shared_archive_member_spec.o)"; if test shr_64 = "$shared_archive_member_spec"; then func_echo_all "# 64"; else func_echo_all "# 32"; fi; cat $export_symbols ) > $output_objdir/$realname.d/$shared_archive_member_spec.imp~$AR $AR_FLAGS $output_objdir/$soname $output_objdir/$realname.d/$shared_archive_member_spec.o $output_objdir/$realname.d/$shared_archive_member_spec.imp'
else
# used by -dlpreopen to get the symbols
_LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$MV $output_objdir/$realname.d/$soname $output_objdir'
fi
_LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$RM -r $output_objdir/$realname.d'
fi
fi
;;
amigaos*)
case $host_cpu in
powerpc)
# see comment about AmigaOS4 .so support
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)=''
;;
m68k)
_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/a2ixlibrary.data~$ECHO "#define NAME $libname" > $output_objdir/a2ixlibrary.data~$ECHO "#define LIBRARY_ID 1" >> $output_objdir/a2ixlibrary.data~$ECHO "#define VERSION $major" >> $output_objdir/a2ixlibrary.data~$ECHO "#define REVISION $revision" >> $output_objdir/a2ixlibrary.data~$AR $AR_FLAGS $lib $libobjs~$RANLIB $lib~(cd $output_objdir && a2ixlibrary -32)'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_minus_L, $1)=yes
;;
esac
;;
bsdi[[45]]*)
_LT_TAGVAR(export_dynamic_flag_spec, $1)=-rdynamic
;;
cygwin* | mingw* | pw32* | cegcc*)
# When not using gcc, we currently assume that we are using
# Microsoft Visual C++.
# hardcode_libdir_flag_spec is actually meaningless, as there is
# no search path for DLLs.
case $cc_basename in
cl*)
# Native MSVC
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
_LT_TAGVAR(always_export_symbols, $1)=yes
_LT_TAGVAR(file_list_spec, $1)='@'
# Tell ltmain to make .lib files, not .a files.
libext=lib
# Tell ltmain to make .dll files, not .so files.
shrext_cmds=.dll
# FIXME: Setting linknames here is a bad hack.
_LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~linknames='
_LT_TAGVAR(archive_expsym_cmds, $1)='if _LT_DLL_DEF_P([$export_symbols]); then
cp "$export_symbols" "$output_objdir/$soname.def";
echo "$tool_output_objdir$soname.def" > "$output_objdir/$soname.exp";
else
$SED -e '\''s/^/-link -EXPORT:/'\'' < $export_symbols > $output_objdir/$soname.exp;
fi~
$CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
linknames='
# The linker will not automatically build a static lib if we build a DLL.
# _LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
_LT_TAGVAR(exclude_expsyms, $1)='_NULL_IMPORT_DESCRIPTOR|_IMPORT_DESCRIPTOR_.*'
_LT_TAGVAR(export_symbols_cmds, $1)='$NM $libobjs $convenience | $global_symbol_pipe | $SED -e '\''/^[[BCDGRS]][[ ]]/s/.*[[ ]]\([[^ ]]*\)/\1,DATA/'\'' | $SED -e '\''/^[[AITW]][[ ]]/s/.*[[ ]]//'\'' | sort | uniq > $export_symbols'
# Don't use ranlib
_LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib'
_LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~
lt_tool_outputfile="@TOOL_OUTPUT@"~
case $lt_outputfile in
*.exe|*.EXE) ;;
*)
lt_outputfile=$lt_outputfile.exe
lt_tool_outputfile=$lt_tool_outputfile.exe
;;
esac~
if test : != "$MANIFEST_TOOL" && test -f "$lt_outputfile.manifest"; then
$MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
$RM "$lt_outputfile.manifest";
fi'
;;
*)
# Assume MSVC wrapper
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
# Tell ltmain to make .lib files, not .a files.
libext=lib
# Tell ltmain to make .dll files, not .so files.
shrext_cmds=.dll
# FIXME: Setting linknames here is a bad hack.
_LT_TAGVAR(archive_cmds, $1)='$CC -o $lib $libobjs $compiler_flags `func_echo_all "$deplibs" | $SED '\''s/ -lc$//'\''` -link -dll~linknames='
# The linker will automatically build a .lib file if we build a DLL.
_LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
# FIXME: Should let the user specify the lib program.
_LT_TAGVAR(old_archive_cmds, $1)='lib -OUT:$oldlib$oldobjs$old_deplibs'
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
;;
esac
;;
darwin* | rhapsody*)
_LT_DARWIN_LINKER_FEATURES($1)
;;
dgux*)
_LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
# FreeBSD 2.2.[012] allows us to include c++rt0.o to get C++ constructor
# support. Future versions do this automatically, but an explicit c++rt0.o
# does not break anything, and helps significantly (at the cost of a little
# extra space).
freebsd2.2*)
_LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags /usr/lib/c++rt0.o'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
# Unfortunately, older versions of FreeBSD 2 do not have this feature.
freebsd2.*)
_LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_minus_L, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
# FreeBSD 3 and greater uses gcc -shared to do shared libraries.
freebsd* | dragonfly*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
hpux9*)
if test yes = "$GCC"; then
_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared $pic_flag $wl+b $wl$install_libdir -o $output_objdir/$soname $libobjs $deplibs $compiler_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib'
else
_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$LD -b +b $install_libdir -o $output_objdir/$soname $libobjs $deplibs $linker_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib'
fi
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
_LT_TAGVAR(hardcode_direct, $1)=yes
# hardcode_minus_L: Not really in the search PATH,
# but as the default location of the library.
_LT_TAGVAR(hardcode_minus_L, $1)=yes
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
;;
hpux10*)
if test yes,no = "$GCC,$with_gnu_ld"; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
else
_LT_TAGVAR(archive_cmds, $1)='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'
fi
if test no = "$with_gnu_ld"; then
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_direct_absolute, $1)=yes
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
# hardcode_minus_L: Not really in the search PATH,
# but as the default location of the library.
_LT_TAGVAR(hardcode_minus_L, $1)=yes
fi
;;
hpux11*)
if test yes,no = "$GCC,$with_gnu_ld"; then
case $host_cpu in
hppa*64*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl+h $wl$soname -o $lib $libobjs $deplibs $compiler_flags'
;;
ia64*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $wl+h $wl$soname $wl+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
;;
*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags'
;;
esac
else
case $host_cpu in
hppa*64*)
_LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname -o $lib $libobjs $deplibs $compiler_flags'
;;
ia64*)
_LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+nodefaultrpath -o $lib $libobjs $deplibs $compiler_flags'
;;
*)
m4_if($1, [], [
# Older versions of the 11.00 compiler do not understand -b yet
# (HP92453-01 A.11.01.20 doesn't, HP92453-01 B.11.X.35175-35176.GP does)
_LT_LINKER_OPTION([if $CC understands -b],
_LT_TAGVAR(lt_cv_prog_compiler__b, $1), [-b],
[_LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags'],
[_LT_TAGVAR(archive_cmds, $1)='$LD -b +h $soname +b $install_libdir -o $lib $libobjs $deplibs $linker_flags'])],
[_LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $libobjs $deplibs $compiler_flags'])
;;
esac
fi
if test no = "$with_gnu_ld"; then
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
case $host_cpu in
hppa*64*|ia64*)
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
*)
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_direct_absolute, $1)=yes
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
# hardcode_minus_L: Not really in the search PATH,
# but as the default location of the library.
_LT_TAGVAR(hardcode_minus_L, $1)=yes
;;
esac
fi
;;
irix5* | irix6* | nonstopux*)
if test yes = "$GCC"; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib'
# Try to use the -exported_symbol ld option, if it does not
# work, assume that -exports_file does not work either and
# implicitly export all symbols.
# This should be the same for all languages, so no per-tag cache variable.
AC_CACHE_CHECK([whether the $host_os linker accepts -exported_symbol],
[lt_cv_irix_exported_symbol],
[save_LDFLAGS=$LDFLAGS
LDFLAGS="$LDFLAGS -shared $wl-exported_symbol ${wl}foo $wl-update_registry $wl/dev/null"
AC_LINK_IFELSE(
[AC_LANG_SOURCE(
[AC_LANG_CASE([C], [[int foo (void) { return 0; }]],
[C++], [[int foo (void) { return 0; }]],
[Fortran 77], [[
subroutine foo
end]],
[Fortran], [[
subroutine foo
end]])])],
[lt_cv_irix_exported_symbol=yes],
[lt_cv_irix_exported_symbol=no])
LDFLAGS=$save_LDFLAGS])
if test yes = "$lt_cv_irix_exported_symbol"; then
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations $wl-exports_file $wl$export_symbols -o $lib'
fi
+ _LT_TAGVAR(link_all_deplibs, $1)=no
else
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -exports_file $export_symbols -o $lib'
fi
_LT_TAGVAR(archive_cmds_need_lc, $1)='no'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
_LT_TAGVAR(inherit_rpath, $1)=yes
_LT_TAGVAR(link_all_deplibs, $1)=yes
;;
linux*)
case $cc_basename in
tcc*)
# Fabrice Bellard et al's Tiny C Compiler
_LT_TAGVAR(ld_shlibs, $1)=yes
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
;;
esac
;;
- netbsd*)
+ netbsd* | netbsdelf*-gnu)
if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then
_LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $libobjs $deplibs $linker_flags' # a.out
else
_LT_TAGVAR(archive_cmds, $1)='$LD -shared -o $lib $libobjs $deplibs $linker_flags' # ELF
fi
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
newsos6)
_LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
*nto* | *qnx*)
;;
openbsd* | bitrig*)
if test -f /usr/libexec/ld.so; then
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
_LT_TAGVAR(hardcode_direct_absolute, $1)=yes
if test -z "`echo __ELF__ | $CC -E - | $GREP __ELF__`"; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags $wl-retain-symbols-file,$export_symbols'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
else
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
fi
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
os2*)
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_minus_L, $1)=yes
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
shrext_cmds=.dll
_LT_TAGVAR(archive_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~
$ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~
$ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~
$ECHO EXPORTS >> $output_objdir/$libname.def~
emxexp $libobjs | $SED /"_DLL_InitTerm"/d >> $output_objdir/$libname.def~
$CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~
emximp -o $lib $output_objdir/$libname.def'
_LT_TAGVAR(archive_expsym_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~
$ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~
$ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~
$ECHO EXPORTS >> $output_objdir/$libname.def~
prefix_cmds="$SED"~
if test EXPORTS = "`$SED 1q $export_symbols`"; then
prefix_cmds="$prefix_cmds -e 1d";
fi~
prefix_cmds="$prefix_cmds -e \"s/^\(.*\)$/_\1/g\""~
cat $export_symbols | $prefix_cmds >> $output_objdir/$libname.def~
$CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~
emximp -o $lib $output_objdir/$libname.def'
_LT_TAGVAR(old_archive_From_new_cmds, $1)='emximp -o $output_objdir/${libname}_dll.a $output_objdir/$libname.def'
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
;;
osf3*)
if test yes = "$GCC"; then
_LT_TAGVAR(allow_undefined_flag, $1)=' $wl-expect_unresolved $wl\*'
_LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib'
else
_LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*'
_LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib'
fi
_LT_TAGVAR(archive_cmds_need_lc, $1)='no'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
;;
osf4* | osf5*) # as osf3* with the addition of -msym flag
if test yes = "$GCC"; then
_LT_TAGVAR(allow_undefined_flag, $1)=' $wl-expect_unresolved $wl\*'
_LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $pic_flag $libobjs $deplibs $compiler_flags $wl-msym $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
else
_LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*'
_LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $libobjs $deplibs $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done; printf "%s\\n" "-hidden">> $lib.exp~
$CC -shared$allow_undefined_flag $wl-input $wl$lib.exp $compiler_flags $libobjs $deplibs -soname $soname `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib~$RM $lib.exp'
	# Both the C and C++ compilers support -rpath directly
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir'
fi
_LT_TAGVAR(archive_cmds_need_lc, $1)='no'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
;;
solaris*)
_LT_TAGVAR(no_undefined_flag, $1)=' -z defs'
if test yes = "$GCC"; then
wlarc='$wl'
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $wl-z ${wl}text $wl-h $wl$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
$CC -shared $pic_flag $wl-z ${wl}text $wl-M $wl$lib.exp $wl-h $wl$soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
else
case `$CC -V 2>&1` in
*"Compilers 5.0"*)
wlarc=''
_LT_TAGVAR(archive_cmds, $1)='$LD -G$allow_undefined_flag -h $soname -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
$LD -G$allow_undefined_flag -M $lib.exp -h $soname -o $lib $libobjs $deplibs $linker_flags~$RM $lib.exp'
;;
*)
wlarc='$wl'
_LT_TAGVAR(archive_cmds, $1)='$CC -G$allow_undefined_flag -h $soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
$CC -G$allow_undefined_flag -M $lib.exp -h $soname -o $lib $libobjs $deplibs $compiler_flags~$RM $lib.exp'
;;
esac
fi
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
case $host_os in
solaris2.[[0-5]] | solaris2.[[0-5]].*) ;;
*)
# The compiler driver will combine and reorder linker options,
# but understands '-z linker_flag'. GCC discards it without '$wl',
# but is careful enough not to reorder.
# Supported since Solaris 2.6 (maybe 2.5.1?)
if test yes = "$GCC"; then
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl-z ${wl}allextract$convenience $wl-z ${wl}defaultextract'
else
_LT_TAGVAR(whole_archive_flag_spec, $1)='-z allextract$convenience -z defaultextract'
fi
;;
esac
_LT_TAGVAR(link_all_deplibs, $1)=yes
;;
sunos4*)
if test sequent = "$host_vendor"; then
# Use $CC to link under sequent, because it throws in some extra .o
# files that make .init and .fini sections work.
_LT_TAGVAR(archive_cmds, $1)='$CC -G $wl-h $soname -o $lib $libobjs $deplibs $compiler_flags'
else
_LT_TAGVAR(archive_cmds, $1)='$LD -assert pure-text -Bstatic -o $lib $libobjs $deplibs $linker_flags'
fi
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_minus_L, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
sysv4)
case $host_vendor in
sni)
_LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(hardcode_direct, $1)=yes # is this really true???
;;
siemens)
	## LD is ld; it makes a PLAMLIB
## CC just makes a GrossModule.
_LT_TAGVAR(archive_cmds, $1)='$LD -G -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(reload_cmds, $1)='$CC -r -o $output$reload_objs'
_LT_TAGVAR(hardcode_direct, $1)=no
;;
motorola)
_LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(hardcode_direct, $1)=no #Motorola manual says yes, but my tests say they lie
;;
esac
runpath_var='LD_RUN_PATH'
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
sysv4.3*)
_LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
_LT_TAGVAR(export_dynamic_flag_spec, $1)='-Bexport'
;;
sysv4*MP*)
if test -d /usr/nec; then
_LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
runpath_var=LD_RUN_PATH
hardcode_runpath_var=yes
_LT_TAGVAR(ld_shlibs, $1)=yes
fi
;;
sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[[01]].[[10]]* | unixware7* | sco3.2v5.0.[[024]]*)
_LT_TAGVAR(no_undefined_flag, $1)='$wl-z,text'
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
runpath_var='LD_RUN_PATH'
if test yes = "$GCC"; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
else
_LT_TAGVAR(archive_cmds, $1)='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
fi
;;
sysv5* | sco3.2v5* | sco5v6*)
# Note: We CANNOT use -z defs as we might desire, because we do not
# link with -lc, and that would cause any symbols used from libc to
# always be unresolved, which means just about no library would
# ever link correctly. If we're not using GNU ld we use -z text
# though, which does catch some bad symbols but isn't as heavy-handed
# as -z defs.
_LT_TAGVAR(no_undefined_flag, $1)='$wl-z,text'
_LT_TAGVAR(allow_undefined_flag, $1)='$wl-z,nodefs'
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R,$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=':'
_LT_TAGVAR(link_all_deplibs, $1)=yes
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-Bexport'
runpath_var='LD_RUN_PATH'
if test yes = "$GCC"; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
else
_LT_TAGVAR(archive_cmds, $1)='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
fi
;;
uts4*)
_LT_TAGVAR(archive_cmds, $1)='$LD -G -h $soname -o $lib $libobjs $deplibs $linker_flags'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
*)
_LT_TAGVAR(ld_shlibs, $1)=no
;;
esac
if test sni = "$host_vendor"; then
case $host in
sysv4 | sysv4.2uw2* | sysv4.3* | sysv5*)
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-Blargedynsym'
;;
esac
fi
fi
])
AC_MSG_RESULT([$_LT_TAGVAR(ld_shlibs, $1)])
test no = "$_LT_TAGVAR(ld_shlibs, $1)" && can_build_shared=no
_LT_TAGVAR(with_gnu_ld, $1)=$with_gnu_ld
_LT_DECL([], [libext], [0], [Old archive suffix (normally "a")])dnl
_LT_DECL([], [shrext_cmds], [1], [Shared library suffix (normally ".so")])dnl
_LT_DECL([], [extract_expsyms_cmds], [2],
[The commands to extract the exported symbol list from a shared archive])
#
# Do we need to explicitly link libc?
#
case "x$_LT_TAGVAR(archive_cmds_need_lc, $1)" in
x|xyes)
# Assume -lc should be added
_LT_TAGVAR(archive_cmds_need_lc, $1)=yes
if test yes,yes = "$GCC,$enable_shared"; then
case $_LT_TAGVAR(archive_cmds, $1) in
*'~'*)
# FIXME: we may have to deal with multi-command sequences.
;;
'$CC '*)
# Test whether the compiler implicitly links with -lc since on some
# systems, -lgcc has to come before -lc. If gcc already passes -lc
# to ld, don't add -lc before -lgcc.
AC_CACHE_CHECK([whether -lc should be explicitly linked in],
[lt_cv_]_LT_TAGVAR(archive_cmds_need_lc, $1),
[$RM conftest*
echo "$lt_simple_compile_test_code" > conftest.$ac_ext
if AC_TRY_EVAL(ac_compile) 2>conftest.err; then
soname=conftest
lib=conftest
libobjs=conftest.$ac_objext
deplibs=
wl=$_LT_TAGVAR(lt_prog_compiler_wl, $1)
pic_flag=$_LT_TAGVAR(lt_prog_compiler_pic, $1)
compiler_flags=-v
linker_flags=-v
verstring=
output_objdir=.
libname=conftest
lt_save_allow_undefined_flag=$_LT_TAGVAR(allow_undefined_flag, $1)
_LT_TAGVAR(allow_undefined_flag, $1)=
if AC_TRY_EVAL(_LT_TAGVAR(archive_cmds, $1) 2\>\&1 \| $GREP \" -lc \" \>/dev/null 2\>\&1)
then
lt_cv_[]_LT_TAGVAR(archive_cmds_need_lc, $1)=no
else
lt_cv_[]_LT_TAGVAR(archive_cmds_need_lc, $1)=yes
fi
_LT_TAGVAR(allow_undefined_flag, $1)=$lt_save_allow_undefined_flag
else
cat conftest.err 1>&5
fi
$RM conftest*
])
_LT_TAGVAR(archive_cmds_need_lc, $1)=$lt_cv_[]_LT_TAGVAR(archive_cmds_need_lc, $1)
;;
esac
fi
;;
esac
_LT_TAGDECL([build_libtool_need_lc], [archive_cmds_need_lc], [0],
[Whether or not to add -lc for building shared libraries])
_LT_TAGDECL([allow_libtool_libs_with_static_runtimes],
[enable_shared_with_static_runtimes], [0],
[Whether or not to disallow shared libs when runtime libs are static])
_LT_TAGDECL([], [export_dynamic_flag_spec], [1],
[Compiler flag to allow reflexive dlopens])
_LT_TAGDECL([], [whole_archive_flag_spec], [1],
[Compiler flag to generate shared objects directly from archives])
_LT_TAGDECL([], [compiler_needs_object], [1],
[Whether the compiler copes with passing no objects directly])
_LT_TAGDECL([], [old_archive_from_new_cmds], [2],
[Create an old-style archive from a shared archive])
_LT_TAGDECL([], [old_archive_from_expsyms_cmds], [2],
[Create a temporary old-style archive to link instead of a shared archive])
_LT_TAGDECL([], [archive_cmds], [2], [Commands used to build a shared archive])
_LT_TAGDECL([], [archive_expsym_cmds], [2])
_LT_TAGDECL([], [module_cmds], [2],
[Commands used to build a loadable module if different from building
a shared archive.])
_LT_TAGDECL([], [module_expsym_cmds], [2])
_LT_TAGDECL([], [with_gnu_ld], [1],
[Whether we are building with GNU ld or not])
_LT_TAGDECL([], [allow_undefined_flag], [1],
[Flag that allows shared libraries with undefined symbols to be built])
_LT_TAGDECL([], [no_undefined_flag], [1],
[Flag that enforces no undefined symbols])
_LT_TAGDECL([], [hardcode_libdir_flag_spec], [1],
[Flag to hardcode $libdir into a binary during linking.
This must work even if $libdir does not exist])
_LT_TAGDECL([], [hardcode_libdir_separator], [1],
[Whether we need a single "-rpath" flag with a separated argument])
_LT_TAGDECL([], [hardcode_direct], [0],
[Set to "yes" if using DIR/libNAME$shared_ext during linking hardcodes
DIR into the resulting binary])
_LT_TAGDECL([], [hardcode_direct_absolute], [0],
[Set to "yes" if using DIR/libNAME$shared_ext during linking hardcodes
DIR into the resulting binary and the resulting library dependency is
"absolute", i.e impossible to change by setting $shlibpath_var if the
library is relocated])
_LT_TAGDECL([], [hardcode_minus_L], [0],
[Set to "yes" if using the -LDIR flag during linking hardcodes DIR
into the resulting binary])
_LT_TAGDECL([], [hardcode_shlibpath_var], [0],
[Set to "yes" if using SHLIBPATH_VAR=DIR during linking hardcodes DIR
into the resulting binary])
_LT_TAGDECL([], [hardcode_automatic], [0],
[Set to "yes" if building a shared library automatically hardcodes DIR
into the library and all subsequent libraries and executables linked
against it])
_LT_TAGDECL([], [inherit_rpath], [0],
[Set to yes if linker adds runtime paths of dependent libraries
to runtime path list])
_LT_TAGDECL([], [link_all_deplibs], [0],
[Whether libtool must link a program against all its dependency libraries])
_LT_TAGDECL([], [always_export_symbols], [0],
[Set to "yes" if exported symbols are required])
_LT_TAGDECL([], [export_symbols_cmds], [2],
[The commands to list exported symbols])
_LT_TAGDECL([], [exclude_expsyms], [1],
[Symbols that should not be listed in the preloaded symbols])
_LT_TAGDECL([], [include_expsyms], [1],
[Symbols that must always be exported])
_LT_TAGDECL([], [prelink_cmds], [2],
[Commands necessary for linking programs (against libraries) with templates])
_LT_TAGDECL([], [postlink_cmds], [2],
[Commands necessary for finishing linking programs])
_LT_TAGDECL([], [file_list_spec], [1],
[Specify filename containing input files])
dnl FIXME: Not yet implemented
dnl _LT_TAGDECL([], [thread_safe_flag_spec], [1],
dnl [Compiler flag to generate thread safe objects])
])# _LT_LINKER_SHLIBS
# _LT_LANG_C_CONFIG([TAG])
# ------------------------
# Ensure that the configuration variables for a C compiler are suitably
# defined. These variables are subsequently used by _LT_CONFIG to write
# the compiler configuration to 'libtool'.
m4_defun([_LT_LANG_C_CONFIG],
[m4_require([_LT_DECL_EGREP])dnl
lt_save_CC=$CC
AC_LANG_PUSH(C)
# Source file extension for C test sources.
ac_ext=c
# Object file extension for compiled C test sources.
objext=o
_LT_TAGVAR(objext, $1)=$objext
# Code to be used in simple compile tests
lt_simple_compile_test_code="int some_variable = 0;"
# Code to be used in simple link tests
lt_simple_link_test_code='int main(){return(0);}'
_LT_TAG_COMPILER
# Save the default compiler, since it gets overwritten when the other
# tags are being tested, and _LT_TAGVAR(compiler, []) is a NOP.
compiler_DEFAULT=$CC
# save warnings/boilerplate of simple test code
_LT_COMPILER_BOILERPLATE
_LT_LINKER_BOILERPLATE
## CAVEAT EMPTOR:
## There is no encapsulation within the following macros, do not change
## the running order or otherwise move them around unless you know exactly
## what you are doing...
if test -n "$compiler"; then
_LT_COMPILER_NO_RTTI($1)
_LT_COMPILER_PIC($1)
_LT_COMPILER_C_O($1)
_LT_COMPILER_FILE_LOCKS($1)
_LT_LINKER_SHLIBS($1)
_LT_SYS_DYNAMIC_LINKER($1)
_LT_LINKER_HARDCODE_LIBPATH($1)
LT_SYS_DLOPEN_SELF
_LT_CMD_STRIPLIB
# Report what library types will actually be built
AC_MSG_CHECKING([if libtool supports shared libraries])
AC_MSG_RESULT([$can_build_shared])
AC_MSG_CHECKING([whether to build shared libraries])
test no = "$can_build_shared" && enable_shared=no
# On AIX, shared libraries and static libraries use the same namespace, and
# are all built from PIC.
case $host_os in
aix3*)
test yes = "$enable_shared" && enable_static=no
if test -n "$RANLIB"; then
archive_cmds="$archive_cmds~\$RANLIB \$lib"
postinstall_cmds='$RANLIB $lib'
fi
;;
aix[[4-9]]*)
if test ia64 != "$host_cpu"; then
case $enable_shared,$with_aix_soname,$aix_use_runtimelinking in
yes,aix,yes) ;; # shared object as lib.so file only
yes,svr4,*) ;; # shared object as lib.so archive member only
yes,*) enable_static=no ;; # shared object in lib.a archive as well
esac
fi
;;
esac
AC_MSG_RESULT([$enable_shared])
AC_MSG_CHECKING([whether to build static libraries])
# Make sure either enable_shared or enable_static is yes.
test yes = "$enable_shared" || enable_static=yes
AC_MSG_RESULT([$enable_static])
_LT_CONFIG($1)
fi
AC_LANG_POP
CC=$lt_save_CC
])# _LT_LANG_C_CONFIG
# _LT_LANG_CXX_CONFIG([TAG])
# --------------------------
# Ensure that the configuration variables for a C++ compiler are suitably
# defined. These variables are subsequently used by _LT_CONFIG to write
# the compiler configuration to 'libtool'.
m4_defun([_LT_LANG_CXX_CONFIG],
[m4_require([_LT_FILEUTILS_DEFAULTS])dnl
m4_require([_LT_DECL_EGREP])dnl
m4_require([_LT_PATH_MANIFEST_TOOL])dnl
if test -n "$CXX" && ( test no != "$CXX" &&
( (test g++ = "$CXX" && `g++ -v >/dev/null 2>&1` ) ||
(test g++ != "$CXX"))); then
AC_PROG_CXXCPP
else
_lt_caught_CXX_error=yes
fi
AC_LANG_PUSH(C++)
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(allow_undefined_flag, $1)=
_LT_TAGVAR(always_export_symbols, $1)=no
_LT_TAGVAR(archive_expsym_cmds, $1)=
_LT_TAGVAR(compiler_needs_object, $1)=no
_LT_TAGVAR(export_dynamic_flag_spec, $1)=
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_direct_absolute, $1)=no
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=
_LT_TAGVAR(hardcode_libdir_separator, $1)=
_LT_TAGVAR(hardcode_minus_L, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=unsupported
_LT_TAGVAR(hardcode_automatic, $1)=no
_LT_TAGVAR(inherit_rpath, $1)=no
_LT_TAGVAR(module_cmds, $1)=
_LT_TAGVAR(module_expsym_cmds, $1)=
_LT_TAGVAR(link_all_deplibs, $1)=unknown
_LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds
_LT_TAGVAR(reload_flag, $1)=$reload_flag
_LT_TAGVAR(reload_cmds, $1)=$reload_cmds
_LT_TAGVAR(no_undefined_flag, $1)=
_LT_TAGVAR(whole_archive_flag_spec, $1)=
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no
# Source file extension for C++ test sources.
ac_ext=cpp
# Object file extension for compiled C++ test sources.
objext=o
_LT_TAGVAR(objext, $1)=$objext
# No sense in running all these tests if we already determined that
# the CXX compiler isn't working. Some variables (like enable_shared)
# are currently assumed to apply to all compilers on this platform,
# and will be corrupted by setting them based on a non-working compiler.
if test yes != "$_lt_caught_CXX_error"; then
# Code to be used in simple compile tests
lt_simple_compile_test_code="int some_variable = 0;"
# Code to be used in simple link tests
lt_simple_link_test_code='int main(int, char *[[]]) { return(0); }'
# ltmain only uses $CC for tagged configurations so make sure $CC is set.
_LT_TAG_COMPILER
# save warnings/boilerplate of simple test code
_LT_COMPILER_BOILERPLATE
_LT_LINKER_BOILERPLATE
# Allow CC to be a program name with arguments.
lt_save_CC=$CC
lt_save_CFLAGS=$CFLAGS
lt_save_LD=$LD
lt_save_GCC=$GCC
GCC=$GXX
lt_save_with_gnu_ld=$with_gnu_ld
lt_save_path_LD=$lt_cv_path_LD
if test -n "${lt_cv_prog_gnu_ldcxx+set}"; then
lt_cv_prog_gnu_ld=$lt_cv_prog_gnu_ldcxx
else
$as_unset lt_cv_prog_gnu_ld
fi
if test -n "${lt_cv_path_LDCXX+set}"; then
lt_cv_path_LD=$lt_cv_path_LDCXX
else
$as_unset lt_cv_path_LD
fi
test -z "${LDCXX+set}" || LD=$LDCXX
CC=${CXX-"c++"}
CFLAGS=$CXXFLAGS
compiler=$CC
_LT_TAGVAR(compiler, $1)=$CC
_LT_CC_BASENAME([$compiler])
if test -n "$compiler"; then
# We don't want -fno-exception when compiling C++ code, so set the
# no_builtin_flag separately
if test yes = "$GXX"; then
_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=' -fno-builtin'
else
_LT_TAGVAR(lt_prog_compiler_no_builtin_flag, $1)=
fi
if test yes = "$GXX"; then
# Set up default GNU C++ configuration
LT_PATH_LD
# Check if GNU C++ uses GNU ld as the underlying linker, since the
# archiving commands below assume that GNU ld is being used.
if test yes = "$with_gnu_ld"; then
_LT_TAGVAR(archive_cmds, $1)='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC $pic_flag -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic'
# If archive_cmds runs LD, not CC, wlarc should be empty
# XXX I think wlarc can be eliminated in ltcf-cxx, but I need to
# investigate it a little bit more. (MM)
wlarc='$wl'
      # ancient GNU ld didn't support --whole-archive et al.
if eval "`$CC -print-prog-name=ld` --help 2>&1" |
$GREP 'no-whole-archive' > /dev/null; then
_LT_TAGVAR(whole_archive_flag_spec, $1)=$wlarc'--whole-archive$convenience '$wlarc'--no-whole-archive'
else
_LT_TAGVAR(whole_archive_flag_spec, $1)=
fi
else
with_gnu_ld=no
wlarc=
# A generic and very simple default shared library creation
# command for GNU C++ for the case where it uses the native
      # linker, instead of GNU ld. If possible, this setting should be
      # overridden to take advantage of the native linker features on
# the platform it is being used on.
_LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib'
fi
# Commands to make compiler produce verbose output that lists
# what "hidden" libraries, object files and flags are used when
# linking a shared library.
output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"'
else
GXX=no
with_gnu_ld=no
wlarc=
fi
# PORTME: fill in a description of your system's C++ link characteristics
AC_MSG_CHECKING([whether the $compiler linker ($LD) supports shared libraries])
_LT_TAGVAR(ld_shlibs, $1)=yes
case $host_os in
aix3*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
aix[[4-9]]*)
if test ia64 = "$host_cpu"; then
# On IA64, the linker does run time linking by default, so we don't
# have to do anything special.
aix_use_runtimelinking=no
exp_sym_flag='-Bexport'
no_entry_flag=
else
aix_use_runtimelinking=no
# Test if we are trying to use run time linking or normal
# AIX style linking. If -brtl is somewhere in LDFLAGS, we
# have runtime linking enabled, and use it for executables.
# For shared libraries, we enable/disable runtime linking
# depending on the kind of the shared library created -
# when "with_aix_soname,aix_use_runtimelinking" is:
# "aix,no" lib.a(lib.so.V) shared, rtl:no, for executables
# "aix,yes" lib.so shared, rtl:yes, for executables
# lib.a static archive
# "both,no" lib.so.V(shr.o) shared, rtl:yes
# lib.a(lib.so.V) shared, rtl:no, for executables
# "both,yes" lib.so.V(shr.o) shared, rtl:yes, for executables
# lib.a(lib.so.V) shared, rtl:no
# "svr4,*" lib.so.V(shr.o) shared, rtl:yes, for executables
# lib.a static archive
case $host_os in aix4.[[23]]|aix4.[[23]].*|aix[[5-9]]*)
for ld_flag in $LDFLAGS; do
case $ld_flag in
*-brtl*)
aix_use_runtimelinking=yes
break
;;
esac
done
if test svr4,no = "$with_aix_soname,$aix_use_runtimelinking"; then
# With aix-soname=svr4, we create the lib.so.V shared archives only,
# so we don't have lib.a shared libs to link our executables.
# We have to force runtime linking in this case.
aix_use_runtimelinking=yes
LDFLAGS="$LDFLAGS -Wl,-brtl"
fi
;;
esac
exp_sym_flag='-bexport'
no_entry_flag='-bnoentry'
fi
# When large executables or shared objects are built, AIX ld can
# have problems creating the table of contents. If linking a library
# or program results in "error TOC overflow" add -mminimal-toc to
# CXXFLAGS/CFLAGS for g++/gcc. In the cases where that is not
# enough to fix the problem, add -Wl,-bbigtoc to LDFLAGS.
_LT_TAGVAR(archive_cmds, $1)=''
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_direct_absolute, $1)=yes
_LT_TAGVAR(hardcode_libdir_separator, $1)=':'
_LT_TAGVAR(link_all_deplibs, $1)=yes
_LT_TAGVAR(file_list_spec, $1)='$wl-f,'
case $with_aix_soname,$aix_use_runtimelinking in
aix,*) ;; # no import file
svr4,* | *,yes) # use import file
# The Import File defines what to hardcode.
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_direct_absolute, $1)=no
;;
esac
if test yes = "$GXX"; then
case $host_os in aix4.[[012]]|aix4.[[012]].*)
# We only want to do this on AIX 4.2 and lower, the check
# below for broken collect2 doesn't work under 4.3+
collect2name=`$CC -print-prog-name=collect2`
if test -f "$collect2name" &&
strings "$collect2name" | $GREP resolve_lib_name >/dev/null
then
# We have reworked collect2
:
else
# We have old collect2
_LT_TAGVAR(hardcode_direct, $1)=unsupported
# It fails to find uninstalled libraries when the uninstalled
# path is not listed in the libpath. Setting hardcode_minus_L
# to unsupported forces relinking
_LT_TAGVAR(hardcode_minus_L, $1)=yes
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=
fi
esac
shared_flag='-shared'
if test yes = "$aix_use_runtimelinking"; then
shared_flag=$shared_flag' $wl-G'
fi
# Need to ensure runtime linking is disabled for the traditional
# shared library, or the linker may eventually find shared libraries
# /with/ Import File - we do not want to mix them.
shared_flag_aix='-shared'
shared_flag_svr4='-shared $wl-G'
else
# not using gcc
if test ia64 = "$host_cpu"; then
# VisualAge C++, Version 5.5 for AIX 5L for IA-64, Beta 3 Release
# chokes on -Wl,-G. The following line is correct:
shared_flag='-G'
else
if test yes = "$aix_use_runtimelinking"; then
shared_flag='$wl-G'
else
shared_flag='$wl-bM:SRE'
fi
shared_flag_aix='$wl-bM:SRE'
shared_flag_svr4='$wl-G'
fi
fi
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-bexpall'
# It seems that -bexpall does not export symbols beginning with
# underscore (_), so it is better to generate a list of symbols to
# export.
_LT_TAGVAR(always_export_symbols, $1)=yes
if test aix,yes = "$with_aix_soname,$aix_use_runtimelinking"; then
# Warning - without using the other runtime loading flags (-brtl),
# -berok will link without error, but may produce a broken library.
# The "-G" linker flag allows undefined symbols.
_LT_TAGVAR(no_undefined_flag, $1)='-bernotok'
# Determine the default libpath from the value encoded in an empty
# executable.
_LT_SYS_MODULE_PATH_AIX([$1])
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-blibpath:$libdir:'"$aix_libpath"
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $deplibs $wl'$no_entry_flag' $compiler_flags `if test -n "$allow_undefined_flag"; then func_echo_all "$wl$allow_undefined_flag"; else :; fi` $wl'$exp_sym_flag:\$export_symbols' '$shared_flag
else
if test ia64 = "$host_cpu"; then
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R $libdir:/usr/lib:/lib'
_LT_TAGVAR(allow_undefined_flag, $1)="-z nodefs"
_LT_TAGVAR(archive_expsym_cmds, $1)="\$CC $shared_flag"' -o $output_objdir/$soname $libobjs $deplibs '"\$wl$no_entry_flag"' $compiler_flags $wl$allow_undefined_flag '"\$wl$exp_sym_flag:\$export_symbols"
else
# Determine the default libpath from the value encoded in an
# empty executable.
_LT_SYS_MODULE_PATH_AIX([$1])
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-blibpath:$libdir:'"$aix_libpath"
# Warning - without using the other run time loading flags,
# -berok will link without error, but may produce a broken library.
_LT_TAGVAR(no_undefined_flag, $1)=' $wl-bernotok'
_LT_TAGVAR(allow_undefined_flag, $1)=' $wl-berok'
if test yes = "$with_gnu_ld"; then
# We only use this code for GNU lds that support --whole-archive.
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive$convenience $wl--no-whole-archive'
else
# Exported symbols can be pulled into shared objects from archives
_LT_TAGVAR(whole_archive_flag_spec, $1)='$convenience'
fi
_LT_TAGVAR(archive_cmds_need_lc, $1)=yes
_LT_TAGVAR(archive_expsym_cmds, $1)='$RM -r $output_objdir/$realname.d~$MKDIR $output_objdir/$realname.d'
# -brtl affects multiple linker settings, -berok does not and is overridden later
compiler_flags_filtered='`func_echo_all "$compiler_flags " | $SED -e "s%-brtl\\([[, ]]\\)%-berok\\1%g"`'
if test svr4 != "$with_aix_soname"; then
# This is similar to how AIX traditionally builds its shared
# libraries. We need -bnortl late, as we may have -brtl in LDFLAGS.
_LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$CC '$shared_flag_aix' -o $output_objdir/$realname.d/$soname $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$AR $AR_FLAGS $output_objdir/$libname$release.a $output_objdir/$realname.d/$soname'
fi
if test aix != "$with_aix_soname"; then
_LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$CC '$shared_flag_svr4' -o $output_objdir/$realname.d/$shared_archive_member_spec.o $libobjs $deplibs $wl-bnoentry '$compiler_flags_filtered'$wl-bE:$export_symbols$allow_undefined_flag~$STRIP -e $output_objdir/$realname.d/$shared_archive_member_spec.o~( func_echo_all "#! $soname($shared_archive_member_spec.o)"; if test shr_64 = "$shared_archive_member_spec"; then func_echo_all "# 64"; else func_echo_all "# 32"; fi; cat $export_symbols ) > $output_objdir/$realname.d/$shared_archive_member_spec.imp~$AR $AR_FLAGS $output_objdir/$soname $output_objdir/$realname.d/$shared_archive_member_spec.o $output_objdir/$realname.d/$shared_archive_member_spec.imp'
else
# used by -dlpreopen to get the symbols
_LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$MV $output_objdir/$realname.d/$soname $output_objdir'
fi
_LT_TAGVAR(archive_expsym_cmds, $1)="$_LT_TAGVAR(archive_expsym_cmds, $1)"'~$RM -r $output_objdir/$realname.d'
fi
fi
;;
beos*)
if $LD --help 2>&1 | $GREP ': supported targets:.* elf' > /dev/null; then
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
# Joseph Beckenbach <jrb3@best.com> says some releases of gcc
# support --undefined. This deserves some investigation. FIXME
_LT_TAGVAR(archive_cmds, $1)='$CC -nostart $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
chorus*)
case $cc_basename in
*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
esac
;;
cygwin* | mingw* | pw32* | cegcc*)
case $GXX,$cc_basename in
,cl* | no,cl*)
# Native MSVC
# hardcode_libdir_flag_spec is actually meaningless, as there is
# no search path for DLLs.
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=' '
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
_LT_TAGVAR(always_export_symbols, $1)=yes
_LT_TAGVAR(file_list_spec, $1)='@'
# Tell ltmain to make .lib files, not .a files.
libext=lib
# Tell ltmain to make .dll files, not .so files.
shrext_cmds=.dll
# FIXME: Setting linknames here is a bad hack.
_LT_TAGVAR(archive_cmds, $1)='$CC -o $output_objdir/$soname $libobjs $compiler_flags $deplibs -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~linknames='
_LT_TAGVAR(archive_expsym_cmds, $1)='if _LT_DLL_DEF_P([$export_symbols]); then
cp "$export_symbols" "$output_objdir/$soname.def";
echo "$tool_output_objdir$soname.def" > "$output_objdir/$soname.exp";
else
$SED -e '\''s/^/-link -EXPORT:/'\'' < $export_symbols > $output_objdir/$soname.exp;
fi~
$CC -o $tool_output_objdir$soname $libobjs $compiler_flags $deplibs "@$tool_output_objdir$soname.exp" -Wl,-DLL,-IMPLIB:"$tool_output_objdir$libname.dll.lib"~
linknames='
# The linker will not automatically build a static lib if we build a DLL.
# _LT_TAGVAR(old_archive_from_new_cmds, $1)='true'
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
# Don't use ranlib
_LT_TAGVAR(old_postinstall_cmds, $1)='chmod 644 $oldlib'
_LT_TAGVAR(postlink_cmds, $1)='lt_outputfile="@OUTPUT@"~
lt_tool_outputfile="@TOOL_OUTPUT@"~
case $lt_outputfile in
*.exe|*.EXE) ;;
*)
lt_outputfile=$lt_outputfile.exe
lt_tool_outputfile=$lt_tool_outputfile.exe
;;
esac~
func_to_tool_file "$lt_outputfile"~
if test : != "$MANIFEST_TOOL" && test -f "$lt_outputfile.manifest"; then
$MANIFEST_TOOL -manifest "$lt_tool_outputfile.manifest" -outputresource:"$lt_tool_outputfile" || exit 1;
$RM "$lt_outputfile.manifest";
fi'
;;
*)
# g++
# _LT_TAGVAR(hardcode_libdir_flag_spec, $1) is actually meaningless,
# as there is no search path for DLLs.
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-all-symbols'
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
_LT_TAGVAR(always_export_symbols, $1)=no
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
if $LD --help 2>&1 | $GREP 'auto-import' > /dev/null; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
# If the export-symbols file is already a .def file, use it as
# is; otherwise, prepend EXPORTS...
_LT_TAGVAR(archive_expsym_cmds, $1)='if _LT_DLL_DEF_P([$export_symbols]); then
cp $export_symbols $output_objdir/$soname.def;
else
echo EXPORTS > $output_objdir/$soname.def;
cat $export_symbols >> $output_objdir/$soname.def;
fi~
$CC -shared -nostdlib $output_objdir/$soname.def $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $output_objdir/$soname $wl--enable-auto-image-base -Xlinker --out-implib -Xlinker $lib'
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
esac
;;
darwin* | rhapsody*)
_LT_DARWIN_LINKER_FEATURES($1)
;;
os2*)
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-L$libdir'
_LT_TAGVAR(hardcode_minus_L, $1)=yes
_LT_TAGVAR(allow_undefined_flag, $1)=unsupported
shrext_cmds=.dll
_LT_TAGVAR(archive_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~
$ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~
$ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~
$ECHO EXPORTS >> $output_objdir/$libname.def~
emxexp $libobjs | $SED /"_DLL_InitTerm"/d >> $output_objdir/$libname.def~
$CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~
emximp -o $lib $output_objdir/$libname.def'
_LT_TAGVAR(archive_expsym_cmds, $1)='$ECHO "LIBRARY ${soname%$shared_ext} INITINSTANCE TERMINSTANCE" > $output_objdir/$libname.def~
$ECHO "DESCRIPTION \"$libname\"" >> $output_objdir/$libname.def~
$ECHO "DATA MULTIPLE NONSHARED" >> $output_objdir/$libname.def~
$ECHO EXPORTS >> $output_objdir/$libname.def~
prefix_cmds="$SED"~
if test EXPORTS = "`$SED 1q $export_symbols`"; then
prefix_cmds="$prefix_cmds -e 1d";
fi~
prefix_cmds="$prefix_cmds -e \"s/^\(.*\)$/_\1/g\""~
cat $export_symbols | $prefix_cmds >> $output_objdir/$libname.def~
$CC -Zdll -Zcrtdll -o $output_objdir/$soname $libobjs $deplibs $compiler_flags $output_objdir/$libname.def~
emximp -o $lib $output_objdir/$libname.def'
_LT_TAGVAR(old_archive_From_new_cmds, $1)='emximp -o $output_objdir/${libname}_dll.a $output_objdir/$libname.def'
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=yes
;;
dgux*)
case $cc_basename in
ec++*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
ghcx*)
# Green Hills C++ Compiler
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
esac
;;
freebsd2.*)
# C++ shared libraries were reported to be fairly broken before
# the switch to ELF
_LT_TAGVAR(ld_shlibs, $1)=no
;;
freebsd-elf*)
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
;;
freebsd* | dragonfly*)
# FreeBSD 3 and later use GNU C++ and GNU ld with standard ELF
# conventions
_LT_TAGVAR(ld_shlibs, $1)=yes
;;
haiku*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(link_all_deplibs, $1)=yes
;;
hpux9*)
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_minus_L, $1)=yes # Not in the search PATH,
# but as the default
# location of the library.
case $cc_basename in
CC*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
aCC*)
_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -b $wl+b $wl$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib'
# Commands to make compiler produce verbose output that lists
# what "hidden" libraries, object files and flags are used when
# linking a shared library.
#
# There doesn't appear to be a way to prevent this compiler from
# explicitly linking system object files so we need to strip them
# from the output so that they don't get included in the library
# dependencies.
output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $EGREP "\-L"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"'
;;
*)
if test yes = "$GXX"; then
_LT_TAGVAR(archive_cmds, $1)='$RM $output_objdir/$soname~$CC -shared -nostdlib $pic_flag $wl+b $wl$install_libdir -o $output_objdir/$soname $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~test "x$output_objdir/$soname" = "x$lib" || mv $output_objdir/$soname $lib'
else
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
esac
;;
hpux10*|hpux11*)
if test no = "$with_gnu_ld"; then
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl+b $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
case $host_cpu in
hppa*64*|ia64*)
;;
*)
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
;;
esac
fi
case $host_cpu in
hppa*64*|ia64*)
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
;;
*)
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_direct_absolute, $1)=yes
_LT_TAGVAR(hardcode_minus_L, $1)=yes # Not in the search PATH,
# but as the default
# location of the library.
;;
esac
case $cc_basename in
CC*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
aCC*)
case $host_cpu in
hppa*64*)
_LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
;;
ia64*)
_LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
;;
*)
_LT_TAGVAR(archive_cmds, $1)='$CC -b $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
;;
esac
# Commands to make compiler produce verbose output that lists
# what "hidden" libraries, object files and flags are used when
# linking a shared library.
#
# There doesn't appear to be a way to prevent this compiler from
# explicitly linking system object files so we need to strip them
# from the output so that they don't get included in the library
# dependencies.
output_verbose_link_cmd='templist=`($CC -b $CFLAGS -v conftest.$objext 2>&1) | $GREP "\-L"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"'
;;
*)
if test yes = "$GXX"; then
if test no = "$with_gnu_ld"; then
case $host_cpu in
hppa*64*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib -fPIC $wl+h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
;;
ia64*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag $wl+h $wl$soname $wl+nodefaultrpath -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
;;
*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $pic_flag $wl+h $wl$soname $wl+b $wl$install_libdir -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
;;
esac
fi
else
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
esac
;;
interix[[3-9]]*)
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
# Hack: On Interix 3.x, we cannot compile PIC because of a broken gcc.
# Instead, shared libraries are loaded at an image base (0x10000000 by
# default) and relocated if they conflict, which is a slow, very
# memory-consuming and fragmenting process. To avoid this, we pick a
# random, 256 KiB-aligned image base between 0x50000000 and 0x6FFC0000
# at link time. Moving up from 0x10000000 also allows more sbrk(2) space.
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='sed "s|^|_|" $export_symbols >$output_objdir/$soname.expsym~$CC -shared $pic_flag $libobjs $deplibs $compiler_flags $wl-h,$soname $wl--retain-symbols-file,$output_objdir/$soname.expsym $wl--image-base,`expr ${RANDOM-$$} % 4096 / 2 \* 262144 + 1342177280` -o $lib'
;;
irix5* | irix6*)
case $cc_basename in
CC*)
# SGI C++
_LT_TAGVAR(archive_cmds, $1)='$CC -shared -all -multigot $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib'
# Archives containing C++ object files must be created using
# "CC -ar", where "CC" is the IRIX C++ compiler. This is
# necessary to make sure instantiated templates are included
# in the archive.
_LT_TAGVAR(old_archive_cmds, $1)='$CC -ar -WR,-u -o $oldlib $oldobjs'
;;
*)
if test yes = "$GXX"; then
if test no = "$with_gnu_ld"; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib'
else
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` -o $lib'
fi
fi
_LT_TAGVAR(link_all_deplibs, $1)=yes
;;
esac
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
_LT_TAGVAR(inherit_rpath, $1)=yes
;;
linux* | k*bsd*-gnu | kopensolaris*-gnu | gnu*)
case $cc_basename in
KCC*)
# Kuck and Associates, Inc. (KAI) C++ Compiler
# KCC will only create a shared library if the output file
# ends with ".so" (or ".sl" for HP-UX), so rename the library
# to its proper name (with version) after linking.
_LT_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\$tempext\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo $lib | $SED -e "s/\$tempext\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib $wl-retain-symbols-file,$export_symbols; mv \$templib $lib'
# Commands to make compiler produce verbose output that lists
# what "hidden" libraries, object files and flags are used when
# linking a shared library.
#
# There doesn't appear to be a way to prevent this compiler from
# explicitly linking system object files so we need to strip them
# from the output so that they don't get included in the library
# dependencies.
output_verbose_link_cmd='templist=`$CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1 | $GREP "ld"`; rm -f libconftest$shared_ext; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic'
# Archives containing C++ object files must be created using
# "CC -Bstatic", where "CC" is the KAI C++ compiler.
_LT_TAGVAR(old_archive_cmds, $1)='$CC -Bstatic -o $oldlib $oldobjs'
;;
icpc* | ecpc* )
# Intel C++
with_gnu_ld=yes
# Version 8.0 and above of icpc choke on multiply defined symbols
# if we add $predep_objects and $postdep_objects; 7.1 and
# earlier, however, do not add the objects themselves.
case `$CC -V 2>&1` in
*"Version 7."*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
;;
*) # Version 8.0 or newer
tmp_idyn=
case $host_cpu in
ia64*) tmp_idyn=' -i_dynamic';;
esac
_LT_TAGVAR(archive_cmds, $1)='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared'"$tmp_idyn"' $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
;;
esac
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic'
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive$convenience $wl--no-whole-archive'
;;
pgCC* | pgcpp*)
# Portland Group C++ compiler
case `$CC -V` in
*pgCC\ [[1-5]].* | *pgcpp\ [[1-5]].*)
_LT_TAGVAR(prelink_cmds, $1)='tpldir=Template.dir~
rm -rf $tpldir~
$CC --prelink_objects --instantiation_dir $tpldir $objs $libobjs $compile_deplibs~
compile_command="$compile_command `find $tpldir -name \*.o | sort | $NL2SP`"'
_LT_TAGVAR(old_archive_cmds, $1)='tpldir=Template.dir~
rm -rf $tpldir~
$CC --prelink_objects --instantiation_dir $tpldir $oldobjs$old_deplibs~
$AR $AR_FLAGS $oldlib$oldobjs$old_deplibs `find $tpldir -name \*.o | sort | $NL2SP`~
$RANLIB $oldlib'
_LT_TAGVAR(archive_cmds, $1)='tpldir=Template.dir~
rm -rf $tpldir~
$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='tpldir=Template.dir~
rm -rf $tpldir~
$CC --prelink_objects --instantiation_dir $tpldir $predep_objects $libobjs $deplibs $convenience $postdep_objects~
$CC -shared $pic_flag $predep_objects $libobjs $deplibs `find $tpldir -name \*.o | sort | $NL2SP` $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
;;
*) # Version 6 and above use weak symbols
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname $wl-retain-symbols-file $wl$export_symbols -o $lib'
;;
esac
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl--rpath $wl$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic'
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`for conv in $convenience\"\"; do test -n \"$conv\" && new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
;;
cxx*)
# Compaq C++
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname -o $lib $wl-retain-symbols-file $wl$export_symbols'
runpath_var=LD_RUN_PATH
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
# Commands to make compiler produce verbose output that lists
# what "hidden" libraries, object files and flags are used when
# linking a shared library.
#
# There doesn't appear to be a way to prevent this compiler from
# explicitly linking system object files so we need to strip them
# from the output so that they don't get included in the library
# dependencies.
output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld .*$\)/\1/"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "X$list" | $Xsed'
;;
xl* | mpixl* | bgxl*)
# IBM XL 8.0 on PPC, with GNU ld
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl--export-dynamic'
_LT_TAGVAR(archive_cmds, $1)='$CC -qmkshrobj $libobjs $deplibs $compiler_flags $wl-soname $wl$soname -o $lib'
if test yes = "$supports_anon_versioning"; then
_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $output_objdir/$libname.ver~
cat $export_symbols | sed -e "s/\(.*\)/\1;/" >> $output_objdir/$libname.ver~
echo "local: *; };" >> $output_objdir/$libname.ver~
$CC -qmkshrobj $libobjs $deplibs $compiler_flags $wl-soname $wl$soname $wl-version-script $wl$output_objdir/$libname.ver -o $lib'
fi
;;
*)
case `$CC -V 2>&1 | sed 5q` in
*Sun\ C*)
# Sun C++ 5.9
_LT_TAGVAR(no_undefined_flag, $1)=' -zdefs'
_LT_TAGVAR(archive_cmds, $1)='$CC -G$allow_undefined_flag -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G$allow_undefined_flag -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-retain-symbols-file $wl$export_symbols'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl--whole-archive`new_convenience=; for conv in $convenience\"\"; do test -z \"$conv\" || new_convenience=\"$new_convenience,$conv\"; done; func_echo_all \"$new_convenience\"` $wl--no-whole-archive'
_LT_TAGVAR(compiler_needs_object, $1)=yes
# Not sure whether something based on
# $CC $CFLAGS -v conftest.$objext -o libconftest$shared_ext 2>&1
# would be better.
output_verbose_link_cmd='func_echo_all'
# Archives containing C++ object files must be created using
# "CC -xar", where "CC" is the Sun C++ compiler. This is
# necessary to make sure instantiated templates are included
# in the archive.
_LT_TAGVAR(old_archive_cmds, $1)='$CC -xar -o $oldlib $oldobjs'
;;
esac
;;
esac
;;
lynxos*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
m88k*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
mvs*)
case $cc_basename in
cxx*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
esac
;;
netbsd*)
if echo __ELF__ | $CC -E - | $GREP __ELF__ >/dev/null; then
_LT_TAGVAR(archive_cmds, $1)='$LD -Bshareable -o $lib $predep_objects $libobjs $deplibs $postdep_objects $linker_flags'
wlarc=
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
fi
# Workaround some broken pre-1.5 toolchains
output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP conftest.$objext | $SED -e "s:-lgcc -lc -lgcc::"'
;;
*nto* | *qnx*)
_LT_TAGVAR(ld_shlibs, $1)=yes
;;
openbsd* | bitrig*)
if test -f /usr/libexec/ld.so; then
_LT_TAGVAR(hardcode_direct, $1)=yes
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
_LT_TAGVAR(hardcode_direct_absolute, $1)=yes
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -o $lib'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
if test -z "`echo __ELF__ | $CC -E - | grep __ELF__`"; then
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $pic_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-retain-symbols-file,$export_symbols -o $lib'
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-E'
_LT_TAGVAR(whole_archive_flag_spec, $1)=$wlarc'--whole-archive$convenience '$wlarc'--no-whole-archive'
fi
output_verbose_link_cmd=func_echo_all
else
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
osf3* | osf4* | osf5*)
case $cc_basename in
KCC*)
# Kuck and Associates, Inc. (KAI) C++ Compiler
# KCC will only create a shared library if the output file
# ends with ".so" (or ".sl" for HP-UX), so rename the library
# to its proper name (with version) after linking.
_LT_TAGVAR(archive_cmds, $1)='tempext=`echo $shared_ext | $SED -e '\''s/\([[^()0-9A-Za-z{}]]\)/\\\\\1/g'\''`; templib=`echo "$lib" | $SED -e "s/\$tempext\..*/.so/"`; $CC $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags --soname $soname -o \$templib; mv \$templib $lib'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath,$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
# Archives containing C++ object files must be created using
# the KAI C++ compiler.
case $host in
osf3*) _LT_TAGVAR(old_archive_cmds, $1)='$CC -Bstatic -o $oldlib $oldobjs' ;;
*) _LT_TAGVAR(old_archive_cmds, $1)='$CC -o $oldlib $oldobjs' ;;
esac
;;
RCC*)
# Rational C++ 2.4.1
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
cxx*)
case $host in
osf3*)
_LT_TAGVAR(allow_undefined_flag, $1)=' $wl-expect_unresolved $wl\*'
_LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $soname `test -n "$verstring" && func_echo_all "$wl-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
;;
*)
_LT_TAGVAR(allow_undefined_flag, $1)=' -expect_unresolved \*'
_LT_TAGVAR(archive_cmds, $1)='$CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname `test -n "$verstring" && func_echo_all "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='for i in `cat $export_symbols`; do printf "%s %s\\n" -exported_symbol "\$i" >> $lib.exp; done~
echo "-hidden">> $lib.exp~
$CC -shared$allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags -msym -soname $soname $wl-input $wl$lib.exp `test -n "$verstring" && $ECHO "-set_version $verstring"` -update_registry $output_objdir/so_locations -o $lib~
$RM $lib.exp'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-rpath $libdir'
;;
esac
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
# Commands to make compiler produce verbose output that lists
# what "hidden" libraries, object files and flags are used when
# linking a shared library.
#
# There doesn't appear to be a way to prevent this compiler from
# explicitly linking system object files so we need to strip them
# from the output so that they don't get included in the library
# dependencies.
output_verbose_link_cmd='templist=`$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP "ld" | $GREP -v "ld:"`; templist=`func_echo_all "$templist" | $SED "s/\(^.*ld.*\)\( .*ld.*$\)/\1/"`; list= ; for z in $templist; do case $z in conftest.$objext) list="$list $z";; *.$objext);; *) list="$list $z";;esac; done; func_echo_all "$list"'
;;
*)
if test yes,no = "$GXX,$with_gnu_ld"; then
_LT_TAGVAR(allow_undefined_flag, $1)=' $wl-expect_unresolved $wl\*'
case $host in
osf3*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared -nostdlib $allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib'
;;
*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $allow_undefined_flag $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-msym $wl-soname $wl$soname `test -n "$verstring" && func_echo_all "$wl-set_version $wl$verstring"` $wl-update_registry $wl$output_objdir/so_locations -o $lib'
;;
esac
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-rpath $wl$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=:
# Commands to make compiler produce verbose output that lists
# what "hidden" libraries, object files and flags are used when
# linking a shared library.
output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"'
else
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
fi
;;
esac
;;
psos*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
sunos4*)
case $cc_basename in
CC*)
# Sun C++ 4.x
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
lcc*)
# Lucid
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
esac
;;
solaris*)
case $cc_basename in
CC* | sunCC*)
# Sun C++ 4.2, 5.x and Centerline C++
_LT_TAGVAR(archive_cmds_need_lc,$1)=yes
_LT_TAGVAR(no_undefined_flag, $1)=' -zdefs'
_LT_TAGVAR(archive_cmds, $1)='$CC -G$allow_undefined_flag -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
$CC -G$allow_undefined_flag $wl-M $wl$lib.exp -h$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='-R$libdir'
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
case $host_os in
solaris2.[[0-5]] | solaris2.[[0-5]].*) ;;
*)
# The compiler driver will combine and reorder linker options,
# but understands '-z linker_flag'.
# Supported since Solaris 2.6 (maybe 2.5.1?)
_LT_TAGVAR(whole_archive_flag_spec, $1)='-z allextract$convenience -z defaultextract'
;;
esac
_LT_TAGVAR(link_all_deplibs, $1)=yes
output_verbose_link_cmd='func_echo_all'
# Archives containing C++ object files must be created using
# "CC -xar", where "CC" is the Sun C++ compiler. This is
# necessary to make sure instantiated templates are included
# in the archive.
_LT_TAGVAR(old_archive_cmds, $1)='$CC -xar -o $oldlib $oldobjs'
;;
gcx*)
# Green Hills C++ Compiler
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-h $wl$soname -o $lib'
# The C++ compiler must be used to create the archive.
_LT_TAGVAR(old_archive_cmds, $1)='$CC $LDFLAGS -archive -o $oldlib $oldobjs'
;;
*)
# GNU C++ compiler with Solaris linker
if test yes,no = "$GXX,$with_gnu_ld"; then
_LT_TAGVAR(no_undefined_flag, $1)=' $wl-z ${wl}defs'
if $CC --version | $GREP -v '^2\.7' > /dev/null; then
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $pic_flag -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-h $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
$CC -shared $pic_flag -nostdlib $wl-M $wl$lib.exp $wl-h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
# Commands to make compiler produce verbose output that lists
# what "hidden" libraries, object files and flags are used when
# linking a shared library.
output_verbose_link_cmd='$CC -shared $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"'
else
# g++ 2.7 appears to require '-G' NOT '-shared' on this
# platform.
_LT_TAGVAR(archive_cmds, $1)='$CC -G -nostdlib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags $wl-h $wl$soname -o $lib'
_LT_TAGVAR(archive_expsym_cmds, $1)='echo "{ global:" > $lib.exp~cat $export_symbols | $SED -e "s/\(.*\)/\1;/" >> $lib.exp~echo "local: *; };" >> $lib.exp~
$CC -G -nostdlib $wl-M $wl$lib.exp $wl-h $wl$soname -o $lib $predep_objects $libobjs $deplibs $postdep_objects $compiler_flags~$RM $lib.exp'
# Commands to make compiler produce verbose output that lists
# what "hidden" libraries, object files and flags are used when
# linking a shared library.
output_verbose_link_cmd='$CC -G $CFLAGS -v conftest.$objext 2>&1 | $GREP -v "^Configured with:" | $GREP "\-L"'
fi
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R $wl$libdir'
case $host_os in
solaris2.[[0-5]] | solaris2.[[0-5]].*) ;;
*)
_LT_TAGVAR(whole_archive_flag_spec, $1)='$wl-z ${wl}allextract$convenience $wl-z ${wl}defaultextract'
;;
esac
fi
;;
esac
;;
sysv4*uw2* | sysv5OpenUNIX* | sysv5UnixWare7.[[01]].[[10]]* | unixware7* | sco3.2v5.0.[[024]]*)
_LT_TAGVAR(no_undefined_flag, $1)='$wl-z,text'
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
runpath_var='LD_RUN_PATH'
case $cc_basename in
CC*)
_LT_TAGVAR(archive_cmds, $1)='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
;;
*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
;;
esac
;;
sysv5* | sco3.2v5* | sco5v6*)
# Note: We CANNOT use -z defs as we might desire, because we do not
# link with -lc, and that would cause any symbols used from libc to
# always be unresolved, which means just about no library would
# ever link correctly. If we're not using GNU ld we use -z text
# though, which does catch some bad symbols but isn't as heavy-handed
# as -z defs.
_LT_TAGVAR(no_undefined_flag, $1)='$wl-z,text'
_LT_TAGVAR(allow_undefined_flag, $1)='$wl-z,nodefs'
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(hardcode_shlibpath_var, $1)=no
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)='$wl-R,$libdir'
_LT_TAGVAR(hardcode_libdir_separator, $1)=':'
_LT_TAGVAR(link_all_deplibs, $1)=yes
_LT_TAGVAR(export_dynamic_flag_spec, $1)='$wl-Bexport'
runpath_var='LD_RUN_PATH'
case $cc_basename in
CC*)
_LT_TAGVAR(archive_cmds, $1)='$CC -G $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -G $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(old_archive_cmds, $1)='$CC -Tprelink_objects $oldobjs~
'"$_LT_TAGVAR(old_archive_cmds, $1)"
_LT_TAGVAR(reload_cmds, $1)='$CC -Tprelink_objects $reload_objs~
'"$_LT_TAGVAR(reload_cmds, $1)"
;;
*)
_LT_TAGVAR(archive_cmds, $1)='$CC -shared $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
_LT_TAGVAR(archive_expsym_cmds, $1)='$CC -shared $wl-Bexport:$export_symbols $wl-h,$soname -o $lib $libobjs $deplibs $compiler_flags'
;;
esac
;;
tandem*)
case $cc_basename in
NCC*)
# NonStop-UX NCC 3.20
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
esac
;;
vxworks*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
*)
# FIXME: insert proper C++ library support
_LT_TAGVAR(ld_shlibs, $1)=no
;;
esac
AC_MSG_RESULT([$_LT_TAGVAR(ld_shlibs, $1)])
test no = "$_LT_TAGVAR(ld_shlibs, $1)" && can_build_shared=no
_LT_TAGVAR(GCC, $1)=$GXX
_LT_TAGVAR(LD, $1)=$LD
## CAVEAT EMPTOR:
## There is no encapsulation within the following macros, do not change
## the running order or otherwise move them around unless you know exactly
## what you are doing...
_LT_SYS_HIDDEN_LIBDEPS($1)
_LT_COMPILER_PIC($1)
_LT_COMPILER_C_O($1)
_LT_COMPILER_FILE_LOCKS($1)
_LT_LINKER_SHLIBS($1)
_LT_SYS_DYNAMIC_LINKER($1)
_LT_LINKER_HARDCODE_LIBPATH($1)
_LT_CONFIG($1)
fi # test -n "$compiler"
CC=$lt_save_CC
CFLAGS=$lt_save_CFLAGS
LDCXX=$LD
LD=$lt_save_LD
GCC=$lt_save_GCC
with_gnu_ld=$lt_save_with_gnu_ld
lt_cv_path_LDCXX=$lt_cv_path_LD
lt_cv_path_LD=$lt_save_path_LD
lt_cv_prog_gnu_ldcxx=$lt_cv_prog_gnu_ld
lt_cv_prog_gnu_ld=$lt_save_with_gnu_ld
fi # test yes != "$_lt_caught_CXX_error"
AC_LANG_POP
])# _LT_LANG_CXX_CONFIG
# _LT_FUNC_STRIPNAME_CNF
# ----------------------
# func_stripname_cnf prefix suffix name
# strip PREFIX and SUFFIX off of NAME.
# PREFIX and SUFFIX must not contain globbing or regex special
# characters, hashes, or percent signs; however, SUFFIX may contain a
# leading dot (in which case it matches only a dot).
#
# This function is identical to the (non-XSI) version of func_stripname,
# except this one can be used by m4 code that may be executed by configure,
# rather than the libtool script.
m4_defun([_LT_FUNC_STRIPNAME_CNF],[dnl
AC_REQUIRE([_LT_DECL_SED])
AC_REQUIRE([_LT_PROG_ECHO_BACKSLASH])
func_stripname_cnf ()
{
case @S|@2 in
.*) func_stripname_result=`$ECHO "@S|@3" | $SED "s%^@S|@1%%; s%\\\\@S|@2\$%%"`;;
*) func_stripname_result=`$ECHO "@S|@3" | $SED "s%^@S|@1%%; s%@S|@2\$%%"`;;
esac
} # func_stripname_cnf
])# _LT_FUNC_STRIPNAME_CNF
# _LT_SYS_HIDDEN_LIBDEPS([TAGNAME])
# ---------------------------------
# Figure out "hidden" library dependencies from verbose
# compiler output when linking a shared library.
# Parse the compiler output and extract the necessary
# objects, libraries and library flags.
m4_defun([_LT_SYS_HIDDEN_LIBDEPS],
[m4_require([_LT_FILEUTILS_DEFAULTS])dnl
AC_REQUIRE([_LT_FUNC_STRIPNAME_CNF])dnl
# Dependencies to place before and after the object being linked:
_LT_TAGVAR(predep_objects, $1)=
_LT_TAGVAR(postdep_objects, $1)=
_LT_TAGVAR(predeps, $1)=
_LT_TAGVAR(postdeps, $1)=
_LT_TAGVAR(compiler_lib_search_path, $1)=
dnl we can't use the lt_simple_compile_test_code here,
dnl because it contains code intended for an executable,
dnl not a library. It's possible we should let each
dnl tag define a new lt_????_link_test_code variable,
dnl but it's only used here...
m4_if([$1], [], [cat > conftest.$ac_ext <<_LT_EOF
int a;
void foo (void) { a = 0; }
_LT_EOF
], [$1], [CXX], [cat > conftest.$ac_ext <<_LT_EOF
class Foo
{
public:
Foo (void) { a = 0; }
private:
int a;
};
_LT_EOF
], [$1], [F77], [cat > conftest.$ac_ext <<_LT_EOF
subroutine foo
implicit none
integer*4 a
a=0
return
end
_LT_EOF
], [$1], [FC], [cat > conftest.$ac_ext <<_LT_EOF
subroutine foo
implicit none
integer a
a=0
return
end
_LT_EOF
], [$1], [GCJ], [cat > conftest.$ac_ext <<_LT_EOF
public class foo {
private int a;
public void bar (void) {
a = 0;
}
};
_LT_EOF
], [$1], [GO], [cat > conftest.$ac_ext <<_LT_EOF
package foo
func foo() {
}
_LT_EOF
])
_lt_libdeps_save_CFLAGS=$CFLAGS
case "$CC $CFLAGS " in #(
*\ -flto*\ *) CFLAGS="$CFLAGS -fno-lto" ;;
*\ -fwhopr*\ *) CFLAGS="$CFLAGS -fno-whopr" ;;
*\ -fuse-linker-plugin*\ *) CFLAGS="$CFLAGS -fno-use-linker-plugin" ;;
esac
dnl Parse the compiler output and extract the necessary
dnl objects, libraries and library flags.
if AC_TRY_EVAL(ac_compile); then
# Parse the compiler output and extract the necessary
# objects, libraries and library flags.
# Sentinel used to keep track of whether or not we are before
# the conftest object file.
pre_test_object_deps_done=no
for p in `eval "$output_verbose_link_cmd"`; do
case $prev$p in
-L* | -R* | -l*)
# Some compilers place a space between "-{L,R}" and the path.
# Remove the space.
if test x-L = "$p" ||
test x-R = "$p"; then
prev=$p
continue
fi
# Expand the sysroot to ease extracting the directories later.
if test -z "$prev"; then
case $p in
-L*) func_stripname_cnf '-L' '' "$p"; prev=-L; p=$func_stripname_result ;;
-R*) func_stripname_cnf '-R' '' "$p"; prev=-R; p=$func_stripname_result ;;
-l*) func_stripname_cnf '-l' '' "$p"; prev=-l; p=$func_stripname_result ;;
esac
fi
case $p in
=*) func_stripname_cnf '=' '' "$p"; p=$lt_sysroot$func_stripname_result ;;
esac
if test no = "$pre_test_object_deps_done"; then
case $prev in
-L | -R)
# Internal compiler library paths should come after those
# provided by the user. The postdeps already come after the
# user supplied libs so there is no need to process them.
if test -z "$_LT_TAGVAR(compiler_lib_search_path, $1)"; then
_LT_TAGVAR(compiler_lib_search_path, $1)=$prev$p
else
_LT_TAGVAR(compiler_lib_search_path, $1)="${_LT_TAGVAR(compiler_lib_search_path, $1)} $prev$p"
fi
;;
# The "-l" case would never come before the object being
# linked, so don't bother handling this case.
esac
else
if test -z "$_LT_TAGVAR(postdeps, $1)"; then
_LT_TAGVAR(postdeps, $1)=$prev$p
else
_LT_TAGVAR(postdeps, $1)="${_LT_TAGVAR(postdeps, $1)} $prev$p"
fi
fi
prev=
;;
*.lto.$objext) ;; # Ignore GCC LTO objects
*.$objext)
# This assumes that the test object file only shows up
# once in the compiler output.
if test "$p" = "conftest.$objext"; then
pre_test_object_deps_done=yes
continue
fi
if test no = "$pre_test_object_deps_done"; then
if test -z "$_LT_TAGVAR(predep_objects, $1)"; then
_LT_TAGVAR(predep_objects, $1)=$p
else
_LT_TAGVAR(predep_objects, $1)="$_LT_TAGVAR(predep_objects, $1) $p"
fi
else
if test -z "$_LT_TAGVAR(postdep_objects, $1)"; then
_LT_TAGVAR(postdep_objects, $1)=$p
else
_LT_TAGVAR(postdep_objects, $1)="$_LT_TAGVAR(postdep_objects, $1) $p"
fi
fi
;;
*) ;; # Ignore the rest.
esac
done
# Clean up.
rm -f a.out a.exe
else
echo "libtool.m4: error: problem compiling $1 test program"
fi
$RM -f conftest.$objext
CFLAGS=$_lt_libdeps_save_CFLAGS
# PORTME: override above test on systems where it is broken
m4_if([$1], [CXX],
[case $host_os in
interix[[3-9]]*)
# Interix 3.5 installs completely hosed .la files for C++, so rather than
# hack all around it, let's just trust "g++" to DTRT.
_LT_TAGVAR(predep_objects,$1)=
_LT_TAGVAR(postdep_objects,$1)=
_LT_TAGVAR(postdeps,$1)=
;;
esac
])
case " $_LT_TAGVAR(postdeps, $1) " in
*" -lc "*) _LT_TAGVAR(archive_cmds_need_lc, $1)=no ;;
esac
_LT_TAGVAR(compiler_lib_search_dirs, $1)=
if test -n "${_LT_TAGVAR(compiler_lib_search_path, $1)}"; then
_LT_TAGVAR(compiler_lib_search_dirs, $1)=`echo " ${_LT_TAGVAR(compiler_lib_search_path, $1)}" | $SED -e 's! -L! !g' -e 's!^ !!'`
fi
_LT_TAGDECL([], [compiler_lib_search_dirs], [1],
[The directories searched by this compiler when creating a shared library])
_LT_TAGDECL([], [predep_objects], [1],
[Dependencies to place before and after the objects being linked to
create a shared library])
_LT_TAGDECL([], [postdep_objects], [1])
_LT_TAGDECL([], [predeps], [1])
_LT_TAGDECL([], [postdeps], [1])
_LT_TAGDECL([], [compiler_lib_search_path], [1],
[The library search path used internally by the compiler when linking
a shared library])
])# _LT_SYS_HIDDEN_LIBDEPS
# _LT_LANG_F77_CONFIG([TAG])
# --------------------------
# Ensure that the configuration variables for a Fortran 77 compiler are
# suitably defined. These variables are subsequently used by _LT_CONFIG
# to write the compiler configuration to 'libtool'.
m4_defun([_LT_LANG_F77_CONFIG],
[AC_LANG_PUSH(Fortran 77)
if test -z "$F77" || test no = "$F77"; then
_lt_disable_F77=yes
fi
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(allow_undefined_flag, $1)=
_LT_TAGVAR(always_export_symbols, $1)=no
_LT_TAGVAR(archive_expsym_cmds, $1)=
_LT_TAGVAR(export_dynamic_flag_spec, $1)=
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_direct_absolute, $1)=no
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=
_LT_TAGVAR(hardcode_libdir_separator, $1)=
_LT_TAGVAR(hardcode_minus_L, $1)=no
_LT_TAGVAR(hardcode_automatic, $1)=no
_LT_TAGVAR(inherit_rpath, $1)=no
_LT_TAGVAR(module_cmds, $1)=
_LT_TAGVAR(module_expsym_cmds, $1)=
_LT_TAGVAR(link_all_deplibs, $1)=unknown
_LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds
_LT_TAGVAR(reload_flag, $1)=$reload_flag
_LT_TAGVAR(reload_cmds, $1)=$reload_cmds
_LT_TAGVAR(no_undefined_flag, $1)=
_LT_TAGVAR(whole_archive_flag_spec, $1)=
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no
# Source file extension for f77 test sources.
ac_ext=f
# Object file extension for compiled f77 test sources.
objext=o
_LT_TAGVAR(objext, $1)=$objext
# No sense in running all these tests if we already determined that
# the F77 compiler isn't working. Some variables (like enable_shared)
# are currently assumed to apply to all compilers on this platform,
# and will be corrupted by setting them based on a non-working compiler.
if test yes != "$_lt_disable_F77"; then
# Code to be used in simple compile tests
lt_simple_compile_test_code="\
subroutine t
return
end
"
# Code to be used in simple link tests
lt_simple_link_test_code="\
program t
end
"
# ltmain only uses $CC for tagged configurations so make sure $CC is set.
_LT_TAG_COMPILER
# save warnings/boilerplate of simple test code
_LT_COMPILER_BOILERPLATE
_LT_LINKER_BOILERPLATE
# Allow CC to be a program name with arguments.
lt_save_CC=$CC
lt_save_GCC=$GCC
lt_save_CFLAGS=$CFLAGS
CC=${F77-"f77"}
CFLAGS=$FFLAGS
compiler=$CC
_LT_TAGVAR(compiler, $1)=$CC
_LT_CC_BASENAME([$compiler])
GCC=$G77
if test -n "$compiler"; then
AC_MSG_CHECKING([if libtool supports shared libraries])
AC_MSG_RESULT([$can_build_shared])
AC_MSG_CHECKING([whether to build shared libraries])
test no = "$can_build_shared" && enable_shared=no
# On AIX, shared libraries and static libraries use the same namespace, and
# are all built from PIC.
case $host_os in
aix3*)
test yes = "$enable_shared" && enable_static=no
if test -n "$RANLIB"; then
archive_cmds="$archive_cmds~\$RANLIB \$lib"
postinstall_cmds='$RANLIB $lib'
fi
;;
aix[[4-9]]*)
if test ia64 != "$host_cpu"; then
case $enable_shared,$with_aix_soname,$aix_use_runtimelinking in
yes,aix,yes) ;; # shared object as lib.so file only
yes,svr4,*) ;; # shared object as lib.so archive member only
yes,*) enable_static=no ;; # shared object in lib.a archive as well
esac
fi
;;
esac
AC_MSG_RESULT([$enable_shared])
AC_MSG_CHECKING([whether to build static libraries])
# Make sure either enable_shared or enable_static is yes.
test yes = "$enable_shared" || enable_static=yes
AC_MSG_RESULT([$enable_static])
_LT_TAGVAR(GCC, $1)=$G77
_LT_TAGVAR(LD, $1)=$LD
## CAVEAT EMPTOR:
## There is no encapsulation within the following macros, do not change
## the running order or otherwise move them around unless you know exactly
## what you are doing...
_LT_COMPILER_PIC($1)
_LT_COMPILER_C_O($1)
_LT_COMPILER_FILE_LOCKS($1)
_LT_LINKER_SHLIBS($1)
_LT_SYS_DYNAMIC_LINKER($1)
_LT_LINKER_HARDCODE_LIBPATH($1)
_LT_CONFIG($1)
fi # test -n "$compiler"
GCC=$lt_save_GCC
CC=$lt_save_CC
CFLAGS=$lt_save_CFLAGS
fi # test yes != "$_lt_disable_F77"
AC_LANG_POP
])# _LT_LANG_F77_CONFIG
# _LT_LANG_FC_CONFIG([TAG])
# -------------------------
# Ensure that the configuration variables for a Fortran compiler are
# suitably defined. These variables are subsequently used by _LT_CONFIG
# to write the compiler configuration to 'libtool'.
m4_defun([_LT_LANG_FC_CONFIG],
[AC_LANG_PUSH(Fortran)
if test -z "$FC" || test no = "$FC"; then
_lt_disable_FC=yes
fi
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(allow_undefined_flag, $1)=
_LT_TAGVAR(always_export_symbols, $1)=no
_LT_TAGVAR(archive_expsym_cmds, $1)=
_LT_TAGVAR(export_dynamic_flag_spec, $1)=
_LT_TAGVAR(hardcode_direct, $1)=no
_LT_TAGVAR(hardcode_direct_absolute, $1)=no
_LT_TAGVAR(hardcode_libdir_flag_spec, $1)=
_LT_TAGVAR(hardcode_libdir_separator, $1)=
_LT_TAGVAR(hardcode_minus_L, $1)=no
_LT_TAGVAR(hardcode_automatic, $1)=no
_LT_TAGVAR(inherit_rpath, $1)=no
_LT_TAGVAR(module_cmds, $1)=
_LT_TAGVAR(module_expsym_cmds, $1)=
_LT_TAGVAR(link_all_deplibs, $1)=unknown
_LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds
_LT_TAGVAR(reload_flag, $1)=$reload_flag
_LT_TAGVAR(reload_cmds, $1)=$reload_cmds
_LT_TAGVAR(no_undefined_flag, $1)=
_LT_TAGVAR(whole_archive_flag_spec, $1)=
_LT_TAGVAR(enable_shared_with_static_runtimes, $1)=no
# Source file extension for fc test sources.
ac_ext=${ac_fc_srcext-f}
# Object file extension for compiled fc test sources.
objext=o
_LT_TAGVAR(objext, $1)=$objext
# No sense in running all these tests if we already determined that
# the FC compiler isn't working. Some variables (like enable_shared)
# are currently assumed to apply to all compilers on this platform,
# and will be corrupted by setting them based on a non-working compiler.
if test yes != "$_lt_disable_FC"; then
# Code to be used in simple compile tests
lt_simple_compile_test_code="\
subroutine t
return
end
"
# Code to be used in simple link tests
lt_simple_link_test_code="\
program t
end
"
# ltmain only uses $CC for tagged configurations so make sure $CC is set.
_LT_TAG_COMPILER
# save warnings/boilerplate of simple test code
_LT_COMPILER_BOILERPLATE
_LT_LINKER_BOILERPLATE
# Allow CC to be a program name with arguments.
lt_save_CC=$CC
lt_save_GCC=$GCC
lt_save_CFLAGS=$CFLAGS
CC=${FC-"f95"}
CFLAGS=$FCFLAGS
compiler=$CC
GCC=$ac_cv_fc_compiler_gnu
_LT_TAGVAR(compiler, $1)=$CC
_LT_CC_BASENAME([$compiler])
if test -n "$compiler"; then
AC_MSG_CHECKING([if libtool supports shared libraries])
AC_MSG_RESULT([$can_build_shared])
AC_MSG_CHECKING([whether to build shared libraries])
test no = "$can_build_shared" && enable_shared=no
# On AIX, shared libraries and static libraries use the same namespace, and
# are all built from PIC.
case $host_os in
aix3*)
test yes = "$enable_shared" && enable_static=no
if test -n "$RANLIB"; then
archive_cmds="$archive_cmds~\$RANLIB \$lib"
postinstall_cmds='$RANLIB $lib'
fi
;;
aix[[4-9]]*)
if test ia64 != "$host_cpu"; then
case $enable_shared,$with_aix_soname,$aix_use_runtimelinking in
yes,aix,yes) ;; # shared object as lib.so file only
yes,svr4,*) ;; # shared object as lib.so archive member only
yes,*) enable_static=no ;; # shared object in lib.a archive as well
esac
fi
;;
esac
AC_MSG_RESULT([$enable_shared])
AC_MSG_CHECKING([whether to build static libraries])
# Make sure either enable_shared or enable_static is yes.
test yes = "$enable_shared" || enable_static=yes
AC_MSG_RESULT([$enable_static])
_LT_TAGVAR(GCC, $1)=$ac_cv_fc_compiler_gnu
_LT_TAGVAR(LD, $1)=$LD
## CAVEAT EMPTOR:
## There is no encapsulation within the following macros, do not change
## the running order or otherwise move them around unless you know exactly
## what you are doing...
_LT_SYS_HIDDEN_LIBDEPS($1)
_LT_COMPILER_PIC($1)
_LT_COMPILER_C_O($1)
_LT_COMPILER_FILE_LOCKS($1)
_LT_LINKER_SHLIBS($1)
_LT_SYS_DYNAMIC_LINKER($1)
_LT_LINKER_HARDCODE_LIBPATH($1)
_LT_CONFIG($1)
fi # test -n "$compiler"
GCC=$lt_save_GCC
CC=$lt_save_CC
CFLAGS=$lt_save_CFLAGS
fi # test yes != "$_lt_disable_FC"
AC_LANG_POP
])# _LT_LANG_FC_CONFIG
# _LT_LANG_GCJ_CONFIG([TAG])
# --------------------------
# Ensure that the configuration variables for the GNU Java Compiler
# are suitably defined. These variables are subsequently used by _LT_CONFIG
# to write the compiler configuration to 'libtool'.
m4_defun([_LT_LANG_GCJ_CONFIG],
[AC_REQUIRE([LT_PROG_GCJ])dnl
AC_LANG_SAVE
# Source file extension for Java test sources.
ac_ext=java
# Object file extension for compiled Java test sources.
objext=o
_LT_TAGVAR(objext, $1)=$objext
# Code to be used in simple compile tests
lt_simple_compile_test_code="class foo {}"
# Code to be used in simple link tests
lt_simple_link_test_code='public class conftest { public static void main(String[[]] argv) {}; }'
# ltmain only uses $CC for tagged configurations so make sure $CC is set.
_LT_TAG_COMPILER
# save warnings/boilerplate of simple test code
_LT_COMPILER_BOILERPLATE
_LT_LINKER_BOILERPLATE
# Allow CC to be a program name with arguments.
lt_save_CC=$CC
lt_save_CFLAGS=$CFLAGS
lt_save_GCC=$GCC
GCC=yes
CC=${GCJ-"gcj"}
CFLAGS=$GCJFLAGS
compiler=$CC
_LT_TAGVAR(compiler, $1)=$CC
_LT_TAGVAR(LD, $1)=$LD
_LT_CC_BASENAME([$compiler])
# By the time GCJ existed, GCC already implicitly linked libc in.
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds
_LT_TAGVAR(reload_flag, $1)=$reload_flag
_LT_TAGVAR(reload_cmds, $1)=$reload_cmds
## CAVEAT EMPTOR:
## There is no encapsulation within the following macros, do not change
## the running order or otherwise move them around unless you know exactly
## what you are doing...
if test -n "$compiler"; then
_LT_COMPILER_NO_RTTI($1)
_LT_COMPILER_PIC($1)
_LT_COMPILER_C_O($1)
_LT_COMPILER_FILE_LOCKS($1)
_LT_LINKER_SHLIBS($1)
_LT_LINKER_HARDCODE_LIBPATH($1)
_LT_CONFIG($1)
fi
AC_LANG_RESTORE
GCC=$lt_save_GCC
CC=$lt_save_CC
CFLAGS=$lt_save_CFLAGS
])# _LT_LANG_GCJ_CONFIG
# _LT_LANG_GO_CONFIG([TAG])
# -------------------------
# Ensure that the configuration variables for the GNU Go compiler
# are suitably defined. These variables are subsequently used by _LT_CONFIG
# to write the compiler configuration to 'libtool'.
m4_defun([_LT_LANG_GO_CONFIG],
[AC_REQUIRE([LT_PROG_GO])dnl
AC_LANG_SAVE
# Source file extension for Go test sources.
ac_ext=go
# Object file extension for compiled Go test sources.
objext=o
_LT_TAGVAR(objext, $1)=$objext
# Code to be used in simple compile tests
lt_simple_compile_test_code="package main; func main() { }"
# Code to be used in simple link tests
lt_simple_link_test_code='package main; func main() { }'
# ltmain only uses $CC for tagged configurations so make sure $CC is set.
_LT_TAG_COMPILER
# save warnings/boilerplate of simple test code
_LT_COMPILER_BOILERPLATE
_LT_LINKER_BOILERPLATE
# Allow CC to be a program name with arguments.
lt_save_CC=$CC
lt_save_CFLAGS=$CFLAGS
lt_save_GCC=$GCC
GCC=yes
CC=${GOC-"gccgo"}
CFLAGS=$GOFLAGS
compiler=$CC
_LT_TAGVAR(compiler, $1)=$CC
_LT_TAGVAR(LD, $1)=$LD
_LT_CC_BASENAME([$compiler])
# By the time Go support existed, GCC already implicitly linked libc in.
_LT_TAGVAR(archive_cmds_need_lc, $1)=no
_LT_TAGVAR(old_archive_cmds, $1)=$old_archive_cmds
_LT_TAGVAR(reload_flag, $1)=$reload_flag
_LT_TAGVAR(reload_cmds, $1)=$reload_cmds
## CAVEAT EMPTOR:
## There is no encapsulation within the following macros, do not change
## the running order or otherwise move them around unless you know exactly
## what you are doing...
if test -n "$compiler"; then
_LT_COMPILER_NO_RTTI($1)
_LT_COMPILER_PIC($1)
_LT_COMPILER_C_O($1)
_LT_COMPILER_FILE_LOCKS($1)
_LT_LINKER_SHLIBS($1)
_LT_LINKER_HARDCODE_LIBPATH($1)
_LT_CONFIG($1)
fi
AC_LANG_RESTORE
GCC=$lt_save_GCC
CC=$lt_save_CC
CFLAGS=$lt_save_CFLAGS
])# _LT_LANG_GO_CONFIG
# _LT_LANG_RC_CONFIG([TAG])
# -------------------------
# Ensure that the configuration variables for the Windows resource compiler
# are suitably defined. These variables are subsequently used by _LT_CONFIG
# to write the compiler configuration to 'libtool'.
m4_defun([_LT_LANG_RC_CONFIG],
[AC_REQUIRE([LT_PROG_RC])dnl
AC_LANG_SAVE
# Source file extension for RC test sources.
ac_ext=rc
# Object file extension for compiled RC test sources.
objext=o
_LT_TAGVAR(objext, $1)=$objext
# Code to be used in simple compile tests
lt_simple_compile_test_code='sample MENU { MENUITEM "&Soup", 100, CHECKED }'
# Code to be used in simple link tests
lt_simple_link_test_code=$lt_simple_compile_test_code
# ltmain only uses $CC for tagged configurations so make sure $CC is set.
_LT_TAG_COMPILER
# save warnings/boilerplate of simple test code
_LT_COMPILER_BOILERPLATE
_LT_LINKER_BOILERPLATE
# Allow CC to be a program name with arguments.
lt_save_CC=$CC
lt_save_CFLAGS=$CFLAGS
lt_save_GCC=$GCC
GCC=
CC=${RC-"windres"}
CFLAGS=
compiler=$CC
_LT_TAGVAR(compiler, $1)=$CC
_LT_CC_BASENAME([$compiler])
_LT_TAGVAR(lt_cv_prog_compiler_c_o, $1)=yes
if test -n "$compiler"; then
:
_LT_CONFIG($1)
fi
GCC=$lt_save_GCC
AC_LANG_RESTORE
CC=$lt_save_CC
CFLAGS=$lt_save_CFLAGS
])# _LT_LANG_RC_CONFIG
# LT_PROG_GCJ
# -----------
AC_DEFUN([LT_PROG_GCJ],
[m4_ifdef([AC_PROG_GCJ], [AC_PROG_GCJ],
[m4_ifdef([A][M_PROG_GCJ], [A][M_PROG_GCJ],
[AC_CHECK_TOOL(GCJ, gcj,)
test set = "${GCJFLAGS+set}" || GCJFLAGS="-g -O2"
AC_SUBST(GCJFLAGS)])])[]dnl
])
# Old name:
AU_ALIAS([LT_AC_PROG_GCJ], [LT_PROG_GCJ])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([LT_AC_PROG_GCJ], [])
# LT_PROG_GO
# ----------
AC_DEFUN([LT_PROG_GO],
[AC_CHECK_TOOL(GOC, gccgo,)
])
# LT_PROG_RC
# ----------
AC_DEFUN([LT_PROG_RC],
[AC_CHECK_TOOL(RC, windres,)
])
# Old name:
AU_ALIAS([LT_AC_PROG_RC], [LT_PROG_RC])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([LT_AC_PROG_RC], [])
# _LT_DECL_EGREP
# --------------
# If we don't have a new enough Autoconf to choose the best grep
# available, choose the first one in the user's PATH.
m4_defun([_LT_DECL_EGREP],
[AC_REQUIRE([AC_PROG_EGREP])dnl
AC_REQUIRE([AC_PROG_FGREP])dnl
test -z "$GREP" && GREP=grep
_LT_DECL([], [GREP], [1], [A grep program that handles long lines])
_LT_DECL([], [EGREP], [1], [An ERE matcher])
_LT_DECL([], [FGREP], [1], [A literal string matcher])
dnl Non-bleeding-edge autoconf doesn't subst GREP, so do it here too
AC_SUBST([GREP])
])
# _LT_DECL_OBJDUMP
# ----------------
# If we don't have a new enough Autoconf to choose the best objdump
# available, choose the first one in the user's PATH.
m4_defun([_LT_DECL_OBJDUMP],
[AC_CHECK_TOOL(OBJDUMP, objdump, false)
test -z "$OBJDUMP" && OBJDUMP=objdump
_LT_DECL([], [OBJDUMP], [1], [An object symbol dumper])
AC_SUBST([OBJDUMP])
])
# _LT_DECL_DLLTOOL
# ----------------
# Ensure DLLTOOL variable is set.
m4_defun([_LT_DECL_DLLTOOL],
[AC_CHECK_TOOL(DLLTOOL, dlltool, false)
test -z "$DLLTOOL" && DLLTOOL=dlltool
_LT_DECL([], [DLLTOOL], [1], [DLL creation program])
AC_SUBST([DLLTOOL])
])
# _LT_DECL_SED
# ------------
# Check for a fully-functional sed program that truncates
# as few characters as possible. Prefer GNU sed if found.
m4_defun([_LT_DECL_SED],
[AC_PROG_SED
test -z "$SED" && SED=sed
Xsed="$SED -e 1s/^X//"
_LT_DECL([], [SED], [1], [A sed program that does not truncate output])
_LT_DECL([], [Xsed], ["\$SED -e 1s/^X//"],
[Sed that helps us avoid accidentally triggering echo(1) options like -n])
])# _LT_DECL_SED
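The Xsed helper declared above exists to defuse echo's option parsing: the payload is prefixed with `X` before echoing, and sed strips it off again. A minimal sketch of the idiom, assuming a POSIX `sed` on PATH:

```shell
#!/bin/sh
# Prefix the payload with X so echo never sees a leading "-n" (or "-e")
# as an option, then strip the X back off with sed.
SED=sed
Xsed="$SED -e 1s/^X//"

arg='-n not an option'
safe=`echo "X$arg" | $Xsed`
echo "got: $safe"
```

Without the `X` prefix, `echo "$arg"` on many shells would swallow the leading `-n` as a no-newline flag instead of printing it.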
m4_ifndef([AC_PROG_SED], [
############################################################
# NOTE: This macro has been submitted for inclusion into #
# GNU Autoconf as AC_PROG_SED. When it is available in #
# a released version of Autoconf we should remove this #
# macro and use it instead. #
############################################################
m4_defun([AC_PROG_SED],
[AC_MSG_CHECKING([for a sed that does not truncate output])
AC_CACHE_VAL(lt_cv_path_SED,
[# Loop through the user's path and test for sed and gsed.
# Then use that list of seds to test for truncation.
as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
for as_dir in $PATH
do
IFS=$as_save_IFS
test -z "$as_dir" && as_dir=.
for lt_ac_prog in sed gsed; do
for ac_exec_ext in '' $ac_executable_extensions; do
if $as_executable_p "$as_dir/$lt_ac_prog$ac_exec_ext"; then
lt_ac_sed_list="$lt_ac_sed_list $as_dir/$lt_ac_prog$ac_exec_ext"
fi
done
done
done
IFS=$as_save_IFS
lt_ac_max=0
lt_ac_count=0
# Add /usr/xpg4/bin/sed as it is typically found on Solaris
# along with /bin/sed that truncates output.
for lt_ac_sed in $lt_ac_sed_list /usr/xpg4/bin/sed; do
test ! -f "$lt_ac_sed" && continue
cat /dev/null > conftest.in
lt_ac_count=0
echo $ECHO_N "0123456789$ECHO_C" >conftest.in
# Check for GNU sed and select it if it is found.
if "$lt_ac_sed" --version 2>&1 < /dev/null | grep 'GNU' > /dev/null; then
lt_cv_path_SED=$lt_ac_sed
break
fi
while true; do
cat conftest.in conftest.in >conftest.tmp
mv conftest.tmp conftest.in
cp conftest.in conftest.nl
echo >>conftest.nl
$lt_ac_sed -e 's/a$//' < conftest.nl >conftest.out || break
cmp -s conftest.out conftest.nl || break
# 10000 chars as input seems more than enough
test 10 -lt "$lt_ac_count" && break
lt_ac_count=`expr $lt_ac_count + 1`
if test "$lt_ac_count" -gt "$lt_ac_max"; then
lt_ac_max=$lt_ac_count
lt_cv_path_SED=$lt_ac_sed
fi
done
done
])
SED=$lt_cv_path_SED
AC_SUBST([SED])
AC_MSG_RESULT([$SED])
])#AC_PROG_SED
])#m4_ifndef
# Old name:
AU_ALIAS([LT_AC_PROG_SED], [AC_PROG_SED])
dnl aclocal-1.4 backwards compatibility:
dnl AC_DEFUN([LT_AC_PROG_SED], [])
# _LT_CHECK_SHELL_FEATURES
# ------------------------
# Find out whether the shell is Bourne or XSI compatible,
# or has some other useful features.
m4_defun([_LT_CHECK_SHELL_FEATURES],
[if ( (MAIL=60; unset MAIL) || exit) >/dev/null 2>&1; then
lt_unset=unset
else
lt_unset=false
fi
_LT_DECL([], [lt_unset], [0], [whether the shell understands "unset"])dnl
# test EBCDIC or ASCII
case `echo X|tr X '\101'` in
A) # ASCII based system
# \n is not interpreted correctly by Solaris 8 /usr/ucb/tr
lt_SP2NL='tr \040 \012'
lt_NL2SP='tr \015\012 \040\040'
;;
*) # EBCDIC based system
lt_SP2NL='tr \100 \n'
lt_NL2SP='tr \r\n \100\100'
;;
esac
_LT_DECL([SP2NL], [lt_SP2NL], [1], [turn spaces into newlines])dnl
_LT_DECL([NL2SP], [lt_NL2SP], [1], [turn newlines into spaces])dnl
])# _LT_CHECK_SHELL_FEATURES
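The SP2NL/NL2SP filters declared above are plain `tr` pipelines; on an ASCII system they behave as below. This is a minimal sketch reusing the ASCII-branch octal codes from the case statement (space `\040`, newline `\012`, carriage return `\015`):

```shell
#!/bin/sh
# ASCII branch: SP2NL turns spaces into newlines; NL2SP folds both CR and
# LF back into spaces (the octal codes are interpreted by tr itself).
lt_SP2NL='tr \040 \012'
lt_NL2SP='tr \015\012 \040\040'

printf 'a b c' | $lt_SP2NL    # one item per line
printf 'a\nb\nc' | $lt_NL2SP  # back to space-separated
```

Keeping the commands as strings and expanding them unquoted is deliberate: libtool stores them in its generated script and word-splits them at run time.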
# _LT_PATH_CONVERSION_FUNCTIONS
# -----------------------------
# Determine what file name conversion functions should be used by
# func_to_host_file (and, implicitly, by func_to_host_path). These are needed
# for certain cross-compile configurations and native mingw.
m4_defun([_LT_PATH_CONVERSION_FUNCTIONS],
[AC_REQUIRE([AC_CANONICAL_HOST])dnl
AC_REQUIRE([AC_CANONICAL_BUILD])dnl
AC_MSG_CHECKING([how to convert $build file names to $host format])
AC_CACHE_VAL(lt_cv_to_host_file_cmd,
[case $host in
*-*-mingw* )
case $build in
*-*-mingw* ) # actually msys
lt_cv_to_host_file_cmd=func_convert_file_msys_to_w32
;;
*-*-cygwin* )
lt_cv_to_host_file_cmd=func_convert_file_cygwin_to_w32
;;
* ) # otherwise, assume *nix
lt_cv_to_host_file_cmd=func_convert_file_nix_to_w32
;;
esac
;;
*-*-cygwin* )
case $build in
*-*-mingw* ) # actually msys
lt_cv_to_host_file_cmd=func_convert_file_msys_to_cygwin
;;
*-*-cygwin* )
lt_cv_to_host_file_cmd=func_convert_file_noop
;;
* ) # otherwise, assume *nix
lt_cv_to_host_file_cmd=func_convert_file_nix_to_cygwin
;;
esac
;;
* ) # unhandled hosts (and "normal" native builds)
lt_cv_to_host_file_cmd=func_convert_file_noop
;;
esac
])
to_host_file_cmd=$lt_cv_to_host_file_cmd
AC_MSG_RESULT([$lt_cv_to_host_file_cmd])
_LT_DECL([to_host_file_cmd], [lt_cv_to_host_file_cmd],
[0], [convert $build file names to $host format])dnl
AC_MSG_CHECKING([how to convert $build file names to toolchain format])
AC_CACHE_VAL(lt_cv_to_tool_file_cmd,
[#assume ordinary cross tools, or native build.
lt_cv_to_tool_file_cmd=func_convert_file_noop
case $host in
*-*-mingw* )
case $build in
*-*-mingw* ) # actually msys
lt_cv_to_tool_file_cmd=func_convert_file_msys_to_w32
;;
esac
;;
esac
])
to_tool_file_cmd=$lt_cv_to_tool_file_cmd
AC_MSG_RESULT([$lt_cv_to_tool_file_cmd])
_LT_DECL([to_tool_file_cmd], [lt_cv_to_tool_file_cmd],
[0], [convert $build files to toolchain format])dnl
])# _LT_PATH_CONVERSION_FUNCTIONS
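To illustrate the kind of work the `func_convert_file_*` helpers selected above delegate to (they are defined in ltmain.sh, not here), a hypothetical MSYS-to-Windows converter might look like this. The function name and sed pattern below are illustrative assumptions, not the real implementation:

```shell
#!/bin/sh
# Hypothetical sketch: rewrite an MSYS-style /c/dir/file path as c:/dir/file.
# The real func_convert_file_msys_to_w32 in ltmain.sh is more thorough
# (e.g. it respects the MSYS mount table); paths that don't start with a
# single-letter root are passed through unchanged here.
msys_to_w32_sketch () {
  echo "$1" | sed 's%^/\([A-Za-z]\)/%\1:/%'
}

msys_to_w32_sketch /c/Users/build/lib.dll   # prints: c:/Users/build/lib.dll
```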
diff --git a/pyext/rivet/Makefile.am b/pyext/rivet/Makefile.am
--- a/pyext/rivet/Makefile.am
+++ b/pyext/rivet/Makefile.am
@@ -1,22 +1,23 @@
EXTRA_DIST = \
__init__.py \
+ hepdatautils.py \
spiresbib.py util.py \
plotinfo.py aopaths.py \
core.pyx rivet.pxd \
core.cpp
MAINTAINERCLEANFILES = core.cpp
BUILT_SOURCES = core.cpp
if WITH_CYTHON
RIVETINCLUDE = $(top_srcdir)/include/Rivet/
core.cpp: core.pyx rivet.pxd $(RIVETINCLUDE)/Analysis.hh $(RIVETINCLUDE)/AnalysisHandler.hh $(RIVETINCLUDE)/AnalysisLoader.hh $(RIVETINCLUDE)/Run.hh
cython $(srcdir)/core.pyx --cplus \
-I $(srcdir) -I $(srcdir)/include \
-I $(builddir) -I $(builddir)/include \
-o $@
else
core.cpp:
@echo "Not (re)generating core.cpp since Cython is not installed"
endif
diff --git a/src/Tools/Nsubjettiness/MeasureDefinition.cc b/src/Tools/Nsubjettiness/MeasureDefinition.cc
--- a/src/Tools/Nsubjettiness/MeasureDefinition.cc
+++ b/src/Tools/Nsubjettiness/MeasureDefinition.cc
@@ -1,628 +1,628 @@
// Nsubjettiness Package
// Questions/Comments? jthaler@jthaler.net
//
// Copyright (c) 2011-14
// Jesse Thaler, Ken Van Tilburg, Christopher K. Vermilion, and TJ Wilkason
//
// $Id: MeasureDefinition.cc 946 2016-06-14 19:11:27Z jthaler $
//----------------------------------------------------------------------
// This file is part of FastJet contrib.
//
// It is free software; you can redistribute it and/or modify it under
// the terms of the GNU General Public License as published by the
// Free Software Foundation; either version 2 of the License, or (at
// your option) any later version.
//
// It is distributed in the hope that it will be useful, but WITHOUT
// ANY WARRANTY; without even the implied warranty of MERCHANTABILITY
// or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public
// License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this code. If not, see <http://www.gnu.org/licenses/>.
//----------------------------------------------------------------------
// #include "AxesRefiner.hh"
#include "MeasureDefinition.hh"
#include <iomanip>
namespace Rivet {
namespace Nsubjettiness { using namespace fastjet;
///////
//
// Measure Function
//
///////
//descriptions updated to include measure type
std::string DefaultMeasure::description() const {
std::stringstream stream;
stream << std::fixed << std::setprecision(2)
<< "Default Measure (should not be used directly)";
return stream.str();
}
std::string NormalizedMeasure::description() const {
std::stringstream stream;
stream << std::fixed << std::setprecision(2)
<< "Normalized Measure (beta = " << _beta << ", R0 = " << _R0 << ")";
return stream.str();
}
std::string UnnormalizedMeasure::description() const {
std::stringstream stream;
stream << std::fixed << std::setprecision(2)
<< "Unnormalized Measure (beta = " << _beta << ", in GeV)";
return stream.str();
}
std::string NormalizedCutoffMeasure::description() const {
std::stringstream stream;
stream << std::fixed << std::setprecision(2)
<< "Normalized Cutoff Measure (beta = " << _beta << ", R0 = " << _R0 << ", Rcut = " << _Rcutoff << ")";
return stream.str();
}
std::string UnnormalizedCutoffMeasure::description() const {
std::stringstream stream;
stream << std::fixed << std::setprecision(2)
<< "Unnormalized Cutoff Measure (beta = " << _beta << ", Rcut = " << _Rcutoff << ", in GeV)";
return stream.str();
}
//std::string DeprecatedGeometricMeasure::description() const {
// std::stringstream stream;
// stream << std::fixed << std::setprecision(2)
// << "Deprecated Geometric Measure (beta = " << _jet_beta << ", in GeV)";
// return stream.str();
//}
//std::string DeprecatedGeometricCutoffMeasure::description() const {
// std::stringstream stream;
// stream << std::fixed << std::setprecision(2)
// << "Deprecated Geometric Cutoff Measure (beta = " << _jet_beta << ", Rcut = " << _Rcutoff << ", in GeV)";
// return stream.str();
//}
std::string ConicalMeasure::description() const {
std::stringstream stream;
stream << std::fixed << std::setprecision(2)
<< "Conical Measure (beta = " << _beta << ", Rcut = " << _Rcutoff << ", in GeV)";
return stream.str();
}
std::string OriginalGeometricMeasure::description() const {
std::stringstream stream;
stream << std::fixed << std::setprecision(2)
<< "Original Geometric Measure (Rcut = " << _Rcutoff << ", in GeV)";
return stream.str();
}
std::string ModifiedGeometricMeasure::description() const {
std::stringstream stream;
stream << std::fixed << std::setprecision(2)
<< "Modified Geometric Measure (Rcut = " << _Rcutoff << ", in GeV)";
return stream.str();
}
std::string ConicalGeometricMeasure::description() const {
std::stringstream stream;
stream << std::fixed << std::setprecision(2)
<< "Conical Geometric Measure (beta = " << _jet_beta << ", gamma = " << _beam_gamma << ", Rcut = " << _Rcutoff << ", in GeV)";
return stream.str();
}
std::string XConeMeasure::description() const {
std::stringstream stream;
stream << std::fixed << std::setprecision(2)
<< "XCone Measure (beta = " << _jet_beta << ", Rcut = " << _Rcutoff << ", in GeV)";
return stream.str();
}
// Return all of the necessary TauComponents for specific input particles and axes
TauComponents MeasureDefinition::component_result(const std::vector<fastjet::PseudoJet>& particles,
const std::vector<fastjet::PseudoJet>& axes) const {
// first find partition
TauPartition partition = get_partition(particles,axes);
// then return result calculated from partition
return component_result_from_partition(partition,axes);
}
TauPartition MeasureDefinition::get_partition(const std::vector<fastjet::PseudoJet>& particles,
const std::vector<fastjet::PseudoJet>& axes) const {
TauPartition myPartition(axes.size());
// Figures out the partitioning of the input particles into the various jet pieces
// based on which axis each particle is closest to
for (unsigned i = 0; i < particles.size(); i++) {
// find minimum distance; start with beam (-1) for reference
int j_min = -1;
double minRsq;
if (has_beam()) minRsq = beam_distance_squared(particles[i]);
else minRsq = std::numeric_limits<double>::max(); // use a large value
// check to see which axis the particle is closest to
for (unsigned j = 0; j < axes.size(); j++) {
double tempRsq = jet_distance_squared(particles[i],axes[j]); // delta R distance
if (tempRsq < minRsq) {
minRsq = tempRsq;
j_min = j;
}
}
if (j_min == -1) {
assert(has_beam()); // should have beam for this to make sense.
myPartition.push_back_beam(particles[i],i);
} else {
myPartition.push_back_jet(j_min,particles[i],i);
}
}
return myPartition;
}
// Uses existing partition and calculates result
// TODO: Can we cache this for speed up when doing area subtraction?
TauComponents MeasureDefinition::component_result_from_partition(const TauPartition& partition,
const std::vector<fastjet::PseudoJet>& axes) const {
std::vector<double> jetPieces(axes.size(), 0.0);
double beamPiece = 0.0;
double tauDen = 0.0;
if (!has_denominator()) tauDen = 1.0; // if no denominator, then 1.0 for no normalization factor
// first find jet pieces
for (unsigned j = 0; j < axes.size(); j++) {
std::vector<PseudoJet> thisPartition = partition.jet(j).constituents();
for (unsigned i = 0; i < thisPartition.size(); i++) {
jetPieces[j] += jet_numerator(thisPartition[i],axes[j]); //numerator jet piece
if (has_denominator()) tauDen += denominator(thisPartition[i]); // denominator
}
}
// then find beam piece
if (has_beam()) {
std::vector<PseudoJet> beamPartition = partition.beam().constituents();
for (unsigned i = 0; i < beamPartition.size(); i++) {
beamPiece += beam_numerator(beamPartition[i]); //numerator beam piece
if (has_denominator()) tauDen += denominator(beamPartition[i]); // denominator
}
}
// create jets for storage in TauComponents
std::vector<PseudoJet> jets = partition.jets();
return TauComponents(_tau_mode, jetPieces, beamPiece, tauDen, jets, axes);
}
// new methods added to generalize energy and angle squared for different measure types
double DefaultMeasure::energy(const PseudoJet& jet) const {
double energy;
switch (_measure_type) {
case pt_R :
case perp_lorentz_dot :
energy = jet.perp();
break;
case E_theta :
case lorentz_dot :
energy = jet.e();
break;
default : {
assert(_measure_type == pt_R || _measure_type == E_theta || _measure_type == lorentz_dot || _measure_type == perp_lorentz_dot);
energy = std::numeric_limits<double>::quiet_NaN();
break;
}
}
return energy;
}
double DefaultMeasure::angleSquared(const PseudoJet& jet1, const PseudoJet& jet2) const {
double pseudoRsquared;
switch(_measure_type) {
case pt_R : {
pseudoRsquared = jet1.squared_distance(jet2);
break;
}
case E_theta : {
// doesn't seem to be a fastjet built in for this
double dot = jet1.px()*jet2.px() + jet1.py()*jet2.py() + jet1.pz()*jet2.pz();
double norm1 = sqrt(jet1.px()*jet1.px() + jet1.py()*jet1.py() + jet1.pz()*jet1.pz());
double norm2 = sqrt(jet2.px()*jet2.px() + jet2.py()*jet2.py() + jet2.pz()*jet2.pz());
double costheta = dot/(norm1 * norm2);
if (costheta > 1.0) costheta = 1.0; // Need to handle case of numerical overflow
double theta = acos(costheta);
pseudoRsquared = theta*theta;
break;
}
case lorentz_dot : {
double dotproduct = dot_product(jet1,jet2);
pseudoRsquared = 2.0 * dotproduct / (jet1.e() * jet2.e());
break;
}
case perp_lorentz_dot : {
PseudoJet lightJet = lightFrom(jet2); // assuming jet2 is the axis
double dotproduct = dot_product(jet1,lightJet);
pseudoRsquared = 2.0 * dotproduct / (lightJet.pt() * jet1.pt());
break;
}
default : {
assert(_measure_type == pt_R || _measure_type == E_theta || _measure_type == lorentz_dot || _measure_type == perp_lorentz_dot);
pseudoRsquared = std::numeric_limits<double>::quiet_NaN();
break;
}
}
return pseudoRsquared;
}
///////
//
// Axes Refining
//
///////
// uses minimization of N-jettiness to continually update axes until convergence.
// The function returns the axes found at the (local) minimum
// This is the general axes refiner that can be used for a generic measure (but is
// overwritten in the case of the conical measure and the deprecated geometric measure)
std::vector<fastjet::PseudoJet> MeasureDefinition::get_one_pass_axes(int n_jets,
const std::vector <fastjet::PseudoJet> & particles,
const std::vector<fastjet::PseudoJet>& currentAxes,
int nAttempts,
double accuracy) const {
assert(n_jets == (int)currentAxes.size());
std::vector<fastjet::PseudoJet> seedAxes = currentAxes;
std::vector<fastjet::PseudoJet> temp_axes(seedAxes.size(),fastjet::PseudoJet(0,0,0,0));
for (unsigned int k = 0; k < seedAxes.size(); k++) {
seedAxes[k] = lightFrom(seedAxes[k]) * seedAxes[k].E(); // making light-like, but keeping energy
}
double seedTau = result(particles, seedAxes);
std::vector<fastjet::PseudoJet> bestAxesSoFar = seedAxes;
double bestTauSoFar = seedTau;
for (int i_att = 0; i_att < nAttempts; i_att++) {
std::vector<fastjet::PseudoJet> newAxes(seedAxes.size(),fastjet::PseudoJet(0,0,0,0));
std::vector<fastjet::PseudoJet> summed_jets(seedAxes.size(), fastjet::PseudoJet(0,0,0,0));
// find closest axis and assign to that
for (unsigned int i = 0; i < particles.size(); i++) {
// start from unclustered beam measure
int minJ = -1;
double minDist = beam_distance_squared(particles[i]);
// which axis am I closest to?
for (unsigned int j = 0; j < seedAxes.size(); j++) {
double tempDist = jet_distance_squared(particles[i],seedAxes[j]);
if (tempDist < minDist) {
minDist = tempDist;
minJ = j;
}
}
// if not unclustered, then cluster
if (minJ != -1) {
summed_jets[minJ] += particles[i]; // keep track of energy to use later.
if (_useAxisScaling) {
double pseudoMomentum = dot_product(lightFrom(seedAxes[minJ]),particles[i]) + accuracy; // need small offset to avoid potential divide by zero issues
double axis_scaling = (double)jet_numerator(particles[i], seedAxes[minJ])/pseudoMomentum;
newAxes[minJ] += particles[i]*axis_scaling;
}
}
}
if (!_useAxisScaling) newAxes = summed_jets;
// convert the axes to LightLike and then back to PseudoJet
for (unsigned int k = 0; k < newAxes.size(); k++) {
if (newAxes[k].perp() > 0) {
newAxes[k] = lightFrom(newAxes[k]);
newAxes[k] *= summed_jets[k].E(); // scale by energy to get sensible result
}
}
// calculate tau on new axes
double newTau = result(particles, newAxes);
// find the smallest value of tau (and the corresponding axes) so far
if (newTau < bestTauSoFar) {
bestAxesSoFar = newAxes;
bestTauSoFar = newTau;
}
if (fabs(newTau - seedTau) < accuracy) {// close enough for jazz
seedAxes = newAxes;
seedTau = newTau;
break;
}
seedAxes = newAxes;
seedTau = newTau;
}
// return the axes corresponding to the smallest tau found throughout all iterations
// this is to prevent the minimization from returning a non-minimized value of tau due to potential oscillations around the minimum
return bestAxesSoFar;
}
// One pass minimization for the DefaultMeasure
// Given starting axes, update to find better axes by using Kmeans clustering around the old axes
template <int N>
std::vector<LightLikeAxis> DefaultMeasure::UpdateAxesFast(const std::vector <LightLikeAxis> & old_axes,
const std::vector <fastjet::PseudoJet> & inputJets,
double accuracy
) const {
assert(old_axes.size() == N);
// some storage, declared static to save allocation/re-allocation costs
static LightLikeAxis new_axes[N];
static fastjet::PseudoJet new_jets[N];
for (int n = 0; n < N; ++n) {
new_axes[n].reset(0.0,0.0,0.0,0.0);
new_jets[n].reset_momentum(0.0,0.0,0.0,0.0);
}
double precision = accuracy; //TODO: actually cascade this in
/////////////// Assignment Step //////////////////////////////////////////////////////////
std::vector<int> assignment_index(inputJets.size());
int k_assign = -1;
for (unsigned i = 0; i < inputJets.size(); i++){
double smallestDist = std::numeric_limits<double>::max(); //large number
for (int k = 0; k < N; k++) {
double thisDist = old_axes[k].DistanceSq(inputJets[i]);
if (thisDist < smallestDist) {
smallestDist = thisDist;
k_assign = k;
}
}
if (smallestDist > sq(_Rcutoff)) {k_assign = -1;}
assignment_index[i] = k_assign;
}
//////////////// Update Step /////////////////////////////////////////////////////////////
double distPhi, old_dist;
for (unsigned i = 0; i < inputJets.size(); i++) {
int old_jet_i = assignment_index[i];
if (old_jet_i == -1) {continue;}
const fastjet::PseudoJet& inputJet_i = inputJets[i];
LightLikeAxis& new_axis_i = new_axes[old_jet_i];
double inputPhi_i = inputJet_i.phi();
double inputRap_i = inputJet_i.rap();
// optimize pow() call
// add noise (the precision term) to make sure we don't divide by zero
if (_beta == 1.0) {
double DR = std::sqrt(sq(precision) + old_axes[old_jet_i].DistanceSq(inputJet_i));
old_dist = 1.0/DR;
} else if (_beta == 2.0) {
old_dist = 1.0;
} else if (_beta == 0.0) {
double DRSq = sq(precision) + old_axes[old_jet_i].DistanceSq(inputJet_i);
old_dist = 1.0/DRSq;
} else {
old_dist = sq(precision) + old_axes[old_jet_i].DistanceSq(inputJet_i);
old_dist = std::pow(old_dist, (0.5*_beta-1.0));
}
// TODO: Put some of these addition functions into light-like axes
// rapidity sum
new_axis_i.set_rap(new_axis_i.rap() + inputJet_i.perp() * inputRap_i * old_dist);
// phi sum
distPhi = inputPhi_i - old_axes[old_jet_i].phi();
if (fabs(distPhi) <= M_PI){
new_axis_i.set_phi( new_axis_i.phi() + inputJet_i.perp() * inputPhi_i * old_dist );
} else if (distPhi > M_PI) {
new_axis_i.set_phi( new_axis_i.phi() + inputJet_i.perp() * (-2*M_PI + inputPhi_i) * old_dist );
} else if (distPhi < -M_PI) {
new_axis_i.set_phi( new_axis_i.phi() + inputJet_i.perp() * (+2*M_PI + inputPhi_i) * old_dist );
}
// weights sum
new_axis_i.set_weight( new_axis_i.weight() + inputJet_i.perp() * old_dist );
// momentum magnitude sum
new_jets[old_jet_i] += inputJet_i;
}
// normalize sums
for (int k = 0; k < N; k++) {
if (new_axes[k].weight() == 0) {
// no particles were closest to this axis! Return to old axis instead of (0,0,0,0)
new_axes[k] = old_axes[k];
} else {
new_axes[k].set_rap( new_axes[k].rap() / new_axes[k].weight() );
new_axes[k].set_phi( new_axes[k].phi() / new_axes[k].weight() );
new_axes[k].set_phi( std::fmod(new_axes[k].phi() + 2*M_PI, 2*M_PI) );
new_axes[k].set_mom( std::sqrt(new_jets[k].modp2()) );
}
}
std::vector<LightLikeAxis> new_axes_vec(N);
for (unsigned k = 0; k < N; ++k) new_axes_vec[k] = new_axes[k];
return new_axes_vec;
}
// Given N starting axes, this function updates all axes to find N better axes.
// (This is just a wrapper for the templated version above.)
// TODO: Consider removing this in a future version
std::vector<LightLikeAxis> DefaultMeasure::UpdateAxes(const std::vector <LightLikeAxis> & old_axes,
const std::vector <fastjet::PseudoJet> & inputJets,
double accuracy) const {
int N = old_axes.size();
switch (N) {
case 1: return UpdateAxesFast<1>(old_axes, inputJets, accuracy);
case 2: return UpdateAxesFast<2>(old_axes, inputJets, accuracy);
case 3: return UpdateAxesFast<3>(old_axes, inputJets, accuracy);
case 4: return UpdateAxesFast<4>(old_axes, inputJets, accuracy);
case 5: return UpdateAxesFast<5>(old_axes, inputJets, accuracy);
case 6: return UpdateAxesFast<6>(old_axes, inputJets, accuracy);
case 7: return UpdateAxesFast<7>(old_axes, inputJets, accuracy);
case 8: return UpdateAxesFast<8>(old_axes, inputJets, accuracy);
case 9: return UpdateAxesFast<9>(old_axes, inputJets, accuracy);
case 10: return UpdateAxesFast<10>(old_axes, inputJets, accuracy);
case 11: return UpdateAxesFast<11>(old_axes, inputJets, accuracy);
case 12: return UpdateAxesFast<12>(old_axes, inputJets, accuracy);
case 13: return UpdateAxesFast<13>(old_axes, inputJets, accuracy);
case 14: return UpdateAxesFast<14>(old_axes, inputJets, accuracy);
case 15: return UpdateAxesFast<15>(old_axes, inputJets, accuracy);
case 16: return UpdateAxesFast<16>(old_axes, inputJets, accuracy);
case 17: return UpdateAxesFast<17>(old_axes, inputJets, accuracy);
case 18: return UpdateAxesFast<18>(old_axes, inputJets, accuracy);
case 19: return UpdateAxesFast<19>(old_axes, inputJets, accuracy);
case 20: return UpdateAxesFast<20>(old_axes, inputJets, accuracy);
default: std::cout << "N-jettiness is hard-coded to only allow up to 20 jets!" << std::endl;
return std::vector<LightLikeAxis>();
}
}
// uses minimization of N-jettiness to continually update axes until convergence.
// The function returns the axes found at the (local) minimum
std::vector<fastjet::PseudoJet> DefaultMeasure::get_one_pass_axes(int n_jets,
const std::vector <fastjet::PseudoJet> & inputJets,
const std::vector<fastjet::PseudoJet>& seedAxes,
int nAttempts,
double accuracy
) const {
// if the measure type doesn't use the pt_R metric, then the standard minimization scheme should be used
if (_measure_type != pt_R) {
return MeasureDefinition::get_one_pass_axes(n_jets, inputJets, seedAxes, nAttempts, accuracy);
}
// convert from PseudoJets to LightLikeAxes
std::vector< LightLikeAxis > old_axes(n_jets, LightLikeAxis(0,0,0,0));
for (int k = 0; k < n_jets; k++) {
old_axes[k].set_rap( seedAxes[k].rap() );
old_axes[k].set_phi( seedAxes[k].phi() );
old_axes[k].set_mom( seedAxes[k].modp() );
}
// Find new axes by iterating (only one pass here)
std::vector< LightLikeAxis > new_axes(n_jets, LightLikeAxis(0,0,0,0));
double cmp = std::numeric_limits<double>::max(); //large number
int h = 0;
while (cmp > accuracy && h < nAttempts) { // Keep updating axes until near-convergence or too many update steps
cmp = 0.0;
h++;
new_axes = UpdateAxes(old_axes, inputJets,accuracy); // Update axes
for (int k = 0; k < n_jets; k++) {
cmp += old_axes[k].Distance(new_axes[k]);
}
cmp = cmp / ((double) n_jets);
old_axes = new_axes;
}
// Convert from internal LightLikeAxes to PseudoJet
std::vector<fastjet::PseudoJet> outputAxes;
for (int k = 0; k < n_jets; k++) {
fastjet::PseudoJet temp = old_axes[k].ConvertToPseudoJet();
outputAxes.push_back(temp);
}
// this is used to debug the minimization routine to make sure that it works.
- bool do_debug = false;
+ /*bool do_debug = false;
if (do_debug) {
// get this information to make sure that minimization is working properly
double seed_tau = result(inputJets, seedAxes);
double outputTau = result(inputJets, outputAxes);
assert(outputTau <= seed_tau);
- }
+ }*/
return outputAxes;
}
//// One-pass minimization for the Deprecated Geometric Measure
//// Uses minimization of the geometric distance in order to find the minimum axes.
//// It continually updates until it reaches convergence or it reaches the maximum number of attempts.
//// This is essentially the same as a stable cone finder.
//std::vector<fastjet::PseudoJet> DeprecatedGeometricCutoffMeasure::get_one_pass_axes(int n_jets,
// const std::vector <fastjet::PseudoJet> & particles,
// const std::vector<fastjet::PseudoJet>& currentAxes,
// int nAttempts,
// double accuracy) const {
//
// assert(n_jets == (int)currentAxes.size()); //added int casting to get rid of compiler warning
//
// std::vector<fastjet::PseudoJet> seedAxes = currentAxes;
// double seedTau = result(particles, seedAxes);
//
// for (int i = 0; i < nAttempts; i++) {
//
// std::vector<fastjet::PseudoJet> newAxes(seedAxes.size(),fastjet::PseudoJet(0,0,0,0));
//
// // find closest axis and assign to that
// for (unsigned int i = 0; i < particles.size(); i++) {
//
// // start from unclustered beam measure
// int minJ = -1;
// double minDist = beam_distance_squared(particles[i]);
//
// // which axis am I closest to?
// for (unsigned int j = 0; j < seedAxes.size(); j++) {
// double tempDist = jet_distance_squared(particles[i],seedAxes[j]);
// if (tempDist < minDist) {
// minDist = tempDist;
// minJ = j;
// }
// }
//
// // if not unclustered, then cluster
// if (minJ != -1) newAxes[minJ] += particles[i];
// }
//
// // calculate tau on new axes
// seedAxes = newAxes;
// double tempTau = result(particles, newAxes);
//
// // close enough to stop?
// if (fabs(tempTau - seedTau) < accuracy) break;
// seedTau = tempTau;
// }
//
// return seedAxes;
//}
// Go from internal LightLikeAxis to PseudoJet
fastjet::PseudoJet LightLikeAxis::ConvertToPseudoJet() {
double px, py, pz, E;
E = _mom;
pz = (std::exp(2.0*_rap) - 1.0) / (std::exp(2.0*_rap) + 1.0) * E;
px = std::cos(_phi) * std::sqrt( std::pow(E,2) - std::pow(pz,2) );
py = std::sin(_phi) * std::sqrt( std::pow(E,2) - std::pow(pz,2) );
return fastjet::PseudoJet(px,py,pz,E);
}
} //namespace Nsubjettiness
}
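For reference, the one-pass axis refinement in the patched `DefaultMeasure::UpdateAxesFast` above is at heart a weighted k-means step: assign each particle to its nearest axis (or to the beam, if farther than Rcutoff), then recompute each axis as a weighted average of its assigned particles. A minimal illustrative sketch, not part of this patch — it uses simplified (rap, phi) geometry, a hypothetical `update_axes` helper, and omits the phi-wrapping and momentum bookkeeping of the C++:

```python
import math

def update_axes(old_axes, particles, beta=1.0, rcut=1.0):
    """One k-means-style update step mirroring DefaultMeasure::UpdateAxesFast.

    old_axes:  list of (rap, phi) axis directions
    particles: list of (pt, rap, phi)
    """
    def dist_sq(axis, p):
        # delta-R^2 distance in the (rap, phi) plane
        drap = axis[0] - p[1]
        dphi = abs(axis[1] - p[2])
        if dphi > math.pi:
            dphi = 2 * math.pi - dphi
        return drap * drap + dphi * dphi

    sums = [[0.0, 0.0, 0.0] for _ in old_axes]  # [rap_sum, phi_sum, weight]
    for p in particles:
        dists = [dist_sq(a, p) for a in old_axes]
        k = min(range(len(old_axes)), key=lambda j: dists[j])
        if dists[k] > rcut * rcut:
            continue  # unclustered: this particle belongs to the beam region
        # weight ~ pt * dR^(beta-2), as in the C++; beta=2 gives a plain-pt mean
        # (tiny offset avoids 0^negative at exactly zero distance)
        w = p[0] * (1e-12 + dists[k]) ** (0.5 * beta - 1.0)
        sums[k][0] += w * p[1]
        sums[k][1] += w * p[2]  # note: no phi wrapping in this sketch
        sums[k][2] += w
    # normalize; if no particle was closest to an axis, keep the old axis
    return [(s[0] / s[2], s[1] / s[2]) if s[2] > 0 else old_axes[k]
            for k, s in enumerate(sums)]
```

As in the C++ above, iterating this step until the axes move by less than some accuracy (and keeping the best tau seen so far) guards against oscillation around the minimum.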