
diff --git a/Changes.md b/Changes.md
index 6685a19..b46b5f3 100644
--- a/Changes.md
+++ b/Changes.md
@@ -1,67 +1,69 @@
# Changelog
This is the log for changes to the HEJ program. Further changes to the HEJ API
are documented in `Changes-API.md`. If you are using HEJ as a library, please
also read the changes there.
## Version 2.X
### 2.X.0
* Resummation for W bosons with jets
- New subleading processes `extremal qqx` & `central qqx` for a quark and
anti-quark in the final state, e.g. `g g => u d_bar Wm g` (the other
subleading processes also work with W's)
- `HEJFOG` can generate multiple jets together with an (off-shell) W boson
decaying into a lepton and a neutrino
* Resummation can now be performed on `unordered` subleading processes
in pure jets.
* Allow multiplication and division of multiple scale functions e.g.
`H_T/2*m_j1j2`
* Print cross sections at end of run
* Follow HepMC convention for particle Status codes: incoming = 11,
decaying = 2, outgoing = 1 (unchanged)
* Partons now have a Colour charge
- Colours are read from and written to LHE files
- For reweighted events the colours are created according to leading colour in
the FKL limit
* Grouped the `event treatment` settings for subleading channels together in the runcard
- Rename `non-HEJ` processes to `non-resummable`
* Read electro-weak constants from input
- new mandatory setting `vev` to change vacuum expectation value
- new mandatory setting `particle properties` to specify mass & width of
bosons
- FOG: decays are now specified in `decays` setting (previously under
`particle properties`)
* Allow changing the regulator lambda in input (`regulator parameter`, only for
advanced users)
* Use `git-lfs` for raw data in test (`make test` now requires `git-lfs`)
* Added support to read `hdf5` event files suggested in
[arXiv:1905.05120](https://arxiv.org/abs/1905.05120) (needs
[HighFive](https://github.com/BlueBrain/HighFive))
* Support input with average weight equal to the cross section (`IDWTUP=1 or 4`)
* Dropped support for HepMC 3.0.0, either HepMC version 2 or >=3.1 is required
- It is now possible to write out both HepMC 2 and HepMC 3 events at the same
time
+* Optional setting to specify the maximum number of fixed-order events (`max
+  events`, default is all)
### 2.0.5
* Fixed event classification for input not ordered in rapidity
### 2.0.4
* Fixed wrong path of `HEJ_INCLUDE_DIR` in `hej-config.cmake`
### 2.0.3
* Fixed parsing of (numerical factor) * (base scale) in configuration
* Don't change scale names, but sanitise Rivet output file names instead
### 2.0.2
* Changed scale names to `"_over_"` and `"_times_"` for proper file names (was
`"/"` and `"*"` before)
### 2.0.1
* Fixed name of fixed-order generator in error message.
diff --git a/config.yml b/config.yml
index dd3f465..ff01962 100644
--- a/config.yml
+++ b/config.yml
@@ -1,115 +1,116 @@
# number of attempted resummation phase space points for each input event
trials: 10
min extparton pt: 30 # minimum transverse momentum of extremal partons
# maximum soft transverse momentum fraction in extremal jets
#
# max ext soft pt fraction: 0.1
resummation jets: # resummation jet properties
min pt: 35 # minimum jet transverse momentum
algorithm: antikt # jet clustering algorithm
R: 0.4 # jet R parameter
fixed order jets: # properties of input jets
min pt: 30
# by default, algorithm and R are like for resummation jets
# treatment of the various event classes
# the supported settings are: reweight, keep, discard
# non-resummable events cannot be reweighted
event treatment:
FKL: reweight
unordered: keep
extremal qqx: keep
central qqx: keep
non-resummable: keep
# central scale choice or choices
#
# scales: [125, max jet pperp, H_T/2, 2*jet invariant mass, m_j1j2]
scales: 91.188
# factors by which the central scales should be multiplied
# renormalisation and factorisation scales are varied independently
#
# scale factors: [0.5, 0.7071, 1, 1.41421, 2]
# maximum ratio between renormalisation and factorisation scale
#
# max scale ratio: 2.0001
# import scale setting functions
#
# import scales:
# lib_my_scales.so: [scale0,scale1]
log correction: false # whether or not to include higher order logs
# event output files
#
# the supported formats are
# - Les Houches (suffix .lhe)
# - HepMC2 (suffix .hepmc)
# - HepMC3 (suffix .hepmc3 or .hepmc)
# TODO: - ROOT ntuples (suffix .root)
#
# An output file's format is deduced either automatically from the suffix
# or from an explicit specification, e.g.
# - Les Houches: outfile
event output:
- HEJ.lhe
# - HEJ_events.hepmc
# to use a rivet analysis
#
# analysis:
# rivet: MC_XS # rivet analysis name
# output: HEJ # name of the yoda files, ".yoda" and scale suffix will be added
#
# to use a custom analysis
#
# analysis:
# plugin: /path/to/libmyanalysis.so
# my analysis parameter: some value
# selection of random number generator and seed
# the choices are
# - mixmax (seed is an integer)
# - ranlux64 (seed is a filename containing parameters)
random generator:
name: mixmax
# seed: 1
# Vacuum expectation value
vev: 246.2196508
# Properties of the electro-weak bosons
particle properties:
Higgs:
mass: 125
width: 0.004165
W:
mass: 80.385
width: 2.085
Z:
mass: 91.187
width: 2.495
# parameters for Higgs-gluon couplings
# this requires compilation with qcdloop
#
# Higgs coupling:
# use impact factors: false
# mt: 174
# include bottom: true
# mb: 4.7
## ---------------------------------------------------------------------- ##
## The following settings are only intended for advanced users. ##
## Please DO NOT SET them unless you know exactly what you are doing! ##
## ---------------------------------------------------------------------- ##
#
+# max events: -1 # maximum number of fixed-order events to process
# regulator parameter: 0.2 # The regulator lambda for the subtraction terms
diff --git a/doc/sphinx/HEJ.rst b/doc/sphinx/HEJ.rst
index 6ae67be..a7bd4ec 100644
--- a/doc/sphinx/HEJ.rst
+++ b/doc/sphinx/HEJ.rst
@@ -1,359 +1,365 @@
.. _`Running HEJ 2`:
Running HEJ 2
=============
Quick start
-----------
In order to run HEJ 2, you need a configuration file and a file
containing fixed-order events. A sample configuration is given by the
:file:`config.yml` file distributed together with HEJ 2. Events in the
Les Houches Event File format can be generated with standard Monte Carlo
generators like `MadGraph5_aMC@NLO <https://launchpad.net/mg5amcnlo>`_
or `Sherpa <https://sherpa.hepforge.org/trac/wiki>`_. If HEJ 2 was
compiled with `HDF5 <https://www.hdfgroup.org/>`_ support, it can also
read event files in the format suggested in
`arXiv:1905.05120 <https://arxiv.org/abs/1905.05120>`_.
HEJ 2 assumes that the cross section is given by the sum of the event
weights. Depending on the fixed-order generator it may be necessary to
adjust the weights in the Les Houches Event File accordingly.
The processes supported by HEJ 2 are
- Pure multijet production
- Production of a Higgs boson with jets
- Production of a W boson with jets
..
- *TODO* Production of a Z boson or photon with jets
where at least two jets are required in each case. For the time being,
only leading-order events are supported.
After generating an event file :file:`events.lhe` adjust the parameters
under the `fixed order jets`_ setting in :file:`config.yml` to the
settings in the fixed-order generation. Resummation can then be added by
running::
HEJ config.yml events.lhe
Using the default settings, this will produce an output event file
:file:`HEJ.lhe` with events including high-energy resummation.
When using the `Docker image <https://hub.docker.com/r/hejdock/hej>`_,
HEJ can be run with
.. code-block:: bash
docker run -v $PWD:$PWD -w $PWD hejdock/hej HEJ config.yml events.lhe
.. _`HEJ 2 settings`:
Settings
--------
HEJ 2 configuration files follow the `YAML <http://yaml.org/>`_
format. The following configuration parameters are supported:
.. _`trials`:
**trials**
High-energy resummation is performed by generating a number of
resummation phase space configurations corresponding to an input
fixed-order event. This parameter specifies how many such
configurations HEJ 2 should try to generate for each input
event. Typical values vary between 10 and 100.
.. _`min extparton pt`:
**min extparton pt**
Specifies the minimum transverse momentum in GeV of the most forward
and the most backward parton. This setting is needed to regulate an
otherwise uncancelled divergence. Its value should be slightly below
the minimum transverse momentum of jets specified by `resummation
jets: min pt`_. See also the `max ext soft pt fraction`_ setting.
.. _`max ext soft pt fraction`:
**max ext soft pt fraction**
Specifies the maximum fraction that soft radiation can contribute to
the transverse momentum of each of the most forward and most backward
jets. Values between about 0.05 and 0.1 are recommended. See also the
`min extparton pt`_ setting.
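As an illustration (the values are placeholders rather than tuned
recommendations), the two regulator settings could be combined with the
resummation jet threshold like this:

.. code-block:: yaml

   resummation jets:
     min pt: 35
   min extparton pt: 30          # slightly below the resummation jet threshold
   max ext soft pt fraction: 0.1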
.. _`fixed order jets`:
**fixed order jets**
This tag collects a number of settings specifying the jet definition
in the event input. The settings should correspond to the ones used in
the fixed-order Monte Carlo that generated the input events.
.. _`fixed order jets: min pt`:
**min pt**
Minimum transverse momentum in GeV of fixed-order jets.
.. _`fixed order jets: algorithm`:
**algorithm**
The algorithm used to define jets. Allowed settings are
:code:`kt`, :code:`cambridge`, :code:`antikt`,
:code:`cambridge for passive`. See the `FastJet
<http://fastjet.fr/>`_ documentation for a description of these
algorithms.
.. _`fixed order jets: R`:
**R**
The R parameter used in the jet algorithm, roughly corresponding
to the jet radius in the plane spanned by the rapidity and the
azimuthal angle.
.. _`resummation jets`:
**resummation jets**
This tag collects a number of settings specifying the jet definition
in the observed, i.e. resummed events. These settings are optional, by
default the same values as for the `fixed order jets`_ are assumed.
.. _`resummation jets: min pt`:
**min pt**
Minimum transverse momentum in GeV of resummation jets. This
should be between 25% and 50% larger than the minimum transverse
momentum of fixed order jets set by `fixed order jets: min pt`_.
.. _`resummation jets: algorithm`:
**algorithm**
The algorithm used to define jets. The HEJ 2 approach to
resummation relies on properties of :code:`antikt` jets, so this
value is strongly recommended. For a list of possible other
values, see the `fixed order jets: algorithm`_ setting.
.. _`resummation jets: R`:
**R**
The R parameter used in the jet algorithm.
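Putting the two jet blocks together, a sketch of matching jet definitions
(with a purely illustrative resummation threshold, raised by about a third
with respect to the fixed-order one) could read:

.. code-block:: yaml

   fixed order jets:            # must match the fixed-order generation
     min pt: 30
     algorithm: antikt
     R: 0.4
   resummation jets:            # optional, defaults to the fixed order jets
     min pt: 40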
.. _`event treatment`:
**event treatment**
Specify how to treat different event types. The different event types
contribute to different orders in the high-energy limit. The possible values
are :code:`reweight` to enable resummation, :code:`keep` to keep the events as
they are up to a possible change of renormalisation and factorisation scale,
and :code:`discard` to discard these events.
.. _`FKL`:
**FKL**
Specifies how to treat events respecting FKL rapidity ordering. These
configurations are dominant in the high-energy limit.
.. _`unordered`:
**unordered**
Specifies how to treat events with one emission that does not respect FKL
ordering, e.g. :code:`u d => g u d`. In the high-energy limit, such
configurations are logarithmically suppressed compared to FKL
configurations.
.. _`extremal qqx`:
**extremal qqx**
Specifies how to treat events with a quark-antiquark pair as extremal
partons in rapidity, e.g. :code:`g d => u u_bar d`. In the high-energy
limit, such configurations are logarithmically suppressed compared to FKL
configurations. :code:`reweight` is currently only supported for W boson
plus jets production.
.. _`central qqx`:
**central qqx**
Specifies how to treat events with a quark-antiquark pair central in
rapidity, e.g. :code:`g g => g u u_bar g`. In the high-energy limit, such
configurations are logarithmically suppressed compared to FKL
configurations. :code:`reweight` is currently only supported for W boson
plus jets production.
.. _`non-resummable`:
**non-resummable**
Specifies how to treat events where no resummation is possible. Only
:code:`keep` or :code:`discard` are valid options, *not* :code:`reweight`
for obvious reasons.
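As an illustration, the following block enables resummation for the leading
(FKL) configurations and keeps all other channels at fixed order:

.. code-block:: yaml

   event treatment:
     FKL: reweight
     unordered: keep
     extremal qqx: keep
     central qqx: keep
     non-resummable: keep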
.. _`scales`:
**scales**
Specifies the renormalisation and factorisation scales for the output
events. This can either be a single entry or a list :code:`[scale1,
scale2, ...]`. For the case of a list the first entry defines the
central scale. Possible values are fixed numbers to set the scale in
GeV or the following:
- :code:`H_T`: The sum of the scalar transverse momenta of all
final-state particles
- :code:`max jet pperp`: The maximum transverse momentum of all jets
- :code:`jet invariant mass`: Sum of the invariant masses of all jets
- :code:`m_j1j2`: Invariant mass of the two hardest jets.
Scales can be multiplied or divided by overall factors, e.g. :code:`H_T/2`.
It is also possible to import scales from an external library, see
:ref:`Custom scales`
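For example, an (illustrative) entry with a central scale of H_T/2 and two
alternative choices could read:

.. code-block:: yaml

   scales: [H_T/2, 125, max jet pperp]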
.. _`scale factors`:
**scale factors**
A list of numeric factors by which each of the `scales`_ should be
multiplied. Renormalisation and factorisation scales are varied
independently. For example, a list with entries :code:`[0.5, 2]`
would give the four scale choices (0.5μ\ :sub:`r`, 0.5μ\ :sub:`f`);
(0.5μ\ :sub:`r`, 2μ\ :sub:`f`); (2μ\ :sub:`r`, 0.5μ\ :sub:`f`); (2μ\
:sub:`r`, 2μ\ :sub:`f`) in this order. The ordering corresponds to
the order of the final event weights.
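A sketch combining a central scale with the variation factors from the
example above (all numbers are illustrative):

.. code-block:: yaml

   scales: H_T/2
   # yields (0.5, 0.5), (0.5, 2), (2, 0.5), (2, 2) in units of the central scale
   scale factors: [0.5, 2]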
.. _`max scale ratio`:
**max scale ratio**
Specifies the maximum factor by which renormalisation and
factorisation scales may differ. For a value of :code:`2` and the
example given for the `scale factors`_ the scale choices
(0.5μ\ :sub:`r`, 2μ\ :sub:`f`) and (2μ\ :sub:`r`, 0.5μ\ :sub:`f`)
will be discarded.
.. _`log correction`:
**log correction**
Whether to include corrections due to the evolution of the strong
coupling constant in the virtual corrections. Allowed values are
:code:`true` and :code:`false`.
.. _`event output`:
**event output**
Specifies the name of a single event output file or a list of such
files. The file format is either specified explicitly or derived from
the suffix. For example, :code:`events.lhe` or, equivalently
:code:`Les Houches: events.lhe` generates an output event file
:code:`events.lhe` in the Les Houches format. The supported formats
are
- :code:`file.lhe` or :code:`Les Houches: file`: The Les Houches
event file format.
- :code:`file.hepmc2` or :code:`HepMC2: file`: HepMC format version 2.
- :code:`file.hepmc3` or :code:`HepMC3: file`: HepMC format version 3.
- :code:`file.hepmc` or :code:`HepMC: file`: The latest supported
version of the HepMC format, currently version 3.
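As an illustration, the following entry writes the same events both as a Les
Houches and as a HepMC 3 file (the file names are placeholders):

.. code-block:: yaml

   event output:
     - HEJ.lhe
     - HepMC3: HEJ_events.hepmc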
.. _`random generator`:
**random generator**
Sets parameters for random number generation.
.. _`random generator: name`:
**name**
Which random number generator to use. Currently, :code:`mixmax`
and :code:`ranlux64` are supported. Mixmax is recommended. See
the `CLHEP documentation
<http://proj-clhep.web.cern.ch/proj-clhep/index.html#docu>`_ for
details on the generators.
.. _`random generator: seed`:
**seed**
The seed for random generation. This should be a single number for
:code:`mixmax` and the name of a state file for :code:`ranlux64`.
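For instance (the seed value is purely illustrative):

.. code-block:: yaml

   random generator:
     name: mixmax
     seed: 1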
.. _`analysis`:
**analysis**
Name and settings for the event analysis; either a custom
analysis plugin or Rivet. For the former, the :code:`plugin` sub-entry
should be set to the path of the analysis file. All further entries are
passed on to the analysis. To use Rivet, a list of Rivet analyses has to be
given in :code:`rivet` and a prefix for the YODA files has to be set
through :code:`output`. See :ref:`Writing custom analyses` for details.
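A hypothetical Rivet-based entry could look like this (the analysis name and
output prefix are placeholders):

.. code-block:: yaml

   analysis:
     rivet: MC_XS   # Rivet analysis name
     output: HEJ    # ".yoda" and a scale suffix are appended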
.. _`vev`:
**vev**
Higgs vacuum expectation value in GeV. All electro-weak constants are derived
from this together with the `particle properties`_.
.. _`particle properties`:
**particle properties**
Specifies various properties of the different particles (Higgs, W or Z). All
electro-weak constants are derived from these together with the :ref:`vacuum
expectation value<vev>`.
.. _`particle properties: particle`:
**Higgs, W or Z**
The particle (Higgs, |W+| or |W-|, Z) for which the following properties
are defined.
.. |W+| replace:: W\ :sup:`+`
.. |W-| replace:: W\ :sup:`-`
.. _`particle properties: particle: mass`:
**mass**
The mass of the particle in GeV.
.. _`particle properties: particle: width`:
**width**
The total decay width of the particle in GeV.
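Together with `vev`_, a sketch of the electro-weak input (the values shown
here are those of the sample :file:`config.yml` and are purely illustrative)
reads:

.. code-block:: yaml

   vev: 246.2196508
   particle properties:
     W:
       mass: 80.385
       width: 2.085
     Z:
       mass: 91.187
       width: 2.495
     Higgs:
       mass: 125
       width: 0.004165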
.. _`Higgs coupling`:
**Higgs coupling**
This collects a number of settings concerning the effective coupling
of the Higgs boson to gluons. This is only relevant for the
production process of a Higgs boson with jets and only supported if
HEJ 2 was compiled with `QCDLoop
<https://github.com/scarrazza/qcdloop>`_ support.
.. _`Higgs coupling: use impact factors`:
**use impact factors**
Whether to use impact factors for the coupling to the most forward
and most backward partons. Impact factors imply the infinite
top-quark mass limit.
.. _`Higgs coupling: mt`:
**mt**
The value of the top-quark mass in GeV. If this is not specified,
the limit of an infinite mass is taken.
.. _`Higgs coupling: include bottom`:
**include bottom**
Whether to include the Higgs coupling to bottom quarks.
.. _`Higgs coupling: mb`:
**mb**
The value of the bottom-quark mass in GeV. This is only used for the Higgs
coupling; external bottom quarks are always assumed to be massless.
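If HEJ 2 was compiled with QCDLoop support, finite quark-mass effects could be
switched on with a block along these lines (the mass values are illustrative):

.. code-block:: yaml

   Higgs coupling:
     use impact factors: false
     mt: 174
     include bottom: true
     mb: 4.7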
Advanced Settings
~~~~~~~~~~~~~~~~~
All of the following settings are optional. Please **do not set** any of the
following options, unless you know exactly what you are doing. The default
behaviour gives the most reliable results for a wide range of observables.
+.. _`max events`:
+
+**max events**
+  Maximum number of (input) fixed-order events. HEJ will stop after processing
+  `max events` events. By default all events are processed.
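+
+  For example, to stop after the first 1000 input events (the number is purely
+  illustrative):
+
+  .. code-block:: yaml
+
+     max events: 1000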
+
.. _`regulator parameter`:
**regulator parameter**
Slicing parameter to regularise the subtraction term, called :math:`\lambda`
in `arXiv:1706.01002 <https://arxiv.org/abs/1706.01002>`_. The default value is 0.2.
diff --git a/include/HEJ/EventReader.hh b/include/HEJ/EventReader.hh
index 10cd277..e912d6a 100644
--- a/include/HEJ/EventReader.hh
+++ b/include/HEJ/EventReader.hh
@@ -1,57 +1,57 @@
/** \file
* \brief Header file for event reader interface
*
* This header defines an abstract base class for reading events from files.
*
* \authors The HEJ collaboration (see AUTHORS for details)
* \date 2019
* \copyright GPLv2 or later
*/
#pragma once
#include <memory>
#include <string>
#include "LHEF/LHEF.h"
#include "HEJ/optional.hh"
namespace HEJ{
class EventData;
// Abstract base class for reading events from files
struct EventReader {
//! Read an event
virtual bool read_event() = 0;
//! Access header text
virtual std::string const & header() const = 0;
//! Access run information
virtual LHEF::HEPRUP const & heprup() const = 0;
//! Access last read event
virtual LHEF::HEPEUP const & hepeup() const = 0;
//! Guess number of events from header
- virtual HEJ::optional<int> number_events() const {
+ virtual HEJ::optional<size_t> number_events() const {
size_t start = header().rfind("Number of Events");
start = header().find_first_of("123456789", start);
if(start == std::string::npos) {
return {};
}
const size_t end = header().find_first_not_of("0123456789", start);
return std::stoi(header().substr(start, end - start));
}
virtual ~EventReader() = default;
};
//! Factory function for event readers
/**
* @param filename The name of the input file
* @returns A pointer to an instance of an EventReader
* for the input file
*/
std::unique_ptr<EventReader> make_reader(std::string const & filename);
}
diff --git a/include/HEJ/HDF5Reader.hh b/include/HEJ/HDF5Reader.hh
index d0f67f9..5b00fed 100644
--- a/include/HEJ/HDF5Reader.hh
+++ b/include/HEJ/HDF5Reader.hh
@@ -1,47 +1,47 @@
/** \file
* \brief Header file for reading events in the HDF5 event format.
*
* \authors The HEJ collaboration (see AUTHORS for details)
* \date 2019
* \copyright GPLv2 or later
*/
#pragma once
#include "HEJ/EventReader.hh"
namespace HEJ{
//! Class for reading events from a file in the HDF5 file format
/**
* @details This format is specified in \cite Hoeche:2019rti.
*/
class HDF5Reader : public EventReader{
public:
//! Construct an object reading from the given file
explicit HDF5Reader(std::string const & filename);
//! Read an event
bool read_event() override;
//! Access header text
std::string const & header() const override;
//! Access run information
LHEF::HEPRUP const & heprup() const override;
//! Access last read event
LHEF::HEPEUP const & hepeup() const override;
//! Get number of events
- HEJ::optional<int> number_events() const override;
+ HEJ::optional<size_t> number_events() const override;
private:
struct HDF5ReaderImpl;
struct HDF5ReaderImplDeleter {
void operator()(HDF5ReaderImpl* p);
};
std::unique_ptr<HDF5ReaderImpl, HDF5ReaderImplDeleter> impl_;
};
}
diff --git a/include/HEJ/config.hh b/include/HEJ/config.hh
index 0655088..ad62d69 100644
--- a/include/HEJ/config.hh
+++ b/include/HEJ/config.hh
@@ -1,198 +1,200 @@
/** \file
* \brief HEJ 2 configuration parameters
*
* \authors The HEJ collaboration (see AUTHORS for details)
* \date 2019
* \copyright GPLv2 or later
*/
#pragma once
#include <string>
#include "fastjet/JetDefinition.hh"
#include "yaml-cpp/yaml.h"
#include "HEJ/Constants.hh"
#include "HEJ/event_types.hh"
#include "HEJ/EWConstants.hh"
#include "HEJ/HiggsCouplingSettings.hh"
#include "HEJ/optional.hh"
#include "HEJ/output_formats.hh"
#include "HEJ/ScaleFunction.hh"
namespace HEJ{
//! Jet parameters
struct JetParameters{
fastjet::JetDefinition def; /**< Jet Definition */
double min_pt; /**< Minimum Jet Transverse Momentum */
};
//! Settings for scale variation
struct ScaleConfig{
//! Base scale choices
std::vector<ScaleFunction> base;
//! Factors for multiplicative scale variation
std::vector<double> factors;
//! Maximum ratio between renormalisation and factorisation scale
double max_ratio;
};
//! Settings for random number generator
struct RNGConfig {
//! Random number generator name
std::string name;
//! Optional initial seed
optional<std::string> seed;
};
/**! Possible treatments for fixed-order input events.
*
* The program will decide on how to treat an event based on
* the value of this enumeration.
*/
enum class EventTreatment{
reweight, /**< Perform resummation */
keep, /**< Keep the event */
discard, /**< Discard the event */
};
//! Container to store the treatments for various event types
using EventTreatMap = std::map<event_type::EventType, EventTreatment>;
/**! Input parameters.
*
* This struct stores all configuration parameters
* needed in a HEJ 2 run.
*
* \internal To add a new option:
* 1. Add a member to the Config struct.
* 2. Inside "src/YAMLreader.cc":
* - Add the option name to the "supported" Node in
* get_supported_options.
* - Initialise the new Config member in to_Config.
* The functions set_from_yaml (for mandatory options) and
* set_from_yaml_if_defined (non-mandatory) may be helpful.
* 3. Add a new entry (with a short description) to config.yml
* 4. Update the user documentation in "doc/sphinx/"
*/
struct Config {
//! Parameters for scale variation
ScaleConfig scales;
//! Resummation jet properties
JetParameters resummation_jets;
//! Fixed-order jet properties
JetParameters fixed_order_jets;
//! Minimum transverse momentum for extremal partons
double min_extparton_pt;
//! Maximum transverse momentum fraction from soft radiation in extremal jets
double max_ext_soft_pt_fraction;
//! The regulator lambda for the subtraction terms
double regulator_lambda = CLAMBDA;
//! Number of resummation configurations to generate per fixed-order event
int trials;
+  //! Maximum number of input events to process
+ optional<size_t> max_events;
//! Whether to include the logarithmic correction from \f$\alpha_s\f$ running
bool log_correction;
//! Event output files names and formats
std::vector<OutputFile> output;
//! Parameters for random number generation
RNGConfig rng;
//! Map to decide what to do for different event types
EventTreatMap treat;
//! Parameters for custom analyses
YAML::Node analysis_parameters;
//! Settings for effective Higgs-gluon coupling
HiggsCouplingSettings Higgs_coupling;
//! electroweak parameters
EWConstants ew_parameters;
};
//! Configuration options for the PhaseSpacePoint class
struct PhaseSpacePointConfig {
//! Properties of resummation jets
JetParameters jet_param;
//! Minimum transverse momentum for extremal partons
double min_extparton_pt;
//! Maximum transverse momentum fraction from soft radiation in extremal jets
double max_ext_soft_pt_fraction;
};
//! Configuration options for the MatrixElement class
struct MatrixElementConfig {
MatrixElementConfig() = default;
MatrixElementConfig(
bool log_correction,
HiggsCouplingSettings Higgs_coupling,
EWConstants ew_parameters,
double regulator_lambda = CLAMBDA
):
log_correction{log_correction},
Higgs_coupling{Higgs_coupling},
ew_parameters{ew_parameters},
regulator_lambda{regulator_lambda}
{}
//! Whether to include the logarithmic correction from \f$\alpha_s\f$ running
bool log_correction;
//! Settings for effective Higgs-gluon coupling
HiggsCouplingSettings Higgs_coupling;
//! electroweak parameters
EWConstants ew_parameters;
//! The regulator lambda for the subtraction terms
double regulator_lambda = CLAMBDA;
};
//! Configuration options for the EventReweighter class
struct EventReweighterConfig {
//! Settings for phase space point generation
PhaseSpacePointConfig psp_config;
//! Settings for matrix element calculation
MatrixElementConfig ME_config;
//! Properties of resummation jets
JetParameters jet_param;
//! Treatment of the various event types
EventTreatMap treat;
};
/**! Extract PhaseSpacePointConfig from Config
*
* \internal We do not provide a PhaseSpacePointConfig constructor from Config
* so that PhaseSpacePointConfig remains an aggregate.
* This facilitates writing client code (e.g. the HEJ fixed-order generator)
* that creates a PhaseSpacePointConfig *without* a Config object.
*
* @see to_MatrixElementConfig, to_EventReweighterConfig
*/
inline
PhaseSpacePointConfig to_PhaseSpacePointConfig(Config const & conf) {
return {
conf.resummation_jets,
conf.min_extparton_pt,
conf.max_ext_soft_pt_fraction
};
}
/**! Extract MatrixElementConfig from Config
*
* @see to_PhaseSpacePointConfig, to_EventReweighterConfig
*/
inline
MatrixElementConfig to_MatrixElementConfig(Config const & conf) {
return {conf.log_correction, conf.Higgs_coupling,
conf.ew_parameters, conf.regulator_lambda};
}
/**! Extract EventReweighterConfig from Config
*
* @see to_PhaseSpacePointConfig, to_MatrixElementConfig
*/
inline
EventReweighterConfig to_EventReweighterConfig(Config const & conf) {
return {
to_PhaseSpacePointConfig(conf),
to_MatrixElementConfig(conf),
conf.resummation_jets, conf.treat
};
}
} // namespace HEJ
diff --git a/src/HDF5Reader.cc b/src/HDF5Reader.cc
index ef3ba1e..4689ed2 100644
--- a/src/HDF5Reader.cc
+++ b/src/HDF5Reader.cc
@@ -1,309 +1,309 @@
/**
* \authors The HEJ collaboration (see AUTHORS for details)
* \date 2019
* \copyright GPLv2 or later
*/
#include "HEJ/HDF5Reader.hh"
#ifdef HEJ_BUILD_WITH_HDF5
#include <numeric>
#include <iterator>
#include "highfive/H5File.hpp"
namespace HEJ {
namespace {
// buffer size for reader
// each "reading from disk" reads "chunk_size" many event at once
constexpr std::size_t chunk_size = 10000;
struct ParticleData {
std::vector<int> id;
std::vector<int> status;
std::vector<int> mother1;
std::vector<int> mother2;
std::vector<int> color1;
std::vector<int> color2;
std::vector<double> px;
std::vector<double> py;
std::vector<double> pz;
std::vector<double> e;
std::vector<double> m;
std::vector<double> lifetime;
std::vector<double> spin;
};
struct EventRecords {
std::vector<int> particle_start;
std::vector<int> nparticles;
std::vector<int> pid;
std::vector<double> weight;
std::vector<double> scale;
std::vector<double> fscale;
std::vector<double> rscale;
std::vector<double> aqed;
std::vector<double> aqcd;
double trials;
ParticleData particles;
};
class ConstEventIterator {
public:
// iterator traits
using iterator_category = std::bidirectional_iterator_tag;
using value_type = LHEF::HEPEUP;
using difference_type = std::ptrdiff_t;
using pointer = const LHEF::HEPEUP*;
using reference = LHEF::HEPEUP const &;
using iterator = ConstEventIterator;
friend iterator cbegin(EventRecords const & records) noexcept;
friend iterator cend(EventRecords const & records) noexcept;
iterator& operator++() {
particle_offset_ += records_.get().nparticles[idx_];
++idx_;
return *this;
}
iterator& operator--() {
--idx_;
particle_offset_ -= records_.get().nparticles[idx_];
return *this;
}
iterator operator--(int) {
auto res = *this;
--(*this);
return res;
}
bool operator==(iterator const & other) const {
return idx_ == other.idx_;
}
bool operator!=(iterator other) const {
return !(*this == other);
}
value_type operator*() const {
value_type hepeup{};
auto const & r = records_.get();
hepeup.NUP = r.nparticles[idx_];
hepeup.IDPRUP = r.pid[idx_];
hepeup.XWGTUP = r.weight[idx_]/r.trials;
hepeup.weights.emplace_back(hepeup.XWGTUP, nullptr);
hepeup.SCALUP = r.scale[idx_];
hepeup.scales.muf = r.fscale[idx_];
hepeup.scales.mur = r.rscale[idx_];
hepeup.AQEDUP = r.aqed[idx_];
hepeup.AQCDUP = r.aqcd[idx_];
const size_t start = particle_offset_;
const size_t end = start + hepeup.NUP;
auto const & p = r.particles;
hepeup.IDUP = std::vector<long>( begin(p.id)+start, begin(p.id)+end );
hepeup.ISTUP = std::vector<int>( begin(p.status)+start, begin(p.status)+end );
hepeup.VTIMUP = std::vector<double>( begin(p.lifetime)+start, begin(p.lifetime)+end );
hepeup.SPINUP = std::vector<double>( begin(p.spin)+start, begin(p.spin)+end );
hepeup.MOTHUP.resize(hepeup.NUP);
hepeup.ICOLUP.resize(hepeup.NUP);
hepeup.PUP.resize(hepeup.NUP);
for(size_t i = 0; i < hepeup.MOTHUP.size(); ++i) {
const size_t idx = start + i;
assert(idx < end);
hepeup.MOTHUP[i] = std::make_pair(p.mother1[idx], p.mother2[idx]);
hepeup.ICOLUP[i] = std::make_pair(p.color1[idx], p.color2[idx]);
hepeup.PUP[i] = std::vector<double>{
p.px[idx], p.py[idx], p.pz[idx], p.e[idx], p.m[idx]
};
}
return hepeup;
}
private:
explicit ConstEventIterator(EventRecords const & records):
records_{records} {}
std::reference_wrapper<const EventRecords> records_;
size_t idx_ = 0;
size_t particle_offset_ = 0;
}; // end ConstEventIterator
ConstEventIterator cbegin(EventRecords const & records) noexcept {
return ConstEventIterator{records};
}
ConstEventIterator cend(EventRecords const & records) noexcept {
auto it = ConstEventIterator{records};
it.idx_ = records.aqcd.size(); // or size of any other records member
return it;
}
} // end anonymous namespace
struct HDF5Reader::HDF5ReaderImpl{
HighFive::File file;
std::size_t event_idx;
std::size_t particle_idx;
std::size_t nevents;
EventRecords records;
ConstEventIterator cur_event;
LHEF::HEPRUP heprup;
LHEF::HEPEUP hepeup;
explicit HDF5ReaderImpl(std::string const & filename):
file{filename},
event_idx{0},
particle_idx{0},
nevents{
file.getGroup("event")
.getDataSet("nparticles") // or any other dataset
.getSpace().getDimensions().front()
},
records{},
cur_event{cbegin(records)},
heprup{},
hepeup{}
{
read_heprup();
read_event_records(chunk_size);
}
void read_heprup() {
const auto init = file.getGroup("init");
init.getDataSet( "PDFgroupA" ).read(heprup.PDFGUP.first);
init.getDataSet( "PDFgroupB" ).read(heprup.PDFGUP.second);
init.getDataSet( "PDFsetA" ).read(heprup.PDFSUP.first);
init.getDataSet( "PDFsetB" ).read(heprup.PDFSUP.second);
init.getDataSet( "beamA" ).read(heprup.IDBMUP.first);
init.getDataSet( "beamB" ).read(heprup.IDBMUP.second);
init.getDataSet( "energyA" ).read(heprup.EBMUP.first);
init.getDataSet( "energyB" ).read(heprup.EBMUP.second);
init.getDataSet( "numProcesses" ).read(heprup.NPRUP);
init.getDataSet( "weightingStrategy" ).read(heprup.IDWTUP);
const auto proc_info = file.getGroup("procInfo");
proc_info.getDataSet( "procId" ).read(heprup.LPRUP);
proc_info.getDataSet( "xSection" ).read(heprup.XSECUP);
proc_info.getDataSet( "error" ).read(heprup.XERRUP);
// TODO: is this identification correct?
proc_info.getDataSet( "unitWeight" ).read(heprup.XMAXUP);
std::vector<double> trials;
file.getGroup("event").getDataSet("trials").read(trials);
records.trials = std::accumulate(begin(trials), end(trials), 0.);
}
std::size_t read_event_records(std::size_t count) {
count = std::min(count, nevents-event_idx);
auto events = file.getGroup("event");
events.getDataSet("nparticles").select({event_idx}, {count}).read(records.nparticles);
assert(records.nparticles.size() == count);
events.getDataSet("pid").select( {event_idx}, {count} ).read( records.pid );
events.getDataSet("weight").select( {event_idx}, {count} ).read( records.weight );
events.getDataSet("scale").select( {event_idx}, {count} ).read( records.scale );
events.getDataSet("fscale").select( {event_idx}, {count} ).read( records.fscale );
events.getDataSet("rscale").select( {event_idx}, {count} ).read( records.rscale );
events.getDataSet("aqed").select( {event_idx}, {count} ).read( records.aqed );
events.getDataSet("aqcd").select( {event_idx}, {count} ).read( records.aqcd );
const std::size_t particle_count = std::accumulate(
begin(records.nparticles), end(records.nparticles), 0
);
auto pdata = file.getGroup("particle");
auto & particles = records.particles;
pdata.getDataSet("id").select( {particle_idx}, {particle_count} ).read( particles.id );
pdata.getDataSet("status").select( {particle_idx}, {particle_count} ).read( particles.status );
pdata.getDataSet("mother1").select( {particle_idx}, {particle_count} ).read( particles.mother1 );
pdata.getDataSet("mother2").select( {particle_idx}, {particle_count} ).read( particles.mother2 );
pdata.getDataSet("color1").select( {particle_idx}, {particle_count} ).read( particles.color1 );
pdata.getDataSet("color2").select( {particle_idx}, {particle_count} ).read( particles.color2 );
pdata.getDataSet("px").select( {particle_idx}, {particle_count} ).read( particles.px );
pdata.getDataSet("py").select( {particle_idx}, {particle_count} ).read( particles.py );
pdata.getDataSet("pz").select( {particle_idx}, {particle_count} ).read( particles.pz );
pdata.getDataSet("e").select( {particle_idx}, {particle_count} ).read( particles.e );
pdata.getDataSet("m").select( {particle_idx}, {particle_count} ).read( particles.m );
pdata.getDataSet("lifetime").select( {particle_idx}, {particle_count} ).read( particles.lifetime );
pdata.getDataSet("spin").select( {particle_idx}, {particle_count} ).read( particles.spin );
event_idx += count;
particle_idx += particle_count;
return count;
}
};
HDF5Reader::HDF5Reader(std::string const & filename):
impl_{
new HDF5Reader::HDF5ReaderImpl{filename},
HDF5Reader::HDF5ReaderImplDeleter{}
}
{}
bool HDF5Reader::read_event() {
if(impl_->cur_event == cend(impl_->records)) {
// end of active chunk, read new events from file
const auto events_read = impl_->read_event_records(chunk_size);
impl_->cur_event = cbegin(impl_->records);
if(events_read == 0) return false;
}
impl_->hepeup = *impl_->cur_event;
++impl_->cur_event;
return true;
}
namespace {
static const std::string nothing = "";
}
std::string const & HDF5Reader::header() const {
return nothing;
}
LHEF::HEPRUP const & HDF5Reader::heprup() const {
return impl_->heprup;
}
LHEF::HEPEUP const & HDF5Reader::hepeup() const {
return impl_->hepeup;
}
- HEJ::optional<int> HDF5Reader::number_events() const {
+ HEJ::optional<size_t> HDF5Reader::number_events() const {
return impl_->nevents;
}
}
#else // no HDF5 support
namespace HEJ {
class HDF5Reader::HDF5ReaderImpl{};
HDF5Reader::HDF5Reader(std::string const &){
throw std::invalid_argument{
"Failed to create HDF5 reader: "
"HEJ 2 was built without HDF5 support"
};
}
bool HDF5Reader::read_event() {
throw std::logic_error{"unreachable"};
}
std::string const & HDF5Reader::header() const {
throw std::logic_error{"unreachable"};
}
LHEF::HEPRUP const & HDF5Reader::heprup() const {
throw std::logic_error{"unreachable"};
}
LHEF::HEPEUP const & HDF5Reader::hepeup() const {
throw std::logic_error{"unreachable"};
}
- HEJ::optional<int> HDF5Reader::number_events() const {
+ HEJ::optional<size_t> HDF5Reader::number_events() const {
throw std::logic_error{"unreachable"};
}
}
#endif
namespace HEJ {
void HDF5Reader::HDF5ReaderImplDeleter::operator()(HDF5ReaderImpl* p) {
delete p;
}
}
diff --git a/src/YAMLreader.cc b/src/YAMLreader.cc
index e352b3d..9a7922d 100644
--- a/src/YAMLreader.cc
+++ b/src/YAMLreader.cc
@@ -1,509 +1,510 @@
/**
* \authors The HEJ collaboration (see AUTHORS for details)
* \date 2019
* \copyright GPLv2 or later
*/
#include "HEJ/YAMLreader.hh"
#include <algorithm>
#include <iostream>
#include <limits>
#include <map>
#include <string>
#include <unordered_map>
#include <vector>
#include <dlfcn.h>
#include "HEJ/ScaleFunction.hh"
#include "HEJ/event_types.hh"
#include "HEJ/output_formats.hh"
#include "HEJ/Constants.hh"
namespace HEJ{
class Event;
namespace{
//! Get YAML tree of supported options
/**
* The configuration file is checked against this tree of options
* in assert_all_options_known.
*/
YAML::Node const & get_supported_options(){
const static YAML::Node supported = [](){
YAML::Node supported;
static const auto opts = {
"trials", "min extparton pt", "max ext soft pt fraction",
"scales", "scale factors", "max scale ratio", "import scales",
- "log correction", "event output", "analysis", "regulator parameter",
- "vev"
+ "log correction", "event output", "analysis", "vev",
+ "regulator parameter", "max events"
};
// add subnodes to "supported" - the assigned value is irrelevant
for(auto && opt: opts) supported[opt] = "";
for(auto && jet_opt: {"min pt", "algorithm", "R"}){
supported["resummation jets"][jet_opt] = "";
supported["fixed order jets"][jet_opt] = "";
}
for(auto && opt: {"mt", "use impact factors", "include bottom", "mb"}){
supported["Higgs coupling"][opt] = "";
}
for(auto && opt: {"name", "seed"}){
supported["random generator"][opt] = "";
}
for(auto && opt: {"FKL", "unordered", "extremal qqx", "central qqx", "non-resummable"}){
supported["event treatment"][opt] = "";
}
for(auto && particle_type: {"Higgs", "W", "Z"}){
for(auto && particle_opt: {"mass", "width"}){
supported["particle properties"][particle_type][particle_opt] = "";
}
}
return supported;
}();
return supported;
}
fastjet::JetAlgorithm to_JetAlgorithm(std::string const & algo){
using namespace fastjet;
static const std::map<std::string, fastjet::JetAlgorithm> known = {
{"kt", kt_algorithm},
{"cambridge", cambridge_algorithm},
{"antikt", antikt_algorithm},
{"cambridge for passive", cambridge_for_passive_algorithm},
{"plugin", plugin_algorithm}
};
const auto res = known.find(algo);
if(res == known.end()){
throw std::invalid_argument("Unknown jet algorithm \"" + algo + "\"");
}
return res->second;
}
EventTreatment to_EventTreatment(std::string const & name){
static const std::map<std::string, EventTreatment> known = {
{"reweight", EventTreatment::reweight},
{"keep", EventTreatment::keep},
{"discard", EventTreatment::discard}
};
const auto res = known.find(name);
if(res == known.end()){
throw std::invalid_argument("Unknown event treatment \"" + name + "\"");
}
return res->second;
}
} // namespace anonymous
namespace detail{
void set_from_yaml(fastjet::JetAlgorithm & setting, YAML::Node const & yaml){
setting = to_JetAlgorithm(yaml.as<std::string>());
}
void set_from_yaml(EventTreatment & setting, YAML::Node const & yaml){
setting = to_EventTreatment(yaml.as<std::string>());
}
void set_from_yaml(ParticleID & setting, YAML::Node const & yaml){
setting = to_ParticleID(yaml.as<std::string>());
}
} // namespace detail
JetParameters get_jet_parameters(
YAML::Node const & node,
std::string const & entry
){
assert(node);
JetParameters result;
fastjet::JetAlgorithm jet_algo = fastjet::antikt_algorithm;
double R;
set_from_yaml_if_defined(jet_algo, node, entry, "algorithm");
set_from_yaml(R, node, entry, "R");
result.def = fastjet::JetDefinition{jet_algo, R};
set_from_yaml(result.min_pt, node, entry, "min pt");
return result;
}
RNGConfig to_RNGConfig(
YAML::Node const & node,
std::string const & entry
){
assert(node);
RNGConfig result;
set_from_yaml(result.name, node, entry, "name");
set_from_yaml_if_defined(result.seed, node, entry, "seed");
return result;
}
ParticleProperties get_particle_properties(
YAML::Node const & node, std::string const & entry,
std::string const & boson
){
ParticleProperties result;
set_from_yaml(result.mass, node, entry, boson, "mass");
set_from_yaml(result.width, node, entry, boson, "width");
return result;
}
EWConstants get_ew_parameters(YAML::Node const & node){
EWConstants result;
double vev;
set_from_yaml(vev, node, "vev");
result.set_vevWZH(vev,
get_particle_properties(node, "particle properties", "W"),
get_particle_properties(node, "particle properties", "Z"),
get_particle_properties(node, "particle properties", "Higgs")
);
return result;
}
HiggsCouplingSettings get_Higgs_coupling(
YAML::Node const & node,
std::string const & entry
){
assert(node);
static constexpr double mt_max = 2e4;
#ifndef HEJ_BUILD_WITH_QCDLOOP
if(node[entry]){
throw std::invalid_argument{
"Higgs coupling settings require building HEJ 2 "
"with QCDloop support"
};
}
#endif
HiggsCouplingSettings settings;
set_from_yaml_if_defined(settings.mt, node, entry, "mt");
set_from_yaml_if_defined(settings.mb, node, entry, "mb");
set_from_yaml_if_defined(settings.include_bottom, node, entry, "include bottom");
set_from_yaml_if_defined(settings.use_impact_factors, node, entry, "use impact factors");
if(settings.use_impact_factors){
if(settings.mt != std::numeric_limits<double>::infinity()){
throw std::invalid_argument{
"Conflicting settings: "
"impact factors may only be used in the infinite top mass limit"
};
}
}
else{
// huge values of the top mass are numerically unstable
settings.mt = std::min(settings.mt, mt_max);
}
return settings;
}
FileFormat to_FileFormat(std::string const & name){
static const std::map<std::string, FileFormat> known = {
{"Les Houches", FileFormat::Les_Houches},
{"HepMC", FileFormat::HepMC},
{"HepMC2", FileFormat::HepMC2},
{"HepMC3", FileFormat::HepMC3}
};
const auto res = known.find(name);
if(res == known.end()){
throw std::invalid_argument("Unknown file format \"" + name + "\"");
}
return res->second;
}
std::string extract_suffix(std::string const & filename){
size_t separator = filename.rfind('.');
if(separator == filename.npos) return {};
return filename.substr(separator + 1);
}
FileFormat format_from_suffix(std::string const & filename){
const std::string suffix = extract_suffix(filename);
if(suffix == "lhe") return FileFormat::Les_Houches;
if(suffix == "hepmc") return FileFormat::HepMC;
if(suffix == "hepmc3") return FileFormat::HepMC3;
if(suffix == "hepmc2") return FileFormat::HepMC2;
throw std::invalid_argument{
"Can't determine format for output file \"" + filename + "\""
};
}
void assert_all_options_known(
YAML::Node const & conf, YAML::Node const & supported
){
if(!conf.IsMap()) return;
if(!supported.IsMap()) throw invalid_type{"must not have sub-entries"};
for(auto const & entry: conf){
const auto name = entry.first.as<std::string>();
if(! supported[name]) throw unknown_option{name};
/* check sub-options, e.g. 'resummation jets: min pt'
* we don't check analysis sub-options
* those depend on the analysis being used and should be checked there
* similar for "import scales"
*/
if(name != "analysis" && name != "import scales"){
try{
assert_all_options_known(conf[name], supported[name]);
}
catch(unknown_option const & ex){
throw unknown_option{name + ": " + ex.what()};
}
catch(invalid_type const & ex){
throw invalid_type{name + ": " + ex.what()};
}
}
}
}
} // namespace HEJ
namespace YAML {
Node convert<HEJ::OutputFile>::encode(HEJ::OutputFile const & outfile) {
Node node;
node[to_string(outfile.format)] = outfile.name;
return node;
};
bool convert<HEJ::OutputFile>::decode(Node const & node, HEJ::OutputFile & out) {
switch(node.Type()){
case NodeType::Map: {
YAML::const_iterator it = node.begin();
out.format = HEJ::to_FileFormat(it->first.as<std::string>());
out.name = it->second.as<std::string>();
return true;
}
case NodeType::Scalar:
out.name = node.as<std::string>();
out.format = HEJ::format_from_suffix(out.name);
return true;
default:
return false;
}
}
} // namespace YAML
namespace HEJ{
namespace detail{
void set_from_yaml(OutputFile & setting, YAML::Node const & yaml){
setting = yaml.as<OutputFile>();
}
}
namespace{
void update_fixed_order_jet_parameters(
JetParameters & fixed_order_jets, YAML::Node const & yaml
){
if(!yaml["fixed order jets"]) return;
set_from_yaml_if_defined(
fixed_order_jets.min_pt, yaml, "fixed order jets", "min pt"
);
fastjet::JetAlgorithm algo = fixed_order_jets.def.jet_algorithm();
set_from_yaml_if_defined(algo, yaml, "fixed order jets", "algorithm");
double R = fixed_order_jets.def.R();
set_from_yaml_if_defined(R, yaml, "fixed order jets", "R");
fixed_order_jets.def = fastjet::JetDefinition{algo, R};
}
// like std::stod, but throw if not the whole string can be converted
double to_double(std::string const & str){
std::size_t pos;
const double result = std::stod(str, &pos);
if(pos < str.size()){
throw std::invalid_argument(str + " is not a valid double value");
}
return result;
}
using EventScale = double (*)(Event const &);
void import_scale_functions(
std::string const & file,
std::vector<std::string> const & scale_names,
std::unordered_map<std::string, EventScale> & known
) {
auto handle = dlopen(file.c_str(), RTLD_NOW);
char * error = dlerror();
if(error != nullptr) throw std::runtime_error{error};
for(auto const & scale: scale_names) {
void * sym = dlsym(handle, scale.c_str());
error = dlerror();
if(error != nullptr) throw std::runtime_error{error};
known.emplace(scale, reinterpret_cast<EventScale>(sym));
}
}
auto get_scale_map(
YAML::Node const & yaml
) {
std::unordered_map<std::string, EventScale> scale_map;
scale_map.emplace("H_T", H_T);
scale_map.emplace("max jet pperp", max_jet_pt);
scale_map.emplace("jet invariant mass", jet_invariant_mass);
scale_map.emplace("m_j1j2", m_j1j2);
if(yaml["import scales"]) {
if(! yaml["import scales"].IsMap()) {
throw invalid_type{"Entry 'import scales' is not a map"};
}
for(auto const & import: yaml["import scales"]) {
const auto file = import.first.as<std::string>();
const auto scale_names =
import.second.IsSequence()
?import.second.as<std::vector<std::string>>()
:std::vector<std::string>{import.second.as<std::string>()};
import_scale_functions(file, scale_names, scale_map);
}
}
return scale_map;
}
// simple (as in non-composite) scale functions
/**
* An example for a simple scale function would be H_T,
* H_T/2 is then composite (take H_T and then divide by 2)
*/
ScaleFunction parse_simple_ScaleFunction(
std::string const & scale_fun,
std::unordered_map<std::string, EventScale> const & known
) {
assert(
scale_fun.empty() ||
(!std::isspace(scale_fun.front()) && !std::isspace(scale_fun.back()))
);
const auto it = known.find(scale_fun);
if(it != end(known)) return {it->first, it->second};
try{
const double scale = to_double(scale_fun);
return {scale_fun, FixedScale{scale}};
} catch(std::invalid_argument const &){}
throw std::invalid_argument{"Unknown scale choice: \"" + scale_fun + "\""};
}
std::string trim_front(std::string const & str){
const auto new_begin = std::find_if(
begin(str), end(str), [](char c){ return ! std::isspace(c); }
);
return std::string(new_begin, end(str));
}
std::string trim_back(std::string str){
size_t pos = str.size() - 1;
// use guaranteed wrap-around behaviour to check whether we have
// traversed the whole string
for(; pos < str.size() && std::isspace(str[pos]); --pos) {}
str.resize(pos + 1); // note that pos + 1 can be 0
return str;
}
ScaleFunction parse_ScaleFunction(
std::string const & scale_fun,
std::unordered_map<std::string, EventScale> const & known
){
assert(
scale_fun.empty() ||
(!std::isspace(scale_fun.front()) && !std::isspace(scale_fun.back()))
);
// parse from right to left => a/b/c gives (a/b)/c
const size_t delim = scale_fun.find_last_of("*/");
if(delim == scale_fun.npos){
return parse_simple_ScaleFunction(scale_fun, known);
}
const std::string first = trim_back(std::string{scale_fun, 0, delim});
const std::string second = trim_front(std::string{scale_fun, delim+1});
if(scale_fun[delim] == '/'){
return parse_ScaleFunction(first, known)
/ parse_ScaleFunction(second, known);
}
else{
assert(scale_fun[delim] == '*');
return parse_ScaleFunction(first, known)
* parse_ScaleFunction(second, known);
}
}
EventTreatMap get_event_treatment(
YAML::Node const & node, std::string const & entry
){
using namespace event_type;
EventTreatMap treat {
{no_2_jets, EventTreatment::discard},
{bad_final_state, EventTreatment::discard},
{FKL, EventTreatment::discard},
{unob, EventTreatment::discard},
{unof, EventTreatment::discard},
{qqxexb, EventTreatment::discard},
{qqxexf, EventTreatment::discard},
{qqxmid, EventTreatment::discard},
{non_resummable, EventTreatment::discard}
};
set_from_yaml(treat.at(FKL), node, entry, "FKL");
set_from_yaml(treat.at(unob), node, entry, "unordered");
treat.at(unof) = treat.at(unob);
set_from_yaml(treat.at(qqxexb), node, entry, "extremal qqx");
treat.at(qqxexf) = treat.at(qqxexb);
set_from_yaml(treat.at(qqxmid), node, entry, "central qqx");
set_from_yaml(treat.at(non_resummable), node, entry, "non-resummable");
if(treat[non_resummable] == EventTreatment::reweight){
throw std::invalid_argument{"Cannot reweight Fixed Order events"};
}
return treat;
}
Config to_Config(YAML::Node const & yaml){
try{
assert_all_options_known(yaml, get_supported_options());
}
catch(unknown_option const & ex){
throw unknown_option{std::string{"Unknown option '"} + ex.what() + "'"};
}
Config config;
config.resummation_jets = get_jet_parameters(yaml, "resummation jets");
config.fixed_order_jets = config.resummation_jets;
update_fixed_order_jet_parameters(config.fixed_order_jets, yaml);
set_from_yaml(config.min_extparton_pt, yaml, "min extparton pt");
// Sets the standard value, then changes this if defined
config.regulator_lambda=CLAMBDA;
set_from_yaml_if_defined(config.regulator_lambda, yaml, "regulator parameter");
+ set_from_yaml_if_defined(config.max_events, yaml, "max events");
config.max_ext_soft_pt_fraction = 0.1;
set_from_yaml_if_defined(
config.max_ext_soft_pt_fraction, yaml, "max ext soft pt fraction"
);
set_from_yaml(config.trials, yaml, "trials");
set_from_yaml(config.log_correction, yaml, "log correction");
config.treat = get_event_treatment(yaml, "event treatment");
set_from_yaml_if_defined(config.output, yaml, "event output");
config.rng = to_RNGConfig(yaml, "random generator");
set_from_yaml_if_defined(config.analysis_parameters, yaml, "analysis");
config.scales = to_ScaleConfig(yaml);
config.ew_parameters = get_ew_parameters(yaml);
config.Higgs_coupling = get_Higgs_coupling(yaml, "Higgs coupling");
return config;
}
} // namespace anonymous
ScaleConfig to_ScaleConfig(YAML::Node const & yaml){
ScaleConfig config;
auto scale_funs = get_scale_map(yaml);
std::vector<std::string> scales;
set_from_yaml(scales, yaml, "scales");
config.base.reserve(scales.size());
std::transform(
begin(scales), end(scales), std::back_inserter(config.base),
[scale_funs](auto const & entry){
return parse_ScaleFunction(entry, scale_funs);
}
);
set_from_yaml_if_defined(config.factors, yaml, "scale factors");
config.max_ratio = std::numeric_limits<double>::infinity();
set_from_yaml_if_defined(config.max_ratio, yaml, "max scale ratio");
return config;
}
Config load_config(std::string const & config_file){
try{
return to_Config(YAML::LoadFile(config_file));
}
catch(...){
std::cerr << "Error reading " << config_file << ":\n ";
throw;
}
}
} // namespace HEJ
diff --git a/src/bin/HEJ.cc b/src/bin/HEJ.cc
index eb089aa..91c433d 100644
--- a/src/bin/HEJ.cc
+++ b/src/bin/HEJ.cc
@@ -1,240 +1,239 @@
/**
* \authors The HEJ collaboration (see AUTHORS for details)
* \date 2019
* \copyright GPLv2 or later
*/
#include <array>
#include <chrono>
#include <iostream>
#include <limits>
#include <memory>
#include <numeric>
#include "yaml-cpp/yaml.h"
#include "fastjet/ClusterSequence.hh"
#include "HEJ/CombinedEventWriter.hh"
#include "HEJ/config.hh"
#include "HEJ/CrossSectionAccumulator.hh"
#include "HEJ/Event.hh"
#include "HEJ/EventReader.hh"
#include "HEJ/EventReweighter.hh"
#include "HEJ/get_analysis.hh"
#include "HEJ/make_RNG.hh"
#include "HEJ/optional.hh"
#include "HEJ/ProgressBar.hh"
#include "HEJ/stream.hh"
#include "HEJ/Version.hh"
#include "HEJ/YAMLreader.hh"
HEJ::Config load_config(char const * filename){
try{
return HEJ::load_config(filename);
}
catch(std::exception const & exc){
std::cerr << "Error: " << exc.what() << '\n';
std::exit(EXIT_FAILURE);
}
}
std::unique_ptr<HEJ::Analysis> get_analysis(
YAML::Node const & parameters
){
try{
return HEJ::get_analysis(parameters);
}
catch(std::exception const & exc){
std::cerr << "Failed to load analysis: " << exc.what() << '\n';
std::exit(EXIT_FAILURE);
}
}
// unique_ptr is a workaround:
// HEJ::optional is a better fit, but gives spurious errors with g++ 7.3.0
std::unique_ptr<HEJ::ProgressBar<double>> make_progress_bar(
std::vector<double> const & xs
) {
if(xs.empty()) return {};
const double Born_xs = std::accumulate(begin(xs), end(xs), 0.);
return std::make_unique<HEJ::ProgressBar<double>>(std::cout, Born_xs);
}
std::string time_to_string(const time_t time){
char s[30];
struct tm * p = localtime(&time);
strftime(s, 30, "%a %b %d %Y %H:%M:%S", p);
return s;
}
int main(int argn, char** argv) {
using clock = std::chrono::system_clock;
- if (argn < 3) {
+ if (argn != 3) {
std::cerr << "\n# Usage:\n."<< argv[0] <<" config_file input_file\n\n";
return EXIT_FAILURE;
}
const auto start_time = clock::now();
{
std::cout << "Starting " << HEJ::Version::package_name_full()
<< ", revision " << HEJ::Version::revision() << " ("
<< time_to_string(clock::to_time_t(start_time)) << ")" << std::endl;
}
fastjet::ClusterSequence::print_banner();
// read configuration
const HEJ::Config config = load_config(argv[1]);
auto reader = HEJ::make_reader(argv[2]);
assert(reader);
std::unique_ptr<HEJ::Analysis> analysis = get_analysis(
config.analysis_parameters
);
assert(analysis != nullptr);
auto heprup{ reader->heprup() };
heprup.generators.emplace_back(LHEF::XMLTag{});
heprup.generators.back().name = HEJ::Version::package_name();
heprup.generators.back().version = HEJ::Version::String();
analysis->initialise(heprup);
HEJ::CombinedEventWriter writer{config.output, std::move(heprup)};
double global_reweight = 1.;
- int max_events = std::numeric_limits<int>::max();
+ const auto & max_events = config.max_events;
// if we need the event number:
- if(std::abs(heprup.IDWTUP) == 4 || std::abs(heprup.IDWTUP) == 1 || argn > 3){
+ if(std::abs(heprup.IDWTUP) == 4 || std::abs(heprup.IDWTUP) == 1 || max_events){
// try to read from LHE head
auto input_events{reader->number_events()};
if(!input_events) {
// else count manually
auto t_reader = HEJ::make_reader(argv[2]);
input_events = 0;
while(t_reader->read_event()) ++(*input_events);
}
if(std::abs(heprup.IDWTUP) == 4 || std::abs(heprup.IDWTUP) == 1){
// IDWTUP 4 or 1 assume average(weight)=xs, but we need sum(weights)=xs
std::cout << "Found IDWTUP " << heprup.IDWTUP << ": "
<< "assuming \"cross section = average weight\".\n"
<< "converting to \"cross section = sum of weights\" ";
global_reweight /= *input_events;
}
- if(argn > 3){
+ if(max_events && (*input_events > *max_events)){
// maximal number of events given
- max_events = std::stoi(argv[3]);
- global_reweight = *input_events/static_cast<double>(max_events);
- std::cout << "Processing " << max_events
+ global_reweight = *input_events/static_cast<double>(*max_events);
+ std::cout << "Processing " << *max_events
<< " out of " << *input_events << " events\n";
}
}
HEJ::ScaleGenerator scale_gen{
config.scales.base,
config.scales.factors,
config.scales.max_ratio
};
auto ran = HEJ::make_RNG(config.rng.name, config.rng.seed);
assert(ran != nullptr);
HEJ::EventReweighter hej{
reader->heprup(),
std::move(scale_gen),
to_EventReweighterConfig(config),
*ran
};
// status infos & eye candy
- int nevent = 0;
+ size_t nevent = 0;
std::array<int, HEJ::event_type::last_type + 1>
nevent_type{0}, nfailed_type{0};
auto progress = make_progress_bar(reader->heprup().XSECUP);
HEJ::CrossSectionAccumulator xs;
std::map<HEJ::StatusCode, int> status_counter;
size_t total_trials = 0;
// Loop over the events in the input file
- while(reader->read_event() && nevent < max_events){
+ while(reader->read_event() && (!max_events || nevent < *max_events) ){
++nevent;
// reweight events so that the total cross section is conserved
auto hepeup = reader->hepeup();
hepeup.setWeight(0, global_reweight * hepeup.weight());
HEJ::Event::EventData event_data{hepeup};
event_data.reconstruct_intermediate();
// calculate HEJ weight
HEJ::Event FO_event{
std::move(event_data).cluster(
config.fixed_order_jets.def, config.fixed_order_jets.min_pt
)
};
if(FO_event.central().weight == 0) {
static const bool warned_once = [argv,nevent](){
std::cerr
<< "WARNING: event number " << nevent
<< " in " << argv[2] << " has zero weight. "
"Ignoring this and all further events with vanishing weight.\n";
return true;
}();
(void) warned_once; // shut up compiler warnings
continue;
}
auto resummed_events = hej.reweight(FO_event, config.trials);
for(auto const & s: hej.status())
++status_counter[s];
total_trials+=hej.status().size();
++nevent_type[FO_event.type()];
if(resummed_events.empty()) ++nfailed_type[FO_event.type()];
for(auto const & ev: resummed_events){
//TODO: move pass_cuts to after phase space point generation
if(analysis->pass_cuts(ev, FO_event)){
analysis->fill(ev, FO_event);
writer.write(ev);
xs.fill(ev);
}
}
if(progress) progress->increment(FO_event.central().weight);
} // main event loop
std::cout << '\n';
analysis->finalise();
using namespace HEJ::event_type;
std::cout<< "Events processed: " << nevent << '\n';
std::cout << '\t' << name(EventType::first_type) << ": "
<< nevent_type[EventType::first_type]
<< ", failed to reconstruct " << nfailed_type[EventType::first_type]
<< '\n';
for(auto i=EventType::first_type+1; i<=EventType::last_type; i*=2){
std::cout << '\t' << name(static_cast<EventType>(i)) << ": "
<< nevent_type[i]
<< ", failed to reconstruct " << nfailed_type[i]
<< '\n';
}
std::cout << '\n' << xs << '\n';
std::cout << "Generation statistic: "
<< status_counter[HEJ::StatusCode::good] << "/" << total_trials
<< " trials successful.\n";
for(auto && entry: status_counter){
const double fraction = static_cast<double>(entry.second)/total_trials;
const int percent = std::round(100*fraction);
std::cout << std::left << std::setw(17) << (to_string(entry.first) + ":")
<< " [";
for(int i = 0; i < percent/2; ++i) std::cout << '#';
for(int i = percent/2; i < 50; ++i) std::cout << ' ';
std::cout << "] " <<std::setw(2)<<std::right<< percent << "%\n";
}
std::chrono::duration<double> run_time = (clock::now() - start_time);
std::cout << "\nFinished " << HEJ::Version::package_name() << " at "
<< time_to_string(clock::to_time_t(clock::now()))
<< "\n=> Runtime: " << run_time.count() << " sec ("
<< nevent/run_time.count() << " Events/sec).\n";
return EXIT_SUCCESS;
}
