diff --git a/Changes.md b/Changes.md
index 6736462..0661e98 100644
--- a/Changes.md
+++ b/Changes.md
@@ -1,143 +1,145 @@
# Changelog

This is the log for changes to the HEJ program. Further changes to the HEJ API
are documented in [`Changes-API.md`](Changes-API.md). If you are using HEJ as a
library, please also read the changes there.

## Version 2.1

This release adds support for two new processes:

* W boson with jets.
* Jets with a charged lepton-antilepton pair via a virtual Z boson or photon.

In addition, the complete set of first subleading processes (unordered gluon,
central and extremal quark-antiquark pair) is implemented for pure jets and
W + jets, see [arXiv:TODO](https://arxiv.org/abs/TODO). Unordered gluon
emission is also supported for Higgs boson + jets and Z boson/photon + jets.

This release includes many changes to the code, which affect users of HEJ as a
library (see [`Changes-API.md`](Changes-API.md)).

### 2.1.0

#### New Processes

* Resummation for W bosons with jets
  - New subleading processes `extremal qqx` & `central qqx` for a quark and
    anti-quark in the final state, e.g. `g g => u d_bar Wm g` (the other
    subleading processes also work with W's)
  - `HEJFOG` can generate multiple jets together with an (off-shell) W boson
    decaying into a lepton & neutrino
* Resummation for jets with a charged lepton-antilepton pair via a virtual Z
  boson or photon. Includes the `unordered` subleading process.
* Resummation can now also be performed on all subleading processes within
  pure jets. This includes the `unordered`, `extremal qqbar` and
  `central qqbar` processes.

#### More Physics Implementation

* Partons now have a colour charge
  - Colours are read from and written to LHE files
  - For reweighted events the colours are created according to leading colour
    in the FKL limit
* Use a relative fraction for the soft transverse momentum in tagging jets
  (`soft pt regulator`) as a new (optional) parameter.
  - This supersedes `min extparton pt`, which is marked **deprecated** and
    will be removed in future versions.
  - This is a direct replacement for the old `max ext soft pt fraction`,
    which is also **deprecated**.
+* Changed the redistribution of the momenta of soft emissions. Bosons can now
+  also take part of the recoil (previously only jets).

#### Updates to Runcard

* Allow multiplication and division of multiple scale functions, e.g.
  `H_T/2*m_j1j2`
* Grouped `event treatment` for subleading channels together in the runcard
  - Rename `non-HEJ` processes to `non-resummable`
* Read electro-weak constants from input
  - new mandatory setting `vev` to change the vacuum expectation value
  - new mandatory setting `particle properties` to specify mass & width of
    bosons
  - `HEJFOG`: decays are now specified in the `decays` setting (previously
    under `particle properties`)
* Allow loading multiple analyses with `analyses`. The old `analysis`
  (with "i") is marked **deprecated**.
* Optional setting to specify the maximal number of Fixed Order events
  (`max events`, default is all)
* Allow changing the regulator lambda in the input (`regulator parameter`,
  only for advanced users)

#### Changes to Input/Output

* Added support to read & write `hdf5` event files as suggested in
  [arXiv:1905.05120](https://arxiv.org/abs/1905.05120)
  (needs [HighFive](https://github.com/BlueBrain/HighFive))
* Support input with average weight equal to the cross section
  (`IDWTUP=1 or 4`)
* Support unmodified Les Houches Event Files written by Sherpa with
  `cross section = sum(weights)/sum(trials)`
* Analyses now get general run information (`LHEF::HEPRUP`) in the constructor.
  **This might break previously written, external analyses!**
  - external analyses should now be created with
    `make_analysis(YAML::Node const & config, LHEF::HEPRUP const & heprup)`
* Support `rivet` version 3 with both `HepMC` version 2 and 3
  - Multiple weights with `rivet 3` will only create one `.yoda` file
    (instead of one per weight/scale)
* Added option to unweight only resummation events
  (`unweight: {type: resummation}`)
* Added option for partially unweighting resummation events, similar to the
  fixed-order generator.
* Improved unweighting algorithm.
* Follow the HepMC convention for particle status codes: incoming = 11,
  decaying = 2, outgoing = 1 (unchanged)

#### Miscellaneous

* Print cross sections at the end of the run
* Added an example analysis & scale to `examples/`. Everything in `examples/`
  will be built when the flag `-DBUILD_EXAMPLES=TRUE` is set in `cmake`.
* Dropped support for HepMC 3.0.0; either HepMC version 2 or >3.1 is required
  - It is now possible to write out both HepMC 2 and HepMC 3 events at the
    same time
* Require LHAPDF version 6. Dropped support for all other versions.
* Use `git-lfs` for raw data in tests (`make test` now requires `git-lfs`)
* Currents are now generated with [`FORM`](https://github.com/vermaseren/form)
  - `FORM` is included as a `git submodule`; use `git submodule update --init`
    to download `FORM`
* Create [Sphinx](http://sphinx-doc.org/) and [Doxygen](http://doxygen.org/)
  documentation with `make sphinx` or `make doxygen` in your `build/` folder

## Version 2.0

First release of HEJ 2. Complete code rewrite compared to HEJ 1. Improved
matching to Fixed Order ([arXiv:1805.04446](https://arxiv.org/abs/1805.04446)).
Implemented processes: Higgs boson with jets (FKL and unordered gluon
emission, with finite quark mass loop,
[arXiv:1812.08072](https://arxiv.org/abs/1812.08072)), and pure jets
(only FKL). See [arXiv:1902.08430](https://arxiv.org/abs/1902.08430).

### 2.0.6

* Fixed compiling rivet when YODA headers are _outside_ of the rivet directory

### 2.0.5

* Fixed event classification for input not ordered in rapidity

### 2.0.4

* Fixed wrong path of `HEJ_INCLUDE_DIR` in `hej-config.cmake`

### 2.0.3

* Fixed parsing of (numerical factor) * (base scale) in configuration
* Don't change scale names, but sanitise Rivet output file names instead

### 2.0.2

* Changed scale names to `"_over_"` and `"_times_"` for proper file names
  (was `"/"` and `"*"` before)

### 2.0.1

* Fixed name of the fixed-order generator in error message.
### 2.0.0 * First release diff --git a/doc/developer_manual/developer_manual.tex b/doc/developer_manual/developer_manual.tex index efbc6c1..a6ddcbd 100644 --- a/doc/developer_manual/developer_manual.tex +++ b/doc/developer_manual/developer_manual.tex @@ -1,2034 +1,2042 @@ \documentclass[a4paper,11pt]{article} \usepackage{fourier} \usepackage[T1]{fontenc} \usepackage{microtype} \usepackage{geometry} \usepackage{enumitem} \setlist[description]{leftmargin=\parindent,labelindent=\parindent} \usepackage{amsmath} \usepackage{amssymb} \usepackage[utf8x]{inputenc} \usepackage{graphicx} \usepackage{xcolor} \usepackage{todonotes} \usepackage{listings} \usepackage{xspace} \usepackage{tikz} \usepackage{slashed} \usepackage{subcaption} \usetikzlibrary{arrows.meta} \usetikzlibrary{shapes} \usetikzlibrary{calc} \usepackage[colorlinks,linkcolor={blue!50!black}]{hyperref} \graphicspath{{build/figures/}{figures/}} \usepackage[left]{showlabels} \renewcommand{\showlabelfont}{\footnotesize\color{darkgreen}} \emergencystretch \hsize \setlength{\marginparwidth}{2cm} \newcommand{\HEJ}{{\tt HEJ}\xspace} \newcommand{\HIGHEJ}{\emph{High Energy Jets}\xspace} \newcommand{\cmake}{\href{https://cmake.org/}{cmake}\xspace} \newcommand{\html}{\href{https://www.w3.org/html/}{html}\xspace} \newcommand{\YAML}{\href{http://yaml.org/}{YAML}\xspace} \newcommand{\QCDloop}{\href{https://github.com/scarrazza/qcdloop}{QCDloop}\xspace} \newcommand{\FORM}{{\tt FORM}\xspace} \newcommand\matel[4][]{\mathinner{\langle#2\vert#3\vert#4\rangle}_{#1}} \newcommand{\as}{\alpha_s} \DeclareRobustCommand{\mathgraphics}[1]{\vcenter{\hbox{\includegraphics{#1}}}} \def\spa#1.#2{\left\langle#1\,#2\right\rangle} \def\spb#1.#2{\left[#1\,#2\right]} \def\spaa#1.#2.#3{\langle\mskip-1mu{#1} | #2 | {#3}\mskip-1mu\rangle} \def\spbb#1.#2.#3{[\mskip-1mu{#1} | #2 | {#3}\mskip-1mu]} \def\spab#1.#2.#3{\langle\mskip-1mu{#1} | #2 | {#3}\mskip-1mu\rangle} \def\spba#1.#2.#3{\langle\mskip-1mu{#1}^+ | #2 | {#3}^+\mskip-1mu\rangle} \def\spav#1.#2.#3{\|\mskip-1mu{#1} | #2 | {#3}\mskip-1mu\|^2} \def\jc#1.#2.#3{j^{#1}_{#2#3}} % expectation value \newcommand{\ev}[1]{\text{E}\left[#1\right]} \definecolor{darkgreen}{rgb}{0,0.4,0} \lstset{ % backgroundcolor=\color{lightgray}, % choose the background color; you must add \usepackage{color} or \usepackage{xcolor} basicstyle=\footnotesize\usefont{T1}{DejaVuSansMono-TLF}{m}{n}, % the size of the fonts that are used for the code breakatwhitespace=false, % sets if automatic breaks should only happen at whitespace breaklines=false, % sets automatic line breaking captionpos=t, % sets the caption-position to bottom commentstyle=\color{red}, % comment style deletekeywords={...}, % if you want to delete keywords from the given language escapeinside={\%*}{*)}, % if you want to add LaTeX within your code extendedchars=true, % lets you use non-ASCII characters; for 8-bits encodings only, does not work with UTF-8 frame=false, % adds a frame around the code keepspaces=true, % keeps spaces in text, useful for keeping indentation of code (possibly needs columns=flexible) keywordstyle=\color{blue}, % keyword style otherkeywords={}, % if you want to add more keywords to the set numbers=none, % where to put the line-numbers; possible values are (none, left, right) numbersep=5pt, % how far the line-numbers are from the code rulecolor=\color{black}, % if not set, the frame-color may be changed on line-breaks within not-black text (e.g. 
comments (green here)) showspaces=false, % show spaces everywhere adding particular underscores; it overrides 'showstringspaces' showstringspaces=false, % underline spaces within strings only showtabs=false, % show tabs within strings adding particular underscores stepnumber=2, % the step between two line-numbers. If it's 1, each line will be numbered stringstyle=\color{gray}, % string literal style tabsize=2, % sets default tabsize to 2 spaces title=\lstname, emph={}, emphstyle=\color{darkgreen} } \begin{document} \tikzstyle{mynode}=[rectangle split,rectangle split parts=2, draw,rectangle split part fill={lightgray, none}] \title{\HEJ 2 developer manual} \author{} \maketitle \tableofcontents \newpage \section{Overview} \label{sec:overview} \HEJ 2 is a C++ program and library implementing an algorithm to apply \HIGHEJ resummation~\cite{Andersen:2008ue,Andersen:2008gc} to pre-generated fixed-order events. This document is intended to give an overview over the concepts and structure of this implementation. \subsection{Project structure} \label{sec:project} \HEJ 2 is developed under the \href{https://git-scm.com/}{git} version control system. The main repository is on the IPPP \href{https://gitlab.com/}{gitlab} server under \url{https://gitlab.dur.scotgrid.ac.uk/hej/hej}. To get a local copy, get an account on the gitlab server and use \begin{lstlisting}[language=sh,caption={}] git clone git@gitlab.dur.scotgrid.ac.uk:hej/hej.git \end{lstlisting} This should create a directory \texttt{hej} with the following contents: \begin{description} \item[README.md:] Basic information concerning \HEJ. \item[doc:] Contains additional documentation, see section~\ref{sec:doc}. \item[include:] Contains the C++ header files. \item[src:] Contains the C++ source files. \item[examples:] Example analyses and scale setting code. \item[t:] Contains the source code for the automated tests. \item[CMakeLists.txt:] Configuration file for the \cmake build system. See section~\ref{sec:cmake}. \item[cmake:] Auxiliary files for \cmake. This includes modules for finding installed software in \texttt{cmake/Modules} and templates for code generation during the build process in \texttt{cmake/Templates}. \item[config.yml:] Sample configuration file for running \HEJ 2. \item[current\_generator:] Contains the code for the current generator, see section~\ref{sec:cur_gen}. \item[FixedOrderGen:] Contains the code for the fixed-order generator, see section~\ref{sec:HEJFOG}. \item[COPYING:] License file. \item[AUTHORS:] List of \HEJ authors. \item[Changes-API.md:] Change history for the \HEJ API (application programming interface). \item[Changes.md:] History of changes relevant for \HEJ users. \end{description} In the following all paths are given relative to the \texttt{hej} directory. \subsection{Documentation} \label{sec:doc} The \texttt{doc} directory contains user documentation in \texttt{doc/sphinx} and the configuration to generate source code documentation in \texttt{doc/doxygen}. The user documentation explains how to install and run \HEJ 2. The format is \href{http://docutils.sourceforge.net/rst.html}{reStructuredText}, which is mostly human-readable. Other formats, like \html, can be generated with the \href{http://www.sphinx-doc.org/en/master/}{sphinx} generator with \begin{lstlisting}[language=sh,caption={}] make sphinx \end{lstlisting} To document the source code we use \href{https://www.stack.nl/~dimitri/doxygen/}{doxygen}. 
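A doxygen comment block is placed directly above the declaration it documents. As a purely illustrative example (the function shown here is hypothetical and not part of \HEJ), such a block could look like
\begin{lstlisting}[language=C++,caption={}]
//! @brief Rapidity of a particle (illustrative example, not a HEJ function)
//!
//! @param E    particle energy
//! @param pz   longitudinal momentum component
//! @returns    the rapidity 0.5*log((E+pz)/(E-pz))
double rapidity(double E, double pz);
\end{lstlisting}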
To generate \html documentation, use the command
\begin{lstlisting}[language=sh,caption={}]
make doxygen
\end{lstlisting}
in your \texttt{build/} directory.

\subsection{Build system}
\label{sec:cmake}

For the most part, \HEJ 2 is a library providing classes and functions that
can be used to add resummation to fixed-order events. In addition, there is a
relatively small executable program leveraging this library to read in events
from an input file and produce resummation events. Both the library and the
program are built and installed with the help of \cmake. Debug information
can be turned on by using
\begin{lstlisting}[language=sh,caption={}]
cmake base/directory -DCMAKE_BUILD_TYPE=Debug
make install
\end{lstlisting}
This facilitates the use of debuggers like
\href{https://www.gnu.org/software/gdb/}{gdb}.

The main \cmake configuration file is \texttt{CMakeLists.txt}. It defines the
compiler flags, software prerequisites, header and source files used to build
\HEJ 2, and the automated tests. \texttt{cmake/Modules} contains module files
that help with the detection of the software prerequisites and
\texttt{cmake/Templates} template files for the automatic generation of header
and source files. For example, this makes it possible to keep the version
information in a single central location (\texttt{CMakeLists.txt}) and to
automatically generate a header file from the template \texttt{Version.hh.in}
that propagates it to the C++ code.

\subsection{General coding guidelines}
\label{sec:notes}

The goal is to make the \HEJ 2 code well-structured and readable. Here are a
number of guidelines to this end.
\begin{description}
\item[Observe the boy scout rule.] Always leave the code cleaner than you
  found it. Ugly hacks can be useful for testing, but shouldn't make their way
  into the main branch.
\item[Ask if something is unclear.] Often there is a good reason why code is
  written the way it is. Sometimes that reason is only obvious to the original
  author (use \lstinline!git blame! to find them), in which case they should
  be poked to add a comment. Sometimes there is no good reason, but nobody has
  had the time to come up with something better yet. In some places the code
  might just be bad.
\item[Don't break tests.] There are a number of tests in the \texttt{t}
  directory, which can be run with \lstinline!make test!. Ideally, all tests
  should run successfully in each git revision. If your latest commit broke a
  test and you haven't pushed to the central repository yet, you can fix it
  with \lstinline!git commit --amend!. If an earlier local commit broke a
  test, you can use \lstinline!git rebase -i! if you feel confident. In
  addition, each \lstinline!git push! is automatically tested via the GitLab
  CI (see appendix~\ref{sec:CI}).
\item[Test your new code.] When you add some new functionality, also add an
  automated test. This can be useful even if you don't know the ``correct''
  result because it prevents the code from changing its behaviour silently in
  the future. \href{http://www.valgrind.org/}{valgrind} is a very useful tool
  to detect potential memory leaks. The code coverage of all tests can be
  generated with \href{https://gcovr.com/en/stable/}{gcovr}. To this end, add
  the flag \lstinline!-DTEST_COVERAGE=True! to cmake and run
  \lstinline!make ctest_coverage!.
\item[Stick to the coding style.] It is somewhat easier to read code that has
  a uniform coding and indentation style. We don't have a strict style, but it
  helps if your code looks similar to what is already there.
\end{description}

\section{Program flow}
\label{sec:flow}

A run of the \HEJ 2 program has three stages: initialisation, event
processing, and cleanup. The following sections outline these stages and their
relations to the various classes and functions in the code. Unless denoted
otherwise, all classes and functions are part of the \lstinline!HEJ!
namespace. The code for the \HEJ 2 program is in \texttt{src/bin/HEJ.cc}; all
other code comprises the \HEJ 2 library. Classes and free functions are
usually implemented in header and source files with a corresponding name,
i.e. the code for \lstinline!MyClass! can usually be found in
\texttt{include/HEJ/MyClass.hh} and \texttt{src/MyClass.cc}.

\subsection{Initialisation}
\label{sec:init}

The first step is to load and parse the \YAML configuration file. The entry
point for this is the \lstinline!load_config! function, and the related code
can be found in \texttt{include/HEJ/YAMLreader.hh},
\texttt{include/HEJ/config.hh} and the corresponding \texttt{.cc} files in the
\texttt{src} directory. The implementation is based on the
\href{https://github.com/jbeder/yaml-cpp}{yaml-cpp} library. The
\lstinline!load_config! function returns a \lstinline!Config! object
containing all settings. To detect potential mistakes as early as possible, we
throw an exception whenever one of the following errors occurs:
\begin{itemize}
\item There is an unknown option in the \YAML file.
\item A setting is invalid, for example a string is given where a number would
  be expected.
\item An option value is not set.
\end{itemize}
The third rule is sometimes relaxed for ``advanced'' settings with an obvious
default, like for importing custom scales or analyses.

The information stored in the \lstinline!Config! object is then used to
initialise various objects required for the event processing stage described
in section~\ref{sec:processing}. First, the \lstinline!get_analyses! function
creates a number of objects that inherit from the \lstinline!Analysis!
interface.\footnote{In the context of C++ the proper technical expression is
  ``pure abstract class''.} Using an interface allows us to decide the
concrete type of the analysis at run time instead of having to make a
compile-time decision. Depending on the settings, \lstinline!get_analyses!
creates a number of user-defined analyses loaded from one or more external
libraries and \textsc{rivet} analyses (see the user documentation
\url{https://hej.web.cern.ch/HEJ/doc/current/user/}).

Together with a number of further objects, whose roles are described in
section~\ref{sec:processing}, we also initialise the global random number
generator. We again use an interface to defer deciding the concrete type until
the program is actually run. Currently, we support the
\href{https://mixmax.hepforge.org/}{MIXMAX} (\texttt{include/HEJ/Mixmax.hh})
and Ranlux64 (\texttt{include/HEJ/Ranlux64.hh}) random number generators, both
of which are provided by \href{http://proj-clhep.web.cern.ch/}{CLHEP}.

We also set up a \lstinline!HEJ::EventReader! object for reading events either
in the Les Houches event file format~\cite{Alwall:2006yp} or an
\href{https://www.hdfgroup.org/}{HDF5}-based format~\cite{Hoeche:2019rti}. To
allow making the decision at run time, \lstinline!HEJ::EventReader!
is an abstract base class defined in \texttt{include/HEJ/EventReader.hh} and the implementations of the derived classes are in \texttt{include/HEJ/LesHouchesReader.hh}, \texttt{include/HEJ/HDF5Reader.hh} and the corresponding \texttt{.cc} source files in the \texttt{src} directory. The \lstinline!LesHouchesReader! leverages \href{http://home.thep.lu.se/~leif/LHEF/}{\texttt{include/LHEF/LHEF.h}}. A small wrapper around the \href{https://www.boost.org/doc/libs/1_67_0/libs/iostreams/doc/index.html}{boost iostreams} library allows us to also read event files compressed with \href{https://www.gnu.org/software/gzip/}{gzip}. The wrapper code is in \texttt{include/HEJ/stream.hh} and the \texttt{src/stream.cc}. If unweighting is enabled, we also initialise an unweighter as defined in \texttt{include/HEJ/Unweighter.hh}. The unweighting strategies are explained in section~\ref{sec:unweight}. \subsection{Event processing} \label{sec:processing} In the second stage events are continuously read from the event file. After jet clustering, a number of corresponding resummation events are generated for each input event and fed into the analyses and a number of output files. The roles of various classes and functions are illustrated in the following flow chart: \begin{center} \begin{tikzpicture}[node distance=2cm and 5mm] \node (reader) [mynode] {\lstinline!EventReader::read_event!\nodepart{second}{read event}}; \node (data) [mynode,below=of reader] {\lstinline!Event::EventData! constructor\nodepart{second}{convert to \HEJ object}}; \node (cluster) [mynode,below=of data] {\lstinline!Event::EventData::cluster!\nodepart{second}{cluster jets \& classify \lstinline!EventType!}}; \node (resum) [mynode,below=of cluster] {\lstinline!EventReweighter::reweight!\nodepart{second}{perform resummation}}; \node (cut) [mynode,below=of resum] {\lstinline!Analysis::pass_cuts!\nodepart{second}{apply cuts}}; \node (cut) [mynode,below=of resum] {\lstinline!Analysis::pass_cuts!\nodepart{second}{apply cuts}}; \node (unweight) [mynode,below=of cut] {\lstinline!Unweighter::unweight!\nodepart{second}{unweight (optional)}}; \node (fill) [mynode,below left=of unweight] {\lstinline!Analysis::fill!\nodepart{second}{analyse event}}; \node (write) [mynode,below right=of unweight] {\lstinline!CombinedEventWriter::write!\nodepart{second}{write out event}}; \node (control) [below=of unweight] {}; \draw[-{Latex[length=3mm, width=1.5mm]}] (reader.south) -- node[left] {\lstinline!LHEF::HEPEUP!} (data.north); \draw[-{Latex[length=3mm, width=1.5mm]}] (data.south) -- node[left] {\lstinline!Event::EventData!} (cluster.north); \draw[-{Latex[length=3mm, width=1.5mm]}] (cluster.south) -- node[left] {\lstinline!Event!} (resum.north); \draw[-{Latex[length=3mm, width=1.5mm]}] (resum.south) -- (cut.north); \draw[-{Latex[length=3mm, width=1.5mm]}] ($(resum.south)+(10mm, 0cm)$) -- ($(cut.north)+(10mm, 0cm)$); \draw[-{Latex[length=3mm, width=1.5mm]}] ($(resum.south)+(5mm, 0cm)$) -- ($(cut.north)+(5mm, 0cm)$); \draw[-{Latex[length=3mm, width=1.5mm]}] ($(resum.south)-(5mm, 0cm)$) -- ($(cut.north)-(5mm, 0cm)$); \draw[-{Latex[length=3mm, width=1.5mm]}] ($(resum.south)-(10mm, 0cm)$) -- node[left] {\lstinline!Event!} ($(cut.north)-(10mm, 0cm)$); \draw[-{Latex[length=3mm, width=1.5mm]}] (cut.south) -- (unweight.north); \draw[-{Latex[length=3mm, width=1.5mm]}] ($(cut.south)+(7mm, 0cm)$) -- ($(unweight.north)+(7mm, 0cm)$); \draw[-{Latex[length=3mm, width=1.5mm]}] ($(cut.south)-(7mm, 0cm)$) -- node[left] {\lstinline!Event!} ($(unweight.north)-(7mm, 0cm)$); 
\draw[-{Latex[length=3mm, width=1.5mm]}] ($(unweight.south)-(3mm,0mm)$) .. controls ($(control)-(3mm,0mm)$) ..node[left] {\lstinline!Event!} (fill.east); \draw[-{Latex[length=3mm, width=1.5mm]}] ($(unweight.south)-(3mm,0mm)$) .. controls ($(control)-(3mm,0mm)$) .. (write.west); \draw[-{Latex[length=3mm, width=1.5mm]}] ($(unweight.south)+(3mm,0mm)$) .. controls ($(control)+(3mm,0mm)$) .. (fill.east); \draw[-{Latex[length=3mm, width=1.5mm]}] ($(unweight.south)+(3mm,0mm)$) .. controls ($(control)+(3mm,0mm)$) ..node[right] {\lstinline!Event!} (write.west); \end{tikzpicture} \end{center} \lstinline!EventData! is an intermediate container, its members are completely accessible. In contrast after jet clustering and classification the phase space inside \lstinline!Event! can not be changed any more (\href{https://wikipedia.org/wiki/Builder_pattern}{Builder design pattern}). The resummation is performed by the \lstinline!EventReweighter! class, which is described in more detail in section~\ref{sec:resum}. The \lstinline!CombinedEventWriter! writes events to zero or more output files. To this end, it contains a number of objects implementing the \lstinline!EventWriter! interface. These event writers typically write the events to a file in a given format. We currently have the \lstinline!LesHouchesWriter! for event files in the Les Houches Event File format, the \lstinline!HDF5Writer! for \href{https://www.hdfgroup.org/}{HDF5}~\cite{Hoeche:2019rti} and the \lstinline!HepMC2Writer! or \lstinline!HepMC3Writer! for the \href{https://hepmc.web.cern.ch/hepmc/}{HepMC} format (Version 2 and 3). \subsection{Resummation} \label{sec:resum} In the \lstinline!EventReweighter::reweight! member function, we first classify the input fixed-order event (FKL, unordered, non-resummable, \dots) and decide according to the user settings whether to discard, keep, or resum the event. If we perform resummation for the given event, we generate a number of trial \lstinline!PhaseSpacePoint! objects. Phase space generation is discussed in more detail in section~\ref{sec:pspgen}. We then perform jet clustering according to the settings for the resummation jets on each \lstinline!PhaseSpacePoint!, update the factorisation and renormalisation scale in the resulting \lstinline!Event! and reweight it according to the ratio of pdf factors and \HEJ matrix elements between resummation and original fixed-order event: \begin{center} \begin{tikzpicture}[node distance=1.5cm and 5mm] \node (in) {}; \node (treat) [diamond,draw,below=of in,minimum size=3.5cm, label={[anchor=west, inner sep=8pt]west:discard}, label={[anchor=east, inner sep=14pt]east:keep}, label={[anchor=south, inner sep=20pt]south:reweight} ] {}; \draw (treat.north west) -- (treat.south east); \draw (treat.north east) -- (treat.south west); \node (psp) [mynode,below=of treat] {\lstinline!PhaseSpacePoint! 
constructor}; \node (cluster) [mynode,below=of psp] {\lstinline!Event::EventData::cluster!\nodepart{second}{cluster jets}}; \node (colour) [mynode,below=of cluster] {\lstinline!Event::generate_colours()!\nodepart{second}{generate particle colour}}; \node (gen_scales) [mynode,below=of colour] {\lstinline!ScaleGenerator::operator()!\nodepart{second}{update scales}}; \node (rescale) [mynode,below=of gen_scales] {\lstinline!PDF::pdfpt!, \lstinline!MatrixElement!\nodepart{second}{reweight}}; \node (out) [below of=rescale] {}; \draw[-{Latex[length=3mm, width=1.5mm]}] (in.south) -- node[left] {\lstinline!Event!} (treat.north); \draw[-{Latex[length=3mm, width=1.5mm]}] (treat.south) -- node[left] {\lstinline!Event!} (psp.north); \draw[-{Latex[length=3mm, width=1.5mm]}] (psp.south) -- (cluster.north); \draw[-{Latex[length=3mm, width=1.5mm]}] ($(psp.south)+(7mm, 0cm)$) -- ($(cluster.north)+(7mm, 0cm)$); \draw[-{Latex[length=3mm, width=1.5mm]}] ($(psp.south)-(7mm, 0cm)$) -- node[left] {\lstinline!PhaseSpacePoint!} ($(cluster.north)-(7mm, 0cm)$); \draw[-{Latex[length=3mm, width=1.5mm]}] (cluster.south) -- (colour.north); \draw[-{Latex[length=3mm, width=1.5mm]}] ($(cluster.south)+(7mm, 0cm)$) -- ($(colour.north)+(7mm, 0cm)$); \draw[-{Latex[length=3mm, width=1.5mm]}] ($(cluster.south)-(7mm, 0cm)$) -- node[left] {\lstinline!Event!} ($(colour.north)-(7mm, 0cm)$); \draw[-{Latex[length=3mm, width=1.5mm]}] (colour.south) -- (gen_scales.north); \draw[-{Latex[length=3mm, width=1.5mm]}] ($(colour.south)+(7mm, 0cm)$) -- ($(gen_scales.north)+(7mm, 0cm)$); \draw[-{Latex[length=3mm, width=1.5mm]}] ($(colour.south)-(7mm, 0cm)$) -- node[left] {\lstinline!Event!} ($(gen_scales.north)-(7mm, 0cm)$); \draw[-{Latex[length=3mm, width=1.5mm]}] (gen_scales.south) -- (rescale.north); \draw[-{Latex[length=3mm, width=1.5mm]}] ($(gen_scales.south)+(7mm, 0cm)$) -- ($(rescale.north)+(7mm, 0cm)$); \draw[-{Latex[length=3mm, width=1.5mm]}] ($(gen_scales.south)-(7mm, 0cm)$) -- node[left] {\lstinline!Event!} ($(rescale.north)-(7mm, 0cm)$); \draw[-{Latex[length=3mm, width=1.5mm]}] (rescale.south) -- (out.north); \draw[-{Latex[length=3mm, width=1.5mm]}] ($(rescale.south)+(7mm, 0cm)$) -- ($(out.north)+(7mm, 0cm)$); \draw[-{Latex[length=3mm, width=1.5mm]}] ($(rescale.south)-(7mm, 0cm)$) -- node[left] {\lstinline!Event!} ($(out.north)-(7mm, 0cm)$); \node (helper) at ($(treat.east) + (15mm,0cm)$) {}; \draw[-{Latex[length=3mm, width=1.5mm]}] (treat.east) -- ($(treat.east) + (15mm,0cm)$) -- node[left] {\lstinline!Event!} (helper |- gen_scales.east) -- (gen_scales.east) ; \end{tikzpicture} \end{center} \subsection{Phase space point generation} \label{sec:pspgen} The resummed and matched \HEJ cross section for pure jet production of FKL configurations is given by (cf. eq. 
(3) of~\cite{Andersen:2018tnm}) \begin{align} \label{eq:resumdijetFKLmatched2} % \begin{split} \sigma&_{2j}^\mathrm{resum, match}=\sum_{f_1, f_2}\ \sum_m \prod_{j=1}^m\left( \int_{p_{j\perp}^B=0}^{p_{j\perp}^B=\infty} \frac{\mathrm{d}^2\mathbf{p}_{j\perp}^B}{(2\pi)^3}\ \int \frac{\mathrm{d} y_j^B}{2} \right) \ (2\pi)^4\ \delta^{(2)}\!\!\left(\sum_{k=1}^{m} \mathbf{p}_{k\perp}^B\right)\nonumber\\ &\times\ x_a^B\ f_{a, f_1}(x_a^B, Q_a^B)\ x_b^B\ f_{b, f_2}(x_b^B, Q_b^B)\ \frac{\overline{\left|\mathcal{M}_\text{LO}^{f_1f_2\to f_1g\cdots gf_2}\big(\big\{p^B_j\big\}\big)\right|}^2}{(\hat {s}^B)^2}\nonumber\\ & \times (2\pi)^{-4+3m}\ 2^m \nonumber\\ &\times\ \sum_{n=2}^\infty\ \int_{p_{1\perp}=p_{\perp,\mathrm{min}} }^{p_{1\perp}=\infty} \frac{\mathrm{d}^2\mathbf{p}_{1\perp}}{(2\pi)^3}\ \int_{p_{n\perp}=p_{\perp,\mathrm{min}}}^{p_{n\perp}=\infty} \frac{\mathrm{d}^2\mathbf{p}_{n\perp}}{(2\pi)^3}\ \prod_{i=2}^{n-1}\int_{p_{i\perp}=\lambda}^{p_{i\perp}=\infty} \frac{\mathrm{d}^2\mathbf{p}_{i\perp}}{(2\pi)^3}\ (2\pi)^4\ \delta^{(2)}\!\!\left(\sum_{k=1}^n \mathbf{p}_{k\perp}\right )\\ &\times \ \mathbf{T}_y \prod_{i=1}^n \left(\int \frac{\mathrm{d} y_i}{2}\right)\ \mathcal{O}_{mj}^e\ \left(\prod_{l=1}^{m-1}\delta^{(2)}(\mathbf{p}_{\mathcal{J}_{l}\perp}^B - \mathbf{j}_{l\perp})\right)\ \left(\prod_{l=1}^m\delta(y^B_{\mathcal{J}_l}-y_{\mathcal{J}_l})\right) \ \mathcal{O}_{2j}(\{p_i\})\nonumber\\ &\times \frac{(\hat{s}^B)^2}{\hat{s}^2}\ \frac{x_a f_{a,f_1}(x_a, Q_a)\ x_b f_{b,f_2}(x_b, Q_b)}{x_a^B\ f_{a,f_1}(x_a^B, Q_a^B)\ x_b^B\ f_{b,f_2}(x_b^B, Q_b^B)}\ \frac{\overline{\left|\mathcal{M}_{\mathrm{HEJ}}^{f_1 f_2\to f_1 g\cdots gf_2}(\{ p_i\})\right|}^2}{\overline{\left|\mathcal{M}_\text{LO, HEJ}^{f_1f_2\to f_1g\cdots gf_2}\big(\big\{p^B_j\big\}\big)\right|}^{2}} \,.\nonumber % \end{split} \end{align} The first two lines correspond to the generation of the fixed-order input events with incoming partons $f_1, f_2$ and outgoing momenta $p_j^B$, where $\mathbf{p}_{j\perp}^B$ and $y_j^B$ denote the respective transverse momentum and rapidity. Note that, at leading order, these coincide with the fixed-order jet momenta $p_{\mathcal{J}_j}^B$. $f_{a,f_1}(x_a, Q_a),f_{b,f_2}(x_b, Q_b)$ are the pdf factors for the incoming partons with momentum fractions $x_a$ and $x_b$. The square of the partonic centre-of-mass energy is denoted by $\hat{s}^B$ and $\mathcal{M}_\text{LO}^{f_1f_2\to f_1g\cdots gf_2}$ is the leading-order matrix element. The third line is a factor accounting for the different multiplicities between fixed-order and resummation events. Lines four and five are the integration over the resummation phase space described in this section. $p_i$ are the momenta of the outgoing partons in resummation phase space. $\mathbf{T}_y$ denotes rapidity ordering and $\mathcal{O}_{mj}^e$ projects out the exclusive $m$-jet component. The relation between resummation and fixed-order momenta is fixed by the $\delta$ functions. The first sets each transverse fixed-order jet momentum to some function $\mathbf{j_{l\perp}}$ of the resummation momenta. The exact form is described in section~\ref{sec:ptj_res}. The second $\delta$ forces the rapidities of resummation and fixed-order jets to be the same. Finally, the last line is the reweighting of pdf and matrix element factors already shown in section~\ref{sec:resum}. There are two kinds of cut-off in the integration over the resummation partons. $\lambda$ is a technical cut-off connected to the cancellation of infrared divergencies between real and virtual corrections. 
Its numerical value is set in \texttt{include/HEJ/Constants.h}.
$p_{\perp,\mathrm{min}}$ regulates an \emph{uncancelled} divergence in the
extremal parton momenta. Its size is set in the user configuration, see
\url{https://hej.web.cern.ch/HEJ/doc/current/user/HEJ.html#settings}.

It is straightforward to generalise eq.~(\ref{eq:resumdijetFKLmatched2}) to
unordered configurations and processes with additional colourless emissions,
for example a Higgs or electroweak boson. In the latter case only the
fixed-order integration and the matrix elements change.

\subsubsection{Gluon Multiplicity}
\label{sec:psp_ng}

The first step in evaluating the resummation phase space in
eq.~(\ref{eq:resumdijetFKLmatched2}) is to randomly pick terms in the sum over
the number of emissions. This sampling of the gluon multiplicity is done in
the \lstinline!PhaseSpacePoint::sample_ng! function in
\texttt{src/PhaseSpacePoint.cc}.

The typical number of extra emissions depends strongly on the rapidity span of
the underlying fixed-order event. Let us, for example, consider a fixed-order
FKL-type multi-jet configuration with rapidities $y_{j_f},\,y_{j_b}$ of the
most forward and backward jets, respectively. By
eq.~(\ref{eq:resumdijetFKLmatched2}), the jet multiplicity and the rapidity of
each jet are conserved when adding resummation. This implies that additional
hard radiation is restricted to rapidities $y$ within a region
$y_{j_b} \lesssim y \lesssim y_{j_f}$. Within \HEJ, we require the most
forward and most backward emissions to be hard \todo{specify how hard} in
order to avoid divergences, so this constraint in fact applies to \emph{all}
additional radiation.

To simplify the remaining discussion, let us remove the FKL rapidity ordering
\begin{equation}
  \label{eq:remove_y_order}
  \mathbf{T}_y \prod_{i=1}^n\int \frac{\mathrm{d}y_i}{2} =
  \frac{1}{n!}\prod_{i=1}^n\int \frac{\mathrm{d}y_i}{2}\,,
\end{equation}
where all rapidity integrals now cover a region which is approximately bounded
by $y_{j_b}$ and $y_{j_f}$. Each of the $m$ jets has to contain at least one
parton; selecting random emissions, we can rewrite the phase space integrals as
\begin{equation}
  \label{eq:select_jets}
  \frac{1}{n!}\prod_{i=1}^n\int [\mathrm{d}p_i] =
  \left(\prod_{i=1}^{m}\int [\mathrm{d}p_i]\ {\cal J}_i(p_i)\right)
  \frac{1}{n_g!}\prod_{i=m+1}^{m+n_g}\int [\mathrm{d}p_i]
\end{equation}
with jet selection functions
\begin{equation}
  \label{eq:def_jet_selection}
  {\cal J}_i(p) =
  \begin{cases}
    1 & p\text{ clustered into jet }i\\
    0 & \text{otherwise}
  \end{cases}
\end{equation}
and $n_g \equiv n - m$. Here and in the following we use the short-hand
notation $[\mathrm{d}p_i]$ to denote the phase-space measure for parton $i$.

As is evident from eq.~\eqref{eq:select_jets}, adding an extra emission
$n_g+1$ introduces a suppression factor $\tfrac{1}{n_g+1}$. However, the
additional phase space integral also results in an enhancement proportional to
$\Delta y_{j_f j_b} = y_{j_f} - y_{j_b}$. This is a result of the
rapidity-independence of the MRK limit of the integrand, consisting of the
matrix elements divided by the flux factor. Indeed, we observe that the
typical number of gluon emissions is to a good approximation proportional to
the rapidity separation and the phase space integral is dominated by events
with $n_g \approx \Delta y_{j_f j_b}$.
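Heuristically, combining the $\tfrac{1}{n_g!}$ factor from
eq.~\eqref{eq:select_jets} with an enhancement proportional to
$\Delta y_{j_f j_b}$ for each additional gluon means that the
$n_g$-dependence of the integrand is approximately
\begin{equation*}
  \frac{\left(c\, \Delta y_{j_f j_b}\right)^{n_g}}{n_g!}
\end{equation*}
for some process-dependent constant $c$, i.e. a Poisson-like weight with mean
$c\, \Delta y_{j_f j_b}$ that is largest for
$n_g \approx c\, \Delta y_{j_f j_b}$. This motivates the Poisson ansatz used
in the sampling described next.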
For the actual phase space sampling, we assume a Poisson distribution and extract the mean number of gluon emissions in different rapidity bins and fit the results to a linear function in $\Delta y_{j_f j_b}$, finding a coefficient of $0.975$ for the inclusive production of a Higgs boson with two jets. Here are the observed and fitted average gluon multiplicities as a function of $\Delta y_{j_f j_b}$: \begin{center} \includegraphics[width=.75\textwidth]{ng_mean} \end{center} As shown for two rapidity slices the assumption of a Poisson distribution is also a good approximation: \begin{center} \includegraphics[width=.49\textwidth]{{ng_1.5}.pdf}\hfill \includegraphics[width=.49\textwidth]{{ng_5.5}.pdf} \end{center} For configurations beyond leading-log, gluon emission is not allowed everywhere and the rapidity span has to be adjusted. When determining the most forward and backward jets, we exclude unordered emissions and the extremal jet in extremal quark-antiquark emissions. In addition, we subtract the rapidity region between central quark and antiquark emission. Technically, we could have two or more additional gluons inside one of the latter jets, in which case there could be gluons that end up in between the jet centres but still outside the region spanned by quark and antiquark. However, the phase space for this is very small and should not affect the average number of emitted gluons very much. \subsubsection{Number of Gluons inside Jets} \label{sec:psp_ng_jet} For each of the $n_g$ gluon emissions we can split the phase-space integral into a (disconnected) region inside the jets and a remainder: \begin{equation} \label{eq:psp_split} \int [\mathrm{d}p_i] = \int [\mathrm{d}p_i]\, \theta\bigg(\sum_{j=1}^{m}{\cal J}_j(p_i)\bigg) + \int [\mathrm{d}p_i]\, \bigg[1-\theta\bigg(\sum_{j=1}^{m}{\cal J}_j(p_i)\bigg)\bigg]\,. \end{equation} The next step is to decide how many of the gluons will form part of a jet. This is done in the \lstinline!PhaseSpacePoint::sample_ng_jets! function. We choose an importance sampling which is flat in the plane spanned by the azimuthal angle $\phi$ and the rapidity $y$. This is observed in BFKL and valid in the limit of Multi-Regge-Kinematics (MRK). Furthermore, we assume anti-$k_t$ jets, which cover an area of $\pi R^2$. In principle, the total accessible area in the $y$-$\phi$ plane is given by $2\pi \Delta y_{fb}$, where $\Delta y_{fb}\geq \Delta y_{j_f j_b}$ is the a priori unknown rapidity separation between the most forward and backward partons. In most cases the extremal jets consist of single partons, so that $\Delta y_{fb} = \Delta y_{j_f j_b}$. For the less common case of two partons forming a jet we observe a maximum distance of $R$ between the constituents and the jet centre. In rare cases jets have more than two constituents. Empirically, they are always within a distance of $\tfrac{5}{3}R$ to the centre of the jet, so $\Delta y_{fb} \leq \Delta y_{j_f j_b} + \tfrac{10}{3} R$. In practice, the extremal partons are required to carry a large fraction of the jet transverse momentum and will therefore be much closer to the jet axis. In summary, for sufficiently large rapidity separations we can use the approximation $\Delta y_{fb} \approx \Delta y_{j_f j_b}$. 
This scenario is depicted here:
\begin{center}
  \includegraphics[width=0.5\linewidth]{ps_large_y}
\end{center}
If there is no overlap between jets, the probability $p_{\cal J, >}$ for an
extra gluon to end up inside a jet is then given by
\begin{equation}
  \label{eq:p_J_large}
  p_{\cal J, >} = \frac{(m - 1)\*R^2}{2\Delta y_{j_f j_b}}\,.
\end{equation}
For a very small rapidity separation, eq.~\eqref{eq:p_J_large} obviously
overestimates the true probability. The maximum phase space covered by jets in
the limit of a vanishing rapidity distance between all partons is
$2mR \Delta y_{fb}$:
\begin{center}
  \includegraphics[width=0.5\linewidth]{ps_small_y}
\end{center}
We therefore estimate the probability for a parton to end up inside a jet as
\begin{equation}
  \label{eq:p_J}
  p_{\cal J} = \min\bigg(\frac{(m - 1)\*R^2}{2\Delta y_{j_f j_b}},
  \frac{mR}{\pi}\bigg)\,.
\end{equation}
Here we compare this estimate with the actually observed fraction of
additional emissions into jets as a function of the rapidity separation:
\begin{center}
  \includegraphics[width=0.75\linewidth]{pJ}
\end{center}
Again, for configurations beyond leading-log the allowed rapidity range has to
be modified as explained in section~\ref{sec:psp_ng}. We also subtract the
areas of unordered jets and the extremal jet in extremal quark-antiquark
emission when we estimate the phase space area covered by jets. The reason is
that in this case any additional emission inside such a jet would have to be
beyond the hard parton dominating the next jet in rapidity, which is possible
but rare.

\subsubsection{Gluons outside Jets}
\label{sec:gluons_nonjet}

Using our estimate for the probability of a gluon to be a jet constituent, we
choose a number $n_{g,{\cal J}}$ of gluons inside jets, which also fixes the
number $n_g - n_{g,{\cal J}}$ of gluons outside jets. As explained later on,
we need to generate the momenta of the gluons outside jets first. This is done
in \lstinline!PhaseSpacePoint::gen_non_jet!.

The azimuthal angle $\phi$ is generated flat within $0\leq \phi \leq 2 \pi$.
The allowed rapidity interval is set by the most forward and backward partons,
which are necessarily inside jets. Since these parton rapidities are not known
at this point, we also have to postpone the rapidity generation for the gluons
outside jets. For the scalar transverse momentum
$p_\perp = |\mathbf{p}_\perp|$ of a gluon outside jets we use the
parametrisation
\begin{equation}
  \label{eq:p_nonjet}
  p_\perp = \lambda + \tilde{p}_\perp\*\tan(\tau\*r)\,, \qquad
  \tau = \arctan\bigg(\frac{p_{\perp{\cal J}_\text{min}} -
    \lambda}{\tilde{p}_\perp}\bigg)\,.
\end{equation}
For $r \in [0,1)$, $p_\perp$ is always less than the minimum momentum
$p_{\perp{\cal J}_\text{min}}$ required for a jet. $\tilde{p}_\perp$ is a free
parameter; a good empirical value is
$\tilde{p}_\perp = [1.3 + 0.2\*(n_g - n_{g,\cal J})]\,$GeV.

\subsubsection{Resummation jet momenta}
\label{sec:ptj_res}

+Since \HEJ generates additional soft emissions, their recoil has to be
+compensated by the remaining particles. In the traditional reshuffling this
+recoil was distributed over the jets only. In anticipation of including more
+bosons ($WW+2j$) or fewer jets ($H+1j$), this was changed such that bosons
+also take part in the reshuffling. In the following we will nevertheless refer
+to the recoilers as \emph{jets} instead of the generic \emph{Born particles},
+since the bosons are treated completely equivalently to (massive) jets.
+
On the one hand, each jet momentum is given by the sum of its constituent
momenta.
On the other hand, the resummation jet momenta are fixed by the constraints in
line five of the master equation~\eqref{eq:resumdijetFKLmatched2}. We
therefore have to calculate the resummation jet momenta from these constraints
before generating the momenta of the gluons inside jets. This is done in
\lstinline!PhaseSpacePoint::reshuffle! and in the free
\lstinline!resummation_jet_momenta! function (declared in
\texttt{resummation\_jet.hh}). The resummation jet momenta are determined by
the $\delta$ functions in line five of
eq.~(\ref{eq:resumdijetFKLmatched2}). The rapidities are fixed to the
rapidities of the jets in the input fixed-order events, so that the FKL
ordering is guaranteed to be preserved.

In the traditional \HEJ reshuffling the transverse momenta are given by
\begin{equation}
  \label{eq:ptreassign_old}
  \mathbf{p}^B_{\mathcal{J}_{l\perp}} = \mathbf{j}_{l\perp} \equiv
  \mathbf{p}_{\mathcal{J}_{l}\perp} + \mathbf{q}_\perp
  \,\frac{|\mathbf{p}_{\mathcal{J}_{l}\perp}|}{P_\perp},
\end{equation}
where $\mathbf{q}_\perp = \sum_{i=1}^n \mathbf{p}_{i\perp}
\bigg[1-\theta\bigg(\sum_{j=1}^{m}{\cal J}_j(p_i)\bigg)\bigg]$ is the total
transverse momentum of all partons \emph{outside} jets and
$P_\perp = \sum_{j=1}^m |\mathbf{p}_{\mathcal{J}_{j}\perp}|$. Since the total
transverse momentum of an event vanishes, we can also use
$\mathbf{q}_\perp = - \sum_{j=1}^m \mathbf{p}_{\mathcal{J}_{j}\perp}$. Viewed
as a system of equations for the resummation jet momenta
$\mathbf{p}_{\mathcal{J}_{l}\perp}$, eq.~(\ref{eq:ptreassign_old}) is
non-linear. Hence we would have to solve
\begin{equation}
  \label{eq:ptreassign_eq}
  \mathbf{p}_{\mathcal{J}_{l}\perp}=\mathbf{j}^B_{l\perp}
  \equiv\mathbf{j}_{l\perp}^{-1} \left(\mathbf{p}^B_{\mathcal{J}_{l\perp}}\right)
\end{equation}
numerically. Since solving such a system is computationally expensive, we
instead define the reshuffling such that it is linear in the resummation jet
momenta. That is, eq.~\eqref{eq:ptreassign_eq} is replaced by
\begin{equation}
  \label{eq:ptreassign}
  \mathbf{p}_{\mathcal{J}_{l\perp}} = \mathbf{j}^B_{l\perp} \equiv
  \mathbf{p}^B_{\mathcal{J}_{l}\perp} - \mathbf{q}_\perp
  \,\frac{|\mathbf{p}^B_{\mathcal{J}_{l}\perp}|}{P^B_\perp},
\end{equation}
which is linear in the resummation momenta. Consequently, the equivalent
of~\eqref{eq:ptreassign_old} is now non-linear in the Born momenta. However,
the exact form of~\eqref{eq:ptreassign_old} is not relevant for the
resummation. Both methods have been tested for two and three jets with the
standard \textsc{rivet} analysis \texttt{MC\_JETS} and show no differences
even after $10^9$ events.

The reshuffling relation~\eqref{eq:ptreassign} allows the transverse momenta
$p^B_{\mathcal{J}_{l\perp}}$ of the fixed-order jets to be somewhat below the
minimum transverse momentum of resummation jets. It is crucial that this
difference does not become too large, as the fixed-order cross section
diverges for vanishing transverse momenta. In the production of a Higgs boson
with resummation jets above $30\,$GeV we observe that the contribution from
fixed-order events with jets softer than about $20\,$GeV can be safely
neglected. This is shown in the following plot of the differential cross
section over the transverse momentum of the softest fixed-order jet:
\begin{center}
  \includegraphics[width=.75\textwidth]{ptBMin}
\end{center}
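As an illustration of the reshuffling relation~\eqref{eq:ptreassign} in
isolation, the following minimal sketch computes the resummation jet
transverse momenta from given Born jet momenta and the total transverse
momentum of the partons outside jets. All names here are ad hoc for the
illustration; this is \emph{not} the actual
\lstinline!resummation_jet_momenta! implementation.
\begin{lstlisting}[language=C++,caption={}]
#include <array>
#include <cmath>
#include <vector>

using PT = std::array<double, 2>;  // transverse momentum (x, y)

// Sketch of the linear reshuffling: shift each Born jet momentum by a
// fraction of the total transverse momentum q_perp of the partons outside
// jets, proportional to its share of the scalar sum P^B_perp.
std::vector<PT> reshuffle_sketch(
    std::vector<PT> const & born_jets,  // Born jet momenta p^B_{J_l, perp}
    PT const & q_perp                   // total p_perp of non-jet partons
) {
  double P_B = 0.;
  for(auto const & p: born_jets) P_B += std::hypot(p[0], p[1]);
  std::vector<PT> res_jets;
  res_jets.reserve(born_jets.size());
  for(auto const & p: born_jets) {
    const double frac = std::hypot(p[0], p[1])/P_B;
    res_jets.push_back({p[0] - q_perp[0]*frac, p[1] - q_perp[1]*frac});
  }
  return res_jets;
}
\end{lstlisting}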
Finally, we have to account for the fact that the reshuffling
relation~\eqref{eq:ptreassign} is non-linear in the Born momenta. To arrive at
the master formula~\eqref{eq:resumdijetFKLmatched2} for the cross section, we
have introduced unity in the form of an integral over the Born momenta with
$\delta$ functions in the integrand, that is
\begin{equation}
  \label{eq:delta_intro}
  1 = \int_{p_{j\perp}^B=0}^{p_{j\perp}^B=\infty}
  \mathrm{d}^2\mathbf{p}_{j\perp}^B\delta^{(2)}(\mathbf{p}_{\mathcal{J}_{j\perp}}^B
  - \mathbf{j}_{j\perp})\,.
\end{equation}
If the arguments of the $\delta$ functions are not linear in the Born momenta,
we have to compensate with additional Jacobians as factors. Explicitly, for
the reshuffling relation~\eqref{eq:ptreassign} we have
\begin{equation}
  \label{eq:delta_rewrite}
  \prod_{l=1}^m \delta^{(2)}(\mathbf{p}_{\mathcal{J}_{l\perp}}^B -
  \mathbf{j}_{l\perp}) = \Delta \prod_{l=1}^m
  \delta^{(2)}(\mathbf{p}_{\mathcal{J}_{l\perp}} - \mathbf{j}_{l\perp}^B)\,,
\end{equation}
where $\mathbf{j}_{l\perp}^B$ is given by~\eqref{eq:ptreassign_eq} and only
depends on the Born momenta. We have extended the product to run to $m$
instead of $m-1$ by eliminating the last $\delta$ function
$\delta^{(2)}\!\!\left(\sum_{k=1}^n \mathbf{p}_{k\perp}\right)$. The Jacobian
$\Delta$ is the determinant of a $2m \times 2m$ matrix with indices
$l, l' = 1,\dots,m$ and $X, X' = x,y$:
\begin{equation}
  \label{eq:jacobian}
  \Delta = \left|\frac{\partial\,\mathbf{j}^B_{l'\perp}}{\partial\,
      \mathbf{p}^B_{{\cal J}_l \perp}} \right| =
  \left| \delta_{l l'} \delta_{X X'} -
    \frac{q_X\, p^B_{{\cal J}_{l'}X'}}{\left|\mathbf{p}^B_{{\cal J}_{l'}
          \perp}\right| P^B_\perp}\left(\delta_{l l'} -
      \frac{\left|\mathbf{p}^B_{{\cal J}_l \perp}\right|}{P^B_\perp}\right)\right|\,.
\end{equation}
The determinant is calculated in \lstinline!resummation_jet_weight!, which is
again declared in the \texttt{resummation\_jet.hh} header. Having to introduce
this Jacobian is not a disadvantage specific to the new reshuffling. If we
instead used the old reshuffling relation~\eqref{eq:ptreassign_old}, we would
\emph{also} have to introduce a similar Jacobian, since we actually want to
integrate over the resummation phase space and need to transform the argument
of the $\delta$ function to be linear in the resummation momenta for this.

\subsubsection{Gluons inside Jets}
\label{sec:gluons_jet}

After the steps outlined in section~\ref{sec:psp_ng_jet}, we have a total
number of $m + n_{g,{\cal J}}$ constituents. In
\lstinline!PhaseSpacePoint::distribute_jet_partons! we distribute them
randomly among the jets such that each jet has at least one constituent. We
then generate their momenta in \lstinline!PhaseSpacePoint::split! using the
\lstinline!Splitter! class.

The phase space integral for a jet ${\cal J}$ is given by
\begin{equation}
  \label{eq:ps_jetparton}
  \prod_{i\text{ in }{\cal J}} \bigg(\int \mathrm{d}\mathbf{p}_{i\perp}\ \int
  \mathrm{d} y_i \bigg)\delta^{(2)}\Big(\sum_{i\text{ in }{\cal J}}
  \mathbf{p}_{i\perp} - \mathbf{j}_{\perp}^B\Big)\delta(y_{\mathcal{J}}-y^B_{\mathcal{J}})\,.
\end{equation}
For jets with a single constituent, the parton momentum is obviously equal to
the jet momentum. In the case of two constituents, we observe that the partons
are always inside the jet cone with radius $R$ and often very close to the jet
centre.
The following plots show the typical relative distance $\Delta R/R$ for this scenario: \begin{center} \includegraphics[width=0.45\linewidth]{dR_2} \includegraphics[width=0.45\linewidth]{dR_2_small} \end{center} According to this preference for small values of $\Delta R$, we parametrise the $\Delta R$ integrals as \begin{equation} \label{eq:dR_sampling} \frac{\Delta R}{R} = \begin{cases} 0.25\,x_R & x_R < 0.4 \\ 1.5\,x_R - 0.5 & x_R \geq 0.4 \end{cases}\,. \end{equation} Next, we generate $\Theta_1 \equiv \Theta$ and use the constraint $\Theta_2 = \Theta \pm \pi$. The transverse momentum of the first parton is then given by \begin{equation} \label{eq:delta_constraints} p_{1\perp} = \frac{p_{\mathcal{J} y} - \tan(\phi_2) p_{\mathcal{J} x}}{\sin(\phi_1) - \tan(\phi_2)\cos(\phi_1)}\,. \end{equation} We get $p_{2\perp}$ by exchanging $1 \leftrightarrow 2$ in the indices. To obtain the Jacobian of the transformation, we start from the single jet phase space eq.~(\ref{eq:ps_jetparton}) with the rapidity delta function already rewritten to be linear in the rapidity of the last parton, i.e. \begin{equation} \label{eq:jet_2p} \prod_{i=1,2} \bigg(\int \mathrm{d}\mathbf{p}_{i\perp}\ \int \mathrm{d} y_i \bigg)\delta^{(2)}\Big(\mathbf{p}_{1\perp} + \mathbf{p}_{2\perp} - \mathbf{j}_{\perp}^B\Big)\delta(y_2- \dots)\,. \end{equation} The integral over the second parton momentum is now trivial; we can just replace the integral over $y_2$ with the equivalent constraint \begin{equation} \label{eq:R2} \int \mathrm{d}R_2 \ \delta\bigg(R_2 - \bigg[\phi_{\cal J} - \arctan \bigg(\frac{p_{{\cal J}y} - p_{1y}}{p_{{\cal J}x} - p_{1x}}\bigg)\bigg]/\cos \Theta\bigg) \,. \end{equation} In order to fix the integral over $p_{1\perp}$ instead, we rewrite this $\delta$ function. This introduces the Jacobian \begin{equation} \label{eq:jac_pt1} \bigg|\frac{\partial p_{1\perp}}{\partial R_2} \bigg| = \frac{\cos(\Theta)\mathbf{p}_{2\perp}^2}{p_{{\cal J}\perp}\sin(\phi_{\cal J}-\phi_1)}\,. \end{equation} The final form of the integral over the two parton momenta is then \begin{equation} \label{eq:ps_jet_2p} \int \mathrm{d}R_1\ R_1 \int \mathrm{d}R_2 \int \mathrm{d}x_\Theta\ 2\pi \int \mathrm{d}p_{1\perp}\ p_{1\perp} \int \mathrm{d}p_{2\perp} \ \bigg|\frac{\partial p_{1\perp}}{\partial R_2} \bigg|\delta(p_{1\perp} -\dots) \delta(p_{2\perp} - \dots)\,. \end{equation} As is evident from section~\ref{sec:psp_ng_jet}, jets with three or more constituents are rare and an efficient phase-space sampling is less important. For such jets, we exploit the observation that partons with a distance larger than $R_{\text{max}} = \tfrac{5}{3} R$ to the jet centre are never clustered into the jet. Assuming $N$ constituents, we generate all components for the first $N-1$ partons and fix the remaining parton with the $\delta$-functional. In order to end up inside the jet, we use the parametrisation \begin{align} \label{eq:ps_jet_param} \phi_i ={}& \phi_{\cal J} + \Delta \phi_i\,, & \Delta \phi_i ={}& \Delta R_i \cos(\Theta_i)\,, \\ y_i ={}& y_{\cal J} + \Delta y_i\,, & \Delta y_i ={}& \Delta R_i \sin(\Theta_i)\,, \end{align} and generate $\Theta_i$ and $\Delta R_i$ randomly with $\Delta R_i \leq R_{\text{max}}$ and the empiric value $R_{\text{max}} = 5\*R/3$. 
We can then write the phase space integral for a single parton as $(p_\perp = |\mathbf{p}_\perp|)$ \begin{equation} \label{eq:ps_jetparton_x} \int \mathrm{d}\mathbf{p}_{\perp}\ \int \mathrm{d} y \approx \int_{\Box} \mathrm{d}x_{\perp} \mathrm{d}x_{ R} \mathrm{d}x_{\theta}\ 2\*\pi\,\*R_{\text{max}}^2\,\*x_{R}\,\*p_{\perp}\,\*(p_{\perp,\text{max}} - p_{\perp,\text{min}}) \end{equation} with \begin{align} \label{eq:ps_jetparton_parameters} \Delta \phi ={}& R_{\text{max}}\*x_{R}\*\cos(2\*\pi\*x_\theta)\,,& \Delta y ={}& R_{\text{max}}\*x_{R}\*\sin(2\*\pi\*x_\theta)\,, \\ p_{\perp} ={}& (p_{\perp,\text{max}} - p_{\perp,\text{min}})\*x_\perp + p_{\perp,\text{min}}\,. \end{align} $p_{\perp,\text{max}}$ is determined from the requirement that the total contribution from the first $n-1$ partons --- i.e. the projection onto the jet $p_{\perp}$ axis --- must never exceed the jet $p_\perp$. This gives \todo{This bound is too high} \begin{equation} \label{eq:pt_max} p_{i\perp,\text{max}} = \frac{p_{{\cal J}\perp} - \sum_{j.frm} inside \texttt{current\_generator} it generates a corresponding header \texttt{include/HEJ/currents/.hh} inside the build directory. The header can be included with \begin{lstlisting}[language=C++,caption={}] #include "HEJ/currents/.hh" \end{lstlisting} The naming scheme is \begin{itemize} \item \texttt{j1\_j2.frm} for the contraction of current \texttt{j1} with current \texttt{j2}. \item \texttt{j1\_vx\_j2.frm} for the contraction of current \texttt{j1} with the vertex \texttt{vx} and the current \texttt{j2}. \end{itemize} For instance, \texttt{juno\_qqbarW\_j.frm} would indicate the contraction of an unordered current with a (central) quark-antiquark-W emission vertex and a standard FKL current. \subsection{Implementing new current contractions} \label{sec:cur_gen_new} Before adding a new current contraction, first find a representation where all momenta that appear inside the currents are \textcolor{red}{lightlike} and have \textcolor{red}{positive energy}. Section~\ref{sec:p_massive} describes how massive momenta can be decomposed into massless ones. For momenta $p'$ with negative energies replace $p' \to - p$ and exploit that negative- and positive-energy spinors are related by a phase, which is usually irrelevant: $u^\lambda(-p) = \pm i u^\lambda(p)$. To implement a current contraction \lstinline!jcontr! create a new file \texttt{jcontr.frm} inside the \texttt{current\_generator} directory. \texttt{jcontr.frm} should contain \FORM code, see \url{https://www.nikhef.nl/~form/} for more information on \FORM. Here is a small example: \begin{lstlisting}[caption={}] * FORM comments are started by an asterisk * at the beginning of a line * First include the relevant headers #include- include/helspin.frm #include- include/write.frm * Define the symbols that appear. * UPPERCASE symbols are reserved for internal use vectors p1,...,p10; indices mu1,...,mu10; * Define local expressions of the form [NAME HELICITIES] * for the current contractions for all relevant helicity configurations #do HELICITY1={+,-} #do HELICITY2={+,-} * We use the Current function * Current(h, p1, mu1, ..., muX, p2) = * u(p1) \gamma_{mu1} ... \gamma_{muX} u(p2) * where h=+1 or h=-1 is the spinor helicity. 
* All momenta appearing as arguments have to be *lightlike* local [jcontr `HELICITY1'`HELICITY2'] = Current(`HELICITY1'1, p1, mu1, p2, mu2, p3) *Current(`HELICITY2'1, p4, mu2, p2, mu1, p1); #enddo #enddo .sort * Main procedure that calculates the contraction #call ContractCurrents .sort * Optimise expression format O4; * Format in a (mostly) c compatible way format c; * Write start of C++ header file #call WriteHeader(`OUTPUT') * Write a template function jcontr * taking as template arguments two helicities * and as arguments the momenta p1,...,p4 * returning the contractions [jcontr HELICITIES] defined above #call WriteOptimised(`OUTPUT',jcontr,2,p1,p2,p3,p4) * Wrap up #call WriteFooter(`OUTPUT') .end \end{lstlisting} \subsection{Calculation of contractions} \label{sec:contr_calc} In order to describe the algorithm for the calculation of current contractions we first have to define the currents and establish some useful relations. \subsubsection{Massive momenta} \label{sec:p_massive} We want to use relations for lightlike momenta. Momenta $P$ that are \emph{not} lightlike can be written as the sum of two lightlike momenta: \begin{equation} \label{eq:P_massive} P^\mu = p^\mu + q^\mu\,, \qquad p^2 = q^2 = 0 \,. \end{equation} This decomposition is not unique. If we impose the arbitrary constraint $q_\perp = 0$ and require real-valued momentum components we can use the ansatz \begin{align} \label{eq:P_massive_p} p_\mu ={}& P_\perp\*(\cosh y, \cos \phi, \sin \phi, \sinh y)\,,\\ \label{eq:P_massive_q} q_\mu ={}& (E, 0, 0, s\,E)\,, \end{align} where $P_\perp$ is the transverse momentum of $P$ and $\phi$ the corresponding azimuthal angle. For the remaining parameters we obtain \begin{align} \label{eq:P_massive_plus} P^+ > 0:& & y ={}& \log \frac{P^+}{P_\perp}\,,\qquad E = \frac{P^2}{2P^+}\,,\qquad s = -1\,,\\ \label{eq:P_massive_minus} P^- > 0:& & y ={}& \log \frac{P_\perp}{P^-}\,,\qquad E = \frac{P^2}{2P^-}\,,\qquad s = +1\,. \end{align} The decomposition is implemented in the \lstinline!split_into_lightlike! function. \subsubsection{Currents and current relations} \label{sec:current_relations} Our starting point are generalised currents \begin{equation} \label{eq:j_gen} j^{\pm}(p, \mu_1,\dots,\mu_{2N-1},q) = \bar{u}^{\pm}(p)\gamma_{\mu_1} \dots \gamma_{\mu_{2N-1}} u^\pm(q)\,. \end{equation} Since there are no masses, we can consider two-component chiral spinors \begin{align} \label{eq:u_plus} u^+(p)={}& \left(\sqrt{p^+}, \sqrt{p^-} \hat{p}_\perp \right) \,,\\ \label{eq:u_minus} u^-(p)={}& \left(\sqrt{p^-} \hat{p}^*_\perp, -\sqrt{p^+}\right)\,, \end{align} with $p^\pm = E\pm p_z,\, \hat{p}_\perp = \tfrac{p_\perp}{|p_\perp|},\, p_\perp = p_x + i p_y$. The spinors for vanishing transverse momentum are obtained by replacing $\hat{p}_\perp \to -1$. This gives \begin{equation} \label{eq:jminus_gen} j^-(p,\mu_1,\dots,\mu_{2N-1},q) = u^{-,\dagger}(p)\ \sigma^-_{\mu_1}\ \sigma^+_{\mu_2}\dots\sigma^-_{\mu_{2N-1}}\ u^{-}(q)\,. \end{equation} where $\sigma_\mu^\pm = (1, \pm \sigma_i)$ and $\sigma_i$ are the Pauli matrices \begin{equation} \label{eq:Pauli_matrices} \sigma_1 = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix} \,, \qquad \sigma_2 = \begin{pmatrix} 0 & -i\\ i & 0 \end{pmatrix} \,, \qquad \sigma_3 = \begin{pmatrix} 1 & 0\\ 0 & -1 \end{pmatrix} \,. \end{equation} For positive-helicity currents we can either flip all helicities in eq.~(\ref{eq:jminus_gen}) or reverse the order of the arguments, i.e. 
\begin{equation} \label{eq:jplus_gen} j^+(p,\mu_1,\dots,\mu_{2N-1},q) = \big(j^-(p,\mu_1,\dots,\mu_{2N-1},q)\big)^* = j^-(q,\mu_{2N-1},\dots,\mu_1,p) \,. \end{equation} Using the standard spinor-helicity notation we have \begin{gather} \label{eq:spinors_spinhel} u^+(p) = | p \rangle\,, \qquad u^-(p) = | p ]\,, \qquad u^{+,\dagger}(p) = [ p |\,, \qquad u^{-,\dagger}(p) = \langle p |\,,\\ \label{eq:current_spinhel} j^-(p,\mu_1,\dots,\mu_{2N-1},q) = \langle p |\ \mu_1\ \dots\ \mu_{2N-1}\ | q ] \,.\\ \label{eq:contraction_spinhel} P_{\mu_i} j^-(p,\mu_1,\dots,\mu_{2N-1},q) = \langle p |\ \mu_1\ \dots\ \mu_{i-1}\ P\ \mu_{i+1}\ \dots\ \mu_{2N-1}\ | q ] \,. \end{gather} Lightlike momenta $p$ can be decomposed into spinor products: \begin{equation} \label{eq:p_decomp} \slashed{p} = |p\rangle [p| + |p] \langle p |\,. \end{equation} Taking into account helicity conservation this gives the following relations: \begingroup \addtolength{\jot}{1em} \begin{align} \label{eq:p_in_current} \langle p |\ \mu_1\ \dots\ \mu_i\ P\ \mu_{i+1}\ \dots\ \mu_{2N-1}\ | q ] ={}& \begin{cases} \langle p |\ \mu_1\ \dots\ \mu_i\ |P]\ \langle P|\ \mu_{i+1}\ \dots\ \mu_{2N-1}\ | q ]& i \text{ even}\\ \langle p |\ \mu_1\ \dots\ \mu_i\ |P\rangle\ [ P|\ \mu_{i+1}\ \dots\ \mu_{2N-1}\ | q ]& i \text{ odd} \end{cases}\,,\\ \label{eq:p_in_angle} \langle p |\ \mu_1\ \dots\ \mu_i\ P\ \mu_{i+1}\ \dots\ \mu_{2N}\ | q \rangle ={}& \begin{cases} \langle p |\ \mu_1\ \dots\ \mu_i\ |P]\ \langle P| \mu_{i+1}\ \dots\ \mu_{2N}\ | q \rangle & i \text{ even}\\ \langle p |\ \mu_1\ \dots\ \mu_i\ |P\rangle\ \big(\langle P| \mu_{i+1}\ \dots\ \mu_{2N}\ | q ]\big)^* & i \text{ odd} \end{cases}\,,\\ \label{eq:p_in_square} [ p |\ \mu_1\ \dots\ \mu_i\ P\ \mu_{i+1}\ \dots\ \mu_{2N}\ | q ] ={}& \begin{cases} \big(\langle p |\ \mu_1\ \dots\ \mu_i\ |P]\big)^* \ [ P| \mu_{i+1}\ \dots\ \mu_{2N}\ | q ] & i \text{ even}\\ [ p |\ \mu_1\ \dots\ \mu_i\ |P]\ \langle P| \mu_{i+1}\ \dots\ \mu_{2N}\ | q ] & i \text{ odd} \end{cases}\,. \end{align} \endgroup For contractions of vector currents we can use the Fierz identity \begin{equation} \label{eq:Fierz} \langle p|\ \mu\ |q]\ \langle k|\ \mu\ |l] = 2 \spa p.k \spb l.q\,. \end{equation} The scalar angle and square products are given by \begin{align} \label{eq:angle_product} \spa p.q ={}& {\big(u^-(p)\big)}^\dagger u^+(q) = \sqrt{p^-q^+}\hat{p}_{i,\perp} - \sqrt{p^+q^-}\hat{q}_{j,\perp} = - \spa q.p\,,\\ \label{eq:square_product} \spb p.q ={}& {\big(u^+(p)\big)}^\dagger u^-(q) = -\spa p.q ^* = - \spb q.p\,. \end{align} We also define polarisation vectors \begin{equation} \label{eq:pol_vector} \epsilon_\mu^-(p_g, p_r) = \frac{\langle p_r|\mu|p_g]}{\sqrt{2}\spa p_g.{p_r}}\,,\qquad\epsilon_\mu^+(p_g, p_r) = \frac{\langle p_g|\mu|p_r]}{\sqrt{2}\spb p_g.{p_r}}\,. \end{equation} fulfilling \begin{equation} \label{eq:pol_vector_norm} \epsilon_\mu^\lambda(p_g, p_r)\big[\epsilon^{\mu\,\lambda'}(p_g, p_r)\big]^* = -\delta_{\lambda\lambda'}\,. \end{equation} \subsubsection{Contraction algorithm} \label{sec:contr_calc_algo} The contractions are now calculated as follows: \begin{enumerate} \item Use equations \eqref{eq:jplus_gen}, \eqref{eq:current_spinhel} to write all currents in a canonical form. \item Assume that all momenta are lightlike and use the relations \eqref{eq:p_in_current}, \eqref{eq:p_in_angle}, \eqref{eq:p_in_square} to split up currents that are contracted with momenta. \item Apply the Fierz transformation~\eqref{eq:Fierz} to eliminate contractions between vector currents. 
\item Write the arguments of the antisymmetric angle and scalar products in canonical order, see equations~\eqref{eq:angle_product}, \eqref{eq:square_product}. \end{enumerate} The corresponding \lstinline!ContractCurrents! procedure is implemented in \texttt{include/helspin.frm}. \section{The fixed-order generator} \label{sec:HEJFOG} Even at leading order, standard fixed-order generators can only generate events with a limited number of final-state particles within reasonable CPU time. The purpose of the fixed-order generator is to supplement this with high-multiplicity input events according to the first two lines of eq.~\eqref{eq:resumdijetFKLmatched2} with the \HEJ approximation $\mathcal{M}_\text{LO, \HEJ}^{f_1f_2\to f_1g\cdots gf_2}$ instead of the full fixed-order matrix element $\mathcal{M}_\text{LO}^{f_1f_2\to f_1g\cdots gf_2}$. Its usage is described in the user documentation \url{https://hej.web.cern.ch/HEJ/doc/current/user/HEJFOG.html}. \subsection{File structure} \label{sec:HEJFOG_structure} The code for the fixed-order generator is in the \texttt{FixedOrderGen} directory, which contains the following: \begin{description} \item[include:] Contains the C++ header files. \item[src:] Contains the C++ source files. \item[t:] Contains the source code for the automated tests. \item[CMakeLists.txt:] Configuration file for the \cmake build system. \item[configFO.yml:] Sample configuration file for the fixed-order generator. \end{description} The code is generally in the \lstinline!HEJFOG! namespace. Functions and classes \lstinline!MyClass! are usually declared in \texttt{include/MyClass.hh} and implemented in \texttt{src/MyClass.cc}. \subsection{Program flow} \label{sec:prog_flow} A single run of the fixed-order generator consists of three or four stages. First, we perform initialisation similar to \HEJ 2, see section~\ref{sec:init}. Since there is a lot of overlap we frequently reuse classes and functions from \HEJ 2, i.e. from the \lstinline!HEJ! namespace. The code for parsing the configuration file is declared in \texttt{include/config.hh} and implemented in \texttt{src/config.cc}. If partial unweighting is requested in the user settings \url{https://hej.web.cern.ch/HEJ/doc/current/user/HEJFOG.html#settings}, the initialisation is followed by a calibration phase. We use an \lstinline!EventGenerator! to produce a number of trial events. We use these to calibrate the \lstinline!Unweighter! in its constructor and produce a first batch of partially unweighted events. This also allows us to estimate our unweighting efficiency. In the next step, we continue to generate events and potentially unweight them. Once the user-defined target number of events is reached, we adjust their weights according to the number of required trials. As in \HEJ 2 (see section~\ref{sec:processing}), we pass the final events to a number of \lstinline!HEJ::Analysis! objects and a \lstinline!HEJ::CombinedEventWriter!. \subsection{Event generation} \label{sec:evgen} Event generation is performed by the \lstinline!EventGenerator::gen_event! member function. We begin by generating a \lstinline!PhaseSpacePoint!. This is not to be confused with the resummation phase space points represented by \lstinline!HEJ::PhaseSpacePoint!! After jet clustering, we compute the leading-order matrix element (see section~\ref{sec:ME}) and pdf factors. The phase space point generation is performed in the \lstinline!PhaseSpacePoint! constructor.
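To make the program flow more concrete, the following is a minimal sketch of the calibrate--generate--unweight loop described above. All names (\lstinline!MockEvent!, \lstinline!gen_event!, \lstinline!keep!) and the final $1/\text{trials}$ normalisation are illustrative assumptions, \emph{not} the actual \lstinline!HEJFOG! interface:
\begin{lstlisting}[language=C++,caption={}]
#include <cstddef>
#include <vector>

// illustrative stand-in for a weighted fixed-order event
struct MockEvent { double weight; };

// stand-in for EventGenerator::gen_event(): generate a phase space point,
// cluster jets, compute the LO matrix element and pdf factors
MockEvent gen_event() { return MockEvent{1.}; }

// stand-in for the (partial) unweighting decision
bool keep(MockEvent const & /*ev*/) { return true; }

int main() {
  std::size_t trials = 0;
  std::vector<MockEvent> events;

  // calibration phase: trial events are used to set up the unweighter
  constexpr std::size_t calibration_trials = 100;
  for (; trials < calibration_trials; ++trials) {
    const MockEvent ev = gen_event();
    if (keep(ev)) events.push_back(ev);
  }

  // keep generating (and potentially unweighting) events
  // until the user-defined target number is reached
  constexpr std::size_t target_events = 1000;
  while (events.size() < target_events) {
    ++trials;
    const MockEvent ev = gen_event();
    if (keep(ev)) events.push_back(ev);
  }

  // adjust the weights according to the number of required trials
  // (here simply dividing by the trial count, one possible convention),
  // then pass the events on to the analyses and the event writer
  for (MockEvent & ev : events) ev.weight /= trials;
}
\end{lstlisting}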
We first construct the user-defined number of $n_p$ partons (by default gluons) in \lstinline!PhaseSpacePoint::gen_LO_partons!. We use flat sampling in rapidity and azimuthal angle. The scalar transverse momentum is generated from a random variable $x_{p_\perp}$ according to \begin{equation} \label{eq:pt_sampling} p_\perp = p_{\perp,\text{min}} + p_{\perp,\text{par}} \tan\left( x_{p_\perp} \arctan\left( \frac{p_{\perp,\text{max}} - p_{\perp,\text{min}}}{p_{\perp,\text{par}}} \right) \right)\,, \end{equation} where $p_{\perp,\text{min}}$ is the minimum jet transverse momentum, $p_{\perp,\text{max}}$ is the maximum transverse parton momentum, tentatively set to the beam energy, and $p_{\perp,\text{par}}$ is a generation parameter set to the heuristically determined value of \begin{equation} \label{eq:pt_par} p_{\perp,\text{par}}=p_{\perp,\min}+\frac{n_p}{5}. \end{equation} The problem with this generation is that the transverse momenta peak at the minimum transverse momentum required for fixed-order jets. However, if we use the generated events as input for \HEJ resummation, events with such soft transverse momenta hardly contribute, see section~\ref{sec:ptj_res}. To generate efficient input for resummation, there is the user option \texttt{peak pt}, which specifies the dominant transverse momentum for resummation jets. If this option is set, most jets will be generated as above, but with $p_{\perp,\text{min}}$ set to the peak transverse momentum $p_{\perp, \text{peak}}$. In addition, there is a small chance of around $2\%$ of generating softer jets. The heuristic ansatz for the transverse momentum distribution in the ``soft'' region is \begin{equation} \label{FO_pt_soft} \frac{\partial \sigma}{\partial p_\perp} \propto e^{n_p\frac{p_\perp- p_{\perp, \text{peak}}}{\bar{p}_\perp}}\,, \end{equation} where $n_p$ is the number of partons and $\bar{p}_\perp \approx 4\,$GeV. To achieve this distribution, we use \begin{equation} \label{eq:FO_pt_soft_sampling} p_\perp = p_{\perp, \text{peak}} + \bar{p}_\perp \frac{\log x_{p_\perp}}{n_p} \end{equation} and discard the phase space point if the parton is too soft, i.e. below the threshold for fixed-order jets. After ensuring that all partons form separate jets, we generate any potential colourless emissions. We then determine the incoming momenta and flavours in \lstinline!PhaseSpacePoint::reconstruct_incoming! and adjust the outgoing flavours to ensure an FKL configuration. Finally, we may reassign outgoing flavours to generate suppressed (for example unordered) configurations. \input{currents} \appendix \section{Continuous Integration} \label{sec:CI} Whenever you are implementing something new or fixing a bug, please also add a test for the new behaviour to \texttt{t/CMakeLists.txt} via \lstinline!add_test!. These tests can be triggered by running \lstinline!make test! or \lstinline!ctest! after compiling. A typical test should take at most a few seconds, so that it can be run on every commit by each developer. If you require a longer, more thorough test, preferably in addition to a small one, surround it with \begin{lstlisting}[caption={}] if(${TEST_ALL}) add_test( NAME t_feature COMMAND really_long_test ) endif() \end{lstlisting} Afterwards you can execute the longer tests with\footnote{No recompiling is needed, as long as only the \lstinline!add_test!
command is guarded, not the compile commands themselves.} \begin{lstlisting}[language=sh,caption={}] cmake base/directory -DTEST_ALL=TRUE make test \end{lstlisting} On top of that you should add \href{https://en.cppreference.com/w/cpp/error/assert}{\lstinline!assert!s} in the code itself. They are only executed when compiled with \lstinline!CMAKE_BUILD_TYPE=Debug!, without slowing down release code. So you can use them everywhere to test \textit{expected} or \textit{assumed} behaviour, e.g. requiring a Higgs boson or relying on rapidity ordering. GitLab provides ways to directly test code via \textit{Continuous Integration} (CI). The CI is controlled by \texttt{.gitlab-ci.yml}. For all options for the YAML file see \href{https://docs.gitlab.com/ee/ci/yaml/}{docs.gitlab.com/ee/ci/yaml/}. GitLab also provides a small tool to check that the YAML syntax is correct under \lstinline!CI/CD > Pipelines > CI Lint! or \href{https://gitlab.dur.scotgrid.ac.uk/hej/HEJ/-/ci/lint}{gitlab.dur.scotgrid.ac.uk/hej/HEJ/-/ci/lint}. Currently the CI is configured to trigger a \textit{Pipeline} on each \lstinline!git push!. The corresponding \textit{GitLab runners} are configured under \lstinline!CI/CD Settings>Runners! in the GitLab UI. All runners use a \href{https://www.docker.com/}{docker} image as virtual environment\footnote{To use only Docker runners set the \lstinline!docker! tag in \texttt{.gitlab-ci.yml}.}. The specific Docker images are maintained separately. If you add a new dependency, please also provide a Docker image for the CI. The goal is to be able to test \HEJ with all possible configurations. Each pipeline contains multiple stages (see \lstinline!stages! in \texttt{.gitlab-ci.yml}) which are executed in order from top to bottom. Additionally, each stage contains multiple jobs. For example, the stage \lstinline!build! contains the jobs \lstinline!build:basic!, \lstinline!build:qcdloop!, \lstinline!build:rivet!, etc., which compile \HEJ for different environments and dependencies by using different Docker images. Jobs starting with a dot are ignored by the Runner, e.g. \lstinline!.HEJ_build! is only used as a template, but never executed directly. The next stage only starts after all jobs of the previous stage have finished without error. To pass information between multiple stages we use \lstinline!artifacts!. The runner will automatically load all artifacts from all \lstinline!dependencies! for each job\footnote{If no dependencies are defined \textit{all} artifacts from all previous jobs are downloaded. Thus please specify an empty \lstinline!dependencies! list if you do not want to load any artifacts.}. For example the compiled \HEJ code from \lstinline!build:basic! gets loaded in \lstinline!test:basic! and \lstinline!FOG:build:basic!, without recompiling \HEJ again. Additionally, artifacts can be downloaded from the GitLab web page, which can be handy for debugging. We also trigger some jobs \lstinline!only! on specific events. For example, we only push the code to \href{https://phab.hepforge.org/source/hej/repository/v2.0/}{HepForge} on release branches (e.g. v2.0). We also only execute the \textit{long} tests for merge requests, on pushes to any release or the \lstinline!master! branch, or when triggered manually from the GitLab web page. The actual commands are given in the \lstinline!before_script!, \lstinline!script! and \lstinline!after_script!\footnote{\lstinline!after_script!
is always executed} sections, and are standard Linux shell commands (dependent on the docker image). Any failed command, i.e. one returning a non-zero exit status, stops the job and makes the whole pipeline fail. Most tests just run \lstinline!make test! or are based on it. Thus, to emphasise it again, write tests for your code in \lstinline!cmake!. The CI is only intended to make automated testing in different environments easier. \section{Monte Carlo uncertainty} \label{sec:MC_err} Since \HEJ reweights each fixed-order point with multiple resummation events, the Monte Carlo uncertainty of \HEJ is a little more involved than usual. We start by defining the \HEJ cross section after $N$ FO points \begin{align} \sigma_N:=\sum_{i}^N x_i \sum_{j}^{M_i} y_{i,j}=:\sum_i^N\sum_j^{M_i} w_{i,j}, \end{align} where $x_i$ are the FO weights\footnote{In this definition $x_i$ can be zero, see the discussion in the next section.}, $y_{i,j}$ are the reweighting weights, and $M_i$ the number of resummation points. We can set $M=M_i \forall i$ by potentially adding some points with $y_{i,j}=0$, i.e. $M$ corresponds to the \lstinline!trials! in \lstinline!EventReweighter!. $w_{i,j}$ are the weights as written out by \HEJ. The expectation value of $\sigma$ is then \begin{align} \ev{\sigma_N}= \sum_i \ev{x_i}\sum_j\ev{y_{i,j}}=M \mu_x\sum_i\mu_{y_i},\label{eq:true_sigma} \end{align} with $\mu_{x/y}$ being the (true) mean value of $x$ or $y$, i.e. \begin{align} \mu_{x}:=\ev{\bar{x}}=\ev{\frac{\sum_i x_i}{N}}=\ev{x}. \end{align} The true underlying standard deviation of $\sigma_N$, assuming $\delta_{x}$ and $\delta_{y_i}$ are the standard deviations of $x$ and $y_i$, is \begin{align} \delta_{\sigma_N}^2&=M^2 \delta_{x}^2 \sum_i \mu_{y_i}^2 +M \mu_x^2 \sum_i \delta_{y_i}^2. \label{eq:true_err} \end{align} Notice that each point $i$ can have a different expectation for $y_i$. Since we do not know the true distributions of $x$ and $y$ we need to estimate them. We use the deviations \begin{align} \tilde{\delta}_{x_i}^2&:=\left(x_i-\bar x\right)^2 =\left(\frac{N-1}{N} x_i - \frac{\sum_{j\neq i} x_j}{N}\right)^2 \label{eq:err_x}\\ \tilde{\delta}_{y_{i,j}}^2&:=\left(y_{i,j}-\bar y_i\right)^2 \label{eq:err_y}, \end{align} and the mean values $\bar x$ and $\bar y$, to get an estimator for $\delta_{\sigma_N}$ \begin{align} \tilde\delta_{\sigma_N}^2&=M^2 \sum_i \tilde\delta_{x_i}^2 \bar{y_i}^2 +\sum_{i,j} x_i^2\tilde\delta_{y_{i,j}}^2. \label{eq:esti_err} \end{align} Through error propagation we can connect the estimated uncertainties back to the fundamental ones \begin{align} \delta_{\tilde{\delta}_{x_i}}^2=\frac{N-1}{N} \delta_x^2. \end{align} Together with $\delta_x^2=\ev{x^2}-\ev{x}^2$ and $\ev{\tilde\delta}=0$ this leads to \begin{align} \ev{\tilde{\delta}_{x_i}^2 \bar y_i^2}&=\ev{\tilde{\delta}_{x_i} \bar y_i}^2 +\delta_{\tilde{\delta}_{x_i}}^2 \mu_{y_i}^2 +\delta_{y_i}^2 \mu_{\tilde\delta}^2 \\ &=\frac{N-1}{N} \delta_x^2\mu_{y_i}^2, \end{align} and a similar result for $y$. Therefore \begin{align} \ev{\tilde\delta_{\sigma_N}^2}=\frac{N-1}{N} M^2 \delta_{x}^2 \sum_i \mu_{y_i}^2 +\frac{M-1}{M} M \mu_x^2 \sum_i \delta_{y_i}^2, \end{align} where we can compensate for the additional factors compared to~\eqref{eq:true_err} by replacing \begin{align} \tilde\delta_x&\to\frac{N}{N-1}\tilde\delta_x \label{eq:xcom_bias}\\ \tilde\delta_{y_i}&\to\frac{M}{M-1}\tilde\delta_{y_i}. \label{eq:ycom_bias} \end{align} Thus~\eqref{eq:esti_err} is an unbiased estimator of $\delta_{\sigma_N}$.
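For illustration, the estimator~\eqref{eq:esti_err} including the bias compensations~\eqref{eq:xcom_bias}, \eqref{eq:ycom_bias} can be coded up directly. The following is a small standalone sketch (not part of \HEJ) that assumes the full set of trial weights $x_i$ and $y_{i,j}$, including zero entries, is available and that $N, M \geq 2$:
\begin{lstlisting}[language=C++,caption={}]
#include <cstddef>
#include <vector>

// Bias-corrected estimate of the squared Monte Carlo error on sigma_N,
// following eq. (esti_err) with the replacements (xcom_bias), (ycom_bias).
// x[i]    : fixed-order weight of FO point i  (N entries)
// y[i][j] : reweighting weights of FO point i (M entries each)
double error_squared(
    std::vector<double> const & x,
    std::vector<std::vector<double>> const & y
) {
  const std::size_t N = x.size();
  const std::size_t M = y.front().size();  // same number of trials per FO point
  double xbar = 0.;
  for (double xi: x) xbar += xi;
  xbar /= N;

  const double cx = static_cast<double>(N)/(N - 1);  // compensation for x
  const double cy = static_cast<double>(M)/(M - 1);  // compensation for y

  double err2 = 0.;
  for (std::size_t i = 0; i < N; ++i) {
    double ybar = 0.;
    for (double yij: y[i]) ybar += yij;
    ybar /= M;
    const double dx = cx*(x[i] - xbar);              // corrected delta_{x_i}
    err2 += static_cast<double>(M)*M*dx*dx*ybar*ybar;
    for (double yij: y[i]) {
      const double dy = cy*(yij - ybar);             // corrected delta_{y_{i,j}}
      err2 += x[i]*x[i]*dy*dy;
    }
  }
  return err2;
}
\end{lstlisting}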
\subsection{Number of events vs. number of trials} Even though the above calculation is completely valid, it is impractical. Both $x_i$ and $y_{ij}$ could be zero, but zero-weight events are typically not written out. In that sense $N$ and $M$ are the \textit{number of trials} it took to generate $N'$ and $M'$ (non-zero) events. We cannot naively replace all $N$ and $M$ with $N'$ and $M'$ in the above equations, since this would also change the definition of the averages $\bar x$ and $\bar y$. For illustration let us consider unweighted events, with all weights equal to $x'$, without changing the cross section $\sum_i^N x_i=\sum_i^{N'} x'_i=N' x'$. Then the average trial weight is unequal to the average event weight \begin{align} \bar x = \frac{\sum_i^{N} x_i}{N} = \frac{\sum_i^{N'} x'}{N}=x'\frac{N'}{N} \neq x'=\frac{\sum_i^{N'} x'}{N'}. \end{align} $N=N'$ would correspond to a $100\%$ efficient unweighting, i.e. a perfect sampling, where we know the analytical result. In particular, using $N'$ instead of $N$ in the standard deviation gives \begin{align} \sum_i \left(x_i-\frac{\sum_i^{N} x_i}{N'}\right)^2=\sum_i \left(x'-x' \frac{\sum_i^{N'}}{N'}\right)^2=0, \end{align} which is obviously not true in general for $\tilde\delta^2_x$. Hence we would have to use the number of trials $N$ everywhere. This would require an additional parameter to be passed with each event, which is not always available in practice\footnote{\texttt{Sherpa} gives the number of trials as an \lstinline!attribute::trials! of \lstinline!HEPEUP! in the \texttt{LHE} file, or similarly as a data member in the HDF5 format \cite{Hoeche:2019rti}. The \texttt{LHE} standard itself provides the variable \lstinline!ntries! per event (see \href{https://phystev.cnrs.fr/wiki/2017:groups:tools:lhe}{this proposal}), though I have not seen this used anywhere.}. Instead we use \begin{align} \tilde\delta_{x}'^2:=\sum_i^{N} x_i^2\geq\tilde\delta_x^2, \label{eq:err_prac} \end{align} where the bias of $\tilde\delta_x'^2$ vanishes for large $N$. Thus we can use the sum of weight squares~\eqref{eq:err_prac} instead of~\eqref{eq:err_x} and~\eqref{eq:err_y}, without worrying about the difference between trials and generated events. The total error~\eqref{eq:esti_err} becomes \begin{align} \tilde\delta_{\sigma_N}^2=\sum_i \left(\sum_j w_{i,j}\right)^2+\sum_{i,j} \left(w_{i,j}\right)^2, \end{align} which (conveniently) depends only on the \HEJ weights $w_{i,j}$. \section{Explicit formulas for vector currents} \label{sec:j_vec} Using eqs.~\eqref{eq:u_plus}, \eqref{eq:u_minus}, \eqref{eq:jminus_gen}, \eqref{eq:Pauli_matrices} and \eqref{eq:jplus_gen}, the vector currents read \begin{align} \label{eq:j-_explicit} j^-_\mu(p, q) ={}& \begin{pmatrix} \sqrt{p^+\,q^+} + \sqrt{p^-\,q^-} \hat{p}_{\perp} \hat{q}_{\perp}^*\\ \sqrt{p^-\,q^+}\, \hat{p}_{\perp} + \sqrt{p^+\,q^-}\,\hat{q}_{\perp}^*\\ -i \sqrt{p^-\,q^+}\, \hat{p}_{\perp} + i \sqrt{p^+\,q^-}\, \hat{q}_{\perp}^*\\ \sqrt{p^+\,q^+} - \sqrt{p^-\,q^-}\, \hat{p}_{\perp}\, \hat{q}_{\perp}^* \end{pmatrix}\,,\\ j^+_\mu(p, q) ={}&\big(j^-_\mu(p, q)\big)^*\,,\\ j^\pm_\mu(q, p) ={}&\big(j^\pm_\mu(p, q)\big)^*\,. \end{align} If $q= p_{\text{in}}$ is the momentum of an incoming parton, we have $\hat{p}_{\text{in} \perp} = -1$ and either $p_{\text{in}}^+ = 0$ or $p_{\text{in}}^- = 0$.
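The general expression~\eqref{eq:j-_explicit} can be transcribed almost literally into code. The following standalone sketch uses a deliberately minimal four-vector type as a stand-in for the \lstinline!CLHEP::HepLorentzVector! based helpers in \texttt{include/HEJ/LorentzVector.hh}; it is meant as an illustration, not as the \HEJ implementation:
\begin{lstlisting}[language=C++,caption={}]
#include <array>
#include <cmath>
#include <complex>

using COM = std::complex<double>;

// minimal stand-in four-vector (E, x, y, z); assumes non-vanishing p_perp
struct FourVector { double e, x, y, z; };

double plus(FourVector const & p) { return p.e + p.z; }
double minus(FourVector const & p) { return p.e - p.z; }
COM perphat(FourVector const & p) {
  const COM perp{p.x, p.y};
  return perp/std::abs(perp);
}

// j^-_mu(p, q) as given in eq. (j-_explicit), component by component
std::array<COM, 4> j_minus(FourVector const & p, FourVector const & q) {
  const double pp = plus(p),  pm = minus(p);
  const double qp = plus(q),  qm = minus(q);
  const COM phat  = perphat(p);
  const COM qhatc = std::conj(perphat(q));
  const COM I{0., 1.};
  const COM j0 =    std::sqrt(pp*qp)      + std::sqrt(pm*qm)*phat*qhatc;
  const COM j1 =    std::sqrt(pm*qp)*phat + std::sqrt(pp*qm)*qhatc;
  const COM j2 = -I*std::sqrt(pm*qp)*phat + I*std::sqrt(pp*qm)*qhatc;
  const COM j3 =    std::sqrt(pp*qp)      - std::sqrt(pm*qm)*phat*qhatc;
  return {j0, j1, j2, j3};
}
\end{lstlisting}
The positive-helicity current $j^+_\mu(p,q)$ then follows by complex conjugation, as stated above.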
The current simplifies further:\todo{Helicities flipped w.r.t code} \begin{align} \label{eq:j_explicit} j^-_\mu(p_{\text{out}}, p_{\text{in}}) ={}& \begin{pmatrix} \sqrt{p_{\text{in}}^+\,p_{\text{out}}^+}\\ \sqrt{p_{\text{in}}^+\,p_{\text{out}}^-} \ \hat{p}_{\text{out}\,\perp}\\ -i\,j^-_1\\ j^-_0 \end{pmatrix} & p_{\text{in}\,z} > 0\,,\\ j^-_\mu(p_{\text{out}}, p_{\text{in}}) ={}& \begin{pmatrix} -\sqrt{p_{\text{in}}^-\,p_{\text{out}}^{-\phantom{+}}} \ \hat{p}_{\text{out}\,\perp}\\ - \sqrt{p_{\text{in}}^-\,p_{\text{out}}^+}\\ i\,j^-_1\\ -j^-_0 \end{pmatrix} & p_{\text{in}\,z} < 0\,. \end{align} \bibliographystyle{JHEP} \bibliography{biblio} \end{document} diff --git a/include/HEJ/Config.hh b/include/HEJ/Config.hh index 4e3acd1..6867e0b 100644 --- a/include/HEJ/Config.hh +++ b/include/HEJ/Config.hh @@ -1,255 +1,254 @@ /** \file * \brief HEJ 2 configuration parameters * * \authors The HEJ collaboration (see AUTHORS for details) * \date 2019-2020 * \copyright GPLv2 or later */ #pragma once #include #include #include #include #include "fastjet/JetDefinition.hh" #include "yaml-cpp/yaml.h" #include "HEJ/Constants.hh" #include "HEJ/EWConstants.hh" #include "HEJ/Fraction.hh" #include "HEJ/HiggsCouplingSettings.hh" #include "HEJ/ScaleFunction.hh" #include "HEJ/event_types.hh" #include "HEJ/optional.hh" #include "HEJ/output_formats.hh" namespace HEJ { //! Jet parameters struct JetParameters{ fastjet::JetDefinition def; /**< Jet Definition */ double min_pt{}; /**< Minimum Jet Transverse Momentum */ }; //! Settings for scale variation struct ScaleConfig{ //! Base scale choices std::vector base; //! Factors for multiplicative scale variation std::vector factors; //! Maximum ratio between renormalisation and factorisation scale double max_ratio{}; }; //! Settings for random number generator struct RNGConfig { //! Random number generator name std::string name; //! Optional initial seed optional seed; }; //! Settings for partial unweighting struct PartialUnweightConfig { //! Number of trials for training size_t trials; //! Maximum distance in standard deviations from mean logarithmic weight double max_dev; }; /**! Possible treatments for fixed-order input events. * * The program will decide on how to treat an event based on * the value of this enumeration. */ enum class EventTreatment{ reweight, /**< Perform resummation */ keep, /**< Keep the event */ discard, /**< Discard the event */ }; //! Container to store the treatments for various event types using EventTreatMap = std::map; //! Possible setting for the event weight enum class WeightType{ weighted, //!< weighted events unweighted_resum, //!< unweighted only resummation part partially_unweighted //!< mixed weighted and unweighted }; /**! Input parameters. * * This struct handles stores all configuration parameters * needed in a HEJ 2 run. * * \internal To add a new option: * 1. Add a member to the Config struct. * 2. Inside "src/YAMLreader.cc": * - Add the option name to the "supported" Node in * get_supported_options. * - Initialise the new Config member in to_Config. * The functions set_from_yaml (for mandatory options) and * set_from_yaml_if_defined (non-mandatory) may be helpful. * 3. Add a new entry (with short description) to config.yaml * 4. Update the user documentation in "doc/Sphinx/" */ struct Config { //! %Parameters for scale variation ScaleConfig scales; //! Resummation jet properties JetParameters resummation_jets; //! Fixed-order jet properties JetParameters fixed_order_jets; //! Minimum transverse momentum for extremal partons //! 
\deprecated This will be removed in future versions. //! Use \ref soft_pt_regulator instead. double min_extparton_pt = 0.; //! \deprecated This is equivalent to\ref soft_pt_regulator //! and will be removed in future versions. optional> max_ext_soft_pt_fraction{}; //! @brief Maximum transverse momentum fraction from soft radiation in any //! tagging jet (e.g. extremal or qqx jet) Fraction soft_pt_regulator{ DEFAULT_SOFT_PT_REGULATOR }; //! The regulator lambda for the subtraction terms double regulator_lambda = CLAMBDA; //! Number of resummation configurations to generate per fixed-order event size_t trials{}; //! Maximal number of events optional max_events; //! Whether to include the logarithmic correction from \f$\alpha_s\f$ running bool log_correction{}; //! Event output files names and formats std::vector output; //! Parameters for random number generation RNGConfig rng; //! Map to decide what to do for different event types EventTreatMap treat; //! %Parameters for custom analysis //! @deprecated use analyses_parameters instead YAML::Node analysis_parameters; //! %Parameters for custom analyses std::vector analyses_parameters; //! Settings for effective Higgs-gluon coupling HiggsCouplingSettings Higgs_coupling; //! elector weak parameters EWConstants ew_parameters; //! Type of event weight e.g. (un)weighted WeightType weight_type; //! Settings for partial unweighting optional unweight_config; }; //! Configuration options for the PhaseSpacePoint class struct PhaseSpacePointConfig { PhaseSpacePointConfig() = default; PhaseSpacePointConfig( JetParameters jet_param, double min_extparton_pt = 0., Fraction soft_pt_regulator = Fraction{DEFAULT_SOFT_PT_REGULATOR} ): - jet_param{jet_param}, + jet_param{std::move(jet_param)}, min_extparton_pt{min_extparton_pt}, - max_ext_soft_pt_fraction{}, soft_pt_regulator{std::move(soft_pt_regulator)} {} //! Properties of resummation jets JetParameters jet_param; //! Minimum transverse momentum for extremal partons //! \deprecated This will be removed in future versions. //! Use \ref soft_pt_regulator instead. double min_extparton_pt = 0.; //! \deprecated This is equivalent to\ref soft_pt_regulator //! and will be removed in future versions. optional> max_ext_soft_pt_fraction{}; //! @brief Maximum transverse momentum fraction from soft radiation in any //! tagging jet (e.g. extremal or qqx jet) Fraction soft_pt_regulator{ DEFAULT_SOFT_PT_REGULATOR }; }; //! Configuration options for the MatrixElement class struct MatrixElementConfig { MatrixElementConfig() = default; MatrixElementConfig( bool log_correction, HiggsCouplingSettings Higgs_coupling, EWConstants ew_parameters, double regulator_lambda = CLAMBDA ): log_correction{log_correction}, Higgs_coupling{std::move(Higgs_coupling)}, ew_parameters{std::move(ew_parameters)}, regulator_lambda{regulator_lambda} {} //! Whether to include the logarithmic correction from \f$\alpha_s\f$ running bool log_correction{}; //! Settings for effective Higgs-gluon coupling HiggsCouplingSettings Higgs_coupling; //! elector weak parameters EWConstants ew_parameters; //! The regulator lambda for the subtraction terms double regulator_lambda = CLAMBDA; }; //! Configuration options for the EventReweighter class struct EventReweighterConfig { //! Settings for phase space point generation PhaseSpacePointConfig psp_config; //! Settings for matrix element calculation MatrixElementConfig ME_config; //! Access properties of resummation jets JetParameters & jet_param() { return psp_config.jet_param;} //! 
Access properties of resummation jets (const version) JetParameters const & jet_param() const { return psp_config.jet_param;} //! Treatment of the various event types EventTreatMap treat; }; /**! Extract PhaseSpacePointConfig from Config * * \internal We do not provide a PhaseSpacePointConfig constructor from Config * so that PhaseSpacePointConfig remains an aggregate. * This faciliates writing client code (e.g. the HEJ fixed-order generator) * that creates a PhaseSpacePointConfig *without* a Config object. * * @see to_MatrixElementConfig, to_EventReweighterConfig */ inline PhaseSpacePointConfig to_PhaseSpacePointConfig(Config const & conf) { return { conf.resummation_jets, conf.min_extparton_pt, conf.max_ext_soft_pt_fraction?*conf.max_ext_soft_pt_fraction :conf.soft_pt_regulator }; } /**! Extract MatrixElementConfig from Config * * @see to_PhaseSpacePointConfig, to_EventReweighterConfig */ inline MatrixElementConfig to_MatrixElementConfig(Config const & conf) { return {conf.log_correction, conf.Higgs_coupling, conf.ew_parameters, conf.regulator_lambda}; } /**! Extract EventReweighterConfig from Config * * @see to_PhaseSpacePointConfig, to_MatrixElementConfig */ inline EventReweighterConfig to_EventReweighterConfig(Config const & conf) { return { to_PhaseSpacePointConfig(conf), to_MatrixElementConfig(conf), conf.treat }; } } // namespace HEJ diff --git a/include/HEJ/Event.hh b/include/HEJ/Event.hh index d9cc041..480aadf 100644 --- a/include/HEJ/Event.hh +++ b/include/HEJ/Event.hh @@ -1,374 +1,374 @@ /** \file * \brief Declares the Event class and helpers * * \authors The HEJ collaboration (see AUTHORS for details) * \date 2019-2020 * \copyright GPLv2 or later */ #pragma once #include #include #include #include #include #include #include #include "boost/iterator/filter_iterator.hpp" #include "fastjet/ClusterSequence.hh" #include "fastjet/PseudoJet.hh" #include "HEJ/Constants.hh" -#include "HEJ/event_types.hh" #include "HEJ/Parameters.hh" #include "HEJ/Particle.hh" +#include "HEJ/event_types.hh" namespace LHEF { class HEPEUP; class HEPRUP; } namespace fastjet { class JetDefinition; } namespace HEJ { struct RNG; struct UnclusteredEvent; /** @brief An event with clustered jets * * This is the main HEJ 2 event class. * It contains kinematic information including jet clustering, * parameter (e.g. scale) settings and the event weight. */ class Event { public: class EventData; //! Iterator over partons using ConstPartonIterator = boost::filter_iterator< bool (*)(Particle const &), std::vector::const_iterator >; //! Reverse Iterator over partons using ConstReversePartonIterator = std::reverse_iterator< ConstPartonIterator>; //! No default Constructor Event() = delete; //! Event Constructor adding jet clustering to an unclustered event //! @deprecated UnclusteredEvent will be replaced by EventData in HEJ 2.2.0 [[deprecated("UnclusteredEvent will be replaced by EventData")]] Event( UnclusteredEvent const & ev, fastjet::JetDefinition const & jet_def, double min_jet_pt ); //! @name Particle Access //! @{ //! Incoming particles std::array const & incoming() const{ return incoming_; } //! Outgoing particles std::vector const & outgoing() const{ return outgoing_; } //! Iterator to the first outgoing parton ConstPartonIterator begin_partons() const; //! Iterator to the first outgoing parton ConstPartonIterator cbegin_partons() const; //! Iterator to the end of the outgoing partons ConstPartonIterator end_partons() const; //! 
Iterator to the end of the outgoing partons ConstPartonIterator cend_partons() const; //! Reverse Iterator to the first outgoing parton ConstReversePartonIterator rbegin_partons() const; //! Reverse Iterator to the first outgoing parton ConstReversePartonIterator crbegin_partons() const; //! Reverse Iterator to the first outgoing parton ConstReversePartonIterator rend_partons() const; //! Reverse Iterator to the first outgoing parton ConstReversePartonIterator crend_partons() const; //! Particle decays /** * The key in the returned map corresponds to the index in the * vector returned by outgoing() */ std::unordered_map> const & decays() const { return decays_; } //! The jets formed by the outgoing partons, sorted in rapidity std::vector const & jets() const{ return jets_; } //! @} //! @name Weight variations //! @{ //! All chosen parameter, i.e. scale choices (const version) Parameters const & parameters() const{ return parameters_; } //! All chosen parameter, i.e. scale choices Parameters & parameters(){ return parameters_; } //! Central parameter choice (const version) EventParameters const & central() const{ return parameters_.central; } //! Central parameter choice EventParameters & central(){ return parameters_.central; } //! Parameter (scale) variations (const version) std::vector const & variations() const{ return parameters_.variations; } //! Parameter (scale) variations std::vector & variations(){ return parameters_.variations; } //! Parameter (scale) variation (const version) /** * @param i Index of the requested variation */ EventParameters const & variations(std::size_t i) const{ return parameters_.variations.at(i); } //! Parameter (scale) variation /** * @param i Index of the requested variation */ EventParameters & variations(std::size_t i){ return parameters_.variations.at(i); } //! @} //! Indices of the jets the outgoing partons belong to /** * @param jets Jets to be tested * @returns A vector containing, for each outgoing parton, * the index in the vector of jets the considered parton * belongs to. If the parton is not inside any of the * passed jets, the corresponding index is set to -1. */ std::vector particle_jet_indices( std::vector const & jets ) const { return cs_.particle_jet_indices(jets); } //! particle_jet_indices() of the Event jets() std::vector particle_jet_indices() const { return particle_jet_indices(jets()); } //! Jet definition used for clustering fastjet::JetDefinition const & jet_def() const{ return cs_.jet_def(); } //! Minimum jet transverse momentum double min_jet_pt() const{ return min_jet_pt_; } //! Event type event_type::EventType type() const{ return type_; } //! Give colours to each particle /** * @returns true if new colours are generated, i.e. same as is_resummable() * @details Colour ordering is done according to leading colour in the MRK * limit, see \cite Andersen:2011zd. This only affects \ref * is_resummable() "HEJ" configurations, all other \ref event_type * "EventTypes" will be ignored. * @note This overwrites all previously set colours. */ bool generate_colours(RNG & /*ran*/); //! Check that current colours are leading in the high energy limit /** * @details Checks that the colour configuration can be split up in * multiple, rapidity ordered, non-overlapping ladders. 
Such * configurations are leading in the MRK limit, see * \cite Andersen:2011zd * * @note This is _not_ to be confused with \ref is_resummable(), however * for all resummable states it is possible to create a leading colour * configuration, see generate_colours() */ bool is_leading_colour() const; /** * @brief Check if given event could have been produced by HEJ * @details A HEJ state has to fulfil: * 1. type() has to be \ref is_resummable() "resummable" * 2. Soft radiation in the tagging jets contributes at most to * `soft_pt_regulator` of the total jet \f$ p_\perp \f$ * * @note This is true for any resummed stated produced by the * EventReweighter or any \ref is_resummable() "resummable" Leading * Order state. * * @param soft_pt_regulator Maximum transverse momentum fraction from soft * radiation in tagging jets * @param min_pt Absolute minimal \f$ p_\perp \f$, * \b deprecated use soft_pt_regulator instead * @return True if this state could have been produced by HEJ */ bool valid_hej_state( double soft_pt_regulator = DEFAULT_SOFT_PT_REGULATOR, double min_pt = 0. ) const; private: //! \internal //! @brief Construct Event explicitly from input. /** This is only intended to be called from EventData. * * \warning The input is taken _as is_, sorting and classification has to be * done externally, i.e. by EventData */ Event( std::array && incoming, std::vector && outgoing, std::unordered_map> && decays, Parameters && parameters, fastjet::JetDefinition const & jet_def, double min_jet_pt ); //! Iterator over partons (non-const) using PartonIterator = boost::filter_iterator< bool (*)(Particle const &), std::vector::iterator >; //! Reverse Iterator over partons (non-const) using ReversePartonIterator = std::reverse_iterator; //! Iterator to the first outgoing parton (non-const) PartonIterator begin_partons(); //! Iterator to the end of the outgoing partons (non-const) PartonIterator end_partons(); //! Reverse Iterator to the first outgoing parton (non-const) ReversePartonIterator rbegin_partons(); //! Reverse Iterator to the first outgoing parton (non-const) ReversePartonIterator rend_partons(); std::array incoming_; std::vector outgoing_; std::unordered_map> decays_; std::vector jets_; Parameters parameters_; fastjet::ClusterSequence cs_; double min_jet_pt_; event_type::EventType type_; }; // end class Event //! Class to store general Event setup, i.e. Phase space and weights class Event::EventData { public: //! Default Constructor EventData() = default; //! Constructor from LesHouches event information EventData(LHEF::HEPEUP const & hepeup); //! Constructor with all values given EventData( std::array incoming, std::vector outgoing, std::unordered_map> decays, Parameters parameters ): incoming(std::move(incoming)), outgoing(std::move(outgoing)), decays(std::move(decays)), parameters(std::move(parameters)) {} //! Generate an Event from the stored EventData. /** * @details Do jet clustering and classification. * Use this to generate an Event. * * @note Calling this function destroys EventData * * @param jet_def Jet definition * @param min_jet_pt minimal \f$p_T\f$ for each jet * * @returns Full clustered and classified event. */ Event cluster( fastjet::JetDefinition const & jet_def, double min_jet_pt); //! Alias for cluster() Event operator()( fastjet::JetDefinition const & jet_def, double const min_jet_pt){ return cluster(jet_def, min_jet_pt); } //! Sort particles in rapidity void sort(); //! 
Reconstruct intermediate particles from final-state leptons /** * Final-state leptons are created from virtual photons, W, or Z bosons. * This function tries to reconstruct such intermediate bosons if they * are not part of the event record. */ void reconstruct_intermediate(); //! Incoming particles std::array incoming; //! Outcoing particles std::vector outgoing; //! Particle decays in the format {outgoing index, decay products} std::unordered_map> decays; //! Parameters, e.g. scale or inital weight Parameters parameters; }; // end class EventData //! Print Event std::ostream& operator<<(std::ostream & os, Event const & ev); //! Square of the partonic centre-of-mass energy \f$\hat{s}\f$ double shat(Event const & ev); //! Convert an event to a LHEF::HEPEUP LHEF::HEPEUP to_HEPEUP(Event const & event, LHEF::HEPRUP * /*heprup*/); // put deprecated warning at the end, so don't get the warning inside Event.hh, // additionally doxygen can not identify [[deprecated]] correctly struct [[deprecated("UnclusteredEvent will be replaced by EventData")]] UnclusteredEvent; //! An event before jet clustering //! @deprecated UnclusteredEvent will be replaced by EventData in HEJ 2.2.0 struct UnclusteredEvent{ //! Default Constructor UnclusteredEvent() = default; //! Constructor from LesHouches event information UnclusteredEvent(LHEF::HEPEUP const & hepeup); std::array incoming; /**< Incoming Particles */ std::vector outgoing; /**< Outgoing Particles */ //! Particle decays in the format {outgoing index, decay products} std::unordered_map> decays; //! Central parameter (e.g. scale) choice EventParameters central; std::vector variations; /**< For parameter variation */ }; } // namespace HEJ diff --git a/include/HEJ/LorentzVector.hh b/include/HEJ/LorentzVector.hh index 5efb532..4d23589 100644 --- a/include/HEJ/LorentzVector.hh +++ b/include/HEJ/LorentzVector.hh @@ -1,74 +1,91 @@ /** \file * \brief Auxiliary functions for Lorentz vectors * * \authors The HEJ collaboration (see AUTHORS for details) * \date 2019-2020 * \copyright GPLv2 or later */ #pragma once #include #include #include "CLHEP/Vector/LorentzVector.h" +#include "HEJ/Particle.hh" + namespace HEJ { //! "dot" product inline auto dot( CLHEP::HepLorentzVector const & pi, CLHEP::HepLorentzVector const & pj ) { return pi.dot(pj); } //! "angle" product angle(pi, pj) = \ std::complex angle( CLHEP::HepLorentzVector const & pi, CLHEP::HepLorentzVector const & pj ); //! "square" product square(pi, pj) = [i j] std::complex square( CLHEP::HepLorentzVector const & pi, CLHEP::HepLorentzVector const & pj ); //! Invariant mass inline auto m2(CLHEP::HepLorentzVector const & h1) { return h1.m2(); } //! Plus component inline auto plus(CLHEP::HepLorentzVector const & h1) { return h1.plus(); } //! Minus component inline auto minus(CLHEP::HepLorentzVector const & h1) { return h1.minus(); } inline auto perphat(CLHEP::HepLorentzVector const & h1) { const auto perp = std::complex{h1.x(), h1.y()}; return perp/std::abs(perp); } //! Split a single Lorentz vector into two lightlike Lorentz vectors /** * @param P Lorentz vector to be split * @returns A pair (p, q) of Lorentz vectors with P = p + q and p^2 = q^2 = 0 * * P.perp() has to be positive. * * p.e() is guaranteed to be positive. 
* In addition, if either of P.plus() or P.minus() is positive, * q.e() has the same sign as P.m2() */ std::pair split_into_lightlike(CLHEP::HepLorentzVector const & P); + + inline + CLHEP::HepLorentzVector to_HepLorentzVector(fastjet::PseudoJet const & mom){ + return {mom.px(), mom.py(), mom.pz(), mom.e()}; + } + + inline + CLHEP::HepLorentzVector to_HepLorentzVector(Particle const & particle){ + return to_HepLorentzVector(particle.p); + } + + inline + fastjet::PseudoJet to_PseudoJet(CLHEP::HepLorentzVector const & mom){ + return {mom.px(), mom.py(), mom.pz(), mom.e()}; + } } // namespace HEJ diff --git a/include/HEJ/PhaseSpacePoint.hh b/include/HEJ/PhaseSpacePoint.hh index 4e93b11..85fd82b 100644 --- a/include/HEJ/PhaseSpacePoint.hh +++ b/include/HEJ/PhaseSpacePoint.hh @@ -1,198 +1,204 @@ /** \file * \brief Contains the PhaseSpacePoint Class * * \authors The HEJ collaboration (see AUTHORS for details) * \date 2019-2020 * \copyright GPLv2 or later */ #pragma once #include #include #include +#include #include #include "fastjet/PseudoJet.hh" #include "HEJ/Config.hh" #include "HEJ/Event.hh" #include "HEJ/Particle.hh" #include "HEJ/StatusCode.hh" namespace HEJ { struct RNG; //! Generated point in resummation phase space class PhaseSpacePoint{ public: //! No default PhaseSpacePoint Constructor PhaseSpacePoint() = delete; //! PhaseSpacePoint Constructor /** * @param ev Clustered Jet Event * @param conf Configuration parameters * @param ran Random number generator */ PhaseSpacePoint( Event const & ev, PhaseSpacePointConfig conf, RNG & ran ); //! Get phase space point weight double weight() const{ return weight_; } //! Access incoming particles std::array const & incoming() const{ return incoming_; } //! Access outgoing particles std::vector const & outgoing() const{ return outgoing_; } //! Particle decays /** * The key in the returned map corresponds to the index in the * vector returned by outgoing() */ std::unordered_map> const & decays() const{ return decays_; } //! Status code of generation StatusCode status() const{ return status_; } static constexpr int NG_MAX = 1000; //!< maximum number of extra gluons private: friend Event::EventData to_EventData(PhaseSpacePoint psp); //! /internal returns the clustered jets sorted in rapidity std::vector cluster_jets( std::vector const & partons ) const; bool pass_resummation_cuts( std::vector const & jets ) const; bool pass_extremal_cuts( fastjet::PseudoJet const & ext_parton, fastjet::PseudoJet const & jet ) const; double estimate_emission_rapidity_range(Event const & event) const; double estimate_ng_mean(Event const & event) const; int sample_ng(Event const & event, RNG & ran); int sample_ng_jets(Event const & event, int ng, RNG & ran); double probability_in_jet(Event const & event) const; std::vector gen_non_jet( int ng_non_jet, double ptmin, double ptmax, RNG & ran ); void rescale_qqx_rapidities( std::vector & out_partons, std::vector const & jets, double ymin1, double ymax2, int qqxbackjet ); void rescale_rapidities( std::vector & partons, double ymin, double ymax ); - std::vector reshuffle( - std::vector const & Born_jets, + //! 
@return jets & Bosons + std::pair< std::vector, + optional > + reshuffle( + Event const & ev, fastjet::PseudoJet const & q ); /** \interal final jet test * - number of jets must match Born kinematics * - no partons designated as nonjet may end up inside jets * - all other outgoing partons *must* end up inside jets * - the extremal (in rapidity) partons must be inside the extremal jets * - rapidities must be the same (by construction) */ bool jets_ok( std::vector const & Born_jets, std::vector const & partons ) const; void reconstruct_incoming(std::array const & Born_incoming); /** \interal Distribute gluon in jet * @param jets jets to distribute gluon in * @param ng_jets number of gluons * @param qqxbackjet position of first (backwards) qqx jet * * relies on JetSplitter */ std::vector split( std::vector const & jets, int ng_jets, std::size_t qqxbackjet, RNG & ran ); std::vector distribute_jet_partons( int ng_jets, std::vector const & jets, RNG & ran ); std::vector split( std::vector const & jets, std::vector const & np_in_jet, std::size_t qqxbackjet, RNG & ran ); bool split_preserved_jets( std::vector const & jets, std::vector const & jet_partons ) const; template Particle const & most_backward_FKL( std::vector const & partons ) const; template Particle const & most_forward_FKL( std::vector const & partons ) const; template Particle & most_backward_FKL(std::vector & partons) const; template Particle & most_forward_FKL(std::vector & partons) const; bool extremal_ok( std::vector const & partons ) const; /** \internal * Assigns PDG IDs to outgoing partons, i.e. labels them as quarks * * \note This function assumes outgoing_ to be pure partonic when called, * i.e. A/W/Z/h bosons should _not be set_ at this stage */ void label_quarks(Event const & event); /** \internal * This function will label the qqx pair in a qqx event back to their * original types from the input event. */ void label_qqx(Event const & event); - void copy_AWZH_boson_from(Event const & event); + void boost_AWZH_boson_from( + fastjet::PseudoJet const & boosted_boson, Event const & event + ); bool momentum_conserved() const; bool contains_idx( fastjet::PseudoJet const & jet, fastjet::PseudoJet const & parton ) const; bool unob_, unof_, qqxb_, qqxf_, qqxmid_; double weight_; PhaseSpacePointConfig param_; std::array incoming_; std::vector outgoing_; //! \internal Particle decays in the format {outgoing index, decay products} std::unordered_map> decays_; StatusCode status_; }; //! Extract Event::EventData from PhaseSpacePoint Event::EventData to_EventData(PhaseSpacePoint psp); } // namespace HEJ diff --git a/include/HEJ/YAMLreader.hh b/include/HEJ/YAMLreader.hh index 5f45944..d9e3d9f 100644 --- a/include/HEJ/YAMLreader.hh +++ b/include/HEJ/YAMLreader.hh @@ -1,270 +1,270 @@ /** \file * \brief The file which handles the configuration file parameters * * The configuration files parameters are read and then stored * within this objects. * * \authors The HEJ collaboration (see AUTHORS for details) * \date 2019-2020 * \copyright GPLv2 or later */ #pragma once #include #include #include #include "yaml-cpp/yaml.h" #include "fastjet/JetDefinition.hh" #include "HEJ/Config.hh" #include "HEJ/Fraction.hh" #include "HEJ/PDG_codes.hh" #include "HEJ/exceptions.hh" #include "HEJ/optional.hh" #include "HEJ/utility.hh" namespace HEJ { struct OutputFile; //! Load configuration from file /** * @param config_file Name of the YAML configuration file * @returns The HEJ 2 configuration */ Config load_config(std::string const & config_file); //! 
Set option using the corresponding YAML entry /** * @param setting Option variable to be set * @param yaml Root of the YAML configuration * @param names Name of the entry * * If the entry does not exist or has the wrong type or format * an exception is thrown. * * For example * @code * set_from_yaml(foobar, yaml, "foo", "bar") * @endcode * is equivalent to * @code * foobar = yaml["foo"]["bar"].as() * @endcode * with improved diagnostics on errors. * * @see set_from_yaml_if_defined */ template void set_from_yaml( T & setting, YAML::Node const & yaml, YamlNames const & ... names ); //! Set option using the corresponding YAML entry, if present /** * @param setting Option variable to be set * @param yaml Root of the YAML configuration * @param names Name of the entry * * This function works similar to set_from_yaml, but does not * throw any exception if the requested YAML entry does not exist. * * @see set_from_yaml */ template void set_from_yaml_if_defined( T & setting, YAML::Node const & yaml, YamlNames const & ... names ); //! Extract jet parameters from YAML configuration JetParameters get_jet_parameters( YAML::Node const & node, std::string const & entry ); //! Extract Higgs coupling settings from YAML configuration HiggsCouplingSettings get_Higgs_coupling( YAML::Node const & node, std::string const & entry ); //! Extract the EW parameters from YAML configuration EWConstants get_ew_parameters(YAML::Node const & node); //! Extract scale setting parameters from YAML configuration ScaleConfig to_ScaleConfig(YAML::Node const & yaml); //! Extract random number generator settings from YAML configuration RNGConfig to_RNGConfig(YAML::Node const & node, std::string const & entry); //! Check whether all options in configuration are supported /** * @param conf Configuration to be checked * @param supported Tree of supported options * * If conf contains an entry that does not appear in supported * an unknown_option exception is thrown. Sub-entries of "analysis" * (if present) are not checked. * * @see unknown_option */ void assert_all_options_known( YAML::Node const & conf, YAML::Node const & supported ); namespace detail{ void set_from_yaml(fastjet::JetAlgorithm & setting, YAML::Node const & yaml); void set_from_yaml(EventTreatment & setting, YAML::Node const & yaml); void set_from_yaml(ParticleID & setting, YAML::Node const & yaml); void set_from_yaml(OutputFile & setting, YAML::Node const & yaml); void set_from_yaml(WeightType & setting, YAML::Node const & yaml); inline void set_from_yaml(YAML::Node & setting, YAML::Node const & yaml){ setting = yaml; } template void set_from_yaml(Scalar & setting, YAML::Node const & yaml){ assert(yaml); if(!yaml.IsScalar()){ throw invalid_type{"value is not a scalar"}; } try{ setting = yaml.as(); } catch(...){ throw invalid_type{ "value " + yaml.as() + " cannot be converted to a " + type_string(setting) }; } } template void set_from_yaml(optional & setting, YAML::Node const & yaml){ - T tmp; + T tmp{}; set_from_yaml(tmp, yaml); setting = tmp; } template void set_from_yaml(std::vector & setting, YAML::Node const & yaml){ assert(yaml); // special case: treat a single value like a vector with one element if(yaml.IsScalar()){ setting.resize(1); return set_from_yaml(setting.front(), yaml); } if(yaml.IsSequence()){ setting.resize(yaml.size()); for(size_t i = 0; i < setting.size(); ++i){ set_from_yaml(setting[i], yaml[i]); } return; } throw invalid_type{""}; } template void set_from_yaml( T & setting, YAML::Node const & yaml, FirstName const & name, YamlNames && ... 
names ){ if(!yaml[name]) throw missing_option{""}; set_from_yaml( setting, yaml[name], std::forward(names)... ); } template void set_from_yaml_if_defined(T & setting, YAML::Node const & yaml){ return set_from_yaml(setting, yaml); } template void set_from_yaml_if_defined( T & setting, YAML::Node const & yaml, FirstName const & name, YamlNames && ... names ){ if(!yaml[name]) return; set_from_yaml_if_defined( setting, yaml[name], std::forward(names)... ); } } // namespace detail template void set_from_yaml( T & setting, YAML::Node const & yaml, YamlNames const & ... names ){ try{ detail::set_from_yaml(setting, yaml, names...); } catch(invalid_type const & ex){ throw invalid_type{ "In option " + join(": ", names...) + ": " + ex.what() }; } catch(missing_option const &){ throw missing_option{ "No entry for mandatory option " + join(": ", names...) }; } catch(std::invalid_argument const & ex){ throw missing_option{ "In option " + join(": ", names...) + ":" " invalid value " + ex.what() }; } } template void set_from_yaml_if_defined( T & setting, YAML::Node const & yaml, YamlNames const & ... names ){ try{ detail::set_from_yaml_if_defined(setting, yaml, names...); } catch(invalid_type const & ex){ throw invalid_type{ "In option " + join(": ", names...) + ": " + ex.what() }; } catch(std::invalid_argument const & ex){ throw missing_option{ "In option " + join(": ", names...) + ":" " invalid value " + ex.what() }; } } } // namespace HEJ namespace YAML { template<> struct convert { static Node encode(HEJ::OutputFile const & outfile); static bool decode(Node const & node, HEJ::OutputFile & out); }; template struct convert> { static Node encode(HEJ::Fraction const & f) { return encode(Real{f}); } static bool decode(Node const & node, HEJ::Fraction & f) { Real r; if(!convert::decode(node, r)) return false; f = r; return true; } }; } // namespace YAML diff --git a/include/HEJ/resummation_jet.hh b/include/HEJ/resummation_jet.hh index dcc1c0d..83e5eba 100644 --- a/include/HEJ/resummation_jet.hh +++ b/include/HEJ/resummation_jet.hh @@ -1,43 +1,43 @@ /** \file * \brief Functions to calculate the kinematics of resummation jets, * i.e. 
resuffling the jet momenta * * \authors The HEJ collaboration (see AUTHORS for details) * \date 2019-2020 * \copyright GPLv2 or later */ #pragma once #include namespace fastjet { class PseudoJet; } namespace HEJ { /** * \brief Calculate the resummation jet momenta * @param p_born born Jet Momenta * @param qperp Sum of non-jet Parton Transverse Momenta * @returns Resummation Jet Momenta */ std::vector resummation_jet_momenta( - std::vector const & p_born, + std::vector const & p_born, fastjet::PseudoJet const & qperp ); /** * \brief Calculate additional weight from changing the jet momenta * @param p_born born Jet Momenta * @param qperp Sum of non-jet Parton Transverse Momenta * * Computes the Jacobian for changing the original delta functions * expressed in terms of jet momenta to delta functions of the * parton momenta in the resummation phase space */ double resummation_jet_weight( - std::vector const & p_born, + std::vector const & p_born, fastjet::PseudoJet const & qperp ); } // namespace HEJ diff --git a/src/HDF5Writer.cc b/src/HDF5Writer.cc index 7ebeec4..0e0b33b 100644 --- a/src/HDF5Writer.cc +++ b/src/HDF5Writer.cc @@ -1,431 +1,432 @@ /** * \authors The HEJ collaboration (see AUTHORS for details) * \date 2019-2020 * \copyright GPLv2 or later */ #include "HEJ/HDF5Writer.hh" #include #include "LHEF/LHEF.h" #include "HEJ/ConfigFlags.hh" #ifdef HEJ_BUILD_WITH_HDF5 #include #include #include #include #include #include #include "highfive/H5File.hpp" #include "HEJ/Event.hh" #include "HEJ/event_types.hh" #else #include "HEJ/exceptions.hh" #endif #ifdef HEJ_BUILD_WITH_HDF5 namespace HEJ { using HighFive::File; using HighFive::DataSpace; namespace { using std::size_t; constexpr size_t CHUNK_SIZE = 1000; constexpr unsigned COMPRESSION_LEVEL = 3; size_t to_index(event_type::EventType const type){ return type==0?0:std::floor(std::log2(static_cast(type)))+1; } template void write_dataset(HighFive::Group & group, std::string const & name, T val) { using data_t = std::decay_t; group.createDataSet(name, DataSpace::From(val)).write(val); } template void write_dataset( HighFive::Group & group, std::string const & name, std::vector const & val ) { using data_t = std::decay_t; group.createDataSet(name, DataSpace::From(val)).write(val); } struct Cache { explicit Cache(size_t capacity): capacity{capacity} { nparticles.reserve(capacity); start.reserve(capacity); pid.reserve(capacity); weight.reserve(capacity); scale.reserve(capacity); fscale.reserve(capacity); rscale.reserve(capacity); aqed.reserve(capacity); aqcd.reserve(capacity); trials.reserve(capacity); npLO.reserve(capacity); npNLO.reserve(capacity); } void fill(Event const & ev) { const auto hepeup = to_HEPEUP(ev, nullptr); // HEJ event to get nice wrapper const auto num_partons = std::distance(ev.cbegin_partons(), ev.cend_partons()); assert(num_partons>0); // Number of partons for LO matching, HEJ requires at least 2 partons npLO.emplace_back(num_partons>1?num_partons-2:num_partons); // Number of real emissions in NLO, HEJ is LO -> -1 npNLO.emplace_back(-1); fill_event_params(hepeup); fill_event_particles(hepeup); } void fill_event_params(LHEF::HEPEUP const & ev) { nparticles.emplace_back(ev.NUP); start.emplace_back(particle_pos); pid.emplace_back(ev.IDPRUP); weight.emplace_back(ev.XWGTUP); scale.emplace_back(ev.SCALUP); fscale.emplace_back(ev.scales.muf); rscale.emplace_back(ev.scales.mur); aqed.emplace_back(ev.AQEDUP); aqcd.emplace_back(ev.AQCDUP); // set first trial=1 for first event // -> sum(trials) = 1 -> xs=sum(weights)/sum(trials) as in 
Sherpa if(particle_pos == 0){ trials.emplace_back(1.); } else { trials.emplace_back(0.); } particle_pos += ev.NUP; } void fill_event_particles(LHEF::HEPEUP const & ev) { id.insert(end(id), begin(ev.IDUP), end(ev.IDUP)); status.insert(end(status), begin(ev.ISTUP), end(ev.ISTUP)); lifetime.insert(end(lifetime), begin(ev.VTIMUP), end(ev.VTIMUP)); spin.insert(end(spin), begin(ev.SPINUP), end(ev.SPINUP)); for(int i = 0; i < ev.NUP; ++i) { mother1.emplace_back(ev.MOTHUP[i].first); mother2.emplace_back(ev.MOTHUP[i].second); color1.emplace_back(ev.ICOLUP[i].first); color2.emplace_back(ev.ICOLUP[i].second); px.emplace_back(ev.PUP[i][0]); py.emplace_back(ev.PUP[i][1]); pz.emplace_back(ev.PUP[i][2]); e.emplace_back(ev.PUP[i][3]); m.emplace_back(ev.PUP[i][4]); } } bool is_full() const { return nparticles.size() >= capacity; } void clear() { nparticles.clear(); start.clear(); pid.clear(); id.clear(); status.clear(); mother1.clear(); mother2.clear(); color1.clear(); color2.clear(); weight.clear(); scale.clear(); fscale.clear(); rscale.clear(); aqed.clear(); aqcd.clear(); trials.clear(); npLO.clear(); npNLO.clear(); px.clear(); py.clear(); pz.clear(); e.clear(); m.clear(); lifetime.clear(); spin.clear(); } size_t capacity; std::vector nparticles, start, pid, id, status, mother1, mother2, color1, color2, npLO, npNLO; std::vector weight, scale, fscale, rscale, aqed, aqcd, trials, px, py, pz, e, m, lifetime, spin; size_t particle_pos = 0; }; } // namespace struct HDF5Writer::HDF5WriterImpl: EventWriter{ File file; LHEF::HEPRUP heprup; Cache cache{CHUNK_SIZE}; size_t event_idx = 0; size_t particle_idx = 0; HDF5WriterImpl(std::string const & filename, LHEF::HEPRUP && hepr): file{filename, File::ReadWrite | File::Create | File::Truncate}, heprup{std::forward(hepr)} { // TODO: code duplication with Les Houches Writer const size_t max_number_types = to_index(event_type::last_type)+1; heprup.NPRUP = max_number_types; // ids of event types heprup.LPRUP.clear(); heprup.LPRUP.reserve(max_number_types); heprup.LPRUP.emplace_back(0); for(size_t i=event_type::first_type+1; i<=event_type::last_type; i*=2) { heprup.LPRUP.emplace_back(i); } heprup.XSECUP = std::vector(max_number_types, 0.); heprup.XERRUP = std::vector(max_number_types, 0.); heprup.XMAXUP = std::vector(max_number_types, 0.); write_init(); create_event_group(); create_particle_group(); } void write_init() { auto init = file.createGroup("init"); write_dataset(init, "PDFgroupA" , heprup.PDFGUP.first); write_dataset(init, "PDFgroupB" , heprup.PDFGUP.second); write_dataset(init, "PDFsetA" , heprup.PDFSUP.first); write_dataset(init, "PDFsetB" , heprup.PDFSUP.second); write_dataset(init, "beamA" , heprup.IDBMUP.first); write_dataset(init, "beamB" , heprup.IDBMUP.second); write_dataset(init, "energyA" , heprup.EBMUP.first); write_dataset(init, "energyB" , heprup.EBMUP.second); write_dataset(init, "numProcesses" , heprup.NPRUP); write_dataset(init, "weightingStrategy", heprup.IDWTUP); auto proc_info = file.createGroup("procInfo"); write_dataset(proc_info, "procId", heprup.LPRUP); } static HighFive::DataSetCreateProps const & hdf5_chunk() { static const auto props = [](){ HighFive::DataSetCreateProps props; props.add(HighFive::Chunking({CHUNK_SIZE})); props.add(HighFive::Deflate(COMPRESSION_LEVEL)); return props; }(); return props; } void create_event_group() { static const auto dim = DataSpace({0}, {DataSpace::UNLIMITED}); auto events = file.createGroup("event"); events.createDataSet("nparticles", dim, hdf5_chunk()); events.createDataSet("start", dim, 
hdf5_chunk()); events.createDataSet("pid", dim, hdf5_chunk()); events.createDataSet("weight", dim, hdf5_chunk()); events.createDataSet("scale", dim, hdf5_chunk()); events.createDataSet("fscale", dim, hdf5_chunk()); events.createDataSet("rscale", dim, hdf5_chunk()); events.createDataSet("aqed", dim, hdf5_chunk()); events.createDataSet("aqcd", dim, hdf5_chunk()); events.createDataSet("trials", dim, hdf5_chunk()); events.createDataSet("npLO", dim, hdf5_chunk()); events.createDataSet("npNLO", dim, hdf5_chunk()); } void resize_event_group(size_t new_size) { auto events = file.getGroup("event"); events.getDataSet("nparticles").resize({new_size}); events.getDataSet("start").resize({new_size}); events.getDataSet("pid").resize({new_size}); events.getDataSet("weight").resize({new_size}); events.getDataSet("scale").resize({new_size}); events.getDataSet("fscale").resize({new_size}); events.getDataSet("rscale").resize({new_size}); events.getDataSet("aqed").resize({new_size}); events.getDataSet("aqcd").resize({new_size}); events.getDataSet("trials").resize({new_size}); events.getDataSet("npLO").resize({new_size}); events.getDataSet("npNLO").resize({new_size}); } void create_particle_group() { static const auto dim = DataSpace({0}, {DataSpace::UNLIMITED}); auto particles = file.createGroup("particle"); particles.createDataSet("id", dim, hdf5_chunk()); particles.createDataSet("status", dim, hdf5_chunk()); particles.createDataSet("mother1", dim, hdf5_chunk()); particles.createDataSet("mother2", dim, hdf5_chunk()); particles.createDataSet("color1", dim, hdf5_chunk()); particles.createDataSet("color2", dim, hdf5_chunk()); particles.createDataSet("px", dim, hdf5_chunk()); particles.createDataSet("py", dim, hdf5_chunk()); particles.createDataSet("pz", dim, hdf5_chunk()); particles.createDataSet("e", dim, hdf5_chunk()); particles.createDataSet("m", dim, hdf5_chunk()); particles.createDataSet("lifetime", dim, hdf5_chunk()); particles.createDataSet("spin", dim, hdf5_chunk()); } void resize_particle_group(size_t new_size) { auto particles = file.getGroup("particle"); particles.getDataSet("id").resize({new_size}); particles.getDataSet("status").resize({new_size}); particles.getDataSet("mother1").resize({new_size}); particles.getDataSet("mother2").resize({new_size}); particles.getDataSet("color1").resize({new_size}); particles.getDataSet("color2").resize({new_size}); particles.getDataSet("px").resize({new_size}); particles.getDataSet("py").resize({new_size}); particles.getDataSet("pz").resize({new_size}); particles.getDataSet("e").resize({new_size}); particles.getDataSet("m").resize({new_size}); particles.getDataSet("lifetime").resize({new_size}); particles.getDataSet("spin").resize({new_size}); } void write(Event const & ev) override { cache.fill(ev); if(cache.is_full()) { dump_cache(); } const double wt = ev.central().weight; const size_t idx = to_index(ev.type()); heprup.XSECUP[idx] += wt; heprup.XERRUP[idx] += wt*wt; if(wt > heprup.XMAXUP[idx]){ heprup.XMAXUP[idx] = wt; } } void dump_cache() { write_event_params(); write_event_particles(); cache.clear(); } void write_event_params() { auto events = file.getGroup("event"); // choose arbitrary dataset to find size const auto dataset = events.getDataSet("nparticles"); const size_t size = dataset.getSpace().getDimensions().front(); resize_event_group(size + cache.nparticles.size()); +// NOLINTNEXTLINE #define WRITE_FROM_CACHE(GROUP, PROPERTY) \ GROUP.getDataSet(#PROPERTY).select({size}, {cache.PROPERTY.size()}).write(cache.PROPERTY) WRITE_FROM_CACHE(events, 
nparticles); WRITE_FROM_CACHE(events, start); WRITE_FROM_CACHE(events, pid); WRITE_FROM_CACHE(events, weight); WRITE_FROM_CACHE(events, scale); WRITE_FROM_CACHE(events, fscale); WRITE_FROM_CACHE(events, rscale); WRITE_FROM_CACHE(events, aqed); WRITE_FROM_CACHE(events, aqcd); WRITE_FROM_CACHE(events, trials); WRITE_FROM_CACHE(events, npLO); WRITE_FROM_CACHE(events, npNLO); } void write_event_particles() { auto particles = file.getGroup("particle"); // choose arbitrary dataset to find size const auto dataset = particles.getDataSet("id"); const size_t size = dataset.getSpace().getDimensions().front(); resize_particle_group(size + cache.id.size()); WRITE_FROM_CACHE(particles, id); WRITE_FROM_CACHE(particles, status); WRITE_FROM_CACHE(particles, lifetime); WRITE_FROM_CACHE(particles, spin); WRITE_FROM_CACHE(particles, mother1); WRITE_FROM_CACHE(particles, mother2); WRITE_FROM_CACHE(particles, color1); WRITE_FROM_CACHE(particles, color2); WRITE_FROM_CACHE(particles, px); WRITE_FROM_CACHE(particles, py); WRITE_FROM_CACHE(particles, pz); WRITE_FROM_CACHE(particles, e); WRITE_FROM_CACHE(particles, m); } #undef WRITE_FROM_CACHE void finish() override { if(finished()) throw std::ios_base::failure("HDF5Writer writer already finished."); EventWriter::finish(); dump_cache(); auto proc_info = file.getGroup("procInfo"); write_dataset(proc_info, "xSection", heprup.XSECUP); write_dataset(proc_info, "error", heprup.XERRUP); write_dataset(proc_info, "unitWeight", heprup.XMAXUP); file.flush(); } ~HDF5WriterImpl() override { finish_or_abort(this, "HDF5Writer"); } }; HDF5Writer::HDF5Writer(std::string const & filename, LHEF::HEPRUP heprup): impl_{std::make_unique( filename, std::move(heprup))} {} void HDF5Writer::write(Event const & ev){ impl_->write(ev); } void HDF5Writer::finish(){ impl_->finish(); } } // namespace HEJ #else // no HDF5 support namespace HEJ { struct HDF5Writer::HDF5WriterImpl{}; HDF5Writer::HDF5Writer(std::string const & /*file*/, LHEF::HEPRUP /*heprup*/){ throw std::invalid_argument{ "Failed to create HDF5 writer: " "HEJ 2 was built without HDF5 support" }; } void HDF5Writer::write(Event const & /*ev*/){ assert(false); } void HDF5Writer::finish(){ assert(false); } } #endif namespace HEJ { HDF5Writer::~HDF5Writer() = default; } diff --git a/src/Hjets.cc b/src/Hjets.cc index babae9f..62e0b6c 100644 --- a/src/Hjets.cc +++ b/src/Hjets.cc @@ -1,972 +1,973 @@ /** * \authors The HEJ collaboration (see AUTHORS for details) * \date 2019-2020 * \copyright GPLv2 or later */ #include "HEJ/jets.hh" #include "HEJ/Hjets.hh" #include #include #include #include #include #include "HEJ/ConfigFlags.hh" #include "HEJ/Constants.hh" #include "HEJ/LorentzVector.hh" #include "HEJ/utility.hh" // generated headers #include "HEJ/currents/j_h_j.hh" #include "HEJ/currents/juno_h_j.hh" #ifdef HEJ_BUILD_WITH_QCDLOOP #include "qcdloop/qcdloop.h" #include "HEJ/currents/jh_j.hh" #else #include "HEJ/exceptions.hh" #endif namespace HEJ { namespace currents { namespace { // short hand for math functions using std::norm; using std::abs; using std::conj; using std::pow; using std::sqrt; constexpr double infinity = std::numeric_limits::infinity(); // NOLINT // Loop integrals #ifdef HEJ_BUILD_WITH_QCDLOOP const COM LOOPRWFACTOR = (COM(0.,1.)*M_PI*M_PI)/pow((2.*M_PI),4); COM B0DD(HLV const & q, double mq) { static std::vector> result(3); static auto ql_B0 = [](){ ql::Bubble,double,double> ql_B0; ql_B0.setCacheSize(100); return ql_B0; }(); static std::vector masses(2); static std::vector momenta(1); for(auto & m: masses) m = mq*mq; 
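// Descriptive note on the qcdloop bookkeeping used here: the static objects
// above are reused between calls; `masses` holds the squared masses of the
// internal propagators (all set to mq*mq in the loop above), `momenta` the
// external invariants (just q^2 for the bubble, filled below), and
// ql_B0.integral() writes three coefficients into `result`, of which only
// result[0] -- presumably the finite part of the Laurent expansion -- is
// used. C0DD and D0DD below follow the same pattern with three and six
// external invariants, respectively.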
momenta.front() = q.m2(); ql_B0.integral(result, 1, masses, momenta); return result[0]; } COM C0DD(HLV const & q1, HLV const & q2, double mq) { static std::vector> result(3); static auto ql_C0 = [](){ ql::Triangle,double,double> ql_C0; ql_C0.setCacheSize(100); return ql_C0; }(); static std::vector masses(3); static std::vector momenta(3); for(auto & m: masses) m = mq*mq; momenta[0] = q1.m2(); momenta[1] = q2.m2(); momenta[2] = (q1+q2).m2(); ql_C0.integral(result, 1, masses, momenta); return result[0]; } COM D0DD(HLV const & q1, HLV const & q2, HLV q3, double mq) { static std::vector> result(3); static auto ql_D0 = [](){ ql::Box,double,double> ql_D0; ql_D0.setCacheSize(100); return ql_D0; }(); static std::vector masses(4); static std::vector momenta(6); for(auto & m: masses) m = mq*mq; momenta[0] = q1.m2(); momenta[1] = q2.m2(); momenta[2] = q3.m2(); momenta[3] = (q1+q2+q3).m2(); momenta[4] = (q1+q2).m2(); momenta[5] = (q2+q3).m2(); ql_D0.integral(result, 1, masses, momenta); return result[0]; } // Kallen lambda functions, see eq:lambda in developer manual double lambda(const double s1, const double s2, const double s3) { return s1*s1 + s2*s2 + s3*s3 - 2*s1*s2 - 2*s1*s3 - 2*s2*s3; } // eq: T_1 in developer manual COM T1(HLV const & q1, HLV const & q2, const double m) { const double q12 = q1.m2(); const double q22 = q2.m2(); const HLV ph = q1 - q2; const double ph2 = ph.m2(); const double lam = lambda(q12, q22, ph2); assert(m > 0.); const double m2 = m*m; return - C0DD(q1, -q2, m)*(2.*m2 + 1./2.*(q12 + q22 - ph2) + 2.*q12*q22*ph2/lam) - (B0DD(q2, m) - B0DD(ph, m))*(q22 - q12 - ph2)*q22/lam - (B0DD(q1, m) - B0DD(ph, m))*(q12 - q22 - ph2)*q12/lam - 1.; } // eq: T_2 in developer manual COM T2(HLV const & q1, HLV const & q2, const double m) { const double q12 = q1.m2(); const double q22 = q2.m2(); const HLV ph = q1 - q2; const double ph2 = ph.m2(); const double lam = lambda(q12, q22, ph2); assert(m > 0.); const double m2 = m*m; return C0DD(q1, -q2, m)*( 4.*m2/lam*(ph2 - q12 - q22) - 1. - 4.*q12*q22/lam*( 1 + 3.*ph2*(q12 + q22 - ph2)/lam ) ) - (B0DD(q2, m) - B0DD(ph, m))*(1. + 6.*q12/lam*(q22 - q12 + ph2))*2.*q22/lam - (B0DD(q1, m) - B0DD(ph, m))*(1. + 6.*q22/lam*(q12 - q22 + ph2))*2.*q12/lam - 2.*(q12 + q22 - ph2)/lam; } #else // no QCDloop COM T1(HLV const & /*q1*/, HLV const & /*q2*/, double /*mt*/){ throw std::logic_error{"T1 called without QCDloop support"}; } COM T2(HLV const & /*q1*/, HLV const & /*q2*/, double /*mt*/){ throw std::logic_error{"T2 called without QCDloop support"}; } #endif // prefactors of g^{\mu \nu} and q_2^\mu q_1^\nu in Higgs boson emission vertex // see eq:VH in developer manual, but *without* global factor \alpha_s std::array TT( HLV const & qH1, HLV const & qH2, const double mt, const bool inc_bottom, const double mb, const double vev ) { if(mt == infinity) { std::array res = {qH1.dot(qH2), 1.}; for(auto & tt: res) tt /= (3.*M_PI*vev); return res; } std::array res = {T1(qH1, qH2, mt), T2(qH1, qH2, mt)}; for(auto & tt: res) tt *= mt*mt; if(inc_bottom) { res[0] += mb*mb*T1(qH1, qH2, mb); res[1] += mb*mb*T2(qH1, qH2, mb); } for(auto & tt: res) tt /= M_PI*vev; return res; } /** * @brief Higgs+Jets FKL Contributions, function to handle all incoming types. 
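 *
 * \note The returned square is averaged over the four helicity
 *       configurations of the two external currents and is already
 *       divided by the t-channel propagators (p1in-p1out)^2,
 *       (p2in-p2out)^2, qH1^2 and qH2^2; the Higgs coupling enters only
 *       through the TT() prefactors computed above.
 *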
* @param p1out Outgoing Particle 1 * @param p1in Incoming Particle 1 * @param p2out Outgoing Particle 2 * @param p2in Incoming Particle 2 * @param qH1 t-channel momenta into higgs vertex * @param qH2 t-channel momenta out of higgs vertex * @param mt top mass (inf or value) * @param inc_bottom whether to include bottom mass effects (true) or not (false) * @param mb bottom mass (value) * @param vev Higgs vacuum expectation value * * Calculates j^\mu H j_\mu. FKL with higgs vertex somewhere in the FKL chain. * Handles all possible incoming states. */ double j_h_j( HLV const & p1out, HLV const & p1in, HLV const & p2out, HLV const & p2in, HLV const & qH1, HLV const & qH2, const double mt, const bool inc_bottom, const double mb, const double vev ){ using helicity::plus; using helicity::minus; const auto qqH1 = split_into_lightlike(qH1); const HLV qH11 = qqH1.first; const HLV qH12 = -qqH1.second; const auto qqH2 = split_into_lightlike(qH2); const HLV qH21 = qqH2.first; const HLV qH22 = -qqH2.second; // since qH1.m2(), qH2.m2() < 0 the following assertions are always true assert(qH11.e() > 0); assert(qH12.e() > 0); assert(qH21.e() > 0); assert(qH22.e() > 0); const auto T_pref = TT(qH1, qH2, mt, inc_bottom, mb, vev); const COM amp_mm = HEJ::j_h_j( p1out, p1in, p2out, p2in, qH11, qH12, qH21, qH22, T_pref[0], T_pref[1] ); const COM amp_mp = HEJ::j_h_j( p1out, p1in, p2out, p2in, qH11, qH12, qH21, qH22, T_pref[0], T_pref[1] ); const COM amp_pm = HEJ::j_h_j( p1out, p1in, p2out, p2in, qH11, qH12, qH21, qH22, T_pref[0], T_pref[1] ); const COM amp_pp = HEJ::j_h_j( p1out, p1in, p2out, p2in, qH11, qH12, qH21, qH22, T_pref[0], T_pref[1] ); static constexpr double num_hel = 4.; // square of amplitudes, averaged over helicities const double amp2 = (norm(amp_mm) + norm(amp_mp) + norm(amp_pm) + norm(amp_pp))/num_hel; return amp2/((p1in-p1out).m2()*(p2in-p2out).m2()*qH1.m2()*qH2.m2()); } // } } // namespace double ME_H_qQ(HLV const & p1out, HLV const & p1in, HLV const & p2out, HLV const & p2in, HLV const & qH1, HLV const & qH2, double mt, bool include_bottom, double mb, double vev ){ return j_h_j(p1out, p1in, p2out, p2in, qH1, qH2, mt, include_bottom, mb, vev); } double ME_H_qQbar(HLV const & p1out, HLV const & p1in, HLV const & p2out, HLV const & p2in, HLV const & qH1, HLV const & qH2, double mt, bool include_bottom, double mb, double vev ){ return j_h_j(p1out, p1in, p2out, p2in, qH1, qH2, mt, include_bottom, mb, vev); } double ME_H_qbarQ(HLV const & p1out, HLV const & p1in, HLV const & p2out, HLV const & p2in, HLV const & qH1, HLV const & qH2, double mt, bool include_bottom, double mb, double vev ){ return j_h_j(p1out, p1in, p2out, p2in, qH1, qH2, mt, include_bottom, mb, vev); } double ME_H_qbarQbar(HLV const & p1out, HLV const & p1in, HLV const & p2out, HLV const & p2in, HLV const & qH1, HLV const & qH2, double mt, bool include_bottom, double mb, double vev ){ return j_h_j(p1out, p1in, p2out, p2in, qH1, qH2, mt, include_bottom, mb, vev); } double ME_H_qg(HLV const & p1out, HLV const & p1in, HLV const & p2out, HLV const & p2in, HLV const & qH1, HLV const & qH2, double mt, bool include_bottom, double mb, double vev ){ return j_h_j(p1out, p1in, p2out, p2in, qH1, qH2, mt, include_bottom, mb, vev) * K_g(p2out,p2in)/C_A; } double ME_H_qbarg(HLV const & p1out, HLV const & p1in, HLV const & p2out, HLV const & p2in, HLV const & qH1, HLV const & qH2, double mt, bool include_bottom, double mb, double vev ){ return j_h_j(p1out, p1in, p2out, p2in, qH1, qH2, mt, include_bottom, mb, vev) * K_g(p2out,p2in)/C_A; } double 
ME_H_gg(HLV const & p1out, HLV const & p1in, HLV const & p2out, HLV const & p2in, HLV const & qH1, HLV const & qH2, double mt, bool include_bottom, double mb, double vev ){ return j_h_j(p1out, p1in, p2out, p2in, qH1, qH2, mt, include_bottom, mb, vev) * K_g(p2out,p2in)/C_A * K_g(p1out,p1in)/C_A; } //@} namespace { template double amp_juno_h_j( HLV const & pa, HLV const & pb, HLV const & pg, HLV const & p1, HLV const & p2, HLV const & qH11, HLV const & qH12, HLV const & qH21, HLV const & qH22, std::array const & T_pref ) { // TODO: code duplication with Wjets and pure jets const COM u1 = U1_h_j(pa,p1,pb,p2,pg,qH11,qH12,qH21,qH22,T_pref[0],T_pref[1]); const COM u2 = U2_h_j(pa,p1,pb,p2,pg,qH11,qH12,qH21,qH22,T_pref[0],T_pref[1]); const COM l = L_h_j(pa,p1,pb,p2,pg,qH11,qH12,qH21,qH22,T_pref[0],T_pref[1]); return 2.*C_F*std::real((l-u1)*std::conj(l+u2)) + 2.*C_F*C_F/3.*std::norm(u1+u2) ; } //@{ /** * @brief Higgs+Jets Unordered Contributions, function to handle all incoming types. * @param pg Unordered Gluon momenta * @param p1out Outgoing Particle 1 * @param p1in Incoming Particle 1 * @param p2out Outgoing Particle 2 * @param p2in Incoming Particle 2 * @param qH1 t-channel momenta into higgs vertex * @param qH2 t-channel momenta out of higgs vertex * @param mt top mass (inf or value) * @param inc_bottom whether to include bottom mass effects (true) or not (false) * @param mb bottom mass (value) * * Calculates j_{uno}^\mu H j_\mu. Unordered with higgs vertex * somewhere in the FKL chain. Handles all possible incoming states. */ double juno_h_j( HLV const & pg, HLV const & p1out, HLV const & p1in, HLV const & p2out, HLV const & p2in, HLV const & qH1, HLV const & qH2, const double mt, const bool incBot, const double mb, const double vev ){ using helicity::plus; using helicity::minus; const auto qqH1 = split_into_lightlike(qH1); const HLV qH11 = qqH1.first; const HLV qH12 = -qqH1.second; const auto qqH2 = split_into_lightlike(qH2); const HLV qH21 = qqH2.first; const HLV qH22 = -qqH2.second; // since qH1.m2(), qH2.m2() < 0 the following assertions are always true assert(qH11.e() > 0); assert(qH12.e() > 0); assert(qH21.e() > 0); assert(qH22.e() > 0); const auto T_pref = TT(qH1, qH2, mt, incBot, mb, vev); // only 4 out of the 8 helicity amplitudes are independent // we still compute all of them for better numerical stability (mirror check) MultiArray amp; +// NOLINTNEXTLINE #define ASSIGN_HEL(RES, J, H1, H2, HG) \ RES[H1][H2][HG] = J( \ p1in, p2in, pg, p1out, p2out, qH11, qH12, qH21, qH22, T_pref \ - ) // NOLINT + ) ASSIGN_HEL(amp, amp_juno_h_j, minus, minus, minus); ASSIGN_HEL(amp, amp_juno_h_j, minus, minus, plus); ASSIGN_HEL(amp, amp_juno_h_j, minus, plus, minus); ASSIGN_HEL(amp, amp_juno_h_j, minus, plus, plus); ASSIGN_HEL(amp, amp_juno_h_j, plus, minus, minus); ASSIGN_HEL(amp, amp_juno_h_j, plus, minus, plus); ASSIGN_HEL(amp, amp_juno_h_j, plus, plus, minus); ASSIGN_HEL(amp, amp_juno_h_j, plus, plus, plus); #undef ASSIGN_HEL const HLV q1 = p1in-p1out; // Top End const HLV q2 = p2out-p2in; // Bottom End const HLV qg = p1in-p1out-pg; // Extra bit post-gluon double ampsq = 0.; for(Helicity h1: {minus, plus}) { for(Helicity h2: {minus, plus}) { for(Helicity hg: {minus, plus}) { ampsq += amp[h1][h2][hg]; } } } ampsq /= 16.*qg.m2()*qH1.m2()*qH2.m2()*q2.m2(); // Factor of (Cf/Ca) for each quark to match ME_H_qQ. 
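// i.e. the squared current below is rescaled by (C_F/C_A)^2 overall; the
// gluon-initiated wrappers ME_H_unob_gQ / ME_H_unob_gQbar further down
// then multiply by K_g(p2out,p2in)/C_F for the gluon leg.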
ampsq*=C_F*C_F/C_A/C_A; return ampsq; } } // namespace double ME_H_unob_qQ(HLV const & pg, HLV const & p1out, HLV const & p1in, HLV const & p2out, HLV const & p2in, HLV const & qH1, HLV const & qH2, double mt, bool include_bottom, double mb, double vev ){ return juno_h_j(pg, p1out, p1in, p2out, p2in, qH1, qH2, mt, include_bottom, mb, vev); } double ME_H_unob_qbarQ(HLV const & pg, HLV const & p1out, HLV const & p1in, HLV const & p2out, HLV const & p2in, HLV const & qH1, HLV const & qH2, double mt, bool include_bottom, double mb, double vev ){ return juno_h_j(pg, p1out, p1in, p2out, p2in, qH1, qH2, mt, include_bottom, mb, vev); } double ME_H_unob_qQbar(HLV const & pg, HLV const & p1out, HLV const & p1in, HLV const & p2out, HLV const & p2in, HLV const & qH1, HLV const & qH2, double mt, bool include_bottom, double mb, double vev ){ return juno_h_j(pg, p1out, p1in, p2out, p2in, qH1, qH2, mt, include_bottom, mb, vev); } double ME_H_unob_qbarQbar(HLV const & pg, HLV const & p1out, HLV const & p1in, HLV const & p2out, HLV const & p2in, HLV const & qH1, HLV const & qH2, double mt, bool include_bottom, double mb, double vev ){ return juno_h_j(pg, p1out, p1in, p2out, p2in, qH1, qH2, mt, include_bottom, mb, vev); } double ME_H_unob_gQ(HLV const & pg, HLV const & p1out, HLV const & p1in, HLV const & p2out, HLV const & p2in, HLV const & qH1, HLV const & qH2, double mt, bool include_bottom, double mb, double vev ){ return juno_h_j(pg, p1out, p1in, p2out, p2in, qH1, qH2, mt, include_bottom, mb, vev) *K_g(p2out,p2in)/C_F; } double ME_H_unob_gQbar(HLV const & pg, HLV const & p1out, HLV const & p1in, HLV const & p2out, HLV const & p2in, HLV const & qH1, HLV const & qH2, double mt, bool include_bottom, double mb, double vev ){ return juno_h_j(pg, p1out, p1in, p2out, p2in, qH1, qH2, mt, include_bottom, mb, vev) *K_g(p2out,p2in)/C_F; } //@} // Begin finite mass stuff #ifdef HEJ_BUILD_WITH_QCDLOOP namespace { // All the stuff needed for the box functions in qg->qgH now... COM E1(HLV const & k1, HLV const & k2, HLV const & kh, double mq){ HLV q2=-(k1+k2+kh); double Delta, Sigma, S1, S2, s12, s34; S1 = 2.*k1.dot(q2); S2 = 2.*k2.dot(q2); s12 = 2.*k1.dot(k2); s34 = q2.m2(); Delta = s12*s34 - S1*S2; Sigma = 4.*s12*s34 - pow(S1+S2,2); return LOOPRWFACTOR*(-s12*D0DD(k2, k1, q2, mq)*(1 - 8.*mq*mq/s12 + S2/(2.*s12) + S2*(s12 - 8.*mq*mq)*(s34 + S1)/(2.*s12*Delta) + 2.*(s34 + S1)*(s34 + S1)/Delta + S2*pow((s34 + S1),3)/Delta/Delta) - ((s12 + S2)*C0DD(k2, k1 + q2, mq) - s12*C0DD(k1, k2, mq) + (S1 - S2)*C0DD(k1 + k2, q2, mq) - S1*C0DD(k1, q2, mq))*(S2*(s12 - 4.*mq*mq)/(2.*s12*Delta) + 2.*(s34 + S1)/Delta + S2*pow((s34 + S1),2)/Delta/Delta) + (C0DD(k1, q2, mq) - C0DD(k1 + k2, q2, mq))*(1. 
- 4.*mq*mq/s12) - C0DD(k1 + k2, q2, mq)*2.*s34/ S1 - (B0DD(k1 + q2, mq) - B0DD(k1 + k2 + q2, mq))*2.*s34*(s34 + S1)/(S1*Delta) + (B0DD(q2, mq) - B0DD(k1 + k2 + q2, mq) + s12*C0DD(k1 + k2, q2, mq))*(2.*s34*(s34 + S1)*(S1 - S2)/(Delta*Sigma) + 2.*s34*(s34 + S1)/(S1*Delta)) + (B0DD(k1 + k2, mq) - B0DD(k1 + k2 + q2, mq) - (s34 + S1 + S2)*C0DD(k1 + k2, q2, mq))*2.*(s34 + S1)*(2.*s12*s34 - S2*(S1 + S2))/(Delta*Sigma)); } COM F1(HLV const & k1, HLV const & k2, HLV const & kh, double mq){ HLV q2 = -(k1+k2+kh); double Delta, Sigma, S1, S2, s12, s34; S1 = 2.*k1.dot(q2); S2 = 2.*k2.dot(q2); s12 = 2.*k1.dot(k2); s34 = q2.m2(); Delta = s12*s34 - S1*S2; Sigma = 4.*s12*s34 - pow(S1+S2,2); return LOOPRWFACTOR*(-S2*D0DD(k1, k2, q2, mq)*(0.5 - (s12 - 8.*mq*mq)*(s34 + S2)/(2.*Delta) - s12*pow((s34 + S2),3)/Delta/Delta) + ((s12 + S1)*C0DD(k1, k2 + q2, mq) - s12*C0DD(k1, k2, mq) - (S1 - S2)*C0DD(k1 + k2, q2, mq) - S2*C0DD(k2, q2, mq))*(S2*(s12 - 4.*mq*mq)/(2.*s12*Delta) + S2*pow((s34 + S2),2)/Delta/Delta) - (C0DD(k1 + k2, q2, mq) - C0DD(k1, k2 + q2, mq))*(1. - 4.*mq*mq/s12) - C0DD(k1, k2 + q2, mq) + (B0DD(k2 + q2, mq) - B0DD(k1 + k2 + q2, mq))*2.*pow((s34 + S2),2)/((s12 + S1)*Delta) - (B0DD( q2, mq) - B0DD(k1 + k2 + q2, mq) + s12*C0DD(k1 + k2, q2, mq))*2.*s34*(s34 + S2)*(S2 - S1)/(Delta*Sigma) + (B0DD( k1 + k2, mq) - B0DD(k1 + k2 + q2, mq) - (s34 + S1 + S2)*C0DD(k1 + k2, q2, mq))*2.*(s34 + S2)*(2.*s12*s34 - S2*(S1 + S2))/(Delta*Sigma)); } COM G1(HLV const & k1, HLV const & k2, HLV const & kh, double mq){ HLV q2 = -(k1+k2+kh); double Delta, S1, S2, s12, s34; S1 = 2.*k1.dot(q2); S2 = 2.*k2.dot(q2); s12 = 2.*k1.dot(k2); s34 = q2.m2(); Delta = s12*s34 - S1*S2; return LOOPRWFACTOR*(S2*D0DD(k1, q2, k2, mq)*(Delta/s12/s12 - 4.*mq*mq/s12) - S2*((s12 + S1)*C0DD(k1, k2 + q2, mq) - S1*C0DD(k1, q2, mq))*(1./ s12/s12 - (s12 - 4.*mq*mq)/(2.*s12*Delta)) - S2*((s12 + S2)*C0DD(k1 + q2, k2, mq) - S2*C0DD(k2, q2, mq))*(1./ s12/s12 + (s12 - 4.*mq*mq)/(2.*s12*Delta)) - C0DD(k1, q2, mq) - (C0DD(k1, k2 + q2, mq) - C0DD(k1, q2, mq))*4.*mq*mq/ s12 + (B0DD(k1 + q2, mq) - B0DD(k1 + k2 + q2, mq))*2./ s12 + (B0DD(k1 + q2, mq) - B0DD(q2, mq))*2.*s34/(s12*S1) + (B0DD(k2 + q2, mq) - B0DD(k1 + k2 + q2, mq))*2.*(s34 + S2)/(s12*(s12 + S1))); } COM E4(HLV const & k1, HLV const & k2, HLV const & kh, double mq){ HLV q2 = -(k1+k2+kh); double Delta, Sigma, S1, S2, s12, s34; S1 = 2.*k1.dot(q2); S2 = 2.*k2.dot(q2); s12 = 2.*k1.dot(k2); s34 = q2.m2(); Delta = s12*s34 - S1*S2; Sigma = 4.*s12*s34 - pow(S1+S2,2); return LOOPRWFACTOR* (-s12*D0DD(k2, k1, q2, mq)*(0.5 - (S1 - 8.*mq*mq)*(s34 + S1)/(2.*Delta) - s12*pow((s34 + S1),3)/Delta/Delta) + ((s12 + S2)*C0DD(k2, k1 + q2, mq) - s12*C0DD(k1, k2, mq) + (S1 - S2)*C0DD(k1 + k2, q2, mq) - S1*C0DD(k1, q2, mq))*((S1 - 4.*mq*mq)/(2.*Delta) + s12*pow((s34 + S1),2)/Delta/Delta) - C0DD(k1 + k2, q2, mq) + (B0DD(k1 + q2, mq) - B0DD(k1 + k2 + q2, mq))*(2.*s34/Delta + 2.*s12*(s34 + S1)/((s12 + S2)*Delta)) - (B0DD( q2, mq) - B0DD(k1 + k2 + q2, mq) + s12*C0DD(k1 + k2, q2, mq))*((2.*s34*(2.*s12*s34 - S2*(S1 + S2) + s12*(S1 - S2)))/(Delta*Sigma)) + (B0DD(k1 + k2, mq) - B0DD(k1 + k2 + q2, mq) - (s34 + S1 + S2)*C0DD(k1 + k2, q2, mq)) *((2.*s12*(2.*s12*s34 - S1*(S1 + S2) + s34*(S2 - S1)))/(Delta*Sigma))); } COM F4(HLV const & k1, HLV const & k2, HLV const & kh, double mq){ HLV q2 = -(k1+k2+kh); double Delta, Sigma, S1, S2, s12, s34; S1 = 2.*k1.dot(q2); S2 = 2.*k2.dot(q2); s12 = 2.*k1.dot(k2); s34 = q2.m2(); Delta = s12*s34 - S1*S2; Sigma = 4.*s12*s34 - pow(S1+S2,2); return LOOPRWFACTOR* (-s12*D0DD(k1, k2, q2, mq)*(0.5 + (S1 - 
8.*mq*mq)*(s34 + S2)/(2.*Delta) + s12*pow((s34 + S2),3)/Delta/Delta) - ((s12 + S1)*C0DD(k1, k2 + q2, mq) - s12*C0DD(k1, k2, mq) - (S1 - S2)*C0DD(k1 + k2, q2, mq) - S2*C0DD(k2, q2, mq))*((S1 - 4.*mq*mq)/(2.*Delta) + s12*pow((s34 + S2),2)/Delta/Delta) - C0DD(k1 + k2, q2, mq) - (B0DD(k2 + q2, mq) - B0DD(k1 + k2 + q2, mq))*2.*(s34 + S2)/Delta + (B0DD(q2, mq) - B0DD(k1 + k2 + q2, mq) + s12*C0DD(k1 + k2, q2, mq))*2.*s34*(2.*s12*s34 - S1*(S1 + S2) + s12*(S2 - S1))/(Delta*Sigma) - (B0DD(k1 + k2, mq) - B0DD(k1 + k2 + q2, mq) - (s34 + S1 + S2)*C0DD(k1 + k2, q2, mq)) *(2.*s12*(2.*s12*s34 - S2*(S1 + S2) + s34*(S1 - S2))/(Delta*Sigma))); } COM G4(HLV const & k1, HLV const & k2, HLV const & kh, double mq){ HLV q2 = -(k1+k2+kh); double Delta, S1, S2, s12, s34; S1 = 2.*k1.dot(q2); S2 = 2.*k2.dot(q2); s12 = 2.*k1.dot(k2); s34 = q2.m2(); Delta = s12*s34 - S1*S2; return LOOPRWFACTOR* (-D0DD(k1, q2, k2, mq)*(Delta/s12 + (s12 + S1)/2. - 4.*mq*mq) + ((s12 + S1)*C0DD(k1, k2 + q2, mq) - S1*C0DD(k1, q2, mq))*(1./ s12 - (S1 - 4.*mq*mq)/(2.*Delta)) + ((s12 + S2)*C0DD( k1 + q2, k2, mq) - S2*C0DD(k2, q2, mq))*(1./ s12 + (S1 - 4.*mq*mq)/(2.*Delta)) + (B0DD( k1 + k2 + q2, mq) - B0DD(k1 + q2, mq))*2./(s12 + S2)); } COM E10(HLV const & k1, HLV const & k2, HLV const & kh, double mq){ HLV q2 = -(k1+k2+kh); double Delta, Sigma, S1, S2, s12, s34; S1 = 2.*k1.dot(q2); S2 = 2.*k2.dot(q2); s12 = 2.*k1.dot(k2); s34 = q2.m2(); Delta = s12*s34 - S1*S2; Sigma = 4.*s12*s34 - pow(S1+S2,2); return LOOPRWFACTOR*(-s12*D0DD(k2, k1, q2, mq)*((s34 + S1)/Delta + 12.*mq*mq*S1*(s34 + S1)/Delta/Delta - 4.*s12*S1*pow((s34 + S1),3)/Delta/Delta/Delta) - ((s12 + S2)*C0DD(k2, k1 + q2, mq) - s12*C0DD(k1, k2, mq) + (S1 - S2)*C0DD(k1 + k2, q2, mq) - S1*C0DD(k1, q2, mq))*(1./Delta + 4.*mq*mq*S1/Delta/Delta - 4.*s12*S1*pow((s34 + S1),2)/Delta/Delta/Delta) + C0DD(k1 + k2, q2, mq)*(4.*s12*s34*(S1 - S2)/(Delta*Sigma) - 4.*(s12 - 2.*mq*mq)*(2.*s12*s34 - S1*(S1 + S2))/(Delta*Sigma)) + (B0DD(k1 + q2, mq) - B0DD(k1 + k2 + q2, mq))*(4.*(s34 + S1)/((s12 + S2)*Delta) + 8.*S1*(s34 + S1)/Delta/Delta) + (B0DD(q2, mq) - B0DD(k1 + k2 + q2, mq) + s12*C0DD(k1 + k2, q2, mq))*(12.*s34*(2.*s12 + S1 + S2)*(2.*s12*s34 - S1*(S1 + S2))/(Delta*Sigma*Sigma) - 4.*s34*(4.*s12 + 3.*S1 + S2)/(Delta*Sigma) + 8.*s12*s34*(s34*(s12 + S2) - S1*(s34 + S1))/(Delta*Delta*Sigma)) + (B0DD(k1 + k2, mq) - B0DD(k1 + k2 + q2, mq) - (s34 + S1 + S2)*C0DD(k1 + k2, q2, mq))*(12.*s12*(2.*s34 + S1 + S2)*(2.*s12*s34 - S1*(S1 + S2))/(Delta*Sigma*Sigma) + 8.*s12*S1*(s34*(s12 + S2) - S1*(s34 + S1))/(Delta*Delta*Sigma))) + (COM(0.,1.)/(4.*M_PI*M_PI))*((2.*s12*s34 - S1*(S1 + S2))/(Delta*Sigma)); } COM F10(HLV const & k1, HLV const & k2, HLV const & kh, double mq){ HLV q2 = -(k1+k2+kh); double Delta, Sigma, S1, S2, s12, s34; S1 = 2.*k1.dot(q2); S2 = 2.*k2.dot(q2); s12 = 2.*k1.dot(k2); s34 = q2.m2(); Delta = s12*s34 - S1*S2; Sigma = 4.*s12*s34 - pow(S1+S2,2); return LOOPRWFACTOR* (s12*D0DD(k1, k2, q2, mq)*((s34 + S2)/Delta - 4.*mq*mq/Delta + 12.*mq*mq*s34*(s12 + S1)/Delta/Delta - 4.*s12*pow((s34 + S2),2)/Delta/Delta - 4.*s12*S1*pow((s34 + S2),3)/Delta/Delta/Delta) + ((s12 + S1)*C0DD(k1, k2 + q2, mq) - s12*C0DD(k1, k2, mq) - (S1 - S2)*C0DD(k1 + k2, q2, mq) - S2*C0DD(k2, q2, mq))*(1./Delta + 4.*mq*mq*S1/Delta/Delta - 4.*s12*(s34 + S2)/Delta/Delta - 4.*s12*S1*pow((s34 + S2),2)/Delta/Delta/Delta) - C0DD(k1 + k2, q2, mq)*(4.*s12*s34/(S2*Delta) + 4.*s12*s34*(S2 - S1)/(Delta*Sigma) + 4.*(s12 - 2.*mq*mq)*(2.*s12*s34 - S1*(S1 + S2))/(Delta*Sigma)) - (B0DD( k2 + q2, mq) - B0DD(k1 + k2 + q2, mq))*(4.*s34/(S2*Delta) + 
8.*s34*(s12 + S1)/Delta/Delta) - (B0DD(q2, mq) - B0DD(k1 + k2 + q2, mq) + s12*C0DD(k1 + k2, q2, mq))*(-12*s34*(2*s12 + S1 + S2)*(2.*s12*s34 - S1*(S1 + S2))/(Delta*Sigma*Sigma) - 4.*s12*s34*s34/(S2*Delta*Delta) + 4.*s34*S1/(Delta*Sigma) - 4.*s34*(s12*s34*(2.*s12 + S2) - S1*S1*(2.*s12 + S1))/(Delta*Delta*Sigma)) - (B0DD(k1 + k2, mq) - B0DD(k1 + k2 + q2, mq) - (s34 + S1 + S2)*C0DD(k1 + k2, q2, mq))*(-12.*s12*(2.*s34 + S1 + S2)*(2.*s12*s34 - S1*(S1 + S2))/(Delta*Sigma*Sigma) + 8.*s12*(2.*s34 + S1)/(Delta*Sigma) - 8.*s12*s34*(2.*s12*s34 - S1*(S1 + S2) + s12*(S2 - S1))/(Delta*Delta*Sigma))) + (COM(0.,1.)/(4.*M_PI*M_PI))*((2.*s12*s34 - S1*(S1 + S2))/(Delta*Sigma)); } COM G10(HLV const & k1, HLV const & k2, HLV const & kh, double mq){ HLV q2 = -(k1+k2+kh); double Delta, S1, S2, s12, s34; S1 = 2.*k1.dot(q2); S2 = 2.*k2.dot(q2); s12 = 2.*k1.dot(k2); s34 = q2.m2(); Delta = s12*s34 - S1*S2; return LOOPRWFACTOR* (-D0DD(k1, q2, k2, mq)*(1. + 4.*S1*mq*mq/Delta) + ((s12 + S1)*C0DD(k1, k2 + q2, mq) - S1*C0DD(k1, q2, mq))*(1./Delta + 4.*S1*mq*mq/Delta/Delta) - ((s12 + S2)*C0DD(k1 + q2, k2, mq) - S2*C0DD(k2, q2, mq))*(1./Delta + 4.*S1*mq*mq/Delta/Delta) + (B0DD(k1 + k2 + q2, mq) - B0DD(k1 + q2, mq))*4.*(s34 + S1)/(Delta*(s12 + S2)) + (B0DD(q2, mq) - B0DD(k2 + q2, mq))*4.*s34/(Delta*S2)); } COM H1DD(HLV const & k1, HLV const & k2, HLV const & kh, double mq){ return E1(k1,k2,kh,mq)+F1(k1,k2,kh,mq)+G1(k1,k2,kh,mq); } COM H4DD(HLV const & k1, HLV const & k2, HLV const & kh, double mq){ return E4(k1,k2,kh,mq)+F4(k1,k2,kh,mq)+G4(k1,k2,kh,mq); } COM H10DD(HLV const & k1, HLV const & k2, HLV const & kh, double mq){ return E10(k1,k2,kh,mq)+F10(k1,k2,kh,mq)+G10(k1,k2,kh,mq); } COM H2DD(HLV const & k1, HLV const & k2, HLV const & kh, double mq){ return -1.*H1DD(k2,k1,kh,mq); } COM H5DD(HLV const & k1, HLV const & k2, HLV const & kh, double mq){ return -1.*H4DD(k2,k1,kh,mq); } COM H12DD(HLV const & k1, HLV const & k2, HLV const & kh, double mq){ return -1.*H10DD(k2,k1,kh,mq); } HLV parity_flip(HLV const & p){ HLV flippedVector; flippedVector.setE(p.e()); flippedVector.setX(-p.x()); flippedVector.setY(-p.y()); flippedVector.setZ(-p.z()); return flippedVector; } template COM jh_j( HLV const & pa, HLV const & p1, HLV const & pb, HLV const & p2, HLV const & ph1, HLV const & ph2, double const mq ) { return (pa.z() > 0)? 
jh_j_forward(pa, p1, pb, p2, ph1, ph2, mq): jh_j_backward(pa, p1, pb, p2, ph1, ph2, mq) ; } template COM amp_jh_j( HLV const & pa, HLV const & p1, HLV const & pb, HLV const & p2, HLV const & ph1, HLV const & ph2, double const mq, bool const include_bottom, double const mq2, double const vev ) { COM res = 4.*mq*mq/vev*jh_j(pa, p1, pb, p2, ph1, ph2, mq); if(include_bottom) { res += 4.*mq2*mq2/vev*jh_j(pa, p1, pb, p2, ph1, ph2, mq2); } return res; } // sum over jh_j helicity amplitudes squared with + incoming gluon double ampsq_sum_jh_j( HLV const & pa, HLV const & p1, HLV const & pb, HLV const & p2, HLV const & ph1, HLV const & ph2, double const mq, bool const include_bottom, double const mq2, double const vev ) { using helicity::plus; using helicity::minus; using std::norm; const COM appp = amp_jh_j( pa, p1, pb, p2, ph1, ph2, mq, include_bottom, mq2, vev ); const COM appm = amp_jh_j( pa, p1, pb, p2, ph1, ph2, mq, include_bottom, mq2, vev ); const COM apmp = amp_jh_j( pa, p1, pb, p2, ph1, ph2, mq, include_bottom, mq2, vev ); const COM apmm = amp_jh_j( pa, p1, pb, p2, ph1, ph2, mq, include_bottom, mq2, vev ); return norm(appp) + norm(appm) + norm(apmp) + norm(apmm); } } // namespace // Higgs emitted close to gluon with full mt effects. double ME_Houtside_gq( HLV const & p1out, HLV const & p1in, HLV const & p2out, HLV const & p2in, HLV const & pH, const double mq, const bool include_bottom, const double mq2, const double vev ){ using helicity::plus; using helicity::minus; const auto ph = split_into_lightlike(pH); assert(ph.first.e() > 0); assert(ph.second.e() > 0); // incoming gluon with + helicity const double ME_plus = ampsq_sum_jh_j( p1in, p1out, p2in, p2out, ph.first, ph.second, mq, include_bottom, mq2, vev ); // incoming gluon with - helicity const double ME_minus = ampsq_sum_jh_j( parity_flip(p1in), parity_flip(p1out), parity_flip(p2in), parity_flip(p2out), parity_flip(ph.first), parity_flip(ph.second), mq, include_bottom, mq2, vev ); const double prop = m2(p1in - p1out - pH); return (ME_plus + ME_minus)/(prop*prop); } #endif // HEJ_BUILD_WITH_QCDLOOP double C2gHgm(HLV const & p2, HLV const & p1, HLV const & pH, double vev){ const double A=1./(3.*M_PI*vev); // Implements Eq. (4.22) in hep-ph/0301013 with modifications to incoming plus momenta double s12 = NAN; double p1p = NAN; double p2p = NAN; COM p1perp; COM p3perp; COM phperp; // Determine first whether this is the case p1p\sim php>>p3p or the opposite s12=p1.invariantMass2(-p2); if (p2.pz()>0.) { // case considered in hep-ph/0301013 p1p=p1.plus(); p2p=p2.plus(); } else { // opposite case p1p=p1.minus(); p2p=p2.minus(); } p1perp=p1.px()+COM(0,1)*p1.py(); phperp=pH.px()+COM(0,1)*pH.py(); p3perp=-(p1perp+phperp); COM temp=COM(0,1)*A/(2.*s12)*(p2p/p1p*conj(p1perp)*p3perp +p1p/p2p*p1perp*conj(p3perp)); temp=temp*conj(temp); return temp.real(); } double C2gHgp(HLV const & p2, HLV const & p1, HLV const & pH, double vev){ const double A=1./(3.*M_PI*vev); // Implements Eq. (4.23) in hep-ph/0301013 double s12 = NAN; double php = NAN; double p1p = NAN; double phm = NAN; COM p1perp; COM p3perp; COM phperp; // Determine first whether this is the case p1p\sim php>>p3p or the opposite s12=p1.invariantMass2(-p2); if (p2.pz()>0.) 
{ // case considered in hep-ph/0301013 php=pH.plus(); phm=pH.minus(); p1p=p1.plus(); } else { // opposite case php=pH.minus(); phm=pH.plus(); p1p=p1.minus(); } p1perp=p1.px()+COM(0,1)*p1.py(); phperp=pH.px()+COM(0,1)*pH.py(); p3perp=-(p1perp+phperp); COM temp=-COM(0,1)*A/(2.*s12)*( conj(p1perp*p3perp)*pow(php/p1p,2)/(1.+php/p1p) +s12*(pow(conj(phperp),2)/(pow(abs(phperp),2)+p1p*phm) -pow(conj(p3perp)+(1.+php/p1p)*conj(p1perp),2) /((1.+php/p1p)*(pH.m2()+2.*p1.dot(pH)))) ); temp=temp*conj(temp); return temp.real(); } double C2qHqm(HLV const & p2, HLV const & p1, HLV const & pH, double vev){ const double A=1./(3.*M_PI*vev); // Implements Eq. (4.21) in hep-ph/0301013 double s12 = NAN; double p2p = NAN; double p1p = NAN; COM p1perp; COM p3perp; COM phperp; // Determine first whether this is the case p1p\sim php>>p3p or the opposite s12=p1.invariantMass2(-p2); if (p2.pz()>0.) { // case considered in hep-ph/0301013 p2p=p2.plus(); p1p=p1.plus(); } else { // opposite case p2p=p2.minus(); p1p=p1.minus(); } p1perp=p1.px()+COM(0,1)*p1.py(); phperp=pH.px()+COM(0,1)*pH.py(); p3perp=-(p1perp+phperp); COM temp=A/(2.*s12)*( sqrt(p2p/p1p)*p3perp*conj(p1perp) +sqrt(p1p/p2p)*p1perp*conj(p3perp) ); temp=temp*conj(temp); return temp.real(); } } // namespace currents } // namespace HEJ diff --git a/src/MatrixElement.cc b/src/MatrixElement.cc index 594bf6a..39dc861 100644 --- a/src/MatrixElement.cc +++ b/src/MatrixElement.cc @@ -1,2141 +1,2136 @@ /** * \authors The HEJ collaboration (see AUTHORS for details) * \date 2019-2020 * \copyright GPLv2 or later */ #include "HEJ/MatrixElement.hh" #include #include #include #include #include #include #include #include #include -#include "CLHEP/Vector/LorentzVector.h" - #include "fastjet/PseudoJet.hh" #include "HEJ/ConfigFlags.hh" #include "HEJ/Constants.hh" #include "HEJ/EWConstants.hh" #include "HEJ/Event.hh" #include "HEJ/HiggsCouplingSettings.hh" #include "HEJ/Hjets.hh" +#include "HEJ/LorentzVector.hh" #include "HEJ/PDG_codes.hh" #include "HEJ/Particle.hh" #include "HEJ/Wjets.hh" #include "HEJ/Zjets.hh" #include "HEJ/event_types.hh" #include "HEJ/exceptions.hh" #include "HEJ/jets.hh" #include "HEJ/utility.hh" namespace HEJ { double MatrixElement::omega0( double alpha_s, double mur, fastjet::PseudoJet const & q_j ) const { const double lambda = param_.regulator_lambda; const double result = - alpha_s*N_C/M_PI*std::log(q_j.perp2()/(lambda*lambda)); if(! param_.log_correction) return result; return ( 1. + alpha_s/(4.*M_PI)*BETA0*std::log(mur*mur/(q_j.perp()*lambda)) )*result; } Weights MatrixElement::operator()(Event const & event) const { std::vector tree_kin_part=tree_kin(event); std::vector virtual_part=virtual_corrections(event); if(tree_kin_part.size() != virtual_part.size()) { throw std::logic_error("tree and virtuals have different sizes"); } Weights sum = Weights{0., std::vector(event.variations().size(), 0.)}; for(size_t i=0; i tree_kin_part=tree_kin(event); double sum = 0.; for(double i : tree_kin_part) { sum += i; } return tree_param(event)*sum; } Weights MatrixElement::tree_param(Event const & event) const { if(! 
is_resummable(event.type())) { return Weights{0., std::vector(event.variations().size(), 0.)}; } Weights result; // only compute once for each renormalisation scale std::unordered_map known; result.central = tree_param(event, event.central().mur); known.emplace(event.central().mur, result.central); for(auto const & var: event.variations()) { const auto ME_it = known.find(var.mur); if(ME_it == end(known)) { const double wt = tree_param(event, var.mur); result.variations.emplace_back(wt); known.emplace(var.mur, wt); } else { result.variations.emplace_back(ME_it->second); } } return result; } std::vector MatrixElement::virtual_corrections(Event const & event) const { if(! is_resummable(event.type())) { return {Weights{0., std::vector(event.variations().size(), 0.)}}; } // only compute once for each renormalisation scale std::unordered_map > known_vec; std::vector central_vec=virtual_corrections(event, event.central().mur); known_vec.emplace(event.central().mur, central_vec); for(auto const & var: event.variations()) { const auto ME_it = known_vec.find(var.mur); if(ME_it == end(known_vec)) { known_vec.emplace(var.mur, virtual_corrections(event, var.mur)); } } // At this stage known_vec contains one vector of virtual corrections for each mur value // Now put this into a vector of Weights std::vector result_vec; for(size_t i=0; isecond.at(i)); } result_vec.emplace_back(result); } return result_vec; } double MatrixElement::virtual_corrections_W( Event const & event, const double mur, Particle const & WBoson ) const{ auto const & in = event.incoming(); const auto partons = filter_partons(event.outgoing()); fastjet::PseudoJet const & pa = in.front().p; #ifndef NDEBUG fastjet::PseudoJet const & pb = in.back().p; double const norm = (in.front().p + in.back().p).E(); #endif assert(std::is_sorted(partons.begin(), partons.end(), rapidity_less{})); assert(partons.size() >= 2); assert(pa.pz() < pb.pz()); fastjet::PseudoJet q = pa - partons[0].p; std::size_t first_idx = 0; std::size_t last_idx = partons.size() - 1; #ifndef NDEBUG bool wc = true; #endif bool wqq = false; // With extremal qqx or unordered gluon outside the extremal // partons then it is not part of the FKL ladder and does not // contribute to the virtual corrections. W emitted from the // most backward leg must be taken into account in t-channel if (event.type() == event_type::unob) { q -= partons[1].p; ++first_idx; if (in[0].type != partons[1].type ){ q -= WBoson.p; #ifndef NDEBUG wc=false; #endif } } else if (event.type() == event_type::qqxexb) { q -= partons[1].p; ++first_idx; if (std::abs(partons[0].type) != std::abs(partons[1].type)){ q -= WBoson.p; #ifndef NDEBUG wc=false; #endif } } else { if(event.type() == event_type::unof || event.type() == event_type::qqxexf){ --last_idx; } if (in[0].type != partons[0].type ){ q -= WBoson.p; #ifndef NDEBUG wc=false; #endif } } std::size_t first_idx_qqx = last_idx; std::size_t last_idx_qqx = last_idx; //if qqxMid event, virtual correction do not occur between //qqx pair. 
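// In all cases the virtual correction keeps the usual HEJ form
//   exp( sum_j omega0(alpha_s, mur, q_j) * (y_{j+1} - y_j) ),
// with the t-channel momentum q_j updated after each emission. The loops
// below therefore run in two pieces, skipping the rapidity span of a
// central qqbar pair, and the W momentum is subtracted from q where the
// W is emitted: from the backward leg before the first loop, from a
// central qqbar pair between the two loops.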
if(event.type() == event_type::qqxmid){ const auto backquark = std::find_if( begin(partons) + 1, end(partons) - 1 , [](Particle const & s){ return (s.type != pid::gluon); } ); if(backquark == end(partons) || (backquark+1)->type==pid::gluon) return 0; if(std::abs(backquark->type) != std::abs((backquark+1)->type)) { wqq=true; #ifndef NDEBUG wc=false; #endif } last_idx = std::distance(begin(partons), backquark); first_idx_qqx = last_idx+1; } double exponent = 0; const double alpha_s = alpha_s_(mur); for(std::size_t j = first_idx; j < last_idx; ++j){ exponent += omega0(alpha_s, mur, q)*( partons[j+1].rapidity() - partons[j].rapidity() ); q -=partons[j+1].p; } // End Loop one if (last_idx != first_idx_qqx) q -= partons[last_idx+1].p; if (wqq) q -= WBoson.p; for(std::size_t j = first_idx_qqx; j < last_idx_qqx; ++j){ exponent += omega0(alpha_s, mur, q)*( partons[j+1].rapidity() - partons[j].rapidity() ); q -= partons[j+1].p; } #ifndef NDEBUG if (wc) q -= WBoson.p; assert( nearby(q, -1*pb, norm) || is_AWZH_boson(partons.back().type) || event.type() == event_type::unof || event.type() == event_type::qqxexf ); #endif return std::exp(exponent); } std::vector MatrixElement::virtual_corrections_Z_qq( Event const & event, const double mur, Particle const & ZBoson ) const{ auto const & in = event.incoming(); const auto partons = filter_partons(event.outgoing()); fastjet::PseudoJet const & pa = in.front().p; #ifndef NDEBUG fastjet::PseudoJet const & pb = in.back().p; #endif assert(std::is_sorted(partons.begin(), partons.end(), rapidity_less{})); assert(partons.size() >= 2); assert(pa.pz() < pb.pz()); fastjet::PseudoJet q_t = pa - partons[0].p - ZBoson.p; fastjet::PseudoJet q_b = pa - partons[0].p; size_t first_idx = 0; size_t last_idx = partons.size() - 1; // Unordered gluon does not contribute to the virtual corrections if (event.type() == event_type::unob) { // Gluon is partons[0] and is already subtracted // partons[1] is the backward quark q_t -= partons[1].p; q_b -= partons[1].p; ++first_idx; } else if (event.type() == event_type::unof) { // End sum at forward quark --last_idx; } double sum_top=0.; double sum_bot=0.; double sum_mix=0.; const double alpha_s = alpha_s_(mur); for(size_t j = first_idx; j < last_idx; ++j){ const double dy = partons[j+1].rapidity() - partons[j].rapidity(); const double tmp_top = omega0(alpha_s, mur, q_t)*dy; const double tmp_bot = omega0(alpha_s, mur, q_b)*dy; sum_top += tmp_top; sum_bot += tmp_bot; sum_mix += (tmp_top + tmp_bot) / 2.; q_t -= partons[j+1].p; q_b -= partons[j+1].p; } return {exp(sum_top), exp(sum_bot), exp(sum_mix)}; } double MatrixElement::virtual_corrections_Z_qg( Event const & event, const double mur, Particle const & ZBoson, const bool is_gq_event ) const{ auto const & in = event.incoming(); const auto partons = filter_partons(event.outgoing()); fastjet::PseudoJet const & pa = in.front().p; #ifndef NDEBUG fastjet::PseudoJet const & pb = in.back().p; #endif assert(std::is_sorted(partons.begin(), partons.end(), rapidity_less{})); assert(partons.size() >= 2); assert(pa.pz() < pb.pz()); // If this is a gq event, don't subtract the Z momentum from first q fastjet::PseudoJet q = (is_gq_event ? 
pa - partons[0].p : pa - partons[0].p - ZBoson.p); size_t first_idx = 0; size_t last_idx = partons.size() - 1; // Unordered gluon does not contribute to the virtual corrections if (event.type() == event_type::unob) { // Gluon is partons[0] and is already subtracted // partons[1] is the backward quark q -= partons[1].p; ++first_idx; } else if (event.type() == event_type::unof) { // End sum at forward quark --last_idx; } double sum=0.; const double alpha_s = alpha_s_(mur); for(size_t j = first_idx; j < last_idx; ++j){ sum += omega0(alpha_s, mur, q)*(partons[j+1].rapidity() - partons[j].rapidity()); q -= partons[j+1].p; } return exp(sum); } std::vector MatrixElement::virtual_corrections( Event const & event, const double mur ) const{ auto const & in = event.incoming(); auto const & out = event.outgoing(); fastjet::PseudoJet const & pa = in.front().p; #ifndef NDEBUG fastjet::PseudoJet const & pb = in.back().p; double const norm = (in.front().p + in.back().p).E(); #endif const auto AWZH_boson = std::find_if( begin(out), end(out), [](Particle const & p){ return is_AWZH_boson(p); } ); if(AWZH_boson != end(out) && std::abs(AWZH_boson->type) == pid::Wp){ return {virtual_corrections_W(event, mur, *AWZH_boson)}; } if(AWZH_boson != end(out) && AWZH_boson->type == pid::Z_photon_mix){ if(is_gluon(in.back().type)){ // This is a qg event return {virtual_corrections_Z_qg(event, mur, *AWZH_boson, false)}; } if(is_gluon(in.front().type)){ // This is a gq event return {virtual_corrections_Z_qg(event, mur, *AWZH_boson, true)}; } // This is a qq event return virtual_corrections_Z_qq(event, mur, *AWZH_boson); } assert(std::is_sorted(out.begin(), out.end(), rapidity_less{})); assert(out.size() >= 2); assert(pa.pz() < pb.pz()); fastjet::PseudoJet q = pa - out[0].p; std::size_t first_idx = 0; std::size_t last_idx = out.size() - 1; // if there is a Higgs boson, extremal qqx or unordered gluon // outside the extremal partons then it is not part of the FKL // ladder and does not contribute to the virtual corrections if((out.front().type == pid::Higgs) || event.type() == event_type::unob || event.type() == event_type::qqxexb){ q -= out[1].p; ++first_idx; } if((out.back().type == pid::Higgs) || event.type() == event_type::unof || event.type() == event_type::qqxexf){ --last_idx; } std::size_t first_idx_qqx = last_idx; std::size_t last_idx_qqx = last_idx; //if qqxMid event, virtual correction do not occur between //qqx pair. if(event.type() == event_type::qqxmid){ const auto backquark = std::find_if( begin(out) + 1, end(out) - 1 , [](Particle const & s){ return (s.type != pid::gluon && is_parton(s.type)); } ); if(backquark == end(out) || (backquark+1)->type==pid::gluon) return {0.}; last_idx = std::distance(begin(out), backquark); first_idx_qqx = last_idx+1; } double exponent = 0; const double alpha_s = alpha_s_(mur); for(std::size_t j = first_idx; j < last_idx; ++j){ exponent += omega0(alpha_s, mur, q)*( out[j+1].rapidity() - out[j].rapidity() ); q -= out[j+1].p; } if (last_idx != first_idx_qqx) q -= out[last_idx+1].p; for(std::size_t j = first_idx_qqx; j < last_idx_qqx; ++j){ exponent += omega0(alpha_s, mur, q)*( out[j+1].rapidity() - out[j].rapidity() ); q -= out[j+1].p; } assert( nearby(q, -1*pb, norm) || out.back().type == pid::Higgs || event.type() == event_type::unof || event.type() == event_type::qqxexf ); return {std::exp(exponent)}; } namespace { //! 
Lipatov vertex for partons emitted into extremal jets CLHEP::HepLorentzVector CLipatov( CLHEP::HepLorentzVector const & qav, CLHEP::HepLorentzVector const & qbv, CLHEP::HepLorentzVector const & p1, CLHEP::HepLorentzVector const & p2 ) { const CLHEP::HepLorentzVector p5 = qav-qbv; const CLHEP::HepLorentzVector CL = -(qav+qbv) + p1*(qav.m2()/p5.dot(p1) + 2.*p5.dot(p2)/p1.dot(p2)) - p2*(qbv.m2()/p5.dot(p2) + 2.*p5.dot(p1)/p1.dot(p2)); return CL; } double C2Lipatov( CLHEP::HepLorentzVector const & qav, CLHEP::HepLorentzVector const & qbv, CLHEP::HepLorentzVector const & p1, CLHEP::HepLorentzVector const & p2 ){ const CLHEP::HepLorentzVector CL = CLipatov(qav, qbv, p1, p2); return -CL.dot(CL); } //! Lipatov vertex with soft subtraction for partons emitted into extremal jets double C2Lipatovots( CLHEP::HepLorentzVector const & qav, CLHEP::HepLorentzVector const & qbv, CLHEP::HepLorentzVector const & p1, CLHEP::HepLorentzVector const & p2, const double lambda ) { const double Cls=(C2Lipatov(qav, qbv, p1, p2)/(qav.m2()*qbv.m2())); const double kperp=(qav-qbv).perp(); if (kperp>lambda) return Cls; return Cls-4./(kperp*kperp); } double C2Lipatov_Mix( CLHEP::HepLorentzVector const & qav_t, CLHEP::HepLorentzVector const & qbv_t, CLHEP::HepLorentzVector const & qav_b, CLHEP::HepLorentzVector const & qbv_b, CLHEP::HepLorentzVector const & p1, CLHEP::HepLorentzVector const & p2 ) { const CLHEP::HepLorentzVector CL_t = CLipatov(qav_t, qbv_t, p1, p2); const CLHEP::HepLorentzVector CL_b = CLipatov(qav_b, qbv_b, p1, p2); return -CL_t.dot(CL_b); } double C2Lipatovots_Mix( CLHEP::HepLorentzVector const & qav_t, CLHEP::HepLorentzVector const & qbv_t, CLHEP::HepLorentzVector const & qav_b, CLHEP::HepLorentzVector const & qbv_b, CLHEP::HepLorentzVector const & p1, CLHEP::HepLorentzVector const & p2, const double lambda ) { const double Cls = C2Lipatov_Mix(qav_t, qbv_t, qav_b, qbv_b, p1, p2) / sqrt(qav_t.m2() * qbv_t.m2() * qav_b.m2() * qbv_b.m2()); const double kperp = (qav_t - qbv_t).perp(); if (kperp > lambda){ return Cls; } return Cls - 4.0 / (kperp * kperp); } CLHEP::HepLorentzVector CLipatov( CLHEP::HepLorentzVector const & qav, CLHEP::HepLorentzVector const & qbv, CLHEP::HepLorentzVector const & pim, CLHEP::HepLorentzVector const & pip, CLHEP::HepLorentzVector const & pom, CLHEP::HepLorentzVector const & pop ){ const CLHEP::HepLorentzVector p5 = qav-qbv; const CLHEP::HepLorentzVector CL = -(qav+qbv) + qav.m2()*(1./p5.dot(pip)*pip + 1./p5.dot(pop)*pop)/2. - qbv.m2()*(1./p5.dot(pim)*pim + 1./p5.dot(pom)*pom)/2. + ( pip*(p5.dot(pim)/pip.dot(pim) + p5.dot(pom)/pip.dot(pom)) + pop*(p5.dot(pim)/pop.dot(pim) + p5.dot(pom)/pop.dot(pom)) - pim*(p5.dot(pip)/pip.dot(pim) + p5.dot(pop)/pop.dot(pim)) - pom*(p5.dot(pip)/pip.dot(pom) + p5.dot(pop)/pop.dot(pom)) )/2.; return CL; } //! Lipatov vertex double C2Lipatov( // B CLHEP::HepLorentzVector const & qav, CLHEP::HepLorentzVector const & qbv, CLHEP::HepLorentzVector const & pim, CLHEP::HepLorentzVector const & pip, CLHEP::HepLorentzVector const & pom, CLHEP::HepLorentzVector const & pop ){ const CLHEP::HepLorentzVector CL = CLipatov(qav, qbv, pim, pip, pom, pop); return -CL.dot(CL); } //! 
Lipatov vertex with soft subtraction double C2Lipatovots( CLHEP::HepLorentzVector const & qav, CLHEP::HepLorentzVector const & qbv, CLHEP::HepLorentzVector const & pa, CLHEP::HepLorentzVector const & pb, CLHEP::HepLorentzVector const & p1, CLHEP::HepLorentzVector const & p2, const double lambda ) { const double Cls=(C2Lipatov(qav, qbv, pa, pb, p1, p2)/(qav.m2()*qbv.m2())); const double kperp=(qav-qbv).perp(); if (kperp>lambda) return Cls; return Cls-4./(kperp*kperp); } double C2Lipatov_Mix( CLHEP::HepLorentzVector const & qav_t, CLHEP::HepLorentzVector const & qbv_t, CLHEP::HepLorentzVector const & qav_b, CLHEP::HepLorentzVector const & qbv_b, CLHEP::HepLorentzVector const & pim, CLHEP::HepLorentzVector const & pip, CLHEP::HepLorentzVector const & pom, CLHEP::HepLorentzVector const & pop ) { const CLHEP::HepLorentzVector CL_t = CLipatov(qav_t, qbv_t, pim, pip, pom, pop); const CLHEP::HepLorentzVector CL_b = CLipatov(qav_b, qbv_b, pim, pip, pom, pop); return -CL_t.dot(CL_b); } double C2Lipatovots_Mix( CLHEP::HepLorentzVector const & qav_t, CLHEP::HepLorentzVector const & qbv_t, CLHEP::HepLorentzVector const & qav_b, CLHEP::HepLorentzVector const & qbv_b, CLHEP::HepLorentzVector const & pa, CLHEP::HepLorentzVector const & pb, CLHEP::HepLorentzVector const & p1, CLHEP::HepLorentzVector const & p2, const double lambda ) { const double Cls = C2Lipatov_Mix(qav_t, qbv_t, qav_b, qbv_b, pa, pb, p1, p2) / sqrt(qav_t.m2() * qbv_t.m2() * qav_b.m2() * qbv_b.m2()); const double kperp = (qav_t - qbv_t).perp(); if (kperp > lambda) { return Cls; } return Cls - 4.0 / (kperp * kperp); } /** Matrix element squared for tree-level current-current scattering * @param aptype Particle a PDG ID * @param bptype Particle b PDG ID * @param pg Unordered gluon momentum * @param pn Particle n Momentum * @param pb Particle b Momentum * @param p1 Particle 1 Momentum * @param pa Particle a Momentum * @returns ME Squared for Tree-Level Current-Current Scattering * * @note The unof contribution can be calculated by reversing the argument ordering. */ double ME_uno_current( ParticleID aptype, ParticleID bptype, CLHEP::HepLorentzVector const & pg, CLHEP::HepLorentzVector const & pn, CLHEP::HepLorentzVector const & pb, CLHEP::HepLorentzVector const & p1, CLHEP::HepLorentzVector const & pa ){ using namespace currents; assert(aptype!=pid::gluon); // aptype cannot be gluon if (bptype==pid::gluon) { if (is_quark(aptype)) return ME_unob_qg(pg,p1,pa,pn,pb); return ME_unob_qbarg(pg,p1,pa,pn,pb); } if (is_antiquark(bptype)) { if (is_quark(aptype)) return ME_unob_qQbar(pg,p1,pa,pn,pb); return ME_unob_qbarQbar(pg,p1,pa,pn,pb); } //bptype == quark if (is_quark(aptype)) return ME_unob_qQ(pg,p1,pa,pn,pb); return ME_unob_qbarQ(pg,p1,pa,pn,pb); } /** Matrix element squared for tree-level current-current scattering * @param bptype Particle b PDG ID * @param pgin Incoming gluon momentum * @param pq Quark from splitting Momentum * @param pqbar Anti-quark from splitting Momentum * @param pn Particle n Momentum * @param pb Particle b Momentum * @param swap_q_qx Boolean. Ordering of qqbar pair. False: pqbar extremal. * @returns ME Squared for Tree-Level Current-Current Scattering * * @note The qqxf contribution can be calculated by reversing the argument ordering. 
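 *
 * A minimal, purely illustrative call for a backward qqbar pair with a
 * gluon b leg (momenta prepared elsewhere, pqbar extremal, so
 * swap_q_qx == false) might look like
 * \code
 *   const double me2 =
 *     ME_qqx_current(pid::gluon, pgin, pq, pqbar, pn, pb, false);
 * \endcode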
*/ double ME_qqx_current( ParticleID bptype, CLHEP::HepLorentzVector const & pgin, CLHEP::HepLorentzVector const & pq, CLHEP::HepLorentzVector const & pqbar, CLHEP::HepLorentzVector const & pn, CLHEP::HepLorentzVector const & pb, bool const swap_q_qx ){ using namespace currents; if (bptype==pid::gluon) { if (swap_q_qx) // pq extremal return ME_Exqqx_qqbarg(pgin,pq,pqbar,pn,pb); // pqbar extremal return ME_Exqqx_qbarqg(pgin,pq,pqbar,pn,pb); } // b leg quark line if (swap_q_qx) //extremal pq return ME_Exqqx_qqbarQ(pgin,pq,pqbar,pn,pb); return ME_Exqqx_qbarqQ(pgin,pq,pqbar,pn,pb); } /* \brief Matrix element squared for central qqx tree-level current-current * scattering * * @param aptype Particle a PDG ID * @param bptype Particle b PDG ID * @param nabove Number of gluons emitted before central qqxpair * @param nbelow Number of gluons emitted after central qqxpair * @param pa Initial state a Momentum * @param pb Initial state b Momentum * @param pq Final state qbar Momentum * @param pqbar Final state q Momentum * @param partons Vector of all outgoing partons * @returns ME Squared for qqxmid Tree-Level Current-Current Scattering */ double ME_qqxmid_current( ParticleID aptype, ParticleID bptype, int nabove, CLHEP::HepLorentzVector const & pa, CLHEP::HepLorentzVector const & pb, CLHEP::HepLorentzVector const & pq, CLHEP::HepLorentzVector const & pqbar, std::vector const & partons ){ using namespace currents; // CAM factors for the qqx amps, and qqbar ordering (default, pq backwards) const bool swap_q_qx=pqbar.rapidity() < pq.rapidity(); double wt=1.; if (aptype==pid::gluon) wt*=K_g(partons.front(),pa)/C_F; if (bptype==pid::gluon) wt*=K_g(partons.back(),pb)/C_F; return wt*ME_Cenqqx_qq(pa, pb, partons, is_antiquark(bptype), is_antiquark(aptype), swap_q_qx, nabove); } /** Matrix element squared for tree-level current-current scattering * @param aptype Particle a PDG ID * @param bptype Particle b PDG ID * @param pn Particle n Momentum * @param pb Particle b Momentum * @param p1 Particle 1 Momentum * @param pa Particle a Momentum * @returns ME Squared for Tree-Level Current-Current Scattering */ double ME_current( ParticleID aptype, ParticleID bptype, CLHEP::HepLorentzVector const & pn, CLHEP::HepLorentzVector const & pb, CLHEP::HepLorentzVector const & p1, CLHEP::HepLorentzVector const & pa ){ using namespace currents; if (aptype==pid::gluon && bptype==pid::gluon) { return ME_gg(pn,pb,p1,pa); } if (aptype==pid::gluon && bptype!=pid::gluon) { if (is_quark(bptype)) return ME_qg(pn,pb,p1,pa); return ME_qbarg(pn,pb,p1,pa); } if (bptype==pid::gluon && aptype!=pid::gluon) { if (is_quark(aptype)) return ME_qg(p1,pa,pn,pb); return ME_qbarg(p1,pa,pn,pb); } // they are both quark if (is_quark(bptype)) { if (is_quark(aptype)) return ME_qQ(pn,pb,p1,pa); return ME_qQbar(pn,pb,p1,pa); } if (is_quark(aptype)) return ME_qQbar(p1,pa,pn,pb); return ME_qbarQbar(pn,pb,p1,pa); } /** Matrix element squared for tree-level current-current scattering With W+Jets * @param aptype Particle a PDG ID * @param bptype Particle b PDG ID * @param pn Particle n Momentum * @param pb Particle b Momentum * @param p1 Particle 1 Momentum * @param pa Particle a Momentum * @param wc Boolean. True->W Emitted from b. 
Else; emitted from leg a * @returns ME Squared for Tree-Level Current-Current Scattering */ double ME_W_current( ParticleID aptype, ParticleID bptype, CLHEP::HepLorentzVector const & pn, CLHEP::HepLorentzVector const & pb, CLHEP::HepLorentzVector const & p1, CLHEP::HepLorentzVector const & pa, CLHEP::HepLorentzVector const & plbar, CLHEP::HepLorentzVector const & pl, bool const wc, ParticleProperties const & Wprop ){ using namespace currents; // We know it cannot be gg incoming. assert(!(aptype==pid::gluon && bptype==pid::gluon)); if (aptype==pid::gluon && bptype!=pid::gluon) { if (is_quark(bptype)) return ME_W_qg(pn,plbar,pl,pb,p1,pa,Wprop); return ME_W_qbarg(pn,plbar,pl,pb,p1,pa,Wprop); } if (bptype==pid::gluon && aptype!=pid::gluon) { if (is_quark(aptype)) return ME_W_qg(p1,plbar,pl,pa,pn,pb,Wprop); return ME_W_qbarg(p1,plbar,pl,pa,pn,pb,Wprop); } // they are both quark if (wc){ // emission off b, (first argument pbout) if (is_quark(bptype)) { if (is_quark(aptype)) return ME_W_qQ(pn,plbar,pl,pb,p1,pa,Wprop); return ME_W_qQbar(pn,plbar,pl,pb,p1,pa,Wprop); } if (is_quark(aptype)) return ME_W_qbarQ(pn,plbar,pl,pb,p1,pa,Wprop); return ME_W_qbarQbar(pn,plbar,pl,pb,p1,pa,Wprop); } // emission off a, (first argument paout) if (is_quark(aptype)) { if (is_quark(bptype)) return ME_W_qQ(p1,plbar,pl,pa,pn,pb,Wprop); return ME_W_qQbar(p1,plbar,pl,pa,pn,pb,Wprop); } // a is anti-quark if (is_quark(bptype)) return ME_W_qbarQ(p1,plbar,pl,pa,pn,pb,Wprop); return ME_W_qbarQbar(p1,plbar,pl,pa,pn,pb,Wprop); } /** Matrix element squared for backwards uno tree-level current-current * scattering With W+Jets * * @param aptype Particle a PDG ID * @param bptype Particle b PDG ID * @param pn Particle n Momentum * @param pb Particle b Momentum * @param p1 Particle 1 Momentum * @param pa Particle a Momentum * @param pg Unordered gluon momentum * @param wc Boolean. True->W Emitted from b. Else; emitted from leg a * @returns ME Squared for unob Tree-Level Current-Current Scattering * * @note The unof contribution can be calculated by reversing the argument ordering. */ double ME_W_uno_current( ParticleID aptype, ParticleID bptype, CLHEP::HepLorentzVector const & pn, CLHEP::HepLorentzVector const & pb, CLHEP::HepLorentzVector const & p1, CLHEP::HepLorentzVector const & pa, CLHEP::HepLorentzVector const & pg, CLHEP::HepLorentzVector const & plbar, CLHEP::HepLorentzVector const & pl, bool const wc, ParticleProperties const & Wprop ){ using namespace currents; // we know they are not both gluons assert(bptype != pid::gluon || aptype != pid::gluon); if (bptype == pid::gluon && aptype != pid::gluon) { // b gluon => W emission off a if (is_quark(aptype)) return ME_Wuno_qg(p1,pa,pn,pb,pg,plbar,pl,Wprop); return ME_Wuno_qbarg(p1,pa,pn,pb,pg,plbar,pl,Wprop); } // they are both quark if (wc) {// emission off b, i.e. b is first current if (is_quark(bptype)){ if (is_quark(aptype)) return ME_W_unob_qQ(p1,pa,pn,pb,pg,plbar,pl,Wprop); return ME_W_unob_qQbar(p1,pa,pn,pb,pg,plbar,pl,Wprop); } if (is_quark(aptype)) return ME_W_unob_qbarQ(p1,pa,pn,pb,pg,plbar,pl,Wprop); return ME_W_unob_qbarQbar(p1,pa,pn,pb,pg,plbar,pl,Wprop); } // wc == false, emission off a, i.e. 
a is first current if (is_quark(aptype)) { if (is_quark(bptype)) //qq return ME_Wuno_qQ(p1,pa,pn,pb,pg,plbar,pl,Wprop); //qqbar return ME_Wuno_qQbar(p1,pa,pn,pb,pg,plbar,pl,Wprop); } // a is anti-quark if (is_quark(bptype)) //qbarq return ME_Wuno_qbarQ(p1,pa,pn,pb,pg,plbar,pl,Wprop); //qbarqbar return ME_Wuno_qbarQbar(p1,pa,pn,pb,pg,plbar,pl,Wprop); } /** \brief Matrix element squared for backward qqx tree-level current-current * scattering With W+Jets * * @param aptype Particle a PDG ID * @param bptype Particle b PDG ID * @param pa Initial state a Momentum * @param pb Initial state b Momentum * @param pq Final state q Momentum * @param pqbar Final state qbar Momentum * @param pn Final state n Momentum * @param plbar Final state anti-lepton momentum * @param pl Final state lepton momentum * @param swap_q_qx Boolean. Ordering of qqbar pair. False: pqbar extremal. * @param wc Boolean. True->W Emitted from b. Else; emitted from leg a * @returns ME Squared for qqxb Tree-Level Current-Current Scattering * * @note calculate forwards qqx contribution by reversing argument ordering. */ double ME_W_qqx_current( ParticleID aptype, ParticleID bptype, CLHEP::HepLorentzVector const & pa, CLHEP::HepLorentzVector const & pb, CLHEP::HepLorentzVector const & pq, CLHEP::HepLorentzVector const & pqbar, CLHEP::HepLorentzVector const & pn, CLHEP::HepLorentzVector const & plbar, CLHEP::HepLorentzVector const & pl, bool const swap_q_qx, bool const wc, ParticleProperties const & Wprop ){ using namespace currents; // With qqbar we could have 2 incoming gluons and W Emission if (aptype==pid::gluon && bptype==pid::gluon) { //a gluon, b gluon gg->qqbarWg // This will be a wqqx emission as there is no other possible W Emission // Site. if (swap_q_qx) return ME_WExqqx_qqbarg(pa, pqbar, plbar, pl, pq, pn, pb, Wprop); return ME_WExqqx_qbarqg(pa, pq, plbar, pl, pqbar, pn, pb, Wprop); } assert(aptype==pid::gluon && bptype!=pid::gluon ); //a gluon => W emission off b leg or qqx if (!wc){ // W Emitted from backwards qqx if (swap_q_qx) return ME_WExqqx_qqbarQ(pa, pqbar, plbar, pl, pq, pn, pb, Wprop); return ME_WExqqx_qbarqQ(pa, pq, plbar, pl, pqbar, pn, pb, Wprop); } // W Must be emitted from forwards leg. return ME_W_Exqqx_QQq(pb, pa, pn, pq, pqbar, plbar, pl, is_antiquark(bptype), Wprop); } /** \brief Matrix element squared for central qqx tree-level current-current * scattering With W+Jets * * @param aptype Particle a PDG ID * @param bptype Particle b PDG ID * @param nabove Number of gluons emitted before central qqx pair * @param nbelow Number of gluons emitted after central qqx pair * @param pa Initial state a Momentum * @param pb Initial state b Momentum * @param pq Final state q Momentum * @param pqbar Final state qbar Momentum * @param partons Vector of all outgoing partons * @param plbar Final state anti-lepton momentum * @param pl Final state lepton momentum * @param wqq Boolean. True signifies the W boson is emitted from the central qqx * @param wc Boolean. wc=true signifies the W boson is emitted from leg b (only relevant if wqq=false).
* @returns ME Squared for qqxmid Tree-Level Current-Current Scattering */ double ME_W_qqxmid_current( ParticleID aptype, ParticleID bptype, int nabove, int nbelow, CLHEP::HepLorentzVector const & pa, CLHEP::HepLorentzVector const & pb, CLHEP::HepLorentzVector const & pq, CLHEP::HepLorentzVector const & pqbar, std::vector const & partons, CLHEP::HepLorentzVector const & plbar, CLHEP::HepLorentzVector const & pl, bool const wqq, bool const wc, ParticleProperties const & Wprop ){ using namespace currents; // CAM factors for the qqx amps, and qqbar ordering (default, pq backwards) const bool swap_q_qx=pqbar.rapidity() < pq.rapidity(); double wt=1.; if (aptype==pid::gluon) wt*=K_g(partons.front(),pa)/C_F; if (bptype==pid::gluon) wt*=K_g(partons.back(),pb)/C_F; if(wqq) return wt*ME_WCenqqx_qq(pa, pb, pl, plbar, partons, is_antiquark(aptype),is_antiquark(bptype), swap_q_qx, nabove, Wprop); return wt*ME_W_Cenqqx_qq(pa, pb, pl, plbar, partons, is_antiquark(aptype), is_antiquark(bptype), swap_q_qx, nabove, nbelow, wc, Wprop); } /** Matrix element squared for tree-level current-current scattering With Z+Jets * @param aptype Particle a PDG ID * @param bptype Particle b PDG ID * @param pn Particle n Momentum * @param pb Particle b Momentum * @param p1 Particle 1 Momentum * @param pa Particle a Momentum * @param plbar Final state positron momentum * @param pl Final state electron momentum * @param Zprop Z properties * @param stw2 Value of sin(theta_w)^2 * @param ctw Value of cos(theta_w) * @returns ME Squared for Tree-Level Current-Current Scattering */ std::vector ME_Z_current( const ParticleID aptype, const ParticleID bptype, CLHEP::HepLorentzVector const & pn, CLHEP::HepLorentzVector const & pb, CLHEP::HepLorentzVector const & p1, CLHEP::HepLorentzVector const & pa, CLHEP::HepLorentzVector const & plbar, CLHEP::HepLorentzVector const & pl, ParticleProperties const & Zprop, const double stw2, const double ctw ){ using namespace currents; // we know they are not both gluons assert(!is_gluon(aptype) || !is_gluon(bptype)); if(is_anyquark(aptype) && is_gluon(bptype)){ // This is a qg event return { ME_Z_qg(pa,pb,p1,pn,plbar,pl,aptype,bptype,Zprop,stw2,ctw) }; } if(is_gluon(aptype) && is_anyquark(bptype)){ // This is a gq event return { ME_Z_qg(pb,pa,pn,p1,plbar,pl,bptype,aptype,Zprop,stw2,ctw) }; } assert(is_anyquark(aptype) && is_anyquark(bptype)); // This is a qq event return ME_Z_qQ(pa,pb,p1,pn,plbar,pl,aptype,bptype,Zprop,stw2,ctw); } /** Matrix element squared for backwards uno tree-level current-current * scattering With Z+Jets * * @param aptype Particle a PDG ID * @param bptype Particle b PDG ID * @param pn Particle n Momentum * @param pb Particle b Momentum * @param p1 Particle 1 Momentum * @param pa Particle a Momentum * @param pg Unordered gluon momentum * @param plbar Final state positron momentum * @param pl Final state electron momentum * @param Zprop Z properties * @param stw2 Value of sin(theta_w)^2 * @param ctw Value of cos(theta_w) * @returns ME Squared for unob Tree-Level Current-Current Scattering * * @note The unof contribution can be calculated by reversing the argument ordering. 
*/ std::vector ME_Z_uno_current( const ParticleID aptype, const ParticleID bptype, CLHEP::HepLorentzVector const & pn, CLHEP::HepLorentzVector const & pb, CLHEP::HepLorentzVector const & p1, CLHEP::HepLorentzVector const & pa, CLHEP::HepLorentzVector const & pg, CLHEP::HepLorentzVector const & plbar, CLHEP::HepLorentzVector const & pl, ParticleProperties const & Zprop, const double stw2, const double ctw ){ using namespace currents; // we know they are not both gluons assert(!is_gluon(aptype) || !is_gluon(bptype)); if (is_anyquark(aptype) && is_gluon(bptype)) { // This is a qg event return { ME_Zuno_qg(pa,pb,pg,p1,pn,plbar,pl,aptype,bptype,Zprop,stw2,ctw) }; } if (is_gluon(aptype) && is_anyquark(bptype)) { // This is a gq event return { ME_Zuno_qg(pb,pa,pg,pn,p1,plbar,pl,bptype,aptype,Zprop,stw2,ctw) }; } assert(is_anyquark(aptype) && is_anyquark(bptype)); // This is a qq event return ME_Zuno_qQ(pa,pb,pg,p1,pn,plbar,pl,aptype,bptype,Zprop,stw2,ctw); } /** \brief Matrix element squared for tree-level current-current scattering with Higgs * @param aptype Particle a PDG ID * @param bptype Particle b PDG ID * @param pn Particle n Momentum * @param pb Particle b Momentum * @param p1 Particle 1 Momentum * @param pa Particle a Momentum * @param qH t-channel momentum before Higgs * @param qHp1 t-channel momentum after Higgs * @returns ME Squared for Tree-Level Current-Current Scattering with Higgs */ double ME_Higgs_current( ParticleID aptype, ParticleID bptype, CLHEP::HepLorentzVector const & pn, CLHEP::HepLorentzVector const & pb, CLHEP::HepLorentzVector const & p1, CLHEP::HepLorentzVector const & pa, CLHEP::HepLorentzVector const & qH, // t-channel momentum before Higgs CLHEP::HepLorentzVector const & qHp1, // t-channel momentum after Higgs double mt, bool include_bottom, double mb, double vev ){ using namespace currents; if (aptype==pid::gluon && bptype==pid::gluon) // gg initial state return ME_H_gg(pn,pb,p1,pa,-qHp1,-qH,mt,include_bottom,mb,vev); if (aptype==pid::gluon&&bptype!=pid::gluon) { if (is_quark(bptype)) return ME_H_qg(pn,pb,p1,pa,-qHp1,-qH,mt,include_bottom,mb,vev)*4./9.; return ME_H_qbarg(pn,pb,p1,pa,-qHp1,-qH,mt,include_bottom,mb,vev)*4./9.; } if (bptype==pid::gluon && aptype!=pid::gluon) { if (is_quark(aptype)) return ME_H_qg(p1,pa,pn,pb,-qH,-qHp1,mt,include_bottom,mb,vev)*4./9.; return ME_H_qbarg(p1,pa,pn,pb,-qH,-qHp1,mt,include_bottom,mb,vev)*4./9.; } // they are both quark if (is_quark(bptype)) { if (is_quark(aptype)) return ME_H_qQ(pn,pb,p1,pa,-qHp1,-qH,mt,include_bottom,mb,vev)*4.*4./(9.*9.); return ME_H_qQbar(pn,pb,p1,pa,-qHp1,-qH,mt,include_bottom,mb,vev)*4.*4./(9.*9.); } if (is_quark(aptype)) return ME_H_qbarQ(pn,pb,p1,pa,-qHp1,-qH,mt,include_bottom,mb,vev)*4.*4./(9.*9.); return ME_H_qbarQbar(pn,pb,p1,pa,-qHp1,-qH,mt,include_bottom,mb,vev)*4.*4./(9.*9.); } /** \brief Current matrix element squared with Higgs and unordered backward emission * @param aptype Particle A PDG ID * @param bptype Particle B PDG ID * @param pn Particle n Momentum * @param pb Particle b Momentum * @param pg Unordered back Particle Momentum * @param p1 Particle 1 Momentum * @param pa Particle a Momentum * @param qH t-channel momentum before Higgs * @param qHp1 t-channel momentum after Higgs * @returns ME Squared with Higgs and unordered backward emission * * @note This function assumes unordered gluon backwards from pa-p1 current. 
* For unof, reverse call order */ double ME_Higgs_current_uno( ParticleID aptype, ParticleID bptype, CLHEP::HepLorentzVector const & pg, CLHEP::HepLorentzVector const & pn, CLHEP::HepLorentzVector const & pb, CLHEP::HepLorentzVector const & p1, CLHEP::HepLorentzVector const & pa, CLHEP::HepLorentzVector const & qH, // t-channel momentum before Higgs CLHEP::HepLorentzVector const & qHp1, // t-channel momentum after Higgs double mt, bool include_bottom, double mb, double vev ){ using namespace currents; if (bptype==pid::gluon && aptype!=pid::gluon) { if (is_quark(aptype)) return ME_H_unob_gQ(pg,p1,pa,pn,pb,-qH,-qHp1,mt,include_bottom,mb,vev); return ME_H_unob_gQbar(pg,p1,pa,pn,pb,-qH,-qHp1,mt,include_bottom,mb,vev); } // they are both quark if (is_quark(aptype)) { if (is_quark(bptype)) return ME_H_unob_qQ(pg,p1,pa,pn,pb,-qH,-qHp1,mt,include_bottom,mb,vev); return ME_H_unob_qbarQ(pg,p1,pa,pn,pb,-qH,-qHp1,mt,include_bottom,mb,vev); } if (is_quark(bptype)) return ME_H_unob_qQbar(pg,p1,pa,pn,pb,-qH,-qHp1,mt,include_bottom,mb,vev); return ME_H_unob_qbarQbar(pg,p1,pa,pn,pb,-qH,-qHp1,mt,include_bottom,mb,vev); } - CLHEP::HepLorentzVector to_HepLorentzVector(Particle const & particle){ - return {particle.p.px(), particle.p.py(), particle.p.pz(), particle.p.E()}; - } - void validate(MatrixElementConfig const & config) { #ifndef HEJ_BUILD_WITH_QCDLOOP if(!config.Higgs_coupling.use_impact_factors) { throw std::invalid_argument{ "Invalid Higgs coupling settings.\n" "HEJ without QCDloop support can only use impact factors.\n" "Set use_impact_factors to true or recompile HEJ.\n" }; } #endif if(config.Higgs_coupling.use_impact_factors && config.Higgs_coupling.mt != std::numeric_limits::infinity()) { throw std::invalid_argument{ "Conflicting settings: " "impact factors may only be used in the infinite top mass limit" }; } } } // namespace MatrixElement::MatrixElement( std::function alpha_s, MatrixElementConfig conf ): alpha_s_{std::move(alpha_s)}, param_{std::move(conf)} { validate(param_); } std::vector MatrixElement::tree_kin( Event const & ev ) const { if(! 
is_resummable(ev.type())) return {0.}; auto AWZH_boson = std::find_if( begin(ev.outgoing()), end(ev.outgoing()), [](Particle const & p){return is_AWZH_boson(p);} ); if(AWZH_boson == end(ev.outgoing())) return {tree_kin_jets(ev)}; switch(AWZH_boson->type){ case pid::Higgs: return {tree_kin_Higgs(ev)}; case pid::Wp: case pid::Wm: return {tree_kin_W(ev)}; case pid::Z_photon_mix: return tree_kin_Z(ev); // TODO case pid::photon: case pid::Z: default: throw not_implemented("Emission of boson of unsupported type"); } } namespace { constexpr int EXTREMAL_JET_IDX = 1; constexpr int NO_EXTREMAL_JET_IDX = 0; bool treat_as_extremal(Particle const & parton){ return parton.p.user_index() == EXTREMAL_JET_IDX; } template double FKL_ladder_weight( InputIterator begin_gluon, InputIterator end_gluon, CLHEP::HepLorentzVector const & q0, CLHEP::HepLorentzVector const & pa, CLHEP::HepLorentzVector const & pb, CLHEP::HepLorentzVector const & p1, CLHEP::HepLorentzVector const & pn, double lambda ){ double wt = 1; auto qi = q0; for(auto gluon_it = begin_gluon; gluon_it != end_gluon; ++gluon_it){ assert(gluon_it->type == pid::gluon); const auto g = to_HepLorentzVector(*gluon_it); const auto qip1 = qi - g; if(treat_as_extremal(*gluon_it)){ wt *= C2Lipatovots(qip1, qi, pa, pb, lambda)*C_A; } else{ wt *= C2Lipatovots(qip1, qi, pa, pb, p1, pn, lambda)*C_A; } qi = qip1; } return wt; } template std::vector FKL_ladder_weight_mix( InputIterator begin_gluon, InputIterator end_gluon, CLHEP::HepLorentzVector const & q0_t, CLHEP::HepLorentzVector const & q0_b, CLHEP::HepLorentzVector const & pa, CLHEP::HepLorentzVector const & pb, CLHEP::HepLorentzVector const & p1, CLHEP::HepLorentzVector const & pn, const double lambda ){ double wt_top = 1; double wt_bot = 1; double wt_mix = 1; auto qi_t = q0_t; auto qi_b = q0_b; for(auto gluon_it = begin_gluon; gluon_it != end_gluon; ++gluon_it){ assert(gluon_it->type == pid::gluon); const auto g = to_HepLorentzVector(*gluon_it); const auto qip1_t = qi_t - g; const auto qip1_b = qi_b - g; if(treat_as_extremal(*gluon_it)){ wt_top *= C2Lipatovots(qip1_t, qi_t, pa, pb, lambda)*C_A; wt_bot *= C2Lipatovots(qip1_b, qi_b, pa, pb, lambda)*C_A; wt_mix *= C2Lipatovots_Mix(qip1_t, qi_t, qip1_b, qi_b, pa, pb, lambda)*C_A; } else{ wt_top *= C2Lipatovots(qip1_t, qi_t, pa, pb, p1, pn, lambda)*C_A; wt_bot *= C2Lipatovots(qip1_b, qi_b, pa, pb, p1, pn, lambda)*C_A; wt_mix *= C2Lipatovots_Mix(qip1_t, qi_t, qip1_b, qi_b, pa, pb, p1, pn, lambda)*C_A; } qi_t = qip1_t; qi_b = qip1_b; } return {wt_top, wt_bot, wt_mix}; } std::vector tag_extremal_jet_partons( Event const & ev ){ auto out_partons = filter_partons(ev.outgoing()); if(out_partons.size() == ev.jets().size()){ // no additional emissions in extremal jets, don't need to tag anything for(auto & parton: out_partons){ parton.p.set_user_index(NO_EXTREMAL_JET_IDX); } return out_partons; } auto const & jets = ev.jets(); assert(jets.size() >= 2); auto most_backward = begin(jets); auto most_forward = end(jets) - 1; // skip jets caused by unordered emission or qqx if(ev.type() == event_type::unob || ev.type() == event_type::qqxexb){ assert(jets.size() >= 3); ++most_backward; } else if(ev.type() == event_type::unof || ev.type() == event_type::qqxexf){ assert(jets.size() >= 3); --most_forward; } const auto extremal_jet_indices = ev.particle_jet_indices( {*most_backward, *most_forward} ); assert(extremal_jet_indices.size() == out_partons.size()); for(std::size_t i = 0; i < out_partons.size(); ++i){ assert(is_parton(out_partons[i])); const int idx = 
(extremal_jet_indices[i]>=0)? EXTREMAL_JET_IDX: NO_EXTREMAL_JET_IDX; out_partons[i].p.set_user_index(idx); } return out_partons; } double tree_kin_jets_qqxmid( ParticleID aptype, ParticleID bptype, CLHEP::HepLorentzVector const & pa, CLHEP::HepLorentzVector const & pb, std::vector const & partons, double lambda ){ CLHEP::HepLorentzVector pq; CLHEP::HepLorentzVector pqbar; const auto backmidquark = std::find_if( begin(partons)+1, end(partons)-1, [](Particle const & s){ return s.type != pid::gluon; } ); assert(backmidquark!=end(partons)-1); if (is_quark(backmidquark->type)){ pq = to_HepLorentzVector(*backmidquark); pqbar = to_HepLorentzVector(*(backmidquark+1)); } else { pqbar = to_HepLorentzVector(*backmidquark); pq = to_HepLorentzVector(*(backmidquark+1)); } auto p1 = to_HepLorentzVector(partons[0]); auto pn = to_HepLorentzVector(partons[partons.size() - 1]); auto q0 = pa - p1; // t-channel momentum after qqx auto qqxt = q0; const auto begin_ladder = cbegin(partons) + 1; const auto end_ladder_1 = (backmidquark); const auto begin_ladder_2 = (backmidquark+2); const auto end_ladder = cend(partons) - 1; for(auto parton_it = begin_ladder; parton_it < begin_ladder_2; ++parton_it){ qqxt -= to_HepLorentzVector(*parton_it); } const int nabove = std::distance(begin_ladder, backmidquark); std::vector partonsHLV; partonsHLV.reserve(partons.size()); for (std::size_t i = 0; i != partons.size(); ++i) { partonsHLV.push_back(to_HepLorentzVector(partons[i])); } const double current_factor = ME_qqxmid_current( aptype, bptype, nabove, pa, pb, pq, pqbar, partonsHLV ); const double ladder_factor = FKL_ladder_weight( begin_ladder, end_ladder_1, q0, pa, pb, p1, pn, lambda )*FKL_ladder_weight( begin_ladder_2, end_ladder, qqxt, pa, pb, p1, pn, lambda ); return current_factor*ladder_factor; } template double tree_kin_jets_qqx(InIter BeginIn, InIter EndIn, partIter BeginPart, partIter EndPart, double lambda){ const bool swap_q_qx = is_quark(*BeginPart); const auto pgin = to_HepLorentzVector(*BeginIn); const auto pb = to_HepLorentzVector(*(EndIn-1)); const auto pq = to_HepLorentzVector(*(BeginPart+(swap_q_qx?0:1))); const auto pqbar = to_HepLorentzVector(*(BeginPart+(swap_q_qx?1:0))); const auto p1 = to_HepLorentzVector(*(BeginPart)); const auto pn = to_HepLorentzVector(*(EndPart-1)); assert((BeginIn)->type==pid::gluon); // Incoming a must be gluon. 
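  // The extremal q-qbar weight factorises like the FKL case in
  // MatrixElement::tree_kin_jets below: a current factor for the extremal
  // q-qbar configuration (ME_qqx_current) with the common 1/(4*(N_C^2-1))
  // normalisation, times an FKL ladder factor with one regulated Lipatov
  // vertex (C2Lipatovots) per gluon emitted between the q-qbar pair and the
  // opposite end, starting from the t-channel momentum q0 = pgin - pq - pqbar.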
const double current_factor = ME_qqx_current( (EndIn-1)->type, pgin, pq, pqbar, pn, pb, swap_q_qx )/(4.*(N_C*N_C - 1.)); const double ladder_factor = FKL_ladder_weight( (BeginPart+2), (EndPart-1), pgin-pq-pqbar, pgin, pb, p1, pn, lambda ); return current_factor*ladder_factor; } template double tree_kin_jets_uno(InIter BeginIn, InIter EndIn, partIter BeginPart, partIter EndPart, double lambda ){ const auto pa = to_HepLorentzVector(*BeginIn); const auto pb = to_HepLorentzVector(*(EndIn-1)); const auto pg = to_HepLorentzVector(*BeginPart); const auto p1 = to_HepLorentzVector(*(BeginPart+1)); const auto pn = to_HepLorentzVector(*(EndPart-1)); const double current_factor = ME_uno_current( (BeginIn)->type, (EndIn-1)->type, pg, pn, pb, p1, pa ); const double ladder_factor = FKL_ladder_weight( (BeginPart+2), (EndPart-1), pa-p1-pg, pa, pb, p1, pn, lambda ); return current_factor*ladder_factor; } } // namespace double MatrixElement::tree_kin_jets(Event const & ev) const { auto const & incoming = ev.incoming(); const auto partons = tag_extremal_jet_partons(ev); if (ev.type()==event_type::FKL){ const auto pa = to_HepLorentzVector(incoming[0]); const auto pb = to_HepLorentzVector(incoming[1]); const auto p1 = to_HepLorentzVector(partons.front()); const auto pn = to_HepLorentzVector(partons.back()); return ME_current( incoming[0].type, incoming[1].type, pn, pb, p1, pa )/(4.*(N_C*N_C - 1.))*FKL_ladder_weight( begin(partons) + 1, end(partons) - 1, pa - p1, pa, pb, p1, pn, param_.regulator_lambda ); } if (ev.type()==event_type::unordered_backward){ return tree_kin_jets_uno(incoming.begin(), incoming.end(), partons.begin(), partons.end(), param_.regulator_lambda); } if (ev.type()==event_type::unordered_forward){ return tree_kin_jets_uno(incoming.rbegin(), incoming.rend(), partons.rbegin(), partons.rend(), param_.regulator_lambda); } if (ev.type()==event_type::extremal_qqxb){ return tree_kin_jets_qqx(incoming.begin(), incoming.end(), partons.begin(), partons.end(), param_.regulator_lambda); } if (ev.type()==event_type::extremal_qqxf){ return tree_kin_jets_qqx(incoming.rbegin(), incoming.rend(), partons.rbegin(), partons.rend(), param_.regulator_lambda); } if (ev.type()==event_type::central_qqx){ return tree_kin_jets_qqxmid(incoming[0].type, incoming[1].type, to_HepLorentzVector(incoming[0]), to_HepLorentzVector(incoming[1]), partons, param_.regulator_lambda); } throw std::logic_error("Cannot reweight non-resummable processes in Pure Jets"); } namespace { double tree_kin_W_FKL( ParticleID aptype, ParticleID bptype, CLHEP::HepLorentzVector const & pa, CLHEP::HepLorentzVector const & pb, std::vector const & partons, CLHEP::HepLorentzVector const & plbar, CLHEP::HepLorentzVector const & pl, double lambda, ParticleProperties const & Wprop ){ auto p1 = to_HepLorentzVector(partons[0]); auto pn = to_HepLorentzVector(partons[partons.size() - 1]); const auto begin_ladder = cbegin(partons) + 1; const auto end_ladder = cend(partons) - 1; bool wc = aptype==partons[0].type; //leg b emits w auto q0 = pa - p1; if(!wc) q0 -= pl + plbar; const double current_factor = ME_W_current( aptype, bptype, pn, pb, p1, pa, plbar, pl, wc, Wprop ); const double ladder_factor = FKL_ladder_weight( begin_ladder, end_ladder, q0, pa, pb, p1, pn, lambda ); return current_factor*ladder_factor; } template double tree_kin_W_uno(InIter BeginIn, partIter BeginPart, partIter EndPart, const CLHEP::HepLorentzVector & plbar, const CLHEP::HepLorentzVector & pl, double lambda, ParticleProperties const & Wprop ){ const auto pa = 
to_HepLorentzVector(*BeginIn); const auto pb = to_HepLorentzVector(*(BeginIn+1)); const auto pg = to_HepLorentzVector(*BeginPart); const auto p1 = to_HepLorentzVector(*(BeginPart+1)); const auto pn = to_HepLorentzVector(*(EndPart-1)); bool wc = (BeginIn)->type==(BeginPart+1)->type; //leg b emits w auto q0 = pa - p1 - pg; if(!wc) q0 -= pl + plbar; const double current_factor = ME_W_uno_current( (BeginIn)->type, (BeginIn+1)->type, pn, pb, p1, pa, pg, plbar, pl, wc, Wprop ); const double ladder_factor = FKL_ladder_weight( BeginPart+2, EndPart-1, q0, pa, pb, p1, pn, lambda ); return current_factor*ladder_factor; } template double tree_kin_W_qqx(InIter BeginIn, partIter BeginPart, partIter EndPart, const CLHEP::HepLorentzVector & plbar, const CLHEP::HepLorentzVector & pl, double lambda, ParticleProperties const & Wprop ){ const bool swap_q_qx=is_quark(*BeginPart); const auto pa = to_HepLorentzVector(*BeginIn); const auto pb = to_HepLorentzVector(*(BeginIn+1)); const auto pq = to_HepLorentzVector(*(BeginPart+(swap_q_qx?0:1))); const auto pqbar = to_HepLorentzVector(*(BeginPart+(swap_q_qx?1:0))); const auto p1 = to_HepLorentzVector(*(BeginPart)); const auto pn = to_HepLorentzVector(*(EndPart-1)); const bool wc = (BeginIn+1)->type!=(EndPart-1)->type; //leg b emits w auto q0 = pa - pq - pqbar; if(!wc) q0 -= pl + plbar; const double current_factor = ME_W_qqx_current( (BeginIn)->type, (BeginIn+1)->type, pa, pb, pq, pqbar, pn, plbar, pl, swap_q_qx, wc, Wprop ); const double ladder_factor = FKL_ladder_weight( BeginPart+2, EndPart-1, q0, pa, pb, p1, pn, lambda ); return current_factor*ladder_factor; } double tree_kin_W_qqxmid( ParticleID aptype, ParticleID bptype, CLHEP::HepLorentzVector const & pa, CLHEP::HepLorentzVector const & pb, std::vector const & partons, CLHEP::HepLorentzVector const & plbar, CLHEP::HepLorentzVector const & pl, double lambda, ParticleProperties const & Wprop ){ CLHEP::HepLorentzVector pq; CLHEP::HepLorentzVector pqbar; const auto backmidquark = std::find_if( begin(partons)+1, end(partons)-1, [](Particle const & s){ return s.type != pid::gluon; } ); assert(backmidquark!=end(partons)-1); if (is_quark(backmidquark->type)){ pq = to_HepLorentzVector(*backmidquark); pqbar = to_HepLorentzVector(*(backmidquark+1)); } else { pqbar = to_HepLorentzVector(*backmidquark); pq = to_HepLorentzVector(*(backmidquark+1)); } auto p1 = to_HepLorentzVector(partons.front()); auto pn = to_HepLorentzVector(partons.back()); auto q0 = pa - p1; // t-channel momentum after qqx auto qqxt = q0; bool wqq = backmidquark->type != -(backmidquark+1)->type; // qqx emit W bool wc = !wqq && (aptype==partons.front().type); //leg b emits w assert(!wqq || !wc); if(wqq){ // emission from qqx qqxt -= pl + plbar; } else if(!wc) { // emission from leg a q0 -= pl + plbar; qqxt -= pl + plbar; } const auto begin_ladder = cbegin(partons) + 1; const auto end_ladder_1 = (backmidquark); const auto begin_ladder_2 = (backmidquark+2); const auto end_ladder = cend(partons) - 1; for(auto parton_it = begin_ladder; parton_it < begin_ladder_2; ++parton_it){ qqxt -= to_HepLorentzVector(*parton_it); } const int nabove = std::distance(begin_ladder, backmidquark); const int nbelow = std::distance(begin_ladder_2, end_ladder); std::vector partonsHLV; partonsHLV.reserve(partons.size()); for (std::size_t i = 0; i != partons.size(); ++i) { partonsHLV.push_back(to_HepLorentzVector(partons[i])); } const double current_factor = ME_W_qqxmid_current( aptype, bptype, nabove, nbelow, pa, pb, pq, pqbar, partonsHLV, plbar, pl, wqq, wc, Wprop ); const 
double ladder_factor = FKL_ladder_weight( begin_ladder, end_ladder_1, q0, pa, pb, p1, pn, lambda )*FKL_ladder_weight( begin_ladder_2, end_ladder, qqxt, pa, pb, p1, pn, lambda ); return current_factor*C_A*C_A/(N_C*N_C-1.)*ladder_factor; } } // namespace double MatrixElement::tree_kin_W(Event const & ev) const { using namespace event_type; auto const & incoming(ev.incoming()); #ifndef NDEBUG // assert that there is exactly one decay corresponding to the W assert(ev.decays().size() == 1); auto const & w_boson{ std::find_if(ev.outgoing().cbegin(), ev.outgoing().cend(), [] (Particle const & p) -> bool { return std::abs(p.type) == ParticleID::Wp; }) }; assert(w_boson != ev.outgoing().cend()); assert( static_cast(ev.decays().cbegin()->first) == std::distance(ev.outgoing().cbegin(), w_boson) ); #endif // find decay products of W auto const & decay{ ev.decays().cbegin()->second }; assert(decay.size() == 2); assert( ( is_anylepton(decay.at(0)) && is_anyneutrino(decay.at(1)) ) || ( is_anylepton(decay.at(1)) && is_anyneutrino(decay.at(0)) ) ); // get lepton & neutrino CLHEP::HepLorentzVector plbar; CLHEP::HepLorentzVector pl; if (decay.at(0).type < 0){ plbar = to_HepLorentzVector(decay.at(0)); pl = to_HepLorentzVector(decay.at(1)); } else{ pl = to_HepLorentzVector(decay.at(0)); plbar = to_HepLorentzVector(decay.at(1)); } const auto pa = to_HepLorentzVector(incoming[0]); const auto pb = to_HepLorentzVector(incoming[1]); const auto partons = tag_extremal_jet_partons(ev); if(ev.type() == FKL){ return tree_kin_W_FKL(incoming[0].type, incoming[1].type, pa, pb, partons, plbar, pl, param_.regulator_lambda, param_.ew_parameters.Wprop()); } if(ev.type() == unordered_backward){ return tree_kin_W_uno(cbegin(incoming), cbegin(partons), cend(partons), plbar, pl, param_.regulator_lambda, param_.ew_parameters.Wprop()); } if(ev.type() == unordered_forward){ return tree_kin_W_uno(crbegin(incoming), crbegin(partons), crend(partons), plbar, pl, param_.regulator_lambda, param_.ew_parameters.Wprop()); } if(ev.type() == extremal_qqxb){ return tree_kin_W_qqx(cbegin(incoming), cbegin(partons), cend(partons), plbar, pl, param_.regulator_lambda, param_.ew_parameters.Wprop()); } if(ev.type() == extremal_qqxf){ return tree_kin_W_qqx(crbegin(incoming), crbegin(partons), crend(partons), plbar, pl, param_.regulator_lambda, param_.ew_parameters.Wprop()); } assert(ev.type() == central_qqx); return tree_kin_W_qqxmid(incoming[0].type, incoming[1].type, pa, pb, partons, plbar, pl, param_.regulator_lambda, param_.ew_parameters.Wprop()); } namespace { std::vector tree_kin_Z_FKL( const ParticleID aptype, const ParticleID bptype, CLHEP::HepLorentzVector const & pa, CLHEP::HepLorentzVector const & pb, std::vector const & partons, CLHEP::HepLorentzVector const & plbar, CLHEP::HepLorentzVector const & pl, const double lambda, ParticleProperties const & Zprop, const double stw2, const double ctw ){ const auto p1 = to_HepLorentzVector(partons[0]); const auto pn = to_HepLorentzVector(partons[partons.size() - 1]); const auto begin_ladder = cbegin(partons) + 1; const auto end_ladder = cend(partons) - 1; const std::vector current_factor = ME_Z_current( aptype, bptype, pn, pb, p1, pa, plbar, pl, Zprop, stw2, ctw ); std::vector ladder_factor; if(is_gluon(bptype)){ // This is a qg event const auto q0 = pa-p1-plbar-pl; ladder_factor.push_back(FKL_ladder_weight(begin_ladder, end_ladder, q0, pa, pb, p1, pn, lambda)); } else if(is_gluon(aptype)){ // This is a gq event const auto q0 = pa-p1; ladder_factor.push_back(FKL_ladder_weight(begin_ladder, 
end_ladder, q0, pa, pb, p1, pn, lambda)); } else { // This is a qq event const auto q0 = pa-p1-plbar-pl; const auto q1 = pa-p1; ladder_factor=FKL_ladder_weight_mix(begin_ladder, end_ladder, q0, q1, pa, pb, p1, pn, lambda); } std::vector result; for(size_t i=0; i std::vector tree_kin_Z_uno(InIter BeginIn, partIter BeginPart, partIter EndPart, const CLHEP::HepLorentzVector & plbar, const CLHEP::HepLorentzVector & pl, const double lambda, ParticleProperties const & Zprop, const double stw2, const double ctw){ const auto pa = to_HepLorentzVector(*BeginIn); const auto pb = to_HepLorentzVector(*(BeginIn+1)); const auto pg = to_HepLorentzVector(*BeginPart); const auto p1 = to_HepLorentzVector(*(BeginPart+1)); const auto pn = to_HepLorentzVector(*(EndPart-1)); const ParticleID aptype = (BeginIn)->type; const ParticleID bptype = (BeginIn+1)->type; const std::vector current_factor = ME_Z_uno_current( aptype, bptype, pn, pb, p1, pa, pg, plbar, pl, Zprop, stw2, ctw ); std::vector ladder_factor; if(is_gluon(bptype)){ // This is a qg event const auto q0 = pa-pg-p1-plbar-pl; ladder_factor.push_back(FKL_ladder_weight(BeginPart+2, EndPart-1, q0, pa, pb, p1, pn, lambda)); }else if(is_gluon(aptype)){ // This is a gq event const auto q0 = pa-pg-p1; ladder_factor.push_back(FKL_ladder_weight(BeginPart+2, EndPart-1, q0, pa, pb, p1, pn, lambda)); }else{ // This is a qq event const auto q0 = pa-pg-p1-plbar-pl; const auto q1 = pa-pg-p1; ladder_factor=FKL_ladder_weight_mix(BeginPart+2, EndPart-1, q0, q1, pa, pb, p1, pn, lambda); } std::vector result; for(size_t i=0; i MatrixElement::tree_kin_Z(Event const & ev) const { using namespace event_type; auto const & incoming(ev.incoming()); // find decay products of Z auto const & decay{ ev.decays().cbegin()->second }; assert(decay.size() == 2); assert(is_anylepton(decay.at(0)) && !is_anyneutrino(decay.at(0)) && decay.at(0).type==-decay.at(1).type); // get leptons CLHEP::HepLorentzVector plbar; CLHEP::HepLorentzVector pl; if (decay.at(0).type < 0){ plbar = to_HepLorentzVector(decay.at(0)); pl = to_HepLorentzVector(decay.at(1)); } else{ pl = to_HepLorentzVector(decay.at(0)); plbar = to_HepLorentzVector(decay.at(1)); } const auto pa = to_HepLorentzVector(incoming[0]); const auto pb = to_HepLorentzVector(incoming[1]); const auto partons = tag_extremal_jet_partons(ev); const double stw2 = param_.ew_parameters.sin2_tw(); const double ctw = param_.ew_parameters.cos_tw(); if(ev.type() == FKL){ return tree_kin_Z_FKL(incoming[0].type, incoming[1].type, pa, pb, partons, plbar, pl, param_.regulator_lambda, param_.ew_parameters.Zprop(), stw2, ctw); } if(ev.type() == unordered_backward){ return tree_kin_Z_uno(cbegin(incoming), cbegin(partons), cend(partons), plbar, pl, param_.regulator_lambda, param_.ew_parameters.Zprop(), stw2, ctw); } if(ev.type() == unordered_forward){ return tree_kin_Z_uno(crbegin(incoming), crbegin(partons), crend(partons), plbar, pl, param_.regulator_lambda, param_.ew_parameters.Zprop(), stw2, ctw); } throw std::logic_error("Can only reweight FKL or uno processes in Z+Jets"); } double MatrixElement::tree_kin_Higgs(Event const & ev) const { if(is_uno(ev.type())){ return tree_kin_Higgs_between(ev); } if(ev.outgoing().front().type == pid::Higgs){ return tree_kin_Higgs_first(ev); } if(ev.outgoing().back().type == pid::Higgs){ return tree_kin_Higgs_last(ev); } return tree_kin_Higgs_between(ev); } namespace { // Colour acceleration multipliers, for gluons see eq. 
(7) in arXiv:0910.5113 #ifdef HEJ_BUILD_WITH_QCDLOOP double K( ParticleID type, CLHEP::HepLorentzVector const & pout, CLHEP::HepLorentzVector const & pin ){ if(type == pid::gluon) return currents::K_g(pout, pin); return C_F; } #endif // Colour factor in strict MRK limit double K_MRK(ParticleID type) { return (type == pid::gluon)?C_A:C_F; } } // namespace double MatrixElement::MH2_forwardH( CLHEP::HepLorentzVector const & p1out, CLHEP::HepLorentzVector const & p1in, ParticleID type2, CLHEP::HepLorentzVector const & p2out, CLHEP::HepLorentzVector const & p2in, CLHEP::HepLorentzVector const & pH, double t1, double t2 ) const{ using namespace currents; ignore(p2out, p2in); const double shat = p1in.invariantMass2(p2in); const double vev = param_.ew_parameters.vev(); // gluon case #ifdef HEJ_BUILD_WITH_QCDLOOP if(!param_.Higgs_coupling.use_impact_factors){ return K(type2, p2out, p2in)*C_A*1./(16*M_PI*M_PI)*t1/t2*ME_Houtside_gq( p1out, p1in, p2out, p2in, pH, param_.Higgs_coupling.mt, param_.Higgs_coupling.include_bottom, param_.Higgs_coupling.mb, vev )/(4*(N_C*N_C - 1)); } #endif return K_MRK(type2)/C_A*9./2.*shat*shat*( C2gHgp(p1in,p1out,pH,vev) + C2gHgm(p1in,p1out,pH,vev) )/(t1*t2); } double MatrixElement::tree_kin_Higgs_first(Event const & ev) const { auto const & incoming = ev.incoming(); auto const & outgoing = ev.outgoing(); assert(outgoing.front().type == pid::Higgs); if(outgoing[1].type != pid::gluon) { assert(incoming.front().type == outgoing[1].type); return tree_kin_Higgs_between(ev); } const auto pH = to_HepLorentzVector(outgoing.front()); const auto partons = tag_extremal_jet_partons( ev ); const auto pa = to_HepLorentzVector(incoming[0]); const auto pb = to_HepLorentzVector(incoming[1]); const auto p1 = to_HepLorentzVector(partons.front()); const auto pn = to_HepLorentzVector(partons.back()); const auto q0 = pa - p1 - pH; const double t1 = q0.m2(); const double t2 = (pn - pb).m2(); return MH2_forwardH( p1, pa, incoming[1].type, pn, pb, pH, t1, t2 )*FKL_ladder_weight( begin(partons) + 1, end(partons) - 1, q0, pa, pb, p1, pn, param_.regulator_lambda ); } double MatrixElement::tree_kin_Higgs_last(Event const & ev) const { auto const & incoming = ev.incoming(); auto const & outgoing = ev.outgoing(); assert(outgoing.back().type == pid::Higgs); if(outgoing[outgoing.size()-2].type != pid::gluon) { assert(incoming.back().type == outgoing[outgoing.size()-2].type); return tree_kin_Higgs_between(ev); } const auto pH = to_HepLorentzVector(outgoing.back()); const auto partons = tag_extremal_jet_partons( ev ); const auto pa = to_HepLorentzVector(incoming[0]); const auto pb = to_HepLorentzVector(incoming[1]); auto p1 = to_HepLorentzVector(partons.front()); const auto pn = to_HepLorentzVector(partons.back()); auto q0 = pa - p1; const double t1 = q0.m2(); const double t2 = (pn + pH - pb).m2(); return MH2_forwardH( pn, pb, incoming[0].type, p1, pa, pH, t2, t1 )*FKL_ladder_weight( begin(partons) + 1, end(partons) - 1, q0, pa, pb, p1, pn, param_.regulator_lambda ); } namespace { template double tree_kin_Higgs_uno(InIter BeginIn, InIter EndIn, partIter BeginPart, partIter EndPart, CLHEP::HepLorentzVector const & qH, CLHEP::HepLorentzVector const & qHp1, double mt, bool inc_bot, double mb, double vev ){ const auto pa = to_HepLorentzVector(*BeginIn); const auto pb = to_HepLorentzVector(*(EndIn-1)); const auto pg = to_HepLorentzVector(*BeginPart); const auto p1 = to_HepLorentzVector(*(BeginPart+1)); const auto pn = to_HepLorentzVector(*(EndPart-1)); return ME_Higgs_current_uno( (BeginIn)->type, 
(EndIn-1)->type, pg, pn, pb, p1, pa, qH, qHp1, mt, inc_bot, mb, vev ); } } // namespace double MatrixElement::tree_kin_Higgs_between(Event const & ev) const { using namespace event_type; auto const & incoming = ev.incoming(); auto const & outgoing = ev.outgoing(); const auto the_Higgs = std::find_if( begin(outgoing), end(outgoing), [](Particle const & s){ return s.type == pid::Higgs; } ); assert(the_Higgs != end(outgoing)); const auto pH = to_HepLorentzVector(*the_Higgs); const auto partons = tag_extremal_jet_partons(ev); const auto pa = to_HepLorentzVector(incoming[0]); const auto pb = to_HepLorentzVector(incoming[1]); auto p1 = to_HepLorentzVector( partons[(ev.type() == unob)?1:0] ); auto pn = to_HepLorentzVector( partons[partons.size() - ((ev.type() == unof)?2:1)] ); auto first_after_Higgs = begin(partons) + (the_Higgs-begin(outgoing)); assert( (first_after_Higgs == end(partons) && ( (ev.type() == unob) || partons.back().type != pid::gluon )) || first_after_Higgs->rapidity() >= the_Higgs->rapidity() ); assert( (first_after_Higgs == begin(partons) && ( (ev.type() == unof) || partons.front().type != pid::gluon )) || (first_after_Higgs-1)->rapidity() <= the_Higgs->rapidity() ); // always treat the Higgs as if it were in between the extremal FKL partons if(first_after_Higgs == begin(partons)) ++first_after_Higgs; else if(first_after_Higgs == end(partons)) --first_after_Higgs; // t-channel momentum before Higgs auto qH = pa; for(auto parton_it = begin(partons); parton_it != first_after_Higgs; ++parton_it){ qH -= to_HepLorentzVector(*parton_it); } auto q0 = pa - p1; auto begin_ladder = begin(partons) + 1; auto end_ladder = end(partons) - 1; double current_factor = NAN; if(ev.type() == FKL){ current_factor = ME_Higgs_current( incoming[0].type, incoming[1].type, pn, pb, p1, pa, qH, qH - pH, param_.Higgs_coupling.mt, param_.Higgs_coupling.include_bottom, param_.Higgs_coupling.mb, param_.ew_parameters.vev() ); } else if(ev.type() == unob){ current_factor = C_A*C_A/2*tree_kin_Higgs_uno( begin(incoming), end(incoming), begin(partons), end(partons), qH, qH-pH, param_.Higgs_coupling.mt, param_.Higgs_coupling.include_bottom, param_.Higgs_coupling.mb, param_.ew_parameters.vev() ); const auto p_unob = to_HepLorentzVector(partons.front()); q0 -= p_unob; p1 += p_unob; ++begin_ladder; } else if(ev.type() == unof){ current_factor = C_A*C_A/2*tree_kin_Higgs_uno( rbegin(incoming), rend(incoming), rbegin(partons), rend(partons), qH-pH, qH, param_.Higgs_coupling.mt, param_.Higgs_coupling.include_bottom, param_.Higgs_coupling.mb, param_.ew_parameters.vev() ); pn += to_HepLorentzVector(partons.back()); --end_ladder; } else{ throw std::logic_error("Can only reweight FKL or uno processes in H+Jets"); } const double ladder_factor = FKL_ladder_weight( begin_ladder, first_after_Higgs, q0, pa, pb, p1, pn, param_.regulator_lambda )*FKL_ladder_weight( first_after_Higgs, end_ladder, qH - pH, pa, pb, p1, pn, param_.regulator_lambda ); return current_factor*C_A*C_A/(N_C*N_C-1.)*ladder_factor; } namespace { double get_AWZH_coupling(Event const & ev, double alpha_s, double alpha_w) { const auto AWZH_boson = std::find_if( begin(ev.outgoing()), end(ev.outgoing()), [](auto const & p){return is_AWZH_boson(p);} ); if(AWZH_boson == end(ev.outgoing())) return 1.; switch(AWZH_boson->type){ case pid::Higgs: return alpha_s*alpha_s; case pid::Wp: case pid::Wm: case pid::Z_photon_mix: return alpha_w*alpha_w; // TODO case pid::photon: case pid::Z: default: throw not_implemented("Emission of boson of unsupported type"); } } } // namespace 
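  // Illustration only (not part of HEJ): a minimal sketch of the coupling
  // power counting performed in MatrixElement::tree_param below, for the
  // pure-jet case without running-coupling corrections. The function name
  // and the fixed alpha_s default are assumptions made for this example;
  // the real code evaluates alpha_s_(mur) and multiplies by
  // get_AWZH_coupling when a boson is present.
  namespace {
    double example_tree_param_pure_jets(
      std::size_t num_partons, double alpha_s = 0.118
    ){
      const double gs2 = 4.*M_PI*alpha_s;  // strong coupling g_s^2
      // one factor of g_s^2 for each outgoing parton
      return std::pow(gs2, static_cast<double>(num_partons));
    }
  } // namespace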
double MatrixElement::tree_param(Event const & ev, double mur) const { assert(is_resummable(ev.type())); const auto begin_partons = ev.begin_partons(); const auto end_partons = ev.end_partons(); const auto num_partons = std::distance(begin_partons, end_partons); const double alpha_s = alpha_s_(mur); const double gs2 = 4.*M_PI*alpha_s; double res = std::pow(gs2, num_partons); if(param_.log_correction){ // use alpha_s(q_perp), evolved to mur assert(num_partons >= 2); const auto first_emission = std::next(begin_partons); const auto last_emission = std::prev(end_partons); for(auto parton = first_emission; parton != last_emission; ++parton){ res *= 1. + alpha_s/(2.*M_PI)*BETA0*std::log(mur/parton->perp()); } } return get_AWZH_coupling(ev, alpha_s, param_.ew_parameters.alpha_w())*res; } } // namespace HEJ diff --git a/src/PhaseSpacePoint.cc b/src/PhaseSpacePoint.cc index b265860..0f29081 100644 --- a/src/PhaseSpacePoint.cc +++ b/src/PhaseSpacePoint.cc @@ -1,871 +1,929 @@ /** * \authors The HEJ collaboration (see AUTHORS for details) * \date 2019-2020 * \copyright GPLv2 or later */ #include "HEJ/PhaseSpacePoint.hh" #include #include #include #include #include #include #include #include #include #include -#include #include "fastjet/ClusterSequence.hh" #include "fastjet/JetDefinition.hh" #include "HEJ/Constants.hh" #include "HEJ/Event.hh" #include "HEJ/JetSplitter.hh" #include "HEJ/PDG_codes.hh" #include "HEJ/RNG.hh" #include "HEJ/event_types.hh" #include "HEJ/kinematics.hh" #include "HEJ/resummation_jet.hh" #include "HEJ/utility.hh" namespace HEJ { namespace { constexpr int MAX_JET_USER_IDX = PhaseSpacePoint::NG_MAX; bool is_nonjet_parton(fastjet::PseudoJet const & parton){ assert(parton.user_index() != -1); return parton.user_index() > MAX_JET_USER_IDX; } bool is_jet_parton(fastjet::PseudoJet const & parton){ assert(parton.user_index() != -1); return parton.user_index() <= MAX_JET_USER_IDX; } namespace user_idx { //! 
user indices for partons with extremal rapidity enum ID: int { qqxmid1 = -9, qqxmid2 = -8, qqxb = -7, qqxf = -6, unob = -5, unof = -4, backward_fkl = -3, forward_fkl = -2, }; } // namespace user_idx using UID = user_idx::ID; double phase_space_normalisation( int num_Born_jets, int num_out_partons ){ return std::pow(16.*std::pow(M_PI,3), num_Born_jets - num_out_partons); } } // namespace Event::EventData to_EventData(PhaseSpacePoint psp){ Event::EventData result; result.incoming = std::move(psp).incoming_; // NOLINT(bugprone-use-after-move) result.outgoing = std::move(psp).outgoing_; // NOLINT(bugprone-use-after-move) // technically Event::EventData doesn't have to be sorted, // but PhaseSpacePoint should be anyway assert( std::is_sorted( begin(result.outgoing), end(result.outgoing), rapidity_less{} ) ); assert(result.outgoing.size() >= 2); static_assert( std::numeric_limits::has_quiet_NaN, "no quiet NaN for double" ); constexpr double nan = std::numeric_limits::quiet_NaN(); result.decays = std::move(psp).decays_; // NOLINT(bugprone-use-after-move) result.parameters.central = {nan, nan, psp.weight()}; // NOLINT(bugprone-use-after-move) return result; } std::vector PhaseSpacePoint::cluster_jets( std::vector const & partons ) const{ fastjet::ClusterSequence cs(partons, param_.jet_param.def); return sorted_by_rapidity(cs.inclusive_jets(param_.jet_param.min_pt)); } bool PhaseSpacePoint::pass_resummation_cuts( std::vector const & jets ) const{ return cluster_jets(jets).size() == jets.size(); } namespace { // find iterators to central qqbar emission auto get_central_qqbar(Event const & ev) { // find born quarks (ignore extremal partons) auto const firstquark = std::find_if( std::next(ev.begin_partons()), std::prev(ev.end_partons(), 2), [](Particle const & s){ return (is_anyquark(s)); } ); // assert that it is a q-q_bar pair. assert(std::distance(firstquark, ev.end_partons()) != 2); assert( ( is_quark(*firstquark) && is_antiquark(*std::next(firstquark)) ) || ( is_antiquark(*firstquark) && is_quark(*std::next(firstquark)) ) ); return std::make_pair(firstquark, std::next(firstquark)); } //! returns index of most backward q-qbar jet template int get_back_quark_jet(Event const & ev, Iterator firstquark){ // find jets at FO corresponding to the quarks // technically this isn't necessary for LO std::vector const born_indices{ ev.particle_jet_indices() }; const auto firstquark_idx = std::distance(ev.begin_partons(), firstquark); int const firstjet_idx = born_indices[firstquark_idx]; assert(firstjet_idx>0); assert( born_indices[firstquark_idx+1] == firstjet_idx+1 ); return firstjet_idx; } //! returns index of most backward q-qbar jet int getBackQuarkJet(Event const & ev){ const auto firstquark = get_central_qqbar(ev).first; return get_back_quark_jet(ev, firstquark); } - } + } // namespace double PhaseSpacePoint::estimate_emission_rapidity_range( Event const & ev ) const { assert(std::is_sorted(begin(ev.jets()), end(ev.jets()), rapidity_less{})); double delta_y = most_forward_FKL(ev.jets()).rapidity() - most_backward_FKL(ev.jets()).rapidity(); // neglect tiny probability for emission between central qqbar pair if(ev.type() == event_type::central_qqx) { const int qjet = getBackQuarkJet(ev); delta_y -= ev.jets()[qjet+1].rapidity() - ev.jets()[qjet].rapidity(); } assert(delta_y >= 0); return delta_y; } double PhaseSpacePoint::estimate_ng_mean(Event const & ev) const { // Formula derived from fit in arXiv:1805.04446 (see Fig. 
2) constexpr double GLUONS_PER_RAPIDITY = 0.975052; return GLUONS_PER_RAPIDITY*estimate_emission_rapidity_range(ev); } int PhaseSpacePoint::sample_ng(Event const & event, RNG & ran){ const double ng_mean = estimate_ng_mean(event); std::poisson_distribution dist(ng_mean); const int ng = dist(ran); assert(ng >= 0); assert(ng < NG_MAX); weight_ *= std::tgamma(ng + 1)*std::exp(ng_mean)*std::pow(ng_mean, -ng); return ng; } - void PhaseSpacePoint::copy_AWZH_boson_from(Event const & event){ + void PhaseSpacePoint::boost_AWZH_boson_from( + fastjet::PseudoJet const & boosted_boson, Event const & event + ){ auto const & from = event.outgoing(); - const auto AWZH_boson = std::find_if( + const auto original_boson = std::find_if( begin(from), end(from), [](Particle const & p){ return is_AWZH_boson(p); } ); - if(AWZH_boson == end(from)) return; + if(original_boson == end(from)) return; auto insertion_point = std::lower_bound( - begin(outgoing_), end(outgoing_), *AWZH_boson, rapidity_less{} + begin(outgoing_), end(outgoing_), *original_boson, rapidity_less{} ); - outgoing_.insert(insertion_point, *AWZH_boson); + // copy AWZH particle + outgoing_.insert(insertion_point, + {original_boson->type, boosted_boson, original_boson->colour} + ); + assert(std::is_sorted(begin(outgoing_), end(outgoing_), rapidity_less{})); - // copy decay products - const int idx = std::distance(begin(from), AWZH_boson); + // copy & boost decay products + const int idx = std::distance(begin(from), original_boson); assert(idx >= 0); const auto decay_it = event.decays().find(idx); - if(decay_it != end(event.decays())){ - const int new_idx = std::distance(begin(outgoing_), insertion_point); - assert(new_idx >= 0); - assert(outgoing_[new_idx].type == AWZH_boson->type); - decays_.emplace(new_idx, decay_it->second); + if(decay_it == end(event.decays())) + return; + + const int new_idx = std::distance(begin(outgoing_), insertion_point); + assert(new_idx >= 0); + assert(outgoing_[new_idx].type == original_boson->type); + auto decayparticles=decay_it->second; + + // change the momenta of the decay products. 
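+ // Each decay product is first boosted into the rest frame of the original
+ // boson and then boosted back out along the reshuffled boson momentum, so
+ // the decay kinematics in the boson rest frame are preserved.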
+ for(auto & particle: decayparticles){ + auto & p = particle.p; + // boost _to_ rest frame of input boson + p.unboost(original_boson->p); + // then boost _from_ rest frame of shuffled boson + p.boost(boosted_boson); } - assert(std::is_sorted(begin(outgoing_), end(outgoing_), rapidity_less{})); + decays_.emplace(new_idx, decayparticles); } namespace { template void label_extremal_qqx( ConstIterator born_begin, ConstIterator born_end, Iterator first_out ){ // find born quarks const auto firstquark = std::find_if( born_begin, born_end-1, [](Particle const & s){ return (is_anyquark(s)); } ); assert(firstquark != born_end-1); const auto secondquark = std::find_if( firstquark+1, born_end, [](Particle const & s){ return (is_anyquark(s)); } ); assert(secondquark != born_end); assert( ( is_quark(*firstquark) && is_antiquark(*secondquark) ) || ( is_antiquark(*firstquark) && is_quark(*secondquark) )); assert(first_out->type == ParticleID::gluon); assert((first_out+1)->type == ParticleID::gluon); // copy type from born first_out->type = firstquark->type; (first_out+1)->type = secondquark->type; } } // namespace void PhaseSpacePoint::label_qqx(Event const & event){ assert(std::is_sorted(begin(outgoing_), end(outgoing_), rapidity_less{})); assert(filter_partons(outgoing_).size() == outgoing_.size()); if(qqxb_){ label_extremal_qqx( event.outgoing().cbegin(), event.outgoing().cend(), outgoing_.begin() ); return; } if(qqxf_){ // same as qqxb with reversed order label_extremal_qqx( event.outgoing().crbegin(), event.outgoing().crend(), outgoing_.rbegin() ); return; } // central qqx const auto firstquark = get_central_qqbar(event).first; // find jets at FO corresponding to the quarks // technically this isn't necessary for LO const auto firstjet_idx = get_back_quark_jet(event, firstquark); // find corresponding jets after resummation fastjet::ClusterSequence cs{to_PseudoJet(outgoing_), param_.jet_param.def}; auto const jets = fastjet::sorted_by_rapidity( cs.inclusive_jets( param_.jet_param.min_pt )); std::vector const resum_indices{ cs.particle_jet_indices({jets}) }; // assert that jets didn't move assert(nearby_ep( ( event.jets().cbegin()+firstjet_idx )->rapidity(), jets[ firstjet_idx ].rapidity(), 1e-2) ); assert(nearby_ep( ( event.jets().cbegin()+firstjet_idx+1 )->rapidity(), jets[ firstjet_idx+1 ].rapidity(), 1e-2) ); // find last partons in first (central) jet size_t idx_out = 0; for(size_t i=resum_indices.size()-2; i>0; --i) if(resum_indices[i] == firstjet_idx){ idx_out = i; break; } assert(idx_out != 0); // check that there is sufficient pt in jets from the quarks const double minpartonjetpt = 1. - param_.soft_pt_regulator; if (outgoing_[idx_out].p.pt()pt()){ weight_=0.; status_ = StatusCode::wrong_jets; return; } if (outgoing_[idx_out+1].p.pt()pt()){ weight_=0.; status_ = StatusCode::wrong_jets; return; } // check that no additional emission between jets // such configurations are possible if we have an gluon gets generated // inside the rapidities of the qqx chain, but clusted to a // differnet/outside jet. 
Changing this is non trivial if(resum_indices[idx_out+1] != resum_indices[idx_out]+1){ weight_=0.; status_ = StatusCode::gluon_in_qqx; return; } outgoing_[idx_out].type = firstquark->type; outgoing_[idx_out+1].type = std::next(firstquark)->type; } void PhaseSpacePoint::label_quarks(Event const & ev){ const auto WZEmit = std::find_if( begin(ev.outgoing()), end(ev.outgoing()), [](Particle const & s){ return (std::abs(s.type) == pid::Wp || s.type == pid::Z_photon_mix); } ); if (WZEmit != end(ev.outgoing())){ if(!qqxb_) { const size_t backward_FKL_idx = unob_?1:0; const auto backward_FKL = std::next(ev.begin_partons(), backward_FKL_idx); outgoing_[backward_FKL_idx].type = backward_FKL->type; } if(!qqxf_) { const size_t forward_FKL_idx = unof_?1:0; const auto forward_FKL = std::prev(ev.end_partons(), 1+forward_FKL_idx); outgoing_.rbegin()[unof_].type = forward_FKL->type; // NOLINT } } else { most_backward_FKL(outgoing_).type = ev.incoming().front().type; most_forward_FKL(outgoing_).type = ev.incoming().back().type; } if(qqxmid_||qqxb_||qqxf_){ label_qqx(ev); } } PhaseSpacePoint::PhaseSpacePoint( Event const & ev, PhaseSpacePointConfig conf, RNG & ran ): unob_{ev.type() == event_type::unob}, unof_{ev.type() == event_type::unof}, qqxb_{ev.type() == event_type::qqxexb}, qqxf_{ev.type() == event_type::qqxexf}, qqxmid_{ev.type() == event_type::qqxmid}, param_{std::move(conf)}, status_{unspecified} { // legacy code: override new variable with old if(param_.max_ext_soft_pt_fraction){ param_.soft_pt_regulator = *param_.max_ext_soft_pt_fraction; param_.max_ext_soft_pt_fraction = {}; } weight_ = 1; auto const & Born_jets = ev.jets(); const int ng = sample_ng(ev, ran); weight_ /= std::tgamma(ng + 1); const int ng_jets = sample_ng_jets(ev, ng, ran); std::vector out_partons = gen_non_jet( ng - ng_jets, CMINPT, param_.jet_param.min_pt, ran ); int qqxbackjet(-1); if(qqxmid_){ qqxbackjet = getBackQuarkJet(ev); } + // reshuffle soft momenta const auto qperp = std::accumulate( begin(out_partons), end(out_partons), fastjet::PseudoJet{} ); - const auto jets = reshuffle(Born_jets, qperp); + + std::vector jets; + optional boson; + std::tie(jets, boson) = reshuffle(ev, qperp); if(weight_ == 0.) { status_ = failed_reshuffle; return; } if(! pass_resummation_cuts(jets)){ status_ = failed_resummation_cuts; weight_ = 0.; return; } + + // split jets in multiple partons std::vector jet_partons = split( jets, ng_jets, qqxbackjet, ran ); if(weight_ == 0.) { status_ = StatusCode::failed_split; return; } + + // rescale rapidity interval if(qqxmid_){ rescale_qqx_rapidities( out_partons, jets, most_backward_FKL(jet_partons).rapidity(), most_forward_FKL(jet_partons).rapidity(), qqxbackjet ); } else{ rescale_rapidities( out_partons, most_backward_FKL(jet_partons).rapidity(), most_forward_FKL(jet_partons).rapidity() ); } if(! cluster_jets(out_partons).empty()){ weight_ = 0.; status_ = StatusCode::empty_jets; return; } std::sort(begin(out_partons), end(out_partons), rapidity_less{}); assert( std::is_sorted(begin(jet_partons), end(jet_partons), rapidity_less{}) ); const auto first_jet_parton = out_partons.insert( end(out_partons), begin(jet_partons), end(jet_partons) ); std::inplace_merge( begin(out_partons), first_jet_parton, end(out_partons), rapidity_less{} ); if(! 
jets_ok(Born_jets, out_partons)){ weight_ = 0.; status_ = StatusCode::wrong_jets; return; } weight_ *= phase_space_normalisation(Born_jets.size(), out_partons.size()); outgoing_.reserve(out_partons.size() + 1); // one slot for possible A, W, Z, H for( auto it = std::make_move_iterator(out_partons.begin()); it != std::make_move_iterator(out_partons.end()); ++it ){ outgoing_.emplace_back( Particle{pid::gluon, *it, {}}); } assert(!outgoing_.empty()); label_quarks(ev); if(weight_ == 0.) { //! @TODO optimise s.t. this is not possible // status is handled internally return; } - copy_AWZH_boson_from(ev); + // reattach boson & decays + if(boson){ + boost_AWZH_boson_from(*boson, ev); + } reconstruct_incoming(ev.incoming()); status_ = StatusCode::good; } std::vector PhaseSpacePoint::gen_non_jet( int const ng_non_jet, double const ptmin, double const ptmax, RNG & ran ){ // heuristic parameters for pt sampling const double ptpar = 1.3 + ng_non_jet/5.; const double temp1 = std::atan((ptmax - ptmin)/ptpar); std::vector partons(ng_non_jet); for(int i = 0; i < ng_non_jet; ++i){ const double r1 = ran.flat(); const double pt = ptmin + ptpar*std::tan(r1*temp1); const double temp2 = std::cos(r1*temp1); const double phi = 2*M_PI*ran.flat(); weight_ *= 2.0*M_PI*pt*ptpar*temp1/(temp2*temp2); // we don't know the allowed rapidity span yet, // set a random value to be rescaled later on const double y = ran.flat(); partons[i].reset_PtYPhiM(pt, y, phi); // Set user index higher than any jet-parton index // in order to assert that these are not inside jets partons[i].set_user_index(i + 1 + NG_MAX); assert(ptmin-1e-5 <= partons[i].pt() && partons[i].pt() <= ptmax+1e-5); } assert(std::all_of(partons.cbegin(), partons.cend(), is_nonjet_parton)); return sorted_by_rapidity(partons); } void PhaseSpacePoint::rescale_qqx_rapidities( std::vector & out_partons, std::vector const & jets, const double ymin1, const double ymax2, const int qqxbackjet ){ const double ymax1 = jets[qqxbackjet].rapidity(); const double ymin2 = jets[qqxbackjet+1].rapidity(); constexpr double ep = 1e-7; const double tot_y = ymax1 - ymin1 + ymax2 - ymin2; std::vector> refpart( out_partons.begin(), out_partons.end()); double ratio = (ymax1 - ymin1)/tot_y; const auto gap{ std::find_if(refpart.begin(), refpart.end(), [ratio](fastjet::PseudoJet const & p){ return (p.rapidity()>=ratio);} ) }; double ymin = ymin1; double ymax = ymax1; double dy = ymax - ymin - 2*ep; double offset = 0.; for(auto it_part=refpart.begin(); it_part & partons, double ymin, double ymax ){ constexpr double ep = 1e-7; for(auto & parton: partons){ assert(0 <= parton.rapidity() && parton.rapidity() <= 1); const double dy = ymax - ymin - 2*ep; const double y = ymin + ep + dy*parton.rapidity(); parton.reset_momentum_PtYPhiM(parton.pt(), y, parton.phi()); weight_ *= dy; assert(ymin <= parton.rapidity() && parton.rapidity() <= ymax); } } namespace { template auto min(T const & a, T const & b, Rest&&... 
r) { using std::min; return min(a, min(b, std::forward(r)...)); } } double PhaseSpacePoint::probability_in_jet(Event const & ev) const{ const double dy = estimate_emission_rapidity_range(ev); const double R = param_.jet_param.def.R(); // jets into which we predominantly emit - const int njets = ev.jets().size() - unof_ - unob_ - qqxb_ - qqxf_; + const auto njets = ev.jets().size() - unof_ - unob_ - qqxb_ - qqxf_; //NOLINT assert(njets >= 2); - const double p_J_y_large = (njets-1)*R*R/(2.*dy); + const double p_J_y_large = (njets-1.)*R*R/(2.*dy); const double p_J_y0 = njets*R/M_PI; return min(p_J_y_large, p_J_y0, 1.); } int PhaseSpacePoint::sample_ng_jets(Event const & event, int ng, RNG & ran){ const double p_J = probability_in_jet(event); std::binomial_distribution<> bin_dist(ng, p_J); const int ng_J = bin_dist(ran); weight_ *= std::pow(p_J, -ng_J)*std::pow(1 - p_J, ng_J - ng); return ng_J; } - std::vector PhaseSpacePoint::reshuffle( - std::vector const & Born_jets, + std::pair< std::vector, + optional > + PhaseSpacePoint::reshuffle( + Event const & ev, fastjet::PseudoJet const & q ){ - if(q == fastjet::PseudoJet{0, 0, 0, 0}) return Born_jets; - auto jets = resummation_jet_momenta(Born_jets, q); - if(jets.empty()){ + // Create a copy of the outgoing momenta not containing decay products + std::vector born_momenta; + born_momenta.reserve(ev.jets().size()); + std::transform(ev.jets().cbegin(), ev.jets().cend(), + back_inserter(born_momenta), + [](fastjet::PseudoJet const & t) { return &t; }); + + // check if there is one (or more) bosons in the event. + const auto AWZH_boson = std::find_if( + begin(ev.outgoing()), end(ev.outgoing()), + [](Particle const & p){ return is_AWZH_boson(p); } + ); + + optional boson; + if(AWZH_boson != end(ev.outgoing())){ + boson = AWZH_boson->p; + } + + // reshuffle all momenta + if(q == fastjet::PseudoJet{0, 0, 0, 0}) return {ev.jets(), boson}; + // add boson to reshuffling + if(boson) { + born_momenta.push_back(&*boson); + } + auto shuffle_momenta = resummation_jet_momenta(born_momenta, q); + if(shuffle_momenta.empty()){ weight_ = 0; return {}; } - // additional Jacobian to ensure Born integration over delta gives 1 - weight_ *= resummation_jet_weight(Born_jets, q); - return jets; + weight_ *= resummation_jet_weight(born_momenta, q); + + // take out boson again + optional shuffle_boson; + if(boson){ + shuffle_boson = std::move(shuffle_momenta.back()); + shuffle_momenta.pop_back(); + } + + return {shuffle_momenta, shuffle_boson}; } std::vector PhaseSpacePoint::distribute_jet_partons( int ng_jets, std::vector const & jets, RNG & ran ){ size_t first_valid_jet = 0; size_t num_valid_jets = jets.size(); const double R_eff = 5./3.*param_.jet_param.def.R(); // if there is an unordered jet too far away from the FKL jets // then extra gluon constituents of the unordered jet would // violate the FKL rapidity ordering if((unob_||qqxb_) && jets[0].delta_R(jets[1]) > R_eff){ ++first_valid_jet; --num_valid_jets; } else if((unof_||qqxf_) && jets[jets.size()-1].delta_R(jets[jets.size()-2]) > R_eff){ --num_valid_jets; } std::vector np(jets.size(), 1); for(int i = 0; i < ng_jets; ++i){ ++np[first_valid_jet + ran.flat() * num_valid_jets]; } weight_ *= std::pow(num_valid_jets, ng_jets); return np; } #ifndef NDEBUG namespace { bool tagged_FKL_backward( std::vector const & jet_partons ){ return std::find_if( begin(jet_partons), end(jet_partons), [](fastjet::PseudoJet const & p){ return p.user_index() == UID::backward_fkl; } ) != end(jet_partons); } bool tagged_FKL_forward( 
std::vector const & jet_partons ){ // the most forward FKL parton is most likely near the end of jet_partons; // start search from there return std::find_if( jet_partons.rbegin(), jet_partons.rend(), [](fastjet::PseudoJet const & p){ return p.user_index() == UID::forward_fkl; } ) != jet_partons.rend(); } bool tagged_FKL_extremal( std::vector const & jet_partons ){ return tagged_FKL_backward(jet_partons) && tagged_FKL_forward(jet_partons); } } // namespace #endif std::vector PhaseSpacePoint::split( std::vector const & jets, int ng_jets, size_t qqxbackjet, RNG & ran ){ return split( jets, distribute_jet_partons(ng_jets, jets, ran), qqxbackjet, ran); } bool PhaseSpacePoint::pass_extremal_cuts( fastjet::PseudoJet const & ext_parton, fastjet::PseudoJet const & jet ) const{ if(ext_parton.pt() < param_.min_extparton_pt) return false; return (ext_parton - jet).pt()/jet.pt() < param_.soft_pt_regulator; } std::vector PhaseSpacePoint::split( std::vector const & jets, std::vector const & np, size_t qqxbackjet, RNG & ran ){ assert(! jets.empty()); assert(jets.size() == np.size()); assert(pass_resummation_cuts(jets)); const size_t most_backward_FKL_idx = 0 + unob_ + qqxb_; // NOLINT const size_t most_forward_FKL_idx = jets.size() - 1 - unof_ - qqxf_; // NOLINT auto const & jet = param_.jet_param; const JetSplitter jet_splitter{jet.def, jet.min_pt}; std::vector jet_partons; // randomly distribute jet gluons among jets for(size_t i = 0; i < jets.size(); ++i){ auto split_res = jet_splitter.split(jets[i], np[i], ran); weight_ *= split_res.weight; if(weight_ == 0) return {}; assert( std::all_of( begin(split_res.constituents), end(split_res.constituents), is_jet_parton ) ); const auto first_new_parton = jet_partons.insert( end(jet_partons), begin(split_res.constituents), end(split_res.constituents) ); // mark uno and extremal FKL emissions here so we can check // their position once all emissions are generated // also mark qqxmid partons, and apply appropriate pt cut. 
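      // Descriptive note on the step below: the "extremal" parton of a special
      // jet is the minimum-rapidity (backward FKL, unob, qqxb, qqxmid2) or
      // maximum-rapidity (forward FKL, unof, qqxf, qqxmid1) constituent among
      // the partons just produced by the jet splitter. It is tagged via
      // set_user_index so that extremal_ok() and jets_ok() can later verify its
      // position, and it must pass pass_extremal_cuts(): besides the
      // (deprecated) `min extparton pt` threshold, the transverse momentum of
      // the jet that is *not* carried by the extremal parton, i.e.
      // (p_extremal - p_jet).pt(), may be at most `soft pt regulator` times the
      // jet pt.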
auto extremal = end(jet_partons); if (i == most_backward_FKL_idx){ //FKL backward emission extremal = std::min_element( first_new_parton, end(jet_partons), rapidity_less{} ); extremal->set_user_index(UID::backward_fkl); } else if(((unob_ || qqxb_) && i == 0)){ // unordered/qqxb extremal = std::min_element( first_new_parton, end(jet_partons), rapidity_less{} ); extremal->set_user_index((unob_)?UID::unob:UID::qqxb); } else if (i == most_forward_FKL_idx){ extremal = std::max_element( first_new_parton, end(jet_partons), rapidity_less{} ); extremal->set_user_index(UID::forward_fkl); } else if(((unof_ || qqxf_) && i == jets.size() - 1)){ // unordered/qqxf extremal = std::max_element( first_new_parton, end(jet_partons), rapidity_less{} ); extremal->set_user_index((unof_)?UID::unof:UID::qqxf); } else if((qqxmid_ && i == qqxbackjet)){ extremal = std::max_element( first_new_parton, end(jet_partons), rapidity_less{} ); extremal->set_user_index(UID::qqxmid1); } else if((qqxmid_ && i == qqxbackjet+1)){ extremal = std::min_element( first_new_parton, end(jet_partons), rapidity_less{} ); extremal->set_user_index(UID::qqxmid2); } if( extremal != end(jet_partons) && !pass_extremal_cuts(*extremal, jets[i]) ){ weight_ = 0; return {}; } } assert(tagged_FKL_extremal(jet_partons)); std::sort(begin(jet_partons), end(jet_partons), rapidity_less{}); if( !extremal_ok(jet_partons) || !split_preserved_jets(jets, jet_partons) ){ weight_ = 0.; return {}; } return jet_partons; } bool PhaseSpacePoint::extremal_ok( std::vector const & partons ) const{ assert(std::is_sorted(begin(partons), end(partons), rapidity_less{})); if(unob_ && partons.front().user_index() != UID::unob) return false; if(unof_ && partons.back().user_index() != UID::unof) return false; if(qqxb_ && partons.front().user_index() != UID::qqxb) return false; if(qqxf_ && partons.back().user_index() != UID::qqxf) return false; return most_backward_FKL(partons).user_index() == UID::backward_fkl && most_forward_FKL(partons).user_index() == UID::forward_fkl; } bool PhaseSpacePoint::split_preserved_jets( std::vector const & jets, std::vector const & jet_partons ) const{ assert(std::is_sorted(begin(jets), end(jets), rapidity_less{})); const auto split_jets = cluster_jets(jet_partons); // this can happen if two overlapping jets // are both split into more than one parton if(split_jets.size() != jets.size()) return false; for(size_t i = 0; i < split_jets.size(); ++i){ // this can happen if there are two overlapping jets // and a parton is assigned to the "wrong" jet if(!nearby_ep(jets[i].rapidity(), split_jets[i].rapidity(), 1e-2)){ return false; } } return true; } template Particle const & PhaseSpacePoint::most_backward_FKL( std::vector const & partons ) const{ return partons[0 + unob_ + qqxb_]; } template Particle const & PhaseSpacePoint::most_forward_FKL( std::vector const & partons ) const{ const size_t idx = partons.size() - 1 - unof_ - qqxf_; assert(idx < partons.size()); return partons[idx]; } template Particle & PhaseSpacePoint::most_backward_FKL( std::vector & partons ) const{ return partons[0 + unob_ + qqxb_]; } template Particle & PhaseSpacePoint::most_forward_FKL( std::vector & partons ) const{ const size_t idx = partons.size() - 1 - unof_ - qqxf_; assert(idx < partons.size()); return partons[idx]; } bool PhaseSpacePoint::contains_idx( fastjet::PseudoJet const & jet, fastjet::PseudoJet const & parton ) const { auto const & constituents = jet.constituents(); const int idx = parton.user_index(); const bool injet = std::find_if( begin(constituents), 
end(constituents), [idx](fastjet::PseudoJet const & con){return con.user_index() == idx;} ) != end(constituents); const double minpartonjetpt = 1. - param_.soft_pt_regulator; return ((parton.pt()>minpartonjetpt*jet.pt())&&injet); } bool PhaseSpacePoint::jets_ok( std::vector const & Born_jets, std::vector const & partons ) const{ fastjet::ClusterSequence cs(partons, param_.jet_param.def); const auto jets = sorted_by_rapidity(cs.inclusive_jets(param_.jet_param.min_pt)); if(jets.size() != Born_jets.size()) return false; int in_jet = 0; for(auto const & jet : jets){ assert(jet.has_constituents()); for(auto && parton: jet.constituents()){ if(is_nonjet_parton(parton)) return false; } in_jet += jet.constituents().size(); } const int expect_in_jet = std::count_if( partons.cbegin(), partons.cend(), is_jet_parton ); if(in_jet != expect_in_jet) return false; // note that PseudoJet::contains does not work here if(! ( contains_idx(most_backward_FKL(jets), most_backward_FKL(partons)) && contains_idx(most_forward_FKL(jets), most_forward_FKL(partons)) )) return false; if(unob_ && !contains_idx(jets.front(), partons.front())) return false; if(qqxb_ && !contains_idx(jets.front(), partons.front())) return false; if(unof_ && !contains_idx(jets.back(), partons.back())) return false; if(qqxf_ && !contains_idx(jets.back(), partons.back())) return false; #ifndef NDEBUG for(size_t i = 0; i < jets.size(); ++i){ assert(nearby_ep(jets[i].rapidity(), Born_jets[i].rapidity(), 1e-2)); } #endif return true; } void PhaseSpacePoint::reconstruct_incoming( std::array const & Born_incoming ){ std::tie(incoming_[0].p, incoming_[1].p) = incoming_momenta(outgoing_); for(size_t i = 0; i < incoming_.size(); ++i){ incoming_[i].type = Born_incoming[i].type; } assert(momentum_conserved()); } bool PhaseSpacePoint::momentum_conserved() const{ fastjet::PseudoJet diff; for(auto const & in: incoming()) diff += in.p; const double norm = diff.E(); for(auto const & out: outgoing()) diff -= out.p; return nearby(diff, fastjet::PseudoJet{}, norm); } } //namespace HEJ diff --git a/src/Zjets.cc b/src/Zjets.cc index e994e09..f5096ff 100644 --- a/src/Zjets.cc +++ b/src/Zjets.cc @@ -1,491 +1,495 @@ /** * \authors The HEJ collaboration (see AUTHORS for details) * \date 2020 * \copyright GPLv2 or later */ #include #include "HEJ/Constants.hh" #include "HEJ/EWConstants.hh" #include "HEJ/PDG_codes.hh" #include "HEJ/jets.hh" #include "HEJ/utility.hh" // generated headers #include "HEJ/currents/jV_j.hh" #include "HEJ/currents/jV_juno.hh" #include "HEJ/currents/jVuno_j.hh" namespace HEJ { namespace currents { namespace { // Z propagator COM ZProp(const double q, ParticleProperties const & zprop){ return 1. / (q - zprop.mass*zprop.mass + COM(0.,1.)*zprop.width*zprop.mass); } // Photon propagator COM GProp(const double q) { return 1. / q; } // Weak charge template double Zq(ParticleID PID, double stw2, double ctw); // Weak charge - Positive Spin template<> double Zq( const ParticleID PID, const double stw2, const double ctw ) { using namespace pid; // quarks if (PID == d || PID == s || PID == b) return (+ 1.0 * stw2 / 3.0) / ctw; if (PID == u || PID == c) return (- 2.0 * stw2 / 3.0) / ctw; // antiquarks if (PID == d_bar || PID == s_bar || PID == b_bar) return (+ 0.5 - 1.0 * stw2 / 3.0) / ctw; if (PID == u_bar || PID == c_bar) return (- 0.5 + 2.0 * stw2 / 3.0) / ctw; // electron if (PID == electron) return stw2 / ctw; throw std::logic_error("ERROR! 
No weak charge found"); } // Weak charge - Negative Spin template<> double Zq( const ParticleID PID, const double stw2, const double ctw ) { using namespace pid; // quarks if (PID == d || PID == s || PID == b) return (- 0.5 + 1.0 * stw2 / 3.0) / ctw; if (PID == u || PID == c) return (+ 0.5 - 2.0 * stw2 / 3.0) / ctw; // antiquarks if (PID == d_bar || PID == s_bar || PID == b_bar) return (- 1.0 * stw2 / 3.0) / ctw; if (PID == u_bar || PID == c_bar) return (+ 2.0 * stw2 / 3.0) / ctw; // electron if (PID == electron) return (-1.0 / 2.0 + stw2) / ctw; throw std::logic_error("ERROR! No weak charge found"); } // Electric charge double Gq (const ParticleID PID) { using namespace pid; if (PID == d || PID == s || PID == b) return -1./3.; if (PID == u || PID == c) return +2./3.; if (PID == d_bar || PID == s_bar || PID == b_bar) return +1./3.; if (PID == u_bar || PID == c_bar) return -2./3.; throw std::logic_error("ERROR! No electric charge found"); } //! Prefactor for Z+Jets Contributions /** * @brief Z+Jets Contributions Prefactor * @param aptype Incoming Particle 1 type (Z emission) * @param propZ Z Propagator * @param propG Photon Propagator * @param stw2 Value of sin(theta_w)^2 * @param ctw Value of cos(theta_w) * @returns Prefactors for Z+Jets for all helicity combinations * (includes couplings and propagators) */ MultiArray Z_amp_pref( const ParticleID aptype, COM const & propZ, COM const & propG, const double stw2, const double ctw ){ using helicity::plus; using helicity::minus; const double zq_a_p = Zq(aptype, stw2, ctw); const double zq_a_m = Zq(aptype, stw2, ctw); const double ze_p = Zq(pid::electron, stw2, ctw); const double ze_m = Zq(pid::electron, stw2, ctw); const double gq_a = Gq(aptype); MultiArray res; res[ plus][ plus] = -2.*(zq_a_p * ze_p * propZ - gq_a * propG * stw2); res[ plus][minus] = -2.*(zq_a_p * ze_m * propZ - gq_a * propG * stw2); res[minus][minus] = -2.*(zq_a_m * ze_m * propZ - gq_a * propG * stw2); res[minus][ plus] = -2.*(zq_a_m * ze_p * propZ - gq_a * propG * stw2); return res; } //! 
Z+Jets FKL Contribution /** * @brief Z+Jets FKL Contribution * @param pa Incoming Particle 1 (Z emission) * @param pb Incoming Particle 2 * @param p1 Outgoing Particle 1 (Z emission) * @param p2 Outgoing Particle 2 * @param pep Outgoing positron * @param pem Outgoing electron * @returns j_Z^\mu j_\mu for all helicities h1, hl, h2 */ MultiArray jZ_j( const HLV & pa, const HLV & pb, const HLV & p1, const HLV & p2, const HLV & pep, const HLV & pem ){ using helicity::plus; using helicity::minus; MultiArray res; +// NOLINTNEXTLINE #define ASSIGN_HEL(RES, J, H1, HL, H2) \ - RES[H1][HL][H2] = J(pa, p1, pb, p2, pem, pep) // NOLINT + RES[H1][HL][H2] = J(pa, p1, pb, p2, pem, pep) ASSIGN_HEL(res, jV_j, plus, minus, minus); ASSIGN_HEL(res, jV_j, plus, minus, plus); ASSIGN_HEL(res, jV_j, plus, plus, minus); ASSIGN_HEL(res, jV_j, plus, plus, plus); #undef ASSIGN_HEL for(auto hl: {minus, plus}) { for(auto h2: {minus, plus}) { res[minus][hl][h2] = std::conj(res[plus][flip(hl)][flip(h2)]); } } return res; } // X and Y as used in contractions with unordered currents struct XY { COM X; COM Y; }; /** * @brief Z+Jets Unordered Contribution, unordered on Z side * @tparam h1 Helicity of line 1 (Z emission line) * @tparam hl Lepton Helicity * @tparam h2 Helicity of line 2 * @tparam hg Helicity of unordered gluon * @param pa Incoming Particle 1 (Z and Uno emission) * @param pb Incoming Particle 2 * @param pg Unordered Gluon * @param p1 Outgoing Particle 1 (Z and Uno emission) * @param p2 Outgoing Particle 2 * @param pep Outgoing positron * @param pem Outgoing electron * @returns X: (U1-L), Y: (U2+l) * * Calculates j_Z_{uno}^\mu j_\mu. Ie, unordered with Z emission same side. */ template XY amp_jZuno_j( const HLV & pa, const HLV & pb, const HLV & pg, const HLV & p1, const HLV & p2, const HLV & pep, const HLV & pem ){ const COM u1 = U1(p1, p2, pa, pb, pg, pem, pep); const COM u2 = U2(p1, p2, pa, pb, pg, pem, pep); const COM l = L (p1, p2, pa, pb, pg, pem, pep); return {u1 - l, u2 + l}; } MultiArray jZuno_j( const HLV & pa, const HLV & pb, const HLV & pg, const HLV & p1, const HLV & p2, const HLV & pep, const HLV & pem ){ using helicity::plus; using helicity::minus; MultiArray xy; + // NOLINTNEXTLINE #define ASSIGN_HEL(XY, J, H1, H2, H3, H4) \ XY[H1][H2][H3][H4] = J(pa, pb, pg, p1, p2, pep, pem) // NOLINT ASSIGN_HEL(xy, amp_jZuno_j, minus, minus, minus, minus); ASSIGN_HEL(xy, amp_jZuno_j, minus, minus, minus, plus); ASSIGN_HEL(xy, amp_jZuno_j, minus, minus, plus, minus); ASSIGN_HEL(xy, amp_jZuno_j, minus, minus, plus, plus); ASSIGN_HEL(xy, amp_jZuno_j, minus, plus, minus, minus); ASSIGN_HEL(xy, amp_jZuno_j, minus, plus, minus, plus); ASSIGN_HEL(xy, amp_jZuno_j, minus, plus, plus, minus); ASSIGN_HEL(xy, amp_jZuno_j, minus, plus, plus, plus); ASSIGN_HEL(xy, amp_jZuno_j, plus, minus, minus, minus); ASSIGN_HEL(xy, amp_jZuno_j, plus, minus, minus, plus); ASSIGN_HEL(xy, amp_jZuno_j, plus, minus, plus, minus); ASSIGN_HEL(xy, amp_jZuno_j, plus, minus, plus, plus); ASSIGN_HEL(xy, amp_jZuno_j, plus, plus, minus, minus); ASSIGN_HEL(xy, amp_jZuno_j, plus, plus, minus, plus); ASSIGN_HEL(xy, amp_jZuno_j, plus, plus, plus, minus); ASSIGN_HEL(xy, amp_jZuno_j, plus, plus, plus, plus); #undef ASSIGN_HEL return xy; } /** * @brief Z+Jets Unordered Contribution, unordered opposite to Z side * @tparam h1 Helicity of line 1 (Z emission) * @tparam hl Lepton Helicity * @tparam h2 Helicity of line 2 (unordered emission) * @tparam hg Helicity of unordered gluon * @param pa Incoming Particle 1 (Z emission) * @param pb Incoming Particle 2 
(unordered emission) * @param p1 Outgoing Particle 1 (Z emission) * @param p2 Outgoing Particle 2 (unordered emission) * @param pg Unordered Gluon * @param pep Outgoing positron * @param pem Outgoing electron * @returns X: (U1-L), Y: (U2+l) * * Calculates j_Z^\mu j_{uno}_\mu. Ie, unordered with Z emission opposite side. */ template XY amp_jZ_juno( const HLV & pa, const HLV & pb, const HLV & p1, const HLV & p2, const HLV & pg, const HLV & pep, const HLV & pem ){ const COM u1 = U1_jV(pa, p1, pb, p2, pg, pem, pep); const COM u2 = U2_jV(pa, p1, pb, p2, pg, pem, pep); const COM l = L_jV (pa, p1, pb, p2, pg, pem, pep); return {u1 - l, u2 + l}; } MultiArray jZ_juno( const HLV & pa, const HLV & pb, const HLV & p1, const HLV & p2, const HLV & pg, const HLV & pep, const HLV & pem ){ using helicity::plus; using helicity::minus; MultiArray xy; + +// NOLINTNEXTLINE #define ASSIGN_HEL(XY, J, H1, H2, H3, H4) \ - XY[H1][H2][H3][H4] = J(pa, pb, p1, p2, pg, pep, pem) // NOLINT + XY[H1][H2][H3][H4] = J(pa, pb, p1, p2, pg, pep, pem) ASSIGN_HEL(xy, amp_jZ_juno, minus, minus, minus, minus); ASSIGN_HEL(xy, amp_jZ_juno, minus, minus, minus, plus); ASSIGN_HEL(xy, amp_jZ_juno, minus, minus, plus, minus); ASSIGN_HEL(xy, amp_jZ_juno, minus, minus, plus, plus); ASSIGN_HEL(xy, amp_jZ_juno, minus, plus, minus, minus); ASSIGN_HEL(xy, amp_jZ_juno, minus, plus, minus, plus); ASSIGN_HEL(xy, amp_jZ_juno, minus, plus, plus, minus); ASSIGN_HEL(xy, amp_jZ_juno, minus, plus, plus, plus); ASSIGN_HEL(xy, amp_jZ_juno, plus, minus, minus, minus); ASSIGN_HEL(xy, amp_jZ_juno, plus, minus, minus, plus); ASSIGN_HEL(xy, amp_jZ_juno, plus, minus, plus, minus); ASSIGN_HEL(xy, amp_jZ_juno, plus, minus, plus, plus); ASSIGN_HEL(xy, amp_jZ_juno, plus, plus, minus, minus); ASSIGN_HEL(xy, amp_jZ_juno, plus, plus, minus, plus); ASSIGN_HEL(xy, amp_jZ_juno, plus, plus, plus, minus); ASSIGN_HEL(xy, amp_jZ_juno, plus, plus, plus, plus); #undef ASSIGN_HEL return xy; } } // Anonymous Namespace std::vector ME_Z_qQ(const HLV & pa, const HLV & pb, const HLV & p1, const HLV & p2, const HLV & pep, const HLV & pem, const ParticleID aptype, const ParticleID bptype, ParticleProperties const & zprop, const double stw2, const double ctw ){ using helicity::minus; using helicity::plus; const HLV pZ = pep + pem; const COM propZ = ZProp(pZ.m2(), zprop); const COM propG = GProp(pZ.m2()); MultiArray pref_top = Z_amp_pref(aptype, propZ, propG, stw2, ctw); MultiArray pref_bot = Z_amp_pref(bptype, propZ, propG, stw2, ctw); MultiArray coeff_top = jZ_j(pa, pb, p1, p2, pep, pem); MultiArray coeff_bot = jZ_j(pb, pa, p2, p1, pep, pem); double sum_top=0.; double sum_bot=0.; double sum_mix=0.; for(auto h1: {minus, plus}){ for(auto hl: {minus, plus}){ for(auto h2: {minus, plus}){ const COM res_top = pref_top[h1][hl] * coeff_top[h1][hl][h2]; const COM res_bot = pref_bot[h2][hl] * coeff_bot[h2][hl][h1]; sum_top += norm(res_top); sum_bot += norm(res_bot); sum_mix += 2.0 * real(res_top * conj(res_bot)); } } } const double t1_top = (pa-p1-pZ).m2(); const double t2_top = (pb-p2 ).m2(); const double t1_bot = (pa-p1 ).m2(); const double t2_bot = (pb-p2-pZ).m2(); sum_top /= t1_top * t2_top; sum_bot /= t1_bot * t2_bot; sum_mix /= sqrt(t1_top * t2_top * t1_bot * t2_bot); // Colour factor: (CF*CA)/2 // Colour and helicity average: 1/(4*Nc^2) const double pref = (C_F*C_A) / (8*N_C*N_C); return {sum_top*pref, sum_bot*pref, sum_mix*pref}; } double ME_Z_qg(const HLV & pa, const HLV & pb, const HLV & p1, const HLV & p2, const HLV & pep, const HLV & pem, const ParticleID aptype, const ParticleID 
/*bptype*/, ParticleProperties const & zprop, const double stw2, const double ctw ){ using helicity::minus; using helicity::plus; const HLV pZ = pep + pem; const COM propZ = ZProp(pZ.m2(), zprop); const COM propG = GProp(pZ.m2()); MultiArray pref = Z_amp_pref(aptype, propZ, propG, stw2, ctw); MultiArray coeff = jZ_j(pa, pb, p1, p2, pep, pem); double sum = 0.; for(auto h1: {minus, plus}){ for(auto hl: {minus, plus}){ for(auto h2: {minus, plus}){ sum += norm(pref[h1][hl] * coeff[h1][hl][h2]); } } } sum /= (pa-p1-pZ).m2()*(pb-p2).m2(); // Colour factor: (CF*CA)/2 // Colour and helicity average: 1/(4*Nc^2) // Divide by CF because of gluon (K_g -> CA) //! TODO explain magic 8 sum *= C_A / (8.*N_C*N_C); // Multiply by CAM return sum * K_g(p2, pb); } std::vector ME_Zuno_qQ(const HLV & pa, const HLV & pb, const HLV & pg, const HLV & p1, const HLV & p2, const HLV & pep, const HLV & pem, const ParticleID aptype, const ParticleID bptype, ParticleProperties const & zprop, const double stw2, const double ctw ){ using helicity::minus; using helicity::plus; const HLV pZ = pep + pem; const COM propZ = ZProp(pZ.m2(), zprop); const COM propG = GProp(pZ.m2()); MultiArray prefact_top = Z_amp_pref(aptype, propZ, propG, stw2, ctw); MultiArray prefact_bot = Z_amp_pref(bptype, propZ, propG, stw2, ctw); const MultiArray coeff_top = jZuno_j(pa, pb, pg, p1, p2, pep, pem); const MultiArray coeff_bot = jZ_juno(pb, pa, p2, p1, pg, pep, pem); double sum_top=0.; double sum_bot=0.; double sum_mix=0.; for(auto h1: {minus, plus}){ for(auto hl: {minus, plus}){ for(auto h2: {minus, plus}){ for(auto hg: {minus, plus}){ const COM pref_top = prefact_top[h1][hl]; const COM x_top = coeff_top[h1][hl][h2][hg].X; const COM y_top = coeff_top[h1][hl][h2][hg].Y; const COM pref_bot = prefact_bot[h2][hl]; const COM x_bot = coeff_bot[h2][hl][h1][hg].X; const COM y_bot = coeff_bot[h2][hl][h1][hg].Y; sum_top += norm(pref_top) * (C_A*C_F*C_F/2.*(norm(x_top)+norm(y_top)) - C_F/2.*(x_top*conj(y_top)).real()); sum_bot += norm(pref_bot) * (C_A*C_F*C_F/2.*(norm(x_bot)+norm(y_bot)) - C_F/2.*(x_bot*conj(y_bot)).real()); const COM xx = C_A*C_F*C_F/2. * pref_top * x_top * conj(pref_bot * x_bot); const COM yy = C_A*C_F*C_F/2. * pref_top * y_top * conj(pref_bot * y_bot); const COM xy = -C_F/2. 
* (pref_top * x_top * conj(pref_bot * y_bot) + pref_top * y_top * conj(pref_bot * x_bot)); sum_mix += 2.0 * real(xx + yy + xy); } } } } const double t1_top = (pa-pg-p1-pZ).m2(); const double t2_top = (pb-p2 ).m2(); const double t1_bot = (pa-pg-p1).m2(); const double t2_bot = (pb-p2-pZ).m2(); sum_top /= t1_top * t2_top; sum_bot /= t1_bot * t2_bot; sum_mix /= sqrt(t1_top * t2_top * t1_bot * t2_bot); //Helicity sum and average over initial states const double pref = 1./(4.*C_A*C_A); return {sum_top*pref, sum_bot*pref, sum_mix*pref}; } double ME_Zuno_qg(const HLV & pa, const HLV & pb, const HLV & pg, const HLV & p1, const HLV & p2, const HLV & pep, const HLV & pem, const ParticleID aptype, const ParticleID /*bptype*/, ParticleProperties const & zprop, const double stw2, const double ctw ){ using helicity::minus; using helicity::plus; const HLV pZ = pep + pem; const COM propZ = ZProp(pZ.m2(), zprop); const COM propG = GProp(pZ.m2()); MultiArray pref = Z_amp_pref(aptype, propZ, propG, stw2, ctw); const auto coeff = jZuno_j(pa, pb, pg, p1, p2, pep, pem); double sum = 0.; for(auto h1: {minus, plus}){ for(auto hl: {minus, plus}){ for(auto h2: {minus, plus}){ for(auto hg: {minus, plus}){ const COM X = coeff[h1][hl][h2][hg].X; const COM Y = coeff[h1][hl][h2][hg].Y; sum += norm(pref[h1][hl]) * (C_A*C_F*C_F/2.*(norm(X)+norm(Y)) - C_F/2.*(X*conj(Y)).real()); } } } } sum /= (pa-pg-p1-pZ).m2()*(p2-pb).m2(); //Helicity sum and average over initial states sum /= (4.*C_A*C_A); // Multiply by CAM return sum * (K_g(p2, pb) / C_F); } } // namespace currents } // namespace HEJ diff --git a/src/resummation_jet.cc b/src/resummation_jet.cc index 1584463..f2a656d 100644 --- a/src/resummation_jet.cc +++ b/src/resummation_jet.cc @@ -1,114 +1,114 @@ /** * \authors The HEJ collaboration (see AUTHORS for details) * \date 2019-2020 * \copyright GPLv2 or later */ #include "HEJ/resummation_jet.hh" #include #include #include #include #include #include "fastjet/PseudoJet.hh" #include "HEJ/utility.hh" namespace HEJ { std::vector resummation_jet_momenta( - std::vector const & p_born, + std::vector const & p_born, fastjet::PseudoJet const & qperp ) { // for "new" reshuffling p^B = p + qperp*|p^B|/P^B double Pperp_born = 0.; - for(auto const & p: p_born) Pperp_born += p.perp(); + for(auto const & p: p_born) Pperp_born += p->perp(); std::vector p_res; p_res.reserve(p_born.size()); for(auto const & pB: p_born) { - const double px = pB.px() - qperp.px()*pB.perp()/Pperp_born; - const double py = pB.py() - qperp.py()*pB.perp()/Pperp_born; - const double pperp = std::sqrt(px*px + py*py); + const double px = pB->px() - qperp.px()*pB->perp()/Pperp_born; + const double py = pB->py() - qperp.py()*pB->perp()/Pperp_born; + const double mperp = std::sqrt(px*px + py*py + pB->m2()); // keep the rapidities fixed - const double pz = pperp*std::sinh(pB.rapidity()); - const double E = pperp*std::cosh(pB.rapidity()); + const double pz = mperp*std::sinh(pB->rapidity()); + const double E = mperp*std::cosh(pB->rapidity()); p_res.emplace_back(px, py, pz, E); assert( nearby_ep( p_res.back().rapidity(), - pB.rapidity(), + pB->rapidity(), 1e-5 ) ); } return p_res; } namespace { enum coordinates : std::size_t { x1, x2 }; namespace ublas = boost::numeric::ublas; template double det(ublas::matrix_expression const& m) { ublas::permutation_matrix pivots{m().size1()}; Matrix mLu{m()}; const auto is_singular = lu_factorize(mLu, pivots); if(is_singular) return 0.; double det = 1.0; for (std::size_t i = 0; i < pivots.size(); ++i){ if (pivots(i) != i) det = -det; 
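        // every row interchange recorded by lu_factorize flips the sign of the
        // determinant; the product of the diagonal entries of the combined LU
        // factor (next statement) then supplies its magnitude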
        det *= mLu(i,i);
      }
      return det;
    }

    using ublas::matrix;
  } // namespace

  double resummation_jet_weight(
-    std::vector const & p_born,
+    std::vector const & p_born,
    fastjet::PseudoJet const & qperp
  ) {
    using std::size_t;

    static constexpr int num_coordinates = 2;
    auto Jacobian = matrix{
      num_coordinates*p_born.size(),
      num_coordinates*p_born.size()
    };
    double P_perp = 0.;
-    for(auto const & J: p_born) P_perp += J.perp();
+    for(auto const & J: p_born) P_perp += J->perp();
    for(size_t l = 0; l < p_born.size(); ++l){
-      const double Jl_perp = p_born[l].perp();
+      const double Jl_perp = p_born[l]->perp();
      for(size_t lp = 0; lp < p_born.size(); ++lp){
        const int delta_l = static_cast(l == lp);
        auto const & Jlp = p_born[lp];
-        const double Jlp_perp = Jlp.perp();
+        const double Jlp_perp = Jlp->perp();
        for(size_t x = x1; x <= x2; ++x){
          const double qxy = (x==x1)?qperp.px():qperp.py();
          for(size_t xp = x1; xp <= x2; ++xp){
-            const double Jxy = (xp==x1)?Jlp.px():Jlp.py();
+            const double Jxy = (xp==x1)?Jlp->px():Jlp->py();
            const int delta_x = static_cast(x == xp);
            Jacobian(2*l + x, 2*lp + xp) = + delta_l*delta_x - qxy*Jxy/(P_perp*Jlp_perp) * (delta_l - Jl_perp/P_perp);
          }
        }
      }
    }
    return det(Jacobian);
  }
} // namespace HEJ
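// The reshuffling above is the core of the changed recoil treatment: the total
// transverse momentum qperp of the soft emissions is distributed over the Born
// momenta (which, since this release, may include the boson) in proportion to
// their transverse momenta, and resummation_jet_weight supplies the Jacobian
// of that map. The stand-alone sketch below is illustrative only and not part
// of this patch: it re-implements just the transverse part of the map with a
// toy Vec2 type instead of fastjet::PseudoJet (the real code also keeps
// rapidities and invariant masses fixed via the transverse mass) and
// cross-checks the analytic Jacobian against finite differences. All names
// here (Vec2, reshuffle, jacobian, det, norm) are made up for the sketch.

#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

struct Vec2 { double x, y; };
double norm(Vec2 const & v) { return std::hypot(v.x, v.y); }

// p_i^res = p_i^B - qperp*|p_i^B|/P^B  with  P^B = sum_j |p_j^B|
std::vector<Vec2> reshuffle(std::vector<Vec2> const & p, Vec2 const & q) {
  double P = 0.;
  for(auto const & pi: p) P += norm(pi);
  std::vector<Vec2> res;
  res.reserve(p.size());
  for(auto const & pi: p) {
    const double f = norm(pi)/P;
    res.push_back({pi.x - q.x*f, pi.y - q.y*f});
  }
  return res;
}

// analytic Jacobian d p^res/d p^B, same structure as in resummation_jet_weight
std::vector<std::vector<double>> jacobian(
  std::vector<Vec2> const & p, Vec2 const & q
) {
  double P = 0.;
  for(auto const & pi: p) P += norm(pi);
  std::vector<std::vector<double>> J(2*p.size(), std::vector<double>(2*p.size(), 0.));
  for(std::size_t l = 0; l < p.size(); ++l){
    for(std::size_t lp = 0; lp < p.size(); ++lp){
      const double delta_l = (l == lp)? 1.: 0.;
      for(std::size_t x = 0; x < 2; ++x){
        const double qx = (x == 0)? q.x: q.y;
        for(std::size_t xp = 0; xp < 2; ++xp){
          const double pxp = (xp == 0)? p[lp].x: p[lp].y;
          const double delta_x = (x == xp)? 1.: 0.;
          J[2*l + x][2*lp + xp] = delta_l*delta_x
            - qx*pxp/(P*norm(p[lp]))*(delta_l - norm(p[l])/P);
        }
      }
    }
  }
  return J;
}

// determinant via Gaussian elimination with partial pivoting
double det(std::vector<std::vector<double>> m) {
  double d = 1.;
  for(std::size_t k = 0; k < m.size(); ++k){
    std::size_t piv = k;
    for(std::size_t i = k+1; i < m.size(); ++i)
      if(std::abs(m[i][k]) > std::abs(m[piv][k])) piv = i;
    if(m[piv][k] == 0.) return 0.;
    if(piv != k){ m[piv].swap(m[k]); d = -d; }
    d *= m[k][k];
    for(std::size_t i = k+1; i < m.size(); ++i){
      const double f = m[i][k]/m[k][k];
      for(std::size_t j = k; j < m.size(); ++j) m[i][j] -= f*m[k][j];
    }
  }
  return d;
}

int main() {
  const std::vector<Vec2> born{{40., 5.}, {-35., 20.}, {-10., -22.}};
  const Vec2 q{3., -2.};
  const auto res = reshuffle(born, q);

  // finite-difference Jacobian as a numerical cross-check
  const double eps = 1e-6;
  std::vector<std::vector<double>> J_num(2*born.size(), std::vector<double>(2*born.size()));
  for(std::size_t lp = 0; lp < born.size(); ++lp){
    for(std::size_t xp = 0; xp < 2; ++xp){
      auto shifted = born;
      (xp == 0? shifted[lp].x: shifted[lp].y) += eps;
      const auto res2 = reshuffle(shifted, q);
      for(std::size_t l = 0; l < born.size(); ++l){
        J_num[2*l][2*lp + xp] = (res2[l].x - res[l].x)/eps;
        J_num[2*l + 1][2*lp + xp] = (res2[l].y - res[l].y)/eps;
      }
    }
  }
  std::cout << "analytic Jacobian determinant:          " << det(jacobian(born, q)) << '\n'
            << "finite-difference Jacobian determinant: " << det(J_num) << '\n';
}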