
Reorganise the data storage
Open, Normal, Public

Description

Copied from original trac ticket:
https://laura.hepforge.org/trac/ticket/14

At present the data are read in via a TTree in LauFitDataTree. Then, when the PDFs are asked to cache information, each makes its own local copy of its abscissa values, as well as of the likelihood value or some intermediate calculated values. In fits with very large data samples this could cause memory problems.

It would be better to keep only one copy of the data in memory and give the PDFs references to their abscissa values. They can still store their likelihood values and/or intermediate values locally, as in the sketch below.
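
A minimal sketch of the idea, not the actual Laura++ classes: the names DataStore and Pdf (and the placeholder evaluate function) are purely illustrative. One copy of each variable lives in a central store filled from the TTree; a PDF holds only a read-only pointer into that store plus its own per-event cache of likelihood values.

```
#include <map>
#include <string>
#include <vector>

// Central store: one vector of values per fit variable, filled once from the TTree.
class DataStore {
public:
    void addVariable(const std::string& name, std::vector<double> values) {
        store_[name] = std::move(values);
    }
    // Read-only view handed out to the PDFs; no copy is made.
    const std::vector<double>& abscissas(const std::string& name) const {
        return store_.at(name);
    }
private:
    std::map<std::string, std::vector<double>> store_;
};

// A PDF keeps only a pointer to the shared abscissa vector, plus its own
// per-event cache of likelihood (or intermediate) values.
class Pdf {
public:
    void cacheInfo(const DataStore& data, const std::string& varName) {
        abscissas_ = &data.abscissas(varName);
        likelihoods_.resize(abscissas_->size());
        for (std::size_t i = 0; i < abscissas_->size(); ++i) {
            likelihoods_[i] = evaluate((*abscissas_)[i]);
        }
    }
    double likelihood(std::size_t iEvt) const { return likelihoods_[iEvt]; }
private:
    double evaluate(double x) const { return x; } // placeholder PDF shape
    const std::vector<double>* abscissas_ = nullptr; // shared, not copied
    std::vector<double> likelihoods_;                // per-PDF cache only
};
```

The memory cost then scales as one copy of the data plus one cached value per event per PDF, rather than a copy of the abscissas for every PDF.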

In the case of extremely large statistics even this might break down, so we need to think about what to do then. Perhaps just read directly from the TTree (with tweaked buffering options)? A possible approach is sketched below.
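
A minimal sketch of that fallback, assuming standard ROOT I/O: the data stay on disk and events are streamed from the TTree with an enlarged read cache, so memory use is bounded regardless of sample size. The file, tree and branch names here are only illustrative.

```
#include "TFile.h"
#include "TTree.h"

void streamEvents() {
    TFile* file = TFile::Open("data.root", "READ");
    TTree* tree = dynamic_cast<TTree*>(file->Get("fitTree"));

    // Enlarge the TTree read cache (here 100 MB) and cache all branches,
    // so sequential reads are served by a few large I/O requests.
    tree->SetCacheSize(100 * 1024 * 1024);
    tree->AddBranchToCache("*", true);

    double m13Sq(0.0), m23Sq(0.0);
    tree->SetBranchAddress("m13Sq", &m13Sq);
    tree->SetBranchAddress("m23Sq", &m23Sq);

    const Long64_t nEntries = tree->GetEntries();
    for (Long64_t i = 0; i < nEntries; ++i) {
        tree->GetEntry(i);
        // ... evaluate the likelihood for this event using m13Sq, m23Sq ...
    }
    file->Close();
}
```

The trade-off is re-reading (and possibly re-evaluating) per-event quantities on every iteration of the fit, in exchange for a fixed memory footprint.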

Event Timeline

tlatham created this object with edit policy "Laura (Project)".
tlatham triaged this task as High priority.
tlatham lowered the priority of this task from High to Normal.
tlatham changed the visibility from "All Users" to "Public (No Login Required)". Nov 4 2019, 11:37 AM