I understand that a small proportion of events at the LHC that would not pass any deterministic trigger are saved by what might be called a random trigger, so that, among other uses, proposed new triggers can be tested. What proportion of events are saved by deterministic triggers and by random triggers? Also, how much is that data used, and for what other purposes?
This question was prompted by a Cosmic Variance post, http://blogs.discovermagazine.com/cosmicvariance/2012/07/13/particle-physics-and-cosmology-in-auckland/, where the same question may also be found.
Answer
Random triggers (or at least triggers uncorrelated with the data; many experiments use a one-pulse-per-second signal, "1PPS", for this purpose, and I think CMS calls them "zero bias") give you an unambiguous measurement of things that might contaminate the data. Electronics noise, cosmic-ray rates, and radio-contamination of the detector elements are some of the things that can be extracted this way.
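As a rough illustration of that use, here is a toy sketch (not any experiment's actual code; the event counts and channel number are invented) of turning a zero-bias sample into a background-occupancy estimate. Because the trigger is uncorrelated with the beam, anything seen in these readouts is background by construction.

    import math

    # Toy zero-bias sample: each entry is the number of detector channels
    # that fired in one randomly triggered (e.g. 1PPS) readout.
    # These numbers are invented purely for illustration.
    zero_bias_hits = [0, 2, 1, 0, 0, 3, 0, 1, 0, 0, 2, 0, 1, 0, 0]
    n_channels = 10_000  # hypothetical number of readout channels

    # Any activity here is background: electronics noise, cosmics,
    # radio-contamination of the detector material, etc.
    n_events = len(zero_bias_hits)
    total_hits = sum(zero_bias_hits)

    # Mean occupancy per channel per readout, with a simple Poisson error.
    occupancy = total_hits / (n_events * n_channels)
    occupancy_err = math.sqrt(total_hits) / (n_events * n_channels)

    print(f"background occupancy = {occupancy:.2e} +/- {occupancy_err:.2e} hits/channel/event")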
Further, when you have an information-discarding trigger (as the big collider experiments must, since their data rates are sky-high as it is), you will generally also take some unfiltered or lightly filtered triggers. These allow you to measure (rather than calculate on a theoretical basis) the efficiencies related to the way in which the trigger discards information; a sketch of such a measurement follows below. CMS calls one such set their "minimum bias" triggers.
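To make the efficiency point concrete, here is a minimal sketch, with invented data and not any experiment's real workflow, of how an unbiased sample lets you measure a trigger's efficiency for some offline-selected class of events directly from data.

    import math

    # Hypothetical unbiased (minimum-bias) sample: for each event we record
    # whether it passes the offline selection of interest and whether the
    # trigger under study would have accepted it. Values are invented.
    events = [
        {"offline": True,  "trigger": True},
        {"offline": True,  "trigger": False},
        {"offline": True,  "trigger": True},
        {"offline": False, "trigger": False},
        {"offline": True,  "trigger": True},
        {"offline": False, "trigger": True},
    ]

    # Denominator: offline-selected events in the unbiased sample.
    selected = [e for e in events if e["offline"]]
    passed = sum(e["trigger"] for e in selected)
    n = len(selected)

    # Trigger efficiency with a simple binomial uncertainty.
    eff = passed / n
    err = math.sqrt(eff * (1 - eff) / n)
    print(f"trigger efficiency = {eff:.2f} +/- {err:.2f}")

With a real detector the same counting is done in bins of the relevant kinematic variables, but the principle is the same: the unbiased sample supplies a denominator that the filtered data stream cannot.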
Trigger design is a major part of designing high-rate experiments and has lots of statistics and not a little art to it.