I reorganized the question to clarify exactly what it is that I'm asking.
Suppose an experiment is performed where a particle detector records 50 particles per second, on average.
Absent any other considerations, it seems easy enough to come up with any number of theories to explain these results. For example, either of the following two theories would seem to be acceptable:
- assume the emitter produces 100 particles per second, implying a detector efficiency of 50%
- assume the emitter produces 200 particles per second, implying a detector efficiency of 25%
The only constraint in designing a consistent model is that the particle production rate times the detector efficiency must equal the number of particles detected. It seems that I can assume any emission rate that is greater than or equal to my actual measured detection rate.
In other words, we seem to be free to multiply the presumed emission rate by any positive factor, as long as we reduce the efficiency of the detector by the same factor.
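The degeneracy described above can be sketched numerically. This is a hypothetical illustration (the specific candidate rates beyond 100 and 200 are my own): every emission rate at or above the observed 50 counts per second yields a valid (rate, efficiency) pair, since only their product is constrained.

```python
# Illustration of the degeneracy: many (emission rate, efficiency) pairs
# reproduce the same observed detection rate, since only their product
# is fixed by the measurement.
observed_rate = 50.0  # detected particles per second, as in the question

# Candidate emission rates (hypothetical values for illustration):
for emission in [50.0, 100.0, 200.0, 1000.0]:
    efficiency = observed_rate / emission  # the only constraint
    # Each pair is consistent with the same observation:
    assert abs(emission * efficiency - observed_rate) < 1e-9
    print(f"emission = {emission:g}/s  ->  efficiency = {efficiency:.0%}")
```

Note that efficiency can never exceed 100%, which is why the assumed emission rate must be at least the detection rate.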
The context of the question is this: detector efficiency is crucial to the argument in every Bell test experiment I'm familiar with. But as far as I can tell, assumptions about detector efficiency are determined in practice so as not to violate the tenets of Quantum Theory. If that's the case, then the argument becomes circular, and Bell tests only provide evidence that the QT axioms are consistent with experiment, but don't decide between QT and other possible theories.
This line of thought led me to wonder whether the proportionality between macroscopic and sub-atomic energy and mass constants (for example, the Compton wavelength) isn't similarly underdetermined.
To be clear, I'm asking a question about detector theory, and am not concerned about accuracy or calibration.
Thanks in advance, and let me know if my question can be improved, or if it's ambiguous in any way.
Answer
> - assume the emitter produces 100 particles per second, implying a detector efficiency of 50%
> - assume the emitter produces 200 particles per second, implying a detector efficiency of 25%
>
> The only constraint in designing a consistent model is that the particle production rate times the detector efficiency must equal the number of particles detected. It seems that I can assume any emission rate that is greater than or equal to my actual measured detection rate.
Cart before the horse? The calibration of the detector, i.e. its efficiency, is done with measurements independent of the experiment under study.
In finding the efficiency of a detector to be used in an experiment, one does not assume an emission rate at random. One uses a source with a known lifetime and, yes, the quantum mechanical models fitting the particular material of the detector.
After the efficiency versus energy of a detector is known, general experiments can be devised that use this detector, independent of the particular experiments, sources, and calculations used for calibration. The basic assumption, speaking of photon detectors, is that a photon of a particular energy will register in the detector independently of its source. One then uses the efficiency to recover the real numbers from a new source.
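The two-step procedure described above can be sketched as follows. All numbers here are hypothetical placeholders: a calibration source with an independently known emission rate fixes the efficiency once, and that same efficiency is then applied to an unknown source, removing the freedom to rescale rate and efficiency together.

```python
# Step 1: calibration against a source whose emission rate is known
# independently (e.g. from its measured lifetime / decay law).
# The numerical values below are hypothetical.
known_source_rate = 1000.0   # true particles/s from the calibration source
counts_from_known = 320.0    # particles/s the detector actually registered

efficiency = counts_from_known / known_source_rate  # fixed once: 0.32

# Step 2: the same efficiency is applied to a new, unknown source.
counts_from_unknown = 50.0   # particles/s observed in the experiment
inferred_emission = counts_from_unknown / efficiency

print(f"efficiency = {efficiency:.0%}")
print(f"inferred emission rate = {inferred_emission:.2f} particles/s")
```

Because the efficiency is pinned down in a separate measurement, the inferred emission rate for the new source is no longer underdetermined (at the stated energy, under the assumption that registration is independent of the source).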