Friday, 6 November 2015

quantum mechanics - Why can't the outcome of a QM measurement be calculated a priori?


Quantum Mechanics is very successful in determining the overall statistical distribution of many measurements of the same process.


On the other hand, it is completely unable to determine the outcome of a single measurement. It can only describe that outcome as "random" within the predicted distribution.


Where does this randomness come from? Has physics "given up" on the existence of microscopic physical laws by saying that single measurements are not bound to a physical law?


As a side note: repeating the same measurement over and over with the same apparatus makes the successive measurements non-independent, statistically speaking. There could be a hidden "stateful" mechanism influencing the results. Has any study of fundamental QM features been performed taking this into account? What was the outcome?
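A minimal sketch of the kind of independence check this side note asks about, assuming ideal, independent Born-rule outcomes (the 0.5 probability, the sample size, and all names are illustrative choices, not anything from an actual experiment):

```python
import numpy as np

# Simulate 1000 ideal two-outcome measurements as independent Born-rule
# samples, then look for the correlation between successive outcomes that
# a hidden "stateful" mechanism in the apparatus would introduce.
rng = np.random.default_rng(0)
p = 0.5                                   # assumed Born-rule probability of outcome 1
outcomes = (rng.random(1000) < p).astype(float)

# Lag-1 sample autocorrelation: close to 0 for truly independent trials;
# a persistent nonzero value would hint at memory between measurements.
r1 = np.corrcoef(outcomes[:-1], outcomes[1:])[0, 1]
print(f"lag-1 autocorrelation: {r1:+.3f}")
```

Real tests of this kind on experimental data would look at many lags with proper significance thresholds; this only shows the shape of the check.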




Edit: since 2 out of 3 answers seem to me not to address my original question, maybe a clarification of the question itself will improve the quality of the page :-)


The question is about why single measurements have the values they have. Out of the, say, 1000 measurements that make up a successful QM experiment, why do the individual measurements happen in that particular order? Why does the wave function collapse to a specific eigenvalue and not another? It is undeniable that this collapse (or projection) happens. Is it random? What is the source of this randomness?



In other words: what is the mechanism of choice?
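To make the question concrete, here is the standard textbook qubit example (added for illustration; the amplitudes α and β are the usual generic ones):

```latex
% A qubit prepared in a superposition of the measurement basis:
%   |psi> = alpha |0> + beta |1>,  with  |alpha|^2 + |beta|^2 = 1.
% The Born rule fixes only the distribution of outcomes:
\[
  P(0) = |\alpha|^2, \qquad P(1) = |\beta|^2,
\]
% and the post-measurement state is the eigenstate that was observed.
% The formalism is silent about which eigenvalue occurs on a given trial -
% that silence is exactly the "mechanism of choice" being asked about.
```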




Edit 2: In particular, see chapter 29 of "The Road to Reality" by Penrose, especially page 809, where the Everett interpretation is discussed - including why it is, if not wrong, quite incomplete.



Answer



The short answer is that we do not know why the world is this way. There might eventually be theories which explain this, rather than the current ones which simply take it as axiomatic. Maybe these future theories will relate to what we currently call the holographic principle, for example.


There is also the apparently related fact of the quantization of elementary phenomena, e.g. that the measured spin of an elementary particle always comes in integer or half-integer multiples of ħ. We also do not know why the world is this way.
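For concreteness, the standard textbook statement of this quantization (general QM, added here for illustration, not taken from the papers below):

```latex
% A measured component of spin can only take one of 2s + 1 discrete values:
\[
  S_z \in \{-s,\, -s+1,\, \dots,\, s-1,\, s\}\,\hbar,
  \qquad s \in \{0,\ \tfrac{1}{2},\ 1,\ \tfrac{3}{2},\ \dots\},
\]
% and nothing in between these values is ever observed.
```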


If we try to unify these two, the essential statistical aspect of quantum phenomena and the quantization of the phenomena themselves, the beginnings of a new theory start to emerge. See the papers by Tomasz Paterek, Borivoje Dakic, Caslav Brukner, Anton Zeilinger, and others for details:


http://arxiv.org/abs/0804.1423 and


http://www.univie.ac.at/qfp/publications3/pdffiles/Paterek_Logical%20independence%20and%20quantum%20randomness.pdf


beginning with Zeilinger's 1999 paper, http://www.springerlink.com/content/jt342534x711542g/ (a free copy is also available online).



These papers present phenomenological (preliminary) theories in which logical propositions about elementary phenomena can somehow carry only one or a few bits of information.
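As an illustrative sketch of that idea (a paraphrase, not the papers' own formalism): a spin-1/2 system can be thought of as carrying one definite bit, the answer to "is the spin up along the preparation axis?"; any other question gets an irreducibly random answer with Born-rule statistics.

```python
import numpy as np

# Sketch of the "one bit per elementary system" picture: a spin-1/2 prepared
# along +z answers the theta = 0 question deterministically, while a
# measurement tilted by theta is answered randomly with P(up) = cos^2(theta/2).
rng = np.random.default_rng(1)

def measure_tilted(theta, shots=10_000):
    """Simulate `shots` single-shot outcomes for a measurement axis tilted by theta."""
    p_up = np.cos(theta / 2) ** 2         # Born-rule probability for spin-1/2
    return rng.random(shots) < p_up

for theta in (0.0, np.pi / 3, np.pi / 2):
    frac = measure_tilted(theta).mean()
    print(f"theta = {theta:.2f} rad: fraction 'up' = {frac:.3f}")
# theta = 0 reproduces the stored bit exactly; theta = pi/2 is a fair coin,
# i.e. the system has no further bits with which to answer.
```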


Thanks for asking this question. It was a pleasure to find these papers.

