So I was reading this paper, "Limits to Binary Logic Switch Scaling—A Gedanken Model". The following is the paper's abstract:
In this paper we consider device scaling and speed limitations on irreversible von Neumann computing that are derived from the requirement of "least energy computation." We consider computational systems whose material realizations utilize electrons and energy barriers to represent and manipulate their binary representations of state.
Naturally, the paper uses a bit of physics. On the second page, the author rewrites $ x = \dfrac{\hbar}{\Delta p} $ as $ x = \dfrac{\hbar}{\sqrt{2mE_{\text{bit}}}} $. I am aware that $ p = \sqrt{2mE} $, but why is one allowed to use $ \Delta p = \sqrt{2mE} $? Is this allowed, or is the author making a mistake?
Answer
This is an estimation tool not uncommon in theoretical physics: one knows the value of some quantity for a given problem and assumes that the scale of the problem with regard to that quantity is of the same order of magnitude as the known value. In other words, we assume that the error in our known value is not much greater than the value itself; otherwise, we wouldn't actually know the value.
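Written out, the heuristic is just the order-of-magnitude replacement

$$\Delta p \sim p = \sqrt{2mE_{\text{bit}}} \quad\Longrightarrow\quad x \sim \frac{\hbar}{\Delta p} = \frac{\hbar}{\sqrt{2mE_{\text{bit}}}},$$

which is exactly the substitution made in the paper.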
For instance, the converse of this argument is sometimes used when discussing the absolute masses of the neutrino flavors. The differences between the neutrino masses have been measured, so when one needs an estimate of the absolute mass of a neutrino, the best guess is that it is roughly of the same order as the mass differences. It would be strange, the argument goes, for the neutrino masses to be so tightly packed compared to their actual values: why should we know so many significant figures of such a (comparatively) large value?
This is likely what the author means: for an estimate of the minimum scale of a switch, it is reasonable to assume that the spread in the momentum of the charge carriers is of the same order as the momentum itself. If the spread were much larger, we wouldn't actually know the momentum; if it were much smaller, this would cease to be an estimate.
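To see what the estimate gives numerically, here is a minimal sketch; taking $E_{\text{bit}} = k_B T \ln 2$ at room temperature (the Landauer bound) is my choice for illustration, as are the variable names:

```python
# Order-of-magnitude estimate of the minimum switch size from the paper's
# formula x = hbar / sqrt(2 m E_bit). The choice E_bit = k_B T ln 2 at
# T = 300 K (the Landauer limit) is an assumption for illustration.
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron mass, kg
k_B  = 1.380649e-23      # Boltzmann constant, J/K
T    = 300.0             # temperature, K

E_bit = k_B * T * math.log(2)            # minimum energy per bit, ~2.87e-21 J
x_min = hbar / math.sqrt(2 * m_e * E_bit)

print(f"E_bit = {E_bit:.3e} J")
print(f"x_min = {x_min * 1e9:.2f} nm")   # ~1.5 nm
```

The point is not the exact number but its scale: treating $\Delta p$ as the full momentum $\sqrt{2mE_{\text{bit}}}$ puts the smallest meaningful switch at around a nanometer.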
Edit: Here's another way to put it that is more tightly focused on this question. In the classical model of an electron gas (the Drude model), electrons behave like particles in an ideal gas, so their speeds (and by extension their momenta) follow a Maxwell-Boltzmann distribution. The mean, mode, and standard deviation (the square root of the variance) of that distribution all scale linearly with its scale parameter $a$, which means the mean is proportional to the standard deviation. That is the mathematical way of saying, "The bigger your guessed value is, the bigger your error in that guess will be."
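As a quick check of that scaling claim, here is a sketch using `scipy.stats.maxwell`; the library choice is mine, not something from the paper:

```python
# Check that the mean, mode, and standard deviation of the Maxwell
# speed distribution all scale linearly with the scale parameter a,
# so the "error" (std) stays proportional to the "value" (mean).
import numpy as np
from scipy.stats import maxwell

for a in (1.0, 2.0, 5.0):
    mean = maxwell.mean(scale=a)      # = 2a * sqrt(2/pi)
    std  = maxwell.std(scale=a)       # = a * sqrt(3 - 8/pi)
    mode = np.sqrt(2) * a             # closed form for the peak of the pdf
    print(f"a={a}: mean={mean:.3f}, mode={mode:.3f}, std={std:.3f}, "
          f"std/mean={std/mean:.3f}")  # ratio is constant, ~0.422
```

Since the ratio of standard deviation to mean is a fixed constant independent of $a$, assuming $\Delta p \sim p$ is not just convenient but exactly right up to a factor of order one.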