Regretfully, I must start with an apology, as I fear I may be unable to express this question rigorously.
Reading physics papers, I often encounter the concept of a "length scale", used in statements such as "over this length scale, the phenomenon can be characterized by an exponential decay", or "the increase in $X$ is virtually linear over such and such a time scale". The Nobel laureate de Gennes seems to me a virtuoso of this particular art.
I can follow some of the reasoning, but I am not sure I fully understand the method. For example, imagine I have a model characterizing the change of a certain quantity $Y$ with time $t$: $$Y = e^{-A/t}$$ where $A$ is a positive constant. The function tends to zero for small times; it is first convex and then becomes concave. One expects to be able to characterize the time scale at which this transition occurs, in relation to the constant $A$. How can that be done? I calculated the second derivative, which equals $$Y'' = e^{-A/t} \frac{A^2}{t^4} - 2 e^{-A/t} \frac{A}{t^3},$$ and imposing the condition that it equal zero I get the equation $$\frac{A}{t^4} = \frac{2}{t^3},$$ i.e. $t = A/2$, suggesting the conclusion that the convex-to-concave transition occurs at time scales on the order of $A$. Is this correct?
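To check my algebra, here is a quick symbolic sketch (my own verification using SymPy, not taken from any paper):

```python
# Symbolic check of the inflection point of Y = exp(-A/t) for t > 0.
# My own verification sketch, assuming SymPy is available.
import sympy as sp

t, A = sp.symbols("t A", positive=True)
Y = sp.exp(-A / t)

Ypp = sp.diff(Y, t, 2)             # second derivative
print(sp.simplify(Ypp))            # should factor to A*(A - 2*t)*exp(-A/t)/t**4
print(sp.solve(sp.Eq(Ypp, 0), t))  # [A/2]: the convex-to-concave transition
```

Since $e^{-A/t}$ never vanishes for $t > 0$, the sign of $Y''$ is set by the factor $(A - 2t)$, which is consistent with reading the transition time as $t = A/2$, i.e. of order $A$.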
But what truly puzzles me is how to characterize, for example, the time scale over which the function is "almost flat", in the initial convex region. For instance, if one fixes $A = 100$ and plots the function up to $t = 5$, intuitively one suspects there must be a way to say "over such and such a time scale, the function is flat". I wonder whether, to give this statement a precise sense, one should specify which variations in the function can be considered negligible.
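For concreteness, here is a minimal numerical sketch (again my own illustration, assuming NumPy) of how small both the function and its slope stay over that window, which is what makes me suspect a tolerance on admissible variations is needed:

```python
# Numerical look at the flatness of Y = exp(-A/t) for A = 100 on 0 < t <= 5.
# My own illustration; the window follows the example above.
import numpy as np

A = 100.0
t = np.linspace(0.5, 5.0, 10)   # avoid t = 0, where Y vanishes smoothly
Y = np.exp(-A / t)
Yp = Y * A / t**2               # first derivative of exp(-A/t)

print(Y.max())    # ~2e-9 (= e^{-20}, attained at t = 5)
print(Yp.max())   # ~8e-9: the slope is likewise negligible on this window
```

A natural way to make "flat over this time scale" precise would then be to demand, say, $|Y| < \varepsilon$ or $|Y'| < \varepsilon$ for some chosen tolerance $\varepsilon$, which is exactly the kind of convention I am asking about.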
Thanks a lot for any help with this admittedly rather misty question.