I am reading Gelfand's *Calculus of Variations* and mathematically everything makes sense to me. It seems perfectly natural to set up the mathematics of extremizing functionals and show that, in extremizing a certain functional, you can end up with Newton's laws: you extremize $\int L \, dt$ for some as-yet-unspecified $L$, and by comparing the resulting equations with Newton's laws you see that one should define $L$ as $T - V$. This way of looking at things requires no magic; to me it seems you've just found a clever way of doing mathematics that happens to reproduce Newton's laws.
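For concreteness, here is the computation I have in mind (my own sketch of the standard calculation, not Gelfand's exact presentation): for a single particle with $L = T - V = \frac{1}{2}m\dot{x}^2 - V(x)$, the Euler-Lagrange equation

$$\frac{d}{dt}\frac{\partial L}{\partial \dot{x}} - \frac{\partial L}{\partial x} = 0$$

becomes

$$\frac{d}{dt}\left(m\dot{x}\right) + V'(x) = 0, \qquad \text{i.e.} \qquad m\ddot{x} = -V'(x),$$

which is Newton's second law with force $F = -V'(x)$.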
However, in books like Landau one must assume this magical principle of least action, using the kind of thinking akin to Maupertuis's, and I remember that every time I read some justification of it there is always a crux point where the author says 'because it works'. I suspect there may be a way to explain the principle of least action if you think of extremizing functionals along the lines Euler first did and use the method of finite differences (as is done in the chapter on the variational derivative in Gelfand; I can't post a link, unfortunately). That is, because you're thinking of a functional as a function of $n$ variables, you might somehow incorporate the $T - V$ naturally into what you're extremizing, but I don't really know... I'm really hoping someone here can give me a definitive answer and go through the thought process behind this principle once and for all!
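To make the finite-difference picture concrete, here is a small sketch (my own, not from Gelfand; the function names, the choice of a free particle with unit mass, and the fixed endpoints $x(0)=0$, $x(1)=1$ are all illustrative assumptions). It treats the discretized action as an ordinary function of the $n$ interior points, as Euler did, and minimizes it by gradient descent; the minimizer comes out as uniform straight-line motion, the discrete analogue of Newton's first law.

```python
def action(xs, dt):
    """Discretized action S = sum of (1/2) * ((x_{i+1} - x_i) / dt)^2 * dt
    for a free particle (V = 0, m = 1): a plain function of finitely
    many variables, in the spirit of Euler's finite-difference method."""
    return sum(0.5 * ((xs[i + 1] - xs[i]) / dt) ** 2 * dt
               for i in range(len(xs) - 1))

def minimize_action(n=9, steps=2000, lr=0.05):
    """Gradient-descend the discretized action over the n interior
    points, keeping the endpoints x_0 = 0 and x_{n+1} = 1 fixed.
    The defaults are chosen so the update step is stable."""
    dt = 1.0 / (n + 1)
    xs = [0.0] + [0.0] * n + [1.0]   # arbitrary interior starting guess
    for _ in range(steps):
        for i in range(1, n + 1):
            # dS/dx_i = -(x_{i+1} - 2 x_i + x_{i-1}) / dt  (free particle)
            grad = -(xs[i + 1] - 2 * xs[i] + xs[i - 1]) / dt
            xs[i] -= lr * grad
    return xs
```

With the defaults, `minimize_action()` converges to `xs[i] ≈ i/10`, the straight line between the endpoints. If $V \neq 0$, the gradient picks up an extra $V'(x_i)\,dt$ term, and setting it to zero is exactly the discrete form of $m\ddot{x} = -V'(x)$, which is how the $T - V$ structure enters the finite-dimensional picture.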
As far as I know, one can either assume the principle as in Landau and use all this beautiful theory of homogeneity etc. to get results; or assume Newton's laws and use the principle of virtual work to derive the Euler-Lagrange equations; or start from Hamilton's equations and end up with Newton's laws; or assume Newton's laws, set up the calculus of variations, and show how extremizing one particular functional leads back to Newton's laws. But I'm not clear on exactly what's going on and would really love some help with this.