Why is the adjoint of a function simply its complex conjugate? Normally with a vector we consider the adjoint to be the transpose (and the conjugate? I don't know why), so does this concept carry through to these functions? Should I imagine the conjugate of a wave function to actually be a column vector? Further, what does it mean for a function to be a row vector versus a column vector? Do they live in completely separate spaces? Are our operators always square?
Answer
I'll just first add to Alfred's answer and also point out that in more generalized discussions (theory of distributions and rigged Hilbert spaces) the notion of adjoint is broadened so that in rigged Hilbert space the set of bras is strictly bigger than the set of kets (see the end of my answer).
Further to Alfred's answer, you can think of the function $\psi(x)$ as a continuous generalization of the following process. Imagine you were doing your calculations numerically, and that you were storing the values of your function sampled at discrete points over some interval of interest to your simulation. You could think of your function as a column vector $\Psi$. To form the inner product between two such functions $\psi_1$, $\psi_2$, discretized to column vectors $\Psi_1$, $\Psi_2$, you would do the vector operation $\left<\psi_1, \psi_2\right> = \Psi_1^\dagger \Psi_2$. So there's your transpose. In computer code you would simply contract the indices (the two functions would have the same shape in memory - you wouldn't make "columns" and "rows" different in memory), hence you can see that the inner product between two functions becomes $\int \psi_1^*(x)\, \psi_2(x)\, dx$.
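To make this discretization concrete, here is a minimal sketch of my own (the Gaussian wavefunctions are made up purely for illustration) showing that the vector operation $\Psi_1^\dagger \Psi_2$, scaled by the grid spacing, approximates the integral $\int \psi_1^*(x)\,\psi_2(x)\,dx$:

```python
import numpy as np

# Discretize two (made-up) wavefunctions on a grid and compare the
# vector inner product Psi1^dagger Psi2 with the Riemann-sum
# approximation of \int psi1^*(x) psi2(x) dx.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]

psi1 = np.exp(-x**2) * np.exp(1j * x)   # example complex-valued function
psi2 = np.exp(-(x - 1.0)**2)            # example real-valued function

# Vector form: conjugate-transpose of one column vector times the other.
# np.vdot conjugates its first argument, i.e. it computes Psi1^dagger Psi2.
vec_inner = np.vdot(psi1, psi2) * dx

# Integral form: contract the indices directly, conjugating psi1.
int_inner = np.sum(np.conj(psi1) * psi2) * dx
```

The two numbers agree to floating-point precision, since contracting the indices is exactly the matrix product once the conjugation is included.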
Also, you probably shouldn't think of the transpose as being a fundamental part of the concept of "adjoint". The transpose in the inner product expressed in matrix notation $\left<\psi_1, \psi_2\right> = \Psi_1^\dagger \Psi_2 = {\Psi_1^*}^T \Psi_2 = {\Psi_1^T}^* \Psi_2$ only arises when we use matrix notation, owing to the familiar "rows on left by columns on right" rule in the matrix product. This rule is itself "only" a convention, adopted to simplify the interpretation of matrices (i.e. with their columns defined as the images of the basis vectors under the transformation the matrix stands for) and the way we combine matrices to represent composition of the linear transformations they stand for. The transpose is not an essential part of forming the inner product, which is the essence of the idea of "adjoint": if $T:\mathbf{H}\to\mathbf{H}$ is an operator mapping some inner product space $\mathbf{H}$ to itself, the "adjoint" $T^\dagger$ of $T$ is the unique operator such that $\left<T^\dagger \phi, \psi\right> = \left<\phi, T\,\psi\right>$ for all $\phi, \psi \in \mathbf{H}$.
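As a small numerical check of this defining property (my own sketch, not part of the original argument), one can verify in NumPy that the conjugate transpose of a random complex matrix satisfies $\left<T^\dagger \phi, \psi\right> = \left<\phi, T\psi\right>$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# A random complex matrix T and two random complex vectors phi, psi.
T = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
phi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)

T_dag = T.conj().T  # conjugate transpose: the matrix realization of the adjoint

lhs = np.vdot(T_dag @ phi, psi)  # <T† phi, psi>
rhs = np.vdot(phi, T @ psi)      # <phi, T psi>
```

The two sides agree because $(T^\dagger\phi)^\dagger\psi = \phi^\dagger T\psi$; the transpose-plus-conjugate recipe is just what the abstract defining property forces in matrix notation.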
To expand on Alfred's comments on one-forms and duals: this is actually why physicists bang on about "Hilbert spaces". A one-form is simply a linear functional $\mathcal{L}:\mathbf{H}\to\mathbb{C}$ mapping a function or vector $\psi(x)$, $\Psi$ in some linear space $\mathbf{H}$ (e.g. a linear space of complex-valued functions $f:\mathbb{R}\to\mathbb{C}$, or of column vectors) to a complex number. For example, if we consider a linear space of functions, the linear functional could be $\psi(x)\to\int_{-\infty}^\infty \psi_0(u)^* \psi(u)\,\mathrm{d}u$ for some test function $\psi_0(x)$, or for vectors in a discrete space it could be $\Psi\to \Psi_0^\dagger \Psi$. The point about a Hilbert space is that we have an inner product defined and that:
ALL continuous linear functionals can be represented by an inner product. Whatever your continuous linear functional $\mathcal{L}$, you can always find a test function/vector $\psi_\mathcal{L}$ such that $\mathcal{L}\psi = \left<\psi_\mathcal{L},\psi\right> = \int \psi_\mathcal{L}^*(u)\,\psi(u)\,\mathrm{d}u$.
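In a finite-dimensional space this representation is easy to exhibit explicitly. Here is a hypothetical sketch in NumPy (the functional $\mathcal{L}$ and its representer $\psi_\mathcal{L}$ are made up for illustration): any linear functional on $\mathbb{C}^n$ is $\mathcal{L}\psi = \sum_i c_i \psi_i$ for some coefficients $c_i$, and its Riesz representer is simply $\psi_\mathcal{L} = c^*$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
# An arbitrary linear functional on C^n: L(psi) = sum_i c_i psi_i.
c = rng.normal(size=n) + 1j * rng.normal(size=n)
L = lambda psi: np.sum(c * psi)

# Its Riesz representer: the vector whose inner product reproduces L.
psi_L = np.conj(c)

psi = rng.normal(size=n) + 1j * rng.normal(size=n)
value_via_functional = L(psi)
value_via_inner = np.vdot(psi_L, psi)  # <psi_L, psi> = sum conj(psi_L) psi
```

The conjugation in $\psi_\mathcal{L} = c^*$ is exactly the conjugation built into the inner product cancelling out, which is the finite-dimensional shadow of the Riesz Representation Theorem discussed below.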
So in a Hilbert space, functions and vectors always double up as one-forms: a given function/vector defines the appropriate linear functional by serving as the kernel of that functional. Abstractly, one can think of functions/vectors and one-forms as the "same" through a one-to-one, onto correspondence. So a Hilbert space is, through this abstract correspondence, "the same" as its space of continuous (sometimes "topological") duals.
Now you may know a Hilbert space is more standardly defined as an inner product space wherein all Cauchy sequences converge in the norm defined by the inner product to a point in the space, i.e. the space is complete with respect to the metric defined by the inner product.
This is indeed the same notion as the one I have just described, the self-duality characterization often being the more workable and usable one for physicists. The proof that these two notions are logically equivalent is the Riesz Representation Theorem, and you should probably read through the Wikipedia page on this topic to get deeper insight into what's going on here.
However, inevitably one reaches the fairly obvious question of including such things as the Dirac delta in your space of functionals: we want some bras that cannot be represented as Hilbert space members. Here is where the notion of Rigged Hilbert Space comes in - the ingenious process whereby we kit a dense subset $S\subset H$ of the original Hilbert space $H$ out ("rig it") with a stronger topology, so that things like the Dirac delta are included in the topological dual space $S^*$, where $S\subset H\subset S^*$. Good references for this notion are https://physics.stackexchange.com/a/43519/26076 and also the discussions at https://mathoverflow.net/q/43313. In the latter, Todd Trimble suspects that the usual Gel'fand triple is $S\subset H = \mathbf{L}^2(\mathbb{R}^N)\subset S^*$, with $S$, $S^*$ being the Schwartz space and its topological dual, respectively (the topological dual being with respect to the stronger topology induced by a family of norms, NOT the $\mathbf{L}^2$ norm of the original Hilbert space). $S^*$ is the space of tempered distributions, as discussed in my answer here. One definition of Hilbert space is that it is isomorphic to the set of its continuous linear functionals (its topological dual): this does not include useful things like the Dirac delta which, although a linear functional, is not continuous in the Hilbert space norm. So we kit a dense subspace of $H$ out with a strong topology to ferret out all the useful distributions in $S^*$. By "ferret out", of course, I mean that the topology is strong enough to make things like the Dirac delta into continuous functionals, whereas the delta is not continuous with respect to the Hilbert space norm.
The Wikipedia article is a little light on detail here: a great deal about nuclear spaces is glossed over, so on a first reading I'd suggest you take the specific example $S$ = Schwartz space and $S^*$ = tempered distributions and keep this relatively simple (and, for QM, most relevant) example exclusively in mind - for QM you won't need anything else. So the set of bras is strictly bigger than the set of kets in this view.
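To see concretely why the Dirac delta fails to be a Hilbert-space member, here is a rough numerical sketch of my own (the Schwartz-class test function is arbitrary): approximating the delta by ever-narrower normalized Gaussians, the action on the test function converges to point evaluation, while the $\mathbf{L}^2$ norm of the approximants diverges, so no limiting representer exists in $\mathbf{L}^2$:

```python
import numpy as np

# Approximate the Dirac delta at x = 0 by narrowing normalized Gaussians
# g_eps. The pairing <g_eps, psi> tends to psi(0), but ||g_eps||^2 grows
# like 1/(2 eps sqrt(pi)), illustrating why the delta lives in S*, not H.
x = np.linspace(-5.0, 5.0, 100001)
dx = x[1] - x[0]
psi = np.cos(x) * np.exp(-x**2 / 4)  # arbitrary Schwartz-class test function, psi(0) = 1

results = []
for eps in (0.5, 0.1, 0.02):
    g = np.exp(-x**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
    action = np.sum(g * psi) * dx    # <g_eps, psi> -> psi(0) = 1
    norm_sq = np.sum(g**2) * dx      # ||g_eps||^2 in L^2, diverges as eps -> 0
    results.append((eps, action, norm_sq))
```

The pairing with $\psi$ converges while the $\mathbf{L}^2$ norms blow up: the delta is continuous only in the stronger Schwartz-space topology, which is precisely what the rigging buys you.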