It's really a mathematical technique which draws on physics for its inspiration. You know in quantum mechanics you have the bras and kets, and you always calculate the expectation value of an operator by using this type of expression:
<phi*|H|phi>
The phi's are "state vectors", and the H is an "operator". (The asterisk represents complex conjugation, which is a bit of a bookkeeping detail we don't need to worry about too much here.) This is the abstract form, but often the state vectors can be represented as functions in space, and the operator is an action performed on those functions. For example, the operator which evaluates the momentum of a function is just differentiation (actually, the del operator in three dimensions).
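In the functional representation, that expectation value is just an integral. Strictly speaking, the standard momentum operator also carries a factor of minus i-h-bar, which I'm going to suppress as another bookkeeping detail:

$$\langle \phi^* | H | \phi \rangle = \int_{-\infty}^{\infty} \phi^*(x)\, H\phi(x)\, dx, \qquad \hat{p} = -i\hbar\,\frac{d}{dx} \;\;(\text{or } -i\hbar\nabla \text{ in three dimensions}).$$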
Sometimes the operator is just the "do-nothing" operator (the identity), in which case the calculation simply evaluates the dot product of the state vector with itself, which of course is the square of the total amplitude. Or, in the functional representation, it evaluates the integral from minus infinity to plus infinity of the product of the function with itself, which we can also interpret as the amplitude-squared of the function.
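In symbols:

$$\langle \phi^* | \phi \rangle = \int_{-\infty}^{\infty} \phi^*(x)\,\phi(x)\,dx = \int_{-\infty}^{\infty} |\phi(x)|^2\,dx,$$

which is the amplitude-squared.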
Sometimes the bra and ket are two different state vectors, in which case you have the dot product of two vectors. Or, in the case of two functions multiplied by each other and integrated over all space, it's what you can consider as the "dot product" of those two functions. Like the vector dot product, when you divide by the amplitudes of the functions, it gives you a measure of the extent to which those functions are pointing in the same direction. If the dot product is zero, we can say that those functions are "orthogonal".
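A quick example (my own choice of functions, nothing special about them): an even function and an odd function are always orthogonal in this sense, because their product integrates to zero by symmetry:

$$\int_{-\infty}^{\infty} e^{-x^2}\cdot x\,e^{-x^2}\,dx = 0.$$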
So what does this have to do with integration by parts? Well, you know that there are two functions involved, which we call u(x) and v(x). What I like to do is say that we can think of them as bras and kets, with differentiation being the operator:
<u(x)|D|v(x)>
The "D" represents differentiation so what I've written here is just the plain dot product of u and dv/dx, after you've done the differentiation. This is exactly what integration by parts is...well, almost but not quite.
I didn't exactly emphasize this, but this "dot product" interpretation only really works if the limits of your integration are plus/minus infinity. That's not the usual case for integration by parts, where you're normally integrating between specific limits, often from zero to infinity. For example, you'll be doing the product of x-squared and exp(-x) from zero to infinity. You can't integrate it all the way from negative infinity because it diverges. And an integral of this kind is not a dot product.
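For reference, that half-line integral does converge; integrating by parts twice, or recognizing it as the Gamma function value $\Gamma(3) = 2!$, gives

$$\int_{0}^{\infty} x^2 e^{-x}\,dx = 2.$$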
What I figured out is that you can make it into a dot product using the unit step function. The unit step function H(x) is defined as zero to the left of the y-axis and 1 to the right. Or alternatively, it's the integral of the Dirac delta function. (The H is for Heaviside, an electrical engineer who was a few years ahead of Dirac in these things.) So my dot product looks like this:
<H(x)u(x)|D|v(x)>
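The step function kills everything to the left of zero, so this full-line dot product is exactly the half-line integral we wanted:

$$\langle H u | D | v \rangle = \int_{-\infty}^{\infty} H(x)\,u(x)\,\frac{dv}{dx}\,dx = \int_{0}^{\infty} u(x)\,\frac{dv}{dx}\,dx.$$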
Now we're ready to do integration by parts. So I'm going to do uv minus the integral of v du, right? No. It's a lot nicer than that. What I say is this: the D operator can work in either direction. It looks like you're operating to the right, on v(x), in which case you have the "dot product":
H(x)u(x) * dv/dx
Now you know in quantum mechanics they're always reminding you that things aren't commutative. But that doesn't mean you can't operate in the opposite direction! You can. And when you do, it looks like this:
d/dx{H(x)u(x)} * v(x)
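Here's a hint of why this works (just a sketch, assuming u and v are well-enough behaved at the boundaries): the product rule turns the derivative of the step function into a delta function,

$$\frac{d}{dx}\big[H(x)\,u(x)\big] = \delta(x)\,u(x) + H(x)\,u'(x),$$

so dotting with v(x) picks up the boundary value u(0)v(0) plus the half-line integral of u'(x)v(x): exactly the pieces that show up in the usual integration-by-parts formula.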
That's integration by parts, and when I come back I'll show you how it works.
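In the meantime, here's a quick sanity check with Python's sympy library. The particular choices u = x-squared and v = -exp(-x) (so that dv/dx = exp(-x), matching the example above) are just mine, for illustration:

import sympy as sp

x = sp.symbols('x', real=True)
u = x**2                  # the "bra" function from the example above
v = -sp.exp(-x)           # chosen so that dv/dx = exp(-x)

# Operating to the right: H(x)*u times dv/dx, integrated over the whole line.
# This reproduces the half-line integral of x^2 * exp(-x) from 0 to infinity.
right = sp.integrate(sp.Heaviside(x) * u * sp.diff(v, x), (x, -sp.oo, sp.oo))

# Operating to the left: differentiate H(x)*u first (the product rule brings
# in a Dirac delta), then take the dot product with v.
left = sp.integrate(sp.diff(sp.Heaviside(x) * u, x) * v, (x, -sp.oo, sp.oo))

print(right)         # 2
print(left)          # -2
print(right + left)  # 0, since H(x)*u(x)*v(x) vanishes at both ends

The two directions summing to zero is just the statement that H(x)u(x)v(x) comes back to zero at both ends of the line, which is where the usual boundary term has gone.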