## Wednesday, April 6, 2011

### Perturbation Theory and Taylor Expansions

My take on perturbation theory starts with the hydrogen atom. The ground state is a cloud of charge about the nucleus, and we ask: what is the simplest way we can distort the ground state? This is a vague question and it has at least two possible answers. One answer is that the simplest way to distort the ground state is to mix in a little bit of the first excited state, say for example the pz state. The effect of this is, more or less, to push the charge cloud a small distance along the z axis. The second answer is to introduce a small, constant electric field to the region, directed, let's say, along the z axis. The effect of this is also to push the charge cloud a small distance along the z axis.

These two answers suggest a third way to distort the ground state. Why not just push the charge cloud a small distance along the z axis? Isn't this exactly the same thing? Not quite. The first two methods do in fact give exactly the same result: in the limit of a very small constant field in the z direction, the resulting perturbed state is exactly equal to the original ground state plus a small amount of pz state. And it is approximately true that this perturbed state is similar to what you get if you just push the whole charge cloud a small distance along the z axis. But it is not exactly the same. You can see that it isn't by remembering that the wave function is "pointy" right at the origin, and you can't very well expect this "pointiness" to move away from the nucleus along with the rest of the charge cloud. So the third description can only be approximate at best.

However, it's an interesting approximation, and mathematically it's quite simple. If you have any function and you want to move it a small distance in the z direction, you can do so by taking the derivative of the function with respect to z and adding a small multiple of this derivative to the original function. That's how a Taylor series starts out. For small displacements, you can ignore the higher-order terms.

Wouldn't it be nice if there were some kind of mathematical operation you could perform on the ground state of hydrogen that would give you the pz state? You can see that the simple operation of differentiation comes close, but it clearly doesn't work exactly. I'm going to try to show how to construct an operator that works exactly. The idea is to go back to the second proposal for distorting the charge cloud: adding a small electric field. We know that the perturbation must consist of exactly the pz state, because that's the simplest possible distortion of the charge cloud, and in the limiting case it must be exact.

I'm going to suggest putting the hydrogen atom between the plates of a capacitor and adding a charge q to the capacitor. This gives us a small constant field. Let's begin with no charge on the plates, so the field is zero. Remember that in quantum mechanics there is something called the Hamiltonian; for the hydrogen atom it is given (in suitable units) by H = -∇^2 + V, where V is the 1/r potential of the nucleus. Also remember that when the Hamiltonian operates on the ground-state wave function phi, it returns a simple multiple of phi; we can rescale by an appropriate multiplicative constant so that it returns phi itself.

Now I'm going to add a charge q to the capacitor and ask the question: how does H, the Hamiltonian, change? The change in H can be written as dH/dq. What I want to show now is that dH/dq is itself an operator, and that when it operates on phi, the ground state of the hydrogen atom, the result is to return the first excited state. So dH/dq is what they call in quantum mechanics a "ladder operator" (up to an arbitrary multiplicative constant). My blogger program is acting up right now and not exactly permitting me to do paragraph breaks, so I'm going to end this post now and continue later.
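The first-order Taylor picture is easy to check numerically. Here is a minimal sketch (my own illustration, not from the original post), using a Gaussian as a stand-in for the charge cloud: shifting the function a small distance eps along z agrees, to first order, with subtracting eps times its derivative.

```python
import numpy as np

# A minimal numerical sketch (my own illustration, not from the post):
# moving a smooth function a small distance eps along z is approximated,
# to first order, by f(z) - eps * df/dz -- the start of the Taylor series.
z = np.linspace(-5.0, 5.0, 2001)
f = np.exp(-z**2)                 # a smooth stand-in for the charge cloud
eps = 0.01

shifted = np.exp(-(z - eps)**2)   # the cloud pushed a distance eps along z
dfdz = np.gradient(f, z)          # numerical derivative df/dz
taylor = f - eps * dfdz           # first-order Taylor approximation

# the residual error is much smaller than the size of the shift itself
assert np.max(np.abs(shifted - taylor)) < np.max(np.abs(shifted - f)) / 10
```

The leftover error is of order eps squared, which is exactly what dropping the higher-order Taylor terms predicts.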

### Ladder Operators and Fourier Transforms

I sometimes help my son's friend with his physics homework and it drives me crazy. He's completing Honors Physics at U of W and right now we're working through a course in mathematical physics. The thing that drives me crazy is that it's full of real deep heavy-duty physics, but they don't explain the physics in class: all they do is the math. We were doing Fourier Transforms last month and he had all these pointless integrations to do. I could recognize some of them as being real physical systems, like when we did the Fourier Transform of 1/(w^2 + 1). That's a perfect example of what's wrong with the course: just a random function on an assignment that you're expected to integrate, with no motivation. We went through the class notes and there was nothing about it. What I figured out is that it's the power spectrum of an exponential decay, a typical impulse response function from electrical engineering. You can sort of see it if you break it into partial fractions, and you might recognize the pieces as looking similar to the impedance of an RC circuit.

Now, working from the time domain to the frequency domain, it's a very simple integration, because you're working with exp(-t) from zero to infinity. The problem on the assignment was much harder because you were in effect working from the frequency domain to the time domain. The super short cut here is to recall that the Fourier Transform applied twice must return you to the original function (time-reversed, and up to a constant factor, depending on your conventions). So if exp(-t) integrates out to give you 1/(1+jw), then transforming one more time must bring you back to a function of the exp(-t) family. The complex conjugate gives you the left branch, the function exp(t) from minus infinity to zero, and when you add the two branches together you get the complete Fourier Transform pair for 1/(1+w^2). It's pretty cool.
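The two-branch picture can be verified directly by integration. This is my own numerical check (not from the assignment), using the convention F(w) = ∫ f(t) exp(-jwt) dt: the one-sided decay exp(-t) for t ≥ 0 transforms to 1/(1+jw), and adding the mirrored left branch exp(t) for t ≤ 0 gives exp(-|t|), whose transform is 1/(1+jw) + 1/(1-jw) = 2/(1+w^2).

```python
import numpy as np
from scipy.integrate import quad

# My own numerical check (not from the assignment), convention:
# F(w) = integral of f(t) * exp(-j*w*t) dt.
# exp(-t), t >= 0  ->  1/(1 + jw); adding the mirrored branch exp(t), t <= 0
# gives exp(-|t|)  ->  1/(1 + jw) + 1/(1 - jw) = 2/(1 + w^2).
def ft_two_sided(w):
    # the imaginary part vanishes by symmetry, so only the cosine part survives
    val, _ = quad(lambda t: np.exp(-abs(t)) * np.cos(w * t), -np.inf, np.inf)
    return val

for w in [0.0, 0.5, 1.0, 2.0]:
    assert abs(ft_two_sided(w) - 2.0 / (1.0 + w**2)) < 1e-6
```

Up to the constant factor that depends on which transform convention the course uses, this is exactly the 1/(1+w^2) shape from the assignment.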

I also figured out another completely different way of doing the problem. It's based on another trick they didn't teach in this course. It's like this: multiplication by jw in the frequency domain corresponds to differentiation in the time domain. You can see that it's obviously true for a simple sine wave, and in fact it's generally true. So if we take the given function and multiply it by -w^2, we've in effect differentiated it twice. Now watch this: working in the frequency domain, subtract this result from the original function: you get 1/(1+w^2) - (-w^2)/(1+w^2) = 1!! Now remember that the Fourier transform of the constant function 1 is just the delta function, and you can translate the whole mathematical statement back to the time domain like so:

"The original function, take away its second derivative, gives you the delta function."

This is an ordinary differential equation, and its solution gives you exactly the correct result. It's also a very cool illustration of what's actually happening when you use Laplace Transforms to solve differential equations. Everyone who's taken a course in Laplace Transforms must remember how you're given a table of transforms and a bunch of rules on how to work back and forth, and you solve differential equations by following a bunch of steps without knowing what you're doing. That's how they teach it, and it doesn't have to be that way.
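The rule the whole trick rests on, that differentiation in time corresponds to multiplication by jw in frequency (so a second derivative multiplies by -w^2), can also be checked by direct integration. This is my own check, not from the course, using a Gaussian as the test function:

```python
import numpy as np
from scipy.integrate import quad

# My own check (not from the course) of the differentiation rule:
# transforming f'(t) gives j*w times the transform of f(t),
# so a second derivative multiplies the transform by (j*w)^2 = -w^2.
# Test function: f(t) = exp(-t^2), whose derivative is -2t*exp(-t^2).
f      = lambda t: np.exp(-t**2)
fprime = lambda t: -2.0 * t * np.exp(-t**2)

def ft(g, w):
    # transform under the convention F(w) = integral g(t) exp(-j*w*t) dt
    re, _ = quad(lambda t: g(t) * np.cos(w * t), -np.inf, np.inf)
    im, _ = quad(lambda t: -g(t) * np.sin(w * t), -np.inf, np.inf)
    return re + 1j * im

for w in [0.5, 1.0, 2.0]:
    assert abs(ft(fprime, w) - 1j * w * ft(f, w)) < 1e-7
```

The same identity, applied twice to exp(-|t|), is what turns the frequency-domain statement above into the ordinary differential equation in the time domain.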

The other annoying thing about this problem is that while the original function, which I described as a power spectrum, is certainly a real function from physics, it's not one for which there is ever any reason to want to evaluate the Fourier Transform. The function exp(-t) to the right of zero is definitely an important impulse response function, and its Fourier Transform, 1/(1+jw), is therefore also significant. But the power in the impulse is given by its square, evaluated by multiplying by the complex conjugate. That's where you get 1/(1+w^2). It makes no sense to work backwards from this function to the time domain by taking the inverse Fourier transform. It can be done, but the result has no physical significance. Of course, what do you expect from a course in mathematical physics?

I started off by saying how annoying it is trying to help someone get through his coursework when you can see that all this really cool physics is presented as nothing more than a bunch of meaningless mathematical manipulations. But the very last assignment of the year, which we worked on last weekend, was the ultimate. Three days before the end of the term, the prof introduced partial differential equations! On the final assignment, you had things like the quantum harmonic oscillator and the diffusion equation (which deals with the time evolution of a heated metal bar with an initial temperature distribution). But there was nothing in the notes about the physics of these problems: just this assignment, where the equation was written out and the student is asked to "solve the differential equation". It's awful to teach things that way. But one of the equations struck me as being vaguely familiar: we were asked to find the eigenfunctions of the operator d/dx + x. What could that be?

It turns out to be one of the ladder operators of the harmonic oscillator. It's a huge topic of physics that could fill a whole graduate level course, and it's given on the last assignment of the year as a one-line problem. I'll have more to say about this in my next post.
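The one-line problem itself separates cleanly. Here is my own working (not the course notes), done symbolically: solving (d/dx + x)f = λf gives f(x) = C·exp(λx - x²/2), and for λ = 0 the eigenfunction is the Gaussian exp(-x²/2), which is exactly the harmonic-oscillator ground state that the lowering operator annihilates.

```python
import sympy as sp

# The "one-line problem", worked out symbolically (my own sketch, not the
# course notes): solve (d/dx + x) f = lam * f.  Separating variables gives
# f(x) = C * exp(lam*x - x**2/2).
x, lam = sp.symbols('x lam')
f = sp.Function('f')
ode = sp.Eq(f(x).diff(x) + x * f(x), lam * f(x))
sol = sp.dsolve(ode, f(x))
ok, _ = sp.checkodesol(ode, sol)
assert ok

# For lam = 0 the eigenfunction is the Gaussian exp(-x**2/2): the
# harmonic-oscillator ground state, which the lowering operator annihilates.
g = sp.exp(-x**2 / 2)
assert sp.simplify(g.diff(x) + x * g) == 0
```

Strictly speaking, only the decaying solutions are physically admissible, which is why this operator singles out the oscillator's ladder of states.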


## Saturday, April 2, 2011

### How to handle two electrons at once

Yesterday I sketched out the form of the wave function for two electrons in a potential well. It's over a year since I started this blog and I've actually solved a problem or two in that time. It's just hard to believe that I've never really worked out the correct form for this basic problem until now. It's a pretty significant problem for me, because it generalizes to a whole list of other problems.

Where do we begin? A very good problem to look at is the case of two isolated hydrogen atoms. We can solve each of these individually as single-electron problems, but we really ought to be able to get the same solution by treating it as one big two-electron problem. In fact, I got myself in a lot of trouble last year when I tried to do this and kept coming up with a form of "mini-helium" as a solution. You can go back to my blogs from last winter to see how I got out of that mess. Anyhow, let's begin with the solution we got for two electrons in a box and see how it would be applied to two hydrogen atoms. We can take the diagram from the last blog and just change the shape of the function. This would appear to be the correct representation for the ground state of two hydrogen atoms, taken together. It is interesting that we're not allowed to say that "electron A is here, and electron B is there". We have to allow that each electron can somehow be either here or there. Another interesting aspect is that the electrons must have opposite spin. This is similar to what happens in the ground state of the helium atom. It seems strange to require that the spins be opposite for two separate hydrogen atoms, and in fact we will find that we can get out of this difficulty rather easily. It will turn out that there are three more nearly degenerate states which the electrons can fill, and this gives us the flexibility to specify their spins independently.

The most puzzling aspect of this representation, however, turns out to be that the electrons are in a spin singlet state. It's not just that the spin at A is opposite the spin at B. It's that the spin everywhere is identically zero. This is baffling because it's not something we are able to get if we just write down the wave functions of two separate atoms. It's as though by taking them as a complete system we expose new, unexpected behavior.
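To make the singlet structure concrete, here is one conventional way to write the state just described, a sketch in my notation rather than the original drawing, where phi_A and phi_B stand for the ground-state orbitals on the two atoms (and the 1/sqrt(2) factors assume the orbitals are essentially orthogonal, as they are for well-separated atoms):

```latex
% Symmetric spatial part times the antisymmetric (singlet) spin part:
\Psi(1,2) = \frac{1}{\sqrt{2}}\bigl[\phi_A(\mathbf{r}_1)\,\phi_B(\mathbf{r}_2)
          + \phi_B(\mathbf{r}_1)\,\phi_A(\mathbf{r}_2)\bigr]
   \otimes \frac{1}{\sqrt{2}}\bigl(\uparrow_1\downarrow_2 - \downarrow_1\uparrow_2\bigr)
```

The spin factor is the singlet: it has total spin zero everywhere, which is exactly the baffling feature described above.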

## Friday, April 1, 2011

### The Two-Electron Well Revisited

Last year I posted some stuff about the two-electron potential well. It ought to be a pretty standard problem, but it turns out you'll hardly find it anywhere. It's true that there is a standard textbook problem with two particles in a potential well, but they almost always stipulate that they are two "non-interacting" particles, so you simply get the product combinations of the standard single-electron solutions. I wanted to see what happens with actual electrons that repel each other. I was pretty surprised to find out that the shape of the solution depends on the size of the box. I delve more deeply into this when I take up the subject of the isoelectronic series of helium. But for the two-electron box, it turns out that for the very *small* box, the solution tends towards the simple product function. For the very large box, the two electrons tend to stay away from each other at opposite ends of the box. And for the medium box, which is to say, for a box on the scale of typical atomic dimensions, it's a compromise, as shown for a typical case in the drawing below. You construct this drawing (or I constructed it) by assuming A and B are each the familiar sine-wave solutions for the box, and just distorting them a bit so they tend to keep away from each other. Then you symmetrize the wave function by reversing A and B, so that your final solution doesn't distinguish between the two electrons, because, of course, the electrons are supposed to be indistinguishable. It's basically what I show in the sketch below.

Like I said, I was pretty happy when I figured this out. What has happened recently is that I've come to realize that this solution is wrong; or, at least, it's incomplete. You can tell it's wrong because the wave function for two electrons must be anti-symmetric. And if I reverse the roles of A and B in the function I've sketched above, I get exactly the same function back. It's symmetric, and that has to be wrong. It's easy to correct this and make it anti-symmetric: you just put a minus sign in the middle instead of a plus sign. But that's not right either. Yes, you've made the function anti-symmetric, but now the energy of the modified (antisymmetric) system turns out to be higher than that of the original symmetric case. Why is the energy higher? Why indeed is the energy different at all? Remember how we constructed these cases. We took an electron in a box and added a second electron. We then pushed the two electrons just slightly away from each other, to slightly reduce their coulombic interaction without distorting the wave function with too many "higher harmonics" and their inherently greater momentum content. The old compromise of kinetic and potential energy. It's a straightforward optimisation. Then, after we've optimised the energy, almost as an afterthought, we consider the symmetrization. Symmetrical or anti-symmetrical? The horrifying fact is, *regardless* of which one we choose, the energy after symmetrization is *different* from what we calculated when we optimized the individual electron wavefunctions. In fact, the symmetrical form (almost?) always has the lower energy in these situations. But from my point of view, the truly disturbing aspect is that we can't simply optimize first and symmetrize later: we have to do them both at the same time. It makes the calculation very difficult to visualize.

And that's not even the reason I took up this question today. I said already that I came up with this solution last year and recently realized it was incomplete. And now I want to show what has to be done to fix it up. The problem is that I haven't taken into account the spin of the electrons. If I attempt to track the spin and redraw my sketch of the function in terms of a sum of product functions, I've got electron A with spin up and B with spin down. But now let's consider the symmetry of what I've written. If you reverse the roles of A and B, you don't get the same function back. You don't get it back with the same sign, and you don't get it back with the opposite sign. You get a different function altogether. You now have to symmetrize *again*. But this time I'm going to choose the antisymmetric option. I've written it out in full below. That is what the solution has to look like for two particles in a box. It looks awfully complicated, but there is really no simpler way to indicate it so far as I can tell. By the way, if you're familiar with the traditional way of writing two electrons in the "singlet" state vs. the "triplet" state, you may be able to figure out that I've got them in a singlet state here. Which is good. But right now it all seems very complicated.
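In conventional textbook notation, a two-electron singlet built from two one-particle functions psi_A and psi_B looks like this (my own rendering of the kind of state described, up to normalization, not the original hand-drawn sketch):

```latex
% Overall antisymmetric: symmetric space part times antisymmetric spin part
\Psi(x_1,x_2) = \frac{1}{\sqrt{2}}\bigl[\psi_A(x_1)\psi_B(x_2) + \psi_B(x_1)\psi_A(x_2)\bigr]
        \cdot \frac{1}{\sqrt{2}}\bigl(\uparrow_1\downarrow_2 - \downarrow_1\uparrow_2\bigr)
```

Swapping the labels 1 and 2 leaves the spatial bracket unchanged and flips the sign of the spin bracket, so the whole function is antisymmetric, as required.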