Wednesday, February 29, 2012
On the one hand, you don't get that close to the right answer unless your physics is pretty sound to begin with. There's little doubt that the discrepancy is just a matter of a math error on my part. The problem is: do you go back and obsess over it until you find the error, or do you accept that you're basically on the right track and move on to other things? You'd theoretically make more progress that way, by pressing ahead; but you never know when you might miss something important. It's a tough one.
The calculus was in fact rather picturesque, but it's not really the kind of thing that lends itself to a blog treatment. However, I suppose I ought to sketch it out, just in case anyone is wondering. So here goes:
If you recall the superposition of the two wave systems, one circular and one planar, the question becomes: how fast do the circular waves go out of phase with the plane waves, as you move away from the axis of symmetry? It shouldn't be that hard to convince yourself that the phase is quadratic in x, where x is the radial distance. (That's because a circle is basically the same as a parabola, if you take it over a shallow enough section of the curve. Which we can always do by backing off sufficiently far from the antenna.) The assumption is going to be that we're far enough away from the receiving antenna that the circular field is much smaller than the planar field, so the binomial approximation for power density will apply: (1-x)^2 ≈ 1-2x, where x is the in-phase component of the circular wave.
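In case the quadratic claim isn't obvious, here is the one-line geometry behind it (with R the axial distance from the antenna, x the radial offset, and k the wavenumber, assuming x much smaller than R):

\[
\Delta\phi \;=\; k\left(\sqrt{R^2 + x^2} \;-\; R\right) \;\approx\; \frac{k\,x^2}{2R},
\]

which is exactly the parabola-for-circle substitution described above.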
Here it gets a little tricky: we still don't know what the phase difference of the two wave patterns is, taken along the axis of symmetry. I've drawn it as though the phase difference is zero and grows as you move outwards; but in fact, it might be anything. It's not clear exactly how it has to start off in order to maximize the power absorption. Fortunately, we can cover all possibilities with two special cases: sine and cosine. Either you start off exactly in phase, or you start off 90 degrees out of phase...or it's an intermediate case which you can make up by putting together those two special cases.
Then we just have to integrate the field over the cross-section. Don't forget the factor of 2*pi*x dx which you get for the circular rings when you set up the integrals. We then have to evaluate:
and we also have to check the quadrature case,
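(The original equation images aren't reproduced here, but from the setup, with the quadratic phase scaled so it reads simply as x squared, the two integrals were presumably of the form

\[
\int_0^{X} \cos(x^2)\; 2\pi x \, dx
\qquad \text{and} \qquad
\int_0^{X} \sin(x^2)\; 2\pi x \, dx,
\]

the in-phase case and the quadrature case respectively. The scaling here is schematic.)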
But if you really think about it, you shouldn't be that quick to accept the formal mathematical solution. Just look at the function. It grows and grows, and oscillates faster and faster. If we graph it, it's pretty nasty looking:
Doesn't that grow like crazy? Yes and no. Physically, what's happening is that as you get farther from the axis of symmetry, the two wave patterns are going in and out of phase so rapidly that there is no net effect. It's like shining two flashlight beams across each other. In theory, you should calculate the power flow by adding the wave vectors everywhere and calculating the Poynting vector, which will be going crazy. But in practice, the effect is for all the micro-fluctuations to average out to zero. All the real physics happens in the first few lumps.
But how do we handle that mathematically? I actually did a problem like this once before, when I was adding up one of those crazy Ramanujan series. It came up in my calculation of the Casimir effect, and it was something like 1-2+3-4..., which adds up to 0.25. How? You cover it over with a very gentle Gaussian that preserves the low end and gradually suppresses the high-end fluctuations. I did exactly the same thing when I did this sine integral in Excel. You just keep adjusting the width of the Gaussian until the value stabilises, and then you know you're done. Actually, the fluctuations in this integral are almost like that other series, except instead of alternating integers, it's pretty much the square roots of the integers that alternate. (Because of the x-squared inside the sine.)
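You can see the Gaussian trick stabilise for yourself. Here is a minimal sketch of the same idea in Python rather than Excel, using 2*pi*x*sin(x^2) as a stand-in for the integrand (the exact scaling in the original isn't shown, so this is schematic): damp the wild oscillations with a gentle Gaussian envelope, then widen the envelope until the answer stops moving.

```python
import numpy as np

def damped_sine_integral(sigma, X=50.0, n=1_000_000):
    """Integrate 2*pi*x*sin(x^2) against a gentle Gaussian envelope exp(-(x/sigma)^2)."""
    x = np.linspace(0.0, X, n)
    f = 2.0 * np.pi * x * np.sin(x**2) * np.exp(-(x / sigma)**2)
    dx = x[1] - x[0]
    # trapezoidal rule by hand (avoids version differences in numpy's trapz)
    return dx * (f.sum() - 0.5 * (f[0] + f[-1]))

# Widen the Gaussian: the high-end fluctuations average out and the value stabilises.
for sigma in (3.0, 5.0, 8.0):
    print(sigma, damped_sine_integral(sigma))
```

With this particular integrand the regularized value settles down near pi as the Gaussian widens, even though the undamped integrand grows and oscillates without limit. All the real physics is in the first few lumps, just as the text says.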
Like I said, the integral turned out to be not the hard part. It was getting the right scaling factor to line it up to the physics. I did my best; I'm not going to drag you through the details, but when it was all over, I was still out by a factor of 2. It's just one of those things.
Tuesday, February 28, 2012
Sunday, February 26, 2012
Friday, February 24, 2012
First, we're setting up a thermal equilibrium between a mechanical oscillator and the radiation field. You might think you need to have a statistical quantity of oscillators, but you don't. It's the same equilibrium if you have just one single charged oscillator. So I've created an artificial tether-ball in the middle of an empty box. Actually, it wouldn't have hurt to have filled the box with uncharged balls the same size as the tether-ball, and let them bounce about freely. We could have given them a temperature, and they would have imparted random motion to the constrained, charged ball. Since the tether-ball has only one degree of freedom, it would take on the same average energy as all the other balls. Sometimes it would have less energy, and in those circumstances the subsequent collision would be more likely to add energy to the tetherball. Sometimes it would have more than average energy, and...well, you get the idea. When it happens to have the average energy, it is no more likely to gain than to lose with the next collision...it is in mechanical equilibrium. The real point to understand is that at that moment, it is also in equilibrium with the radiation field. You should be able to see why this has to be so. The bottom line is you only need one charged oscillator in the box to establish the equilibrium between matter and radiation.
Now we invoke the equipartition theorem, which says that the energy per mode of the radiation field has to equal the energy per mode of the mechanical system. This is the same principle that supposedly leads to the ultraviolet catastrophe, but we really don't need to worry about that here. What people forget about this principle is that it is strictly true only in the vicinity of a specific frequency. There might be different average mode energies at 200 Megahertz versus 200 Gigahertz, but at each frequency the electrical mode energy will equal the mechanical mode energy. The idea that a mechanical oscillator at 200 Gigahertz must have the same energy as one at 200 Megahertz is a misapplication of the equipartition principle.
Having said all that, for the purposes of the present calculation it is rather a moot point. We can choose our numbers so that the frequencies are all within the Rayleigh-Jeans regime where equipartition prevails in the broader sense...that is, all frequencies have the same mode energy regardless. It ought to be noted, however, that the validity of this calculation will ultimately not depend on any such assumption.
The equilibrium will be independent of any specific properties of the oscillator. That means we can give it any arbitrary properties we like. In our case, we have chosen to configure it as what we call a rigid rotor, with a certain arbitrary mass, charge, and radius of rotation. One degree of freedom, which defines its frequency and energy at equilibrium. The radiation field must then be at equilibrium with the rigid rotor, or the tether-ball as we call it.
But how can our charged tether-ball be at equilibrium if it is spinning about, radiating energy like crazy? There is only one way out: it must be also absorbing energy at the same time, and it must be absorbing energy at the same average rate as it is radiating energy. If we can possibly figure out how much energy the tether-ball is absorbing, we will automatically know how much it is radiating. And wouldn't that be something. You tie a charged ball to a string, and whirl it about your head, and now you can calculate how much power you're radiating. Without looking up any formulas. You can actually figure it out.
I'm not saying we won't use any formulas. We'll use mostly basic formulas that everyone knows, like the force on a charged body and the energy of an electric field. We'll do some stuff with Fourier Series that you can ultimately verify with high-school level trig identities. And we'll use one formula that you could theoretically figure out by yourself, but each time I try to do it I end up off by a factor of three, so I caved in and copied it from the internet. That's the famous Rayleigh-Jeans formula for counting the modes of the electromagnetic field in a rectangular box. The point is that none of these formulas tell you how much radiation you get from an accelerating charge. That's the result we're going to come up with, and ultimately we'll get it by doing nothing more than enforcing the condition of equilibrium between the electromagnetic field and the mechanical oscillator.
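For reference, the standard form of that mode-counting formula is this: the number of electromagnetic modes in a box of volume V between frequencies f and f + df is

\[
N(f)\,df \;=\; \frac{8\pi V}{c^3}\, f^2 \, df,
\]

where the factor of 8 pi already includes the two polarizations of the field.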
It's a very funny trick, and to be honest I'm still not really sure why it works. Part of the secret is that there is a fundamental asymmetry between absorption and emission of radiation. Emission of radiation is a huge conceptual mystery: the charge is giving off energy as it oscillates, which means you are doing work to shake the charge: but what is the force which is reacting against the shaking charge, the force which you are working against? I'm not sure anyone really knows.
The absorption of energy is different. An oscillating electric field comes along and pushes on the charge, so the charge starts to oscillate in response to the field. It's pretty straightforward. Actually, even here there is a hidden mystery: we agree that the charge moves, gaining energy, but how does this process remove energy from the electromagnetic field? That's a story for another day. The point is that unlike the case of radiation, for absorption we have a pretty reasonable calculation.
It's a reasonable calculation, but it's far from a walk in the park. There are two pretty huge problems facing us. First, the electromagnetic field. What is it? Why, it's just equal to the average energy of the rigid rotor, which is an arbitrary number we said we can just pull out of our ass. No, that's not quite right: we said that energy would be the average energy per mode of the electromagnetic field. We still have to add up all the modes to get the total field.
Add up all the modes? But there are billions of modes, and they are at all kinds of crazy frequencies. Why not just take the ones that are at the same frequency as the tether-ball? Because there is not one mode out of all those billions that has the exact frequency we want. There are all kinds of modes very close, but how close is close enough? It's a horrible mess.
I was stuck on this point until I made an amazing discovery that changed everything. It's that business with Fourier Series that I mentioned earlier. You add up a bunch of modes, starting with the ones closest in frequency to your target, and moving outwards as you go. And then you just stop. You truncate your series. Where do you truncate it? Anywhere you want. It turns out you're going to get the same final answer for the energy of the tetherball regardless of where you truncate your series. It's hard to believe, but I show why it works here.
The Fourier calculation converts your random, distributed field into a uniform train of pulses with a specific size, frequency, and duty cycle. Depending on where you cut off your frequency band, you get different values for those three parameters of the pulse train. That's a bad sign: it means you're not calculating anything that is physically real in a measurable sense. But here's the catch: when you apply any one of those pulse trains to your tether-ball, you get the exact same final result. So who cares if they're real or not? How can you doubt that the result you get is consistent with the true physical result? (Or at least a very good approximation to it.)
But let's not get ahead of ourselves. The result of the Fourier calculation is a train of pulses - frequency bursts, actually. We apply that pulse train to the tether-ball, and it twirls about its post. How fast does it go? It turns out we can figure that out using the theory of the drunkard's walk. Everybody knows that the expectation of the distance traveled by the drunkard from the lamppost goes as the square root of the number of steps. But for an amplitude of oscillation, which is what we are dealing with here, the energy of oscillation is proportional to the square of the amplitude. Since it turns out that the amplitude of oscillation corresponds to the distance of the drunkard from the lamppost, the conclusion is that the energy of the oscillator grows linearly with time. Each pulse train adds an equal amount of energy to the oscillator!
What oscillator am I talking about? I thought we were going to drive a tether ball. That's not a harmonic oscillator, it's a rigid rotor. Well, I have to admit I'm cheating a bit here. The first time I did this calculation I applied it to a harmonic oscillator. When I came back to it the other day, I decided the rigid rotor would be a better case to work out, for a couple of reasons. I have an easier time restricting the degrees of freedom. I have a very convenient formula (the Larmor Formula) to check my work against, whereas with the harmonic oscillator I was having trouble getting it to fit into the antenna formula for the short dipole. And not least, I like the image of the tether-ball. I think it's evocative.
The problem with the tether ball is it's hard to get it started. To get it up to speed, you have to drive it through the whole spectrum of frequencies. That's a problem. The nice thing about the harmonic oscillator is that you're working with the same frequency through the whole range of amplitudes. So I end up returning to the old reliable mass-on-a-spring after all. It's kind of cheating, but you can ultimately justify it. After all, you could imagine some kind of variable-tether mechanism whereby the ball is let out gradually as it revs up, maintaining the same frequency the whole time. But more importantly, once you actually get up to speed, the equilibrium conditions are exactly the same for the tether-ball and the mass-on-a-spring. So with apologies, I'm going to start the system off as a harmonic oscillator. It's a bit of a mixed metaphor, but it works.
Here's how it works. You have this oscillating charged mass, and you hit it with one of these standard pulse trains that you've worked out. This gives a certain impulse to speed up the oscillation...or does it? The problem is you don't know the relative phase. Isn't it just as likely that your pulse train arrives 180 degrees out of phase, so it slows down the oscillator instead of speeding it up? Or any other phase in between? In fact, you have no idea what the outcome of the interaction is going to be.
That's why it's just like the drunkard's walk. He takes a step in a random direction, and you have absolutely no way of knowing whether he's getting closer or farther from the lamppost. It turns out to be slightly more probable that he will end up farther, but it's totally random. Except, that is, in one particular circumstance: the very first step. You know that if he starts out right under the lamppost, that after the first step he will be...exactly one step away from the lamppost. And the harmonic oscillator works exactly the same way!
We have to apply one of our standard frequency bursts to our mass-on-a-spring when it is perfectly at rest. In that case, and in that case only, can we calculate the outcome of the interaction. We can calculate the resulting amplitude of oscillation, and also the energy of the oscillation. With subsequent frequency bursts, the amplitude will grow more and more slowly, like the drunkard's walk. But the energy, since it is the square of the amplitude, will grow...on average...linearly with each frequency burst. Each frequency burst adds, on average, the same energy to the oscillator. And we know how big the bursts are, and how far apart they are.
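The random-phase argument is easy to check numerically. Here is a small simulation (a sketch, not anything from the original post): each frequency burst adds a unit kick to the oscillator's complex amplitude at a completely random phase, exactly like the drunkard's step. The amplitude grows as the square root of the number of kicks, but the energy grows linearly.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_kicks = 20_000, 100

# Each pulse train delivers a unit "kick" to the complex oscillation amplitude
# at a completely random phase -- the drunkard's step.
kicks = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(n_trials, n_kicks)))
amplitude = np.abs(np.cumsum(kicks, axis=1))   # distance from the lamppost
energy = amplitude**2                          # energy ~ amplitude squared

mean_energy = energy.mean(axis=0)              # average over many trials
print("mean energy after 1, 10, 100 kicks:",
      mean_energy[0], mean_energy[9], mean_energy[99])
```

After the very first kick the energy is exactly one unit (the drunkard is exactly one step from the lamppost); after that the average energy climbs by one unit per kick, even though any individual kick is as likely to slow the oscillator as to speed it up.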
Does the oscillator grow without limit? No it doesn't. Because as it oscillates, it begins to radiate. The problem is, we don't yet know the laws of radiation for a harmonic oscillator, or a rigid rotor, or any kind of accelerating charge. Well, that's not quite right. We ought to know on general grounds that the radiation is quadratic in the amplitude of oscillation; and as goes the amplitude, so go the velocity and acceleration. We will guess that the formula for radiation should be in some way proportional to the square of the acceleration.
So what are we missing? We need to calculate how much energy our frequency burst puts into a stationary oscillator. That's going to be pretty much an F=ma calculation; it shouldn't be an obstacle. Since we know how often the pulse trains are coming, we know the rate at which energy is being absorbed: in fact, it's being absorbed at a constant rate. How about the rate at which energy is being radiated? From what we just said, it's clearly proportional to the total energy of the oscillator, which is growing at a linear rate. In other words, at some point, the radiation must overtake the absorption. That is the point of equilibrium, and we already know where that point is!
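The balance just described can be written in one line (the symbols here are mine, for illustration): if the pulse trains deliver energy at the constant rate R_abs, and radiation drains the oscillator at a rate proportional to its energy E, then

\[
\frac{dE}{dt} \;=\; R_{\mathrm{abs}} \;-\; \gamma E,
\qquad
E_{\mathrm{eq}} \;=\; \frac{R_{\mathrm{abs}}}{\gamma},
\]

so equilibrium sits exactly where the linear drain catches up to the constant supply.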
At the point of equilibrium, we now know the amplitude of oscillation, and we also know how fast it is radiating. The formula which relates these two quantities is nothing other than the Larmor Formula, and our data must agree with it. All we need is to divide quantity A by quantity B to derive the constant of proportionality which links them. That will be our job for tomorrow.
Thursday, February 23, 2012
Wednesday, February 22, 2012
Then I said I was going to redo the same calculation, this time treating the atoms as tiny classical antennas. Since everybody knows that Maxwell's Equations don't apply to atomic systems, we should get nonsense. Let's see what actually happens.
I'm going to admit that I sweated bullets doing this calculation. I probably did it at least twenty times and got a different answer each time. The antenna calculation just wouldn't come out right. The best I could do was still out by a factor of ten. In desperation, I looked up the formula for the radiation from an accelerating charge. It's called the Larmor Formula, and I found it on the Wolfram Alpha website:
This gives us a formula to work with: but what numbers do we put in the formula? Well, there's not too much to worry about; we've got a couple of constants, which are provided in the table; we've got the charge on the electron, which everyone knows is 1.6x10^-19 coulombs. The only other thing we need is the acceleration.
That's the hard part. What is the acceleration of the electron, according to the Schroedinger picture? Fortunately, Professor Fitzpatrick has already done all the hard work for us. I said there were two alternative pictures of the physics. In the Copenhagen picture, one percent of the hydrogen atoms are in the excited state. In the Schroedinger picture, each of the atoms is in a mixed state to the extent of one percent excited. To calculate the dipole moment of such a mixed state, you do the bra-ket thing with x in the middle. Actually, the order of operations doesn't really matter in this calculation (as it sometimes does in quantum mechanics). You can just think of this as the square of the amplitude (that's the product of the bra and ket states) integrated against x to get the dipole moment. The calculation looks like this:
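(The equation image isn't reproduced here, but the mixed-state dipole calculation being described presumably looks like this: with the atom one percent excited,

\[
\psi \;=\; \sqrt{0.99}\;\psi_s \;+\; \sqrt{0.01}\;\psi_p,
\qquad
d \;=\; \langle \psi \,|\, x \,|\, \psi \rangle
\;=\; 2\sqrt{0.99 \times 0.01}\;\langle \psi_s \,|\, x \,|\, \psi_p \rangle,
\]

where the diagonal s-s and p-p terms have already dropped out, for the parity reason explained next.)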
Why is there no pure-state contribution in my result? Because there is no dipole moment for a pure state: the s^2 and p^2 states are symmetric (even) about the origin, so the integral against x (an odd function) gives you zero. The calculation reduces to the dipole coupling of s versus p.
But this is the calculation which the Professor has already done for us. Recall from yesterday:
Most importantly, let's notice that this dipole moment is an oscillating dipole moment, because the s and p states evolve in time with different complex-exponential frequencies. So based on the difference of those two frequencies, what is initially a positive dipole moment becomes, after one-half cycle, a negative moment. It oscillates.
We've now got almost all the numbers we need to do the Larmor calculation. We still need the acceleration of the charge. We've got the oscillation amplitude - that's the 8 picometers - and we've got the frequency from yesterday - that was 2.5 x 10^15 Hz. Now I'm going to do something pretty slick. Instead of working out the acceleration due to sinusoidal motion, I'm going to pretend I'm working with uniform circular motion instead. That's not what the atom is doing: in fact, it will throw me off by a factor of two, because the energy output of the circular motion is just the superposition of two orthogonal harmonic oscillators. No problem...I'll just divide by two at the end. The nice thing is I get to use the omega-squared-r formula for circular motion that I still remember from high school physics. Remembering the factor of 2-pi to change from hertz to radian frequency, I get an acceleration of 2*10^21 m/sec^2.
Putting all the numbers together, I get a power output of...20 picowatts. (You can check this if you like by plugging the numbers into the applet on the Wolfram site.) Divide by two to convert circular motion into simple harmonic motion, multiply by one million for the number of atoms in the sample, and I get, incredibly, exactly the same power output that I calculated yesterday. The Copenhagen picture with its quantum leaps gives exactly the same result as the Schroedinger picture with its tiny oscillating dipoles.
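If you'd rather not plug numbers into the Wolfram applet by hand, the whole arithmetic fits in a few lines of Python (a sketch using the SI form of the Larmor formula and the numbers quoted above):

```python
import math

# SI constants
eps0 = 8.854e-12     # vacuum permittivity, F/m
c    = 2.998e8       # speed of light, m/s
q    = 1.6e-19       # electron charge, C

# numbers quoted in the post
r = 8e-12            # dipole amplitude, ~8 picometers
f = 2.5e15           # oscillation frequency, Hz

a = (2.0 * math.pi * f)**2 * r                    # omega-squared-r, circular motion
P = q**2 * a**2 / (6.0 * math.pi * eps0 * c**3)   # Larmor formula, SI form

print(f"acceleration: {a:.1e} m/s^2")                              # ~2e21, as above
print(f"power, circular motion: {P * 1e12:.0f} pW per atom")       # ~20 pW
print(f"harmonic motion, million atoms: {P / 2 * 1e6 * 1e6:.1f} microwatts")
```

The per-atom figure lands in the neighborhood of 20 picowatts; dividing by two for harmonic versus circular motion and multiplying by the million atoms gives roughly ten microwatts, consistent with the result quoted in the text.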
Not convinced? There's one more calculation that really needs to be done to bring this argument full circle. I said originally that I was going to use antenna theory to do the calculation, and when I tried, the numbers wouldn't come out. I kept getting two microwatts instead of ten. How do you get a factor of five for an error? It didn't make sense, and it was driving me crazy. But now I've worked it out so the antenna calculation comes out right. That's a story for my next blogpost.