Wednesday, March 2, 2011

Let's run the numbers

I've proposed that a mechanical oscillator will be in equilibrium with an external field when that field has the same energy per mode as the mechanical oscillator. Now I have to show how the numbers work. I'm going to sacrifice some accuracy and rigor in order to present the calculation in a way that is easy (?) to follow.

There was a stumbling block for me when I tried to count modes and ended up with a result different from the Rayleigh-Jeans formula by a factor of three. I couldn't resolve the discrepancy, so I'm just going to use the number from Rayleigh-Jeans. Here is that formula again:
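
For reference, the usual wavelength form of that mode count, for a cavity of volume V (counting both polarizations), is:

```latex
N(\lambda)\,d\lambda = \frac{8\pi V}{\lambda^{4}}\,d\lambda
```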

I'm going to choose a frequency of 100 GHz and make a box 30 cm on a side. You can see that I have a wavelength of 3 mm so exactly one hundred waves fit into the box lengthwise.

I'm going to add up all the modes between 99 and 101 GHz, so I need to know how many modes there are. The formula given here is for wavelength, not frequency, so I'm going to let lambda range from 2.97 mm to 3.03 mm, which should come to just about the same thing. Letting pi=3 and using some shortcuts, I'm getting 240,000 modes, or about a quarter of a million. You can see that if I space the waves out equally over the frequency interval, they are about 8 kHz apart.
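
Here's a quick check of those numbers in Python (a rough sketch; the quarter-million mode count is taken as given rather than rederived):

```python
# Sanity check on the band edges and mode spacing quoted above
c = 3e8    # speed of light, m/s

f_lo, f_hi = 99e9, 101e9             # frequency band, Hz
lam_lo, lam_hi = c / f_hi, c / f_lo  # corresponding wavelengths, m
print(lam_lo * 1e3, lam_hi * 1e3)    # ~2.97 to ~3.03 mm, as claimed

n_modes = 240_000                    # the figure quoted above
spacing = (f_hi - f_lo) / n_modes    # average spacing between modes
print(spacing)                       # ~8.3 kHz: "about 8 kHz apart"
```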

So we have a quarter million sine waves between 99 and 101 GHz and we want to add them up and see what they look like. Remember, I could have just as easily taken one tenth of that bandwidth and added up only the waves between 99.9 and 100.1 GHz. The beauty of this analysis is that the equilibrium would come out to the same point either way, as I explained in a previous post. So I am at liberty to choose any arbitrary numbers. There are people who would say "let's just work in symbols and let the formulas do the work" but I think it's easier to follow with the actual numbers.

I'm going to let each sine wave have an amplitude of one millivolt per meter. There are two things we have to do next: add up a quarter of a million sine waves to get the composite waveform, and figure out the energy in each mode. Let's do the energy calculation first. It's the square of the field divided by the impedance of free space (377 ohms) and the speed of light. It comes to 10^-17 joules. I'm ignoring things like a factor of two somewhere.

EDIT: Oops. I actually ignored the size of the box, which matters a lot. What I got was energy per unit volume. The actual mode energy therefore comes to around 4 x 10^-19 joules.
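
As a sketch of the corrected arithmetic (same factors-of-two looseness as above):

```python
# Energy per mode: (field)^2 / (impedance of free space x speed of light)
# gives energy per unit volume; multiply by the box volume.
E = 1e-3       # field amplitude of one mode, V/m
Z0 = 377.0     # impedance of free space, ohms
c = 3e8        # speed of light, m/s
V = 0.3 ** 3   # volume of the 30 cm box, m^3

u = E**2 / (Z0 * c)   # energy per unit volume: ~1e-17 J/m^3
U = u * V             # energy per mode: a few times 1e-19 J
print(u, U)
```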

Now let's add up the sine waves. I'm again ignoring details like the fact that they are all going in different random directions. These calculations are basically a Fourier series, which becomes in the limiting case a Fourier transform. There are a quarter of a million waves all in phase at the exact center frequency, so the peak amplitude is 250 volts/meter. (Remember the mode amplitude was 1 mV/m.) I have a total bandwidth of 2 GHz with a spacing of just 8 kHz between waves. I explained how to add these things up in a previous post. You start by taking the middle three waves... in AM radio, that would be the carrier and the two sidebands. It's pretty well known that this gives you a 100 GHz signal modulated at 8 kHz. To simplify the math, we will think of this as a series of 100 GHz wave trains 62.5 microseconds in duration (let's just call it 60), spaced 125 microseconds apart (a 50% duty cycle).

Now we will start adding more sidebands. When we add a second pair of sidebands, the pulse train gets shorter ... 30 microseconds instead of 60 ... but the "window" of 125 microseconds stays the same. We get a 25% duty cycle. The repetition rate stays the same, but the duty cycle gets smaller.

This is how it works each time we add a pair of sidebands, and we must add 120,000 such pairs. The end result is a series of pulses 500 picoseconds long, separated by an interval of 125 microseconds ... a repetition rate of 8 kHz.
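
You can watch this compression happen by summing equally spaced sine waves. Here's a small-scale sketch (51 waves at 8 kHz spacing instead of the full quarter million; the 100 GHz carrier factors out of every term, so only the envelope is summed):

```python
import numpy as np

df = 8e3       # spacing between modes, Hz
n_pairs = 25   # sideband pairs (the full problem has 120,000)
t = np.linspace(0, 250e-6, 100001)   # two repetition frames

# Envelope of the sum: one wave per mode, all in phase at t = 0
env = sum(np.cos(2 * np.pi * k * df * t)
          for k in range(-n_pairs, n_pairs + 1))

period = 1 / df                        # 125 microseconds between pulses
width = 1 / ((2 * n_pairs + 1) * df)   # pulse duration ~ 1/bandwidth
print(period * 1e6, width * 1e6)       # 125 us frames, ~2.45 us pulses

print(env.max())   # peak = 51, the number of waves: they add in phase
```

Scaling up, 240,000 waves spanning 2 GHz give pulses of about 1/(2 GHz) = 500 ps, still repeating every 125 microseconds.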

Let's apply this pulse to an atomic oscillator with a reasonable mass and charge: let's say, 10^-27 kg and 10^-20 coulombs. It's of course a sinusoidal force and there is a spring to consider but I'm going to ignore all that and just calculate the effect of a straight force applied to an inertial mass. I'm going to be out by a factor of 2 or 4 or something, but that's OK. The final velocity of the particle will be (force) x (time) / (mass). The force is just (field) x (charge). So the calculation gives me

(250 V/m) x (10^-20 Coulombs) x (500 x 10^-12 sec) / (10^-27 kg) = 1.25 m/sec

So the energy of oscillation, from 1/2 mv^2, is close to 10^-27 joules. That's from a single impulse.
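
The same arithmetic in Python, using the round numbers from the text:

```python
# Velocity kick from one 500 ps pulse: v = (field x charge x time) / mass
E_peak = 250.0   # peak field of the composite wave, V/m
q = 1e-20        # charge, C
m = 1e-27        # mass, kg
tau = 500e-12    # pulse duration, s

v = E_peak * q * tau / m
KE = 0.5 * m * v**2
print(v, KE)   # 1.25 m/s and ~7.8e-28 J, i.e. close to 10^-27 J
```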

Let's now remember how the drunkard's walk works. The distance increases as the square root of the number of steps; so the square of the distance is linear in the number of steps. Likewise with the harmonic oscillator. Each impulse adds to the amplitude in a randomly oriented phase, but the square of the amplitude (the energy) increases linearly. There are, we may recall, 8000 impulses per second; so the energy, starting from rest, increases at a rate of 8 x 10^-24 joules per second. The question is: what will limit this buildup of energy?
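
Here's a toy simulation of that claim (a hypothetical phasor picture: each kick adds a fixed-length step to the oscillation amplitude at a random phase angle):

```python
import math
import random

random.seed(1)

trials, kicks = 2000, 400
total = 0.0
for _ in range(trials):
    x = y = 0.0   # oscillation amplitude represented as a phasor
    for _ in range(kicks):
        a = random.uniform(0, 2 * math.pi)
        x += math.cos(a)   # unit-length kick, random phase
        y += math.sin(a)
    total += x * x + y * y   # energy ~ amplitude squared

# Mean squared amplitude per kick comes out ~1: energy grows linearly
print(total / trials / kicks)

# At 8000 kicks per second and ~1e-27 J per kick, the buildup rate is
print(8000 * 1e-27)   # 8e-24 J/s, as quoted
```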

Answer: the radiative losses. We simply need to calculate what size of antenna will radiate with a power of 8 x 10^-24 watts. At the point where the atomic oscillations build up to that amplitude, the system will be in equilibrium.

We will use the classical formula for the radiation resistance of a half-wave dipole: it is

200 (L/lambda)^2

where lambda is the wavelength and L is the antenna length. You can see this gives the result of 50 ohms for the half-wave dipole; it is well known to radio amateurs that the correct value is 73 ohms, but the formula diverges somewhat from the exact value in this limiting case. From the assumed values of charge and frequency, and recalling the nursery rhyme "twinkle, twinkle little star, power equals I-squared-R", we get the following condition:

{(10^11 Hz) x (10^-20 Coulombs)} ^2 x (resistance) = 8 x 10^-24 Watts

We therefore require a radiation resistance of 8 x 10^-6 ohms, which from the formula gives us an antenna length of 6 x 10^-7 meters. We must now ask the question: what is the mechanical energy of this oscillator?
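
The corresponding arithmetic, with the same round numbers:

```python
import math

P = 8e-24    # power absorbed from the field at equilibrium, W
f = 1e11     # oscillation frequency, Hz
q = 1e-20    # charge, C
lam = 3e-3   # wavelength, m

I = f * q                      # effective antenna current, ~1e-9 A
R = P / I**2                   # "power equals I-squared-R"
L = lam * math.sqrt(R / 200)   # invert R = 200 (L/lambda)^2
print(R, L)                    # 8e-06 ohms and 6e-07 m
```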

It's not hard to figure out a velocity for the oscillator: if the amplitude of oscillation is on the order of the antenna length, the peak velocity is roughly 2-pi-f times that amplitude, which comes to something close to 10^5 m/sec. Squaring this and multiplying by 1/2 times the mass, we get an energy of 5 x 10^-18 joules.
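
A sketch of that estimate (assuming, as a plausible reconstruction, that the swing amplitude is about half the antenna length):

```python
import math

f = 1e11   # oscillation frequency, Hz
L = 6e-7   # antenna length from the previous step, m
m = 1e-27  # mass, kg

v = 2 * math.pi * f * (L / 2)   # peak velocity for amplitude ~ L/2
E = 0.5 * m * v**2
print(v, E)   # ~2e5 m/s; energy of order 10^-18 to 10^-17 J
```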

We're out by a factor of ten from what we set out to show: that the electromagnetic mode energy is equal to the mechanical energy. It should be realized that we took an awful lot of short-cuts and neglected a whole bunch of things, so it's not surprising that we might be out by an order of magnitude. I may be biased, but I'm going to judge this result to be "close enough".

The proper thing would be to redo the whole calculation with symbols instead of arbitrary values and show that no matter what parameters you choose, the result is that the energy of the mechanical oscillator comes out equal to the energy of a single mode of the electromagnetic field. I can hardly doubt that the result must come out correctly, but I'm not about to do the work to prove it. What I find hardest to understand is that there are people who would prefer to do it symbolically from the get-go, rather than run a numerical example first to make sure everything is lined up right.

There's one last thing that still bothers me. I have almost no doubt that the identity must hold, and the calculation strongly suggests that this is so. What I don't have is a simple way of understanding why it must be so. I think there ought to be a way of seeing that it has to work without actually grinding through the numbers.

Where does the equilibrium come from?

In my very first blogpost just over a year ago, I talked about how I really felt I understood classical physics even if I wasn't capable of every single calculation. The example I took was Maxwell's calculation of the equilibrium distribution of velocities in a gas. I said that even if I didn't know how to calculate the distribution, I could "see" how it more or less had to come about.

The irony in this example is that over the last two weeks, I've been agonizing over how to work out the exact same equilibrium, except as applied to the classical radiation field. And my problem was not merely one of how to do the calculation. It was the basic physics that had me baffled. Just why and how does an equilibrium come about in the first place? I know that an oscillating atom can either absorb or emit radiation. But I had no definite way of seeing why the tendency to absorb or radiate should depend on the magnitude of the ambient field. Rather than saying that there had to be an equilibrium point, I could have just as well argued, for example, that the amount of outward radiation would always exceed the amount of absorption.

I went down a number of dead-end roads before solving the problem. Mostly I was trying to analyze the ambient field as the sum of an in-phase and an out-of-phase component relative to the atomic vibration. To the extent that the fields are in quadrature (out of phase) there is no interaction, so the atomic oscillator is always emitting. Where does the absorption come into effect, to counterbalance the emission? As the fields go in and out of phase, they are either in a leading or a lagging relationship. One is absorbing, the other is emitting. But if the phases are basically random, the quantities should apparently just cancel out. So there is no equilibrium between absorption and emission.

So then I looked for mechanisms whereby the ambient field would "drive" the atomic oscillator slightly off frequency, pulling the two fields into synchronism just like a motor when it is connected to a 60-Hz power line. I tried in vain to make this model work and simply gave up.

Then I had two essential inspirations that gave me the solution of the problem. The first was the drunkard's walk. I realized that even when the impulse is in completely random directions, there is still a net tendency for outward progress. This was the first part of the puzzle. Then I came up with a way of analyzing random fields whereby I could convert a power spectrum into a time-varying electric field. (See my post "Harmonic Oscillator: The problem is solved".) The amazing thing about this analysis is that no matter what I chose for an arbitrary frequency cutoff or discrete resolution, I ended up with waveforms, all of them different from each other, but all of which gave the same result when used in a "random walk" analysis of the harmonic oscillator.

At last I could say I understood the essential physics of the equilibrium process. Now, when you really understand something, you ought to be able to do some calculations with it; and that's what I'm going to do next. It's not the kind of thing where the numbers are going to come out exact, because I've made some approximations along the way; and to some extent, I'm allowing myself to be a little sloppy. The point of this kind of calculation is to see if you are at least in the ballpark, and I think I've been able to do that much.

Sunday, February 27, 2011

Counting Modes

How many ways can you set up a standing mode of electromagnetic waves in a rectangular cavity? This question has been holding me up for a week now. There is a formula which I found on the hyperphysics webpage; it is basically the Rayleigh-Jeans analysis which gives the low-frequency limit of the black-body law. The formula must be right because it agrees with experimental results, assuming each mode gets its allotted energy of 1/2kT. My problem is I have my own way of counting modes which disagrees by a factor of 3, and I can't for the life of me see where.

There are a lot of ways you can be out by a factor of 2 or 4, but it is very hard to be out by a factor of three. Furthermore, I understand the basic concept of counting modes. I know how to do it in two dimensions for electromagnetics, and know how to do it in three dimensions for a gas. I thought I knew how to do it in three dimensions for solids, but now I'm not so sure. At first I thought it was the same as a gas, but now I realize that a solid, unlike a gas, can support shear waves as well as compressional waves.

The electromagnetic case in two dimensions is not so bad because you can start by taking waves in the xy plane and letting the polarization be all in the z direction. That lets you separate out the vector portion and leaves you with nothing but scalar waves to add up in the plane. It doesn't take much more than high-school trig identities to show that four sets of plane waves criss-crossing each other can be added up to equal a checkerboard pattern of standing waves.

That's how the hyperphysics explanation starts out, except it's in three dimensions. Here is what they say:


I really don't see that this can be correct. They call for the electric field to go to zero at the cavity walls, which is of course the correct condition for acoustic vibration amplitude in a gas. But electromagnetics? It's certainly not the condition for a waveguide. We need the electric field to have no tangential component, and we need the magnetic field to have no perpendicular component at the walls. Those are very different conditions from what they've described here. But never mind, let's go on.
We now have the rectangular cavity divided up into unit cells, and we want to count the modes up to a given wavelength. They do the thing of working in k-vector space and counting the lattice points within a spherical volume. I haven't explained it in detail but it's the same thing I do and they end up with the following formula:
It's essentially the number of unit cells, with an extra factor of two for what they say is polarization. I think that's also wrong; I would say the factor of two is necessary for the time displacement of one-quarter cycle; but that's a technical point. In any case I said I wouldn't be arguing about factors of two. It's the factor of three that's getting me.

What exactly does the electromagnetic field look like?? I first thought that it could be polarized in either the x, y, or z direction, giving me an extra factor of three. But then I realized that didn't make sense. In two dimensions you can add up all kinds of waves in the xy plane, keeping the polarization in the z direction. But there's no way you can take plane waves in three dimensions and add them up with all the same polarization. The polarization is going to vary from place to place... in other words, the field lines are going to curve.

How can you picture field lines curving through space and still meeting the required boundary conditions? Not the simplistic, incorrect boundary conditions described in the hyperphysics link, but the correct e-m cavity conditions: perfectly conducting walls with the electric field perpendicular and the magnetic field tangential? After some hours staring at a box of Rice Krispies (as my typical rectangular cell) and drawing ellipses on different faces, this is what I came up with:
The green lines are the electric fields and the red ellipses are the magnetic fields. I am a bit troubled by a certain asymmetry... the unit cell has two "ellipsoids" of magnetic lines and one of electrical lines. (I've drawn the green electrical lines as though they form a torus, but that's bad artwork: the ellipses should be parallel.) Still, I think this is the correct configuration. It seems to work: as the magnetic loops collapse, they generate electric loops in generally the right places.

It's the count that bothers me. Of the six cell walls, four have magnetic loops on them and two are empty... in this case the side walls (the walls where they list the ingredients on the Rice Krispies box.) But I could have just as easily taken the front/back or the top/bottom faces to be the null faces. So there are three field configurations that fit this geometry. I can't think of any physical reason that shouldn't work.

So my mode count disagrees with Rayleigh-Jeans by a factor of three. And yet the Rayleigh-Jeans formula gives the correct energy for the field (at low frequencies), assuming each mode gets its allotted energy. It's a problem and I can't get around it.

For now I think I'm going to make use of the Rayleigh-Jeans formula and go ahead with my equilibrium calculations.

JAN 24 2013: It's taken me almost two years, but I found the flaw in my counting method. You can see what I did wrong if you click here.

Monday, February 21, 2011

What's the connection?

In my last posting, I showed that you could convert the formula for electromagnetic power density into a real-time picture of actual sine waves, but you got different results depending on how you chopped up the spectrum and how wide you chose the bandwidth. Here are three pictures of typical waveforms you can get from a given spectrum. The pictures are highly simplified to illustrate the most essential features of the waves. The single cycle of a sine wave is really intended to suggest hundreds or millions of cycles in general:


It's not completely arbitrary. I've written in a formula at the bottom of the picture to show that the three waveforms must have a certain property in common. I've expressed this property as a mathematical formula because that's what people do, but the physical significance of this common property is what really matters. And it relates to ... the drunkard's walk!

If any of these wave trains are applied to a harmonic oscillator, they will tend to build up the vibration; but because the applied wave and the mechanical oscillator are in general out of phase with each other, the amplitude of oscillation will build up not steadily, but randomly. Like the drunkard's walk.

What these three wave forms have in common is that if you apply them to a given oscillator over a period of time, they will cause the oscillation to grow at equal rates!

You don't even need to know much about the dynamics of a driven oscillator to see why this ought to be so. It goes back to the drunkard's walk. Let's compare the first two waveforms. The driving force in the second case is four times as great, applied over one quarter the time: the impulse delivered by each pulse (force times time) is the same, and so is the number of pulses per second. That's why those two waveforms both give the same rate of growth.

The third case is interesting. Compared to the first waveform, it's twice the impulse applied one quarter as often. It's like the drunkard who takes a giant two-meter step, but only once every four seconds. Figure it out ... he has the same rate of progress as the drunkard who takes regular one-meter steps every second.
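
The bookkeeping behind that: mean-square progress per unit time goes as (step length) squared times (steps per second).

```python
# Drunkard's-walk rates: (step length)^2 x (steps per second)
regular = 1.0**2 * 1.0   # one-meter steps, one per second
giant = 2.0**2 * 0.25    # two-meter steps, one every four seconds
print(regular, giant)    # both 1.0: the same rate of progress
```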

So you can take your pick. Any one of these waveforms applied to a harmonic oscillator will cause it to grow at the same rate. What is the rate of growth? It appears to get slower and slower as the amplitude of oscillation builds up. But notice something funny. If we measure the size of the oscillation not by its amplitude, but by its total energy ... the growth rate becomes a constant. The energy of oscillation grows at a constant rate.

So why doesn't it build up without limit? Because as it oscillates, it becomes a transmitting antenna, and radiates outwards in all directions. There are formulas from antenna theory that will give us an exact number for this, but it's enough for now to understand that the radiated power is proportional to the square of the amplitude. In other words, it's proportional to the total energy of the mechanical oscillator.

As the oscillator builds up, it emits power at a faster and faster rate. At some moment this becomes equal to the rate at which it is absorbing power. That is the point of equilibrium, and we can now calculate it.


The Drunkard's Walk

Yesterday I showed how you could take a piece of the energy spectrum and convert it into an actual waveform. You just approximate the energy distribution by a series of sine waves, pick an arbitrary bandwidth, and add up the waves. The picture you end up with depends on how finely you chop up the distribution into discrete waves, and how many of those waves you choose to add up.

In each case you get a series of wave trains with a certain duration, amplitude, and repetition rate. Some examples of what you might get from a given distribution are:

10 mV/m pulses 4 microsecond duration 100 kHz repetition rate
40 mV/m pulses 1 microsecond duration 100 kHz repetition rate
20 mV/m pulses 4 microsecond duration 25 kHz repetition rate

(These aren't the exact values I worked out yesterday, but they illustrate the general pattern: multiply column A by column B and the square root of column C to get a constant value.)
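
Checking that pattern on the three examples (amplitude in mV/m, duration in microseconds, repetition rate in kHz):

```python
import math

# (amplitude, duration, repetition rate) for the three waveforms above
waveforms = [(10, 4, 100), (40, 1, 100), (20, 4, 25)]
for amp, dur, rep in waveforms:
    print(amp * dur * math.sqrt(rep))   # 400.0 every time
```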

I then claimed that no matter what choices you make, you will get the exact same result when you apply the resulting waveform to the problem of finding the equilibrium of a mechanical resonator in that field. Now I'm going to show why it works. To understand the reason, we must first consider a famous math problem known as the Drunkard's Walk.

The premise of the problem is that you have a drunk standing under a lamppost and he begins to wander aimlessly. Every second he takes one step, and each step is one meter long. The question is, how far from the lamppost will he be found after a given time?

It's not such an easy question, but the answer turns out to be simple: in this example it's just the square root of the number of steps. So after 100 seconds the drunk is expected to be found (on average) ten meters from the lamppost.

Why should this be? Since his steps are totally random, shouldn't he make absolutely no progress on average?

That's also true. His most likely progress in any given direction is zero, and his most likely endpoint is... right where he started. But his average distance away from the lamppost is still ten meters. Go figure.

It's actually not hard to see why he must, on average, continue to get farther and farther away. Suppose at some time he happens to be five meters away from the lamppost. His next step will place him somewhere on an imaginary circle exactly one meter from his present location.

If you look at that circle, you will see that most of it (more than half of it, at least) lies outside a five-meter radius from the lamp!

If you look at the small circle you see that the segment of arc (in green) outside the five-meter radius is longer than the segment (in purple) inside five meters. This is true wherever he starts from; and so, on average, the drunk keeps getting farther from the lamppost.
How much farther? That's a tough one. But there is a little trick that will let us guess the answer. The most "neutral" thing the drunk can do is, arguably, to take a step perpendicular to the radius of the big circle. We can then use the Law of Pythagoras to see where he ends up. 5^2 plus 1^2 equals 26, so his distance from the lamppost is the square root of 26. Since his previous location at a distance of 5 represented the square root of 25, we see that this little trick gives us exactly the answer that we wanted...the distance goes as the square root of the number of steps.
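
A quick simulation bears this out (measuring distance in the root-mean-square sense, which is what the square-root law really refers to):

```python
import math
import random

random.seed(42)

trials, steps = 4000, 100
total_sq = 0.0
for _ in range(trials):
    x = y = 0.0
    for _ in range(steps):
        a = random.uniform(0, 2 * math.pi)   # one-meter step, random direction
        x += math.cos(a)
        y += math.sin(a)
    total_sq += x * x + y * y

rms = math.sqrt(total_sq / trials)
print(rms)   # close to sqrt(100) = 10 meters after 100 steps
```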
We're going to need this trick when we go back to the harmonic oscillator.

Sunday, February 20, 2011

Harmonic Oscillator: The problem is solved

Okay. Now I've figured out something good. This is a problem that has plagued me for years and now I've solved it.

I've been talking about the equilibrium between the radiation field and the molecular oscillator. We said that the oscillator gets its energy from collisions with other molecules and pumps energy into the radiation field. Of course that's only half the story. The radiation field also pumps energy into the molecular oscillator. The problem is to calculate the equilibrium point, and now I've figured out how to do it.

Why is this such a difficult problem? It's difficult in every possible way. But probably the greatest difficulty of all is that the fields are random. So we have an oscillator vibrating at 1000 MHz. It would be fine and dandy if we simply had a radiation field at 1000 MHz with a strength of 100 milliwatts per square meter. That we could work with. But that is not what we have. No, what we have is a radiation distribution of 100 mW/m^2... per MegaHertz! So what do we work with? Do we consider only the radiation at exactly 1000 MHz? No, because there is zero radiation at any exact frequency! We have to sample it over a certain finite bandwidth or we get nothing. So do we consider all the radiation between 999 and 1001 MHz? That gives us a total power density of 200 mW/m^2. But we could just as well have considered all the radiation between 996 and 1004 MHz, giving us a total power density of 800 mW/m^2. Which one should it be??? It's all very confusing.

What I've figured out today is that... it doesn't matter! We're free to make either choice, and we get the same result. This seems crazy, but I'm going to show you why it works. The first step is to describe the power distribution. Mathematically, it's very tricky to deal with continuous distributions, but we can always approximate them by discrete sums. I've drawn a sketch of a graph showing a continuous frequency distribution in the vicinity of 1000 MHz, represented by a sum of equal sine waves spaced 1 MHz apart. (The graph says hertz, but that's a mistake. It should be labeled MHz!)

By the way, there's nothing special about MegaHertzes. We could also have chosen to represent our field by taking sine waves every 250 kiloHertz. Those are the skinny little lines between the heavy black lines. This is a more accurate breakdown. If we choose the 250 kHz separation, the sine waves are only half as big, because there are more of them. I know...there are four times as many, so shouldn't they be a quarter the height? No, because power goes as the square of the amplitude. I've chosen the correct height to give the same power in either representation.

Now let's see what this power distribution looks like from the point of view of the oscillator. The oscillator is really only sensitive to power close to its frequency of oscillation. How close? That's a tough one, but let's try for starters just including the power between 999 and 1001 MHz ... a bandwidth of 2 MHz. There's a little circled sketch in the previous graph showing exactly how I'm calculating this - I've taken the sine waves at the band edges at half amplitude, and the full wave in the middle. (If you took first-year calculus, it's basically the trapezoid rule.) The resulting wave looks like this:


It rises and falls over a span of one microsecond. This pattern repeats itself a million times a second.
That's one way we can do it. But who said we had to restrict our bandwidth to 2 MHz? I've redone the calculation below, taking in a bandwidth of 8 MHz between 996 and 1004 MHz. Again, I've taken the endpoints at half weight. Here again is a schematic of the distribution:


The resulting waveform is shown below. (By the way, these pictures are for real. I actually programmed nine sine waves into an Excel spreadsheet, added them up, and plotted the graph.)

It looks very different. The total time interval is still one microsecond, but the energy pulse is jammed into the middle of the frame, and the peak is much higher. In fact, the math shows you get four times the amplitude over one quarter the time interval. Is that the same total power? No: power goes as the square of the amplitude, so the peak power is sixteen times as great, and even averaged over one quarter the time interval that comes to four times the power. But that makes sense, after all... we took four times the bandwidth at a constant power density.
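
These waveforms are easy to reproduce (the same experiment as the spreadsheet, sketched in Python; the 1000 MHz carrier factors out, so only the sideband envelope is summed):

```python
import numpy as np

t = np.linspace(0, 1e-6, 10001)   # one 1-microsecond frame

def envelope(offsets_mhz, weights):
    # Sum the sideband envelope; half-weight endpoints as in the text
    return sum(w * np.cos(2 * np.pi * f * 1e6 * t)
               for f, w in zip(offsets_mhz, weights))

# 2 MHz bandwidth: 999, 1000, 1001 MHz
a = envelope([-1, 0, 1], [0.5, 1, 0.5])
# 8 MHz bandwidth: 996 through 1004 MHz
b = envelope(range(-4, 5), [0.5] + [1] * 7 + [0.5])

print(a.max(), b.max())   # 2.0 and 8.0: four times the amplitude

# Fraction of the frame where each envelope exceeds half its peak:
# the wider band crams its energy into roughly a quarter of the time
print((a > a.max() / 2).mean(), (b > b.max() / 2).mean())
```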

So depending on how we divide and truncate our power distribution, we get very different electromagnetic waveforms as applied in real time to the mechanical oscillator. It gets even more complicated if we approximate our distribution with sine waves every 250 kHz (the skinny lines in the graph). For example, taking in eight bands of power between 999 and 1001 MHz, we get a pulse train lasting one microsecond out of four, with twice the height calculated in our first example. Twice the amplitude means four times the power, for one quarter the time interval: that's the same net energy... a good sign, at least. But again it's a very different-looking wave.
But here's what I just figured out .... these waves all have exactly the same effect on a mechanical oscillator! It doesn't matter which approximation or how much bandwidth we choose. We're going to find it gives the same average boost to a random oscillator. It means we can go ahead with the equilibrium calculation. That will be the topic of my next post.

What was Planck thinking?

I ended my last posting by speculating that the only way to fix things was to find some way of recalculating the entropy. Because no matter how many practical reasons I could find as to why the high-frequency vibrational modes should not be activated in "real" matter, the fundamental problem remains: we can still imagine an ideal gas made of perfect billiard balls joined by flexible rods, and for this theoretical gas, the entropy calculation agrees with the practical outcome: all the modes, vibrational included, must be excited to their full complement of energy. And any "real" gas must have the same equilibrium as this "ideal" gas.

Isn't this something like what Planck did? By quantising the energy modes of the electromagnetic resonances, he fixed things so the high frequency modes went away. Because when you calculate the entropy of these quantised modes, it gives the desired outcome. But there's something very wrong with this approach. If Planck allows the gas itself to behave as an ideal collection of springs and ball bearings, then both thermodynamics and practical reasoning tell us that the high-frequency vibrations must exist. And it doesn't help to recalculate the entropy of the radiation field, because if those molecules are vibrating, then they will (and must) radiate power into those forbidden frequencies.

One hundred years after the fact, it's hard to know exactly what people were thinking back then. Planck had conjured up this miraculous formula that worked perfectly, but nobody (Planck included) knew quite what to make of it. It seems like people were focused on the thermodynamics of the electromagnetic field, so people figured out that by quantising the modes of vibration, and allowing only certain values of the energy at any given frequency, they could make the entropy come out "right" - in other words, consistent with the experimental facts. But what about the entropy of the mechanical oscillators? Wasn't that still a problem?

It would be a number of years before Einstein would propose that the mechanical oscillations should follow similar rules of quantisation. In the meantime, one has to assume that people were content to allow the mechanical vibrations to have their place in the theory, but to somehow insist, against all logic, that they would not contribute to the intensity of the radiation field. To make this work, they had to suspend the laws of electromagnetism! That is the origin of the idea that energy could only be emitted or absorbed in discrete lumps.

There is a much better way out of this dilemma, and that is to attack the problem at its source: the mechanical vibrations. Because if the unwanted mechanical vibrations are suppressed, then the electromagnetic field follows suit and we avert the ultraviolet catastrophe. I have already shown how there are many practical reasons to suppose why those vibrations shouldn't happen in "real" atoms. The problem is with the entropy: the entropy calculation shows that those vibrations must assume their allotted share of the energy; and the entropy calculation is backed up by the theoretical example of the imaginary ideal gas made of springs and billiard balls, where you can verify by mechanical reasoning that it works as claimed - that equipartition must prevail.

What I am saying is that if we solve the problem at the mechanical level, we avoid the problem with the electromagnetic field. You can keep the laws of Maxwell and there is no need for quantized lumps of energy. In my next post, I'm going to delve more deeply into just what is required to solve the problem at the mechanical level.