Sunday, April 22, 2012

Rutherford Scattering

A few months ago I did a series of articles about light reflecting off the moon. It actually makes a difference if you treat the moon as a polished steel mirror versus the case of a white painted ball. The mathematics of the polished steel moon are especially simple: it turns out that for uniform illumination, the light is scattered equally in all directions. It's the same distribution you get from billiard-ball collisions off a hard sphere.


There is a third case which is especially important in physics: Coulomb scattering via the 1/r-squared force, also called Rutherford Scattering. Rutherford worked out the mathematics of the scattering in order to analyze his famous experiment of bombarding gold foil with alpha particles. I looked up the formula and this is what I found: 
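For the record, here is what I believe is the standard textbook form of it:

$$\frac{d\sigma}{d\Omega} \;=\; \left(\frac{Z_1 Z_2 e^2}{16\pi\varepsilon_0 E}\right)^{2}\frac{1}{\sin^{4}(\theta/2)},$$

where E is the kinetic energy of the incoming particle and theta is the scattering angle.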



I'm not actually interested at this point in the Rutherford experiment: I got onto this topic because Feynman wants me to draw some kind of cosmic inferences about the nature of the wave function by considering the symmetries of the electron-electron collision. In the meantime, I've gotten bogged down in the classical physics. I tried to analyze this problem by making some approximations, and I thought I was doing OK, until I saw this formula. I really wasn't expecting to see the fourth power of the angle anywhere, and I couldn't figure out where it came from.

I started off by considering the case of small deflections...that is, the case where the two electrons don't come too close to each other. Well, that's not quite right. I guess I started off with the case of the head-on collision where both electrons bounce back at 180 degrees. Of course it's a case that can virtually never happen in practice, but it's also the case where you can easily calculate things like the point of closest approach. I did some approximations to try and work out the radiative losses of the collision, which you can read about here. After that, I tried to work on the opposite end: the case of the glancing collision. That's where I came up against that formula with the fourth power.

After struggling with it for a while, I figured out what was what in the formula and drew the following picture:


It's kind of a funny picture, but it actually shows the physics of what's going on. You really have to be careful to remember that sigma and omega are complementary parameters in a way: the incident electrons passing through the sigma surface actually end up entirely outside the solid angle omega, and vice versa. It's really easy to overlook that fact, and if you just plow ahead and do the calculus, you get exactly the same result either way because you end up dealing only with the differential cross sections.

The math itself is also kind of ass-backwards. The formula presents the result in terms of solid angles and intercepted cross sections, but when you actually work out the physics you deal with centerline displacement and deflection angle. Those are very different parameters, and the "natural" formula, in terms of displacements and deflection angles, is actually really simple: I've put it at the bottom of my diagram, and you see that for large r the scattering angle is simply inversely proportional to the displacement from centerline.

It's not too hard to see why this has to be true. We can easily believe that most of the interaction between the two particles occurs within a linear distance (along the line of travel) of approximately r. And the force of interaction obviously goes as 1/r^2. For constant velocity, the time of the interaction is proportional to the distance r: and the total momentum transferred to the moving particle in that time is just (force)x(time), which is clearly proportional to 1/r. For small angles, the change in momentum is proportional to the angular deflection, and that's about as simple as it gets.
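In symbols, my hand-waving argument goes something like this (taking r to be the displacement from centerline and throwing away all the factors of 2):

$$\Delta p \;\sim\; F\,\Delta t \;\sim\; \frac{ke^{2}}{r^{2}}\cdot\frac{r}{v} \;=\; \frac{ke^{2}}{rv}, \qquad \theta \;\approx\; \frac{\Delta p}{p} \;\sim\; \frac{ke^{2}}{m v^{2} r},$$

which is the inverse proportionality I was talking about.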

The question is: how in the world do you get the fourth power of the half-angle into the formula???

It is my sad duty to report that the fourth power indeed falls out when you do the calculus and substitute the variables. I'm not going to grind through the whole calculation: if you really care, you can work out the details for yourself. Just remember that the solid angle is proportional to the square of the deflection angle, and of course use the small-angle approximation to get rid of the cosecant (the reciprocal of the sine).
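If you want a little more than that hint, here is the bare-bones skeleton for small angles, with all the constants suppressed. The deflection angle goes as one over the centerline displacement b, so

$$b \propto \frac{1}{\theta},\qquad d\sigma = 2\pi b\,|db|,\qquad d\Omega \approx 2\pi\theta\,d\theta \;\;\Longrightarrow\;\; \frac{d\sigma}{d\Omega} = \frac{b}{\theta}\left|\frac{db}{d\theta}\right| \;\propto\; \frac{1}{\theta^{4}} \;\approx\; \frac{1}{16}\csc^{4}\!\left(\frac{\theta}{2}\right),$$

and there's your fourth power.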


Thursday, April 19, 2012

Radiative Losses in Electron Collisions


This morning I started a long and perhaps pointless calculation to figure out the radiation losses when two electrons collide. I took as an example the case of two electrons colliding head on with an initial velocity of 10^6 m/sec each, and rebounding exactly the way they came. I drew a graph of the collision, and got values for the acceleration and time: namely, an acceleration of 4x10^21 m/sec^2, and a collision time of 5 x 10^(-16) sec. Putting these into the Larmor Formula (you can check the numbers yourself) I get a total radiated energy of about 4.5 x 10^(-26) joules, or about one part in 10^7 of the initial energy of each electron.
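If you want to check those numbers without a pencil, here is a quick back-of-the-envelope script, using the textbook (SI) form of the Larmor formula and the same round figures as above. Treat the output as order-of-magnitude only.

```python
# Rough check of the numbers above, using the SI Larmor formula
# P = q^2 a^2 / (6 pi eps0 c^3).
import math

e    = 1.602e-19      # electron charge, C
m    = 9.109e-31      # electron mass, kg
eps0 = 8.854e-12      # vacuum permittivity
c    = 2.998e8        # speed of light, m/s

v  = 1.0e6            # initial speed of each electron, m/s
a  = 4.0e21           # acceleration during the collision, m/s^2
dt = 5.0e-16          # duration of the collision, s

P = e**2 * a**2 / (6 * math.pi * eps0 * c**3)   # radiated power, W
E_rad = P * dt                                  # radiated energy, J
E_kin = 0.5 * m * v**2                          # kinetic energy of one electron, J

print(f"radiated power  ~ {P:.1e} W")
print(f"radiated energy ~ {E_rad:.1e} J")
print(f"fraction of KE  ~ {E_rad / E_kin:.0e}")   # comes out right around 1e-7
```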



This isn’t all that surprising after all. The electron energy is typical for atomic electrons, and the sharpness of spectral lines (the ratio of the frequency to the linewidth) is typically on the order of 10^7. The proximities, the forces, and the charges are all the same as we deal with in spectroscopy, except that we have only one-half of one cycle of oscillation. So the energy radiated is about what it should be.

But remember, this does not give us the correct answer for the radiative losses for the overall collision. That’s because the accelerations of the two electrons are almost mirror images of each other, so the radiation pattern of one is almost entirely cancelled out by the other. I want to deal with the question of: just how much is the residue?

A lot of first-year physics students notice something funny about the formulas for acoustic power: if you have two sources one on top of the other, the amplitude of the wave doubles, so you get four times as much power. The other side of the coin is that for opposite amplitudes, you get zero power. If this doesn’t bother you when you first encounter it, well…it ought to.

The paradox starts to make sense when you look at the effect of proximity of the two sources. If they are close enough to be coherent, you do indeed get these interference effects. But once the separation of the sources becomes on the order of one wavelength, the effect disappears. Oh, you can still have constructive interference…but only in selected directions. In other directions, you get destructive interference; and mostly, you get random degrees of interference, positive and negative. It averages out so the power of the two sources adds up to just the sum of the independent sources.

The case we are dealing with here is two sources of opposite phase which are very close to each other. It’s pretty hard to do the exact calculation, but it’s not so hard to get a pretty good idea of how the interference works. If we take the nominal power to be just the sum of the independent powers of the two sources, the actual power has to equal the nominal power for anything over one wavelength of separation, and it has to go to zero as the separation narrows below one wavelength. We can draw a graph of this:



It’s not too hard to believe that for short separations, that is, in the quadrupole regime, the power is proportional to the square of the separation. What does that mean for our present example? Well, we had a half-cycle time of 500 micro-picoseconds, which corresponds to a wavelength of 300 nanometers; with an “antenna length” of close to 300 picometers, this gives us a factor of 10^6 when we square the ratio. So the quadrupole combination radiates a million times more weakly than the independent electrons. Since we calculated a “naïve” power loss of 10^(-7), the realistic power loss, taking into account the quadrupole effect, is only one part in 10^13.
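If you want to see the shape of that curve without doing the full vector calculus, here is a rough sketch using simple scalar (acoustic-style) point sources. It isn't exactly the electron problem, but it shows the two regimes: suppression going like the square of the separation when the sources are close, and essentially no suppression beyond about a wavelength.

```python
# Total radiated power of two opposite-phase point sources, relative to
# two independent sources, as a function of their separation.  This is
# the simple scalar (direction-averaged) result, not the full quadrupole
# calculation, but it shows the behaviour described above.
import math

def relative_power(d_over_lambda):
    kd = 2 * math.pi * d_over_lambda
    if kd == 0:
        return 0.0
    return 1.0 - math.sin(kd) / kd      # averaged over all directions

for d in (0.001, 0.01, 0.1, 0.3, 1.0, 3.0):
    print(f"d = {d:5.3f} wavelengths  ->  relative power {relative_power(d):.2e}")
```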

Do the losses ever become appreciable? We are dealing in the present case with speeds on the order of 1% of the speed of light. The dimensions of the radiating source are of a similar ratio as compared to the wavelengths of the emitted radiation. The radiated power actually increases rapidly with energy, and it becomes significant, not surprisingly, when the speeds become relativistic and the dimensions of the radiating system become a large fraction of the wavelength.
At least that’s how it works for the classical case of charged billiard balls. The sad fact is, with all the calculations, I still don’t know what to make of the quantum mechanical case.

How elastic is a collision?



In Volume 3 Chapter 3 of the Feynman Lectures, Feynman takes up the question of electron-on-electron collisions. Working in a center-of-mass frame, he says it’s pretty hard to distinguish between the two alternatives of a glancing collision versus a more direct one:



Actually, he doesn’t just say they’re hard to distinguish: he says they’re indistinguishable. The implications of this become especially interesting when the angle of deflection becomes ninety degrees, and that’s what got me started on this question. But along the way I started getting these nagging doubts: are the two processes really indistinguishable…that is, in a fundamental way, indistinguishable in principle, rather than just hard to distinguish as a practical matter?

It seems to me that there is some radiation involved in this process. At least when we treat the electrons classically, as tiny charged ping-pong balls, we see that there is acceleration involved; and when there is acceleration there is radiation. It is pretty clear that the forces of acceleration are more intense in the second case, where the angles are sharper. So wouldn’t there be more radiation, and wouldn’t that come at the expense of the kinetic energy of the electrons? And wouldn’t that be enough to let us distinguish the two cases?

So I thought I’d try and work it out and see just how much energy loss there should be, and that’s the project for today.

It shouldn’t be too hard to figure out the radiation. It was just a few weeks ago that we did the Larmor Formula, which explicitly gives the power of a moving charge as a function of its acceleration. So all we have to do is plug in the numbers, right?

It’s not so simple. There are two accelerating charges, and they are very close to each other. And the acceleration of one is the exact opposite of the other. So the radiated waves cancel each other out. Apparently, there is no radiation after all….is there?

It’s not so simple. Yes, the radiation patterns of the two electrons are equal and opposite to each other. But they are not quite on top of one another. The patterns are displaced in space. There is a name for this type of thing: it’s called quadrupole radiation.

If you ever want to convince yourself that you’re not smart enough to get involved with physics, you should check out this incredibly intimidating lecture on quadrupole radiation by Duke University professor Robert Brown.  From what I can gather, Brown is that extreme rarity…a physics prof who really understands all that insane 3-d calculus and can relate it to physical reality. Based on the physics profs I’ve actually met in my life, I would incline to believe that guys like Brown don’t actually exist. But in the face of direct evidence, it’s pretty hard to deny that there are people who actually function at this level. 

No, I’m not going to do all that 3-d calculus. I have to admit that I function on a very different level; one might call it a lower level, and yet even so I am able to achieve certain results. What I’m going to do will be a very rough ballpark of the actual radiation losses of the colliding electrons. Let’s give it a go.

It will be most convenient to consider the case of the direct collision, where the two electrons bounce straight off each other. If you’ve been reading my blogposts, you know I like to work in actual numbers, so I’m going to give them initial velocities of 1,000,000 meters per second. I make the energy around 2.8 electron volts at this speed. This is a very typical speed for an electron in an atom; the corresponding energy, for example, is close to that of the p orbitals in hydrogen.

Now let’s look at the collision. Of course the electrons don’t actually collide and bounce off each other: they are repelled from a distance by electrostatic forces. The question is: how close do they come?

The speeds involved are on the order of those in atomic orbitals, and the distances are also going to be on that order. We can do the calculation without too much trouble. I make it 2.5 Angstroms, or 250 picometers if you like. You can verify this for yourself: just take the total energy of both electrons and equate it to the energy of a system of two charges separated by a distance r:
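Spelling out the equation, it's nothing fancier than

$$2\cdot\tfrac{1}{2}mv^{2} \;=\; \frac{ke^{2}}{r} \quad\Longrightarrow\quad r \;=\; \frac{ke^{2}}{mv^{2}} \;\approx\; \frac{(9\times10^{9})(1.6\times10^{-19})^{2}}{(9.1\times10^{-31})(10^{6})^{2}} \;\approx\; 2.5\times10^{-10}\ \text{m}.$$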

Now we start with the ballparking. I’m going to draw a line in the middle of the page and say that each electron approaches within 125 picometers of that line. I’m also going to say that the electron reverses its velocity via uniform acceleration over an approximate distance of…125 picometers. I’ve drawn a graph of the interaction as I intend to approximate it:



I did some basic geometry to scale in the time factor, based on our initial velocities of 1,000,000 meters per second. You can see that the interaction takes place over a time span of 5x10^-16 seconds, or 500 micro-picoseconds. Or 500 nano-nanoseconds if you prefer.

We now have all the parameters needed to do the basic Larmor Formula calculation…that is, we have an acceleration, a charge, and a time span. What is the acceleration? Well, the speed changes uniformly from positive to negative one million m/s – a change of 2 million m/s – over a time span of 500 milli-femtoseconds. I make that an acceleration of 4x10^21 m/sec^2. We can verify this by equating it to the force of repulsion between two charges, using F=ma and k(Q^2)/r^2: I actually get exactly the same answer, which is unusual for me. But in this case I’ll accept that it means I’ve got it right.
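Here is the same cross-check in a few lines of Python, assuming the same round numbers as above (250 picometers at closest approach, a half-cycle of 5x10^-16 seconds):

```python
# Cross-check of the acceleration figure, two ways.
e = 1.602e-19        # electron charge, C
m = 9.109e-31        # electron mass, kg
k = 8.988e9          # Coulomb constant, N m^2 / C^2

v  = 1.0e6           # initial speed, m/s
dt = 5.0e-16         # s, the half-cycle worked out from the graph
r  = 2.5e-10         # m, separation at closest approach

a_kinematic = 2 * v / dt            # velocity swings from +v to -v in dt
a_coulomb   = k * e**2 / (m * r**2) # F = ke^2/r^2, then a = F/m

print(f"kinematic estimate: {a_kinematic:.1e} m/s^2")   # about 4.0e21
print(f"Coulomb estimate:   {a_coulomb:.1e} m/s^2")     # about 4.1e21
```

Both come out at 4x10^21, give or take.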

But why am I going to use the Larmor Formula? Didn't I already say that it will give the wrong answer because the two electrons cancel out each other's radiation patterns? Yes, that's right: but I'll correct for that later. First of all I'm going to do the "naive" Larmor calculation.

I think we’ll leave it off for now and continue with the calculation tomorrow.

Saturday, April 14, 2012

How to Approximate Arc Lengths

When I wrote about pi the other day, I said that I was originally planning to use some tricks I developed over the years for approximating arc lengths. Instead I ended up using the very well-known (I hope!) approximation for segment areas, based on two-thirds the area of the circumscribing rectangle:
If you've taken first year calculus you can easily see why it's true. If you're not quite sure, remember that every sufficiently shallow arc is ultimately equivalent to a parabola.
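For anyone who wants the one-line version: for a parabolic arch of half-width a and height h,

$$\int_{-a}^{a} h\left(1-\frac{x^{2}}{a^{2}}\right)dx \;=\; 2ah-\frac{2ah}{3} \;=\; \frac{2}{3}\,(2a)(h),$$

which is exactly two-thirds of the circumscribing rectangle of width 2a and height h.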

Today I want to tell you about my approximation for arc lengths. I haven't seen this anywhere, but that's not to say it's some kind of deep secret. It's just not something that gets mentioned too often, as far as I can see. And yet it comes in handy now and then.
Here's how it goes. You take the segment of arc, and of course the simplest approximation is to take the straight line distance. And you won't be that far off either. But of course you can do better, by taking the midpoint of the arc and using two line segments. Here I've drawn the straight-line in blue, and the bisected arc in red:

And the true distance is shown in black. The funny thing is that whatever you gain by bisecting the arc and using two straight lines instead of one, you just need to add another 33% of that gain to get the arc length almost exactly. That's how it works. I hope I've explained it clearly.
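Here's a quick numerical check of the rule, in Python, on a unit circle: take the bisected (two-segment) length and add one third of whatever it gained over the single chord.

```python
# Numerical check of the arc-length rule on a unit circle.
import math

def arc_estimate(theta, R=1.0):
    chord    = 2 * R * math.sin(theta / 2)       # one straight line
    bisected = 4 * R * math.sin(theta / 4)       # two straight lines
    return bisected + (bisected - chord) / 3.0   # add 33% of the gain

for deg in (30, 60, 90, 120, 180):
    theta = math.radians(deg)
    est = arc_estimate(theta)
    print(f"{deg:3d} deg: estimate {est:.6f}  true arc {theta:.6f}  "
          f"error {100 * (est - theta) / theta:+.4f}%")
```

Even for a 90-degree arc the error is under a tenth of a percent. The rule is the same thing as taking four times the bisected length, subtracting the chord, and dividing by three.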

The other question is why does it work. Maybe I'll let you figure it out for yourselves. Of course you can do it with calculus, and you can also do it by working from the Taylor Series for sine and cosine. But it's probably the most satisfying if you can reason it out from straight geometry. I'll leave it up to you.








Wednesday, April 11, 2012

Rational Approximations to Pi

Physics is my territory, not mathematics. Even so, after posting a hundred and twenty-something blog articles over the last two years, it's hard to believe I haven't done any straight-up math topics. Today I'm going to break my rule.

You will see that what follows is not the work of a mathematician, or even a physicist. At best, it represents the efforts of an engineer. (For a superb mathematical summary, check out this web article by David Bau.) And yet, I feel there is a certain charm to my method.

I didn't start out to write about pi. I was going to write about a couple of approximations I worked out over the years for approximating curves. One was an approximation for arc lengths, and the other was for the area of curved surfaces. Both of them are vaguely related to the well-known approximation for a circular segment as two-thirds the area of the circumscribed rectangle. This approximation becomes exact for the case of a parabola:
If you don't know this already, you really ought to. I end up using this approximation in my calculation of pi.

My method is based on complex numbers. You know that in complex algebra, if you take a number like 4+3i and square it, cube it, etc, you generate complex vectors that rotate around the origin in steady angular increments. This is a consequence of the general rule that when you multiply complex numbers, you multiply the magnitudes and add the angles. If you don't want to keep track of the ever-swelling magnitude, you can just normalize everything by dividing your original vector by its length.

It's a funny point that you can never get back to your original angle by this method. You can come awfully close, but you'll never return exactly to your starting angle. By the same token, you'll never land your vector exactly on the x or y axis, no matter how many times you circle the origin. For example, given the vector (4+3i)/5, it turns out that the 22nd power is 0.09928 + 4.999i, which is very close to the positive y axis. The 83rd power comes even closer: at -4.99996-.0176i, it is ever so close to the minus-x axis. Clearly, our vector is a very close approximation to the 166th root of unity.
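If you want to repeat the hunt without EXCEL, here is a little script in the same spirit (assuming the same normalize-and-raise procedure): raise the normalized vector to successive powers and flag the ones that land within a couple of degrees of an axis.

```python
# Search for powers of (4+3i)/5 that land near the x or y axis.
import cmath, math

z = complex(4, 3) / 5
w = 1 + 0j
for n in range(1, 200):
    w *= z
    angle = math.degrees(cmath.phase(w)) % 360     # angle in 0..360 degrees
    off_axis = min(angle % 90, 90 - angle % 90)    # distance to nearest axis
    if off_axis < 1.5:                             # tolerance is arbitrary
        print(f"power {n:3d}:  angle {angle:6.2f} degrees")
```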

How can we leverage this information to get an approximation for pi? Well, one way is to use our approximation for the area under a curve. Taking it to be two-thirds of the circumscribed rectangle, we get this from the geometry:


From what I've already said about the powers of 4+3i, it should be clear that eleven such sectors are very nearly five-fourths of a full circle...actually, that's wrong, because by the time you take 22 powers, you've actually circled the origin twice, so we're looking at nine-fourths of the area of the circle. From this we get the following rational approximation for pi:
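Spelling out the formula, in case the picture isn't in front of you: each sector is two 3-4-5 triangles of area 6 apiece plus a near-parabolic segment of roughly 2/3 x 6 x 1 = 4, or 16 altogether, and eleven of those sectors cover nine-fourths of the circle of radius 5. So

$$11\times 16 \;\approx\; \frac{9}{4}\,\pi\,(5)^{2} \quad\Longrightarrow\quad \pi \;\approx\; \frac{4\times176}{9\times25} \;=\; \frac{704}{225}.$$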

How good is this? In decimal notation, it comes to 3.12888, which is OK but not all that brilliant. We can actually fix it up just a bit by noticing that there was a sliver of pie missing...the width 0.09928 which I mentioned earlier, and which would have contributed an extra area of close to 0.25. This correction inches us up to 3.13000, but it's clearly still short by close to 1%.

It's not hard to guess the source of error: it's the parabolic approximation for the circular segment which is to blame. Yes, I just finished saying what a great approximation it is, but we're now up against the famous historical approximations to pi, which are renowned for their accuracy. Actually, you really ought to read that article by David Bau that I referenced at the start of this post...he shows how even the homely 22/7 that we're all familiar with is in fact a much much better approximation than anyone has a right to expect. But that's another story.

The main reason the parabolic area approximation falls short for us in this case is that we're taking too big a bite. The approximation becomes better as you work with shallower circular arcs. I wanted to stick to rational approximations, so I looked for triangles similar to 3-4-5 except thinner, so we work with shallower arcs. Such triangles are easy to construct, and the 11-60-61 triangle gives us some nice values.

Without redrawing the same pictures, what I found in EXCEL was that the 26th power of the complex vector falls very near the negative-y axis. The exact value is 0.1174 - 60.9999i. This represents three-quarters of the area of the circle (of radius 61 this time). What about the area of each sector? Well, the two triangles add up to 660, and the circular segment adds, by our approximation, an additional 44/3. So, just as we did with our last calculation, we get the following result:
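Spelling out the arithmetic the same way as before: the 26 powers make thirteen of these sectors, each worth 660 + 44/3, and together they cover three-quarters of the circle of radius 61, so

$$13\left(660+\frac{44}{3}\right) \;\approx\; \frac{3}{4}\,\pi\,(61)^{2} \quad\Longrightarrow\quad \pi \;\approx\; \frac{105248}{33489}.$$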

Converting this to decimal, we get a value of 3.14276, which is just ever-so-slightly better than the old traditional 22/7 (decimal 3.14286)! What about the pie-sliver correction we made in our 3-4-5 analysis? It didn't help us much in that situation, but this time, it is actually quite helpful indeed. The sliver of width 0.1174 gives us an area of 3.74; when we factor this in to the calculation above, it adjusts our value of pi to 3.14142...a significant improvement. Clearly, the approximation becomes much much better as the arc becomes more shallow. But it's still not nearly as good as the next best fractional value, 355/113, which gives us 3.1415929..., diverging from the true value of pi only in the seventh decimal place!

Although I do not seem to be competing for accuracy and economy of computational resources with other methods, it is somewhat noteworthy that my techniques do lead to rational approximations...at least up to the point where I do the sliver correction. As David Bau points out, the modern extremely accurate and rapidly-converging series for pi come not from geometry but from analysis. And as good as they are, they do not tend to give rational expressions; in particular, the elusive 355/113 seems to defy all attempts at "rational" explanation. 

It is an inexplicable fact of human nature that I, and others like me, will nevertheless continue to play around with our primitive methods, hoping against all odds to stumble upon the key to unlock the secrets of pi and its rational approximations. Although we are almost certainly doomed to failure, we just can't turn and walk away.









Monday, April 9, 2012

What good are the rotor slots?

Last week I argued that the normal design of rotor slots in an induction motor seems to guarantee that the magnetic field will avoid passing through the current-carrying rotor bars. Here is the picture I drew:
If the field bypasses the rotor bars, then where is the force to turn the motor? And yet they've been building motors this way for a hundred years. So I've been racking my brains trying to figure out what's wrong with my analysis. Here are some of the thoughts I've had:

1. Maybe I'm not taking into account the total magnetic field. The lines I've drawn are intended to show the field due to the magnetising current in the stator. What about the load current in the stator, and the induced currents in the rotor? Don't they generate their own fields, thereby nullifying the significance of the simple drawing I've shown here?

I don't think so. First of all, we don't like to draw the fields from the rotor currents, because a conductor feels a force mainly due to all the other fields, not its own field. But then, what about the fields due to the load currents flowing in the stator? That's a little trickier to answer, but we can get a pretty good idea by looking at the ideal case where the air gap goes to zero and there is no leakage flux. In that case, we have an ideal transformer, and the magnetic fields created by the secondary currents are exactly nullified by the additional currents that flow on the primary side. Once again, we return to the situation where the only magnetic field we need to consider is the original field due to the magnetising current in the stator. Just as I've drawn.

2. Maybe you want to argue that the case of no leakage flux becomes degenerate, and the motor doesn't work. I don't buy that. Analyzed as a transformer, the case of no leakage corresponds to the perfect transformer, in which case the rotor currents are purely resistive. I already showed that the case of pure resistive current corresponds to the ideal magnet configuration, with the stator field at right angles to the rotor field. If that doesn't generate torque, what does?

3. What about the saturation of the magnetic iron? Doesn't this change the field path?

I don't think so. I hate to go out on a limb, but I don't think we need to worry about that. After all, we can run a motor on half voltage, keeping it well within the magnetic limits of the iron, and I don't think the performance is all that different, qualitatively speaking, once you account for the lower power levels.

4. What about analyzing the forces between the magnetised rotor vs the magnetised stator, instead of trying to analyze the forces between the stator field and the rotor currents? Maybe we need to look on it as magnet-on-magnet?

I don't think so, but I'm on shaky ground here. I think the magnet-on-magnet analysis is just a different version of the same physics. I really don't think it's a whole nother set of forces in addition to the forces on the conductors. It's just really hard for me to believe that the total force can't be analyzed just by looking at the force on the rotor bars. Maybe you can analyze it by looking at the magnets instead,  but I'll be really surprised if it turns out you have to look at both sets of forces. In any case, it still seems really wrong that there are no forces to speak of on the rotor bars.

5. Doesn't all the flux have to cross through the rotor bars anyways? This is a scary argument that keeps pushing its way into my mind. The stator field is moving a little faster than the rotor, so those lines of force do in fact sweep through the rotor bars as the rotor orientation shifts backwards relative to the stator field. Yes, the field lines crowd into the iron salients, but then they snap across the gaps very quickly. Is there some additional component of magnetic force related to the velocity of the field lines in addition to their density?

It's a tempting argument but I just can't buy it. The force on a conductor has got to be based on the current and the field, and nothing else. There's no additional term in the equations which brings in the velocity of the field lines. Not that I can see.

And that's just about it. You can see I've been turning it over and over in my mind, and I just can't see how those rotor slots are designed to achieve any kind of torque in the motor. As I said the other day, my idea would have been to have a solid cylindrical rotor coated with a thin layer of copper conductor. So the magnetic field lines would have to cut through the copper to complete their circuit from pole to pole through the magnetic iron. I still can't see what's wrong with my theory.

We have to ask the question: if I can't figure out how a motor works, how am I going to figure out the quantum structure of the helium atom? That's a tough question, and I'm the first to admit it.


Friday, April 6, 2012

Why does the rotor need slots?

I wrote the other day about the importance of the air gap in an induction motor. It seems, after all, that the rest of the world doesn't share my concern about this parameter. I looked up and down the internet and the unanimous opinion is that the air gap just has to be big enough to provide mechanical clearance for the rotor.

I found some good pictures of typical rotor stampings on this website, so we can see what we are talking about. Here is a good one:

You can see that the linear profile is about fifty percent copper and fifty percent iron. You want iron to carry the magnetic field, and you want lots of copper to provide a path for current to flow. (The fifty-fifty compromise is not that uncommon in these engineering situations, and not only because people are too lazy to work out the true optimum. But that's another story.)

But I'm still not convinced that this is entirely right. As I explained in my last article, if you want torque then the rotor current has to flow in a region where there is a strong magnetic field. What good is the magnetic field if it's inside the iron? It has to cut across the copper bars, and I can't for the life of me see why that should happen in this typical rotor configuration.

You have to understand something about what we call the magnetic permeability of iron. As compared to all ordinary materials, including in this case such things as air and copper, the relative permeability of magnetic iron is on the order of 1000:1. That means, in anthropomorphic terms, that a line of magnetic flux would just as soon pass through 1000 millimeters of iron as jump a gap of 1 millimeter through air or copper. If you'll forgive my shaky graphics, we can see what this means for this particular rotor in a typical magnetic field situation:

The field lines will do just about anything to stay inside the iron. So what good is all that copper if it's not in the magnetic field? I just don't get it.
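Just to put numbers on that 1000-to-1 claim, here's a trivial sketch. The reluctance of a magnetic path goes as length over (permeability times area), so for equal cross sections a full meter of motor iron "costs" the flux about the same as a single millimeter of air or copper. (The relative permeability of 1000 and the cross section are just the round figures I've been assuming.)

```python
# Reluctance comparison: 1000 mm of iron versus 1 mm of air/copper.
import math

MU_0    = 4e-7 * math.pi            # permeability of free space
MU_IRON = 1000 * MU_0               # rough relative permeability of motor iron
AREA    = 1e-4                      # some common cross section, m^2 (arbitrary)

def reluctance(length, mu, area=AREA):
    return length / (mu * area)     # amp-turns per weber

print("1000 mm of iron:", f"{reluctance(1.0,   MU_IRON):.3g} A-turns/Wb")
print("   1 mm of air: ", f"{reluctance(0.001, MU_0):.3g} A-turns/Wb")
```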

Obviously people have been building motors that work for 100 years. So maybe I'm missing something here. I just can't see what's wrong with my analysis.

According to my theory, I wouldn't even have rotor slots. My rotor would be a solid iron cylinder, and my "squirrel cage" would be a smooth copper coating about 3 millimeters thick all around the surface of the rotor cylinder. That way the magnetic field lines would have to cut through the copper to complete their circuit. Yes, it would mean the motor has lower inductance, which means it would need a higher magnetising current. But it does no good to design for low magnetising current if the tradeoff is that your magnetic field manages to avoid the rotor bars.

So that's my theory. You tell me what's wrong with it.



Wednesday, April 4, 2012

Flux and the Air Gap

Since I've started writing about induction motors, I've been searching the internet for references to back up what I've been saying. In all honesty I must report that I've found very little. In particular, no one else seems to make the point that repulsion is the dominant mode, especially during start-up. So I'm kind of out on a limb here. I don't know if this is a good thing or a bad thing.

Today I'm going to go even farther out on a limb when I talk about the importance of the air gap in a motor. From what I've seen so far, people regard the air gap as a necessary evil. Without the air gap, the rotor and stator are stuck together. You need just enough air gap to let the rotor turn. If you increase it too much, you pay the price in excessive magnetising current.

I'm not entirely buying this argument. I have a problem with the idea of a very small air gap. Last time I showed a sketch of a motor running near full load condition. According to how I understand it, the currents and fields are distributed more or less like so: (Remember I have the stator windings on top, with the field sweeping from left to right:)

You can see I've got the rotor current flowing in a region where the field is pretty strong. You might also notice that I've drawn a pretty healthy air gap. I did this for a reason. Because something nasty happens if you try to close the air gap to a minimum:

If you compare the two pictures, you see that as the air gap closes, the magnetic field lines choose to skew around the slots with the rotor bars, making straight for the iron. Those field lines would rather be flowing through iron than air or copper. That's a problem, because the rotor bars only experience a force when the magnetic field passes right through them, not around them.

Isn't this just as much of a problem with the wider air gap? Not to the same extent. The magnetic field lines arrange themselves to provide the most favorable path. In the case of the wide air gap, they can shorten their path through the gap by perhaps 20% by choosing to avoid the copper bars; but this comes at the expense of a certain degree of overcrowding. So there's a compromise. By contrast, with the narrow air gap, the incentive to crowd is much greater. In anthropomorphic terms, the flux lines would much rather go from stator iron to rotor iron than tire themselves out going through all that air and copper. For the case of the wider air gap, it just doesn't make as much difference: the flux lines have already gone through so much air, a little more copper doesn't matter that much.

The wider air gap does not come without a price. The magnetisation current needed to set up the field is that much greater. In a transformer, the magnetisation current is 5% to 7% of the full load current, so we tend to ignore it. In a motor, because of the air gap, it is already much larger to begin with. Increasing the air gap only makes it worse. So there must be a design compromise. Nevertheless, I can't see how a wider air gap isn't desirable from the point of view of generating more torque. The more air gap, the bigger you can effectively make your rotor slots while still taking advantage of the available magnetic flux.

And that's how I see it. As I said, there is a dearth of confirmatory information on the internet, so I'm kind of out on a limb here. If someone wants to tell me I'm right or I'm wrong, fire away. I'd really like to know.