Sunday, September 21, 2014

V. I. Arnold's Topological Proof

I just came across the strangest thing. This Israeli guy put up a YouTube video about how you can prove the fifth-degree equation is unsolvable, based on the ideas of a Russian mathematician named V. I. Arnold. I haven't worked it all out yet, but it starts off with the strangest idea. You take an equation, and then plot out the roots in the complex plane. Then you also plot out the coefficients in the complex plane. Why the complex plane, you ask? The coefficients are all integers. Why not just plot them on the number line? Answer: because we're about to mess with them.

This is the funny thing. Take the simplest possible equation, like:

x^2 - 2 = 0.

The coefficients are 1 and -2, and the roots are +/- sqrt(2). Here is what we are going to do. We are going to take the "2" (the constant term) from the equation and move it slowly along a circle about the origin. And we're going to observe what happens to the solutions of the equation as it changes.

Try it yourself! You'll see that when you complete the circle, so that the equation returns to its original form, that the roots of the equation also go back to their original values. But they are reversed! You have to go around the circle twice to put the roots back where they started. It's the strangest thing.
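You can also run the experiment numerically. Here is a minimal Python sketch of the idea (my setup, not the Matlab code from the video): drag the constant term once around the circle of radius 2 and track one root by always picking whichever of the two square roots is closest to the previous position.

```python
import cmath
import math

# Drag the constant term c of x^2 - c = 0 once around the circle |c| = 2,
# starting and ending at c = 2.  At each small step, follow the root by
# picking whichever square root of c is closest to the previous position.
steps = 1000
root = math.sqrt(2)                      # start at the root +sqrt(2)
for k in range(1, steps + 1):
    c = 2 * cmath.exp(2j * math.pi * k / steps)
    r = cmath.sqrt(c)
    root = r if abs(r - root) < abs(-r - root) else -r

print(root)   # close to -sqrt(2): one full loop of c has swapped the roots
```

The number of steps doesn't matter much; the tracking only has to keep the two roots from being confused between consecutive steps.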

In the video, the Israeli guy claims (without really explaining it all that clearly) that in general, you can construct loops in the map of the coefficients such that by dragging the coefficients around those loops, you can force every possible permutation of the roots. I've shown you what happens when you drag the "2" about a loop in the complex plane - in the map of the roots, the two square roots of two switch places. Arnold's idea is that in general, you can force every permutation of the roots by dragging the coefficients around the complex plane.

I haven't yet figured out why this must be so (EDIT: Okay, I've thought about it and it's true: Boaz explains it around 4 minutes into his video), but since it's true, it has consequences. The idea is that if there is a solution to the fifth-degree equation, it has to be written in terms of the coefficients arranged somehow within a complicated nested system of radicals...but for any such representation, there are restrictions on where the roots can go when you mess with the coefficients.

How does this help us? Well, for one thing, in the example I've just shown, it proves that the solutions of x^2 - 2 cannot be rational. (EDIT: No, that's not quite right: it only shows that you need to write them with a formula that includes a square root sign.) Why? Because the loop we constructed flips them around. But it's not so hard to see that if the roots are given by rational expressions, then any loop of the coefficients in the complex plane has to bring each of those rational expressions (for the roots) right back to where it started. So rational expressions don't flip around with each other, the way the square roots of two did.

Anyhow, that's the basis of Arnold's proof, which I'm not able to go much farther into right now. But it's something to think about.

If you're a follower of my blog, you know I've written a lot about the fifth degree equation. I think I explain it pretty well here at Why You Can't Solve The Quintic. But I've never seen anything like Arnold's method before.


Balarka said...

Thanks for the email, Martin, just read it earlier this morning and have replied to it. This post would definitely be of interest to me. I read about the topological proof just a few weeks ago from Arnold's original notes, so I guess I can participate in this ongoing discussion with fruitful contributions to the blogpost. I'll just raise a few points about your explanation:

You considered the polynomial x^2 - 2 and indicated that if one lets the constant coefficient 'loop round' the complex plane around z = 2, then the roots sqrt(2) and -sqrt(2) are exchanged. I might point out that this doesn't happen if the loop is small. For example, let the loop be the small circle starting from z = 2, moving upward into the upper half-plane, encircling z = 1.5, dropping downward, crossing z = 1, and finally completing the path by coming back to z = 2 again (i.e., the circle |z - 1.5| = 0.5, centered at z = 1.5). Throughout this process, the roots are NEVER exchanged; that only happens if the loop is large enough.
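This is easy to confirm numerically; here is a short Python sketch (mine, not from Arnold's notes) that follows one root of x^2 - c = 0 while c traverses exactly this small circle, always taking the nearest square root at each step:

```python
import cmath
import math

# Let the constant coefficient c traverse the small circle |c - 1.5| = 0.5,
# starting and ending at c = 2.  This loop does not wind around c = 0.
steps = 1000
root = math.sqrt(2)                      # start at the root +sqrt(2)
for k in range(1, steps + 1):
    c = 1.5 + 0.5 * cmath.exp(2j * math.pi * k / steps)
    r = cmath.sqrt(c)
    root = r if abs(r - root) < abs(-r - root) else -r

print(root)   # back near +sqrt(2): this small loop exchanges nothing
```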

A better candidate for visualization would be x^2 + 2. The roots are i*sqrt(2) and -i*sqrt(2), and if one lets the constant coefficient loop around any well-behaved closed curve crossing the negative real axis at least once, the roots are easily seen to be exchanged:

Consider the principal branch sqrt(z) for z = a + i*b in the upper-left quadrant (i.e., 'b' is positive whilst 'a' is negative). Note that if one sets z = -2 + i*k and lets k decrease through the positive reals from k = N (for some large N) down to k = 0 (equivalently, imagine the vertical line Re[z] = -2 in the complex plane and let a point continuously drop from the upper end of the line down to z = -2), the sign of the imaginary part of sqrt(z) is left absolutely unchanged (positive). However, as z crosses the negative real axis and comes upon the lower-left quadrant (i.e., z = a + i*b where 'a' and 'b' are both negative), sqrt(z) suddenly jumps: the sign of its imaginary part changes from positive to negative. The same happens when one lets z move toward the upper end of the line Re[z] = -2 from the lower end. Some numerical examples:

z = -2 + 1*i ===> sqrt(z) = 0.3435 + 1.4553*i
z = -2 + 0.7*i ===> sqrt(z) = 0.2438 + 1.4351*i
z = -2 + 0.5*i ===> sqrt(z) = 0.1754 + 1.4251*i
z = -2 + 0.3*i ===> sqrt(z) = 0.1057 + 1.4182*i
z = -2 + 0.1*i ===> sqrt(z) = 0.0353 + 1.4147*i
z = -2 + 0.001*i ===> sqrt(z) = 0.0003 + 1.4142*i
z = -2 ===> sqrt(z) = 1.4142*i

So far so good, the imaginary part is still positive. Now let the point enter the lower half-plane:

z = -2 - 0.001*i ===> sqrt(z) = 0.0003 - 1.4142*i
z = -2 - 0.1*i ===> sqrt(z) = 0.0353 - 1.4147*i, etc.

The appearance of conjugated forms is not so surprising, as it is evident from the fact that polynomial functions preserve conjugation, but the sudden change of sign of the imaginary part certainly *is* remarkable. This implies that if z = -2 + i*s and s > 0 then sqrt(z) has both real and imaginary parts positive (i.e., lies in the upper-right quadrant), whereas if s < 0 then sqrt(z) has a positive real part but a negative imaginary part (i.e., lies in the lower-right quadrant).

But this is clearly a violation of continuity!! If one continuously moves z = -2 + i*s from the region s > 0 to the region s < 0, the imaginary part of sqrt(z) at first seemingly tends to sqrt(2), but jumps suddenly to -sqrt(2) as s enters the negative reals.
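The same jump shows up in any software that computes the principal branch. A tiny Python check (mine; Python's cmath.sqrt is the principal branch, with its cut along the negative real axis):

```python
import cmath

# Principal-branch square roots just above and just below the branch
# cut on the negative real axis, near z = -2.
above = cmath.sqrt(-2 + 0.001j)
below = cmath.sqrt(-2 - 0.001j)

print(above)   # imaginary part near +sqrt(2)
print(below)   # imaginary part near -sqrt(2): the sign has flipped
```

The real parts are both tiny and positive; only the sign of the imaginary part jumps, exactly as the table above shows.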

This has already become quite long for a comment, so I'll leave the rest of the explanation for now. But if you're interested, Marty, I could try posting it or mailing it to you.

I. J. Kennedy said...

You might also want to take a look at:

marty green said...

I'm awfully sorry Balarka, but I think you're missing the essence of Arnold's method. Your example relies on the human convention of choosing the "positive" branch of the square root function. This is not what algebra is about: see this argument that I had with Arturo Magadin a couple of years ago on

Obviously if you think I'm wrong about this, you're in good company. But the way I understand abstract algebra, the whole beauty of the subject is that you have multiple values for the expression under a radical sign, and you're not allowed to say that this one is more special than that one.

Looking at it my way, there is no difference between your example and my example. In both cases there are two roots of the equation, and as you loop your path about the origin, the two roots smoothly and continuously move around their own circle. When the loop of the constant coefficient has come full circle, the loop of the roots is only halfway around, so the roots have been exchanged.

Either way, they exchange places smoothly and continuously, and there is no sudden jump based on arbitrary human conventions.

Balarka said...

I am well aware of the exchange of the roots. We disagree on the "continuous exchange" of the roots ;)

The video fools us. They are really operating the square root NOT over C but over something else. If you take w(z) = sqrt_1(z) (any branch) and let z loop around z = 0, then it actually ends up as w(z) = sqrt_2(z) quite abruptly as z crosses the branch cut. If the algebra doesn't convince you, open up a page in GeoGebra, set a complex point z and two points defined as sqrt(z) and -sqrt(z), and watch how they get exchanged (NOT continuously) as z hits the negative real axis. However, continuity is preserved if you "patch up" sqrt(z) (the positive branch) for Re[z] > 0 with -sqrt(z) (the negative branch) for Re[z] < 0. Try this yourself in GeoGebra. That would convince you that the exchange of the roots is not continuously possible in C. In fact, a big part of Arnold's proof is about constructing "more general surfaces" than C over which you can patch up the branches of sqrt(z) and -sqrt(z) to make the square root continuous. I'll send you some images of this "discontinuous exchange" and of this surface (also called a "Riemann surface" in math jargon).

I, of course, welcome any valid correction of errors in my understanding. I am fairly new to this geometric proof and am looking forward to a better understanding.

Balarka said...

As a reply to I. J. Kennedy: I haven't seen the lecture, but it seems to be on the topic of whether quintics can be solved if you "adjoin" exponentials and logarithms to the elementary operations.

The answer, as far as I know, is still open unconditionally. Conditionally on Schanuel's conjecture, it has been settled by Timothy Y. Chow in his paper "What is a closed-form number?". It's a fun problem and I believe it is of interest in transcendental number theory.

marty green said...

Balarka, I will have limited internet access for the next few days, but maybe I can send an email to Boaz Katz (who posted that video in the first place) and see if he can help sort this out.

Unknown said...


As Marty suggested, to understand this proof you need to think of sqrt(z) as a double valued function (i.e. a function from C to C^2) defined as the two values z1,z2 that satisfy z1^2=z2^2=z (and similarly for higher degree radicals). This function is continuous from C to C^2 as long as you don't pass through the point z=0 (which you should avoid when building the loops). The discontinuities you mention are simply a result of you trying to view the radicals as single valued functions from C to C by choosing branch cuts. As such, I agree they are not continuous but this has nothing to do with the proof.
The video of the roots shows the actual numerical results (from matlab), it is done on C and is not meant to fool you :-). Please try it yourself!
P.S. The video I.J.Kennedy refers to is about the same proof of the insolubility of the quintic, not about "adjoining" exponentials. While we should trust our own brains, it is reassuring to hear a professor of mathematics (and a good one...) appreciate this version of the proof (he discusses my video...). Moreover, he improves the proof by doing the group theory part directly (in one line) instead of using a computer...

Boaz Katz

Unknown said...

minor correction - sqrt(z) is actually continuous even at z=0. z=0 should be avoided for a different reason: so that the tracking of the different roots is well defined.

Balarka said...

OK, let's see... I am not sure how you are supposed to think of the function sqrt(z) as C to C^2 and make it continuous at the same time. z |--> (sqrt(z), -sqrt(z)) is the most canonical way I can think of embedding sqrt(z) in C^2, and for this to be continuous you need each of these functions to be continuous over the two copies of C, respectively. I don't believe this is the case, and Arnold actually agrees with me if I am not mistaken (have a look at V. B. Alekseev, "Abel's Theorem in Problems and Solutions", which presents Arnold's proof, pages 76-78). sqrt(z) is not continuous over the two sheets if my understanding is not flawed, and one really has to think of sqrt as operating on the more general Riemann surface to make it continuous: this is precisely why (-infinity, 0] is a branch cut of the function.

So I believe we are still in disagreement here. sqrt is not really continuous on C, nor even as a map into C^2, but rather on the Riemann surface embedded in R^4. Apologies if I am being silly.

PS: Fair enough, I never saw the video. I was reading the comments (now that I see it, it is made by yourself!) which relate to something I have been fiddling with lately.

Balarka said...

Can you show me a proof that sqrt(z) is holomorphic on all of the complex plane, even on the ray (-infinity, 0]? I have a disproof of this (in fact, I think I can prove that sqrt(z) is holomorphic on C if and only if (-infinity, 0] is removed, i.e., a proof that it's a branch cut), so I can convince myself that my argument is flawed if you can show me your proof.

I guess the best way for me would be to see a mathematically rigorous proof rather than fiddling with numerical values and plots =)

Thanks in advance,

Unknown said...

Let's define the relevant statement needed for the proof.
Heuristically: when z moves continuously, each root moves continuously.
Exactly: given a continuous curve z(t): [0,1] -> C (t being real and running from 0 to 1) which does not pass through the point z(t) = 0, and given a choice z10 of one of the two roots at the beginning of the curve, z10^2 = z(0), there exists one and only one continuous curve z1(t): [0,1] -> C which satisfies z1(0) = z10 and z1(t)^2 = z(t). This continuous curve is what we call the trajectory of the root that started at z10.
Do you agree that this is the relevant statement?
I think you will see it is correct, but if you still think it is not, we can discuss analytic examples and a proof.


Balarka said...

Ah, no wait, I am starting to see what is happening. By sqrt(z) you mean the multivalued aggregate of the two branches of sqrt, while I was always thinking of one of the branches defined on the corresponding upper or lower half-plane. Is that right?

In that case, it indeed does make sense. For sqrt(z) to come back to itself after a loop (i.e., for the monodromy to be trivial), one has to have arg(sqrt(z)) change by 2*pi*k, and for that one must have arg(z) change by 4*pi*k, i.e., z must wind an even number of times around z = 0.
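Putting the two comments together, the winding condition can be checked with a few lines of Python (my sketch of the tracking, with a made-up helper name lift_sqrt):

```python
import cmath
import math

def lift_sqrt(winds, steps=2000):
    """Continuously track a square root of z(t) = exp(2*pi*i*winds*t),
    starting from the root +1 at z(0) = 1, by nearest-root stepping."""
    root = 1.0 + 0.0j
    for k in range(1, steps + 1):
        z = cmath.exp(2j * math.pi * winds * k / steps)
        r = cmath.sqrt(z)
        root = r if abs(r - root) < abs(-r - root) else -r
    return root

print(lift_sqrt(1))   # near -1: winding once exchanges the two roots
print(lift_sqrt(2))   # near +1: winding twice brings the root back
```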

Marty Green said...

A couple of points: the video I. J. Kennedy posted is a lecture by Dror Bar-Natan and I finally watched it. It's a really good explanation and he really hit home with me when he talked about taking a whole course in abstract algebra, and then one day the prof says "....and therefore the fifth degree is unsolvable..." and you really didn't see it coming or know how you got there.

The other thing is when Boaz apologizes for using brute force to exhaustively work out the commutators of commutators: yes, it's nice that Dror does it in an elegant way, but on the other hand there's also a certain charm in the fact that Boaz maneuvers you into a situation where the search, no matter how laborious, is demonstrably and clearly finite (and conclusive). I kind of like that.
