Abel Prize, 2020
Mar. 18th, 2020 01:33 pm
The Abel Prize was just awarded to Hillel Furstenberg and Gregory Margulis for their use of methods from probability and dynamics in group theory, number theory and combinatorics. I've interacted with both of them. I took complex analysis with Margulis when I was an undergrad. My interaction with Furstenberg was more limited: I talked to him a few times when he was visiting, mainly while I was an undergrad. Hillel seemed willing to listen to whatever half-baked idea an undergrad had that was connected to something he did.
People often think of the Fields Medal as the equivalent of a Nobel in math, but the Abel Prize has a pretty similar stature.
To give some idea of what they did, let's talk about random walks. Imagine you are on a standard number line, starting at zero. You repeatedly flip a coin. If you get heads, you walk one unit in the positive direction, and if you get tails, you walk one unit in the negative direction. As you repeat this, you wander somewhat randomly; this is called a random walk.
There are a lot of questions you can ask. For example, how likely is it that I'm going to end up back where I started? It turns out that if you play this game, you'll find your way back home infinitely many times with probability 1. We can also ask: if I play this game for t steps, about how far should I expect to be from where I started? I could keep hitting heads almost every time, but that seems unlikely. It turns out that your distance from the origin is almost surely not much more than roughly the square root of the number of steps you have taken. (It also turns out that this fact is closely related to one reason we believe the Riemann Hypothesis is true, but that's a story for another time.)
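If you want to see that square-root behavior for yourself, it's easy to simulate. Here's a rough Python sketch (my own illustration, nothing to do with the prize-winning work; the step and trial counts are arbitrary choices):

```python
import math
import random

def random_walk_1d(steps):
    """Simulate a 1D random walk: +1 for heads, -1 for tails."""
    position = 0
    for _ in range(steps):
        position += 1 if random.random() < 0.5 else -1
    return position

t = 10_000       # number of coin flips per walk
trials = 1_000   # how many walks to average over

average_distance = sum(abs(random_walk_1d(t)) for _ in range(trials)) / trials
print(f"average distance from the origin after {t} steps: {average_distance:.1f}")
print(f"sqrt(t) for comparison: {math.sqrt(t):.1f}")
```

The average won't match sqrt(t) exactly, but it should land in the same ballpark, and it stays in that ballpark as you crank t up.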
One interesting thing we can do is ask the same questions in higher dimensions. Imagine a random walk in two dimensions. Now, instead of flipping a coin, we roll a four-sided die, and depending on the result go one unit left, right, up, or down. It turns out that the answers to our questions above remain the same. We'll still be no more than about the square root of our number of steps from the origin, and we'll still return to the origin infinitely many times almost surely.
What about three dimensions? Now we have six possible directions. And now things change! The square root claim is still true, but we don't expect to return to the origin infinitely many times. In this sense, a wandering ant will eventually find its colony, but a wandering bird might not find its nest.
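A finite simulation can't prove any of this, but it can make the contrast visible. Here's another rough sketch along the same lines (again my own, with arbitrarily chosen step and trial counts) that counts returns to the origin in one, two, and three dimensions:

```python
import random

def count_returns(dimensions, steps):
    """Walk `steps` steps, each time moving +1 or -1 along a randomly chosen axis,
    and count how many times we come back to the origin."""
    position = [0] * dimensions
    returns = 0
    for _ in range(steps):
        axis = random.randrange(dimensions)
        position[axis] += random.choice((1, -1))
        if all(coordinate == 0 for coordinate in position):
            returns += 1
    return returns

steps = 100_000
trials = 10
for dimensions in (1, 2, 3):
    average = sum(count_returns(dimensions, steps) for _ in range(trials)) / trials
    print(f"{dimensions}D: about {average:.1f} returns to the origin in {steps} steps")
```

In one and two dimensions the count keeps growing as you add more steps; in three dimensions it typically stalls at a small number.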
Part of what Margulis and Furstenberg did was to realize that by asking questions like these about more abstract objects, rather than the familiar spaces we're used to, they could better understand the structure of those objects.
One idea they worked with is applying notions from ergodic theory in a similar context. Ergodic theory is a branch of what is sometimes called "chaos theory"; it studies systems which, unlike the random walks, have no intrinsic randomness but are still hard to predict. Consider, for example, the following function: we'll start with a number between 0 and 1 (inclusive), and we'll double it. If the resulting number is greater than 1, we'll subtract 1; if not, we'll keep that number. Let's look at three example numbers.
Let's start with 1/3. The first time, we double it, so we get 2/3. Then we double again so we get 4/3. That's greater than 1, so we subtract 1 to get 1/3. And we're back where we started. So this formed a loop. But not all numbers form a loop this way, and in fact most numbers don't.
Imagine instead that we started with 25/48. We then have 2(25/48) = 25/24, so we have to subtract 1 to get 1/24. Continuing, we get 1/12, then 1/6, then 1/3, and we're now in the loop from 1/3. But that loop didn't contain 25/48 to start with. So a number might end up in a loop that doesn't itself contain the number.
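This is easy to play with on a computer if you use exact rational arithmetic, so no rounding error creeps in. A small Python sketch (my own; the function names are just made up for illustration):

```python
from fractions import Fraction

def double_step(x):
    """One step of the map above: double, and subtract 1 if the result exceeds 1."""
    y = 2 * x
    return y - 1 if y > 1 else y

def orbit(x, steps):
    """The starting value followed by `steps` applications of double_step."""
    values = [x]
    for _ in range(steps):
        x = double_step(x)
        values.append(x)
    return values

print(orbit(Fraction(1, 3), 6))    # stays in the loop 1/3, 2/3, 1/3, 2/3, ...
print(orbit(Fraction(25, 48), 8))  # 25/48, 1/24, 1/12, 1/6, then the 1/3 loop
```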
But some numbers never end up in a loop at all. Let's look at an example.
Let's start with sqrt(2)-1, which is about 0.414... Doubling, we get 2(sqrt(2)-1) = 0.828... We double and subtract 1 to get 4(sqrt(2)-1)-1 = 0.656..., and we do this again to get 2(4(sqrt(2)-1)-1)-1 = 0.313... If we double again, we won't need to subtract 1, so we'll have 2(2(4(sqrt(2)-1)-1)-1) = 0.627..., and so on. It turns out we never get into a loop. Worse, numbers very close to each other might behave very differently. sqrt(2)-1 and sqrt(2)-1+1/108 are very close together as numbers, but if you apply this process to each of them more than a few times, you'll find that the two sequences start looking completely unrelated in terms of how large one is at a given stage compared to the other.

Margulis and Furstenberg applied ideas about how to understand systems like this to other areas of math. That's now a very big area of study, and a lot of it is owed to their work.
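If you want to watch that sensitivity happen numerically, here's one last sketch (again my own illustration). Ordinary floating point is accurate enough for a dozen or so steps, which is plenty to see the two orbits come apart:

```python
import math

def double_step(x):
    """One step of the map: double, and subtract 1 if the result exceeds 1."""
    y = 2 * x
    return y - 1 if y > 1 else y

a = math.sqrt(2) - 1            # the starting point from the example above
b = math.sqrt(2) - 1 + 1 / 108  # a nearby starting point
for step in range(13):
    print(f"step {step:2d}: {a:.6f}  {b:.6f}")
    a, b = double_step(a), double_step(b)
```

After the first several steps, the two columns stop tracking each other at all.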