The Facebook group "A Way With Words" had the line "It was a dark and stormy night. In the tallest tower of Castle Greygallows, mad scientist Jacques LeFebre paced incessantly waiting for" and asked how it should continue. Here's my response:  

It was a dark and stormy night. In the tallest tower of Castle Greygallows, mad scientist Jacques LeFebre paced incessantly waiting for the cliches to end. Of course, it would be a dark and stormy night. And of course, his castle had to have an ominous name. But did everyone then, even the narrator, need to assume he was not just a scientist but a mad scientist?

What he really wanted to do was just peer up at the sky, through his telescopes. After all, he had purchased this castle high in the mountains because the clear dry air would make it easier to observe. But did any person, much less the current weather, care? No. The villagers down below all assumed he had to be some sort of mad scientist, not simply a retired university professor. Although, he thought to himself, given that the previous owner had been Baron von Blud, who had turned out to be a vampire, and the owner before that had been a werewolf, and the one before that had been an actual mad scientist, maybe their conclusion wasn't unreasonable.

But that they had then sent one of their children to be a servant for him, simply because the child had a hunchback, well, that was going too far. But the child, who, thankfully, was not named Igor but rather Dimitri, had proven to be bright and good with numbers. Now, much of LeFebre's time was spent tutoring the youngster. Hopefully the boy would be bright enough that he could go to the university and make the world-changing discoveries that LeFebre had always dreamed he himself would make. And of course, Dimitri would not need to be stuck in the benighted village unless he wanted to be.
 
A preprint proving the Erdős-Faber-Lovász conjecture (EFL) for sufficiently large n just came out. Initial reactions are positive. The authors, Dong Yeap Kang, Tom Kelly, Daniela Kühn, Abhishek Methuku, and Deryk Osthus, are all serious graph theorists, so simply from the author list this looks like a real attempt. What is EFL and why do we care?

Recall, by a graph, mathematicians and computer scientists mean a collection of points (called vertices or nodes) some of which are connected by edges. For example, one might have a collection of people, some of whom are friends with each other, and we make a graph where we draw an edge between two nodes if the corresponding people are friends with each other.

In a graph, a clique is a collection of nodes in which each node is connected to every other node in the collection. For example, if we had the friend graph of Alice, Bob, Carol, and Diane, with Alice friends with Bob and Carol, and Bob friends with Carol and Diane, then Alice, Bob, and Carol form a clique. The reason we use the word "clique" should hopefully be obvious. (A minor note: We are using "connected" here in a slightly non-standard fashion. In graph theory we normally speak of two nodes which have an edge between them as being adjacent, and use connected to mean that we can follow a path of adjacent nodes between them. But I'm trying to keep the phrasing here as close to regular speech as possible.)
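To make the definition concrete, here is a small Python sketch; the friend list mirrors the Alice/Bob/Carol/Diane example above, and the function names are my own:

```python
from itertools import combinations

# Hypothetical friend graph from the example: each pair listed is friends.
edges = {("Alice", "Bob"), ("Alice", "Carol"),
         ("Bob", "Carol"), ("Bob", "Diane")}

def are_adjacent(u, v):
    # Edges are unordered, so check both orientations.
    return (u, v) in edges or (v, u) in edges

def is_clique(nodes):
    # Every pair of nodes in the set must have an edge between them.
    return all(are_adjacent(u, v) for u, v in combinations(nodes, 2))

print(is_clique({"Alice", "Bob", "Carol"}))           # True
print(is_clique({"Alice", "Bob", "Carol", "Diane"}))  # False: no Alice-Diane edge
```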

One thing we are interested in with graphs is graph coloring. This is where we try to give each node a color, and we want that no two nodes which have an edge between them have the same color. One of the original contexts where people cared about this was the Four Color Conjecture, which stated that if one had a map of countries, one never needed more than four colors to color the map with no two bordering countries the same color. (We are assuming each country is a contiguous section, so nothing like Kaliningrad or Alaska is allowed.) This is equivalent to saying that if I have a graph which is planar, in the sense that I can draw it with no edges overlapping, then I can color it with at most four colors. Here, each node corresponds to a country, and each edge represents that two countries share a border.
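A standard way to produce a (not necessarily optimal) coloring is the greedy algorithm: visit the nodes in some order and give each the smallest color its already-colored neighbors are not using. A minimal Python sketch, using a made-up four-country "map" where every country borders every other:

```python
def greedy_coloring(adj):
    """Greedily color a graph given as {node: neighbor list}.

    Not guaranteed to use the minimum number of colors, but it never
    needs more than (max degree + 1) of them.
    """
    colors = {}
    for node in adj:
        used = {colors[nbr] for nbr in adj[node] if nbr in colors}
        color = 0
        while color in used:
            color += 1
        colors[node] = color
    return colors

# A toy map: four mutually bordering countries, so four colors are forced.
adj = {
    "A": ["B", "C", "D"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "D"],
    "D": ["A", "B", "C"],
}
coloring = greedy_coloring(adj)
print(coloring)  # {'A': 0, 'B': 1, 'C': 2, 'D': 3}
```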

Coloring problems show up in more practical contexts. For example, suppose one has a school with many students, and some of those students are taking the same classes. We cannot have a student in two classes at the same time, so we wish to schedule the classes so that no two classes which share a student are in the same time slot. Then we can make a graph where we think of each class as a node, and two classes have an edge if they share at least one student. Then the minimum number of colors needed to color the graph is the number of time slots needed to fit all the classes in.
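The scheduling idea can be sketched directly. The enrollments below are invented for illustration, and the greedy slot assignment is just one simple heuristic, not an optimal scheduler:

```python
# Hypothetical enrollments: class name -> set of students taking it.
enrollments = {
    "Algebra":   {"Ann", "Ben"},
    "Biology":   {"Ben", "Cal"},
    "Chemistry": {"Ann", "Dee"},
    "Drama":     {"Eve"},
}

# Two classes conflict (get an edge) if they share at least one student.
conflicts = {c: set() for c in enrollments}
for c1 in enrollments:
    for c2 in enrollments:
        if c1 != c2 and enrollments[c1] & enrollments[c2]:
            conflicts[c1].add(c2)

# Greedy coloring: each class takes the smallest slot no conflicting
# class already uses.  Slots play the role of colors.
slots = {}
for c in conflicts:
    used = {slots[d] for d in conflicts[c] if d in slots}
    s = 0
    while s in used:
        s += 1
    slots[c] = s

print(slots)  # {'Algebra': 0, 'Biology': 1, 'Chemistry': 1, 'Drama': 0}
```

Here two time slots suffice: Algebra conflicts with both Biology (through Ben) and Chemistry (through Ann), but Biology and Chemistry can share a slot.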

Now, for any graph, we can write that graph as a collection of (possibly overlapping) cliques. So let us imagine a graph G which can be written as a collection of n cliques, where none of those n cliques themselves have more than n nodes. Assume further that no two of those n cliques share more than one vertex. (An example of this with n = 4 is in this picture.) Then the conjecture is that for any such graph, we do not need more than n colors to color its nodes. This is a famous and notoriously difficult problem. Erdős considered it to be one of his favorite graph theory problems.
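Small cases can be checked by computer. The sketch below builds one easy family: a "sunflower" of n = 4 cliques, each with 4 vertices, all meeting pairwise in just one shared vertex, and then greedily colors it with n colors, consistent with the conjecture. The construction and names are my own, and this is of course only one special case, not evidence for the general statement:

```python
from itertools import combinations

n = 4
# Build n cliques of n vertices each, all sharing the single vertex 0,
# so any two cliques meet in exactly one vertex.
cliques = []
next_vertex = 1
for _ in range(n):
    clique = [0] + [next_vertex + i for i in range(n - 1)]
    next_vertex += n - 1
    cliques.append(clique)

# Adjacency sets: every pair inside a clique gets an edge.
adj = {}
for clique in cliques:
    for u, v in combinations(clique, 2):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

# Greedy coloring in increasing vertex order.
colors = {}
for v in sorted(adj):
    used = {colors[u] for u in adj[v] if u in colors}
    c = 0
    while c in used:
        c += 1
    colors[v] = c

print(max(colors.values()) + 1)  # 4 -- n colors suffice for this example
```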

What this paper claims to show is not the whole of EFL, but that EFL is true if n is large enough. If this is accurate, it is a major breakthrough that will likely have implications for other graph theory problems. I have not had time to read the paper carefully, and it is pretty technical, so I don't know where the sufficiently large n assumption is coming in, although I think it may be coming from their modified use of the Rödl nibble method.
Hybrid schooling is requiring a bit more use of multiple choice and other questions that can be computer graded. A closely connected issue I have been thinking about is which precalculus questions can be made calculator-impervious or calculator-resistant. One I recently came up with was of the form "Here are a bunch of graphs; which could be the graph of a rational function?" (and one then has a bunch of different graphs with different discontinuities and asymptotes).
Another that occurred to me was "How many discontinuities does the following function have?" with some interesting function like f(x) = (x^3 - x)/((x - 1)(4 - |x|)).
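For a question like this one can sanity-check the answer numerically by probing each zero of the denominator from both sides. A rough Python sketch (the tolerances are arbitrary choices of mine): the function above is undefined at x = 1, 4, and -4, but since x^3 - x = x(x - 1)(x + 1), the discontinuity at x = 1 is removable while the other two are vertical asymptotes.

```python
def numerator(x):
    return x**3 - x

def denominator(x):
    return (x - 1) * (4 - abs(x))

# The denominator vanishes at x = 1, 4, and -4; classify each point
# by comparing one-sided values just left and right of it.
results = {}
for c in (-4.0, 1.0, 4.0):
    eps = 1e-6
    left = numerator(c - eps) / denominator(c - eps)
    right = numerator(c + eps) / denominator(c + eps)
    if abs(left - right) < 1e-3:
        results[c] = "removable"   # the two-sided limit exists; f is merely undefined
    else:
        results[c] = "asymptote"   # the one-sided values disagree / blow up
print(results)
```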

But it occurred to me that if trig functions are allowed in the above sort of question, the problems can become non-trivial. In particular, consider the following: Set f(x) = 1/((sin πx)^2 + (sin e^x)^2). Is f(x) continuous? This question is equivalent to asking whether e^n/π is ever a natural number for an integer n, since that's exactly where f(x) would be undefined. This is, as far as I'm aware, an open problem. A counterexample would in particular yield a negative solution to the very old conjecture that π and e are algebraically independent (essentially the statement that there's no nice purely polynomial relation between the two). What I don't know: Is there some easy way to make a function f(x) which is essentially a precalculus level function where its continuity is actually equivalent to π and e being algebraically independent?
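One can at least look numerically. The quick sketch below checks how far e^n/π is from the nearest integer for small n. Floating point is only trustworthy here for modest n, and of course this proves nothing about the open question either way:

```python
import math

# Is e**n / pi ever suspiciously close to an integer for small n?
distances = []
for n in range(1, 21):
    q = math.exp(n) / math.pi
    distances.append(abs(q - round(q)))
    print(f"n = {n:2d}: e^n/pi = {q:18.6f}  distance to nearest integer = {distances[-1]:.6f}")
```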
There is an idea in mathematics of a "graph," not in the sense you may have heard of y = f(x), but rather in the sense of a network with a collection of special points, called vertices or nodes, which are connected by edges if they share some relation. For example, on a social network like Facebook, one might imagine that each person is a node and two nodes are connected if they are friends. We say a graph is connected if one can travel along edges from any given node to any other node. For example, in the friend graph of Alice, Bob, Carol, Diane, and Eric, it might be that Alice is friends with Bob, who is friends with Carol, and Diane is friends with Eric. This graph is not connected since there's no way to get from Eric to Alice. But if Bob became friends with Diane, then this graph would be connected.
 
Here is a question I don't know the answer to. Suppose that we make a graph using the following process. We start with n points and no edges. For our first step, we pick a random node (which for convenience we'll call 1). We then pick a second random node, which we'll call 2, and add an edge between those two nodes. We then move to our new node 2, which we just connected. We now pick a random node which does not have an edge connecting it to 2, connect it, call it 3, and then move to 3, continuing this process. We stop when the entire graph is connected. On average, for a given n, about how many steps do we expect it to take before the graph is connected? This isn't immediately obvious, since in our process, by bad luck we can go back to a previous node which is already connected. For example, from node 3, we might end up going back to node 1, which is essentially a wasted move.
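The process is easy to simulate. Here is a Python sketch (the choice of n and the trial count are arbitrary), using a union-find structure to detect cheaply when the graph becomes connected:

```python
import math
import random

def steps_until_connected(n, rng=random):
    """Simulate the walk: repeatedly connect the current node to a uniformly
    random node it is not yet adjacent to, move there, and count the edges
    added until the whole graph is connected."""
    adj = {v: set() for v in range(n)}
    parent = list(range(n))  # union-find, to track connectivity

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    components = n
    current = rng.randrange(n)
    steps = 0
    while components > 1:
        # If the current node were adjacent to everyone else, the graph
        # would already be connected, so this list is never empty here.
        nxt = rng.choice([v for v in range(n)
                          if v != current and v not in adj[current]])
        adj[current].add(nxt)
        adj[nxt].add(current)
        ra, rb = find(current), find(nxt)
        if ra != rb:
            parent[ra] = rb
            components -= 1
        current = nxt
        steps += 1
    return steps

n, trials = 30, 200
avg = sum(steps_until_connected(n) for _ in range(trials)) / trials
print(f"n = {n}: average ~{avg:.1f} edges; n log n ~ {n * math.log(n):.1f}")
```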

Here's a related question: We say a subset of nodes of a graph is a clique if every element in that set has an edge connecting it to every other element. In terms of friends, this means that every person in the group is friends with every other person in that group. For example, consider the friend graph of Alice, Bob, Carol, Diane, Eric, and Fred. Suppose that Alice, Bob, and Carol are each friends with the other two. Carol is also friends with Diane, who is friends with Eric, who is friends with Fred. And there are no other friend relations. Then we have a clique of size 3 from Alice, Bob, and Carol, and no larger clique exists in the graph.

In our above process of making a graph by randomly adding edges, for a given n, what do we expect on average the largest clique to be in the graph we have formed?
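This too can be estimated by simulation. The sketch below reruns the same process and then finds the largest clique by brute force over subsets, which is fine for the small n used here (all names and parameters are my own, and the clique search uses the fact that if no clique of size k exists, none of size k + 1 can either):

```python
import random
from itertools import combinations

def random_process_graph(n, rng):
    """Run the walk process until the graph is connected; return adjacency sets."""
    adj = {v: set() for v in range(n)}

    def connected():
        # Depth-first search from node 0.
        seen, stack = {0}, [0]
        while stack:
            v = stack.pop()
            for u in adj[v]:
                if u not in seen:
                    seen.add(u)
                    stack.append(u)
        return len(seen) == n

    current = rng.randrange(n)
    while not connected():
        nxt = rng.choice([v for v in range(n)
                          if v != current and v not in adj[current]])
        adj[current].add(nxt)
        adj[nxt].add(current)
        current = nxt
    return adj

def largest_clique(adj):
    """Brute force: try every subset size, stopping at the first size with
    no clique (a larger clique would contain a smaller one)."""
    nodes = list(adj)
    best = 1
    for size in range(2, len(nodes) + 1):
        if any(all(v in adj[u] for u, v in combinations(s, 2))
               for s in combinations(nodes, size)):
            best = size
        else:
            break
    return best

rng = random.Random(1)
n, trials = 12, 50
avg = sum(largest_clique(random_process_graph(n, rng)) for _ in range(trials)) / trials
print(f"n = {n}: average largest clique ~{avg:.2f}")
```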


Note I also posed these questions on Facebook, where some interesting partial results appeared in the comments. One related question, namely the same question for directed graphs, was also raised; empirically the answers seem to be approximately the same. One major observation in that thread was that the answer to the first question appears to grow at about n log n, and the directed graph version seemed to grow at close to that rate as well.

 
 Editor's note: The following is the first few pages from Jonathan Teitelbaum's autobiography. We include it here to give a good summary of his personality and approach. Unfortunately he makes many references to things from his own time, and we can only speculate what some of them mean. While he did meet with some historians, much of his life was taken up helping restore as much as he could from the Age of Wonders.  
 
My name is Jonathan Teitelbaum, and I am one of the last two people alive from what we called the 21st century, and this is my story. 
 
It was the middle of October when it started, at least from my perspective. I think it started when I stopped by Stacy's lab. Stacy had been a friend of mine since our undergraduate days at Yale, and now we were both at Brandeis. She had been more successful than I in that regard; I was just barely tenure track in the math department, while she was the star of the physics department, with her work on what the media was already dubbing "stasis fields" (and someone had called "Stacy's fields," which I'm sure they thought was hilarious) making her likely to be a shoo-in for the Nobel Prize in physics. They functioned almost exactly like the stasis fields anyone would have heard about in science fiction. They weren't precisely identical; they didn't stop time in a region, but rather functionally slowed it down a lot. The first fields had been small, containing just a few atoms, but Stacy had quickly increased their size; the entire discovery wouldn't have been possible without the new near room-temperature superconductors. People were already talking about how they might be used inside ambulances, among other applications. The military was of course interested in funding the research also. She hadn't yet decided about that, for a whole host of reasons.
 
I walked up to the door of her lab, and saw that the green light next to the door was on. A sign next to the door said that a red light meant testing was occurring and not to come in. In practice nothing they were doing was actually high radiation or the like, but the university bureaucracy apparently required it in all labs which worked with any high energy devices. The door, of course, read  "Stacy McDougal." She had been ecstatic when they had put that up, or at least as ecstatic as Stacy could get.
 
As I walked into her lab, I put my backpack and the package I was carrying down on the lab bench. Next to the bench was a large piece of cylindrical equipment, about three feet in diameter and almost reaching the ceiling, with small coils wrapped around it. I recognized it as a larger, recently completed version of the item that she had jokingly referred to as a "flux capacitor," which projected the stasis fields. She had once called it that as a joke and then had tried in vain to take it back when the graduate students had started referring to it that way and the media had heard about it. The devices could be programmed to project a stasis field around the device, but for safety and practicality reasons were more often used to project a much smaller field above or below it.
 
It was still early in the morning, and none of her grad students or postdocs seemed to have arrived yet. Stacy had clearly been up late again, or had not gone to bed at all; her brown hair was tousled, as she hunched over a much smaller flux capacitor, only about two feet tall. 
 
She looked up at me. "Oh, hi Jon. What's that?" She gestured with the device in her hand (which for all I knew might as well be labeled a sonic screwdriver) at the long package I had with me.
 
"Oh, that's my lulav, I had to pick it up this morning.'"
 
"You mean that weird palm frond you wave around in a few days, along with that lemon?'"
 
I nodded, and did not bother explaining that an etrog (which was in my backpack), while similar to a lemon, is actually a citron. While Stacy was brilliant, she seemed to have a blind spot about actually learning almost anything about anyone's religion. We had discussed this sort of thing many times, and she remained fundamentally puzzled not only that someone could believe in any religion, but even more so that anyone could actively disbelieve in a religion yet still want to do the rituals.
 
I had long ago given up trying to explain that there was an emotional comfort in the ritual itself, doing the same rituals that my ancestors had done for thousands of years. So I wasn't going to rehash explaining about the four species and how on the Jewish holiday of Sukkot, they were waved around with a ritual blessing, generally in the small ritually constructed hut which one ate meals in during the holiday. Ok, phrased that way, I guess the whole thing does sound a bit weird. 
 
On the other hand, she had never had any issue eating all the Halloween candy she could, so maybe it was just an issue of what she had personally grown up with. On the gripping hand, Halloween candy was candy, and that wasn't at all hard to understand.
 
"So any luck on the data?'" she asked me.
 
I shook my head. One of the difficulties with actually making the stasis fields practical was that they didn't seem to have a simple on-off switch. After the field generator was no longer being actively powered, they stuck around for some time. The longer the field was on, and the larger the field, the longer it stuck around, but no one was clear why. The underlying theory predicted that after being depowered they should last at most a few seconds, but in practice they seemed to last minutes. One of her postdocs had suggested that this was a sign that the stasis fields were using something other than just the Extended Standard Model that predicted their existence, but Stacy was firmly of the belief that there were just subtle interactions which hadn't been modeled correctly. She had a few days ago given me a whole set of their data, to see if I could glean any obvious patterns out of it that neither she nor any of the computers had detected. 
 
We talked a bit, and the conversation meandered, from her own data sets, to minor issues she was having with one of her grad students, to the ongoing international instability after the Russian Empire's invasion of Bulgaria, and the withdrawal of the Russian contingent from the lunar base in response to the international sanctions. A few days ago, when this had come up, Stacy had pointed out that since both Russia and the US had nuclear missiles which were far faster than anything in the first Cold War, along with space-based missiles and space-based laser arrays, the situation was not only more unstable, because warning times were shorter, but that it might very well be that one wouldn't have any time to even notice before one was vaporized.
 
That topic soon turned to a slightly more pleasant, although for me also difficult, topic: my upcoming tenure review. While my research was great, I wasn't sure that my teaching was up to snuff. No one had signaled anything negative about it to me, so it could have just been my own insecurities. But over time, the university had emphasized teaching more and more as important for tenure. And then, as we were discussing some of the experiments she was intending to try out, my phone made a loud chirp, and hers did as well. As we looked down, we saw that the topic we had just been discussing had apparently been far more timely than we had guessed. Missiles were already flying through the air, and one was on its way to Boston. Brandeis was on the outskirts of the city, technically in Waltham, but close enough that if the warhead was large enough, it could kill us. The estimated time to impact was two minutes. I looked at Stacy.
 
"Is there a bunker nearby?'" Stacy, shook her head. 
 
"Not in this building. The nearest one is over in the library." That was  over two minutes away considering we were in the basement. Stacy hurried over to her computer and started typing something.
 
"What are you doing?'" I asked. Just like her, I thought, two minutes to live, and her  response is to write notes on an experiment. 
 
"I'm setting up the system to project a stasis field large enough to hold us both. Quick get over there.'"
 
She gestured towards the other, larger "flux capacitor," which was on the other side of the room, by the table where I had placed my backpack. She spent at least a minute typing in commands, and then raced over to join me huddling next to the machine. She shouted, "Computer! Activate Stacy12!" We had both been entertained by Star Trek, and in fact had first met at a campus Trek event, and now I was glad that her own Trekkiness had given voice activation to a system which had no need of it. As I had that thought, the room altered, and I felt a strange sensation of nausea. I remembered that they had only just begun experimenting with small mammals in the stasis fields, and had seen no sign of ill effects. As I thought that, I saw the room outside us flicker, and then the capacitor overheated, and a section of it went flying. Stacy and I ducked as part of a superconducting coil sailed over our heads.
 
We looked around. All the lights in the room were dark, with a small amount of light from one of the windows. There was a thick layer of dust all around, although Stacy, I, my backpack, and the package all had been apparently protected inside the field. None of the emergency lights were on.
 
"How long did you set it for?'"
 
"I didn't know when it would be safe. I set it to go indefinitely. I figured someone would come and rescue us or it would run out of power."
 
I thought for a moment. "And no one came to rescue us. So, it could have been weeks? Months?"
 
"Possibly. Possibly longer.'"
 
We carefully explored the room. Stacy's first act was to go over to a cabinet to retrieve a radiation detector; I had referred to it once as a Geiger counter, and she had spent time explaining in no small detail that the devices she used were much more sophisticated. However, the cabinet door refused to budge. I came over and, helping her, pulled it open. Inside, the radiation detector was there along with some other equipment, but it refused to turn on. Some other equipment in the cabinet looked severely rusted.
 
I wondered how long we had been out. I pulled out my cell phone; it showed no signal, and of course didn't show any substantial time having passed. If there were a network, it could update with the time, but without one, it had no way of telling how much time had passed any more than we did.
 
We opened the door to the lab, and looking out, saw that the corridor outside was almost impassable. The lab was in the basement, and it looked like parts of the upper floors had fallen inward. We turned back to the lab, and after some work managed to knock out the window. We piled a few chairs up, and squeezed our way out. I got out first and then helped Stacy squeeze her way out. 
 
As we stood up, the amount of time which must have passed became apparent. Almost nothing above ground of the building we had been in remained, and the remaining pieces of wall were covered in vines. The nearby library might still have been standing, but it was impossible to tell, because of the many trees which now stood in the way.
 
"I don't understand" began Stacy, "This looks like we were in stasis, decades at a minimum, but there wasn't enough energy for that, and the the coils should have given out well before then."
 
"We knew that the extra time from the power cutting out had a non-linear relation to how long the field lasts, and that seemed to get bigger with larger objects-'"
 
"So the combination of maybe a few days of power, which was longer than we've ever done, combined with a field size larger than we've ever had for more than a few seconds. Right." Stacy nodded. 
 
"Forget about decades. We could have been gone centuries."
 
The extremity of the situation was at odds with the environment. We had left in the middle of October, in the morning, and for all we could tell from the temperature, we had come out of stasis at nearly the same time of year.
 
"Ok. We need to prioritize. Do we have food?" 
 
I had a box of granola bars in my backpack, back in the lab. It also turned out that the fridge some people had kept food in had been within range of the field. I squeezed down again through the window and retrieved the bars and what remained in the fridge; we felt a little guilty about the food. Only a sandwich and some Indian takeout leftovers were actually Stacy's. There were open peanut and jelly jars and a loaf of bread belonging to Todd, one of her grad students. In the back of the fridge was something that looked like it had been food once, but we weren't sure if its current state was due to the stasis field not reaching the end of the fridge, or it just being something someone had left in it which had reached its current state before the event. In any case, it seemed depressingly unlikely that any other owners of the rest of the food would be in a position to mind. I filled my backpack with the two jars and the loaf of bread, and after thinking about it for a bit, decided to include the etrog. It would at least be edible if we really ran low on food. We estimated there were enough calories there at least to last a few days, probably a week if we rationed them carefully. 
 
After some discussion, we decided that the best possibility was to head out in the direction of Boston and see if anything remained. As we progressed from what remained of campus in the direction of Boston, it became clear that a very long time period had gone by. Most of Waltham itself was forest. I-90 still existed in parts, but some sections were complete rubble or had trees growing up through them. For the most part it was easier to travel alongside it. Multiple times, we simply had to head east and hope we were going in the right direction.
 
By the time it was evening, it was clear that if we were headed in the right direction, we weren't going to make it to Boston by nightfall. We decided to finish the Indian takeout, since without refrigeration it would likely go bad soon.

 
After we ate, I remarked "So when is it going to hit us?"
 
"You mean the emotional reaction from realizing that everyone we've ever known or loved is dead?"
 
"Well, yes. I know it but I guess at some level I haven't really processed it."
 
"Shock of some sort. Me? I'm just refusing to process it until we're really safe."
 
I looked at her. "Can you really do that?"
 
She looked me in the eye. "Not really. At least, not for much longer."
 
We quickly changed the subject to more abstract issues: how we might be the last humans, and how, if we got good enough data from star watching, we might be able to tell what year it was. As evening fell, it grew cold and we huddled together for warmth. To be clear, there was nothing romantic between us and never had been, just good friendship. And in a completely non-romantic way, beyond the practical use of body heat, being physically aware that at least some other person was still around seemed to help. I think I woke during the night, crying, at least once, but I was careful not to wake up Stacy. It would not surprise me if she had the same experience.
 
In the morning we each had a granola bar and continued eastward, and now we began to see what might be some sign of civilization, as part of the remains of one of the roads turned into what appeared to be a dirt path. After another mile, up ahead we saw smoke, and what might be the outline of a city. As we came close to what must now pass for Boston, we saw what looked to our eyes almost like a late medieval or perhaps early modern city. Before we got to the city itself, the scent of horse manure confirmed our earlier visual impressions.
 
We arrived in the city itself in the late afternoon. At first, we almost broke down into tears simply at seeing other humans, even though the people were dressed in fashions which corresponded to nothing we had ever seen. Some wore cloaks or robes, and some wore what appeared to be uniforms with metal armor, although at a glance the armor looked far heavier than their easy movement suggested. Whether these were police or military we could not say. Almost everywhere, though, people had small metal bracelets, some of which looked like they must be heavy.
 
"Aluminum. They are using all the aluminum we refined." I nodded in understanding and continued her chain of thought. 
 
"Aluminum is very tough to refine. But once refined is easy to process. So once it exists in a near pure form-."
 
"Exactly."
 
We tried to approach a few people, but it quickly became apparent that whatever language they spoke, it was far enough from English that we did not make any progress. We could tell that some buildings were shops, from people coming in and out, but we couldn't tell what they were for. Street signs and the signs on shops used what appeared to be a close variant of the Latin alphabet but what they said was indecipherable.  
 
 
There were also some posters around the city, including one clearly mass-produced one which said something like "Reporte Nucleon." We tentatively guessed that this was a warning to report any possible radiation or radioactive remnants, but we weren't sure.
 
"How long does it take for an alphabet to change?" asked Stacy. 
 
"I don't know. We used almost the same one as the Romans, but the Hebrew alphabet changed completely at least once."
 
As if the topic itself had caused me to notice, I pointed to a building. "Let's try that building."
 
"Why?"
 
"Because there's a six-pointed star in the stained glass, and in that lot next to it is a hut with leaves on top. I'm guessing that's a Sukkah."
 
"So, you think we arrived centuries from when we left but almost exactly the same time of year we left and that that's a synagogue?" Stacy sounded skeptical. 
 
"Yeah, and if that's a shul, even now, then someone is going to at least speak some Hebrew."
 
We knocked on the door, and a woman answered it with some phrase in the language the other people had used, followed by "Chag Sameach," the normal Hebrew holiday greeting.
 
In my halting Hebrew, which I hadn't used much since I was an undergraduate, and which I now deeply regretted getting a B+ in (sorry, Mrs. Weiss!), I said we were travelers from far away and had nowhere to stay for the night. The woman at the door welcomed us in and identified herself as Rachel, the Rabbi of the shul. I thought she had given a surname also, but I couldn't tell if it was a surname or some word in Hebrew that just was not in my vocabulary. If it was a surname, it wasn't one that sounded at all like one from our time period, and might have had a phoneme or two in it which didn't exist in English or Hebrew.
 
It became apparent that she saw it as her duty to welcome in traveling Jews, and she insisted we sit down for dinner with her in the Sukkah. Despite Stacy being a vegan and me being a vegetarian, the allure of what smelled like chicken soup, the first hot food we had had in two days, made us both say yes. Along with the chicken were potatoes, and a root vegetable that neither of us could identify but was apparently called something like "Ti-ihlick." There were also hard boiled eggs, which struck me as odd, for in Judaism in our time they were traditionally a food associated with mourning, eaten on the anniversary of the destruction of the Temple. They were eaten on Passover also, but one common traditional explanation for that connected Passover and the destruction.
 
As we sat down, I looked through a copy of what appeared to be a siddur, a Jewish prayer book. The book had prayers I did not recognize, but much of it seemed little changed, including the Shema. There were some prayers in English, and all the prayers, whether Hebrew, Aramaic, or English, had next to them a translation into the unknown other language, from which I was able to at least confirm that it was, as we had suspected, a language which had developed from English.
 
As we were eating, Stacy said to me, "So where do we tell her we're from?"
 
"I'm not sure. Will she  believe if we tell her? And if she does believe it, how will she react?'"
 
I glanced over at the Rabbi as I said this. Now, I've always joked that Rabbis need to have a lot of ranks in social skills (in the sense used in games like Dungeons and Dragons) to pretend that congregants like myself aren't boring them, but Rachel's ranks either weren't that high, or she had rolled a natural one.
 
"You speak English?" I asked her.
 
"I am a Rabbi. Of course I speak the Third Holy Tongue." she said, albeit haltingly and with a heavy accent. 
 
Stacy looked up alarmed and confused. 
 
"What does she mean- third holy tongue?'"
 
Well, it was out now, I supposed. "Well, my guess is that the first two are Hebrew and Aramaic." Rachel nodded in confirmation. "So, after the end of things, the language changed, but the surviving Jews must have kept using English for some things."
 
"And now that that's established," said Rachel, putting down her fork where she had been piercing the last bit  of some of the strange root vegetable, "Who are you really?" 
 
"So how long did you know something was strange about us?"
 
"Almost immediately. Your clothing, although-" she searched for the next word. 
 
"Messy?" I suggested.
 
"Yes, although messy, is a better depiction of pre-Collapse clothing than I've ever seen someone actually wear, and you speak English like you are native speakers." Then her eyes widened.
 
"You aren't malachim, are you?" The word she used, malachim was the Hebrew word for angels.
 
Stacy looked over. "Is she asking us if we're something supernatural?"
 
"Yes." I struggled not to laugh at the absurdity of the question. 

"If we were angels, would we be better at concealing it?"
 
Rachel shrugged. "Probably. But I notice you did not say no. So what is your story?" She seemed at once ready to be awestruck and yet skeptical. 
 
Stacy and I took turns explaining what happened to us, although it was clear that Rachel had no context for anything like the stasis field. That puzzled her the most, for apparently the belief was that after the end of the "Age of Wonders," all of what she termed "Wonders" had been taken away from humanity. Stacy showed Rachel her cell phone lighting up, and Rachel seemed almost awestruck by the device itself; that awe replaced any remaining skepticism about our claims.
 
Rachel explained that a few "Wonders" which had been powered by the "Gift of the Sun" had continued to function for some time after the Collapse, but all had ceased long ago, with her great-grandfather supposedly seeing one of the last ones, although no one had known what it was supposed to do. From her we also learned more about where and when we were. All things considered, she was remarkably calm for someone talking with people out of a near-mythic age. I suspected that I would not have handled it nearly so well in her place, but then, I could not even think of a comparable situation.
 
The year was 912 after the Collapse. Some cities had been rebuilt, but it had taken centuries. One of the things that had caused the most damage was that many different plants did not survive the nuclear winter. She explained that some plants, "which were themselves Wonders," turned out not to survive the effects of the Collapse. Stacy and I tentatively understood her to mean that many genetically modified organisms had turned out to be easily harmed by radiation, but either we lacked a common enough vocabulary or Rachel lacked the knowledge to confirm this. Other plants and animals had died just from the environmental problems, and she mentioned, "Like the etrog."
 
At that point Stacy, who I guess must have been paying more attention to religion than I generally gave her credit for, nudged me. I looked at her and then asked Rachel, "The etrog is extinct?"
 
Rachel nodded sadly. "Yes. We still have three of the Four Species, but we have substituted other fruits, in keeping with the literal meaning of the verse, and we shake it without a blessing. It is about evening; if you have not yet waved for the holiday, it is the last day. I'm sure for you this seems like a small comfort, but for me it is amazing to even think to meet someone from before the Collapse, someone who probably held an etrog in their own hands." She pointed to a side table where I now noticed sat a lulav, with the associated other branches, myrtle and willow, and next to it, what looked like an apple.
 
"Um, actually I can do one better. The Collapse happened a few days before Sukkot, you see."
 
"Yes, it is why for us, Sukkot always has an aspect of mourning." She gestured to the plate of eggs. "But what do you mean one better?" 
 
Not sure what to say, I picked up my backpack, unzipped it, and took out the etrog, preserved with us in the stasis field.

Rachel momentarily lost her apparent command of English, and began speaking excitedly in the unknown tongue, before collecting herself. "Is that really what I think it is?"
 
Stacy and I nodded. "He picked it up that morning," said Stacy.
 
I held it out to Rachel, who held it reverently, turning it over in her hands. 
 
"You know," said Stacy, "it might be possible with the seeds of this one to make more."
 
Rachel whispered "You say you are not malachim, but you are, I am sure, sent by the Holy One. May I use it?" I nodded. 
 
She got up, walked over to the side table. And for the first time in almost a thousand years, a Rabbi said the blessing over a lulav and etrog.
An infamously difficult unsolved problem in mathematics is to prove that factoring numbers is hard. We know that if someone gives you a claimed factorization, it is easy to check. If someone asks you to factor 629, you may not have an easy time. But if someone tells you that 629 is 17 times 37, you can check this by hand (or even more easily with a calculator). In general, proving that something is hard to do is very difficult. Even after one gets past all the issues of how to define what one means rigorously, one has to somehow prove that *no algorithm* works efficiently, and that's a tall order. But in the case of factoring, there are some interesting examples worth stating explicitly of how far we are from actually proving anything.

Here's a naive way of trying to factor a number (discussed in more detail in this excellent Quanta article). The basic idea is that polynomials (assuming their degree and coefficients aren't too large) are easier to factor than numbers. How can we take advantage of that? Let's say we wanted to factor the number 15, and we didn't notice that it is just 15 = (3)(5). So instead we'll write 15 in base 2, where we get 15 = 1111 in base 2, which is the same as saying 15 = 1(2^3) + 1(2^2) + 1(2) + 1. This is now almost a polynomial; in particular, it is the value of the polynomial x^3 + x^2 + x + 1 when x = 2. So if we can factor x^3 + x^2 + x + 1, we can use that to factor our original number. Sure enough, x^3 + x^2 + x + 1 = (x+1)(x^2+1), and plugging back in x = 2, we get that 15 = (2+1)(2^2+1) = (3)(5). We did this using base 2, but one can use the same trick in other bases for some numbers. (Of course, it could turn out that n is prime, in which case this trick will fail for any base, but we can detect primes by other means, so that's ok.) Maybe this trick will almost always work. We don't get that lucky, but even proving that is tough. Emmanuel Breuillard and Péter Varjú proved (assuming the Riemann Hypothesis) that for any base b, only a very tiny fraction of numbers can be factored using this trick. But wait, you say, that just means we probably can't do this for a specific base. What if we're allowed to pick our base cleverly? Maybe we can always find such a base if we can somehow choose it. Or maybe we can go through a small list of bases, say trying this trick with every base b that is small compared to n, say not much bigger than log log n. Already we can't prove that this doesn't work, although it does seem plausible that the techniques of Breuillard and Varjú could be extended to this context, and Theorem 2 from their paper does push in that direction. But if we struggle to even prove that this sort of very basic (if you'll pardon the pun) trick won't work in general, how can we possibly hope to prove that factoring itself is tough?
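Here's a minimal Python sketch of the trick (the function names are my own). It only hunts for linear factors of the digit polynomial via the rational root theorem rather than doing full polynomial factorization, so it is strictly weaker than the general method, but it is enough to reproduce the base-2 factorization of 15:

```python
def base_digits(n, b):
    """Digits of n in base b, least significant first."""
    ds = []
    while n:
        ds.append(n % b)
        n //= b
    return ds

def poly_eval(coeffs, x):
    """Evaluate a polynomial given by coeffs (constant term first) at x."""
    return sum(c * x ** i for i, c in enumerate(coeffs))

def try_base_b_factor(n, b):
    """Try to split n by pulling linear factors (x - r) out of the
    base-b digit polynomial of n, then evaluating them at x = b.
    Returns a list of factors of n (each > 1), or None on failure."""
    coeffs = base_digits(n, b)
    found = []
    while len(coeffs) > 1:
        c0 = coeffs[0]
        # Integer roots must divide the constant term (or be 0 if it is 0).
        candidates = {0} if c0 == 0 else {s * d for d in range(1, abs(c0) + 1)
                                          if c0 % d == 0 for s in (1, -1)}
        root = next((r for r in candidates if poly_eval(coeffs, r) == 0), None)
        if root is None:
            break
        # Synthetic division of the polynomial by (x - root).
        quotient = [0] * (len(coeffs) - 1)
        carry = coeffs[-1]
        for i in range(len(coeffs) - 2, -1, -1):
            quotient[i] = carry
            carry = coeffs[i] + carry * root
        coeffs = quotient
        found.append(b - root)  # value of the factor (x - root) at x = b
    remainder = poly_eval(coeffs, b)
    if remainder != 1:
        found.append(remainder)
    return found if len(found) > 1 else None
```

For 15 in base 2 this recovers the factors 3 and 5; for 21 in base 2, the digit polynomial x^4 + x^2 + 1 has no linear factors, so this limited version gives up even though full polynomial factorization would succeed.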
Worse, from a security standpoint, the numbers we really care about not being able to factor efficiently are *semiprimes*, that is, numbers which are a product of exactly two primes. 15 above is an example of a semiprime, but 1001 is not a semiprime because it has three prime factors: 1001 = (7)(11)(13).

In fact, most positive integers are easy to factor; the difficulty is really showing that numbers which are not prime, but have only a small set of prime factors each of which is relatively large compared to the number, are difficult to factor. Here's another example, although in some respects a sillier one. It turns out that there's an easy way, given two positive integers x and y, to find their greatest common divisor which is much more efficient than factoring x and y. For example, it may not be that easy to factor 133 or 217, but if you can find their gcd, which is 7, then you've already found a factor of both. The ability to efficiently find gcds is due to the Euclidean Algorithm and has been known for about 2000 years. So maybe, given a number n, we can make a short list of numbers based on n such that the gcd of n and at least one of these numbers is greater than 1, in which case we've found a factor.

In that spirit, here's a naive idea which has absolutely no serious hope of actually working, but which I don't know how to prove doesn't work. For a given positive integer, we can look at its digital reversal in base b, which we get by writing all its digits in base b in reverse. For example, in base 10, 132 would become 231. Maybe, for n, we can find a small base b such that n and the digital reversal of n share a common factor. This isn't interesting with our example of 15, since 15 in base 2 is just 1111, so it is the same as its digital reversal. But this does actually work to factor some numbers and bases. For example, 21 in base 10 has a digital reversal of 12, and 21 and 12 share a common factor of 3. In fact, a good exercise is to show that if n is divisible by 3, then n shares a factor of 3 with its base 10 digital reversal (what is the correct generalization to any given base?). So here's a statement which must be false but which I don't know how to prove is false: Given a composite n, there is always a base b where b is small compared to n (say not larger than a constant times log n) such that n and the digital reversal of n with respect to b have a common prime factor. It may be that there's an easy proof that this doesn't happen, but if so, I don't know of it.

Now, a professional, either a number theorist or a computer scientist, might object to some of what we've written above: Of course, proving these sorts of things is difficult. Proving statements about the behavior of numbers in different bases is known in general to be very hard. (For one semi-famous unsolved problem of this sort: We strongly suspect that this sequence https://oeis.org/A258107 is finite but cannot prove it.) So why should we be surprised that these sorts of things are tough to prove? But this underscores exactly why proving that factoring is hard is itself so hard; if we can't even prove things about these highly restricted algorithms coming just from writing the number in various bases, how can we hope to prove anything when the space of all possible algorithms is far, far larger?
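Both the gcd step and the digital reversal are cheap to compute; here's a sketch (the function names are mine) that searches small bases for a nontrivial gcd between n and its base-b reversal:

```python
import math

def digit_reversal(n, b):
    """The number whose base-b digits are those of n, reversed."""
    rev = 0
    while n:
        rev = rev * b + n % b
        n //= b
    return rev

def reversal_factor(n, max_base):
    """Search bases 2..max_base for one where gcd(n, reversal of n)
    yields a nontrivial factor of n.  Returns (base, factor) or None."""
    for b in range(2, max_base + 1):
        g = math.gcd(n, digit_reversal(n, b))
        if 1 < g < n:
            return b, g
    return None
```

In base 10, for instance, the reversal of 21 is 12 and gcd(21, 12) = 3, so the search succeeds quickly for 21; the implausible statement above amounts to claiming that some small cap on the bases searched works for every composite n.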
Recently, someone made a depiction of the fairings of some upcoming rockets (click to embiggen). The fairing is the fancy term for the nosecone of a rocket that protects the payload until the rocket is in minimal atmosphere, where it is then jettisoned (since it is then extra mass which isn't needed).
 
There are a few things to note here: the Falcon 9 and Falcon Heavy are the only rockets in this picture currently flying, and no flights of the Falcon Heavy with the extended fairing have actually occurred. But the Falcon 9 fairing is about the same size as the standard fairing for the Delta rockets or the Ariane 5. In fact, the Falcon 9 is generally more limited by up-mass than volume, with the Falcon Heavy under the standard fairing having the reverse issue; SpaceX has used the same fairing for both in order to save money, so the fairing for the Falcon 9 is a little bit big for the rocket, and the same fairing is a little small for the Falcon Heavy. An ideal fairing for a rocket with the Falcon Heavy's abilities would probably resemble the extended fairing with a larger diameter, but the fairing cannot be much wider than the rocket's own diameter (without drastic changes), so even the Falcon Heavy's extended fairing has the same (somewhat small) diameter.
 
But if one looks at the size of the Vulcan fairing, the New Glenn, and the Starship, one sees a definite trend for the next few years: a projected increase in launch volume. In the case of Starship, this actually understates the case; the intended fairing will actually be a little larger than the fairing as depicted here, about four times the volume of the Space Shuttle's cargo bay. Unlike a normal fairing, Starship's won't actually be jettisoned but will stay attached to the rocket and land with the rocket's second stage; there will be a mass penalty from this, but Starship itself will be so large that this will be a minor issue. Fairing mass grows at roughly the 2/3rds power of the internal volume. This is because the fairing is functionally the surface area enclosing the internal volume, so our old friend the square-cube law rears its head. So for very big rockets, the penalty for keeping the fairing attached is smaller than it would be for small rockets.
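As a toy illustration of the square-cube point (this is just the scaling claim from the paragraph above, not real engineering data):

```python
def fairing_mass_scale(volume_scale):
    """If fairing mass tracks surface area, and surface area scales as
    volume**(2/3), then scaling the enclosed volume by k scales the
    fairing mass by k**(2/3)."""
    return volume_scale ** (2 / 3)

# Scale the enclosed volume up 8x: the fairing mass only grows 4x,
# so the mass "cost" per unit of enclosed volume is cut in half.
```

This is why carrying the fairing all the way to orbit is a much smaller relative penalty for a very large vehicle than it would be for a small one.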
 
The New Glenn, though, is the most striking thing here. New Glenn is planned to have a fairing almost as large as Starship's, with a total volume larger than Falcon Heavy's extended fairing. This is consistent with the launch masses: while Falcon Heavy has a larger payload mass in its fully expendable mode, New Glenn has a payload mass almost twice that of Falcon Heavy's reusable payload mass. New Glenn is not intended to ever fly in expendable mode, but if it did, it could probably beat the expendable Falcon Heavy. (Strictly speaking, when we discuss expendable and reusable, it should be clear that we are only talking about first-stage reusability. Neither New Glenn nor Falcon has a reusable second stage. Second-stage reuse is a basic aspect of Starship, but that's a different rocket.)
 
Vulcan here is the odd one out. Vulcan is not going to be reusable under the initial plans. There is a long-term plan for Vulcan to have partial first-stage reuse, where the engines and some other major components will detach from the first stage, parachute down, and be caught by a helicopter https://spaceflightnow.com/2015/04/14/ula-chief-explains-reusability-and-innovation-of-new-rocket/ .
 
I'm uncertain when New Glenn is going to fly. It is currently projected to launch in 2021, but in general Blue Origin has moved very slowly, and the engine for New Glenn is still in development. However, some of the perception of their slow movement may be due to their general secrecy. SpaceX, in contrast, is very open about everything they are doing, and ULA under the new leadership of Tory Bruno has also been pretty open. That said, the engine for New Glenn is the same engine as Vulcan's (ULA is purchasing it from Blue Origin), and having an external customer who would be upset if their engine were not at least somewhat close to on-time may help push Blue into getting the engine done closer to schedule.
 
But regardless of the exact timelines, there's something important going on. In the last few years, we've seen a general trend that has made launches cheaper, especially for small and medium satellites. We're now starting to see that trend occur in general even for larger satellites, and along with it, starting to see the ability to launch much larger payloads, both in terms of mass and volume. Increases in volume also mean satellites can be simpler and cheaper; for example, one engineering difficulty with the James Webb telescope is that it needs to unfold its mirrors since they wouldn't fit in the fairing. With all the other issues going on right now, it is nice to remember that we're still making progress on many different fronts.
Most people have at this point probably heard of the Fibonacci sequence. (Fibonacci actually never went by that name, but that's a whole other story.) The sequence goes 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89...
Each term is made from the sum of the previous two terms. For example, 3+5=8, and then 5+8=13. But since almost everyone has heard of this sequence, it can be a bit boring. So let us spice it up slightly. We could use the same rule for making new terms, but start with any two natural numbers. For example, we could do the sequence 1, 3, 4, 7, 11, 18, 29, 47... Here we started with 1 and 3. Or we could choose some other starting value, say 2 and 2 to get 2, 2, 4, 6, 10, 16, 26... Note that this last sequence isn't very interesting; each term is just twice the corresponding term in the Fibonacci sequence; we could write it as 2(1), 2(1), 2(2), 2(3), 2(5), 2(8), 2(13) and so on. We'll call any sequence which starts with two natural numbers and then makes each new term using the Fibonacci rule Fibonacci-like.
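Generating such sequences takes only a couple of lines; here's a sketch in Python (the function name is mine):

```python
def fib_like(a, b, count):
    """First `count` terms of the Fibonacci-like sequence starting a, b."""
    terms = []
    for _ in range(count):
        terms.append(a)
        a, b = b, a + b  # slide the window: new term is the sum of the last two
    return terms
```

Here fib_like(1, 1, 10) gives the ordinary Fibonacci numbers, while fib_like(1, 3, 8) and fib_like(2, 2, 7) give the two sequences above.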

One famous open problem about the Fibonacci sequence is whether there are infinitely many Fibonacci numbers which are prime. That is, are there infinitely many numbers like 2, 3, 5, and 13? (7 is prime but not Fibonacci; 8 is Fibonacci but not prime.) We strongly believe the answer is yes. Right now, the largest known Fibonacci prime is F_104911. That is, the 104911th Fibonacci number is prime.

One might note that for small n, it seems like F_n is prime if and only if n is prime. This turns out not to be the case. It turns out that if the nth Fibonacci number is prime, then n must be prime, with the one small exception of F_4 = 3. But the other direction does not hold: for example, the 19th Fibonacci number, F_19, is 4181, which is 37 times 113.

Since we already had a game where we make Fibonacci-like sequences, and trying to figure out whether there are infinitely many Fibonacci primes seems hard, maybe we can look at these Fibonacci-like sequences and see if they help us out. This is something mathematicians do frequently; sometimes when we can't solve something, we look at a more general context and see if it gives us insight. Or at minimum, we at least have a new object which we also don't understand, which is fun too.

So, what happens if we ask whether every Fibonacci-like sequence contains infinitely many prime values? Well, here the answer is obviously no. Consider the example we gave above, 2, 2, 4, 6, 10, 16... Every term is even, so obviously it doesn't contain any primes after 2. And we can do similar tricks. For example, look at the Fibonacci-like sequence 3, 9, 12, 21, 33, 54, 87... This sequence has every element divisible by 3. Why? Because both of the first two terms are divisible by 3, and the sum of two multiples of 3 has to also be a multiple of 3. So at each stage, we get a new multiple of 3.

Given this we can construct a lot of Fibonacci-like sequences which not only don't have infinitely many prime values, they don't have any prime values. For example, we could start with 10, 25 as our starting values, and we'd get a sequence where every term is divisible by 5.

Ok. So our original question had an answer of no, but for boring reasons. Let's call a Fibonacci-like sequence interesting if the first two terms don't have any common factor greater than 1. The Fibonacci numbers themselves match this, as does our sequence 1, 3, 4, 7, 11... Let's look at a few others. Start with 9 and 25: we get 9, 25, 34, 59. Hmm, we eventually got a prime at least. Let's do another, how about 8 and 17: we get 8, 17, 25, 42, 67. Again we got a prime.

Now here's the interesting question: does every interesting Fibonacci-like sequence eventually have at least one prime value? Perhaps surprisingly, the answer to this is also no, as Ron Graham (who recently passed away) showed back in 1964. But the starting values Graham found were very big. Since then there's been some effort to find the smallest pair of starting values which don't give rise to any primes, but the smallest known pair today is still very big: it is known that if one starts with 106276436867 and 35256392432, then the resulting Fibonacci-like sequence has no prime values. The basic idea behind finding numbers like this is to find a finite set of primes such that every member of one's sequence is divisible by at least one of those primes. But constructing such pairs is hard.
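These claims are easy to spot-check numerically. Below is a sketch (function names mine) using a deterministic Miller-Rabin primality test, whose witness set is known to be correct for all n below about 3.3 x 10^24, comfortably covering the first few dozen terms here. Checking that the pair above yields no primes among its first 40 terms is, of course, merely consistent with the covering-set argument, not a proof of it:

```python
def is_prime(n):
    """Deterministic Miller-Rabin; this witness set is known to be
    correct for all n < 3.3 * 10**24."""
    if n < 2:
        return False
    witnesses = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    for p in witnesses:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

def first_prime_index(a, b, limit):
    """0-based index of the first prime term of the Fibonacci-like
    sequence starting a, b, or None if none occurs in `limit` terms."""
    for i in range(limit):
        if is_prime(a):
            return i
        a, b = b, a + b
    return None
```

For example, first_prime_index(9, 25, 10) is 3 (the term 59) and first_prime_index(8, 17, 10) is 1, while the starting pair quoted above produces no prime among its first 40 terms.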

But the existence of Graham-like sequences shows that if there are infinitely many Fibonacci primes, then the proof has to use something deep, beyond just the very basics of the sequence itself. There is, however, a generalization of the claim that there are infinitely many Fibonacci primes which seems plausible and which I have not seen stated explicitly in the literature: it seems plausible that if a given interesting Fibonacci-like sequence contains at least one prime, then it must contain infinitely many. This is, if true, probably a very difficult thing to prove, since it would be stronger than the apparently tough open problem about Fibonacci primes themselves. If it is false, it may be that the same techniques used in Graham's result can find a counterexample. However, it seems likely that there is some number k such that if an interesting Fibonacci-like sequence contains at least k primes, then it contains infinitely many.

In the last few years, there's been a lot of discussion of "virtue signaling." This is the idea that someone is doing something not because it is actually a good thing to do but because it signals to one's community that one is virtuous, that one is a loyal member of the tribe, and certainly not allied at all with that hated Other Group. The idea first arose in the context of sociology of religion, as an explanation of essentially ostentatious displays of piety. The idea is now most popularly used by some on the American right-wing as essentially an accusation directed against political opponents.

One of the basic problems with such accusations is that they frequently conflate not understanding or not sharing a motivation with deciding that it must be signaling. For example, people who have solar power on their homes or buy electric cars are often accused of virtue signaling; almost invariably this is done by people who at a fundamental level, don't care about climate change or don't believe in it. In this situation, the accusers are confusing people having a genuinely different set of motivations with instead having a cynical motivation.

To be sure, some virtue signaling certainly does exist. The decisions by some large corporations in the last month to issue statements in support of Black Lives Matter, or engage in name changes which no one asked for, can be in part understood as virtue signaling by the corporations. Similar remarks apply to cities: painting a giant slogan on the road is substantially easier than making any sort of systemic changes to how policing occurs. And this really does seem to be virtue signaling: it takes little effort to issue a statement or paint a road. The easiest signals are the cheapest ones. The hardest signals of genuine feeling to fake are those which require a lot of resources.

One recent area where the accusation of virtue signaling has been most prominent has been wearing masks. People have claimed that wearing masks is about signaling virtue, not any actual desire to protect anyone else. At this point, the evidence suggests that wearing a mask helps protect both the wearer and people around them. That it also may protect the wearer does not seem to have filtered down to the general population. Thus, the virtue signaling claim has focused on the idea that people are wearing masks not to protect others, but to signal that one is one of the sort of good, responsible people who wears a mask.

And there's probably at least a little truth to this claim. I took a walk this afternoon in the local park, wearing a mask, and I definitely felt some degree of pride about it. "Look at me! I'm responsible! I'm wearing a mask! You should too!" I'd like to think that's not the only reason I wear a mask, but that's tough to say. It is very easy for a person to convince themselves that they have noble intentions.

Here's the important thing though: It doesn't matter why someone is wearing a mask.  A person could literally put on a mask while thinking "I don't care if I'm protected by this. I don't care if anyone else is protected. I just care about everyone seeing how prosocial I am." That person's droplets will be just as blocked as the droplets from a person who puts on a mask with perfect sincerity. The laws of physics and biology don't pay attention to one's personal motivation.

This last idea is tough for humans to get. We have some basic intuitions that intention matters in a deep way; ideas about magic in almost all societies strongly connect to this, as do more general folk beliefs. In fact, one of the ways in the modern world we sometimes distinguish natural from supernatural claims is whether human intent impacts them. In other contexts, intent also matters.

To some extent this idea is connected with the sometimes repeated slogan that people need to realize that "the map is not the territory," that is, that our conception of something isn't the same as what exists in the external world. Curiously, although this is a tough lesson, it turns out to be actually not completely true in a limited, technical sense. There's a neat theorem that if you take a map of a region, crumple it up, and drop it somewhere in the region, no matter what, at least one point on the map will be exactly marking itself. I've been surprised for a long time that no mystic has tried to read something deeper into that statement.

What are labeled as magical beliefs, though, aren't unique to magic. For example, most major religions believe that at least some of their ritualized behavior only works with intent. In Judaism, the idea of intent for religious rituals, called kavanah, is important, and there is a large amount of disagreement over which things require kavanah and which do not.

There's a wonderful little children's book, "Hershel and the Hanukkah Goblins," about the folk hero and trickster figure Hershel of Ostropol. In the story, Hershel confronts a series of goblins who play tricks on the Jews of the town. The only way to defeat the goblins is if the King of the Goblins lights a Menorah on the last day of the holiday. Spoiler alert: Hershel manages to trick the King into lighting a Menorah without realizing that the candles are for Chanukah, and the goblins' hold on the town is broken. Some readers may be aware that for a long time I've argued that the story should have a different ending: the King should have explained that Jewish law holds that lighting candles for Chanukah requires intent, and that all he has done is light a few candles in a row. Then he should have ripped off Hershel's head, and the book would have a good lesson about how one shouldn't interact with any supernatural being unless one understands the rules under which it operates.

But at another level, my criticism of the story is a poor one. The story is a modern story, not a true folktale, and so if anything it is a positive sign of how people now understand these things that intent does not end up mattering; all that matters are the abstract rules.

In that context, let us all realize that COVID-19 is just like the King of the Goblins. What intent anyone has does not matter. Wearing a mask is helpful whether or not one has proper intentions. So if wearing a mask is virtue signaling, then by all means, let us all signal our virtue.

I've mentioned before that there are a lot of simple-to-state open math problems that are not so famous. Let's talk about another of them: the perfect cuboid problem, sometimes called the Euler brick problem.
 
The Euler brick problem/perfect cuboid problem is to find an example of a rectangular box such that every side length, each face diagonal, and the internal diagonal are integers. We call such an object a perfect cuboid.
 
We have examples where all sides and face diagonals are integers. An example is a box with width 44, length 117, and height 240. But there is no known example where the internal diagonal is also an integer.
 
In 2009, Jorge Sawyer and Clifford Reiter discovered examples of parallelepipeds (that is, boxes with parallelograms for faces) where all sides, face diagonals, and internal diagonals are integers.
 
There are a lot of restrictions on what a perfect cuboid must look like. For example, we know that the smallest side must be at least 10^10. This bound has been claimed to be pushed up in the last few years, but those higher searches have not yet been peer reviewed as far as I'm aware. One natural thing one can try is to prove that the sides must be divisible by various primes. For example, one can without too much work show that at least one of the edges must be divisible by 5. More generally, if one lets p be any prime from the set 2, 3, 5, 7, 11, 13, 17, 19, 29, 37, then at least one edge, face diagonal, or internal diagonal must be divisible by p. You may note that the list above skips 23 and 31; the obvious sorts of arguments don't seem to work for those two. Extending this to larger primes likewise seems difficult.
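Verifying a candidate brick takes only integer arithmetic; here's a sketch (function names mine):

```python
import math

def is_square(n):
    """True if n is a perfect square."""
    r = math.isqrt(n)
    return r * r == n

def cuboid_status(a, b, c):
    """Report whether all three face diagonals, and the internal (space)
    diagonal, of an a x b x c box have integer length."""
    faces_ok = all(is_square(x * x + y * y)
                   for x, y in [(a, b), (a, c), (b, c)])
    space_ok = is_square(a * a + b * b + c * c)
    return faces_ok, space_ok
```

cuboid_status(44, 117, 240) gives (True, False): the face diagonals are 125, 244, and 267, but 44^2 + 117^2 + 240^2 = 73225 is not a perfect square, so this is an Euler brick but not a perfect cuboid.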
There's been a fair bit of discussion about people running summer camps this summer. Right now, I've seen both Christians and Jews arguing that their summer camps for kids should run, and that with appropriate safeguards there won't be any serious risk. There's a lot wrong with this argument, but one aspect which is possibly not getting enough attention is an implied assumption people are making without even realizing it: that if one increases the scale, the number of probable infections rises linearly, and that the chance of an infection event rises roughly linearly. Neither of these is the case. My suspicion is that this sort of assumption is occurring not just in the camp discussions but in all the discussions about opening up. So let's discuss these ideas.
 
First, let's imagine a game. In the first game, you'll get to roll a regular six-sided die. If it turns up 1, then one loses the game; otherwise one wins. The chance of losing is 1/6. Now, play the same game, but roll 10 dice. The chance that at least one 1 shows up is much, much higher, about 84%. If a one represents an infected person, then as one increases the number of people, the chance that at least one infection is present goes up drastically. This is, by the way, part of the point of isolating in small groups, rather than just isolating with a larger number all of whom seem safe. (And yes, the chance that any given person is infected with COVID is less than 1/6, but the same basic pattern holds.)
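The arithmetic here is just the complement rule; a quick sketch:

```python
def p_at_least_one(p_event, n):
    """Probability that at least one of n independent events happens,
    each with probability p_event: the complement of all n failing."""
    return 1 - (1 - p_event) ** n
```

With one die the chance of seeing a 1 is about 0.17, but with ten dice, 1 - (5/6)**10 is roughly 0.84. The risk does not scale linearly with the number of dice, or people.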
 
Now, we're going to imagine a different game. In this game, there are a whole bunch of people. When we start, one person has a red slip of paper, and everyone else has a blue slip of paper. We'll play the game in stages. At each stage, every person shakes hands with every other person, but if one of the people in a handshake has a red slip of paper, then they roll a fair 10-sided die, and on a roll of 1, they give a red slip of paper to the other person. Having a red slip here represents being infected. For simplicity, we'll imagine that only two rounds happen: first everyone shakes hands with everyone else, paper is given as necessary, and then a second round of handshakes occurs.
 
Now we can ask: how likely is anyone to have a piece of red paper at the end? Let's look at the simplest case, where we have two people playing, one with a blue slip and one with a red slip. The chance that the person with the blue slip ends up with a red slip is 19%. To see this, note that there's a 10% chance in either round that they'll get a red slip, but that double counts the possibility that they get a red slip in both rounds, which has a 1% chance (1/10 times 1/10) of happening. Now, let's imagine that we play this game with 3 people; a little figuring will convince you that the chance that either of the two people with blue slips ends up with a red slip has gone up. Why? Because in the second round, there are more pathways for infection if one of the two with blue was infected in the first round. By the time one gets to 20 people playing this game, roughly a third of the people with blue slips will end up with a red slip by the end, and with 30 people the expected fraction is higher still.
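Here's a small Monte Carlo sketch of the slip game (my own code; I assume transmissions within a round are decided by who held a red slip at the start of that round):

```python
import random

def red_slip_fraction(n_people, p=0.1, rounds=2, trials=2000, seed=1):
    """Average fraction of the initially blue-slipped players who hold a
    red slip after the given number of all-pairs handshake rounds."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        red = [False] * n_people
        red[0] = True  # one initially infected player
        for _ in range(rounds):
            start = red[:]  # transmissions use start-of-round slips
            for i in range(n_people):
                for j in range(i + 1, n_people):
                    # transmit only across an infected/uninfected pair
                    if start[i] != start[j] and rng.random() < p:
                        red[i] = red[j] = True
        total += (sum(red) - 1) / (n_people - 1)
    return total / trials
```

With two players this averages close to the 19% worked out above, and the average infected fraction climbs rapidly, and nonlinearly, as the number of players grows.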
 
Shaking hands here is, of course, a proxy not just for shaking hands but for many other forms of contact, including coughing and touching. In the case of teenagers at camps, there is probably more experimental behavior than some of the parents are comfortable thinking about, even if they remember their own time at such camps. But the essential upshot is the same: if one has more pathways, the chance of infection goes up drastically. This really is not just about summer camps; similar remarks apply to any gathering of people or extended way for people to interact.
Many today may be celebrating Star Wars due to it being May 4th. But there's another thing worth noting. Tonight is the start of Denmark's Liberation Day. On the evening of May 4th, Denmark was liberated from the Nazis. People took down their blackout curtains and lit candles in their windows, and then they went to the streets with candles and the Danish flag.
 
In a Jewish context, we often don't talk about the experience of Jewish Danes. Denmark's Resistance, its people and its government took steps matched by no other country to protect its Jewish citizens from the Nazis. As a result, 99% of Denmark's Jews survived the war. In these trying times, when there is a threat not just to some minority groups but to each and every person, it is a reminder that the actions that you take, the actions your leaders take, the actions your politicians take, and the actions of those around you can have a real impact. That is an impact not just on the quality of lives, but on whether people have lives at all.
 
In that context, it feels appropriate that although Denmark officially marks May 5th as Liberation Day, the celebration begins at nightfall on the 4th with the lighting of candles, for Jewish holidays likewise always begin the night before the date, and often involve the lighting of candles.
 
My own ancestors, who were Polish Jews, were not so lucky as most of the Danish Jews. My grandmother went through Auschwitz, and many of her family and friends were killed. My grandfather fought in the Polish Resistance. Some Polish Jews tried to return to their homes after the war, and were met with further anti-semitism and violence, including the Kielce and Kraków pogroms, which were caused in part by the classic blood libel. The Danish tradition is not my tradition. I think that this year, though, I will be lighting a candle in my window. We could all use a reminder of the power of individuals to make the world a better place.
The Abel Prize was just awarded to Hillel Furstenberg and Gregory Margulis for their use of methods from probability and dynamics in group theory, number theory and combinatorics. I've interacted with both of them. I took complex analysis with Margulis when I was an undergrad. My interaction with Furstenberg was more limited: I talked to him a few times when he was visiting, mainly when I was an undergrad. Hillel seemed willing to listen to whatever half-baked idea an undergrad had that was connected to something he did.
 
People often think of the Fields Medal as the equivalent of a Nobel in math, but the Abel prize has a pretty similar stature.
 
To give some idea of what they did, let's talk about random walks. Imagine you are on a standard number line, starting at zero. You repeatedly flip a coin. If you get heads, you walk one unit in the positive direction, and if you get tails, you walk one unit in the negative direction. As you repeat this, you wander somewhat randomly; this is thus called a random walk.
 
There are a lot of questions you can ask. For example, how likely is it that I'm going to end up back where I started? It turns out that if you play this game, you'll find your way back home infinitely many times with probability 1. We can also ask: if I play this game for t steps, about how far should I expect to be from where I started? I could keep almost always hitting heads, but that seems unlikely. It turns out that your distance from the origin is almost surely not much higher than roughly the square root of the number of steps you have taken. (It also turns out that this fact is closely related to one reason we believe the Riemann Hypothesis is true, but that's a story for another time.)
 
One interesting thing we can do is try to ask the same questions in higher dimensions. Imagine a random walk in two dimensions. Now, instead of flipping a coin, we roll a four sided die, and depending on the result go one unit left, one unit right, one unit up, or one unit down. It turns out that the answers here to our questions above remain the same. We'll still be no more than about square root of our number of steps from the origin, and we'll still return to the origin infinitely many times almost surely.
 
What about three dimensions? Now we have six possible directions. And now things change! The square root claim is still true, but we don't expect to return to the origin infinitely many times. In this sense, a wandering ant will eventually find its colony, but a wandering bird might not find its nest.
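Both phenomena are easy to see in a quick simulation. Here is a sketch with modest step and trial counts, so it illustrates rather than proves anything; the frequent returns in one dimension versus the rare returns in three show up clearly.

```python
import math
import random

def walk_stats(dim, steps, trials, seed=0):
    """Simulate `trials` simple random walks on the dim-dimensional integer
    lattice, each for `steps` steps (one unit along a random axis, in a
    random direction). Return (rms distance from the origin at the end,
    average number of returns to the origin along the way)."""
    rng = random.Random(seed)
    total_sq, total_returns = 0, 0
    for _ in range(trials):
        pos = [0] * dim
        for _ in range(steps):
            axis = rng.randrange(dim)
            pos[axis] += rng.choice((-1, 1))
            if not any(pos):          # back at the origin
                total_returns += 1
        total_sq += sum(c * c for c in pos)
    return math.sqrt(total_sq / trials), total_returns / trials

rms1, ret1 = walk_stats(1, steps=2000, trials=300)
rms3, ret3 = walk_stats(3, steps=2000, trials=300)
assert 0.8 < rms1 / math.sqrt(2000) < 1.2   # distance ~ sqrt(steps) in 1D...
assert 0.8 < rms3 / math.sqrt(2000) < 1.2   # ...and in 3D
assert ret1 > 5 and ret3 < 2                # 1D returns often; 3D rarely does
```

In a typical run the one-dimensional walker revisits the origin dozens of times in 2000 steps, while the three-dimensional walker usually never makes it back at all.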
 
Part of what Margulis and Furstenberg did was to realize that by asking questions like the above about abstract objects, rather than the spaces one is used to, they could better understand the structure of those objects.
 
One idea that they worked with is applying notions from ergodic theory in a similar context. Ergodic theory is a branch of what is sometimes called "chaos theory" and studies systems which, unlike the random walks, have no intrinsic randomness, but are still hard to predict. Consider, for example, the following function: we'll start with a number between 0 and 1 (inclusive), and we'll double it. If the resulting number is greater than 1, we'll then subtract 1, and if not, we'll keep that number. Let's look at three example numbers.
 
Let's start with 1/3. The first time, we double it, so we get 2/3. Then we double again so we get 4/3. That's greater than 1, so we subtract 1 to get 1/3. And we're back where we started. So this formed a loop. But not all numbers form a loop this way, and in fact most numbers don't.
 
Imagine we started with 25/48. We have 2(25/48) = 25/24, so we have to subtract 1 to get 1/24. We then get 1/12, then 1/6, then 1/3, and we're now in the loop from 1/3. But that loop doesn't contain 25/48 itself. So a number might end up in a loop where the loop doesn't itself contain the number.
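Since the map only doubles and occasionally subtracts 1, exact rational arithmetic makes these orbits easy to follow. A quick sketch with Python's Fraction type:

```python
from fractions import Fraction

def step(x):
    """One application of the map: double, and subtract 1 if we exceed 1."""
    x = 2 * x
    return x - 1 if x > 1 else x

x = Fraction(25, 48)
orbit = [x]
for _ in range(8):
    x = step(x)
    orbit.append(x)

# 25/48 -> 1/24 -> 1/12 -> 1/6 -> 1/3, and then the 1/3 <-> 2/3 loop
assert orbit[1:5] == [Fraction(1, 24), Fraction(1, 12),
                      Fraction(1, 6), Fraction(1, 3)]
assert orbit[5:7] == [Fraction(2, 3), Fraction(1, 3)]
```

Trying other starting fractions is a nice way to build intuition: every rational number eventually falls into some loop, though often not one containing the starting point.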
 
But some numbers never end up in a loop at all. Let's look at an example.
Let's start with sqrt(2)-1, which is about 0.414... When we double, we get 2(sqrt(2)-1) = 0.828... We'll double and subtract 1 to get 4(sqrt(2)-1)-1 = 0.656..., and we'll do this again to get 2(4(sqrt(2)-1)-1)-1 = 0.313... If we double again, we won't need to subtract 1, so we'll have 2(2(4(sqrt(2)-1)-1)-1) = 0.627..., and so on. It turns out we never get into a loop. Worse, numbers very close to each other might behave very differently. sqrt(2)-1 and sqrt(2)-1+1/10^8 are very close together as numbers, but if you apply this process to each more than a few times, you'll find that the two orbits start looking completely unrelated in terms of how large one is at a given stage compared to the other. Margulis and Furstenberg applied ideas about how to understand systems like this to other areas of math. That's now a very big area of study, and a lot of it is owed to their work on it.
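The sensitive dependence is also easy to see numerically. A sketch comparing the orbit of sqrt(2)-1 with a starting point shifted by 10^-8 (ordinary floating point is adequate for a few dozen steps, though it eventually degrades because doubling shifts precision out of the number):

```python
def step(x):
    """One application of the doubling map on [0, 1]."""
    x = 2 * x
    return x - 1 if x > 1 else x

x = 2 ** 0.5 - 1          # sqrt(2) - 1
y = x + 1e-8              # a nearby starting point
gaps = []
for _ in range(40):
    x, y = step(x), step(y)
    gaps.append(abs(x - y))

assert gaps[5] < 1e-5         # the orbits track each other at first...
assert max(gaps[20:]) > 0.1   # ...but the gap roughly doubles each step
```

The gap grows by a factor of about 2 per step, so an initial disagreement of 10^-8 becomes order 1 after roughly 27 steps, after which the two orbits are effectively unrelated.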
A lot of people have wondered why there's so much concern about COVID-19. I've taught classes on both differential equations and graph theory that included some degree of disease modeling, so I have some relevant expertise, although certainly much less than a trained epidemiologist or someone whose research focuses on disease modeling. In that context, I want to discuss some of the reasons why this is generating more concern than the regular flu from a mathematical standpoint.

The most common basic model of disease spread is the SIR model, and it turns out that simple versions of this model work pretty well empirically for many diseases. The basic idea of the model is that one has three compartments representing three types of people: people who are susceptible to the disease but uninfected, people who are infected, and people who have recovered. People move from the susceptible to the infected compartment based on the number of people infected, with some parameter representing how likely an infected person is to infect new people. People move from the infected to the recovered population with some fixed probability. Strictly speaking, "recovered" for our purposes also includes people who died and thus aren't spreading the disease, but that distinction doesn't matter much for this sort of model. The basic model assumes that once one is in the recovered category one is either dead or immune.
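To make this concrete, here is a minimal sketch of an SIR simulation using a simple Euler discretization; the parameter values are made up purely for illustration. The ratio beta/gamma here plays the role of the basic reproduction number discussed further below.

```python
def sir(beta, gamma, days, s0=0.999, i0=0.001, dt=0.1):
    """Euler-step approximation of the SIR equations
    S' = -beta*S*I,  I' = beta*S*I - gamma*I,  R' = gamma*I,
    with S, I, R as fractions of the population ('recovered' includes
    the dead). Returns the infected fraction over time."""
    s, i, r = s0, i0, 0.0
    infected_over_time = [i]
    for _ in range(int(days / dt)):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        infected_over_time.append(i)
    return infected_over_time

curve = sir(beta=0.4, gamma=0.1, days=300)
peak = max(curve)
assert peak > 10 * curve[0]   # the infected fraction rises to a hump...
assert curve[-1] < peak / 10  # ...and then trails off
```

Plotting `curve` produces exactly the hump described below: a rise, a peak, and a long tail downwards.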

When one uses this sort of model, and one looks at the disease presence over time the basic pattern one gets is a large hump followed by the number of infected then trailing downwards. For a graph of this sort of hump occurring in China with this virus, along with a lot of other good data, see here.

This is the basic model, and one can do a lot of things to play with the model. For example, one can imagine one has a vaccine for some people; this moves some people directly from the susceptible box into the recovered box. This drastically reduces the size of one's hump. Another thing one can do is have more than three boxes; one can imagine each region (say a city, school or nation) with its own set of boxes but then a chance for people infected in a region to infect people in a nearby region. One can also imagine diseases which have long periods of spread and infection, so the presence of births in the population become relevant. A good exercise is to think of some other thing you'd want to add to this sort of model.

As these models get more complicated, much of the modeling becomes empirical, running many small variants of a model with computer approximations rather than relying on analytic techniques; from a mathematician's perspective, we can't satisfactorily prove as much about some of these models as we'd like.

In this context, let's talk about three things which, from an SIR perspective, make this virus potentially worse than influenza (aside from the higher mortality rate, which, while also concerning, isn't something that the SIR perspective really models much):

First, not everyone who gets the virus becomes immune after they recover; we're seeing not just relapses but evidence of reinfection. One essentially has some people who would move from infected to recovered but instead are moving back to susceptible. If one plays around with SIR models one will see that having such movement can easily make epidemics much worse. We're not completely certain that such reinfection is occurring but it looks likely. One difficulty is trying to distinguish reinfection from relapse. There's some evidence for low-level infections in people who are otherwise considered to have recovered. Trying to figure this out is going to be something doctors are thinking about very carefully. This is a general news article discussing some of the issues involved.

Second, the contagion rate of this virus appears to be higher than that of influenza. While there's a lot of uncertainty about the reproduction rate, R_0, which roughly speaking represents the average number of new infections caused by an infected individual, estimates range from about 2.2 to about 4.5, and seem likely to be on the higher end. In contrast, many estimates for influenza put this number at at most 2; for the 1918 flu the number was probably between 2 and 3. Increasing R_0 has, for what should be obvious reasons, a pretty severe impact on the severity of epidemics. The exact translation of R_0 into the SIR model has some subtleties, and estimates of R_0 do change for diseases; they frequently start off larger than they would be otherwise until procedures targeting a disease are in place. There's some substantial criticism of R_0 as a metric in general, but as a rough guide it is useful here. There have also been some statements by the WHO (such as here) saying that at least by some metrics, COVID-19 is less efficient at transmission than the flu. I'm not sure what metrics they are using there, but if that's accurate, that's a major reason to be less worried.
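One way to see how much R_0 matters: the simplest SIR model has a standard "final size" relation, z = 1 - exp(-R_0 z), where z is the fraction of the population ever infected. This is the textbook SIR relation, not anything specific to COVID-19, and a quick fixed-point iteration solves it:

```python
import math

def final_size(r0, iterations=200):
    """Solve z = 1 - exp(-r0*z) for the positive root by fixed-point
    iteration (valid for r0 > 1); z is the fraction of the population
    ever infected in a basic SIR epidemic."""
    z = 0.5
    for _ in range(iterations):
        z = 1 - math.exp(-r0 * z)
    return z

for r0 in (1.5, 2.2, 3.0, 4.5):
    print(r0, round(final_size(r0), 3))

assert final_size(1.5) < final_size(2.2) < final_size(4.5)
assert 0.79 < final_size(2.0) < 0.81
```

Moving R_0 from the bottom to the top of the estimated range pushes the attack rate from somewhat over half the population to nearly all of it, which is why the uncertainty in those estimates matters so much.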

Third, there are people who get infected and are contagious but are almost completely asymptomatic. That almost never happens with the flu, and it means that containment is much harder. (This is not double counting evidence with the R_0 point above, because those estimates are generally for people who do exhibit symptoms.) We're still uncertain how often this happens. In the case of influenza, people can be infectious before any substantial symptoms appear, but symptoms always show up. In the case of this virus, there appear to be some people, especially young people, who are not just infectious while asymptomatic, but remain completely asymptomatic. Even a tiny percentage here can drastically interfere with containment efforts. Remember how I mentioned that one variant of SIR models involves a set of boxes for each location? Well, if you do that, you can model how effective travel restrictions and quarantines are, and they become substantially less helpful if there are completely asymptomatic people. It only takes one infected person to enter a new location.

Right now one of the biggest issues is simply how many unknowns there are. It could turn out that all three of these issues above aren't that severe. Or it could turn out that all three are substantially in play. We don't completely know at this point, and have a worrying amount of evidence that all three of these issues are cropping up.

In this context, one might wonder why taking steps to delay infection matters. In these models, people get infected and eventually move over to recovered, so you might think that everyone gets infected. But that's not actually the case. Eventually, a disease falls to low enough levels that some people in the susceptible box never get infected. If one can delay infection, the number of people left in that box at the end can get bigger. Another major issue is the maximum number of infected. Remember the hump I mentioned above? The size of that hump matters a lot from a treatment perspective, because ill people aren't as economically productive, and many of them will need to be in hospitals, taking up limited hospital bed space. Even if the total number of infected were to remain the same over the entire period, the stress on the hospital system would be lower, resulting in likely fewer deaths from the disease, and fewer deaths from other medical issues, as the entire system is subject to less of a resource crunch. There's also the issue that other treatments may be developed; the longer we delay things, the higher the chance that more people will be saved by new treatments.

[Image: Kim and Bergstrom's picture of two curves for an epidemic.]


In this context, what can you do? Washing hands and not shaking hands are steps which have been mentioned by many, but I'll repeat them here; they can't be emphasized enough. Beyond that, buying extra non-perishable goods that are frequently used is the obvious place to start. One doesn't need at this point to run out and buy all the toilet paper (looking at you, Australia), but having a few extra days of food, and yes, toiletries, is not a bad step. When you go to the store, buy a few extra food items. You don't need to buy them all now, but buy a few, and then put them away, and (this bit is important) don't count them as present when calculating whether you need to pick up more food in the future, until the disease has passed. The same goes for some other basic items, medicines and the like, when possible and practical.

Another thing you can do is get the flu vaccine. While Trump's comments about the flu vaccine for COVID-19 were, not surprisingly, stupid and ignorant, there are two good reasons to get the flu vaccine if you have not already. First, for many diseases, multiple simultaneous infections are a major cause of fatalities. With the 1918 flu, many who died died not directly from influenza but from bacterial infections which occurred with it, and this is a pretty standard pattern for deaths from influenza or similar illnesses. Second, the flu vaccine makes it less likely that one will need to be hospitalized for flu; that's even more important now than it usually is, because of the possible issues with limited hospital resources. If we can minimize stress on the system as a whole, we'll be in better shape.
Richard Guy passed away yesterday morning, on March 9th. Guy was a mathematician at the University of Calgary. He was 103 years old and had been steadily working right up until his death.
 
I've interacted with him in a bunch of different contexts, and his "Unsolved Problems in Number Theory" was a major influence on my work and a lot of what I do. Almost every paper I've written was influenced by his wonderfully comprehensive collection of problems. Although I had corresponded with him via email, it wasn't until a few years ago at the JMM that I met him in person, where he struck me as a kind and thoughtful person, very interested in what others were doing. He was a great influence on me and many other young mathematicians.
 
In a certain sense almost everyone dies too young, but it feels strange to say that about someone at 103. But in Richard's case, I'm willing to say it. We had more to learn from him, and he had more to teach and discover. It is a cliche to say that someone will be missed, but in this case, it is true. He will be missed by many mathematicians all over the world, in many different fields. I feel sorry for the now young mathematicians who will get to read his books but not get to actually meet him.
Note: I'm in the process of copying some older math posts from Facebook over here with some modification. This is one of those.

I mentioned Lehmer's Conjecture in an earlier post. Recently, someone asked me how one goes about proving the sort of result mentioned there, that a counterexample needs to have a lot of distinct prime factors. I wrote an answer to them, and this is a version of that answer which may be of interest to people who read the earlier post. This post is slightly more technical than most of the other math posts currently here.

The primary technique used is the following. In what follows, I'll write capital N for a counterexample to Lehmer's conjecture, and use n for a generic integer where we aren't necessarily making that assumption. I'll implicitly assume that you've seen the Euler phi function a bit before. If not, the Wikipedia article is a good place to start. But the main takeaway we need about it is that if n = p^a for some prime p, then phi(n) = (p-1)p^(a-1), and I can multiply together the phi values at the highest prime powers which divide n. For example, since 360 = 2^3 * 3^2 * 5, I can write phi(360) = phi(2^3) phi(3^2) phi(5) = (2-1)2^2 * (3-1)3^1 * (5-1) = 4*6*4 = 96. This formula will come in handy below. We won't need any other properties beyond what was mentioned in the previous post about Lehmer's Conjecture.
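As a sanity check, here is a small implementation of phi using exactly this prime-power formula (a quick sketch, not optimized):

```python
def phi(n):
    """Euler's phi via the formula: for each prime power p^a exactly
    dividing n, multiply in (p - 1) * p**(a - 1)."""
    result, m = 1, n
    p = 2
    while p * p <= m:
        if m % p == 0:
            a = 0
            while m % p == 0:
                m //= p
                a += 1
            result *= (p - 1) * p ** (a - 1)
        p += 1
    if m > 1:                  # one remaining prime factor above sqrt
        result *= m - 1
    return result

assert phi(360) == 96          # = phi(8) * phi(9) * phi(5) = 4 * 6 * 4
```

The trial-division loop is fine for the small numbers in this post; for serious computation one would factor more cleverly.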

In the below, we'll write a|b to mean that a is a divisor of b. For example, 3|6 is true but 4|6 is false.  We'll need two preliminary remarks:

First, note that any such N must be odd. To see this, note that phi(n) is even for n > 2; if N were even, then N-1 would be odd, and an odd number cannot be divisible by the even number phi(N).

Second, note that any such N must be squarefree (that is, not divisible by the square of any prime). To see this, note that from the standard formula for phi(n), if p^2 | n, then p | phi(n).

So if N were divisible by p^2 for some prime p, then since p | phi(N), we'd have p | N-1 and p | N, which is impossible.

We'll define H(n) as the product of p/(p-1) over all primes which divide n. For example, H(60)= (2/1)(3/2)(5/4) = 15/4.

Important things to note about H(n): First, H(n) is multiplicative; that is, H(ab) = H(a)H(b) whenever a and b have greatest common divisor 1. Second, x/(x-1) is a decreasing function, so if we replace a prime in the factorization of n with a larger prime, the value of H goes down. For example, H(3) > H(5) and H(20) > H(45).

Notice that H(n) = n/phi(n). Now, if phi(N) | N-1 for some composite N, then k phi(N) = N-1 for some integer k > 1, and therefore we have H(N) = N/phi(N) > (N-1)/phi(N) = k. In particular, any such N must have H(N) > 2.

From this one immediately has that any such N has to have at least three distinct prime factors. To see this, note that H(15) = (3/2)(5/4) = 15/8 < 2, and again replacing any prime with a larger prime would result in H being even smaller.
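This is easy to experiment with. A small sketch computing H(n) exactly with Python's Fraction type (the factorization loop is naive but fine for small n):

```python
from fractions import Fraction

def H(n):
    """Product of p/(p-1) over the distinct primes p dividing n;
    equivalently H(n) = n / phi(n)."""
    result, m = Fraction(1), n
    p = 2
    while p * p <= m:
        if m % p == 0:
            result *= Fraction(p, p - 1)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        result *= Fraction(m, m - 1)
    return result

assert H(60) == Fraction(15, 4)
assert H(15) == Fraction(15, 8)   # < 2: two odd primes can never suffice
```

Since H only depends on the distinct primes involved, checking H(15) < 2 really does dispose of every odd squarefree number with two prime factors, as replacing 3 or 5 by larger primes only shrinks H.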

To give a flavor of the general method, let's sketch a proof that N needs to have at least five distinct prime factors.

One Lemma that will be helpful first:

Lemma 1: If N is a counterexample to Lehmer's conjecture and p|N for some prime p, and q|N for some prime q then we cannot have p|q-1.

Proof: Suppose we had p|q-1. Since q|N, the formula for phi(n) gives q-1 | phi(N), and phi(N) | N-1, so q-1 | N-1. Then since p | q-1, we'd have p | N-1. But we cannot have both p|N and p|N-1.

(For example, this Lemma shows that we cannot have both 3|N and 7|N. Very handy little bit.)

Theorem: There is no such N with three distinct prime factors.

Proof: Assume N were such a counterexample. Note that (5/4)(7/6)(11/10) < 2, so the smallest prime factor of N must be 3. Then 7 does not divide N, by our Lemma. Similarly, since (3/2)(11/10)(13/12) < 2, the second smallest prime factor must be 5, and thus by our Lemma the third prime factor cannot be 11. It also cannot be 13, by our Lemma. We now need a prime p with (3/2)(5/4)(p/(p-1)) > 2. But solving this inequality, one finds that p < 16, so there are no primes which remain to check.

Now we'll sketch the same technique for the slightly harder case of four distinct prime factors. But the above shows most of the central ideas. Below will be essentially just an elaboration of the same technique.

Theorem: No such N has four distinct prime factors.

Proof: We'll break this down into two cases, based on whether 3 is a prime factor of N.

Case I: Assume 3|N. By our Lemma, we cannot have 7, 13 or 19 as divisors of N. Now, we'll break this down into two subcases based on whether 5|N.

Case Ia: 5|N. In that case, by our Lemma, 11 does not divide N. Let's break this down slightly further. Note that the third prime factor cannot be 29 or larger: if it were, the fourth prime factor would be at least 47, and since (3/2)(5/4)(29/28)(47/46) < 2, we would have a contradiction. (Exercise: why was I able to get away with skipping 31, 37, 41 and 43 as possible prime divisors?) So we have two subcases, where the third prime factor is 17 or 23. If the third prime factor is 17, then we must have, for the fourth prime factor p, that (3/2)(5/4)(17/16)(p/(p-1)) > 2. Solving this, one finds that one must have p < 256. Now, checking all primes below that bound will find that none of them work. Similarly, one can do the same thing if the third prime is 23. (If one is lazy and is doing this with a computer, one doesn't even need to calculate the precise largest bound for when (3/2)(5/4)(23/22)(p/(p-1)) > 2, since we know it must be no higher than the bound in the 17 case.)
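The finite checks in this case are easy to hand to a computer. Here is a sketch of the subcase where the first three primes are 3, 5 and 17: for each candidate fourth prime p (checking up to 257, a safe overestimate of the bound), form N = 3*5*17*p and test whether phi(N) divides N-1.

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Subcase: N = 3 * 5 * 17 * p for a prime p. Since N is squarefree,
# phi(N) = 2 * 4 * 16 * (p - 1). See whether phi(N) ever divides N - 1.
hits = []
for p in range(7, 258):
    if is_prime(p) and p != 17:
        N = 3 * 5 * 17 * p
        if (N - 1) % (2 * 4 * 16 * (p - 1)) == 0:
            hits.append(p)

assert hits == []   # no fourth prime works, as claimed
```

Swapping 17 for 23 (so phi(N) = 2 * 4 * 22 * (p-1)) handles the other subcase the same way.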

Case Ib, where 5 is not a divisor of N. Then we have (3/2)(11/10)(17/16)(23/22) < 2 so we're done. (Note that if this hadn't been less than 2 we could have used that we can't actually have both 11|N and 23|N so we could have broken this down further but we didn't need to).

Case II: We've assumed that 3 doesn't divide N. Note that (5/4)(7/6)(11/10)(13/12) < 2, so we're done. (Again note that if this had been too large we could have broken down into subcases depending on whether 5 divides N and then thrown 11 out in one of them.)

That's the basic technique. In practice, one wants to do as much of this as one can via computers and there are a few other ideas and tweaks which one can do to save on case checking.

(Note: I'm in the process of copying some older math posts from Facebook over here. This is one of those posts.) 

I've mentioned before that there are a lot of simple to state but not so incredibly famous open problems. I want to talk about two thematically related problems, Lehmer's Totient Problem, and Carmichael's Conjecture. These problems are well known to number theorists but don't have the same general fame as say Goldbach's Conjecture or the existence of odd perfect numbers.

Recall two numbers are relatively prime when their greatest common divisor is 1. For example, 15 and 8 are relatively prime. But 21 and 15 are not relatively prime because their GCD is 3. We'll define phi(n) to be the number of positive integers which are less than or equal to n and relatively prime to n. This function is known as the Euler phi function. For example, phi(9) =6 because under 9 we have 1,2,4,5,7 and 8 as numbers which are relatively prime to 9. Note that if p is a prime, then phi(p)=p-1. For example, phi(5) =4 since the numbers which are less than or equal to 5 and relatively prime to 5 are everybody: 1,2,3 and 4. (If you haven't seen the phi function before, it might be worth playing around with. Can you find phi(20)? What is phi of a power of 2? What is phi of a prime power in general?)

For a given positive integer a and another integer n we want to understand how big the smallest positive integer k is so that a^k leaves a remainder of 1 when we divide by n. For example, if a =2 and n =5, then one may take k=4, since 2^4 = 16 which leaves a remainder of 1 when we divide by 5. Similarly, if a=4 and n=7, then k = 3, since 4^3 = 64 which leaves a remainder of 1 when divided by 7.

Such a k doesn't always exist for a given n. If a and n share a common factor greater than 1, then so will a^k and n, and so the remainder of a^k when divided by n will also share the common factor, and so the remainder cannot be 1. For example, let's look at powers of 2 and their remainder when we divide by 10. We have 2,4, 8, 16, 32,64, 128, 256. When we divide by 10, we get as remainders 2,4,8, 6, 2,4,8, 6 ... And it keeps repeating this way.

But maybe there always is such a k as long as a and n are relatively prime. This turns out to be the case.

The first step towards proving this was due to Fermat. He proved what is now often called "Fermat's Little Theorem" (to contrast it with "Fermat's Last Theorem", but confusingly also abbreviated as FLT), which says that if p is prime, and a is not divisible by p (hence they must be relatively prime), then a^(p-1) leaves a remainder of 1 when one divides by p. So when n = p, we can in this situation always take k = p-1. The number p-1 may remind you of something we had earlier: when p was prime, we had phi(p) = p-1. (A good exercise is to verify this works with p=7. That is, check that a^6 always leaves a remainder of 1 when you divide by 7, for a = 1, 2, 3, 4, 5 or 6.)

And this is in fact the proper generalization! Euler proved a much stronger result. Recall we defined phi(n) to be the number of positive integers which are less than or equal to n, and which share no common factor greater than 1 with n. Then Euler proved that if a and n share no common factor, then a^phi(n) leaves a remainder of 1 when we divide by n. So the k we care about is always at most phi(n). One might be interested, for a given a and n, in the least value of k; that's a delicate question that my own PhD work touched on. But that's a separate question. Phi(n) also shows up in some practical cryptographic applications, but that's not our emphasis today.
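This least k is called the multiplicative order of a mod n. A quick brute-force sketch, checked against the examples above (2 mod 5 and 4 mod 7):

```python
from math import gcd

def order(a, n):
    """The least k >= 1 with a^k leaving remainder 1 when divided by n,
    for n > 1. Such a k exists exactly when gcd(a, n) == 1."""
    assert gcd(a, n) == 1
    k, power = 1, a % n
    while power != 1:
        power = (power * a) % n
        k += 1
    return k

assert order(2, 5) == 4      # the example from the text: 2^4 = 16
assert order(4, 7) == 3      # and 4^3 = 64
assert order(3, 7) == 6      # here k is as large as phi(7) = 6 itself
```

Euler's theorem guarantees the loop terminates within phi(n) steps; in fact the order always divides phi(n), which is a nice exercise to verify experimentally with this function.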

D. H. Lehmer in the 1930s was interested in the fact that if p is a prime, then phi(p) = p-1. Lehmer noted that if n is composite, then obviously phi(n) has to be strictly smaller than n-1, but maybe it fits neatly into n-1. That is, maybe there is a composite number n where phi(n) divides n-1? Lehmer conjectured that this was not the case. We don't know, but we do know that if there is such a number it needs to be pretty big (greater than 10^22), and it needs to have at least 15 distinct prime factors. If someone has the time and effort, with some decent amount of computing power they could likely improve this substantially. The technique used to obtain that bound is a little more clever than just checking every number under 10^22, but it doesn't involve deeply subtle number theory. My suspicion is that someone who is a good algorithmist and has access to decent computing power should be able to improve that bound.

The other conjecture worth talking about in this context is Carmichael's Conjecture. Carmichael noticed that some numbers never arise as the value of phi(n). For example, phi(n) is always even if n > 2. To see this, note that if b is relatively prime to n, then so is n-b, so for n > 2, the numbers relatively prime to n and less than n come in pairs. So any odd number other than 1 is never the value of the phi function. With a bit more work one can show that some other numbers don't arise either. For example, phi(x) = 14 has no solutions.

But, Carmichael noticed that when there was a solution to phi(x)=k, there always seemed to be another y such that x was not equal to y and phi(y)=k. For example, we saw earlier that phi(5)=4. But phi(8) =4 also, since 1, 3,5 and 7 are the numbers which are less than or equal to 8 and relatively prime to 8. Carmichael conjectured that this was always true. That is, if phi(x)=k had a solution, then there was always more than one solution. Another similar example is how we saw that phi(9)=6, but it is also the case that phi(7)=6. (A good exercise: There are two other m such that phi(m)=6. Can you find them?). In this case, we also have very large bounds. In particular, we know that any counterexample to Carmichael's conjecture must be greater than 10^(10^10). That's pretty big.
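These claims are easy to check by computer up to a bound. A sketch using a sieve for phi; the bound 2000 is safe for the phi(x) = 14 claim because one can show phi(n) >= sqrt(n/2), so phi(n) = 14 would force n <= 392.

```python
def phi_table(limit):
    """phi(n) for n = 0..limit, via a standard sieve."""
    phi = list(range(limit + 1))
    for p in range(2, limit + 1):
        if phi[p] == p:                   # p is prime: untouched so far
            for m in range(p, limit + 1, p):
                phi[m] -= phi[m] // p     # multiply by (1 - 1/p)
    return phi

phi = phi_table(2000)
assert all(phi[n] != 14 for n in range(1, 2001))    # 14 is never a phi value
assert [n for n in range(1, 2001) if phi[n] == 6] == [7, 9, 14, 18]
```

The second assertion also answers the exercise above: besides 7 and 9, the other two m with phi(m) = 6 are 14 and 18.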

(Note: I'm in the process of copying some older math posts from Facebook over here with some slight modifications. This is one of those.)

There are a lot of times people talk about famous open mathematical problems. But less attention is paid to some of the problems that are both simple to state and aren't very famous at all. Let's talk about one of those not so famous but still simple open problems.

We're interested in the minimum number of 1s needed to write a positive integer n using 1s, addition, multiplication, and any number of parentheses, and we'll call that ||n||. For example, the equation 6 = (1+1)(1+1+1) shows that ||6|| is at most 5, since it took 5 1s. A little playing around will convince you that you cannot write 6 using just four or fewer 1s, so it really is the case that ||6|| = 5. We call ||n|| the integer complexity of n.

Now here's the problem: does every power of 2 have a most efficient representation of the obvious form? That is, is it always true that ||2^k|| = 2k? For example, 8 = (1+1)(1+1)(1+1) and in fact ||8|| = 6. Similarly, 16 = (1+1)(1+1)(1+1)(1+1), and ||16|| = 8. (Note that representations can be tied; for example, we can also write 8 = (1+1+1+1)(1+1), but that also takes 6 ones, not fewer.)
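Integer complexity is easy to compute for small n by dynamic programming: for each n, take the best way to split it as a sum or as a product of smaller numbers. A sketch (fine for small n; the sum loop makes it quadratic, and serious computations use cleverer methods):

```python
def complexity_table(limit):
    """||n|| for n = 1..limit: minimize over every split of n as a sum
    a + (n - a) or as a product d * (n / d)."""
    c = [0, 1] + [None] * (limit - 1)   # c[1] = 1; index 0 unused
    for n in range(2, limit + 1):
        best = min(c[a] + c[n - a] for a in range(1, n // 2 + 1))
        d = 2
        while d * d <= n:
            if n % d == 0:
                best = min(best, c[d] + c[n // d])
            d += 1
        c[n] = best
    return c

c = complexity_table(200)
assert c[6] == 5 and c[8] == 6 and c[16] == 8
assert all(c[2 ** k] == 2 * k for k in range(1, 8))   # holds as far as checked
```

Of course, no finite computation like this can settle the question; it only confirms that the pattern holds in the range checked.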

One might think that the answer should be obviously yes, so let's look for a moment at powers of 5. Does every power of 5 have the obvious representation as best? From ||5|| = 5, we'd expect to get that ||25|| = 10, and we do. The pattern continues with ||5^3|| = 15, ||5^4|| = 20, and ||5^5|| = 25, but then it breaks down: ||5^6|| = 29. This in some sense happens because 5^6 - 1 has a lot of small prime factors. But it isn't obvious that something similar couldn't happen with powers of 2. This problem was, as far as I'm aware, first proposed by Richard Guy. It is a problem which is dear to me, and one where I'm responsible for some of the progress, together with Harry Altman, who has written a lot about this problem on his Dreamwidth discussing his published works on it.
 

My intuition on this problem keeps jumping back and forth on whether it is true or not. Right now, I'm leaning towards it being false.

(Note: I'm in the process of copying some older math posts from Facebook over here. This is the first of those.)
 
 In school, one often learns to solve equations in a single variable, something like x^2-5x+6=0. But when one has more than one variable what can one say about the solution set? For example, one might have x^2 -2y^2 =1 which has a lot of solutions. 
One thing we're interested in is equations with possibly more than one variable, where we only care about integer solutions. For example, in our equation x^2 - 2y^2 = 1, we might care about solutions like x=1, y=0, or x=-1 and y=0, or the solution x=3 and y=2, but we won't care about solutions like x=2 and y = 1.2247... When we care only about an equation's integer solutions, we'll call it a Diophantine equation, named after the Greek mathematician Diophantus, who studied similar equations; most of the time Diophantus himself was interested in the slightly broader set of rational solutions, but the name stuck. It turns out that x^2 - 2y^2 = 1 has infinitely many integer solutions, but establishing that takes more work.
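One standard way to see where those infinitely many solutions of x^2 - 2y^2 = 1 come from is the Pell-equation recurrence (x, y) -> (3x + 4y, 2x + 3y), which one can check preserves x^2 - 2y^2 while strictly increasing x. A quick sketch (an illustration, not a proof; the function name is mine):

```python
def pell_solutions(count):
    """First `count` nonnegative solutions of x^2 - 2y^2 = 1, starting from (1, 0)."""
    x, y = 1, 0
    out = []
    for _ in range(count):
        out.append((x, y))
        # Expanding (3x+4y)^2 - 2(2x+3y)^2 gives back x^2 - 2y^2, so this map
        # sends solutions to solutions; since x strictly grows, they are all distinct.
        x, y = 3 * x + 4 * y, 2 * x + 3 * y
    return out

print(pell_solutions(4))  # [(1, 0), (3, 2), (17, 12), (99, 70)]
```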
 
Sometimes a Diophantine equation has no solutions at all. For example, let's look at x^2 + y^2 = 3. How can we tell that this equation has no integer solutions? Well, one way is to notice that if x=a and y=b is a solution, then so is x=-a and y=-b, because x and y are always squared in this equation, and (-m)^2 = m^2 for any m; so we may as well assume x and y are nonnegative. But if x is at least 2, then x^2 is at least 4, which is already greater than 3, so we need only look at options where x and y are each 0 or 1. Now, we can just check the possibilities: 1^2 + 0^2 = 1, which is not 3; 0^2 + 0^2 = 0, which is not 3; and 1^2 + 1^2 = 2, which is not 3.
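The finite check at the end can be written out mechanically; here is a one-line sanity check in Python:

```python
# By the symmetry and size arguments above, any solution of x^2 + y^2 = 3
# would need x and y each in {0, 1}; none of those four cases gives 3.
cases = [(x, y, x * x + y * y) for x in (0, 1) for y in (0, 1)]
print(cases)  # [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 2)] -- no 3 anywhere
```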
 
What about a slightly harder equation? What if we want to look at the Diophantine equation x^2 + y^2 = 3 + 4t? It seems like this equation should be much harder than the previous one. Any proof that this equation has no solutions should be trickier than the proof for x^2 + y^2 = 3, because that equation is a special case of this one; in particular, it is the case where we insist that t=0 in our new equation.
 
It seems that our previous method won't work because if t is big, then x or y could be large also. So for x^2+y^2=3+4t we can't in any obvious way reduce the set of solutions to just a finite set to check. Or can we?
 
Instead of looking at a finite set of integers, we'll look at the remainders of x and y when we divide by 4. We'll write the remainder of a when divided by b as "a (mod b)", and we'll say that two numbers are equivalent mod b if they have the same remainder when we divide by b. For example, 10, 4 and 16 are all equivalent mod 6, because they all leave a remainder of 4 when we divide by 6. Similarly, 5 and 9 are not equivalent mod 3, because 5 leaves a remainder of 2 when we divide by 3, but 9 leaves a remainder of 0. Let's look at the remainders of n^2 when we divide by 4.
1^2 = 1 = 1 (mod 4), 2^2 = 4 = 0 (mod 4), 3^2 = 9 = 1 (mod 4), 4^2 = 16 = 0 (mod 4). We might notice a pattern here: any perfect square leaves a remainder of either 1 or 0 when we divide by 4.
 
Let's see why that's the case. If n is an even number then we can write n=2k for some integer k. Then n^2 = (2k)^2 = 4k^2 = 4(k^2) so n^2 leaves a remainder of 0 when we divide by 4. If n is odd, then n=2k+1 for some k, and in that case n^2 = (2k+1)^2 = 4k^2+4k+1 = 4(k^2 +k)+1 so, n^2 in that case leaves a remainder of 1 when we divide by 4.
 
Great, how is this relevant to trying to understand whether x^2 + y^2 = 3+4t has any integer solutions? Here's the key: we know that x^2 has a remainder of 1 or 0 when we divide by 4, and the same goes for y^2, so their sum x^2 + y^2 can have a remainder of 1+0, 0+1, 1+1 or 0+0 when we divide by 4. That means that no matter what values we choose for x and y, x^2 + y^2 will have a remainder of 0, 1 or 2 when we divide by 4; it will never have a remainder of 3. But 3+4t will always have a remainder of 3 when we divide by 4. So there's no way the two sides can ever be equal, because their remainders will always be different.
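Since remainders mod 4 repeat with period 4, checking n = 0, 1, 2, 3 covers every integer, and the whole argument fits in a few lines of Python:

```python
# Squares mod 4: checking n = 0, 1, 2, 3 covers every integer, since
# (n + 4)^2 = n^2 + 8n + 16 leaves the same remainder mod 4 as n^2.
square_residues = {(n * n) % 4 for n in range(4)}
sum_residues = {(a + b) % 4 for a in square_residues for b in square_residues}
print(square_residues, sum_residues)  # {0, 1} {0, 1, 2}
# 3 + 4t always leaves remainder 3, and 3 is not among the possible sums.
```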
 
One can often use this sort of approach. For example, one can look at remainders when dividing by 3, to show that x^2 = 2 + 3y doesn't have any integer solutions. (This is a good exercise.)
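If you want to check your answer to the exercise, the key residue computation takes one line (mild spoiler: it shows which remainders a square can leave mod 3):

```python
# Squares mod 3: checking n = 0, 1, 2 covers every integer,
# while 2 + 3y always leaves remainder 2 when divided by 3.
print({(n * n) % 3 for n in range(3)})  # {0, 1}
```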
 
Sometimes an equation has no solutions other than a "trivial" solution. For example, we could take our earlier equation x^2 + y^2 = 3+4t and modify it to get (x^2 + y^2)(x^2 + y^2 + t^2) = (3+4t)(x^2 + y^2 + t^2). This new equation has a silly solution, x = y=t=0. But it doesn't have any other solutions; to see this note that if x^2 + y^2 + t^2 is not zero, then we can divide by it so we get just our original equation x^2 + y^2 = 3+4t back. But if x^2+y^2+t^2 =0, then we must have x=y=t=0.
 
Many equations have a solution where all the variables are zero, but no other solutions. For example, x^2 = 2y^2 only has the solution x=0 and y=0. (See if you can see why.)
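A finite search is of course not a proof here (the proof is an infinite descent: x^2 = 2y^2 forces x to be even, say x = 2k, then y^2 = 2k^2 forces y to be even too, and we could descend forever unless x = y = 0), but a quick brute-force check can build confidence:

```python
# Brute-force check over a finite range; this is evidence, not a proof --
# the real argument is the infinite-descent one sketched above.
nontrivial = [(x, y) for x in range(1, 200) for y in range(1, 200)
              if x * x == 2 * y * y]
print(nontrivial)  # []
```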
 
One might hope that maybe when a Diophantine equation doesn't have solutions other than a trivial solution, one can always use these methods, either just looking at remainders, or being able to bound how big the variables can be and solving accordingly, or at least other similarly simple methods. But alas, this is not the case. There are equations where there are no solutions, but where no remainder argument will ever show you that.
A somewhat "famous" example is the Cassels-Guy cubic 5x^3 + 9y^3 + 10z^3 + 12w^3 = 0. This Diophantine equation has no solutions other than all variables being equal to zero, but if you pick any n and look at the remainders when one divides by n, you'll never get a contradiction just from that; and in a certain deeper sense, even techniques which generalize the remainder argument won't ever tell you that there are no solutions. And if one lists other similarly simple methods, one can show that they won't lead to a contradiction either.
 
The Cassels-Guy cubic is annoying for another related reason. One might naively guess that if an equation has lots of variables, it should have non-zero solutions as long as there aren't any such remainder issues, since more variables give more room/flexibility. One can try to make this notion mathematically precise, but the obvious precise versions would incorrectly predict that the Cassels-Guy cubic does have nonzero solutions.
 
In 1900, David Hilbert listed 23 problems that he thought should be of the most importance for mathematicians to try to resolve in the 20th century. One of those problems, the 10th, was to find an algorithm which, given a Diophantine equation, tells you whether or not it has solutions. In the 1950s and early 1960s, Martin Davis, Hilary Putnam, and Julia Robinson almost showed that there was no such algorithm. They proved that there was no such general procedure if one allowed a slightly larger set of equations than what is normally called a Diophantine equation. In particular, note that in all of our examples, we've never had a variable in an exponent. For example, we haven't looked at something like 2^y = x^t + x^2 + x. Davis, Putnam, and Robinson proved that if one is allowed such equations (sometimes called "exponential Diophantine equations"), then there is no algorithm to tell whether such an equation has solutions.
 
A few years later, Yuri Matiyasevich, building on their work, showed that even for ordinary Diophantine equations there was no solution to Hilbert's tenth problem. A key step was constructing a Diophantine equation whose solution set, in a certain sense, looks exactly like the Fibonacci numbers. One consequence of Matiyasevich's work was that even if one insists that no variable be raised to a power higher than the fourth, there is still no such algorithm. An obvious question then arises: is the same thing true for equations where no variable is raised to a power greater than 3? No one knows. The Cassels-Guy cubic shows that if an algorithm does exist to solve such equations, it must have some substantial subtlety behind it.
 Lots of people like to write fanfiction in the Earth 2020 setting. This is understandable. The setting is really popular, and it gets updated yearly. (Although why they feel a need to change the setting name each year I don't fully understand.)

People can write all sorts of things in the setting and take for granted that readers know the setting basics. If someone writes fanfic for, say, Harry Potter, in contrast, interest will be much narrower, mostly limited to Harry Potter fans. Even major figures who haven't been around in the Earth 2020 setting for years are well known; pretty much everyone knows about major characters like Abraham Lincoln, Albert Einstein, or Margaret Thatcher.

And the characters are so complicated and nuanced. That's part of how one gets such detailed arguments over setting minutiae, like just how fans should think of Winston Churchill. Should they be happy for his role in defeating one of the major villains in the setting, or should fans dislike him for his actions surrounding India? Similarly, did the USSR really try to beat the US to the moon or not? What great questions for fans to discuss.

Of course, lots of people write all sorts of things to try to fix the setting. Apparently a lot of people don't like the lack of an explicit magic system in Earth 2020, and people have handled that in all sorts of different ways. In the last few years, post-apocalyptic or dystopian Young Adult variants of the Earth 2020 have become really popular, imagining the setting jumping ahead a few years as it takes a grimmer and darker tone.

But given all of this, one thing remains puzzling. Almost no one ever says they are writing fanfic in this setting. Sure, they'll skip a hundred years in the setting, and imagine the Earth 2020 setting with advanced spaceships or faster-than-light travel (something the series authors seem to really not like in the main canon), but they don't even label it as Earth 2020 fanfic. Similarly, people write all sorts of romances or novels set in the Earth 2020 setting, focusing on minor characters of their own invention. But they pretty much never credit the setting.

Let's do better in the future. Be open and honest when we're writing in the Earth 2020 setting. Hi, I'm Joshua Zelinsky, and I've written Earth 2020 fanfic.
