(Note: I'm in the process of copying some older math posts from Facebook over here. This is the first of those.)
In school, one often learns to solve equations in a single variable, something like x^2 - 5x + 6 = 0. But when one has more than one variable, what can one say about the solution set? For example, one might have x^2 - 2y^2 = 1, which has a lot of solutions.
One thing we're interested in is equations with possibly more than one variable, where we only care about integer solutions. For example, in our equation x^2 - 2y^2 = 1, we might care about solutions like x=1, y=0, or x=-1 and y=0, or the solution x=3 and y=2, but we won't care about solutions like x=2 and y = 1.225... When we care about an equation's integer solutions only, we'll call it a Diophantine equation. The term is named after the Greek mathematician Diophantus, who studied similar equations; most of the time Diophantus himself was interested in the slightly broader set of rational solutions, but the name stuck. It turns out that x^2 - 2y^2 = 1 has infinitely many integer solutions, but establishing that takes more work.
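If you want to play with this yourself, here's a quick Python sketch (not part of the argument, just a way to experiment) that brute-forces the small solutions; the bound of 100 is an arbitrary cutoff.

```python
# Brute-force search for small integer solutions of x^2 - 2y^2 = 1.
# The bound of 100 is arbitrary; it's just enough to see several solutions.
solutions = [(x, y)
             for x in range(-100, 101)
             for y in range(-100, 101)
             if x * x - 2 * y * y == 1]
print(solutions)
# The list includes (1, 0), (3, 2), (17, 12), (99, 70) and their sign variants.
```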
Sometimes a Diophantine equation has no solutions at all. For example, let's look at x^2 + y^2 = 3. How can we tell that this equation has no integer solutions? Well, one way of doing so is to notice that if x=a and y=b is a solution, then so is x=-a and y=-b, because x and y are always squared in this equation, and (-m)^2 = m^2 for any m; so we only need to check non-negative values of x and y. And if x (or y) is at least 2, then x^2 + y^2 is at least 4, which is already greater than 3. So we need only look at options where x and y are each 0 or 1. Now we can just check the possibilities: 0^2 + 0^2 = 0, which is not 3; 1^2 + 0^2 = 1, which is not 3; and 1^2 + 1^2 = 2, which is not 3.
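The finite check is easy to do by hand, but here it is as a couple of lines of Python, searching a slightly wider range than the argument actually requires, just to be thorough:

```python
# Exhaustively check x^2 + y^2 = 3 over a small range.  By the argument above,
# any solution would need |x| and |y| at most 1, so this range is more than enough.
hits = [(x, y) for x in range(-3, 4) for y in range(-3, 4) if x * x + y * y == 3]
print(hits)  # [] -- no integer solutions
```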
What about a slightly harder equation? What if we want to look at the Diophantine equation x^2 + y^2 = 3 + 4t? It seems like this equation should be much harder than the previous one. Any proof that this equation doesn't have any solutions should be at least as tricky as the proof for x^2 + y^2 = 3, because that equation is a special case of this one; in particular, it is what we get when we insist that t = 0 in our new equation.
It seems that our previous method won't work because if t is big, then x or y could be large also. So for x^2+y^2=3+4t we can't in any obvious way reduce the set of solutions to just a finite set to check. Or can we?
Instead of looking at a finite set of integers, we'll look at the remainders of x and y when we divide by 4. We'll write the remainder of a when divided by b as "a (mod b)", and we'll say that two numbers are equivalent mod b if they have the same remainder when we divide by b. For example, 10, 4 and 16 are all equivalent mod 6, because they all leave a remainder of 4 when we divide by 6. Similarly, 5 and 9 are not equivalent mod 3, because 5 leaves a remainder of 2 when we divide by 3, but 9 leaves a remainder of 0. Let's look at the remainders of n^2 when we divide by 4.
1^2 = 1 = 1 (mod 4), 2^2 = 4 = 0 (mod 4), 3^2 = 9 = 1 (mod 4), 4^2 = 16 = 0 (mod 4). We might notice a pattern here: any perfect square leaves a remainder of either 1 or 0 when we divide by 4.
Let's see why that's the case. If n is an even number, then we can write n = 2k for some integer k. Then n^2 = (2k)^2 = 4k^2 = 4(k^2), so n^2 leaves a remainder of 0 when we divide by 4. If n is odd, then n = 2k + 1 for some integer k, and in that case n^2 = (2k+1)^2 = 4k^2 + 4k + 1 = 4(k^2 + k) + 1, so n^2 in that case leaves a remainder of 1 when we divide by 4.
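Here's a quick check of that pattern in Python, where the % operator gives the remainder on division (the even/odd argument above is what actually proves it for every n, of course):

```python
# Remainders of n^2 when divided by 4, for n = 0, 1, ..., 19.
print([n * n % 4 for n in range(20)])
# [0, 1, 0, 1, 0, 1, ...] -- only 0 and 1 ever show up, matching the argument above.
```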
Great, but how is this relevant to understanding whether x^2 + y^2 = 3 + 4t has any integer solutions? Here's the key: we know that x^2 has a remainder of 1 or 0 when we divide by 4, and the same goes for y^2, so their sum x^2 + y^2 can have a remainder of 1+0, 0+1, 1+1 or 0+0 when we divide by 4. That means that no matter what values we choose for x and y, x^2 + y^2 will have a remainder of 0, 1 or 2 when we divide by 4; it will never have a remainder of 3. But 3 + 4t will always have a remainder of 3 when we divide by 4. So there's no way the two sides can ever be equal, because their remainders will always be different.
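We can also see the whole argument in a couple of lines: since only the remainders of x and y mod 4 matter, letting x and y run from 0 to 3 covers every case.

```python
# Which remainders mod 4 can x^2 + y^2 have?  Only x mod 4 and y mod 4 matter,
# so checking x and y from 0 to 3 covers every possibility.
lhs_residues = {(x * x + y * y) % 4 for x in range(4) for y in range(4)}
print(lhs_residues)     # {0, 1, 2} -- never 3
print((3 + 4 * 7) % 4)  # 3, and the same for any other value of t in place of 7
```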
One can often use this sort of approach. For example, one can look at remainders when dividing by 3, to show that x^2 = 2 + 3y doesn't have any integer solutions. (This is a good exercise.)
Sometimes an equation has no solutions other than a "trivial" solution. For example, we could take our earlier equation x^2 + y^2 = 3 + 4t and modify it to get (x^2 + y^2)(x^2 + y^2 + t^2) = (3 + 4t)(x^2 + y^2 + t^2). This new equation has a silly solution, x = y = t = 0. But it doesn't have any other solutions; to see this, note that if x^2 + y^2 + t^2 is not zero, then we can divide both sides by it and we get just our original equation x^2 + y^2 = 3 + 4t back. But if x^2 + y^2 + t^2 = 0, then we must have x = y = t = 0.
Many equations have a solution where all the variables are zero, but no other solutions. For example, x^2 = 2y^2 only has the solution x = 0 and y = 0. (See if you can see why.)
One might hope that when a Diophantine equation doesn't have solutions other than a trivial solution, one can always use these methods: either just looking at remainders, or bounding how big the variables can be and checking the finitely many possibilities, or at least other similarly simple methods. But alas, this is not the case. There are equations which have no solutions, but where no remainder argument will ever show you that.
A somewhat "famous" example is the Cassels-Guy cubic 5x^3 + 9y^3 + 10z^3 + 12w^3 = 0.
This Diophantine equation has no solutions other than all variables being equal to zero, but if you pick any n and look at the remainders when you divide by n, you'll never get a contradiction just from that; and in a certain deeper sense, even techniques which generalize the remainder argument won't ever tell you that there are no solutions. And if one lists other similarly simple methods, one can show that they won't lead to a contradiction either.
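You can get a feel for the first part of this claim experimentally. The sketch below (my own illustration, not anything from Cassels and Guy) searches, for each small n, for remainders x, y, z, w that don't all share a common factor with n and that satisfy 5x^3 + 9y^3 + 10z^3 + 12w^3 = 0 (mod n); every n tried here turns one up, which is exactly why no single remainder argument can rule out solutions.

```python
from math import gcd
from itertools import product

def primitive_solution_mod(n):
    """Look for x, y, z, w mod n, with gcd(x, y, z, w, n) = 1, satisfying
    5x^3 + 9y^3 + 10z^3 + 12w^3 = 0 (mod n)."""
    for x, y, z, w in product(range(n), repeat=4):
        if gcd(gcd(gcd(gcd(x, y), z), w), n) != 1:
            continue  # skip tuples that amount to the trivial solution mod n
        if (5 * x**3 + 9 * y**3 + 10 * z**3 + 12 * w**3) % n == 0:
            return (x, y, z, w)
    return None

for n in range(2, 14):
    print(n, primitive_solution_mod(n))  # a solution shows up for every n tried
```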
The Cassels-Guy cubic is annoying for another, related reason. One might naively guess that if an equation has lots of variables, it should have non-zero solutions as long as there aren't any such remainder issues, since more variables give more room/flexibility. One can try to make this notion mathematically precise, but the obvious precise versions of it would incorrectly predict that the Cassels-Guy cubic does have nonzero solutions.
In 1900, David Hilbert listed 23 problems that he thought should be of the most importance for mathematicians to try to resolve in the 20th century. One of those problems, the 10th, was to find an algorithm which, given a Diophantine equation, tells you whether or not it has solutions. Starting in the 1950s, Martin Davis and Julia Robinson (later joined by Hilary Putnam) came close to showing that there was no such algorithm. They proved that there was no such general procedure if one allowed a slightly larger set of equations than what is normally called a Diophantine equation. In particular, note that in all of our examples, we've never had a variable in an exponent. For example, we haven't looked at something like 2^y = x^t + x^2 + x. Davis, Putnam, and Robinson proved that if one is allowed to have such equations (sometimes called "exponential Diophantine equations"), then there is no algorithm to tell whether such an equation has solutions.
A few years later, Yuri Matiyasevich, building on their work, showed that even for regular Diophantine equations there is no solution to Hilbert's tenth problem. A key step was essentially showing that one can make a Diophantine equation whose solution set, in a certain sense, looks exactly like the Fibonacci numbers. One consequence of Matiyasevich's work was that even if one insists that no variable be raised to a power higher than the fourth, there is still no such algorithm. An obvious question which then arises is whether the same thing is true for equations where no variable is raised to a power greater than 3. No one knows. The Cassels-Guy cubic shows that if an algorithm does exist to solve such equations, it must have some substantial subtlety behind it.