The Oldest Computer Program

I’m preparing for my classes next semester, and one of the courses I’ll be teaching is cryptography (a majors-level course).  Perhaps the most remarkable feature about cryptography, and a telling feature about mathematics in general, is the following:

Computer security relies on mathematical algorithms that are at least 250 years old.

Put another way:  250 years ago, there was no practical use for a very broad area of mathematics.  Today, that area of mathematics is the basis of civilization in the 21st century.  Any time you visit a website with an  https designation, you are using a method of encryption that could have been invented by Thomas Jefferson.

A crucial component of modern cryptographic implementations is the fast powering algorithm.  The first recorded appearance of this algorithm is in the work of Pingala, an Indian mathematician who lived around the 3rd century BC.  Pingala not only wrote down the algorithm, but (in modern terms) wrote down a computer program to implement it.  Incidentally, Pingala’s treatise contains the first known use of a symbol for zero (though in Pingala’s manuscript, it serves only as a marker, not yet a number in its own right).
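In modern notation, Pingala’s idea survives as the square-and-multiply (fast powering) algorithm, which computes a^{n} in roughly \log_2 n multiplications instead of n.  Here’s a minimal sketch in Python; the function name and the modular-arithmetic wrapper are my own additions, since cryptographic applications work modulo some N:

```python
def fast_power(base, exponent, modulus):
    """Compute base**exponent % modulus by squaring.

    Rather than multiplying `base` together `exponent` times, square
    repeatedly and fold in a factor whenever the current binary digit
    of the exponent is 1 -- about log2(exponent) multiplications.
    """
    result = 1
    base %= modulus
    while exponent > 0:
        if exponent & 1:                    # low binary digit is 1
            result = (result * base) % modulus
        base = (base * base) % modulus      # square for the next digit
        exponent >>= 1
    return result

print(fast_power(2, 10, 1000))  # 24, i.e. 1024 mod 1000
```

This agrees with Python’s built-in three-argument pow(base, exponent, modulus), which uses the same idea.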

Here’s a video I’ve prepared on the subject:


Lies Your Math Teacher Told You: PEMDAS

As most of us know, the basic operations of arithmetic are performed in a specific order. This is known as the order of operations, and is usually recalled by the mnemonic PEMDAS: parentheses; exponents; multiplication and division; addition and subtraction.  But even though we learn about PEMDAS in school, it’s important to understand that there are several falsehoods associated with it.

The first is rather subtle:  It’s that arithmetic operations must be done in this order.  This is true…except it isn’t.  In particular, while the order of operations is important, the actual order is less important than the fact that we agree on what the order should be.

The order of operations is a convention:  it’s an agreement on how we do things, but there’s no mathematical justification for it.  We don’t need to do multiplication before addition, any more than we need to drive on the right side of the road.  It’s certainly possible to drive on the left:

What’s important is not which side of the road we drive on, but that we agree which side of the road we drive on.  If some of us choose one side and some of us choose the other, then that’s a problem.

In fact, PEMDAS came about because  there was no general agreement on which side of the road we should drive on…I mean, because there was no standardization of the actual order.  It was only in the early years of the 20th century that the idea of a universal agreement on the order of operations came about (and a good thing, because soon after we were building electronic computers, and unless there was an agreement, a computer built in Britain might produce answers different from one built in the United States).

Another falsehood told about PEMDAS is that it’s PEMDAS.  It should actually be PEM/DA/S.

The MD in PEMDAS stands for multiplication and division. In PEMDAS, multiplication is listed before division, suggesting that in an expression like 20 \div 5 \times 4, you should multiply 5 \times 4 = 20, then divide 20 \div 20 = 1.

But in fact, multiplication and division are equiprecedent:  neither has priority over the other.  Again, this is a convention, like driving on the right side of the road.

Unfortunately, handling them simultaneously is impossible in practice, since one of the two operations must be performed first.  But which one?

The answer is that we do them in the order they appear, from left to right.  In fact, the correct way to state the order of operations is:

All arithmetic operations are to be done from left to right, UNLESS…

Thus:  20 \div 5 \times 4 = 4 \times 4 = 16, since we take care of 20 \div 5 first, then multiply the result by 4.
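Programming languages bake the same convention in.  In Python, for instance, / and * have equal precedence and associate left to right:

```python
# Equal precedence, left to right: (20 / 5) * 4, not 20 / (5 * 4).
print(20 / 5 * 4)    # 16.0
print(20 / (5 * 4))  # 1.0 -- parentheses force the other reading
```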

There’s another problem with the way the order of operations is usually stated:  PEM/DA/S is better, but PM/DA/S is better still.

That’s because exponentiation isn’t an operation.

And here’s the lie that your math teacher told you:  You don’t do exponents before multiplication and division, because exponentiation isn’t an operation.

Exponentiation is a shorthand way of writing out a multiplication.  Remember that a^{3} means a \cdot a \cdot a.  Consider the expression:  5 \cdot 2^{4}.  Generations of students, having memorized PEMDAS, “do” the exponent first, and find 5 \cdot 2^{4} = 5 \cdot 16 = 80.

But in fact, 2^{4} is not an operation:  it’s shorthand for 2 \cdot 2 \cdot 2 \cdot 2.  In which case, 5 \cdot 2^{4} = 5 \cdot 2 \cdot 2 \cdot 2 \cdot 2.

Why does this matter?  There are several reasons.  First, if you try to find 5 \cdot 2^{4} using PEMDAS and finding 2^{4} first, you’re stuck with the product 5 \cdot 16.  On the other hand, if you recognize that 2^{4} is shorthand for “Multiply four 2s together,” then you have the much simpler task 5 \cdot 2 \cdot 2 \cdot 2 \cdot 2 = 10 \cdot 2 \cdot 2 \cdot 2 = 20 \cdot 2 \cdot 2 = 40 \cdot 2 = 80, where multiplying the 5 and 2 together gave us an easier product.
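The same rearrangement, as a toy illustration in code:

```python
# 2**4 is shorthand for 2*2*2*2, so 5 * 2**4 can be built up one
# easy doubling at a time instead of computing 5 * 16 directly.
product = 5
for _ in range(4):
    product *= 2     # 10, 20, 40, 80
print(product)       # 80
assert product == 5 * 2**4
```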

More importantly, consider scientific notation:  8 \times 10^{12}.  No one in their right mind wants to calculate 10^{12} (even though it’s not particularly difficult).  And they don’t have to:  scientific notation is shorthand.  In this case, 10^{12} is shorthand for the number we call one trillion, so 8 \times 10^{12} is 8 trillion.  Likewise, 5 \times 10^{-6} shouldn’t be treated as the product of 5 and whatever you get when you “do” the E in PEMDAS:  10^{-6} is shorthand for one millionth, and this number is 5 millionths.

God and Probability

In 1710, John Arbuthnot, a Scottish physician, published an article purporting to prove the existence of God.  Arbuthnot’s argument was based on the following:

  • For 82 years, the number of boys born in London has been greater than the number of girls born in London.
  • This is too unlikely to happen by chance.  Therefore, “Divine Providence” arranged it.

To see why this argument isn’t a good one, consider the analogous argument: I have a rock. When I let it go, it can either fall up or fall down. So if I let it go 100 times, the chance of it falling every time is 1 in 2^{100}, which is so small that we may regard this occurrence as impossible.

And yet, the rock falls every time. Does this improbable event prove the existence of the “not a sparrow falls” God?

This example should make clear the flaw in Arbuthnot’s argument, as well as those of his modern emulators. The probability assumptions being made are questionable. Arbuthnot assumed boys and girls were born with equal frequency, and when he found they weren’t, concluded God exists. But a better conclusion is that boys and girls aren’t born with equal frequency… because the evidence says they’re not.

This leads to the following conclusion: no probability argument can be used to prove the existence of God. This is because the underlying probabilities are always subject to debate.

For example, if I flip a coin and it lands heads 99 times in a row, I could believe I’ve just witnessed an extremely improbable event. But I’m more inclined to conclude the coin isn’t a fair coin.

Or say we find intelligent alien life out there, a la Star Trek, with one difference: every species we encounter is genetically identical to modern humans. Would this be evidence for a God who made man in His image?

I’ll admit, I’d wonder. But ultimately, I’d question the assumption that the human form is the product of random selection in the genetic lottery: maybe there are undiscovered laws of physics that favor five-phalanged bipeds without feathers.  (Science fiction offers other scenarios, e.g. alien gardeners who make species in the form they want; if you choose to believe God is an alien gardener, then this is an argument that would be compelling.)

The Sixth Wave

Over the past five thousand years, five waves of mathematical innovation have swept the world, leading to rapid advances and significant changes in society:

  • The invention of written number (Fertile Crescent, 3000 BC).  This allowed civilization to exist, because if you want to live with more than your extended family, record keeping is essential…and that means keeping track of numerical amounts.
  • The invention of geometry (Greece, 300 BC).  Yes, geometry existed before then; what I’m using is the date of Euclid’s Elements, which is the oldest surviving deductive geometry.  The idea that you could, from a few simple principles, deduce an entire logical structure had a profound impact on society.  How important?  Consider a rather famous line:  “We hold these truths to be self-evident…”  The Declaration of Independence reads like a mathematical theorem, proving the necessity of revolution from some simple axioms.
  • The invention of algebra (Iraq, 900).  The problem “A number and its seventh make 19; find the number” appears in a 4000-year-old manuscript from ancient Egypt, so finding unknown quantities has a very long history.  What algebra adds is an important viewpoint:  Any of the infinite variety of possible problems can be transformed into one of a small number of types.  Thus, “A farmer has 800 feet of fence and wants to enclose the largest area possible” and “Find a number so the sum of the number and its reciprocal is 8” and “The sum of two numbers is 12 and their product is 20” can all be reduced to ax^{2} + bx + c = 0 and solved using the quadratic formula x = \dfrac{-b \pm \sqrt{b^{2} - 4ac}}{2a}.
  • The invention of calculus (Europe, 1600).  Algebra is the mathematics of what is.  Calculus is the mathematics of how things change.  Calculus makes physics possible, and from physics comes chemistry and engineering.
  • The invention of statistics (Europe, 1900).  Both algebra and calculus deal with single objects:  a bridge, a number, a moving planet.  But the universe consists of many similar objects:  the human population; the planetary climate; the trash generated by a city.  Statistics aggregates the data on the individual in a way that can be used to describe a population…then uses the information on a population to predict information about an individual.  Everything in modern society, from the pain relievers you use to the road you travel to work, incorporates such a statistical analysis.
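To make the algebra point concrete:  the problem “the sum of two numbers is 12 and their product is 20” reduces to x^{2} - 12x + 20 = 0, which the quadratic formula dispatches.  A small sketch (the function name is mine):

```python
from math import sqrt

def solve_quadratic(a, b, c):
    """Real roots of ax^2 + bx + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []  # no real roots
    return sorted([(-b - sqrt(disc)) / (2 * a),
                   (-b + sqrt(disc)) / (2 * a)])

# "The sum of two numbers is 12 and their product is 20":
# both numbers satisfy x^2 - 12x + 20 = 0.
roots = solve_quadratic(1, -12, 20)
print(roots)  # [2.0, 10.0] -- and indeed 2 + 10 = 12, 2 * 10 = 20
```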

Many people, myself included, believe we are on the verge of a sixth wave.  That sixth wave will have the transformative power of calculus and statistics, and fundamentally reshape society.

The sixth wave is based around discrete mathematics.  That’s not mathematics you whisper in dark corners.  Rather, it’s the mathematics of things that can be counted as opposed to measured.  For example, length is continuous:  a length can have any value, and no one looks at you strangely if you say “I traveled 1.38924 miles today…”  (You might get some strange looks, but it’s because you specified the distance so precisely and not because of the distance itself)  But if you continued “…and met 2.35 people,” you would get strange looks, because the number of people you meet is a counted number:  it’s a discrete quantity.

How important is discrete mathematics?  If calculus is the basis for physics and engineering, then linear algebra is the basis for discrete mathematics.  But a first-year calculus student would have a hard time solving even a simple question in statics (the physics of structures).  In contrast, Google’s search algorithm is based on mathematics learned in the first half of a standard college linear algebra course.
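Google’s production system is far more elaborate, but its mathematical core, PageRank, is an eigenvector computation you can carry out by repeated matrix-vector multiplication.  A toy sketch on a made-up four-page web (the link graph is invented for illustration; 0.85 is the commonly cited damping factor):

```python
# Toy PageRank by power iteration on a hypothetical 4-page web.
# links[i] = list of pages that page i links to (made-up graph).
links = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
n = 4
damping = 0.85

rank = [1 / n] * n
for _ in range(100):
    new = [(1 - damping) / n] * n
    for page, outs in links.items():
        for target in outs:
            # each page splits its rank evenly among its outlinks
            new[target] += damping * rank[page] / len(outs)
    rank = new

print([round(r, 3) for r in rank])  # page 2 ends up most "important"
```

The vector `rank` settles to an eigenvector of the link matrix; that’s the “first half of a linear algebra course” content the text refers to.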

I’ll talk more about this later.  But if you’re interested in learning some linear algebra, the video lectures for the course I teach are available on YouTube.

Obesity, Poverty, and National Security

According to the internet, if you ate only ramen, you’d save thousands of dollars each year on food.

That sounds great, except there’s a problem: ramen lacks a wide range of essential nutrients and vitamins. You’d lose your teeth to scurvy, a lack of vitamin D would cause your bones to become brittle and easily broken, you’d suffer night blindness from a lack of vitamin A, and you’d be tired all the time from a lack of iron and the B vitamins. In short, all the money you saved on food, and much, much more, would be spent on increased medical care.

The problem is that eating healthy is costly. And this leads to a national security crisis.

If you want the short version, I’ve summarized the key points in a ten-minute video:

A little more mathematics:

Food buyers face what mathematicians call a constrained optimization problem: they have to meet certain caloric and nutritional goals (the constraints), which define a feasible region. Generally speaking, any point in the feasible region defines a solution to the problem; what you want to do is to find the optimal solution.

The optimal solution is generally determined by the objective function. For example, if you lived off x packages of ramen and y eggs, the important objective function might be the total cost of your meals. At 15 cents a pack of ramen and 20 cents an egg, the objective function has the form L = 0.15x + 0.20y, and we might want to minimize the value of the objective function.

In the following, I’ll assume you want to minimize the value of the objective function; the arguments are similar if you’re trying to maximize the value (for example, if you’re designing a set of roads, you might want to maximize the traffic flow through a town center).

There’s a theorem in mathematics that says that, for a linear objective function, the optimal solution will be found on the boundary of the feasible region. The intuition behind this theorem is the following: Imagine any point inside the feasible region. If you change any one of the coordinates while leaving the others the same, the value of the objective function will generally change. The general idea is to move in the direction that decreases the objective function, and continue moving in that direction until you hit the boundary of the feasible region.

At this point, you can’t move any further in that direction. But you can try one of the other directions. Repeating this process allows us to find the optimal solution.

We can go further. Suppose our objective function is linear (like the cost function). Then the same analysis tells us the optimal solution will be found at a vertex of the feasible region. This suggests an elegant way to solve linear optimization problems:

  • Graph the feasible region and locate all the vertices. Generally speaking, the constraints are themselves linear functions, so (in our ramen and egg example) the feasible region will be a polygon.
  • Evaluate the objective function at each vertex.
  • Choose the vertex that minimizes the value of the objective function.
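Here’s the vertex method on a toy version of the ramen-and-eggs problem. The nutrition numbers are invented for illustration: suppose each pack of ramen provides 190 calories and 5 grams of protein, each egg 70 calories and 6 grams, and you need at least 2000 calories and 60 grams of protein a day, at the prices from the objective function above.

```python
from itertools import combinations

# Constraints written as a*x + b*y >= c (x = ramen packs, y = eggs).
# The nutrition numbers are hypothetical, for illustration only.
constraints = [
    (190, 70, 2000),  # calories
    (5, 6, 60),       # protein (grams)
    (1, 0, 0),        # x >= 0
    (0, 1, 0),        # y >= 0
]

def cost(x, y):
    return 0.15 * x + 0.20 * y  # the objective function

# Vertices are intersections of pairs of constraint boundary lines.
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(constraints, 2):
    det = a1 * b2 - a2 * b1
    if det == 0:
        continue  # parallel lines never intersect
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    # keep only the intersections satisfying every constraint
    if all(a * x + b * y >= c - 1e-9 for a, b, c in constraints):
        vertices.append((x, y))

best = min(vertices, key=lambda v: cost(*v))
print(best, round(cost(*best), 2))  # cheapest feasible vertex
```

Note that taking the minimum over the vertices alone is only valid because the objective function is linear; a nonlinear objective could attain its minimum elsewhere in the feasible region.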

Easy, huh? Except…

  • If you have n commodities, you have to work in \mathbb{R}^{n}.
  • This means the feasible region will be some sort of higher-dimensional solid.
  • This also means that finding the vertices of the feasible region will require solving systems of n equations in n unknowns.

In 1945, George Stigler did such an analysis to find a minimal cost diet that met caloric and nutritional requirements. To make the problem tractable, he focused on a diet consisting of just seven food items: wheat flour; evaporated milk; cabbage; spinach; dried navy beans; pancake flour; and pork liver.

As Stigler put it: “Thereafter the procedure is experimental because there does not appear to be any direct method of finding the minimum of a linear function subject to linear conditions.” The problem is that with seven items, you’re working with hyperplanes in \mathbb{R}^{7}, and the constraints will give you hundreds of vertices to check.

Note the date: 1945. What Stigler didn’t know is that there was a method for finding the minimum value easily. But that’s a story for another post…

How To Get Good At Math (In Ten Minutes a Day)

Some years ago, a Certain Toy Corporation got into quite a bit of trouble for marketing a (girl) doll that spoke phrases.  In particular, one phrase:  “Math is hard.”

I’ve always argued that that phrase isn’t objectionable.  Math is hard.  So is throwing a three-point shot in basketball, doing a triple gainer, bowling a perfect game, and changing out a car engine.  One of my favorite episodes of The Bernie Mac Show was when Bernie Mac made this very point:  yes, math is hard…but you do hard things all the time, so why not do math?  (Season 1, Episode 16, Mac 101)

So how do you get better at basketball?  You practice, practice, practice.  The same is true in math, and teachers often tell students this.  And while that’s all true, and certainly good advice, it occurs to me there are two components to being “good at math.”  The first is being good at doing math.  Maybe you’ve just learned how to solve a quadratic equation, and so solving x^{2} - 3x - 7 = 0 takes a little effort.  But after you’ve solved a few hundred quadratic equations, it becomes second nature, and you can throw down the solution (x = \frac{3 \pm \sqrt{37}}{2}) without hesitation.

But the other component of being “good at math,” and ultimately what it means to be a mathematician, is being good at creating math.  This is far more difficult.  It’s the difference between doing 30 push-ups a day, and inventing a new calisthenic.

So how do you do that?  Let’s consider two of the greatest mathematicians ever:  Gauss and Euler.  They actually talked about what it took to be a great mathematician, and the short form is this:  Never solve a problem one time.

For example, in 1736, Euler proved a result of Fermat, namely that if p is prime and 0 < a < p, then a^{p - 1} - 1 is divisible by p.  Euler proved this using an induction argument so obscure that it keeps being rediscovered by mathematicians, both great ones (Laplace and Cauchy) and obscure ones (me, actually…this may be the only time I’ve sat at the same table as Euler, Laplace, and Cauchy).

But Euler didn’t stop with one proof.  About every ten years,  he came up with a new way to prove the theorem.  His re-examination of the problem led him to discover the \varphi-function (where \varphi(n) is the number of numbers less than n which are relatively prime to n) and generalized it to what is now called the Euler-Fermat Theorem:  For any number N and any number a relatively prime to N, the least value x for which a^{x} -1 is divisible by N is a divisor of \varphi(N).
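The Euler–Fermat statement is easy to check numerically.  A quick sketch (the function names are mine, and \varphi is computed by brute force rather than by the factorization-based formula):

```python
from math import gcd

def phi(n):
    """Euler's phi: count of 1 <= k < n with gcd(k, n) == 1."""
    return sum(1 for k in range(1, n) if gcd(k, n) == 1)

def order(a, n):
    """Least x >= 1 with a**x - 1 divisible by n (requires gcd(a, n) == 1)."""
    x, power = 1, a % n
    while power != 1:
        power = (power * a) % n
        x += 1
    return x

N = 20
for a in range(2, N):
    if gcd(a, N) == 1:
        assert phi(N) % order(a, N) == 0  # the order divides phi(N)
print(phi(20))  # 8
```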

Incidentally, this result is the basis of modern computer security (the RSA algorithm).

What about Gauss?  In 1799, Gauss proved the Fundamental Theorem of Algebra, namely that an nth degree polynomial with real coefficients has n real and/or complex roots (counted with multiplicity).

And then, over the course of his career, Gauss proved the Fundamental Theorem three more times, each time extending the result and developing new mathematics.

What’s the practical application?  Let’s consider something really basic:  multiplication of two numbers, say 47 \times 153.  We all know how to do this:  we were taught how to do this computation in school.  We can practice the multiplication algorithm by trying different products:  47 \times 153 today, 23 \times 17 tomorrow, 153 \times 301 the day after, and so on.  If you do this, you will develop your skills at applying the standard algorithm.

On the other hand, suppose that instead of doing new products the way you were taught, what if you tried to find the same product using a completely different method?  You know what the answer is supposed to be, so you’ll have a good way to check if your method works.

How might that work?  In this case, 47 \times 153 is the sum of forty-seven 153s.  So you could add 153 + 153 + 153 + \ldots + 153.  That’s one method of multiplying; one nice feature of it is that it’s something a first grader can do.  (Granted, you’d probably have them do something easier, like 5 \times 4 = 4 + 4 + 4 + 4 + 4, but the important thing is that they don’t have to know multiplication to be able to solve the problem “Find 5 \times 4.”)

Obviously, you don’t want to spend the next half hour adding forty-seven 153s together…but progress comes when someone asks “Can we find a better approach?”  So you start thinking about how to improve the efficiency of your sums.   Maybe tomorrow, you realize 153 = 100 + 50 + 3, so adding together forty-seven 153s is the same as adding together forty-seven 100s, forty-seven 50s, and forty-seven 3s.

And even that gets a little tricky, so the day after, you come up with a new insight that allows you to make the addition even more efficient.
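Two of those stages, as a toy illustration in code:

```python
# Two ways to find 47 * 153, from slowest to the place-value insight.

# Stage 1: the first-grader's repeated addition -- forty-seven 153s.
total = 0
for _ in range(47):
    total += 153

# Stage 2: split 153 = 100 + 50 + 3, then handle three easier products.
total2 = 47 * 100 + 47 * 50 + 47 * 3

print(total, total2)  # both give 7191
assert total == total2 == 47 * 153
```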

What’s the logo?


The main logo for the site is shown above, and it’s my current personal favorite geometric construction.  This comes from Abu’l Wafa, a 10th century Persian geometer, and gives a quick-and-easy method of constructing a nearly regular heptagon:

  • Let ABC be an equilateral triangle inscribed in a circle.
  • Bisect BC at D.
  • CD is very nearly the side of a regular heptagon inscribed in the circle.

How close is it?  The green shows what happens when you mark six sides, equal in length to CD, with the seventh side joining the last vertex to your starting point.  The blue is a regular heptagon.  They’re almost indistinguishable.  (The inset shows that the true regular heptagon does deviate very slightly from the approximate regular heptagon)
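Some quick trigonometry shows just how good the approximation is (a sketch; I take the circle to have radius 1):

```python
from math import sqrt, sin, pi

# Unit circle: the inscribed equilateral triangle has side sqrt(3),
# so CD (half of BC) has length sqrt(3)/2.
approx_side = sqrt(3) / 2        # Abu'l Wafa's near-heptagon side
true_side = 2 * sin(pi / 7)      # side of a true regular heptagon

print(round(approx_side, 5))     # 0.86603
print(round(true_side, 5))       # 0.86777
print(f"{abs(true_side - approx_side) / true_side:.3%}")  # ~0.2% error
```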

I actually used this a few years ago, to set up a seven-sided tomato cage.  I’ll leave the details as an exercise for the reader…