Obesity, Poverty, and National Security

According to the internet, if you ate only ramen, you’d save thousands of dollars each year on food.

That sounds great, except there’s a problem: ramen lacks a wide range of essential nutrients and vitamins. You’d lose your teeth to scurvy from the lack of vitamin C, a lack of vitamin D would leave your bones brittle and easily broken, you’d suffer night blindness from a lack of vitamin A, and you’d be tired all the time from a lack of iron and the B vitamins. In short, all the money you saved on food, and much, much more, would be spent on increased medical care.

The problem is that eating healthy is costly.  And this leads to a national security crisis.

If you want the short version, I’ve summarized the key points in a ten-minute video:

A little more mathematics:

Food buyers face what mathematicians call a constrained optimization problem: they have to meet certain caloric and nutritional goals (the constraints), which together define a feasible region. Generally speaking, any point in the feasible region is a solution to the problem; what you want to do is find the optimal solution.

The optimal solution is determined by the objective function. For example, if you lived off x packages of ramen and y eggs, the natural objective function would be the total cost of your meals. At 15 cents a pack of ramen and 20 cents an egg, the objective function has the form L = 0.15x + 0.20y, and we might want to minimize its value.
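For instance, 20 packs of ramen and a dozen eggs would come to L = 0.15(20) + 0.20(12) = 3.00 + 2.40, or $5.40.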

In the following, I’ll assume you want to minimize the value of the objective function; the arguments are similar if you’re trying to maximize the value (for example, if you’re designing a set of roads, you might want to maximize the traffic flow through a town center).

There’s a theorem in mathematics that says that, for objective functions like this one, the optimal solution will be found on the boundary of the feasible region. The intuition behind the theorem is the following: Imagine any point inside the feasible region. If you change any one of the coordinates while leaving the others the same, the value of the objective function will generally change. The idea is to move in a direction that decreases the objective function, and to keep moving in that direction until you hit the boundary of the feasible region.

At this point, you can’t move any further in that direction.  But you can try one of the other directions.  Repeating this process allows us to find the optimal solution.

We can go further.  Suppose our objective function is linear (like the cost function).  Then the same analysis tells us the optimal solution will be found at a vertex of the feasible region.  This suggests an elegant way to solve linear optimization problems:

  • Graph the feasible region and locate all the vertices. Generally speaking, the constraints are themselves linear functions, so (in our ramen and egg example) the feasible region will be a polygon.
  • Evaluate the objective function at each vertex.
  • Choose the vertex that minimizes the value of the objective function. (A quick computational sketch of this recipe follows below.)
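To make the recipe concrete, here’s a minimal sketch in Python for the two-commodity case. The 15-cent and 20-cent prices come from the example above; the calorie and protein figures for ramen and eggs, and the requirement levels, are made up purely for illustration.

```python
from itertools import combinations
import numpy as np

# Objective: cost = 0.15x + 0.20y (x packs of ramen, y eggs)
c = np.array([0.15, 0.20])

# Constraints written as A @ [x, y] >= b.  The nutritional numbers
# below are illustrative, not real nutrition data.
A = np.array([
    [380.0, 70.0],   # calories: 380 per pack of ramen, 70 per egg
    [8.0,   6.0],    # grams of protein per pack / per egg
    [1.0,   0.0],    # x >= 0
    [0.0,   1.0],    # y >= 0
])
b = np.array([2000.0, 55.0, 0.0, 0.0])   # daily calorie and protein targets

best_vertex, best_cost = None, np.inf
# Each vertex of the feasible polygon is the intersection of two
# constraint boundary lines, so check every pair of boundaries.
for i, j in combinations(range(len(A)), 2):
    M = A[[i, j]]
    if abs(np.linalg.det(M)) < 1e-12:
        continue                          # parallel lines: no vertex here
    v = np.linalg.solve(M, b[[i, j]])
    if np.all(A @ v >= b - 1e-9):         # keep only feasible intersections
        cost = c @ v
        if cost < best_cost:
            best_vertex, best_cost = v, cost

print(f"ramen = {best_vertex[0]:.2f} packs, eggs = {best_vertex[1]:.2f}, "
      f"cost = ${best_cost:.2f}")
```

With these made-up numbers, the cheapest feasible menu turns out to be all ramen and no eggs, which is exactly the kind of “optimal” diet the opening paragraph warned about.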

Easy, huh?  Except…

  • If you have n commodities, you have to work in \mathbb{R}^{n}.
  • This means the feasible region will be some sort of higher-dimensional solid (a polytope).
  • This also means that finding the vertices of the feasible region will require solving systems of n equations in n unknowns.

In 1945, George Stigler did such an analysis to find a minimal-cost diet that met caloric and nutritional requirements. To make the problem tractable, he focused on a diet consisting of just seven food items: wheat flour, evaporated milk, cabbage, spinach, dried navy beans, pancake flour, and pork liver.

As Stigler himself put it: “Thereafter the procedure is experimental because there does not appear to be any direct method of finding the minimum of a linear function subject to linear conditions.” The problem is that with seven items, you’re working with hyperplanes in \mathbb{R}^{7}, and the constraints will give you hundreds of vertices to check.
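To see how quickly the bookkeeping gets out of hand, here’s a quick count in Python. Each candidate vertex comes from intersecting n of the m constraint hyperplanes, so there are at most “m choose n” of them to check; the (n, m) pairs below are illustrative, not Stigler’s actual constraint counts.

```python
from math import comb

# Each candidate vertex in R^n is the intersection of n of the m
# constraint hyperplanes, so there are at most comb(m, n) of them.
# The (n, m) pairs are illustrative, not Stigler's actual counts.
for n, m in [(2, 4), (7, 16), (20, 40)]:
    print(f"{n} commodities, {m} constraints: "
          f"up to {comb(m, n):,} candidate vertices to check")
```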

Note the date: 1945. What Stigler couldn’t have known is that an efficient method for finding the minimum was just around the corner. But that’s a story for another post…

How To Get Good At Math (In Ten Minutes a Day)

Some years ago, a Certain Toy Corporation got into quite a bit of trouble for marketing a talking doll, aimed at girls, whose repertoire of phrases included one in particular: “Math is hard.”

I’ve always argued that that phrase isn’t objectionable. Math is hard. So is sinking a three-point shot in basketball, doing a triple gainer, bowling a perfect game, or changing out a car engine. In one of my favorite episodes of The Bernie Mac Show, Bernie Mac makes this very point: yes, math is hard…but you do hard things all the time, so why not do math? (Season 1, Episode 16, Mac 101)

So how do you get better at basketball?  You practice, practice, practice.  The same is true in math, and teachers often tell students this.  And while that’s all true, and certainly good advice, it occurs to me there are two components to being “good at math.”  The first is being good at doing math.  Maybe you’ve just learned how to solve a quadratic equation, and so solving x^{2} - 3x - 7 = 0 takes a little effort.  But after you’ve solved a few hundred quadratic equations, it becomes second nature, and you can throw down the solution (x = \frac{3 \pm \sqrt{37}}{2}) without hesitation.

But the other component of being “good at math,” and ultimately what it means to be a mathematician, is being good at creating math.  This is far more difficult.  It’s the difference between doing 30 push-ups a day, and inventing a new calisthenic.

So how do you do that?  Let’s consider two of the greatest mathematicians ever:  Gauss and Euler.  They actually talked about what it took to be a great mathematician, and the short form is this:  Never solve a problem one time.

For example, in 1736, Euler proved a result of Fermat, namely that if p is prime and a is a positive integer less than p, then a^{p - 1} - 1 is divisible by p. Euler proved this using an induction argument so obscure that it keeps being rediscovered by mathematicians, both great ones (Laplace and Cauchy) and obscure ones (me, actually…this may be the only time I’ve sat at the same table as Euler, Laplace, and Cauchy).

But Euler didn’t stop with one proof. About every ten years, he came up with a new way to prove the theorem. His re-examination of the problem led him to discover the \varphi-function (where \varphi(n) is the number of positive integers less than n that are relatively prime to n) and to generalize Fermat’s result into what is now called the Euler-Fermat Theorem: for any number N and any number a relatively prime to N, the least value x for which a^{x} - 1 is divisible by N is a divisor of \varphi(N).
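Here’s a small Python sketch that checks this statement: it computes \varphi(N) straight from the definition, finds the least x for which a^{x} - 1 is divisible by N, and confirms that this x divides \varphi(N). The particular values of N and a are arbitrary.

```python
from math import gcd

def phi(n):
    """Euler's phi-function: count of 1 <= k < n with gcd(k, n) == 1."""
    return sum(1 for k in range(1, n) if gcd(k, n) == 1)

def order(a, n):
    """Least x >= 1 with a**x - 1 divisible by n (requires gcd(a, n) == 1)."""
    x, power = 1, a % n
    while power != 1:
        x, power = x + 1, (power * a) % n
    return x

# Arbitrary test cases; any a relatively prime to N will do.
for N, a in [(7, 3), (12, 5), (100, 7), (3233, 17)]:
    x = order(a, N)
    print(f"N={N}, a={a}: order={x}, phi(N)={phi(N)}, "
          f"order divides phi(N): {phi(N) % x == 0}")
```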

Incidentally, this result is the basis of modern computer security (the RSA algorithm).
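For the curious, here is a toy illustration of the connection, with absurdly small numbers (real RSA keys use primes hundreds of digits long). The point is just that the private exponent is built from \varphi(N), and Euler’s theorem is what guarantees that decryption recovers the original message.

```python
# Textbook RSA with tiny numbers -- purely to show the role of phi(N).
p, q = 61, 53                     # two (small) primes
N = p * q                         # public modulus: 3233
phi_N = (p - 1) * (q - 1)         # phi(N) = 3120 when N = p*q
e = 17                            # public exponent, relatively prime to phi(N)
d = pow(e, -1, phi_N)             # private exponent: e*d = 1 (mod phi(N)), Python 3.8+

message = 65
ciphertext = pow(message, e, N)   # encrypt: m^e mod N
recovered = pow(ciphertext, d, N) # decrypt: c^d mod N
print(ciphertext, recovered)      # recovered == 65, courtesy of Euler's theorem
```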

What about Gauss? In 1799, Gauss proved the Fundamental Theorem of Algebra, namely that an nth degree polynomial with real coefficients has n roots, real and/or complex, counted with multiplicity.
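For example, x^{3} - 1 = (x - 1)(x^{2} + x + 1) has one real root, x = 1, and two complex roots, x = \frac{-1 \pm i\sqrt{3}}{2}: three roots in all for a third-degree polynomial.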

And then, over the course of his career, Gauss proved the Fundamental Theorem three more times, each time extending the result and developing new mathematics.

What’s the practical application? Let’s consider something really basic: multiplication of two numbers, say 47 \times 153. We all know how to do this: we were taught the standard algorithm in school. We can practice the algorithm by trying different products: 47 \times 153 today, 23 \times 17 tomorrow, 153 \times 301 the day after, and so on. If you do this, you will develop your skill at applying the standard algorithm.

On the other hand, what if, instead of doing new products the way you were taught, you tried to find the same product using a completely different method? You know what the answer is supposed to be, so you’ll have a good way to check whether your method works.

How might that work? In this case, 47 \times 153 is the sum of forty-seven 153s. So you could add 153 + 153 + 153 + \ldots + 153. That’s one method of multiplying; one nice feature of it is that it’s something a first grader can do. (Granted, you’d probably have them do something easier, like 5 \times 4 = 4 + 4 + 4 + 4 + 4, but the important thing is that they don’t have to know multiplication to be able to solve the problem “Find 5 \times 4.”)

Obviously, you don’t want to spend the next half hour adding forty-seven 153s together…but progress comes when someone asks “Can we find a better approach?”  So you start thinking about how to improve the efficiency of your sums.   Maybe tomorrow, you realize 153 = 100 + 50 + 3, so adding together forty-seven 153s is the same as adding together forty-seven 100s, forty-seven 50s, and forty-seven 3s.

And even that gets a little tricky, so the day after, you come up with a new insight that allows you to make the addition even more efficient.
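Here’s a rough sketch of the game in Python (the function names and the framing are mine, just for illustration): one function multiplies by brute-force repeated addition, and a second uses the 153 = 100 + 50 + 3 idea to add simpler pieces instead.

```python
def multiply_by_adding(a, b):
    """a * b as the sum of a copies of b -- the first grader's method."""
    total = 0
    for _ in range(a):
        total += b
    return total

def multiply_by_splitting(a, b):
    """Split b by place value (153 = 100 + 50 + 3), then add each piece a times."""
    pieces, place = [], 1
    while b > 0:
        b, digit = divmod(b, 10)
        pieces.append(digit * place)   # e.g. 153 -> [3, 50, 100]
        place *= 10
    return sum(multiply_by_adding(a, piece) for piece in pieces)

print(multiply_by_adding(47, 153))     # 7191
print(multiply_by_splitting(47, 153))  # 7191
```

Both agree with the schoolbook answer, 47 \times 153 = 7191, which is the whole point: you already know the answer, so you’re free to experiment with the method.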

What’s the logo?

[Site logo: Abu’l Wafa’s heptagon construction]

The main logo for the site is shown above, and it’s my current personal favorite geometric construction.  This comes from Abu’l Wafa, a 10th century Persian geometer, and gives a quick-and-easy method of constructing a nearly regular heptagon:

  • Let ABC be an equilateral triangle inscribed in a circle.
  • Bisect BC at D.
  • CD is very nearly the side of a regular heptagon inscribed in the circle.

How close is it? The green shows what happens when you mark off six sides, each equal in length to CD, with the seventh side joining the last vertex to your starting point. The blue is a regular heptagon. They’re almost indistinguishable. (The inset shows that the true regular heptagon does deviate very slightly from the approximate one.)
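If you want numbers behind “almost indistinguishable,” here’s a quick check in Python for a circle of radius 1: CD is half the side of the inscribed equilateral triangle, \sqrt{3}/2, while the true heptagon side is 2\sin(\pi/7).

```python
from math import sin, pi, sqrt

cd = sqrt(3) / 2              # Abu'l Wafa's chord: half the triangle's side
true_side = 2 * sin(pi / 7)   # exact side of the inscribed regular heptagon

print(f"CD             = {cd:.5f}")         # about 0.86603
print(f"true side      = {true_side:.5f}")  # about 0.86777
print(f"relative error = {abs(true_side - cd) / true_side:.4%}")  # about 0.2%
```

Each of the six marked sides comes up about 0.2% short, so the seventh, closing side is very slightly longer than the rest.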

I actually used this a few years ago, to set up a seven-sided tomato cage.  I’ll leave the details as an exercise for the reader…