On Cooking, Human Evolution, Obesity, and Type II Diabetes

An interesting write-up over at The Economist on Richard Wrangham’s argument about the role of cooking in human evolution, which he reiterated at the 2009 AAAS Annual Meeting in Chicago. The orthodox view within the anthropological community seems to be that human brain size increased because of a shift in diet from plants to meat. Wrangham’s view is that it is the cooking of food, rather than the switch to meat, that drove the increase in brain size:

Cooking alters food in three important ways. It breaks starch molecules into more digestible fragments. It denatures protein molecules, so that their amino-acid chains unfold and digestive enzymes can attack them more easily. And heat physically softens food. That makes it easier to digest, so even though the stuff is no more calorific, the body uses fewer calories dealing with it.

On the face of it, the argument seems plausible. Cooked food makes more nutrients available and presumably requires less chewing, reducing the size of the jaw and freeing more of the skull’s internal space for the brain.

This is a more application-oriented twist:

Indeed, Dr Wrangham suspects the main cause of the modern epidemic of obesity is not overeating (which the evidence suggests—in America, at least—is a myth) but the rise of processed foods. These are softer, because that is what people prefer.

On a very unscientific level, I once suspected that upma made with fine semolina is digested more quickly (and consequently feels less filling) than upma made with coarse cracked wheat. It is nice to learn that there may be a scientific basis for that hypothesis.

Now: assuming that food habits are harder to change than foods themselves, would it be possible to make people gain less weight by making foods artificially rough? Could one make a taste-free, coarse-particled edible powder (a sort of “magic food”) to sprinkle on, or mix into, soft creamy foods so that they digest more slowly? I’m not being facetious: if the food is grittier and takes longer to digest, the blood-glucose peak after a meal should be wider and lower, which should presumably make a person less susceptible to diseases like Type II diabetes. One way to do this is to eat naturally coarser and healthier foods (whole-wheat flour instead of refined flour, for instance), but if one habitually eats foods that are too soft and easily digestible, could we add an anti-digestive agent to the food to slow down the sugar rush?
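As a crude sanity check on that intuition, here is a toy one-compartment sketch in Python (the model, parameters, and units are all made up for illustration; this is not a physiological model). It assumes the same carbohydrate load is absorbed at a constant rate over either a short window (soft, finely ground food) or a long one (coarse, fibrous food), and is cleared at a first-order rate:

```python
# A minimal sketch, assuming a one-compartment model with invented parameters:
# dG/dt = absorption(t) - k*G, integrated with Euler steps.
# G is the blood-glucose excursion above baseline, in arbitrary units.

def peak_excursion(total_load=75.0, absorb_minutes=30.0,
                   k=0.02, t_end=360.0, dt=1.0):
    """Peak excursion above baseline for a given absorption window."""
    rate = total_load / absorb_minutes    # constant inflow while food digests
    G, t, peak = 0.0, 0.0, 0.0
    while t <= t_end:
        inflow = rate if t < absorb_minutes else 0.0
        G += (inflow - k * G) * dt        # Euler step: inflow minus clearance
        peak = max(peak, G)
        t += dt
    return peak

fast = peak_excursion(absorb_minutes=30)    # soft, finely ground food
slow = peak_excursion(absorb_minutes=120)   # coarse, fibrous food
print(f"fast absorption, peak excursion: {fast:.1f}")
print(f"slow absorption, peak excursion: {slow:.1f}")  # roughly half the fast peak
```

The total load (the area under the curve) is the same in both cases; only the shape changes, which is exactly the wider, lower peak the argument needs.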

Of Researchers, Entrepreneurs and Philanthropists

Consider this quote:

I think there are some very important problems that don’t get worked on naturally, that is, the market does not drive the scientists, the communicators, the thinkers, the governments, to do the right things. And only by paying attention to these things, and having brilliant people who care and draw other people in can we make as much progress as we need to.

That was said by Bill Gates at the 2009 TED conference.

The fascinating thing about the quote is that it could be used word-for-word to justify research, entrepreneurship, and philanthropy (not to mention public service and policy). An interesting reminder of their interrelationship.

NYTimes Article on Risk Management

Via Schneier, I came upon this New York Times article, which talks about the use and abuse of everybody’s favorite quant tool: Value at Risk (VaR). One particular section caught my eye:

…the big problem was that it turned out that VaR could be gamed. That is what happened when banks began reporting their VaRs. To motivate managers, the banks began to compensate them not just for making big profits but also for making profits with low risks. That sounds good in principle, but managers began to manipulate the VaR by loading up on what Guldimann calls “asymmetric risk positions.” These are products or contracts that, in general, generate small gains and very rarely have losses. But when they do have losses, they are huge.
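To see how such a position games the number, here is a toy Monte Carlo sketch (the probabilities and magnitudes are invented for illustration; they are not from the article):

```python
# A minimal sketch of an "asymmetric risk position": a small premium on 99%
# of days, a loss 80 times that premium on the remaining 1%. All numbers are
# invented for illustration.
import random

random.seed(0)
N = 100_000
pnl = [1.0 if random.random() < 0.99 else -80.0 for _ in range(N)]

pnl.sort()                                   # ascending: worst days first
var_95 = -pnl[int(0.05 * N)]                 # loss at the 5th percentile
print(f"95% one-day VaR:      {var_95:.1f}")      # -1.0, i.e. "we can only gain"
print(f"worst simulated loss: {-pnl[0]:.1f}")     # 80.0
print(f"mean daily P&L:       {sum(pnl)/N:+.2f}") # about +0.2
```

The 95th-percentile cutoff never reaches the 1% tail, so the reported risk looks like a guaranteed small gain even though the position occasionally loses eighty days’ worth of profit at once.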

I find this interesting: reporting and acting on VaRs is no different from reporting and acting on the results of any other probabilistic model. Engineers and operations researchers do it all the time when modeling the failure rates of process units and of finished products or services. The same applies to variations in process inputs (crude-oil composition for a refinery, particle-size distribution for a powdered pharmaceutical drug, and so on). Even something as mundane as the residence-time distribution in a reactor is a probabilistic model that important decisions are based on. Yet in those fields the failure modes rarely amount to catastrophic system meltdown. Why?

There are two questions I cannot fully answer yet. First, it is relatively easy to cross-check an engineering model against theory, using back-of-the-envelope estimates from first principles; is the same possible for financial models? Second, it is relatively easy to cross-check an engineering model against data from controlled experiments, rather than just observations from the wild with confounding factors; are controlled experiments feasible and practical for financial systems?

Of course, no discussion of models would be complete without a car analogy:

David Einhorn, who founded Greenlight Capital, a prominent hedge fund, wrote not long ago that VaR was “like an air bag that works all the time, except when you have a car accident.”