I’m currently reading through Sustainable Energy – without the hot air by David J.C. MacKay, FRS, which I recommend to anyone interested in energy or energy policy. I’m particularly impressed by the graphs and diagrams in the book, both for the laboriously collected data they represent and for their power to convey important points quickly and clearly. (Example).
The visual imagery evoked by the prose is powerful too: in a section discussing the merits (and lack thereof) of having a large number of people make a small saving each, here is what MacKay has to say:
The “if-everyone” multiplying machine is a bad thing because it deflects people’s attention towards 25 million minnows instead of 25 million sharks. The mantra “Little changes can make a big difference” is bunkum, when applied to climate change and power. [link]
Citation: David J.C. MacKay. Sustainable Energy – without the hot air. UIT Cambridge, 2008. ISBN 978-0-9544529-3-3. Available free online from www.withouthotair.com.
US Secretary of Energy Steven Chu gave an interesting Compton Lecture at MIT on May 12, 2009, on how researchers can contribute to providing energy for a crowded world.
Over the past 18 months I have been reading up, in my personal time, on water: its availability, requirements, usage and distribution. I believe it is a particularly important problem for systems engineers to examine, since it combines aspects of purification technology with energy analysis, human practices, policy and politics, and it is one that I believe mainstream media does not report on as much as it perhaps should.
I was therefore happy to read a well-written BBC article by Richard Black that describes the problem and complexity of modeling water as a resource. If you’ve been trained in chemical engineering or systems engineering, the material is probably not new to you, but it’s presented very well.
Via Schneier, I came upon this New York Times article, which talks about the use and abuse of everybody’s favorite quant tool: Value at Risk (VaR). One particular section caught my eye:
…the big problem was that it turned out that VaR could be gamed. That is what happened when banks began reporting their VaRs. To motivate managers, the banks began to compensate them not just for making big profits but also for making profits with low risks. That sounds good in principle, but managers began to manipulate the VaR by loading up on what Guldimann calls “asymmetric risk positions.” These are products or contracts that, in general, generate small gains and very rarely have losses. But when they do have losses, they are huge.
I find this interesting: reporting and acting on VaRs is no different from reporting and acting on the results of any other probabilistic model. Engineers and operations researchers do it all the time when modeling the failure rates of process units and finished products or services. The same applies to variations in process inputs (crude oil composition for a refinery, particle size distribution for a powdered pharmaceutical drug and so on). Even something as mundane as the residence time distribution in a reactor is a probabilistic model that important decisions are based on. Yet in engineering the failure modes are usually not catastrophic system meltdowns. Why?
There are two questions I cannot yet fully answer. First, it is relatively easy to cross-check an engineering model against theory using back-of-the-envelope estimates from first principles; is the same possible for financial models? Second, it is relatively easy to cross-check a model against data from controlled experiments, not just observations seen in the wild with confounding factors; are controlled experiments feasible and practical for financial systems?
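The gaming mechanism Guldimann describes can be sketched in a few lines. The numbers below are entirely hypothetical (made up for illustration, not taken from the article): I compare a plain symmetric position against an "asymmetric risk position" that earns a small amount almost every day and very rarely takes a huge loss, and compute a simple historical 95% VaR for each.

```python
import random

random.seed(0)

def historical_var(pnl, level=0.95):
    """Historical VaR: the loss exceeded with probability (1 - level)."""
    losses = sorted(-x for x in pnl)          # positive number = a loss
    return losses[int(level * len(losses))]

N = 100_000  # simulated trading days (hypothetical)

# Symmetric position: daily P&L drawn from a normal distribution.
symmetric = [random.gauss(0, 1) for _ in range(N)]

# Asymmetric position: a small gain 99% of days, a huge loss 1% of days
# (the shape of, say, writing far out-of-the-money options).
asymmetric = [0.05 if random.random() > 0.01 else -20.0 for _ in range(N)]

var_sym = historical_var(symmetric)
var_asym = historical_var(asymmetric)

print(f"95% VaR, symmetric:    {var_sym:.2f}")
print(f"95% VaR, asymmetric:   {var_asym:.2f}")   # looks tiny: "low risk"
print(f"worst day, symmetric:  {min(symmetric):.2f}")
print(f"worst day, asymmetric: {min(asymmetric):.2f}")
```

Because the rare loss sits beyond the 95th percentile, the asymmetric position's VaR is actually negative (the 95% cutoff is still a small gain), so it reports as nearly riskless while its worst day dwarfs anything the symmetric position produces. A manager compensated on VaR-adjusted profit has every incentive to load up on exactly this payoff.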
Of course, no model would be complete without a car analogy:
David Einhorn, who founded Greenlight Capital, a prominent hedge fund, wrote not long ago that VaR was “like an air bag that works all the time, except when you have a car accident.”