Dunbar’s number is a theoretical cognitive limit to the number of people with whom one can maintain stable social relationships. No precise value has been proposed for Dunbar’s number, but a commonly cited approximation is 150.
There is reason to believe that the social-networking sites will enable their users to burst past Dunbar’s number for friends, just as humans have developed and harnessed technology to surpass their physical limits on speed, strength and the ability to process information.
What mainly goes up [in the number of Facebook contacts], therefore, is not the core network but the number of casual contacts that people track more passively. This corroborates Dr Marsden’s ideas about core networks, since even those Facebook users with the most friends communicate only with a relatively small number of them.
So how many monkeys would you have to own before you couldn’t remember their names? At what point, in your mind, do your beloved pets become just a faceless sea of monkey? Even though each one is every bit the monkey Slappy was, there’s a certain point where you will no longer really care if one of them dies.
Using fractal analysis, we identify with high statistical confidence a discrete hierarchy of group sizes with a preferred scaling ratio close to 3: rather than a single or a continuous spectrum of group sizes, humans spontaneously form groups of preferred sizes organized in a geometrical series approximating 3, 9, 27,…
The smallest, three to five, is a “clique”: the number of people from whom you would seek help in times of severe emotional distress. The 12-to-20 group is the “sympathy group”: people with whom you have special ties. After that, 30 to 50 is the typical size of hunter-gatherer overnight camps, generally drawn from the same pool of 150 people. No matter what size company you work for, there are only about 150 people you consider to be “co-workers.” The 500-person group is the “megaband,” and the 1,500-person group is the “tribe.” Fifteen hundred is roughly the number of faces we can put names to, and the typical size of a hunter-gatherer society.
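The scaling is easy to eyeball from the layer sizes quoted above. Taking rough midpoints of the ranges (my own readings, not the paper’s data), each layer is close to three times the size of the one before it:

```python
# Rough midpoints of the layer sizes quoted above (my readings, not the paper's data):
# clique, sympathy group, overnight camp, Dunbar's number, megaband, tribe
layers = [4, 15, 40, 150, 500, 1500]

# Ratios between successive layers; the paper's claim is a preferred ratio near 3.
ratios = [b / a for a, b in zip(layers, layers[1:])]
print([round(r, 2) for r in ratios])  # -> [3.75, 2.67, 3.75, 3.33, 3.0]
```

The ratios wobble between about 2.7 and 3.8, which is consistent with the paper’s “scaling ratio close to 3” rather than an exact geometric series.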
Over the past 18 months I have been reading up, in my personal time, on water: its availability, requirements, usage and distribution. I believe it to be a particularly important problem for systems engineers to examine, since it combines aspects of purification technology with energy analysis, human practices, policy and politics, and it is one that I believe mainstream media does not report on as much as it perhaps should.
I was therefore happy to read a well-written BBC article by Richard Black that describes the problem and complexity of modeling water as a resource. If you’ve been trained in chemical engineering or systems engineering, the material is probably not new to you, but it’s presented very well.
…the big problem was that it turned out that VaR could be gamed. That is what happened when banks began reporting their VaRs. To motivate managers, the banks began to compensate them not just for making big profits but also for making profits with low risks. That sounds good in principle, but managers began to manipulate the VaR by loading up on what Guldimann calls “asymmetric risk positions.” These are products or contracts that, in general, generate small gains and very rarely have losses. But when they do have losses, they are huge.
I find this interesting: reporting and acting on VaRs is no different from reporting and acting on the results of any other probabilistic model. Engineers and operations researchers do it all the time when modeling the failure rates of process units and finished products or services. The same applies to variations in process inputs (crude oil composition for a refinery, particle size distribution for a powdered pharmaceutical drug and so on). Even something as mundane as the residence time distribution in a reactor is a probabilistic model that important decisions are based on. Yet when those models fail, the result is usually not a catastrophic system meltdown. Why?
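To make the last example concrete: the residence-time distribution of an ideal continuously stirred tank reactor is E(t) = (1/τ)·exp(−t/τ), and sizing and conversion decisions rest on its moments. A quick numerical sanity check (τ chosen arbitrarily for illustration) confirms the distribution integrates to one and has mean τ:

```python
import math

tau = 5.0  # mean residence time, arbitrary units (illustrative value)

def E(t):
    # Ideal-CSTR residence-time distribution: E(t) = (1/tau) * exp(-t/tau)
    return math.exp(-t / tau) / tau

dt = 0.001
ts = [i * dt for i in range(int(50 * tau / dt))]  # integrate far into the tail

area = sum(E(t) * dt for t in ts)        # total probability, should approach 1
mean_t = sum(t * E(t) * dt for t in ts)  # first moment, should approach tau
```

The point of the comparison stands: this is a probabilistic model driving real decisions, yet a mis-specified E(t) degrades a reactor, it does not take down a financial system.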
There are two questions I cannot fully answer yet. First, it is relatively easy to cross-check an engineering model against theory using back-of-the-envelope estimates from first principles; is the same possible for financial models? Second, it is also relatively easy to cross-check an engineering model against data from controlled experiments, not just observations from the wild with confounding factors; are controlled experiments feasible and practical for financial systems?
Of course, no model would be complete without a car analogy:
David Einhorn, who founded Greenlight Capital, a prominent hedge fund, wrote not long ago that VaR was “like an air bag that works all the time, except when you have a car accident.”
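The “asymmetric risk positions” Guldimann describes can be made concrete with a toy simulation (the payoff numbers are invented for illustration): a position that gains a little 99% of the time and loses hugely 1% of the time reports a negative 95% VaR, i.e. no risk at all, even though its expected P&L is strongly negative.

```python
import random

random.seed(42)

# Hypothetical asymmetric position: +1 with probability 0.99, -500 with probability 0.01
def pnl():
    return 1.0 if random.random() < 0.99 else -500.0

samples = sorted(pnl() for _ in range(100_000))
var_95 = -samples[int(0.05 * len(samples))]  # 95% VaR: loss at the 5th percentile
expected = sum(samples) / len(samples)       # true mean: 0.99*1 - 0.01*500 = -4.01

print(f"95% VaR: {var_95}")  # -1.0: the 5th percentile is still a small gain
print(f"expected P&L: {expected:.2f}")
```

Because the huge losses live entirely beyond the 95th percentile of losses, the VaR never sees them, which is exactly the air-bag failure mode in the quote above.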
The glasses work on the principle that the more liquid pumped into a thin sac in the plastic lenses, the stronger the correction.
[Joshua Silver] has attached plastic syringes filled with silicone oil on each bow of the glasses; the wearer adds or subtracts the clear liquid with a little dial on the pump until the focus is right. After that adjustment, the syringes are removed and the “adaptive glasses” are ready to go.
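The mechanism follows from the thin-lens lensmaker’s relation. Treating the fluid sac as a plano-convex lens of refractive index n (the value of n and the radii below are my assumptions for illustration, not figures from the article), pumping in fluid bulges the membrane, shrinking its radius of curvature R and raising the power P = (n − 1)/R:

```python
# Plano-convex thin lens: P = (n - 1) / R, with P in diopters and R in meters.
# n ~ 1.4 for silicone oil is an assumption; the radii are illustrative.
n = 1.4
for R_mm in (400, 200, 100):  # more fluid -> more bulge -> smaller R
    P = (n - 1) / (R_mm / 1000.0)
    print(f"R = {R_mm} mm  ->  P = {P:.1f} D")
# -> 1.0 D, 2.0 D, 4.0 D
```

Halving the radius of curvature doubles the corrective power, which is why a small dial on a syringe gives a usable adjustment range.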
Currently, Silver said, a pair costs about $19, but his hope is to cut that to a few dollars.
This is interesting for several reasons:
It increases the flexibility of a single pair of eyeglasses for different situations.
It allows the entire field of view to share the same focal length, unlike a bifocal or multifocal lens, which has only a small region at the required focus and leaves everything else out of focus.
It decreases the long-term cost by allowing eyeglasses to adapt to changing eye-lens powers over many years (assuming the glasses last that long).
It’s a clever emulation of what nature does for the eye-lenses of most species: make them fluid-filled and change their shape with muscle tension.
Overall, it seems like a design that solves several problems simultaneously. Quite impressive.