
Saturday, December 15, 2012

Nassim Taleb on Fragility and Antifragility
By Nassim Nicholas Taleb, on December 11th, 2012
IF IT DOESN'T KILL YOU, IT FATTENS YOU.. WHAT DOESN'T KILL YOU MAKES YOU STRONGER..

http://dailycapitalist.com/2012/12/11/nassim-taleb-on-fragility-and-antifragility/



Nassim Taleb on Fragility and Antifragility

Nassim Nicholas Taleb is, as my readers know, my favorite contemporary philosopher. He has come out with a new book, Antifragile: Things That Gain from Disorder. Taleb also wrote The Black Swan: The Impact of the Highly Improbable, which was a revolutionary look at risk in the investment world. I have written a lot about his ideas here (search “Taleb” or “black swan”). If you haven’t read the book, I urge you to do so. His newest work is about the fragility of systems, and, for purposes of Daily Capitalist readers, especially about political and economic systems. I have watched him over the years and have been pleased to see him turn to Austrian economic theory to explain the way economies work. This article discusses the basic concepts of “antifragility”; it was published in The Wall Street Journal. I have ordered the book and will eventually give you a book report. — JH

Several years before the financial crisis descended on us, I put forward the concept of “black swans”: large events that are both unexpected and highly consequential. We never see black swans coming, but when they do arrive, they profoundly shape our world: Think of World War I, 9/11, the Internet, the rise of Google.
In economic life and history more generally, just about everything of consequence comes from black swans; ordinary events have paltry effects in the long term. Still, through some mental bias, people think in hindsight that they “sort of” considered the possibility of such events; this gives them confidence in continuing to formulate predictions. But our tools for forecasting and risk measurement cannot begin to capture black swans. Indeed, our faith in these tools makes it more likely that we will continue to take dangerous, uninformed risks.
Some made the mistake of thinking that I hoped to see us develop better methods for predicting black swans. Others asked if we should just give up and throw our hands in the air: If we could not measure the risks of potential blowups, what were we to do? The answer is simple: We should try to create institutions that won’t fall apart when we encounter black swans—or that might even gain from these unexpected events.
Fragility is the quality of things that are vulnerable to volatility. Take the coffee cup on your desk: It wants peace and quiet because it incurs more harm than benefit from random events. The opposite of fragile, therefore, isn’t robust or sturdy or resilient—things with these qualities are simply difficult to break.
To deal with black swans, we instead need things that gain from volatility, variability, stress and disorder. My (admittedly inelegant) term for this crucial quality is “antifragile.” The only existing expression remotely close to the concept of antifragility is what we derivatives traders call “long gamma,” to describe financial packages that benefit from market volatility. Crucially, both fragility and antifragility are measurable.
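As a rough illustration of what “measurable” can mean here, the short Python sketch below probes a payoff with symmetric shocks and checks whether harm accelerates (concave, fragile) or gains accelerate (convex, roughly the long-gamma case). It is a minimal sketch in the spirit of the convexity idea, not Taleb’s own procedure; the function names and toy payoffs are hypothetical.

    # Sketch: reading fragility/antifragility as concavity/convexity to shocks.
    # Hypothetical payoffs: harm that accelerates with the size of the move is
    # fragile; gain that accelerates in either direction is (roughly) long gamma.

    def convexity_response(payoff, x, shock):
        """payoff(x+shock) + payoff(x-shock) - 2*payoff(x):
        negative -> concave (fragile), positive -> convex (antifragile)."""
        return payoff(x + shock) + payoff(x - shock) - 2 * payoff(x)

    fragile = lambda x: -x ** 2        # losses grow with the square of the move
    long_gamma = lambda x: abs(x)      # benefits from a move in either direction

    for name, payoff in [("fragile cup", fragile), ("long gamma", long_gamma)]:
        print(name, convexity_response(payoff, x=0.0, shock=1.0))
    # fragile cup -2.0 (hurt by volatility); long gamma 2.0 (helped by it)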
As a practical matter, emphasizing antifragility means that our private and public sectors should be able to thrive and improve in the face of disorder. By grasping the mechanisms of antifragility, we can make better decisions without the illusion of being able to predict the next big thing. We can navigate situations in which the unknown predominates and our understanding is limited.
Herewith are five policy rules that can help us to establish antifragility as a principle of our socioeconomic life.
Rule 1: Think of the economy as being more like a cat than a washing machine.
We are victims of the post-Enlightenment view that the world functions like a sophisticated machine, to be understood like a textbook engineering problem and run by wonks. In other words, like a home appliance, not like the human body. If this were so, our institutions would have no self-healing properties and would need someone to run and micromanage them, to protect their safety, because they cannot survive on their own.
By contrast, natural or organic systems are antifragile: They need some dose of disorder in order to develop. Deprive your bones of stress and they become brittle. This denial of the antifragility of living or complex systems is the costliest mistake that we have made in modern times. Stifling natural fluctuations masks real problems, causing the explosions to be both delayed and more intense when they do take place. As with the flammable material accumulating on the forest floor in the absence of forest fires, problems hide in the absence of stressors, and the resulting cumulative harm can take on tragic proportions.
And yet our economic policy makers have often aimed for maximum stability, even for eradicating the business cycle. “No more boom and bust,” as voiced by the U.K. Labor leader Gordon Brown, was the policy pursued by Alan Greenspan in order to “smooth” things out, thus micromanaging us into the current chaos. Mr. Greenspan kept trying to iron out economic fluctuations by injecting cheap money into the system, which eventually led to monstrous hidden leverage and real-estate bubbles. On this front there is now at least a glimmer of hope, in the U.K. rather than the U.S., alas: Mervyn King, governor of the Bank of England, has advocated the idea that central banks should intervene only when an economy is truly sick and should otherwise defer action.
Promoting antifragility doesn’t mean that government institutions should avoid intervention altogether. In fact, a key problem with overzealous intervention is that, by depleting resources, it often results in a failure to intervene in more urgent situations, like natural disasters. So in complex systems, we should limit government (and other) interventions to important matters: The state should be there for emergency-room surgery, not nanny-style maintenance and overmedication of the patient—and it should get better at the former.
In social policy, when we provide a safety net, it should be designed to help people take more entrepreneurial risks, not to turn them into dependents. This doesn’t mean that we should be callous to the underprivileged. In the long run, bailing out people is less harmful to the system than bailing out firms; we should have policies now that minimize the possibility of being forced to bail out firms in the future, with the moral hazard this entails.
Rule 2: Favor businesses that benefit from their own mistakes, not those whose mistakes percolate into the system.
Some businesses and political systems respond to stress better than others. The airline industry is set up in such a way as to make travel safer after every plane crash. A tragedy leads to the thorough examination and elimination of the cause of the problem. The same thing happens in the restaurant industry, where the quality of your next meal depends on the failure rate in the business—what kills some makes others stronger. Without the high failure rate in the restaurant business, you would be eating Soviet-style cafeteria food for your next meal out.
These industries are antifragile: The collective enterprise benefits from the fragility of the individual components, so nothing fails in vain. These businesses have properties similar to evolution in the natural world, with a well-functioning mechanism to benefit from evolutionary pressures, one error at a time.
By contrast, every bank failure weakens the financial system, which in its current form is irremediably fragile: Errors end up becoming large and threatening. A reformed financial system would eliminate this domino effect, allowing no systemic risk from individual failures. A good starting point would be reducing the amount of debt and leverage in the economy and turning to equity financing. A firm with highly leveraged debt has no room for error; it has to be extremely good at predicting future revenues (and black swans). And when one leveraged firm fails to meet its obligations, other borrowers who need to renew their loans suffer as the chastened lenders lose their appetite to extend credit. So debt tends to make failures spread through the system.
A firm with equity financing can survive drops in income, however. Consider the abrupt deflation of the technology bubble during 2000. Because technology firms were relying on equity rather than debt, their failures didn’t ripple out into the wider economy. Indeed, their failures helped to strengthen the technology sector.
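A back-of-the-envelope sketch (hypothetical numbers, not drawn from the article) makes the leverage point concrete: the same 40% revenue drop leaves an equity-financed firm with a cushion while pushing an otherwise identical leveraged firm into default on its interest payments.

    # Sketch with hypothetical numbers: the same 40% revenue drop hits a
    # leveraged firm and an equity-financed firm with identical operations.

    revenue, fixed_costs, drop = 20.0, 10.0, 0.40
    cash_after_drop = revenue * (1 - drop)               # 12.0

    # Leveraged firm: 90 of debt at 6% interest, due in good years and bad.
    debt, rate = 90.0, 0.06
    buffer_leveraged = cash_after_drop - fixed_costs - debt * rate   # -3.4

    # Equity-financed firm: same costs, no fixed interest payments.
    buffer_equity = cash_after_drop - fixed_costs                    # 2.0

    print(buffer_leveraged)   # negative: misses obligations, distress spreads to lenders
    print(buffer_equity)      # positive: absorbs the shock and survives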
Rule 3: Small is beautiful, but it is also efficient.
Experts in business and government are always talking about economies of scale. They say that increasing the size of projects and institutions brings cost savings. But the “efficient,” when too large, isn’t so efficient. Size produces visible benefits but also hidden risks; it increases exposure to the probability of large losses. Projects of $100 million seem rational, but they tend to have much higher percentage overruns than projects of, say, $10 million. Great size in itself, when it exceeds a certain threshold, produces fragility and can eradicate all the gains from economies of scale. To see how large things can be fragile, consider the difference between an elephant and a mouse: The former breaks a leg at the slightest fall, while the latter is unharmed by a drop several multiples of its height. This explains why we have so many more mice than elephants.
So we need to distribute decisions and projects across as many units as possible, which reinforces the system by spreading errors across a wider range of sources. In fact, I have argued that government decentralization would help to lower public deficits. A large part of these deficits comes from underestimating the costs of projects, and such underestimates are more severe in large, top-down governments. Compare the success of the bottom-up mechanism of canton-based decision making in Switzerland to the failures of authoritarian regimes in Soviet Russia and Baathist Iraq and Syria.
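The size argument can be made concrete with a toy simulation (hypothetical model and numbers): if the damage from an overrun grows faster than linearly with its size, splitting one $100 million project into ten $10 million projects sharply reduces expected harm, even though the total budget and the average overrun rate stay the same.

    import random

    # Toy model (hypothetical): damage from an overrun grows with its square,
    # so one large error costs far more than several small ones of the same
    # combined size. Compare one big project with ten small ones.

    random.seed(0)

    def expected_harm(n_projects, total_budget, trials=20_000):
        size = total_budget / n_projects
        harm = 0.0
        for _ in range(trials):
            for _ in range(n_projects):
                overrun = random.expovariate(1.0) * size   # overrun scales with project size
                harm += overrun ** 2 / total_budget        # convex (squared) damage
        return harm / trials

    print(expected_harm(n_projects=1, total_budget=100.0))   # one $100M project: ~200
    print(expected_harm(n_projects=10, total_budget=100.0))  # ten $10M projects: ~20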
Rule 4: Trial and error beats academic knowledge.
Things that are antifragile love randomness and uncertainty, which also means—crucially—that they can learn from errors. Tinkering by trial and error has traditionally played a larger role than directed science in Western invention and innovation. Indeed, advances in theoretical science have most often emerged from technological development, which is closely tied to entrepreneurship. Just think of the number of famous college dropouts in the computer industry.
But I don’t mean just any version of trial and error. There is a crucial requirement to achieve antifragility: The potential cost of errors needs to remain small; the potential gain should be large. It is the asymmetry between upside and downside that allows antifragile tinkering to benefit from disorder and uncertainty.
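A small simulation (hypothetical numbers) shows how the asymmetry does the work: if each trial can lose at most a small, fixed amount but occasionally produces a large gain, a long campaign of tinkering has a positive expected outcome even with no ability to predict which attempt will succeed.

    import random

    # Sketch with hypothetical numbers: bounded downside, rare large upside.
    # Each attempt costs at most 1; about 1 attempt in 50 pays off 100.

    random.seed(1)

    def tinkering_campaign(attempts=1_000, cost=1.0, hit_rate=0.02, payoff=100.0):
        total = 0.0
        for _ in range(attempts):
            total -= cost                    # the loss on any single try is capped
            if random.random() < hit_rate:   # success is unpredictable
                total += payoff              # but the gain, when it comes, is large
        return total

    print(tinkering_campaign())   # typically near attempts * (hit_rate * payoff - cost) = 1000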
Perhaps because of the success of the Manhattan Project and the space program, we greatly overestimate the influence and importance of researchers and academics in technological advancement. These people write books and papers; tinkerers and engineers don’t, and are thus less visible. Consider Britain, whose historic rise during the Industrial Revolution came from tinkerers who gave us innovations like iron making, the steam engine and textile manufacturing. The great names of the golden years of English science were hobbyists, not academics: Charles Darwin, Henry Cavendish, William Parsons, the Rev. Thomas Bayes. Britain saw its decline when it switched to the model of bureaucracy-driven science.
America has emulated this earlier model, in the invention of everything from cybernetics to the pricing formulas for derivatives. These were developed by practitioners in trial-and-error mode, drawing continuous feedback from reality. To promote antifragility, we must recognize that there is an inverse relationship between the amount of formal education that a culture supports and its volume of trial-and-error by tinkering. Innovation doesn’t require theoretical instruction, which I like to compare to “lecturing birds on how to fly.”
Rule 5: Decision makers must have skin in the game.
At no time in the history of humankind have more positions of power been assigned to people who don’t take personal risks. But the idea of incentive in capitalism demands some comparable form of disincentive. In the business world, the solution is simple: Bonuses that go to managers whose firms subsequently fail should be clawed back, and there should be additional financial penalties for those who hide risks under the rug. This has an excellent precedent in the practices of the ancients. The Romans forced engineers to sleep under a bridge once it was completed.
Because our current system is so complex, it lacks elementary clarity: No regulator will know more about the hidden risks of an enterprise than the engineer who can hide exposures to rare events and be unharmed by their consequences. This rule would have saved us from the banking crisis, when bankers who loaded their balance sheets with exposures to small probability events collected bonuses during the quiet years and then transferred the harm to the taxpayer, keeping their own compensation.
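A stylized calculation (hypothetical numbers, not from the article) shows why clawbacks matter: without them, a strategy that pays steady bonuses in quiet years and blows up rarely is personally profitable for the manager even though it is a net loss for the system; tying past bonuses and a penalty to the blowup flips the manager’s incentive.

    # Stylized calculation (hypothetical numbers): hiding tail risk pays the
    # manager but not the system; a clawback plus a penalty tied to the blowup
    # removes the personal profit from hiding it.

    p_blowup = 0.05        # yearly chance the hidden exposure blows up
    bonus = 1.0            # manager's bonus in a quiet year
    blowup_loss = 50.0     # loss dumped on shareholders / taxpayers when it hits

    # No clawback: the manager keeps quiet-year bonuses, pays nothing in a blowup.
    manager_no_clawback = (1 - p_blowup) * bonus                        # 0.95
    system_per_year = (1 - p_blowup) * bonus - p_blowup * blowup_loss   # -1.55

    # Clawback plus a penalty proportional to the harm (a hypothetical rule).
    penalty = 0.5 * blowup_loss
    manager_clawback = (1 - p_blowup) * bonus - p_blowup * (bonus + penalty)  # -0.35

    print(manager_no_clawback, system_per_year, manager_clawback)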
In these five rules, I have sketched out only a few of the more obvious policy conclusions that we might draw from a proper appreciation of antifragility. But the significance of antifragility runs deeper. It is not just a useful heuristic for socioeconomic matters but a crucial property of life in general. Things that are antifragile only grow and improve under adversity. This dynamic can be seen not just in economic life but in the evolution of all things, from cuisine, urbanization and legal systems to our own existence as a species on this planet.
We all know that the stressors of exercise are necessary for good health, but people don’t translate this insight into other domains of physical and mental well-being. We also benefit, it turns out, from occasional and intermittent hunger, short-term protein deprivation, physical discomfort and exposure to extreme cold or heat. Newspapers discuss post-traumatic stress disorder, but nobody seems to account for post-traumatic growth. Walking on smooth surfaces with “comfortable” shoes injures our feet and back musculature: We need variations in terrain.
Modernity has been obsessed with comfort and cosmetic stability, but by making ourselves too comfortable and eliminating all volatility from our lives, we do to our bodies and souls what Mr. Greenspan did to the U.S. economy: We make them fragile. We must instead learn to gain from disorder.
—Mr. Taleb, a former derivatives trader, is distinguished professor of risk engineering at New York University’s Polytechnic Institute. He is the author of “Antifragile: Things That Gain From Disorder” (Random House), from which this is adapted.
