This article is an excerpt from the Shortform book guide to "Antifragile" by Nassim Nicholas Taleb. Shortform has the world's best summaries and analyses of books you should be reading.


What is convexity? How does convexity relate to antifragility?

In *Antifragile*, Taleb explains that antifragile systems thrive off randomness and the unexpected. Part of this is due to convexity—but what is convexity exactly? In this case, it’s when the effect of an event grows faster than the intensity of the event itself.

Read more to find out what convexity is and how it relates to antifragility.

**Exponential Benefit or Harm**

We start with a graphical representation of fragility and antifragility. **Using that simple illustration as a guide, we revisit exactly why fragile systems hate random events while antifragile systems love them.**

After that, we return to the discussion of how size causes fragility, now with an added dimension: concentration. A centralized system is much more fragile than a decentralized one, even if they add up to the same size. For example, a single large bank is more vulnerable to mistakes and bad deals than 10 banks that are each a 10th the size; the simple reason is that the large bank concentrates more resources in one place, and therefore has more to lose from any single mistake.

**We’ll also touch on the idea that large, influential systems can cause damage even outside of themselves.** If that large bank got itself into trouble, the global stock market could take close to a 10% hit; one of the 10 hypothetical smaller banks doing something similar would cause a much smaller shock to the market, if any at all.

We’ll then expand on the fragility of size and on how large systems like banks harm those who rely on them, and explore how we could mitigate the damage by dividing up our investments and our consumption. For example, large tuna fisheries cause harm to the environment by overfishing, but they only do so because people keep demanding tuna. If we returned to a more natural method of consuming what’s readily available, our large systems wouldn’t cause as much damage.

**Concavity and Convexity**

Both fragility and antifragility have exponential effects. **In other words, as the significance of an event increases, the effect of that event increases even faster.** For example, if you punched a window you could easily break it; however, you could drum your fingers on the glass all day without damaging it. The thousands of tiny impacts from your fingers wouldn’t add up to the same effect as one large impact from your fist.
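The window example can be made concrete with a toy model of our own (not from the book): suppose the damage an impact causes grows with the cube of its force. Then one large impact vastly outweighs thousands of small ones, even though the small ones add up to far more total force.

```python
# Toy model (our assumption, not Taleb's): damage grows with the cube of force.
def damage(force):
    return force ** 3

# One punch of force 100 versus 10,000 finger taps of force 1.
punch = damage(100)                           # 1,000,000 damage units
taps = sum(damage(1) for _ in range(10_000))  # only 10,000 damage units

print(punch, taps)  # the single large impact dwarfs the accumulated small ones
```

The exponent is arbitrary; any convex damage function produces the same qualitative result.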

As we’ve previously discussed, when those significant events have negative effects, the situation is fragile. When they have positive effects, the situation is antifragile. We can roughly sketch this general concept as a pair of graphs, with the significance of an event on the horizontal axis and its outcome on the vertical axis.

A fragile situation has a limit to how good an outcome can be, but no limit (or almost no limit) to how bad an outcome can be. A graph of the situation has a *concave* shape: it bulges upward in the middle, like a frown. An antifragile situation is exactly the opposite, and the graph has a *convex* shape: it dips downward in the middle, like a smile. **An easy way to remember this is that fragility makes a frown, and antifragility makes a smile.**

The two graphs also illustrate why fragility dislikes randomness, and antifragility loves it. Imagine picking random points on each of the graphs; depending on where the point falls on the *significance* axis, it may have a positive or negative *outcome*. Now imagine that you keep picking such random points over and over again. Eventually you’re going to land on a point with enough significance that the outcome is either hugely negative (for the fragility graph) or hugely positive (for the antifragility graph).
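This thought experiment is easy to simulate. Here is a minimal sketch, using -x² as a stand-in for a fragile (concave) outcome curve and +x² for an antifragile (convex) one; the specific curves are our illustration, not the book's:

```python
import random

random.seed(42)  # fixed seed so the draws are reproducible

fragile = lambda x: -x ** 2      # concave: losses accelerate with significance
antifragile = lambda x: x ** 2   # convex: gains accelerate with significance

# Keep picking random event significances; track the most extreme outcome.
draws = [random.uniform(0, 10) for _ in range(1_000)]
worst = min(fragile(x) for x in draws)
best = max(antifragile(x) for x in draws)

print(worst, best)  # sooner or later a high-significance draw dominates
```

With enough draws, a few high-significance events dwarf everything else: hugely negative on the fragile curve, hugely positive on the antifragile one.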

A side note: Any line on a graph can be represented by an equation. Putting a negative sign in front of that equation will result in the same graph, but upside-down. This also holds true for our graphs of fragility and antifragility; the opposite of concavity is convexity. Fat Tony made his fortune in the oil slump by putting a minus sign in front of the banks’ equation, so that whenever they lost a dollar, he made one.

**Acceleration of Harm or Benefit**

An intrinsic property of concave and convex graphs is that the farther along the horizontal axis you go, the steeper the slope becomes. In other words, the outcome changes by greater amounts over the same horizontal distance.

This gives us an easy way to check systems for fragility (or antifragility). Take, for example, travel time from point A to point B on the thruway. If there’s no traffic, you’ll drive from A to B quickly and easily. Now let’s say, hypothetically, you note that when there are 10,000 cars on the thruway, travel time increases by 10 minutes. If traffic increases by *another *10,000 cars, your travel time now increases by 30 minutes. Add another 10,000 and you may be stuck in traffic for hours.

**Though the increases in traffic are the same, the increases in travel time get bigger and bigger.** Since the change is undesirable, we would say that the degree of harm is accelerating. This is a fragile system. If more traffic meant you somehow got to your destination sooner, then it would be an antifragile system; it would have accelerating benefits as the number of cars on the road increases.
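One hypothetical curve that matches those numbers is a quadratic delay function (our assumption purely for illustration, not real traffic data): total delay grows with the square of the car count, so equal increments of cars produce ever-larger increments of delay.

```python
# Hypothetical quadratic congestion model: total delay in minutes
# grows with the square of the number of cars on the thruway.
def delay_minutes(cars):
    return 10 * (cars / 10_000) ** 2

counts = [10_000, 20_000, 30_000]
totals = [delay_minutes(c) for c in counts]               # [10.0, 40.0, 90.0]
marginal = [b - a for a, b in zip([0] + totals, totals)]  # [10.0, 30.0, 50.0]

print(marginal)  # same 10,000-car increments, ever-larger extra delays
```

The accelerating `marginal` list is the fingerprint of fragility: identical causes, growing harm.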

We’ve talked before about the fragility of relying on forecasting. Whether in finances, weather, votes, or anything else, trying to predict the future is notoriously unreliable. However, these forecast models could be made a great deal more accurate—and the decisions based on them made much less fragile—by subjecting them to a simple acceleration of harm test.

In short, take the model and ask, “What if it’s wrong?” Change some of the key assumptions by small increments and see what happens to the results. **If the negative changes outpace the positive ones, the model describes a fragile system; if the positive changes outpace the negative ones, you’ve got a model for an antifragile system.**
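One way to run such a test can be sketched as follows: nudge a key assumption down and up by the same amount, and compare the average of the two perturbed outcomes against the unperturbed one. A negative gap means losses are accelerating (concave, fragile); a positive gap means gains are accelerating (convex, antifragile). This is a minimal sketch under our own stand-in models, not code from the book:

```python
def convexity_gap(model, x0, h):
    """Compare the outcome at x0 with the average outcome after nudging
    the assumption down and up by h.  Negative gap: losses accelerate
    (fragile); positive gap: gains accelerate (antifragile)."""
    return 0.5 * (model(x0 - h) + model(x0 + h)) - model(x0)

fragile_model = lambda x: -x ** 2      # harm accelerates with x
antifragile_model = lambda x: x ** 2   # benefit accelerates with x

print(convexity_gap(fragile_model, x0=5, h=1))      # -1.0: fragile
print(convexity_gap(antifragile_model, x0=5, h=1))  #  1.0: antifragile
```

A perfectly linear model would score a gap of exactly zero: equal nudges produce equal and opposite changes.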

**Beware of Averages**

**One key thing that many models get wrong is that they rely on averages.** The problem, in light of the acceleration of harm effect, is that averages don’t take devastating extremes into account.

For example, imagine you booked a hotel room that’s kept at an average of 70° Fahrenheit. No doubt that sounds pretty reasonable. However, it could be that the room is 0° half the time, and 140° the other half. The average temperature is still 70° but, far from being comfortable, the room is downright dangerous. **The variability is more important than the average.**
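We can check the arithmetic with a hypothetical "discomfort" penalty of our own devising that grows with the square of the deviation from a comfortable 70°F. Judged at the average temperature, the room looks fine; judged on the actual temperatures, it is intolerable.

```python
# The book's hypothetical: half the stay at 0°F, half at 140°F.
temps = [0] * 12 + [140] * 12

average_temp = sum(temps) / len(temps)  # 70.0, which sounds comfortable

# Hypothetical convex discomfort penalty: square of deviation from 70°F.
def discomfort(t):
    return (t - 70) ** 2

at_average = discomfort(average_temp)                    # 0: average looks fine
actual = sum(discomfort(t) for t in temps) / len(temps)  # 4900: it isn't

print(average_temp, at_average, actual)
```

Because the penalty is convex, the average of the discomforts is far worse than the discomfort of the average; that gap is exactly what models built on averages miss.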

Finally, imagine a third graph with a straight line: for every “unit” the significance of an event goes up by, the effect also goes up by one “unit.” This would represent an “average” situation, as hypothesized by any number of predictive models. If you were running a business, for example, your sales model might predict that more sales equals more money in a linear fashion, represented by such a graph.

First of all, that model is probably extremely inaccurate. Does it account for bulk buying and production? Supply shortages? In short, are there extraordinary benefits or downsides to selling much more product than usual?

**However, aside from that, it’s plain to see that a convex model would quickly outpace a linear one.** An antifragile system will, in the long term, perform better than the hypothetical average. Similarly, sooner or later a fragile system will fall victim to random chance or unforeseen events, and underperform it.
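The outpacing is easy to see with two stand-in curves (again our own illustration): a linear "average" model and a convex one. Past their crossover point, the convex outcome pulls ahead by a widening margin.

```python
linear = lambda x: 10 * x   # "average" model: effect proportional to significance
convex = lambda x: x ** 2   # antifragile model: effect accelerates

# Past the crossover (x = 10 with these curves), the convex outcome
# beats the linear one by an ever-growing amount.
for x in [5, 10, 20, 40]:
    print(x, linear(x), convex(x))
```

At x = 5 the linear model is ahead; by x = 40 the convex one leads four to one, and the gap keeps accelerating.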

### ———End of Preview———

#### Like what you just read? **Read the rest of the world's best book summary and analysis of Nassim Nicholas Taleb's "Antifragile" at Shortform**.

Here's what you'll find in our **full Antifragile summary**:

- How to be helped by unforeseen events rather than harmed by them
- Why you shouldn't get too comfortable or you'll miss out on the chance to become stronger
- Why you should keep as many options available to you as possible