Jamais Cascio is a Research Affiliate at the Institute for the Future and writes online at Open the Future.

Uncertainty and Resilience

by Jamais Cascio

In my work as a futurist, focusing on the intersection of environment, technology and culture, the concept of resilience has come to play a fundamental role. We face a present and a future of extraordinary change, and whether that change manifests as threat or opportunity depends on our capacity to adapt and remake ourselves and our civilization -- that is, depends upon our resilience. It's no surprise, then, that Brian Walker's essay, "Resilience Thinking," articulates a set of principles that resonate deeply with me.

When I first read Walker's piece, I was struck by how closely his list of characteristics of resilience parallels the set that I've been using in my own work. Some of the language varies, of course -- I tend to talk about "transparency" where Walker talks about "feedback," for example -- but the underlying principles align strongly. He includes a feature that I've left out (and will consider adding): ecological services, easily lost in a too-brittle environment. I can see how this concept could be applied broadly, as a way of articulating how one element of a resilient system can serve to strengthen and reinforce the viability of that system's other elements.

At the same time, in my work I include a couple of features that Walker doesn't touch on in his essay. That's not to say that they are alien to the resilience concept, but (at least in my articulation) they emerge from the worlds of design and strategy more than from the world of ecology. There are undoubtedly parallels to these concepts in other writings on resilience, but I'd like to take a moment to explore how these versions, emphasizing intentionality and planning, might fit in with Walker's larger argument.

The first is default to least harm. This is a concept from the interaction design world, reflecting a desire to make sure that when a system fails, the default state is as harmless as possible. The narrow goal here is that a system failure shouldn't make an already-bad situation significantly worse. The "air brakes" on large trucks offer a simple example of this principle: Air pressure holds the brakes off; if the air system fails, the brakes re-engage as the pressure leaks away, bringing the truck to a stop.
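To make the idea concrete in software terms, here is a minimal sketch of a fail-safe default; the valve, its methods, and the process runner are all invented for illustration rather than drawn from any particular system:

```python
class Valve:
    """A hypothetical actuator whose unpowered position is closed (the safe state)."""

    def __init__(self):
        self.open = False  # least-harm default: closed until explicitly commanded open

    def command_open(self):
        self.open = True

    def release(self):
        # Called on every failure or shutdown path: revert to the harmless state.
        self.open = False


def run_process(valve, steps):
    """Run a sequence of steps; any failure leaves the valve in its safe state."""
    try:
        valve.command_open()
        for step in steps:
            step()          # an exception here falls straight through to the safe path
    finally:
        valve.release()     # success or failure, we end in the least-harm state
```

The point is structural: the harmless state is what the system falls back to whenever control is lost, rather than something that must be actively commanded at the moment of failure.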

More broadly, this principle forces us to recognize that no system is immune to failure, and that we are far better served by considering the implications of failure in our plans and strategies. As an element of resilience, this is at once common sense and easily forgotten. We know that we should be ready for disaster, but we hate to think about it. Defaulting to least harm can mean actions as simple as making backups, building fire-breaks, funding safety nets, and so forth; such actions may seem boring, a waste of resources, or even distracting from core goals, but they are nonetheless key elements of a resilient world.

This concept applies to more than how we build or undertake simple projects. Implicit in the notion of defaulting to least harm is the need to avoid cascade failures (where the collapse of one system overloads and causes the collapse of other systems, and so forth). Doing so requires thinking about the resilience not just of individual components, but of connected systems. One example of how this manifests is in the avoidance of monocultures. Monocultures -- tree farms or single operating system computer networks, for example -- can be highly efficient, allowing for consistent and easy management, but they offer a prime example of over-optimization undermining resilience. Under attack -- whether by disease or computer virus -- monocultures are terribly brittle; the failure of one component of the group means that all components are vulnerable. Polycultures, mixing different "species" together, can be more complex to manage, but are far better able to withstand attacks.
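A toy simulation can illustrate the brittleness of monocultures; the species names, population sizes, and single-pathogen attack rule below are invented for illustration and aren't meant to model any real ecosystem or network:

```python
import random

def surviving_fraction(population, vulnerable_species):
    """Fraction of individuals that survive an attack targeting one species."""
    survivors = [s for s in population if s != vulnerable_species]
    return len(survivors) / len(population)

species = ["oak", "pine", "birch", "maple"]

monoculture = ["pine"] * 1000                                # one species, easy to manage
polyculture = [random.choice(species) for _ in range(1000)]  # mixed, harder to manage

attacked = "pine"  # a pathogen that targets a single species
print("monoculture survivors:", surviving_fraction(monoculture, attacked))  # 0.0
print("polyculture survivors:", surviving_fraction(polyculture, attacked))  # roughly 0.75
```

The monoculture loses everything to a single targeted attack, while the mixed population loses only its vulnerable fraction -- the same trade-off between efficiency and brittleness described above.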

The second feature of resilience that Walker might want to consider adding is foresight, the capacity to think through possible future consequences of present actions, and to identify early indicators of changing conditions. The concept of resilience implicitly acknowledges a dynamic environment, and the need to be able to adapt to changing conditions. The speed, form and impact of such changes are inconsistent and unpredictable -- but unpredictable is not the same as unforeseeable. It's possible to use our limited knowledge of what lies ahead to look for present-day choices and actions that serve to improve our resilience, not degrade it.

This means looking at plausible futures, not simply one "official future." If the future is unpredictable, we're much better off looking at a range of possible outcomes than just at a single best guess. A forecast doesn't need to be exactly right to be useful; in fact, a mix of divergent, plausible futures (sometimes referred to as scenarios) can offer insights into the strengths and weaknesses of a given system or strategy. With foresight tools such as scenarios (as well as similar processes, such as mapping and gaming), we can test how well our present environment and plans would respond to complex changes; if we see that a particular aspect of our present situation tends to weaken or fail under certain (unpredictable, but reasonable) conditions, we know that we will likely need to strengthen or change that potential point of failure.

It's like a wind tunnel, in a way. We can test a design against a variety of conditions (all similar to what one might find in reality), in order to make sure that there are no hidden flaws. It's not foolproof by any means, but even if a design that passes a wind tunnel test can't be guaranteed to work, a design that fails such a test almost certainly won't. Or to adopt an analogy of a different sort, such a practice can be seen as an immune system, where a taste of a possible future allows us to develop antibodies against the less-desirable outcomes.
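Translated into the loosest possible terms, a scenario "wind tunnel" might look something like the sketch below; the scenario names, plan parameters, and pass/fail rules are all invented placeholders, and the point is only to show how one plan gets stress-tested against several divergent futures:

```python
# Three divergent, plausible futures (invented for illustration).
scenarios = {
    "gradual_change":   {"energy_price": 1.2, "supply_shock": False},
    "rapid_disruption": {"energy_price": 3.0, "supply_shock": True},
    "stagnation":       {"energy_price": 0.9, "supply_shock": False},
}

def evaluate_plan(plan, conditions):
    """Return True if the plan holds up under the given conditions (toy rule)."""
    affordable = plan["budget"] >= plan["energy_need"] * conditions["energy_price"]
    covered = (not conditions["supply_shock"]) or plan["has_backup_supply"]
    return affordable and covered

plan = {"budget": 100, "energy_need": 40, "has_backup_supply": False}

for name, conditions in scenarios.items():
    verdict = "holds" if evaluate_plan(plan, conditions) else "breaks"
    print(f"{name}: {verdict}")
```

Here the plan holds under gradual change and stagnation but breaks under rapid disruption, flagging the tight budget and missing backup supply as the points of failure worth strengthening -- which is exactly the kind of insight such testing is meant to surface.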

Both of these aspects of resilience that I've come to identify in my work come down to ways of dealing with uncertainty. Systems and strategies of optimization can work only when conditions are certain -- and conditions rarely remain certain for long. Resilience, conversely, is an especially viable strategy for dealing with uncertainty, as it does not presume stasis.

Yet it's something of a paradox: The most resilient systems are those that recognize that they may be insufficient against all possible outcomes. Defaulting to least harm offers a way for a resilience strategy to handle unexpected failure gracefully. Foresight, in turn, offers a way for a resilience strategy to anticipate changes before they occur. This is not defeatism. The potential for failure lies within every action, every system we design -- but it's the very process of preparing for the chance of failure that gives us the greatest hope for long-term success.

Tags: foresight

Discussion

4 Comments


  • clarification of terminology

    Jamais,

    Thanks for adding your perspective to this issue of P&P. As you know, I’ve followed and appreciated your writing for some time. This essay pairs well with your recent talk on Building Civilizational Resilience, in which you list resilient system design principles that include foresight, reversibility, and graceful failure.

    One of my passions is clarification of terminology, and I have a question related to your use of terms.

    Uncertainty is of course applicable not only to the future but to the past. Ascribing causation can be problematic, certainly in human affairs, and also in biophysical systems. In recent decades, for example, we have seen growing confidence in the statistical correlation between CO2 concentrations and climate change.

    If history is to be a guide, one might reflect: in recognition of the uncertainties of releasing CO2, humans might have been more cautious in their use of fossil fuels, and might have sought to minimize regret while probing the potential effects of CO2 emissions.

    As a thought experiment, I wonder if, in this case, precaution and reversibility are indeed analogous. For while reversibility is brilliant as a design principle (for example in Wikipedia’s revert-ability), how would it have been applied in this instance?

  • uncertainty & reversibility

    Howard, thanks for asking me to write a piece for the inaugural issue of P&P.

    You pose an interesting question: how could we have integrated reversibility into our systems in the early days of recognizing the carbon problem? I think the answer comes in a commonplace expression: we could have "hedged our bets." That is, while undoubtedly still relying primarily upon fossil fuels for transportation and energy, we might have invested money into building workable prototypes for non-petroleum-fueled cars and more widespread wind power (for example). We might have put more money into trains and public transportation. We wouldn't have tried to out-and-out replace our fossil energy systems at the first hint of a problem -- that wouldn't have been very cautious -- but we could have made the eventual transition away from fossil energy easier and less desperate.

    (There's an interesting analogy from the world of computers. If you follow personal computing, you might know that, for years, Apple computers used chips made by Motorola (and, eventually, IBM), while everybody else used chips made by Intel. It turned out that, when recurring problems with the Motorola/IBM chips became too much to bear, Apple could easily swap over to Intel because they had an in-house program of maintaining an up-to-date version of their system running on the alternative platform. And you can bet that they have the equivalent today for non-Intel chips, just as a hedge.)

    The biggest dilemma around reversibility, however, comes from entrenchment. That is, the longer you're on a given path, no matter how much you've done to hedge your bets, the harder it will be to pull back. All the money plausibly spent on electric car prototypes, public transit, and wind farms wouldn't have made a shift away from carbon simple -- it just would have made it less painful.

    Entrenchment is particularly difficult when the systems in question have already been around for a while before you start thinking about alternatives and reversibility. That's why I emphasize reversibility as a design criterion -- embedding it into your designs before you need it makes it easier to use if you do. It also can make your designs more expensive... but (as a benefit) it can make your designs more flexible and amenable to evolutionary development.

  • I expect we'll revisit this important topic of precaution -- its shades of meaning, applicability, and relationship to reversibility -- in an upcoming P&P volume.

    For now, I'll just link over to Jamais' original article on The Reversibility Principle, over at Worldchanging. And note that this comment thread remains open ...

  • Good article... I have a question.

    You wrote "This means looking at plausible futures, not simply one 'official future.' If the future is unpredictable, we're much better off looking at a range of possible outcomes than just at a single best guess."

    One of the things I wrestle with when it comes to all of this is the fact that the true game changers in systems are 'black swans', Nassim Taleb's term for the truly large-impact, hard-to-predict events that are beyond the realm of normal expectation. According to Taleb, planners operate under the assumption that the unexpected can be predicted by extrapolating from variations in statistics based on past observations. This assumption 'usually' proves to be true, but fails when the catastrophically unusual occurs... as it often does.

    Politically, the true world-system changers were completely unanticipated by planners, whether 'positive' (e.g., the internet) or 'negative' (the 9/11 attacks).

    Building resilience into forestry management through selective logging practices is a small, localized solution to sustainability that is highly desirable. On this level, I agree with your perspective.

    But consider the bigger picture: the larger the scale, the less feasible planning becomes. There is no shortage of natural examples of radical climate change and volcanic activity throughout history that have profoundly -- and unexpectedly -- changed the ecological game on this planet and on others. It was likely a comet or asteroid impact that wiped out the majority of species then living on this planet, long before we humans even became bipedal.

    If Taleb is onto something, which I believe he is, then ecological resilience planning is somewhat along the lines of game theory. It assumes a set of rules and variables, the basis of which becomes invalid in the presence of a truly unanticipated ecological game changer. Think Middle Eastern nukes, comets, solar flares. Better yet, think of something you and I can't even think of!

    Your second sentence, "We face a present and a future of extraordinary change, and whether that change manifests as threat or opportunity depends on our capacity to adapt and remake ourselves and our civilization -- that is, depends upon our resilience," is excellent, and on a personal, self-reliance level I could not agree more.
