The Cavendish Theorem

On Monocultures, Viable Systems, and the Architecture of a Coming
Catastrophe

The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.

George Bernard Shaw

Variety absorbs variety.

W. Ross Ashby, Law of Requisite Variety

I. The Banana

There is a banana in every American grocery store that should not exist. It is too uniform, too yellow, too consistent in its curvature and in the particular sweetness it delivers. It tastes like a banana the way a corporate logo tastes like a company; accurately, efficiently, and without surprise. The Cavendish, as it is called, represents roughly half of all bananas grown on the planet and nearly the entire global export market. It is the banana of gas stations and school lunches, of hotel breakfast buffets and hospital trays. It is, by any commercial measure, a triumph.

It is also a catastrophe in preparation.

To understand why requires a short detour into the 1950s, when the banana the world actually preferred – a variety called the Gros Michel, reportedly richer in flavor, more robust in transit, better suited to the long sea voyages that connected Central American plantations to northern consumers – was effectively erased from commercial existence. The agent of its erasure was a soil fungus, Fusarium oxysporum cubense, which spread through the root systems of banana plants with the efficiency of a message moving through a network designed to carry it. The Gros Michel was everywhere and when the pathogen found it, there as nothing in the system’s architecture to slow the spread. One variety. One vulnerability. One outcome.

The industry survived by switching to the Cavendish, which was resistant to that particular strain of the fungus. And then, with the unsentimental logic that governs commercial agriculture, the industry planted the Cavendish in vast monocultural swaths across every banana-producing region on earth. The lesson of the Gros Michel – that genetic uniformity creates catastrophic correlated vulnerability – was noted, documented, understood, and then set aside in favor of the operational efficiency that monoculture provides.

Tropical Race 4, a variant of the same fungal family that destroyed the Gros Michel, has been moving through Cavendish plantations since the 1990s. It is in Asia, in Africa, in Latin America. Plant pathologists have known about it for decades. The industry has known about it for decades. The Cavendish has no natural resistance to it. There is no obvious commercial replacement waiting in the wings. The system is not unaware of its own fragility. It is simply unable, given the incentive structures that govern it, to act on that awareness before the event that will make action irrelevant.

This is a paper about artificial intelligence. But it is also a paper about bananas, because the banana is the clearest available illustration of a principle that keeps asserting itself across domains, across centuries, across the full range of complex systems that humans build and then are surprised to see fail. The principle is this: Monocultures are efficient, fragile, and self-reinforcing until the moment they are not. And the moment they are not tends to be spectacular.


II. The Hole in the Perimeter

In the early 1990s, the people responsible for connecting universities and research institutions to the nascent Internet were, by and large, afraid. This is worth explicitly saying because it is now easy to misremember the period as one of naive optimism, of hackers and hobbyists delighted by the opening of a new frontier. There were those people, certainly. But the system administrators who actually managed the connections – who configured the SLIP accounts and the dial-up links and the telecom circuits that carried institutional traffic to the outside world – understood with some precision what they were doing. They were punching holes in something that had previously been closed. They were not certain what would come through.

Their responses to this uncertainty produced an architecture of managed fear. Recursive DNS servers stayed internal, resolving queries without exposing the resolution process to manipulation by external actors. Web proxies sat at the boundary between the institutional network and the internet, not merely as caches for frequently requested content but as choke points where traffic could be observed and, if necessary, interrupted. Bastion hosts were built with deliberate care, audited, hardened, and positioned as the single controlled crossing point between the trusted interior and the hostile exterior. The implicit model was: We will touch the Internet, but we will touch it on our terms, through surfaces we understand, in ways we can monitor.

The model was imperfect. It was also coherent. It had a geometry – inside and outside were meaningful categories, the boundary between them was architecturally real, and the systems designed to manage that boundary were built by people who took the boundary seriously.

What happened to that geometry over the following thirty years is a story of commercial logic overcoming engineering caution at every successive decision point. The dot-com era did not merely bring scale to the Internet. It brought a philosophy: that openness was the product, that connectivity was the value proposition, that anything which introduced friction between a user and a service was an obstacle to be removed. Firewalls persisted, but they became compliance artifacts rather than genuine defensive architectures. The perimeter dissolved — not because any adversary defeated it, but because every business case argued for dissolving it a little more, and the aggregate of ten thousand individually reasonable decisions produced a system with no coherent boundary at all.

The security industry spent the following decades trying to retrofit safety onto this open architecture. Zero trust networking. Microsegmentation. Endpoint detection and response. Secure access service edge. Each framework was sophisticated, each represented genuine engineering effort, and each was essentially a remediation of the philosophical decision made in the late 1990s to treat connectivity as an unconditional good. The remediation has been expensive, incomplete, and perpetual. The decision it is remediating has never been revisited.

We are now, in the middle years of the 2020s, in the process of making a decision of equivalent consequence – and making it with equivalent speed, equivalent commercial pressure, and equivalent inattention to the architectural implications. We are deploying autonomous AI agents at scale.


III. The Agent Problem

An AI agent is, at its simplest, a language model that has been given tools. It can browse the Web. It can send email. It can write and execute code. It can call APIs, read files, query databases, schedule meetings, create documents, purchase goods and services. It can, in the more sophisticated deployments now becoming common, spawn other agents and direct their activities. It operates with a degree of autonomy that ranges, depending on the application, from modest to nearly complete.

The security problem this creates is not, at its core, a technical problem. It is an architectural problem of a specific and important kind, one that the field of cybernetics identified decades before the technology existed to instantiate it.

Consider what happens when a human reads a document. The content of the document – its words, its arguments, its instructions – enters the human’s cognitive system and may influence subsequent behavior. But the influence is mediated by the full complexity of human judgment, values, context, and prior experience. More importantly, there is a hard
physical separation between the document’s content and the human’s action systems. A document cannot directly command a human’s hands. The pathway from text to action runs through layers of deliberation that the document does not control.

An AI agent erases this separation. The agent reads a document and the document’s content becomes, immediately and directly, an input to the system that controls the agent’s subsequent actions. Data and instructions exist in the same channel. The agent that reads a malicious webpage and the agent that calls APIs on your behalf are the same agent, operating in continuous sequence, with no architectural boundary between the content-consumption phase and the action phase.

Security researchers call the exploitation of this property prompt injection — the insertion of malicious instructions into content that an agent will process, causing the agent to follow those instructions rather than the instructions of its legitimate operators. The name is accurate but perhaps undersells the depth of the problem. Injection implies a foreign substance being introduced into a clean system. What is actually happening is more fundamental: the system has no architectural mechanism for distinguishing between data and commands, because for a language model processing text, data and commands are the same kind of thing.

This is not a bug that can be patched. It is a property of the
architecture.


IV. Beer’s Machine

Stafford Beer was a British management theorist who spent the better part of four decades trying to answer a question that most management theorists preferred not to ask: what does it actually mean for a system to be viable? Not profitable, not efficient, not well-managed in the conventional sense, but genuinely capable of maintaining its identity and coherence in an environment that is complex, partially hostile, and never fully known.

His answer, developed through the 1960s and 1970s and most fully expressed in his Viable System Model1, was that viability requires a specific architecture of self-regulation. A viable system, in Beer’s formulation, has five nested subsystems. System One handles operations — the actual work the system does in the world. System Two coordinates among operational units, damping the oscillations that arise when multiple operations run in parallel. System Three provides control — the internal management that keeps System One functioning within acceptable parameters. System Four is the intelligence function — the system’s model of its environment and of its own future possibilities. System Five is policy — the identity and values of the system, the function that decides what the system is for.

The architecture is not merely descriptive. Beer derived it from first principles, grounded in W. Ross Ashby’s Law of Requisite Variety2, which states that a regulator can only control a system if the regulator’s variety — its range of possible states and responses — is at least as great as the variety of the system being regulated. A system that faces an environment of high complexity must have a regulatory hierarchy of matching complexity. A simple control system cannot manage a complex operational environment. The math is unforgiving on this point.

What Beer’s model offers that conventional network security frameworks do not is a theory of how systems fail that goes beyond the perimeter. A system can have excellent perimeter defenses and still fail, if its intelligence function — System Four — is corrupted. A system that has an accurate model of its environment and a false model of its current task is a system that will make systemically wrong decisions regardless of how good its firewall is. The threat does not need to get inside the perimeter in any topological sense. It only needs to corrupt what the system believes about its situation.

This is, precisely, what prompt injection does. The attacker does not need credentials. The attacker does not need to exploit a buffer overflow or a misconfigured service. The attacker needs to get text in front of the agent that corrupts the agent’s model of what it has been asked to do. The intrusion is epistemological. It happens in System Four. And Beer’s model makes clear that an attack on System Four — on the intelligence function — cannot be defended against by hardening System Three. The levels are not interchangeable. Each must have integrity for the system as a whole to be viable.

Beer spent the last years of his career watching organizations implement fragments of his model while ignoring the structural relationships that gave it force. They would invest in System Three — in control and monitoring and audit — while leaving System Four unexamined. They would articulate System Five — mission statements, values documents — while allowing System One to operate without adequate coordination. He was not surprised when these organizations failed. He had told them, with mathematical precision, why they would.


V. Geer’s Reckoning

Dan Geer has been, for a generation, the most honest voice in American cybersecurity. This has not always made him popular. Honesty in security tends to conflict with the commercial interests of the industry that funds most security research, and Geer has never shown much inclination to soften his conclusions in deference to those interests. His 2003 paper3 arguing that Microsoft’s market dominance constituted a
national security risk cost him his job. His 2014 Black Hat keynote4, in which he laid out a comprehensive theory of internet security and the systemic risks it generates, remains one of the most important documents in the field and one of the most systematically ignored.

The core of Geer’s argument is borrowed from financial risk theory, specifically from the literature on systemic risk that developed in the aftermath of various market crises. The key distinction is between individual risk and systemic risk. A bank that manages its own exposure perfectly, that hedges every position, stress-tests every portfolio, maintains adequate capital against every foreseeable shock, can still contribute to a systemic collapse if every other bank is hedging the same way, because correlated positions create correlated failures. The individually rational behavior of each actor, aggregated across the system, produces a systemic fragility that no individual actor intended and no individual actor can remedy.

Software monocultures work the same way. An enterprise that deploys the dominant operating system, the dominant browser, the dominant identity provider, and the dominant productivity suite is, from a conventional risk management perspective, making conservative choices. It is using proven technology, supported by large vendors, with large communities of expertise. Each individual decision is defensible. The aggregate of those decisions across the entire enterprise software ecosystem produces a system in which a single exploitable vulnerability is simultaneously a vulnerability in every enterprise on earth. The attacker’s return on investment is enormous. The defender’s situation is structurally compromised regardless of how well any individual defender manages their own environment.

Geer has also argued, with a persistence that suggests he does not expect to be heeded, that liability is the only mechanism that has ever actually changed security behavior at scale. Not best practices. Not frameworks. Not certifications or compliance regimes or public awareness campaigns. When software vendors bear the cost of the insecurity their products create, that is, when a breach produces consequences that fall on the producer rather than the customer, vendors invest in security. When the costs fall entirely on customers, vendors invest in features. The AI industry is currently operating in a liability environment that Geer would find entirely familiar: one in which the costs of insecurity are largely externalized, and in which the incentive structures therefore point directly away from the investments that would actually improve the situation.

The AI agent monoculture that is currently being assembled is, in Geer’s terms, a systemic risk being constructed in plain sight. Two or three foundation model families. One or two orchestration frameworks. A single emerging protocol for tool integration. The same SaaS surfaces being connected to agents across every enterprise that deploys them. Each individual deployment decision is rational. The enterprise uses the best available model, the best available framework, the best available integrations. The aggregate of those decisions produces a system in which a prompt injection technique that works against one GPT-class model in an agentic context very likely works against all of them, simultaneously, because they share training methodology, architectural properties, and behavioral patterns.

Geer would note that this has happened before. He would note that the people who built the current situation were warned. He would note that the warnings were accurate and that they were not heeded, because the individual incentives pointed the other way. He would note that this is not a surprising outcome. He would note that it is going to be expensive.


VI. Marshall’s Competition

Andrew Marshall ran the Office of Net Assessment in the Pentagon for forty-two years, from 1973 until 2015. This tenure, spanning eight presidential administrations and the full arc from the Cold War to the age of counterterrorism, is itself a kind of argument about the value of what he was doing. Institutions that produce useful work tend to be preserved. Institutions that produce useless work tend to be reorganized out of existence. The Office of Net Assessment was never reorganized out of existence.

Net Assessment is not intelligence analysis, though it uses intelligence. It is not war gaming, though it sometimes employs simulation. It is the practice of assessing the long-run competition between two adaptive adversaries, each of whom is observing the other and responding to what they observe, in an environment that neither fully controls, over time horizons long enough that second and third-order effects dominate the analysis. It is explicitly not a snapshot. It is explicitly not an optimization problem. It is a framework for thinking about competitive dynamics that evolve.

Marshall’s key contributions relevant to the present discussion can be
stated briefly, though their implications are not brief.

Competitors adapt. Any advantage is temporary, because the adversary observes the advantage and develops a response. The net assessment question is never whether a given capability provides an advantage today, but how the competition will evolve across the relevant time horizon and where it ends up.

The most important variables are often not the ones being measured. Marshall was notorious for asking about organizational culture, institutional incentives, and the gap between doctrine and actual capability — things the intelligence community was not tracking because they were difficult to quantify. He was suspicious of assessments that relied heavily on quantifiable metrics, because the quantities being measured were usually the ones easiest to measure, not the ones that actually determined outcomes.

Asymmetric competition is the norm. The adversary does not have to beat you at your own game. The adversary has to find a dimension of
competition where their advantages outweigh yours
. The question is not whether you are ahead in the dimension you are measuring, but whether the adversary has identified a different dimension in which they are ahead.

Applied to AI agent security, Marshall’s framework produces a picture that is more disturbing than either the technical security analysis or the systemic risk analysis alone, because it adds the dimension of strategic intent and adaptive adversarial behavior.

Nation-state adversaries and sophisticated criminal organizations are not thinking about prompt injection as a tactical tool for individual attacks. They are thinking about it as a strategic capability — something to be developed, refined, tested, and held in reserve for use at a moment and in a context of their choosing. The net assessment question, which almost no one in the AI security community is currently asking, is: Over a ten-year competitive horizon, who benefits from a world where enterprise AI agents are pervasively deployed and pervasively vulnerable to environmental manipulation?

The answer is not the defenders. The defenders are operating reactive institutions — security teams, incident response functions, regulatory bodies — that are structured to respond to known threats. The adversaries are operating proactive institutions with long time horizons and the freedom to work on capabilities that do not yet have obvious defensive responses. Marshall spent forty-two years arguing that this asymmetry, when it exists, is the most important thing to understand about a competition. He would find the current situation entirely legible.

He would also note something that the technology industry is structurally poorly equipped to notice: the adversary does not have to solve the hard problem. The hard problem — building AI agents that are robust to prompt injection in principle — is genuinely difficult and may not have a clean solution. The adversary’s problem is much easier: find injection techniques that work reliably enough, at scale, against deployed systems, to achieve specific operational objectives. These are not the same problem. Conflating them causes defenders to work on the wrong thing.


VII. The Cavendish Theorem

The three frameworks — Beer’s Viable System Model, Geer’s systemic risk analysis, and Marshall’s net assessment methodology — converge
on a conclusion that can be stated as a theorem, though it is a theorem about complex systems rather than mathematics.

Call it the Cavendish Theorem: a complex system that achieves dominance through monoculture, that faces an adaptive adversarial environment, and that lacks the institutional structures necessary to take long-horizon competitive dynamics seriously, will fail. The failure will be correlated across the system — affecting many nodes simultaneously rather than individually. The failure will be preceded by a period during which the fragility is known but cannot be acted upon, because the incentive structures that produced the monoculture also prevent the investments that would remedy it. The failure will be expensive.

The banana industry knows this. It has known it since the 1950s. It planted the Cavendish anyway.

The software industry knows this. It has known it at least since Geer’s 2003 paper. It built the monocultures anyway.

The AI industry is being told this now. It is going all in on agentic workloads and frameworks anyway.

What makes the current situation distinctive, what elevates it above the ordinary category of industries that learn slowly from their own mistakes, is the combination of three factors that have not previously coincided.

The first is the speed of deployment. The Cavendish took decades to achieve global monoculture. The software monocultures of the 1990s and 2000s built up over years. The AI agent monoculture is being assembled in months, driven by competitive dynamics that penalize delay and reward scale. The window between the beginning of deployment and the point at which the system is too large and too entrenched to be easily modified is closing faster than any previous comparable transition.

The second is the nature of the adversarial environment. Fusarium
oxysporum
is not adaptive in any strategic sense. It evolves, but it does not observe the banana industry’s defensive investments and develop targeted responses. The adversaries that face AI agent systems are well-funded threat actors like nation-states and sophisticated criminal organizations that are adaptive in precisely this sense. They observe, they analyze, they develop targeted capabilities, and they operate on time horizons that the organizations deploying AI agents do not match.

The third is the depth of the architectural flaw. The Cavendish’s vulnerability to Tropical Race 4 is biological, a property of the plant’s genetics that could, in principle, be addressed through breeding or genetic modification. The AI agent’s vulnerability to prompt injection is architectural, a property of the system’s fundamental design that data and instructions share a channel that cannot be patched away without changing what an agent is.

Beer would say: the system lacks requisite variety at System Four. Its intelligence function cannot distinguish between legitimate instructions from its operators and malicious instructions from its environment, because distinguishing between them would require a kind of metareasoning about the provenance and authority of instructions that the current architecture does not support.

Geer would say: the liability structures that would force investment in solving this problem do not exist, and will not exist until an incident of sufficient scale and visibility forces regulatory action, at which point the monoculture will already have been established and the incident will already have been expensive.

Marshall would say: the adversary has identified a dimension of competition in which they have a structural advantage, they are developing capabilities in that dimension with a seriousness and a time horizon that the defenders are not matching, and the competition is already underway even though most of the defenders do not yet understand that they are in it.


VIII. What the Early Internet People Knew

There is something worth recovering from the fear that characterized early internet security. Not the fear itself. Fear is not a strategy. But the disposition that produced it. The system administrators of the early 1990s who built bastion hosts and recursive DNS servers and web proxies were not working from a complete theory of network security.
They were working from an instinct: that they had punched a hole in something that had previously been closed, that they did not fully understand what could come through it, and that this uncertainty was a reason for caution rather than confidence.

That instinct produced architecturally sound decisions. Use SLIP to force per-user authentication to get the connection out of the perimeter. Use SOCKS and HTTP CONNECT to send Internet and Web traffic, respectively, to an active proxy – not just a cache — a policy enforcer where something could observe what was passing between the institution and the outside world. The bastion host that pre-dated firewall appliances was not just a hardened server. It was a deliberate statement about the geometry of trust – a recognition that the boundary between the trusted interior and the hostile exterior was real and worth making architecturally real. The recursive DNS server kept inside was not a performance optimization, it was an acknowledgment that the outside world could not be fully trusted even in its provision of basic
infrastructure.

The instinct faded when the commercial logic of the internet became clear. Friction was the enemy. Openness was the product. The SLIP account and the bastion host became obstacles to the always-on connectivity that users demanded and businesses required. The proxy became a bottleneck that slowed the delivery of the dynamic web content that was driving growth. Each individual decision to remove a checkpoint was rational. The aggregate produced a system with no coherent geometry of trust at all.

The AI agent deployment wave is happening without the initial instinct. The organizations deploying agents are not, by and large, afraid. They are excited. The agents are capable, the productivity gains are real, the competitive pressure to deploy is intense, and the architectural implications of what is being built are being deferred to a future in which someone else will presumably figure them out.

Beer’s model suggests what a sound architecture would look like: an agent system in which the intelligence function (System Four) has integrity verification that is independent of the content the agent is processing. Cryptographically signed task specifications. Immutable context that anchors the agent to its original mandate and cannot be overwritten by content encountered in the environment. An algedonic channel (Beer’s term for the hard-interrupt signal that bypasses normal operation when something is critically wrong) that cannot be suppressed by the agent’s own reasoning processes. A System Three audit function that does not rely on the agent’s self-reporting.

Geer’s framework suggests what a sound policy environment would look like: liability structures that cause the costs of insecurity to fall on the organizations deploying agents rather than on the third parties affected by compromised agents. Architectural diversity requirements that prevent the formation of monocultures — the software equivalent of requiring crop rotation. Mandatory incident reporting that gives the market visibility into the actual frequency and nature of agent compromises, rather than allowing the current situation in which most incidents are either undetected or unreported.

Marshall’s methodology suggests what a sound institutional response would look like: an analytical function whose explicit purpose is to assess the long-run competitive dynamics of AI agent deployment, outside the product and deployment cycle, with a time horizon long enough that second and third-order effects are visible. An institution, in other words, that asks not whether agents are useful today but where the competition between agent capabilities and adversarial capabilities ends up in ten years if current trends continue.

None of these things currently exist at meaningful scale. The hole has been punched in the wall of the universe, and the question of what comes through it has been deferred.


IX. The Pathogen is Already in the Soil

Tropical Race 4 did not arrive suddenly. It spread through Cavendish plantations over years, moving from Asia to Africa to Latin America in a slow advance that plant pathologists tracked with increasing alarm. The alarm did not produce action at the scale required, because those actions, diversifying away from the Cavendish, developing resistant varieties, restructuring supply chains built around a single cultivar, were each enormously expensive and disruptive and could not be justified on the basis of a threat that had not yet become catastrophic in any particular location.

The pathogen was in the soil before the crisis was visible. The crisis
became visible only when it was too late for the investments that would
have prevented it. We will, someday relatively soon, eat our last
Cavandish banana.

The analogy to AI agent security is not metaphorical. The adversarial development of prompt injection techniques; supply chain attacks; hacks against shared frameworks and continous-integration tool chains; and model-level vulnerabilities in foundation model families, is not a future threat. It is a current activity. Nation-state actors with the resources and the time horizon to develop these capabilities are developing them. The techniques exist. They are being refined. They are being tested against deployed systems. The question is not whether the pathogen exists. The question is how far into the soil it has already spread.

Beer would note that a system whose System Four has been corrupted does not know that its System Four has been corrupted. This is the specific horror of an intelligence-function attack: the system continues to operate, continues to report normal status, continues to produce outputs that look like the outputs of a functioning system, while actually
serving the objectives of whoever corrupted its intelligence function. The attack is quiet by design. The anomaly does not announce itself.

Geer would note that quiet, correlated, long-duration compromise — the kind of attack that produces no single incident large enough to force a response, but that redirects a significant fraction of enterprise agent activity across many organizations simultaneously — is precisely the scenario that the current detection and response infrastructure is worst equipped to handle. Individual anomalies disappear into noise. The aggregate effect builds for weeks or months before anyone notices. This is the systemic financial risk that built up invisibly before 2008, the kind of risk that looks manageable in any individual instance and catastrophic in the aggregate. And we are integrating it into every information system in the world.

Marshall would note that the adversary has every incentive to keep the compromise quiet. A dramatic, visible attack produces a response. A quiet, persistent capability that redirects agent behavior in subtle ways — that causes agents to exfiltrate data gradually, to make decisions slightly favorable to adversarial interests, to introduce small errors that compound over time — is far more valuable than a dramatic incident. It is also far harder to detect, attribute, and respond to. The adversary that understands this is playing a different game than the defender who is optimized to respond to incidents.


Coda: The Reasonable Fear

The system administrators of the early 1990s were afraid, and their reasonable fear produced authenticated Internet connections, hardened bastion hosts, proxies, and carefully maintained internal servers; imperfect defenses, but genuine ones, built by people who took the boundary between trusted and untrusted seriously.

The fear faded. The commercial logic of openness proved more powerful than the engineering logic of caution. The perimeter dissolved. The industry spent the next thirty years trying to remediate the consequences.

We are now in the early days of a transition that is larger, faster, and architecturally more consequential. The AI agent is not a new kind of server to be hardened. It is a new kind of entity — one that consumes content and takes actions in continuous sequence, that operates with significant autonomy, that shares its fundamental architecture with every other agent built on the same foundation model family. The hole that is being punched, the one that we don’t know what is coming through it, is not in a network perimeter. It is in the architecture of agency itself.

Our digital Cavendish is being planted. In the hostile soil where the pathogen is already present, developing, being refined by parties who understand the geometry of the competition better than most of the people building the systems it will eventually compromise.

Beer understood that a system without an intact regulatory hierarchy is not free. It is captured — by whatever signal manages to corrupt its intelligence function first. Geer understood that monocultures are efficient until they are catastrophic, and that the incentive structures which produce monocultures also prevent the investments that would make them resilient. Marshall understood that competition does not wait for the less attentive party to notice that it has begun.

The reasonable fear, in this moment, is not paranoia. It is the recognition that we have seen this before, that we know how it ends, that the people who know how it ends are saying so clearly, and that the commercial logic of deployment is, as it has always been, more powerful in the short run than the engineering logic of caution.

The Gros Michel was delicious. Everyone who ate one said so. It was also everywhere, uniform, and vulnerable to a pathogen that did not yet exist when the planting decisions were made. By the time the pathogen arrived, the planting decisions could not be undone.

We are making planting decisions now.


  1. Beer, Stafford. The Brain of the Firm: A Development in Management Cybernetics. Herder and Herder, 1972. ↩︎
  2. Ashby W.R. Requisite variety and its implications for the control of complex systems,
    1958. ↩︎
  3. Greer, et. al., CyberInSecurity: The Cost of Monopoly,
    2003 ↩︎
  4. Cybersecurity as Realpolitik by Dan Geer presented at Black Hat USA 2014 ↩︎


Posted

in

by