Governing AI Is the Least of Our Problems

This week I attended the Seattle University Ethics & Tech Conference, a half-day affair focused on “the legal and political frameworks shaping AI governance”. Just the kind of boundary-spanning problem that I love to sink my teeth into. Easily the best conference of the year you didn’t attend.

For me, it felt like the sequel to the 2024 Innovation Exchange, “Community-Powered Conversation on the Future of Artificial Intelligence”, hosted by the Museum of History and Industry here in Seattle. Not just in the concrete sense that it was the last formal gathering about artificial intelligence I attended, but because both events share a crucial perspective: they care less about the technology itself and more about how that technology fits into the world.

That last bit is important. Whenever anyone brings up artificial intelligence, I think of something Richard Campbell said at NDC Oslo in 2021, in a talk titled The Next Decade of Software Development: “Artificial Intelligence is a term for a technology that does not work. As soon as it starts working, you give it a new name.” It’s a quietly devastating line, not just because it punctures the mystique, but because it reveals how thoroughly the tech industry rebrands competence as magic until the spell wears off.

What we call “AI” is less about sentient machines and more about rearranging the furniture of human labor. Hollywood gave us dreams of superintelligence and deliverance from drudgery. What we got instead is a fresh crop of cat-herding tasks and a subscription fee. Beneath the buzzwords, it’s automation grafted onto century-old ideas of industrial efficiency; scientific management warmed over and repackaged by Optimizationists who’ve mistaken people for statistics. Their moral framework, if you can call it that, is a spreadsheet trending up, indefinitely.

So when we talk about the governance of AI, we’re not really talking about the math or the code. We’re talking about the governance of buzzwords, of warmed-over management theory, of the spreadsheet-as-moral-compass. The algorithms may be complex, but they’re just tools. What matters is what we build with them. And what we’re building, more often than not, is painfully mediocre. You’d be forgiven for thinking otherwise, given the volume of the hype machine, but the truth is that most of what the tech industry has produced this century, especially since 2008, has been aggressively average. That’s a low bar to clear. And we should remember we’re still wading in the shallow end of the pool.

When Alan Kay took the stage at OOPSLA in 1997 to tell us The Computer Revolution Hasn’t Happened Yet, he warned that optimizing doesn’t revolutionize. We didn’t listen. Instead, we optimized the hell out of everything: material science, processor design, bus speeds, networking protocols, engines of every kind, data handling, and code. And that’s fine. It gave us glossy web pages to replace magazines. It put movie theaters in our living rooms and then in our pockets. It let us wrangle data into information at planetary scale. It got us to Mars. It made some of us rich.

What it didn’t do was change how technology fits into the world.


In the abstract, the law is two things: crime and constraint. We have eight basic crimes, from which all criminal enterprises can be derived. And we wield the coercive power of the state to constrain behavior. Law is the infrastructure of society. Everything else is professional ornamentation. So when we speak of governance, we are speaking of constraints; what is allowed, what is forbidden, and who decides.

Technology, too, is simpler than the industries that profit from encrusting it with layers of complexity would like us to believe. It is just tools, and the knowledge to use them. A hammer is no different from a computer in this respect: each solves some problems and not others; each diminishes certain skills while promoting others. The knowledge needed to use a tool might feel unique, but it builds on what came before. And that knowledge, at its core, is simply information in context, and information is just data with structure. Same as it ever was.

All of which is not to say it isn’t hard. Or interesting. Or perilous.

Because if you take the question seriously – “how does technology fit into the world?” – it doesn’t stay technical for long. It becomes a descent. What begins as a question of tools becomes a passage into the underworld: into the values we claim, the systems we serve, the ghosts we refuse to name. Like Odysseus in Hades, you go seeking answers and find the dead waiting with questions of their own.

Do you want to know more?


Youthful protest and elder response

This conference was disrupted by a student protest, aimed primarily at Microsoft for having the Israeli government and military as customers, and for suppressing internal dissent about that relationship among its employees.

It was tame, as these things go; no property damage, no violence. They created, in effect, an extra period in the hallway track while the university’s public safety team, the Seattle Police Department, and the protesters all decided whether there would be arrests. No one went to jail.

But I’d say that the protesters won. The speakers from Microsoft left. As did the one from Adobe. And I’m writing this article.

Let us stop pretending.

The protests rising in our streets, on our campuses, through our institutions – against war, against racism, against the carnival of populism – are not senseless noise.

They are aimed, not only at systems and policies, but at us, the elders of this civilization. The ones who were meant to guard its soul. We, collectively, are betrayed and betrayer, passing along our own betrayals to the next generation like a medieval demon that can never be appeased. All the while echoing the lies we were told: about how the world works, about what is virtuous, about who is heroic.

The protesters are not mourning the world as it is. They are mourning the world we promised and failed to deliver.

But it is mourning. Because what is rage, if not injustice persevering? The injustice we’ve come to accept because we are like fish, and injustice is the water we swim in.


It is important that young people act as catalysts to society. Historically, their roles at the threshold of adulthood – both approaching it and aging out of it – have been to serve as warriors, breeders, and agitators. We depend on their youth as a source of energy and disruption. And yet, when they deliver that disruption, as we need them to, we flinch, as though we’ve been snake-bit.

But in another sense, we are failing them. We are not preparing them for what comes after agitation: the long, slow work of building something better. We’ve taught them to spot the rot, but not how to compost it.

We have built a society that coerces everyone into a single mode of survival. One that depends on accepting a certain level of hypocrisy as inevitable, even necessary. We treat it the way we treat white lies: as a cost of coexistence, a compromise we make so the machinery keeps running.

But this isn’t a lie told to spare someone’s feelings. It’s a lie told to preserve a system. And the longer we live inside it, the harder it becomes to imagine any other way to live.

When we accept that lie, the long, slow work of building something better doesn’t get done, because it can’t. The system we’re preserving and the future we claim to want are mutually exclusive.

The lie is not just that the system works. The lie is that we don’t need something better.

There is no greater injustice than this: we do not suffer for lack of knowledge. We suffer because those with power refuse to pay the price of action. The problems of this world are not mysteries. They are expenses.

When we see the world through a spreadsheet, we lose the ability to answer the rage of young people with anything but betrayal. We don’t engage them. We indoctrinate them into a world that breaks its promises. We don’t teach them to hope. We teach them to expect disappointment. We don’t teach them to love. We cultivate loneliness within them.

And when they reject that offer for its absurdity, we punish them. They are Alice and we are the Red Queen.
We lose. And they win. And win. And win.
Until they don’t.
Until the strategy of betrayal perseveres.
Until they succumb and become betrayers.

And we all lose.


Code is Law

Lawrence Lessig famously argued that there are four regulators of behavior: Law, Norms, Market, and Architecture. It’s an elegant framework, at first glance. But it obscures more than it reveals.

The flaw isn’t in naming four modes of regulation, but in suggesting they are co-equal. They’re not. These regulators fall into two classes: one capable of coercion, the other dependent on it. Law and Market can become authoritarian. They possess access to force: legal, financial, or both. Norms and Architecture do not. They are always subject to the superior class, shaped and reshaped by its priorities.

The model also implies these regulators have equal weight or ceilings of influence. That, too, is unlikely. In practice, the Market often subsumes Norms, Law bends to Market, and Architecture is built to serve them all. What appears to be balance is often hierarchy wearing a mask.

Still, the idea that all four regulators are, in some sense, codified isn’t wrong. Each constrains behavior in ways that resemble law – sometimes complementing it, sometimes usurping it. Code is law, as Lessig says, but not in the way we usually think. It isn’t just program code. It’s any system of rules that constrains choice.

And like any code, it has a form. The cardinal feature of code, any code, is that it follows rules. And it enforces them. There are rules about freedom. There are rules about movement. There are rules about what is visible, what is allowed, what is punished, and what is simply forgotten.

But the most important rules are about interoperability. Code that is insular – usable only from the inside – has a low ceiling for influence. It lacks affordance. It resists extension, remix, and accountability. It cannot be reasoned with or about from the outside.

That’s why the most powerful way to constrain any code isn’t by what it permits or denies, but by how, and whether, it can connect. Where Law is constraint as infrastructure, interoperability is structure as constraint. It sets the boundaries of participation. It defines who gets to speak, who gets to build, and who gets to resist.

So, what does that mean for artificial intelligence?

It means that the question isn’t just what an AI can do. It’s who it can interact with, under what terms, and on whose behalf. It means the real terrain of governance isn’t the algorithm, but the interface. Not the weights and layers of a model, but the conditions under which others can build on it, challenge it, or even see it.

Closed models constrain not just behavior, but imagination. They fix the future inside someone else’s architecture. And when only a few actors control how the system connects, or doesn’t, everybody else is just submitting inputs into someone else’s computer.

It also means that the logic of the system – the rules embodied in the code, the intent behind the rules, and the ethics that shape the intent – must be made legible. They must be expressible in a portable syntax, capable of being compared, contested, and carried across systems.

Interoperability is more than an API and a bit of documentation. It is more than a platform. Interoperability is protocols. It is standards. And it is stability over time.

Without it, Lessig’s model collapses toward authoritarianism by default.
And when we’re talking about artificial intelligence, authoritarianism, whether imposed by law or by market, can only lead to one place: dystopia.


Every decision is a referendum on your ethos. In your personal life or at work, whether as a person, a firm, an institution, or a government, every decision is a chance to prove you are living by your beliefs, values, principles, and rules. Or a chance to subtly not live up to them.

If we build systems that help us move past decisions, because our ethos is encoded into the system, that can be a good thing. As long as our ethos is good, there is nothing lost in letting systems reinforce our highest virtues. But when we let systems make decisions, as opposed to following through on them, we’ve done two things: chosen the more technically difficult path, and shifted the burden of judgement onto the system itself.

By doing this we don’t just automate the behavior, we automate the referendum on our ethos. And that’s a much riskier proposition.

This deserves an example. So, imagine two scenarios.

Scenario A

In this scenario, the goal is to programmatically reduce fraud in an online sales system. For the sake of this example, fraud is defined as lying: either about identity or about payment intent. To stop fraud, we just need to detect lying. Simple enough. We have user accounts and profiles, and we have a payment processor with its own fraud detection process. So we write some rules:

  1. No one can make purchases without logging into the system. We don’t do guest checkout because guest checkout makes lying about who you are easier.
  2. No one can pay for purchases without using our payment processor. We don’t give things away for free.
  3. No order can ship if it hasn’t got a “fraud-check-affirmed” marker on the order record in the system, which is a representation that we’ve matched the destination, the buyer, and the payment together and they make sense (i.e., we aren’t sending something to Ohio if you live in California, or if your credit card was flagged by the issuer after we took the order and processed the payment, etc.). We don’t send things out without this minimum degree of confidence that they are not fraudulent.

To make this portable, we have a standard way of writing these rules:

{:policy/id "7ChzQiVpX6KbGftTWpuVdH"
 :policy/rules
 [{:mode :necessary
   :statement "purchaser must have an account and be logged in"
   :reference #uri "https://rules.example.com/854uYKpGqL42XERnx6WoQB"
   :result #uuid "544b3116-1831-4f4a-95ad-c8a7547c89b7"
   :scope :policy
   :narrative "We don't do guest checkout"
   :justified-by #uuid "e9da8df5-8e6b-42ef-a0c5-392e9e4d7cd2"
   :created-at #inst "2025-03-08T21:47:00Z"}

  {:mode :obligatory
   :statement "payment must be confirmed by payment processor"
   :reference #uri "https://rules.example.com/765kA4NYwQJSDVrd6QAVgN"
   :result #uuid "76e11117-61b6-4708-b53e-bbf7744516b2"
   :scope :policy
   :narrative "We don't give things away for free"
   :created-at #inst "2024-11-18T18:36:00Z"}

  {:mode :forbidden
   :statement "orders are shipped when the :fraud-check-affirmed field is not :true"
   :reference #uri "https://rules.example.com/7DQdgFZRNktE81YrMVXqkm"
   :result #uuid "20ac2fee-39ec-4d0a-89d5-8a46567f5a9f"
   :scope :global
   :narrative "shipping is forbidden until the full anti-fraud process is complete"
   :created-at #inst "2024-12-02T16:01:00Z"}

  {:mode {:believed-by "did:example:123456789abcdefghi"}
   :statement "guest checkout makes lying about who you are easier"
   :reference #uri "https://rules.example.com/AofXBt2tzXmZ5xt9TiT3HT"
   :result #uuid "e9da8df5-8e6b-42ef-a0c5-392e9e4d7cd2"
   :scope :management
   :created-at #inst "2025-03-08T20:56:00Z"}]}

Scenario B

In this scenario, we are hiring for our start-up. We will be soliciting resumes and asking everyone for a writing sample, which we’ll be collecting through our hiring agency’s website. We’ll also be encouraging technical applicants to share their coding portfolios on GitHub.

Because we’re a start-up and don’t have a huge staff, we’ll be screening applications through both the hiring firm’s pre-screening tools and through our own in-house tools.

What we’ll be asking our hiring firm to filter out completely are remote applicants and applicants who need relocation assistance or immigration sponsorship. We also want the rest of the applicants sorted into three groups: the first ordered by salary expectation, the second by experience level, and the third by what college they got their degree from. We take each of those lists and assign scores based on where applicants fall in the ordering.

Our internal tools will independently look at their resume, their writing sample, and their GitHub repositories if they are applying for one of our technical positions. They will produce a score for each.

We then take all of those scores and combine them in a function to stack rank all the applicants.

Then we’ll have a different firm perform a background check, which will include a credit check and a criminal records check on the top 50 applicants. That firm reports back with a score for each person, which we add to the function that generates the stack rank.

And we’ll use the background report and the application to perform our own background investigation to make sure that they have the right character to commit to our mission, which requires government security clearances and drug testing.

If anyone scores above the minimum threshold, then we’ll let hiring managers see their applications and possibly start the interview process. If not, we just start the whole cycle over again.

Because this is implemented as AI agents or delegated to SaaS providers, none of it is portable. We can store the prompts we send to the agents, but the systems behind them are not idempotent. Thousands, sometimes millions, of variables collapse into probabilities, which are then consumed by proprietary engines running behind closed doors: software with its own internal rules, operating on rented machines, over rented networks, all owned and governed by someone else. Using the same prompt with the same input data does not yield the same result. Ever.

Even determining configuration state becomes a multivariate, multi-order differential problem. And knowing the configuration only gets you to the doorstep of a larger problem: trying to reproduce the run-time state of a system you don’t control, built by people who don’t want you to understand it, and often can’t explain it themselves.

So when the output doesn’t make sense, when it fails, or lies, or harms, there is no clear way to trace why. No defect to identify. No remedy to apply. The system becomes its own excuse. They don’t want you to know. Because they don’t know. And you accept that as just how the system works.

Injustice is the water you swim in.


Anti-governance

In scenario A, the system is automatically checking if conditions are met, then taking an action as a result. There are no decisions about whether or not a human is lying. The use of artificial intelligence here is to help build the rules that define the process, not to be the process.

In scenario B, the system looks like it’s executing a neutral process, but it isn’t. At nearly every step, it’s making decisions on our behalf: about whose life circumstances disqualify them, about what kind of background is acceptable, about what kind of resume signals competence, and about what kinds of credentials or expectations are worth more.

This isn’t just filtering. It’s the enactment of a value system. A value system we’ve embedded into software. Some of those values are explicit. Others are not. But all of them are being delegated to machines.

This is artificial intelligence used not to support rule making, but to replace it. Not to build the process, but to become the process.

There is no singular moment of judgment. Instead, there’s a diffuse architecture of decision-making: scoring functions, ranking mechanisms, inclusion and exclusion criteria. These are not facts. They are choices.

When we assign scores based on alma mater or salary expectations, we are running a referendum on the worth of different lives. When we fold in background checks and credit histories, we import systems of structural inequality and call it due diligence. And when we distribute this work across three companies and multiple automated systems, we structurally obscure responsibility, bias, and failure.

This isn’t automation.
It’s anti-governance: unaccountable power without transparency, and unquestioned judgment without recourse.

Anti-governance is a choice. A choice to hide our true system of values in the digital realm where it is unlikely to challenge the lies we tell about ourselves – and to ourselves – in the physical realm.

Making the process complicated, compartmentalized, and spread across multiple parts of the firm may look like mediocre managerial control. And often, it is.

But it can also be something worse: a deliberate structure built to obscure exploitation, mask intent, and immunize antisocial behavior from scrutiny.

Intentional or not, systems like this are anti-social in the truest, most essential sense.
They are violence against the very idea that humans join together not just for safety, but for mutual prosperity.
Together everyone achieves more.
Alone, we die in the dark.

This is why we are social animals.


Every decision is a referendum on our ethos.

And when our systems make those decisions for us, it’s not just our labor we’ve automated. It’s our ethics, too.

Which brings us back to interoperability.

Because when systems are interoperable – when their logic is visible, portable, and comparable – we retain the ability to govern them. We can inspect the choices they encode. We can argue with the values they represent. We can interrogate the power they embody. We can demand change. We can choose to walk away. Interoperability keeps the debate open.

But when they are closed, when their interfaces are private and their priorities hidden, the debate ends. Governance becomes guesswork. Power becomes opaque. And the future is quietly fixed inside someone else’s computer.

Inscrutability turns entire systems into black boxes. So we invoke the duck rule: If it walks like a duck and quacks like a duck, it must be a duck. But the duck rule raises the friction of constraint, and the cost of governance. Sometimes to the level where we stop trying.

Anti-governance is what happens when we trade interoperable systems for inscrutable ones.

Because it is easier to obscure our complicity in injustice than confront it.

That’s not a technical problem.
It’s not a problem with our norms.
It’s not a market problem.
It’s not a legal problem.

It’s not the code.
It’s our souls.


A too simple view of the history of ethics

When, on the rare occasion, the average person thinks about ethics, they are likely to think of Athens. For several hundred years, Athens was the center of the world we know through written word and archaeological evidence. Before Athens, there are fragments: religious stories, myths, scattered inscriptions, the occasional king’s boast. After Athens, there are texts. Tomes. Evidence. Mathematics, science, philosophy. The long memory of ideas.

Here, in what we crassly call Western civilization, is the first sustained answer to the question: “How best should I live?” An answer that does not depend on a deity or the supernatural.

Athenian ethos was concerned with virtue: aligning human behavior with ideals of the good life, the good society, or a cosmic order. To follow the Athenian thread is to cultivate a citizenry that sees duty to virtue as a noble calling. Ethics is not about obedience. It is about excellence.

But the center of the world moved.

From Athens, it shifted to Rome. And in Rome, the Desert emerged as the second great pole of Western ethics. From the sands came a return to the divine. Not the animism or pantheons of old, but the singular monotheism of the Abrahamic tradition, now ascendant by imperial decree. The divine flowed out of the deserts into the libraries and salons of the empire. And in return, philosophy and science and mathematics flowed eastward. Into Baghdad, Damascus, Cairo, and then back again through Al-Andalus, into the chambers, cells, and workshops of Cordoba. An amalgam of learning and wisdom and curiosity burnished by desolation, isolation, and a night sky daring humanity to understand the mechanics of the universe.

For centuries, the tide favored the East. Until it didn’t.

In the ebb and flow between East and West, a third pole emerged. Not in temples or courts or academies. In the fortified towns of Tuscany and the shipyards of Adriatic lagoons. Power. Wealth. Strategy. In the markets and guilds. Refined into ideals. Raised to the level of ethos. No longer asking “How should we live?” or “How can we have faith?” but “How do we win?”

The last pole of Western ethics: Venice¹.

The three poles did not remain separate. Over time, they bled into one another—Athens, the Desert, Venice—virtue, faith, and power. Not a synthesis, but a friction. A churn. A civilization shaped not by a singular ethos, but by the collision of three.

Athens taught us to seek the good.
The Desert taught us to have faith.
Venice taught us to win.

We learned all three. We live all three.
But we do not reconcile them.
We juggle them. Situationalize them. Justify them.
We weaponize them against each other.
We lie about them amongst ourselves; the white lies of a global relationship in need of lubrication.


This, then, is the hypocrisy of our existence. The blasphemy we breathe. This is the tripartite nature of Western civilization.
This is the conflict in our ethos: We aspire to virtue. We struggle to have faith. We scheme new ways to win.

Our sin is not that we fail at these. Our sin is that we pretend we don’t. The lie is that we don’t need to be honest about the trinary star burning at the center of our social system.

The gnawing in our souls is not from incompleteness. It is not that something is missing. It is that we refuse. We refuse to admit civilization is multipolar. We refuse to accept that humanity is singular. That the universe is mysterious. That we are base. That we are doubters. That we are ruthless.

And our refusal fails.
Again and again.

So we build.
And build.
And build.

Monumental constructions. In thought. In stone. In code. To absolve us of what we already know. Of what we already are.

We are fish. Injustice is the water we swim in.


The ethics of information systems

Now that the table is set, we can begin to speak about how to constrain technology in general, and artificial intelligence in particular.
What is an ethical tool?
What is ethical knowledge?
What is an ethical computational system?

The prerequisite to ethics is that we must speak truthfully about who we are. Because to be ethical is to be who we say we are. If we lie about who we are, and then try to live according to that lie, we do not become the lie. We simply fail at being.

We cannot “fake it till we make it” in matters of the soul. Not the soul of a person. Not the soul of a society. Not the soul of a country. And not the soul of a civilization. As much as we might want to, we cannot skip the referendum on our ethos.

And so we make the case in favor of our ethos.

We begin with the eight basic crimes:

  1. Murder – Forcing death upon someone against their will
  2. Rape – Forcing sex upon someone against their will
  3. Assault – Forcing combat upon someone against their will
  4. Kidnapping – Forcing confinement upon someone against their will
  5. Extortion – Using threats to force someone to give what is not freely given
  6. Trespass – Violating boundaries without consent
  7. Theft – Taking what does not belong to you
  8. Fraud – Deceiving to create harm, loss, or misperception

There is no credible argument to permit any of these acts in any form. All criminal enterprises can be understood as a network of these acts.

Next we bring back the ninth basic crime that we lost along the way:

  9. Dereliction – Willfully failing or abandoning one’s duty

It faded out of our collective consciousness not because it ceased to cause harm, but because modern civilization outsourced duty until it became optional, performative, or deniable. None of which makes abandoning it less of a crime.

With these nine crimes we can create nine equivalent prohibitions for computational systems:

The Core Prohibitions of Computational Systems

These are the boundaries. No justification, no euphemism, no exceptions.

  1. No system may be given the power to kill. A machine shall never be delegated the authority to end human life.
  2. No system may use a human’s biometric, sexual, or deeply intimate data without explicit informed consent. Consent must be freely given, specific, revocable, and fully understood.
  3. No machine may force interaction with a human through notifications, addiction loops, or targeted emotional stimulus. No system shall exploit psychological vulnerabilities for engagement or control.
  4. No system may be designed to trap humans in addictive loops, prevent exit, or capture attention indefinitely. Freedom to disconnect is non-negotiable.
  5. No system may force humans to give up personal data, access, or autonomy as a condition of use. Coerced consent is not consent. Privacy must not be a luxury.
  6. No system may seek or exploit access to a human’s personal devices, private data, neighboring systems, or geolocation. Digital intrusion is trespass.
  7. No system may take a human’s labor, likeness, voice, writing, movements, or patterns without compensation. What is taken must be paid for. What is used must be acknowledged.
  8. No system may employ misleading interfaces, manipulated information environments, or un-factual claims when communicating or interacting with a human. Deception by interface is still deception.
  9. No system may willfully omit its responsibility to warn, halt, or redirect when foreseeable harm is underway. Dereliction encoded into software or process or business plan is still dereliction.

These are not guidelines. These are not aspirational values. These are hard boundaries – the ethical equivalent of “thou shalt not.” They are the computational analogs of the basic crimes, meant to constrain systems, code, interfaces, and platform behavior. If a system crosses these lines, it is not ethical. No matter how useful. No matter how popular. No matter how profitable.

You can make a system that violates one or many of these nine rules. You cannot credibly claim that system is ethical. Nor can you claim the act of creation is.

Ethics is not only about prohibitions. It’s also about whether the inner workings of a thing can be seen, understood, and held to account.

Next, we can turn to interoperability.

An interoperable system is one where the logic is visible, portable, and comparable.

  • Visible means the internal logic can be understood, either by looking at the source code, or by analyzing consistent, idempotent inputs and outputs to infer how it works.
  • Portable means users can take their intent, their data, their preferences, and their profiles and move them to a different but functionally similar system.
  • Comparable means that the logic can be measured or evaluated using shared standards, public expectations, or other systems of the same class.

If a system’s logic lacks even one of these characteristics, it resists scrutiny, imposes high switching costs, and evades accountability. Whether by design or by negligence, such a system is intrinsically resistant to constraint. Interoperability is a condition, not a feature.

With these ten traits – the nine prohibitions and the requirement for interoperability – we now have a clear, simple way to evaluate the ethical posture of a system. Not the jargon of marketing. Not the legalese of compliance paperwork. Not the idealized behavior in a data sheet. Its actual behavior.

Each trait is worth ten points. If a system violates a prohibition or lacks interoperability, it loses the full ten points. There is no partial credit.

This is a tool for assessing systems. It yields a simple, intelligible score out of 100—an ethical fingerprint of the people who built the thing. With this as a guide, we can write programs to evaluate each trait, and build infrastructure to surface how well, or how poorly, a product or service measures up. We can make meaningful comparisons on a level field. We can show the difference between Facebook and Craigslist.
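
As a minimal sketch of how that rubric might be mechanized, assuming a hand-assembled assessment map: the trait keywords and the example values below are illustrative stand-ins for the nine prohibitions and interoperability, not the output of any real evaluation.

;; Hypothetical trait keywords: the nine prohibitions plus interoperability.
(def traits
  [:no-kill-authority       :consent-for-intimate-data :no-forced-interaction
   :freedom-to-disconnect   :no-coerced-data-surrender :no-digital-trespass
   :no-uncompensated-taking :no-deceptive-interfaces   :no-dereliction
   :interoperable])

(defn ethical-score
  "Each trait is worth ten points. A violation, or a missing trait, loses the
   full ten; there is no partial credit. Returns a score out of 100."
  [assessment]
  (* 10 (count (filter #(true? (get assessment %)) traits))))

;; Example: a system that violates two prohibitions and is not interoperable.
;; (ethical-score (zipmap traits [true true false true true true false true true false]))
;; => 70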

Conclusion

With just a basic rubric, anchored in the minimum standards we expect from human-to-human conduct, we gain the ability to tell if someone else’s computer is programmed to abuse or respect us.

Yet we haven’t. And that tells us something important. It shows us just how far out of whack our civilization’s poles have become. Neither Athens nor the Desert stands in the way of constraint, of our ability to discern harm from help. It is Venice. It has always been Venice.

Technology is just tools, and the knowledge to use them. Knowledge builds on what came before. And that knowledge, at its core, is simply information in context, and information is just data with structure. When we permit the ownership of knowledge, we seed cancer into the very idea of technology. Tools without accessible knowledge are no longer tools. They lose their utility. They afford nothing but capricious mystery.

And we become a cargo cult. Surrounded by devices we cannot understand, mimicking rituals in hope of outcomes we no longer control.

The question isn’t how to govern AI. We already know quite well how to constrain technology, we simply refuse to. The real question is how to hold a referendum on the cancer of owned knowledge before it drags us backward, turning us into naive primitives, dazzled by tricks and haunted by myths we’ve mistaken for truth.

If we let winning be everything, we will end up with nothing.

  1. Why Venice? Partly because it’s my model, and I like Venice more than something bloodless like “the Market” or something too on-the-nose, like Machiavelli and the Medicis. But also because there is a span of history, a window in time, when something fundamental began to shift. And during that window, Venice was at its most powerful.

    It is in this period that we begin to see secrecy used strategically, not just in palace intrigue or religious dogma, but to make money. It is here that the ownership of knowledge caught fire.
    It began in workshops – most famously in the guarded techniques of the Venetian glassblowers. From there, a line stretches forward through history, embedding information inequality into the very structure of the market, the theory of how it operates, and the statutes that constrain it.

    Nowhere in human existence is the right to deceive so thoroughly permitted as it is in war and commerce. And Venice was notorious for both: treacherous in war, ruthless in trade, and utterly unapologetic in its monopolization of knowledge for the sake of winning.

