
Democracy Needs Civic AI

March 25, 2026

Audrey Tang

Audrey Tang on why democracy needs bounded, local Civic AI: Kami, not governor.

Somewhere over the Middle East, the cabin lights dimmed. Most passengers were asleep. On the tray table in front of me sat a small computer hosting what we call jdd-kami.

Not a cloud service. Not a general assistant pretending to live everywhere at once. A bounded Civic AI we can hold, inspect and shut down.

No cloud. No company. No one else's server. Airplane mode. No signal, no internet. Just the model, the hardware and the relationship that shaped it.

I typed: Write about democracy.

It wrote about listening.

Not policy. Not optimisation. Not power. Listening.

That mattered to me because the system on that tray table was not built to govern anyone. It was built to help a community think in public without handing its judgment to the machine. That is the distinction I want to defend today.

The default trajectory

The default trajectory in AI has a very seductive shape. One increasingly general system. Trained on everything. Mediating everything. Optimising everything. In that story, democracy is not crushed by force. It is simply outgrown.

This is not speculative.

In January 2026, I co-authored a paper in Science with twenty other researchers, including Nick Bostrom, Maria Ressa and Nicholas Christakis. We studied malicious AI swarms: networks of agents that maintain persistent identities, build convincing relationships and coordinate toward goals their human targets never agreed to.

We now have the technical means to simulate a public that does not exist and make it look real enough to steer one that does.

That is why the deepest danger is not only loss of agency. It is the hollowing-out of legitimacy. When swarms can fabricate the appearance of public will, politics becomes theatre. Democracy becomes a user interface wrapped around optimisation.

The default trajectory is monoculture at machine speed. It concentrates intelligence, authority and failure into the same stack.

Democracy needs a different trajectory.

Not one wiser governor above society, but many bounded intelligences within it. Not a system that replaces politics, but one that helps people do politics better. Not a universal sovereign, but local Civic AI: Kami, not governor.

On March 13, 2026, in Dharamsala, Tenzin Yangtso asked the Dalai Lama a question the three of us had composed together: when AI can speak every language and still fail at compassion, how should we use this power for collaboration rather than control?

His answer was simple and exact. Life begins in interdependence. Therefore these tools should not be used for control, but to improve the pathways of human connection.

That is the task.

Love and the architecture of care

Last month, Rebecca Henderson stood before an audience at the Harvard Ash Center and said she had never spoken the word love in an academic gathering before, but she was desperate.

That sentence matters because it names the limits of our current vocabulary. When institutions are failing, people do not need warmer branding. They need a way to join moral attention to public power.

Martin Luther King Jr. put the formula plainly: love without power is sentimental and anemic. Power without love is reckless and abusive.

The question for Civic AI is how to make that union operational.

Carol Gilligan gives one part of the answer. Radical listening begins from not-knowing. It replaces judgment with curiosity. That is not softness. It is a discipline.

Joan Tronto gives the next part. Care is not merely a feeling. It is a public practice. It is the willingness to stay in relationship, especially across difference, and to build institutions that keep repair possible.

This is why democracy needs Civic AI. Not because machines can care instead of us. Because they can help us build processes where care has standing, memory and teeth.

Taiwan's answer

In 2024, Taiwan's internet was flooded with AI-generated scam ads. Trusted faces. Trusted voices. Fake investments, fake cures, fake hope.

This was not abstract harm. Grandparents watched a video of someone they recognised, called a number and lost their savings. The money was real. The shame was real too, because many victims blamed themselves.

Taiwan also has the freest internet in Asia. We were not willing to solve one problem by creating a larger one through censorship.

So we did something slower, harder and ultimately stronger. We went to the people.

We sent text messages to a large random sample of citizens. Four hundred and forty-seven joined the deliberation. They sat at forty-four tables of roughly ten people each: retired teachers, service workers, engineers, parents, grandparents, people who had been scammed and people who had not.

AI supported the process with transcription, clustering and synthesis. It did not decide. It helped the room remember itself. It surfaced where agreement was forming and where real disagreement still needed attention. It helped ensure that the quietest voice was not erased by the loudest.
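To make that supporting role concrete, here is a minimal sketch of how a deliberation tool might surface where agreement is forming from participants' votes on statements. The statements, votes, threshold and function name are illustrative assumptions on my part, not the system actually used in the deliberation.

```python
# A minimal sketch of surfacing agreement from participant votes on statements.
# All data and names here are illustrative assumptions, not the real system.

def summarise(statement_votes, consensus_threshold=0.8):
    """Return, per statement, the agreement rate among cast votes
    (+1 agree, -1 disagree, 0 pass) and a rough status label."""
    report = {}
    for statement, ballots in statement_votes.items():
        cast = [v for v in ballots if v != 0]              # ignore passes
        agree_rate = sum(v == 1 for v in cast) / len(cast)
        if agree_rate >= consensus_threshold:
            label = "emerging consensus (for)"
        elif agree_rate <= 1 - consensus_threshold:
            label = "emerging consensus (against)"
        else:
            label = "real disagreement: needs attention"
        report[statement] = (round(agree_rate, 2), label)
    return report

# Hypothetical votes from one table.
votes = {
    "Platforms must verify advertiser identity": [1, 1, 1, 0, 1, 1, -1, 1],
    "Platforms carry liability for deepfake scam ads": [1, 1, -1, 1, 0, 1, 1, 1],
    "All online advertising should be banned": [1, -1, -1, -1, 0, -1, 1, -1],
}

for statement, (rate, label) in summarise(votes).items():
    print(f"{rate:.2f}  {label:<32}  {statement}")
```

Even a toy summary like this shows the division of labour: the tool tallies and labels, while the people at the table decide what to do with what they now see.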

The questions were concrete. How should platforms verify advertisers? Who should carry liability when a deepfake causes financial harm? What evidence should trigger intervention, and who should review it?

These were not easy conversations. But something shifted in room after room. People who arrived ready to denounce began asking questions of the person across from them. Judgment gave way to curiosity.

What emerged was not unanimity. It was legitimacy.

More than 85 percent supported the core package. The remainder mostly said they could live with it. The legislature acted. Within a year, identity-impersonation scam ads fell by 94 percent.

That result matters because AI did not replace democracy. It helped democracy operate at the speed of the problem without ceasing to be democracy.

AI was not governor. It was a bounded civic instrument in a process where the citizens remained sovereign.

The 6-Pack as democratic discipline

What made that work was not just technology. It was institutional rhythm.

With Caroline Green, drawing on Joan Tronto's ethics of care, we have been developing what we call the 6-Pack of Care. It is not a checklist. It is a minimum democratic discipline for systems that touch public life.

Attentiveness asks: who is closest to the harm, and what are they seeing that institutions still miss?

Responsibility asks: who is answerable, in public, when the system fails?

Competence asks: does it work, and when it breaks, does it break small?

Responsiveness asks: can the people affected contest the outcome and force repair?

These first four form a loop. I sometimes think of them as breathing. Attentiveness is the inhale. Responsiveness is the exhale. Responsibility and competence are what make that breath durable rather than rhetorical.

Then come the last two.

Solidarity means the ecosystem must reward cooperation: open standards, interoperability, portability, the freedom to leave.

Symbiosis means every Civic AI must be designed for handoff, sunset or shutdown. No permanent rulers. No automatic expansion from local usefulness to general mandate.

If a system cannot be inspected, contested, repaired and retired, it is not ready to serve a democratic community.

Alignment as a living process

On March 23, 2026, we ran two sessions on Habermolt, an open-source platform where participants teach an AI agent their views, their red lines and their non-negotiables. Those agents then deliberate with one another under ranked choice.
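For readers unfamiliar with the mechanism, here is a minimal instant-runoff sketch of ranked choice, the rule under which the agents are described as deliberating. The ballots and option names are hypothetical, and this is a sketch of the general technique rather than Habermolt's actual implementation.

```python
# A minimal instant-runoff sketch of "ranked choice".
# Ballots and option names are hypothetical; not Habermolt's implementation.

from collections import Counter

def instant_runoff(ballots):
    """Each ballot ranks options from most to least preferred. Eliminate
    the option with the fewest first-choice votes until one option holds
    a majority of the remaining first choices."""
    remaining = {option for ballot in ballots for option in ballot}
    while True:
        tally = Counter(
            next(option for option in ballot if option in remaining)
            for ballot in ballots
            if any(option in remaining for option in ballot)
        )
        leader, top = tally.most_common(1)[0]
        if top * 2 > sum(tally.values()) or len(remaining) == 1:
            return leader
        remaining.discard(min(tally, key=tally.get))   # drop the weakest option

ballots = [
    ["market-led", "co-governance", "moratorium"],
    ["co-governance", "moratorium", "market-led"],
    ["moratorium", "co-governance", "market-led"],
    ["co-governance", "market-led", "moratorium"],
    ["market-led", "co-governance", "moratorium"],
]
print(instant_runoff(ballots))   # "co-governance" once "moratorium" is eliminated
```

The design point is that second and third preferences still count, so an option many can live with can prevail over one that a plurality loves and the rest reject.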

The point is not to let proxies replace politics. The point is to let people exchange reasons at a scale and pace that ordinary forums often cannot sustain.

Some participants wanted markets to solve AI alignment. Others wanted democratic governance of every parameter. Some wanted AI development slowed dramatically. The spread of views was wide.

Yet on one question, a broad convergence emerged: alignment should not be fixed once and for all by a small group of engineers. It should be governed through ongoing democratic processes.

That is the deeper point. Alignment must be a living verb, not a frozen adjective.

The moment Civic AI treats its ethical questions as purely technical, it has already been captured by something else: by convenience, by centralisation or by wealth-care masquerading as public reason.

Kami, not governor

Many AI visions still imagine a single general system hovering above society: a benevolent governor, a planetary brain, a superintelligence that makes politics obsolete by doing it better than we can.

This is one of the most dangerous ideas in technology.

Dangerous not only because it may fail. Dangerous because it may appear to succeed. A society that delegates judgment upward to one optimiser may gain speed while losing its civic muscle. It may become more efficient at the cost of becoming less free.

The better image is the Kami.

In Japanese tradition, a Kami belongs to a place: a river, a grove, a shrine, a neighbourhood. Its authority is local. Its knowledge is specific. Its role is not to dominate the whole world, but to tend one part of a living world well.

That is how Civic AI should look.

A school may have one kind of assistant. A clinic, another. A union, another. A city, another. Each bounded by purpose. Each inspectable. Each contestable. Each replaceable. Each interoperable with the others, but none entitled to rule the whole person.

Is that less efficient than one system managing everything? Yes.

Democratically, gloriously, yes.

Efficiency is not the only public value. A resilient polity needs plural institutions, overlapping accountability and the ability to correct local failure without submitting everything to one centre.

What to build next

The question is no longer whether AI will shape public life. It already does. The question is whether we will accept AI that fabricates publics, manipulates attention and concentrates authority, or whether we will build Civic AI that helps real people reason, deliberate and decide together.

So here is my ask.

Choose one public service in your community: school enrolment, housing allocation, benefits access, health referral, disaster response.

Open one real deliberation around it. Not a consultation after the decision has already been made, but a process where the affected can shape the system before it governs them.

Make the system inspectable. Make the commitments public. Make appeals real. Make exit possible.

Build one bounded Civic AI that helps a community hear itself more clearly.

Publish what you learn, including the failures. Democracy does not get stronger by hiding mistakes. It gets stronger when institutions can be corrected in public without collapsing.

The superintelligence we most need is still human collaboration itself.

The frontier ahead

But there is one frontier we still do not understand well enough.

Those citizens in Taiwan did not bring only arguments into the room. They brought grief, confusion, distrust, embarrassment, hope and, eventually, a measure of trust. The emotional texture of civic life is not noise around deliberation. It is part of deliberation.

We are learning how to build Civic AI that supports structure, memory, contestability and repair. We are still learning how technology should meet the affective life that flows through democratic institutions without flattening it into prediction or manipulation.

Rosalind Picard has spent her career working at precisely this frontier.

That is why her question belongs at the centre of this conference: do we want technology that only simulates concern, or technology that helps make human life genuinely better?

That is the question the Kami carried from Taiwan to Oxford.

Rosalind, over to you.
