Governance should feel like a daily capability, not just a periodic vote.
Most AI alignment work tries to get values right from the top down: write better rules, infer better preferences, train better models. Those tools matter. They are not enough on their own.
The 6-Pack starts somewhere else. It asks who gets heard, who is accountable, how failures are repaired, and when a system should stop. Alignment is not solved once. It is maintained in public.
The unit of deployment is the Kami — a bounded local steward, not a universal governor. Kamis help neighbourhoods, schools, unions, faith groups, cities, and diasporas do what collective self-government has always promised but rarely delivered at scale: listen across difference, deliberate in the open, remember faithfully, and act together.
No central model owns them. No platform extracts from them. Communities govern them, inspect them, contest them, and shut them down.
The breakthrough is not smarter chatbots. It is stronger self-government: institutions that show their work, repair harm in public, and carry civic memory across generations.
Civic AI Conference 2026 — 25 March, Rhodes House, Oxford
Start here
- Policy. Read the Manifesto, the FAQ, and "AI Alignment Cannot Be Top-Down".
- Engineering. Start with Pack 3: Competence, Measures, and "Inside the Kami".
- Civic practice. Start with Pack 1: Attentiveness, Pack 4: Responsiveness, and Podcast: The 6-Pack of Care.
The 6-Pack
The 6-Pack is an application of ⿻ Plurality to AI governance. Packs 1–4 form a feedback loop (Attentiveness → Responsibility → Competence → Responsiveness → back to Attentiveness). Pack 5 scales that loop across organisations. Pack 6 is the boundary condition that keeps every deployment local, plural, and sunset-ready.
Six design principles translate care ethics into something institutions can build and inspect:
- Pack 1: Attentiveness — what the people closest to the problem are seeing that institutions still miss.
- Pack 2: Responsibility — who is accountable, with what authority, and what happens if they fail.
- Pack 3: Competence — whether the system actually works in practice: audited, explainable, and safe to fail.
- Pack 4: Responsiveness — whether affected people can contest outcomes and force repair.
- Pack 5: Solidarity — whether the ecosystem rewards cooperation, exit, and public accountability over lock-in.
- Pack 6: Symbiosis — whether the system stays bounded, local, and sunset-ready instead of hardening into permanent rule.
- Measures — one headline public measure per pack, with supporting diagnostics.
Four proof points
- A public-policy case. "AI Alignment Cannot Be Top-Down" shows how Taiwan used an Alignment Assembly to respond to AI-enabled scam ads.
- A technical case. "Inside the Kami" argues that bounded, specialised systems are easier to govern than general-purpose agents.
- A civic-practice case. "Ciudadanía Digital" and Podcast: AI and Democracy show how this work connects to participation, legitimacy, and everyday public problem-solving.
- A long-term care case. "Attentiveness in Long-Term Care AI" examines how a cross-sector collaboration co-produced a care-centric definition of responsible AI with the people closest to the problem.
Publications
- "Sunset Section 230 and Unleash the First Amendment": Audrey, Jaron Lanier, and Allison Stanger argue for ending algorithmic amplification's liability shield while protecting human speech — the reach-not-speech reform at the heart of Pack 5. (Communications of the ACM, January 2026)
- "How Malicious AI Swarms Can Threaten Democracy": Audrey joins Maria Ressa, Nick Bostrom, Nicholas Christakis, and 19 other researchers to document how LLM-powered agent swarms can infiltrate communities and fabricate consensus at population scale. (Science, 2026)
- "Conversation Networks": Audrey, Deb Roy, and Lawrence Lessig propose civic communication infrastructure — interoperable apps with Civic AI guided by communities — as the technical layer beneath 6-Pack of Care. (McGill Centre for Media, Technology and Democracy, March 2025)
- "Community by Design": Audrey and Glen Weyl with four co-authors propose rebuilding social platforms around social fabric — rewarding content that bridges communities rather than maximising engagement. The technical backbone behind Packs 1 and 5. (arXiv, February 2025)
Project
The project spans a manifesto, a set of operational pack pages, and a book arriving in 2026. Audrey Tang and Caroline Green will present the framework at the Civic AI Conference on 25 March 2026.