This report captures the views of 61 participants at the Civic AI Conference 2026 at Rhodes House, Oxford. During the event, attendees took part in a Polis sensemaking exercise on AI and community care. Polis uses machine learning to identify areas of consensus and map opinion groups — without enabling the polarising dynamics of social media. Participants responded to short statements by agreeing, disagreeing, or passing, and could submit their own.
Each dot is a statement. Tap or hover to see details. Teal = broad agreement, dark = mixed views, red = majority disagree.
What We Agree On
Participants reached broad consensus across all three opinion groups on 14 of the 20 statements; five of these were flagged by Polis as statistically significant areas of consensus.
People should always be able to reach a real person when dealing with a government service, even if AI handles most tasks.
Schools should teach children to question what AI tells them, not just how to use it.
When AI helps make a decision about your benefits, housing, or health, you should be told.
AI in care homes should free up time for staff to spend with residents, not replace human contact.
AI companies should have to pay artists and writers when they use their work to train AI systems.
When I talk to a chatbot or AI assistant, I should always be told it is not a real person.
Big technology companies care more about profits than about what happens to our communities.
People living with dementia deserve a say in whether AI tools are used in their care.
If an AI system treats people unfairly because of their race, age, or disability, someone should be held responsible.
Communities should have a voice in deciding how AI is used in their local schools and hospitals.
People are being harmed by AI-driven decisions while the government takes too long to act.
A community that takes care of its people matters more than one with the most advanced technology.
Workers who lose their jobs because of AI should get real help finding new work, not just advice.
The UK should create an independent body with real power to shut down harmful AI systems.
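The consensus flag described above can be illustrated with a simplified rule: mark a statement as consensus only when every opinion group leans the same way on it. This is a minimal sketch; the agree rates below are invented for illustration, and the threshold test stands in for Polis's actual statistical test.

```python
import numpy as np

# Toy data: agree rates per statement for each of three groups
# (rows = statements, columns = groups A, B, C). These values are
# assumed for illustration, not the conference's real figures.
agree_rates = np.array([
    [0.90, 0.95, 1.00],   # every group agrees
    [0.79, 0.15, 0.00],   # groups split -> divisive
    [0.85, 0.80, 0.88],   # every group agrees
    [0.31, 0.79, 1.00],   # groups split -> divisive
])

THRESHOLD = 0.6  # arbitrary cut-off for "the group agrees"

# Flag a statement as consensus only when ALL groups clear the
# threshold, or all fall below it symmetrically (consensus to disagree).
all_agree = (agree_rates > THRESHOLD).all(axis=1)
all_disagree = (agree_rates < 1 - THRESHOLD).all(axis=1)
consensus = all_agree | all_disagree

print(consensus)  # → [ True False  True False]
```

The group-aware check matters: a statement with 60% overall agreement can still be deeply divisive if one group unanimously rejects it, which a simple overall average would hide.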
Where Opinions Differ
Six statements revealed meaningful divisions between the three opinion groups. These are the tensions worth exploring further.
Using AI to keep an elderly person company when no human is available is better than leaving them alone.
We should slow down on AI until we better understand what it does to people.
Families caring for someone with a serious illness should be offered AI tools to help, even if those tools are not perfect.
Companies should not be allowed to replace workers with AI unless they help those workers find new roles.
New AI data centres in Oxfordshire will create good jobs for local people, not just for tech workers from elsewhere.
AI tools in schools do more to help struggling students catch up than they do to harm learning.
Three Perspectives
Polis identified three distinct opinion groups by projecting each participant's votes into two dimensions with principal component analysis (PCA) and then clustering participants who voted similarly. Groups reflect voting patterns, not demographic data.
Each dot is a participant, positioned by how similarly they voted. Participants who voted alike cluster together.
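The projection-and-clustering step can be sketched in a few lines. This is a toy illustration, not Polis's actual pipeline (which also handles missing votes, chooses the number of groups, and more); the vote encoding (+1 agree, -1 disagree, 0 pass) and the fixed centroid initialisation are assumptions made for reproducibility.

```python
import numpy as np

# Toy vote matrix: rows = participants, columns = statements.
# +1 = agree, -1 = disagree, 0 = pass (an assumed encoding).
votes = np.array([
    [ 1,  1,  1, -1],
    [ 1,  1,  1, -1],
    [-1, -1,  1,  1],
    [-1, -1,  1,  1],
    [ 1, -1, -1,  1],
    [ 1, -1, -1,  1],
])

# PCA via SVD: centre the matrix, then project onto the top two
# principal components so similar voters land near each other.
centered = votes - votes.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ Vt[:2].T          # shape: (participants, 2)

# Minimal k-means (k=3) on the 2-D coordinates, initialised from
# three fixed participants so the run is deterministic.
centroids = coords[[0, 2, 4]]
for _ in range(20):
    dists = np.linalg.norm(coords[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centroids = np.array([coords[labels == k].mean(axis=0) for k in range(3)])

print(labels)  # → [0 0 1 1 2 2]: participants who voted alike share a label
```

In this toy data the three voting patterns separate cleanly, so identical voters receive identical cluster labels; real Polis data is noisier, which is why the 2-D scatter shows clouds rather than tight points.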
Group A: Innovation-Ready (17 members)
The most comfortable with AI adoption. This group is notably less concerned about slowing down and does not believe companies should be forced to help displaced workers. They support AI companionship for elderly people but are sceptical of offering imperfect AI tools to families.
- Only 6% agree companies must help displaced workers (vs 67% in B, 86% in C)
- 47% think AI in schools helps more than it harms (vs 8% in B, 0% in C)
- Only 31% want to slow down AI (vs 79% in B, 100% in C)
- 79% support AI companionship for the elderly
Group B: Supportive Reformers (21 members)
The largest group. They favour offering AI tools to families — even imperfect ones — and want strong worker protections. They believe AI companies should compensate creators and are open to AI with guardrails.
- 94% support offering imperfect AI tools to families (vs 15% in A, 0% in C)
- 100% agree workers who lose jobs deserve real help
- 100% agree AI companies should pay artists and writers
- 79% want to slow down AI
Group C: Principled Sceptics (8 members)
The smallest but most decisive group. They take strong, clear positions and are the most cautious about AI. They want to slow down, demand strict regulation, and reject any notion of offering imperfect AI tools.
- 100% want to slow down on AI
- 100% disagree with offering imperfect AI tools to families
- 100% disagree that AI in schools helps more than it harms
- 100% want an independent body with power to shut down harmful AI