
Community Voices on AI

A Polis sensemaking report from the Civic AI Conference 2026 at Rhodes House, Oxford

This report captures the views of 61 participants at the Civic AI Conference 2026 at Rhodes House, Oxford. During the event, attendees took part in a Polis sensemaking exercise on AI and community care. Polis uses machine learning to identify areas of consensus and map opinion groups — without enabling the polarising dynamics of social media. Participants responded to short statements by agreeing, disagreeing, or passing, and could submit their own.

61
Participants
20
Statements
551
Votes Cast
3
Opinion Groups

How divisive was the conversation? Statements, ranked from consensus to divisive:

#7: Always reach a real person with government services (97% agree, 0% disagree)
#14: Teach children to question AI, not just use it (93% agree, 3% disagree)
#5: You should be told when AI helps make decisions about you (93% agree, 0% disagree)
#1: AI should free up staff time, not replace human contact (91% agree, 0% disagree)
#16: Should be told chatbot is not a real person (90% agree, 3% disagree)
#17: Big tech cares more about profits than communities (90% agree, 3% disagree)
#19: AI companies should pay artists and writers (87% agree, 6% disagree)
#9: People harmed while government too slow to act (84% agree, 9% disagree)
#13: Communities should have voice in local AI use (85% agree, 6% disagree)
#6: Someone responsible if AI treats people unfairly (85% agree, 6% disagree)
#2: Dementia patients deserve a say in AI care tools (86% agree, 3% disagree)
#18: Community caring matters more than advanced tech (82% agree, 4% disagree)
#10: Workers should get real help, not just advice (79% agree, 12% disagree)
#8: UK should create body to shut down harmful AI (74% agree, 9% disagree)
#3: AI companionship for elderly better than being alone (67% agree, 17% disagree)
#20: Slow down AI until we understand the effects (64% agree, 24% disagree)
#12: AI data centres will create good local jobs (39% agree, 25% disagree)
#11: Companies must help workers they replace with AI (43% agree, 43% disagree)
#4: Offer imperfect AI tools to families with serious illness (49% agree, 35% disagree)
#15: AI in schools helps more than it harms (25% agree, 47% disagree)

Each dot is a statement. Tap or hover to see details. Teal = broad agreement, dark = mixed views, red = majority disagree.

What We Agree On

Across all three opinion groups, participants found broad consensus on 14 of the 20 statements. Five were flagged by Polis as statistically significant areas of consensus.

Consensus

People should always be able to reach a real person when dealing with a government service, even if AI handles most tasks.

97% agree 3% pass
Consensus

Schools should teach children to question what AI tells them, not just how to use it.

93% agree 3% disagree
Consensus

When AI helps make a decision about your benefits, housing, or health, you should be told.

93% agree 7% pass
Consensus

AI in care homes should free up time for staff to spend with residents, not replace human contact.

91% agree 9% pass
Consensus

AI companies should have to pay artists and writers when they use their work to train AI systems.

87% agree 6% disagree

When I talk to a chatbot or AI assistant, I should always be told it is not a real person.

90% agree 3% disagree

Big technology companies care more about profits than about what happens to our communities.

90% agree 3% disagree

People living with dementia deserve a say in whether AI tools are used in their care.

86% agree 3% disagree

If an AI system treats people unfairly because of their race, age, or disability, someone should be held responsible.

85% agree 6% disagree

Communities should have a voice in deciding how AI is used in their local schools and hospitals.

85% agree 6% disagree

People are being harmed by AI-driven decisions while the government takes too long to act.

84% agree 9% disagree

A community that takes care of its people matters more than one with the most advanced technology.

81% agree 4% disagree

Workers who lose their jobs because of AI should get real help finding new work, not just advice.

79% agree 12% disagree

The UK should create an independent body with real power to shut down harmful AI systems.

74% agree 9% disagree

Where Opinions Differ

Six statements revealed meaningful divisions between the three opinion groups. These are the tensions worth exploring further.

Using AI to keep an elderly person company when no human is available is better than leaving them alone.

67% agree 17% disagree 17% pass
Agreement by group: A 79% · B 40% · C 83%

We should slow down on AI until we better understand what it does to people.

64% agree 24% disagree 12% pass
Agreement by group: A 31% · B 79% · C 100%

Families caring for someone with a serious illness should be offered AI tools to help, even if those tools are not perfect.

49% agree 35% disagree 16% pass
Agreement by group: A 15% · B 94% · C 0%

Companies should not be allowed to replace workers with AI unless they help those workers find new roles.

43% agree 43% disagree 14% pass
Agreement by group: A 6% · B 67% · C 86%

New AI data centres in Oxfordshire will create good jobs for local people, not just for tech workers from elsewhere.

39% agree 25% disagree 36% pass
Agreement by group: A 38% · B 42% · C 33%

AI tools in schools do more to help struggling students catch up than they do to harm learning.

25% agree 47% disagree 28% pass
Agreement by group: A 47% · B 8% · C 0%

Three Perspectives

Polis identified three distinct opinion groups using principal component analysis (PCA). Groups are formed by clustering participants who voted similarly, not by demographic data.

Participant opinion map, showing Groups A, B, and C alongside ungrouped participants.

Each dot is a participant, positioned by how similarly they voted. Participants who voted alike cluster together.
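In simplified form, the pipeline behind this map can be sketched as follows. This is an illustrative sketch only, not the actual Polis implementation (which also imputes missing votes and applies additional clustering refinements): votes are encoded as a participant-by-statement matrix, PCA projects each participant into two dimensions, and k-means groups nearby participants. The vote matrix here is randomly generated purely for demonstration.

```python
import numpy as np

# Hypothetical vote matrix: 61 participants x 20 statements,
# Polis-style encoding: agree = 1, disagree = -1, pass = 0.
rng = np.random.default_rng(0)
votes = rng.choice([-1, 0, 1], size=(61, 20))

# PCA via mean-centering + SVD: project each participant onto
# the first two principal components (their position on the map).
centered = votes - votes.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ Vt[:2].T  # shape (61, 2)

def kmeans(points, k=3, iters=50, seed=0):
    """Basic k-means: cluster participants who sit near each other."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned points
        # (keep the old center if a cluster ends up empty).
        centers = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
    return labels

groups = kmeans(coords)  # one of 3 group labels per participant
```

Participants who vote alike get similar rows in the matrix, so they land close together in the 2D projection and fall into the same cluster, which is why the dots for each opinion group appear bunched on the map.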

Group A: Innovation-Ready (17 members)

The most comfortable with AI adoption. This group is notably less concerned about slowing down and does not believe companies should be forced to help displaced workers. They support AI companionship for elderly people but are sceptical of offering imperfect AI tools to families.

Group B: Supportive Reformers (21 members)

The largest group. They favour offering AI tools to families — even imperfect ones — and want strong worker protections. They believe AI companies should compensate creators and are open to AI with guardrails.

Group C: Principled Sceptics (8 members)

The smallest but most decisive group. They take strong, clear positions and are the most cautious about AI. They want to slow down, demand strict regulation, and reject any notion of offering imperfect AI tools.


View the original Polis report

Conference Manifesto