Civic AI Sensemaker

Bloom Civic AI Report

Structured public-input analysis: what communities really think about AI and its role in their lives.

20 Statements · 627 Votes · 8 Topics · 15 Subtopics
📊 Overview

Topics discussed in the conversation, with the percentage of statements categorised under each. Percentages may sum to more than 100% because a statement can span multiple topics.

Community Voice and Democratic Control (25%)
Community well‑being should take precedence over corporate profit and advanced technology; local impacts matter more than the profit motives of large tech firms.

AI in Health and Social Care (15%)
AI should augment — not replace — human caregivers, provide companionship for isolated older adults, and assist families even when the tools are imperfect.

Accountability and Redress for Harmful AI (15%)
Clear accountability when AI harms protected groups, an independent UK body with power to shut down harmful systems, and faster government action.

Workforce Impact and Economic Transition (15%)
AI could affect jobs; policies should support workers through economic transition with real help, not just advice.

Transparency and Right to Know (10%)
Transparency about AI systems and the public's right to know how these technologies operate and affect them.

Education and Critical AI Literacy (10%)
Education and critical AI literacy to empower communities to understand, question, and shape AI technologies.

Intellectual Property and Creator Compensation (5%)
Creators must be compensated and intellectual property rights respected when AI systems use their work.

Precautionary Approach to AI Deployment (5%)
A cautious, precautionary stance toward deploying AI, with thorough assessment of potential harms before widespread use.

🔥 Top 5 Most Discussed

Of the 15 subtopics that emerged, these 5 attracted the most statements.

  1. Prioritising community well‑being over corporate profit and technological advancement

    • Corporate profit prioritised over community health and welfare
    • Community well‑being valued higher than technological advancement
    • Technology should serve people, not corporate interests
    • Local communities need control over AI deployment decisions
    • Ethical AI must prioritise social good over profit
  2. Inclusion of affected populations in AI decision‑making

    • Affected individuals should influence AI deployment decisions
    • Community participation ensures ethical AI use
    • Respect for autonomy in care technology choices
    • Inclusive design reflects lived experience needs
    • Democratic input guides responsible AI implementation
  3. Ensuring human accessibility and fallback in public services

    • Guaranteed access to human representatives
    • AI as supplementary, not replacement
    • Preserve user choice in service channels
    • Maintain accountability through human oversight
    • Ensure equitable service for all citizens
  4. Community participation in local institutional AI governance (schools, hospitals)

    • Community participation in AI decisions
    • Local governance of school AI systems
    • Local governance of hospital AI systems
    • Public input shaping AI use policies
  5. Support for displaced workers (retraining, placement assistance)

    • Need for concrete job placement assistance for AI-displaced workers
    • Importance of retraining programmes beyond generic advice
    • Ensuring tangible support rather than just counselling
    • Addressing workforce transitions caused by AI adoption
    • Prioritising actionable resources over informational guidance
🔍 Topics in Depth

Based on voting patterns, both points of common ground and significant differences of opinion were identified.

Community Voice and Democratic Control (5 statements)

This topic included 1 subtopic, comprising a total of 5 statements. This subtopic had moderately low alignment compared to the other subtopics.

Common ground
A participant said that big technology companies prioritise profits over the well‑being of communities.
Differences of opinion
No statements met the thresholds necessary to be considered a significant difference of opinion (at least 20 votes, and agreement rates of more than 60% in one group and less than 40% in the other).
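The threshold rule above can be sketched as a simple filter. This is a minimal sketch, not the report's actual method: it assumes the "40% and 60%" criterion means one opinion group agrees at over 60% while the other agrees at under 40%, which the report does not confirm.

```python
# Sketch of the significance filter, under an ASSUMED reading of the
# thresholds: >= 20 votes, with one group's agreement rate above 60%
# and the other's below 40%.

def is_significant_difference(total_votes, group_a_agree, group_b_agree,
                              min_votes=20, low=0.40, high=0.60):
    """Return True if a statement counts as a significant difference
    of opinion under the assumed thresholds (rates given as 0..1)."""
    if total_votes < min_votes:
        return False
    lo_rate, hi_rate = sorted([group_a_agree, group_b_agree])
    return lo_rate < low and hi_rate > high

# Example: 33 votes, 75% agreement in one group, 30% in the other.
print(is_significant_difference(33, 0.75, 0.30))  # True
```

Under this reading, a statement with broad agreement in both groups (say 55% vs 45%) would not qualify, however many votes it received.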
AI in Health and Social Care (3 statements)

This topic included 1 subtopic, comprising a total of 3 statements. This subtopic had moderately low alignment compared to the other subtopics.

Common ground
A participant stated that AI in care homes should free up staff time for more interaction with residents rather than replace human contact.
Differences of opinion
No statements met the thresholds necessary to be considered as a significant difference of opinion.
  • AI as supplement to human care: AI should free staff time for resident interaction rather than replace human contact
  • AI for companionship when humans unavailable: Using AI to keep an elderly person company when no human is present is preferable to leaving them alone
  • AI as supportive tool for families: Families caring for someone with a serious illness should be offered AI assistance even if the tools are not perfect
Accountability and Redress for Harmful AI (3 statements)

This topic included 1 subtopic, comprising a total of 3 statements. This subtopic had moderately low alignment compared to the other subtopics.

Common ground
Participants agreed that someone should be held responsible when AI systems treat people unfairly because of race, age, or disability. They observed that AI-driven decisions are harming people while the government takes too long to act, and they urged quicker government intervention.
Differences of opinion
No statements met the thresholds necessary to be considered as a significant difference of opinion.
  • Accountability for unfair AI treatment: Responsibility should be assigned when AI systems discriminate based on protected characteristics
  • Independent oversight with enforcement power: An independent body in the UK capable of shutting down harmful AI systems
  • Delayed government response: People are suffering from AI-driven decisions while government action is slow
Workforce Impact and Economic Transition (3 statements)

Participants discussed how AI could affect jobs and emphasised the need for policies that support workers through economic transition.

Transparency and Right to Know (2 statements)

Participants emphasised the need for transparency about AI systems and asserted the public's right to know how these technologies operate and affect them.

Education and Critical AI Literacy (2 statements)

Participants highlighted the importance of education and critical AI literacy to empower communities to understand, question, and shape AI technologies.

Intellectual Property and Creator Compensation (1 statement)

Participants raised concerns that creators must be compensated and intellectual property rights respected when AI systems use their work.

Precautionary Approach to AI Deployment (1 statement)

Participants advocated a cautious, precautionary stance toward deploying AI, urging thorough assessment of potential harms before widespread use.

💬 All 20 Statements

Every statement submitted by participants, with voting results. All votes were cast anonymously.

People living with dementia deserve a say in whether AI tools are used in their care.
96% agree, 4% disagree (146 votes)

When AI helps make a decision about your benefits, housing, or health, you should be told.
97% agree, 3% disagree (101 votes)

If an AI system treats people unfairly because of their race, age, or disability, someone should be held responsible.
97% agree, 3% disagree (101 votes)

People should always be able to reach a real person when dealing with a government service, even if AI handles most tasks.
97% agree, 3% disagree (99 votes)

When I talk to a chatbot or AI assistant, I should always be told it is not a real person.
90% agree, 10% disagree (99 votes)

The UK should create an independent body with real power to shut down harmful AI systems.
90% agree, 10% disagree (94 votes)

Schools should teach children to question what AI tells them, not just how to use it.
96% agree, 4% disagree (55 votes)

AI in care homes should free up time for staff to spend with residents, not replace human contact.
93% agree, 7% disagree (60 votes)

Workers who lose their jobs because of AI should get real help finding new work, not just advice.
93% agree, 7% disagree (60 votes)

We should slow down on AI until we better understand what it does to people.
80% agree, 20% disagree (35 votes)

People are being harmed by AI-driven decisions while the government takes too long to act.
89% agree, 11% disagree (33 votes)

Communities should have a voice in deciding how AI is used in their local schools and hospitals.
96% agree, 4% disagree (33 votes)

Companies should not be allowed to replace workers with AI unless they help those workers find new roles.
59% agree, 41% disagree (34 votes)

AI companies should have to pay artists and writers when they use their work to train AI systems.
95% agree, 5% disagree (32 votes)

A community that takes care of its people matters more than one with the most advanced technology.
71% agree, 29% disagree (32 votes)

Big technology companies care more about profits than about what happens to our communities.
64% agree, 36% disagree (33 votes)

Families caring for someone with a serious illness should be offered AI tools to help, even if those tools are not perfect.
50% agree, 50% disagree (30 votes)

Using AI to keep an elderly person company when no human is available is better than leaving them alone.
100% agree, 0% disagree (28 votes)

AI tools in schools do more to help struggling students catch up than they do to harm learning.
30% agree, 70% disagree (28 votes)

New AI data centres in Oxfordshire will create good jobs for local people, not just for tech workers from elsewhere.
100% agree, 0% disagree (27 votes)