A care worker finishes a twelve-hour night shift in a residential home. She has spent the last hour completing paperwork that a new AI transcription tool was supposed to handle. The tool was designed to save time — and on a dashboard somewhere, it does. But it was built around an administrative problem, not around her. Nobody asked what she needed. Nobody asked the people she cares for, either.
This is the gap that attentiveness is designed to close: the distance between the people closest to a problem and the institutions making decisions about it. In long-term care — where AI is increasingly used in everything from care assessments to voice-activated support — that distance can be vast.
Between February 2024 and March 2025, a cross-sector collaboration convened more than 100 representatives from across the UK's long-term care community to co-produce a definition of responsible AI in long-term care. The collaborators included care and support recipients, unpaid caregivers, professional care workers, care providers, technology providers, policy makers, decision makers, and civil society representatives. Three hosts from academia, the private sector, and civil society coordinated the work.
The resulting definition, published in The Lancet Healthy Longevity, makes care values, not efficiency metrics, the starting point for AI policy and practice. What makes this work a case study in attentiveness is not only what the collaboration produced, but how it produced it: by practising the discipline of noticing who is missing, listening before solving, and refusing to let powerful voices set the agenda unchallenged.
How the co-production practised attentiveness
Relationships first
The collaboration did not begin with a technology problem. It began with a roundtable — an open invitation to the long-term care community to establish relationships across sectors and develop a shared understanding of the current state of AI in care. The goal of this first phase was not to produce outputs but to build the trust and cross-sector connections necessary for honest conversation. Co-production principles from the Social Care Institute for Excellence (SCIE) guided the process, grounded in equality, diversity, accessibility, and reciprocity: listening took place through secure, voluntary channels rather than by extracting data from participants.
Power must answer questions
The collaboration was deliberately structured to surface and address power imbalances. Care recipients and caregivers — groups who are typically subjected to the decisions of more powerful actors like policy makers, technology developers, and care organisations — were positioned as equal collaborators, not consultees. Working groups met three to five times to explore what AI meant for their care, work, and lives. When tensions arose between groups, they were named openly. Care recipients and caregivers reported feeling under-represented in the current policy landscape. Technology providers acknowledged the tension between openness and commercial pressure. These disagreements were not smoothed over; they were recorded as part of the work.
Stories before specification
The process started with stories, not specifications. Working groups began by exploring what AI and generative AI meant to participants in their daily lives — at home, at work, in their care. Collaborators shared experiences with specific AI products, discussed hopes and concerns, and reflected on their roles as developers, users, or subjects of AI systems. Numbers and formal definitions came later, built on top of these grounded, personal accounts. The collaboration produced a definition of responsible AI that reads not as a technical standard but as a statement of what people value: that AI should not replace human interaction, that it should support independence and dignity, and that its use should be a well-informed and safe choice.
Rights as a baseline
The definition that emerged is rooted in the view that care is a human need tied to wellbeing, dignity, equality, and human rights. The collaborators agreed that the starting point for AI in care should not be technological potential but the role of long-term care in protecting and advancing independence, choice and control, dignity, and the wellbeing of people who receive care — as enshrined in international and national law. Rights were not an afterthought bolted onto a technology framework; they were the foundation from which everything else followed.
No fake pluralism
The collaboration took active steps to avoid the appearance of inclusion without the substance. It followed the SCIE co-production ladder, which ranks consultation lowest and equal partnership highest. The Care Workers' Charity's guide to centring care workers informed the approach. Experienced facilitators and civil society groups led the working groups. When some voices risked dominating — as happened when commercial pressures clashed with care values — the structure of the process ensured that the quieter, more vulnerable perspectives were not erased. The final phase brought more than 50 collaborators together in person for a deliberative assembly, where the definition and practice guidance were debated and finalised face to face.
From ideas to practice
The co-production moved through four phases, each deepening the practice of attentiveness:
Listen widely. A roundtable convened representatives from across the long-term care community. The goal was not to solve a problem but to understand the landscape: what AI was already doing in care, what people hoped for, and what they feared. A shared statement was published and endorsed by approximately 30 organisations, establishing two aims for the work ahead.
Map relationships and disagreements. Thematic working groups formed around values, principles of good care, and ethical evaluation. As conversations progressed, participants reflected on their different roles — developer, user, subject — and the power dynamics between them. Each group chose an output to co-create publicly: technology providers drafted a pledge; care professionals issued demands for training, accountability, and investment; care recipients produced a statement on what matters most to them.
Send receipts. An overarching co-production working group scrutinised all outputs from previous phases. This was the accountability layer — ensuring that what was heard in working groups was faithfully carried into the combined document. The resulting definition and principles for practice were not authored by researchers alone but assembled from the contributions of all groups, with transparent attribution.
Decide with brakes. The final phase was a deliberative assembly. More than 50 collaborators gathered in person to examine the definition and practice guidance. Points of consensus were identified. But so were unresolved tensions: concerns about co-production being used as rhetoric, about the financial realities of implementation, about mistrust between care recipients and institutions. These open questions were published as part of the work, not hidden.
What they built
The collaboration produced several concrete outputs that embody attentiveness in practice:
A care-centric definition of responsible AI — a framing that starts from what people value in care (human connection, dignity, choice) rather than from what technology can optimise. The definition states that AI use should not undermine, harm, or breach fundamental values of care, human rights, independence, choice and control, dignity, equality, and wellbeing.
Principles for practice across eleven domains: improving care and support, choice and control, accessibility, training, data privacy, transparency, human contact and connections, bias and discrimination, continuous improvement, co-production, and sustainable technology. These were not drawn from existing AI ethics frameworks but built from the ground up by the people who live within the care system (one way such domains might be made operational is sketched after this list).
Sector-specific outputs from each working group — a technology provider pledge, care professional demands, and care recipient statements. These are artefacts of attentiveness: evidence that different groups were heard on their own terms, not collapsed into a single consensus document.
A published shared statement endorsed by approximately 30 organisations, marking the first step and making the collaboration's commitments visible to the broader sector.
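Purely as an illustration of how the eleven domains might be made operational, the sketch below treats them as a review checklist that a care or technology provider could work through for each deployed AI system. This is a hypothetical structure, not part of the published guidance: the DomainReview record, its reviewed rule, and the unreviewed_domains helper are assumptions introduced here; only the domain names come from the source.

```python
from dataclasses import dataclass, field

# The eleven practice domains named in the co-produced guidance.
DOMAINS = [
    "improving care and support",
    "choice and control",
    "accessibility",
    "training",
    "data privacy",
    "transparency",
    "human contact and connections",
    "bias and discrimination",
    "continuous improvement",
    "co-production",
    "sustainable technology",
]

@dataclass
class DomainReview:
    """Hypothetical record of one domain's review for a deployed AI system."""
    domain: str
    evidence: list[str] = field(default_factory=list)   # e.g. audit notes, user feedback
    consulted: list[str] = field(default_factory=list)  # e.g. "care recipients", "care workers"

    @property
    def reviewed(self) -> bool:
        # A domain counts as reviewed only if evidence exists AND the
        # people affected by the system were actually consulted.
        return bool(self.evidence) and bool(self.consulted)

def unreviewed_domains(reviews: list[DomainReview]) -> list[str]:
    """Return the domains with no completed review, so gaps stay visible."""
    done = {r.domain for r in reviews if r.reviewed}
    return [d for d in DOMAINS if d not in done]

# Example: a provider that has reviewed two domains sees the other nine flagged.
reviews = [
    DomainReview("data privacy", evidence=["impact assessment completed"], consulted=["care workers"]),
    DomainReview("transparency", evidence=["system factsheet shared"], consulted=["care recipients"]),
]
print(unreviewed_domains(reviews))  # -> the nine domains still awaiting review
```

The reviewed rule deliberately encodes the co-production stance: documentation alone does not count, and a domain is only done when the people affected by the system have actually been consulted.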
What could go wrong
The collaboration itself surfaced several risks:
Co-production as listening theatre. Collaborators raised concerns that co-production principles are rarely followed in practice, limiting meaningful input from people with lived experience. Fix: The collaboration used SCIE's co-production framework and the Care Workers' Charity's centring guide as structural safeguards, not just aspirational references.
Outcome-driven narratives overtaking care values. A key concern was that government and providers might embrace AI too eagerly, promoting narrow, outcome-driven narratives without sufficient evidence or oversight. Collaborators also warned that AI adoption could prompt staffing cuts in care homes, undermining overall care quality. Fix: The definition explicitly rejects starting from efficiency metrics and instead anchors responsible AI in human rights and care values.
Powerful voices setting the agenda. Technology developers face pressure to compete and ship quickly. Openness and dialogue can be challenging when speed-to-market is the priority. Fix: Deliberate process design ensured that care recipients and workers had equal standing, and that commercial interests did not dominate the framing.
Reduced choice and human contact. Care recipients expressed concern about being offered technology-based options that might not suit their individual needs, replacing human relationships with automated alternatives. Fix: The definition makes explicit that AI should support and enhance caring relationships, not replace human interactions or care provision.
Equity of access. Collaborators flagged concerns about the financial resources required for procurement and maintenance of AI systems, and whether these would be accessible equally to everyone. Fix: Accessibility was established as a core principle — AI systems should be accessible to people with different needs, and costs should not create new inequities.
Mistrust between sectors. Care recipients and caregivers expressed doubt that AI would truly serve them rather than the interests of policy makers and providers; some groups voiced outright mistrust of those actors. Fix: Open publication of tensions, disagreements, and unresolved questions. Transparency about what was not agreed upon is itself a form of attentiveness.
Co-production fatigue. The process is time consuming and expensive, and sustaining genuine participation over four phases demands resources that may not always be available. Fix: The collaboration embedded co-production as a principle within the definition itself, arguing it should be built into the AI lifecycle and into AI roll-outs in care services and wider policy, not treated as a one-off consultation exercise.
Interfaces
This case study connects to other packs in the framework:
→ Responsibility (Pack 2): The collaboration explicitly allocated responsibilities across actors — technology developers, care providers, policy makers — and argued for clearer links between long-term care regulation and AI oversight. The question of who is accountable when AI harms someone in care was a recurring theme.
→ Competence (Pack 3): The principles for practice include training (people should be able to learn about AI systems used in care provision) and continuous improvement (user feedback should lead to system improvements or risk mitigation). These map directly to competence as the ability to do care work well.
→ Responsiveness (Pack 4): The definition's insistence on choice and control — that people should be able to make informed choices about the AI they use and are subjected to — is a responsiveness claim. Care must adapt to the person receiving it, not the other way around.
Green C, Reinmund T, Hamblin K, Sinha SK. Responsible use of artificial intelligence in the provision of long-term care for older people: a care-centric approach. The Lancet Healthy Longevity. 2026.