Workshop Questions and Responses
WQ1. How are AI-driven predictions made in high-stakes public contexts, and what social, organizational, and practical considerations must policymakers consider in their implementation and governance?: Lessons from "Prediction in Practice" workshop
Summary: Researchers are developing predictive systems to respond to contentious and complex public problems across domains such as criminal justice, healthcare, education, and social services—high-stakes contexts that can materially affect quality of life. Success depends heavily on whether and how a system is integrated into existing decision-making processes, policies, and institutions. The ways we define and formalize prediction problems shape how an algorithmic system looks and functions, and even subtle differences in problem definition can significantly change the resulting policies. The most successful predictive systems are not dropped into place but are thoughtfully integrated into existing social and organizational environments and practices. Matters are further complicated by questions of jurisdiction, as when algorithmic objectives imposed at a state or regional level conflict with the goals held by local decision-makers. Successfully integrating AI into high-stakes public decision-making requires difficult work, deep and multidisciplinary understanding of the problem and context, cultivation of meaningful relationships with practitioners and affected communities, and a nuanced understanding of the limitations of technical approaches.
WQ2. What are the most pressing challenges and significant opportunities in the use of artificial intelligence to provide physical and emotional care to people in need?: Lessons from "Coding Caring" workshop
Summary: Smart home devices can give Alzheimer's patients medication reminders, pet avatars and humanoid robots can offer companionship, and chatbots can help veterans living with PTSD manage their mental health. These intimate forms of AI caregiving challenge how we think of core human values, like privacy, compassion, trust, and the very idea of care itself. AI offers extraordinary tools to support caregiving and to increase the autonomy and well-being of those in need. Some patients may even prefer robotic care in contexts where privacy is an acute concern, such as assistance with intimate bodily functions, where a non-judgmental helper may preserve privacy or dignity. However, in elder care, and particularly for dementia patients, companion robots will not replace the human decision-makers who increase a patient's comfort through intimate knowledge of their conditions and needs. The use of AI technologies in caregiving should aim to supplement or augment existing caring relationships, not replace them, and should be integrated in ways that respect and sustain those relationships. Good care demands respect and dignity, qualities that we simply do not know how to code into procedural algorithms. Innovation and convenience through automation should not come at the expense of authentic care.