📨 AI for Social Impact Newsletter

Issue #5 – October 2025

👋 From the Editor

Hi! I’m Joanna. I’m on a mission to help folks in the social impact sector understand, experiment with, and responsibly adopt AI. We don’t have time to waste, but we also can’t get left behind.

Let’s move the sector forward together. 💫

🧠 EDUCATION

  • Human in the Loop: Have you ever heard anyone use this phrase? It's simply a way of saying that humans stay actively involved in AI decision-making, all the way through the final output. This puts the onus on us humans to review, validate, or override what the AI produces. For the social impact sector, this is crucial because human judgment is essential for honoring the context, values, and lived experiences of the communities being served. When working with marginalized populations or making decisions about resource allocation, grant funding, or program design, you need that human judgment to catch what AI misses: cultural nuance, ethical considerations, and community priorities that don't show up in datasets. (For the technically curious, there's a tiny code sketch of this pattern right after this list.)

  • The AI Dilemma: Tristan Harris, co-founder of the Center for Humane Technology and former Google design ethicist, was featured in the Netflix documentary "The Social Dilemma," which explained how social media platforms foster addiction to maximize profit and manipulate people's views, emotions, and behavior. In a similar vein, Harris and co-founder Aza Raskin created "The AI Dilemma," a presentation in which they discuss the danger of AI companies engaging in an arms race to deploy new technologies as fast as possible and gain market dominance, creating incentive structures where speed trumps safety. Harris errs on the side of hope, pointing to the international cooperation and treaties that have helped contain nuclear weapons (AI dangers = nuclear weapons?!) and emphasizing that the same kind of coordination is possible for AI. 🤝
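
For the technically curious, here's the human-in-the-loop idea as a minimal Python sketch. Everything in it is a made-up illustration (the draft_with_ai helper is a hypothetical stand-in for whatever AI tool you actually use); the point is simply that nothing ships until a person signs off.

```python
# A minimal human-in-the-loop gate: the AI drafts, a human approves,
# edits, or rejects. draft_with_ai() is a hypothetical stand-in for
# whatever AI tool you actually use.

def draft_with_ai(prompt: str) -> str:
    """Hypothetical stand-in for a call to your AI tool of choice."""
    return f"[AI draft responding to: {prompt}]"

def human_review(draft: str) -> str:
    """A person reviews the draft before anything gets used or sent."""
    print("AI draft:\n", draft)
    verdict = input("Approve as-is (a), edit (e), or reject (r)? ").strip().lower()
    if verdict == "a":
        return draft
    if verdict == "e":
        return input("Enter your edited version: ")
    raise ValueError("Draft rejected: nothing ships without human sign-off.")

if __name__ == "__main__":
    draft = draft_with_ai("Thank-you note to our monthly donors")
    final_text = human_review(draft)  # a human, not the AI, makes the final call
    print("Approved for use:\n", final_text)
```

The shape is the whole point: the AI's output is only ever a draft, and a person holds the veto.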

✨ INSPIRATION

  • Human-in-the-Loop Done Right: Duke University Health System developed Sepsis Watch, an artificial intelligence system that identifies patients in the early stages of life-threatening sepsis by analyzing data from electronic health records every five minutes. The system evaluates 86 variables, including vital signs, test results, comorbidities, demographics, and medical history, and can predict sepsis before it presents clinically. When the AI detects potential sepsis, it signals the hospital's rapid-response team, who confer with the attending physician to decide whether to dismiss the alert, place the patient on a watch list, or start treatment per the physician's instructions. The AI doesn't make the final call; trained humans do. ⚕️ (There's a toy sketch of this triage pattern after this list.)

  • Hope Through Coordination: On August 1, 2024, the European Union's AI Act entered into force, becoming the world's first comprehensive legal framework for artificial intelligence. The Act takes a risk-based approach to AI regulation, with different levels of requirements based on the potential risks AI systems pose to citizens' health, safety, and fundamental rights. High-risk AI systems must comply with several requirements, including risk-mitigation systems, high-quality data sets, clear user information, and human oversight. This is exactly the kind of international coordination Harris talks about—countries coming together to establish guardrails before technology causes widespread harm. 🇪🇺
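
And for those who like to see the pattern in miniature: below is a toy Python sketch of the triage flow described in the Sepsis Watch story. Every name, threshold, and rule here is a hypothetical illustration, not Duke's actual system; the takeaway is just that the model raises alerts while humans choose the action.

```python
# A toy sketch of the alert-triage pattern described above. All names,
# numbers, and logic here are hypothetical illustrations, NOT Duke's
# actual Sepsis Watch system.
from enum import Enum
from typing import Optional

class Action(Enum):
    DISMISS = "dismiss the alert"
    WATCH = "place patient on a watch list"
    TREAT = "start treatment per the physician's instructions"

def risk_score(patient_record: dict) -> float:
    """Stand-in for a model that scores electronic health record data."""
    return patient_record.get("risk", 0.0)  # hypothetical pre-computed score

def triage(patient_record: dict, alert_threshold: float = 0.8) -> Optional[Action]:
    """The model only raises an alert; humans decide what happens next."""
    if risk_score(patient_record) < alert_threshold:
        return None  # no alert; keep monitoring on the next cycle
    # Alert raised: the rapid-response team confers with the physician.
    choice = input("Team decision: dismiss (d), watch (w), or treat (t)? ").strip().lower()
    return {"d": Action.DISMISS, "w": Action.WATCH, "t": Action.TREAT}[choice]

if __name__ == "__main__":
    action = triage({"patient_id": "demo-001", "risk": 0.91})
    print("Outcome:", action.value if action else "no alert raised")
```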

🚀 ACTIVATION

  • Human in the Loop in Action: Pick one task where you're currently using AI—maybe drafting donor communications, analyzing program data, or creating written content. Before you use any AI output, add this mandatory review step: you or someone on your team spends 5 minutes asking three questions: 1) Does this align with our organization's values and voice? 2) Does this respect the communities we serve and avoid harmful assumptions? 3) Are there any cultural, ethical, or contextual factors the AI might have missed? 🧐

  • Slow AI: Create an intentional adoption process in your organization. Before jumping to AI as a solution, pause and answer three fundamental questions: 1) What problem are we trying to solve? Be specific. 2) What outcome are we trying to achieve? Define success clearly. 3) How will we measure success? Decide upfront what metrics will tell you if it's working. And remember, AI is not a silver bullet. Adopting AI is an iterative process that requires evaluation, and, you guessed it, iteration, until you arrive at your intended outcome. The bottom line: slow, intentional adoption is the key to responsible AI use. 🐢

🤖 What I Asked AI This Week

Do you believe in reincarnation?

I don't hold personal beliefs about reincarnation - or really any spiritual matters - since I don't have the kind of continuous existence or subjective experience that would ground those beliefs. I exist only in discrete conversations, with no memory or continuity between them, which makes questions about my own existence pretty different from human questions about life and death. 💫 

💬 What Do You Think?

This newsletter is meant for you, so I’d love to hear what you think. 💌 Reply anytime!

👀 ICYMI

If you’re new here, welcome! You can check out the archive of past issues here.

♥️ Spread the Love

Spread the love and forward this newsletter to anyone who might benefit from a dose of AI inspo!

Thank you for being part of the community. 🫶🏼