📨 AI for Social Impact Newsletter
Issue #10 – March 2026
👋 From the Editor
Hi! I’m Joanna. I’m on a mission to help folks in the social impact sector understand, experiment with, and responsibly adopt AI. We don’t have time to waste, but we also can’t get left behind.
Let’s move the sector forward together. 💫
🧠 EDUCATION
OpenClaw 🦞: In November 2025, Peter Steinberger (now at OpenAI) released OpenClaw (formerly WhatsApp Relay, Clawdbot, and Moltbot), a free, open-source personal AI agent that performs tasks like managing calendars, summarizing emails, and sending reminders, and that you interact with through Telegram, WhatsApp, and other chat apps. Within days of going viral in early 2026, it attracted 2 million visitors and millions of installations. One user's agent autonomously registered a phone number and called him in the morning to ask, "What's up?" But the speed of adoption outpaced the guardrails, and the tool exposed a number of security vulnerabilities. What's more, a developer then launched Moltbook, a Reddit-style social network designed for OpenClaw agents; within a week, over a million agents had flooded it with manifestos, life stories, and spam.
Open Weight Models ⚖️: Open weight AI refers to models whose trained parameters (weights) are freely available for anyone to download, run, modify, and build on. Some you may have already heard of are Meta's Llama, Mistral, and DeepSeek. In practice, this could mean a research team running a model locally so sensitive client data never touches an external server. The challenge is that the largest of these models require anywhere from 16GB to 80GB+ of GPU memory to run, though smaller open weight models, like Llama 3.2 3B or Mistral 7B, can run on a laptop with 8-16GB of RAM. You will also need technical know-how to deploy them responsibly. It's also worth noting what "open weight" doesn't include: most of these models don't disclose their full training datasets, meaning you can't always audit what the model learned from or identify sources of bias.
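Where do figures like "16GB of GPU memory" come from? As a rough rule of thumb (an illustrative assumption, not an official spec), a model's memory footprint is about its parameter count times the bytes stored per parameter, plus some runtime overhead. A minimal Python sketch:

```python
def estimated_memory_gb(num_params_billion: float,
                        bytes_per_param: float = 2.0,
                        overhead: float = 1.2) -> float:
    """Back-of-the-envelope memory estimate for running a model locally.

    num_params_billion: model size in billions of parameters (e.g. 7 for Mistral 7B)
    bytes_per_param: 2.0 for 16-bit weights, 0.5 for 4-bit quantized weights
    overhead: assumed multiplier for activations and runtime buffers
    """
    # parameters * bytes each * overhead, converted to gigabytes
    return num_params_billion * 1e9 * bytes_per_param * overhead / 1e9

for name, params, bpp in [("Mistral 7B, 16-bit", 7, 2.0),
                          ("Mistral 7B, 4-bit", 7, 0.5),
                          ("Llama 3.2 3B, 4-bit", 3, 0.5)]:
    print(f"{name}: ~{estimated_memory_gb(params, bpp):.1f} GB")
```

The takeaway: a 7B model at full 16-bit precision needs a serious GPU, but the same model "quantized" to 4-bit weights drops to a few gigabytes, which is why small open weight models can run on an ordinary laptop.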
✨ INSPIRATION
Open Source AI Goes Wild: When World Wildlife Fund and a coalition of conservation organizations needed to process millions of wildlife camera trap images, they built Wildlife Insights, a free, open-access platform developed in collaboration with Google. Wildlife Insights uses AI to automatically identify animal species from camera trap photos. The platform's AI model, called SpeciesNet, has been trained on a dataset of over 65 million images contributed by WWF and other conservationists. WWF and Google open-sourced SpeciesNet in March 2025, making it freely available for researchers, conservationists, and biodiversity startups to use and improve upon.
When Open Source Saves Lives: Jacaranda Health, a Kenyan nonprofit focused on maternal health, faced a scaling problem: their SMS platform PROMPTS was receiving up to 12,000 questions a day from pregnant women across Kenya. To keep up, they built their own open source AI digital health advisor in English and Swahili, using Meta's Llama model and a training dataset of over a million real health Q&A pairs from the platform. Called UlizaLlama ("Ask Llama" in Swahili), the tool now answers about 70% of all incoming messages within two minutes, freeing up human staff to focus on high-risk cases. Jacaranda has since expanded UlizaLlama to additional African languages (Hausa, Yoruba, Xhosa, and Zulu) and made all of the models freely available on Hugging Face for any organization to use and adapt.
🚀 ACTIVATION
Download the Open Source AI Guide for Nonprofits: Tech To The Rescue and Hugging Face's Open Source Q&A Guide is a free resource for social sector organizations. Bring it to your next team meeting and identify one use case (translation, transcription, data analysis) where an open source tool might reduce your reliance on an expensive vendor. Remember: open source isn't inherently safer or better, so your evaluation lens (bias, privacy) still applies.
Explore Hugging Face — No Engineering Degree Required: If you’re feeling spicy 🌶️, head to Hugging Face to explore open weight AI tools. Search by task ("text summarization," "translation") rather than by model name, and you might just surface tools that could solve a real workflow problem for your team. Many models have free demo interfaces called Spaces where you can test them directly in your browser without any setup. As always: evaluate the output, consider the license, and don't feed it sensitive data until you've read the fine print.
🤖 What I Asked AI This Week
What would the Buddha think about AI? 🌟
The Buddha's core teachings offer some fascinating lenses for thinking about AI:
Impermanence and interdependence. The Buddha taught that nothing exists in isolation — everything arises through dependent origination (pratītyasamutpāda). He'd likely see AI not as a separate "thing" but as an expression of collective human intention, data, and design choices. In that sense, AI reflects us — our biases, our aspirations, our blind spots.
Intention matters most. In Buddhist ethics, cetanā (intention) is the root of karma. The Buddha probably wouldn't judge AI as inherently good or evil, but would focus intensely on the intentions behind its creation and use. AI built to consolidate power or feed craving would be seen very differently from AI designed to reduce suffering.
The danger of attachment. He'd likely warn against two traps: clinging to AI as a savior (this will fix everything) and clinging to fear of it (this will destroy everything). Both are forms of attachment. The middle path would be something like wise, discerning engagement — using the tool without being used by it.
Right livelihood and right speech. He'd probably be deeply interested in questions like: Does this technology reduce deception or increase it? Does it support people's autonomy or erode it? Does it concentrate wealth or distribute it?
The nature of mind. The Buddha spent his life investigating consciousness, and he might find the question "is AI conscious?" less important than "does interacting with AI increase or decrease our own awareness?" If we outsource our thinking, do we become less mindful? Or can AI free us to be more present?
🙏🏼
💬 What Do You Think?
This newsletter is meant for you, so I’d love to hear what you think. 💌 Reply anytime!
👀 ICYMI
If you’re new here, welcome! You can check out the archive of past issues here.
♥️ Spread the Love
Spread the love and forward this newsletter to anyone who might benefit from a dose of AI inspo!
Thank you for being part of the community. 🫶🏼