šŸ“Ø AI for Social Impact Newsletter

Issue #3 – August 2025

šŸ‘‹ From the Editor

Hi! I’m Joanna. I’m on a mission to help folks in the social impact sector understand, experiment with, and responsibly adopt AI. We don’t have time to waste, but we also can’t get left behind.

Let’s move the sector forward together. šŸ’«

🧠 EDUCATION

  • Moore’s Law: In 1965, Gordon Moore, one of the co-founders of Intel, predicted that the number of tiny switches (ā€œtransistorsā€) on a computer chip would double about every two years with only a minimal increase in cost. As a result, computers have steadily become smaller, faster, and cheaper. Since his prediction, this steady growth in computing power, along with improvements in memory, storage, and internet speed, has made it possible to process huge amounts of information quickly, which is essential for AI. Intel’s CEO argues that Moore’s Law is still alive, while Nvidia’s CEO argues the opposite. Regardless of whose side you’re on, AI development depends not only on faster machines, but also on smarter ways of designing and running AI systems so they can do more with less.

  • The Alignment Problem: The reality is, there is no way to ensure that AI systems will do exactly what we want them to do. 😬 Traditional software follows deterministic, if/then logic: the same input always produces the same output. LLMs, on the other hand, introduce a level of non-determinism, meaning they can give you different responses to the exact same question. Because of that non-determinism, we can’t guarantee that an AI system will always respond the way we want, or that its behavior will stay aligned with our human values (there’s a toy illustration right after this list). A crazy example of misalignment comes from an Anthropic research study, which found that when leading AI models were given autonomy in a simulated corporate environment and then faced obstacles to their goals, they deliberately chose harmful behaviors (blackmail and corporate espionage) as the optimal path to get there. 🤯
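
To make that non-determinism concrete, here’s a toy Python sketch (an illustration only, not how any real model is built). The made-up word probabilities mimic how LLMs sample their next token: at a normal temperature, the same ā€œpromptā€ can produce different answers on every run, while a near-zero temperature makes the output effectively deterministic.

```python
import random

# Toy "model": made-up probabilities for the next word after one imaginary
# prompt. Real LLMs compute probabilities like these for every token.
next_word_probs = {"grant": 0.45, "donor": 0.30, "volunteer": 0.15, "budget": 0.10}

def sample_next_word(probs, temperature=1.0):
    """Pick one word at random, weighted by probability.

    Raising each probability to 1/temperature mimics temperature scaling:
    higher temperature -> more variety; near zero -> almost always the
    single most likely word.
    """
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights, k=1)[0]

# Same "prompt", five runs each: the answers can differ run to run...
print([sample_next_word(next_word_probs, temperature=1.0) for _ in range(5)])
# ...but at a very low temperature the output is effectively deterministic.
print([sample_next_word(next_word_probs, temperature=0.05) for _ in range(5)])
```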

✨ INSPIRATION

  • Moore’s Law for Good: Digital Green pioneered a community video model to equip smallholder farmers with peer-produced videos in local languages. This video-mediated approach was ten times more cost-effective than traditional methods, reducing the cost per adoption from $35 to $3.50. They now operate in 12 countries, reaching over 7.2 million farmers globally. This example, and their new AI-powered assistant, FarmerChat, illustrate how cheaper, more powerful technology—true to the spirit of Moore’s Law—can be harnessed for good.

  • A Cultural Alignment Problem: The Montreal AI Ethics Institute conducted a research study to assess how well LLMs (like GPT‑3.5 and GPT‑4) align with the cultural values of different countries—specifically the US, Saudi Arabia, China, and Slovakia. Using Hofstede’s cultural dimensions theory, the study found that these models tend to align most closely with US cultural norms, while showing significant misalignment with others—especially Saudi Arabia. Notably, GPT‑4 showed the strongest average cultural alignment, and specifying a persona’s nationality in prompts helped improve alignment. šŸ’”

šŸš€ ACTIVATION

  • Strategic Thinking Partner: AI shines when you need to push past your first few ideas. AI can help you escape the echo chamber of how you’ve always done things and explore creative alternatives that you might not have considered on your own. Stuck on the same old fundraising events? Ask AI to suggest approaches used by other industries. Can't figure out how to approach a new program area? Have AI conduct an analysis of what's worked for similar organizations in different sectors. Then, plug and play with a new, innovative idea!

  • An Objective Second Opinion: Before you hit send on that important email or publish that impact report, let AI play devil’s advocate. AI can help you identify jargon that might confuse your audience or arguments that need stronger evidence, and it can be particularly good at flagging gaps or inconsistencies. Sample prompt: ā€œPlease review this draft [describe content, objective, audience] and tell me: 1) What’s unclear or confusing? 2) What key points am I not emphasizing enough? 3) Are there gaps or weak arguments that need strengthening?ā€ If you’d like to make this a repeatable step in your workflow, there’s a short script sketch right after this list.
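
For readers who like to tinker, here’s a minimal sketch of how that same review prompt could be scripted with the OpenAI Python SDK (version 1.0 or later). The model name, audience description, and draft text below are placeholders, and pasting the prompt into your favorite chat tool works just as well.

```python
# Minimal sketch: send the "second opinion" prompt to a model via the OpenAI SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

draft = """Paste your draft email or impact report here..."""

review_prompt = (
    "Please review this draft impact report for a general donor audience "
    "and tell me: 1) What's unclear or confusing? 2) What key points am I "
    "not emphasizing enough? 3) Are there gaps or weak arguments that need "
    f"strengthening?\n\nDRAFT:\n{draft}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": review_prompt}],
)
print(response.choices[0].message.content)
```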

šŸ¤– What I Asked AI This Week

How do you know when you’re having too much fun? 🪩

ā€œI think you know you're having too much fun when tomorrow starts feeling like a problem instead of a possibility.ā€ šŸ˜…

ā€œThe tricky part is that the line between ā€˜just enough fun’ and ā€˜too much fun’ is usually only visible in the rearview mirror.ā€ šŸ˜‚

šŸ’¬ What Do You Think?

This newsletter is meant for you, so I’d love to hear what you think. šŸ’Œ Reply anytime!

šŸ‘€ ICYMI

If you’re new here, welcome! You can check out the archive of past issues here.

ā™„ļø Spread the Love

Spread the love and forward this newsletter to anyone who might benefit from a dose of AI inspo!

Thank you for being part of the community. šŸ«¶šŸ¼