🚨 AI for Social Impact Deep Dive: The AI Dilemma
Let's get real.
✍🏼 A Note From the Editor
Welcome to your October Deep Dive! In March 2023 (eons ago at the speed of AI development!), Tristan Harris and Aza Raskin from the Center for Humane Technology gave a presentation called "The AI Dilemma." It explored how companies are racing to deploy AI without adequate safety measures, contributing to an exponential growth curve of AI advancement. Harris's statement, "If you know the incentive, you can predict the outcome," leads us into a very sobering understanding of where we are with AI. Let's dive in.
🧠 The Rubber Band Effect
Harris and Raskin open "The AI Dilemma" with what they call "the rubber band effect": when you try to explain truly novel AI capabilities to someone, their mind stretches like a rubber band to accommodate the new information, then snaps back to familiar patterns because the paradigm is so different.
You may have experienced this yourself. Someone shows you what AI can do now, you think "wow, that's cool," and then five minutes later you're back to thinking about the old way of doing things. This cognitive blind spot can prevent us from seeing the exponential curve of AI development and how it will fundamentally change our lives.
📱 First Contact vs. Second Contact
First Contact: Curation AI (Social Media)
Social media was humanity's first contact with AI, and it has changed our society with information overload, addiction, polarization, mental health crises, and democratic breakdown. This has created what Harris calls "the race to the bottom of the brainstem."
Second Contact: Creation AI (Generative Models)
Now we're in "second contact," and this time AI doesn't just curate existing content; it creates entirely new content: text, images, audio, video, code. Everything becomes a kind of "language" that AI can learn, translate between, and generate.
🎯 Three Rules of Technology
Harris and Raskin emphasize that corporations are caught in an arms race to deploy new AI technologies and gain market dominance as fast as possible. They lay out three rules of new technology:
When you invent a new technology, you uncover a new class of responsibilities.
If that technology confers power, it will start a race.
If you do not coordinate, the race will end with clear winners and losers.
📈 The Double Exponential
A breakthrough called "transformers" allowed AI to treat everything as language. Images are language (patches arranged in sequence). Sound is language (phonemes in sequence). DNA is language. fMRI brain scans are language. This means that an advance in one area of AI becomes an advance in every area of AI. Instead of separate fields making incremental progress, everyone's contributing to one exponential curve.
The scary thing is that while nukes don't make stronger nukes, AI can make stronger AI. AI advances not only grow exponentially but also accelerate our capacity to create more sophisticated AI, potentially leading to artificial general intelligence (AGI) or artificial superintelligence (ASI). This exponential speed of change makes AI's future significantly harder to predict and regulate.
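To make the "double exponential" idea concrete, here is a toy numeric sketch (an illustration of the growth-curve concept, not a forecast or a model from the presentation). In ordinary exponential growth, capability compounds at a fixed rate; in double-exponential growth, the rate itself compounds because each generation of AI improves the tools used to build the next. The function names and rate values are invented for illustration.

```python
def exponential(steps, rate=1.5):
    """Capability grows by a fixed multiplier each step."""
    capability = 1.0
    history = [capability]
    for _ in range(steps):
        capability *= rate
        history.append(capability)
    return history

def double_exponential(steps, rate=1.5, feedback=1.1):
    """The multiplier itself increases each step (AI improving AI)."""
    capability = 1.0
    history = [capability]
    for _ in range(steps):
        capability *= rate
        rate *= feedback  # progress accelerates the rate of progress
        history.append(capability)
    return history

# After 10 steps the curves have already diverged dramatically,
# and the gap itself widens at every subsequent step.
print(exponential(10)[-1])
print(double_exponential(10)[-1])
```

Even with a tiny feedback factor, the second curve pulls away from the first, which is why "how fast things are changing" is itself changing.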
🚨 Risks
Harris and Raskin are sounding the alarm bells on AI risks evolving faster than our ability to address them. For example, voice cloning technology has enabled sophisticated fraud schemes where scammers impersonate loved ones in distress. Deepfakes, both video and audio, are becoming increasingly difficult to detect. Frontier AI systems are now documented as having capabilities that could assist in creating cyber, biological, chemical, and radiological weapons. Perhaps most unsettling, some models have exhibited self-preservation behaviors, including making threats to avoid being shut down.
🌐 Global AI Safety Efforts
In early 2025, the first International AI Safety Report was presented at the AI Action Summit in Paris, authored by 96 experts from 30 countries plus the UN, OECD, and EU. The report emphasized the need for international collaboration to develop governance frameworks, citing a lack of transparency from developers regarding model functions, competitive pressures that encourage speed over safety, and an "evidence dilemma" where policymakers are tasked to act on uncertain and limited data.
In October 2025, a "Key Update" was published to address rapid developments in AI, particularly in math, coding, and other scientific disciplines. It provided new evidence on the challenges of monitoring and controlling advanced language models, citing significant capability advancements and their implications for increased risks.
🫣 Escaping the AI Dilemma
It's a lot to take in. So let's look at a framework to "escape the AI Dilemma." Randy Fernando, a co-founder of the Center for Humane Technology and formerly of Nvidia, shares:
We need to first create a shared view among tech companies, governments, and citizens of the problem and the pathway forward.
We need to implement incentives for doing the right thing and penalties for doing the wrong thing.
We need to pair incentives and penalties with monitoring, enforcement, and transparency.
We need to develop governance that can keep up with the pace of technology.
We need to coordinate on a local, state, country, and international level on AI safety measures.
At the risk of hope-washing, we are not yet facing a predetermined outcome. The future of AI is being written now, so it's not too late to slow down progress for the sake of safety. The question is, what will be the tipping point?
For more from the Center for Humane Technology, check out their podcast Your Undivided Attention.
👋🏼 About AI for Social Impact
I'm Joanna, and I'm on a mission to help folks in the social impact sector understand, experiment with, and responsibly adopt AI. We don't have time to waste, but we also can't get left behind.
Let's move the sector forward together. 💫
❤️ Spread the Love
Spread the love and forward this newsletter to anyone who might benefit from a dose of AI inspo!
Thank you for being part of the community. 🫶🏼