šŸ“Ø AI for Social Impact Deep Dive: Algorithmic Bias

Algorithmic bias explained.

āœšŸ¼ A Note From the Editor

Welcome to your August Deep Dive! I thought it was important to spend some time digging into algorithmic bias after glossing over it in the previous issue. Hopefully this makes a technologically complicated subject easy to digest. šŸ²

Here we go!

🧐 What is algorithmic bias?

Let’s think of an algorithm as a set of instructions that tells a computer how to solve a problem or make a decision. AI algorithms are trained by ingesting massive amounts of data, scraped from the internet, to find patterns and make predictions. Here's where bias comes in: this internet data is heavily skewed toward content created by the people who have historically had access to and built technology. You guessed it: predominantly white, English-speaking men from the Global North. Dr. Joy Buolamwini, founder of the Algorithmic Justice League, calls this the ā€œcoded gazeā€: the way the priorities, preferences, and prejudices of technologists get built into algorithms, software, and other products. In other words, AI systems are learning from a very narrow slice of humanity, not the full spectrum of human experience.

ā€¼ļø Why is this important?

AI systems are increasingly making decisions that affect real people's lives—from who gets hired to who gets approved for loans to who gets flagged by law enforcement. As Dr. Buolamwini puts it: "while no one is immune to algorithmic abuse, those already marginalized in society shoulder an even larger burden". When these systems make biased decisions at scale, they can perpetuate and amplify existing inequalities.

There are guardrails we can put in place to hedge against bias, though they're not perfect solutions. AI companies use techniques like Reinforcement Learning from Human Feedback (RLHF), in which human reviewers rate and correct AI outputs to reduce harmful biases—though RLHF raises its own ethical concerns for the human workers doing that reviewing.

Perhaps the most important guardrail is you and your human oversight: never letting AI make high-stakes decisions entirely on its own (often referred to as the ā€œhuman in the loopā€).
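To make ā€œhuman in the loopā€ concrete, here's a minimal sketch of how a team might gate decisions in code. The task names, threshold set, and review queue are illustrative assumptions, not any particular product's API:

```python
# Minimal "human in the loop" sketch: the AI answers low-stakes tasks
# directly, but anything high-stakes goes to a human review queue.
# Task names and the HIGH_STAKES set are illustrative assumptions.

HIGH_STAKES = {"loan_approval", "hiring", "law_enforcement_flag"}

def route_decision(task: str, ai_recommendation: str, review_queue: list) -> str:
    """Return the AI's answer for low-stakes tasks; queue high-stakes ones for a human."""
    if task in HIGH_STAKES:
        review_queue.append((task, ai_recommendation))  # a person decides later
        return "pending human review"
    return ai_recommendation  # low stakes: the AI's answer stands

queue = []
print(route_decision("spell_check", "accept", queue))   # the AI decides
print(route_decision("loan_approval", "deny", queue))   # a human decides
```

The key design choice is that the AI never has the final word on a high-stakes outcome: its recommendation is recorded, but a person signs off.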

🦾 So what can we do about it?

The first step is this: educating yourself and acknowledging that bias exists in AI tools. Wharton professor Ethan Mollick, who made TIME's 2024 list of the most influential people in AI, says: "bias is often subtle, and the AI is not going to be overtly racist or overtly sexist, because they have training systems built around them." The good news? Mollick notes that "if you tell it to be unbiased, the bias level drops", but we have to know to ask. We can be intentional about the prompts we use, the data we feed into systems, and the outcomes we are willing to accept. While we can't eliminate bias entirely, we can be mindful about it.
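Mollick's tip can be baked into your workflow rather than remembered each time. Here's a tiny sketch that prepends an explicit fairness instruction to any prompt before it goes to a chat-based AI tool; the wrapper function and instruction wording are my own illustrative assumptions:

```python
# Sketch of prompt-level bias mitigation: explicitly tell the model to be
# unbiased, as Mollick suggests. The instruction text is an illustrative
# assumption; adapt it to your own use case.

BIAS_CHECK = (
    "Be unbiased: do not let gender, race, age, or nationality "
    "influence your answer, and flag any assumptions you make."
)

def debias_prompt(user_prompt: str) -> str:
    """Prepend an explicit fairness instruction to the user's prompt."""
    return f"{BIAS_CHECK}\n\n{user_prompt}"

print(debias_prompt("Draft a job description for a software engineer."))
```

This doesn't remove bias from the underlying model, but it makes the ask explicit on every request instead of relying on you to remember it.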

🌟 LATIMER — AI for Everyone.

Latimer’s mission is to build empathetic and inclusive thinking machines. Instead of building an entirely new AI system from scratch (which would cost millions and take years), they work with existing foundation models (like ChatGPT and Claude) and train them on a diverse library of books, stories, and perspectives. Latimer has exclusive partnerships with sources like the New York Amsterdam News, a Black-owned newspaper founded in 1909, and works with historically Black colleges and universities on its training data, including Morgan State University's Center for Equitable AI and Machine Learning Systems.

Named after Lewis Latimer, a Black inventor and pioneering engineer whose legacy and contributions are often overlooked, the platform is specifically designed to ensure that Black and Brown voices and histories don't get erased from the AI systems that are shaping our future.

šŸ‘‹šŸ¼ About AI for Social Impact

I’m Joanna, and I’m on a mission to help folks in the social impact sector understand, experiment with, and responsibly adopt AI. We don’t have time to waste, but we also can’t get left behind.

Let’s move the sector forward together. šŸ’«

ā™„ļø Spread the Love

Spread the love and forward this newsletter to anyone who might benefit from a dose of AI inspo!

Thank you for being part of the community. šŸ«¶šŸ¼