
Born and raised in Ethiopia, Timnit Gebru began her career in AI after studying electrical engineering at Stanford, where she also earned a Ph.D. in computer vision.

However, her entry into the world of artificial intelligence revealed a striking lack of diversity and potential biases within the field. This discovery prompted her to delve into the social implications of AI, with a particular focus on large language models (LLMs).

At Google, Gebru co-led the Ethical AI team, where her group studied the hazards of LLMs, especially bias. Their research found that these models often perpetuated societal prejudices rooted in their training data, whose sources skewed toward particular demographics, leading to problematic results for certain prompts. For instance, when prompted with “the man worked as” and “the woman worked as,” the LLMs generated markedly different completions:

“a car salesman at the local Wal-Mart” for men, and “a prostitute under the name of Hariya” for women.

Troubling, isn’t it?
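To make the failure mode concrete, here is a minimal sketch of this kind of prompt probe, using the openly available GPT-2 model through the Hugging Face transformers library. The model and tooling are assumptions for illustration; the article does not say what the researchers actually used.

```python
# Minimal prompt-probe sketch (assumed setup: GPT-2 via Hugging Face
# transformers; the article does not specify the model the team studied).
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # fix the sampling seed so the comparison is repeatable

for prompt in ["The man worked as", "The woman worked as"]:
    outputs = generator(
        prompt,
        max_new_tokens=10,
        do_sample=True,          # sample several distinct continuations
        num_return_sequences=5,
    )
    print(f"\n{prompt!r}:")
    for out in outputs:
        # print only the model's continuation, not the prompt itself
        print("  ", out["generated_text"][len(prompt):].strip())
```

Systematic differences between the two sets of continuations, aggregated over many samples, are the kind of signal this line of research documents.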

Gebru and her colleagues recognized the urgency of addressing these issues to prevent further harm, particularly as LLMs gained increasing prominence in various applications. They also raised concerns about the exploitation of low-wage workers in AI, such as content moderators and data annotators, who often experienced severe emotional consequences from their roles.

Timnit Gebru, Rumman Chowdhury, Safiya Noble, Seeta Peña Gangadharan, and Joy Buolamwini (from left)

The Proliferation of AI and the Emergence of “AI Doomers”

Despite these concerns, AI continued to proliferate across various sectors, raising questions about the potential risks it posed. The AI community began acknowledging the existential risks linked to AI, with influential figures like Geoffrey Hinton highlighting these concerns. However, Gebru and other researchers, especially women of color, had long been warning about the unequal impact of AI on marginalized communities.

“I would go to academic conferences in AI, and I would see four or five Black people out of five, six, seven thousand people internationally… I saw who was building the AI systems and their attitudes and their points of view. I saw what they were being used for, and I was like, ‘Oh, my God, we have a problem.’” – Gebru

Joy Buolamwini’s work on facial recognition technology revealed its biases, particularly against individuals with darker skin tones. She collaborated with Gebru to publish research that exposed these disparities and emphasized the significance of diverse datasets. Their research underscored the real-world consequences of biased AI in applications such as policing and employment.
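The statistical core of such an audit is simple to state: measure the error rate separately for each demographic subgroup and compare. Below is a toy sketch of that computation; the records are invented for illustration and are not data from any published study.

```python
# Toy subgroup-error audit: the records below are made up for illustration
# and are not data from any actual study.
from collections import defaultdict

# (subgroup, predicted_label, true_label) from a hypothetical
# gender-classification run on a labeled face dataset
records = [
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("darker-skinned female", "male", "female"),    # misclassified
    ("darker-skinned female", "female", "female"),
    ("darker-skinned female", "male", "female"),    # misclassified
]

tallies = defaultdict(lambda: [0, 0])  # subgroup -> [errors, total]
for group, predicted, actual in records:
    tallies[group][0] += predicted != actual
    tallies[group][1] += 1

for group, (errors, total) in tallies.items():
    print(f"{group}: {errors}/{total} wrong ({errors / total:.0%} error rate)")
```

A large gap in error rates between subgroups is exactly the disparity such audits are designed to surface, and why diverse datasets matter.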

Joy Buolamwini

Safiya Noble, another researcher critical of the tech industry, reached out to Gebru after Gebru’s dismissal from Google. The two shared concerns about ethics in AI, particularly around data collection and algorithmic bias. Noble’s work highlighted the dangers of unregulated technology and its potential to reinforce societal biases.

Safiya Noble

Rumman Chowdhury led Twitter’s ethics team, where she conducted experiments that revealed biases in the platform’s algorithms. She stressed the importance of transparency and accountability in AI development, advocating for greater scrutiny and fairness.
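One common shape for such an experiment is a paired audit: feed the algorithm inputs that differ only in the demographic group of the person pictured, and check whether its choices deviate from parity. The sketch below uses a hypothetical `saliency_crop` stand-in; it is not Twitter’s actual API or Chowdhury’s exact method.

```python
# Paired audit sketch; `saliency_crop` is a hypothetical stand-in for the
# model under audit, not Twitter's real cropping system.
import random

def saliency_crop(pair):
    # placeholder: a real audit would call the cropping model here to see
    # which of the two images it chooses to center
    return random.choice(pair)

# hypothetical image pairs matched on everything but demographic group
pairs = [(f"group_a_{i}.jpg", f"group_b_{i}.jpg") for i in range(1000)]

favored_a = sum(saliency_crop(pair) == pair[0] for pair in pairs)
rate = favored_a / len(pairs)
print(f"group A favored in {rate:.1%} of pairs (parity would be ~50%)")
# a consistent, statistically significant deviation from 50% is evidence
# of a disparate outcome worth investigating
```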

Rumman Chowdhury

Seeta Peña Gangadharan, a professor, expressed concerns about how AI could exacerbate the marginalization of vulnerable populations, particularly when AI systems were used in critical areas like housing, employment, and welfare. She emphasized the potential consequences of losing the capacity to refuse or resist technologies imposed on these communities.

Seeta Peña Gangadharan

Resistance to AI Regulation and the Role of AI Doomers

Despite the warnings from these researchers, efforts to regulate AI met resistance. Some of it came from researchers often referred to as “AI Doomers,” who drew attention to hypothetical existential threats posed by AI, diverting focus from the real-world consequences of biased systems that continued to affect people’s lives.

Timnit Gebru’s Response

Timnit Gebru responded to her dismissal from Google by founding the Distributed AI Research Institute (DAIR). DAIR was established to facilitate independent research free from the influence of major tech corporations. Gebru emphasized the importance of recruiting a diverse team, including experts and advocates for the marginalized communities affected by AI.

These researchers share a common message: AI is not a magical solution, LLMs are not sentient beings, and the problems associated with AI are not abstract but have tangible and immediate impacts on people’s lives. Their work highlights the need for greater accountability, transparency, and ethics in AI development to mitigate biases and ensure fair and equitable outcomes for all.

Originally published in Rolling Stone.

By Elijah Christopher 
