The paper that led to Timnit Gebru’s ouster from Google reportedly questioned the risks of large language models


A paper co-authored by former Google AI ethicist Timnit Gebru raised some potentially thorny questions for Google about whether AI language models may be too big, and whether tech companies are doing enough to reduce potential risks, according to MIT Technology Review. The paper also questioned the environmental costs and inherent biases in large language models.
Google’s AI team created such a language model, BERT, in 2018, and it was so successful that the company incorporated BERT into its search engine. Search is a highly lucrative segment of Google’s business; in the third quarter of 2020 alone, it brought in revenue of $26.3 billion. “This year, including this quarter, showed how valuable Google’s founding product — search — has been to people,” CEO Sundar Pichai said on a call with investors in October.
Gebru and her team submitted their paper, titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?,” to a research conference. She said in a series of tweets on Wednesday that following an internal review, she was asked to retract the paper or remove Google employees’ names from it. She said she asked Google for the conditions under which she would take her name off the paper, and that if the company couldn’t meet them, they could “work on a last date.” Gebru said she then received an email from Google informing her it was “accepting her resignation effective immediately.”
The head of Google AI, Jeff Dean, wrote in an email to employees that the paper “didn’t meet our bar for publication.” He wrote that one of Gebru’s conditions for continuing to work at Google was that the company tell her who had reviewed the paper and what their specific feedback was, which it declined to do. “Timnit wrote that if we didn’t meet these demands, she would leave Google and work on an end date. We accept and respect her decision to resign from Google,” Dean wrote.
In his email, Dean wrote that the paper “ignored too much relevant research,” a claim that the paper’s co-author Emily M. Bender, a professor of computational linguistics at the University of Washington, disputed. Bender told MIT Technology Review that the paper, which had six collaborators, was “the sort of work that no individual or even pair of authors can pull off,” noting it had a citation list of 128 references.
Gebru is known for her work on algorithmic bias, especially in facial recognition technology. In 2018, she co-authored a paper with Joy Buolamwini showing that error rates for identifying darker-skinned people were much higher than those for identifying lighter-skinned people, because the datasets used to train the algorithms were composed overwhelmingly of images of lighter-skinned faces.
Gebru told Wired in an interview published Thursday that she felt she was being censored. “You’re not going to have papers that make the company happy all the time and don’t point out problems,” she said. “That’s antithetical to what it means to be that kind of researcher.”
Since news of her termination became public, thousands of supporters, including more than 1,500 Google employees, have signed a letter of protest. “We, the undersigned, stand in solidarity with Dr. Timnit Gebru, who was terminated from her position as Staff Research Scientist and Co-Lead of Ethical Artificial Intelligence (AI) team at Google, following unprecedented research censorship,” reads the petition, titled Standing with Dr. Timnit Gebru.
“We call on Google Research to strengthen its commitment to research integrity and to unequivocally commit to supporting research that honors the commitments made in Google’s AI Principles.”
The petitioners are demanding that Dean and others “who were involved with the decision to censor Dr. Gebru’s paper meet with the Ethical AI team to explain the process by which the paper was unilaterally rejected by leadership.”
Google did not immediately respond to a request for comment on Saturday.