Google reportedly tightens grip on research into ‘sensitive topics’


Google is currently under fire for apparently pushing out a researcher whose work warned of bias in AI, and now a report from Reuters says others doing such work at the company have been asked to “strike a positive tone” and undergo additional reviews for research touching on “sensitive topics.”
Reuters, citing researchers at the company and internal documents, reports that Google has implemented new controls in the last year, including an extra round of inspection for papers on certain topics and seemingly an increase in executive interference at later stages of research.
That certainly appears to have been the case with Dr. Timnit Gebru, an AI researcher at Google whose resignation seems to have been forced under confusing circumstances, following friction between her and management over work that her team was doing. (I’ve asked Gebru and Google for comment on the story.)
Among the “sensitive” topics, according to an internal webpage seen by Reuters, are: “the oil industry, China, Iran, Israel, COVID-19, home security, insurance, location data, religion, self-driving vehicles, telecoms and systems that recommend or personalize web content.”
It’s clear that many of these issues are indeed sensitive, though advising researchers to take care when addressing them seems superfluous considering the existence of ethics boards, peer review, and other ordinary controls on research. One researcher who spoke to Reuters warned that this sort of top-down interference from Google could soon amount to “a serious problem of censorship.”
This is in addition to the fundamental issue of vital research being conducted under the auspices of a company whose interests may or may not be served by publishing it. Naturally, large private research institutions have existed for nearly as long as organized scientific endeavor, but companies like Facebook, Google, Apple and Microsoft exert enormous influence over fields like AI and have good reason to avoid criticism of lucrative technologies while shouting their usefulness from every rooftop.