OpenAI’s new tool may help you identify text written by ChatGPT
OpenAI has released a tool to help you determine whether text was more likely written by a human or AI. However, the ChatGPT maker warns that its equivalent of Blade Runner’s Voight-Kampff test can also get it wrong.
The tool includes a box where you can paste text that’s at least 1,000 characters long. It will then spit out a verdict, like “The classifier considers the text to be very unlikely AI-generated” or “The classifier considers the text to be possibly AI-generated.”
I tested it by prompting ChatGPT to write an essay about the migratory patterns of birds, which the detection tool then described as “possibly AI-generated.” Meanwhile, it rated several human-written articles as “very unlikely AI-generated.” So although the tool could raise false flags in either direction, my (tiny sample size) test suggests at least a degree of accuracy. Still, OpenAI cautions not to use the tool alone to determine content’s authenticity; it also works best with text of 1,000 words or longer.
The startup has faced pressure from educators after the November release of its ChatGPT tool, which produces AI-written content that can sometimes pass for human writing. The natural-language model can create essays in seconds based on simple text prompts — even passing a graduate business and law exam — while providing students with a tempting new cheating opportunity. As a result, New York public schools banned the bot from their WiFi networks and school devices.
While ChatGPT’s arrival has been a buzzed-about topic of late, even extending into media outlets eager to automate SEO-friendly articles, the bot is big business for OpenAI. The company reportedly secured a $10 billion investment earlier this month from Microsoft, which plans to integrate it into Bing and Office 365. OpenAI allegedly discussed selling shares at a $29 billion valuation late last year, which would make it one of the most valuable US startups.
Although ChatGPT is currently the best publicly available natural-language AI model, Google, Baidu and others are working on competitors. Google's LaMDA is convincing enough that one engineer lost his job at the search giant last year after claiming the chatbot was sentient. (The human tendency to project feelings and consciousness onto algorithms is a concept we'll likely hear much more about in the coming years.) Google has so far released only tightly constrained beta versions of its chatbot, presumably out of ethical concerns. With the genie out of the bottle, it will be interesting to see how long that restraint lasts.