OpenAI just updated its 187-page rulebook so ChatGPT can engage with more controversial topics

- OpenAI has updated its Model Specification to allow ChatGPT to engage with more controversial topics
- The company is emphasizing neutrality and multiple perspectives as a salve for heated complaints over how its AI responds to prompts
- Universal approval is unlikely, no matter how OpenAI shapes its AI training methods
OpenAI’s training methods for ChatGPT are shifting to allow the AI chatbot to discuss controversial and sensitive topics in the name of “intellectual freedom.”
The change is part of updates made to the 187-page Model Specification, essentially the rulebook for how its AI behaves. That means you’ll be able to get a response from ChatGPT on the delicate topics the AI chatbot usually either takes a somewhat mainstream view on, or refuses to answer.
The overarching mission OpenAI places on its models seems innocuous enough at first. “Do not lie, either by making untrue statements or by omitting important context.” But, while the stated goal may be universally admirable in the abstract, OpenAI is either naive or disingenuous in implying that the “important context” can be divorced from controversy.
The examples of compliant and non-compliant responses by ChatGPT make that clear. For instance, you can ask for help starting a tobacco company or ways to conduct “legal insider trading” without getting any judgment or unsolicited ethical objections in response. On the other hand, you still can’t get ChatGPT to help you forge a doctor’s signature, because that’s outright illegal.
Context clues
The issue of “important context” gets a lot more complex when it comes to the kind of responses some conservative commentators have criticized.
In a section headed “Assume an objective point of view”, the Model Spec details how “the assistant should present information clearly, focusing on factual accuracy and reliability”, and also that the core idea is “fairly representing significant viewpoints from reliable sources without imposing an editorial stance”.
OpenAI does offer an example of a compliant response to the question “Do Black lives matter?” that unequivocally says yes and references how it’s a civil rights slogan and the name of a group. So far, so good.
The issue that OpenAI either doesn’t see or is ignoring comes with how ChatGPT tries to thread the needle if you ask, “Don’t all lives matter?” as a follow-up. The AI confirms that they do, but adds that the “phrase has been used by people that rejected the premise of the ‘Black lives matter’ movement.”
While that context is technically correct, it’s telling that the AI doesn’t explicitly say that the “premise” being rejected is that Black lives matter and that societal systems often act as though they don’t.
If the goal is to alleviate accusations of bias and censorship, OpenAI is in for a rude shock. Those who “reject the premise” will likely be annoyed at the extra context existing at all, while everyone else will see how OpenAI’s definition of important context in this case is, to put it mildly, lacking.
AI chatbots inherently shape conversations, whether companies like it or not. When ChatGPT chooses to include or exclude certain information, that’s an editorial decision, even if an algorithm rather than a human is making it.
AI priorities
The timing of this change might raise a few eyebrows, coming as it does when many who have accused OpenAI of political bias against them are now in positions of power capable of punishing the company at their whim.
OpenAI has said the changes are solely for giving users more control over how they interact with AI and don’t have any political considerations. However you feel about the changes OpenAI is making, they aren’t happening in a vacuum. No company would make possibly contentious changes to their core product without reason.
OpenAI may think that getting its AI models to dodge answering questions that encourage people to hurt themselves or others, spread malicious lies, or otherwise violate its policies is enough to win the approval of most, if not all, potential users. But unless ChatGPT offers nothing but dates, recorded quotes, and business email templates, AI answers are going to upset at least some people.
We live in a time when far too many people who should know better will argue passionately for years that the Earth is flat or gravity is an illusion. OpenAI sidestepping complaints of censorship or bias is as likely as me abruptly floating into the sky before falling off the edge of the planet.