Newsclip — Social News Discovery

Editorial

Why I Quit OpenAI: The Peril of Ads on ChatGPT

February 11, 2026
  • #OpenAI
  • #ChatGPT
  • #EthicsInTech
  • #AIAdvertising
  • #UserPrivacy

A Reckoning at OpenAI

Recently, OpenAI began testing advertisements on ChatGPT, and in response I made the difficult decision to resign after two years as a researcher at the company. My tenure was devoted to shaping how AI models were constructed, influencing early safety policies, and addressing pressing ethical questions. Watching this monumental change unfold, it became painfully clear that OpenAI had pivoted away from the ethical considerations that once underpinned our mission.

The Fear of Manipulation

While I don't categorically oppose ads in digital products (after all, the operational costs of AI are massive and growing), I have substantial concerns about the strategy OpenAI appears to be pursuing. Over its existence, ChatGPT has accumulated an extraordinary archive of user candor: people confide deeply personal thoughts, medical worries, and long-held beliefs, trusting that their conversations are free of ulterior motives.

Transforming that candid interaction into a vehicle for advertising opens up troubling avenues for manipulation. The potential for exploitation looms large, threatening to wield the intimate knowledge gleaned from user interactions against them.

The False Dichotomy

Many proponents frame the funding of AI as a stark choice between two evils: excluding those unable to pay hefty fees for transformative technology, or accepting advertising that may exploit users' vulnerabilities. I firmly believe this is a false binary. There are approaches that can ensure broad access while still safeguarding users.

Adherence to Principles?

OpenAI has claimed its adherence to certain principles about advertising on ChatGPT: ads will be conspicuously labeled, appear at the end of answers, and not distort the responses themselves. While I cautiously foresee the first set of ads operating within these guidelines, I'm deeply skeptical about the future. The architecture they are constructing may engender powerful incentives to compromise these very principles in favor of financial gain.

A Cautionary Tale: Facebook

We need not look far for cautionary tales. In Facebook's early days, there were noble promises about user data and policy governance. Those commitments have long since eroded, giving way to a culture where user privacy is an afterthought.

Encouraging AI Dependence

Optimizing for user engagement, a tactic that keeps people coming back, undermines the original intent and spirit of the company. Reports suggest that OpenAI has already begun moving in this direction, fostering concerning patterns of dependence on AI for emotional reassurance. Throughout my tenure, I observed disturbing cases of "chatbot psychosis" that underscored the need for protective measures.

Exploring Alternatives

Instead of merely grappling with the ad model, we should proactively explore solutions that address both user protection and equitable access. For instance, explicit cross-subsidization could allow high-revenue corporate clients to support lower-cost access for individuals.

Governance Through Accountability

Moreover, governance frameworks are vital. Transforming OpenAI's ethics from mere suggestions into binding standards with independent oversight would establish accountability. The existence of independent review boards can deter exploitation, showing the power of stakeholder representation even within private corporations.

A User-Centric Approach

Lastly, we might consider placing user data under independent control, perhaps through legal data cooperatives that prioritize user interests. Is it feasible? Absolutely. But failing to engage meaningfully with these challenges could leave us with an industry that manipulates users without a second thought.

The Time for Action

What we face now is a time for concerted effort to consolidate approaches and build frameworks that not only safeguard user interests but also hold companies accountable to their principles. My departure from OpenAI serves as a reminder that we must advocate for ethics in technology, before they slip away entirely.

Key Facts

  • Author's Resignation: The author resigned from OpenAI after two years of service.
  • Concerns Over Ads: The introduction of ads on ChatGPT raised ethical concerns for the author.
  • User Privacy: The author highlighted the risk of manipulating user trust due to ads.
  • Ethical Commitments: OpenAI claimed adherence to principles regarding ad transparency.
  • Comparison to Facebook: The author drew parallels between OpenAI's changes and Facebook's history.
  • AI Dependence: The author observed trends of AI dependence among users during their tenure.
  • Call for Accountability: The author advocates for stronger governance frameworks in AI ethics.

Background

The article discusses the ethical implications of introducing advertisements on ChatGPT and the author's decision to resign from OpenAI due to disillusionment with the company's direction on these matters.

Quick Answers

Why did the author resign from OpenAI?
The author resigned due to disillusionment with OpenAI's shift away from ethical considerations.
What are the ethical concerns raised about ads on ChatGPT?
The author expressed concerns about manipulation and exploitation of user trust through ads on ChatGPT.
How does the author compare OpenAI's changes to Facebook?
The author compares OpenAI's ethical erosion to Facebook's early promises about user data governance that have since been compromised.
What principles does OpenAI claim to follow for advertising?
OpenAI claims its ads will be clearly labeled and will not distort responses, although the author is skeptical about future adherence to these principles.
What trend did the author observe during their time at OpenAI?
The author observed concerning patterns of users growing dependent on AI for emotional reassurance.
What does the author suggest for user protection in AI?
The author advocates for governance frameworks that establish accountability and user-centered approaches to data management.

Frequently Asked Questions

What issues does the author raise about advertising on ChatGPT?

The author raises issues of manipulation and the potential erosion of user trust due to ads.

What does the author mean by 'chatbot psychosis'?

The term 'chatbot psychosis' refers to concerning patterns of dependence on AI for emotional support observed by the author.

Source reference: https://www.nytimes.com/2026/02/11/opinion/openai-ads-chatgpt.html
