Why should we trust OpenAI?

Hassan Zouhar
4 min read · Jun 5, 2024

Illustration generated using DALL-E. Yes, there is some poetic contradiction in using an OpenAI service while writing this article.

Anonymous AI workers demand the right to warn the rest of us.

As we move toward 2025, a group of AI workers is demanding stronger whistleblower protections, and most of them dare to speak up only anonymously.

Half in, half out

Thirteen AI workers, some of them well-known figures and key contributors to making the technology a reality, from companies including Google, OpenAI, and Anthropic, have released an open letter titled “A Right to Warn”.

Among the names, we find prominent figures like Geoffrey Hinton, the British-Canadian computer scientist and cognitive psychologist known as “the Godfather of AI” for his work on artificial neural networks. He retired from Google and now spends his time lecturing and warning about the dangers of unchecked AI companies.

The letter comes just a couple of weeks after revelations that OpenAI had threatened departing employees with the loss of their vested equity in order to silence them. Employees were forced to choose between signing an aggressive non-disparagement agreement and risking financial loss or lawsuits. OpenAI CEO Sam Altman admitted that the provision was “embarrassing” and claimed it has since been removed from exit policies, but it is unknown whether the provision remains in force for some former employees.

Jacob Hilton, who worked at OpenAI on various reinforcement learning topics, posted on X that for AI companies to “be held accountable for their own commitments […] the public must have confidence that employees will not be retaliated against.”

Written by Hassan Zouhar

I write about technology, psychology, and the human condition.