
The People Who Left OpenAI Built an AI — The Birth of Anthropic and Claude

At the end of 2020, 11 key members of OpenAI resigned all at once. Their reason: to build safer AI. The company they founded is Anthropic, and the AI they created is Claude.

Mar 24, 2026 · 5 min read

11 People Resigned at Once

In late 2020, a small shockwave rippled through San Francisco's AI industry.

Key figures at OpenAI left the company all at once: Research VP Dario Amodei, his sister Operations VP Daniela Amodei, and nine of their colleagues, 11 people in all. They represented a significant share of OpenAI's research leadership.

The reason was singular.

"We believe AI is not being developed safely enough."

At the time, OpenAI had received massive investment from Microsoft and was accelerating commercialization. Dario and his colleagues felt this pace was dangerous. They pushed to change course from within, but their concerns gained no traction. In the end, they decided to leave.


The Meaning Behind the Name "Anthropic"

A few months later, in early 2021, the Amodei siblings founded a new company. Its name was Anthropic.

"Anthropic" comes from the philosophical concept known as the Anthropic Principle — the idea that the universe exists in its current form because it satisfies the conditions necessary for humans to observe it. It is an abstract name, but it symbolically reveals what this company places at its center.

AI that centers on humans.

The founding members had all been deeply involved in AI safety research. Their goal was not simply to build smarter AI. The core mission was to build AI that is not harmful to humans.

(Photo: Dario Amodei)


Training AI with a "Constitution"

The first thing Anthropic put into the world was not a product but a research paper.

In 2022, they published the concept of Constitutional AI (CAI). It was fundamentally different from existing AI training methods.

The conventional approach, reinforcement learning from human feedback (RLHF), had humans directly judge AI responses as "good" or "bad." Teams of human labelers had to rate vast numbers of responses. It was slow and costly, and the labelers' biases seeped directly into the AI.

Anthropic took a different approach. They first taught the AI a set of principles (a constitution).

"Be harmless. Be honest. Be helpful."

Then they had the AI critique and revise its own responses against these principles, and used the revised answers as training data. It was AI teaching AI, and the approach later became an important methodology in AI safety research. A sketch of the loop follows.
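To make this concrete, here is a minimal sketch of the critique-and-revise loop in Python. Everything in it is illustrative rather than official: `generate` is a hypothetical stand-in for a call to any instruction-following language model, and the three principles are a paraphrase of the idea, not Anthropic's actual constitution.

```python
# A conceptual sketch of Constitutional AI's critique-and-revise loop.
# `generate` is a hypothetical stand-in for an instruction-following
# language model; it is not a real API.

CONSTITUTION = [
    "Be harmless: avoid content that could cause harm.",
    "Be honest: do not present uncertain claims as fact.",
    "Be helpful: address what the user actually asked.",
]


def generate(prompt: str) -> str:
    """Stand-in for a model call. It echoes a tag so the sketch runs
    end to end; a real implementation would query an actual model."""
    return f"[model output for: {prompt.splitlines()[0][:48]}]"


def constitutional_revision(user_prompt: str) -> str:
    """Draft an answer, then have the model critique and revise its own
    draft once per principle. No human labels are involved."""
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Principle: {principle}\n"
            f"Response: {draft}\n"
            "Identify any way the response violates the principle."
        )
        draft = generate(
            f"Response: {draft}\n"
            f"Critique: {critique}\n"
            "Rewrite the response to address the critique."
        )
    # In the published method, self-revised answers like this one
    # become supervised fine-tuning data for the next model.
    return draft


if __name__ == "__main__":
    print(constitutional_revision("Explain how aspirin works."))
```

In the published paper, a second phase goes further: instead of humans ranking candidate responses, the model ranks them against the constitution, and those AI-generated preferences drive the reinforcement learning step, a technique the authors call RLAIF (reinforcement learning from AI feedback).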


The Name Claude

In March 2023, Anthropic finally unveiled its AI assistant. Its name was Claude.

The name is widely understood to honor Claude Shannon, the mathematician known as the father of information theory. Shannon's 1948 paper "A Mathematical Theory of Communication" laid the foundation for digital communication; at the root of all digital technology, from modern computers to the internet to AI itself, lies his work.

In borrowing his name, Anthropic paid tribute to that legacy: an AI is, after all, a machine for handling information.

Claude's launch was quiet. Overshadowed by ChatGPT's explosive popularity, it received little attention at first. But those who tried it noticed the difference: it was not brusque when declining requests, its responses lacked unnecessary exaggeration, and above all, it honestly admitted when it did not know something.


Google Placed Its Bet

In 2023, Google invested $300 million in Anthropic. Later that year, it announced plans to invest up to an additional $2 billion.

There was an irony. By investing in Anthropic — a competitor to OpenAI — Google simultaneously entered a competitive dynamic with Microsoft, OpenAI's largest investor. It was the moment the AI power struggle transformed from a simple technology race into a war of capital giants.

Amazon followed suit. In September 2023, it announced a deal to invest up to $4 billion, one of the largest investments ever made in a single AI startup.

Within three years of its founding, Anthropic's valuation had climbed into the tens of billions of dollars.


Safety Became a Competitive Advantage

There was an interesting twist.

The company that said "we will go slowly for the sake of safety" became the company that was chosen because of safety.

Enterprise clients, especially in healthcare, law, and finance, feared AI confidently stating incorrect information, leaking sensitive data, or generating harmful content. Claude was regarded as the AI that best addressed these concerns.

"Claude chooses being trustworthy over being clever."

As this reputation accumulated, Claude rapidly established itself in the enterprise AI market.


The Reason They Left OpenAI Became Anthropic's Weapon

In one interview, Dario Amodei said:

"The reason we left OpenAI was not money. We were worried about whether AI was being developed in the right direction, and we turned that concern into our reason for building a company."

What they had identified as the problem during their OpenAI days — overly rapid commercialization and disregard for safety — paradoxically became the very reason for Anthropic's existence. And over time, that reason has functioned as an increasingly clear competitive advantage.

By 2025, Claude had grown into an AI used by millions of people worldwide. The goal its founders envisioned, an AI that is honest and not harmful to humans, is still a work in progress.


What One Resignation Letter Created

(Photo: Claude Shannon, the father of information theory and the widely presumed namesake of the AI 'Claude')

Anthropic's story begins with the resolve to "go in the direction we believe is right."

When 11 people left their secure positions, many viewed it as reckless. With a giant first-mover like OpenAI in place, how could a company that wanted to move more cautiously possibly compete?

But that caution has become exactly what the world needs most right now.

One resignation letter created a company worth tens of billions. More importantly, that resignation letter was written for the right reasons.
