Anthropic is the new AI research outfit from OpenAI’s Dario Amodei, and it has $124M to burn

As AI has grown from a menagerie of research projects to include a handful of titanic, industry-powering models like GPT-3, there is a need for the sector to evolve — or so thinks Dario Amodei, former VP of research at OpenAI, who struck out on his own to create a new company a few months ago. Anthropic, as it's called, was founded by Amodei and his sister Daniela, and its goal is to create "large-scale AI systems that are steerable, interpretable, and robust."

The challenge the siblings Amodei are tackling is simply that these AI models, while incredibly powerful, are not well understood. GPT-3, which they worked on, is an astonishingly versatile language system that can produce extremely convincing text in practically any style, and on any topic.

But say you had it generate rhyming couplets with Shakespeare and Pope as examples. How does it do it? What is it “thinking”? Which knob would you tweak, which dial would you turn, to make it more melancholy, less romantic, or limit its diction and lexicon in specific ways? Certainly there are parameters to change here and there, but really no one knows exactly how this extremely convincing language sausage is being made.
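For a concrete sense of what those knobs actually look like, here's a minimal sketch using the open-source Hugging Face transformers library and GPT-2 (a small, public cousin of GPT-3, chosen purely for illustration; this is not anything Anthropic or OpenAI has published). The dials a developer can really turn are coarse sampling statistics like temperature and top-p, and none of them maps onto "more melancholy" or "less romantic":

```python
# Illustrative sketch only: GPT-2 via Hugging Face transformers, standing in
# for larger models like GPT-3. The exposed controls are statistical, not semantic.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

output = generator(
    "Shall I compare thee to a summer's day?",
    max_new_tokens=40,
    do_sample=True,
    temperature=0.7,  # lower values = more conservative, predictable word choices
    top_p=0.9,        # sample only from the top 90% of the probability mass
)
print(output[0]["generated_text"])
```

Nothing in that parameter list explains why the model chose one image or rhyme over another, which is precisely the opacity the Amodeis are pointing at.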

It’s one thing to not know how an AI model is generating poetry, quite another when the model is watching a department store for suspicious behavior, or fetching legal precedents for a judge about to pass down a sentence. Today the general rule is: the more powerful the system, the harder it is to explain its actions. That’s not exactly a good trend.

“Large, general systems of today can have significant benefits, but can also be unpredictable, unreliable, and opaque: our goal is to make progress on these issues,” reads the company’s self-description. “For now, we’re primarily focused on research towards these goals; down the road, we foresee many opportunities for our work to create value commercially and for public benefit.”

Excited to announce what we’ve been working on this year – @AnthropicAI, an AI safety and research company. If you’d like to help us combine safety research with scaling ML models while thinking about societal impacts, check out our careers page https://t.co/TVHA0t7VLc

— Daniela Amodei (@DanielaAmodei) May 28, 2021

The goal seems to be to integrate safety principles into the existing priority system of AI development, which generally favors efficiency and power. As in any other industry, it’s easier and more effective to incorporate something from the beginning than to bolt it on at the end. Retrofitting some of the biggest models out there so they can be picked apart and understood may be more work than building them in the first place, so Anthropic seems to be starting fresh.

“Anthropic’s goal is to make the fundamental research advances that will let us build more capable, general, and reliable AI systems, then deploy these systems in a way that benefits people,” said Dario Amodei, CEO of the new venture, in a short post announcing the company and its $124 million in funding.

That funding, by the way, is as star-studded as you might expect. It was led by Skype co-founder Jaan Tallinn, and included James McClave, Dustin Moskovitz, Eric Schmidt and the Center for Emerging Risk Research, among others.

The company is a public benefit corporation, and the plan for now, as the limited information on the site suggests, is to remain heads-down on researching these fundamental questions of how to make large models more tractable and interpretable. We can expect more information later this year, perhaps, as the mission and team coalesce and initial results pan out.

The name, incidentally, is adjacent to “anthropocentric,” and concerns relevance to human experience or existence. Perhaps it derives from the “anthropic principle,” the notion that the universe must be compatible with intelligent life because… well, we’re here to observe it. If intelligence is inevitable under the right conditions, the company just has to create those conditions.