As AI has grown from a menagerie of research projects into a handful of titanic, powerful models like GPT-3, the industry needs to evolve – or so thinks Dario Amodei, former vice president of research at OpenAI, who struck out on his own to create a new company a few months ago. Anthropic, as it is called, was founded with his sister Daniela, and its goal is to create “large-scale AI systems that are steerable, interpretable and robust.”
The challenge the Amodei siblings are tackling is simply that these AI models, while incredibly powerful, are not well understood. GPT-3, which they worked on, is an astonishingly versatile language system that can produce extremely convincing text in virtually any style and on any subject.
But say you had it generate rhyming couplets with Shakespeare and Pope as examples. How does it do it? What is it “thinking,” so to speak? Which knob would you adjust, which dial would you turn, to make it more melancholy, less romantic, or to limit its diction and lexicon in specific ways? Certainly there are parameters to tweak here and there, but really no one knows exactly how this extremely convincing language sausage is being made.
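The few knobs that do exist on such models are mostly coarse sampling settings rather than semantic dials. A minimal sketch of one of them, temperature scaling at sampling time – a generic technique, not a description of OpenAI's or Anthropic's internals, with a toy three-token vocabulary for illustration:

```python
import math
import random

def sample_token(logits, temperature=1.0, seed=None):
    """Sample one token index from raw logits after temperature scaling.

    Lower temperature sharpens the distribution (more predictable output);
    higher temperature flattens it (more surprising output). This is about
    as fine-grained as today's generation "knobs" get.
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    r = random.Random(seed).random()             # draw from the distribution
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1

# Illustrative logits over a hypothetical 3-token vocabulary
logits = [2.0, 1.0, 0.1]
token = sample_token(logits, temperature=0.7, seed=0)
```

Note that nothing in this knob maps onto a concept like “melancholy” or “romantic” – it only trades predictability against variety, which is exactly the interpretability gap the article describes.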
It’s one thing not to know how an AI model generates poetry; it’s another when that model is watching a department store for suspicious behavior, or fetching legal precedents for a judge about to pass sentence. The general rule today is: the more powerful the system, the harder it is to explain its actions. That’s not exactly a good trend.
“Today’s large, general systems can have significant benefits, but can also be unpredictable, unreliable, and opaque: our goal is to make progress on these issues,” reads the company’s self-description. “For now, we’re primarily focused on research towards these goals; down the road, we foresee many opportunities for our work to create value commercially and for public benefit.”
The aim seems to be to integrate safety principles into the existing priority system of AI development, which generally favors efficiency and power. As in any other industry, it’s easier and more effective to build something in from the start than to bolt it on at the end. Trying to retrofit some of today’s largest models so they can be picked apart and understood may be more work than building them was in the first place. Anthropic seems to be looking to start fresh.
“Anthropic’s goal is to make the fundamental research advances that will let us build more capable, general, and reliable AI systems, then deploy these systems in a way that benefits people,” said Dario Amodei, CEO of the new company, in a brief post announcing it and its $124 million in funding.
That funding, by the way, is as star-studded as you might expect. It was led by Skype co-founder Jaan Tallinn, and included James McClave, Dustin Moskovitz, Eric Schmidt, and the Center for Emerging Risk Research, among others.
The company is a public benefit corporation, and the plan for now, as the limited information on its site suggests, is to remain heads-down on researching these fundamental questions of how to make large models more understandable and interpretable. We can expect more information later this year, perhaps as the mission and team coalesce and the first results come in.
The name, by the way, seems to derive from the “anthropogenic principle,” the notion that intelligent life is possible in the universe because … well, here we are. Perhaps the idea is that intelligence is inevitable under the right conditions, and the company wants to create those conditions.