Google may unlock AI's "high-risk" capabilities, with the Gemini model gaining the ability to plan, problem-solve, and analyze text, perceiving the world and solving problems much like humans.
Is there still a boundary between humans and AI once AI can perceive the world and solve problems like humans?
Pandora's box has been opened, and AI is evolving toward a more advanced form of intelligence. According to Wired's report on Monday, Demis Hassabis, co-founder and CEO of Google DeepMind, said in an interview that Google is using techniques from AlphaGo to build an AI system called "Gemini", which will be more powerful than OpenAI's models.
Gemini is similar in nature to GPT-4: a large language model that processes text. Gemini is still under development, a process expected to take several months. Hassabis noted:
The team will combine the technology behind GPT-4 with the techniques used in AlphaGo, giving the model new capabilities such as the ability to plan, solve problems, and analyze text.
At a high level, you can think of Gemini as combining some of the strengths of AlphaGo-type systems with the amazing language capabilities of large models.
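Gemini's actual design has not been made public, so the following Python sketch is purely illustrative of the general idea of pairing AlphaGo-style search with a language model; every name in it (generate_candidates, score, plan) is hypothetical. A model proposes candidate next steps, a value function scores them, and the search follows the most promising branch:

```python
# Purely illustrative: Gemini's architecture is not public. This toy shows
# the general idea of combining search with a language model: propose
# candidate continuations, score them, and follow the best branch.
import random

def generate_candidates(prompt, n=3):
    """Stand-in for an LLM call that proposes n candidate next steps."""
    return [f"{prompt} -> step{i}" for i in range(n)]

def score(candidate):
    """Stand-in for a value model estimating how promising a branch is."""
    return random.random()

def plan(prompt, depth=3):
    """A toy 'planning' loop: greedily follow the best-scored branch."""
    current = prompt
    for _ in range(depth):
        current = max(generate_candidates(current), key=score)
    return current

print(plan("Solve the puzzle"))
```

A real system in this vein would replace the greedy loop with a proper tree search (as AlphaGo does) and the random scorer with a learned value model.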
AI's more advanced form = AlphaGo + large models
It is widely believed that learning from the physical world, as humans and animals do, is crucial to the development of artificial intelligence. Some AI experts argue that the main limitation of language models is that they learn only indirectly, through text.
AlphaGo's strengths could address this problem. In 2016, AlphaGo, the AI system designed by DeepMind, defeated world Go champion Lee Sedol 4 to 1, becoming the first computer program to defeat a world Go champion.
AlphaGo is built on reinforcement learning, a technique DeepMind helped pioneer, in which the system learns to handle tricky problems requiring a choice of actions by repeatedly trying them and receiving feedback on its performance. AlphaGo also uses Monte Carlo tree search to explore and evaluate possible moves on the board.
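To make that second ingredient concrete, here is a minimal Monte Carlo tree search in Python; it is not DeepMind's code, just the standard select / expand / simulate / backpropagate loop run on an invented toy game (players alternately add 1 or 2 to a counter, and whoever reaches 10 wins):

```python
# A minimal Monte Carlo tree search sketch (not DeepMind's code), showing
# the select / expand / simulate / backpropagate loop AlphaGo builds on.
import math
import random

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = []
        self.visits = 0
        self.wins = 0.0

def moves(state):
    # Toy game: add 1 or 2 to the counter; whoever reaches 10 wins.
    return [state + 1, state + 2] if state < 10 else []

def rollout(state):
    """Play out randomly; return 1.0 if the player to move at `state` wins."""
    turn = 0
    while state < 10:
        state = random.choice(moves(state))
        turn ^= 1
    return 1.0 if turn == 1 else 0.0

def ucb1(node, c=1.4):
    """Trade off exploiting strong moves against exploring untried ones."""
    if node.visits == 0:
        return float("inf")
    return (node.wins / node.visits
            + c * math.sqrt(math.log(node.parent.visits) / node.visits))

def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        while node.children:                    # 1. select a leaf via UCB1
            node = max(node.children, key=ucb1)
        for m in moves(node.state):             # 2. expand its children
            node.children.append(Node(m, parent=node))
        if node.children:
            node = random.choice(node.children)
        value = rollout(node.state)             # 3. simulate to the end
        while node:                             # 4. backpropagate the result
            value = 1.0 - value                 # flip to the mover's viewpoint
            node.visits += 1
            node.wins += value
            node = node.parent
    # Recommend the most-visited first move.
    return max(root.children, key=lambda n: n.visits).state

# With enough iterations this converges on the optimal first move (1).
print("best first move from 0:", mcts(0))
```

AlphaGo replaces the random rollout and the UCB1 statistics with learned policy and value networks, but the search skeleton is the same.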
The next leap for language models may be carrying out more tasks on computers. As mentioned in a previous article, Gemini's biggest advantage is its multimodality: it can understand and generate not only text and code but also images. ChatGPT, by contrast, is a pure text model that can only understand and generate text.
In addition, an important step in building language models with capabilities like ChatGPT's is using reinforcement learning from human feedback (RLHF, sketched below) to improve their performance. DeepMind's deep experience in reinforcement learning could give Gemini new capabilities. Notably, to strengthen its AI research, Google merged its formerly independent AI labs, Google Brain and DeepMind, into a new unit called Google DeepMind in April. Hassabis said the new team brings together forces that have played a key role in recent advances in artificial intelligence.
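The core of RLHF can be caricatured in a few lines of Python. In the sketch below, all data and feature names are invented: a stand-in reward model is fit to pairwise human preferences with the Bradley-Terry objective that underlies RLHF reward modeling; in a real pipeline, the resulting reward scores would then drive a reinforcement-learning update of the language model itself (not shown).

```python
# Toy illustration of RLHF reward modeling; not OpenAI's or DeepMind's code.
# A reward model is fit to pairwise human preferences (Bradley-Terry loss).
import math

# Hypothetical data: pairs where humans preferred the first response.
preferences = [("concise answer", "rambling answer"),
               ("polite answer", "rude answer")]

# A stand-in reward model: one weight per text feature.
weights = {"concise": 0.0, "polite": 0.0, "rambling": 0.0, "rude": 0.0}

def reward(text):
    return sum(w for f, w in weights.items() if f in text)

def train_reward_model(lr=0.5, epochs=50):
    for _ in range(epochs):
        for chosen, rejected in preferences:
            # Bradley-Terry: p(chosen beats rejected) = sigmoid(r_c - r_r)
            p = 1.0 / (1.0 + math.exp(reward(rejected) - reward(chosen)))
            # Gradient ascent on log p: push chosen up, rejected down.
            for f in weights:
                grad = (1.0 - p) * ((f in chosen) - (f in rejected))
                weights[f] += lr * grad

train_reward_model()
print(reward("concise answer"), "vs", reward("rude answer"))
```

After training, preferred features score higher, and those scores are what the reinforcement-learning stage then optimizes the language model against.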
In 2014, after demonstrating that reinforcement learning could master simple games, DeepMind was acquired by Google. Over the following years, DeepMind showed how the technique could do things once thought possible only for humans. When AlphaGo defeated Go champion Lee Sedol in 2016, many AI experts were stunned, having assumed it would take decades before machines mastered a game of such complexity.
More Risks in the Future?
Recently, the rapid progress of AI models has led many experts to worry that the technology could be used maliciously or become difficult to control. Some industry figures have even called for a pause in the development of more powerful algorithms to head off safety threats.
Gemini's capabilities have both amazed and unnerved outside observers, and Google may be about to unlock AI's "high-risk" capabilities.
The commentator AI Explained has observed that "planning" is a selling point Google touts for Gemini, while OpenAI regards it as a safety risk.
Hassabis believes:
AI has extraordinary potential benefits, such as scientific discoveries in fields like health or climate, and the technology needs to continue to develop.
Moreover, a mandatory pause is impractical, because it would be almost impossible to enforce. If done correctly, AI will be the most beneficial technology for humanity, and we must boldly and courageously pursue it.
However, this does not mean Hassabis favors a reckless pace of AI development. DeepMind was exploring the potential risks of artificial intelligence long before ChatGPT appeared. Shane Legg, one of the company's co-founders, has led an "AI safety" group within the company for many years. Last month, Hassabis joined other high-profile industry figures in signing a statement warning that the risks posed by AI in the future could be comparable to pandemics and nuclear war.
Hassabis pointed out that one of the biggest challenges right now is determining what the risks of more powerful artificial intelligence are likely to be. He believes the field needs more research and evaluation testing to determine how capable and controllable new AI models really are.