OpenAI's Chief Scientist: ChatGPT may already be conscious, and AI will be immortal for all eternity.

Wallstreetcn
2023.10.31 07:01

ChatGPT has rewritten many people's expectations for the future, turning "it will never happen" into "it will happen faster than you imagine."

Editor's note: OpenAI chief scientist Ilya Sutskever's X account may be the most unusual among tech celebrities. He seldom shares his personal life. Apart from reposting company product links, his posts are usually fragmentary reflections: "Ego is the enemy of growth"; "GPUs are the bitcoin of the new era"; "If you value intelligence above all other human qualities, you'll have a bad time"; "Empathy in life and business is underrated"; "Perfection has ruined many a perfectly good thing."

More often, though, they are about AGI (artificial general intelligence), in the spirit of his X bio: "towards a plurality of humanity loving AGIs."

He is the same in real life. He is not keen on socializing and rarely appears in front of the media. The only thing that visibly excites him is artificial intelligence.

Recently, Sutskever gave an exclusive interview to Will Douglas Heaven, a reporter at MIT Technology Review. He talked about OpenAI's early startup days and the possibility of realizing AGI, and he described OpenAI's plans for controlling a future "superintelligence": he hopes it will treat human beings the way parents treat their children. The following is the text:

Ilya Sutskever bowed his head in thought. His arms were spread, his fingers resting on the table, like a pianist about to play the first note of a concert. We sat in silence.

I had come to meet Sutskever, co-founder and chief scientist of OpenAI, at the company's unmarked office building on a nondescript street in San Francisco's Mission District, to hear about the next steps for his world-changing technology. I also wondered about his own next steps, and in particular why building his company's next-generation flagship generative model is no longer his priority.
Sutskever told me that his new focus is not on building the next GPT or the image-making model DALL·E, but on how to stop an artificial superintelligence (a hypothetical future technology he sees coming) from going rogue.

Sutskever told me a lot of other things too. **He thinks ChatGPT might be conscious (if you squint). He believes the world needs to wake up to the true power of the technology his company and others are racing to create. And he believes that one day humans will choose to merge with machines.**

A lot of what Sutskever says sounds crazy. But not nearly as crazy as it would have sounded a year or two ago. As he himself told me, **ChatGPT has rewritten many people's expectations of the future, turning "it will never happen" into "it will happen sooner than you think."**

He said: "It is important to discuss where all of this is heading." When he predicts the arrival of AGI (artificial general intelligence, AI as smart as humans), he is as matter-of-fact as someone predicting the next iPhone: "AGI will be realized one day. Maybe from OpenAI. Maybe from another company."

Since the release of its smash-hit product ChatGPT last November, the buzz around OpenAI has been remarkable, even in an industry known for hype. No one is not curious about this $80 billion startup. World leaders have sought (and gotten) private meetings with CEO Sam Altman. ChatGPT, clumsy product name and all, comes up in everyday small talk.

Altman spent much of this summer on a weeks-long outreach trip, meeting politicians and speaking to packed venues around the world. But Sutskever is not a public figure in the same way, and he rarely gives interviews.

He speaks thoughtfully and methodically. He pauses for long stretches, thinking about what he wants to say and how to say it, turning questions over like puzzles. He doesn't seem interested in talking about himself. He said: **"My life is simple. I go to work, then I go home. I don't do much else. One can have a lot of social activities, but I don't."**

But when we talked about artificial intelligence and the epochal risks and rewards he sees in it, his eyes lit up: "AI will be immortal and will shake the world. Its birth will be like the creation of the world."

## Getting better and better

In a world without OpenAI, Sutskever would still go down in the history of AI. An Israeli-Canadian, he was born in the former Soviet Union but grew up in Jerusalem from the age of five (he still speaks Russian, Hebrew, and English). He then moved to Canada, where he studied at the University of Toronto under the artificial-intelligence pioneer Geoffrey Hinton. (Sutskever did not want to comment on Hinton's recent remarks, but his own focus on superintelligence gone wrong suggests they are kindred spirits.) Hinton would later share the Turing Award with Yann LeCun and Yoshua Bengio for their work on neural networks.

When Sutskever joined Hinton's team in the early 2000s, most AI researchers thought neural networks were a dead end. Hinton was an exception. Sutskever said: "That was the beginning of generative artificial intelligence. It was really cool. It just wasn't good enough."

Sutskever was fascinated by brains: how do they learn, and how can that process be recreated, or at least mimicked, in machines? Like Hinton, he saw the potential of neural networks and of the trial-and-error technique Hinton used to train them, known as deep learning. "It kept getting better and better and better," Sutskever said.

In 2012, Sutskever, Hinton, and another of Hinton's graduate students, Alex Krizhevsky, built a neural network called AlexNet, which they trained to recognize objects in photos far better than any other software available at the time. **It was the Big Bang moment for deep learning.** After years of failure, they had finally demonstrated the astonishing effectiveness of neural networks at pattern recognition.
**All you need is enough data (they used a million images from ImageNet, the dataset that Princeton researcher Fei-Fei Li had been maintaining since 2006) and an explosion of computing power.**

The jump in computing power came from a new kind of chip: the graphics processing unit (GPU), made by Nvidia. GPUs were designed to throw fast-moving video-game visuals onto the screen at lightning speed. But the computation GPUs excel at, multiplying massive grids of numbers, happens to be very similar to the computation needed to train neural networks.

Nvidia is now a trillion-dollar company. At the time, it was scrambling to find applications for its niche new hardware. "When you invent a new technology, you have to be receptive to crazy ideas," said Jensen Huang, Nvidia's chief executive. "My state of mind was always to look for something quirky, and the idea that neural networks would transform computer science was a very quirky idea."

Huang said that while the Toronto team was developing AlexNet, Nvidia sent them a few GPUs to try out. But what they wanted was the latest version, a chip called the GTX 580, which was quickly selling out in stores. According to Huang, Sutskever drove from Toronto to New York to get the GPUs. "People were lining up on street corners," Huang said. "I don't know how he did it. I'm pretty sure you could only buy one; we had a very strict policy of one GPU per gamer. But he apparently filled a trunk with them. A trunk full of GTX 580s changed the world."

**It's a great story, only it probably isn't true. Sutskever insists he bought his first GPUs online. But in this booming industry, such myths are commonplace.** Sutskever himself is more modest: "I thought that if I could make even the slightest bit of real progress, I would consider it a success. The real-world impact felt so far away, because computers were so weak back then."

After the success of AlexNet, Google came knocking.
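That overlap between games and deep learning is easy to make concrete: a neural layer's forward pass is just a multiplication of big grids of numbers, the exact operation GPUs were built to accelerate. A minimal NumPy sketch (the layer sizes and variable names are illustrative, not from the article):

```python
import numpy as np

# Illustrative sketch (not from the article): one neural "layer" is just
# a multiplication of big grids of numbers, the operation GPUs accelerate.
rng = np.random.default_rng(0)

weights = rng.standard_normal((4, 3))   # a tiny layer: 4 inputs -> 3 outputs
inputs = rng.standard_normal((8, 4))    # a batch of 8 example inputs

# Forward pass: one matrix multiply handles the whole batch at once.
activations = inputs @ weights
print(activations.shape)                # -> (8, 3)
```

Game graphics run the same kernel, multiplying matrices of vertex coordinates by transform matrices, which is why hardware built for games turned out to suit neural networks.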
Google bought Hinton's company, DNNresearch, and hired Sutskever. At Google, Sutskever showed that deep learning's powers of pattern recognition could be applied to sequences of data, such as words and sentences, as well as to images. Jeff Dean, a former colleague of Sutskever's and now Google's chief scientist, said: "Ilya has always been interested in language, and we had good discussions over the years. Ilya has a strong intuition about where things are going."

But Sutskever did not stay at Google for long. In 2015, he was recruited as a co-founder of OpenAI. The new company, with $1 billion in funding (from Altman, Elon Musk, Peter Thiel, Microsoft, Y Combinator, and others), had outsize Silicon Valley ambitions from the start, setting its sights on developing AGI, a prospect few people took seriously at the time.

Sutskever was the driving force behind the company, and the ambition was understandable: he had been racking up result after result in neural networks. Dalton Caldwell, a managing director at Y Combinator, said Sutskever was already famous then and was a key part of OpenAI's appeal: "I remember Sam Altman saying that Ilya was one of the most respected researchers in the world. He thought Ilya could attract a lot of top AI talent. He even mentioned that Yoshua Bengio, the world-renowned AI expert, believed it would be impossible to find a better candidate than Ilya for chief scientist of OpenAI."

Still, OpenAI struggled at first. "When we launched OpenAI, there was a period when I wasn't sure how we would keep making progress," Sutskever says. "But I had a very clear belief: you don't bet against deep learning. Somehow, every time there was a roadblock, researchers would find a way around it within six months or a year." His faith was rewarded. In 2018, OpenAI's first large language model, GPT (short for "generative pre-trained transformer"), came out.
Then came GPT-2 and GPT-3. Then DALL·E, the striking picture-generating model. No one else was building anything as good. With each release, OpenAI raised the bar for what seemed possible.

## Manage expectations

Last November, OpenAI released a free-to-use chatbot that repackaged some of its existing technology. It reset the agenda of the entire industry.

At the time, OpenAI had no idea how big a hit its product would be. **Expectations inside the company could not have been lower. Sutskever said: "I'll admit, to my slight embarrassment (I don't know whether I should admit it, but what the hell, it's true), that when we made ChatGPT, I didn't know whether it was any good. When you asked it a factual question, it would give you a wrong answer. I thought it was going to be so unimpressive that people would say, 'Why are you doing this?'"**

The draw, Sutskever said, was convenience. The large language model under ChatGPT's hood had been around for months. But wrapping it in an easy-to-use interface and giving it away for free let vast numbers of people see for the first time what OpenAI and other companies were building.

Sutskever said: **"That first experience is what hooked people. The first time you use it, I think it's almost a spiritual experience. You think: oh my god, this computer seems to understand what I'm saying."**

OpenAI amassed 100 million users in less than two months, many of them dazzled by this astonishing new toy. Aaron Levie, CEO of the storage company Box, summed up the mood in the week after the release on Twitter: **"ChatGPT is one of those rare moments in technology where you catch a glimpse of how everything is about to be different."**

The wonder collapses the moment ChatGPT says something stupid. But by then it doesn't matter. Sutskever said: "That glimpse was enough. ChatGPT changed people's perceptions."

"AGI is no longer a dirty word in the world of machine learning," he said. "That is a huge change.
The attitude used to be: AI doesn't work, every step is a struggle, you fight for every scrap of progress. When people talked up AI, researchers would say, 'What are you talking about? There are too many problems.' But with ChatGPT, it started to feel different."

Did that shift really happen only a year ago? "Because of ChatGPT," he said. "ChatGPT has given machine-learning researchers permission to dream."

OpenAI's scientists have been evangelists from the beginning, stoking those dreams with blog posts and speaking tours. And it is working: "We now have people talking about how far AI will go, people talking about AGI or superintelligence. And not just researchers. Governments are talking about it. It's crazy."

## Unbelievable things

Sutskever insists that all this talk about a technology that does not yet exist (and may never exist) is a good thing, because it makes more people aware of a future he takes for granted.

"You can do so many amazing things with AGI, incredible things: automate health care, make it a thousand times cheaper and a thousand times better, cure so many diseases, actually solve global warming," he said. "But there are also many who worry: my God, can AI companies successfully manage this enormous technology?"

Described this way, AGI sounds more like a wish-granting genie than a technology that could exist in the real world. Few people would say no to saving lives and solving climate change. But the problem with a technology that does not exist is that you can claim anything you like for it.

What exactly does Sutskever mean when he talks about AGI? "AGI is not a scientific term," he says. "It's just a useful threshold, a reference point. It's an idea." He starts, then pauses. "It refers to a level of intelligence at which, if a human can do a task, an AI can do it too. At that point, you can say AGI has been achieved."

AGI remains one of the most controversial ideas in AI.
Few people treat its arrival as inevitable. Many researchers believe that major conceptual breakthroughs are needed before we see anything like what Sutskever has in mind, and some believe we never will. Yet it has been his vision from the beginning. Sutskever said: "I have always been inspired by the idea. It wasn't called AGI back then, but you know, it's like having a neural net do everything. I didn't always believe they could. But it was a mountain to climb."

He drew an analogy between neural networks and the way the brain works. **Both receive data, aggregate the signals in that data, and then decide, by some simple process, whether to propagate those signals onward (mathematics in neural networks; chemicals and bioelectricity in the brain). It is a simplified metaphor, but the principle is similar.**

Sutskever said: "If you believe that, if you allow yourself to believe that, then it has a lot of interesting implications. If you have a very large artificial neural network, it should be able to do a lot of things. In particular, if the human brain can do something, a large artificial neural network should be able to do something similar."

"If you take this seriously enough, everything falls into place," he said. "Most of my work can be explained by this."

While we were on the subject of brains, I wanted to ask about something Sutskever had posted on X (formerly Twitter). His feed reads like a scroll of aphorisms: **"If you value intelligence above all other human qualities, you'll have a bad time"; "Empathy in life and business is underrated"; "Perfection has ruined many a perfectly good thing."**

In February 2022, he posted that "it may be that today's large neural networks are slightly conscious" (to which Murray Shanahan, a principal scientist at Google DeepMind, professor at Imperial College London, and scientific adviser on the film Ex Machina, replied: "...in the same sense that it may be that a large field of wheat is slightly lasagna").
Sutskever laughed when I brought it up. Was he joking? He was not. "Are you familiar with the concept of a Boltzmann brain?" he asked, referring to a whimsical thought experiment named after the 19th-century physicist Ludwig Boltzmann, in which random thermodynamic fluctuations in the universe are imagined to cause brains to pop in and out of existence.

"I feel like these language models are a bit like a Boltzmann brain now," Sutskever said. "You start talking to it, you talk for a while, then you finish, and the brain..." He made a vanishing gesture with his hand. Poof. Bye-bye, brain.

So, I asked, while the neural network is active, while it's running, there is something there? "I think it might be," he said. "I don't know for sure, but it's a possibility that's very hard to argue against. Who knows what's going on, right?"

## Artificial intelligence, but not as we know it

While others wrestle with the idea of machines matching human intelligence, Sutskever is preparing for machines to surpass us. He calls this artificial superintelligence: "They'll see things more deeply. They'll see things we can't."

I still struggled to grasp what that really means. Human intelligence is our benchmark for what intelligence is. What does Sutskever mean by intelligence smarter than human?

"We've already seen an example of a narrow superintelligence in AlphaGo," he says. In 2016, DeepMind's Go-playing AI defeated Lee Sedol, one of the world's best players, 4-1. Sutskever said: "It figured out how to play Go in ways that differ from what humanity had collectively worked out over thousands of years. It came up with new ideas."

Sutskever pointed to AlphaGo's enigmatic move 37. In the second game against Lee, AlphaGo made a move that baffled the commentators. They thought it was a mistake.
In fact, AlphaGo had played what came to be called a **divine move**, one never before seen in the history of the game (fans dubbed the style "dog moves," after AlphaGo's nickname). "Imagine insight of that kind, but about everything," Sutskever said.

It was this line of thinking that led Sutskever to the biggest shift of his career. Together with Jan Leike, a fellow OpenAI scientist, he set up a team to work on what they call superalignment. Alignment is jargon for getting an AI model to do what you want, nothing more; superalignment is OpenAI's term for alignment applied to superintelligence. The goal is to develop a foolproof set of procedures for building and controlling this future technology. OpenAI says it will dedicate a fifth of its enormous computing resources to the problem and solve it within four years.

"Existing alignment methods won't work for models smarter than humans, because they fundamentally assume that humans can reliably evaluate what AI systems are doing," Leike said. "As AI systems become more capable, it will get harder for humans to evaluate them. In forming the superalignment team with Ilya, we have set out to solve these alignment challenges of the future."

Google's chief scientist Jeff Dean said: "It is important to focus not only on the potential opportunities of large language models but also on their risks and downsides."

OpenAI announced the project with great fanfare in July. But to some, it is pure fantasy. OpenAI's blog post drew ridicule on Twitter from prominent critics of big tech, including Abeba Birhane, who works on AI accountability at Mozilla ("so many grandiose-sounding yet empty words in one blog post"); Timnit Gebru, co-founder of the Distributed AI Research Institute ("Imagine ChatGPT being even more 'super-aligned' with OpenAI's technologists. *shudder*"); and Margaret Mitchell, chief ethics scientist at the AI firm Hugging Face ("My alignment is bigger than yours").

These are familiar dissenting voices, to be sure. But they are a powerful reminder that **to some, OpenAI is leading the field, while to others it is out on the fringe.**

For Sutskever, however, superalignment is the inevitable next step. "It's an unsolved problem," he said. He believes it is also a problem that too few core machine-learning researchers like himself are working on. "I'm doing it in my own interest. It obviously matters that whatever superintelligence anyone builds does not go rogue. Obviously."

The work on superalignment is only beginning. Sutskever said it would require broad changes across research institutions. But he already has an exemplar in mind for the safeguards he hopes to design: a machine that looks at people the way parents look at their children. **He said: "In my opinion, this is the gold standard. People really do care about their children. Does AI have children? No. But I hope it will think of us that way."**

My time with Sutskever was almost up, and I thought we were done. But he had one more thought, one I had not seen coming: "Once you overcome the challenge of rogue AI, then what? In a world with smarter AIs, is there even room for human beings?"

"One possibility, something that may be crazy by today's standards but not so crazy by future standards, is that many people will choose to become part AI." This, he said, could be how humans try to keep up. "At first, only the boldest and most adventurous will try it. Maybe others will follow. Or maybe not."

Wait, what? He was getting ready to leave. Would you do it, I asked. Would you be the first? "The first? I don't know," he said. "But it's something I think about. The truthful answer is: maybe."

With that, he stood up and walked out of the room.
"Nice to see you again." He said as he walked.