Jensen Huang: AI hallucinations can be solved; AGI could arrive within five years

Wallstreetcn
2024.03.20 01:10

Jensen Huang believes that predicting when a qualified AGI will arrive depends on how you define AGI, and that AI hallucinations can be readily addressed by ensuring every answer is backed by sufficient research.

The next major leap in the development of Artificial Intelligence (AI) is expected to be the achievement of Artificial General Intelligence (AGI).

This week at NVIDIA's GTC conference, CEO Jensen Huang seemed annoyed when the media raised this topic, partly because he is frequently misunderstood on it. Still, when asked for a specific timeline, he took the time to lay out his views.

Jensen Huang believes that predicting when a qualified AGI will appear depends on how you define AGI. He offered two analogies: despite time zone differences, you know when the new year arrives, and you know when 2025 will arrive. Likewise, if you drive to the San Jose Convention Center (the venue for this year's GTC conference), you know you have arrived when you see the giant GTC banner. The key is that we can agree on how to recognize arrival, whether in time or in space, so long as we first agree on where we are going.

"If we define AGI as something very specific, such as a software program that can perform very well on a set of tests, perhaps 8% better than most people, then I believe we will achieve it within 5 years," Jensen Huang explained. He suggested these tests could be bar exams, logic tests, economics exams, or perhaps medical school entrance exams. Unless the questioner can specify precisely what AGI means, he is unwilling to make a prediction.

Furthermore, during Tuesday's Q&A session, someone asked Jensen Huang how to deal with the AI hallucination problem, where an AI fabricates answers that sound plausible but are not grounded in fact. He replied, "Add a rule: for every answer, you must check the source of the answer." He referred to this approach as "retrieval-augmented generation," and described it as very similar to basic media literacy: check the source and its context. That means comparing the facts in the source against known truths; if the answer is inaccurate even in part, the entire source should be discarded and the search should move on to the next one.
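The source-vetting loop Huang describes can be sketched in a few lines. This is purely illustrative, not NVIDIA's implementation or any real RAG library's API: the known-truths table, the source records, and the claim-checking rule are all hypothetical stand-ins.

```python
# Hypothetical facts the system already trusts.
KNOWN_TRUTHS = {
    "gtc_2024_venue": "San Jose Convention Center",
    "nvidia_ceo": "Jensen Huang",
}

# Retrieved sources, in rank order. Each carries an answer plus the
# factual claims it rests on (all illustrative).
SOURCES = [
    {"name": "blog_post",
     "answer": "GTC 2024 was held in Santa Clara.",
     "claims": {"gtc_2024_venue": "Santa Clara Convention Center"}},
    {"name": "press_release",
     "answer": "GTC 2024 was held at the San Jose Convention Center.",
     "claims": {"gtc_2024_venue": "San Jose Convention Center"}},
]

def source_checks_out(source, truths):
    """Trust a source only if every claim matches a known truth.
    One inaccuracy, even a partial one, discards the whole source."""
    return all(truths.get(key, value) == value
               for key, value in source["claims"].items())

def answer_with_vetting(sources, truths):
    """Walk sources in rank order and return the first vetted answer."""
    for source in sources:
        if source_checks_out(source, truths):
            return source["answer"], source["name"]
    return None, None  # no trustworthy source: decline rather than guess

answer, origin = answer_with_vetting(SOURCES, KNOWN_TRUTHS)
print(origin)  # the blog post is discarded; the press release survives
```

The point of the sketch is the discard rule: a source that gets even one checkable fact wrong contributes nothing, and the search continues to the next candidate instead of blending in the bad source.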

Jensen Huang stated, "AI should not just provide answers, it should first conduct research to determine which answer is best."