According to Zhītōng Finance APP, Qualcomm (QCOM.US) and Meta Platforms have announced that, starting in 2024, Meta Platforms' new large language model, Llama 2, will run on smartphones and personal computers equipped with Qualcomm chips.
Until now, large language models have run mainly on large server clusters built around NVIDIA (NVDA.US) graphics processing units (GPUs), because the technology demands enormous computing power and data. That demand has helped drive NVIDIA's stock price up more than 220% this year.
However, companies like Qualcomm that manufacture cutting-edge processors for smartphones and personal computers have largely missed out on the AI boom. Qualcomm's stock price has only risen by about 10% in 2023, lagging behind the 36% increase in the NASDAQ index.
Tuesday's announcement signals that Qualcomm wants to position its processors as well suited for AI that runs "at the edge," meaning on the device itself, rather than "in the cloud." If large language models can run on smartphones instead of in large data centers, the cost of running AI models could fall significantly, potentially enabling better and faster voice assistants and other applications.
Qualcomm believes that offering Meta Platforms' open-source Llama 2 model on Qualcomm devices will make applications like intelligent virtual assistants possible. Meta Platforms' Llama 2 can do many of the same things as ChatGPT, but it can be packaged into a smaller program, making it suitable for running on smartphones.
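To illustrate what "packaged into a smaller program" can look like in practice, the sketch below loads a 4-bit quantized Llama 2 chat model with the open-source llama-cpp-python library and runs a short prompt entirely on local hardware. This is not Qualcomm's on-device software stack, just a minimal sketch that assumes a quantized GGUF model file (the filename shown is hypothetical) has already been downloaded.

```python
from llama_cpp import Llama

# Load a 4-bit quantized Llama 2 7B chat model from a local file.
# The file name is a placeholder; any compatible GGUF build would do.
llm = Llama(model_path="llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048, n_threads=4)

# Ask the on-device model a question, much as a phone-based assistant might.
output = llm(
    "Q: What is the capital of France? A:",
    max_tokens=32,
    stop=["Q:", "\n"],
)
print(output["choices"][0]["text"].strip())
```

Quantizing the weights to 4 bits shrinks a 7-billion-parameter model from roughly 13 GB to around 4 GB, which is what makes running it on phone- and laptop-class hardware plausible at all.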
Qualcomm's chips include a tensor processing unit (TPU) well suited to the kinds of computation AI models require. Still, the processing power available on a mobile device pales in comparison to that of a data center packed with state-of-the-art GPUs.
Meta Platforms' Llama is noteworthy because Meta Platforms has released its "weights," the set of numbers that determine how a particular AI model behaves. This allows researchers and businesses to run the model on their own computers without seeking permission or paying fees. Other well-known AI models, such as OpenAI's GPT-4 and Google's Bard, are closed-source, and their weights are kept strictly confidential.

Qualcomm has previously collaborated closely with Meta Platforms, notably on chips for Meta's Quest virtual reality devices. Qualcomm has also demonstrated AI models, such as the open-source image generator Stable Diffusion, running slowly on its chips.
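To make the open-weights point concrete, the sketch below downloads the published Llama 2 checkpoint with the Hugging Face transformers library and runs it entirely on a local machine, with no call to a hosted service. It assumes access to the meta-llama/Llama-2-7b-chat-hf repository has already been granted under Meta's license and that the machine has enough memory for a 7B model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The published Llama 2 7B chat checkpoint; the weights download to local disk
# once license access has been granted on Hugging Face.
model_id = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Inference happens entirely on the local machine -- no API call to a hosted service.
inputs = tokenizer("Explain what model weights are in one sentence.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

By contrast, closed models such as GPT-4 are exposed only through a hosted API; their weights never leave the provider's servers, so an equivalent local script is not possible.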