With the mobile version of Gemini, Google opens a new front in AI as large models join the battle for edge AI.

Wallstreetcn
2023.12.07 09:47

Google has launched a mobile version of its Gemini AI model that can run locally and offline on devices, promising a smarter and faster mobile experience. The Gemini Nano model is already integrated into Pixel phones, where it powers offline AI applications and features while keeping personal data on the device. Gemini will also come to Google's search engine and to the Assistant on Pixel phones, offering more Gemini experiences.

Google has extended the battle over large AI models to mobile hardware.

On Wednesday local time, Google launched Gemini, which it calls its largest and most capable AI model. Gemini can analyze image and audio information and has complex reasoning and "planning" capabilities. It began powering Google's chatbot Bard on Wednesday and will be applied more broadly to Google's search engine next year.

Gemini comes in three versions: the most powerful Gemini Ultra, the multitasking Gemini Pro, and Gemini Nano, built for specific tasks and on-device (edge) computing.

Although Nano is the smallest model in the Gemini series, Google has high expectations for it. It is designed specifically to run on mobile devices, locally and offline, with no internet connection required.

Running Gemini Nano Locally on Mobile Devices

Google has integrated Gemini Nano into its latest Pixel phones. The company says the Nano model is optimized for mobile devices, letting Android developers easily build AI applications and features that work offline or that use personal information on the device, better protecting user privacy.

The Pixel 8 Pro is currently the only phone that supports the Nano model, but Google sees the new model as a core part of Android's future.

If you own a Pixel 8 Pro, two features on your phone are powered by Gemini Nano starting today: automatic summarization in the Recorder app and Smart Reply in the Gboard keyboard. Both features run offline, and because Nano runs on the device itself, the experience is fast and feels native.

Next year, when Google brings the Gemini-powered Bard chatbot into the Assistant on Pixel phones, there will be more Gemini experiences to come.

According to media reports quoting Demis Hassabis, CEO of Google DeepMind:

Although the Nano model is small, it is still powerful. Because Pixel phones are small and constrained in memory and speed, the AI model has to be made smaller, and for its size it is actually an incredible model. The goal with Nano was to build as capable a version of Gemini as possible without eating up storage or overheating the processor.

Google Is Integrating Gemini Nano into the Android System

Currently, Google's Tensor G3 appears to be the only processor capable of running the Nano model. But Google is also working to bring Nano to the broader Android ecosystem: it has launched a new system service called AICore that lets developers build Gemini-powered features into their own applications.

A phone will still need a high-end chip to run Nano, but in the blog post announcing the feature, Google said chipmakers such as Qualcomm, Samsung, and MediaTek can produce compatible processors. Developers can now join Google's early access program.
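To make the idea concrete, here is a minimal Kotlin sketch of the kind of app-side code a system service like AICore could enable. The names below (OnDeviceModel, TranscriptSummarizer, EchoModel) are invented for illustration and are not Google's actual SDK, whose early-access API is not described in this article; the point is simply that the prompt and the generated text never leave the phone.

```kotlin
// Hypothetical sketch: OnDeviceModel is NOT Google's AICore API, just a
// stand-in interface for "a text model served by an on-device system service".
import kotlinx.coroutines.runBlocking

interface OnDeviceModel {
    // Generates text for a prompt entirely on the device (no network access).
    suspend fun generate(prompt: String, maxOutputTokens: Int = 256): String
}

// App-level feature built on top of the on-device model: summarizing a
// voice-recorder transcript without the transcript ever leaving the phone.
class TranscriptSummarizer(private val model: OnDeviceModel) {
    suspend fun summarize(transcript: String): String =
        model.generate(
            prompt = "Summarize this recording transcript in three bullet points:\n$transcript",
            maxOutputTokens = 200
        )
}

// Fake model so the sketch runs without any device SDK or special hardware.
class EchoModel : OnDeviceModel {
    override suspend fun generate(prompt: String, maxOutputTokens: Int): String =
        "• (on-device summary of: ${prompt.take(48)}...)"
}

fun main() = runBlocking {
    val summarizer = TranscriptSummarizer(EchoModel())
    println(summarizer.summarize("Stand-up notes: shipped the login fix, billing API still blocked..."))
}
```

The design point is that inference stays local: keeping the network entirely out of the path is what makes the offline and privacy claims above possible, at the cost of requiring a high-end chip and a model small enough to fit in the phone's memory.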

For the past few years, Google has essentially treated its Pixel phones as AI devices. With Gemini Nano, many more high-end Android devices could reach that goal in the future.