Google Unveils Gemma 2: AI Models for Researchers

Starting in July, researchers and developers gain access to a new open series of lightweight Gemma 2 models from Google via Vertex AI. Originally, the series was set to include only a model with 27 billion parameters, but Google decided to also include a model with 9 billion parameters.

Gemma 2 was unveiled in May at the Google I/O conference as the successor to Gemma, which was released in February in 2-billion- and 7-billion-parameter variants. The new models are designed to run on the latest NVIDIA GPUs or on a TPU host in Vertex AI, catering to developers looking to integrate artificial intelligence into their applications or devices, such as smartphones, Internet of Things devices, and personal computers.

Compared with models from other companies, such as Meta's Llama 3 and Grok-1, Gemma 2 incorporates technical advances that allow more compact, lightweight models to handle a wide variety of user requests. By offering two sizes, 9 billion and 27 billion parameters, Google gives developers the flexibility to run the models on-device or through the cloud. The openly released code and weights make the models easy to configure and integrate into different projects, keeping them versatile across varied applications.

Additionally, existing Gemma variants such as CodeGemma, RecurrentGemma, and PaliGemma are expected to benefit from the new Gemma 2 models. Google also plans to release another model with 2.6 billion parameters, intended to serve as a "bridge between lightweight accessibility and powerful performance."

Gemma 2 is available in Google AI Studio, and developers can download the model weights from platforms such as Kaggle and Hugging Face. Researchers can access Gemma 2 for free through Kaggle or the free tier of Colab notebooks.
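For readers who want to try the downloadable weights, the following is a minimal sketch of loading Gemma 2 with the Hugging Face Transformers library. The repository name "google/gemma-2-9b", the use of `device_map="auto"` (which requires the `accelerate` package), and the prompt text are assumptions for illustration, not details taken from the announcement.

```python
# Minimal sketch: loading Gemma 2 weights from Hugging Face Transformers.
# Assumptions: repository ID "google/gemma-2-9b" (the 27B variant would be
# analogous), and that the model license has been accepted on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b"  # assumed repository name

# Download the tokenizer and weights; device_map="auto" places the model
# on available GPU/CPU memory and requires the accelerate package.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Run a short generation to confirm the model loads and responds.
inputs = tokenizer("Summarize what Gemma 2 is in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same weights can also be pulled from Kaggle or used directly in a Colab notebook, as noted above.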

Sources: reports, release notes, and official announcements.