Google announces safety, transparency advancements in AI models
These new models offer additional tools for developers and researchers, contributing to ongoing efforts toward a secure and transparent AI future.
In a recent development aimed at improving the safety and transparency of artificial intelligence, Google has introduced three new generative AI models. The models, part of Google’s Gemma 2 series, are designed to be safer, more efficient and more transparent than many existing models.
A blog post on the company’s website states that the new models — Gemma 2 2B, ShieldGemma and Gemma Scope — build upon the foundation established by the original Gemma 2 series, which was launched in May.
A new era of AI with Gemma 2
Unlike Google’s proprietary Gemini models, the Gemma series is openly released, with model weights freely available to download. This approach mirrors Meta’s strategy with its Llama models, aiming to provide accessible, robust AI tools for a broader audience.
Gemma 2 2B is a lightweight model for generating and analyzing text. It is versatile enough to run on various hardware, including laptops and edge devices. Its ability to function across different environments makes it an attractive option for developers and researchers looking for flexible AI solutions, according to Google.
Meanwhile, Google said the ShieldGemma model focuses on enhancing safety by acting as a collection of safety classifiers. ShieldGemma is built to detect and filter out toxic content, including hate speech, harassment and sexually explicit material. It operates on top of Gemma 2, providing a layer of content moderation.
According to Google, ShieldGemma can filter prompts to a generative model and the content generated, making it a valuable tool for maintaining the integrity and safety of AI-generated content.
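Google has not published ShieldGemma’s interface in this article, but the two-stage moderation pattern it describes (screen the incoming prompt, then screen the generated output) can be sketched generically. Every name below is illustrative: the keyword check is a stand-in for a real safety classifier such as ShieldGemma, not how it actually works.

```python
# Illustrative two-stage content-moderation pipeline.
# The classifier and generator here are stand-ins, not ShieldGemma or Gemma 2.
UNSAFE_MARKERS = {"hate", "harassment"}

def is_unsafe(text: str) -> bool:
    """Stand-in safety classifier: flags text containing marker words."""
    return any(marker in text.lower() for marker in UNSAFE_MARKERS)

def generate(prompt: str) -> str:
    """Stand-in for a generative model such as Gemma 2."""
    return f"Echoing: {prompt}"

def moderated_generate(prompt: str) -> str:
    # Stage 1: filter the prompt before it ever reaches the model.
    if is_unsafe(prompt):
        return "[prompt blocked]"
    # Stage 2: filter the generated content before returning it.
    output = generate(prompt)
    if is_unsafe(output):
        return "[response blocked]"
    return output
```

The point of the pattern is that the moderation layer sits outside the generative model: a flagged prompt never reaches the generator, and a flagged response never reaches the user.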
Gemma Scope, meanwhile, is a tool that allows developers to gain deeper insights into the inner workings of Gemma 2 models. According to Google, it consists of specialized neural networks, known as sparse autoencoders, that help unpack the dense, complex information processed by Gemma 2 into a more interpretable form.
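The “specialized neural networks” Google describes are sparse autoencoders. The toy version below, with random untrained weights and a simple top-k sparsity rule (not Gemma Scope’s actual architecture or dimensions), only illustrates the basic mechanic: a dense activation vector is expanded into a wider, mostly-zero feature vector that is easier to inspect, and can be mapped back.

```python
# Toy sparse autoencoder: illustrative only, with random untrained weights.
import numpy as np

rng = np.random.default_rng(0)

D_MODEL, D_FEATURES = 8, 32  # small hidden size vs. a larger feature dictionary

W_enc = rng.normal(size=(D_MODEL, D_FEATURES)) * 0.1
W_dec = rng.normal(size=(D_FEATURES, D_MODEL)) * 0.1
b_enc = np.zeros(D_FEATURES)

def encode(activation: np.ndarray, k: int = 4) -> np.ndarray:
    """Map a dense activation to a sparse feature vector.

    Keeps only the k largest pre-activations (a simple top-k sparsity
    rule), so most entries of the result are exactly zero.
    """
    pre = activation @ W_enc + b_enc
    threshold = np.sort(pre)[-k]
    return np.where(pre >= threshold, np.maximum(pre, 0.0), 0.0)

def decode(features: np.ndarray) -> np.ndarray:
    """Reconstruct an activation vector from the sparse features."""
    return features @ W_dec

activation = rng.normal(size=D_MODEL)  # stand-in for a Gemma 2 activation
features = encode(activation)          # wide vector, at most 4 nonzero entries
reconstruction = decode(features)
```

A real interpretability tool trains such networks so that each nonzero feature corresponds to a human-meaningful concept in the model’s activations; here the weights are random, so only the shapes and sparsity are meaningful.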
Posted on: 8/2/2024 3:04:44 AM