Google Gemini, Apple Add Music-Focused Generative AI Features
Alphabet Inc.’s Google and Apple Inc. are adding music-focused generative artificial intelligence features to their core consumer apps, underscoring how advanced AI tools are moving into mainstream use.
Google’s Gemini AI assistant can now create 30-second music tracks from text, photos or video uploaded by users, powered by Google DeepMind’s latest Lyria 3 model, the company said in a blog post on Wednesday. The feature, which can generate custom lyrics or purely instrumental audio, will be available to users over the age of 18 in multiple languages. It is being rolled out on the desktop version of Gemini and will appear in the mobile app over the next few days, the company said.
Its popular image-creation model, Nano Banana, will also generate custom cover art alongside the track, adding a visual element when users share links to the tracks with others, Google said.
Adding audio-creation tools to its mobile app could strengthen Google’s consumer offerings as it remains locked in a race with OpenAI’s ChatGPT to win over users. Google won widespread praise from investors and users for its Gemini 3 AI model released in November, prompting OpenAI Chief Executive Officer Sam Altman to declare a "code red" to spur faster ChatGPT improvements.
Separately this week, Apple said consumers will soon be able to use AI to create playlists in Apple Music. The feature, called Playlist Playground, uses Apple Intelligence to let people turn text prompts into playlists that include cover art, a description and 25 songs. It is included in iOS 26.4, which was released in beta on Monday and will become more widely available this spring. Apple Music’s new feature rivals a similar one offered by Spotify Technology SA.
Posted on: 2/19/2026 11:42:41 AM