
Google has announced a new collaboration with the Wild Dolphin Project (WDP) to analyze dolphin vocalizations using a custom artificial intelligence model.
The model, named "DolphinGemma," is derived from the open-source Gemma family of models.
It is designed to understand the complex communication patterns of Atlantic spotted dolphins, which the organization has been studying since the 1980s.
The model was trained on acoustic recordings gathered by researchers over four decades, containing clicks and whistles associated with specific behaviors.
During initial trials, the model successfully generated sounds similar to those made by dolphins, boosting hopes of deciphering their communication.
In parallel, the team developed a small device built around a Pixel 9 phone to capture underwater sounds and emit programmed acoustic responses.
The new device boasts more powerful processing capabilities compared to previous versions, allowing for real-time data analysis without the need for bulky equipment.
Despite this progress, however, researchers involved in the project emphasize that there is still a long way to go before dolphin communication is fully understood.
According to the researchers, the current focus is on correlating sounds with daily activities, such as hunting or playing, rather than attempting to simulate complex dialogues.
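To make the idea of correlating sounds with daily activities concrete, here is a minimal, purely illustrative sketch in Python: it tallies how often hypothetical sound labels co-occur with annotated behaviors in a toy dataset. The sound types, behavior labels, and data below are assumptions for illustration only and do not represent the project's actual tools or recordings.

```python
# Illustrative sketch only: toy co-occurrence counts between sound labels
# and observed behaviors. All labels and data here are hypothetical.
from collections import Counter

# Hypothetical annotated clips: (sound_type, observed_behavior)
annotations = [
    ("signature_whistle", "reunion"),
    ("buzz", "foraging"),
    ("buzz", "foraging"),
    ("squawk", "play"),
    ("signature_whistle", "reunion"),
    ("squawk", "conflict"),
]

# Count how often each sound type appears with each behavior.
pair_counts = Counter(annotations)
sound_totals = Counter(sound for sound, _ in annotations)

# For each sound type, report the share of its occurrences per behavior.
for (sound, behavior), count in sorted(pair_counts.items()):
    share = count / sound_totals[sound]
    print(f"{sound:18s} -> {behavior:10s} {share:.0%} of occurrences")
```

In practice, a model-based approach would work with raw audio and far richer context, but the underlying question is the same one described above: which sounds reliably accompany which activities.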
Field tests are scheduled to begin this summer off the coast of the Bahamas. If successful, the technology could potentially be used to study whales and other marine species, according to a Google statement.