Cybernetic dolphins make an appearance in
The New Yorker, emphasis added. Next thing I know, I’ve discovered that a site belonging to Google’s head of engineering, Ray Kurzweil, has linked to a scientific paper on a “‘dolphin speaker’ to enhance study of dolphin vocalizations and acoustics.” And you know, maybe delphic civilization had a little zazzle back! Dolphin researcher Denise Herzing gave a TED talk this year on dolphin communication, and it’s received more than half a million views. In any case, I was beginning to break out into an excited sweat. The narrative was all there: a broad, nerdy interest in dolphin codes gets distilled when some leaders of Google X see this TED talk.
And maybe Google X wasn’t just
decoding the dolphin speech. Maybe they were also playing it back to the animals. Maybe they really were working on this stuff!
In a very real sense, this would be the fulfillment of Michels’ dream to put “a Supercomputer and a Population of Dolphins” in the same place. (Of course, for Michels that place was low-Earth orbit, but close enough!)
I dashed off an email to Google X’s press person, hoping, praying she’d say that the company was, in fact, solving interspecies communication. (That’s solutionism I can get behind!)
Her initial response was discouraging: “I haven’t seen any dolphin tanks out back anywhere.”
But then …
The real truth turned out to be a little more complicated. It’s not that Google X is doing work on dolphin communication. BUT! BUT! BUT!
One of its close affiliates, wearable computing pioneer Thad Starner, who is a Technical Lead/Manager on Project Glass,
is also working with dolphins.
Let’s say that again: Starner, a Project Glass manager, is also working on human-dolphin communication in his academic role at Georgia Tech.
As a matter of fact, he’s working with Denise Herzing, the TED talker, on something called the
Cetacean Hearing and Telemetry project, which would allow for *real-time* communication with free-swimming dolphins. This is a major difference from most would-be communication systems, which have relied on captive dolphins and large, clunky apparatuses.
Herzing and her colleagues have been working for more than 15 years to associate certain sounds—outside the dolphins’ normal vocabulary, but easy for them to mimic—with objects like seaweed, scarves, and rope. Using that information, the Starner-Herzing system, which is still a prototype, is supposed to automatically translate a dolphin’s use of the “word” for “rope” into the word “rope” for the diver, in real time.
“Instead of pushing a keyboard through the water, the diver is wearing the complete system and it’s acoustic only. Basically the diver activates the sounds on a keypad on the forearm. The sounds go out through an underwater speaker. If a dolphin mimics the whistle or a human plays the whistle, the sounds come in and are localized through two hydrophones,” Herzing explained at TED. “The computer can localize who requested the toy, if there is a word match. The real power of this is in the real-time sound recognition, so we can respond to the dolphins quickly and accurately.”
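The pipeline Herzing describes (keypad-triggered whistles going out, two hydrophones pulling whistles in, localization plus word matching) can be caricatured in a few lines of Python. To be clear, this is an illustrative sketch, not the project's actual software: the sample rate, the pure-tone stand-in whistles, the toy vocabulary, and the correlation threshold are all invented for the example.

```python
# Illustrative sketch only; not the CHAT project's real code.
# Idea: match an incoming whistle against a small vocabulary of
# artificial "word" whistles, and guess direction from the delay
# between two hydrophones.

import numpy as np

RATE = 48_000  # assumed sample rate, Hz (invented for this sketch)

def make_whistle(freq_hz, seconds=0.5):
    """Generate a pure-tone stand-in for an artificial word-whistle."""
    t = np.arange(int(RATE * seconds)) / RATE
    return np.sin(2 * np.pi * freq_hz * t)

# Hypothetical vocabulary: word -> template whistle
VOCAB = {
    "rope": make_whistle(8_000),
    "seaweed": make_whistle(10_000),
    "scarf": make_whistle(12_000),
}

def match_word(signal):
    """Return the vocabulary word whose template correlates best, or None."""
    best_word, best_score = None, 0.0
    for word, template in VOCAB.items():
        # Normalized cross-correlation as a crude similarity score.
        corr = np.correlate(signal, template, mode="valid")
        score = np.max(np.abs(corr)) / (
            np.linalg.norm(signal) * np.linalg.norm(template) + 1e-12
        )
        if score > best_score:
            best_word, best_score = word, score
    return best_word if best_score > 0.5 else None  # arbitrary threshold

def bearing(left, right):
    """Very rough left/right localization from inter-hydrophone delay."""
    corr = np.correlate(left, right, mode="full")
    delay = np.argmax(corr) - (len(right) - 1)
    return "left" if delay > 0 else "right" if delay < 0 else "center"

if __name__ == "__main__":
    heard = make_whistle(8_000)  # pretend a dolphin mimicked the "rope" whistle
    print(match_word(heard))  # rope
```

The hard part in the real system is exactly what this toy skips: recognizing noisy, variable dolphin mimicry fast enough to respond "quickly and accurately," as Herzing puts it, rather than matching clean synthetic tones.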
All of which might be why the Google X engineer in Bilger’s story, Anthony Levandowski, was talking about
cybernetic dolphins. Unless there are even more Googlers interested in such a good subject. I hope there are. And that, as soon as possible, they stop creating floating retail barges and whatever other crazy stuff, and start unraveling the mysteries of the undersea civilizations. Because dolphins.