Locating sound

Did you know that sound travels about four times faster under water, and that we therefore can’t locate a sound source the way we can in air? Let’s take a peek at that in more detail and at what we could do to build a workaround!

To start, sound travels at 1235 km/h, or 343 m/s. The latter is more useful for the upcoming calculations, so keep that in mind. Let’s say the distance between your ears is about 20 cm, or 0.2 m; depending on the size of your head, it’s likely a bit less, or even a bit more. At 343 m/s, sound takes 0.2 m ÷ 343 m/s ≈ 0.58 ms to travel from one ear to the other. In other words, about half a millisecond is all we get to figure out what direction a sound is coming from, as that’s the maximum delay between a sound reaching one ear and reaching the other. But since we can also distinguish sounds coming from front-left, front-right, and everything in between, we must be resolving differences well below half a millisecond.
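The arithmetic above can be checked with a few lines of Python, using the values from the text (the 0.2 m ear spacing is the article’s rough assumption, not a measured constant):

```python
# Maximum interaural time difference (ITD) in air.
speed_of_sound_air = 343.0  # m/s, as stated above
ear_distance = 0.2          # m, rough width of a human head (assumed)

max_itd = ear_distance / speed_of_sound_air  # seconds
print(f"max ITD: {max_itd * 1000:.2f} ms")   # ≈ 0.58 ms
```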

With that in mind, let’s take another look at sound under water. According to Wikipedia, sound travels at 1497 m/s in fresh water, and at 1560 m/s in sea water (at certain temperatures etc.). That’s about 4.4 and 4.5 times faster than sound in air. So our localization would have to be about four and a half times more precise to locate a sound under water, or we’d need some workaround that delays the sound from reaching the other ear, depending on the direction.
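The speed ratios work out like this (speeds as cited above from Wikipedia):

```python
# Speed of sound in water relative to air.
speed_air = 343.0     # m/s
speed_fresh = 1497.0  # m/s, fresh water
speed_sea = 1560.0    # m/s, sea water

print(f"fresh water: {speed_fresh / speed_air:.1f}x faster")  # ≈ 4.4x
print(f"sea water:   {speed_sea / speed_air:.1f}x faster")    # ≈ 4.5x
```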

Based on my experience with audio recording on specialized audio hardware and drivers, we usually have to deal with latencies of about one millisecond. That’s great for audio recording, compared to the 100 ms to 300 ms that regular audio hardware and drivers get you, but way too much for the underwater locating device. So we’d probably need a hardwired solution that could work with much lower latencies. If 0.58 ms is the maximum delay we have to resolve, half of that might be a reasonable working resolution, so 0.29 ms. Scaling that down by the ~4.5× speed increase in sea water, we’d need a precision of about 0.064 ms, or 64 microseconds. Considering that computers can measure time in nanoseconds, that should be plenty for some optimized hardware.
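Chaining the numbers together gives the required precision (the halving of the ITD into a “usable resolution” is the rough assumption made above, not an established figure):

```python
# Rough timing precision needed to localize sound in sea water.
max_itd_ms = 0.2 / 343.0 * 1000   # ≈ 0.58 ms maximum ITD in air
resolution_ms = max_itd_ms / 2    # assumed usable resolution, ≈ 0.29 ms
speed_factor = 1560.0 / 343.0     # ≈ 4.5x for sea water

required_ms = resolution_ms / speed_factor
print(f"required precision: {required_ms * 1000:.0f} µs")  # ≈ 64 µs
```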

On a somewhat related note: consider a coordinate system with a single axis, x. You can position points anywhere on x; that’s what stereo sound is. Add another axis, y, and you can position a point on both x and y: two dimensions. That’s what’s sold as “3D” sound. To get actual 3D, you’d have to add a third axis, and I’ve never seen a setup with three actual dimensions. Why is that? Maybe because we’re not that good at hearing in three dimensions, as we’ve got only two ears?

So much for my two cents of anecdotal evidence on the topic; the Sound Localization article has a lot more actual detail (but also no fancy images). For some fun images, take a look at this (archived) page.


  1. Female crickets locate male crickets by the sounds they make. However, the head of a cricket is too small to locate the sound by measuring the time difference between the incoming signals.

    So how does the cricket locate sound?

    It does it by “hardwiring” movement and sound recognition. Basically: if a sound reaches the right ear, a movement to the right is triggered, and then the system sleeps for some time. Same for the left ear.

    This makes the cricket move toward the target in a zigzag kind of way.

    This exact behaviour has been implemented in a robot already: http://citeseerx.ist.psu.edu/viewdoc/download?doi=
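    The hardwired rule described above can be sketched in a few lines; the function name, the refractory length, and the left/right event sequence are made up for illustration:

    ```python
    # Sketch of the cricket's "hardwired" phonotaxis rule: turn toward
    # whichever ear hears the sound, then ignore input for a short
    # refractory period. This alternation produces the zigzag path.
    def zigzag_turns(sound_events, refractory=2):
        """Return the turns taken for a sequence of 'left'/'right' detections."""
        turns = []
        sleep = 0
        for side in sound_events:
            if sleep > 0:
                sleep -= 1       # still sleeping: ignore this detection
                continue
            turns.append(side)   # turn toward the ear that heard the sound
            sleep = refractory   # then sleep for some time
        return turns

    print(zigzag_turns(["right", "right", "left", "left", "right", "left"]))
    # → ['right', 'left']
    ```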

  2. Thanks Gerd! And sorry for taking too long to get this out of the moderation queue; that was related to a failing mailserver…