Ye olde recordings, random

To add insult to injury, here are some more recordings, created between 2006 and 2009.

To start, a Funk Jam Mix with Gerrit and Uwe.

Up next, a never-finished song, for some reason filed as “radioedit”. There’s just drum and bass, Uwe and me:

This is an odd one. As far as I can remember, it’s some VST drum computer where I just changed patterns here and there, plus a VST-based piano with some custom bass. Whatever.

Last but not least, a version of Pour Me A Glass, with an oldschool Remaining cast.

That’s it. Whatever comes next is going to be some new recordings. Hopefully sometime this year…

Ye olde recordings

Not exactly legendary, but fun nonetheless, are some old recordings that I’ve uploaded to SoundCloud. Previously they were embedded via some Flash-based plugin, which stopped working a long time ago anyway.

To bring those back to life, I’ve replaced them with the new HTML5-based SoundCloud widget. While I’ve got some more to upload, I’d like to share something for the time being.

There’s the Roaring Hamster session, which resulted in a set of four tracks, with one longplayer. If you’re reading this on the site, the set widget should show up below:

A very different beast was “Die Gesellschaft des Elefanten” (“The Society of the Elephant”), consisting of multiple sessions. The jam-mix highlight is below, the rest is inside the original post.

Put away your high expectations and give it a shot!

Locating sound, the cricket way

On the Locating Sound post, Gerd Riesselmann left this comment:

Female crickets locate male crickets by the sounds they make. However, the head of a cricket is too small to locate the sound by measuring the time difference between the incoming signals.

So how does the cricket locate sound?

It does it by “hardwiring” movement and sound recognition. Basically: if a sound reaches the right ear, a movement to the right is triggered. After that, the system sleeps for some time. The same goes for the left ear.

This makes the cricket move toward the target in a zigzag kind of way.
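That reflex can be sketched as a tiny simulation. Everything here is made up for illustration (the function name, the turn angle, the step size); the real robot in the paper below is of course far more sophisticated:

```python
import math

# Toy 2-D sketch of the hardwired reflex described above: the ear the sound
# hits first triggers a fixed turn to that side, followed by a forward step.
# All names, angles, and step sizes are illustrative, not from the paper.

def approach(source, turn_deg=30.0, step=0.5, arrive=1.0, max_steps=100):
    x, y, heading = 0.0, 0.0, 0.0
    turn = math.radians(turn_deg)
    path = [(x, y)]
    for _ in range(max_steps):
        if math.hypot(source[0] - x, source[1] - y) < arrive:
            break  # close enough to the sound source
        # Signed angle from the current heading to the source.
        bearing = math.atan2(source[1] - y, source[0] - x) - heading
        bearing = math.atan2(math.sin(bearing), math.cos(bearing))
        # The reflex: sound on the left -> turn left, on the right -> turn right.
        heading += turn if bearing > 0 else -turn
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        path.append((x, y))
    return path

path = approach(source=(5.0, 3.0))
print(len(path))  # the path zigzags its way toward (5, 3)
```

Because the turn is a fixed reflex rather than proportional to the angle, the heading keeps overshooting the true bearing, which is exactly what produces the zigzag.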

This exact behaviour has been implemented in a robot already: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.106.1665&rep=rep1&type=pdf

The robot cricket is pretty cool:

A bit larger than the average cricket, though.

Another update from Gerd (via G+):

I found it in the book “Die Entdeckung der Intelligenz – Können Ameisen denken?” (“The Discovery of Intelligence – Can Ants Think?”) by Holk Cruse, Jeffrey Dean and Helge Ritter. Unfortunately there is no English edition, as far as I know. The book contains quite a lot of examples of robots simulating animal behavior (they are called Animaten in German). The special cricket robot itself was mentioned in another book, though, but it seems I gave that one away. And unfortunately I can’t remember its title.

Rereading the relevant chapter, I unfortunately have to correct myself: crickets have a special organ that physically measures the difference in sound pressure directly. Basically, the sound is allowed to reach both eardrums from both outside and inside, which makes the ear one organ with two eardrums and four entrances. One entrance leads to the front of the right eardrum, another to the front of the left eardrum, like for us. But an additional entrance leads from the right to between the two eardrums, and yet another does the same from the left. The principle is called “coupled eardrums”.

However, for crickets this is limited to a very narrow range of frequencies (that of male crickets, of course). It’s a very special solution. But frogs apply the same mechanisms to a broader range of frequencies. There’s a small paragraph on this in the Wikipedia article on sound locating: “Ho showed that the coupled-eardrum system in frogs can produce increased interaural vibration disparities when only small arrival time and sound level differences were available to the animal’s head. Efforts to build directional microphones based on the coupled-eardrum structure are underway.” The article mentions flies as using this system, too, so it seems to scale very well 🙂

Now you know!

Locating sound

Did you know that sound travels about four times faster under water, and that we therefore can’t locate a sound source the way we can under regular conditions? Let’s take a look at that in more detail, and at what we could do to build a workaround!

To start, sound travels at 1235km/h, or 343m/s. The latter is more useful for the upcoming calculations, so keep that in mind. Let’s say the distance between your ears is about 20cm, or 0.2m. Depending on the size of your head, it’s likely a bit less, or even a bit more. At 343m/s, sound takes 0.2m / 343m/s ≈ 0.58ms to travel from one ear to the other. In other words, about half a millisecond is all we need to figure out what direction a sound is coming from, as that’s the maximum delay for a sound coming from one side to reach the other ear. But we can also locate sounds coming from front-left or front-right and anywhere in between, so we actually resolve differences of even less than half a millisecond.
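The arithmetic is easy to verify in a couple of lines (the 20cm ear distance is this post’s round figure, not a measured constant):

```python
SPEED_OF_SOUND_AIR = 343.0  # m/s, roughly at room temperature
EAR_DISTANCE = 0.2          # m, the rough figure used above

# Maximum interaural time difference: sound arriving from one side has to
# cross the full width of the head to reach the far ear.
max_itd_s = EAR_DISTANCE / SPEED_OF_SOUND_AIR
print(f"max ITD: {max_itd_s * 1000:.2f} ms")  # -> max ITD: 0.58 ms
```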

With that in mind, let’s take another look at sound under water. According to Wikipedia, sound in fresh water travels at 1497m/s, in sea water even 1560m/s (at certain temperatures etc.). That’s 4.4 and 4.5 times faster than sound in air. So our sound-locating precision would have to be about four and a half times better to locate a sound under water, or we’d need some workaround that delays sound from reaching the other ear, depending on the direction.

Based on my experience with audio recording on specialized audio hardware and drivers, we usually have to deal with latencies of about one millisecond. That’s great for audio recording, compared to the 100ms to 300ms that regular audio hardware and drivers get you, but way too much for the underwater locating device. So we’d probably need a hardwired solution that could work with much lower latencies. If 0.58ms is the maximum we can deal with, half of that might be the lower bound, so 0.29ms. Adapting that to the roughly 4.5-fold speed increase in salt water, we’d need a precision of about 0.064 milliseconds, or 64 microseconds. Considering that computers can measure in nanoseconds, that should be plenty for some optimized hardware.
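Sticking with the numbers above, here’s the same back-of-the-envelope math in code (speeds are the Wikipedia figures; the 0.29ms resolution target is this post’s guess, nothing official):

```python
SPEED_AIR = 343.0          # m/s
SPEED_SALT_WATER = 1560.0  # m/s, the Wikipedia figure for sea water
EAR_DISTANCE = 0.2         # m, same rough figure as before

ratio = SPEED_SALT_WATER / SPEED_AIR
print(f"speed ratio: {ratio:.1f}")  # -> speed ratio: 4.5

# The maximum interaural time difference shrinks by the same ratio...
max_itd_water_s = EAR_DISTANCE / SPEED_SALT_WATER
print(f"max ITD under water: {max_itd_water_s * 1e6:.0f} microseconds")  # -> 128

# ...and if half the in-air maximum (0.29 ms) marks the needed resolution,
# the underwater device would have to resolve roughly:
required_s = 0.29e-3 / ratio
print(f"required resolution: {required_s * 1e6:.0f} microseconds")  # -> 64
```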

On a somewhat related note: consider a coordinate system with a single axis, x. You can position points anywhere on x. That’s what stereo sound is. Add another axis, y, and you can position a point on both x and y. You’ve got two dimensions. That’s what’s sold as “3D” sound. To get actual 3D, you’d have to add yet another axis. I haven’t ever seen a setup that actually has three dimensions. Why is that? Maybe because we’re not that good at hearing in three dimensions, as we’ve got only two ears?

So much for my two cents of anecdotal evidence on the topic; the Sound Localization article has a lot more actual detail (but also no fancy images). For some fun images, take a look at this (archived) page.