Utile Eyes

This page discusses the possibility of creating Utile Eyes (UE), an economical, open source, and portable version of The Sonic Eye.

Background

The Sonic Eye (TSE) is a promising navigation aid for the blind and visually impaired. It works by allowing the user to recognize the acoustic signatures of staircases, walls, etc. TSE's output is amazingly informative and human-friendly; if you haven't done so, listen to the demonstration videos (preferably wearing stereo headphones).

TSE's inspirations and approach are biomimetic, based on the echolocation capabilities of microbats. Basically, it:

  • emits a 3 ms ultrasonic chirp
  • captures the reflections in stereo
  • slows down the reflections by 25x
  • plays them into the user's ears
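The chirp itself is easy to prototype on a host machine before committing it to the Arduino. The sketch below (plain Python, standard library only) generates a linear 25-50 kHz up-chirp; the 200 kHz sample rate and the linearity of the sweep are assumptions for illustration, not specifics from the TSE paper.

```python
import math

def chirp_samples(f0=25_000.0, f1=50_000.0, dur=0.003, rate=200_000):
    """Linear up-chirp from f0 to f1 over dur seconds, sampled at rate Hz.

    rate must exceed 2 * f1 (Nyquist) to represent the sweep.
    """
    n = round(dur * rate)
    k = (f1 - f0) / dur          # sweep rate, in Hz per second
    out = []
    for i in range(n):
        t = i / rate
        # Phase of a linear chirp: 2*pi * (f0*t + (k/2)*t^2)
        phase = 2 * math.pi * (f0 * t + 0.5 * k * t * t)
        out.append(math.sin(phase))
    return out

samples = chirp_samples()
print(len(samples))              # 600 samples for 3 ms at 200 kHz
# Slowed down 25x, the 25-50 kHz sweep lands in the audible 1-2 kHz band:
print(25_000 / 25, 50_000 / 25)  # 1000.0 2000.0
```

A waveform like this could be exported (e.g., via Audacity or Python's `wave` module) and stored as the pre-recorded chirp.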

Without understanding everything that goes on in the user's brain, we can guess that:

  • The temporal position of each reflection indicates its distance.

  • The use of stereo allows the user to capture phase information
    (and thus, left-right directionality) from the reflected ultrasound.

  • The rising frequency of the chirp (25-50 kHz) allows the user to map
    the pitch of a reflection to its temporal position in the initial signal.
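The first of these points can be made concrete with some back-of-the-envelope arithmetic, assuming the speed of sound is roughly 343 m/s (room temperature) and a 60 ms listening window:

```python
SPEED_OF_SOUND = 343.0   # m/s at roughly room temperature (an assumption)

def echo_delay_to_distance(delay_s):
    """Round-trip echo delay -> one-way distance to the reflector."""
    return SPEED_OF_SOUND * delay_s / 2.0

def perceived_delay(delay_s, slowdown=25):
    """Where an echo lands in the slowed-down playback."""
    return delay_s * slowdown

# A 60 ms listening window covers reflectors out to about 10.3 m:
print(round(echo_delay_to_distance(0.060), 2))   # 10.29

# A wall 1 m away echoes after ~5.8 ms; slowed 25x, that stretches to
# ~146 ms, a gap human hearing can resolve comfortably:
wall = 2 * 1.0 / SPEED_OF_SOUND
print(round(wall * 1000, 1), round(perceived_delay(wall) * 1000, 1))
```

So the 25x slowdown does double duty: it shifts the ultrasound into the audible band and stretches echo timing into a range the ear can discriminate.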

The original prototype for TSE was large and rather ungainly: a backpack and a helmet with large, cup-like auricles (modeled on bat ears). However, the developers are working on a much more convenient version, about the size of a pair of sunglasses, but requiring a cell phone.

In the meantime, the TSE paper provides enough information to create an interim platform for experimentation. So, some friends are helping me to do just that; we will release details as we proceed. Our hardware and software designs will be open source, allowing (nay, encouraging!) others to jump in and try things out.

Design

UE's design tries to balance minimalism against convenience and flexibility. For example, we plan to omit the auricles, because they are ungainly and do not appear to be particularly critical. On the other hand, we're using a microcomputer and a Bluetooth link, because they allow the UE to be reconfigured in the field, etc.

The current physical design uses a rectangular box (e.g., 3" x 6" x 2") to store the amplifiers, battery, computer, super tweeter, and so forth. This could be attached to a hat or helmet, suspended from a neck lanyard, etc. The microphones and speakers plug into the box, using standard (2.5 and 3.5 mm, respectively) stereo audio plugs.

Basically, an Arduino will play a pre-recorded chirp (~3 ms) into a "super tweeter". It will then use a high-speed ADC to capture the reflected ultrasound waveforms. After the end of the expected reflections (~60 ms), it will play back the (slowed-down) reflections.
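One consequence of these numbers is worth spelling out: slowing a 60 ms listening window by 25x yields 1.5 s of playback, which bounds how often the device can chirp. A quick budget, assuming the chirp, listen, and playback phases do not overlap (a worst-case simplification):

```python
# Figures from the description above; no overlap between phases assumed.
CHIRP_MS = 3      # ~3 ms ultrasonic chirp
LISTEN_MS = 60    # ~60 ms window of expected reflections
SLOWDOWN = 25     # playback slowdown factor

playback_ms = LISTEN_MS * SLOWDOWN            # 1500 ms of slowed-down audio
cycle_ms = CHIRP_MS + LISTEN_MS + playback_ms
print(cycle_ms)                               # 1563 ms per complete cycle
print(round(1000 / cycle_ms, 2))              # ~0.64 chirps per second
```

An update rate under 1 Hz may be fine for unhurried navigation, but it suggests that trimming the listening window (or the slowdown factor) could be a useful field-tunable parameter.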

The use of an Arduino as the control device allows the UE to be reconfigured (and even reprogrammed entirely) "in the field". The Bluetooth interface can serve a number of functions, including audio output, device control, and data logging. Open source software (e.g., Audacity) can be used to generate waveforms.

Status

This project is still in the early design stages (e.g., overall design, part selection). For details, see the project's related pages.

The Utiles/Arduino subweb also has several relevant pages.

Futures

Once we have a working prototype, we can look into ways to improve it. Here are some partly-baked ideas...

Synthetic Sonar

The initial UE simply presents a slowed-down version of the reflected ultrasound. Although this may be all that is necessary (it works for microbats!), it may be possible to do better. For example, we could try to capture more information on the surrounding area, filter out uninteresting echoes, add virtual objects to the landscape, etc.

Using local information (e.g., collected by sweeps of the user's cane) and contextual information (e.g., downloaded from a geographic server), a program could create a 3D model of the surrounding area. This would then be mapped into a synthetic stereo pair of "reflected" signals. The processing would use a variation on ray tracing, making allowances for the differences between light and ultrasound.
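As a toy illustration of the idea, the sketch below synthesizes stereo arrival times for a single point reflector. The ear spacing, coordinate frame, and single-bounce path (emitter to object to each ear) are all assumptions chosen for illustration; a real implementation would trace many rays and model surface geometry, absorption, and spreading loss.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s (an assumption; varies with temperature)

def synth_echo(obstacle, left_ear=(-0.08, 0.0, 0.0),
               right_ear=(0.08, 0.0, 0.0), emitter=(0.0, 0.0, 0.0)):
    """Return (left_delay_s, right_delay_s) for one point reflector.

    Path traced: emitter -> obstacle -> each ear. The geometry is a
    guessed stand-in, not TSE's actual processing.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    out_leg = dist(emitter, obstacle)
    return ((out_leg + dist(obstacle, left_ear)) / SPEED_OF_SOUND,
            (out_leg + dist(obstacle, right_ear)) / SPEED_OF_SOUND)

l, r = synth_echo((1.0, 0.0, 2.0))  # object ahead and to the right
print(l > r)                        # True: the right ear hears it first
```

Summing such delayed, attenuated copies of the chirp for every modeled surface patch would produce the synthetic stereo "reflection" signals described above.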


This wiki page is maintained by Rich Morin, an independent consultant specializing in software design, development, and documentation. Please feel free to email comments, inquiries, suggestions, etc!

Topic revision: r111 - 25 Feb 2016, RichMorin