Can this become an ESRA project?
Two totally different images are visible in the picture above, and when we alternate between the white columns and the black figures, we realize that it is our brain rather than our eyes that determines what we see. Our brain has about as many nerve cells as there are stars in the Milky Way, and at least a quarter of these cells are involved in vision.
What happens to the brain centers involved in vision in people who cannot see?
Professor Amir Amedi of the Hebrew University studies the brain, particularly how much plasticity the adult brain maintains over our lifespan. Surprisingly, he found that the brain region which specializes in reading, the “visual word form area”, lights up in the same way in sighted individuals reading print and in congenitally blind individuals reading Braille. So, basically, the brain doesn’t care about how the information is transmitted and will form an image irrespective of whether we SEE the script, FEEL the letters as in Braille, or HEAR a soundscape that describes the image, as in Amedi’s novel Sight-to-Sound sensory substitution device (SSD) described below.
Identical brain regions are activated in sighted individuals reading text and in blind individuals using Braille or listening to a soundscape (below). LH, left hemisphere; VWFA, visual word form area.
Turning an image into a soundscape - the Sight-to-Sound project
Amedi exploits the ability to engage the visual brain centers via senses other than the eyes to help the blind visualize their surroundings. A tiny digital camera attached to the frame of sunglasses “sees” the surroundings. A software app scans the image and translates it into discrete sounds, which are transmitted by headphones to the ears. For example, a descending scale signifies a diagonal line going from the top left to the bottom of the image, whereas a single extended note signifies a straight horizontal line.
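For readers curious about the mechanics, the scanning scheme described above can be sketched in a few lines of code. This is a simplified, hypothetical illustration (not the actual SSD software): it assumes the common convention, used in Peter Meijer's freeware mentioned below, that the image is scanned column by column from left to right, a pixel's vertical position sets the pitch of a sine tone (top = high), and its brightness sets that tone's loudness.

```python
import math

def image_to_soundscape(image, duration=1.0, sample_rate=8000,
                        f_min=200.0, f_max=2000.0):
    """Turn a grayscale image (list of rows, values 0..1) into audio samples.

    Hypothetical sketch of the scheme described in the article: columns are
    played left to right over `duration` seconds; each row maps to a pitch
    (top row = highest), and pixel brightness sets that pitch's loudness.
    """
    n_rows, n_cols = len(image), len(image[0])
    # One sine frequency per row, spaced logarithmically, top row highest.
    freqs = [f_max * (f_min / f_max) ** (r / (n_rows - 1))
             for r in range(n_rows)]
    samples_per_col = int(duration * sample_rate / n_cols)
    samples = []
    for c in range(n_cols):                      # scan left to right
        for n in range(samples_per_col):
            t = (c * samples_per_col + n) / sample_rate
            # Mix one sine tone per row, weighted by pixel brightness.
            s = sum(image[r][c] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(n_rows))
            samples.append(s / n_rows)           # keep roughly in [-1, 1]
    return samples

# A diagonal line from top-left to bottom-right: played left to right, the
# pitch falls column by column - the "descending scale" described above.
diagonal = [[1.0 if r == c else 0.0 for c in range(8)] for r in range(8)]
audio = image_to_soundscape(diagonal)
```

Played through headphones, such a soundscape lets the listener reconstruct the shape: a falling pitch means a diagonal, while a steady pitch means a horizontal line.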
Can someone who has been blind from birth see images via sound?
A team of students has been training congenitally blind people to extract visual information from these soundscapes. A combination of basic sounds encodes the alphabet, and after 10 hours of training, the trainee is able to read words and sentences. Experienced trainees can read signboards and may advance to read books that have not been translated into Braille. Images are possible too, and each color is encoded by a different musical instrument - guitar, flute, etc. Highlights of the training program are: enabling blind trainees to find their shoes in a cluttered room, to identify a single red apple on a plate with green fruit (below), and to distinguish facial expressions. At a minimum, the system transmits information about obstacles in their path.
What is the current status of the project?
The original software for translating images into soundscapes was developed by Peter Meijer of Holland and put on the Web as freeware. Prof. Amedi’s group is upgrading the algorithm, including the introduction of color. Furthermore, they are developing a web interface for home self-training to make the system widely available. So far, all the trainees have successfully navigated the training program, which is monitored constantly and is being improved to make it less tiring and more efficient. The components of the SSD are not expensive and will hopefully be available commercially as a kit once an investor steps up to the plate.
How easy is it to become a trainer?
I came across this program through a chance encounter with Ella Striem-Amit, a lively Ph.D. student who did part of the above research for her doctoral thesis. Ella was very enthusiastic about the possibility of expanding the training program through our ESRA volunteers, and immediately fired off an email to Prof. Amir Amedi. Since being invited on board, I have joined the trainer team headed by Dina Tauer. I have had their software installed on my laptop and have gone through the first series of lessons. At first I was rather intimidated, as my hearing’s not all that sharp, but the images and soundscapes are easy to manage - one just has to be able to click a mouse - no additional computer skills required. One doesn’t have to learn the sounds off by heart to train, and there is a Braille version of the images for the trainee.

I've participated in a training session with an experienced trainer and his blind trainee, and have had my first session with my own blind trainee - a young woman who is actually working with the Amedi lab on putting their training system onto the Internet for people to use by themselves. However, I don't think that this will do away with the need for personal trainers, as one needs some kind of appointment schedule and personal encouragement to stay on course.
You can watch Professor Amedi presenting the SSD system at the TEDx conference at: