An earcon system to optimize the use of computing devices by the blind.

12 03 2015

UPC students and alumni create a system that makes it easier for blind people to use computing devices.

The HUI system, which won one of the ‘Emprèn UPC’ prizes for the best technology-based business idea, is based on the digital synthesis of three-dimensional sound. The team that created it is now working on bringing the system to market.

The prototype conveys spatial information through sound.

Holofonic User Interface (HUI) is a software solution that uses real-time digital synthesis of three-dimensional sound so that blind or visually impaired users can make full use of computers, smartphones, video games, tablets and any other computing device with a screen.

HUI grew out of the final-year degree project of Oscar Martínez at the UPC’s Terrassa School of Engineering (EET), supervised by Professor Antonio Calomarde of the UPC’s Department of Electronic Engineering.

Xavier Rodríguez, a graduate in Computer Engineering from the Barcelona School of Informatics (FIB), joined the project from the outset, and software developer German Coines joined in recent weeks. The three have built a first prototype of this innovative system, which lets users find the position of the icons on the desktop of any computing device, and launch their functions, without needing to see the screen.

The team is now working on commercializing the system, which took first prize in the Emprèn UPC competition for the best business idea.

 

How it works with ‘earcons’

The Holofonic User Interface (HUI) system is based on the digital synthesis of three-dimensional sound. This technique digitally attaches a characteristic sound to an icon, turning it into an ‘earcon’. Earcons mark folders, applications and programs on any Windows device. When blind users start the computer, they put on ordinary headphones and hear, simultaneously, the sounds that identify each earcon. To find what they are looking for, they simply navigate the desktop with the mouse.

Each earcon’s exact position is located through the sound that characterizes it, and through the changes in its intensity as the mouse cursor approaches or moves away from it. When users hear the sound they are looking for at maximum intensity, and no other sound, they double-click the mouse and can start using the program, application or document they wanted.
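The navigation loop described above, with every earcon audible at once and the nearest one loudest, can be sketched in a few lines. The icon names, positions, gain falloff and activation threshold below are illustrative assumptions, not HUI’s actual implementation:

```python
import math

# Hypothetical desktop: each earcon is an icon at a screen position (pixels).
EARCONS = {
    "documents": (100, 100),
    "browser":   (400, 300),
    "mail":      (700, 150),
}

def earcon_gains(cursor, earcons, falloff=200.0):
    """Map cursor-to-icon distance to a playback gain in [0, 1].

    Gain is 1.0 when the cursor sits on the icon and decays smoothly
    with distance, so the user hears every earcon at once but the
    nearest one dominates.
    """
    cx, cy = cursor
    gains = {}
    for name, (x, y) in earcons.items():
        dist = math.hypot(x - cx, y - cy)
        gains[name] = 1.0 / (1.0 + dist / falloff)
    return gains

def dominant_earcon(gains, threshold=0.9):
    """Return the earcon to activate on double-click, but only if one
    clearly dominates: near-maximum gain, well above all the others."""
    name, top = max(gains.items(), key=lambda kv: kv[1])
    others = [g for n, g in gains.items() if n != name]
    if top >= threshold and all(top > 2 * g for g in others):
        return name
    return None
```

With the cursor directly on the “documents” icon, its gain reaches 1.0 while the other earcons fade into the background, which is the cue to double-click.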

Finding streets with GPS

Another novelty of HUI is that it conveys spatial information through sound. It not only identifies earcons, but can also indicate the route to places, cities or streets on a GPS map without any spoken output. The creators of HUI also see museum applications as feasible: the system could indicate where an artwork is relative to the visitor’s position, or even deliver exhibition content in three-dimensional sound.
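One simple way to “point” toward a destination with sound alone is to pan a tone toward the side the target lies on. The sketch below uses a standard constant-power panning law; the actual cues HUI uses are not published, so this is an assumption about one plausible mechanism:

```python
import math

def bearing_to_pan(user_heading_deg, target_bearing_deg):
    """Map the direction of a target (e.g. a street on a GPS map) to
    constant-power stereo gains, so the destination is 'heard' off to
    the left or right without any spoken directions.

    Returns (left_gain, right_gain).
    """
    # Relative angle in (-180, 180]: negative means target to the left.
    rel = (target_bearing_deg - user_heading_deg + 180.0) % 360.0 - 180.0
    # Clamp to the frontal arc and map to a pan position in [-1, 1].
    pan = max(-90.0, min(90.0, rel)) / 90.0
    theta = (pan + 1.0) * math.pi / 4.0   # 0 .. pi/2
    return math.cos(theta), math.sin(theta)
```

A target straight ahead produces equal gains in both ears; as the user turns away, the tone slides toward the ear closer to the destination.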

The two students are already thinking about applying the same technology to touch devices such as smartphones and tablets, and even to video games and augmented reality systems. According to its creators, this ingenious system will be a step beyond today’s electronic voice-based systems.

How it was designed

Developing HUI involved synthesizing three-dimensional sound using databases that model the human outer ear from real measurements taken on real subjects. Thanks to these databases, three-dimensional sound synthesis is not especially complex, but doing it in real time demands very fast processing and analysis. This is one of the secrets of HUI’s effectiveness.
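The measured outer-ear databases mentioned here (head-related transfer functions) capture rich spectral detail, but the two dominant cues they encode can be approximated cheaply. The sketch below uses the classic Woodworth spherical-head formula for the interaural time difference plus a crude level difference; it is a first-order illustration, not HUI’s synthesis pipeline:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, air at room temperature
HEAD_RADIUS = 0.0875     # m, average human head

def interaural_cues(azimuth_deg, sample_rate=44100):
    """Approximate the two main binaural cues for a source at a given
    azimuth (0 = straight ahead, 90 = fully to one side).

    Returns (itd_samples, far_ear_gain): the far ear hears the sound
    slightly later (Woodworth ITD) and slightly quieter.
    """
    theta = math.radians(azimuth_deg)
    # Woodworth ITD: extra path length around a spherical head.
    itd_seconds = (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))
    itd_samples = round(itd_seconds * sample_rate)
    # Simple level difference: attenuate the far ear with angle.
    far_ear_gain = math.cos(theta / 2.0)
    return itd_samples, far_ear_gain

def binauralize(mono, azimuth_deg, sample_rate=44100):
    """Render a mono signal (list of floats) to (left, right) channels
    by delaying and attenuating the ear farther from the source."""
    itd, far_gain = interaural_cues(abs(azimuth_deg), sample_rate)
    near = list(mono) + [0.0] * itd               # pad to equal length
    far = [0.0] * itd + [s * far_gain for s in mono]
    # Positive azimuth = source on the right, so the left ear is "far".
    return (far, near) if azimuth_deg >= 0 else (near, far)
```

At 90 degrees the model yields an interaural delay of roughly 0.66 ms, about 29 samples at 44.1 kHz, which is in the range reported for human listeners. Doing this per sound source, per audio block, is what makes the real-time requirement demanding.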

They also drew on a technique already used in simulations of auditory systems and in virtual reality applications such as video games. To build the first prototype they worked with Matlab and the open-source language Pure Data, both of which offer strong capabilities for real-time sound processing.

Óscar and Xavier entered their project in Emprèn UPC, the technology-based business idea competition run by the University’s INNOVA programme, and reached the final. INNOVA then gave them entrepreneurship training to help them develop their business idea. In the end, HUI was one of the winning projects, alongside two other initiatives.

Business model and first applications

In the first stage, HUI’s business model is based on selling licences to use the application. Later, the team will develop an application programming interface (API) that will let developers integrate HUI’s functionality into their own applications.

HUI’s creators intend to release simple applications and mobile games regularly in order to showcase the system’s possibilities. The first is expected to be an augmented reality application with integrated 3D sound.

 

 

Upc.edu [online] Barcelona (ESP): upc.edu, 12 March 2015 [ref. 4 December 2012] Available online: http://www.upc.edu/saladepremsa/al-dia/mes-noticies/estudiantes-y-alumni-de-la-upc-crean-un-sistema-que-facilita-a-las-personas-invidentes-el-uso-de-dispositivos-informaticos



EyeMusic SSD: Can the blind ‘hear’ colors and shapes?

23 06 2014

What if you could “hear” colors? Or shapes?  These features are normally perceived visually, but using sensory substitution devices (SSDs) they can now be conveyed to the brain noninvasively through other senses.

Prof. Amir Amedi: innovating solutions for the blind and visually impaired, at the Hebrew University of Jerusalem (Photo: Hebrew University)


At the Center for Human Perception and Cognition, headed by Prof. Amir Amedi of the Edmond and Lily Safra Center for Brain Sciences and the Institute for Medical Research Israel-Canada at the Hebrew University of Jerusalem Faculty of Medicine, the blind and visually impaired are being offered tools, via training with SSDs, to receive environmental visual information and interact with it in ways otherwise unimaginable. The work of Prof. Amedi and his colleagues is patented by Yissum, the Hebrew University’s Technology Transfer Company.

SSDs are non-invasive sensory aids that provide visual information to the blind via their existing senses. For example, using a visual-to-auditory SSD in a clinical or everyday setting, users wear a miniature camera connected to a small computer (or smart phone) and stereo headphones. The images are converted into “soundscapes,” using a predictable algorithm, allowing the user to listen to and then interpret the visual information coming from the camera.
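The “predictable algorithm” behind such soundscapes can be illustrated with a minimal column-sweep mapping: the image is scanned left to right over time, each row gets a sine tone (higher rows mean higher pitch), and pixel brightness sets that tone’s loudness. EyeMusic’s published mapping is richer (musical notes, color conveyed by instrument timbre), so the parameters below are illustrative only:

```python
import math

def image_to_soundscape(image, duration=1.0, sample_rate=8000,
                        f_min=200.0, f_max=4000.0):
    """Render a grayscale image (rows of 0..1 values) as a mono
    'soundscape': columns are scanned left to right over time, each
    row is assigned a sine tone, and pixel brightness sets that
    tone's loudness. Returns a list of audio samples.
    """
    n_rows, n_cols = len(image), len(image[0])
    samples_per_col = int(duration * sample_rate / n_cols)
    # One frequency per row, spaced logarithmically; top row is highest.
    freqs = [f_min * (f_max / f_min) ** ((n_rows - 1 - r) / max(1, n_rows - 1))
             for r in range(n_rows)]
    out = []
    for col in range(n_cols):              # time axis: one column at a time
        for _ in range(samples_per_col):
            t = len(out) / sample_rate
            s = sum(image[r][col] * math.sin(2 * math.pi * freqs[r] * t)
                    for r in range(n_rows))
            out.append(s / n_rows)         # normalize so the sum cannot clip
    return out
```

A bright diagonal line, for example, becomes a tone that falls in pitch as the sweep advances, which is the kind of regularity users learn to interpret during training.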

With the EyeMusic SSD (available free at the Apple App store at http://tinyurl.com/oe8d4p4), one hears pleasant musical notes to convey information about colors, shapes and location of objects in the world.

Using this SSD equipment and a unique training program, the blind are able to achieve various complex, vision-linked abilities. In recent articles in Restorative Neurology and Neuroscience and Scientific Reports, blind and blindfolded sighted users of the EyeMusic were shown to correctly perceive and interact with objects, such as recognizing different shapes and colors or reaching for a beverage. (A live demonstration can be seen at http://youtu.be/r6bz1pOEJWg.)

 

In another use of EyeMusic, it was shown that the device can also guide fast and accurate movements through visuo-motor learning. In studies published in two prestigious scientific journals, Neuron and Current Biology, it was demonstrated that the blind can sort sound-conveyed images into complex object categories (such as faces, houses, outdoor scenes and everyday objects) and could locate people’s positions, identify facial expressions and read letters and words. (See the YouTube channel http://www.youtube.com/amiramedilab for demonstrations.)

Despite these encouraging behavioral demonstrations, SSDs are not yet widely used by the blind population. However, a recent review published in Neuroscience & Biobehavioral Reviews argues that the factors that have prevented their adoption have improved over the past few years. For instance, new technological advances make SSDs much cheaper, smaller and lighter, and they can run on a standard smartphone. Additionally, new computerized training methods and environments boost training and performance.

The Hebrew University research has shown that, contrary to the long-held view of the cortex as divided into separate vision-processing areas, auditory areas, and so on, findings over the past decade demonstrate that many brain areas are characterized by their computational task. These areas can be activated by senses other than the one commonly used for that task, even in people who were never exposed to the “original” sensory information at all, such as a person born blind who has never seen a single photon of light.

When processing “visual” information conveyed through an SSD, the researchers showed that congenitally blind people who learned to read by touch using the Braille script, or through their ears with sensory substitution devices, use the same areas of the visual cortex as sighted readers. A recent example of this approach, just published in Current Biology, shows that blind subjects “see” body shapes via their ears using SSD equipment and training.

There is a whole network of regions in the human brain dedicated to processing and perceiving of body shapes, starting from the areas processing vision in the cortex, leading to the “Extrastriate Body Area,” or EBA, and further connecting to multiple brain areas deciphering people’s motion in space, their feelings and intents.

 

In tests with the blind, it was found that their EBA was functionally connected to the whole network of body-processing found in the sighted. This lends strength to the researchers’ new theory of the brain as a sensory-independent task machine, rather than as a pure sensory (vision, audition, touch) machine.

“The human brain is more flexible than we thought,” says Prof. Amedi. “These results give a lot of hope for the successful regaining of visual functions using cheap non-invasive SSDs or other invasive sight restoration approaches. They suggest that in the blind, brain areas have the potential to be ‘awakened’ to processing visual properties and tasks even after years or maybe even lifelong blindness, if the proper technologies and training approaches are used.”

 

 

 

New.huji.ac.il [online] Jerusalem (ISR): new.huji.ac.il, 23 June 2014 [ref. 9 March 2014] Available online: http://new.huji.ac.il/en/article/19856

 



Dianne Ashworth Bionic Eye: ‘Little Flash’ Brings Australian Woman Some Sight

10 09 2012

SYDNEY (Reuters) – A bionic eye has given an Australian woman partial sight and researchers say it is an important step towards eventually helping visually impaired people get around independently.

Dianne Ashworth, who has severe vision loss due to the inherited condition retinitis pigmentosa, was fitted with a prototype bionic eye in May at the Royal Victorian Eye and Ear Hospital. It was switched on a month later.

“All of a sudden I could see a little flash … it was amazing,” she said in a statement.

“Every time there was stimulation there was a different shape that appeared in front of my eye.”

The bionic eye, designed, built and tested by Bionic Vision Australia, a consortium of researchers partially funded by the Australian government, is equipped with 24 electrodes and a small wire that extends from the back of the eye to a receptor attached behind the ear.

It is inserted into the choroidal space, the space next to the retina within the eye.

“The device electrically stimulates the retina,” said Dr Penny Allen, a specialist surgeon who implanted the prototype.

“Electrical impulses are passed through the device, which then stimulate the retina. Those impulses then pass back to the brain (creating the image).”

The device restores mild vision, where patients are able to pick up major contrasts and edges such as light and dark objects. Researchers hope to develop it so blind patients can achieve independent mobility.

“Di is the first of three patients with this prototype device; the next step is analyzing the visual information that we are getting from the stimulation,” Allen said.

The operation itself was made simple so it can be readily taught to eye surgeons worldwide.

“We didn’t want to have a device that was too complex, with a surgical approach that was very difficult to learn,” Allen said.

Similar research has been conducted at Cornell University in New York, where researchers have deciphered the neural code, the pattern of pulses that transfers information to the brain, in mice.

The researchers have developed a prosthetic device that has succeeded in restoring near-normal sight to blind mice.

According to the World Health Organization, 39 million people around the world are blind and 246 million have low vision.

“What we’re going to be doing is restoring a type of vision which is probably going to be black and white, but what we’re hoping to do for these patients who are severely visually impaired is to give them mobility,” Allen said.


 

Huffingtonpost.com [online] Sydney (AUS): huffingtonpost.com, 10 September 2012 [ref. 30 August 2012] Available online: http://www.huffingtonpost.com/2012/08/30/dianne-ashworth-bionic-eye-sight-australian-woman_n_1841960.html