Sensory substitution as a universal accessibility agent for museums

I recently participated in a workshop on the accessibility of arts and culture, with a particular focus on museums. It was conducted at the National Museum under the auspices of UNESCO. The idea behind the workshop was to explore what could be done to make art accessible to the blind. The museum had already taken a number of steps to make its objects accessible, and several presenters covered how museum accessibility is being handled globally. The primary approaches appear to be the following:

  • The use of audio descriptions along with associated broadcast technology.
  • The creation of 3-D models of objects that are on display.
  • The use of tactile graphics to display paintings.

All of the above methods are time and labor intensive, and there are significant constraints on their application. One of the biggest challenges is the lack of space: there is often nowhere to store the 3-D models. Many of the 3-D models that have been created are also not durable. In addition, museums change their displays frequently, which means the audio descriptions have to be updated as well.

Several solutions were proposed for these problems. For example, it was suggested that curators decide which objects best tell the story of the collection and translate only those into tactile representations. The issue of durability could be addressed by using better materials for the 3-D models.

The other set of challenges museums face relates to logistics. In most cases, disabled visitors need to book an appointment in advance before visiting a museum. This allows museum staff to get things organized and to ensure the best possible experience for their disabled patrons.

As a disabled consumer, I foresee several issues with these accommodations. Do not get me wrong: they are nice, the intentions behind them are good, and they even work. However, the problems I have are as follows.

  • I want the right to be disorganized. Not every disabled person is a good planner, and a disabled person should be able to walk into a museum and enjoy its exhibits without the extra step of planning.
  • There are very few audio-described items and even fewer tactile representations.
  • These facilities are available only at certain museums.

Enter the vOICe

The vOICe is a generic solution that converts images to sound. All it needs is a camera and some kind of computing device to run on. It can be run as a mobile phone application or as a program on a laptop or desktop computer. It is image-agnostic and can translate images of any complexity and at any resolution. The interpretation of these images is left to the user.

As I’ve stated in other posts on this blog, the vOICe gives the user a direct sensation of shape without any interpretation. To summarize, the vOICe combines audio and tactile representations into a single modality. No manual effort is required to prepare audio descriptions or tactile representations, and this also solves the problem of space: there is no need to maintain multiple representations of an item.
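To give a sense of how this kind of image-to-sound mapping can work, here is a minimal sketch in Python of the general sensory-substitution principle usually described for the vOICe: the image is swept from left to right over time, vertical position maps to pitch, and pixel brightness maps to loudness. This is an illustrative toy, not the vOICe's actual implementation; the resolution, scan duration, frequency range, and file names are assumptions chosen for the example.

```python
import numpy as np
from PIL import Image
import wave

# Illustrative parameters (assumptions, not the vOICe's actual settings).
SAMPLE_RATE = 22050            # audio samples per second
SCAN_SECONDS = 1.0             # time to sweep the image from left to right
ROWS, COLS = 64, 64            # grid the image is reduced to
F_LOW, F_HIGH = 500.0, 5000.0  # bottom row -> low pitch, top row -> high pitch

def image_to_soundscape(image_path: str, out_path: str = "soundscape.wav") -> None:
    """Convert an image into a left-to-right audio sweep.

    Column position maps to time, row position maps to pitch,
    and pixel brightness maps to loudness.
    """
    # Load the image, convert to grayscale, and shrink it to a manageable grid.
    img = Image.open(image_path).convert("L").resize((COLS, ROWS))
    pixels = np.asarray(img, dtype=np.float64) / 255.0  # brightness in [0, 1]

    # One sine oscillator per row; higher rows get higher frequencies.
    freqs = np.linspace(F_HIGH, F_LOW, ROWS)  # row 0 is the top of the image

    samples_per_col = int(SAMPLE_RATE * SCAN_SECONDS / COLS)
    t = np.arange(samples_per_col) / SAMPLE_RATE
    audio = []

    for col in range(COLS):
        # Sum the sinusoids for this column, weighted by pixel brightness.
        # (Phase resets each column; a real system would be smoother.)
        column = pixels[:, col]                         # shape (ROWS,)
        tones = np.sin(2 * np.pi * freqs[:, None] * t)  # shape (ROWS, samples_per_col)
        chunk = (column[:, None] * tones).sum(axis=0)
        audio.append(chunk)

    signal = np.concatenate(audio)
    # Normalize to 16-bit range and write a mono WAV file.
    signal = (signal / (np.abs(signal).max() + 1e-9) * 32767).astype(np.int16)
    with wave.open(out_path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(signal.tobytes())

# Example usage (hypothetical file name):
# image_to_soundscape("exhibit_photo.jpg")
```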

One of the criticisms often levelled at the vOICe is its steep learning curve. There is no denying that interpreting soundscapes has to be learnt, and it does take time and effort to do this effectively. However, museums already have textual labels explaining what items are in a given display, and these usually contain enough information for a blind patron to get a good idea of what he or she is looking at.

The vOICe facilitates true universal accessibility. All it needs to function effectively is good lighting, clearly printed text labels (that is more a user requirement), and clean displays. These, incidentally, are requirements for all patrons, irrespective of their level of ability.

It is possible to record the soundscapes from the vOICe and incorporate them into digital media presentations. This would allow people who are unable to visit the museum in person to still enjoy the exhibits. Lastly, this will allow curators to sleep easier, because no one will have to “touch” their precious collections!

Acknowledgements

  • To the Blind with Camera foundation for giving me the opportunity to present the vOICe and to participate in the workshop.
  • To the National Museum and UNESCO for being fantastic hosts.
  • To Saksham for making the Santhal collection accessible.

Five things to like about J-Say 13

The J-Say product has undergone a significant facelift. It is now owned by Hartgen Consultancy, a company founded and headed by its developer, Brian Hartgen. J-Say is middleware that interfaces Dragon NaturallySpeaking Professional with the Jaws for Windows screen reader, allowing users to control Jaws for Windows, and thereby their entire computer, by voice. J-Say version 13 has just been released. There are five features I really like about this upgrade.

  1. The “forget it” command, which deletes whatever was dictated in the last utterance and speaks what is left. This allows me, as a user, to retain the context of my dictation.
  2. The new correction feature using the “select” command. If the phrase you want is among the choices, it improves the speed of editing significantly; there is no need to redictate or to open the spell box and re-enter the text.
  3. This is probably the fastest version of J-Say so far. I can feel this especially in Microsoft Outlook, when I have to get through a large number of messages quickly. I prefer using the keyboard for this, but speech recognition has now become an acceptable option.
  4. J-Say tags allow the user to mark files across folders and drives and then delete, move, or copy them. This beats holding down the shift key and moving around with the arrow keys, or holding down the control key and marking files with the arrow keys.
  5. You can stay current with Jaws for Windows versions. J-Say no longer restricts you to a specific build of Jaws.

Finally, for those of you contemplating buying this technology, the price has been reduced significantly. Please see the J-Say website for more information.