
Sensory substitution as a universal accessibility agent for museums

I recently participated in a workshop on the accessibility of arts and culture, with particular reference to museums. It was conducted at the National Museum under the auspices of UNESCO. The idea behind the workshop was to explore what could be done to make art accessible to the blind. The museum had already taken a number of steps to make objects accessible, and a number of presenters also covered how museum accessibility is being handled globally. The primary approaches appear to be the following.

  • The use of audio descriptions along with associated broadcast technology.
  • The creation of 3-D models of objects that are on display.
  • The use of tactile graphics to display paintings.

All of the above methods are time- and labor-intensive. Moreover, there are significant constraints on their application. One of the biggest challenges is the lack of space: there is insufficient room to store the 3-D models. Moreover, many of the 3-D models that have been created are not durable. Museums also change their displays frequently, which means the associated audio descriptions must change as well.

Several solutions were proposed to address these problems. For example, it was suggested that curators decide which objects best tell the story of a collection and translate only those into tactile representations. The issue of durability could be addressed by using more robust materials for the 3-D models.

The other set of challenges museums face relates to logistics. In most cases, disabled visitors need to book an appointment before visiting a museum. This allows museum staff to get things organized and to ensure the best possible experience for their disabled patrons.

As a disabled consumer, I foresee several issues with these accommodations. Do not get me wrong, they are nice and the intentions behind them are good. They even work. However, the problems I have are as follows.

  • I want the right to be disorganized. Not every disabled person is a good planner. Moreover, a disabled person should be able to walk into a museum and enjoy its exhibits without the additional step of planning.
  • There are very few audio described items and even fewer tactile representations.
  • These facilities are available only at certain museums.

Enter the vOICe

The vOICe is a generic solution that converts images to sound. All it needs is a camera and some kind of computing device to run on. It can run as a mobile phone application or as a program on a laptop or desktop computer. It is image-agnostic and can translate images of any complexity and at any resolution. The interpretation of these images is left to the user.
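To make the idea concrete, here is a minimal sketch of the general image-to-sound technique the vOICe popularized: a left-to-right column scan in which vertical position maps to pitch and brightness maps to loudness. The function name and parameter values here are my own illustrative choices, not the vOICe's actual implementation.

```python
import numpy as np

def image_to_soundscape(image, duration=1.05, sample_rate=11025,
                        f_low=500.0, f_high=5000.0):
    """Render a grayscale image (2-D array, values 0..1, row 0 = top)
    as a left-to-right "soundscape".

    Each column becomes a short time slice; each row maps to a sine
    tone whose pitch rises toward the top of the image and whose
    loudness follows pixel brightness.
    """
    rows, cols = image.shape
    samples_per_col = int(duration * sample_rate / cols)
    # Exponentially spaced frequencies: top row gets the highest pitch.
    freqs = f_low * (f_high / f_low) ** np.linspace(1.0, 0.0, rows)
    out = np.zeros(cols * samples_per_col)
    for c in range(cols):
        t = np.arange(c * samples_per_col,
                      (c + 1) * samples_per_col) / sample_rate
        # Sum one sine per row, weighted by that pixel's brightness.
        col_audio = (image[:, c, None] *
                     np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        out[c * samples_per_col:(c + 1) * samples_per_col] = col_audio
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out
```

A bright dot high in the image thus produces a brief, high-pitched blip whose timing within the sweep reveals its horizontal position.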

As I’ve stated in other posts on this blog, the vOICe gives the user a direct sensation of shape without any interpretation.
To summarize, the vOICe combines audio and tactile representations into a single modality. There is no manual effort required in the preparation of audio descriptions or tactile representations. This also solves the problem of space. There is no need to maintain multiple representations of an item.

One of the criticisms often levelled at the vOICe is its steep learning curve. There is no denying that interpreting soundscapes needs to be learnt; it does take time and effort to do this effectively. However, museums already have textual labels explaining what items are in a given display. These usually contain enough information for the blind patron to get a good idea of what he or she is looking at.

The vOICe facilitates true universal accessibility. All it needs to function effectively is good lighting, clearly printed text labels (though that is more a user requirement than a technical one) and clean displays. These, incidentally, are requirements for all patrons irrespective of their level of ability.

It is possible to record the soundscapes from the vOICe and incorporate them into digital media presentations. This would allow people who are unable to access the museum physically to still enjoy the exhibits. Lastly, this will allow curators to sleep easier, because no one will have to “touch” their precious collections!


Acknowledgements:

  • To the Blind with Camera foundation for giving me the opportunity to present the vOICe and to participate in the workshop.
  • To the National Museum and UNESCO for being fantastic hosts.
  • To Saksham for making the Santhal collection accessible.

Five things to like about J-Say 13

The J-Say product has undergone a significant facelift. It is now owned by Hartgen Consultancy, a company founded and headed by its developer, Brian Hartgen. J-Say is middleware that interfaces Dragon NaturallySpeaking Professional with the Jaws for Windows screen reader, allowing users to control Jaws for Windows with their voice and giving them full control of their computer. J-Say version 13 has just been released, and there are five features I really like about this upgrade.

  1. The “forget it” command, which deletes whatever was dictated in the last utterance and speaks what remains. This allows me, as a user, to retain the context of my dictation.
  2. The new correction feature using the “select” command. If the corrected phrase is in the choices, it improves the speed of editing significantly. No need to redictate or to open the spell box and re-enter the text.
  3. This is probably the fastest version of J-Say so far. I can feel this especially in Microsoft Outlook when I have to get through a large number of messages quickly. I prefer using the keyboard for this, but speech recognition has now become an acceptable option.
  4. J-Say tags allow the user to mark files across folders and drives and then delete, move or copy them. This beats holding down the shift key and moving around with the arrow keys, or holding down the control key and marking files with the arrow keys.
  5. You can stay current with Jaws for Windows versions. J-Say no longer restricts you to a specific build of Jaws.
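The tagging idea in point 4 can be sketched in a few lines. The class and method names below are hypothetical, purely to illustrate the concept of marking files anywhere on disk and then acting on the whole set at once; J-Say's own implementation is of course different.

```python
import shutil
from pathlib import Path

class FileTagger:
    """Minimal sketch of cross-folder file tagging: collect paths from
    anywhere on disk, then operate on the whole set as a batch."""

    def __init__(self):
        self.tagged = set()

    def toggle(self, path):
        """Tag a file, or untag it if it is already tagged."""
        p = Path(path)
        if p in self.tagged:
            self.tagged.discard(p)
        else:
            self.tagged.add(p)

    def move_all(self, destination):
        """Move every tagged file into one destination folder."""
        dest = Path(destination)
        dest.mkdir(parents=True, exist_ok=True)
        for p in sorted(self.tagged):
            shutil.move(str(p), str(dest / p.name))
        self.tagged.clear()
```

Because the tag set is independent of any one folder view, the selection survives as you browse elsewhere, which is exactly what shift- or control-arrow selection cannot do.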

Finally, for those of you contemplating buying this technology, the price has been reduced significantly. Please see the J-Say website for more information.

Learn, Enable, Advance, So Easy!

I have been beta testing a new offering from Brian Hartgen and the people at Astec called Leasey which stands for Learn, Enable, Advance, and So Easy! Yes, these are the same people who produced J-Say and J-Tools.

Leasey is a successor to J-Tools. Mr. Hartgen has surpassed himself in catering to multiple audiences with this program. The primary focus of Leasey is on users who do not want or need to face the complexity of the computer. Leasey presents a simple menu system which facilitates common tasks such as writing letters, sending e-mail, spell checking documents, setting appointments, and downloading and reading books.

The user has help at each stage of the process. For example, when the user is focused on a link, she can ask for help and will be told how to execute that link. The help however is not intrusive. Every action requires the user to press a key. This way, there are no surprises for the user. How many times have you heard people exclaim “Oh, that thing just came up?” Anyone remember the paper clip in earlier versions of Microsoft Word? In addition, Leasey does not completely mask the user interface of the computer. The user still has to use the common open file dialogue box etc. However, the introduction to these concepts is gradual.

As regards advanced users, Leasey has several features such as the ability to tag files for selection and movement. The tagging extends across folders and drives. The program can also store shortcuts to documents and webpages. Yes, you can have stacks of links on your desktop but those will slow down your computer. Here, the shortcuts are in a simple list box which supports first letter navigation.
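The first-letter navigation behaviour of such a list box can be sketched as follows. This is an illustrative model of the technique, not Leasey's code; the class name and details are my own assumptions.

```python
class ShortcutList:
    """Sketch of a list box with first-letter navigation: pressing a
    letter repeatedly cycles through the entries starting with it."""

    def __init__(self, names):
        self.names = sorted(names, key=str.lower)
        self.index = 0  # currently focused entry

    def press(self, letter):
        """Jump to the next entry (wrapping around) whose name starts
        with the given letter; return it, or None if there is none."""
        n = len(self.names)
        for step in range(1, n + 1):
            i = (self.index + step) % n
            if self.names[i].lower().startswith(letter.lower()):
                self.index = i
                return self.names[i]
        return None
```

With, say, thirty shortcuts, reaching “Bank” is one or two keystrokes instead of a long run of arrow keys, which is where the efficiency gain comes from.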

These features make Leasey a tremendous efficiency booster. Take the example of navigating to your bank’s website. If you did not use Leasey, you would do something like this.

  1. Launch your browser or if your browser was open, open a new tab.
  2. Expand your bookmarks / favorites and find the site.
  3. Hit the enter key to go to the site.
  4. Find the place to enter your credentials.

If you have Leasey, you can do the following.

  1. Open a list of Leasey shortcuts.
  2. Navigate to the shortcut belonging to your bank and activate it.
  3. Use a Leasey point to get to where you have to enter your credentials.

See the difference in steps? I leave it to your imagination to compute the reduction in keystrokes.

This point about reduced keystrokes allows me to segue into a significant health-related benefit of Leasey; namely, the mitigation of repetitive stress injury, also known as RSI. I am not saying that Leasey is a cure-all for RSI, but the less you type, the lower the chances of RSI. This is especially significant for blind people, because the computer is an integral part of their work and life.

As of this writing, Leasey supports the Jaws for Windows screen reader. Yes, it does require you to use popular applications like foobar 2000 for playing media but many of its features such as Leasey notes work in any application.
You can read more about Leasey at its website. If you are in the UK, you can see it being demonstrated at the Sight Village exhibition. Do check it out!

Leasey presentations of interest

Leasey at TechTalk on Accessible World
All about Leasey Advanced

The definitive guide to getting a miniseika notetaker working with JAWS for Windows over USB

The key thing to remember about the miniseika notetaker is that the manual does not give you an accurate picture of the installation steps. If you need to install the miniseika with Jaws for Windows, please do the following.
I am assuming that you are unable to access the CD that comes with your device.

  1. Download the correct driver from the following link.
    Product Support Perkins Products
    The direct link to the driver is below.
    Perkins Mini driver for JAWS (XP/Vista/7)
  2. Connect the miniseika to your computer. There is no need to install the supplied driver, because the miniseika registers itself as a Human Interface Device (HID) compliant device.
  3. Once you install the downloaded driver, follow the steps in the manual to connect the miniseika to Jaws for Windows.

Be warned: if you do not see an option for USB output in the Braille dialog of Jaws for Windows, which is accessed from the Options menu, you have the wrong driver. In addition, the driver zip archive is about 81MB in size.

A guide on using the Talking Goggles app on the iPhone

A guide on setting up and using the Talking Goggles app. This guide is being posted with the permission of its creator, Anne Robertson.

Set up

Open Settings and find Goggles. Click the button and set Save captured image to Off; Speak out to On; Performance to Balanced.
Close Settings.


Open Goggles.
The logic of the buttons is that they tell you what will happen if you click them.
Set Change language now to English and the last button to Still camera. Goggles is now in video mode.
When the middle button is saying Record, the video is paused.
When the video is running, the Flash button becomes visible. When it reads just Flash, the flash is off.

Identifying objects and recognising text

To identify an object or piece of text, make sure there’s enough light and double tap Record, then hold the iPhone about a foot away from the item in question. If the light level is low, put the flash on.
Be patient.
