Planet Abled invited me to speak at the Access To Travel conference it conducted on 27 September 2017. I went in expecting the usual conference but was pleasantly surprised. It was held at the Park Hotel. The conference experience began just after the metal detectors. I was barely through the device when I was met by a Planet Abled team member and escorted to the venue. I stopped short as I entered the hall. Something was wrong. The light level dropped, and I was in an enclosed space. I am light independent, so I continued through and entered the auditorium. It turned out that the entrance had been reconfigured as a darkness simulator.
Cameras and Planet Abled staff were everywhere. Unlike other conferences, they did not ask me to settle down; there was an active focus on moving around and interacting, and the staff helped. We, the tourists, had a chance to meet key leaders in the travel industry such as Mr. Subash Goyal. The food was good and, crucially, dry. I have attended more than my share of conferences where the food swims in gravy and is impossible to eat without two working hands.
It is rare to have a discussion on recreation in India. The Access To Travel conference was one of the few events that not only addressed the challenges of traveling with a disability but also let me see what everyone else was doing to have fun.
The most enjoyable part of the conference for me was the emphasis on stories. Each speaker had a story to tell and the time to tell it in. It was also very easy to ask speakers questions and to meet those who stayed after the conference. As always, the speakers came from across disabilities. I returned home with a greater sense of unity. The senses we used to engage with the world were different, but the problems were the same and had the same broad solution, namely making people better humans and treating each other with dignity and respect.
Artificial vision in the enterprise
I recently acquired the Vision 800 smart glasses. This gave me a compact and convenient setup for running the vOICe, and I have had several visual experiences since. When I wear the glasses, I am effectively wearing an Android tablet on my head. Yes, it would be nice to do multiple things with it, but given the specifications of the glasses, I use them as a dedicated vision device. I also use bone conduction headphones.
- I am able to read floor numbers as well as other signage. This means that when the fire marshal asks me to exit from gate 3B, I know what he is talking about. In addition, I can navigate the stairs independently and do not need to count floors.
- I can lean forward and see if my laptop has an error message. It is easier to talk to the help desk if I can describe the problem, and many times I can solve it independently.
- I am better at indoor navigation. I am able to tell when silent humans are in the way.
- The camera on the Vision 800 glasses is on the extreme left. I am not used to scanning, so the narrow field of view and the left orientation do not match my body's sense of space. This is taking some getting used to.
- I am also still working out the right time to look down.
- I can derive more information about my environment such as detecting flower pots that have been placed on top of filing cabinets.
- Bone conduction headphones are a double-edged sword. Yes, I can hear environmental sounds, but in situations like lunch time at the office cafeteria they are almost useless. I cannot hear the soundscapes unless I increase the volume significantly in the vOICe.
- I have run the glasses for over 8 hours. They do not heat up much.
- I can better handle situations where colleagues leave things on different parts of my table. For example, a colleague heated my lunch and his. He put my lunch down somewhere but forgot to tell me that he had done this. I scanned the table and was able to get hold of my plate.
Post-processing images, including describing them automatically
As most of you know, I publish plenty of images on this blog, and I ensure that all of them are described. The biggest challenge in posting photographs here is captioning them: I have to get images described manually before I put them up. Once I take my photographs, I group them by location, thanks to the geotagging my phone does, and send them to people who were on the trip, who then describe the images. I have been searching for solutions that describe images automatically, so I was thrilled to learn that WordPress had a plugin that used the Microsoft Cognitive Services API to describe images automatically. The describer plugin, however, did not give me location information, so I rolled my own code in Python. I have created a utility that queries Google for the location and the Microsoft Cognitive Services API for image descriptions and writes both to a text file. I had tried to embed the descriptions in EXIF tags, but that did not work and I cannot tell why.
References
You will need an API key from the below link.
Microsoft Cognitive Services API
The WordPress plugin that uses the Microsoft Cognitive Services API to automatically describe images when they are uploaded
Notes
- You will need to keep your Cognitive Services API key alive by describing images at least once every 90 days, I think.
- Do account for Google’s usage limits on the reverse geocoding API.
- In the code, do adjust where the image files you want described live, as well as where you want the log file to be stored.
- Do ensure you add your API key before you run the code.
import glob
import json

import geocoder
import piexif
import requests
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS


def _get_if_exist(data, key):
    if key in data:
        return data[key]
    return None


def get_exif_data(fn):
    """Return a dictionary from the EXIF data of a PIL Image item. Also converts the GPS tags."""
    image = Image.open(fn)
    exif_data = {}
    info = image._getexif()
    if info:
        for tag, value in info.items():
            decoded = TAGS.get(tag, tag)
            if decoded == "GPSInfo":
                gps_data = {}
                for t in value:
                    sub_decoded = GPSTAGS.get(t, t)
                    gps_data[sub_decoded] = value[t]
                exif_data[decoded] = gps_data
            else:
                exif_data[decoded] = value
    return exif_data


def _convert_to_degrees(value):
    """Convert the GPS coordinates stored in the EXIF to degrees in float format."""
    d = float(value[0][0]) / float(value[0][1])
    m = float(value[1][0]) / float(value[1][1])
    s = float(value[2][0]) / float(value[2][1])
    return d + (m / 60.0) + (s / 3600.0)


def get_lat_lon(exif_data):
    """Return the latitude and longitude, if available, from exif_data (obtained through get_exif_data above)."""
    lat = None
    lon = None
    if "GPSInfo" in exif_data:
        gps_info = exif_data["GPSInfo"]
        gps_latitude = _get_if_exist(gps_info, "GPSLatitude")
        gps_latitude_ref = _get_if_exist(gps_info, "GPSLatitudeRef")
        gps_longitude = _get_if_exist(gps_info, "GPSLongitude")
        gps_longitude_ref = _get_if_exist(gps_info, "GPSLongitudeRef")
        if gps_latitude and gps_latitude_ref and gps_longitude and gps_longitude_ref:
            lat = _convert_to_degrees(gps_latitude)
            if gps_latitude_ref != "N":
                lat = 0 - lat
            lon = _convert_to_degrees(gps_longitude)
            if gps_longitude_ref != "E":
                lon = 0 - lon
    return lat, lon


def getPlaceName(fn):
    # Reverse geocode the coordinates found in the image's EXIF data.
    lli = get_lat_lon(get_exif_data(fn))
    g = geocoder.google(lli, method='reverse')
    return g.address or ""


def getImageDescription(fn):
    # Ask the Microsoft Cognitive Services vision API for a one-line description.
    payload = {'visualFeatures': 'Description'}
    headers = {'Ocp-Apim-Subscription-Key': 'myKey'}  # replace myKey with your own API key
    with open(fn, 'rb') as f:
        r = requests.post('https://api.projectoxford.ai/vision/v1.0/describe',
                          params=payload, files={'file': f}, headers=headers)
    data = json.loads(r.text)
    return data['description']['captions'][0]['text']


def tagFile(fn, ds):
    # Embed the description in the image's EXIF data. The original version used
    # an invalid piexif key ('Description''Comment' concatenates into one bad key),
    # which is the likely reason the EXIF write never worked; the standard
    # ImageDescription tag in the '0th' IFD is used here instead.
    img = Image.open(fn)
    exif_dict = piexif.load(img.info["exif"])
    exif_dict["0th"][piexif.ImageIFD.ImageDescription] = ds
    exif_bytes = piexif.dump(exif_dict)
    piexif.insert(exif_bytes, fn)


def createLog(dl):
    with open('imageDescriberLog.txt', 'a+') as f:
        f.write(dl)
        f.write("\n")


path = "*.jpg"  # adjust this to point at the folder holding your images
for fname in glob.glob(path):
    print("processing:" + fname)
    createLog("processing:" + fname)
    imageLocation = ""
    imageDescription = ""
    try:
        imageLocation = getPlaceName(fname)
    except Exception:
        createLog("error in getting location name for file: " + fname)
    try:
        imageDescription = getImageDescription(fname)
    except Exception:
        createLog("error in getting description of file: " + fname)
    imgString = "Description: " + imageDescription + "\n" + "location: " + imageLocation
    createLog(imgString)
    try:
        tagFile(fname, imgString)
    except Exception:
        createLog("error in writing exif tag to file: " + fname)
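Assuming the script sits in the folder that holds your JPEGs, a run appends entries like the following to imageDescriberLog.txt. The file name, description and address below are made up purely for illustration.

processing:IMG_2041.jpg
Description: a vase of flowers sitting on a table
location: Khan Market, New Delhi, Delhi, India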
Getting fenrir talking on a Raspberry Pi
Fenrir is a promising new user-mode screen reader, written primarily in Python 3. Here are my instructions for installing it on a Raspberry Pi.
I am assuming that you are using Raspbian Jessie Lite and are at a terminal prompt.
Updating your installation
You must update your installation of Raspbian, otherwise components like espeak will not install correctly.
sudo apt-get update
Let the update finish. The above command just fetches the list of packages that need updating. Now do the actual upgrade.
sudo apt-get upgrade -y
This is going to take a while to complete.
You now have to install several dependencies for fenrir to work.
Espeak
This is the speech synthesizer fenrir will use.
sudo apt-get install libespeak-dev
Python 3
sudo apt-get install python3-pip -y
Fenrir is written in Python 3.
The python-daemon package
sudo apt-get install python-daemon -y
A package that allows fenrir to run as a service.
evdev
sudo pip3 install evdev
A python package that handles keyboard input.
The speech-dispatcher package
sudo apt-get install speech-dispatcher -y
The above package is required to get fenrir to talk to a speech synthesizer such as espeak.
The ConfigParser package
sudo pip3 install configparser
You may need this dependency to parse the fenrir configuration file.
A module for checking spelling
sudo apt-get install enchant -y
sudo pip3 install pyenchant
An optional package to handle spell checking.
Git
sudo apt-get install git -y
Git is a version control system which you will use to pull down the latest source code of fenrir.
It is now time to install fenrir. Execute the following command.
git clone https://github.com/chrys87/fenrir.git
You now need to start configuring the pi for fenrir to work.
Execute the following command.
sudo spd-conf
You will be asked a series of questions. The main thing you need to change is the sound output method: you need to use ALSA if, like me, you are using the 3.5 mm headphone jack of the Pi. When you are asked about using pulseaudio, type “alsa” without the quotes. Hit enter after each answer to move to the next prompt. Do not forget to adjust the speed and pitch to your liking.
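If you want to tweak these settings later without rerunning the wizard, they live in the speech-dispatcher configuration file, usually /etc/speech-dispatcher/speechd.conf, though spd-conf may write a per-user copy under ~/.config/speech-dispatcher/ instead. As a sketch, the relevant lines look something like this; the rate and pitch values here are just placeholders.

AudioOutputMethod "alsa"
DefaultRate 20
DefaultPitch 0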
You now need to test your speech synthesizer configuration. Do ensure that you have your headphones or speakers ready.
sudo spd-say testing
If you hear the word “testing”, you are good to start fenrir. If not, look through your logs and seek support.
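You can also audition different speeds and pitches straight from the command line before committing them to the configuration. The numbers below are purely illustrative; both flags accept values from -100 to 100.

sudo spd-say -r -30 -p 10 testing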
To start fenrir, execute the following command.
Warning: you will need to change terminals once you execute the command below, so you may want to open a new terminal and keep it handy.
Assuming you are in your home directory type the following commands.
cd fenrir/src/fenrir
sudo python3 ./fenrir
You should hear a stack of startup messages, which signifies that fenrir is running.
Usage
I am still playing with fenrir and therefore cannot comment much. The key bindings appear to be similar to those of the Speakup screen reader. If you want to check them out, take a look at the config file in your installation directory or at the following link.
There is a tutorial mode, accessed with the fenrir key plus h. The fenrir key is the insert key by default.
Acknowledgements
Thanks to the following people for helping me get fenrir talking and for raspberry pi advice.
Michael A. Ray
Storm Dragon
Other members of the #a11y IRC channel on the server irc.netwirc.tk, specifically PrinceKyle, Jeremiah, chrys and southernprince.
The golden triangle and beyond
Our tour began with the Delhi flower market at Ghazipur. Ghazipur is otherwise better known for its slaughterhouse.
The flower market is a wholesale market, so there are no organized shops. There are stalls with baskets of flowers, and everyone is out to sell.
We then moved to Agra.
In case you are wondering: yes, we did visit the Taj, but it was my seventh visit and I did not take many photographs.
The more interesting bits with respect to the Taj lie outside it. These are the art and craft shops that sell replicas of the Taj, show you how they are made, and do other marble-related work. Most of the designs involve complex geometrical patterns. The carving is done by hand using a chisel. I did ask about 3D printing and other machines; I am told that the marble is very soft and machines would break the stone. The designers are also very experienced. There is no CAD/CAM software. The people have been doing this for generations, and as far as I could tell the skill is passed from father to son.
We then moved to Jaipur.
I did try using the vOICe to determine how much of the cloth I had colored and printed on. It is a non-trivial task because the paint / ink is very light when you apply it. It does darken after drying, but this is going to take a lot of work. Moreover, because of the poor contrast, it was difficult to make out what patterns I had printed.
One of the most interesting things we did in Jaipur was to visit shops selling gems and jewellery and see how they are made. They had a little setup outside the shop where one could see the workers polishing gems and the like. You need to be careful with the cutting wheel, and the polishing is intricate work best done during the day because it needs natural light for the colors to stand out. Again, I tried looking at the setup. It may be possible to determine how much of a stone has been polished using vision, but the hand appeared to be the better tool for the job, at least for now. In addition, the process cannot be completely automated because of the soft stones in use, and how would a machine create a design? Heuristics of some kind coupled with a neural net and massive amounts of data are perhaps options, but until that happens, it is up to human ingenuity to create designs. Again, this trade is passed down through generations.
It may be possible to tell semiprecious stones apart by their texture but I am uncertain if this is a reliable means of identification.
Be warned, you do not sense the wind if you are outside the Hawa Mahal, the palace of winds. The monument is on a busy street and is a quick photo stop unless you want to go inside and climb its many stairs. I did not have an opportunity to do this.
We had a chance to visit Amer Fort. The drive up to the fort is a lot of fun if you take the elephant. In addition, forget the term “camera shy”, because there are an inordinate number of photographers seeking your custom.
We then moved to Jodhpur.
We then moved to Pushkar for sun and sand but found much more.
This trip was special in many ways. For one, I had a multilingual group, and our conversations were conducted in English, Hindi and German. I was mistaken for a foreign tourist, which was intriguing, and, most of all, I had not seen my country through the eyes of a foreigner before. It showed me how much I took for granted.
Acknowledgements
Aparna Mathur, leadership consultant and art, history and food lover, for the image descriptions
Girdhari Singh Shekhawat, our fearless guide and leader and one of the fastest learners I have ever had the chance to meet
Anil Dhyani, Shrooti Sharma, June and the other staff of Sita Travels for giving me the opportunity of co-leading the tour and for the fantastic organization
Laura Kutter, CEO of Tour de sens for some fantastic conversation and brilliant organization and group management
Marcel and Raphaela Franke for being who you are and for breaking the ice
Barbara Krug for fantastic conversations and navigation in tricky places
Sigrid Gleser for breaking the ice and stimulating technical conversation
Rita Gleser for intriguing teaching opportunities and food
Claudia for the laughs
Teena, Christina, Gregor and everyone else for the fun.
Clay infinitum
I was a part of the clay modeling workshop conducted by Planet Abled and The Clay Company.
I have vague memories of modeling with clay in grade 1 but I had not touched clay after that.
The interesting thing about clay modeling is that you work in three dimensions. I thought it would be easy to produce what I imagined, but that was not the case. For example, I was instructed to make a mask. How do I make a nose? I should have molded clay around my own nose, but I thought of that trick only once the mask was complete.
We used ceramic clay instead of terracotta clay.
My biggest challenge came in shaping the clay. I would damage the slab when I tried engraving. The answer was to use cookie cutters to get the shape I wanted.
My biggest surprise was the centrality of the sphere. For every shape I made, the first thing I was instructed to do was to form a sphere with the clay. I then took a rolling pin and flattened the ball. Once that was done, I could begin shaping. I mentioned this to my father, who pointed out that the sphere is the symbol for infinity.
Another thing to be aware of was pressure. If I applied too much, the clay would begin to disintegrate. If I applied too little, the clay would stay as it was.
My thanks to Aparna Choudhrie, founder of The Clay Company, for her insight on design. I had asked her how I should decide what shape I want in a design. Her answer was that “each shape should tell a story.” This was just the right starting point for my writer self. I can now begin to think of creating shapes that have meaning without my brain freezing on fractals, perspective, solid geometry and the like.
As always, special thanks to the volunteers from The Clay Company and from Planet Abled. The class would not have been the same without you.
The FootLoose Vibe: universal design at its best
Universal design is not something one would usually associate with dating events. However, The VIBE – Delhi, an event conducted by Footloose No More, was a superb example of how things can be made inclusive for everyone. I am not going to go into the challenges of online dating and the extreme focus on pixels on many dating websites. Footloose is different, since its website is an adjunct to its offline events. Most Footloose events are unstructured: you walk up to the person of your choice and start talking. If you are blind, this format poses some unique problems.
- Finding someone to talk to. Yes, you get introduced like everyone else, but people drift in, and detecting them is almost impossible.
- Finding the next person to talk to. You finish talking to one woman, but how do you identify the next one? The objective is to meet as many new people as possible. People cluster into groups, so yes, you can listen for chattering groups, but many participants sit in a quiet corner, so there is no audio cue. Yes, you can ask the organizer, but the organizer does not know whom you have already spoken to.
- Irrespective of disability, people complain that they did not meet everyone.
The Footloose solution is elegant. The women are seated while the men get five minutes with each woman, moving from person to person; a whistle blows every five minutes. Once everyone has met everyone else, the event returns to its original unstructured format. You talk to everyone, get a chance to exchange contact information if desired, and then are free.
Tips
- Exchange contact information in the fourth minute of the interaction, else you will have to scramble to enter it once the whistle has blown. You do not want to keep your fellow men waiting.
- Stay relaxed. Do not look at your watch. Time keeping is the organizer’s responsibility.