I had to receive my cousin at the airport. Her flight was landing at 23:45 IST. I have always been concerned about how security personnel would react to my video glasses. This time, since I was not going inside the terminal building, I decided to wear my Vuzix M400 device and take a chance.
I need not have worried. No one gave me a second look, or if they did, that look did not translate into any conversation or interference.
The airport was well lit, and I was able to sense people moving ahead of me. Sometimes I saw patches of relative darkness followed by patches of light, and then some kind of object like a railing or a sign. This meant that people had clustered around that obstacle and then moved away.
The OCR feature came in handy for reading the various signs.
One thing to remember, at least at Terminal 3 of the Indira Gandhi International Airport, is that the gate numbers are written above the gates, so you need to raise your head to look at them unless you are tall. It is also possible to tilt the lens of the Vuzix device upwards, but that did not help as much as I expected it to.
There were also times when only partial signs were read. In such cases, pan your head until something recognizable comes into focus. You may also need to venture closer to the sign. It is best to frame the sign in the center of your view and then move closer.
Do watch out for people and other moving obstacles, like trolleys, as you walk.
After reading the above pages, you may well ask: what is the point of the vOICe, especially when your phone can do just as good a job of reading text?
For one thing, you will need your phone to make calls and coordinate with the person you are picking up.
In addition, the visual context changes almost constantly in such an environment, and if you are moved along by the crowd, you will need a mechanism to re-establish visual context. This is where an immersive setup comes in handy.
In this case, I was able to see the sign of a money exchange and could tell my cousin where I was standing. I did have sighted assistance with me, and since there were multiple money exchanges, we eventually ended up referencing an ATM as well, which did the trick. The sign for the ATM was harder to read, probably because it was in a smaller font, or maybe because I was a little further from it.
So, what about those Ray-Ban smart glasses? They could have helped here, but as of this writing, I would need to keep querying them constantly to account for the changing context. Their "things around" feature would have been useful, but I do not know if it would have read the text on the objects all around me without my sending a lot of questions to the bot. Nor would I have identified the changes in the landscape as we moved between the parking area and the airport terminal.
I do plan to use the vOICe inside the airport next, but will see when I get the opportunity to do that.