Video conferencing has proliferated, especially since COVID. These days, you are expected to be on camera. This applies even if you are blind and dealing with sighted people.
I do not have a problem with this per se, since in a way it represents equality. Ensuring your face is centered is easy enough with a variety of AI tools. The JAWS for Windows screen reader has this feature built in, while NVDA has an add-on that does the same thing. I believe there was also a standalone program that did this.
However, none of these programs tell you how you look. Yes, you can use any of the plethora of AI apps on your smartphone to check, but remember the context: you are seated in your chair in front of your computer, and that is the picture you need to see.
The vOICe makes this extremely easy. It connects to your webcam and renders the video feed as soundscapes, giving you an idea of how you look and what else is in the scene. To simplify matters further, activate the skin filter, which sounds only the exposed parts of your skin, such as your face, assuming it is centered correctly. Try making unusual expressions and hear the soundscapes change. That way, you can determine how you are positioned.
This is independent of any AI model, and there is no need to send your image to the cloud. Everything runs locally, and your look stays yours.
Hi Patty, Indeed, your approach with the Meta smart glasses works nicely. However, my approach makes the context more precise because I can see myself in place, which is perhaps a more sighted way of doing things. I suspect sighted people take advantage of both approaches: they check themselves in the mirror and again before switching on their cameras. I used to work in consulting, and even in my current job with a large company, I do need to be on camera. It is more of a cultural norm than a rule.