AFP-JIJI

Using a raised eyebrow or smile, people with speech or physical disabilities can now operate their Android-powered smartphones hands-free, Google said Thursday.

Two new tools put machine learning and front-facing cameras on smartphones to work detecting face and eye movements.

Users can scan their phone screen and select a task by smiling, raising eyebrows, opening their mouth or looking to the left, right or up.

“To make Android more accessible for everyone, we’re launching new tools that make it easier to control your phone and communicate using facial gestures,” Google said.

The Centers for Disease Control and Prevention estimates that 61 million adults in the United States live with disabilities, which has pushed Google and rivals Apple and Microsoft to make products and services more accessible to them.

“Every day, people use voice commands, like ‘Hey Google’, or their hands to navigate their phones,” the tech giant said in a blog post.

“However, that’s not always possible for people with severe motor and speech disabilities.”

The changes come via two new features. One, called Camera Switches, lets people use their faces instead of swipes and taps to interact with their smartphones.

The other is Project Activate, a new Android application that allows people to use those gestures to trigger an action, such as having a phone play a recorded phrase, send a text or make a call.

“Now it’s possible for anyone to use eye movements and facial gestures that are customized to their range of movement to navigate their phone — sans hands and voice,” Google said.

The free Activate app is available in Australia, Britain, Canada and the United States on the Google Play store.

Apple, Google and Microsoft have consistently rolled out innovations that make internet technology more accessible to people with disabilities or who find that age has made some tasks, such as reading, more difficult.

Voice-commanded digital assistants built into speakers and smartphones can enable people with sight or movement challenges to tell computers what to do.

There is software that identifies text on web pages or in images and then reads it aloud, as well as automatic generation of captions that display what is said in videos.

The AssistiveTouch feature that Apple built into the software powering its smart watch lets touchscreen displays be controlled by sensing movements such as finger pinches or hand clenches.

“This feature also works with VoiceOver so you can navigate Apple Watch with one hand while using a cane or leading a service animal,” Apple said in a post.

Computing colossus Microsoft describes accessibility as essential to empowering everyone with technology tools.

“To enable transformative change accessibility needs to be a priority,” Microsoft said in a post.

“We aim to build it into what we design for every team, organization, classroom, and home.”
