Google's digital accessibility developments on show at TechSharePro 2019

Christopher Patnoe from Google at TechSharePro conference 2018

Ahead of this year's TechSharePro conference in London, we chat with Google's Head of Accessibility Programs, Christopher Patnoe (pictured at 2018's conference), about some of the recent digital accessibility developments and key themes likely to be discussed at the event, including:

Sound Amplifier

Google has a huge amount of research and experience in machine learning, which has helped it create products like Google Assistant. In the past year Google has also focused on products that help people who are deaf or hard of hearing. Sound Amplifier is an Android accessibility app that boosts important sound and filters out background noise on an Android phone. "So it can help you in a noisy room, for example," says Christopher. Users can customise which frequencies are amplified, so that important sounds, such as the voices of the people they are with or the voice of a speaker at a lecture, come through clearly while background noise is filtered out.
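
Google hasn't published Sound Amplifier's internals, but the basic idea of boosting the frequencies where voices sit while damping everything else can be sketched in a few lines of Python. This is a conceptual illustration only; the band limits, gains and filter choice are assumptions, not Google's implementation.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def amplify_speech_band(samples, sample_rate, low_hz=300, high_hz=3400, gain=4.0):
    """Rough sketch: boost the band where speech lives, quieten the rest."""
    sos = butter(4, [low_hz, high_hz], btype='bandpass', fs=sample_rate, output='sos')
    speech = sosfilt(sos, samples)            # frequencies where voices sit
    background = samples - speech             # rough estimate of everything else
    return gain * speech + 0.2 * background   # louder voices, quieter background

# One second of dummy audio: a 1 kHz "voice" tone buried in broadband noise.
sample_rate = 16000
t = np.arange(sample_rate) / sample_rate
audio = 0.1 * np.sin(2 * np.pi * 1000 * t) + 0.3 * np.random.randn(sample_rate)
boosted = amplify_speech_band(audio, sample_rate)
```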

Live Transcribe

Another app Google has created is Live Transcribe, which transcribes the conversations you have with another person in up to 70 different languages.

"Someone who is deaf or hard of hearing can have a good sense of what's being said," says Christopher. "And if they choose not to speak, they also have the ability to type a response back. So you could actually have a conversation with someone in a situation where you don't have an interpreter. While it's not necessarily a replacement for an interpreter, it’s a good next best thing," he says.

And earlier this year, Google's Live Transcribe added two new features: sound event detection and saved transcriptions, the latter of which saves transcripts for up to three days. "So you can take the conversation - like a school lecture - take it out, save it and start working with it," says Christopher.

Live Transcribe will now show you sound events in addition to transcribing speech. You can see, for example, when a dog is barking or when someone is knocking on your door. Seeing sound events allows you to be more immersed in the non-conversation realm of audio and helps you understand what is happening in the world. This is important to those who may not be able to hear non-speech audio cues such as clapping, laughter, music, applause, or the sound of a speeding vehicle whizzing by.
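
Google hasn't said exactly which model drives Live Transcribe's sound events, but its publicly released YAMNet audio classifier (available on TensorFlow Hub) illustrates how a model can label audio with event classes such as 'Dog', 'Knock' or 'Applause'. A minimal sketch; the random waveform below is just a stand-in for real microphone audio.

```python
import csv
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Load the pre-trained YAMNet sound-event classifier.
model = hub.load('https://tfhub.dev/google/yamnet/1')

# The 521 AudioSet class names (e.g. 'Dog', 'Knock', 'Applause') ship with the model.
class_map_path = model.class_map_path().numpy().decode('utf-8')
with tf.io.gfile.GFile(class_map_path) as f:
    class_names = [row['display_name'] for row in csv.DictReader(f)]

# YAMNet expects mono 16 kHz float32 samples in [-1.0, 1.0]; this is placeholder audio.
waveform = np.random.uniform(-1.0, 1.0, 16000).astype(np.float32)
scores, embeddings, spectrogram = model(waveform)

# Average the per-frame scores and report the most likely sound event.
top = scores.numpy().mean(axis=0).argmax()
print("Most likely sound event:", class_names[top])
```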

Project Euphonia

Project Euphonia is a Google Artificial Intelligence (AI) research effort to help speech-impaired users communicate faster and gain independence. 

The project was created in conjunction with, and with the help of, a man named Dimitri Kanevsky. "Dimitri went deaf at one year old and he was raised in Russia. And he learned English written phonetically, so his spoken pattern of English, for example, is not typical for a deaf accent. And he had a very difficult time being understood by the Google Assistant, Live Transcribe and all these other things that use a voice model," says Christopher.

"So, Google created a model for him that allows him to be understood very clearly, just by recording his voice using different expressions. The team recorded his voice, and trained a data model on it... And now it understands him as well as someone who worked with him. Sometimes better than some people!" says Christopher.


Google has also been asking people to contribute examples of their voices to aid the speech recognition tool's development. "Having people contribute examples of their speech allows us to create a model that works for more and more people... the eventual goal is to create a model that will work for everyone, or near everyone," says Christopher.

Project Euphonia has also been able to recognise utterances from people with Amyotrophic lateral sclerosis (ALS), also known as motor neurone disease (MND), says Christopher. "Understanding these utterances allows the technology to 'trigger' things." For example, the team has created an 'utterance' that one sports fan, who worked with them to develop the tool, can use to set off a siren. "So that he can contribute to the joy and excitement within a sports game," says Christopher.

Sign Language options on Disability Support

The Google Disability Support team has also extended its support for multiple languages, and in the United States it will be introducing American Sign Language (ASL) support. "We want to extend phone and Be My Eyes support in multiple languages - right now it's email and chat support only in Europe," says Christopher.

Google is also in the process of developing gesture recognition tools.

The recently announced Pixel 4 smartphone includes a radar chip for sensing gestures, and Google's Hub Max smart display can stop media playing when the user pushes their hand forward.

"We are experimenting in gestures, but we need to do it carefully and we want to do it thoughtfully, because we don't want to create something that's not intuitive," Christopher says. The ability to to use machine learning to recognise gestures "allows us to recognise what a gesture is," says Christopher.

"It doesn't provide context, it doesn't provide information, it doesn't provide translation or interpretation, it just says 'here's a sign' - but it doesn't mean it understands sign language. It recognises 'This is a sign', or 'This is a gesture that could be a sign'. There are so many things that make a gestural language robust and rich. We're not claiming  to solve this, but it's certainly an interesting step forward and a good step that allows us to do some of the recognition on device, but there's still so much that we have to do before this becomes a real solution," says Christopher.

Accessibility Scanner

Another tool that Google has put in place is the Accessibility Scanner on the Google Play Store. Since the scanner launched in July 2018, more than 3.8 million apps have been tested and over 171 million suggestions have been made to improve accessibility.

Any application that is uploaded to the Play Store is automatically run through the Accessibility Scanner, which performs automated tests on multiple devices. "We come up with a report and screenshots and recommendations on things that could be made better. We do this on every application that gets submitted to the Play Store, whether they want it or not," says Christopher.

The developer then receives a pre-launch report, which also includes information about the app's accessibility via an Accessibility tab. "You can get a ranking on how you're doing with regards to the accessibility of your application," says Christopher.

New fonts for Google Docs to aid visual crowding issues

Google has recently introduced the Lexend font to Google Docs, which it reports can be useful for students and other users who are sensitive to visual crowding. "The fonts weren't created explicitly to help people with dyslexia," says Christopher, but they can help some people with comprehension when reading.

Google Accessibility YouTube playlist 

Google has now created a dedicated Google Accessibility playlist on YouTube, where you can learn about all its accessibility developments including those mentioned in this article.

"One of the problems we realised is it's very difficult for someone who's not immersed in what we're doing [with regards to accessibility] to find what we're doing," says Christopher.

"The Chrome folks did their videos, and the Assistant folks did their videos, and the Gsuite did theirs. And we realised it's very hard for someone to find out about all the work we've done. So we aggregated all of the YouTube videos that we've created over the years and put them together on a single playlist so you can go to one spot and find the thing that you're looking for, whether it's the Assistant trailer, whether it's how to use Assistant in the home, or how to use GSuite, or how to use Chrome. Or what is the cool video that we've just recently announced," says Christopher.

Ethics, machine learning and disabilities

At the TechSharePro conference, Chris will be speaking as part of a panel of experts on a variety of tech and accessibility issues, including ethics and machine learning.

"One of the things I think is important is that people understand what machine learning and Artificial Intelligence is. Without context it could be kind of scary, because you don't understand how it works and why it works. And why it works the way it does." So what I hope to get from this panel is a clearer understanding is 'What is machine learning?' and 'Why is it hard?' and how that influences why data can accidentally cause bias. One of the biggest problems with machine learning is you're only as good as your data. And if you have data that's accidentally biased you can create results that aren't intended," says Chris. 

Hear more about digital accessibility on the brand new AbilityNet podcast, The TechShare Procast. Transcripts provided.

Update November 2020: TechShare Pro 2020 tickets available!

TechShare Pro will take place online from 17-19 November. Find out more and register at the link below. Not-for-profit pricing available.

Register now >>


Further resources