Submitted by Robin.Christopherson on Tue, 23/01/2018 - 12:13
I’m very proud to have been with AbilityNet since the very start. Today marks our 20th birthday - and it’s worth taking a look at just how far we’ve come…
Two decades of changing lives through tech
Whichever way you look at it, the past 20 years have been a blast! When AbilityNet was first founded in 1998, the internet was still young and fresh, computer speeds were measured in megahertz, mobile phones weren’t nearly so mobile (and definitely weren’t so smart) and the terrifying Y2K digital apocalypse loomed large.
Photo: Toyota's driverless car made for Google | Image credit: Steve Jurvetson
Through all these changes, one thing hasn’t altered: AbilityNet has remained at the very cutting edge, using our expertise to help people with disabilities.
Celebrating the power of tech
In these two decades, the technology has evolved in eye-opening ways but its potential to help overcome impairments has existed from the very start. In education, at home and in the workplace we’ve continued changing lives through the power of technology - helping many millions of people across countless countries reach their full potential.
Photo: Robin Christopherson and his dog Archie | Image credit: AbilityNet
Without tech and its awesome ability to include everyone in this digital world, I wouldn’t be Head of Digital Inclusion for such an excellent organisation as AbilityNet, nor would I have been able to play a small part in enabling others to achieve their ambitions over these many years.
I know what you’re thinking: of course AbilityNet wouldn’t exist without tech.
So I need to be really clear: without the power of tech to include everyone, I wouldn’t be in work at all.
I and millions of others wouldn’t stand a chance. But tech has given us so much more than an equal chance at a career – it’s given us all those things that YOU use tech for every day.
I may need to tweak my computer and smartphone to do the things you take for granted, but that’s where AbilityNet comes in.
My talk was called 'From AI to robots, from apps to wearables - let's design for everyone, ok?' It covered a broad range of technology and how important it is to ensure that the tech of tomorrow is inclusive. If we get the design right it can be used by everyone, regardless of disability, impairment or environment.
The organisers have been swift in getting the video up online and I'd love for you to watch it.
So where do luck and bank robbers fit in? Well, you’ll just have to watch it for the full story (hot tip - it's right at the beginning), but one significant message I'd like people to take away is that, in large part, we make our own luck. Whether it’s being caught in the crossfire of a bank robbery, or something as everyday (but still exasperating) as dropping your phone, how we choose to view that event can make all the difference to our day, our week, our lives.
When it comes to people with disabilities, you’ll find that they are often the most grateful and positive people you’d be lucky to meet. When life serves you lemons, often the best approach is to make lemonade.
Embrace inclusive design and give people a fighting chance to have a truly lucky 2018
One word that was once used to describe people with a disability is ‘handicapped’. I actually quite like this term. The better the racehorse, the bigger the handicap (additional weight added to slow them down and level the field), and the better the golfer, the greater the number of shots added to his or her score at the start of a round.
The thing is that no matter how good a golfer you are, if instead of your set of golf clubs you’re given a stick of celery, then even Tiger Woods wouldn’t make it past the first hole. There are some handicaps you just can’t get over, however positive your outlook is.
The same is true of inaccessible design.
If you have a disability and a website, app or piece of software that you need to use for work, education or pleasure is not inclusive, then you’re stuck. You’re out of luck. Some things are out of our control.
If, however, you’re a designer or developer working on websites, apps or even robots or AI, then it’s completely within your power to make them inclusive and to help build a future for everyone.
Submitted by Robin.Christopherson on Mon, 15/01/2018 - 21:38
It’s predicted that, by 2021, contact lens computers will be a reality. A recent patent and detailed tech-spec for such a device by Sony (see video below) shows how every element of a computer – from a screen to a battery and even a camera – can be condensed down to fit in a contact lens. Such tech could be a real eye-opener for people with disabilities.
Sony has submitted a patent (including a detailed technical specification) for a ‘contact lens computer’. It fits over the eye and contains everything that you need for a fully-functional computer, as well as wi-fi, storage, a built-in camera and even a piezo-electrically-charged battery that happily keeps the minuscule micro-components powered simply by your natural eye movements. The company predicts that it will be available as early as 2021.
Seeing the future clearly with contact lens computers
With a computer screen nestled on your eyeball and its image able to occupy your entire field of vision, the applications for both augmented reality and virtual reality are obvious. You can work and play wherever you are – using applications that either layer important information on top of the real world or else immerse you in another world of your choosing – and without the need for any devices or power supply. For those with a vision impairment, however, the ability to see a virtual screen so enormous that it fills whatever field of view you do have has obvious benefits.
While larger and larger screens are available, for someone who is partially sighted, the further away the outer edges of the monitor are, the harder they are to see. And there is also the obvious question of the cost of such massive monitors. This virtual view of a screen gets around those issues and affords the user much easier access to their computer and the internet.
How a contact lens computer works
Most computer monitors comprise a liquid crystal display (LCD) panel that contains millions of pixels that can change colour or block light altogether. Then there’s a backlight panel that shines through to light up each pixel so we can see the colours shine.
As we see in this quick DIY video on how to make a see-through screen, if we dismantle our monitor and take away that back panel then what we get is a transparent display through which we can see the world – as well as the information or images on our computer screen. If the world behind is a white wall, say, then it might look a little like a normal monitor.
In newer, organic light-emitting diode (OLED) displays, whose pixels produce their own light, the process is even easier. Here there is no backlight panel, so all you need to do is remove the opaque backing of your monitor.
Making minute contact lens computers
Having a transparent display is a crucial part of making the concept of a contact lens-sized computer a reality. Obviously, there’s much more to a computer than simply the screen, but we are seeing a marked reduction in the size of each and every element of a computer. Those in the field are taking complex circuit boards and components such as memory and the CPU and creating a minute ‘system on a chip’, and taking bulky battery and camera technology and squeezing it into ever-thinner smartphones.
The ultimate head-mounted computer
Many tech companies are producing smart glasses that give you a similar ‘heads-up display’ (Vuzix glasses pictured below). Here are the top five available on Amazon today, but having a display that is in front of you wherever you look, combined with a camera that is always looking where you are, will make these bulky, unappealing gadgets of today look hopelessly out-of-date.
Such smart glasses with head-mounted cameras have many disability-specific applications - from using AI to read text or identify what objects a blind person is looking at, to highlighting (with a hi-vis outline) such objects to assist those with partial vision, to layering helpful info or icons on top of what someone with a learning difficulty sees when performing everyday tasks. Now these capabilities will be available with less inconvenience and, we hope, expense.
Gazing into the future
In a few short years there will no longer be people walking around looking down at mobile phones, oblivious to their surroundings, blundering into people, lampposts or on-coming traffic. People will instead be empty-handed and gazing blankly into the middle-distance. Whether they will see the wisdom of standing still while they view a screen that potentially takes up their whole field of vision… we’ll just have to wait and see.
Submitted by Claudia.Cahalane on Sun, 14/01/2018 - 10:31
Now the festivities are over it's time for us all to start thinking about the year ahead. For most students it's time to get to work. Whether you've got a thesis, dissertation or a simple report to write, these student apps will help to maximise productivity, reduce procrastination and even improve the eloquence of your writing.
This is particularly good for science students who need to reference, but good for anyone writing essays and theses. Use it to collect, organise, speedily cite, and share your research sources.
Dragon Dictate is a voice recognition app that listens to you speaking and automatically converts those words into digital written text. Obviously useful for essays, but you could also try it for capturing notes and ideas. If you prefer to speak, are dyslexic or have trouble writing for physical reasons, this could ease the pressure. By allowing users to dictate a stream of thoughts and words, it takes the strain off students who find it difficult to put words on the page while thinking.
A popular organisational tool, where you can keep different notes and subjects in order and in separate sections, adding and subtracting from them when you wish and syncing across your devices. Handily, it’s also an audio recorder for lectures or verbal notes and ideas. You can share notes with course mates too, but this might involve a charge. One extra really cool feature - scan and search - means that if you take pics of whiteboard content or handouts, you can search them using any of the words in the image, because Evernote recognises content within images. In addition, you can write or draw on those PDFs on screen. Try Trello too if you want a project management board where you can see your projects’ workflow really easily and clearly.
Submitted by Claudia.Cahalane on Wed, 20/12/2017 - 20:53
A demo of the Orcam MyEye 2.0 was one of the highlights at the AbilityNet/RNIB TechShare Pro event in November. This small device, an update to the MyEye released in 2013, clips onto any pair of glasses and provides discreet audio feedback about the world around the wearer. It uses state-of-the-art image recognition to read signs and documents as well as recognise people, and does not require an internet connection. It's just one of many apps and devices that are using the power of artificial intelligence (AI) to transform the lives of people who are blind or have sight loss.
Last week, we took a look at Microsoft’s updated free app Seeing AI and its amazing new features for people who are blind or have sight loss, including colour recognition and handwriting recognition. The app proved popular with AbilityNet’s head of digital inclusion, Robin Christopherson.
And it's not the only innovation that is helping blind people. In the last few years we’ve seen popular and well-loved apps such as TapTapSee, powered by Cloudsight.ai image recognition. This app allows users to take a photo; the details of what and who is in the photo are then spoken to the user. Similarly, the Aipoly Vision app gives real-time image recognition using deep learning.
New smaller Orcam MyEye
At TechShare Pro, Orcam, the makers of the AI vision tech MyEye, gave delegates an advance look at the updated MyEye 2.0 before its launch (6 December). The MyEye 2.0 consists of a very small camera and microphone attached to a pair of glasses, linked to a smaller processor that can be clipped onto the body. A user can point to text, for example on a menu or notice board, and will hear a computerised voice read out the information. The device can also recognise faces, money and other objects.
Presenting the technology, Leon Paull, Orcam’s international business development manager, said: “You can teach it to identify certain items and it will find those in a supermarket. Its ability to find products has been enhanced. The device is being used all around the world, and the new version understands multiple languages, can read barcodes and has colour recognition."
He used simple hand gestures to work the technology, such as pointing a finger towards a page to have the text on the page read discreetly into his ear. With a wave of his hand, the system then stopped reading out text. He looked at his wrist to mime that he wanted to know the time, and MyEye 2.0 spoke the time.
The MyEye 2.0 builds on the previous model for blind people, offering a more discreet and portable device with no wires. It currently costs around £3,000, but the creators say they are hoping funders will come forward so the devices can be provided at a cheaper cost or for free.
Submitted by Claudia.Cahalane on Mon, 18/12/2017 - 09:52
The Amazon Echo and Google Home are top of many Christmas lists this year. Both of these amazing devices can use our verbal instructions to play music, turn our lights on and off, exchange the basics of a short conversation with us, and of course, tell us the weather. Amongst the highlights of our TechShare Pro conference in November was a talk by Leonie Watson, who offered her five top tips on creating accessible conversations with our machines.
Fast forward to the 1980s, when the DECtalk text-to-speech synthesiser was introduced. 1984 also saw Steve Jobs demonstrate the Apple Mac’s talking capabilities for the first time with Apple MacInTalk. See the video below from 3 minutes 20.
"There’s been some good marketing around such technology", said Watson, who has sight loss. "But I've found that talking to tech has been a laborious process - with a person having to speak very, very clearly and with specific phrases for machines to understand. Even then, the interaction has ended up bearing little resemblance to an actual conversation."
“The thing that really changed that was Siri in 2011. For the first time we could have something that felt a lot more like a conversation with technology. In 2014 the Windows Cortana launch followed, giving us another digital assistant that would talk back to us.”
“The same year, with the Amazon Echo, we started to see digital assistants able to do practical things around the house, but we still needed very structured language and very carefully phrased requests to get it to do things," explained Watson. "A further leap forward came in 2015 with Google making its technology more context-aware. Meaning, for example, that if a song was playing, you could ask your Google device ‘where’s this artist from?’ or ‘what’s his real name?’ without having to specifically state who you were talking about.”
How to build accessible conversational interfaces
Watson laid out five ways that developers could make interactions with machines as clear as possible for a wide range of people.
1 Keep machine language simple
Think about the context of how people might be using the device. They might be driving or cooking and need short, simple interactions.
Offer choices, but not too many.
Suggest obvious pointers to useful information.
2 Avoid idioms and colloquialisms
For example, terms like “it’s raining cats and dogs” or “coffee to go” might only be understood by certain audiences, and so lack inclusivity.
3 Avoid dead-ends in conversation
Give the users cues around what to say or ask next to get what they need.
4 Use familiar, natural language
For example, for a time, say ‘three thirty in the morning’ for a UK or US audience. Don’t say ‘zero three three zero a.m.’.
5 Provide a comparable experience
Users of such technology will generally require speech and hearing to talk to machines.
For those with hearing loss, conversational transcripts could be posted on screen.
For those without speech, the only obvious option at the moment is using simulated speech, like Stephen Hawking does, for example.
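Tip 4 can be made concrete with a small sketch. This is a hypothetical helper (not taken from any real assistant SDK; the function and variable names are my own) showing one way a conversational interface might render a 24-hour time as natural speech for a UK or US audience, rather than reading out the digits:

```python
# Convert a 24-hour clock time into natural spoken English,
# e.g. (3, 30) -> "three thirty in the morning" instead of
# "zero three three zero a.m.".

ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = {20: "twenty", 30: "thirty", 40: "forty", 50: "fifty"}

def number_words(n):
    """Spell out a number from 0 to 59 in words."""
    if n < 20:
        return ONES[n]
    tens, ones = divmod(n, 10)
    word = TENS[tens * 10]
    return word if ones == 0 else f"{word} {ONES[ones]}"

def natural_time(hour, minute):
    """Render a 24-hour time the way a person would say it."""
    if hour < 12:
        period = "in the morning"
    elif hour < 18:
        period = "in the afternoon"
    else:
        period = "in the evening"
    h12 = hour % 12 or 12  # map 0 -> 12 for the 12-hour clock
    if minute == 0:
        clock = f"{number_words(h12)} o'clock"
    elif minute < 10:
        # "eight oh five", as people actually say it
        clock = f"{number_words(h12)} oh {number_words(minute)}"
    else:
        clock = f"{number_words(h12)} {number_words(minute)}"
    return f"{clock} {period}"

print(natural_time(3, 30))   # three thirty in the morning
print(natural_time(15, 0))   # three o'clock in the afternoon
```

The same idea generalises to dates, currency and durations: format values the way the audience says them aloud, not the way they are stored.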
Submitted by Robin.Christopherson on Thu, 14/12/2017 - 08:25
Microsoft’s revolutionary app Seeing AI took the blind world by storm when released earlier this year. Now it's been updated, with several new functions including handwriting and colour recognition - and it’s still free. It's launching today and I’m personally wriggling in my office chair with excitement, whilst simultaneously hitting refresh in the iOS app store.
Take a look at the features below to see why it's such an amazing step forward for disabled people.
Seeing AI just got even better
Since it was launched in mid-2017 Seeing AI has been downloaded more than 100,000 times and has assisted users with over 3 million tasks. It was released with features such as the ability to identify a product audibly using the barcode, as well as being able to describe images, text and faces of friends and family as they come into view.
Today (14 December) Microsoft has announced new features that provide new user experiences including currency, handwriting and colour recognition, as well as light detection. It's also now available in 35 countries, including those of the European Union.
Seeing AI in action
As you can see from my video I've been using Seeing AI for reading magazines as well as handwritten notes left by my family.
A whole new set of features
New features in Seeing AI v2 include:
Colour recognition: Getting dressed in the morning just got easier with this new feature, which describes the colour of objects, like the garments in your closet.
Currency recognition: Seeing AI can now recognise US dollars, Canadian dollars, British pounds and euros. Checking how much change is in your pocket or leaving a cash tip at a restaurant is much easier.
Musical light detector: The app alerts you with a corresponding audible tone when you aim the phone’s camera at light in the environment. A convenient tool, so you don’t have to touch hot bulbs to check whether a light is switched on, or whether a battery pack’s LED is lit.
Handwriting recognition: Expanding on the app's ability to read printed text, such as on menus or signs, the newly improved ability to read handwriting means you can read personal notes in a greeting card, as well as printed stylised text not usually readable by optical character recognition.
Reading documents: Seeing AI can read a document aloud without VoiceOver, with synchronised word highlighting. It also includes the ability to change the text size on the Document channel.
Choose voices and speed: Personalisation is key, and when you’re not using VoiceOver, this feature lets you choose the voice that is used and how fast it talks.
Raise your glasses to a brighter future for the blind
As amazing as it is, the development of the app is also a sign of the power that AI will bring in the future.
The video above shows a prototype that uses the Seeing AI engine in a pair of smart glasses. While the Google Glass smart spectacles fell a little flat, there’s no doubt that head-mounted cameras (with or without a screen) are going to play a major role in the future of wearable tech, and it’s only a matter of time before this functionality is added.
For the users of Seeing AI, it makes perfect sense to have the camera that is working so hard to tell us about our surroundings mounted on our heads and looking in the direction most important to us. This is important in terms of warning us about upcoming obstacles, people we’re interacting with (or wanting to avoid physically or possibly even socially) and also in terms of street signs, shop fronts, notices and hoardings etc (all of which will soon be able to be automatically and effortlessly read out to us).
It is also possible to combine this breathtakingly useful level of awareness of our surroundings with the spoken cues of navigation apps - my personal top GPS app with added accessibility sparkles is BlindSquare. This means that people who are blind, severely dyslexic or have a learning disability will now have a whole new world of support wherever we go.
Thank you, Microsoft
I just want to end this quick post by spelling out my gratitude to Microsoft for bringing real cutting-edge machine learning to a group of users with such evident needs in this area. While none of the smarts within Seeing AI are solely or even primarily intended for blind users, it takes a company as acutely aware of the importance of accessibility as Microsoft to do such an excellent implementation that brings the best of AI to those who benefit most.
Robin Christopherson is head of digital inclusion for AbilityNet
Submitted by Guest Blogger on Tue, 12/12/2017 - 10:57
Sophie Christiansen CBE is a para-equestrian dressage rider who spoke at AbilityNet's TechShare Pro conference in November.
As someone who was born with quadriplegic cerebral palsy, I relied on technology to access education. Simply having a typewriter, and then a computer, enabled me to do my school work independently. Writing by hand would’ve been far harder. Tech allowed me to show the part of my body that did work properly – my brain. Because of that I knew that I could go on to get a job and live independently like everyone else.
I know my value in the workplace in my role as tech analyst at Goldman Sachs. When you have a disability you really focus on your abilities. You think differently – outside the box. It’s this mentality that makes disabled people good employees and also inspires able-bodied co-workers to do the same - to make their product and business accessible for everyone.
Voice recognition issues
One thing that would make my life even easier is if voice recognition worked better for me. My voice is obviously different to other people’s, and I find voice recognition a bit hit and miss. For example, Siri on iPhones doesn’t understand me, but Google does. Amazon Alexa gets me when she is online and processing on Amazon’s servers, but switching her on using the specific ‘wake up words’ doesn’t work, because at that point she’s offline and her local processing skills are less clever. I did a little experiment with her to show you what I mean.
There’s lots more that can be done to keep improving things. I spend a lot of time on trains, and feel companies could do more to make it easier for disabled people. At the moment, to catch a train, I have to phone to book assistance 24 hours in advance (because disabled people can never be spontaneous, right?). But phone at peak times and this normally involves being put on hold for 15 minutes.
Reasonable adjustments in the transport sector
I’d say for about one in 10 journeys, the member of staff forgets that I have booked on, so in the absence of a ramp, I have to rely on kind members of the public to lift my wheelchair down so I don’t end up in Portsmouth.
As a reasonable adjustment there could be an app which I could use to quickly book assistance half an hour or more before my train, get reassurance that the staff know about me, and send data back to the train operator on their performance. I can guarantee that if this was in place more disabled people and their families would have the confidence to travel by train.
The world of tech is still in its infancy, so why not think about accessibility in the embryo stage of an idea? You don’t have to just do this out of the goodness of your hearts – there are 13.3 million disabled people in the UK alone with a household spending power of £249bn. That’s a lot of business that you are missing out on by not being accessible. A little extra thought goes a long way for everyone.
Submitted by Claudia.Cahalane on Mon, 11/12/2017 - 10:39
“The Echo Dot makes me feel included," says Ellie Southwood, chair of the Royal National Institute of Blind People (RNIB). “I spend far less time searching for things online; I can multi-task while online and be more productive. Microsoft’s Seeing AI app (which narrates the world for people with sight loss) means I can recognise people and scenarios and make up my own mind about what’s going on.”
Southwood (pictured right with Hector Minto of Microsoft and Sharon Spencer of IAAP), who has sight loss, made the comments at AbilityNet/RNIB’s TechShare Pro conference in November, which focused on AI, disability and inclusive design. News about artificial intelligence (AI) can often present a dystopian version of the future - cars let loose on the motorway driving themselves; robots controlling our thoughts and taking our jobs. But this sold-out London conference looked at the many ways that advances in AI could transform the lives of disabled people.
The power of AI
AI is fast becoming part of our lives. The technology is behind the likes of Siri, Alexa, Cortana and other similar services. It powers speech-to-text services and is getting better at understanding different voices. It’s responsible for the suggested responses in Gmail, auto-captions on Facebook and picture library searches for a specific location or person.
Opening the event, Jeremy Waite, IBM’s evangelist for its AI platform Watson, told delegates that the company's survey of 1,200 UK executives found 28% plan to invest in AI in the next year. Watson can search 10 million records a second - for example, it can find specific words in the whole back catalogue of TED talks in seconds.
AI and accessibility
The biggest tech companies’ use of AI within their products means new tech features are becoming far simpler for everyone to use, including disabled people. And by investing in inclusive design, those products are now reaching bigger audiences - accessible designs are more popular with every customer.
“Hopefully AI means that, rather than expecting people to provide something in an accessible format, it will mean that everything becomes accessible in the future,” said Hector Minto, head of accessibility and assistive tech at Microsoft, who sat on a future-gazing panel at the event. He spoke about the new Microsoft Seeing AI app, which enables blind people to recognise faces, scenes, money, text and more, and also about a range of other Microsoft features which use AI to reduce or remove the technology barriers that disabled people face.
For example, Windows Hello uses biometric login - fingerprint, face or iris - which can work well for people with physical disabilities, or those with dyslexia who might struggle to remember passwords. Subtitles in PowerPoint mean people can save a transcript of the narration that accompanied the slides.
Minto added that the big opportunity for AI - with advances in translation capabilities and free apps - is that it could help assistive technology 'go global' and reach parts of the world where there are more disabled people and fewer services and support.
AI and accessibility at Google
Delegates also heard from Kiran Kaja, technical programme manager for search accessibility at Google.
“Everyone wins when we harness AI,” said Kaja, who has sight loss. “Voice recognition was developed for disabled people, but it’s the hot item at the moment and is useful for everyone. The same with speech-to-text technology, which is based on neural networks. Predictive text also uses AI. Google wants to use intelligent tech to improve the customer experience.”
Kaja spoke about Google Home’s connection to smart devices and the potential of home automation technology to support disabled people. In particular, people with physical disabilities or sight loss can more easily do things like turn lights on and off, alter a thermostat and turn on home appliances using such technology.