Why blind and visually impaired Tweeters don't get the full picture

Robin Christopherson is Head of Digital Inclusion at AbilityNet.

On Twitter's 10th birthday last month I wrote about the improved accessibility of twitter.com, but lamented the fact that the increasing trend towards photo Tweets was excluding many visually impaired people. Now Twitter has announced that users can easily add alt-text to images.

It's great news when tech giants catch up on accessibility, but I'm concerned that it won't be enough to make the image-heavy web more accessible to me and many other visually impaired users. I also wonder whether we may be on the cusp of a whole new solution with the latest image recognition software.

Are images taking over social media?

The web was once filled with a trillion words but is now being overrun with witty, useful and inspirational images. While images no doubt add considerably to the appeal and informational value of social media, their prevalence causes significant problems for people who use screenreaders - people like me.

That's because screenreaders rely on alternative (alt) text descriptions being provided, and where alt text isn't present the filename is spoken instead. The trouble is that most image filenames are incomprehensible strings of characters, which makes for painful listening when my screenreader reads out their names as I browse a web page or email.
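To make that fallback concrete, here is a simplified sketch in Python (standard library only; the markup and class name are illustrative, not taken from any real screenreader) of the choice a screenreader makes for each image: announce the author's alt text if it exists, otherwise announce the filename.

```python
# A simplified model of how a screenreader announces images: speak the alt text
# if the page author provided it, otherwise fall back to the filename.
import os
from html.parser import HTMLParser
from urllib.parse import urlparse


class ImageAnnouncer(HTMLParser):
    """Collect what would be spoken aloud for each <img> tag on a page."""

    def __init__(self):
        super().__init__()
        self.spoken = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        alt = attrs.get("alt")
        if alt:
            # The author described the image, so that description is spoken.
            self.spoken.append(f"Image: {alt}")
        else:
            # No alt text: all that is left to speak is the filename.
            filename = os.path.basename(urlparse(attrs.get("src", "")).path)
            self.spoken.append(f"Image: {filename}")


page = """
<img src="https://example.com/photos/IMG_20160321_142501.jpg">
<img src="menu.jpg" alt="Lunch menu: today's soup is leek and potato">
"""

announcer = ImageAnnouncer()
announcer.feed(page)
for line in announcer.spoken:
    print(line)
# Image: IMG_20160321_142501.jpg
# Image: Lunch menu: today's soup is leek and potato
```

The first announcement is exactly the sort of gibberish I hear many times a day; the second is an image actually being described.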

It has always been thus for blind users - huge chunks of the web’s content are invisible to us (it's also bad news for search engines). So, as social media increasingly focuses on visual imagery, the problem is worsening.

It is now possible to add descriptive text to an image in a Tweet

Twitter brings in alt tags

A week after my Twitter 10th birthday post, Twitter announced that users can now add alt tags to their pictures.

It's fantastic that Twitter is reading my blog (winky face) and acting so swiftly, but this option is not offered by default: users not only need to be aware of it, they also need to take the time to dig the option out of the accessibility settings and turn it on.

I applaud the move, but what about raising the profile a little, Twitter?

Who's going to use this feature?

Even if they were to discover this new feature, it’s not clear whether the average Tweeter will feel inclined to take the time to add alt tags. Tweets are such throwaway things for the most part. I think we will see companies begin to use the option in their PR efforts, so we’ll get the full benefit of all those lovely ads, at least (another winking face).

I'm also curious to see how the likes of Twitterific, Tweetlist and Tweetbot will enable this new function and thus whether take-up on those platforms will be any higher.
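Those clients will presumably hook into the alt text support in Twitter's REST API. As a rough, hedged sketch of what that plumbing looks like - assuming the v1.1 media upload and media/metadata/create endpoints behave as documented at the time of writing, and with placeholder credentials and filename - a Python client might do something like this:

```python
# Hedged sketch: upload an image, attach alt text, then Tweet it via Twitter's
# v1.1 REST API. Endpoint names follow the documentation as I understand it;
# the OAuth credentials and filename below are placeholders.
import json

import requests
from requests_oauthlib import OAuth1

auth = OAuth1("CONSUMER_KEY", "CONSUMER_SECRET", "ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

UPLOAD_URL = "https://upload.twitter.com/1.1/media/upload.json"
METADATA_URL = "https://upload.twitter.com/1.1/media/metadata/create.json"
TWEET_URL = "https://api.twitter.com/1.1/statuses/update.json"

# 1. Upload the image and note the media id Twitter hands back.
with open("hawking_zero_g.jpg", "rb") as image:
    upload = requests.post(UPLOAD_URL, files={"media": image}, auth=auth)
media_id = upload.json()["media_id_string"]

# 2. Attach the alt text to the uploaded media.
metadata = {"media_id": media_id,
            "alt_text": {"text": "Professor Stephen Hawking floating in zero gravity"}}
requests.post(METADATA_URL, data=json.dumps(metadata), auth=auth)

# 3. Post the Tweet with the described image attached.
requests.post(TWEET_URL,
              data={"status": "Now with a description for screenreader users",
                    "media_ids": media_id},
              auth=auth)
```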

Facebook, Apple and Microsoft make strides in image recognition

In my blog, I gave a shout-out to more advanced technologies like object, face and text recognition, which help me understand what's in pictures - for example, text recognition will read out the text of a menu in a photo. While still in its infancy, this tech is nevertheless providing some useful clues to the content of all those millions of unlabelled images.
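As a taste of the text recognition piece, the menu example above can be done today with off-the-shelf OCR rather than anything specific to these companies. This is a minimal sketch assuming the Tesseract engine plus the pytesseract and Pillow Python packages are installed; the filename is illustrative.

```python
# Minimal OCR sketch: pull the text out of a photo of a menu so a screenreader
# can speak it. Assumes Tesseract, pytesseract and Pillow are installed.
from PIL import Image
import pytesseract

menu_text = pytesseract.image_to_string(Image.open("menu_photo.jpg"))
print(menu_text)
```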

Alas, despite the evident persuasive power of my blog, Twitter hasn't gone down this road yet.

Twitter appears to be behind the curve. Facebook, for example, has been working on image recognition for some time, starting with its DeepFace project a couple of years ago. DeepFace recognises faces in images with creepy accuracy - and it is incredibly useful.

Apple also has this feature (although the initial teaching of whose face is whose needs to be done on a Mac), which not only helps people with a visual impairment but is also invaluable when searching through thousands of family photos.

Robots vs Humans

Microsoft too is surging ahead. Check out the very impressive video of its Seeing AI technology, part of the company's new suite of cognitive services. It analyses images on the go to assist blind people in recognising everyday objects and environments.

I've also been experimenting with Microsoft's CaptionBot, which lets me try out this image recognition engine first-hand. I can upload any image and get a verbal description of what it shows. The technology isn’t perfect just yet, however.

Microsoft CaptionBot fails to identify Stephen Hawking in zero gravity

For example, this image of Professor Stephen Hawking floating in zero gravity (above) was described to me as "a man laying on a suitcase". CaptionBot also tries to describe facial expressions, but it includes these as emojis, which screenreaders unfortunately often read out as unintelligible text codes.
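For anyone who wants to poke at the engine behind CaptionBot, the same captioning is exposed through the Computer Vision part of Microsoft's Cognitive Services. Here is a hedged sketch of asking it to describe an image; the endpoint version, region and response fields follow the documentation available around the time of writing and may differ, and the subscription key and filename are placeholders.

```python
# Hedged sketch: ask Microsoft's Computer Vision API (Cognitive Services) to
# caption an image, roughly what CaptionBot does behind the scenes.
import requests

DESCRIBE_URL = "https://westus.api.cognitive.microsoft.com/vision/v1.0/describe"
HEADERS = {
    "Ocp-Apim-Subscription-Key": "YOUR_SUBSCRIPTION_KEY",
    "Content-Type": "application/octet-stream",
}

with open("hawking_zero_g.jpg", "rb") as image:
    response = requests.post(DESCRIBE_URL, headers=HEADERS, data=image.read())

captions = response.json()["description"]["captions"]
print(captions[0]["text"])        # the best guess, e.g. "a man laying on a suitcase"
print(captions[0]["confidence"])  # how sure the service is about that caption
```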

There's obviously still a way to go, but the technologies mentioned have huge potential to turn the vast amounts of graphical data on the web into text that can be read by blind users or searched by Google.

Alt-text for everyone?

In the meantime, I’d love to see Twitter better signpost its new alt text feature - and even better, make it opt-out rather than opt-in, dear Twitter, if you’re still listening. Image and text recognition are needed on Twitter sooner rather than later, too.

Follow Robin Christopherson on Twitter