Building a better magnifier with the power of the web

For people with low-vision issues, magnifier tools are often essential in order to see clearly. For the computer, there are various options already built into the platform (magnification at the operating system level, zoom at the browser level, as well as various apps which do screen magnification).

However, for the physical world around us, the tools we found for magnification were less than optimal. There are tools on the market which require you to buy proprietary hardware, which is often expensive and sometimes doesn’t ship to all countries. Some are native apps, and have all the problems associated with those (finding them, not being able to simply share a URL, downloading and installing them, etc.). We asked ourselves a question – can we make a better magnification tool, one which is free and easy for people to use, and is available everywhere?

It’s with this thought that our exploration began. My first instinct is always to go with the web – it’s available everywhere, and if we can somehow make a good magnification tool for the web and have it available on a person’s mobile phone, then people with low-vision issues can point their phone at things in the physical world, magnify them, and see better! But how can we magnify using web technologies? This post is about how we used the web to build a better magnifier. Check it out (using Chrome or Opera on Android, as these are currently the only browsers which support the needed APIs properly) and please give us feedback to improve it further!

The Image Capture API

Chrome recently came out with support for the Image Capture API, which I am very excited about. I’ve always been quite interested in camera access using the web and have talked about WebRTC at Fronteers 2014 and most recently at JSKongress 2016 (where I partly talked about the same magnifier app I am going to mention here). You can see my talk at JSKongress 2017 mentioning the app and the API below.

Over the years, with WebRTC gaining ground, there finally seem to be enough people interested in more granular control over the camera experience, which is why the Image Capture API has been implemented in Chrome. While the name of the API makes it quite obvious what it is about, it is also deceptively simple about its potential use-cases.

With the Image Capture API, you finally have fine-grained control over the camera. So if you want to take a photo while zooming the camera, setting the ISO level to a particular value, or even using the camera flash a certain way, you can. Previously none of this was programmatically available to web developers. You can find more information on how to take images using this API by reading this nice overview of the Image Capture API by the Chrome Developers Team.
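
To get a flavour of the API, here is a minimal sketch (not the app’s actual code) of grabbing a still photo from a camera track with it:

navigator.mediaDevices.getUserMedia({ video: true })
  .then(stream => {
    // Wrap the camera's video track in an ImageCapture object
    const track = stream.getVideoTracks()[0];
    const imageCapture = new ImageCapture(track);
    return imageCapture.takePhoto();
  })
  .then(blob => {
    // Show the captured photo in an <img> element (assumed to exist on the page)
    document.querySelector('img').src = URL.createObjectURL(blob);
  })
  .catch(error => console.error('takePhoto() failed:', error));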

For our purposes, the first thing we want to do is to magnify, in other words, zoom the camera. The API makes it very simple for us.

// Request the rear-facing camera (one reasonable choice for a magnifier)
const constraints = { video: { facingMode: 'environment' } };

navigator.mediaDevices.getUserMedia(constraints)
  .then(mediaStream => {
    // .. (e.g. attach the stream to a <video> element)
    const track = mediaStream.getVideoTracks()[0];

    // What the camera can do (e.g. minimum/maximum zoom)
    const capabilities = track.getCapabilities();
    // The settings currently in effect
    const settings = track.getSettings();

    // Get the current zoom level
    const currentZoomLevel = settings.zoom;

    // Apply a new zoom level of 2.0
    track.applyConstraints({ advanced: [{ zoom: 2.0 }] });
  });

In the above code, we run the standard getUserMedia code to get our camera access, and inside the callback we get our track. You can then call getCapabilities() on that track to get its capabilities (this will give you an object with the device’s camera capabilities – for example, the maximum and minimum zoom levels supported).

Calling getSettings() on the track gives you the settings currently in play – for example, what the current zoom level for the device camera is. To get the current zoom level, we read it from settings.zoom.

What we finally want to do is apply a new zoom setting to the device. Say, for example, that the current zoom setting is 1.0 and we want to double it. We can apply a new set of constraints using the applyConstraints() method and its `advanced` object.

The final piece of the puzzle for us was to build an input slider and, on every change in value, apply that value as the device zoom (a rough sketch of this is below). This allowed us to magnify to the maximum zoom level the camera supports.
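
For illustration, wiring such a slider to the zoom constraint could look roughly like this (the #zoom-slider element is hypothetical, and track, capabilities, and settings are the variables from the earlier snippet):

const slider = document.querySelector('#zoom-slider');

// Configure the slider from the camera's reported zoom range
slider.min = capabilities.zoom.min;
slider.max = capabilities.zoom.max;
slider.step = capabilities.zoom.step;
slider.value = settings.zoom;

slider.addEventListener('input', () => {
  // Apply the new value as the camera zoom every time the slider moves
  track.applyConstraints({ advanced: [{ zoom: Number(slider.value) }] });
});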

But wait, there’s more! Since this is a web app running inside a mobile web browser, you can magnify further by pinching and zooming the web page itself. This way you can combine the optical and digital zoom to get even more magnification!

This feature alone (which comes free since it’s a web page) instantly makes it superior to any other native app doing magnification out there. See a sample of it in action below.


We’re not done yet though. When talking to people with low-vision issues, we identified a few things we could do to make the experience better.

Freezing the camera

People often use magnifiers to read text or identify logos, signs, etc., and that typically takes time. If you’re pointing the phone at an object, you have to hold still so that the camera doesn’t move and you can see the object, and all the while your arms can tire too. Plus, the further you zoom in, the more even the slightest shake distorts the view and the object in focus.

A better solution would be to zoom to an object, freeze the camera, and then the person could take their own time looking at the image while holding the phone in a more comfortable position (they could even put the phone down on a table and see it).

Normally we show a <video> element with the live WebRTC output in it. However, whenever we freeze the view, we need to pause it. For that, we take a snapshot of the current video output, draw it onto a <canvas> element, and swap the canvas into view in place of the <video> element.
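
A rough sketch of that freeze step, assuming a <video id="live"> showing the stream and a <canvas id="frozen"> next to it (the ids are hypothetical):

const video = document.querySelector('#live');
const canvas = document.querySelector('#frozen');

function freezeFrame() {
  // Match the canvas size to the current video frame
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;

  // Draw the current frame onto the canvas
  canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);

  // Swap the canvas into view and hide the live video
  video.hidden = true;
  canvas.hidden = false;
}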

Reading Mode

There are a couple of peculiar problems with reading as a low-vision user. Reading black text on a white background can get tiring, especially if you have low-vision issues and especially if you’re zooming in and looking closely for a long time.

Reading handwriting can sometimes be difficult too, since there are many variables at play (the kind of surface the text is written on, the nib of the pen, the color of the ink, and even the ink level of the pen). This can result in text with low contrast (or even variable contrast – for example, one letter written in a way that is high contrast, but the next letter written in a way that is low contrast).

We wanted to experiment with a mode which inverts the colors and boosts the contrast so that it’s easier to read. There are multiple ways to achieve this, including applying filters on a <canvas> element (which we do when the camera is frozen). However, for live video, we can simply use CSS Filters. In fact, we can chain multiple CSS filters to achieve this effect on live video in one line.

.readingmode {
  filter: contrast(175%) grayscale(100%) invert(100%);
}
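
For the frozen view, roughly the same effect can be applied while drawing the snapshot onto the canvas – a small sketch, reusing the hypothetical video and canvas elements from the freeze-step sketch above:

const ctx = canvas.getContext('2d');

// Chain the same filters on the 2D context before drawing the frame
ctx.filter = 'contrast(175%) grayscale(100%) invert(100%)';
ctx.drawImage(video, 0, 0, canvas.width, canvas.height);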

Using the camera flash

5-6 years ago, I had suggested on one of the standards discussion mailing lists that web apps could benefit from having access to the camera flash. Sadly, at that time there wasn’t much interest. However, it’s possible now. But why would you need programmatic access to it in a web app?

Seeing in low-light conditions is especially hard if you have a low-vision condition, so enabling access to the camera’s flash was very important to our use case. Fortunately the Image Capture API has support for working with it, and it’s as easy as switching a boolean in the applyConstraints() method, like so:

track.applyConstraints({advanced: [ {torch: true} ]});

To turn off the flash, use

track.applyConstraints({advanced: [ {torch: false} ]});
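
A small sketch of how a flash toggle could be wired up, reusing the track from the earlier getUserMedia snippet and first checking whether the device reports torch support at all (the function name is ours):

let torchOn = false;

function toggleTorch() {
  // Not every camera exposes a controllable flash
  if (!track.getCapabilities().torch) return;

  torchOn = !torchOn;
  track.applyConstraints({ advanced: [{ torch: torchOn }] });
}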

NOTE: It seems turning off the camera flash is buggy on certain devices. On my Nexus 6P and Google Pixel 1 it works great, but on some other lower-end devices the flash can turn on, but not turn off. We hope this behaviour is fixed in the future.

The Web Speech API

One more piece of feedback we received from showing it to actual low-vision users is that some of them asked for the application to speak out key actions. This was because either they couldn’t see the icons clearly, or they couldn’t immediately determine their meaning. Also, people tended to miss the possibility of further magnifying with pinch-and-zoom once the slider is zoomed all the way.

We turned to the Web Speech API, specifically the speech synthesis part, to address this.

if ('speechSynthesis' in window) {
  var msg = new SpeechSynthesisUtterance();
  var voices = window.speechSynthesis.getVoices();

  msg.voice = voices[0];
  msg.voiceURI = 'English India';
  msg.volume = 0.5; // 0 to 1
  msg.rate = 0.8;   // 0.1 to 10
  msg.pitch = 0.95; // 0 to 2
  msg.lang = 'en-US';
}

function speak(text) {
  msg.text = text;
  speechSynthesis.speak(msg);
}

In the above code, we first check if speech synthesis is available, and if so, create a new `SpeechSynthesisUtterance` object. We can get a list of voices the device supports using window.speechSynthesis.getVoices(), and we can even control the rate, pitch, and volume of the voice. We then made a short method to conveniently speak a line of text, and used it like so:

speak('torch mode enabled'); //speaks out aloud the text 'torch mode enabled'

PWA and offline

Magnifiers are the kind of app that you want to open quickly – just like the camera app. Opening a browser (or even a new tab) and entering the URL (or even accessing it from the speed dial or bookmarks) is cumbersome and time-consuming in such scenarios, especially for people with low-vision issues.

Making it a PWA – so that it works offline (and hence doesn’t need to connect to the network every time) and has an icon on the homescreen – was not just good UX, but a genuinely beneficial feature in the context of low-vision users.
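
As a rough sketch of the offline part, a small cache-first service worker that pre-caches the app shell would look something like this (the file names are hypothetical):

// sw.js
const CACHE = 'magnifier-v1';
const ASSETS = ['/', '/index.html', '/app.js', '/styles.css'];

self.addEventListener('install', event => {
  // Pre-cache the app shell so it loads without a network connection
  event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(ASSETS)));
});

self.addEventListener('fetch', event => {
  // Serve from the cache first, falling back to the network
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});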

Where to go from here

This was our experimental approach to helping people with low-vision issues by using the power of the web to create a better magnifier which is free and available everywhere.

It uses some new APIs and there are some bugs to deal with, but overall the feedback from the low-vision users we have shown it to has been fantastic. We hope more browsers support the Image Capture API in the future so that it’s available beyond Chrome on Android. (People asked for iOS support in particular – but Safari, while it supports WebRTC, doesn’t support the Image Capture API yet.)

We await further feedback on it – please send your thoughts to innovation@barrierbreak.com.

In my next post, I’ll be talking about the other tools we made to help people better understand various low-vision conditions by simulating those conditions on live video. Stay tuned!
