Contacts, on Steroids


VoiceOver design for a mobile app main menu


VoiceOver is Apple’s technology for allowing blind and partially sighted users to access iOS devices. As they describe it, VoiceOver is "a gesture-based screen reader that lets you enjoy using iPhone even if you don’t see the screen". Apple tries as much as possible to give VoiceOver support to developers ‘for free’ by enabling it on standard system interface elements that are used in apps, like buttons and labels, and by providing sensible default values if you don’t customise them.

From the beginning, I’d been working to build good support for VoiceOver into Intro. As part of that, I created functional UI tests for Intro using the KIF iOS Integration Testing Framework as part of Intro’s suite of automated tests. KIF tests run by controlling an app using the same accessibility frameworks that VoiceOver uses. This meant that from time to time, I’d go to set up a functional test for a new part of Intro and discover that it wasn’t possible to complete some step of the task—a button wasn’t clickable, say. I’d figure out why, and add the necessary accessibility label or enable accessibility for that element. As a result, because I have good test coverage of the key functional tasks users perform in Intro, I can be certain that those tasks can also be completed by users using VoiceOver.
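For readers who haven’t used KIF, a functional test like the ones described here is driven entirely through the accessibility layer. This is a minimal sketch; the test class name and the labels are illustrative, not Intro’s actual identifiers:

```swift
import KIF  // KIF iOS Integration Testing Framework

class MainMenuTests: KIFTestCase {
    func testCanOpenLearnSection() {
        // KIF locates views via their accessibility labels, so this tap
        // fails unless the button is exposed through the accessibility
        // APIs—the same APIs VoiceOver relies on.
        tester().tapView(withAccessibilityLabel: "Learn")
        // Waiting for content on the next screen likewise verifies that
        // it is reachable through accessibility.
        tester().waitForView(withAccessibilityLabel: "Learn Tutorial")
    }
}
```

If an element isn’t an accessibility element, `tapView(withAccessibilityLabel:)` simply times out, which is exactly the failure mode that surfaced the missing labels described above.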

Can be completed, yes. But that doesn’t always mean it’s pretty. As I familiarised myself with VoiceOver, I found screens where a heck of a lot of interface elements were accessible, which means you can end up swiping through a lot of what seems like noise. As a person deeply familiar with iOS, but completely unfamiliar with VoiceOver, it felt a bit overwhelming. As discussed, Apple enables VoiceOver for standard interface elements ‘for free’, because that’s the best default. But at times, some of what was presented in VoiceOver for Intro felt, as a result, too repetitive. And even as I spent more time digging into it, I realised I didn’t know what the best practices were for which interface content should be presented via VoiceOver. Purely decorative interface graphics—of course, omit. Core content being presented by the app—of course, include. The grey area was in things like whether to name containers of controls, or cases where a label was separate from a button, so the button name is spoken (a name not visible in the visual interface) and then the label text is spoken as well. It seemed like I should perhaps be customising the defaults so that users weren’t presented with needless repetition. But how to be sure?

Let’s come back to that at the end. Time to look at a specific example.

Auditing the Intro main menu

I’d noticed a little while ago something odd about Intro’s main menu and VoiceOver. There was no question the menu was accessible—every one of those functional tests started from here, after all—but I noticed a few things that weren’t right. There was some duplication that seemed unnecessary, and one button behaved differently to the others. So, as part of my year-long plan to make Intro the most accessible app on the App Store, I started with my focus on this main menu.

The first video here gives you a VoiceOver tour of that main menu in Intro 1.2.1, before any fixes had been made. (47 seconds, watch with either audio or Closed Captions to hear the VoiceOver content when swiping through the menu contents and back.)

There are a number of things I noticed here that needed thinking about:

  • As you swipe through, sections of each main button are targeted three times. First, the target is the entire button, and it says “[Name of button] button”. The next swipe, the target moves to the main button label, and reads the text—which is the same as the name of the button. Finally, the target moves to the subheading label, and reads that out. Three swipes for what is intended to be one button.
  • Along the way down this menu, one button is different. For the Learn button, the first target, of the entire button—is skipped. It goes straight to target the label, and then the subheading. This is unintentionally different to the other buttons, so whatever I decide is the right behaviour for these buttons, I need to make sure I fix this one so they all behave the same. As shown at the end of the video, you could still reach the Learn menu that this button is supposed to link to, because that button label was tappable. That label was being targeted by the functional test in this area, and so this discrepancy hadn’t been picked up via that process.
  • Somewhat similar to the first issue: after VoiceOver reads the three buttons across the bottom of the screen (“Settings button” and so on), three more swipes move you across them again while VoiceOver reads out the labels on each of these three buttons.
  • Finally, a possible issue right at the start. When VoiceOver is activated it states the name of the app, and then the first item targeted is the logo in the top left corner, where VoiceOver says “Intro logo image”.

The fix

This second video walks you through VoiceOver on the main menu in Intro 1.2.2, when the fixes discussed here had been implemented.

One target per main button: It seems pretty clear that each button on this main screen should be a single target for VoiceOver. The reason they were not became obvious pretty quickly when I looked at it. Each button had been built as a UIButton filling the full blue stripe, overlaid with an icon, a main label, and a subheading label. The label and subheading were both UILabels, used instead of the built-in UIButton label text because this provided the required layout customisation. However, since VoiceOver accessibility is enabled by default for both UIButtons and UILabels, all of them were being read out.

Fixing this was pretty simple for the main labels—given the button itself was already appropriately labelled in VoiceOver, reading out the label was redundant. I simply disabled the label as an accessibility element. What to do with the subheadings required slightly more thought. They offer information that is additional to the main button name, so it’s not redundant. But would a VoiceOver user really want this read out every single time? For users of the visual interface, it’s there as a hint as to what the main button is for, but it is also downplayed by putting it in a smaller size. Turns out, there’s an ideal option for this in VoiceOver—the accessibilityHint string. From Apple’s documentation, this is "A brief description of the result of performing an action on the accessibility element, in a localized string." Sounded perfect. I applied the text of each subheading as an accessibilityHint on the main button, and then like the main label, set isAccessibilityElement to false for the subheading itself. As you see in the video, initially VoiceOver reads out the main button label. If you keep moving, you don’t hear the hint. In the second half of the video I pause longer between swipes. When you leave an accessibility element as the target after the label is read out, after a slight pause VoiceOver reads out the hint. Perfect. Finally, VoiceOver users can control whether they hear hints or not… so an Intro power user could choose to turn these off. Even better. Ironically, I’ve been toying with whether Intro should offer a feature in Settings for users to hide these written hints, when they are familiar with Intro. So in this case, VoiceOver users have a feature today that might one day also be provided to other users of the app. Nice turn of the usual tables?
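In code, the fix described above amounts to a few property assignments. This is a sketch based on the structure described here (a UIButton with two separate UILabels layered over it); the function and parameter names are mine for illustration, not Intro’s:

```swift
import UIKit

// Sketch: collapse a button plus its two overlaid labels into a single
// VoiceOver target. Names are illustrative.
func configureMenuButton(_ button: UIButton,
                         mainLabel: UILabel,
                         subheadingLabel: UILabel) {
    // The button is already the actionable, labelled element.
    button.isAccessibilityElement = true
    // Fold the subheading into the hint, which VoiceOver reads after a
    // short pause, and which users can disable entirely.
    button.accessibilityHint = subheadingLabel.text
    // Both labels duplicate content already on the button, so hide
    // them from VoiceOver.
    mainLabel.isAccessibilityElement = false
    subheadingLabel.isAccessibilityElement = false
}
```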

That different Learn button: Nothing complex here. At some point, for unknown reasons, the main Learn button had had isAccessibilityElement set to false. This was toggled back on, and the button is now treated just like the others above.

One target for Settings, Help and Support buttons: Similar to the above, these buttons are in fact an assemblage of a UIButton, an icon, and a label. As the buttons were already correctly labelled for accessibility, all that was required here was marking the additional label as not an accessibility element, as above.

The logo: My understanding is that graphics that are purely decorative should probably be excluded from VoiceOver—though I’d be pleased to get advice on this. The Intro logo, though, I felt was not purely decorative. For people using the visual interface, the logo serves as a marker that Intro is the app they are in, and an anchor that this menu is the top level in the Intro hierarchy, as it doesn’t appear anywhere else in the app. For those reasons, I thought this probably should be an accessibility element.

Actually using the app in VoiceOver, I felt quite differently. Firstly, it seemed redundant to always have to swipe away from the logo—which is not selectable and has no associated action—to get to the first button. Secondly, VoiceOver was reading redundant content, because it already announces the name of the app when you launch it or change to it. That was the clincher. VoiceOver already provides by default the marker purpose that the logo serves in the visual interface, so it isn’t needed as an accessibility element.

Takeaways

VoiceOver comes for free, which is great, but there are some caveats. Most important among these is to watch for times you’re composing multiple UI elements, like a UIButton and one or more UILabels, into a single logical target. Make the item on which the user can take action the accessibility element, set all relevant information in the accessibility label and accessibility hint, and disable accessibility on the other parts of the assemblage. For an even cleaner look, ensure that the tappable item also has the most logical target outline, such as by ensuring the button itself fills the entire part of the view that appears shaded as that button in the visual interface. That way, the target outline in VoiceOver will nicely match your visual interface. Remember, many visually impaired users have some vision, and not all VoiceOver users are visually impaired at all—they may be using these accessibility frameworks due to motor difficulties, convenience, or many other reasons.
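The composed-control pattern above can be wrapped into a small helper so every such control in an app is treated consistently. A sketch with illustrative names of my own; it assumes the actionable button and the decorative subviews are all direct subviews of the container:

```swift
import UIKit

extension UIView {
    // Collapse a composed control (background button plus icon and
    // label subviews) into a single VoiceOver target.
    func collapseForVoiceOver(action button: UIButton,
                              title: String,
                              hint: String? = nil) {
        // The actionable button becomes the one accessibility element.
        button.isAccessibilityElement = true
        button.accessibilityLabel = title
        button.accessibilityHint = hint
        // Hide every other subview from VoiceOver so the button is the
        // single target, matching the visually shaded area.
        for subview in subviews where subview !== button {
            subview.isAccessibilityElement = false
        }
    }
}
```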

As for actually implementing fixes like this, once you get your head around VoiceOver, it’s not that difficult to do. A contact of mine at New Zealand’s Blind Foundation previously trained their clients in using mobile technologies. (She also has expert lived experience of being blind.) She put me on to the LookTel VoiceOver Tutorial for iPhone. This great app walks you through VoiceOver, going beyond the basics into the more sophisticated ways that you interact with an iPhone using VoiceOver. I can’t recommend highly enough the benefits of using this free app to get your head around a user’s VoiceOver experience, and to begin to see what a VoiceOver user might expect in your app.


These changes went out in Intro 1.2.2. More changes are coming soon. It took way longer to make these videos and document these changes than it actually took to identify the issues and resolve them. I’m going to keep on writing about this journey to make Intro as accessible as possible, and if you’re a developer, I’d encourage you to do the same—the more examples and resources out there, the better.

My main accessibility challenge to other developers though is to try and ship accessibility improvements to your app every month for a year—ideally shipping them with every release. Changes like this don’t take long, and they make a big difference to many users (and potential users you’d otherwise lose). They also give you something good to write about in those release notes, which is great, since Apple now shockingly require release notes to contain actual content.

The most accessible app on the App Store

Secret intention:

Make Intro the most accessible app on the App Store.

Hidden agenda: 

Make Intro awesome for everyone. Try to spawn a wave of developers that make their own apps even more accessible, out of their sheer competitive enthusiasm. Which would then be a win, whether I win or lose.

Dastardly plan: 

Make enhancing accessibility a continuous development objective. Build on the accessibility efforts I've already made as Intro was built, by shipping a significant accessibility improvement to Intro every month. Begin with the obvious. Start with auditing and improving the current support for VoiceOver. Implement captioning and audio descriptions into the Learn tutorial video, and any subsequent videos created in Intro. Implement support for dynamic text and larger text sizes, with the associated interface redesign that will be required in some parts of Intro. Along the way, collect and integrate end user feedback from a diverse range of users of accessibility features. Engage with other app developers and influencers to raise and focus on the issue of app accessibility. Examine and discuss app accessibility guidelines to broaden the ways in which Intro can be made accessible. Then go beyond all of this, targeting cognitive accessibility. Examine implementing settings within Intro to enable local decisions for each app installation regarding the cognitive complexity that will be presented to the user. Take heed in this of the global work currently going on to develop cognitive accessibility standards, and contribute to that work.

Secrecy procedure:

Document each step of this accessibility journey at, to ensure that it remains tightly under wraps and that there is no chance that it could assist other developers to improve the accessibility of their own products.

Nefarious timeframe: 

t-minus one year, and counting. A continuous process never ends, but as a start I'm giving myself one year to make some great enhancements to Intro's accessibility.


Join the conversation on Twitter: @greatintro & developer @dbabbage #GAAD. And if you're a developer taking up this challenge, ping me so I can let you know about The #AppA11y Pledge (coming soon). DMs are open.


I have been thinking through how best to share this plan since I launched Intro in February 2018. Once Intro 1.2.1 was complete, I started writing this manifesto, and while I was doing that Global Accessibility Awareness Day appeared on my radar. I had still planned to publish this sometime in the last week, in which case this post-script would have said, 'I didn't hold back the publication of this until Global Accessibility Awareness Day, because every day is a good day to focus on accessibility.' But things take time, and here we are; ahead of most of the world as New Zealand is, it's now Global Accessibility Awareness Day here. And you know, it feels good to be part of the #GAAD global party.

Mobile app marketing, privacy, and the Facebook SDK

Intro is a long labour of love from me as a clinical psychologist turned self-taught app developer. It’s all about making more effective connections with people, so launching on Valentine’s Day seemed appropriate. On day one it was ‘in stock’ and available for sale in 155 countries in the App Store. The payment and fulfilment infrastructure that makes this possible is a radical departure from the historical challenges that faced a one-person exporting company in the past, let alone one based in New Zealand. There’s never been more opportunity for a supplier of a product than there is today for digital goods. The flip side is, of course, that with millions of apps to choose from, there’s also never been a noisier marketplace of choices for consumers. To build a successful business, I need to reach people who want the product I’m producing. How do I cut through the noise?

One route that other app developers have found success with is advertising. For a digital product, online advertising is the logical route: reaching potential customers in a context where they can directly act on the message. Among the online advertising options, Facebook app install ads stand out as likely the most effective medium, and it’s one I’m testing. Intro runs only on iPhone, and requires iOS 10 or newer. On Facebook, I can ensure my ads are shown only to users on these devices: an obvious win for my advertising spend, even if I pay a bit more for the privilege of such targeting, and a win for other Facebook users who couldn’t run the app and won’t see an irrelevant ad. So far, so good, but these targeting technologies can go further than that.

Recently, there has been further coverage of the tracking technologies used by a number of companies, most notably Facebook, to collect information about websites that people visit. When a website includes the Facebook pixel, Facebook receives information about sites you’ve visited, even if you are not logged into Facebook at the time. They have indicated this is used to enable Facebook’s ads to be more effectively targeted in the future. The presence of this pixel on a website also means that Facebook receives some information about you (your IP address, at least), even if you’ve never created a Facebook account and have never used Facebook. Many people feel uncomfortable about this kind of tracking, and recently there have been increasing moves from the European Union in particular to examine whether Facebook is violating privacy laws with this technology.

While tracking people across the web has been in the news, less visible are the ways in which user behaviour can be tracked and reported from within native apps as well. Akin to the Facebook pixel, Facebook provides app developers with an SDK—a software component—that can be incorporated into a third party app. This can provide a number of services. Most obvious is when an app allows a user to log in to an account in the app using their Facebook account, rather than creating one specific to the app. Facebook also provides an analytics service to app developers, enabling you to log key events of interest within your app and get anonymised statistical summaries of your users’ activity as a result. This doesn’t require the Facebook login system to be in use in the app, and an app using only the analytics service would show no visible sign to a user that data is being sent to Facebook at all. Now, in a general sense, collecting such analytics is a routine part of maintaining a healthy app, and many app developers use an analytics service to assist with this. Facebook, however, offers something other packages do not—tying user behaviour inside your app back to previous clicks on Facebook ads. And while a developer can be selective in what they send back, Facebook’s default recommendation is a mode that automatically logs a lot of user behaviour.

Why would a developer want to do this? Intro is free to download, but the main features of the app require a monthly subscription. The value Intro provides users is in ongoing personal productivity and isn’t predicated on network effects… unlike, say, a social network, Intro’s value to a user isn’t dependent on whether their friends are using the app. Therefore, from a business standpoint, recruiting users to the free product is of only limited benefit. Business viability will be based almost solely on recruiting people who continue on from a one month free trial to subscribe to the full features of the app. Clearly, the ideal scenario would be to connect with those people who would benefit from what Intro offers and would be happy to form an ongoing partnership with me as the developer by subscribing to the full features of the app. If advertising leads to many downloads of the free app but fails to find the users who would actually want to subscribe, it might drive traffic but be expensive and ineffective. Facebook’s SDK provides the opportunity to send back to Facebook the signal the developer is actually interested in—which users not only clicked the link in the ad, but installed the app and then went on to start a free trial or make an in-app purchase. Based on this information, Facebook will dynamically tune advertisements to be presented to those Facebook users most likely to engage in this same behaviour. Extremely clever. And financially, it sounds like an immediate win for my marketing efforts.

Despite that, I won’t be installing the Facebook SDK in Intro. It was immediately clear to me that Intro’s users would not expect any action within the app to be reported back to Facebook. Indeed, if you’re a user, you’d have a clear expectation that your activity within the app certainly would not be reported back to Facebook, or any other social network. This would be true for any user, but particularly so if you’ve never used Facebook, let alone clicked on an Intro ad, and you found the app through another route. Sending data that identified those users’ devices to Facebook would be far beyond the pale. Yet, this is the way such analytics work.

Facebook’s ability to know the final outcome of an advertising click within an app, and to use that to make more effective use of a developer’s ad spend, is impressive technology. But as a developer, I must be a vigilant guardian of my users’ data and of their privacy. Intro’s Privacy Policy informs users that I do aggregate some user activity data for analytics purposes. This is done using third party services that are focussed solely on providing analytics, and that are not harvesting user data to target subsequent advertising delivery. So while I may advertise on Facebook, if I can demonstrate sufficient return on investment, that’ll be based just on those initial clicks on the ad, without Facebook knowing what happens after that. Even if that costs me considerable efficiency in my advertising spend. Because users matter more than money. That was never in question. And that’s the way it should be.

Dr Duncan Babbage is founder and developer of Intro for iPhone, an app for connecting effectively with people through learning people's names, recognising faces, and keeping track of your personal network. He describes it as like Contacts, on steroids. Duncan is also Director of the Centre for eHealth and an Associate Professor in Rehabilitation at Auckland University of Technology, New Zealand, where his research examines health innovation, mobile technology in healthcare, and neuropsychological rehabilitation.