
Engineering

How designing an invisible app almost killed us

Lessons learned from building automation products that give good feedback.

Without looking up, how many lightbulbs in your office are out? How strong is your phone’s signal? Unless you’re sitting in darkness or dropping calls… why would you want to know?

“Nobody notices when things go right.”
— Zimmerman’s Law of Complaints

Problems, as it turns out, are the fastest way to bring visibility to your day’s mechanics. Everything is smooth sailing until it isn’t, and then whatever failed is suddenly the center of attention.

This makes the new class of automated tech really hard to get right.

Harry Potter in the Invisibility Cloak
For the muggle world, not all invisibility is magic.

You have to earn invisibility

Automated products hit a wall if you make them run silently in the background before teaching people how they work. Invisibility is earned, a lesson we learned the hard way.

The quick and dirty background about our app: Robin is a room booking tool for offices that runs on mobile, web, and tablets. You can book from the apps directly, but there’s also a twist for mobile users — we can run in the background and use iBeacons to detect when you’re in a room, booking calendars automatically. This means your team stays updated with a live feed of who and what’s available.
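To make the presence flow concrete, here is a minimal sketch of what "beacon sighting in, calendar booking out" could look like. All names (`PresenceBooker`, `on_beacon_sighting`, the beacon IDs) are hypothetical, not Robin's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Room:
    name: str
    beacon_id: str
    occupants: set = field(default_factory=set)

class PresenceBooker:
    """Hypothetical sketch of the beacon-sighting -> auto-booking flow."""

    def __init__(self, rooms):
        self.by_beacon = {r.beacon_id: r for r in rooms}
        self.bookings = []  # (room, user) pairs, standing in for calendar events

    def on_beacon_sighting(self, beacon_id, user):
        room = self.by_beacon.get(beacon_id)
        if room is None:
            return None  # unknown beacon: ignore the ping
        room.occupants.add(user)            # feeds the live "who's here" view
        self.bookings.append((room.name, user))  # book the room's calendar
        return room.name

booker = PresenceBooker([Room("Tardis", "beacon-12")])
booker.on_beacon_sighting("beacon-12", "emily")  # books "Tardis" for emily
```

The user never opens an app; the phone's beacon sighting does all the work.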

The fact you don’t need to open the app to use it is where our problems start.

Original screenshot of the Robin app
The original app showed where you were, and did everything else in the background.

Our early approach (as you can see in the screenshot above) was “Keep your phone in your pocket, we’ll handle the rest”. The thinking was that if people just had the app installed, we could do the rest automatically based entirely on presence data. It was a noble idea (aren’t they all?), but after our beta launched we quickly realized being so far out of the way created a new mess of problems.

Failing to walk through a door
How do I door.

The challenge with “hiding the work” too early is people never learn how to do it manually — just ask anyone who’s walked into a closed door. In usability, these cues are called affordances, and are how (most) people manage to use everyday objects like chairs without instruction manuals. New tech already starts with a knowledge gap. Early on you’re better off automating 95% and requiring a button for the final 5% than skipping the education step.

Then there’s the issue of credibility. If you can’t see how a decision is made, it’s harder to trust the results. Amazon reminds you recommendations are “based on your purchase of X” for a reason.

Missing feedback moments

In our case, getting feedback from the beta was very difficult. We would talk to customers several weeks in and hear, "Oh, I actually haven't opened the app recently." Things were mostly working, but we had a visibility problem, and our usage metrics alone weren't telling stories we could act on. We were flying blind.

The Robin Dashboard showing room presence
“Oh weird, you’re not showing up yet.”

Worse still, when the app did have bugs, it failed silently in the background. Our early users only noticed Robin wasn't working when rooms double-booked or the web dashboard stopped showing a coworker.

The only time people remembered we were there was when the app had issues, which is a terrible way of reintroducing yourself.

Getting to this point was slow because, when you're working with a system that has no screens, it's hard to give people "it's working" feedback without interrupting them constantly. On the other hand, by the time it's not working you may no longer have the option to tell anyone, like only noticing a broken air conditioner after you start to sweat.

What we missed

Our first version fell into the trap of believing that solving the problem meant people didn't want to think about it at all. In reality, people just wanted to think about it less. The lack of app feedback also muted the gratifying "I'm a superhuman" moment of a room recognizing you and jumping into action.

  • Regular feedback is a big deal when you’re moving towards product market fit.
  • When you sell something that feels like the future, people want an easy way to show it off.

Our app was like the average Boston startup —
it did useful things and forgot to tell anyone.

How to increase visibility

We had an invisible man problem to solve. So how do you design an automated product to be more visible while still letting it, well… automate?

We asked five questions:

  1. How would a user discover if the app stopped working?
  2. What events are worth interrupting a user’s day for?
  3. How many decisions does the user have to make for the app per day? Week?
  4. What other things change when the app is working? Think secondhand feedback. (e.g. Does it post to friends' activity feeds? Update a display somewhere?)
  5. How often does the app have new information? (e.g. Timehop has one update daily; chat apps have unlimited)

Getting these answers allowed us to make more directed product decisions with feedback in mind.
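Question 3, in particular, implies some kind of cap on how often you interrupt people. Here is one way that budget could be sketched, assuming a hypothetical `InterruptionBudget` with a rolling 24-hour window (none of this is Robin's actual code):

```python
from collections import deque

class InterruptionBudget:
    """Hypothetical cap on decisions we ask a user to make per day."""

    SECONDS_PER_DAY = 86400

    def __init__(self, per_day=3):
        self.per_day = per_day
        self.sent = deque()  # timestamps of interruptions in the last 24h

    def allow(self, now):
        # Expire interruptions older than 24 hours, then check the cap.
        while self.sent and now - self.sent[0] >= self.SECONDS_PER_DAY:
            self.sent.popleft()
        if len(self.sent) < self.per_day:
            self.sent.append(now)
            return True
        return False  # over budget: stay invisible this time
```

Anything over budget either waits or gets routed to a passive surface (a feed, a dashboard) instead of a push.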

A reason to leave the pocket

Compared to the browser version, mobile was a lame duck. We took tools that previously only existed on the web dashboard and made one-touch versions for the mobile app. If you pick good presets, most people don’t miss the extra options.

We started with the ability to search for free rooms, the first feature that gave people a reason to open the app regularly. The unexpected bonus was that you could now use the app remotely and check who was in the office from your commute. People liked that.

People and place sidebars in Robin mobile app for iOS
These sidebars were the first step to adding in-app actions

Being invisible also robbed users of their chance to explore new features as they came out. Today most app stores update apps in the background, so nobody checks for new features, least of all people who thought the app was supposed to run silently in their pocket. We had to retrain people to think of the app as a tool instead of a passive experience.


From there, our fixes broke down to three main points.

#1 Ask for decisions

Captain Picard makes it so
Make it so.

“This room is booked in 10 minutes. Find another one?”

Just because the system can figure it out doesn't mean it should. Optional control is a powerful UX tool. Decisions are part of why checking email (and recently Slack) is so addictive: there are constantly new messages you can respond to. Asking users to make a decision also earns instant visibility at the exact moment you're doing something useful. Good decision notifications are an app's version of a humble brag:

“Hey I’m doing something useful here.
Do you want to be involved?”

When UX or technical issues get in the way of a decision, users tell us, and that's feedback a product team can act on. Read-only information tends to draw feedback about accuracy or speed instead of utility and output.
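The "find another one?" prompt above boils down to turning an upcoming conflict into an optional decision rather than resolving it silently. A minimal sketch, with a hypothetical `decision_prompt` helper and threshold:

```python
def decision_prompt(room, minutes_left, threshold=10):
    """Hypothetical: surface an upcoming conflict as an optional decision
    instead of silently rebooking on the user's behalf."""
    if minutes_left <= threshold:
        return f"{room} is booked in {minutes_left} minutes. Find another one?"
    return None  # nothing worth interrupting anyone for yet
```

Returning `None` is the important half: most of the time the system stays quiet, and the prompt only appears at the moment it's genuinely useful.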

#2 Add notifications

“Emily just arrived at the office”

Hindsight is always 20/20, but at the time launching with this one felt forced. The first version of the app focused on getting information to the platform and crunching the good stuff there. Notifications seemed like the opposite of good design, since we prided ourselves on staying out of the way.

On second thought, we realized almost all of our interactions started with people being somewhere. With good notifications, we could connect users to context and decisions about a space at the exact moment it's least likely to be intrusive. A bit like Foursquare tips, but at the room level.
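One wrinkle with presence notifications: beacons ping continuously, but "Emily just arrived" should fire once per arrival, not once per ping. A sketch of that debounce, with hypothetical names throughout:

```python
class ArrivalNotifier:
    """Hypothetical: notify only on the away -> present transition,
    not on every beacon ping."""

    def __init__(self):
        self.present = set()

    def on_ping(self, user):
        if user in self.present:
            return None  # already here; stay quiet
        self.present.add(user)
        return f"{user} just arrived at the office"

    def on_exit(self, user):
        self.present.discard(user)  # next ping counts as a fresh arrival
```

The state machine is trivial, but it's the difference between one well-timed notification and a firehose.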

#3 Build the neighborhood

Devices can do more if they meet the neighbors

“Can’t we just have the iPad do it instead?”

A common fixture in forward-thinking offices is the wall-mounted iPad. We originally resisted, on account of it feeling a little clunky. Then we realized it wasn't about the iPad as a centerpiece; it was about what it could do as a wingman.

The device that triggers an action doesn't have to show the feedback. By recruiting other devices in the neighborhood (e.g. coworkers' apps, wall-mounted iPads) to show updates, we multiplied the points of ambient feedback. Originally, when you walked into a room running Robin, nothing changed. Now when you walk up to a room, the iPad shows your face alongside everyone inside. Even better? The phone stays in your pocket.
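Routing feedback to the neighborhood could look something like the sketch below: prefer a shared room display, fall back to a coworker's app, and only interrupt the user's own phone as a last resort. The screen kinds and priority order are our hypothetical example, not a Robin API:

```python
def pick_display(screens):
    """Hypothetical: send feedback to the best free screen nearby,
    preferring shared displays over interrupting someone's phone."""
    priority = {"room_display": 0, "coworker_app": 1, "own_phone": 2}
    available = [s for s in screens if s["free"]]
    if not available:
        return None  # no screen nearby: feedback waits
    return min(available, key=lambda s: priority[s["kind"]])

nearby = [
    {"kind": "own_phone", "free": True},
    {"kind": "room_display", "free": True},
]
pick_display(nearby)  # chooses the room display over the phone
```

The interesting design choice is that "no screen available" is a valid answer; ambient feedback degrades gracefully instead of forcing a push notification.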

Interfaces become whatever devices are currently nearby.

We believe this is how “UI-less” apps will grow in the physical world. It’s not so much about a “post-screen” experience as much as “Whatever screens are free”. This may sound squishy, but it’s early days and the industry still has a lot to experiment with. In the meantime, we’re focused on making stuff people can see and have opinions about.


As you build interactions for the next wave of tech, make sure the experience reminds users it exists. Picking the right parts of your product to un-automate will require some experimentation, and you probably won’t get it right the first time around. Fortunately, automation products are at the very beginning of their time to shine.

There’s an elegance in minimalism, but there’s also a danger in being overlooked. Completely invisible may sound good on paper, but just invisible enough? That’s elegant.
