UX writing for distributed interfaces

Writing for one interface can be tough enough. What about writing for multiple, connected UIs across different platforms? Remy Ferber shares lessons learned from writing for home appliances.

Imagine you’re working away on your laptop, with a phone on your desk, a wearable on your wrist, and Bluetooth headphones in your ears. You’re deep in thought as music plays quietly. And then you get a call.

That quiet, contemplative headspace suddenly gets replaced by ringtones, vibrations, and a sea of CTAs and suggestions on every screen. It’s disruptive, confusing, and irritating on a level you recognize only from getting requests like, “Can you just fix the copy?”

This is my experience every time someone calls me while I’m working. I’m all-in on the Apple ecosystem: MacBook Pro, iPhone, Apple Watch, and AirPods. Most of the time, it’s a smooth experience made better by connectivity. But when someone calls me, I’m so overwhelmed by where to look, what to read, and what to do that I usually decline the call just to make it stop. While the content may suit each individual UI, seen together it’s a mishmash of information and calls-to-action (pun intended).

In connected experiences that span more than one screen, writing for distributed UIs ratchets up the complexity of creating a clear, cohesive, and helpful experience.

Mockup showing phone calls appearing on different Apple products.

What are distributed UIs?

Distributed UIs are the multiple user interfaces within a connected ecosystem. You may see them in the same line of sight, but they appear on different devices: a phone, a wearable, a speaker, a car, a washing machine, and so on. How you interact with each UI can be the same or different: touch-based, voice-based, haptics, etc.

While each UI may offer its own experience, in concert with one another they have the potential to enhance or expand the product offering. In other words, the whole is greater than the sum of its parts.

For multi-screen ecosystems, content design gets complicated quickly. The UIs need to work together and apart. The UX may need to facilitate the same outcome in different ways. But with varying use cases and screen sizes, the UI becomes an even more critical consideration.

I’ve been lucky to work with distributed UIs in automotive and home appliances, previously at Volvo On Demand and now at Electrolux Group. Among the many challenges and fascinations, here are a few things I’m learning along the way.

Each UI needs a clear role

A good starting point is Michal Levin’s Designing Multi-Device Experiences. He outlines three design approaches for these ecosystems: consistent design, continuous design, and complementary design. This framework provides a starter guide for considering how the UX adapts and expands across distributed UIs.

Levin underscores how the contextual interplay between user and device can change everything—after all, we want to place the right information, on the right UI, at the right time. “By mapping the variety of contexts across an experience, and then framing the roles each device plays in the overall ecosystem, we can create a clear narrative and mental model for that multi-device experience.”

A common mistake I’ve observed is to define the UX on one device and overlook how that content adapts (if at all) across UIs. As UX writers, we’re used to thinking about how copy scales with responsive design and translation. But distributed UIs are more than just responsive.

A user may relate to and interact with each interface differently, so knowing what role each device plays is essential. What is that device uniquely positioned to do? Given that, how do we define the breadth and depth of content for this UI? It’s worth considering use cases, screen size, ergonomics, and inter-device dynamics. Together, these parts ladder up to the holistic content strategy that reflects our user goals and business objectives.

At Electrolux Group, we design for an array of connected appliances: ovens, cooktops, washing machines, robot vacuum cleaners, and more. On appliances, we write for screens ranging from 2.8” (7.1 cm) to 7.8” (19.8 cm), with a smartphone app to complement them. As cheesy as it sounds, I’ve started considering the role of each device based on its “minimum viable purpose.”

For example, someone buys a washing machine so they can have clean laundry. We can improve the experience and results with connectivity, but that is not a dependency for the washing machine to fulfill its primary purpose. The more content we try to pack into that screen—even if it seems to support the user’s needs and goals—the greater the risk of eroding that purpose and the non-negotiable value of that appliance.

While the appliance focuses on providing clean laundry, the app focuses on the experience of cleaning that laundry. The purpose of connectivity becomes the flexibility and personalization with which a user can achieve their goal despite the physical limitations of the washing machine. Both appliance and app play a distinct role, and therefore require a distinct content approach.

Identify the hero and their sidekick for each flow

Multi-screen experiences increase cognitive load: Where should I be looking? What will happen next on each UI? On which UI should I take action?

To mitigate this confusion, for every flow I like to designate one UI to be the hero and the other to be a sidekick. The hero holds the focus with dynamic content and CTAs. The sidekick refers back to the hero, sometimes explicitly saying, “pay attention over there,” to not distract or mislead the user. Together, they fight friction and smooth the way for an intuitive experience.