
Insight

Unlocking the original promise of home assistants


Written by

Emil Wasberger
Principal

Tore Knudsen
Lead Designer

Shaping generative experiences for the home

For years, “smart” homes have been surprisingly rigid – listening, but rarely understanding. At Manyone, we’ve shaped many of these experiences, often bumping against a technological ceiling where assistants function as tools, not partners. But the rise of generative agents is finally moving the needle from executing digital chores to understanding human life. We have now crossed a threshold where we can realize the true appeal of the assistant concept: an interface that adapts to us, rather than forcing us to adapt to it.

Moving beyond tasks

Assistants are already an integral part of many people's lives. Advances in speech recognition and synthesis have certainly enhanced the experience, but until recently, these improvements were largely superficial.

The rise of generative models with agentic capabilities marks a fundamental shift. The primary driver for these new experiences is the Large Language Model’s (LLM) ability to understand intent at a much higher level. From a user perspective, this means evolving from products that automate single tasks to agents that help achieve desired outcomes.

Consider the difference in command. In the old model, a user might say, "Set a timer for 20 minutes." In the new model, that request becomes, "Help me cook dinner tonight." The assistant no longer just watches the clock; it understands the goal. It shifts from being a tool you operate to a partner you collaborate with.
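The contrast between the two models can be sketched in code. This is an illustrative toy, not a real assistant API: the old model only acts when an utterance matches a known command exactly, while the new model (with the LLM stubbed out as a simple lookup to keep the sketch self-contained) maps an open-ended goal to a plan of tool calls. All names here (`run_command`, `plan_for_goal`, `COMMANDS`) are hypothetical.

```python
# Old model: a rigid command must match a known pattern exactly.
COMMANDS = {"set a timer for 20 minutes": ("timer", {"minutes": 20})}

def run_command(utterance: str) -> str:
    action = COMMANDS.get(utterance.lower())
    if action is None:
        return "Sorry, I didn't understand that."
    return f"Executing {action[0]} with {action[1]}"

# New model: a planner (in practice an LLM, stubbed here) maps an
# open-ended goal to a sequence of tool calls serving the outcome.
def plan_for_goal(goal: str) -> list:
    if "cook dinner" in goal.lower():
        return [
            ("recipes", {"query": "dinner tonight"}),   # find a recipe
            ("timer",   {"minutes": 20, "label": "pasta"}),
            ("lights",  {"scene": "kitchen"}),          # set the scene
        ]
    return []  # no plan recognized

print(run_command("Set a timer for 20 minutes"))
print(plan_for_goal("Help me cook dinner tonight"))
```

The point of the sketch: the rigid model fails on anything outside its command table, while the goal-oriented planner composes several tools in service of a single stated outcome.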

Two screens side by side: in both, a woman sits at a desk. On the left, chat bubbles show a smart assistant failing to fully understand her request; on the right, the assistant helps her complete her task.

LLMs understand intent at a higher level, evolving from automating single tasks to helping achieve outcomes

Orchestrating the home

The agentic nature of current systems unlocks the ability to plan and coordinate across the chaotic ecosystem of the home. Unlike traditional "smart home" setups where users rely on brittle, pre-defined scenes or automations, generative assistants can dynamically compose and recompose tools to achieve an intent.

Take the classic "Morning Routine." In the past, this was a fixed script. Today, an agentic assistant focuses on the outcome: start the day smoothly. It can draw on a set of device-level tools – lighting, thermostat, coffee machine, calendar – and decide in real time how to use them based on context.

If the assistant detects you overslept, it knows to skip the slow, ambient light fade-in and instead brew a stronger coffee. It prioritizes the outcome (getting you out the door) over the script.
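A minimal sketch of this outcome-over-script logic, with the device tools stubbed as plain functions. The structure and names (`Context`, `morning_routine`, the ten-minute threshold) are illustrative assumptions, not a description of any real system.

```python
from dataclasses import dataclass

@dataclass
class Context:
    minutes_behind_schedule: int  # derived from calendar + wake time

# Device-level "tools" the agent can compose; stubs for illustration.
def fade_in_lights(minutes: int) -> str:
    return f"lights fading in over {minutes} min"

def lights_full() -> str:
    return "lights at full brightness"

def brew_coffee(strength: str) -> str:
    return f"brewing {strength} coffee"

def morning_routine(ctx: Context) -> list:
    """Plan the routine around the outcome, not a fixed script."""
    steps = []
    if ctx.minutes_behind_schedule > 10:  # overslept: optimize for speed
        steps.append(lights_full())
        steps.append(brew_coffee("strong"))
    else:                                 # on time: gentle start
        steps.append(fade_in_lights(15))
        steps.append(brew_coffee("regular"))
    return steps

print(morning_routine(Context(minutes_behind_schedule=25)))
```

The same set of tools yields different plans depending on context; nothing here is a pre-defined scene that the user had to author in advance.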

Designing for human ambiguity

Humans are messy communicators. We pause, we imply, and we speak in fragments. Because generative models are trained on vast amounts of real human language – ambiguity included – they can handle natural conversation far more gracefully.

If information is missing, the assistant can utilize tools or sub-agents to fill in the blanks. As a user, this means we can finally interact with assistants on our own terms – no need to conform to rigid dialogue patterns or memorize specific syntax to express what we want.
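One way to picture this slot-filling: when a request leaves a detail implicit, a sub-agent or tool supplies it before the assistant acts. Everything here (`lookup_calendar`, the ten-minute lead time) is a hypothetical stub for illustration.

```python
def lookup_calendar() -> dict:
    # Sub-agent / tool that supplies context the user left implicit.
    # Stubbed with a fixed answer to keep the sketch self-contained.
    return {"next_event": "school run", "in_minutes": 40}

def handle(utterance: str) -> str:
    # The user never says *when* – a sub-agent fills in the blank.
    if "before i have to leave" in utterance.lower():
        ctx = lookup_calendar()
        lead = 10  # assumed default lead time, in minutes
        return f"Reminder set {lead} min before your {ctx['next_event']}"
    return "Okay."

print(handle("Remind me before I have to leave"))
```

The user speaks in a fragment; the assistant resolves it into a concrete action without forcing a clarifying back-and-forth.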

A woman writes at a cozy desk surrounded by plants while a home AI advises adjustments for focus; the interface notes changes to lighting, music, notifications, and the vacuum.

The real transformation isn’t that assistants can now do more, but that they understand more. They no longer need perfect instructions; instead, they interpret our intentions and adapt to our rhythms.

Emil Wasberger

Principal Design Technologist

Manyone

Harnessing flexibility

With great flexibility comes a massive design challenge. As designers, we must strike a balance between flexibility and usability. An infinite scope can feel overwhelming to users, leaving them unsure of what to expect.

Paradoxically, explicit constraints must be built into the interaction to help people navigate the breadth of possibilities. Carefully applied cues and examples are needed to guide the user, helping them feel empowered rather than lost in front of a "blank canvas."

A new approach to prototyping

Designing for generative interfaces requires us to rethink our toolkit. We need to shape intuitive and consistent experiences even though the flows are unpredictable and the expression might shift at any time.

To work with these interfaces, we need to incorporate interactivity and systems thinking into the way we communicate our designs. Static screens are no longer enough; prototyping live experiences is critical to understanding the "feel" of the model.

We must focus on "hero moments" – the high-impact interactions that define the relationship – because the number of edge cases is too high for exhaustive mapping. And while generative models can be delightfully creative, designers must ensure the model's "temperature" is aligned with the user experience.
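Aligning "temperature" with the experience can be as concrete as a per-task configuration: deterministic sampling where precision matters, higher variability where creativity is welcome. The task names and values below are illustrative assumptions, not recommendations.

```python
# Hypothetical per-task sampling configuration for a home assistant.
TEMPERATURE_BY_TASK = {
    "device_control": 0.0,  # deterministic: turning off lights must be exact
    "scheduling":     0.2,  # mostly factual, slight flexibility in phrasing
    "suggestions":    0.8,  # creative: dinner ideas can and should vary
}

def temperature_for(task: str) -> float:
    # Fall back to a conservative default for unknown task types.
    return TEMPERATURE_BY_TASK.get(task, 0.2)

print(temperature_for("device_control"))
print(temperature_for("suggestions"))
```

The design decision lives in the table, not the model: the same underlying LLM behaves as a precise operator for one class of tasks and a creative companion for another.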

In this new era, detailed specifications are less useful than analogies, guardrails, and principles. The technology is finally ready to fulfill the promise of the home assistant; the challenge now is designing the wisdom to wield it.


Want to know more?

Curious how we help brands navigate this evolution? Let's talk.

Together, we can design experiences that meet customers in the moment and move your brand ahead of the curve.

Emil Wasberger

Principal Design Technologist


Related work


Ask Safely

A strategic foundation for safer UX with AI chatbots


AugustMille

AI-powered spatial experiences for last mile delivery


Aura Air



Bezeq

A state of the art home router for Bezeq