Mozilla Assistant
On-device AI assistant

Overview
Mozilla Assistant is a privacy-first, on-device AI assistant. Our goal was to design a lightweight prototype to explore a core question: Can we deliver intelligent, context-aware assistance without sending user data to the cloud?
This raised key UX challenges: How do you build trust in a system that operates mostly in the background? How do you support power users who expect transparency and control—without overwhelming them with complexity?
I collaborated closely with engineering to design a prototype for early user testing and stakeholder review. This work laid the foundation for future research into how people perceive local AI and how much visibility and control they actually want.
Design Objective
To test early hypotheses for Mozilla Assistant, we used rapid prototyping to explore a lightweight, privacy-first AI assistant that reflects Mozilla's values: privacy, transparency, and user control.
I acted as a bridge between design and engineering, translating technical constraints—like local processing and encryption limits—into practical, user-friendly experiences. We prioritized speed and insight over polish, aiming to uncover early user signals and align internal stakeholders.
My Role
Framed product assumptions and prioritized what to explore first.
Designed core flows and built prototypes in Figma for fast iteration.
Partnered with engineering to translate technical limits into UX.
Workflow

Initial Vision of Mozilla Assistant
Our Vision
Mozilla Assistant aims to be a true personal AI assistant: one that understands users' preferences and routines while keeping their data secure and private. It builds a local, encrypted, on-device knowledge base, so that information remains fully under the user's control.
Competitive Research
Competitive Research
I'm unable to share this section due to a request.
Forming a Hypothesis
Hypothesis
We envision Mozilla Assistant as a privacy-first assistant designed to prioritize user trust and control.
Transparency is a core principle. Permission requests are clearly surfaced in the interface, and no data is shared without explicit user opt-in. The goal is to make data sharing feel intentional, seamless, and respectful—never disruptive.
Security is treated as a first-class priority, influencing decisions across both interface and experience design.
Why This Hypothesis
- Trust is the primary barrier to adoption for AI in privacy-aware audiences.
- Users expect control; transparent, opt-in permissions make data sharing feel safe and intentional.
- Security isn't optional; it's foundational to earning and maintaining user confidence.
Design Challenges
Designing for a Fast MVP, Aligned with Tech Constraints
To enable a fast, feasible MVP, I partnered closely with engineering to understand constraints around on-device AI, encryption, and data storage. This early alignment shaped my UX approach—favoring familiar patterns and lightweight logic to ensure designs were implementation-ready and flexible as tech decisions evolved.
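To make those constraints concrete, here is a minimal sketch (TypeScript, using the browser's standard Web Crypto API) of the kind of local-only handling the designs had to accommodate: user data is encrypted with a key generated on the device before anything is persisted. The function and key names are illustrative assumptions, not the actual implementation.

```ts
// Minimal sketch, assuming the browser's standard Web Crypto API.
// createLocalKey and encryptEntry are hypothetical names for this example.

async function createLocalKey(): Promise<CryptoKey> {
  // AES-GCM key generated and held on the device; it is never exported or sent anywhere.
  return crypto.subtle.generateKey({ name: "AES-GCM", length: 256 }, false, [
    "encrypt",
    "decrypt",
  ]);
}

async function encryptEntry(
  key: CryptoKey,
  text: string
): Promise<{ iv: Uint8Array; data: ArrayBuffer }> {
  // Fresh random nonce per entry, required for AES-GCM.
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const data = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(text)
  );
  // The ciphertext (plus iv) is what gets written to local storage, e.g. IndexedDB.
  return { iv, data };
}
```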
Privacy-first UX
Designing for an AI assistant meant putting privacy and ethics at the core of the experience. It was essential to clearly communicate how user data is collected, stored, and used—while giving people control without creating friction. Beyond compliance, the goal was to build trust: making transparency intuitive, choices meaningful, and ensuring the assistant felt helpful—not intrusive.
Designing for scale
With many unknowns in the product's roadmap, designing for scalability was critical. I focused on building flexible components and patterns that could adapt as the product evolved—minimizing rework and enabling faster iteration down the line.
Exploring Early Concepts
I used low-fidelity wireframes to quickly define core user flows and align early with product and engineering. This lean approach helped us:
- Explore layout and interaction ideas with speed
- Gather fast, actionable feedback from stakeholders
- Identify technical constraints early and adapt confidently
Sharing early concepts kept the team aligned, minimized rework, and set us up for high-fidelity design with clarity and momentum.




Clarity at Every Step

1. Account Connection
The flow starts with a focused, distraction-free prompt to connect an account—carefully designed to avoid confusion or unnecessary navigation.

2. Syncing Progress
Showing real-time syncing feedback helps manage expectations and prevents user uncertainty. Visual cues and messaging make it clear the system is working—reducing drop-off during wait time.

3. Ready to Chat
Once syncing is complete, users land directly in the chat experience, with full context. No confusion, no extra steps—just a smooth transition that respects their time and keeps momentum going.
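For illustration, here is a minimal sketch of how these three steps could be modeled as explicit states, so the interface always has clear status copy to show. The state names and messages are assumptions for this sketch, not the product's actual strings.

```ts
// Hypothetical sync states for steps 1-3 above; names and copy are
// illustrative assumptions, not the product's actual strings.

type SyncState =
  | { kind: "connecting" }                 // step 1: account connection
  | { kind: "syncing"; progress: number }  // step 2: syncing, progress in [0, 1]
  | { kind: "ready" }                      // step 3: ready to chat
  | { kind: "error"; message: string };

function statusCopy(state: SyncState): string {
  switch (state.kind) {
    case "connecting":
      return "Connecting your account…";
    case "syncing":
      return `Syncing locally… ${Math.round(state.progress * 100)}%`;
    case "ready":
      return "You're all set. Start chatting.";
    case "error":
      return `Something went wrong: ${state.message}`;
  }
}
```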
Transparency about data usage
Designing for Informed Choice
This moment introduces a data-sharing request in plain language, placed directly in the chat to feel contextual. The goal was to offer a clear, timely choice—so users know what's being asked and why, without digging through settings.


Respectful Transparency
When users choose to share data, we acknowledge their choice clearly and let the assistant continue seamlessly. The goal is to reinforce trust by making consent feel natural—without interrupting the flow.
Graceful Opt-Outs Without Penalty
If users decline, the experience continues without pressure or penalty. The assistant adapts accordingly, respecting their choice while keeping the flow smooth and functional.
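A rough sketch of the consent pattern described above, using assumed names: the request carries a plain-language scope and reason, granting enriches the assistant's context, and declining simply continues without that data.

```ts
// Illustrative sketch only; DataRequest and respondToConsent are assumed names.

type ConsentDecision = "granted" | "declined";

interface DataRequest {
  scope: string;   // what is being requested, in plain language (e.g. "calendar events")
  reason: string;  // what the assistant will do with it (e.g. "suggest meeting times")
}

function respondToConsent(request: DataRequest, decision: ConsentDecision): string {
  if (decision === "granted") {
    // Acknowledge the choice clearly and continue with the richer context.
    return `Thanks! I'll use your ${request.scope} to ${request.reason}.`;
  }
  // Graceful opt-out: no penalty, the assistant keeps working without that data.
  return `No problem. I'll keep helping without access to your ${request.scope}.`;
}
```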

Designing Trust Through Settings
Trust in AI begins with transparency and control. These settings give users meaningful choices over data, preferences, and assistant behavior—making control a core part of the experience.

Users can delete their account at any time—no questions asked.

Users can clear their memory and past interactions whenever they choose.

Users can select which AI model powers their assistant experience.

Users can enable or disable extensions based on what they need or trust.
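As an illustration, the controls above could map to a small, user-owned settings model along these lines. The field names and model identifiers are assumptions for the sketch, not the real configuration.

```ts
// Sketch only; field names and model identifiers are assumptions.

interface AssistantSettings {
  model: "local-small" | "local-large";  // which on-device model powers the assistant
  extensions: Record<string, boolean>;   // enable or disable each extension by name
}

interface AccountActions {
  clearMemory(): Promise<void>;    // wipe stored context and past interactions
  deleteAccount(): Promise<void>;  // remove the account and all local data, no questions asked
}
```

Modeling the destructive actions as explicit, separate calls rather than toggles mirrors the principle above: every data decision should be deliberate and clearly acknowledged.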
Dark Mode for Modern Use
Dark mode was designed not just for aesthetics, but to reduce eye strain and improve comfort in low-light environments—meeting both user needs and modern platform expectations.

Designing for Fast Iteration and Familiar Use
To speed up iteration, I leaned on familiar UX patterns and avoided complex custom UI. This kept the design intuitive, reduced dev effort, and let us quickly prototype and evolve—without compromising usability.



Designing with Mobile in Mind

Although the current product is web-based, I approached the design with a mobile-first mindset where it made sense. From layout choices to interaction patterns, I considered how the experience could translate to smaller screens with minimal rework.
Typography
Typeface: Inter
Weights: Regular, Semi-Bold
Main Colors
Learning
Build Fast, Learn Faster
This project reinforced the need for an agile, test-and-learn mindset over a linear build. Given the complexity and ambition of the product, fast iteration based on real-world feedback is essential—not just for usability, but for achieving product-market fit.
Balancing Agility with Control
One key takeaway was the value of combining an agile approach with a gated release strategy—moving quickly, but within clearly defined phases. This allows us to learn fast while keeping scope focused and risk manageable at each stage.
I Am Not the User
While I'm personally excited about the vision, I've learned to check that excitement against user behavior, data, and testing signals. I am not the user—and assuming otherwise can derail even the best ideas. The real impact comes from asking better questions, observing how people actually use the product, and knowing what to test next and why.
Designing for Trust
Designing for privacy and trust adds a unique layer of responsibility. Every opt-in, every permission request, every moment of friction must be intentional. It's not just about features—it's about ensuring the experience reflects user values and long-term strategy.
Staying Curious and Outcome-Oriented
Moving forward, I'm focused on staying curious, integrating feedback loops early, and making sure design decisions are grounded in both user needs and measurable outcomes.
Explore my portfolio in Figma
Design speaks louder than words. Check out the full Figma file to see the process, decisions, and iterations behind this project.
Check Out Figma
Previous Explorations

Let's make something users will love!
If you're looking for a product designer who can bridge the gap between design and code, I'm here to help!
Contact Me