Mozilla - Ghostcat
Privacy-first AI Assistant

Overview
Ghostcat is a privacy-first, on-device AI assistant built by Mozilla.
The goal wasn't polish or launch-readiness—it was to explore how Mozilla's values of trust, transparency, and user control could shape the product.
We focused on a few core questions:
- How can consent feel seamless, not disruptive?
- How much control can we offer without overwhelming users?
- What familiar patterns work within strict on-device constraints?
I led UX strategy and prototyping, working closely with engineering to align on feasibility and surface design risks early. This work helped Mozilla align cross-functionally and laid the foundation for deeper testing and iteration.
My Role
- Framed product assumptions and prioritized what to explore first.
- Designed core flows and built prototypes in Figma for fast iteration.
- Partnered with engineering to translate technical limits into UX.
Workflow.

Initial Vision of Ghostcat.
Our Vision
Ghostcat aims to be a true personal AI assistant — one that understands users' preferences and routines while keeping data secure and private. It builds a local, encrypted, on-device knowledge base, ensuring that users' information remains fully under their control.
Competitive Research
I'm unable to share this section due to a request.
Hypothesis.
For privacy-aware users, trust is the main barrier to AI adoption, and transparent data practices increase both trust and adoption.
How might we design data practices that build trust with privacy-aware users?
- We ask before collecting data and explain why it's needed and how it'll be used, using plain language inside the chat.
- Users can delete their account, clear memory, or opt out at any time with no friction or loss of functionality.
- We leaned on UI patterns people already know to make privacy feel simple, not technical.
Design Challenges.
Designing for a Fast MVP, Aligned with Tech Constraints
To enable a fast, feasible MVP, I partnered closely with a team of two engineers to understand constraints around on-device AI, encryption, and data storage. This early alignment shaped my UX approach—favoring familiar patterns and lightweight logic to ensure designs were implementation-ready and flexible as tech decisions evolved.
Privacy-first UX
Designing for an AI assistant meant putting privacy and ethics at the core of the experience. It was essential to clearly communicate how user data is collected, stored, and used—while giving people control without creating friction. Beyond compliance, the goal was to build trust: making transparency intuitive, choices meaningful, and ensuring the assistant felt helpful—not intrusive.
Designing for scale
With many unknowns in the product's roadmap, designing for scalability was critical. I focused on building flexible components and patterns that could adapt as the product evolved—minimizing rework and enabling faster iteration down the line.
Transparency about data usage.
Designing for Informed Choice
This moment introduces a data-sharing request in plain language, placed directly in the chat to feel contextual. The goal was to offer a clear, timely choice—so users know what's being asked and why, without digging through settings.


Respectful Transparency
When users choose to share data, we acknowledge their choice clearly and let the assistant continue seamlessly. The goal is to reinforce trust by making consent feel natural—without interrupting the flow.
Graceful Opt-Outs Without Penalty
If users decline, the experience continues without pressure or penalty. The assistant adapts accordingly, respecting their choice while keeping the flow smooth and functional.

Designing Trust Through Settings.
Trust in AI begins with transparency and control. These settings give users meaningful choices over data, preferences, and assistant behavior—making control a core part of the experience.

Users can delete their account at any time—no questions asked.

Users can clear their memory and past interactions whenever they choose.

Users can select which AI model powers their assistant experience.

Users can enable or disable extensions based on what they need or trust.
Designing for Fast Iteration and Familiar Use.
To speed up iteration, I leaned on familiar UX patterns and avoided complex custom UI. This kept the design intuitive, reduced dev effort, and let us quickly prototype and evolve—without compromising usability.

Clarity at Every Step.

1. Account Connection
The flow starts with a focused, distraction-free prompt to connect an account—carefully designed to avoid confusion or unnecessary navigation.

2. Syncing Progress
Showing real-time syncing feedback helps manage expectations and prevents user uncertainty. Visual cues and messaging make it clear the system is working—reducing drop-off during wait time.

3. Ready to Chat
Once syncing is complete, users land directly in the chat experience, with full context. No confusion, no extra steps—just a smooth transition that respects their time and keeps momentum going.
Exploring Early Concepts.
I used low-fidelity wireframes to quickly define core user flows and align early with product and engineering. This lean approach helped us:
- Explore layout and interaction ideas with speed
- Gather fast, actionable feedback from stakeholders
- Identify technical constraints early and adapt confidently
Sharing early concepts kept the team aligned, minimized rework, and set us up for high-fidelity design with clarity and momentum.



Dark Mode for Modern Use.
Dark mode was designed not just for aesthetics, but to reduce eye strain and improve comfort in low-light environments—meeting both user needs and modern platform expectations.

Designing with Mobile in Mind.

Although the current product is web-based, I approached the design with a mobile-first mindset where it made sense. From layout choices to interaction patterns, I considered how the experience could translate to smaller screens with minimal rework.
Typography.
Typeface: Inter
Weights: Regular, Semi-Bold
Main Colors.
Learning.
Build Fast, Learn Faster
This project reinforced the need for an agile, test-and-learn mindset over a linear build. Given the complexity and ambition of the product, fast iteration based on real-world feedback is essential—not just for usability, but for achieving product-market fit.
Balancing Agility with Control
One key takeaway was the value of combining an agile approach with a gated release strategy—moving quickly, but within clearly defined phases. This allows us to learn fast while keeping scope focused and risk manageable at each stage.
I Am Not the User
While I'm personally excited about the vision, I've learned to check that excitement against user behavior, data, and testing signals. I am not the user—and assuming otherwise can derail even the best ideas. The real impact comes from asking better questions, observing how people actually use the product, and knowing what to test next and why.
Designing for Trust
Designing for privacy and trust adds a unique layer of responsibility. Every opt-in, every permission request, every moment of friction must be intentional. It's not just about features—it's about ensuring the experience reflects user values and long-term strategy.
Staying Curious and Outcome-Oriented
Moving forward, I'm focused on staying curious, integrating feedback loops early, and making sure design decisions are grounded in both user needs and measurable outcomes.
Explore my portfolio in Figma.
Design speaks louder than words. Check out the full Figma file to see the process, decisions, and iterations behind this project.
Check Out Figma
Previous Explorations.

Let's make something users will love!
If you're looking for a product designer who can bridge the gap between design and code, I'm here to help!
Contact Me