Mozilla Thunderbolt
Privacy-first AI Assistant

Overview.
Thunderbolt is a privacy-first, on-device AI assistant built by Mozilla.
The goal wasn't polish or launch-readiness; it was to explore how Mozilla's values of trust, transparency, and user control could shape the product.
We focused on a few core questions:
- How can consent feel seamless, not disruptive?
- How much control can we offer without overwhelming users?
- What familiar patterns work within strict on-device constraints?
I led UX strategy and prototyping, working closely with engineering to align on feasibility and surface design risks early. This work helped Mozilla align cross-functionally and laid the foundation for deeper testing and iteration.
My Role
Workflow.

Initial Vision of Thunderbolt.
Our Vision
Thunderbolt aims to be a true personal AI assistant, one that understands users' preferences and routines while keeping data secure and private. It builds a local, encrypted, on-device knowledge base, ensuring that users' information remains fully under their control.
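To make the "local, encrypted, on-device knowledge base" concrete, here's a minimal sketch of what such a store could look like. This is illustrative only, not Thunderbolt's actual implementation: the class and field names are invented, and a real build would use a vetted crypto library (e.g. Fernet from `cryptography`) rather than the toy HMAC keystream standing in here.

```python
import hashlib
import hmac
import json
import os

class LocalKnowledgeBase:
    """Toy sketch of an encrypted, on-device store. The point it
    illustrates: user data is encrypted before it ever touches disk,
    and the key never leaves the device."""

    def __init__(self, passphrase: str, path: str = "kb.bin"):
        self.path = path
        salt = b"demo-salt"  # in practice, a per-user random salt
        self.key = hashlib.pbkdf2_hmac(
            "sha256", passphrase.encode(), salt, 100_000
        )

    def _keystream_xor(self, data: bytes, nonce: bytes) -> bytes:
        # HMAC-based keystream; stands in for a real cipher.
        out = bytearray()
        counter = 0
        while len(out) < len(data):
            block = hmac.new(
                self.key,
                nonce + counter.to_bytes(8, "big"),
                hashlib.sha256,
            ).digest()
            out.extend(block)
            counter += 1
        return bytes(b ^ k for b, k in zip(data, out))

    def save(self, record: dict) -> None:
        nonce = os.urandom(16)
        plaintext = json.dumps(record).encode()
        with open(self.path, "wb") as f:
            f.write(nonce + self._keystream_xor(plaintext, nonce))

    def load(self) -> dict:
        with open(self.path, "rb") as f:
            blob = f.read()
        nonce, ciphertext = blob[:16], blob[16:]
        return json.loads(self._keystream_xor(ciphertext, nonce))
```

The design intent, not the cipher, is the takeaway: everything round-trips through local encrypted storage, so there is no server-side copy of the user's knowledge base.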
Competitive Research
This section can't be shared publicly due to a confidentiality request.
Hypothesis.
For privacy-aware users, trust is the main barrier to AI adoption, and transparent data practices increase both trust and adoption.
How might we design data practices that build trust with privacy-aware users?
Design Challenges.
Designing for a Fast MVP, Aligned with Tech Constraints
To enable a fast, feasible MVP, I partnered closely with a team of two engineers to understand constraints around on-device AI, encryption, and data storage. This early alignment shaped my UX approach, favoring familiar patterns and lightweight logic to ensure designs were implementation-ready and flexible as tech decisions evolved.
Privacy-first UX
Designing for an AI assistant meant putting privacy and ethics at the core of the experience. It was essential to clearly communicate how user data is collected, stored, and used, while giving people control without creating friction. Beyond compliance, the goal was to build trust: making transparency intuitive, choices meaningful, and ensuring the assistant felt helpful, not intrusive.
Designing for scale
With many unknowns in the product's roadmap, designing for scalability was critical. I focused on building flexible components and patterns that could adapt as the product evolved, minimizing rework and enabling faster iteration down the line.
Transparency about data usage.
Designing for Informed Choice
This moment introduces a data-sharing request in plain language, placed directly in the chat to feel contextual. The goal was to offer clear, timely choices, so users know what's being asked and why, without digging through settings.


Respectful Transparency
When users choose to share data, we acknowledge their choice clearly and let the assistant continue seamlessly. The goal is to reinforce trust by making consent feel natural, without interrupting the flow.
Opt-Outs Without Penalty
If users decline, the experience continues without pressure or penalty. The assistant adapts accordingly, respecting their choice while keeping the flow smooth and functional.

Designing Trust Through Settings.
Trust in AI begins with transparency and control. These settings give users meaningful choices over data, preferences, and assistant behavior, making control a core part of the experience.
Account Deletion
Users can delete their account at any time, no questions asked.

Clear Memory
Users can clear their memory and past interactions whenever they choose.

Model Selection
Users can select which AI model powers their assistant experience.

Extensions Control
Users can enable or disable extensions based on what they need or trust.
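Taken together, these four settings form a simple user-control surface. As a sketch of the underlying data model (names are hypothetical, not Thunderbolt's real schema):

```python
from dataclasses import dataclass, field

@dataclass
class AssistantSettings:
    model: str = "local-small"                       # which AI model powers the assistant
    extensions: dict = field(default_factory=dict)   # extension name -> enabled
    memory: list = field(default_factory=list)       # past interactions, stored locally

    def clear_memory(self) -> None:
        """Clear memory and past interactions whenever the user chooses."""
        self.memory.clear()

    def set_extension(self, name: str, enabled: bool) -> None:
        """Enable or disable an extension based on what the user trusts."""
        self.extensions[name] = enabled

def delete_account(settings: AssistantSettings) -> AssistantSettings:
    """Account deletion: wipe everything, no questions asked."""
    return AssistantSettings()
```

Modeling each control as an explicit, user-ownable field keeps the settings honest: nothing the assistant remembers lives outside this object.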

Designing for Fast Iteration and Familiar Use.
To speed up iteration, I leaned on familiar UX patterns and avoided complex custom UI. This kept the design intuitive, reduced dev effort, and let us quickly prototype and evolve without compromising usability.

Clarity at Every Step.

1. Account Connection
The flow starts with a focused, distraction-free prompt to connect an account, carefully designed to avoid confusion or unnecessary navigation.

2. Syncing Progress
Showing real-time syncing feedback helps manage expectations and prevents user uncertainty. Visual cues and messaging make it clear the system is working, reducing drop-off during wait time.

3. Ready to Chat
Once syncing is complete, users land directly in the chat experience, with full context. No confusion, no extra steps, just a smooth transition that respects their time and keeps momentum going.
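The three-step flow above (connect, sync with feedback, land in chat) can be sketched as a tiny state walk. This is an illustration of the interaction logic, not production code; the callback shape and state names are assumptions:

```python
def run_sync(batches, on_progress):
    """Walk the onboarding states, reporting progress after each
    batch so the UI can show real-time feedback instead of a
    silent spinner, then hand off to chat."""
    synced = 0
    for batch in batches:
        synced += len(batch)
        on_progress("syncing", synced)   # step 2: visible progress
    on_progress("ready_to_chat", synced)  # step 3: straight into chat
    return synced
```

Surfacing a progress event per batch is what makes the wait legible, which is the drop-off risk the design addresses.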
Exploring Early Concepts.
I used low-fidelity wireframes to quickly define core user flows and align early with product and engineering. This lean approach helped us:
- Explore layout and interaction ideas with speed
- Gather fast, actionable feedback from stakeholders
- Identify technical constraints early and adapt confidently
Sharing early concepts kept the team aligned, minimized rework, and set us up for high-fidelity design with clarity and momentum.



Dark Mode for Modern Use.
Dark mode was designed not just for aesthetics, but to reduce eye strain and improve comfort in low-light environments, meeting both user needs and modern platform expectations.

Mobile First.

Although the current product is web-based, I approached the design with a mobile-first mindset where it made sense. From layout choices to interaction patterns, I considered how the experience could translate to smaller screens with minimal rework.
Typography.
Typeface: Inter
Weights: Regular, Semi-Bold
Main Colors.
Learning.
Explore my portfolio.
Design speaks louder than words. Check out the full Figma file to see the process, decisions, and iterations behind this project.
Check Out Figma




Let's make something users will love!
If you're looking for a product designer who can bridge the gap between design and code, I'm here to help!




