
connected audio ecosystem

harman kardon

next-gen connected audio experience

role: ux designer
market: global
tools: figma, protopie
problem: fragmented identity & poor continuity
solution: unified cross-device ecosystem
outcome: frictionless login & higher engagement

connected audio is disconnected

🔐

fragmented identity

login friction across devices

🔗

poor continuity

mobile-to-car experience breaks

👤

low personalization

static content, no adaptation

⚙️

high friction

complex setup & configuration

how users experience audio today

📱 mobile → 🚗 car → 🎧 headphones → 🔁 constant re-adjustment → 😕 frustration

Users repeatedly reconfigure sound across devices, environments, and contexts — turning premium audio systems into complex tools instead of effortless listening experiences.

one connected ecosystem

1 login
2 personalize
3 control
4 discover

qr login → adaptive home → voice control → commerce in one seamless flow

design deep dive

mood — sound mood switch

Instantly shift the audio atmosphere to match your current emotional state. Users select how they want to feel, and the system handles the technical complexity invisibly.

ux rationale

Why mood-based sound profiles? Traditional systems expose EQ sliders and frequency bands, assuming users understand acoustic engineering. In reality, users think emotionally — "I want something calm" not "Increase mid-range frequencies."

Reducing cognitive load: Single gesture → full sound transformation. Eliminates decision fatigue, reduces eyes-off-road time, improves safety. Principle: Minimize thinking. Maximize feeling.

Research-informed categories: Four primary emotional states — Relax (calm), Focus (clarity), Energize (alertness), Comfort (warmth) — cover ~90% of in-car listening intent.

A — Mood selection interface: Tactile, gesture-driven control with a large central orb. Directional swiping between emotional states with immediate visual feedback. Supports muscle memory, one-handed operation, minimal visual attention. Adjustment in under one second.

B — Audio profile transition: Mood transitions feel organic — gradual frequency blending, progressive spatial repositioning, smooth dynamic range adaptation. Sound morphs in real time instead of snapping. Eliminates jarring shifts, enhances emotional continuity.
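The gradual blend described above can be sketched as a simple interpolation between EQ profiles, stepping the gain of each band from the current mood toward the target one. The band names and gain values below are illustrative placeholders, not the actual product tuning:

```python
# Sketch of a mood crossfade: band gains morph gradually between two
# profiles instead of snapping. Profiles and values are illustrative.
RELAX = {"low": 0.6, "mid": 0.4, "high": 0.3}
ENERGIZE = {"low": 0.9, "mid": 0.7, "high": 0.8}

def blend(a, b, t):
    """Interpolate each band gain; t runs 0 -> 1 over the transition."""
    return {band: a[band] * (1 - t) + b[band] * t for band in a}

def transition(a, b, steps=5):
    """Yield intermediate profiles for a smooth real-time morph."""
    for i in range(steps + 1):
        yield blend(a, b, i / steps)
```

In a real system each intermediate frame would be pushed to the DSP on a short timer, so the listener hears a continuous morph rather than a hard switch.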

seat sonic — personalized audio zones

Deliver individualized sound experiences to each seat. Localized sound personalization transforms shared listening into a private, adaptive experience.

ux rationale

Why seat-based audio? Traditional systems assume one profile fits everyone. Reality: drivers prefer clarity, passengers want immersion, rear-seat occupants feel neglected. This mismatch causes frequent adjustments and compromise-driven settings.

Spatial audio isolation: Directional speaker arrays create localized sound bubbles — personal listening zones, reduced sound bleed, individualized equalization while preserving cabin-wide balance. Result: calmer cabin, fewer conflicts.
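A toy model of the seat-based zoning idea: each seat's chosen preset is resolved onto only the speakers nearest that seat, so zones can be tuned independently. The speaker-to-seat mapping, preset names, and gain values are all hypothetical, invented for illustration:

```python
# Illustrative speaker-to-seat mapping; real cabins have more channels.
SPEAKERS_BY_SEAT = {
    "driver":    ["front_left_tweeter", "front_left_door"],
    "passenger": ["front_right_tweeter", "front_right_door"],
}
# Hypothetical preset gains; a real preset would carry full EQ curves.
PRESETS = {"clarity": 0.8, "immersion": 1.0}

def speaker_gains(seat_presets):
    """Resolve each speaker's gain from its seat's chosen preset."""
    gains = {}
    for seat, preset in seat_presets.items():
        for speaker in SPEAKERS_BY_SEAT[seat]:
            gains[speaker] = PRESETS[preset]
    return gains
```

The point of the sketch is the shape of the data, not the numbers: per-seat choices resolve to per-speaker output without the user ever seeing the speaker layer.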

Technical constraints shaped design: Limited speaker geometry, acoustic reflections, processing limits led to simplified interaction models, preset-based tuning, visual-first seat mapping. Philosophy: Expose experience, not engineering.

A — Seat selection interface: Top-down cabin visualization with direct seat tapping, large interaction targets, instant visual confirmation. Matches real-world mental models, eliminates hierarchy navigation, enables muscle memory. Selection in under one second.

B — Individual audio controls: Localized tuning for intensity, immersion, spatial spread, emotional tone. Gesture-based arcs, radial sliders, real-time feedback. Avoids technical EQ language, encourages experimentation. Principle: Let users feel the sound — not configure it.

system settings — unified configuration

Centralized control for audio tuning, seat technology, ambient lighting, and personalization — clear hierarchy, fast access, predictable patterns.

ux rationale

Why unified over distributed? Traditional systems scatter settings across media, sound, seat, lighting modules — causing high navigation depth, poor discoverability, inconsistent mental models, increased distraction. Shift: Feature-based → experience-based organization.

Progressive disclosure hierarchy: Three-layer structure — Primary (Sound, Seat, Light, Overview), Secondary (Presets, Personalization, Smart Modes), Fine-grained (individual tuning). Minimizes clutter, reduces scanning, supports glanceable interactions.

Quick access priorities: Usage analytics elevated seat comfort, audio tuning, ambient light, smart toggles to primary surfaces — ensuring sub-2-second access time.
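The three-layer structure can be modeled as a nested tree: any control sits at most two taps past the primary rail, which is where the sub-2-second access claim comes from. Every entry below is an illustrative placeholder, not the shipped settings schema:

```python
# Primary -> secondary -> fine-grained layers as nested data (illustrative).
SETTINGS = {
    "sound": {"presets": ["relax", "focus"], "tuning": {"bass": 0, "treble": 0}},
    "seat":  {"presets": ["commute", "long_drive"], "massage": {"intensity": 1}},
    "light": {"smart_modes": ["sync_to_mood"], "ambient": {"intensity": 0.5}},
}

def reach(path):
    """Walk the tree one tap per level; leaves are at most three levels deep."""
    node = SETTINGS
    for key in path:
        node = node[key]
    return node
```

Keeping depth fixed in the data model, rather than policing it screen by screen, is what makes "any major area in two taps or less" enforceable.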

A — Settings navigation: Left-anchored persistent control rail provides constant spatial reference, rapid module switching, muscle-memory navigation. No deep nesting, minimal backtracking, stable layout. Any major area in two taps or less.

B — Quick access preferences: High-frequency controls surfaced directly — seat presets, ambient intensity, mood sync, automation toggles. Exposes high-impact controls, hides rarely used complexity. Principle: Speed beats completeness while driving.

final ui

visual showcase coming soon

why it works

manual login → qr authentication

instant access, no typing while driving

static content → contextual personalization

relevant content based on user, time, context

complex eq → voice-first control

natural language feels human, not technical

results

login friction dropped
engagement increased
voice felt more human
safety-first design increased trust

key learning

designing for cars taught me that clarity beats cleverness every single time. in automotive ux, every extra second of thinking directly impacts safety, trust, and usability.

the biggest takeaway: strip experiences down to their pure functional core, while still keeping them emotionally premium.
