Past Projects
- Mercedes-Benz: MBUX Voice Control
- BMW: Voice Command
- Audi: Voice Recognition
- Tesla: Voice Commands
- LG Electronics: Voice Mate
- BlackBerry: BlackBerry Assistant
- QNX: HTML5 WebPlatform
- Combined Forces Command: Ulchi Freedom Guardian
With the ultimate goal of enabling drivers to control their vehicles by voice, I led the development of our natural language understanding (NLU) system across the Mercedes-Benz MBUX, BMW, and Audi voice programs, serving as the primary point of contact for the NLU research team. I partnered closely with fellow PoCs in ASR, language modeling, dialogue management, and TTS to monitor end-to-end KPIs (accuracy, quality, coverage, and latency) across more than 20 languages. By rigorously tracking performance against our SLAs, I ensured every product update met our standards before release.
With the mission of enabling seamless, hands-free interaction behind the wheel, I spearheaded the development of state-of-the-art NLP, dialogue management, and natural language generation systems for both English and Korean. To ensure reliable entity resolution—whether users request songs, contacts, or navigation destinations—I designed a cross-lingual phonetic matching engine. Today, this voice-command platform powers over 1.3 million Tesla vehicles worldwide.
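To illustrate the idea behind phonetic entity matching (not the production engine itself), here is a minimal sketch using a simplified Soundex encoding: names that sound alike map to the same key, so a spoken contact request can be resolved even when the recognized spelling differs. The `soundex` helper, the contact list, and the `match_contacts` function are all hypothetical examples, and this simplified variant ignores the standard H/W adjacency rule.

```python
def soundex(name: str) -> str:
    """Simplified Soundex: first letter plus up to three consonant-class digits."""
    codes = {}
    for group, digit in [("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
                         ("L", "4"), ("MN", "5"), ("R", "6")]:
        for ch in group:
            codes[ch] = digit
    name = "".join(c for c in name.upper() if c.isalpha())
    if not name:
        return ""
    encoded = [codes.get(c, "0") for c in name]  # vowels/H/W/Y map to "0"
    result, prev = name[0], encoded[0]
    for d in encoded[1:]:
        if d != prev and d != "0":  # skip repeats and vowel-class letters
            result += d
        prev = d
    return (result + "000")[:4]  # pad/truncate to the standard 4-character key

def match_contacts(query: str, contacts: list[str]) -> list[str]:
    """Return contacts whose first name is phonetically close to the query."""
    key = soundex(query.split()[0])
    return [c for c in contacts if soundex(c.split()[0]) == key]

# "Rupert" resolves to "Robert Kim" despite the spelling mismatch.
print(match_contacts("Rupert", ["Robert Kim", "Sujin Park"]))
```

In practice a cross-lingual system would encode transliterations of each entity (e.g. Korean and English renderings of a song title) into a shared phonetic space before comparing keys.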
With the mission of delivering an intelligent personal assistant for English- and Korean-speaking smartphone users, I served as the primary liaison to LG Electronics throughout the LG G4 launch—from crafting product demos and shaping the roadmap to overseeing development and ensuring on-time delivery.
With the mission of empowering smartphone users to accomplish tasks through natural voice, I led the design and development of an English-language NLP system covering more than 20 domains and 100 intents for a virtual assistant application. Our team then integrated and successfully launched this solution on BlackBerry’s flagship Passport device, delivering seamless, voice-driven task completion.
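A system covering many domains and intents starts from a mapping of utterance patterns to (domain, intent) pairs. The sketch below shows that structure with a handful of hypothetical rules; the actual domains, intents, and patterns here are illustrative placeholders, not the shipped rule set.

```python
import re

# Hypothetical (domain, intent, pattern) rules for illustration only.
INTENT_PATTERNS = [
    ("phone",    "call_contact", re.compile(r"\b(call|dial)\b")),
    ("music",    "play_song",    re.compile(r"\b(play|listen to)\b")),
    ("calendar", "create_event", re.compile(r"\b(schedule|remind)\b")),
]

def classify(utterance: str) -> tuple[str, str]:
    """Return the first matching (domain, intent), or a fallback."""
    text = utterance.lower()
    for domain, intent, pattern in INTENT_PATTERNS:
        if pattern.search(text):
            return domain, intent
    return "fallback", "unknown"

print(classify("Call Mom"))          # matches the phone domain
print(classify("Play some jazz"))    # matches the music domain
```

Production systems typically replace the regex rules with statistical classifiers per domain, but the domain/intent decomposition stays the same.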
With the mission of empowering web developers to build truly cross-platform mobile apps, I architected **WebPlatform**—a unified foundation that ensures web applications and UIs respond seamlessly to device events (touch, gestures, rotation, context menus, etc.) across smartphones, tablets, and automotive systems. Built to work with the HTML5 SDK or Apache Cordova, WebPlatform dynamically adapts rendering per device to deliver a consistent, high-quality user experience.
In Ulchi Freedom Guardian—the world’s largest computerized command-and-control exercise, coordinating 50,000 South Korean and 17,500 U.S. troops to defend the Republic of Korea—I conducted ground-intelligence operations under top-level security clearance.