AI Concierge (prototype)

Prototype
UX Exploration
Visual Design

This is a vision piece (prototype) built during Hoken’s final days, designed to explore how large language models could support clarity and decision-making in travel planning.

The goal was to move beyond obvious chat-based interactions and imagine how AI could integrate into everyday interfaces—proactively, contextually, and with a focus on user intent. It’s not just about talking to AI. It’s about building systems that listen.

A part of the Concierge's interface showing its initial state, which reads:

Tell concierge where you're going, your plan appears right here.

You’re going to a concert in Amsterdam. You type one line—and everything starts building around it.

This isn’t a chatbot prototype. It’s a glimpse of how interfaces might behave when they understand context and act on it.

Hit play: 

Why

I was exploring ideas that could spark virality: something useful for any traveler that would also build brand awareness for Hoken.

One of my core beliefs is that if you want to be remembered, you need to offer upfront value. Hoken was solving a real problem, but staying top-of-mind without strong marketing or signal-rich content felt like an uphill battle. This prototype was born out of that tension, and out of personal curiosity.

As AI tools like ChatGPT became part of my everyday life (helping me track nutrition, recognize behavior patterns, and even learn Japanese) it felt natural to bring those same insights into a product experience for others. Not just as a feature, but as an integrated, contextual layer in the interface.

On the business side, I didn’t want Hoken to just “add a chatbot.” I wanted us to be the first to deliver an LLM-powered experience grounded in genuine user intent—something proactive and embedded in the journey, not bolted on as a novelty.

How

I wanted the team to feel what I meant, not just hear about it.

This idea came during a time of urgency.

We needed to improve core metrics while preparing for a new investment round. I had been thinking about how to integrate LLMs into our experience, and shared the concept with a few people at Hoken. It started gaining traction, but I knew words wouldn’t land like a working prototype.

So I carved out a block of focused time, built the first version in a few hours, and refined it the next day. I recorded a Loom video to walk the team through it. The response was positive and curious. People immediately started suggesting things like “Can we save itineraries?”, “How about a mobile version?”, and “Can users start over?”

I had already confirmed with engineering that it was viable: just an API call away from generating JSON-based plans that could structure a day around an event, alongside the usual chat responses.
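As a rough illustration of that idea, here is a minimal sketch of what an LLM's JSON plan response and a structural check on it could look like. Every field name, value, and function here is a hypothetical assumption for illustration, not Hoken's actual schema or API.

```python
import json

# Hypothetical example of a JSON plan the model could return for a day
# built around an event; all field names are illustrative assumptions.
PLAN_RESPONSE = """
{
  "destination": "Amsterdam",
  "event": {"name": "Concert", "start": "20:00"},
  "schedule": [
    {"time": "15:00", "activity": "Check in to a hotel near the venue"},
    {"time": "18:00", "activity": "Dinner in the neighborhood"},
    {"time": "19:30", "activity": "Walk to the venue"}
  ],
  "chat_summary": "Here's a plan built around your concert."
}
"""

def parse_plan(raw: str) -> dict:
    """Parse the model's JSON output and do a minimal structural check
    before the interface starts rendering the plan."""
    plan = json.loads(raw)
    if not isinstance(plan.get("schedule"), list):
        raise ValueError("plan is missing a schedule list")
    return plan

plan = parse_plan(PLAN_RESPONSE)
print(plan["destination"])  # Amsterdam
```

The key point is that the response is structured data rather than free text, so the interface can lay the day out visually instead of only showing a chat transcript.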

When I have more time, I plan to explore a mobile version as well.

We didn’t pursue it further due to more pressing revenue-focused initiatives, but I still believe this was a step in the right direction—not just for Hoken, but for how future interfaces should behave.

Let's stay in touch