
1. Technical limitations in the prototype

Dishire was a prototype focused on validating a multi-step LLM pipeline and basic recommendation flows. It did not reach a production-grade level of automation or personalization, and remained limited in several ways, which the sections below describe.

These limitations were also useful: they clarified what should be prioritized in a next iteration.

2. Expanding toward automatic routing (classification-based recommendation)

In the prototype, the user chose the recommendation mode explicitly. For a real service, it is more natural for the system to infer which flow should be used from the input text and profile constraints.

A practical next step is to start with rule-based routing and gradually evolve to a lightweight text classifier or embedding-based router that maps input → recommendation type.
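As a minimal sketch of that first rule-based stage, the router below maps free-form input text to a recommendation type via keyword matching. The route names and keyword lists are illustrative assumptions, not the prototype's actual flows:

```python
# Hypothetical rule-based router: maps input text to a recommendation type.
# Route names and keywords are illustrative, not Dishire's real taxonomy.
ROUTES = {
    "ingredient": ["fridge", "leftover", "ingredients", "have"],
    "mood":       ["craving", "feel like", "comfort"],
    "constraint": ["vegan", "gluten", "allergy", "calories"],
}

def route(text: str, default: str = "general") -> str:
    """Return the first recommendation type whose keywords appear in the text."""
    lowered = text.lower()
    for rec_type, keywords in ROUTES.items():
        if any(kw in lowered for kw in keywords):
            return rec_type
    return default
```

A router like this is trivially inspectable and cheap to run; once logged routing decisions accumulate, the same `(text, route)` pairs become training data for the lightweight classifier or embedding-based router that would replace it.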

3. Personalization engine using user history

A longer-term direction was to evolve Dishire from a “single-use tool” into a recipe partner that learns with the user. This requires a personalization engine built on user history.

In the prototype, recommendations were limited to session-level interactions, and this personalization pipeline stayed at the design-idea level.
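To make the design idea concrete, one possible first step is reducing raw interaction history into a compact preference profile that can bias later prompts. The event schema (`cuisine`, `accepted`) and the aggregation rules here are assumptions for illustration:

```python
# Hypothetical history aggregation: accumulate preference signals from
# accepted/rejected recommendations. Field names are assumed, not Dishire's schema.
from collections import Counter

def build_profile(history: list[dict]) -> dict:
    """Reduce raw interaction events into a compact preference profile."""
    liked = Counter(e["cuisine"] for e in history if e.get("accepted"))
    disliked = Counter(e["cuisine"] for e in history if not e.get("accepted"))
    return {
        # top cuisines to bias future recommendations toward
        "top_cuisines": [c for c, _ in liked.most_common(3)],
        # repeatedly rejected cuisines the user never accepted
        "avoid": [c for c, n in disliked.items() if n >= 2 and c not in liked],
    }
```

A profile in this shape can be injected directly into the prompt template as an extra constraint block, which is how session-level recommendations could grow into history-aware ones.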

4. Prompt generation and tuning system

Static templates provide stability, but they do not scale well to cover diverse contexts. The eventual goal was a structure where prompts evolve with data and experimentation.

Dishire implemented static templates with partial dynamic insertion only. Automatic prompt generation/tuning was left as a next-stage challenge.
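The static-template-with-dynamic-insertion pattern can be sketched as follows; the template wording and slot names are illustrative, not the prototype's actual prompts:

```python
# Sketch of a static prompt template with dynamic slot insertion.
# Template text and slot names are illustrative assumptions.
from string import Template

RECIPE_TEMPLATE = Template(
    "Suggest a $meal_type recipe using: $ingredients.\n"
    "Hard constraints: $constraints.\n"
    "Answer with a name, an ingredient list, and numbered steps."
)

def render_prompt(meal_type: str, ingredients: list[str], constraints: list[str]) -> str:
    """Fill the static template with per-request values."""
    return RECIPE_TEMPLATE.substitute(
        meal_type=meal_type,
        ingredients=", ".join(ingredients),
        constraints=", ".join(constraints) or "none",
    )
```

The limitation the section describes follows directly from this shape: the template text itself is frozen, so covering a new context means hand-writing another template rather than letting experiment data reshape the wording.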

5. Quality evaluation layer for the multi-step pipeline

To fully benefit from a multi-step pipeline, the system needs an automated quality evaluation layer that checks outputs at each stage.

In the prototype, the Validate step existed mainly as a structural placeholder. A future design could combine rule-based checks with LLM-based self-correction.
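The rule-based half of such a Validate step could look like the sketch below: deterministic checks over a parsed recipe, returning a list of issues that a follow-up LLM self-correction call could be asked to fix. The recipe schema (`name`, `ingredients`, `steps`) and the specific rules are assumptions:

```python
# Hedged sketch of a rule-based validation stage. The recipe schema and
# rules are assumptions; an empty result means the output passes this stage.

def validate_recipe(recipe: dict, banned: set[str]) -> list[str]:
    """Run deterministic checks; returned issues feed a self-correction retry."""
    issues = []
    if not recipe.get("name"):
        issues.append("missing recipe name")
    if len(recipe.get("steps", [])) < 2:
        issues.append("fewer than 2 steps")
    used = {i.lower() for i in recipe.get("ingredients", [])}
    violations = used & {b.lower() for b in banned}
    if violations:
        issues.append(f"banned ingredients used: {sorted(violations)}")
    return issues
```

Because the checks produce named issues rather than a pass/fail bit, the retry prompt can quote them verbatim ("the recipe uses a banned ingredient: pork"), which is what makes the rule-based and LLM-based halves compose.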

6. Closing reflection

Dishire remained a prototype, but the process provided hands-on experience in treating LLMs as a service component rather than a one-off API call.

By implementing prompt templates, a multi-step generation pipeline, and constraint-aware routing experiments, I learned to view LLM systems as an integrated structure where model, prompts, data, and UX must work together.

While Dishire is not a finished product, it produced practical insight into common structural issues in LLM-based recommendation systems and concrete ways to improve them.
