
Agentic Onboarding Flow

Overview

New clients complete an intake conversation that gathers employment history, goals, constraints, and eligibility information. Case workers traditionally conducted this as a structured interview, but time constraints meant critical details were often missed or inconsistently documented.

The system provides a conversational interface where clients can provide information at their own pace, with the AI asking follow-up questions to fill gaps and clarify ambiguous responses.

Key Design Decision

Decision: Explicit state management and routing logic instead of fully generative conversation.

Early versions used open-ended LLM conversations. Clients found them confusing (“What should I say next?”) and the system had no way to know when it had collected sufficient information.

The final architecture uses:

State tracking

A structured data model defines required fields (employment status, work history, availability, etc.). The system tracks which fields are complete, incomplete, or ambiguous.
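A minimal sketch of that data model in Python; the field names and the three-way complete/incomplete/ambiguous state come from the description above, while the class names and structure are assumptions:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class FieldState(Enum):
    COMPLETE = auto()
    INCOMPLETE = auto()
    AMBIGUOUS = auto()  # answered, but the answer failed a quality check

@dataclass
class IntakeRecord:
    """One required intake field and its collection state."""
    name: str
    value: str | None = None
    state: FieldState = FieldState.INCOMPLETE

@dataclass
class IntakeState:
    """All required fields for one client; the field list is illustrative."""
    fields: dict[str, IntakeRecord] = field(default_factory=lambda: {
        name: IntakeRecord(name)
        for name in ("employment_status", "work_history", "availability", "goals")
    })

    def pending(self) -> list[str]:
        """Names of fields that are still incomplete or ambiguous."""
        return [r.name for r in self.fields.values()
                if r.state is not FieldState.COMPLETE]
```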

Routing logic

Deterministic rules decide which question to ask next based on current state. If employment status is “unemployed,” ask about last job. If “employed,” skip to career change motivation.
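Sketched as code, building on the `IntakeState` type above; the two branch rules mirror the employment-status example, and the rest is assumed:

```python
def next_topic(state: IntakeState) -> str | None:
    """Deterministic routing: explicit rules pick the next question topic.
    Returns None when there is nothing left to ask."""
    pending = set(state.pending())

    # Employment status gates the rest of the interview.
    if "employment_status" in pending:
        return "employment_status"

    status = state.fields["employment_status"].value
    if status == "unemployed" and "work_history" in pending:
        return "last_job"  # unemployed: ask about the last job
    if status == "employed" and "goals" in pending:
        return "career_change_motivation"  # employed: skip to motivation

    # Otherwise ask about any remaining required field.
    return next(iter(pending), None)
```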

LLM-generated phrasing

The LLM generates natural-sounding questions based on routing logic and prior conversation context, but it does not decide what information to collect—that’s handled by rules.
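That division of labor might look like the following sketch; `call_llm` is a hypothetical stand-in for whatever completion API the system actually uses:

```python
def phrase_question(topic: str, history: list[str]) -> str:
    """The router decided *what* to ask; the LLM only decides *how* to phrase it."""
    prompt = (
        "You are an intake assistant. Ask the client one clear, friendly "
        f"question about: {topic}.\n"
        "Recent conversation for context:\n" + "\n".join(history[-6:])
    )
    return call_llm(prompt)

def call_llm(prompt: str) -> str:
    """Placeholder for a real completion call; not specified in this write-up."""
    raise NotImplementedError
```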

Stopping conditions

The conversation ends when all required fields meet quality thresholds (e.g., work history includes dates and job titles, not just “I worked in retail”). The LLM does not decide when to stop—explicit completeness checks do.
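A sketch of that explicit check, reusing the types from the first snippet; the work-history rule (dates plus job titles) comes from the example above, and the heuristics standing in for it are assumptions:

```python
import re

def check_work_history(record: IntakeRecord) -> FieldState:
    """Quality threshold: work history must include dates and job titles,
    not just 'I worked in retail'. The regex and word-count heuristics are
    illustrative stand-ins for the real validation."""
    answer = record.value or ""
    if not answer:
        return FieldState.INCOMPLETE
    has_date = re.search(r"\b(19|20)\d{2}\b", answer) is not None
    has_detail = len(answer.split()) >= 5  # crude proxy for a job title
    return FieldState.COMPLETE if (has_date and has_detail) else FieldState.AMBIGUOUS

def intake_complete(state: IntakeState) -> bool:
    """Explicit stopping condition: every required field passed its check.
    The LLM is never consulted here."""
    return not state.pending()
```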

Constraint and Tradeoff

Constraint: Conversations are less flexible than those of a fully generative agent, but far more predictable.

Clients cannot ask arbitrary questions like “How do I apply for benefits?” mid-conversation. The system stays focused on data collection. If clients need information outside that scope, they’re directed to appropriate resources rather than the LLM attempting to answer.
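One way to implement that redirect, sketched with simple keyword matching as a stand-in for whatever scope classification the real system uses; the referral text is invented:

```python
REFERRALS = {
    # Hypothetical mapping from out-of-scope topics to resources.
    "benefits": "For benefits questions, please contact your case worker directly.",
}

def handle_out_of_scope(client_message: str) -> str | None:
    """Return a referral instead of letting the LLM answer off-topic
    questions. None means the message is in scope for data collection."""
    text = client_message.lower()
    for topic, referral in REFERRALS.items():
        if topic in text:
            return referral
    return None
```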

Tradeoff: Reduced conversational naturalness in exchange for guaranteed data completeness and auditability. Case workers reviewing intake records can see exactly what was asked and answered, rather than inferring it from freeform conversation logs.

What This Connects To

Collected client data feeds into both the course matching system and report generation. Incomplete intake data blocks downstream processes rather than allowing systems to proceed with assumptions.
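A short sketch of that fail-closed handoff, reusing `intake_complete` from above; `match_courses` is a hypothetical downstream entry point:

```python
class IncompleteIntakeError(Exception):
    """Raised instead of letting downstream systems proceed on assumptions."""

def start_course_matching(state: IntakeState) -> None:
    # Fail closed: incomplete intake blocks the pipeline outright.
    if not intake_complete(state):
        raise IncompleteIntakeError(f"intake blocked on fields: {state.pending()}")
    match_courses(state)

def match_courses(state: IntakeState) -> None:
    """Placeholder for the real course-matching system."""
    raise NotImplementedError
```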
