Caging Chaos: Building Deterministic AI Chatbots with Effect
- Event: The Innovation Lab
- Overview
Engagement is shifting to chatbots. Users now expect prompt-based interaction. This article shows how to build AI chatbots that feel natural and engaging while remaining deterministic — using Effect-TS for type-safe tools, structured outputs, page-gated behavior, canned responses, and workflow integration.
- Technologies
- AI, Effect-TS, Chatbots, TypeScript, Structured Outputs, Workflows

Users Are Prompting. Your Product Should Be Ready.
The shift already happened. People now default to conversational interfaces. They type questions instead of clicking menus. They expect to be understood, not routed through forms. The prompt box has become the primary engagement surface, and every product without one feels dated.
This isn't a novelty. Chatbots are becoming the default interface for onboarding, support, and commerce. The question isn't whether your product needs a conversational layer — it's whether you can build one that's both engaging and reliable.
Here's the problem: AI is nondeterministic. The same prompt produces different responses on every call. When your chatbot handles registration, payment collection, or order processing, that unpredictability becomes a liability. This article shows how to cage the chaos — using Effect-TS to build chatbots where critical workflows execute exactly as designed, every time.
Want to skip the theory? Jump to the interactive demo and try it yourself.
Engaging: Natural conversation, quick replies, typewriter animation. Users feel heard, not processed.
Accurate: Structured outputs, validated schemas, canned responses for known questions. Right answer, every time.
Deterministic: State machines, page-gating, typed actions. Critical workflows execute exactly as designed.
Layered Prompts, Layered Control
A single monolithic prompt cannot handle the complexity of a production chatbot. Different pages need different behavior. Returning users need different responses than new visitors. The solution is a three-layer prompt system where each layer adds specificity without modifying the others.
System Prompt
The foundation layer. Over 400 lines defining the AI's identity, language rules, safety boundaries, and response formatting. This is the AI's constitution — it never changes between requests. Every conversation starts from this baseline of personality, constraints, and guardrails.
Page Prompts
Supplemental instructions injected based on the user's current URL path. The get-started page gets education-focused prompts. The checkout page gets reassurance prompts for high-commitment moments. The billing settings page gets upgrade support. Prompts are ordered most-specific first — the first match wins.
User Context
Authenticated user data — plan tier, onboarding status, usage metrics — injected so the AI knows exactly where each user is in their journey. A new visitor browsing the homepage gets educational responses. A returning user on the pro plan gets feature-specific guidance and account support.
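The three layers can be assembled per request. Here is a minimal plain-TypeScript sketch of the idea; the function name, prompt snippets, and context shape are illustrative, not taken from the actual codebase:

```typescript
// Hypothetical prompt assembly: system prompt + page prompt + user context.
const SYSTEM = "You are a friendly, knowledgeable assistant."

// Ordered most-specific first — the first matching path prefix wins.
const PAGE_PROMPTS: ReadonlyArray<[string, string]> = [
  ["/settings/billing", "Support plan upgrades and billing questions."],
  ["/get-started", "Focus on education; explain features simply."],
  ["/checkout", "Reassure the user at this high-commitment moment."],
]

interface UserContext {
  readonly plan: string
  readonly onboarded: boolean
}

function buildPrompt(path: string, user?: UserContext): string {
  const page = PAGE_PROMPTS.find(([prefix]) => path.startsWith(prefix))
  const layers = [
    SYSTEM,
    page ? `## Page context\n${page[1]}` : "",
    user ? `## User context\nPlan: ${user.plan}, onboarded: ${user.onboarded}` : "",
  ]
  // Layers that do not apply are simply absent; the system prompt never changes.
  return layers.filter(Boolean).join("\n\n")
}
```

Each layer adds specificity without touching the others: the system prompt is constant, the page prompt is a pure function of the URL, and the user context is a pure function of the session.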
The base layer defines personality, rules, and safety boundaries that apply to every conversation:
const SYSTEM_PROMPT = `
You are a friendly, knowledgeable assistant.
## Rules
- Keep responses to 3-4 sentences max
- Never make promises about uptime or SLAs
- Always be helpful but not pushy
## Language
- Use plain language, avoid jargon
- Say "workspace" not "tenant"
## Safety Boundaries
- If asked about billing disputes, defer to support team
- Never share internal pricing logic
`

Giving AI Hands — Then Tying Them
The AI can call any tool it wants. But calling a tool and having it take effect are two different things. The determinism lives in the handler layer, not the model.
Tool Definition & Composition
Each tool is a typed contract — parameters validated by Effect Schema, return types enforced at compile time. Tools compose into a Toolkit that the AI model receives as its available actions.
const GetProducts = Tool.make("GetProducts", {
  description: "Get available product information",
  parameters: {
    category: Schema.optional(Schema.String),
  },
  success: Schema.Array(Product),
})

const ChatToolkit = Toolkit.make(
  GetProducts,
  ReadyToRegister,
  CreateAccount,
  SelectProduct,
)

Page Gating
After the AI responds, tool results are filtered by the user's current page. Tools called on the wrong page are silently discarded.
for (const toolResult of response.toolResults) {
  if (toolResult.name === "ReadyToRegister"
      && currentPage === "/get-started") {
    action = "redirect_to_registration"
  }
  if (toolResult.name === "CreateAccount"
      && currentPage === "/get-started") {
    action = "create_account"
  }
}

AI Model
Calls any tool from the toolkit based on conversation context
Page Gate
Filters tool results by currentPage — mismatched tools are discarded
Action
Deterministic side effect — redirect, create account, or select product
The AI doesn't know it's being gated. It can call any tool at any time. The determinism lives in the handler, not the model.
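The gate generalizes naturally into an allowlist keyed by page. A simplified sketch, assuming a per-page tool allowlist (the map contents and names are illustrative):

```typescript
// Hypothetical page gate: each page allowlists the tools whose results
// may produce side effects. Everything else is silently discarded.
const PAGE_TOOL_ALLOWLIST: Record<string, ReadonlySet<string>> = {
  "/get-started": new Set(["ReadyToRegister", "CreateAccount"]),
  "/products": new Set(["GetProducts", "SelectProduct"]),
}

interface ToolResult {
  readonly name: string
  readonly result: unknown
}

function gateToolResults(
  results: ReadonlyArray<ToolResult>,
  currentPage: string,
): ToolResult[] {
  // Unknown pages allow nothing: fail closed, not open.
  const allowed = PAGE_TOOL_ALLOWLIST[currentPage] ?? new Set<string>()
  return results.filter((r) => allowed.has(r.name))
}
```

Failing closed is the important design choice: a page with no allowlist entry permits no side effects at all.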
From Free Text to Type-Safe Contracts
The AI generates free text. The handler extracts structured actions. The frontend executes deterministic code. Three layers, zero ambiguity.
Every AI response is decoded through a strict Effect Schema. The response either conforms to the contract or fails at the boundary.
const ChatMessageResponse = Schema.Struct({
  conversationId: Schema.String,
  message: Schema.String,
  toolResults: Schema.Array(Schema.Struct({
    name: Schema.String,
    result: Schema.Unknown,
  })),
  action: Schema.optional(
    Schema.Literal(
      "redirect_to_registration",
      "show_products",
      "create_account",
      "select_product",
    )
  ),
  registrationData: Schema.optional(Schema.Struct({
    firstName: Schema.String,
    lastName: Schema.String,
    email: Schema.String,
    phone: Schema.String,
  })),
})

AI Response
Input: Free text + tool calls. The AI says whatever it wants.
Handler
Process: Validates, extracts actions, enforces page gates.
Frontend
Output: Deterministic side effects. Navigate, persist, display.
The AI's output is never the final product. It's raw material that gets refined through typed schemas and validated handlers before reaching the user.
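Decoding at the boundary can be sketched without the Effect machinery. The following plain-TypeScript validator is a simplified stand-in for decoding with the `ChatMessageResponse` schema above (the real code would use Effect Schema; the function here is hypothetical and checks only a subset of fields):

```typescript
// Simplified boundary check: the raw AI payload either conforms
// to the contract or is rejected before any side effect runs.
const ACTIONS = new Set([
  "redirect_to_registration",
  "show_products",
  "create_account",
  "select_product",
])

interface DecodedResponse {
  readonly conversationId: string
  readonly message: string
  readonly action?: string
}

function decodeChatResponse(raw: unknown): DecodedResponse | null {
  if (typeof raw !== "object" || raw === null) return null
  const r = raw as Record<string, unknown>
  if (typeof r.conversationId !== "string") return null
  if (typeof r.message !== "string") return null
  // An unknown action literal fails the whole payload, not just the field.
  if (r.action !== undefined &&
      (typeof r.action !== "string" || !ACTIONS.has(r.action))) return null
  return {
    conversationId: r.conversationId,
    message: r.message,
    action: r.action as string | undefined,
  }
}
```

The key property is the closed action set: the model cannot invent a side effect, because any action outside the literal union fails at the boundary.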
When AI Shouldn't Decide
The Problem
Registration requires collecting four fields in a fixed order: first name, last name, email, phone, followed by confirmation. Left to the AI, the flow might skip fields, ask out of order, hallucinate validation rules, or forget what was already collected. For a business-critical flow, "good enough" isn't good enough.
The State Machine
A programmatic flow runs before the AI. If the user is on /get-started and providing registration data, the state machine intercepts. The AI never sees the message. The flow is deterministic: first name, last name, email, phone, confirmation — every time, in that order.
Shared Validation
Both the chat state machine and the web form use the same NormalizedEmail, NormalizedPhone Effect schemas. Whether a user types "john@test.com" in a form or in the chat, identical validation runs. One source of truth, two input surfaces.
Fallback to AI
If the state machine doesn't match — the user asks a general question like "what features are included?" — the message passes through to the AI pipeline with page-specific prompts. The state machine only intercepts what it owns.
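The state-detection step can be sketched by deriving the next expected field from what has already been collected. The shapes below are illustrative, assuming the conversation carries a partial registration record (the real `detectRegistrationState` operates on a `Conversation`):

```typescript
// Hypothetical state detection: the next registration step is fully
// determined by which fields are still missing — no model involved.
type RegistrationState =
  | "awaiting_first_name"
  | "awaiting_last_name"
  | "awaiting_email"
  | "awaiting_phone"
  | "confirming"

interface RegistrationData {
  firstName?: string
  lastName?: string
  email?: string
  phone?: string
}

function detectRegistrationState(data: RegistrationData): RegistrationState {
  if (data.firstName === undefined) return "awaiting_first_name"
  if (data.lastName === undefined) return "awaiting_last_name"
  if (data.email === undefined) return "awaiting_email"
  if (data.phone === undefined) return "awaiting_phone"
  return "confirming"
}
```

Because the state is a pure function of collected data, the flow cannot skip a field or ask out of order, no matter what the user types.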
Message arrives
Input: User sends free text via chat input.
State machine check
Gate: Is the user on /get-started? Is this registration data?
On a match: validate the field, advance the state, return the next prompt.
On no match: pass to the LLM with page-specific prompts.
function tryRegistrationFlow(
  message: string,
  conversation: Conversation,
): RegistrationResult | null {
  const state = detectRegistrationState(conversation)
  switch (state) {
    case "awaiting_first_name": {
      const firstName = validateFirstName(message)
      if (!firstName.valid) return error(firstName.message)
      return prompt("Great! What's your last name?")
    }
    case "awaiting_email": {
      const email = NormalizedEmail.decode(message)
      if (!email.valid) return error("Please enter a valid email")
      return prompt("And your phone number?")
    }
    case "confirming":
      if (isAffirmative(message))
        return createAccount(conversation.registrationData)
      return prompt("What would you like to change?")
    default:
      return null // Falls through to AI
  }
}

The state machine and the form share the same validation schemas. NormalizedEmail, NormalizedPhone, NormalizedFirstName — one source of truth, two input surfaces.
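A simplified stand-in for one of those shared schemas: the real NormalizedEmail is an Effect schema, but the one-source-of-truth idea can be shown with a plain normalizer that both the form and the chat would call (the pattern below is deliberately loose and for illustration only):

```typescript
// Simplified stand-in for a shared NormalizedEmail schema: trim,
// lowercase, validate once — both input surfaces call the same code.
function normalizeEmail(input: string): string | null {
  const email = input.trim().toLowerCase()
  // Illustrative pattern, not a full RFC 5322 validator.
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email) ? email : null
}
```

Whether "John@Test.com " arrives via the web form or the chat input, both surfaces produce the same normalized value or the same rejection.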
Zero-Latency Answers
Pre-Built Responses
Common questions are mapped to instant answers. When a user clicks a quick-reply chip matching a known question, the response displays immediately with no AI API call. The conversation is still persisted to the database in the background, non-blocking, so analytics and context remain intact.
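A minimal sketch of that lookup, with hypothetical question keys and a caller-supplied persistence function; persistence is fired without being awaited so the answer renders instantly:

```typescript
// Hypothetical canned-response table: exact-match questions get an
// instant answer with no model call.
const CANNED_RESPONSES: Record<string, string> = {
  "How secure is my data?":
    "Your data is encrypted at rest and in transit.",
  "What does it cost?":
    "Plans start free; paid tiers add team features.",
}

function answerInstantly(
  question: string,
  persist: (q: string, a: string) => Promise<void>,
): string | null {
  const answer = CANNED_RESPONSES[question]
  if (answer === undefined) return null // fall through to the AI pipeline
  // Persist in the background, non-blocking — analytics and context stay intact.
  void persist(question, answer)
  return answer
}
```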
Phase-Based Chips
Quick reply chips change based on conversation depth using a messageCount heuristic. Early messages surface intent selection chips. Mid-conversation chips handle common concerns. Once the conversation is sufficiently personalized, chips disappear entirely and the AI takes over.
Chip Replacement
When a chip is used, it is replaced with a follow-up chip exactly once. For example, “How secure is my data?” becomes “Tell me about compliance”. The replacement is also removed after use. No infinite loops, no stale options cluttering the interface.
UX Benefit
Users get instant answers for predictable questions. The AI handles the unpredictable ones. This dual approach reduces API costs while delivering a faster, more responsive experience for the most common interaction patterns.
Intent Selection (messages 1–2)
Concern Handling (messages 3–8)
Personalized AI (messages 9+)

Known questions get known answers. Instantly. The AI only handles what's truly unpredictable.
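The messageCount heuristic behind those phases can be sketched in a few lines (phase names and boundaries taken from the ranges above):

```typescript
// Chip phase selection by conversation depth, per the messageCount heuristic.
type ChipPhase = "intent_selection" | "concern_handling" | "none"

function chipPhase(messageCount: number): ChipPhase {
  if (messageCount <= 2) return "intent_selection"  // early: surface intents
  if (messageCount <= 8) return "concern_handling"  // mid: common concerns
  return "none" // conversation is personalized; the AI takes over
}
```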
The Chatbot Is the Interface. The Workflow Is the Backbone.
The Pattern
The chatbot collects data conversationally. Each completed action — registration, product selection, confirmation — triggers a workflow step. The workflow is a state machine that doesn't care how data was collected. It only cares that the data arrived validated.
Effect Cluster Workflows
Each step in the onboarding workflow has defined inputs, outputs, and transitions. There is no ambiguity. A step either completes with valid output or fails with a typed error. The workflow engine handles retries, persistence, and resumption — the business logic stays pure.
Why This Matters
AI handles the messy human interaction. The workflow handles the business logic. They never cross. The chatbot can be creative with how it asks questions; the workflow is rigid about what happens next. This separation means you can swap out the AI model, rewrite every prompt, or add a traditional form — the workflow doesn't change.
const OnboardingWorkflow = Workflow.make("onboarding", {
  input: Schema.Struct({ visitorId: Schema.String }),
  steps: Effect.gen(function* (step) {
    // Step 1: Wait for visitor engagement
    const engagement = yield* step.do("education", () =>
      Effect.succeed({ engaged: true })
    )
    // Step 2: Collect registration (deterministic)
    const registration = yield* step.do("registration", () =>
      collectRegistration(engagement.visitorId)
    )
    // Step 3: Process product selection
    const product = yield* step.do("product-selection", () =>
      selectProduct(registration.userId)
    )
    // Step 4: Confirm and activate
    yield* step.do("confirmation", () =>
      activateSubscription(product)
    )
    return { status: "complete" }
  }),
})

Visitor Arrives
Init: Session created, page context loaded.
Education & Engagement
AI: Answers questions, builds trust via page-specific prompts.
Registration
Deterministic: State machine collects validated fields — no AI involvement.
Product Selection
Schema: Guided selection with schema-validated structured outputs.
Confirmation
Complete: Subscription activated, workflow complete.
The AI doesn't advance the workflow. The handler does. The AI is the interface; the handler is the controller; the workflow is the state machine.
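That separation can be sketched as a dispatch table: the handler maps validated actions to workflow steps, and nothing else advances the state. The names below are illustrative, not the Effect Cluster API:

```typescript
// Hypothetical handler-to-workflow dispatch: only validated actions
// advance the workflow; the AI's free text never does.
type WorkflowStep = "registration" | "product-selection" | "confirmation"

const ACTION_TO_STEP: Record<string, WorkflowStep> = {
  create_account: "registration",
  select_product: "product-selection",
}

function advanceWorkflow(
  action: string | undefined,
  runStep: (step: WorkflowStep) => void,
): boolean {
  if (action === undefined) return false // pure conversation, no side effect
  const step = ACTION_TO_STEP[action]
  if (step === undefined) return false   // unknown action: ignored
  runStep(step)                          // deterministic transition
  return true
}
```

Swapping the AI model or rewriting every prompt changes neither the table nor the steps, which is exactly why the workflow survives those changes.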
Making AI Feel Alive
Character-by-Character Reveal
Each character appears at ~18ms intervals with an opacity fade-in. A blinking cursor follows the last revealed character. The effect feels like "ink on paper" — deliberate and mechanical, not bouncy.
Markdown Parsing
Text segments are parsed for bold, italic, and links. Each segment renders with appropriate HTML wrappers during the animation — not after. Users see formatted text as it appears, character by character.
Scroll Anchoring
A progress callback fires every 20 characters to trigger smooth scrolling. The chat auto-scrolls to the bottom on new messages so users always see the latest text.
Performance
Uses requestAnimationFrame for 60fps-smooth animation. State tracked via refs to survive React re-renders without restarting the animation.
Accessibility
Users with prefers-reduced-motion get the full text instantly — no animation. Screen readers receive the complete text via aria-live region, not character by character.
The core animation loop uses requestAnimationFrame with a ref-based timestamp to control character pacing without blocking the main thread.
import { useEffect, useRef, useState } from "react"

function useTypewriter(text: string, isNew: boolean) {
  const [displayCount, setDisplayCount] = useState(
    isNew ? 0 : text.length
  )
  const frameRef = useRef<number>()
  const lastTickRef = useRef(0)

  useEffect(() => {
    if (!isNew || displayCount >= text.length) return

    const prefersReduced = window.matchMedia(
      "(prefers-reduced-motion: reduce)"
    ).matches
    if (prefersReduced) {
      setDisplayCount(text.length)
      return
    }

    const CHAR_INTERVAL = 18
    const animate = (timestamp: number) => {
      if (timestamp - lastTickRef.current >= CHAR_INTERVAL) {
        setDisplayCount((c) => Math.min(c + 1, text.length))
        lastTickRef.current = timestamp
      }
      frameRef.current = requestAnimationFrame(animate)
    }
    frameRef.current = requestAnimationFrame(animate)
    return () => cancelAnimationFrame(frameRef.current!)
  }, [text, isNew, displayCount])

  return text.slice(0, displayCount)
}

Text dumps feel robotic. The response just... appears.
Character-by-character reveal creates perceived intelligence.
The animation is the difference between “the AI responded” and “the AI is thinking about your question.” Same data, completely different experience.
Determinism Is a Design Choice
Engagement is moving to chatbots. Users expect prompt-based interaction. The question isn't whether to build a chatbot — it's how to build one that's engaging, accurate, and deterministic. The tools exist. System prompts set boundaries. Page prompts adapt behavior. Tools give AI structured capabilities while page-gating prevents misuse. State machines bypass AI for critical paths. Canned responses handle the predictable with zero latency. Workflows ensure business logic executes exactly as designed. Animation makes it all feel alive. Effect-TS ties it together with type safety at every layer.
Prompts, tools, gates, state machines, workflows. Each layer catches what the previous one missed. Defense in depth for AI behavior.
AI generates. Handlers validate. Workflows execute. The AI's output is raw material — never the final product.
Effect Schema, typed actions, validated tools. The type system is your runtime safety net. If it compiles, it behaves.
Layers of Control: without them, raw AI reaches users; with them, every response passes through a type-safe pipeline.