Chatsy
Glossary

Intent Classification

Intent classification is the process of analyzing a user message and categorizing it into a predefined intent — the action or information the user is seeking. For example, classifying "I want to cancel my subscription" as a "cancellation" intent and "How much is the Pro plan?" as a "pricing inquiry" intent.

How it works

Intent classification is the first step in processing any chatbot interaction. The system needs to understand what the customer wants before it can provide the right response.

Modern intent classification approaches include:

  • **LLM-based classification**: The language model understands intent from context without predefined categories, handling novel phrasings naturally

  • **Traditional NLU models**: Trained classifiers that map messages to predefined intent categories with confidence scores

  • **Hybrid approaches**: LLM understanding combined with intent routing rules for business-critical categories
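As a rough sketch, LLM-based classification often boils down to a prompt that asks the model for a JSON intent label, plus a parser for the reply. The prompt wording, the `INTENT_HINTS` list, and the JSON shape below are illustrative assumptions, not a specific provider's API; the canned reply stands in for a real model call.

```python
import json

# Illustrative label hints; an LLM can also propose labels outside this list.
INTENT_HINTS = ["pricing", "billing", "cancellation", "technical_issue", "other"]

def build_classification_prompt(message: str) -> str:
    """Assemble a prompt asking the model to return a JSON intent label."""
    return (
        "Classify the customer message into one intent. "
        f"Prefer one of {INTENT_HINTS}, but you may propose a new label "
        "if none fits. Respond as JSON: {\"intent\": \"...\", \"confidence\": 0.0-1.0}.\n"
        f"Message: {message}"
    )

def parse_classification(raw_response: str) -> tuple[str, float]:
    """Parse the model's JSON reply into (intent, confidence)."""
    data = json.loads(raw_response)
    return data["intent"], float(data["confidence"])

# Example with a canned model reply (no real API call made here):
intent, confidence = parse_classification('{"intent": "cancellation", "confidence": 0.93}')
print(intent, confidence)  # cancellation 0.93
```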

Beyond simple classification, advanced systems also extract entities (the specific details within the intent). For "Cancel my Pro plan effective March 1," the intent is "cancellation" and the entities are "Pro plan" (product) and "March 1" (date).
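The "Cancel my Pro plan effective March 1" example above can be sketched as a structured result: one intent plus a dictionary of extracted entities. The JSON reply format here is an assumption about how a model might be instructed to respond, not any particular vendor's schema.

```python
import json
from dataclasses import dataclass, field

@dataclass
class ClassifiedMessage:
    """Intent plus the specific details (entities) extracted from a message."""
    intent: str
    entities: dict = field(default_factory=dict)

def parse_intent_and_entities(raw_json: str) -> ClassifiedMessage:
    """Turn a model's JSON reply into a structured result."""
    data = json.loads(raw_json)
    return ClassifiedMessage(intent=data["intent"], entities=data.get("entities", {}))

# Canned reply for "Cancel my Pro plan effective March 1":
reply = '{"intent": "cancellation", "entities": {"product": "Pro plan", "date": "March 1"}}'
result = parse_intent_and_entities(reply)
print(result.intent)               # cancellation
print(result.entities["product"])  # Pro plan
```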

Why it matters

Accurate intent classification determines whether the chatbot responds helpfully or frustratingly. If the AI misclassifies "I want to upgrade" as a cancellation intent, it provides completely wrong information. Intent accuracy above 90% is the threshold where AI chatbots become genuinely useful rather than frustrating for customers.

How Chatsy uses intent classification

Chatsy uses the underlying LLM to understand customer intent naturally from context, then combines this with RAG retrieval to find the most relevant knowledge base content. This approach handles the infinite variations in how customers phrase requests without requiring manually defined intent categories for every possible question.

Real-world examples

Routing to the right knowledge base section

A customer asks "my payment didn't go through." The system classifies the intent as "payment failure," retrieves articles about declined payments and retry instructions from the billing knowledge base section, and generates a targeted troubleshooting response — not a generic FAQ answer.
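The routing step in this example can be reduced to a mapping from classified intents to knowledge base sections, with a general fallback. The intent labels and section names below are hypothetical, not Chatsy's actual configuration.

```python
# Hypothetical mapping from classified intents to knowledge base sections.
INTENT_TO_KB_SECTION = {
    "payment_failure": "billing",
    "cancellation": "account",
    "pricing_inquiry": "plans",
}

def kb_section_for(intent: str) -> str:
    """Route an intent to its KB section, falling back to general FAQs."""
    return INTENT_TO_KB_SECTION.get(intent, "general_faq")

print(kb_section_for("payment_failure"))  # billing
print(kb_section_for("shipping_status"))  # general_faq
```

In a RAG pipeline, the returned section name would scope the retrieval query so the generated answer draws on declined-payment articles rather than the whole knowledge base.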

Multi-intent message handling

A customer writes "I need to update my billing address and also want to know when my next invoice is." The system identifies two intents — address update and invoice inquiry — and addresses both in a single response with instructions for each.
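A minimal sketch of that behavior: the classifier returns a list of intents, and the responder concatenates an answer for each into one reply. The canned handler texts are placeholders for what a RAG step would actually generate.

```python
# Hypothetical per-intent answers; a real system would generate these with RAG.
HANDLERS = {
    "address_update": "You can update your billing address under Settings > Billing.",
    "invoice_inquiry": "Your next invoice date is shown at the top of the Billing page.",
}

def respond_to_intents(intents: list[str]) -> str:
    """Address every detected intent in a single combined reply."""
    parts = [HANDLERS[i] for i in intents if i in HANDLERS]
    return "\n".join(parts)

# Two intents detected in one message, answered in one response:
reply = respond_to_intents(["address_update", "invoice_inquiry"])
print(reply)
```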

Escalation intent detection

When a customer writes "This is unacceptable, I need to speak to a manager immediately," the system classifies this as an escalation intent with high urgency. Instead of attempting to resolve the issue, it immediately routes to a human agent with manager-level access.
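The escalation decision is essentially a guard that runs before any answer generation. The intent label, urgency levels, and route names below are assumptions for illustration.

```python
def route_message(intent: str, urgency: str) -> str:
    """Send urgent escalation requests straight to a human; otherwise let the AI answer."""
    if intent == "escalation" and urgency == "high":
        return "human_agent"
    return "ai_response"

print(route_message("escalation", "high"))      # human_agent
print(route_message("pricing_inquiry", "low"))  # ai_response
```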

Key takeaways

  • Intent classification determines what the customer wants, enabling the right response or action

  • Modern LLMs classify intent from natural language context without needing predefined intent categories

  • Accuracy above 90% is the threshold where chatbots become genuinely useful for customers

  • Advanced systems extract both intent (what they want) and entities (the specific details) from messages

  • Multi-intent detection handles complex messages where customers ask about multiple things at once

Frequently asked questions

How is intent classification different from keyword matching?

Keyword matching looks for specific words. Intent classification understands the meaning behind the message. "I want out," "please cancel," and "end my subscription" all have different keywords but the same cancellation intent. LLM-based classification handles this naturally.

How many intents should a support chatbot handle?

Most customer support chatbots effectively handle 20-50 core intents (pricing, billing, cancellation, technical issues, account management, etc.). LLM-based systems do not require explicitly defining intents — they understand them from context. The key is ensuring your knowledge base covers the top intents with thorough content.

What happens when the AI cannot classify the intent?

When the AI has low confidence in its classification, the best approach is to ask a clarifying question ("Could you tell me more about what you need help with?") or offer common options. If confidence remains low after clarification, escalate to a human agent.
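That fallback chain can be sketched as a small decision function. The 0.7 confidence threshold is an assumed value to tune per deployment, not a documented Chatsy default.

```python
def next_action(confidence: float, already_clarified: bool = False) -> str:
    """On low confidence, ask a clarifying question first, then escalate."""
    if confidence >= 0.7:  # assumed threshold; tune per deployment
        return "answer"
    return "escalate" if already_clarified else "ask_clarifying_question"

print(next_action(0.9))                          # answer
print(next_action(0.4))                          # ask_clarifying_question
print(next_action(0.4, already_clarified=True))  # escalate
```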

Can intent classification handle misspellings and slang?

Yes. LLM-based intent classification is robust to misspellings, slang, abbreviations, and informal language. "cant login plz help" and "I am unable to access my account" are both correctly classified as authentication/login intent because the model understands meaning, not just words.

See intent classification in action

Try Chatsy free and experience how these concepts come together in an AI-powered support platform.