

The AI Companion is an in-app chat that helps brokers get answers about their portfolio without leaving the page they are working on. It can also prepare common policy operations — such as a mid-term adjustment — and present them as a preview that the broker confirms before anything is applied.
The AI Companion is enabled on a per-product basis. Brokers see the Companion only on tenants and products where it has been turned on. If you do not see the Companion in your interface, it has not been activated for your configuration.

What the Companion does

The AI Companion is designed for two use cases:
  • Answer questions about policies, customers, billing, and platform features. It reads from the same data the broker can already see and replies in plain language.
  • Prepare actions that would otherwise require navigating through several screens. The Companion collects the necessary information, builds a preview of the change, and asks the broker to confirm before applying it.
The Companion never performs an action silently. Every change to a policy, customer, or invoice is shown in a preview first and applied only after the broker explicitly accepts.
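The preview-then-confirm contract can be sketched in a few lines. Everything below (the `PendingChange` class, `apply_change`, the field names) is hypothetical and illustrative only; it is not the Korint API:

```python
from dataclasses import dataclass

@dataclass
class PendingChange:
    """A proposed change; nothing is committed until confirm() is called."""
    field_name: str
    current_value: str
    proposed_value: str
    confirmed: bool = False

    def preview(self) -> str:
        # What the broker reviews: current value vs. proposed value.
        return f"{self.field_name}: {self.current_value} -> {self.proposed_value}"

    def confirm(self) -> None:
        self.confirmed = True

def apply_change(change: PendingChange, policy: dict) -> dict:
    """Apply only after explicit confirmation; otherwise the policy is untouched."""
    if not change.confirmed:
        return policy  # no silent writes: unconfirmed changes are never applied
    return {**policy, change.field_name: change.proposed_value}
```

The point of the sketch is the ordering: the preview exists, and can be inspected, before any write happens.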

What the Companion can read

The AI Companion has read access to the policies, customers, and assets that the broker is allowed to see. It uses this access to answer questions like:
  • “What’s the renewal date on policy P-12345?”
  • “Show me all active policies for this customer.”
  • “Which policies are unpaid this month?”
The Companion respects the same permissions as the rest of the platform: it cannot read data the broker is not authorized to see.
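Conceptually, the permission model is a filter applied before the Companion ever sees the data. A minimal sketch, assuming a hypothetical `owner` field and a per-broker scope (neither is the real Korint data model):

```python
def visible_policies(all_policies: list[dict], broker_scope: set[str]) -> list[dict]:
    """Return only the policies the broker is already authorized to see.

    Illustrative only: the Companion answers from this filtered view,
    so it can never surface data outside the broker's permissions.
    """
    return [p for p in all_policies if p["owner"] in broker_scope]
```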

What the Companion can change

The AI Companion can prepare a limited set of policy operations. Today, this includes mid-term adjustments (MTAs) — modifications to an active policy such as updating coverage, changing the insured asset, or adjusting the effective date. The flow is always the same:
  1. The broker describes the change in plain language (“change the address on policy P-12345 to …”).
  2. The Companion builds a preview showing the current values, the proposed values, and the effective date.
  3. The broker reviews the preview and either confirms or asks for adjustments.
  4. The change is applied only after explicit confirmation. Until then, nothing is committed.
The AI Companion cannot bypass product rules, validation logic, or business constraints. If a requested change would violate the product configuration (for example, a coverage option that is not available on the selected product), the Companion declines and explains why.
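The "cannot bypass product rules" guarantee amounts to validating the requested change against the product configuration before building the preview. A sketch, with a made-up `coverage_options` key standing in for the real product configuration:

```python
def validate_mta(product_config: dict, requested: dict) -> tuple[bool, str]:
    """Decline any MTA the product configuration does not allow (illustrative).

    Returns (ok, reason); on failure the Companion explains why it declined
    instead of building a preview.
    """
    allowed = set(product_config["coverage_options"])
    for coverage in requested.get("coverages", []):
        if coverage not in allowed:
            return False, f"coverage '{coverage}' is not available on this product"
    return True, "ok"
```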

What the Companion will not do

The AI Companion is intentionally scoped. It will not:
  • Take destructive actions (terminate a policy, delete a customer) without an explicit confirmation step.
  • Perform operations that fall outside its current toolset. If a broker asks for something not yet supported, the Companion declines and suggests the manual path in the platform.
  • Reliably answer questions outside the platform’s scope. The Companion is optimized for Korint data rather than general-purpose use, and typically declines or redirects such questions.
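The scoping behavior for actions can be pictured as a simple allow-list check. The action names here are invented for illustration; only mid-term adjustments are described as supported above:

```python
SUPPORTED_ACTIONS = {"mid_term_adjustment"}  # hypothetical current toolset

def route_request(action: str) -> str:
    """Decline unsupported operations and point to the manual path (sketch)."""
    if action in SUPPORTED_ACTIONS:
        return f"preparing {action} preview"
    return "not supported yet; please use the standard platform flow"
```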

Conversation history

Each broker has their own conversation history with the AI Companion. Conversations are persistent: a broker can close the chat, return later, and pick up where they left off. The history is scoped to the broker’s account and is not shared across users. Conversations can be started in two ways:
  • From the AI Companion page — A dedicated page where brokers can browse and resume their past conversations.
  • From a policy or customer page — The Companion can be opened in context, with the relevant entity already loaded into the conversation.

Feedback on AI responses

When an answer or proposed action is wrong, brokers can flag the specific message directly in the conversation. Flagged messages are recorded and reviewed by the Korint team to improve the AI Companion over time.
Flagging is the most effective way to improve the Companion. Specific examples of incorrect answers — with the exact policy or customer involved — give the team the context needed to diagnose and correct the underlying behavior.
A flagged message is marked in the conversation so the broker can see at a glance which responses they have already reported. Flags do not roll back the action that was taken (if any) — they are a feedback signal, not an undo. To reverse a change that was applied in error, the broker uses the standard policy management flow.
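The key property of flagging, that it marks the message but reverses nothing, can be shown in two lines. The message shape is invented for illustration:

```python
def flag_message(conversation: list[dict], index: int) -> dict:
    """Mark a message as flagged for review; a flag is feedback, not an undo."""
    msg = conversation[index]
    msg["flagged"] = True  # visible marker so the broker sees what was reported
    return msg  # any applied change is reversed via the standard policy flow
```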

Reliability and fallbacks

The AI Companion runs on AI models that are subject to occasional latency or unavailability. When that happens:
  • Slow responses — The Companion indicates that it is still working. Brokers can keep the conversation open and continue once the response arrives.
  • Interrupted responses — If a response is cut off mid-message (for example, due to a transient infrastructure issue), the Companion attempts to flag the message as incomplete so the broker can retry. In rare cases the message may simply appear truncated, in which case the broker can resend the prompt.
  • Service unavailable — If the AI service is temporarily down, the Companion typically surfaces the outage so the broker can continue the same workflow manually through the platform’s standard interfaces. Transient errors or timeouts may also occur, in which case retrying or falling back to the standard interface is recommended.
In all cases, no policy, customer, or invoice change is applied without an explicit broker confirmation. A failed or interrupted Companion response cannot leave the system in an inconsistent state.
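The retry-then-fall-back behavior described above can be sketched as a wrapper around the model call. The function and its failure mode (`TimeoutError`) are assumptions for illustration:

```python
def ask_with_fallback(ask, prompt: str, retries: int = 2) -> str:
    """Retry transient failures; on a persistent outage, surface the manual path."""
    for _ in range(retries + 1):
        try:
            return ask(prompt)
        except TimeoutError:
            continue  # transient error: resend the same prompt
    # Service is down: tell the broker to continue via the standard interface.
    return "AI service unavailable; please use the standard interface"
```

Because every write still goes through the preview-and-confirm step, a failure anywhere in this path leaves no partial change behind.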