GENChat

Multi-tenant embeddable chat widget with pluggable AI backends and human escalation

GENChat is designed as a governed conversational layer: deterministic orchestration first, with LLMs as one inference component when appropriate.

What it is

A drop-in chat experience that can be embedded into multiple sites (tenants), each with independent configuration, policies, and backends.

Designed for regulated conversation and operational use.

Features

Theming

Auto light/dark detection with full CSS customisation to match your site.
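One way the auto light/dark behaviour could be resolved is sketched below; the function and type names are illustrative, not GENChat's actual API. In the browser, the `prefersDark` flag would come from the standard `prefers-color-scheme` media query.

```typescript
// Sketch: resolve the widget theme from the host environment, with an
// optional per-tenant override. Names here are assumptions for illustration.
type Theme = "light" | "dark";
type ThemeOverride = Theme | "auto";

function resolveTheme(prefersDark: boolean, override: ThemeOverride = "auto"): Theme {
  if (override !== "auto") return override; // tenant forces a theme
  return prefersDark ? "dark" : "light";    // otherwise follow the OS/browser
}

// In a browser embed, the flag would come from:
// const prefersDark = window.matchMedia("(prefers-color-scheme: dark)").matches;
```

The resolved theme could then be applied via CSS custom properties, which is what makes full CSS customisation per site possible.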

Pluggable backends

Connect advanced inference engines and tooling automation via a controlled integration layer.
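A controlled integration layer of this kind might look like the registry sketched below; the `ChatBackend` interface and method names are assumptions, not the actual GENChat contract.

```typescript
// Sketch of a pluggable-backend registry: backends register under a name,
// and the orchestration layer dispatches to one by name per tenant/policy.
interface ChatBackend {
  name: string;
  handle(message: string): Promise<string>;
}

class BackendRegistry {
  private backends = new Map<string, ChatBackend>();

  register(backend: ChatBackend): void {
    this.backends.set(backend.name, backend);
  }

  async dispatch(name: string, message: string): Promise<string> {
    const backend = this.backends.get(name);
    if (!backend) throw new Error(`no backend registered as "${name}"`);
    return backend.handle(message);
  }
}
```

Keeping backends behind a single interface is what lets a tenant swap inference engines without touching the widget itself.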

Human escalation

Escalate to human agents using secure links and controlled context handover.
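A secure escalation link typically carries a signed, expiring token so the agent console can verify the handover. The sketch below shows one HMAC-based shape; the token format, key handling, and function names are assumptions for illustration.

```typescript
// Sketch: sign and verify an escalation handover token binding a
// conversation id to an expiry. Real code would also use timingSafeEqual
// for the signature comparison; this is a simplified illustration.
import { createHmac } from "node:crypto";

function signHandover(conversationId: string, expiresAt: number, secret: string): string {
  const payload = `${conversationId}.${expiresAt}`;
  const sig = createHmac("sha256", secret).update(payload).digest("hex");
  return `${payload}.${sig}`;
}

function verifyHandover(token: string, secret: string, now: number): string | null {
  const [conversationId, exp, sig] = token.split(".");
  const expected = createHmac("sha256", secret)
    .update(`${conversationId}.${exp}`)
    .digest("hex");
  if (sig !== expected || now > Number(exp)) return null; // tampered or expired
  return conversationId;
}
```

Controlled context handover then means the agent side resolves the verified conversation id to a redacted transcript, rather than the token carrying raw chat content.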

Session persistence

Conversations persist across page navigation for a continuous user experience.
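Persistence across navigation usually comes down to serialising the transcript into per-session storage. The sketch below keeps the store injected so the logic is runtime-agnostic; in a real embed the store would be `sessionStorage` (or a server-side session). Names are illustrative.

```typescript
// Sketch: append a message to a stored transcript keyed by session.
// The KVStore shape matches the Web Storage getItem/setItem signatures.
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

interface Message { role: "user" | "assistant"; text: string }

function appendMessage(store: KVStore, sessionKey: string, msg: Message): Message[] {
  const history: Message[] = JSON.parse(store.getItem(sessionKey) ?? "[]");
  history.push(msg);
  store.setItem(sessionKey, JSON.stringify(history)); // survives page navigation
  return history;
}
```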

Security

Domain allow-listing, CORS controls, and iframe sandboxing support safe embedding.
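Domain allow-listing might be enforced at widget bootstrap: refuse to initialise unless the host page's origin matches the tenant's list. The sketch below supports exact origins plus a `*.` subdomain wildcard; the list format and function name are assumptions.

```typescript
// Sketch: check an embedding origin against a tenant allow-list.
// Entries are full origins ("https://example.com") or subdomain
// wildcards ("*.example.com"). Fails closed on malformed input.
function originAllowed(origin: string, allowList: string[]): boolean {
  let host: string;
  try {
    host = new URL(origin).hostname;
  } catch {
    return false; // malformed origin: reject
  }
  return allowList.some((entry) =>
    entry.startsWith("*.")
      ? host.endsWith(entry.slice(1)) // "*.example.com" matches any subdomain
      : new URL(entry).hostname === host
  );
}
```

The iframe side pairs with this: serving the widget with a restrictive `sandbox` attribute (e.g. `allow-scripts allow-forms`) limits what embedded code can do even on an allowed domain.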

Analytics

Capture IP/rDNS, user agents, and full chat history for auditing and improvement.

Multi-tenant

Multiple sites with independent configurations, policies, and backends.
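Tenant isolation could be modelled as nothing more than a strict mapping from tenant id to its own configuration, with no shared defaults. The fields and names below are illustrative assumptions, not GENChat's schema.

```typescript
// Sketch: per-tenant configuration with fail-closed lookup, so an unknown
// tenant id can never fall through to another tenant's settings.
interface TenantConfig {
  allowedOrigins: string[];   // domains permitted to embed the widget
  backend: string;            // which inference backend to dispatch to
  escalationEnabled: boolean; // whether human handover is offered
}

const tenants = new Map<string, TenantConfig>();

function getTenant(id: string): TenantConfig {
  const cfg = tenants.get(id);
  if (!cfg) throw new Error(`unknown tenant "${id}"`); // fail closed
  return cfg;
}
```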

GAIN-Powered Multi-Inference Chatbot Platform Version 3

Version 3 is the first release in which finely tuned, highly constrained LLMs are allowed to participate in genuine conversational interactions, always regulated by deterministic logic and supported by multiple information endpoints for grounded knowledge generation.

Important

Chatbots should never be directly connected to an LLM. Production-grade assistants need governance: safety constraints, policy enforcement, tool routing, validation, caching, and deterministic fallbacks. LLMs are one inference component — not the system.
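The governed ordering described above can be sketched as a pipeline: validate input, enforce policy, try deterministic routing first, and only then consult a constrained LLM, with a deterministic fallback if it fails or declines. Stage and parameter names here are illustrative assumptions.

```typescript
// Sketch: LLM as one component behind validation, policy, and routing,
// never the first or only answer path.
type Handler = (message: string) => Promise<string | null>;

async function orchestrate(
  message: string,
  policyOk: (m: string) => boolean,
  deterministicRoute: Handler, // rules/state machines: tried first
  llm: Handler,                // constrained LLM: one component, not the system
  fallback: string
): Promise<string> {
  if (!message.trim() || !policyOk(message)) return fallback; // validation + policy
  const routed = await deterministicRoute(message);
  if (routed !== null) return routed; // deterministic answer wins
  try {
    const draft = await llm(message);
    return draft ?? fallback;         // the LLM may decline
  } catch {
    return fallback;                  // deterministic fallback on failure
  }
}
```

Validation and caching could sit in further stages of the same pipeline; the point is that every path out of the LLM passes back through deterministic control.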

Revision history

Version 1 — Pure logic & algorithms

Deterministic routing and outcomes driven by explicit rules, state machines, and classical inference.

Version 2 — LLMs used for inference signals

LLMs were introduced for intent determination and extraction, while the platform remained primarily logic/algorithm driven.

Version 3 — Constrained conversation + multi-endpoint grounding

Finely tuned LLMs can participate in conversation, constrained and orchestrated by deterministic logic, with multiple information endpoints used to generate grounded responses.
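Multi-endpoint grounding of this kind might fan a query out to the trusted endpoints, keep only the ones that answered, and hand the LLM evidence rather than letting it free-associate. The `Endpoint` shape and function name below are assumptions for illustration.

```typescript
// Sketch: gather grounding evidence from several information endpoints,
// tolerating individual failures and misses.
interface Endpoint {
  name: string;
  lookup(query: string): Promise<string | null>;
}

async function gatherEvidence(query: string, endpoints: Endpoint[]): Promise<string[]> {
  const results = await Promise.allSettled(endpoints.map((e) => e.lookup(query)));
  return results.flatMap((r) =>
    r.status === "fulfilled" && r.value !== null ? [r.value] : [] // drop misses/failures
  );
}
```

An empty evidence list is itself a signal: the orchestration layer can then route to a deterministic fallback instead of letting the LLM answer ungrounded.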

Key points

  • Governance first: policy and routing layer controls behaviour.
  • Determinism where it matters: reliable flows and fallbacks for operational tasks.
  • Grounded responses: pull from trusted endpoints, not free-form hallucination.
  • Tenant isolation: separate configs, domains, and backends per site.