# Main concepts

At the core of Syntphony CAI is a **structured orchestration model** that defines how conversational systems are designed, evaluated, and executed.

Rather than relying on a single assistant or rigid intent trees, the platform organizes interactions through a set of coordinated components, each with a clearly defined role.

At the center of this model is a fundamental separation between decision-making and execution. One layer determines what should happen, while other components are responsible for carrying it out.

Within this architecture, every interaction follows a governed process:

* Requests are interpreted;
* Decisions are evaluated;
* Execution is delegated to the appropriate component.

Based on this process, specialized Agents and skills are activated **according to explicit rules and responsibilities, not autonomous behavior.**
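The governed process above, interpret, decide, delegate, can be sketched as a minimal pipeline. All names below are illustrative only, not part of the Syntphony CAI API:

```python
# Illustrative sketch of the interpret -> decide -> delegate flow.
# Function and variable names are hypothetical, not Syntphony APIs.

def interpret(text: str) -> dict:
    """Interpret the raw request into a structured form (toy logic)."""
    task = "billing" if "invoice" in text.lower() else "chitchat"
    return {"text": text, "task": task}

def decide(request: dict, agents: dict) -> str:
    """Evaluate the decision: pick an eligible component or fall back."""
    return request["task"] if request["task"] in agents else "fallback"

def execute(route: str, request: dict, agents: dict) -> str:
    """Delegate execution to the selected component."""
    handler = agents.get(route, lambda r: "No eligible Agent; applying fallback.")
    return handler(request)

agents = {"billing": lambda r: f"Billing Agent handling: {r['text']}"}

request = interpret("Where is my invoice?")
route = decide(request, agents)
result = execute(route, request, agents)
# result == "Billing Agent handling: Where is my invoice?"
```

The key point the sketch illustrates is that the decision step and the execution step are separate functions: the layer that chooses a route never performs the work itself.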

Together, these **principles** establish a controlled **Agentic architecture:** intelligence is not emergent, but orchestrated.

{% tabs fullWidth="true" %}
{% tab title="Project" %}
A [**Project**](https://docs.conversational-ai.syntphony.com/user-guide/create-a-project) is the top-level container that defines and encapsulates a complete conversational AI solution. It establishes both the structural boundary of the system and the rules that govern how it operates.

Within a Project, all core components are configured and executed, including the Supervisor, specialized Agents, Workflows, Actions, Knowledge, and integration settings.

Each component has a clearly defined role:

* The **Supervisor** orchestrates interactions by evaluating user input, applying governance rules, and determining how each request should be handled;
* **Agents** execute specific tasks within a defined scope once selected;
* **Workflows, Actions, and Knowledge** provide the operational capabilities required to fulfill requests, from structured processes to external integrations and information retrieval.

These components do not operate independently. The Project defines how they **interact end-to-end:** how requests are interpreted, how routing decisions are made, and how execution is delegated across the system.

As a result, the Project functions as both the structural boundary and the behavioral definition layer of the system.

{% hint style="info" %}
This structure enables organizations to design conversational systems that are **modular**, **governed**, and **scalable**, supporting use cases ranging from customer support to complex operational workflows.
{% endhint %}
{% endtab %}

{% tab title="Supervisor" %}
Within the Syntphony CAI ecosystem, the [**Supervisor**](https://docs.conversational-ai.syntphony.com/user-guide/ai-agents/supervisor) is the orchestration and decision layer of a Project. It governs how every interaction is evaluated, classified, and routed according to a defined governance model.

**Rather than relying on autonomous reasoning, the Supervisor operates as a deterministic decision engine.** It evaluates each request against predefined eligibility criteria and applies a formal decision hierarchy to determine the correct handling path.

The Supervisor coordinates multiple Agents and Workflows through this centralized control layer. While Agents encapsulate domain-specific capabilities, the Supervisor is responsible for deciding when and how those capabilities are used.

#### **Coordination across multiple Agents**

In complex scenarios, such as technical support, multiple Agents may be available (e.g., diagnostics, troubleshooting, billing).

The Supervisor coordinates these components by routing each request to the appropriate Agent or Workflow based on governance rules and eligibility criteria. It ensures that the right capability is activated at the right time, without ambiguity or overlap.

#### **Structured orchestration**

The Supervisor operates through a governed and deterministic process. For every interaction, it:

* **Verifies** eligibility across available Agents or Workflows based on the selected governance model;
* **Classifies** the request as conversational (chit-chat) or task-oriented;
* **Selects** the appropriate handling path based on predefined decision logic;
* **Applies** a controlled fallback when no valid path exists.

{% hint style="info" %}
This ensures that decisions are not inferred or improvised, but consistently enforced according to the Project’s defined rules and capabilities.
{% endhint %}
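The four steps above can be sketched as a small deterministic decision function. Keyword matching stands in for the real classification step, and every name is hypothetical, not part of the Syntphony CAI API:

```python
# Illustrative deterministic decision engine; all names and the
# keyword-based classifier are hypothetical stand-ins.

TASK_KEYWORDS = {
    "diagnostics": ["error", "crash"],
    "billing": ["invoice", "charge"],
}

def classify(text: str) -> str:
    """Classify the request as task-oriented or conversational (chit-chat)."""
    words = text.lower()
    for agent, keywords in TASK_KEYWORDS.items():
        if any(k in words for k in keywords):
            return agent
    return "chitchat"

def supervise(text: str, eligible_agents: set) -> str:
    """Verify eligibility, classify, select a path, or apply fallback."""
    label = classify(text)
    if label == "chitchat":
        return "conversational-handler"
    if label in eligible_agents:   # eligibility check against the governance model
        return label               # deterministic selection
    return "fallback"              # controlled fallback when no valid path exists

assert supervise("My invoice is wrong", {"billing"}) == "billing"
```

Because the same input always follows the same decision path, the outcome is enforced by configuration rather than inferred at runtime.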

#### **Rules and Guardrails**

Rules and Guardrails define the boundaries of system behavior and ensure that all interactions remain aligned with governance policies and domain constraints.

* **Rules** define what the system is or is not allowed to do;
* **Guardrails** enforce those boundaries at runtime, ensuring that decisions and outputs remain compliant.

Together, they establish a controlled environment where behavior is explicitly defined rather than implicitly learned.
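One way to picture the division of labor is Rules as declarative policy and Guardrails as the runtime check that enforces it. The sketch below is a toy illustration, not the actual Syntphony configuration format:

```python
# Hypothetical sketch: Rules as declarative policy, Guardrails as
# runtime enforcement. Not the real Syntphony configuration schema.

RULES = {"allowed_actions": {"check_balance", "reset_password"}}

def guardrail(action: str) -> bool:
    """Enforce the rule boundary at runtime, before execution happens."""
    return action in RULES["allowed_actions"]

assert guardrail("check_balance") is True
assert guardrail("delete_account") is False  # blocked at runtime
```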
{% endtab %}

{% tab title="Persona" %}
**Personas** define how Agents communicate, transforming them from generic interfaces into structured communication partners aligned with specific contexts and expectations.

They provide a configurable communication layer that determines how responses are expressed, including tone, style, and contextual framing.

By defining personality traits, communication styles, and domain-specific context, **Personas allow Agents to adapt their approach to different user profiles, industries, or interaction scenarios**. This behavior is explicitly configured as part of the system design, ensuring consistency and control.

As a result, interactions remain clear, relevant, and aligned with business expectations, while enabling more natural and context-appropriate communication.
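Conceptually, a Persona is configuration applied on top of an Agent's raw output. The field names and framing logic below are illustrative only, not the Syntphony Persona schema:

```python
# Hypothetical Persona configuration; field names are illustrative only.
persona = {
    "name": "Support Specialist",
    "tone": "empathetic",
    "style": "concise",
    "domain_context": "consumer electronics support",
}

def apply_persona(persona: dict, answer: str) -> str:
    """Frame a raw answer according to the configured Persona (toy logic)."""
    prefix = {"empathetic": "I understand, let's fix this. "}.get(persona["tone"], "")
    return prefix + answer

print(apply_persona(persona, "Restart the router and retry."))
# -> "I understand, let's fix this. Restart the router and retry."
```

Because the Persona is declared rather than learned, the same Agent can be reused across projects with different tones without changing its execution logic.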
{% endtab %}

{% tab title="Agents" %}
In Syntphony CAI, [**Agents**](https://docs.conversational-ai.syntphony.com/user-guide/ai-agents/ai-agents) are specialized execution units responsible for handling requests within a defined domain.

Each Agent operates through a structured set of skills that define how it retrieves information, performs tasks, and interacts with external systems. Agents do not make decisions or control routing. They are invoked by the Supervisor once a valid handling path is determined.

As part of the execution layer, an Agent’s role is to perform tasks within its scope, not to orchestrate or classify interactions. This separation ensures that execution remains modular, predictable, and fully aligned with the system’s governance model.

### Agent skills

Agents rely on a set of skills that enable them to process inputs, generate responses, and perform operations.

#### KAI Collection (Knowledge)

Knowledge enables Agents to retrieve and use information from [structured content sources](https://docs.conversational-ai.syntphony.com/user-guide/ai-agents/knowledge/knowledge-sources).

Through Retrieval-Augmented Generation (RAG), Agents access relevant data at runtime and ground their responses in external knowledge, improving accuracy, consistency, and contextual relevance.
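The RAG pattern can be sketched as retrieve-then-ground. Real Knowledge retrieval uses embeddings and semantic search; the keyword overlap below is only a stand-in to show the shape of the flow:

```python
# Minimal RAG-style sketch: retrieve a relevant document, then ground
# the answer in it. Keyword overlap stands in for semantic retrieval.

KNOWLEDGE = [
    "Refunds are processed within 5 business days.",
    "Devices ship with a 2-year warranty.",
]

def retrieve(query: str, docs: list) -> str:
    """Pick the document sharing the most words with the query (toy scoring)."""
    q = set(query.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

context = retrieve("How long do refunds take?", KNOWLEDGE)
answer = f"According to our knowledge base: {context}"
```

Grounding the response in the retrieved `context` rather than in the model's own guess is what improves accuracy and consistency.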

#### Rules and Guardrails

Within Agents, **Rules and Guardrails define how tasks are executed and ensure that execution remains safe and controlled**.

* **Rules** specify how the Agent performs its Actions, including execution steps, constraints, and operational protocols;
* **Guardrails** enforce safety, policy, and domain boundaries during execution, preventing unsafe or non-compliant outputs.

#### Tools

Tools are integrations that allow Agents to interact with external systems and services.

They enable real-world operations such as retrieving data, updating records, triggering processes, and executing workflows, extending Agent skills beyond response generation.
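A common way to model this is a tool registry: each integration is a callable the Agent can invoke only through a controlled interface. The sketch below is hypothetical, not a Syntphony API:

```python
# Hypothetical tool registry; names are illustrative, not a Syntphony API.

def get_order_status(order_id: str) -> str:
    """Stand-in for a call to an external order-management system."""
    return f"Order {order_id}: shipped"

TOOLS = {"get_order_status": get_order_status}

def invoke_tool(name: str, **kwargs) -> str:
    """Agents reach external systems only through the registered interface."""
    if name not in TOOLS:
        raise ValueError(f"Unknown tool: {name}")
    return TOOLS[name](**kwargs)

assert invoke_tool("get_order_status", order_id="A123") == "Order A123: shipped"
```

Registering tools explicitly, rather than letting Agents call arbitrary code, keeps integrations aligned with the governance model.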
{% endtab %}

{% tab title="Actions" %}
[**Actions**](https://docs.conversational-ai.syntphony.com/user-guide/ai-agents/actions) define the specific tasks an Agent performs to fulfill a request. They represent the core unit of execution within an Agent.

An Action is a **structured execution** contract that defines:

* Which task should be performed;
* What data is required; and
* How the task should be executed.

Each Action is configured through a set of components:

* **Name:** identifies the purpose of the Action, for instance, ticket resolution or appointment scheduling;
* **Instructions:** define when the Action should be executed and how it should handle its inputs and behavior;
* **Properties:** specify the required data for execution, including what information must be collected and which inputs are mandatory. Each property represents a structured input used during task execution.

Once all required properties are provided, the Action is executed to perform a defined operation, such as generating content, processing information, or triggering workflows.

{% hint style="info" %}
By structuring both inputs and execution logic, Actions enable Agents to translate conversational input into controlled, task-oriented outcomes.
{% endhint %}
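The contract above, name, instructions, and required properties gating execution, can be sketched as follows. The field names mirror the components listed earlier but are illustrative, not the real configuration schema:

```python
# Hypothetical Action definition mirroring the components above
# (name, instructions, properties); not the real configuration schema.

action = {
    "name": "appointment scheduling",
    "instructions": "Run when the user asks to book a visit.",
    "properties": {"date": {"required": True}, "location": {"required": True}},
}

def ready(action: dict, collected: dict) -> bool:
    """The Action executes only once every required property is provided."""
    return all(
        key in collected
        for key, spec in action["properties"].items()
        if spec["required"]
    )

assert ready(action, {"date": "2025-06-01"}) is False  # location still missing
assert ready(action, {"date": "2025-06-01", "location": "Rio"}) is True
```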
{% endtab %}
{% endtabs %}
