# Other NLP and LLM Connectors

Learn how to connect to an external NLP engine.&#x20;

{% hint style="info" %}
The NTT DATA proprietary Natural Language Processing (NLP) engine comes integrated by default.
{% endhint %}

Syntphony Conversational AI allows you to use different NLP engines:

1. IBM Watson Assistant
2. Google Dialogflow Essentials
3. Microsoft LUIS (deprecated)
4. Amazon Lex
5. OpenAI and Azure OpenAI for LLMs

To use any of these, follow the steps below.

Click on `Change Model` to open the window with the available engine options.

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FqAZAdOg6Zn1A95RClmXB%2Fimage.png?alt=media&#x26;token=a2c3e86a-8c07-4be6-bb13-45b8fa049223" alt=""><figcaption></figcaption></figure>

## NLP

### IBM Watson Assistant

Watson is a service package offered by IBM. It includes question-answering software that applies natural language processing, information retrieval, knowledge representation, automated reasoning, and machine learning to answer questions posed in natural language.

1\)   Go to <https://login.ibm.com/>

2\)   Log in with your IBMid

3\)   Click on "skills" in the upper left corner

4\)   Then click “create skill” to create a virtual agent on Watson

5\)   If you have existing skills, select one, then click on the menu in the upper right corner of the selected skill card.

![Watson skills](https://content.gitbook.com/content/n6zS4HeuuVpRHZEvDiFU/blobs/hqztDMvgCXxBUkJuoP1o/image.png)

6\)   Click on “view API details”&#x20;

7\)   If you are using a newer account, copy the values after Assistant URL and API Key and insert them in **Syntphony CAI**. Remember to switch to the newer version in Syntphony Conversational AI.

8\)   If you are using an older account, copy the links and codes after v1 Workspace URL, Username and Password and insert them on **Syntphony CAI**. Remember to switch to the older version in **Syntphony CAI**.

![APIs](https://content.gitbook.com/content/n6zS4HeuuVpRHZEvDiFU/blobs/nO87calYGo57zOMYbARX/image.png)
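If you want to sanity-check the API key before entering it in Syntphony CAI, IBM Cloud services accept HTTP Basic auth with the literal username `apikey`. The sketch below only builds that header; the key value is a placeholder.

```python
import base64

def watson_basic_auth_header(api_key: str) -> dict:
    """Build the HTTP Basic auth header IBM Cloud services expect:
    the username is the literal string 'apikey', the password is your key."""
    token = base64.b64encode(f"apikey:{api_key}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

# 'my-example-key' is a placeholder; use the key from "view API details".
headers = watson_basic_auth_header("my-example-key")
```

You can then pass these headers to any HTTP client when calling the Assistant URL you copied.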

### Google Dialogflow Essentials

Google Dialogflow is a human-computer interaction framework that works on natural language.

1\)   Go to <https://dialogflow.com/>

2\)   Then click on “go to console”.

3\)   Click on settings on the upper left corner (the cogwheel icon - see image).

4\)   Click the link right after “Project ID”.

5\)   You will be taken to a page in the Google Cloud Platform.

6\)   Once in the Google Cloud Platform, click on the link below “e-mail”.

{% hint style="warning" %}
**Important:** Remember to change your agent's permissions or else your intents won't work.
{% endhint %}

7\)   Go to IAM on the upper left corner of the menu (as shown in the image below).<br>

![IAM](https://content.gitbook.com/content/n6zS4HeuuVpRHZEvDiFU/blobs/1GWQcREjYTqWo267dcyK/I%20AM.jpg)

8\)   Once there, click on the edit icon (pencil) on the right of the agent named as Dialogflow Integrations (see image below).<br>

![Agents list](https://content.gitbook.com/content/n6zS4HeuuVpRHZEvDiFU/blobs/tdlqB4EMnSVHHs7CuTf1/Agents%20Lists.jpg)

9\)   Now, select “Dialogflow” and then “Dialogflow API Admin” (as shown in the image below).<br>

![Permissions](https://content.gitbook.com/content/n6zS4HeuuVpRHZEvDiFU/blobs/0qdXcCcSGEzJO8YFtyot/Permissions.jpg)

10\)   Once you have changed your agent's permissions, go to “service accounts” and then click on the menu on the right of the agent you want to use.

{% hint style="warning" %}
**Important:** If you don’t have a Service Account, click on “Create Service Account” and create one
{% endhint %}

![Agent selection](https://content.gitbook.com/content/n6zS4HeuuVpRHZEvDiFU/blobs/ViEBMXt7wb784AicoeXC/image.png)

11\)   Click on “create key” and select JSON.&#x20;

12\)   Save the JSON file on your computer.

{% hint style="info" %}
New option to configure Dialogflow multi region.
{% endhint %}

13\) (Optional) If you want to use a Dialogflow agent from a specific region, you need to add a new parameter called `region` to the JSON file. This parameter must contain the official region identifier described in this table:

| Country grouping | Geographic location                                          | Region ID            |
| ---------------- | ------------------------------------------------------------ | -------------------- |
| Europe           | Belgium                                                      | europe-west1         |
| Europe           | London                                                       | europe-west2         |
| Asia-Pacific     | Sydney                                                       | australia-southeast1 |
| Asia-Pacific     | Tokyo                                                        | asia-northeast1      |
| Global           | Dialogflow delivery is global, data at rest is within the US | global               |

If this parameter does not exist when creating the bot in **Syntphony CAI**, the global region will continue to be used by default as it has been to date.

Example Dialogflow metadata JSON with the new `region` parameter as the last field (note that JSON does not allow comments, so the file must contain only the fields shown):

```
{
  "type": "service_account",
  "project_id": "projectId",
  "private_key_id": "d8313783b67e14489ef0ea8b2fafd2b23c62c507",
  "private_key": "-----BEGIN PRIVATE KEY-----CRIPTED_KEY-----END PRIVATE KEY-----\n",
  "client_email": "email@email.iam.gserviceaccount.com",
  "client_id": "1234",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/...",
  "region": "australia-southeast1"
}
```

14\)   Upload this file when creating a Dialogflow virtual agent in **Syntphony CAI** to complete the integration.
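One way to add the optional region parameter to the downloaded credentials file programmatically is sketched below; the credentials dictionary here is a minimal stub, while a real file contains all the service-account fields shown above.

```python
import json

def add_dialogflow_region(credentials: dict, region_id: str) -> dict:
    """Return a copy of the service-account credentials with the
    optional 'region' parameter added (omit it to keep the global region)."""
    updated = dict(credentials)
    updated["region"] = region_id
    return updated

# Minimal stub; load your real file with json.load(open("credentials.json"))
creds = {"type": "service_account", "project_id": "projectId"}
creds = add_dialogflow_region(creds, "australia-southeast1")
print(json.dumps(creds, indent=2))
```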

### Microsoft Luis (Deprecated)

Language Understanding (LUIS) is a cloud-based API service that applies custom machine-learning intelligence to a user's conversational, natural language text to predict overall meaning, and pull out relevant, detailed information.

To integrate LUIS with Syntphony Conversational AI, you must have an active Azure account with the required resources created.

1\)   Go to <https://www.luis.ai/>

2\)   Login with your Microsoft account.

3\)   Create an app or click on an existing one.

4\)   Click on “manage”.

![Endpoints on Azure](https://content.gitbook.com/content/n6zS4HeuuVpRHZEvDiFU/blobs/kI8oJPp7XrCxHRJDwzvo/image.png)

5\) Then click on “Azure Resources” at the left.<br>

![Azure resources](https://content.gitbook.com/content/n6zS4HeuuVpRHZEvDiFU/blobs/y1ZbJhbkc83SvoCA6EK0/Azure%20Resources.jpg)

6\)   Copy the example query, located at the bottom of the screen.

![](https://content.gitbook.com/content/n6zS4HeuuVpRHZEvDiFU/blobs/U8OyMhZcN5S4pOyu5r8i/example%20query.jpg)

7\)   Then, click on authoring resource and copy the primary key.

![](https://content.gitbook.com/content/n6zS4HeuuVpRHZEvDiFU/blobs/uWgkmqLFpqsG9EWtUHJF/MicrosoftTeams-image%20\(12\).png)

8\)   Paste the Example Query on the URL prediction field and the primary key on the authoring key field.

![](https://content.gitbook.com/content/n6zS4HeuuVpRHZEvDiFU/blobs/ewRDKlOHABzlCHgukHUJ/4%20luis.png)

#### Using system entities in Luis

Syntphony Conversational AI supports LUIS version 2. When using the datetimeV2 system entity in LUIS, you can use subcategories such as:

* date
* time
* datetime
* daterange
* timerange
* datetimerange

Those subcategories should be added after a dot (.).

So, if you are using the date subcategory, the entity name should be builtin.datetimeV2.date

Where builtin.datetimeV2 is the system entity name and date is the subcategory.

For further information, check <https://docs.microsoft.com/en-us/azure/cognitive-services/luis/luis-reference-prebuilt-datetimev2?tabs=1-3%2C2-1%2C3-1%2C4-1%2C5-1%2C6-1#subtypes-of-datetimev2>
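To see how the dotted entity names show up in practice, here is a sketch that filters entities from a LUIS v2 prediction payload; the response structure below is a simplified illustration, not a full LUIS response.

```python
def datetime_entities(luis_response: dict, subcategory: str = "date") -> list:
    """Collect entity texts whose type matches builtin.datetimeV2.<subcategory>."""
    wanted = f"builtin.datetimeV2.{subcategory}"
    return [e["entity"] for e in luis_response.get("entities", [])
            if e.get("type") == wanted]

# Simplified, illustrative LUIS v2 response shape:
sample = {
    "query": "book a room for tomorrow",
    "entities": [{"entity": "tomorrow", "type": "builtin.datetimeV2.date"}],
}
print(datetime_entities(sample))  # ['tomorrow']
```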

### Amazon Lex

You will be asked to provide some information on your request, as listed below:

* [AWS User and Password ](#aws-user-and-password)
* [Name ](#name)
* [Alias ](#alias)
* [Region ](#region)
* [Version](#version)

#### 1. Create a new user

#### AWS User and Password

* Log in and access IAM in the AWS menu
* Create a new user by clicking on Users on the Access Management menu&#x20;
* Fill in the required information (tip: try naming it with something obvious, such as syntphony-user).&#x20;
* Then, click on "Next: Permissions", select "Attach existing policies directly", and enable the "AmazonLexFullAccess" policy.
* Finish creating the user and download the CSV file.

This file contains all the data required to integrate Syntphony Conversational AI with your AWS account.

#### 2. Go back to Amazon Lex page

Access the Services menu to go back to the Amazon Lex page

<figure><img src="https://content.gitbook.com/content/n6zS4HeuuVpRHZEvDiFU/blobs/U7AQeiKQ3Jj5OoLjeJiZ/image.png" alt=""><figcaption></figcaption></figure>

#### Name

Then, proceed to the side menu to access the virtual agent you want to integrate. Click on the name to open this "Bot details" card. Copy the ID.

<figure><img src="https://content.gitbook.com/content/n6zS4HeuuVpRHZEvDiFU/blobs/UE6WSZ1rvIex0mT1nXjE/image.png" alt=""><figcaption></figcaption></figure>

#### **Alias**

On the same side menu, choose "Implementation" and then "Aliases". Select the alias you want to use, then find the value in the field "ID" within "Details".

<figure><img src="https://content.gitbook.com/content/n6zS4HeuuVpRHZEvDiFU/blobs/V3rzYLb1jYW2F5tQxiCI/image.png" alt=""><figcaption></figcaption></figure>

#### **Region**

There are two ways of finding out the region.

One is clicking on the top bar and checking the selected region (as seen below).

<figure><img src="https://content.gitbook.com/content/n6zS4HeuuVpRHZEvDiFU/blobs/V2tss4Z8SezGZw37SloS/image.png" alt=""><figcaption></figcaption></figure>

The other way is through the URL, for example: `https://us-east-1.console.aws.amazon.com/lexv2/home?region=us-east-1#bot/YZ24GFVCSX`

Note that it shows the region **us-east-1**.&#x20;

#### **Version**

Now select "Draft Version" and find the field "Version".

<figure><img src="https://content.gitbook.com/content/n6zS4HeuuVpRHZEvDiFU/blobs/c6YVUBiLixmCgZn4v1lm/image.png" alt=""><figcaption></figcaption></figure>

This is the information required to integrate Amazon Lex.
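The collected values map onto a Lex V2 runtime call roughly as in this sketch. The bot ID is taken from the example URL above; the alias ID is hypothetical, and the commented lines assume the boto3 SDK, the AWS credentials from the CSV file, and network access.

```python
# Parameters for a Lex V2 RecognizeText call, filled from the steps above.
params = {
    "botId": "YZ24GFVCSX",       # "Name" step: the bot ID
    "botAliasId": "TSTALIASID",  # "Alias" step: the alias ID (hypothetical)
    "localeId": "en_US",
    "sessionId": "test-session",
    "text": "hello",
}

# Actually calling the bot requires the boto3 SDK and valid credentials:
# import boto3
# client = boto3.client("lexv2-runtime", region_name="us-east-1")  # "Region" step
# response = client.recognize_text(**params)
```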

## LLM

### OpenAI

{% hint style="success" %}
Learn more about the [Zero-Shot learning model ](https://docs.conversational-ai.syntphony.com/user-guide/zero-shot-llm)
{% endhint %}

You'll be asked to provide the following information:

* [Endpoint](#endpoint)
* [API Key](#api-key)
* [Deployment Name](#deployment-name)
* [Tokens Limit](#tokens-limit)

#### Endpoint

The OpenAI endpoint is always the same: [https://api.openai.com/v1](https://api.openai.com/v1).

-> Read the [OpenAI documentation](https://platform.openai.com/docs/models/model-endpoint-compatibility) to learn more about endpoints.

#### API Key

**1)**   Access <https://platform.openai.com/docs/overview> and click on the lock icon, corresponding to the API Keys.

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FxFb8RzRQavEQlz65dPQB%2Fimage.png?alt=media&#x26;token=6f3f9204-92bf-4020-b0e7-22c1eb18f003" alt=""><figcaption></figcaption></figure>

**2)**   When you're on the API Keys page, click on `Create New Secret Key`:

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2F8c6XRRB3eisAMcSaGNzm%2Fimage.png?alt=media&#x26;token=6b9fe17d-a8b4-407b-9516-45ad0e13872a" alt=""><figcaption></figcaption></figure>

**3)**   Enter a name that represents the key and click on `Create Secret Key`.

<div align="left"><figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FvU1jN9elnOlvsSKuqhcy%2Fimage.png?alt=media&#x26;token=ee82d0db-bd4a-4fab-aa85-67b96def5e7d" alt="" width="439"><figcaption></figcaption></figure></div>

**4)**   After the key is created, before you click `Done`, remember to save it somewhere right away. \
⚠️ **It is only possible to view the key at the time of creation**

<div align="left"><figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FlSOSC0DvyS1bww5zCuDG%2Fimage.png?alt=media&#x26;token=fcdefa60-a1db-4310-a6f3-659d9a884c03" alt="" width="435"><figcaption></figcaption></figure></div>
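Once you have saved the key, you can verify it works outside Syntphony CAI with a minimal standard-library sketch like the one below. The key and the model name are placeholders/assumptions; the request is only built here, and actually sending it (commented out) requires a valid key and network access.

```python
import json
import urllib.request

API_KEY = "sk-..."  # placeholder: the secret key you saved at creation time

payload = {
    "model": "gpt-4o-mini",  # assumed model name; pick one your key can access
    "messages": [{"role": "user", "content": "Hello!"}],
}
request = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# Sending it requires a valid key and network access:
# with urllib.request.urlopen(request) as resp:
#     print(json.load(resp))
```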

#### Deployment Name

After filling out the Endpoint and API Key fields, the system will load the available model options.

Refer to the OpenAI documentation to learn about the models: &#x20;

{% embed url="https://platform.openai.com/docs/models" %}

#### Tokens Limit

A token is roughly 3-5 characters long, though its exact length varies. The token count usually includes both the system prompt and the user input.

The outcome may depend on the availability of the generative service chosen and the token limit defined. If you're using Azure OpenAI by Syntphony CAI, the limit is set at 4000 tokens.

The model's output is strongly affected by the token limit. You can set this limit when creating the virtual agent or on the [Parameters](https://docs.conversational-ai.syntphony.com/user-guide/configurations/parameters) page.
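To budget a token limit before configuring it, a rough character-based estimate is often enough; the sketch below assumes about 4 characters per token, which is only an approximation (real tokenizers vary by model and language).

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Very rough token estimate; real tokenizers differ per model."""
    return max(1, round(len(text) / chars_per_token))

# The combined count covers both the system prompt and the user input.
system_prompt = "You are a helpful assistant."
user_input = "What are your opening hours?"
total = estimate_tokens(system_prompt) + estimate_tokens(user_input)
print(total)
```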

### Azure OpenAI

{% hint style="success" %}
Learn more about the [Zero-Shot learning model ](https://docs.conversational-ai.syntphony.com/user-guide/zero-shot-llm)
{% endhint %}

You will be asked to provide the following information:

* [Endpoint](#api-key-and-endpoint)
* [API Key](#api-key-and-endpoint)
* [Deployment Name](#deployment-names)
* [Tokens Limit](#tokens-limit)

#### API Key and Endpoint

**1)**   Once you're on the Azure portal, select the OpenAI instance (the one marked with the OpenAI symbol); in this case, it's **`eva-dev-openai-keys`**.

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2Fi8iKh5s5n5p6YYbJMQ5J%2Fimage.png?alt=media&#x26;token=f774997e-5df0-430f-ae34-93bc4de31555" alt=""><figcaption></figcaption></figure>

**2)**   It'll direct you to your main OpenAI instance page. On the side menu, select the option `Keys and Endpoint`.

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FPdxxHyL26VkaNy4MOdJF%2Fimage.png?alt=media&#x26;token=b7ff70d7-cb27-45a2-a459-c7b1059e44f9" alt=""><figcaption></figcaption></figure>

**3)** On this page, you will be able to view the Keys and Endpoint that will be used on the Syntphony Conversational AI cockpit screen.

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FZYD04rn5qXgjby4BUWYb%2Fimage.png?alt=media&#x26;token=8f039f51-ab1d-4759-ba9f-8a426692889b" alt=""><figcaption></figcaption></figure>
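As a sketch of how these values are used, Azure OpenAI requests go to a deployment-specific URL on your endpoint and authenticate with an `api-key` header (rather than a `Bearer` token). The endpoint, deployment name, and `api-version` below are illustrative assumptions; check the Azure OpenAI reference for current API versions.

```python
def azure_chat_url(endpoint: str, deployment: str, api_version: str) -> str:
    """Build the Azure OpenAI chat-completions URL for a given deployment."""
    return (f"{endpoint.rstrip('/')}/openai/deployments/"
            f"{deployment}/chat/completions?api-version={api_version}")

# Placeholder resource, deployment, and API version:
url = azure_chat_url("https://my-resource.openai.azure.com",
                     "my-deployment", "2024-02-01")
print(url)
# Requests to this URL carry the key in an "api-key" header.
```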

#### Deployment Name

After filling out the Endpoint and API Key fields, the system will load the available model options.

Refer to the Azure OpenAI documentation to learn about the models: &#x20;

{% embed url="https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models" %}
