# Call properties

## Building a Voice Agent with Voice Gateway

In this guide, you'll learn how to build a virtual agent for voice channels using the two main answer templates (audio and text) and the technical text field.

## Creating Voice channel <a href="#creating-voice-channel" id="creating-voice-channel"></a>

First, add a voice channel. You can do this at two different moments: when creating a virtual agent, or later, by adding it to an existing agent. In the latter case, access the side menu option "Channels" and then click the "Create channel" tab.

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FuKVnc4Ydb0oJ010x9TCL%2FCaptura%20de%20pantalla%202025-11-13%20103223.png?alt=media&#x26;token=12891570-4357-42b9-a99a-1104903c8da4" alt=""><figcaption></figcaption></figure>

#### How to configure a voice channel <a href="#how-to-configure-a-voice-channel" id="how-to-configure-a-voice-channel"></a>

Once you're in the Channel's Library, choose the Phone category to open a modal to configure the evg channel.

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2F48IGJYFwnECiV0TwXguT%2FCaptura%20de%20pantalla%202025-11-13%20103438.png?alt=media&#x26;token=83ff8a50-98e8-4307-99e3-2e6f0a27d86f" alt=""><figcaption></figcaption></figure>

{% hint style="success" %}
Before continuing, make sure you have read this [step-by-step guide](https://docs.eva.bot/user-guide/using-eva/develop-your-bot/build-your-first-bot) until the [Welcome Flow](https://docs.eva.bot/user-guide/using-eva/develop-your-bot/build-your-first-bot#welcome-flow) item.
{% endhint %}

## How to configure a DNIS <a href="#how-to-configurate-a-dnis" id="how-to-configurate-a-dnis"></a>

The following JSON contains all the data and configurable properties you must provide to eva.

This JSON allows you to insert the default DNIS configurations, including setting up a Conversation Property (voice providers).

{% hint style="info" %}
These properties can be modified individually within the flows by utilizing the "technical text" field of the answer cells, as demonstrated ahead in this documentation.
{% endhint %}

Please refer to each property table to understand the configurable fields used in the JSON and their reference values: [**TTS** ](https://app.gitbook.com/o/-MNw8KuH71OCN_5QL614/s/p0SUdPEICXSM7gLqSIa9/voice-gateway/building-a-voice-agent-with-voice-gateway#tts-configurations)(text-to-speech) properties, such as **BargeIn** and **Flush**, used in [audio ](https://app.gitbook.com/o/-MNw8KuH71OCN_5QL614/s/p0SUdPEICXSM7gLqSIa9/voice-gateway/building-a-voice-agent-with-voice-gateway#audio-template)and [text ](https://app.gitbook.com/o/-MNw8KuH71OCN_5QL614/s/p0SUdPEICXSM7gLqSIa9/voice-gateway/building-a-voice-agent-with-voice-gateway#text-template)answer templates, [**Play Silence**](https://app.gitbook.com/o/-MNw8KuH71OCN_5QL614/s/p0SUdPEICXSM7gLqSIa9/voice-gateway/building-a-voice-agent-with-voice-gateway#play-silence), [**DTMF menu**](https://app.gitbook.com/o/-MNw8KuH71OCN_5QL614/s/p0SUdPEICXSM7gLqSIa9/voice-gateway/building-a-voice-agent-with-voice-gateway#dtmf-menu), [**Voice menu**](https://app.gitbook.com/o/-MNw8KuH71OCN_5QL614/s/p0SUdPEICXSM7gLqSIa9/voice-gateway/building-a-voice-agent-with-voice-gateway#voice-menu), [**Transfer**](https://app.gitbook.com/o/-MNw8KuH71OCN_5QL614/s/p0SUdPEICXSM7gLqSIa9/voice-gateway/building-a-voice-agent-with-voice-gateway#transfer-to-human), [**Fetch**](https://app.gitbook.com/o/-MNw8KuH71OCN_5QL614/s/p0SUdPEICXSM7gLqSIa9/voice-gateway/building-a-voice-agent-with-voice-gateway#fetch), [**Default Error Behaviour**](https://app.gitbook.com/o/-MNw8KuH71OCN_5QL614/s/p0SUdPEICXSM7gLqSIa9/voice-gateway/building-a-voice-agent-with-voice-gateway#default-error-behavior), [**Regional Expressions**](https://app.gitbook.com/o/-MNw8KuH71OCN_5QL614/s/p0SUdPEICXSM7gLqSIa9/voice-gateway/building-a-voice-agent-with-voice-gateway#synonymous-for-regional-expressions), etc.

<details>

<summary>JSON for DNIS configuration</summary>

```json
{
   "dnis":"913",
   "properties":{
      "tts":{
         "bargeIn":false,
         "flush":false,
         "bargeInOffset":200,
         "mask":"\u003cspeak xmlns\u003d\u0027http://www.w3.org/2001/10/synthesis\u0027 xmlns:mstts\u003d\u0027http://www.w3.org/2001/mstts\u0027 xmlns:emo\u003d\u0027http://www.w3.org/2009/10/emotionml\u0027 version\u003d\u00271.0\u0027 xml:lang\u003d\u0027en-US\u0027\u003e\u003cvoice name\u003d\u0027pt-BR-FranciscaNeural\u0027\u003e\u003cprosody rate\u003d\u0027-15%\u0027 pitch\u003d\u00270%\u0027\u003e $TEXT \u003c/prosody\u003e\u003c/voice\u003e\u003c/speak\u003e",
         "voiceProvider":"MICROSOFT",
         "microsoftTtsConfig":{
            "region":"brazilsouth",
            "subscriptionKey":"***",
            "language":"pt-BR"
         }
      },
      "audio":{
         "bargeIn":false,
         "flush":false,
         "bargeInOffset":200
      },
      "playSilence":{
         "time":50,
         "bargeIn":false,
         "flush":false
      },
      "dtmfMenu":{
         "numOfDigits":1,
         "timeout":20000,
         "interDigitTimeout":3000,
         "termTimeout":500,
         "termChar":"#"
      },
      "voiceMenu":{
         "sensitivity":0.01,
         "maxSpeechTimeout":30000,
         "timeout":20000,
         "incompleteTimeout":20000,
         "voiceProvider":"MICROSOFT",
         "microsoftAsrConfig":{
            "region":"brazilsouth",
            "subscriptionKey":"***",
            "language":"pt-BR"
         }
      },
      "transfer":{
         "uui":"evatest",
         "dest":"1234@172.16.0.7"
      },
      "fetch":{
         "fetchTimeout":45000,
         "fetchAudio":"",
         "fetchAudioDelay":0,
         "fetchAudioMinimum":0,
         "fetchAudioInterval":0
      },
      "defaultErrorBehaviour":{
         "audio":"",
         "tts":"ssml",
         "transfer":false
      },
      "firstConversationRequest":{
         "text":"",
         "code":"%EVA_WELCOME_MSG",
         "entities":{
            
         },
         "context":{
            
         }
      },
      "conversationProperties":{
         "headers":{
            "API-KEY":"***",
            "OS":"evg",
            "LOCALE":"pt-BR"
         },
         "conversationUrl":"https://api-dev-instance1.eva.bot/eva-broker/org/2fbe99b2-ea98-484f-b392-f649f1844e03/env/f5317429-55bb-4418-a7ca-00f6992388b2/bot/80d9ab14-5374-402a-9a93-6f1dc77f7675/channel/47a77735-d652-4c6c-a283-4d18028a3b18/v1/conversations"
      },
      "conversationAuthProperties":{
         "keycloakUrl":"https://keycloak-dev-admin.eva.bot/auth/realms/everis/protocol/openid-connect/token",
         "secret":"***",
         "clientId":"***"
      },
      "regionalExpressionsFileUrl":"https://***/regional-expressions.json",
      "welcomeTimeout":5000,
      "conversationTimeout":30000
   }
}
```

</details>

## TTS configurations <a href="#tts-configurations" id="tts-configurations"></a>

To build a voice agent in eva, a few concepts differ from a "text first" agent. The flow-building logic is the same; the difference is the consistent use of the technical text field with JSON. We'll call them [properties](https://app.gitbook.com/o/-MNw8KuH71OCN_5QL614/s/p0SUdPEICXSM7gLqSIa9/voice-gateway/building-a-voice-agent-with-voice-gateway#properties); each property carries a command that tells the agent what to do.

Before jumping into them, let's see what an answer cell for voice agents looks like in eva.

*Don't worry if you don't understand some of the terms in the following example, we'll get to all the concepts ahead in this chapter.* 😉

Now, imagine you have an audio file with a greeting and a menu, and you want the user to choose a number option from the menu:

1\. Click the + icon to add a cell, in this case, a Welcome flow.

2\. Select the channel and choose the [audio template](https://app.gitbook.com/o/-MNw8KuH71OCN_5QL614/s/p0SUdPEICXSM7gLqSIa9/voice-gateway/building-a-voice-agent-with-voice-gateway#audio-template)​

3\. Add the audio URL (WAV or FLAC formats)&#x20;

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FUJmETDB4NvM70OsR2Qpa%2FCaptura%20de%20pantalla%202025-11-13%20105530.png?alt=media&#x26;token=e31f5e10-ca80-4c71-8eee-78fe326aba0e" alt=""><figcaption><p>Choose audio template</p></figcaption></figure>

4\. Use the "Add option" to create [buttons](https://app.gitbook.com/o/-MNw8KuH71OCN_5QL614/s/p0SUdPEICXSM7gLqSIa9/voice-gateway/building-a-voice-agent-with-voice-gateway#buttons) that will be used to identify the menu options

5\. After that, attach a JSON to the technical text field with the [DTMF menu](https://app.gitbook.com/o/-MNw8KuH71OCN_5QL614/s/p0SUdPEICXSM7gLqSIa9/voice-gateway/building-a-voice-agent-with-voice-gateway#menu) property, as follows:

```json
{
   "dtmfMenu":{}
}
```

6\. Finally, click Save.

If you don't have an audio file, just choose the [text template](https://app.gitbook.com/o/-MNw8KuH71OCN_5QL614/s/p0SUdPEICXSM7gLqSIa9/voice-gateway/building-a-voice-agent-with-voice-gateway#text-template) to use the text-to-speech function and proceed to step 4.

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FjsiU5HxuGF44YBUbHxMF%2Fimage.png?alt=media&#x26;token=8d76cf68-ce03-4f0f-bbfe-d11663d0df36" alt="" width="306"><figcaption><p>Text field with a SSML</p></figcaption></figure>

#### Audio template <a href="#audio-template" id="audio-template"></a>

The following example is an answer using the audio template. The supported formats are WAV and FLAC.

There are a few properties you can attach to the technical text field to enrich the experience, like allowing the user to interrupt the audio playback at any time.

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FpcBxakOX03UYi7Vd6AMb%2FCaptura%20de%20pantalla%202025-11-13%20110139.png?alt=media&#x26;token=5a6bfe1a-82fa-454a-8e17-0e5726211a9f" alt=""><figcaption></figcaption></figure>

JSON used in the example:

```json
{
   "configuration": {
      "bargeIn":false,
      "flush":false
   }
}
```

Other audio commands that overwrite the default settings:&#x20;

<table><thead><tr><th width="133.88887532552081">Name</th><th width="101.6666259765625">Type</th><th>Description</th></tr></thead><tbody><tr><td>bargeIn</td><td>boolean</td><td>Allows users to interrupt an audio using a DTMF keypad input. For example, in a menu audio, the user doesn't have to wait for all the options before choosing.</td></tr><tr><td>bargeInOffset</td><td>Long</td><td>Allows users to interact with the IVR from a specific point in the audio. For example, a value of 300 ms means the user can interact with the IVR starting 300 milliseconds before the audio stops playing.</td></tr><tr><td>flush</td><td>boolean</td><td>Whether the audio should be flushed or just queued. <a href="#flush">Learn more</a></td></tr></tbody></table>
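For instance, a sketch of a configuration that lets the user barge in near the end of the audio, assuming bargeInOffset is accepted alongside bargeIn in the configuration object as the table above suggests (the 300 ms offset is illustrative):

```json
{
   "configuration":{
      "bargeIn":true,
      "bargeInOffset":300,
      "flush":false
   }
}
```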

#### Flush

When the flush property is set to "true", the IVR will wait for the audio to be fully played before continuing the flow. It applies to audios, TTS, and play silence.&#x20;
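For example, to make sure an audio finishes playing before the flow continues, you could attach a JSON like this (a minimal sketch):

```json
{
   "configuration":{
      "bargeIn":false,
      "flush":true
   }
}
```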

{% hint style="info" %}
It's not mandatory to use all these JSON configurations when using the answer templates. When they are not attached, the system will use the default configurations.
{% endhint %}

Prefer the audio template to play audio files. When an audio is provided, the text-to-speech (TTS) property will be ignored.&#x20;

### Text template <a href="#text-template" id="text-template"></a>

Text-to-speech technology receives text as input and produces speech as output. To produce the audible speech for the IVR, create an answer using the text template. You can fill it with either regular text or an SSML.

When you insert regular text, the IVR will play it with the default configurations. If you want to change the default rate, pitch, or even the voice, use an SSML with the new configuration, as seen below.

<div><figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FPzf4wMtW9TvgX28KtleX%2Fimage.png?alt=media&#x26;token=e6f96ea1-ae2b-4fee-a434-67109a1da701" alt=""><figcaption><p>Regular text</p></figcaption></figure> <figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FggfNidO0Aw26JVoCNr9N%2Fimg%20ssml.png?alt=media&#x26;token=1b7305e4-ed62-4efe-9aca-e18f832a949a" alt=""><figcaption><p>SSML</p></figcaption></figure></div>

{% hint style="info" %}
The text field has a 2,000-character limit.
{% endhint %}

You can also overwrite the default configurations using the following JSON in the technical text field.


<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2Fnqs363hlYBohT4fK0Oym%2FCaptura%20de%20pantalla%202025-11-13%20111128.png?alt=media&#x26;token=abbccb5a-5478-4255-8f20-de8820886bff" alt=""><figcaption></figcaption></figure>

JSON example:

```json
{
   "configuration":{
      "bargeIn":false,
      "flush":false,
      "voiceProvider":"MICROSOFT",
      "mask":"<speak xmlns='http://www.w3.org/2001/10/synthesis' xmlns:mstts='http://www.w3.org/2001/mstts' xmlns:emo='http://www.w3.org/2009/10/emotionml' version='1.0' xml:lang='en-US'><voice name='pt-BR-FranciscaNeural'><prosody rate='6%' pitch='3%'>$TEXT</prosody></voice></speak>",
      "microsoftTtsConfig":{
         "region":"brazilsouth",
         "subscriptionKey":"ba471adb1da790bd4e222a9d4041ed90",
         "language":"pt-br"
      }
   }
}
```

In the example above, we used a mask with the $TEXT variable, which is replaced by the content you wrote in the text template, so you don't have to repeat it in the XML. If the content of the answer is an XML starting with "\<speak", the default XML won't be used.

Other TTS commands that overwrite the default settings:

<table data-header-hidden><thead><tr><th width="171.66668701171875">Name</th><th width="150.22222900390625">Type</th><th>Description</th></tr></thead><tbody><tr><td>bargeIn</td><td>boolean</td><td>Allows users to interrupt an audio using a DTMF keypad input</td></tr><tr><td>bargeInOffset</td><td>Long</td><td>Allows users to interact with the IVR from a specific point in the audio. For example, a value of 300 ms means the user can interact with the IVR starting 300 milliseconds before the audio stops playing.</td></tr><tr><td>flush</td><td>boolean</td><td>Whether the audio should be flushed or just queued. <a href="#flush">Learn more</a></td></tr><tr><td>voiceProvider</td><td>String</td><td>TTS Provider Name. So far, only the MICROSOFT value is supported.</td></tr><tr><td>microsoftTtsConfig</td><td>JSON Object</td><td>Credentials to access Microsoft</td></tr></tbody></table>

## Properties

Now that we know the basics of what an answer cell for IVR looks like in eva using the audio and text templates, let's move on to the technical text field.&#x20;

To use the eva-evg channel, or to implement a connector that will be integrated with an IVR, some configurations need to be provided. **They are the properties**, i.e. a regular JSON attached to the technical text field.

{% hint style="info" %}
**In case no properties are attached to the technical text field, the system will use the default properties.**
{% endhint %}

Let's break down the properties and learn how to use them to create commands.

### Menu

Mostly used when you need an input from the user. You can use all templates available: audio, text and custom.

There are three types of menu:

* [**DTMF**](#dtmf-menu): allows the user to interact with the IVR by the telephone keypad
* [**VOICE**:](#voice-menu) allows the user to interact with the IVR by speech
* [**DTMF VOICE**:](#dtmf-voice-menu) allows the user to interact with the IVR by both telephone keypad and speech

Let's break down each type.

#### DTMF menu

As mentioned, the DTMF menu allows the user to interact with the IVR through the telephone keypad. Use the following command in the technical field:

JSON used in the example:

```json
{
   "dtmfMenu":{}
}
```

It's possible to overwrite some configurations of the DTMF menu:

<table data-header-hidden><thead><tr><th width="160.33331298828125">Name</th><th width="83.66668701171875">Type</th><th width="399.11114501953125">Description</th><th>Default</th></tr></thead><tbody><tr><td>numOfDigits</td><td>int</td><td>Number of digits to be captured</td><td>1</td></tr><tr><td>timeout</td><td>int</td><td>Pause timeout in milliseconds for the user to send an input (DTMF or speech).</td><td>5500 ms</td></tr><tr><td>interDigitTimeout</td><td>int</td><td>Inter-digit timeout in milliseconds for the user to enter a DTMF input</td><td>3000 ms</td></tr><tr><td>termTimeout</td><td>int</td><td>Timeout in milliseconds since the user's last input (DTMF or speech) before the input capture is terminated</td><td>300 ms</td></tr><tr><td>termChar</td><td>String</td><td>Users can indicate that the DTMF input has finished by sending a special character. <em>If the user types only the # (hash) character without entering any digits, this is the value sent to eva; but if other digits are sent along with it, the # won't be sent.</em></td><td>#</td></tr></tbody></table>

{% hint style="info" %}
**Timeouts**: Refer to the pauses between words or phrases when speaking or when entering DTMF inputs. You can control the length of these pauses so the engine can detect when a user is done speaking or entering the DTMF input.
{% endhint %}

To overwrite the default settings, enter the following JSON in the technical text.

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FZuhVbpvgZ9nyzdxkm4OB%2Fimage.png?alt=media&#x26;token=5a4a3aac-6303-4a90-9fa3-bd9c07effb4c" alt="" width="563"><figcaption></figcaption></figure>

JSON used in the example:

```json
{
   "dtmfMenu":{
      "numOfDigits":1,
      "timeout":5000,
      "interDigitTimeout":1000,
      "termTimeout":500,
      "termChar":"#"
   }
}
```

It is possible to combine multiple configurations of different items to achieve proper customization of the menu, as in the example below (for DTMF menu and audio):

```json
{
   "dtmfMenu":{
      "numOfDigits":1,
      "timeout":5000,
      "interDigitTimeout":1000,
      "termTimeout":500,
      "termChar":"#"
   },
   "configuration":{
      "bargeIn":false,
      "flush":false
   }
}
```

Usually, a DTMF menu is used with [buttons](#buttons) to map the input that will be sent to eva during the conversation. For example, when the user presses "1" on the phone keypad, eva receives the associated value; in the example below, the value sent to eva is "Schedule".&#x20;

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FWdldKEGI33eONMllZ1AK%2Fimage.png?alt=media&#x26;token=1eeb975c-99ab-49ea-b02d-c09a00b892ac" alt="" width="335"><figcaption></figcaption></figure>


#### VOICE menu

As mentioned, the Voice menu allows the user to interact with the IVR by speech.&#x20;

If you want to use the voice menu property without overwriting any other configuration, just attach the following JSON in the technical text:

```json
{
   "voiceMenu":{}
}
```

In case you want to overwrite some default configurations, use the following commands in the technical text.

<table data-header-hidden><thead><tr><th width="173.66668701171875">Name</th><th width="123.44439697265625">Type</th><th width="343.2222900390625">Description</th><th>Default</th></tr></thead><tbody><tr><td>voiceProvider</td><td>String</td><td>ASR Provider Name: MICROSOFT</td><td>-</td></tr><tr><td>sensitivity</td><td>double</td><td>Noise reduction sensitivity. Lower values will lower the audio silence threshold and more noise will be recorded. Higher values will raise the audio silence threshold and louder audio will be needed to trigger the record. Valid values go from 1 to 100.</td><td>20</td></tr><tr><td>timeout</td><td>int</td><td>Pause timeout in milliseconds for the user to send an input (DTMF or speech)</td><td>5500 ms</td></tr><tr><td>maxSpeechTimeout</td><td>int</td><td>The maximum duration of user speech. If this time elapses before the user stops speaking, the event "nomatch" is activated.</td><td>15000 ms</td></tr><tr><td>incompleteTimeout</td><td>int</td><td>Timeout in milliseconds of silence after which an incomplete user utterance is finalized</td><td>300 ms</td></tr><tr><td>microsoftAsrConfig</td><td>JSON Object</td><td>Credentials to access Microsoft</td><td>-</td></tr></tbody></table>

JSON example:

```json
{
   "voiceMenu":{
      "voiceProvider":"MICROSOFT",
      "sensitivity":0.01,
      "timeout":20000,
      "maxSpeechTimeout":30000,
      "incompleteTimeout":20000,
      "microsoftAsrConfig":{
         "region":"brazilsouth",
         "subscriptionKey":"efvouqheg91b34fw094rtybyqyiwsdfqf",
         "language":"pt-br"
      }
   }
}
```

It's possible to combine multiple configurations of different items to achieve a proper menu customization, as in the example below:

```json
{
   "voiceMenu":{
      "voiceProvider":"MICROSOFT",
      "sensitivity":0.01,
      "timeout":20000,
      "maxSpeechTimeout":30000,
      "incompleteTimeout":20000,
      "microsoftAsrConfig":{
         "region":"brazilsouth",
         "subscriptionKey":"efvouqheg91b34fw094rtybyqyiwsdfqf",
         "language":"pt-br"
      }
   },
   "configuration":{
      "bargeIn":false,
      "flush":false
   }
}
```

#### **DTMF VOICE menu**

As mentioned, the DTMF VOICE menu allows the user to interact with the IVR by both telephone keypad and speech.

If you want to use the DTMF VOICE property without overwriting any other configuration, just use the following JSON in the technical text:

```json
{
   "dtmfVoiceMenu":{}
}
```

Settings for the DTMF VOICE menu will be the same as those used for DTMF and VOICE.

JSON example:

```json
{
   "dtmfVoiceMenu":{
      "numOfDigits":1,
      "interDigitTimeout":1000,
      "termTimeout":500,
      "termChar":"#",
      "voiceProvider":"MICROSOFT",
      "sensitivity":0.01,
      "timeout":20000,
      "maxSpeechTimeout":30000,
      "incompleteTimeout":20000,
      "microsoftAsrConfig":{
         "region":"brazilsouth",
         "subscriptionKey":"efvouqheg91b34fw094rtybyqyiwsdfqf",
         "language":"pt-br"
      }
   }
}
```

It's possible to combine multiple configurations of different items to achieve a proper menu customization, as in the example below:

```json
{
   "dtmfVoiceMenu":{
      "numOfDigits":1,
      "interDigitTimeout":1000,
      "termTimeout":500,
      "termChar":"#",
      "voiceProvider":"MICROSOFT",
      "sensitivity":0.01,
      "timeout":20000,
      "maxSpeechTimeout":30000,
      "incompleteTimeout":20000,
      "microsoftAsrConfig":{
         "region":"brazilsouth",
         "subscriptionKey":"efvouqheg91b34fw094rtybyqyiwsdfqf",
         "language":"pt-br"
      }
   },
   "configuration":{
      "bargeIn":false,
      "flush":false
   }
}
```

When used with buttons, we can map the input that will be sent to eva during the conversation. For example, when the user presses "1" on the phone keypad, eva receives the associated value, a word or a phrase like "I want to buy".&#x20;

### Buttons

Let's learn how to use buttons in the context of eva-EVG. All three answer templates for voice channels allow you to add buttons. Click "Add option" to expand the two fields for buttons: Option and Value.&#x20;

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FIeGVpes2lq2qcEfeoQan%2FCaptura%20de%20pantalla%202025-11-13%20111949.png?alt=media&#x26;token=c5db2fcc-a240-464e-bfc6-6444e446f32f" alt="" width="310"><figcaption></figcaption></figure>

The value saved in the context works as a map, helping eva identify where the user should be led.&#x20;

When combined with a DTMF or DTMF VOICE menu, it's possible to associate the "Option" field with the digit and send the value to eva. For example:

**Option**: "1" \
**Value**: "Buy clothes"

When the user presses "1", the value actually sent to eva is "Buy clothes", leading the user to the appropriate flow.

Users may also consider an alternative approach by spelling out the number instead. So there are three input possibilities:

* "1" (phone button)
* "Buy clothes" (spoken)
* "One" (spoken)

To cover this third option, represented by "One" in this example, you can add a Cardinal [System entity](https://docs.conversational-ai.syntphony.com/user-guide/build-dialogs/dialog-cells/entity#b-system-entities) (eva NLP pre-built entity for numbers) followed by a [Rule cell](https://docs.conversational-ai.syntphony.com/user-guide/build-dialogs/dialog-cells/rule), as seen below.&#x20;

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FPCOKfTRqzvdqn4c1MTyX%2FCaptura%20de%20pantalla%202025-11-13%20112400.png?alt=media&#x26;token=555413cf-2366-4989-b928-18d26fc8fcc0" alt=""><figcaption></figcaption></figure>

On the Rule cell you can create a condition to segment the flow and, subsequently, add a [Jump cell](https://docs.conversational-ai.syntphony.com/user-guide/build-dialogs/dialog-cells/jump) to said flow. **Use this field to handle possible input options and help the STT recognize any variations of the spoken number.**

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FLFTbIbfHqrOJ6JUYBWqF%2FCaptura%20de%20pantalla%202025-11-13%20112156.png?alt=media&#x26;token=cd46e4f7-998c-4282-8c8b-9d68a920a182" alt=""><figcaption></figcaption></figure>

### Play Silence

To provide greater fluidity and more natural speech when you have answers/audios in sequence, we recommend using the **play silence** property. It inserts a pause so the audios are not played immediately one after another.

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FNIK9ZV9KmY3W38SvOTQg%2Fimage.png?alt=media&#x26;token=3d4d6ef3-6324-443e-93c9-ec77482a3f5e" alt="" width="563"><figcaption></figcaption></figure>

The play silence should be included in the answer that comes first, in the example, "Buy".

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FtFIF9EDMJyx5DcJOm0hw%2FCaptura%20de%20pantalla%202025-11-13%20113131.png?alt=media&#x26;token=69d0d31f-e11d-4b7e-9fff-8b5a47d24731" alt="" width="437"><figcaption></figcaption></figure>

JSON example:

```json
{
   "playSilence":{}
}
```

{% hint style="info" %}
**Important:** When the answer has a menu setting, the play silence will not be executed.
{% endhint %}

It's possible to overwrite some configurations with:

<table data-header-hidden><thead><tr><th width="95.44439697265625">Name</th><th width="98.77777099609375">Type</th><th width="442.22222900390625">Description</th><th>Default</th></tr></thead><tbody><tr><td>time</td><td>int</td><td>Silence duration in milliseconds. Maximum value accepted is 45,000 ms.</td><td>0</td></tr><tr><td>bargeIn</td><td>boolean</td><td>Allows users to interrupt an audio using a DTMF input</td><td>False</td></tr><tr><td>flush</td><td>boolean</td><td>Whether the audio should be flushed or just queued. <a href="#flush">Learn more</a></td><td>False</td></tr></tbody></table>

To overwrite the default settings, enter the following JSON in the technical text.

JSON example:

```json
{
   "playSilence":{
      "time":50,
      "bargeIn":false,
      "flush":false
   }
}
```

It's possible to combine multiple configurations of different items to achieve a proper menu customization, as in the example below:

```json
{
   "playSilence":{
      "time":50,
      "bargeIn":false,
      "flush":false
   },
   "configuration":{
      "bargeIn":false,
      "flush":false
   }
}
```

### Transfer to human

The transfer property is used to transfer the call to live agents.

<table><thead><tr><th width="140.33333333333331">Name</th><th width="123">Type</th><th>Function</th></tr></thead><tbody><tr><td>uui</td><td>String</td><td>A custom message that will be transferred along with the call via the user-to-user SIP header. We recommend using a <a href="https://www.convertstring.com/EncodeDecode/HexEncode">hexadecimal encoding</a>.</td></tr><tr><td>dest</td><td>String</td><td>Call destination, where the call will be transferred to. You can declare it as <em>sip</em> or <em>tel</em>.</td></tr></tbody></table>

Below are some examples:

* How to declare that you want a call to be transferred *(remember to replace the information inside the quotation marks)*:&#x20;

```json
{
   "transfer":{
      "uui":"48656C6C6F20776F726C64;encoding=hex",
      "dest":"sip:12345678@172.16.0.7:5060"
   }
}
```

In the example above, the value "48656C6C6F20776F726C64" will be decoded as "Hello world" by the agent.
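Since dest can also be declared as *tel*, a transfer to a phone number might look like the sketch below (the number is purely illustrative):

```json
{
   "transfer":{
      "uui":"48656C6C6F20776F726C64;encoding=hex",
      "dest":"tel:+551199999999"
   }
}
```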

* It's also possible to combine transfer with audio configurations (for example, TTS via the text template), as in the example below:

```json
{
   "transfer":{
      "dest":"sip:12345678@172.16.0.7:5060?user-to-user=342342ef34;encoding=hex"
   },
   "configuration":{
      "bargeIn":false,
      "flush":false
   }
}
```

In the example above, the hex encoding is declared directly in the dest URI, via the user-to-user parameter.

{% hint style="warning" %}
**Important:** Transfer property has priority over menu and play silence. When you attach these commands with transfer, the other two will be ignored.
{% endhint %}

### Hangup

This property is used to end the flow. In other words, after this, the call will be terminated. Simply attach the following JSON in the technical text:

```json
{
   "hangup": true
}
```

* It's also possible to combine hangup with audio configurations (for example, TTS via the text template), as in the example below:

```json
{
   "hangup":true,
   "configuration":{
      "bargeIn":false,
      "flush":false
   }
}
```

{% hint style="warning" %}
**Important:** Hangup property has priority over transfer, menu, and play silence. When you attach these commands with hangup, the other three will be ignored.
{% endhint %}

### Recall&#x20;

The recall property can be used to simulate an asynchronous delivery of answers and also to send eva a user input that can trigger a flow or validate a service.

This behavior is useful when the system requires lengthy processing and you don't want to leave the user waiting in silence, wondering if the call is still active.

{% hint style="success" %}
💡 It's a good practice to give the user feedback with audios, such as background music or informative messages.
{% endhint %}

To use a recall, add a wait-input cell after the answer you want delivered before continuing the flow.

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FBYL80Z7VjNQTcwz2arr6%2FCaptura%20de%20pantalla%202025-11-13%20102336.png?alt=media&#x26;token=9a548fba-14d8-4efc-b2f3-5f1447ac17f0" alt=""><figcaption></figcaption></figure>

You can use the same parameters as those in the [Conversation API](https://docs.conversational-ai.syntphony.com/user-guide/api-docs/api-guidelines/creating-channels-the-conversation-api#request-body) to specify the user input (if it's text, code, context, intent, confidence, or entities).&#x20;

In the example below, the code "357YVU" is being used as a value to validate a service.

```json
{
   "recall": {
      "code": "357YVU"
   }
}
```

In the following case, the intent "shopping" is triggered directly, without the need to identify utterances; you only have to provide the intent's name exactly as it's registered in eva.&#x20;

```json
{
  "recall": {
    "confidence": "0.50",
    "intent": "shopping"
  }
}
```
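Recall parameters can also be combined. The sketch below sends a text input together with context data; the text value and the context key are hypothetical:

```json
{
  "recall": {
    "text": "check order status",
    "context": {
      "orderId": "A-1029"
    }
  }
}
```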

This next example is a simpler way of using the recall property. In this scenario, eva would be called with an empty input.

```json
{
  "recallText": ""
} 
```

{% hint style="info" %}
Fill in the technical text with this content to activate the recallEva.
{% endhint %}

### Fetch

The fetch property represents the waiting time for the IVR to make a new request to eva before continuing the flow. You can also overwrite the default by setting it in the technical text so that it applies only to a specific execution (audio playback, TTS, etc.).&#x20;

<table><thead><tr><th width="199">Name</th><th width="97.33333333333331">Type</th><th>Function</th></tr></thead><tbody><tr><td>fetchTimeout</td><td>Long</td><td>The default amount of time in milliseconds the IVR will wait for a page/json fetch.</td></tr><tr><td>fetchAudio</td><td>String</td><td>The path to the default audio file to be used during IVR platform fetch events.</td></tr><tr><td>fetchAudioDelay</td><td>Long</td><td>The default value for the fetch audio delay. This is the amount of time in milliseconds the IVR will wait while transitioning and fetching resources before it starts playing the fetch audio.</td></tr><tr><td>fetchAudioMinimum</td><td>Long</td><td>The minimum time in milliseconds to play a fetch audio source, once started, even if the fetch result arrives in the meantime. The idea is that once the user does begin to hear a fetch audio, it should not be stopped too quickly.</td></tr><tr><td>fetchAudioInterval</td><td>Long</td><td>Controls the time interval between fetch audio loops. The default value is 0. A value of -1 is valid and will prevent the audio loop.</td></tr></tbody></table>

Below are some examples:

* Fetch configuration

```json
{
   "fetch":{
      "fetchTimeout":45000,
      "fetchAudio":"",
      "fetchAudioDelay":0,
      "fetchAudioMinimum":0,
      "fetchAudioInterval":0
   }
}
```

* The fetch property can be combined with other configurations, such as overriding audio settings via TTS (text template). You can combine multiple configuration items to achieve the desired customization, as in the example below:

```json
{
   "fetch":{
      "fetchTimeout":45000,
      "fetchAudio":"",
      "fetchAudioDelay":0,
      "fetchAudioMinimum":0,
      "fetchAudioInterval":0
   },
   "configuration":{
      "bargeIn":false,
      "flush":false
   }
}
```
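For example, a hold audio can be looped while the IVR fetches the next resource, starting after a short delay. This is a sketch with illustrative values; the audio URL is hypothetical:

```json
{
   "fetch": {
      "fetchTimeout": 60000,
      "fetchAudio": "https://example.com/audio/hold-music.wav",
      "fetchAudioDelay": 2000,
      "fetchAudioMinimum": 5000,
      "fetchAudioInterval": 0
   }
}
```

With these values, the IVR waits 2 seconds before starting the hold music and, once started, plays it for at least 5 seconds even if the fetch result arrives sooner.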

{% hint style="info" %}
**Important:** If none of the properties above mentioned ([DTMF](#dtmf-menu), [VOICE](#voice-menu), [DTMF\_VOICE](#dtmf-voice-menu), [play silence](#play-silence), [transfer](#transfer-to-human), [hangup](#hangup), or [recall](#recall)) are attached, a DTMF\_VOICE with the default configurations will be added.
{% endhint %}

### Synonyms for regional expressions

This property gives a contextual understanding of expressions and word variations. For example, in English it's common to say "O" (the letter) instead of "zero" when giving a phone number.&#x20;

To help the STT intelligence understand this is the number 0 and not a letter, you can use a JSON file that gathers all “Regional Expressions”, as in the example:&#x20;

```json
{
   "O": "0"
}
```
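The same file can map any number of regional expressions. The entries below are illustrative examples for English phone-number dictation:

```json
{
   "O": "0",
   "oh": "0",
   "double four": "44"
}
```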

{% hint style="warning" %}
**Important:** The JSON with regional expressions has to be a **public file**. To enable it, provide the URL in the [JSON with the default configurations](#setting-a-phone-number).&#x20;
{% endhint %}

To enable this property, simply attach the following JSON to the technical text:&#x20;

```json
{
   "useRegionalExpressions": true
}
```

This way, the agent will have a better recognition of specific entities such as phone number, credit card number, etc. Bear in mind that each time a new change is made to the file, **it can take up to one hour to reflect in the call**.&#x20;

### Configure first flow

If you want to start the conversation with a different flow, use the following code to set the first interaction when configuring the DNIS:

```json
"firstConversationRequest": {
   "code":"",
   "text":"",
   "intent":"",
   "confidence":1,
   "entities":{
      "comida":"",
      "carro":""
   },
   "context":{
      
   }
}
```

[See here all the properties you can use in this JSON](https://docs.eva.bot/user-guide/for-technicians/api-guidelines/creating-channels-the-conversation-api#request-body-1).&#x20;

You can use this, for example, to change the channel, start on a specific seasonal flow, or handle outbound calls.
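As a sketch, an outbound campaign could start directly on a specific flow by triggering an intent; the intent name and context key below are hypothetical:

```json
"firstConversationRequest": {
   "intent": "outbound-reminder",
   "confidence": 1,
   "context": {
      "campaign": "2025-renewal"
   }
}
```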

## Handling events

### Disconnected user

When a call is interrupted unexpectedly, either because the user hung up accidentally or as a result of some system error, it's possible to configure a flow in eva so that the conversation resumes from the same point if this same user calls again in less than 5 minutes.

This setup not only enhances user experience but also refines the abandonment metric by filtering out abandoned calls and excluding those that were resumed.

To create this scenario, you'll have to:&#x20;

* Create a welcome answer with a [transactional ](https://docs.conversational-ai.syntphony.com/user-guide/build-dialogs/dialog-cells/answer#e-transactional-answer)service to identify the call.
* Create a User Journey flow specifically for this use case. Add the utterance "USER\_DISCONNECTED" to your intent followed by a service cell (see image below) to identify the call and resume from the same point where it left off.

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FDv1zr4wfIzmPX9ofsvJ7%2Fimage.png?alt=media&#x26;token=1914de7c-e644-44d8-b1fd-f323fc90d26d" alt=""><figcaption></figcaption></figure>

### No input

When the user doesn't interact with the agent within the configured timeout, meaning there is neither a DTMF nor a speech input, the system sends eva the code IVR\_NO\_INPUT, visible in the User Messages column on Dashboards.

<figure><img src="https://4008706377-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2Fn6zS4HeuuVpRHZEvDiFU%2Fuploads%2FxLkf4pc5uwAAJkbBtv18%2Fimage.png?alt=media&#x26;token=2b476723-a7f1-47ba-8cbe-e7a0751440b0" alt=""><figcaption></figcaption></figure>

### No match

This event is used to manage cases where it's not possible to identify or transcribe the input. The system sends eva the code IVR\_NO\_MATCH, visible in the User Messages column on Dashboards (see image above).

## Handling Errors

Some errors may occur during a call. Possible errors include:

* Communication failure with eva due to misconfiguration.
* Failed authentication with eva.
* Flow not found (for example, when a Not Expected flow wasn't created).
* The use of a template not supported by the IVR channel.

There are two ways of handling them:

1. Redirect the call to a live agent
2. End the call

{% hint style="info" %}
For both cases, we recommend delivering a message notifying the user of what will happen next.
{% endhint %}

### **Default error behavior**

<table data-header-hidden><thead><tr><th width="91.6666259765625">Name</th><th width="89.4444580078125">Type</th><th>Description</th></tr></thead><tbody><tr><td>audio</td><td>String</td><td>The field must contain an audio URL in WAV or FLAC format, when this response is delivered to the IVR it will play the audio content.</td></tr><tr><td>tts</td><td>String</td><td>The field content will be synthesized by the IVR, you can fill it with free text or with an SSML.</td></tr><tr><td>transfer</td><td>boolean</td><td>If set as <em>true</em> the call will be transferred after the message is played; if set as <em>false</em> or when the property is not specified the call will be terminated after the message. To make the call transfer we will use the default transfer settings.</td></tr></tbody></table>
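For instance, the fields above could be combined so that, on error, a message is synthesized and the call is then transferred using the default transfer settings. This is a sketch; the message text is illustrative:

```json
{
   "tts": "We're sorry, an unexpected error occurred. We'll transfer you to an agent.",
   "transfer": true
}
```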
