Advanced AI Settings

Use this section to connect your agent to powerful AI models and fine-tune how it operates at scale.

What is this?

The Advanced Agent Settings section allows you to integrate your AI Agent with external LLMs (like ChatGPT) and configure advanced AI performance settings. It’s designed for teams that need granular control over how their agent behaves in high-load, multi-tool, or AI-intensive scenarios.


Where to Find It

  1. Go to the Agents tab and select the agent you want to configure

  2. Click on Advanced Agent Settings from the left-hand sidebar


Key Sections

🔹 LLM Provider

  • If no LLM is integrated yet, click Integrate LLM Provider

  • This redirects you to the Integrations page, where you can connect to tools like:

    • OpenAI (ChatGPT)

    • Calendars

    • CRMs

    • Other third-party platforms

🔹 Advanced Settings

Fine-tune the agent’s operational behavior with the following controls:

  • Parallel Tool Calling (Toggle): Allow the agent to call multiple tools simultaneously

  • Strict Tool Calling (Toggle): Ensure the agent uses only predefined tools, with no fallbacks

  • Switch Model on Token Limit (Toggle): Automatically switch to a fallback LLM when the token count exceeds the configured threshold

  • Model Selector (Dropdown): Choose fallback models like o3, o4-mini, gpt-4.1, etc.

  • Token Limit (Input): Set the maximum number of tokens before switching models

  • Temperature (Input): Adjust randomness/creativity of the model response (range: 0–1)

Action: Click Update to save all advanced configurations.
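The "Switch Model on Token Limit" behavior can be sketched as a simple check. The model names, the threshold, and the `estimate_tokens` heuristic below are illustrative assumptions; the platform does the real token accounting internally:

```python
# Illustrative sketch of the "Switch Model on Token Limit" toggle.
# Model names and the token estimate are assumptions for demonstration.

PRIMARY_MODEL = "gpt-4.1"
FALLBACK_MODEL = "o4-mini"
TOKEN_LIMIT = 8000  # the "Token Limit" input

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    return max(1, len(text) // 4)

def pick_model(conversation: str, switch_on_limit: bool = True) -> str:
    """Return the model to use for the next turn."""
    if switch_on_limit and estimate_tokens(conversation) > TOKEN_LIMIT:
        return FALLBACK_MODEL
    return PRIMARY_MODEL
```

With the toggle off, the primary model is always used regardless of conversation length.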


🔹 Voice Provider

The Voice Provider section allows you to connect a voice service provider to enable text-to-speech (TTS) for your agent.

1. Voice Provider

Select the voice service provider you want to use.

This determines:

  • Voice quality

  • Supported languages

  • Available voice styles

Note: A voice provider is mandatory to enable voice responses.


2. Voice Provider Credential

Choose or add credentials linked to your selected voice provider.

Credentials are required to:

  • Authenticate your voice service

  • Securely process voice responses


3. Voice

Select the voice type for your agent.

Examples:

  • Male / Female voices

  • Natural or conversational tones

  • Language-specific voices

Choose a voice that matches your brand and use case.


4. Model

Select the voice model used for speech generation.

The model affects:

  • Speech clarity

  • Naturalness

  • Response speed


5. Purchase Voice Add-on

If voice features are not enabled on your plan, click Purchase Voice Add-on to activate voice capabilities.


6. Update

Click Update to save and apply your voice configuration.
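Taken together, the voice fields map onto a small settings object. The provider, credential, and voice names below are placeholders, not product values:

```python
# Placeholder values; the real options come from your connected provider.
voice_settings = {
    "provider": "example-tts",          # 1. Voice Provider
    "credential": "cred_123",           # 2. Voice Provider Credential
    "voice": "female-conversational",   # 3. Voice
    "model": "fast-natural-v1",         # 4. Model
}

def is_complete(settings: dict) -> bool:
    """Voice responses need every field set before clicking Update."""
    return all(settings.get(k) for k in ("provider", "credential", "voice", "model"))
```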


How Voice & Transcription Work Together

When enabled:

  1. User speaks to the agent

  2. Transcription provider converts voice to text

  3. AI processes the text

  4. Voice provider converts AI response back to speech

This enables a complete voice-to-voice conversation experience.
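The four steps above amount to a simple pipeline. Here is a minimal sketch with stubbed-out providers; the function names are hypothetical illustrations, not BotPenguin APIs:

```python
# Hypothetical stand-ins for the transcription provider, the AI model, and
# the voice (TTS) provider; in the product each stage is a configured integration.

def transcribe(audio: bytes) -> str:
    return audio.decode("utf-8")   # stub: pretend the audio is its own transcript

def think(text: str) -> str:
    return f"You said: {text}"     # stub: the AI's reply

def speak(text: str) -> bytes:
    return text.encode("utf-8")    # stub: synthesized speech

def voice_turn(audio_in: bytes) -> bytes:
    """One voice-to-voice turn: speech -> text -> AI -> speech."""
    text = transcribe(audio_in)    # 2. transcription provider converts voice to text
    reply = think(text)            # 3. AI processes the text
    return speak(reply)            # 4. voice provider converts the reply to speech
```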


🔹 Transcription Provider

The Transcription Provider settings allow you to convert voice conversations into text automatically. This enables your AI agent to understand spoken user inputs, process them correctly, and store conversations in a readable format.

Transcription is essential for voice-based agents, call automation, and analytics.

  • Converts user voice input into text

  • Sends the text to the AI for processing

  • Enables searchable and loggable conversations

Without transcription, voice inputs cannot be understood by the AI agent.


Accessing Transcription Provider Settings

To configure transcription:

  1. Log in to your BotPenguin Dashboard

  2. Go to Agents

  3. Select your Agent

  4. Open the Advanced section

  5. Expand Transcription Provider


Transcription Provider Configuration Fields

1. Transcription Provider

Select the transcription service provider you want to use.

This determines:

  • Speech-to-text accuracy

  • Supported languages

  • Processing speed

Required Field


2. Transcription Provider Credential

Choose or add credentials associated with the selected provider.

Credentials are required to:

  • Authenticate transcription requests

  • Secure your voice data

Required Field


3. Model

Select the transcription model.

The model affects:

  • Accuracy of speech recognition

  • Noise handling

  • Language understanding

Required Field


4. Language

Select the language spoken by your users.

This ensures:

  • Accurate transcription

  • Proper handling of accents and pronunciation

Required Field


5. Purchase Voice Add-on

If transcription is not enabled for your plan, click Purchase Voice Add-on to activate voice and transcription features.


6. Update

Click Update to save and apply your transcription settings.
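All four fields above must be set before Update succeeds. A small sketch of that validation; the field keys follow the document, while the check itself is illustrative:

```python
# Required transcription fields, per the configuration list above.
REQUIRED_FIELDS = ("provider", "credential", "model", "language")

def missing_fields(config: dict) -> list:
    """Return the required transcription fields that are still unset."""
    return [f for f in REQUIRED_FIELDS if not config.get(f)]
```

An empty configuration reports all four fields as missing; a fully populated one reports none.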


How Transcription Works

  1. User speaks to the agent

  2. Audio is sent to the transcription provider

  3. Speech is converted into text

  4. AI processes the text

  5. (Optional) AI responds using voice via the voice provider


When Should You Enable Transcription?

Enable transcription if:

  • Your agent accepts voice inputs

  • You are using call-based bots

  • You want conversation logs for analysis

  • You need accurate AI understanding of spoken queries


Need Help?

If your agent doesn’t show up after installation or you encounter code-related issues, contact us at [email protected] or reach out directly via the WhatsApp Support option. We typically respond within 48 business hours.
