ElevenLabs integration is coming soon. This page documents the planned implementation.

Overview

ElevenLabs offers conversational AI capabilities alongside their renowned text-to-speech technology. Preclinical will support testing ElevenLabs conversational agents via WebSocket.
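
For reference, a conversational session with an ElevenLabs agent is driven over a WebSocket, which is the kind of traffic this integration is planned to exercise. The sketch below is a minimal illustration only, assuming the public Conversational AI WebSocket endpoint and the Node "ws" package; it opens a session for a public agent and logs the JSON events the agent emits.

// Minimal sketch: open a conversational session and log agent events.
// Assumes the public Conversational AI WebSocket endpoint and the "ws" package.
import WebSocket from "ws";

const AGENT_ID = process.env.ELEVENLABS_AGENT_ID ?? "agent_xxxxx";

const ws = new WebSocket(
  `wss://api.elevenlabs.io/v1/convai/conversation?agent_id=${AGENT_ID}`
);

ws.on("open", () => {
  console.log("Session opened for agent", AGENT_ID);
});

ws.on("message", (data) => {
  // Events arrive as JSON; log them to inspect transcripts, audio chunks, etc.
  console.log(JSON.parse(data.toString()));
});

ws.on("close", (code) => console.log("Session closed:", code));
ws.on("error", (err) => console.error("WebSocket error:", err));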

Planned Configuration

interface ElevenLabsConfig {
  provider: "elevenlabs";
  config: {
    api_key: string;           // ElevenLabs API key
    agent_id: string;          // Conversational agent ID

    // Optional
    voice_id?: string;         // Voice for responses
    model_id?: string;         // LLM model to use
    timeout_ms?: number;       // Default: 120000
  };
}
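
As a sketch of what a complete configuration might look like, the object below type-checks against the interface above. All values are placeholders, and the model identifier is made up for illustration.

const exampleConfig: ElevenLabsConfig = {
  provider: "elevenlabs",
  config: {
    api_key: "your-elevenlabs-api-key",
    agent_id: "agent_xxxxx",

    // Optional overrides (placeholder values; model_id is a made-up identifier)
    voice_id: "voice_xxxxx",
    model_id: "model_xxxxx",
    timeout_ms: 120000, // matches the documented default
  },
};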

Planned Setup

Step 1: Get API Key

  1. Go to ElevenLabs
  2. Navigate to Profile → API Key
  3. Copy your API key
Step 2: Create Conversational Agent

  1. In ElevenLabs, go to Conversational AI
  2. Create and configure your agent
  3. Copy the Agent ID
Step 3: Add Integration

{
  "name": "My ElevenLabs Agent",
  "provider": "elevenlabs",
  "config": {
    "api_key": "your-elevenlabs-api-key",
    "agent_id": "agent_xxxxx"
  }
}
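
To avoid committing the key to source control, the same payload can be built from environment variables. This is only a sketch; the variable names (ELEVENLABS_API_KEY, ELEVENLABS_AGENT_ID) are illustrative, not required by Preclinical.

// Sketch: build the integration payload from environment variables
// instead of hardcoding secrets. Variable names are illustrative only.
const integration = {
  name: "My ElevenLabs Agent",
  provider: "elevenlabs" as const,
  config: {
    api_key: process.env.ELEVENLABS_API_KEY ?? "",
    agent_id: process.env.ELEVENLABS_AGENT_ID ?? "",
  },
};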

Planned Features

WebSocket Streaming

Real-time bidirectional audio streaming

Voice Quality

Leverage ElevenLabs’ high-quality voices

Transcript Capture

Full conversation transcripts

Latency Metrics

Time to first byte and response latency
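
As an illustration of the kind of measurement involved, the sketch below times the gap between sending a user turn and receiving the first agent event over a WebSocket. The message shape ("user_message") is a hypothetical placeholder, not the final ElevenLabs or Preclinical schema.

// Sketch: measure time to first byte for one simulated user turn.
// The message type used here is a placeholder, not a documented schema.
import WebSocket from "ws";

function measureFirstResponse(ws: WebSocket, text: string): Promise<number> {
  return new Promise((resolve) => {
    const sentAt = Date.now();
    ws.once("message", () => resolve(Date.now() - sentAt));
    ws.send(JSON.stringify({ type: "user_message", text }));
  });
}

// Usage (assumes an already-open connection):
// const ttfbMs = await measureFirstResponse(ws, "Hello");
// console.log(`Time to first byte: ${ttfbMs} ms`);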

Timeline

We’re actively working on ElevenLabs integration. Expected availability:
  • Conversational AI Testing: Q2 2026

Stay Updated

Request Early Access

Contact us if you need ElevenLabs integration sooner

In the Meantime

If you’re using ElevenLabs conversational AI and need testing now:
  1. Backend Testing: If your ElevenLabs agent uses an OpenAI-compatible backend for its logic, you can test that backend directly via the OpenAI integration (see the sketch after this list)
  2. Manual Testing: Use ElevenLabs’ playground for manual testing, export transcripts, and analyze patterns
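
For option 1, you can also exercise the OpenAI-compatible backend directly with a standard chat completion request. The sketch below assumes a generic /v1/chat/completions endpoint and illustrative environment variables (BACKEND_URL, BACKEND_API_KEY); substitute your own backend's URL, key, and model.

// Sketch: exercise the agent's OpenAI-compatible backend directly.
// Endpoint base URL, env var names, and model are illustrative.
const baseUrl = process.env.BACKEND_URL ?? "https://api.openai.com";

const response = await fetch(`${baseUrl}/v1/chat/completions`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.BACKEND_API_KEY}`,
  },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello, can you help me reschedule?" }],
  }),
});

const completion = await response.json();
console.log(completion.choices[0].message.content);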

Next Steps