Documentation

Everything you need to build, deploy, and manage context-aware AI agents with Promptev.

Quick Start

Getting Started

Get up and running with Promptev in three steps. No credit card required for the free tier.

1. Create a Project

Sign up at app.promptev.ai and create your first project. Projects are containers for connectors, context packs, prompts, tools, and deployments.

2. Connect Your Data & Models

Connect integrations (Google Drive, Jira, GitHub, etc.) for data and tools. Then add model keys (OpenAI, Claude, Gemini) under BYOK.

Google Drive, Jira, GitHub, Slack, Notion, and 12 more.
3. Create a Prompt & Deploy

Build a prompt in Prompt Studio, attach context packs and tools, test in the playground, then deploy as a web agent, API, SDK, or Slack bot.
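
Once the prompt is deployed, it can be called from code. A minimal sketch, assuming the Python SDK and a client class named PromptevClient (the class name and import path are assumptions; full endpoint details are in the SDK & API section below):

Python
# Minimal sketch; the PromptevClient class and import path are assumptions,
# "my-first-prompt" is a made-up prompt name, pk_... is your project key.
from promptev import PromptevClient

client = PromptevClient(project_key="pk_...")
print(client.run_prompt("my-first-prompt", "Hello!", {"name": "Ada"}))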

Context Packs

Context Packs are Promptev's managed knowledge system. They turn your raw data into searchable knowledge that agents use to answer questions accurately.

How It Works

1. Connect data sources — Google Drive folders, Notion spaces, Confluence pages, GitHub repos, uploaded files.

2. Automatic processing — Promptev extracts text, chunks content, generates embeddings, and builds entity graphs.

3. Live re-indexing — Webhook-driven updates keep your context pack in sync when files change.

4. Dual retrieval — Semantic search + graph-based cross-document linking for maximum accuracy.

# Pipeline states
idle → queued → processing → completed

Track status via the dashboard or API. Failed packs show detailed error logs.
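
The status values above can also be polled from code. The sketch below is illustrative only: the route and response field are hypothetical, and only the state names come from the pipeline description above.

Python
# Illustrative polling loop; the /context-packs route and "status" field
# are hypothetical, only the state names above are documented.
import time
import requests

def wait_for_pack(project_key, pack_id, timeout=600):
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(
            f"https://api.promptev.ai/api/sdk/v1/context-packs/{project_key}/{pack_id}"  # hypothetical route
        )
        state = resp.json().get("status")
        if state == "completed":
            return
        if state == "failed":
            raise RuntimeError("Indexing failed; see the pack's error logs in the dashboard")
        time.sleep(5)  # idle → queued → processing → completed
    raise TimeoutError("Context pack did not finish indexing in time")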

Prompt Studio

Version control for AI behavior. Create prompts, attach context packs and tools, test side-by-side, roll back to any version, and deploy with confidence.

Version Control

Every prompt change creates a new version. Compare diffs, roll back, and see who changed what.

Side-by-Side Testing

Test two versions against the same input. Compare responses, latency, and token usage.
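
Side-by-side testing lives in the dashboard, but a rough equivalent can be scripted against two deployed prompts with the SDK. A sketch, assuming the hypothetical PromptevClient constructor from the SDK section and two made-up prompt names:

Python
# Rough side-by-side comparison via the SDK; prompt names and the client
# constructor are assumptions, and token usage is not measured here.
import time
from promptev import PromptevClient

client = PromptevClient(project_key="pk_...")
query = "How do I reset my password?"

for name in ("support-v1", "support-v2"):  # made-up prompt names
    start = time.perf_counter()
    output = client.run_prompt(name, query, {})
    print(f"{name}: {time.perf_counter() - start:.2f}s\n{output}\n")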

A/B Testing

Run experiments across models (GPT-4 vs Claude vs Gemini) with traffic splitting.

Audit Trail

Full history of every change, deployment, and test run. Traceable and compliant.

Tools

Tools give agents the ability to act in the real world. Three tool types, one unified governance layer.

System Tools

233+ built-in tools across 17+ integrations

Pre-built tools that auto-load when you connect an integration. Each has a defined config schema for parameter validation.

HTTP Tools

Custom REST API endpoints

Connect any REST API as a tool. Configure URL, method, headers, auth, and schemas via the dashboard.

Supported methods: GET, POST, PUT, PATCH, DELETE
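
For orientation, an HTTP tool definition captures the pieces listed above. The field names in this sketch are illustrative only; the dashboard's config form is the source of truth.

Python
# Hypothetical shape of an HTTP tool definition; real field names may differ.
create_ticket_tool = {
    "name": "create_ticket",                   # how the agent refers to the tool
    "method": "POST",
    "url": "https://example.com/api/tickets",  # any REST endpoint you control
    "headers": {"Authorization": "Bearer <token>"},
    "input_schema": {                          # validated before each call
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "priority": {"type": "string", "enum": ["low", "medium", "high"]},
        },
        "required": ["title"],
    },
}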

MCP Tools

Model Context Protocol

Connect any MCP-compatible server. MCP is an open standard — think USB-C for AI tool connectivity.

How it works: Add your MCP server URL → Promptev auto-discovers tools → Agents call them with full governance.
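
As a rough illustration, registering a server needs little more than its URL plus any auth it expects. The fields below are hypothetical; the dashboard form is authoritative.

Python
# Hypothetical MCP server registration; only the "add a URL, tools are
# auto-discovered" flow is documented above.
mcp_server = {
    "name": "internal-tools",                        # label shown in the dashboard
    "url": "https://mcp.example.com/sse",            # your MCP-compatible server
    "headers": {"Authorization": "Bearer <token>"},  # optional auth
}
# After saving, Promptev discovers the server's tools and applies the same
# governance (approvals, notifications, audit logs, RBAC) as other tool types.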

Tool Governance

All three types share unified governance: per-tool approval requirements, inline & email notifications, full audit logging, and RBAC.

Deployment

Deploy prompts and agents to any channel. All methods share the same execution engine.

Web AI Agent

Branded shareable link. Optional OTP/email gating for private access.

Embedded Widget

Embed via iframe. Responsive, customizable, works on any domain.

API Endpoint

RESTful API with streaming. Manage conversations programmatically.

Python / JS SDK

Type-safe clients with streaming and async support.

Slack / Teams Bot

Deploy as a bot in your existing workspace.

Autonomous Agents

Event-driven, scheduled, or goal-based with persistent memory.

SDK & API

Everything you need to integrate Promptev. Get your project key, install the SDK, and start calling endpoints.

1. Get your Project Key

Go to Project Settings in the dashboard and copy your project API key (pk_...). This key authenticates all calls — no Authorization header needed.

2. Install the SDK

Python
pip install promptev
JavaScript / TypeScript
npm install @promptev/client
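
The examples below assume an initialized client. The exact constructor isn't documented here, so the class and argument names in this sketch are assumptions; only the package names and the pk_-prefixed project key come from the steps above.

Python
# Assumed initialization; class and parameter names may differ in the
# installed package.
from promptev import PromptevClient

client = PromptevClient(project_key="pk_...")  # project key from Project Settings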
3. API Endpoints

Base URL: https://api.promptev.ai

POST /api/sdk/v1/prompt/client/{project_key}/{prompt_name}

Execute a prompt. Add ?stream=true for SSE streaming.

Request Body

{
  "variables": {
    "key": "value"
  }
}

Response

{
  "output": "The AI response text..."
}
Python
client.run_prompt(prompt_key, query, variables)
client.stream_prompt(prompt_key, query, variables)
JavaScript
client.runPrompt(promptKey, query, variables)
client.streamPrompt(promptKey, query, variables)
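
The same endpoint can be called without an SDK. A minimal sketch with the requests library; "support-faq" is a made-up prompt name and "customer_name" a made-up variable, while the route, request body, and output field match the docs above.

Python
# Direct HTTP call to the prompt endpoint documented above.
import requests

BASE_URL = "https://api.promptev.ai"
PROJECT_KEY = "pk_..."        # your project key
PROMPT_NAME = "support-faq"   # made-up prompt name

resp = requests.post(
    f"{BASE_URL}/api/sdk/v1/prompt/client/{PROJECT_KEY}/{PROMPT_NAME}",
    json={"variables": {"customer_name": "Ada"}},
)
resp.raise_for_status()
print(resp.json()["output"])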
POST /api/sdk/v1/agent/{project_key}/{chatbot_id}/start

Start or resume an agent session. Returns a session token for streaming.

Request Body

{
  "visitor": "user@company.com",
  "platform": "sdk"
}

Response

{
  "session_token": "tok_...",
  "chatbot_id": "...",
  "name": "My Agent",
  "memory_enabled": true,
  "messages": []
}
Python
client.start_agent(chatbot_id, visitor, platform)
JavaScript
client.startAgent(chatbotId, { visitor, platform })
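
Putting the documented signature to use; the chatbot ID is a placeholder, the client is the assumed one from the installation step, and the return value is assumed to be dict-like, matching the JSON shown above.

Python
# Start (or resume) an agent session; return shape assumed to match the
# response JSON documented above.
session = client.start_agent("your-chatbot-id", "user@company.com", "sdk")
session_token = session["session_token"]  # required by the /stream endpoint below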
POST /api/sdk/v1/agent/{project_key}/{chatbot_id}/stream

Stream an agent response via SSE. Requires session_token from /start.

Request Body

{
  "query": "Create a Jira ticket for this bug"
}
// Pass session_token as query param

SSE Events

{ "type": "thoughts",    "output": "..." }
{ "type": "processing", "output": "..." }
{ "type": "done",       "output": "..." }
{ "type": "error",      "output": "..." }
Python
client.stream_agent(chatbot_id, session_token, query)
JavaScript
client.streamAgent(chatbotId, { sessionToken, query })
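
Without an SDK, the stream can be consumed as plain server-sent events. A sketch with requests: the session_token goes in the query string as noted above and each event carries one of the four types listed; the "data:" framing is an assumption about the SSE format.

Python
# Consume the agent SSE stream directly; assumes standard "data: {...}" framing.
import json
import requests

BASE_URL = "https://api.promptev.ai"
PROJECT_KEY = "pk_..."
CHATBOT_ID = "your-chatbot-id"   # placeholder
session_token = "tok_..."        # returned by the /start call above

resp = requests.post(
    f"{BASE_URL}/api/sdk/v1/agent/{PROJECT_KEY}/{CHATBOT_ID}/stream",
    params={"session_token": session_token},
    json={"query": "Create a Jira ticket for this bug"},
    stream=True,
)
for line in resp.iter_lines():
    if not line or not line.startswith(b"data:"):
        continue
    event = json.loads(line[len(b"data:"):])
    if event["type"] in ("thoughts", "processing"):
        print(event["output"])       # intermediate progress
    elif event["type"] == "done":
        print(event["output"])       # final answer
        break
    elif event["type"] == "error":
        raise RuntimeError(event["output"])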

Async support (Python): every method has an async variant prefixed with "a" (e.g., arun_prompt, astart_agent, astream_agent). JavaScript methods are async by default.
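
A short sketch of the async variants, using the same assumed client construction as above:

Python
# Async usage; arun_prompt is the documented async counterpart of run_prompt,
# the client construction and prompt name are assumptions.
import asyncio
from promptev import PromptevClient

async def main():
    client = PromptevClient(project_key="pk_...")
    print(await client.arun_prompt("support-faq", "How do I reset my password?", {}))

asyncio.run(main())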

Webhooks

Real-time updates from connected integrations. When files change, issues update, or pages are edited, webhooks trigger automatic context pack re-indexing.

Supported Events

File created / updated / deleted
Jira issue created / updated
GitHub push / PR / issue events
Confluence page created / updated
Slack messages & reactions
Microsoft subscription events

BYOK (Bring Your Own Keys)

Promptev never owns your model keys or data. Connect your own LLM providers and control where inference runs.

Supported Providers

OpenAI
Claude (Anthropic)
Gemini (Google)
Mistral
Cohere
Self-hosted

Security

API keys are encrypted at rest using AES-256, never logged, and never exposed in responses. Rotate or delete keys at any time.

Ready to Build?

Start with the free tier. Deploy your first agent in under 10 minutes.