Test Center

Catching the edge cases before your users do.

Overview

Here you can create standardized tests per Prompt that can be run across all of its versions. The goal is to catch edge cases and validate your output before delivering a version to a customer, so you can have confidence in each version you create.

Tests are defined per Prompt, but are meant to be run across multiple versions at the same time. You can also select multiple tests to run, creating a wide spread of results that you can compare.

Tests

Every test can be thought of as a predefined "bag" of user data that you run against different versions of your Prompt.

The test variables you fill in are drawn from all the variables you've used across every version of your Prompt, so the test stays valid for any version you might want to check.
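
As a rough sketch only (the structure and names below are hypothetical, not the Easybeam data model or API), a test boils down to one value per variable that appears in any version of the Prompt, which is what lets the same test run against every version:

// Hypothetical illustration in TypeScript: a test is a named "bag" of
// variable values covering every variable used across all Prompt versions.
interface PromptTest {
  name: string                        // what this test is checking
  variables: Record<string, string>   // one entry per variable seen in any version
}

const emptyCartTest: PromptTest = {
  name: 'Empty cart edge case',
  variables: {
    userName: 'Ada',   // used by every version
    cartItems: '',     // the edge case being exercised
    locale: 'en-GB',   // only referenced by the newest version, filled in anyway
  },
}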

Test Results

The results of each test are streamed in live, so you can even watch what your AI provider writes as it comes in!
