Evaluator Gallery

Name


Model


Prompt


Craft a clear set of instructions for the AI model. This prompt guides the model to assess a generated output against criteria you define and then assign a specific score based on those criteria. Your instructions must use {{output}} for the model's generated response and can optionally use {{input}} for the original user prompt, {{history}} for the chat history, and {{expected_output}} for the ground truth. These variables are replaced with actual data from your dataset during evaluation.
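As a sketch of how this templating behaves, the snippet below substitutes {{output}}, {{input}}, and {{expected_output}} placeholders with values from a dataset row. The substitution logic and the example prompt are assumptions for illustration, not the product's actual implementation.

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace {{name}} placeholders with values from `variables`.
    Placeholders with no matching key are left untouched."""
    def substitute(match):
        key = match.group(1)
        return str(variables.get(key, match.group(0)))
    return re.sub(r"\{\{(\w+)\}\}", substitute, template)

# Hypothetical evaluator prompt and dataset row.
template = (
    "Rate the response below for helpfulness.\n"
    "User prompt: {{input}}\n"
    "Model response: {{output}}\n"
    "Reference answer: {{expected_output}}"
)
row = {
    "input": "How do I reverse a list in Python?",
    "output": "Use my_list[::-1] or my_list.reverse().",
    "expected_output": "Slice with [::-1] or call reversed().",
}
print(render_prompt(template, row))
```

During an evaluation run, a substitution like this would be applied once per dataset row before the filled-in prompt is sent to the evaluator model.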

Map Evaluator Score to Analytics

Define how rubric scores from the evaluator prompt are graded and colored in analytics.

Rubric category


Score mapping


Score color

Green
Orange
Red
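A minimal sketch of what such a score mapping could look like in code: each rubric category named in the evaluator prompt is mapped to a numeric score and one of the three analytics colors. The category names and score values here are assumptions for illustration.

```python
# Hypothetical mapping from rubric categories (as produced by the
# evaluator prompt) to an analytics score and display color.
SCORE_MAPPING = {
    "good":    {"score": 1.0, "color": "green"},
    "partial": {"score": 0.5, "color": "orange"},
    "bad":     {"score": 0.0, "color": "red"},
}

def grade(category: str) -> dict:
    """Look up the analytics score and color for a rubric category."""
    try:
        return SCORE_MAPPING[category.strip().lower()]
    except KeyError:
        raise ValueError(f"Unknown rubric category: {category!r}")

print(grade("Good"))     # {'score': 1.0, 'color': 'green'}
print(grade("partial"))  # {'score': 0.5, 'color': 'orange'}
```

Normalizing the category string before the lookup makes the mapping tolerant of casing differences in the evaluator model's output.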