
Evaluator Gallery


Name


Prompt


Craft a clear set of instructions for the AI model. This prompt guides the model in assessing a generated output against criteria you define and in assigning a specific score based on those criteria. Your instructions can use {{output}} for the model's last generated response, and can optionally use {{input}} for the original user prompt, {{history}} for the chat history, {{expected_output}} for the ground truth, and {{system.instruction}} for the corresponding system instructions. These variables are replaced with actual data from your dataset during evaluation.
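A minimal sketch of how these template variables are filled in from a dataset row during evaluation. The prompt text, the `render_prompt` helper, and the sample row are illustrative assumptions, not the product's actual implementation; only the `{{variable}}` placeholder syntax comes from the description above.

```python
# Illustrative only: a toy evaluator prompt using the documented
# {{variable}} placeholders. The wording and scoring scale are assumptions.
EVALUATOR_PROMPT = (
    "Rate the response below for factual accuracy on a 1-5 scale.\n"
    "User prompt: {{input}}\n"
    "Model response: {{output}}\n"
    "Ground truth: {{expected_output}}\n"
)

def render_prompt(template: str, row: dict) -> str:
    """Replace each {{variable}} placeholder with the dataset row's value."""
    rendered = template
    for key, value in row.items():
        rendered = rendered.replace("{{" + key + "}}", value)
    return rendered

# Hypothetical dataset row supplying the variables used in the template.
row = {
    "input": "What is the boiling point of water at sea level?",
    "output": "100 degrees Celsius.",
    "expected_output": "100 degrees C (212 degrees F) at standard pressure.",
}
print(render_prompt(EVALUATOR_PROMPT, row))
```

During a real evaluation run the platform performs this substitution for you, once per dataset row, before sending the rendered prompt to the evaluator model.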

Model


Map Evaluator Score to Analytics

Define how rubric scores from the evaluator prompt are graded and colored in analytics.

Each mapping row has three fields:

Rubric category

Score mapping

Score color (chosen from a dropdown: Green, Orange, or Red)