Evaluator Gallery
Name
Prompt
Craft a clear set of instructions for the AI model. This prompt will guide the model in assessing a generated output against criteria you define, and then assigning a specific score based on those criteria. Your instructions can use {{output}} for the model's last generated response, and can optionally use {{input}} for the original user prompt, {{history}} for the chat history, {{expected_output}} for the ground truth, and {{system.instruction}} for the respective system instructions. These variables will be replaced with actual data from your dataset during evaluation.
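As a minimal sketch of how the template variables above might be filled in before the evaluator prompt is sent to the model: the placeholder names ({{input}}, {{output}}, {{expected_output}}) come from this section, but the render function itself is an illustrative assumption, not the platform's actual implementation.

```python
import re

def render_prompt(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its value from `variables`.

    Unknown placeholders are left intact rather than erased, so a typo
    in a variable name is visible in the rendered prompt.
    """
    def substitute(match: re.Match) -> str:
        name = match.group(1)
        return str(variables.get(name, match.group(0)))
    # [\w.]+ also matches dotted names like system.instruction
    return re.sub(r"\{\{\s*([\w.]+)\s*\}\}", substitute, template)

# Hypothetical evaluator prompt using the documented variables
template = (
    "Rate the response on a 1-5 scale for factual accuracy.\n"
    "User prompt: {{input}}\n"
    "Model response: {{output}}\n"
    "Ground truth: {{expected_output}}"
)

rendered = render_prompt(template, {
    "input": "What is the capital of France?",
    "output": "Paris",
    "expected_output": "Paris",
})
print(rendered)
```

During a real evaluation run, the platform performs this substitution for each row of your dataset; the sketch only shows the shape of that step.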
Model
Map Evaluator Score to Analytics
Define how rubric scores from the evaluator prompt are graded and colored in analytics.
Rubric category
Score mapping
Score color
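The mapping above can be sketched as a small lookup table: each rubric category returned by the evaluator prompt is assigned a numeric score and a display color for analytics. The category names, score values, and colors below are illustrative assumptions, not the platform's defaults.

```python
# Hypothetical rubric-to-analytics mapping; names and values are examples.
SCORE_MAPPING = {
    "excellent":  {"score": 1.0, "color": "green"},
    "acceptable": {"score": 0.5, "color": "yellow"},
    "poor":       {"score": 0.0, "color": "red"},
}

def grade(category: str) -> tuple[float, str]:
    """Look up the analytics score and chart color for a rubric category."""
    entry = SCORE_MAPPING.get(category.strip().lower())
    if entry is None:
        raise KeyError(f"Unknown rubric category: {category!r}")
    return entry["score"], entry["color"]

print(grade("Excellent"))
```

In practice you would configure one row per rubric category in the form above; the dictionary is just a compact way to picture how the evaluator's textual verdict becomes a number and a color in analytics dashboards.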