Evaluator Gallery
Name
Model
Prompt
Craft a clear set of instructions for the AI model. This prompt guides the model in assessing a generated output against criteria you define and then assigning a score based on those criteria. Your instructions must use {{output}} for the model's generated response and can optionally use {{input}} for the original user prompt, {{history}} for the chat history, and {{expected_output}} for the ground truth. These variables are replaced with actual data from your dataset during evaluation.
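For example, a minimal evaluator prompt for factual accuracy might look like the following; the rubric categories (Pass, Partial, Fail) and the wording are illustrative choices, not built-in defaults:

You are grading the factual accuracy of a model response.

User prompt: {{input}}
Model response: {{output}}
Ground truth: {{expected_output}}

Compare the response against the ground truth and answer with exactly one rubric category:
Pass: the response is factually consistent with the ground truth.
Partial: the response is mostly correct but omits or slightly distorts a detail.
Fail: the response contradicts the ground truth or invents facts.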
Map Evaluator Score to Analytics
Define how the rubric categories returned by the evaluator prompt are mapped to numeric scores and colors in analytics.
Rubric category
Score mapping
Score color
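For example, if the evaluator prompt returns the rubric categories Pass, Partial, and Fail, one possible mapping (the numeric scores and colors below are assumptions for illustration, not defaults) is:

Pass: score 1.0, colored green
Partial: score 0.5, colored yellow
Fail: score 0.0, colored red

With a mapping like this, analytics can aggregate the numeric scores across your dataset and highlight failing rows by color.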