# Evaluators

## Orchestrator

### `rait_connector.evaluators.EvaluatorOrchestrator`

Orchestrates parallel evaluation of multiple metrics.

This class manages the execution of multiple metric evaluators, optionally running them in parallel to reduce total evaluation time.
Attributes:

| Name | Type | Description |
|---|---|---|
| `model_config` | `Optional[Any]` | Azure OpenAI model configuration |
| `azure_ai_project` | `Optional[Union[str, Dict[str, str]]]` | Azure AI project configuration |
| `credential` | `Optional[Any]` | Azure credential for authentication |
#### `__init__(model_config=None, azure_ai_project=None, credential=None)`

Initialize the orchestrator with Azure configuration.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_config` | `Optional[Any]` | Azure OpenAI model configuration | `None` |
| `azure_ai_project` | `Optional[Union[str, Dict[str, str]]]` | Azure AI project configuration dict or URL | `None` |
| `credential` | `Optional[Any]` | Azure credential for authentication | `None` |
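Since `azure_ai_project` may arrive either as a configuration dict or as a project URL, the constructor presumably normalizes the two forms. The sketch below is an assumption for illustration only: the helper name `normalize_project`, the dict keys, and the URL layout are not part of the documented API.

```python
from typing import Dict, Union

def normalize_project(azure_ai_project: Union[str, Dict[str, str]]) -> Dict[str, str]:
    """Hypothetical helper: pass a project dict through unchanged, or parse a
    project URL (assumed shape) into the same dict layout."""
    if isinstance(azure_ai_project, dict):
        return azure_ai_project
    # Assumed URL shape:
    # https://<host>/subscriptions/<sub>/resourceGroups/<rg>/projects/<name>
    parts = azure_ai_project.rstrip("/").split("/")
    return {
        "subscription_id": parts[parts.index("subscriptions") + 1],
        "resource_group_name": parts[parts.index("resourceGroups") + 1],
        "project_name": parts[parts.index("projects") + 1],
    }
```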
#### `evaluate_metrics(prompt_data, ethical_dimensions, parallel=True, max_workers=5, fail_fast=False)`

Evaluate all enabled metrics for a prompt.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `prompt_data` | `Dict[str, str]` | Dict with `query`, `response`, `context`, `ground_truth` | *required* |
| `ethical_dimensions` | `List[Dict[str, Any]]` | List of ethical dimensions with metrics config | *required* |
| `parallel` | `bool` | Whether to run evaluations in parallel | `True` |
| `max_workers` | `int` | Maximum number of parallel workers | `5` |
| `fail_fast` | `bool` | Whether to stop on first error | `False` |
Returns:

| Type | Description |
|---|---|
| `List[Dict[str, Any]]` | Updated ethical dimensions with evaluation results |
Raises:

| Type | Description |
|---|---|
| `EvaluationError` | If `fail_fast=True` and any evaluation fails |
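The parallel/`fail_fast` semantics described above can be sketched with a `ThreadPoolExecutor`. This is a simplified stand-in, not the library's internals: the evaluator callables, the simplified result shape, and the local `EvaluationError` class are assumptions for illustration.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
from typing import Any, Callable, Dict

class EvaluationError(Exception):
    """Stand-in for rait_connector's evaluation error."""

def evaluate_metrics(
    evaluators: Dict[str, Callable[[Dict[str, str]], Any]],
    prompt_data: Dict[str, str],
    parallel: bool = True,
    max_workers: int = 5,
    fail_fast: bool = False,
) -> Dict[str, Any]:
    """Run each evaluator against prompt_data, optionally in parallel."""
    results: Dict[str, Any] = {}

    def record(name: str, run: Callable[[], Any]) -> None:
        try:
            results[name] = run()
        except Exception as exc:
            if fail_fast:
                raise EvaluationError(f"{name} failed: {exc}") from exc
            results[name] = {"error": str(exc)}  # keep going, note the failure

    if not parallel:
        for name, fn in evaluators.items():
            record(name, lambda fn=fn: fn(prompt_data))
        return results

    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fn, prompt_data): name for name, fn in evaluators.items()}
        for future in as_completed(futures):
            record(futures[future], future.result)
    return results
```

With `fail_fast=False` a failing metric is recorded as an error entry rather than aborting the remaining evaluations, which matches the documented default.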
## Registry Functions

### `rait_connector.evaluators.create_evaluator(metric_name, model_config=None, azure_ai_project=None, credential=None)`

Create an evaluator instance for a given metric name.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `metric_name` | `str` | Name of the metric (case-insensitive) | *required* |
| `model_config` | `Optional[Any]` | Azure OpenAI model configuration | `None` |
| `azure_ai_project` | `Optional[Dict[str, str]]` | Azure AI project configuration | `None` |
| `credential` | `Optional[Any]` | Azure credential | `None` |
Returns:

| Type | Description |
|---|---|
| `Any` | Configured evaluator instance |
Raises:

| Type | Description |
|---|---|
| `EvaluationError` | If metric not found or initialization fails |
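The documented behavior (case-insensitive lookup, `EvaluationError` on an unknown metric or a failed initialization) can be illustrated with a minimal registry. The factory entries and the local `EvaluationError` class below are placeholders, not the actual evaluator classes.

```python
from typing import Any, Callable, Dict

class EvaluationError(Exception):
    """Stand-in for rait_connector's evaluation error."""

# Placeholder factories keyed by lowercase metric name.
_REGISTRY: Dict[str, Callable[..., Any]] = {
    "groundedness": lambda **kw: ("GroundednessEvaluator", kw),
    "relevance": lambda **kw: ("RelevanceEvaluator", kw),
}

def create_evaluator(metric_name: str, **config: Any) -> Any:
    """Look up the metric case-insensitively and build its evaluator."""
    factory = _REGISTRY.get(metric_name.lower())
    if factory is None:
        raise EvaluationError(f"Unknown metric: {metric_name!r}")
    try:
        return factory(**config)
    except Exception as exc:
        raise EvaluationError(f"Failed to initialize {metric_name!r}: {exc}") from exc
```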
### `rait_connector.evaluators.can_evaluate_metric(metric_name, has_context, has_ground_truth)`

Check if a metric can be evaluated with available data.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `metric_name` | `str` | Name of the metric | *required* |
| `has_context` | `bool` | Whether context is available | *required* |
| `has_ground_truth` | `bool` | Whether `ground_truth` is available | *required* |
Returns:

| Type | Description |
|---|---|
| `bool` | `True` if the metric can be evaluated |
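One way to implement such a check is a per-metric requirements table. The metric names and their context/ground-truth requirements below are illustrative assumptions, not the library's actual table.

```python
from typing import Dict, Tuple

# Illustrative requirements: (needs_context, needs_ground_truth) per metric.
_REQUIREMENTS: Dict[str, Tuple[bool, bool]] = {
    "groundedness": (True, False),   # assumed: requires context
    "similarity": (False, True),     # assumed: requires ground truth
    "fluency": (False, False),       # assumed: needs only query/response
}

def can_evaluate_metric(metric_name: str, has_context: bool, has_ground_truth: bool) -> bool:
    """Return True when the data on hand satisfies the metric's requirements."""
    needs_context, needs_gt = _REQUIREMENTS.get(metric_name.lower(), (False, False))
    if needs_context and not has_context:
        return False
    if needs_gt and not has_ground_truth:
        return False
    return True
```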