
github-actions[bot] edited this page Sep 3, 2024 · 22 revisions

# Batch Output Formatter

`batch_output_formatter`

## Overview

Output Formatter for batch inference output.

**Version:** 0.0.12

**View in Studio:** https://ml.azure.com/registries/azureml/components/batch_output_formatter/version/0.0.12

## Inputs

| Name | Description | Type | Default | Optional | Enum |
| ---- | ----------- | ---- | ------- | -------- | ---- |
| model_type | Type of model. One of `oai`, `oss`, `vision_oss`, `claude`. | string | | True | |
| batch_inference_output | The raw batch inference output. | uri_folder | | False | |
| label_column_name | The label column name. | string | | True | |
| additional_columns | Name(s) of additional column(s) that could be useful for computing metrics, separated by commas (","). | string | | True | |
| endpoint_url | | string | | True | |
| ground_truth_input | The ground truth input. | uri_folder | | True | |
| handle_response_failure | How the formatter handles a failed response. | string | use_fallback | False | ['use_fallback', 'neglect'] |
| fallback_value | The fallback value used when a request payload fails. | string | | True | |
| min_endpoint_success_ratio | The minimum value of (successful_requests / total_requests) required to classify the inference run as successful. If (successful_requests / total_requests) < min_endpoint_success_ratio, the experiment is marked as failed. Defaults to 0 (0 means all requests may fail; 1 means no request may fail). | number | 0 | False | |
| is_performance_test | If true, the performance test will be run. | boolean | False | False | |
| use_tiktoken | If true, the `cl100k_base` encoder from tiktoken is used to calculate token counts, overriding any other token count calculation. | boolean | False | True | |
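The `min_endpoint_success_ratio` gate described above can be sketched as follows. This is a minimal illustration of the documented threshold semantics, not the component's actual implementation; the helper name is hypothetical.

```python
def endpoint_run_succeeded(successful_requests: int,
                           total_requests: int,
                           min_endpoint_success_ratio: float = 0.0) -> bool:
    """Return True when the success ratio meets the configured threshold.

    Per the component docs: 0 means all requests may fail,
    1 means no request may fail.
    Hypothetical helper -- not part of the component's API.
    """
    if total_requests == 0:
        # Assumption: a run with no requests is not marked as failed.
        return True
    return (successful_requests / total_requests) >= min_endpoint_success_ratio
```

For example, with the default ratio of 0, a run where every request fails still passes the gate, while a ratio of 1 requires every request to succeed.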

## Outputs

| Name | Description | Type |
| ---- | ----------- | ---- |
| predictions | | uri_file |
| performance_metadata | | uri_file |
| ground_truth | | uri_file |
| successful_requests | | uri_file |
| failed_requests | | uri_file |
| unsafe_content_blocked_requests | | uri_file |
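The `successful_requests`, `failed_requests`, and `unsafe_content_blocked_requests` outputs suggest the formatter partitions raw responses into three buckets. A minimal sketch of that idea, assuming a hypothetical per-response `status` field (the component's actual response schema may differ):

```python
def partition_responses(responses):
    """Split raw endpoint responses into successful, failed,
    and unsafe-content-blocked buckets.

    Assumes each response dict carries a 'status' field with values
    'success', 'failure', or 'content_filtered' (hypothetical schema).
    Anything unrecognized is treated as a failure.
    """
    successful, failed, blocked = [], [], []
    for response in responses:
        status = response.get("status")
        if status == "success":
            successful.append(response)
        elif status == "content_filtered":
            blocked.append(response)
        else:
            failed.append(response)
    return successful, failed, blocked
```

Each bucket would then be written to the corresponding `uri_file` output.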

## Environment

`azureml://registries/azureml/environments/model-evaluation/versions/34`
