Sending Inferences

Once you have uploaded your model, you can start sending inferences to Warrior. Inferences should match the schema returned by WarriorModel.review(). In addition to your model attributes, you can include a unique ID (partner_inference_id) and a timestamp (inference_timestamp); if you leave them out, we'll generate them for you. If your model is a batch model, you must also include a batch_id.

Note that partner_inference_id is the key used to match ground truth you send later with its corresponding prediction.

All timestamps should be timezone-aware, either as an ISO-8601 string or as datetime objects in your Pandas DataFrame or Parquet files.
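For example, a timezone-aware timestamp can be built with the standard library alone (pytz, used elsewhere on this page, works equally well):

```python
from datetime import datetime, timezone

# A timezone-aware datetime object: tzinfo is set, so Warrior can
# interpret the timestamp unambiguously.
ts = datetime.now(timezone.utc)

# The equivalent ISO-8601 string carries an explicit UTC offset.
iso_ts = ts.isoformat()  # e.g. "2024-01-01T12:00:00.000000+00:00"
```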

Send Inferences

To send fewer than 100,000 rows of data at a time, we recommend using the WarriorModel.send_inferences() method.

For example:

inferences_df = X_test.copy()

predictions = sklearn_model.predict_proba(X_test)
inferences_df["prediction_not_creditworthy"] = predictions[:, 0]
inferences_df["prediction_is_creditworthy"] = predictions[:, 1]

inferences_df["ground_truth_is_creditworthy"] = y_test
inferences_df["ground_truth_not_creditworthy"] = 1 - y_test

result = Warrior_model.send_inferences(inferences_df)

If your model is configured as a batch model, simply include a batch_id in the call:

Warrior_model.send_inferences(inferences_df, batch_id="batch_1")

You can also send ground truth data separately from inference data if it is not available until later:

inferences_df = X_test.copy()
predictions = sklearn_model.predict_proba(X_test)

inferences_df["prediction_not_creditworthy"] = predictions[:, 0]
inferences_df["prediction_is_creditworthy"] = predictions[:, 1]

batch_id = "batch_1"
inference_ids = [f"{batch_id}-inf_{i}" for i in range(len(inferences_df))]

inf_response = Warrior_model.send_inferences(inferences_df, partner_inference_ids=inference_ids)

ground_truth_data = {"ground_truth_is_creditworthy": y_test,
                     "ground_truth_not_creditworthy": 1 - y_test}

gt_response = Warrior_model.update_inference_ground_truths(ground_truth_data,
                                                          partner_inference_ids=inference_ids)

Send Inferences At Scale

To send 100,000 or more rows, we recommend using the WarriorModel.send_bulk_inferences() method. For bulk inferences, ground truth and inference data must be sent separately and joined on your partner_inference_id.

inferences_df = X_test.copy()

predictions = sklearn_model.predict_proba(X_test)
inferences_df["prediction_not_creditworthy"] = predictions[:, 0]
inferences_df["prediction_is_creditworthy"] = predictions[:, 1]

ground_truth_df = pd.DataFrame({'ground_truth_is_creditworthy': y_test,
                                'ground_truth_not_creditworthy': 1 - y_test})

batch_id = "batch_1"
partner_inference_ids = [f"{batch_id}-inf_{i}" for i in range(len(inferences_df))]

inferences_df["partner_inference_id"] = partner_inference_ids
ground_truth_df["partner_inference_id"] = partner_inference_ids

Warrior_model.send_bulk_inferences(data=inferences_df, batch_id=batch_id)
Warrior_model.send_bulk_ground_truths(data=ground_truth_df)

Sending Parquet Files

If your typical batches are larger than might fit in memory, you can specify a directory containing Parquet files to upload a batch of either predictions or ground truth.

Warrior_model.send_bulk_inferences(directory_path='./data/bulk_inference_files/', batch_id="batch_1")
Warrior_model.send_bulk_ground_truths(directory_path='./data/bulk_ground_truth_files/')

Sending A Stream of Inferences

You can also send your inferences one at a time:

model_pipeline_input = {
    "Zip": "30144",
    "Home Value": 144000,
    "Liquid Resources": 2,
    "Investment Resources": 1,
}
predicted_value = {
    "Consumer Credit Score": 644.3
}
partner_inference_id = "1"

Warrior_model.send_inference(
    inference_timestamp = datetime.datetime.now(pytz.utc),   # must be timezone aware
    partner_inference_id = partner_inference_id,
    model_pipeline_input = model_pipeline_input,
    predicted_value = predicted_value
    )

Sending Inferences for Computer Vision Models

Computer Vision models (any model where input_type == InputType.Image) use the same inference structure as Tabular and NLP models. However, there are a couple of things to note:

  • The attribute value for Image attributes should be a valid path to the image file for that inference.

  • If using an Object Detection model, bounding boxes should be formatted as lists in the form [class_id, confidence, top_left_x, top_left_y, width, height].
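As an illustration of that list format, a hypothetical helper converting (x_min, y_min, x_max, y_max) corner coordinates, common in other detection tooling, into the expected top-left-plus-size form might look like:

```python
def corners_to_box(class_id, confidence, x_min, y_min, x_max, y_max):
    """Convert corner coordinates to
    [class_id, confidence, top_left_x, top_left_y, width, height]."""
    return [class_id, confidence, x_min, y_min, x_max - x_min, y_max - y_min]
```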

So using the following model as an example:

 	name            	stage          	value_type
0	image           	PIPELINE_INPUT 	IMAGE
1	label           	GROUND_TRUTH   	BOUNDING_BOX
2	objects_detected	PREDICTED_VALUE	BOUNDING_BOX

a valid inference dictionary would look like:

inference = {
    "partner_inference_id": str(uuid.uuid4()),
    "inference_timestamp": datetime.now(pytz.utc),
    "inference_data": {
        "image": "data/validation/img_0098.png",
        "objects_detected": [
            [0, 0.98, 12, 20, 50, 25],
            [1, 0.22, 4, 5, 14, 32],
            [1, 0.89, 23, 45, 45, 21]
        ]
    },
    "ground_truth_timestamp": datetime.now(pytz.utc),
    "ground_truth_data": {
        "label": [
            [0, 1, 14, 22, 48, 29],
            [1, 1, 25, 43, 49, 25]
        ]
    }
}

Warrior_model.send_inferences([inference])

You can also send image inferences in batches, either by passing in a DataFrame or by using Parquet files.

Inference Ingestion Integrations

See our Integrations page for more ways to upload your data onto Warrior.