# Binary Classification for Text Models in Arthur
Binary classification models predict a binary outcome (i.e., one of two potential classes). In Arthur, these models fall into the category of classification and are represented by the Multiclass model type; a registration sketch follows the examples below.
Some common examples of text binary classification are:
- Is this email spam or not?
- Was this review written by a human or a robot?
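For reference, registering such a model with Arthur's Python SDK might look like the following minimal sketch. This assumes the `arthurai` client; the URL, access key, and model ID are placeholders, and exact parameters may vary by SDK version.

```python
from arthurai import ArthurAI
from arthurai.common.constants import InputType, OutputType

# Connect to Arthur and register a text (NLP) binary classifier as a
# Multiclass model. The URL, access key, and model ID are placeholders.
arthur = ArthurAI(url="https://app.arthur.ai", access_key="YOUR_API_KEY")
arthur_model = arthur.model(partner_model_id="spam-classifier",
                            input_type=InputType.NLP,
                            output_type=OutputType.Multiclass)
```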
Frequently, these models output not only a yes/no answer but also a probability for each class (i.e., prob_yes and prob_no). These probabilities are then converted into a yes/no answer based on a threshold. In these cases, teams supply their classification threshold during onboarding and continuously track the class probabilities (i.e., prob_yes and prob_no).
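For illustration, the thresholding step might look like this minimal sketch; the 0.5 threshold and the names are assumptions for the example, not Arthur defaults:

```python
# Convert a class probability into a hard yes/no label using a threshold.
# The 0.5 value is only an example; teams choose their own threshold.
THRESHOLD = 0.5

def to_label(prob_yes: float) -> str:
    return "yes" if prob_yes >= THRESHOLD else "no"

print(to_label(0.95))  # yes
print(to_label(0.14))  # no
```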
## Formatted Data in Arthur
Text binary classification models require three things to be specified in their schema: the text input, the predicted probability of each output class, and a column for the inference's true label (ground truth). Many teams also choose to onboard metadata for the model (i.e., any information you want to track about your inferences) as non-input attributes.
| Attribute (text input) | Probability of Prediction A | Probability of Prediction B | Ground Truth | Non-Input Attribute (numeric or categorical) |
|---|---|---|---|---|
| Ubi amor, ibi dolor | 0.95 | 0.05 | A | Male |
| Lupus in fabula | 0.86 | 0.14 | B | Female |
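In practice, reference data with this schema is often assembled as a pandas DataFrame before onboarding. Here is a minimal sketch with illustrative column names matching the table above:

```python
import pandas as pd

# Illustrative reference dataset: text input, one probability column per
# class, the ground truth label, and a non-input attribute for monitoring.
reference_data = pd.DataFrame({
    "text_input":   ["Ubi amor, ibi dolor", "Lupus in fabula"],
    "prob_a":       [0.95, 0.86],
    "prob_b":       [0.05, 0.14],
    "ground_truth": ["A", "B"],
    "gender":       ["Male", "Female"],
})
```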
## Predict Function and Mapping
Below are examples of the common values teams need to onboard for their binary classification models. The relationship between the prediction columns and the ground truth column must be defined so that Arthur can calculate default performance metrics; there are three options for formatting this mapping, depending on your reference dataset. Additionally, teams that wish to enable explainability must provide a few Assets Required For Explainability, including a runnable predict function that outputs the probability of each potential class (common examples follow the three options).
### Option 1: Single Prediction Column, Single Ground Truth Column

```python
# Map the predicted value column to its corresponding ground truth value.
# This tells Arthur that the `pred_proba_credit_default` column represents
# the probability that the ground truth column has the value 1.
pred_to_ground_truth_map_1 = {'pred_proba_credit_default': 1}

# Building the model with this technique
arthur_model.build(reference_data,
                   ground_truth_column='ground_truth',
                   pred_to_ground_truth_map=pred_to_ground_truth_map_1)
```
### Option 2: Multiple Prediction Columns, Single Ground Truth Column

```python
# Map each predicted value attribute to its corresponding ground truth value.
pred_to_ground_truth_map_2 = {'pred_0': 0,
                              'pred_1': 1}

# Building the model with this technique
arthur_model.build(reference_data,
                   ground_truth_column='ground_truth',
                   pred_to_ground_truth_map=pred_to_ground_truth_map_2,
                   positive_predicted_attr='pred_1')
```
### Option 3: Multiple Prediction and Ground Truth Columns

```python
# Map each predicted value attribute to its corresponding ground truth attribute.
pred_to_ground_truth_map_3 = {'pred_0': 'gt_0',
                              'pred_1': 'gt_1'}

# Building the model with this technique
arthur_model.build(reference_data,
                   pred_to_ground_truth_map=pred_to_ground_truth_map_3,
                   positive_predicted_attr='pred_1')
```
If your serialized model can take the text input directly, the predict function can be a thin wrapper around it:

```python
# example_entrypoint.py
import joblib

sk_model = joblib.load("./serialized_model.pkl")

def predict(x):
    # Returns the probability of each potential class
    return sk_model.predict_proba(x)
```
If the raw input needs preprocessing before prediction, apply those transformations inside the predict function:

```python
# example_entrypoint.py
import joblib
from utils import pipeline_transformations

sk_model = joblib.load("./serialized_model.pkl")

def predict(x):
    # Transform the raw input, then return the probability of each class
    return sk_model.predict_proba(pipeline_transformations(x))
```
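With an entrypoint like the ones above in place, enabling the explainability enrichment through the SDK typically looks like the sketch below. The paths and sample size are assumptions; check the Assets Required For Explainability reference for the exact parameters your SDK version expects.

```python
# Sketch: enable the explainability enrichment after onboarding.
# All paths and values here are illustrative assumptions.
arthur_model.enable_explainability(
    df=reference_data.head(50),                # sample of model input data
    project_directory="./model_project",       # folder containing example_entrypoint.py
    user_predict_function_import_path="example_entrypoint",  # module exposing predict()
    requirements_file="requirements.txt",      # pip requirements for predict()
)
```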
## Available Metrics
When onboarding text classification models, a number of default metrics are available to you within the UI. You can learn more about each specific metric in the metrics section of the documentation.
### Out-of-the-Box Metrics
The following metrics are automatically available in the UI (out-of-the-box) when teams onboard a binary classification model. Find out more about these metrics in the Performance Metrics section.
| Metric | Metric Type |
|---|---|
| Accuracy Rate | Performance |
| Balanced Accuracy Rate | Performance |
| AUC | Performance |
| Recall | Performance |
| Precision | Performance |
| Specificity (TNR) | Performance |
| F1 | Performance |
| False Positive Rate | Performance |
| False Negative Rate | Performance |
| Inference Count | Ingestion |
| Inference Count by Class | Ingestion |
### Drift Metrics
In the platform, drift metrics are calculated against a reference dataset. So, once a reference dataset is onboarded for your model, these metrics are available out of the box. Find out more about these metrics in the Drift and Anomaly section.

Of note, for unstructured data types (like text and image), feature drift is calculated only for non-input attributes. Drift for the model's actual input (in this case, text) is calculated as multivariate drift, to accommodate the multivariate nature of and relationships within the data type.
| Metric | Metric Type |
|---|---|
| PSI | Feature Drift |
| KL Divergence | Feature Drift |
| JS Divergence | Feature Drift |
| Hellinger Distance | Feature Drift |
| Hypothesis Test | Feature Drift |
| Prediction Drift | Prediction Drift |
| Multivariate Drift | Multivariate Drift |
Note: Teams can evaluate drift for inference data at different intervals with our Python SDK and query service (for example, comparing data coming into the model now to data from a month ago).
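To make the feature-drift intuition concrete, here is a minimal, self-contained PSI computation between a reference window and a recent inference window. This illustrates the metric itself and is not Arthur's implementation:

```python
import numpy as np

def psi(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between two 1-D samples (illustrative)."""
    # Derive bin edges from the reference distribution.
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts = np.histogram(reference, bins=edges)[0]
    cur_counts = np.histogram(current, bins=edges)[0]
    # Convert to proportions, guarding against empty bins.
    eps = 1e-6
    ref_pct = np.clip(ref_counts / ref_counts.sum(), eps, None)
    cur_pct = np.clip(cur_counts / max(cur_counts.sum(), 1), eps, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5_000)  # reference window
recent = rng.normal(0.3, 1.0, 5_000)    # drifted inference window
print(round(psi(baseline, recent), 4))
```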
### Fairness Metrics
As further described in the Fairness Metrics section of the documentation, fairness metrics are available for any tabular Arthur attributes manually selected for bias monitoring. For text models, however, the only attribute required to onboard a model is the text input itself, so fairness can only be monitored on non-input attributes.
| Metric | Metric Type |
|---|---|
| Accuracy Rate | Fairness |
| True Positive Rate (Equal Opportunity) | Fairness |
| True Negative Rate | Fairness |
| False Positive Rate | Fairness |
| False Negative Rate | Fairness |
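As an illustration of how one of these metrics is computed by group, here is a minimal sketch of per-group true positive rate (equal opportunity) over a non-input attribute. The column names and data are made up for the example:

```python
import pandas as pd

# Illustrative inferences: hard predictions, ground truth, and a
# non-input attribute selected for bias monitoring.
df = pd.DataFrame({
    "prediction":   [1, 1, 0, 1, 0, 1],
    "ground_truth": [1, 0, 1, 1, 1, 1],
    "gender":       ["Male", "Female", "Male", "Female", "Female", "Male"],
})

# True positive rate per group: P(prediction = 1 | ground_truth = 1, group).
positives = df[df["ground_truth"] == 1]
print(positives.groupby("gender")["prediction"].mean())
```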
### User-Defined Metrics

Arthur also supports custom metrics: whether your team uses a different performance metric, wants to track defined segments of data, or needs logical functions to create metrics for external stakeholders (like product or business metrics). Learn more about creating metrics with your data in Arthur in the User-Defined Metrics section.
## Available Enrichments
The following enrichments can be enabled for this model type:
| Anomaly Detection | Hot Spots | Explainability | Bias Mitigation |
|---|---|---|---|
| X | X | X (Non-Input Attributes) | |
Learn more about the model onboarding process, or jump right into an NLP onboarding quickstart.