

Image Regression Models within Arthur

Regression models predict a numeric outcome. In Arthur, these models are listed under the regression model type.

Some common examples of image regression are:

  • Predict user age based on a photograph
  • Predict house price from the image of the home

Formatted Data in Arthur

Image regression models require two columns: image input and numeric output. When onboarding a reference dataset (and setting a model schema), you need to specify a target column for each inference's ground truth. Many teams also choose to onboard metadata for the model (i.e., any information you want to track about your inferences) as non-input attributes.

| Attribute (Image Input) | Prediction (numeric) | Ground Truth (numeric) | Non-Input Attribute (numeric or categorical) |
|---|---|---|---|
| image_1.jpg | 45.34 | 62.42 | High School Education |
| image_2.jpg | 55.1 | 53.2 | Graduate Degree |
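A reference dataset following this schema can be assembled as plain records before onboarding. This is an illustrative sketch only: the field names (`image`, `prediction`, `ground_truth`, `education`) are assumptions for this example, and Arthur maps your actual column names through the model schema you register.

```python
# Illustrative reference records matching the example table above.
# Field names are assumptions; Arthur maps your real column names
# through the model schema you set when onboarding.
reference_data = [
    {
        "image": "image_1.jpg",        # image input (path to the file)
        "prediction": 45.34,           # model's numeric prediction
        "ground_truth": 62.42,         # numeric target for the inference
        "education": "High School Education",  # non-input attribute
    },
    {
        "image": "image_2.jpg",
        "prediction": 55.1,
        "ground_truth": 53.2,
        "education": "Graduate Degree",
    },
]

# Every record needs an image input plus a numeric prediction and target.
required = {"image", "prediction", "ground_truth"}
assert all(required <= record.keys() for record in reference_data)
```

Non-input attributes like `education` are optional metadata; they are not fed to the model but can be tracked for drift and data segments.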

Predict Function and Mapping

These are some examples of common values teams need to onboard for their regression models.

The relationship between the prediction and ground truth column must be defined to help set up your Arthur environment to calculate default performance metrics. Additionally, if teams wish to enable explainability, they must provide a few Assets Required For Explainability. Below is an example of the runnable predict function, which outputs a single numeric prediction.

```python
# Single-column ground truth: map the model's prediction column to the
# column holding its ground truth. Column names match the example
# schema above.
output_mapping = {"prediction": "ground_truth"}

# Example predict function for a regression model, which returns a
# single numeric prediction per input.
def predict(x):
    return model.predict(x)
```

Available Metrics

When onboarding regression models, several default metrics are available to you within the UI. You can learn more about each specific metric in the metrics section of the documentation.

Out-of-the-Box Metrics

The following metrics are automatically available in the UI (out-of-the-box) when teams onboard a regression model. Learn more about these metrics in the Performance Metrics section.

| Metric | Metric Type |
|---|---|
| Root Mean Squared Error | Performance |
| Mean Absolute Error | Performance |
| R Squared | Performance |
| Inference Count | Ingestion |
| Average Prediction | Ingestion |
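The performance metrics above have standard definitions, so they can be reproduced locally as a sanity check against what the UI reports. This is a minimal sketch in plain Python; the platform computes these for you, so nothing here is required for onboarding.

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute RMSE, MAE, and R-squared for a regression model."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mae = sum(abs(e) for e in errors) / n
    mean_true = sum(y_true) / n
    ss_res = sum(e * e for e in errors)                  # residual sum of squares
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)   # total sum of squares
    r_squared = 1 - ss_res / ss_tot
    return {"rmse": rmse, "mae": mae, "r_squared": r_squared}
```

For example, `regression_metrics([1.0, 2.0, 3.0], [2.0, 2.0, 2.0])` yields an MAE of 2/3 and an R-squared of 0, since the constant prediction does no better than predicting the mean.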

Drift Metrics

In the platform, drift metrics are calculated compared to a reference dataset. So, once a reference dataset is onboarded for your model, these metrics are available out of the box for comparison. Learn more about these metrics in the Drift and Anomaly section.

Of note, for unstructured data types (like text and image), feature drift is calculated for non-input attributes. Drift for the actual model input (in this case, images) is calculated with multivariate drift to accommodate the multivariate nature of and relationships within the data type.

| Metric | Metric Type |
|---|---|
| PSI | Feature Drift |
| KL Divergence | Feature Drift |
| JS Divergence | Feature Drift |
| Hellinger Distance | Feature Drift |
| Hypothesis Test | Feature Drift |
| Prediction Drift | Prediction Drift |
| Multivariate Drift | Multivariate Drift |

Note: Teams can evaluate drift for inference data at different intervals with our Python SDK and query service (for example, data coming into the model now compared to a month ago).
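For intuition about what these feature drift metrics measure, here is a minimal PSI (Population Stability Index) sketch in plain Python. It is illustrative only; the platform computes drift for you against the onboarded reference dataset, and this sketch makes simplifying assumptions (fixed bin count, bins derived from the reference sample's range).

```python
import math

def psi(reference, current, bins=10):
    """Population Stability Index between two numeric samples.

    Bins span the reference sample's range; values outside that range
    are clamped into the edge bins, and a small epsilon avoids log(0)
    for empty bins.
    """
    lo, hi = min(reference), max(reference)
    width = (hi - lo) / bins

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(max(int((x - lo) / width), 0), bins - 1)
            counts[idx] += 1
        eps = 1e-6
        return [max(c / len(sample), eps) for c in counts]

    p = proportions(reference)
    q = proportions(current)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))
```

Identical distributions score approximately 0, and the score grows as the current data shifts away from the reference, which is what makes PSI useful for comparing, say, this month's inferences against last month's.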

User-Defined Metrics

Teams can create their own metrics, whether they use a different performance metric, want to track defined data segments, or need logical functions to build metrics for external stakeholders (like product or business metrics). Learn more about creating metrics with data in Arthur in the User-Defined Metrics section.

Available Enrichments

The following enrichments can be enabled for this model type:

  • Anomaly Detection
  • Hot Spots
  • Explainability
  • Bias Mitigation