Bias Mitigation
A Postprocessing Technique to Determine the Fairest Threshold
If you are interested in bias mitigation capabilities, we’re happy to discuss your needs and which approaches would work best for you. Within Arthur Scope, we offer postprocessing methods; we encourage exploring alternative (pre- or in-processing) methods if your data science team has the bandwidth to do so.
The postprocessing method currently available is the Threshold Mitigator, which automatically evaluates the demographic parity, equalized odds, and equal opportunity constraints.
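For reference, these are the standard definitions of the three constraints for a binary classifier with prediction Ŷ, true label Y, and sensitive attribute A:

- Demographic parity: P(Ŷ = 1 | A = a) is the same for every group a
- Equalized odds: P(Ŷ = 1 | A = a, Y = y) is the same for every group a, for both y = 0 and y = 1
- Equal opportunity: P(Ŷ = 1 | A = a, Y = 1), the true positive rate, is the same for every group a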
Enabling Bias Mitigation will automatically train a mitigation model for all attributes marked as `monitor_for_bias=True`, for the constraints of demographic parity, equalized odds, and equal opportunity.
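For orientation, here is a minimal sketch of what enabling the enrichment can look like from the Python SDK. The URL, access key, model ID, and attribute name are placeholders; confirm the exact calls against the example notebook linked below.

```python
from arthurai import ArthurAI
from arthurai.common.constants import Stage

# Connect to Arthur and retrieve an existing binary classification model
# (URL, access key, and model ID below are placeholders)
arthur = ArthurAI(url="https://app.arthur.ai", access_key="YOUR_API_KEY")
arthur_model = arthur.get_model("YOUR_MODEL_ID")

# Prerequisite: at least one attribute must be monitored for bias
# ("age" is a hypothetical attribute name)
arthur_model.get_attribute("age", stage=Stage.ModelPipelineInput).monitor_for_bias = True
arthur_model.update()

# Trains a mitigation model for every monitor_for_bias=True attribute,
# for the demographic parity, equalized odds, and equal opportunity constraints
arthur_model.enable_bias_mitigation()
```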
The Onlys of Bias Mitigation
Bias Mitigation is an enrichment in Arthur with a few “onlys”:
- Bias Mitigation is only available for binary classification models
- It can only be enabled if at least one model attribute is marked as `monitor_for_bias=True`
- It is the only enrichment available exclusively through the Python SDK, which means it is also the only enrichment you run from a notebook each time you want to use it
Bias Mitigation with the Python SDK
As mentioned above, Bias Mitigation is only available through our Python SDK. We have put together an example notebook demonstrating how to use the bias mitigation capabilities: Bias Mitigation Notebook on Arthur GitHub
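To build intuition for what the trained mitigator produces, here is a hypothetical sketch (plain Python, not Arthur API code) of applying group-specific decision thresholds to raw model scores, which is the essence of threshold-based postprocessing:

```python
from typing import Dict, List

def apply_group_thresholds(
    scores: List[float],
    groups: List[str],
    thresholds: Dict[str, float],
) -> List[int]:
    """Binarize raw scores using a per-group decision threshold.

    A threshold mitigator selects these thresholds so that the resulting
    predictions satisfy a chosen fairness constraint (e.g. equal opportunity).
    """
    return [int(score >= thresholds[group]) for score, group in zip(scores, groups)]

# Illustrative thresholds, chosen (hypothetically) to equalize true positive rates
thresholds = {"group_a": 0.44, "group_b": 0.58}
scores = [0.30, 0.50, 0.61, 0.47]
groups = ["group_a", "group_b", "group_a", "group_b"]
print(apply_group_thresholds(scores, groups, thresholds))  # [0, 0, 1, 0]
```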
Understanding the Algorithm
To learn more about the algorithm used for bias mitigation, please refer to the Arthur Algorithms documentation section.