Concepts and Terminology

The following definitions are specific to the Warrior platform, though in most cases they apply to ML more broadly.

Warrior Inference

Container class for inferences uploaded to the Warrior platform. An inference is composed of input features, prediction values, and (optionally) ground truth values and any Non-Input data.

Example:

# Ground truth label to attach to a previously uploaded inference
ground_truth = {
    "Consumer Credit Score": 652.0
}
# Retrieve the inference by its external ID and update it with ground truth
inference = Warrior_model.get_inference(external_id)
inference.update(ground_truth)

Related terms: inference, WarriorModel

Warrior Model

Model object used for sending and retrieving data pertinent to a deployed ML system. The WarriorModel object is separate from the underlying model that is trained and which makes predictions; it serves as a wrapper for the underlying model to access Warrior platform functionality.

A WarriorModel contains at least a name, an InputType, and a ModelType.

Examples:

Warrior_model = connection.model(name="New_Model",
                               input_type=InputType.Tabular,
                               model_type=ModelType.Regression)
Warrior_model = connection.get(model_id)
Warrior_model.send_inference(...)

Attribute

A variable associated with a model. Can be an input, prediction, ground truth, or ancillary information (these groupings are known as Stages in the Warrior platform). Can be categorical or continuous. Example:

The attribute age is an input to the model, whereas the attribute creditworthy is the target for the model.

Synonyms: variable, {predictor, input}, {output, target}, prediction.

Related terms: input, stage, prediction, ground truth

Bias

While bias is an overloaded term in statistics and ML, here we refer specifically to cases where a model’s outcomes have the potential to differentially harm certain subgroups of a population.

Example:

This credit approval model tends to lead to biased outcomes: men are approved for loans at a rate 50% higher than women are.

Related terms: bias detection, bias mitigation, disparate impact

Bias Detection

The detection and quantification of algorithmic bias in an ML system, typically as evaluated on a model’s outputs (predictions) across different populations of a sensitive attribute. Many definitions of algorithmic bias have been proposed, including group fairness and individual fairness definitions. Group fairness definitions are often defined by comparing group-conditional statistics about the model’s predictions. In the definitions below, the group membership feature is indicated by \(G\) and a particular group membership value is indicated by \(g\).

Example:

Common metrics for group fairness include Demographic Parity, Equalized Odds, and Equal Opportunity.

Related terms: bias mitigation

Demographic Parity

A fairness metric which compares group-conditional selection rates. The quantity being compared is:

\[ \begin{align*} \mathbb P(\hat Y = 1 | G = g) \end{align*} \]

There is not necessarily a normative ideal relationship between the selection rates for each group: in some situations, such as the allocation of resources, it may be important to minimize the disparity in selection rates across groups; in others, metrics based on group-conditional accuracy may be more relevant. However, even in the latter case, understanding group-conditional selection rates, especially when compared against the original training data, can be useful contextualization for the model and its task as a whole.
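
As a minimal illustration (hypothetical arrays, not a Warrior API call), group-conditional selection rates can be computed directly from binary predictions and group labels:

import numpy as np

# Hypothetical binary predictions and group memberships
y_hat = np.array([1, 0, 1, 1, 0, 1, 0, 0])
g = np.array(["a", "a", "a", "b", "b", "b", "b", "a"])

# P(Y_hat = 1 | G = g) for each group
selection_rates = {
    group: y_hat[g == group].mean()
    for group in np.unique(g)
}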

Related term: disparate impact

Equal Opportunity

A fairness metric which compares group-conditional true positive rates. The quantity being compared is:

\[ \begin{align*} \mathbb P(\hat Y = 1 | Y = 1, G = g) \end{align*} \]

For all groups, a true positive rate closer to 1 is better.
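
A similarly minimal illustration (hypothetical arrays, not a Warrior API call) of group-conditional true positive rates:

import numpy as np

# Hypothetical labels, binary predictions, and group memberships
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
y_hat = np.array([1, 0, 0, 1, 1, 0, 0, 1])
g = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

# P(Y_hat = 1 | Y = 1, G = g) for each group
tpr_by_group = {
    group: y_hat[(g == group) & (y_true == 1)].mean()
    for group in np.unique(g)
}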

Equalized Odds

A fairness metric which incorporates both group-conditional true positive rates and false positive rates, or, equivalently, true positive rates and true negative rates. There are a variety of implementations (due to the fact that some quadrants of the confusion matrix are complements of one another); here is one possible quantity to compare across groups:

\[ \begin{align*} \mathbb P (\hat Y = 1 | Y = 1, G = g) + \mathbb P(\hat Y = 0 | Y = 0, G = g) \end{align*} \]

In this implementation, this quantity should be as close to 2 as possible for all groups.
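
A minimal illustration (hypothetical arrays, not a Warrior API call) of this per-group quantity as the sum of the group-conditional true positive and true negative rates:

import numpy as np

# Hypothetical labels, binary predictions, and group memberships
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
y_hat = np.array([1, 0, 0, 1, 1, 0, 0, 1])
g = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

def rate(y_hat_grp, y_true_grp, label):
    # fraction of instances with true label `label` that are predicted as `label`
    mask = y_true_grp == label
    return (y_hat_grp[mask] == label).mean()

# TPR + TNR per group; closer to 2 is better in this implementation
equalized_odds = {
    group: rate(y_hat[g == group], y_true[g == group], 1)
         + rate(y_hat[g == group], y_true[g == group], 0)
    for group in np.unique(g)
}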

Bias Mitigation

Automated techniques for mitigating bias in a discriminatory model. Can be characterized by where the technique sits in the model lifecycle:

  • Pre Processing: Techniques that analyze datasets and often modify/resample training datasets so that the learned classifier is less discriminatory.

  • In Processing: Techniques for training a fairness-aware classifier (or regressor) that explicitly trades off optimizing for accuracy against maintaining fairness across sensitive groups.

  • Post Processing: Techniques that only adjust the output predictions from a discriminatory classifier, without modifying the training data or the classifier.

Related terms: bias detection

Binary Classification

A modeling task where the target variable belongs to a discrete set with two possible outcomes.

Example:

This binary classifier will predict whether or not a person is likely to default on their credit card.

Related terms: model type, classification, multilabel classification

Categorical Attribute

An attribute whose value is taken from a discrete set of possibilities.

Example:

A person’s blood type is a categorical attribute: it can only be A, B, AB, or O.

Synonyms: discrete attribute

Related terms: attribute, continuous, classification

Continuous Attribute

An attribute whose value is taken from an ordered continuum, which can be bounded or unbounded.

Example:

A person’s height, weight, income, and IQ can all be thought of as continuous attributes.

Synonyms: numeric attribute

Related terms: attribute, categorical, regression

Classification

A modeling task where the target variable belongs to a discrete set with a fixed number of possible outcomes.

Example:

This classification model will determine whether an input image is of a cat, a dog, or a fish.

Related terms: model type, binary classification, multilabel classification

Data Drift

Refers to the problem that arises when, after a trained model is deployed, changes in the external world degrade model performance and the model becomes stale. Detecting data drift provides a leading indicator of data stability and integrity.

Data drift can be quantified with respect to a specific reference set (e.g. the model’s training data), or more generally over any temporal shifts in a variable with respect to past time windows.

Your project can query data drift metrics through the WarriorAI API. This section provides an overview of the available data drift metrics in WarriorAI’s query service.

Related terms: out of distribution

Definitions

P and Q

We first establish some mathematical housekeeping for the metrics below. Let \(P\) be the reference distribution and \(Q\) be the target distribution. These are both probability distributions that can be approximated by binning the underlying reference and target sets. Generally, \(P\) is an older dataset and \(Q\) is a new dataset of interest. We’d like to quantify how far the distributions differ, to see whether the reference set has gone stale and algorithms trained on it should not be used to perform inferences on the target dataset.

Entropy

Let \(\text{H}(P)\) be the entropy of distribution \(P\). It is interpreted as the expected (i.e. average) number of bits (if log base 2) or nats (if log base \(e\)) required to encode information of a datapoint from distribution \(P\). WarriorAI applications use log base \(e\), so interpretation will be in nats.

\[ \begin{align*} \text{H}(P) = -\sum_{k=1}^K P(x_k)*\text{log}P(x_k) = -\text{E}_P[\text{log}P(x_k)] \end{align*} \]
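
As a minimal illustration (a hypothetical binned distribution, not the Warrior query service), entropy in nats can be computed with NumPy:

import numpy as np

# Hypothetical binned reference distribution (probabilities sum to 1)
P = np.array([0.1, 0.2, 0.4, 0.3])

# Entropy in nats (natural log)
H_P = -np.sum(P * np.log(P))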

KL Divergence

Let \(\text{D}(P \parallel Q)\) be the Kullback-Leibler (KL) Divergence from \(P\) to \(Q\). It is interpreted as the number of nats of information we expect to lose by using \(Q\) instead of \(P\) to model data \(X\), discretized over probability space \(K\). KL Divergence is not symmetric, i.e. \(\text{D}(P \parallel Q) \neq \text{D}(Q \parallel P)\), and should not be used as a distance metric.

\[\begin{split} \begin{align*} \text{D}(P||Q) = \sum_{k=1}^K P(x_k)*(\text{log}P(x_k)-\text{log}Q(x_k)) \\ = \text{E}_P[\text{log}P(x)-\text{log}Q(x)] \end{align*} \end{split}\]
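
A minimal illustration with hypothetical binned distributions (not the Warrior query service):

import numpy as np

# Hypothetical binned reference (P) and target (Q) distributions
P = np.array([0.1, 0.2, 0.4, 0.3])
Q = np.array([0.2, 0.2, 0.3, 0.3])

# D(P || Q) in nats; note that D(P || Q) != D(Q || P)
D_PQ = np.sum(P * (np.log(P) - np.log(Q)))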

Population Stability Index (PSI)

Let \(\text{PSI}(P,Q)\) be the Population Stability Index (PSI) between \(P\) and \(Q\). It is interpreted as the round-trip information loss, in nats: the loss from using \(Q\) in place of \(P\) plus the loss from using \(P\) in place of \(Q\). PSI smooths out KL Divergence since the return-trip information loss is included; this metric is popular in financial applications.

\[\begin{split} \begin{align*}\text{PSI}(P,Q) = \text{D}(P||Q) + \text{D}(Q||P) \\ = \sum_{k=1}^K (P(x_k)-Q(x_k))*(\text{log}P(x_k)-\text{log}Q(x_k)) \\ = \text{E}_P[\text{log}P(x)-\text{log}Q(x)]+\text{E}_Q[\text{log}Q(x)-\text{log}P(x)] \end{align*} \end{split}\]
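
A minimal illustration with the same hypothetical binned distributions, showing PSI as the sum of the two KL divergences:

import numpy as np

# Hypothetical binned reference (P) and target (Q) distributions
P = np.array([0.1, 0.2, 0.4, 0.3])
Q = np.array([0.2, 0.2, 0.3, 0.3])

def kl(a, b):
    return np.sum(a * (np.log(a) - np.log(b)))

# PSI is the symmetrized KL divergence
psi = kl(P, Q) + kl(Q, P)
# equivalently: np.sum((P - Q) * (np.log(P) - np.log(Q)))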

JS Divergence

Let \(\text{JSD}(P,Q)\) be the Jensen-Shannon (JS) Divergence between \(P\) and \(Q\). It smooths out KL Divergence using a mixture of the base and target distributions and is interpreted as the entropy of the mixture \(M=\frac{P+Q}{2}\) minus the average of the entropies of the individual distributions.

\[\begin{split} \begin{align*}\text{JSD}(P,Q) = \frac{1}{2}\text{D}(P||M) + \frac{1}{2}\text{D}(Q||M) \\ = \text{H}\Big(\frac{P+Q}{2}\Big)-\frac{\text{H}(P)+\text{H}(Q)}{2} \end{align*} \end{split}\]
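
A minimal illustration with hypothetical binned distributions:

import numpy as np

# Hypothetical binned reference (P) and target (Q) distributions
P = np.array([0.1, 0.2, 0.4, 0.3])
Q = np.array([0.2, 0.2, 0.3, 0.3])
M = (P + Q) / 2  # mixture distribution

def entropy(dist):
    return -np.sum(dist * np.log(dist))

# Entropy of the mixture minus the average of the individual entropies
jsd = entropy(M) - (entropy(P) + entropy(Q)) / 2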

Hellinger Distance

Let \(\text{HE}(P,Q)\) be the Hellinger Distance between \(P\) and \(Q\). It is interpreted as the Euclidean norm of the difference of the square root distributions of \(P\) and \(Q\).

\[\begin{split} \begin{align*} \text{HE}(P,Q) = {\frac {1}{\sqrt {2}}}{\bigl \|}{\sqrt {P}}-{\sqrt {Q}}{\bigr \|}_{2} \\ = {\frac {1}{\sqrt {2}}}{\sqrt {\sum _{k=1}^{K}\left({\sqrt {P(x_k)}}-{\sqrt {Q(x_k)}}\right)^{2}}} \end{align*} \end{split}\]
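
A minimal illustration with hypothetical binned distributions:

import numpy as np

# Hypothetical binned reference (P) and target (Q) distributions
P = np.array([0.1, 0.2, 0.4, 0.3])
Q = np.array([0.2, 0.2, 0.3, 0.3])

# Scaled Euclidean norm of the difference of the square-root distributions
hellinger = np.linalg.norm(np.sqrt(P) - np.sqrt(Q)) / np.sqrt(2)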

Hypothesis Test

Hypothesis testing uses different tests depending on whether a feature is categorical or continuous.

For categorical features, let \(\chi_{\text{K}-1}^2(P,Q)\) be the chi-squared test statistic for \(P\) and \(Q\), where \(\text{K}\) is the number of categories of the feature, i.e. there are \(\text{K}-1\) degrees of freedom. Let \(\text{N}_{Pk}\) and \(\text{N}_{Qk}\) be the counts of occurrences of category \(k\), with \(1\leq k \leq K\), in \(P\) and \(Q\) respectively. The chi-squared test statistic sums, over categories, the squared differences between the observed counts in \(Q\) and the expected counts under \(P\), normalized by the expected counts.

\[\begin{split} \begin{align*} \chi_{K-1}^2(P,Q) = \sum_{k=1}^K \frac{(\text{N}_{Qk}-\text{N}_{Pk})^2}{\text{N}_{Pk}}\\ \end{align*} \end{split}\]

For continuous features, let \(\text{KS}(P, Q)\) be the Kolmogorov-Smirnov test statistic for \(P\) and \(Q\). Let \(F_P\) and \(F_Q\) be the empirical cumulative distribution functions of \(P\) and \(Q\), respectively. The Kolmogorov-Smirnov test is a nonparametric, i.e. distribution-free, test that compares the empirical cumulative distribution functions of \(P\) and \(Q\).

\[ \begin{align*} \text{KS}(P,Q) = \sup_x \left| F_P(x) - F_Q(x) \right| \end{align*} \]

The returned test statistic is then compared to cutoffs for significance. A higher test statistic indicates more data drift. We’ve abstracted the calculations away for you in our query endpoint.

For HypothesisTest, the returned value is transformed as \(-\log_{10}(p)\), where \(p\) is the test’s p-value, to maintain directional parity with the other data drift metrics. That is, a lower p-value is more significant and implies data drift, which is reflected in a higher \(-\log_{10}(p)\).
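
As a minimal illustration of both tests and the reporting transform (hypothetical counts and samples, computed with SciPy rather than the Warrior query endpoint):

import numpy as np
from scipy import stats

# Categorical feature: hypothetical category counts for reference (P)
# and target (Q) sets of equal size
N_P = np.array([50, 30, 20])
N_Q = np.array([40, 35, 25])
chi2_stat = np.sum((N_Q - N_P) ** 2 / N_P)

# Continuous feature: two-sample Kolmogorov-Smirnov test on raw samples
x_ref = np.random.normal(0.0, 1.0, size=1000)
x_new = np.random.normal(0.5, 1.0, size=1000)
ks_result = stats.ks_2samp(x_ref, x_new)

# Directional transform: a lower p-value yields a higher reported drift value
neg_log_p = -np.log10(ks_result.pvalue)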

Multivariate

Warrior also offers a multivariate Anomaly Score through the Anomaly Detection Enrichment. See here for an explanation of how these scores are calculated.

Disparate Impact

Legal terminology originally from Fair Lending case law. This constraint is strictly harder than Disparate Treatment and asserts that model outcomes must not be discriminatory across protected groups. That is, the outcome of a decisioning process should not be substantially higher (or lower) for one group of a protected class than for another.

While there does not exist a single threshold for establishing the presence or absence of disparate impact, the so-called “80% rule” is commonly referenced. However, we strongly recommend against adopting this rule-of-thumb, as these analyses should be grounded in use-case specific analysis and the legal framework pertinent to a given industry.

Example:

Even though the model didn’t take gender as input, it still results in disparate impact when we compare outcomes for males and females.

Related terms: bias, disparate treatment

Disparate Treatment

Legal terminology originally from Fair Lending case law. Disparate Treatment asserts that you are not allowed to consider protected variables (e.g., race, age, gender) when approving or denying an applicant for a credit card loan. In practical terms, this means that a data scientist cannot include these attributes as inputs to a credit decisioning model.

Adherence to Disparate Treatment is not a sufficient condition for actually achieving a fair model (see proxy and bias detection). “Fairness through unawareness” is not good enough.

Related terms: bias, disparate impact

Enrichment

Generally used to describe data or metrics added to raw data after ingestion. Warrior provides various enrichments such as Anomaly Detection and Explainability. See Enrichments for details around using enrichments within Warrior.

Feature

An individual attribute that is an input to a model.

Example:

The credit scoring model has features like “home_value”, “zip_code”, and “height”.

Ground Truth

The true label or target variable (Y) corresponding to the inputs (X) of a dataset.

Examples:

# Attach the true label to a previously sent inference
ground_truth = {
    "Consumer Credit Score": 652.0
}
inference = Warrior_model.get_inference(external_id)
inference.update(ground_truth)

Related terms: prediction

Image Data

Imagery data commonly used for computer vision models.

Related terms: attribute, model type, Stage

Inference

One row of a dataset. An inference refers to passing a single input into a model and computing the model’s prediction. Data associated with that inference might include (1) the input data, (2) the model’s prediction, and (3) the corresponding ground truth. With respect to the Warrior platform, the term inference denotes any and all of these related components of data for a single input and prediction.

Related terms: WarriorInference, stage

Input

A single instance of data, upon which a model can calculate an output prediction. The input consists of all relevant features together.

Example:

The input features for the credit scoring model consist of “home_value”, “zip_code”, and “height”.

Related terms: feature, model

Input Type

For a WarriorModel, this field declares what kind of input datatype will be flowing into the system.

Allowable values are defined in the InputType enum:

  • Tabular

    • appropriate for tabular (row-and-column) data

  • NLP

    • appropriate for unstructured text data

  • Image

    • appropriate for image data

Example:

Warrior_model = connection.model(name="New_Model",
                               input_type=InputType.Tabular,
                               model_type=ModelType.Regression)

Related terms: model type, tabular data, nlp data

Model Health Score

On the UI dashboard, you will see a model health score between 0 and 100 for each of your models. This score is an average, over a 30-day window, of the following normalized metrics: performance, drift, and ingestion.

  • Performance:

    • Regression: 1 - Normalized MAE

    • Classification: F1 Score

  • Drift

    • 1 - Average Anomaly Score

  • Ingestion

    • Variance of normalized time periods between ingestion events

    • Variance of normalized volume differences between ingestion events

You can extract the health score via an API call as well.
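
As a rough sketch only (the component values below are hypothetical and assumed to be already normalized to [0, 1]; the platform’s exact normalization is internal):

import numpy as np

# Hypothetical daily component scores over a 30-day window, each in [0, 1]
performance = np.random.uniform(0.7, 0.9, size=30)   # e.g. F1 or 1 - normalized MAE
drift = np.random.uniform(0.6, 0.95, size=30)        # 1 - average anomaly score
ingestion = np.random.uniform(0.8, 1.0, size=30)     # ingestion-regularity score

# Average of the normalized metrics, scaled to 0-100
health_score = 100 * np.mean([performance.mean(), drift.mean(), ingestion.mean()])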

Model Type

For a WarriorModel, this field declares what kind of output predictions will be flowing out of the system.

Allowable values are defined in the ModelType enum:

  • Regression

    • appropriate for continuous-valued targets

  • Multiclass

    • appropriate for both binary classifiers and multiclass classifiers

  • Multilabel

    • appropriate for multilabel classifiers

Example:

Warrior_model = connection.model(name="New_Model",
                               input_type=InputType.Tabular,
                               model_type=ModelType.Regression)

Related terms: input type

Multilabel Classification

A modeling task where each input can be associated with multiple labels simultaneously, drawn from a fixed set of possible labels.

Example:

This computer vision model can detect common road signs seen on US highways. The model is trained on example images, each of which may contain any of 250 different road signs.

Related terms: model type, multiclass classification

NLP Data

Unstructured text sequences commonly used for Natural Language Processing models.

Related terms: attribute, model type, Stage

Out of Distribution Detection

Refers to the challenge of detecting when an input (or set of inputs) is substantially different from the distribution of a larger set of reference inferences. This term commonly arises in the context of data drift, where we want to detect whether new inputs differ from the training data (and the distribution thereof) for a particular model. OOD detection is a relevant challenge for tabular data as well as unstructured data such as images and sequences.

Related terms: data drift

Prediction

The output prediction (y_hat) of a trained model for any input.

Examples:

# Positive-class probability from the underlying model
pred = sklearn_model.predict_proba(X)[:, 1]
Warrior_model.send_inference(
  model_pipeline_input=X,
  predicted_values={1: pred, 0: 1 - pred})

Related terms: ground truth

Protected Attribute

An attribute of an inference that is considered sensitive with respect to model bias. Common examples include race, age, and gender. The term “protected” comes from the Civil Rights Act of 1964.

Synonyms: sensitive attribute

Related terms: bias, proxy

Proxy

An input attribute in a model (or a combination thereof) that is highly correlated with a protected attribute such as race, age, or gender. The presence of proxies in a dataset makes it difficult to rely only on disparate treatment as a standard for fair ML.

Example:

In most US cities, zip code is a strong proxy for race. Therefore, one must be cautious when using zip code as an input to a model.

Related terms: bias, disparate impact, disparate treatment

Regression

A modeling task (or model) where the target variable is a continuous variable.

Example:

This regression model predicts what the stock price of $AAPL will be tomorrow.

Related terms: model type

Stage

Taxonomy used by the Warrior platform to delineate how attributes contribute to the model computations. Allowable values are defined in the Stage enum:

  • ModelPipelineInput : Input to the entire model pipeline. This will most commonly be the Stage used to represent all model inputs. Will contain base input features that are familiar to the data scientist: categorical and continuous columns of a tabular dataset.

  • PredictFunctionInput: Potential alternative input source, representing direct input into the model’s predict() method. Data here will therefore have already undergone all relevant transformations, such as scaling, one-hot encoding, or embedding.

  • PredictedValue: The predictions coming out of the model.

  • GroundTruth: The ground truth (or target) attribute for a model.

  • NonInput: Ancillary data that can be associated with each inference but is not necessarily a direct input to the model. For example, sensitive attributes like age, sex, or race might not be direct model inputs, but will be useful to associate with each prediction.

Tabular Data

Data type for model inputs where the data can be thought of as a table (or spreadsheet) composed of rows and columns. Each column represents an attribute of the model and each row represents a separate record. In supervised learning, exactly one of the columns acts as the target.

Example:

This credit scoring model is trained on tabular data. The input attributes are income, country, and age and the target is FICO score.

Related terms: attribute, model type, Stage

Sensitive Attribute

See protected attribute