Precision

precision.factor R Documentation

Description

A generic function for the precision. Use weighted.precision() for the weighted precision.

Other names

Positive Predictive Value

Usage

## S3 method for class 'factor'
precision(actual, predicted, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'factor'
weighted.precision(actual, predicted, w, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'cmatrix'
precision(x, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'factor'
ppv(actual, predicted, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'factor'
weighted.ppv(actual, predicted, w, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'cmatrix'
ppv(x, micro = NULL, na.rm = TRUE, ...)

## Generic S3 method
precision(
 ...,
 micro = NULL,
 na.rm = TRUE
)

## Generic S3 method
weighted.precision(
 ...,
 w,
 micro = NULL,
 na.rm = TRUE
)

## Generic S3 method
ppv(
 ...,
 micro = NULL,
 na.rm = TRUE
)

## Generic S3 method
weighted.ppv(
 ...,
 w,
 micro = NULL,
 na.rm = TRUE
)

Arguments

actual

A <factor>-vector of length \(n\), and \(k\) levels.

predicted

A <factor>-vector of length \(n\), and \(k\) levels.

micro

A <logical>-value of length \(1\) (default: NULL). If TRUE it returns the micro average across all \(k\) classes, if FALSE it returns the macro average.
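For intuition, the micro average pools the true and false positives across all classes before dividing, while the macro average is the unweighted mean of the class-wise values. A minimal base-R sketch, using a purely hypothetical confusion matrix:

# hypothetical confusion matrix (rows = actual, columns = predicted)
cm <- matrix(
  c(20,  5,
    10, 65),
  nrow = 2, byrow = TRUE,
  dimnames = list(c("a", "b"), c("a", "b"))
)

# macro average: mean of the class-wise precisions
mean(diag(cm) / colSums(cm))

# micro average: pool TP and FP across classes first
sum(diag(cm)) / sum(cm)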

na.rm

A <logical>-value of length \(1\) (default: TRUE). If TRUE, NA values are removed from the computation. This argument is only relevant when micro != NULL. When na.rm = TRUE, the computation corresponds to sum(c(1, 2, NA), na.rm = TRUE) / length(na.omit(c(1, 2, NA))). When na.rm = FALSE, the computation corresponds to sum(c(1, 2, NA), na.rm = TRUE) / length(c(1, 2, NA)).
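As a concrete illustration of the two conventions (the class-wise scores below are purely hypothetical, with one undefined class):

# class-wise scores with one undefined (NA) class
scores <- c(1, 2, NA)

# na.rm = TRUE: average over the non-NA classes only
sum(scores, na.rm = TRUE) / length(na.omit(scores))  # 1.5

# na.rm = FALSE: NA classes still count in the denominator
sum(scores, na.rm = TRUE) / length(scores)            # 1.0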

...

Arguments passed into other methods.

w

A <numeric>-vector of length \(n\). NULL by default.

x

A confusion matrix created with cmatrix().

Value

If micro is NULL (the default), a named <numeric>-vector of length \(k\).

If micro is TRUE or FALSE, a <numeric>-vector of length \(1\).

Definition

Let \(\hat{\pi}_k \in [0, 1]\) be the proportion of true positives among the predicted positives for class \(k\). The precision of the classifier is calculated as,

\[ \hat{\pi}_k = \frac{\#TP_k}{\#TP_k + \#FP_k} \]

Where:

  • \(\#TP_k\) is the number of true positives, and

  • \(\#FP_k\) is the number of false positives.
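As a worked sketch of the formula using only base R (the toy factors are illustrative; the class-wise result should agree with precision()):

# toy data: two classes with a few misclassifications
actual    <- factor(c("a", "a", "b", "b", "b", "a"))
predicted <- factor(c("a", "b", "b", "b", "a", "a"))

# confusion matrix: actual classes in rows, predictions in columns
cm <- table(actual, predicted)

# precision per class: TP_k / (TP_k + FP_k),
# i.e. the diagonal divided by the column sums
diag(cm) / colSums(cm)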

Examples

# 1) recode Iris
# to binary classification
# problem
iris$species_num <- as.numeric(
  iris$Species == "virginica"
)

# 2) fit the logistic
# regression
model <- glm(
  formula = species_num ~ Sepal.Length + Sepal.Width,
  data    = iris,
  family  = binomial(
    link = "logit"
  )
)

# 3) generate predicted
# classes
predicted <- factor(
  as.numeric(
    predict(model, type = "response") > 0.5
  ),
  levels = c(1,0),
  labels = c("Virginica", "Others")
)

# 3.1) generate actual
# classes
actual <- factor(
  x = iris$species_num,
  levels = c(1,0),
  labels = c("Virginica", "Others")
)

# 4) evaluate class-wise performance
# using Precision

# 4.1) unweighted Precision
precision(
  actual    = actual,
  predicted = predicted
)

# 4.2) weighted Precision
weighted.precision(
  actual    = actual,
  predicted = predicted,
  w         = iris$Petal.Length/mean(iris$Petal.Length)
)

# 5) evaluate overall performance
# using micro-averaged Precision
cat(
  "Micro-averaged Precision", precision(
    actual    = actual,
    predicted = predicted,
    micro     = TRUE
  ),
  "Micro-averaged Precision (weighted)", weighted.precision(
    actual    = actual,
    predicted = predicted,
    w         = iris$Petal.Length/mean(iris$Petal.Length),
    micro     = TRUE
  ),
  sep = "\n"
)
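
# 6) alternatively, precompute the
# confusion matrix and reuse it.
# (sketch: assumes cmatrix() accepts the
# same actual and predicted factors)
confusion_matrix <- cmatrix(
  actual    = actual,
  predicted = predicted
)

# 6.1) class-wise Precision from the
# confusion matrix method
precision(confusion_matrix)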