precision.factor | R Documentation |

precision

Description

The precision() function computes the precision, also known as the positive predictive value (PPV), between two vectors of predicted and observed factor() values. The weighted.precision() function computes the weighted precision.

Usage

## S3 method for class 'factor'
precision(actual, predicted, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'factor'
weighted.precision(actual, predicted, w, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'cmatrix'
precision(x, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'factor'
ppv(actual, predicted, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'factor'
weighted.ppv(actual, predicted, w, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'cmatrix'
ppv(x, micro = NULL, na.rm = TRUE, ...)

precision(...)

weighted.precision(...)

ppv(...)

weighted.ppv(...)
Arguments

actual
    A vector of <factor> values of length n: the observed (actual) class labels.

predicted
    A vector of <factor> values of length n: the predicted class labels.

micro
    A <logical> value (default: NULL). If TRUE, the micro average across all k classes is returned; if FALSE, the macro average is returned; if NULL, the class-wise values are returned.

na.rm
    A <logical> value (default: TRUE). If TRUE, NA values are removed before the calculation.

...
    Arguments passed into other methods.

w
    A <numeric> vector of sample weights of length n.

x
    A confusion matrix created with cmatrix().
Value

If micro is NULL (the default), a named <numeric> vector of length k, one value per class.

If micro is TRUE or FALSE, a <numeric> vector of length 1.
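The difference between the return shapes can be illustrated with base R alone. The snippet below is a sketch of the three averaging modes, not the package's implementation; rows of the confusion matrix are the actual classes and columns the predicted classes.

```r
# Sketch of the averaging modes using base R only.
cm <- table(
  actual    = factor(c("a", "a", "b", "b", "b")),
  predicted = factor(c("a", "b", "b", "b", "a"))
)

# micro = NULL: one value per class (vector of length k)
per_class <- diag(cm) / colSums(cm)

# micro = TRUE: pool the counts across classes first (length 1)
micro_avg <- sum(diag(cm)) / sum(cm)

# micro = FALSE: average the class-wise values (length 1)
macro_avg <- mean(per_class)
```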
Calculation
The metric is calculated for each class \(k\) as follows,
\[ \frac{\#TP_k}{\#TP_k + \#FP_k} \]
Where \(\#TP_k\) and \(\#FP_k\) are the number of true positives and false positives, respectively, for each class \(k\).
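As a sanity check, the formula can be evaluated directly on a confusion matrix in base R. This is a hand-worked sketch of what the calculation amounts to (with rows as actual and columns as predicted classes), not the cmatrix method itself.

```r
# Hand-computed check of the class-wise formula, base R only.
cm <- matrix(
  c(50,  5,   # actual "a": 50 predicted "a", 5 predicted "b"
    10, 35),  # actual "b": 10 predicted "a", 35 predicted "b"
  nrow = 2, byrow = TRUE,
  dimnames = list(actual = c("a", "b"), predicted = c("a", "b"))
)

# #TP_k is the k-th diagonal entry; #TP_k + #FP_k is the k-th column sum
precision_k <- diag(cm) / colSums(cm)
# class "a": 50 / (50 + 10); class "b": 35 / (35 + 5)
```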
Examples

# 1) recode Iris to a binary
# classification problem
iris$species_num <- as.numeric(
  iris$Species == "virginica"
)

# 2) fit the logistic regression
model <- glm(
  formula = species_num ~ Sepal.Length + Sepal.Width,
  data    = iris,
  family  = binomial(
    link = "logit"
  )
)

# 3) generate predicted classes
predicted <- factor(
  as.numeric(
    predict(model, type = "response") > 0.5
  ),
  levels = c(1, 0),
  labels = c("Virginica", "Others")
)
# 3.1) generate actual
# classes
actual <- factor(
x = iris$species_num,
levels = c(1,0),
labels = c("Virginica", "Others")
)
# 4) evaluate class-wise performance
# using Precision
# 4.1) unweighted Precision
precision(
actual = actual,
predicted = predicted
)
# 4.2) weighted Precision
weighted.precision(
actual = actual,
predicted = predicted,
w = iris$Petal.Length/mean(iris$Petal.Length)
)
# 5) evaluate overall performance
# using micro-averaged Precision
cat(
"Micro-averaged Precision", precision(
actual = actual,
predicted = predicted,
micro = TRUE
),
"Micro-averaged Precision (weighted)", weighted.precision(
actual = actual,
predicted = predicted,
w = iris$Petal.Length/mean(iris$Petal.Length),
micro = TRUE
),
sep = "\n"
)