Usage

## S3 method for class 'factor'
precision(actual, predicted, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'factor'
weighted.precision(actual, predicted, w, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'cmatrix'
precision(x, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'factor'
ppv(actual, predicted, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'factor'
weighted.ppv(actual, predicted, w, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'cmatrix'
ppv(x, micro = NULL, na.rm = TRUE, ...)

## Generic S3 method
precision(..., micro = NULL, na.rm = TRUE)

## Generic S3 method
weighted.precision(..., w, micro = NULL, na.rm = TRUE)

## Generic S3 method
ppv(..., micro = NULL, na.rm = TRUE)

## Generic S3 method
weighted.ppv(..., w, micro = NULL, na.rm = TRUE)
Precision

Description

A generic function for the precision. Use weighted.precision() for the weighted precision.

Other names

Positive Predictive Value
Arguments
actual
    A vector of <factor> values with the actual classes.

predicted
    A vector of <factor> values with the predicted classes.

micro
    A <logical> value (default: NULL). If TRUE, the metric is micro-averaged across classes; if FALSE, it is macro-averaged.

na.rm
    A <logical> value (default: TRUE). If TRUE, NA values are removed before the metric is computed.

...
    Arguments passed into other methods.

w
    A <numeric> vector of sample weights.

x
    A confusion matrix (an object of class 'cmatrix').
Value
If micro is NULL (the default), a named <numeric> vector of length k (one value per class).

If micro is TRUE or FALSE, a <numeric> vector of length 1.
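The two return shapes can be sketched in base R; the per-class true-positive and false-positive counts below are made-up illustration values, not package output:

```r
# Hypothetical per-class true-positive and false-positive counts
tp <- c(A = 40, B = 25, C = 10)
fp <- c(A = 10, B = 5,  C = 15)

# micro = NULL: a named <numeric> vector of length k
# (one precision value per class)
tp / (tp + fp)

# micro = TRUE: the counts are pooled across classes first,
# yielding a single <numeric> value of length 1
sum(tp) / (sum(tp) + sum(fp))
```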
Definition
Let \(\hat{\pi}_k \in [0, 1]\) be the proportion of true positives among the predicted positives for class k. The precision of the classifier is calculated as,

\[ \hat{\pi}_k = \frac{\#TP_k}{\#TP_k + \#FP_k} \]

Where:

- \(\#TP_k\) is the number of true positives, and
- \(\#FP_k\) is the number of false positives.
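The definition can be checked directly in base R with table(); this sketch assumes the common convention of actual classes in rows and predicted classes in columns, and uses a toy data set rather than package output:

```r
# Toy actual and predicted class labels
actual    <- factor(c("a", "a", "b", "b", "b", "c"))
predicted <- factor(c("a", "b", "b", "b", "c", "c"))

# Confusion matrix: rows = actual, columns = predicted
cm <- table(actual, predicted)

# #TP_k sits on the diagonal, and #TP_k + #FP_k is the column
# (predicted-positive) total, so class-wise precision is:
diag(cm) / colSums(cm)
```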
Examples
# 1) recode Iris to a binary
#    classification problem
iris$species_num <- as.numeric(
  iris$Species == "virginica"
)

# 2) fit the logistic regression
model <- glm(
  formula = species_num ~ Sepal.Length + Sepal.Width,
  data    = iris,
  family  = binomial(
    link = "logit"
  )
)

# 3) generate predicted classes
predicted <- factor(
  as.numeric(
    predict(model, type = "response") > 0.5
  ),
  levels = c(1, 0),
  labels = c("Virginica", "Others")
)

# 3.1) generate actual classes
actual <- factor(
  x      = iris$species_num,
  levels = c(1, 0),
  labels = c("Virginica", "Others")
)

# 4) evaluate class-wise performance
#    using Precision

# 4.1) unweighted Precision
precision(
  actual    = actual,
  predicted = predicted
)

# 4.2) weighted Precision
weighted.precision(
  actual    = actual,
  predicted = predicted,
  w         = iris$Petal.Length / mean(iris$Petal.Length)
)

# 5) evaluate overall performance
#    using micro-averaged Precision
cat(
  "Micro-averaged Precision", precision(
    actual    = actual,
    predicted = predicted,
    micro     = TRUE
  ),
  "Micro-averaged Precision (weighted)", weighted.precision(
    actual    = actual,
    predicted = predicted,
    w         = iris$Petal.Length / mean(iris$Petal.Length),
    micro     = TRUE
  ),
  sep = "\n"
)