fpr.factor | R Documentation

false positive rate
Description

The fpr()-function computes the False Positive Rate (FPR), also known as the fall-out (fallout()), between two vectors of predicted and observed factor() values. The weighted.fpr() function computes the weighted false positive rate.
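As a minimal illustration of the alias noted above (the toy vectors here are made up for illustration and are not part of the package), fpr() and fallout() should return the same class-wise values, and weighted.fpr() takes one weight per observation:

actual    <- factor(c("a", "a", "b", "b", "b"))
predicted <- factor(c("a", "b", "a", "b", "b"))

# fall-out is another name for the false positive rate,
# so these two calls should agree
fpr(actual = actual, predicted = predicted)
fallout(actual = actual, predicted = predicted)

# weighted variant with one weight per observation
weighted.fpr(actual = actual, predicted = predicted, w = c(1, 1, 2, 2, 2))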
Usage

## S3 method for class 'factor'
fpr(actual, predicted, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'factor'
weighted.fpr(actual, predicted, w, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'cmatrix'
fpr(x, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'factor'
fallout(actual, predicted, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'factor'
weighted.fallout(actual, predicted, w, micro = NULL, na.rm = TRUE, ...)

## S3 method for class 'cmatrix'
fallout(x, micro = NULL, na.rm = TRUE, ...)

fpr(...)

fallout(...)

weighted.fpr(...)

weighted.fallout(...)
Arguments
actual
|
A vector of |
predicted
|
A vector of |
micro
|
A |
na.rm
|
A |
…
|
Arguments passed into other methods |
w
|
A |
x
|
A confusion matrix created |
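The 'cmatrix' methods accept a precomputed confusion matrix instead of the two factor vectors. As a sketch, assuming the confusion matrix is built with a cmatrix(actual, predicted) constructor (the constructor call itself is an assumption and is not documented on this page), both routes should give the same class-wise values:

actual    <- factor(c("a", "a", "b", "b", "b"))
predicted <- factor(c("a", "b", "a", "b", "b"))

# assumed constructor: cmatrix(actual, predicted)
cm <- cmatrix(actual, predicted)

fpr(actual = actual, predicted = predicted)  # factor method
fpr(cm)                                      # cmatrix method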
Value

If micro is NULL (the default), a named <numeric>-vector of length k.

If micro is TRUE or FALSE, a <numeric>-vector of length 1.
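A short sketch of the two return shapes, using made-up toy vectors:

actual    <- factor(c("a", "a", "b", "b", "b"))
predicted <- factor(c("a", "b", "a", "b", "b"))

# micro = NULL (default): a named <numeric> vector with one value per class
fpr(actual = actual, predicted = predicted)

# micro = TRUE or FALSE: a single aggregated value
fpr(actual = actual, predicted = predicted, micro = TRUE)
fpr(actual = actual, predicted = predicted, micro = FALSE)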
Calculation

The metric is calculated for each class \(k\) as follows:

\[ \frac{\#FP_k}{\#FP_k + \#TN_k} \]

Where \(\#FP_k\) and \(\#TN_k\) represent the number of false positives and true negatives, respectively, for each class \(k\).
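As a worked check of this formula using only base R (a sketch of the arithmetic, not the package's implementation), the class-wise counts can be tabulated and plugged in directly:

actual    <- factor(c("a", "a", "b", "b", "b"))
predicted <- factor(c("a", "b", "a", "b", "b"))

cm <- table(Actual = actual, Predicted = predicted)

# false positives for class k: predicted as k, but belonging to another class
fp <- colSums(cm) - diag(cm)
# true negatives for class k: neither the actual nor the predicted class is k
tn <- sum(cm) - rowSums(cm) - colSums(cm) + diag(cm)

fp / (fp + tn)  # should match fpr(actual, predicted) with micro = NULL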
Examples
# 1) recode Iris
# to binary classification
# problem
iris$species_num <- as.numeric(
  iris$Species == "virginica"
)
# 2) fit the logistic
# regression
model <- glm(
  formula = species_num ~ Sepal.Length + Sepal.Width,
  data = iris,
  family = binomial(
    link = "logit"
  )
)
# 3) generate predicted
# classes
predicted <- factor(
  as.numeric(
    predict(model, type = "response") > 0.5
  ),
  levels = c(1, 0),
  labels = c("Virginica", "Others")
)
# 3.1) generate actual
# classes
actual <- factor(
x = iris$species_num,
levels = c(1,0),
labels = c("Virginica", "Others")
)
# 4) evaluate class-wise performance
# using False Positive Rate
# 4.1) unweighted False Positive Rate
fpr(
actual = actual,
predicted = predicted
)
# 4.2) weighted False Positive Rate
weighted.fpr(
actual = actual,
predicted = predicted,
w = iris$Petal.Length/mean(iris$Petal.Length)
)
# 5) evaluate overall performance
# using micro-averaged False Positive Rate
cat(
"Micro-averaged False Positive Rate", fpr(
actual = actual,
predicted = predicted,
micro = TRUE
),
"Micro-averaged False Positive Rate (weighted)", weighted.fpr(
actual = actual,
predicted = predicted,
w = iris$Petal.Length/mean(iris$Petal.Length),
micro = TRUE
),
sep = "\n"
)
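# For completeness, a sketch of the micro = FALSE counterpart of step 5,
# which returns a single aggregated value (see Value above):

# 5.1) False Positive Rate with micro = FALSE
fpr(
  actual = actual,
  predicted = predicted,
  micro = FALSE
)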