function calculateTPR() {
  // Read the confusion-matrix counts from the form inputs.
  var tp = parseFloat(document.getElementById('tp_input').value);
  var fn = parseFloat(document.getElementById('fn_input').value);
  var resultDiv = document.getElementById('tpr_result_container');
  // Validate: both inputs must be non-negative numbers.
  if (isNaN(tp) || isNaN(fn)) {
    alert("Please enter valid numbers for both TP and FN.");
    return;
  }
  if (tp < 0 || fn < 0) {
    alert("Values cannot be negative.");
    return;
  }
  // TPR is undefined when there are no actual positive cases.
  var totalActualPositives = tp + fn;
  if (totalActualPositives === 0) {
    alert("The sum of TP and FN must be greater than zero to calculate a rate.");
    return;
  }
  // Apply the formula: TPR = TP / (TP + FN).
  var tpr = tp / totalActualPositives;
  var tprPercent = tpr * 100;
  // Show the result as a decimal, a percentage, and a plain-language explanation.
  document.getElementById('tpr_decimal').innerText = tpr.toFixed(4);
  document.getElementById('tpr_percentage').innerText = tprPercent.toFixed(2) + "%";
  document.getElementById('tpr_explanation').innerText = "This means the model correctly identified " + tprPercent.toFixed(2) + "% of all actual positive cases.";
  resultDiv.style.display = 'block';
}
Understanding the True Positive Rate (TPR)
In machine learning and statistics, the True Positive Rate (TPR), also known as Sensitivity or Recall, measures the proportion of actual positive cases that were correctly identified by a model. It is a critical metric when the cost of missing a positive case is high, such as in medical diagnoses or fraud detection.
The True Positive Rate Formula
To calculate the True Positive Rate from a confusion matrix, you need two values:
True Positives (TP): The number of positive instances correctly predicted as positive.
False Negatives (FN): The number of positive instances incorrectly predicted as negative.
The formula is expressed as:
TPR = TP / (TP + FN)
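Outside a browser form, the formula reduces to a small standalone helper. The sketch below is illustrative; the function name truePositiveRate is not part of the calculator above, and the guard mirrors the calculator's check that TP + FN must be positive.

```javascript
// Computes the True Positive Rate (Sensitivity/Recall) from
// confusion-matrix counts: TPR = TP / (TP + FN).
function truePositiveRate(tp, fn) {
  const totalActualPositives = tp + fn;
  if (totalActualPositives === 0) {
    // The rate is undefined when there are no actual positive cases.
    throw new RangeError("TP + FN must be greater than zero.");
  }
  return tp / totalActualPositives;
}
```

For example, truePositiveRate(90, 10) returns 0.9, matching the worked example below.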
Step-by-Step Calculation Example
Imagine a diagnostic test for a specific condition. You have a confusion matrix with the following results:
True Positives (TP): 90 patients who have the condition and tested positive.
False Negatives (FN): 10 patients who have the condition but tested negative.
Using the formula:
Add TP and FN to find the total number of actual positive cases: 90 + 10 = 100.
Divide TP by the total: 90 / 100 = 0.90.
The True Positive Rate is 0.90 or 90%.
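The same steps can be traced in a few lines of code, using the numbers from the example:

```javascript
// Worked example from the text: 90 true positives, 10 false negatives.
const tp = 90;
const fn = 10;

// Step 1: total actual positive cases.
const totalActualPositives = tp + fn; // 90 + 10 = 100

// Step 2: divide TP by the total.
const tpr = tp / totalActualPositives; // 90 / 100 = 0.9

// Format as a percentage, as the calculator does.
const tprPercent = (tpr * 100).toFixed(2) + "%"; // "90.00%"
```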
Why TPR Matters
A high TPR indicates that the model is very good at "recalling" positive instances. For example, in cancer screening, a high True Positive Rate (Sensitivity) is vital because you want to ensure that as many people with the disease as possible are identified, even if it results in some false alarms (which would be captured by the False Positive Rate).
TPR vs. Precision
While TPR focuses on how many of the actual positives we caught, Precision focuses on how many of our predicted positives were actually correct. Both are essential parts of evaluating a model's performance via the confusion matrix.
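The contrast is easiest to see side by side. The counts below are hypothetical (the false-positive count fp does not appear in the article's example); note that the two metrics share TP but differ in the denominator:

```javascript
// Hypothetical confusion-matrix counts for one model.
const tp = 90; // actual positives predicted positive
const fn = 10; // actual positives predicted negative (missed)
const fp = 30; // actual negatives predicted positive (false alarms)

// Recall (TPR): of all actual positives, how many did we catch?
const recall = tp / (tp + fn); // 90 / 100 = 0.9

// Precision: of all predicted positives, how many were correct?
const precision = tp / (tp + fp); // 90 / 120 = 0.75
```

Here the model catches 90% of real positives but only 75% of its positive predictions are correct, which is why the two metrics are usually reported together.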