Machine Learning Win Rate Calculator
Enter the results from your model's confusion matrix to calculate its Overall Win Rate (Accuracy), Precision, and Recall.
In the context of Machine Learning (ML) classification models, the term "Win Rate" is often used interchangeably with Accuracy. It represents the percentage of time the model made a correct prediction, regardless of whether the correct prediction was a positive outcome (a "win") or a negative outcome (a "loss").
However, depending on your specific application—such as algorithmic trading bots, lead scoring, or fraud detection—a general accuracy score might not tell the whole story. That is why this calculator uses a confusion matrix approach to provide a more nuanced view of your model's performance.
The Inputs: The Confusion Matrix
To calculate a robust win rate, we break the model's predictions down against the actual outcomes into four counts (a sketch of how to tally them follows this list):
- True Positives (TP): The model predicted a "win" (positive class), and it actually was a "win".
- True Negatives (TN): The model predicted a "loss" (negative class), and it actually was a "loss".
- False Positives (FP): The model predicted a "win", but it was actually a "loss". (Also known as a Type I Error).
- False Negatives (FN): The model predicted a "loss", but it was actually a "win". (Also known as a Type II Error).
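For readers who want to compute these counts themselves, here is a minimal Python sketch; the label convention (1 for a "win", 0 for a "loss") and the sample data are assumptions made for illustration, not part of the calculator.

```python
# Tally the confusion-matrix cells from paired (predicted, actual) labels.
# Assumed convention for this sketch: 1 = "win" (positive), 0 = "loss" (negative).
def confusion_counts(predicted, actual):
    tp = tn = fp = fn = 0
    for pred, act in zip(predicted, actual):
        if pred == 1 and act == 1:
            tp += 1  # predicted a win, and it was a win
        elif pred == 0 and act == 0:
            tn += 1  # predicted a loss, and it was a loss
        elif pred == 1 and act == 0:
            fp += 1  # predicted a win, but it was a loss (Type I error)
        else:
            fn += 1  # predicted a loss, but it was a win (Type II error)
    return tp, tn, fp, fn

# Example: five predictions checked against five actual outcomes.
print(confusion_counts([1, 0, 1, 1, 0], [1, 0, 0, 1, 1]))  # (2, 1, 1, 1)
```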
The Metrics Explained
Overall Win Rate (Accuracy)
This is the most common interpretation of win rate in ML. It answers the question: "How often is the model correct overall?"
Formula: (TP + TN) / Total Predictions
While useful, accuracy can be misleading if your dataset is imbalanced (e.g., if 99% of your data are "losses", a model that always predicts "loss" earns a 99% accuracy "win rate" despite being useless at finding actual wins).
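To see that caveat in action, here is a short, self-contained Python sketch; the 99%-losses dataset and the always-predict-"loss" model are hypothetical.

```python
# Accuracy = (TP + TN) / total predictions.
def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

# Hypothetical imbalanced dataset: 990 losses, 10 wins.
# A model that always predicts "loss" gets TN = 990, FN = 10, TP = FP = 0.
print(f"{accuracy(tp=0, tn=990, fp=0, fn=10):.2%}")  # 99.00% accurate, yet it finds zero wins
```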
Precision (The "Positive Win Rate")
In scenarios like high-frequency trading or spam filtering, you care deeply about the quality of the positive predictions. Precision answers: "When the model predicts a win, how confident can I be that it's actually a win?" High precision means very few false alarms.
Formula: TP / (TP + FP)
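In code, precision is a one-liner (a minimal sketch; the zero-division guard and the spam-filter numbers are illustrative assumptions):

```python
# Precision = TP / (TP + FP): of all predicted wins, the fraction that were real wins.
def precision(tp, fp):
    if tp + fp == 0:
        return 0.0  # the model never predicted a win; precision is undefined, report 0
    return tp / (tp + fp)

# A spam filter that flagged 40 messages, 36 of which really were spam:
print(f"{precision(tp=36, fp=4):.2%}")  # 90.00% -- 10% of its flags were false alarms
```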
Recall (The Capture Rate)
In scenarios like medical diagnosis or fraud detection, missing a real positive case is disastrous. Recall answers: "Of all the actual winning opportunities that existed, how many did the model successfully capture?" High recall means very few missed opportunities.
Formula: TP / (TP + FN)
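The matching sketch for recall, under the same illustrative assumptions:

```python
# Recall = TP / (TP + FN): of all actual wins, the fraction the model captured.
def recall(tp, fn):
    if tp + fn == 0:
        return 0.0  # no actual wins in the data; recall is undefined, report 0
    return tp / (tp + fn)

# A fraud detector that caught 45 of the 50 real fraud cases:
print(f"{recall(tp=45, fn=5):.2%}")  # 90.00% -- it missed 10% of the real cases
```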
Example Scenario
Imagine an ML model designed to detect profitable stock trades ("wins"). You run it on 225 historical samples.
- It correctly identified 85 profitable trades (TP: 85).
- It correctly rejected 120 unprofitable trades (TN: 120).
- It incorrectly suggested 15 trades that lost money (FP: 15).
- It missed 5 trades that would have made money (FN: 5).
Using the calculator above, the Overall Win Rate (Accuracy) is 91.11%. However, the Precision is 85.00% (meaning 15% of the time it says "buy", it's wrong), and the Recall is 94.44% (meaning it captured nearly 95% of all profitable opportunities).
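You can verify those three figures with a few lines of self-contained Python using the counts from the scenario:

```python
tp, tn, fp, fn = 85, 120, 15, 5   # counts from the trading example above
total = tp + tn + fp + fn          # 225 historical samples

print(f"Accuracy:  {(tp + tn) / total:.2%}")  # 91.11%
print(f"Precision: {tp / (tp + fp):.2%}")     # 85.00%
print(f"Recall:    {tp / (tp + fn):.2%}")     # 94.44%
```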