“Who will win a game of Jeopardy!?” is a question that many fans have tried to answer over the years, and one that I have been trying to answer for a long time.
The methodology that you are about to read has evolved over the past six years; the first version was used for the All-Star Games in 2019. The basics are still the same—and I’ll be rehashing those basics in this article—but the calculations have been refined.
Essentially, what I am now calling the Unified Prediction Model—as it can be used to predict both regular play and tournament matches—takes a calculated “performance mean” and “performance variance” for each player and uses those values as inputs to a Monte Carlo simulation. Rankings of contestants can also be generated from either their performance means alone or a combination of mean and variance (as I believe contestants who make bigger bets on Daily Doubles have more variance, and their games are more exciting for viewers as a result). The relative rankings of the players can then be used to predict how players would perform against each other on Tournament of Champions-level or higher material.
So, What Are The Basics?
As was the case in the initial version of the model in 2019, a player’s performance on Jeopardy! can be broken down into three parts: 1) how they do on regular clues; 2) how they do on Daily Doubles; and 3) how they do in Final Jeopardy. This part has not changed—a player’s raw performance mean is still simply the sum of those three numbers. The calculation of each, though, has been refined over the years.
Throughout this article, I’ll be using Victoria Groce, the most recent Masters champion as of publication, to illustrate my calculations.
Part 1: Calculating The Numbers
As mentioned in the previous section, several component values need to be calculated to reach a player’s performance mean and variance.
How The Average Is Taken Has Changed In This Model
Back in 2019, the model weighted each of a player’s games roughly equally (though past regular play was given a slightly lower weighting). However, since players often get better over the course of their careers (or potentially worse after a certain point as they age), it makes sense to give more weight to a player’s more recent performances. Thus, a weighted average is now used when averaging a player’s performance on clues and Daily Doubles: a player’s first game gets 1 point of weighting, their second game gets 2 points, and so on through however many games they have played. Victoria, having played 18 games, thus has her first game from 2005 weighted at 1/171, while her most recent game at this point, Game 2 of the 2024 Masters final, is weighted at 18/171. (The denominator of 171 is the sum of the numbers 1 through 18.)
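To make the weighting concrete, here is a minimal Python sketch of the recency weighting. The function names are mine; the article does not specify an implementation, and I am assuming the standard deviation is taken under the same weights as the mean:

```python
def weighted_average(values):
    """Weighted mean of per-game stats: game n gets weight n.

    The weights 1, 2, ..., n sum to n*(n+1)/2 -- for Victoria's
    18 games, that is 18*19/2 = 171.
    """
    n = len(values)
    total_weight = n * (n + 1) / 2
    return sum(w * v for w, v in enumerate(values, start=1)) / total_weight

def weighted_std(values):
    """Standard deviation under the same weights (an assumption on my
    part; the article does not spell out how its SDs are computed)."""
    mean = weighted_average(values)
    n = len(values)
    total_weight = n * (n + 1) / 2
    variance = sum(w * (v - mean) ** 2
                   for w, v in enumerate(values, start=1)) / total_weight
    return variance ** 0.5
```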
Regular Clues and “Base Coryat”
In my initial version of this model, I made reference to “low-value” and “high-value” clues. That has not changed at all. For each game a player plays in, “net correct responses” (correct responses, less incorrect responses) are tracked separately for “low-value” and “high-value” clues. “Low-value” means:

- the entire board in Kids Week;
- the top four rows of the board in Teen and Celebrity play;
- the top three rows of the board in regular, College, Teachers, Professors, and pre-ToC postseason play; and
- the top two rows of the board in any Tournament of Champions, JIT, Masters, or past reunion event.

“High-value” is the rest of the board (the bottom row in Teen/Celebrity play, the bottom two rows in regular play, and the bottom three rows at the ToC level and above).
By way of example, in Victoria’s initial game on September 19, 2005, she picked up a net 8 correct on low-value clues and a net 6 on high-value; in her most recent game, Game 2 of the 2024 Masters final, she picked up a net 12 on low-value and a net 12 on high-value. Across her initial regular-play games, the JIT, and Masters, Victoria has a weighted average of 8.04 net correct on low-value clues (standard deviation 3.48) and 12.95 on high-value clues (standard deviation 3.66).
Now, this next step deviates slightly depending on whether you are using this number to predict a regular-difficulty game or a higher-difficulty game. Because “low-value” clues are the top three rows at regular difficulty but just the top two rows at higher difficulty, their respective weightings differ. To come up with a “Base Coryat” mean and standard deviation for regular play, multiply your low-value weighted average (and standard deviation) by 600 (the average low-value clue value at regular difficulty), multiply your high-value weighted average (and standard deviation) by 1,350 (the average high-value clue value at regular difficulty), and add the two products together. For ToC-level predictions, the respective multipliers are 450 for the low-value and 1,200 for the high-value.
Thus, Victoria’s “Base Coryat” for tournament play is 450 * 8.04 + 1200 * 12.95 = 19,162, with a standard deviation of 450 * 3.48 + 1200 * 3.66 = 5,965. (If calculating for regular play, the numbers would be a mean of 22,307 and a standard deviation of 7,029.)
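In code, a sketch of the Base Coryat step might look like the following (the function name and dictionary labels are mine; the multipliers are from above). The small differences from the figures just given come from using the rounded averages shown in this article rather than the unrounded ones:

```python
# Average clue values per the article: (low-value, high-value)
MULTIPLIERS = {
    "regular": (600, 1350),
    "tournament": (450, 1200),
}

def base_coryat(low_avg, high_avg, low_sd, high_sd, difficulty):
    """Combine weighted low/high-value averages into a Base Coryat mean
    and standard deviation (SDs combined linearly, as in the article)."""
    low_mult, high_mult = MULTIPLIERS[difficulty]
    mean = low_mult * low_avg + high_mult * high_avg
    sd = low_mult * low_sd + high_mult * high_sd
    return mean, sd

# Victoria at tournament difficulty, using the rounded averages above:
print(base_coryat(8.04, 12.95, 3.48, 3.66, "tournament"))  # (19158.0, 5958.0)
```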
Adding Daily Doubles and Final Jeopardy
At this point, a player’s weighted average of net gain on Daily Doubles per game (and the standard deviation of those per-game net gains) is added to the Base Coryat. In Victoria’s case, her net gain on Daily Doubles in her first game was $5,600 (a loss of $2,200 and a gain of $7,800); her net gain in her most recent game was $14,400 (gains of $13,600 and $800). Her career weighted average is $8,378 (standard deviation $8,650).
For Final Jeopardy, a player’s overall career percentage in Final Jeopardy is translated into a dollar figure to add. First, if a player has played Final Jeopardy fewer than 10 times, their record is adjusted to make the denominator 10, with 0.5 correct responses added to the numerator for each game short of 10. (For example, 5-for-7, or 71.4%, in Final Jeopardy adjusts to 6.5-for-10, or 65%.) Then, the inverse of the standard normal cumulative distribution function is applied to that percentage, and the result is multiplied by 5,000 to give the number to add. Nothing gets added here for Victoria, whose career Final Jeopardy percentage is 50%. However, a player like Ben Ingram—who is 93.33% lifetime in Final Jeopardy—gets a boost of 7,504 to his performance mean. As Jennifer Quail demonstrated in her quarterfinal last year against Brandon Blackwell and Alex Jacob, being strong in Final Jeopardy means that as long as you’re in contention going into Clue #61, you may have a better chance than others who are not as strong in that part of the game. Because the standard normal distribution has a standard deviation of 1, 5,000 is always added to a player’s performance variance at this point. This also has the advantage of always introducing some variance into the Monte Carlo simulation for a player, even if they have identical values on low-value clues, high-value clues, and Daily Doubles.
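As a sketch, the Final Jeopardy adjustment can be computed with the inverse normal CDF from scipy. The padding rule and the 5,000 multiplier are from the article; the function name is my own:

```python
from scipy.stats import norm

def final_jeopardy_boost(correct, attempts):
    """Translate a career Final Jeopardy record into a dollar adjustment."""
    if attempts < 10:
        # Pad to a denominator of 10: each missing game adds 0.5 correct.
        correct += 0.5 * (10 - attempts)
        attempts = 10
    pct = correct / attempts
    return norm.ppf(pct) * 5000  # inverse standard normal CDF, scaled

print(round(final_jeopardy_boost(5, 7)))  # 5-for-7 pads to 65% -> 1927
print(round(norm.ppf(0.9333) * 5000))     # Ben Ingram's 93.33% -> 7504
```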
I’ve considered adjusting the Final Jeopardy number to account for regular-play Final Jeopardy versus Tournament of Champions-level Final Jeopardy, but I have chosen not to, because I believe the opponent-strength adjustments at various levels (see two sections below) already account for this in a player’s overall numbers.
Putting It Together: The Performance Mean
In order to create a Performance Mean and Performance Variance for each player, the Base Coryat, Daily Double, and Final Jeopardy numbers are added together—the means on one side, and the standard deviations (plus the flat 5,000 from Final Jeopardy) on the other.
For Victoria, adding together these numbers gives us a Performance Mean of 19,162 + 8,378 + 0 = 27,540 and a Performance Variance of 5,965.48 + 8,650.13 + 5,000 = 19,615.61 (rounded up to 19,616).
Adjusting For Opponent Strength: The Adjusted Performance Mean
Past models stopped at the previous point. However, it seems obvious that it is much easier to put up good numbers against regular-play opposition than against James Holzhauer and Yogesh Raut. Thus, an “opponent strength” is computed for each game, and a weighted average of those per-game strengths is taken. If the prediction model is tracking an opponent, that opponent’s Adjusted Performance Mean (yes, taking into account that player’s own opponent strength) is used. For a regular-play opponent (or a player in a Teen/High School Reunion Tournament) who did not make the Tournament of Champions or a JIT, that strength is 7,500. For Season 1 of Celebrity Jeopardy, it is 4,500; for Season 2 of Celebrity Jeopardy or teen tournaments, 6,000. For Kids Week, it is 4,000. Additionally, for the four-player Super Jeopardy quarterfinals from 1990, the opponent strength is multiplied by 4/3, as players faced what could potentially be a four-way buzzer race instead of a three-way one. And finally, for a player who made the Tournament of Champions (or higher) whose stats are not yet being tracked, that number is 10,000. (For the Excel nerds out there: yes, this means I had to turn on iterative calculations because of the intentional circular references.)
For Victoria, the fact that she had to play David Madden (adjusted performance mean 13,607 at publication time) and one other regular play opponent in her initial game gives her an opponent strength for Game 1 of 10,554. For her most recent game, Game 2 of the 2024 Masters final, her opponents James Holzhauer (21,514) and Yogesh Raut (19,714) gave her an opponent strength of 20,614. Over the course of her 18 games, her weighted average in terms of opponent strength is currently 18,479.
To come up with the Adjusted Performance Mean, one multiplies the Performance Mean by one-third and the Opponent Strength by two-thirds (this gives each player in a game “one-third” weighting); for Victoria, this is 27,540 * (1/3) + 18,479 * (2/3) = 21,499.
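Here is a minimal sketch of the adjustment, with the default strengths from above (the dictionary keys are my own labels). Note that because tracked opponents contribute their own Adjusted Performance Means, the real system is circular and must be solved iteratively, as with Excel’s iterative calculation:

```python
# Default strengths for opponents the model does not track:
DEFAULT_STRENGTH = {
    "regular": 7500,            # regular play / Teen Reunion, no ToC or JIT
    "celebrity_s1": 4500,
    "celebrity_s2_or_teen": 6000,
    "kids": 4000,
    "toc_untracked": 10000,     # made a ToC (or higher) but not yet tracked
}

def adjusted_performance_mean(performance_mean, opponent_strength):
    """One-third weight to the player's own mean, two-thirds to the
    weighted average strength of their opposition."""
    return performance_mean * (1 / 3) + opponent_strength * (2 / 3)

# Victoria: 27,540 and 18,479 -> about 21,499
print(round(adjusted_performance_mean(27540, 18479)))  # 21499
```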
The “Excitement Factor”: One Standard Deviation Above The Mean
In order to account for players like Roger Craig who like to make big bets on Daily Doubles, an “Excitement Factor” can also be calculated as an alternative ranking statistic. It is computed by adding a player’s Performance Variance to their Adjusted Performance Mean. For Victoria, that is 21,499 + 19,616 = 41,115. By way of example, when ranking by Adjusted Performance Mean, a player like Roger Craig, who at publication time is ranked 46th of the 500+ tracked players with an APM of 14,574, moves to 5th in the Excitement Factor rankings when his standard deviation of 22,436 is added.
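In code, the ranking statistic is just the sum (a sketch using the figures above):

```python
def excitement_factor(adjusted_mean, performance_variance):
    """Rank players one 'standard deviation' above their adjusted mean."""
    return adjusted_mean + performance_variance

print(excitement_factor(21499, 19616))  # Victoria: 41115
print(excitement_factor(14574, 22436))  # Roger Craig: 37010
```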
Part 2: Making Predictions
I mentioned in my introduction that a player’s adjusted performance mean and performance variance are used as inputs into a Monte Carlo simulation. Tournament play is relatively trivial: a Python script or Excel function run many thousands of times can do that with the three players’ adjusted performance means and performance variances. But how do you handle regular play? First, to handle the variance issue, a “null game” (0 net low-value, 0 net high-value, and $0 net earned on Daily Doubles) is added to the current regular-play champion’s statistical record on the variance side, solely to introduce more variance. Then, in order to give the champion representative opposition, the challengers’ statistics from a moving window of the most recent 200 regular-play games are compiled (in terms of low-value, high-value, Daily Double net gain, and overall Final Jeopardy get rate) into a performance mean and performance variance for the average challenger. Then, much as for tournament play, the performance numbers for the three players are put into a Monte Carlo simulation to determine the champion’s chances of victory in the next game.
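Here is a minimal sketch of the tournament-play case, assuming each player’s game score is drawn from a normal distribution with their Adjusted Performance Mean as the mean and their “performance variance” as the standard deviation (the normality assumption is mine; the article does not name a distribution). James’s and Yogesh’s spreads below are back-calculated from the Excitement Factor list (EF minus APM):

```python
import random

def simulate_match(players, trials=100_000):
    """Estimate win probabilities for a three-player game via Monte Carlo."""
    wins = {name: 0 for name, _, _ in players}
    for _ in range(trials):
        # Draw each player's game score from a normal distribution.
        scores = {name: random.gauss(mean, sd) for name, mean, sd in players}
        wins[max(scores, key=scores.get)] += 1
    return {name: count / trials for name, count in wins.items()}

matchup = [
    ("Victoria Groce", 21499, 19616),
    ("James Holzhauer", 21514, 23831),  # 45,345 - 21,514
    ("Yogesh Raut", 19714, 14884),      # 34,598 - 19,714
]
print(simulate_match(matchup))
```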
Part 3: Rankings
When I began drafting this article, I had about 40% of all Tournament of Champions-level players tracked. However, as I realized the importance of opponent strength in the rankings, I decided it would be better to track every player who has been invited to a Tournament of Champions (or would have been, had it not been for death or personal misconduct), a JIT, or a higher-level super-tournament. As of publication time, that number is 524 players.
Here is the list of the current Top 25 players, through the February 7, 2025 game, as ranked by Adjusted Performance Mean:
1. James Holzhauer, 21,514
2. Victoria Groce, 21,499
3. Watson, 20,769 (no longer playing)
4. Yogesh Raut, 19,714
5. Troy Meyer, 19,055
6. Andrew He, 16,994
7. Matt Amodio, 16,992
8. Ken Jennings, 16,766 (no longer eligible)
9. Leszek Pawlowicz, 16,544
10. Amy Schneider, 16,472
11. Larissa Kelly, 16,430
12. Ben Chan, 16,358
13. Emma Boettcher, 16,339
14. Brad Rutter, 16,274
15. Matt Jackson, 16,263
16. Dan Melia, 16,126
17. Michael Daunt, 16,088
18. David Siegel, 16,057
19. Mike Dupée, 16,020
20. Sam Buttrey, 15,991 (no longer eligible)
21. Jerome Vered, 15,917
22. Bob Blake, 15,806
23. Bruce Simmons, 15,736
24. Leah Greenwald, 15,664
25. Tom Cubbage, 15,644
And the Top 10 players, per Excitement Factor:
1. James Holzhauer, 45,345
2. Victoria Groce, 41,115
3. Matt Amodio, 37,082
4. Troy Meyer, 37,038
5. Roger Craig, 37,010
6. Leszek Pawlowicz, 36,415
7. Andrew He, 35,955
8. Brad Rutter, 34,974
9. Watson, 34,800 (no longer playing)
10. Yogesh Raut, 34,598