NBA Predictions For 2022

While I’m not really a fan of the “I will explain a big idea in a 300-page best seller” format, one exception for me is Philip Tetlock’s Superforecasting: The Art and Science of Prediction.

There’s a lot going on in this book and this isn’t going to be a book review, but one of the takeaways for me was that making good predictions is a skill, and it’s a skill you can improve with practice. That means making a lot of testable predictions, and checking them for accuracy after the fact. These predictions don’t all need to be hugely important or counterintuitive, but they should be probabilistic, so you can check in the future whether events you forecasted to happen 40% of the time actually occurred roughly 40% of the time. If you pick a lot of “low-hanging fruit”, your after-the-fact accuracy will reflect as much.

A bunch of bloggers, most prominently for me Matt Yglesias and Scott Alexander, have begun making annual predictions in this format, logging them, and then scoring them after the fact. I’ve decided to try to do the same this year. The format is to make binary, testable predictions for events to occur or not occur in 2022 specifically. This will be a mostly NBA-specific list of predictions, since my views on politics and current events are dull, although I may do a separate post about that stuff too down the road.

The specific predictions below were sourced from some friends, as well as a Twitter thread AMA I opened.

The Predictions

  1. The Lakers win the NBA championship (2%).
  2. The Lakers make it past the play-in games (55%).
  3. The Memphis Grizzlies host a home playoff series (85%).
  4. The Memphis Grizzlies win a playoff series (45%).
  5. The Timberwolves are a top 6 seed (35%).
  6. The Milwaukee Bucks make the NBA finals (40%).
  7. The Dallas Mavericks win a playoff series (55%).
  8. The Chicago Bulls finish as the 1 seed in the East (35%).
  9. The Brooklyn Nets win the NBA championship (35%).
  10. The NBA finals winner opens as the betting favorite for the 2022-23 season (25%).
  11. Ben Simmons is traded before the 2022 trade deadline (45%).
  12. Ben Simmons is traded (80%).
  13. Kyrie Irving plays a home game in Brooklyn before the end of the 2022 season (70%).
  14. Any player who played on a 2-way or other replacement contract in 2022 will play at least 10 mpg in the NBA finals (30%).
  15. Klay Thompson to shoot over 40% from three in the 2022 NBA season (70%).
  16. Steph Curry to finish the 2022 season over 40% from three (40%).
  17. DeAndre Ayton to sign a max extension (90%).
  18. Miles Bridges to sign a max extension (65%).
  19. Ron Artest III to play in the NBA this season (70%).
  20. The finals MVP will be a first-time winner (65%).
  21. Trae Young will be rated higher in DPM than Luka Doncic at the end of the 2022 season (80%)*.
  22. Trae Young will be rated higher in DPM than Luka Doncic on 12/31/2022 (50%).
  23. Zion Williamson to play at least 100 minutes in the 2022 season (30%).
  24. Zion Williamson to play more NBA minutes in the 2022 calendar year than Michael Porter Jr. (80%).
  25. Zion Williamson is traded (5%).
  26. Giannis remains #1 in DPM wire-to-wire through 12/31/22 (70%).
  27. Ja Morant will have a higher DPM than Darius Garland on 12/31/22 (55%).
  28. Ja Morant will have a higher DPM than Luka Doncic on 12/31/22 (25%).
  29. Damian Lillard is traded (20%).
  30. Collin Sexton is a Cavalier for the first game of the 2022-23 season (35%).
  31. Collin Sexton gets at least $40M in guaranteed money (25%).
  32. Russell Westbrook is traded (40%).
  33. De’Aaron Fox is traded (25%).
  34. Jaylen Brown is traded (20%).
  35. Clint Capela is traded (30%).
  36. Malik Monk will average 30+ mpg in the playoffs (10%) [if the Lakers don’t make the playoffs, this is a loss].
  37. Russell Westbrook comes off the bench in a 2022 playoff game (5%).
  38. At least one player who started multiple conference finals games will miss an NBA Finals game due to COVID protocols (15%).
  39. Jarred Vanderbilt will be top-15 in D-DPM at end of season (30%).
  40. 2022 NBA MVP plays under 70 games (65%).
  41. 2022-23 NBA Salary Cap set over $119M (70%).
  42. Nikola Jokic repeats as MVP (25%).
  43. RJ Barrett has a +1 DPM before 12/31/22 (15%).
  44. Scottie Barnes is the #1 ranked sophomore in DPM on 12/31/22 (40%).
  45. Klay Thompson’s DPM will be >0 at the end of the 2022 regular season (51%).
  46. Bradley Beal will end the 2022 regular season with a TS% of at least 55.6% (70%).
  47. Isaiah Hartenstein to get above the taxpayer MLE this offseason (60%).
  48. DARKO’s preseason-only wins predictions beat out the DARKO player+preseason model.
  49. Evan Mobley to make the all-star team in 2022 (20%).
  50. Any player to score 60 points in the 2022 season (75%).
  51. I roll out NCAAB DARKO (35%).
  52. I get a blue checkmark on twitter (20%).

I may supplement these with a few more in the coming days. Then look out for me to grade these early next year, along with grading the calibration curve (i.e., was I systematically over- or underconfident).
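For the curious, grading a list like this can be done mechanically. Here's a minimal sketch of how I might score it, using a Brier score plus calibration buckets; the `preds` list below is purely hypothetical data for illustration, not my actual predictions:

```python
def brier_score(predictions):
    """Mean squared error between forecast probability and 0/1 outcome.
    Lower is better; always guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in predictions) / len(predictions)

def calibration_buckets(predictions, width=0.2):
    """Group forecasts into probability buckets, and compare the mean
    forecast in each bucket to the observed hit rate."""
    n_buckets = int(1 / width)
    buckets = {}
    for p, o in predictions:
        key = min(int(p / width), n_buckets - 1)  # clamp p == 1.0
        buckets.setdefault(key, []).append((p, o))
    return {
        k: (
            sum(p for p, _ in v) / len(v),   # mean forecast
            sum(o for _, o in v) / len(v),   # observed frequency
        )
        for k, v in sorted(buckets.items())
    }

# Hypothetical (probability assigned, did it happen) pairs.
preds = [(0.9, 1), (0.8, 1), (0.7, 0), (0.3, 0), (0.2, 1), (0.1, 0)]
print(brier_score(preds))
print(calibration_buckets(preds))
```

If the observed frequency in each bucket roughly tracks the mean forecast, the forecaster is well calibrated; systematic gaps in one direction indicate over- or underconfidence.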

NBA Stabilization Rates and the Padding Approach

Stabilization Rates

One of the most common issues NBA fans and analysts grapple with when analyzing players is when a new level of performance becomes ‘real’, e.g., “Marcus Smart is shooting 54% from three after 4 games: has he had a breakout, or is this just noise?” Enter the concept of ‘stabilization rates’, which is the idea that a player needs some number of shooting attempts or possessions before they ‘own’ a skill (alternatively you can use DARKO).

Stabilization rates are a concept originally popularized by Russell Carleton (AKA Pizza Cutter) writing about baseball, but they have come to basketball as well via Darryl Blackport, Krishna Narsu, Nathan Walker, myself (and again), and others. In short, the idea is that at some point in a season, the skill aspect of a player’s performance starts to outweigh the inherent randomness in player performance, and stabilization rates are an attempt to figure out when that point is.

There are a lot of different ways to calculate stabilization rates – Carleton and Blackport used something called a “Kuder-Richardson 21 reliability score”, which involves grouping plate appearances/shooting attempts into buckets, and looking for the relationship between those buckets. Narsu simply looked for the point in the season at which a team’s current performance has an R^2 of more than 0.5 with its final statistics in a given metric. I prefer a predictive approach, via the ‘padding method’, which I first read about from Justin Kubatko, but which is commonly employed by Tangotiger as well, and which I presume has some long, storied history I’m ignorant of.

The Padding Method

Under the ‘padding method’, at any given point in the season, you ‘pad’ a player’s performance with some sample of league-average performance, and that’s your projection of a player’s talent going forward the rest of the season. To take three point attempts as an example, Duncan Robinson opened the 2019-2020 season by shooting 17 of 34 from three (50%). There’s presumably some randomness in that performance, so we ‘pad’ both the numerator and the denominator there with some amount of league average performance (call this number X), to get a projection in the form of:

expected_fg3_pct = (17 + X*league_average_fg3_pct)/(34 + X)

If the padding number for threes is 240 (more on how to calculate this later), and the league average three-point-percentage is .355, then our projection becomes:

expected_fg3_pct = (17 + 240*.355)/(34 + 240) = .373

So Robinson’s 34 attempts from three have raised our estimate of his three-point shooting talent from .355 (league average) to .373 (good, not great shooter). I think that roughly aligns with our intuitions as well – 34 attempts just isn’t very many, so we can’t go too wild, but it’s a nice sign all the same.
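The projection formula above is a one-liner in code. Here's a minimal sketch (the function name is mine, not from the original):

```python
def padded_projection(makes, attempts, league_avg, padding):
    """'Pad' a player's observed shooting with `padding` attempts of
    league-average performance to get a rest-of-season projection."""
    return (makes + padding * league_avg) / (attempts + padding)

# Duncan Robinson, opening of 2019-20: 17-of-34 from three,
# with a padding value of 240 and a league average of .355.
print(round(padded_projection(17, 34, 0.355, 240), 3))  # 0.373
```

The same function works at any point in the season; only the player's own makes and attempts change.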

The beauty and elegance of the padding method is that the amount of padding remains constant throughout the season. You don’t need to use different coefficients depending on how much of the season has elapsed. At all times, you just add 240 attempts of league average performance to a player’s shooting to get their future projection. Early on, the 240 ‘padding’ attempts totally dominate a player’s projection. But eventually, as the sample size grows, the player’s actual performance has a bigger and bigger impact.

Applying that to Robinson, you get the chart below, showing the progression of his projected shooting in red, and his actual YTD performance in blue. The cadence is the same, but the projection is naturally more conservative.

[Chart: note even the more conservative projections eventually agree Robinson is an elite shooter]

Padding Values Are Stabilization Rates

So what does the padding approach have to do with stabilization rates? Well – they’re actually the same thing. Before games on January 3, 2020, Duncan Robinson had taken 237 threes, and made 110 of them. At that point in the season, his projection was:

expected_fg3_pct = (110 + 240*.355)/(237 + 240) = .409

Meaning at that point, roughly half his projection was the 237 attempts he’d actually taken, and half was the 240 padding attempts. After that game, the 240 padding attempts made up less and less of his projection, and his projection consisted mostly of his own performance. I emphasize ‘mostly’ here to reinforce that there’s nothing magical about the ‘stabilization point’, or about 50%. We still add 240 league average attempts to Robinson’s projection even after this point – they just have less and less relative impact. (I’m not a huge fan of the term ‘stabilization’ for this reason, since it implies that after X attempts a player’s performance is “stable”, but I’ve given up fighting on this point.)
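Put differently, with n attempts and a padding value of 240, the player's own performance carries n/(n + 240) of the weight in the projection. A quick sketch (function name mine) makes the smooth transition explicit:

```python
def own_weight(attempts, padding=240):
    """Fraction of the projection driven by the player's own attempts;
    the remainder comes from the league-average padding."""
    return attempts / (attempts + padding)

# Early season, near the 'stabilization point', and a full season:
for n in (34, 237, 750):
    print(n, round(own_weight(n), 3))
```

At 34 attempts the player's own shooting carries only about 12% of the weight; at 237 attempts it's roughly half; even at 750 attempts the padding still contributes about a quarter. Nothing “flips” at the stabilization point.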

I’m a big fan of the ‘padding approach’ as a way to address the stabilization question for its simplicity and directness. It doesn’t involve somewhat opaque statistical methods such as KR21 rates, and it directly answers the question most people want to know when they’re discussing stabilization: how do we project this player going forward?

With that somewhat lengthy preamble out of the way, here are the padding numbers/stabilization numbers for the main basic and advanced box-score metrics, sorted by most ‘stable’ to least, along with what I suggest ‘padding’ each metric’s performance with.

These were calculated the simplest way I know how – trial and error via differential evolution, finding the best ‘padding’ number to add to year-to-date performance for each player, after every game of the season, for every season since 2001, to best predict rest-of-the-year performance (about 750K rows of data).
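To illustrate the fitting idea, here's a toy version of that procedure on simulated shooters. To keep it dependency-free I use a simple grid search as a stand-in for differential evolution, and the data is synthetic rather than the real ~750K-row dataset, so the recovered padding value is illustrative only:

```python
import random

random.seed(0)

# Simulate shooters whose true 3P talent is scattered around a
# league average of .355, then split each into a 100-attempt
# "year to date" sample and a 300-attempt "rest of season" sample.
LEAGUE_AVG = 0.355
players = []
for _ in range(500):
    talent = random.gauss(LEAGUE_AVG, 0.04)
    ytd_n, ros_n = 100, 300
    ytd_makes = sum(random.random() < talent for _ in range(ytd_n))
    ros_pct = sum(random.random() < talent for _ in range(ros_n)) / ros_n
    players.append((ytd_makes, ytd_n, ros_pct))

def sse(padding):
    """Squared error of padded YTD projections vs. rest-of-season results."""
    total = 0.0
    for makes, n, ros_pct in players:
        proj = (makes + padding * LEAGUE_AVG) / (n + padding)
        total += (proj - ros_pct) ** 2
    return total

# Grid search over candidate padding values (the real version
# uses differential evolution over far more data).
best = min(range(0, 1001, 10), key=sse)
print(best, round(sse(best), 4))
```

The best padding value is whatever minimizes out-of-sample error; with more spread in true talent the optimal padding shrinks, and with noisier samples it grows.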

As you can see, minutes are extremely stable (duh), since they’re almost entirely a coach’s decision and not a question of randomness/noise. On the other side, plus/minus data is extremely noisy, and you should regress a player’s performance by over 1,000 league average possessions going forward.

Note that three point percentage is relatively noisy, requiring 242 attempts before your performance is mostly ‘real’ rather than noise. But even 242 attempts is a much smaller number than Darryl Blackport found in his 2014 piece, where he concluded you needed 750 attempts for ‘stabilization’. For context, only 7 players have ever taken more than 750 three point attempts in an entire season, so this is a pretty big difference.

Note that there are other ways to do this: you could attempt to predict the next game’s performance instead of the rest of the season’s performance. You could also use career-to-date performance to project rest-of-season performance, or career-to-date performance to project rest-of-career performance. There are pros and cons to all these approaches, although they mostly matter less than you may expect (which is one of the virtues of the padding approach in the first place). I’ll explore the others in upcoming posts, however, and also revisit my prior research on team-stabilization padding rates.


Hello – this is the site of Konstantin Medvedovsky (@kmedved on Twitter). Going forward, I’ll aim to make this a repository of research I do, and other topics I’m interested in.

More to follow.