How real plus-minus can reveal hidden NBA stars
When real plus-minus debuted on ESPN.com’s NBA page in April 2014, then-Golden State Warriors reserve Draymond Green had yet to crack double-digit NBA starts. His breakthrough performance in the opening round of the 2014 playoffs, starting in place of injured center Andrew Bogut, was a few weeks away.
Over the next five seasons, Green would become an All-Star, the league’s Defensive Player of the Year in 2016-17 and a three-time NBA champion with the Warriors. Who could have seen it coming? Well, maybe someone who took a close look at the RPM leaderboard.
Back then, when Green was averaging just 6.2 ppg, his plus-3.5 RPM rating ranked among the league’s top 40 players. (Granted, I missed it too; Green wasn’t among my list of “RPM All-Stars” in conjunction with the rollout.)
Revealing Green’s value always required going beyond the box score. While his rates of steals and blocks are impressive (he led the league in the former category in 2016-17), none of our traditional stats measures Green’s aptitude for switching out on a guard to stifle a play or contesting a shot at the rim. As a result, Green has rated better than 16.5 in John Hollinger’s player efficiency rating (PER), where 15 is league average, in only one season of his career.
Basketball-Reference.com’s box plus-minus (BPM) comes closer to capturing Green’s value, and he led the league in defensive BPM during his Defensive Player of the Year campaign. But Green has cracked the top 10 in BPM just once in his career (2015-16) while doing so in RPM every year from 2014-15 to 2016-17 (finishing eighth, second and fourth, respectively).
How does RPM value a player like Green? By combining what we can glean from the box score with how teams perform with and without a player on the court, and now by also factoring in tracking data generated by Second Spectrum’s cameras in NBA arenas.
Real plus-minus’ roots can be traced back to adjusted plus-minus, pioneered by Indiana University professor Wayne Winston and team ratings maestro Jeff Sagarin in the early 2000s. They realized that the raw plus-minus ratings now prominent in NBA box scores, which show how much a team has outscored or been outscored by the opposition with a player on the court, reflect not only that player’s own performance but also that of the other nine players on the court at the same time.
Much as Sagarin’s well-known NCAA football ratings take into account the outcome of every game and seek to find the team ratings that best explain them, Sagarin and Winston did the same with the performance of every combination of lineups in the league.
In theory, adjusted plus-minus is an ideal metric because it cuts out subjective stats like assists and blocks that tend to be prone to home scorekeeper bias. In practice, however, adjusted plus-minus has a difficult time dealing with players who typically play most of their minutes together, because it’s unclear how to distribute the credit or blame for team performance with both of them on the court. As a result, the number of different lineups needed to create reliable measures of individual performance is too large to accumulate over a single season.
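The core idea can be sketched in a few lines. This is a toy version, not the Sagarin-Winston implementation: each stint (a stretch with no substitutions) becomes one row of a design matrix flagging who was on the court, and a least-squares fit finds the per-player impacts that best explain every stint at once. The player counts and numbers here are invented for illustration.

```python
import numpy as np

# Toy stint data: 4 players (0 and 1 on the home team, 2 and 3 on the
# road team) and 3 stints.  A real model would have hundreds of players
# and tens of thousands of stints.
# Each row of X flags who was on the court: +1 for home, -1 for away.
# y is the home team's point differential per 100 possessions in the stint.
X = np.array([
    [1.0, 1.0, -1.0, -1.0],   # players 0 and 1 vs. players 2 and 3
    [1.0, 0.0, -1.0,  0.0],   # a different on-court combination
    [0.0, 1.0,  0.0, -1.0],
])
y = np.array([8.0, 4.0, -2.0])

# Least-squares fit: each coefficient is a player's adjusted plus-minus,
# the per-100-possession impact that jointly best explains all stints.
ratings, *_ = np.linalg.lstsq(X, y, rcond=None)
print(ratings)
```

The catch described above shows up immediately in practice: when two players almost always share the floor, their columns in `X` are nearly identical, and the fit cannot tell their contributions apart without far more lineup variety.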
To help solve this problem, RPM co-creators Jerry Engelmann and Steve Ilardi employed a novel solution: They essentially gave the regression that yields ratings a head start based on more stable measures of player performance from the box score. This starting point, derived in a similar manner to box plus-minus — weighting box score stats by how well they predict adjusted plus-minus across all players — tells the regression that when two players share the court, the one with superior box-score stats should rate better.
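One common way to formalize a “head start” like this is ridge regression that shrinks toward a prior vector instead of toward zero. The sketch below is an assumption about the general technique, not Engelmann and Ilardi’s actual code, and the penalty strength `lam` is an arbitrary toy value.

```python
import numpy as np

def prior_regularized_fit(X, y, prior, lam=10.0):
    """Ridge regression shrunk toward a box-score prior instead of zero.

    Solves  min ||y - Xb||^2 + lam * ||b - prior||^2,  whose closed form is
    b = (X'X + lam*I)^-1 (X'y + lam*prior).  With little lineup evidence a
    player's rating stays near his box-score-based prior; with many stints,
    the on/off data dominates.
    """
    n = X.shape[1]
    A = X.T @ X + lam * np.eye(n)
    b = X.T @ y + lam * prior
    return np.linalg.solve(A, b)

# Toy example: two players who always share the floor are inseparable
# from stint data alone (identical columns), but the prior breaks the tie.
X = np.array([[1.0, 1.0], [1.0, 1.0], [1.0, 1.0]])
y = np.array([6.0, 4.0, 5.0])
prior = np.array([4.0, 1.0])   # box score says player 0 is better
print(prior_regularized_fit(X, y, prior))
```

Note how this encodes exactly the rule in the text: when two players share the court, the one with the superior box-score prior comes out with the better rating, because the stint data alone cannot separate them.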
The result is ratings that stabilize more quickly, making them reasonable to use on a season-by-season basis, and that incorporate both what the box score tells us and what can’t be tracked in the box score.
Because Engelmann joined the Dallas Mavericks as a senior analyst, RPM has moved in-house, with ESPN Analytics taking the lead in generating it for this and future seasons. In conjunction, we’ve bolstered the box-score prior with the inclusion of tracking data. The spirit of RPM remains largely the same in the new version, which you might call RPM 2.0. The most important difference is that advanced box stats derived from Second Spectrum player tracking data are now included along with, or sometimes in lieu of, the standard box score stats.
Many of these new stats are derived from matchup data. For example, since we know who is guarding whom, we can develop a “matchup-adjusted field goal percentage” for shooters. This stat is similar to traditional field goal percentage but accounts for the quality of defenders that each shooter faces. It can be thought of as the field goal percentage that the shooter would have when matched up against an average defender. An offensive player who often draws the best defenders from the other team could be a bit higher in this stat compared to his typical, unadjusted field goal percentage.
Another benefit is that matchup-adjusted field goal percentage is automatically regressed toward league average for players with relatively few shots. This means that wild fluctuations in shooting percentages early in the year are tempered a bit. This stat leads to a little better performance than field goal percentage for our model, so we have replaced field goal percentage in the model with its matchup-adjusted counterpart.
More important, we can use matchup data to develop corresponding stats for defenders. For example, the first step would be to calculate “field goal percentage allowed” for defenders. But we might as well take it one more step and derive a matchup-adjusted field goal percentage allowed, which is the field goal percentage that a defender would allow against an average shooter. Stats like this tend to be a big help to the model.
Many of our other advanced box stats are matchup-adjusted versions of standard box stats, like 3-point field goal percentage, assists, turnovers, fouls drawn/committed and the corresponding versions for defenders. Other stats derived from player tracking data, like passes ahead and time of possession, are included as well. We’ll go into more detail about these additions and how the model works in a forthcoming explainer.
The RPM output you see on the site is offensive and defensive ratings per 100 possessions, as well as a total RPM that combines them. These can be understood as how much a given player has contributed relative to league average to his team’s offensive and defensive ratings, also denominated per 100 possessions. In addition to this measure of effectiveness, we also estimate a player’s value to his team based on both productivity and durability with wins above replacement. This value stat is better to use for awards purposes or to determine the impact of a player’s absence.
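As rough arithmetic, a value stat like this combines the per-100-possession rating with how much a player actually plays. The constants below (a replacement level of minus-2.0 and roughly 30 points per win) are illustrative assumptions for this sketch, not ESPN’s actual parameters.

```python
def wins_above_replacement(rpm, possessions,
                           replacement_rpm=-2.0, pts_per_win=30.0):
    # Points added over a replacement-level player, scaled by how many
    # possessions he actually played -- this is where durability enters.
    pts_added = (rpm - replacement_rpm) * possessions / 100.0
    return pts_added / pts_per_win

# Same +3.5 rating, but one player logs twice the floor time:
print(wins_above_replacement(3.5, 5000))   # full season of possessions
print(wins_above_replacement(3.5, 2500))   # equally effective, half the value
```

This is why the value stat, rather than the per-100 rating, is the better tool for awards debates or for gauging the impact of an absence: two equally effective players can differ enormously in total value if one misses half the season.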
Of course, it’s not necessarily intuitive for us to discuss players in terms of their points added per 100 possessions. That’s why it’s natural to look at where players rank relative to their peers, both overall and by position. (FYI: The positions on the RPM pages don’t factor into the ratings whatsoever.) After all, when referencing Green’s value, I noted that he ranked in the top 10 three years in a row rather than citing his actual RPM ratings.
Still, it’s worth highlighting that RPM ratings are estimates of a player’s value that are subject to fluctuation. In particular, players can be helped or hurt by how well opponents shoot 3s and free throws with them on the court or on the bench — factors that appear to be largely outside a player’s control. So it’s not reasonable to look at a player who ranks 10th in RPM and say he’s decidedly better than a player who ranks 11th, or even 15th. These small distinctions are trivial. Instead, it makes more sense to focus on players whose ratings dramatically diverge, or those whose ratings are far different from conventional wisdom.
Additionally, RPM can rate players only in the specific role in which they’re being used. That’s true of all NBA stats and metrics, certainly, but particularly important with RPM because of how a player’s impact on team performance can vary based on context. This season, for example, we’ve seen that Green can’t have the same influence if surrounded by teammates incapable of switching on defense or knocking down the shots he creates as a passer on offense.
With these caveats in mind, RPM can be revealing. After all, our projections based on RPM — specifically, the multiyear version, which has more predictive power than the single-season version found on the site but is less useful for evaluating award candidates or detecting season-to-season changes in performance — have a track record of outperforming teams’ preseason over/under win totals.
Not every RPM standout will develop as well as Green or Khris Middleton of the Milwaukee Bucks, whose top-20 rating in 2015-16 presaged his eventual emergence as an All-Star. Nonetheless, a close reading of the RPM leaders could help you stay ahead of the curve.