Basketball Analytics: Still Misunderstood

by Stephen Shea, Ph.D. (@SteveShea33)

June 1, 2016

 

We take one step forward and then two steps back. Every time I think our world is beginning to understand analytics, an article like the one Michael Wilbon wrote for TheUndefeated.com comes out to destroy my optimism.

Wilbon brings up a very important point that analytics could be “a new path to exclusion, intentional or not” in NBA front offices. That’s a topic worth discussing, but not one that I’ll be addressing today.

Rather, Wilbon’s commentary, some of the quotes he pulled from those around the league, and the subsequent chatter on Twitter and through the media have left me (once again) concerned that fans, the media, coaches, front offices, and players still don’t know what basketball analytics is or its proper role in an NBA organization.

Here is my attempt to correct the major misconceptions surrounding basketball analytics.

1. Basketball analytics begin with an understanding of and feel for the game.

There is a misconception that basketball analysts experience the game as a series of numbers streaming across their computer screen. That couldn’t be further from the truth.

Long before I cared about eFG% (or even FG%), I picked up a ball and walked to the local court just like every player in the NBA. In those pick-up sessions that could last for several hours, never once did I think about my assist-to-turnover ratio or whether I had an unhealthy obsession with mid-range jumpers. I just played.

I still spend more time playing and watching basketball than I do querying a database or examining numbers, and that experience with the game greatly influences what I do when I am behind a computer. Let me walk through an example.

In 2013, I watched intently as the Spurs and Heat squared off in the NBA Finals. I immediately noticed that the Spurs opened the series with a unique defensive strategy. They backed off of LeBron, begging him to take perimeter shots. In addition, they exploited the below-average perimeter shooting of Wade and Haslem to bring more help defense into the lane and further dissuade LeBron from attacking the hoop.

LeBron was not a terrible shooter, but he was more dangerous going to the hoop than he was on the perimeter. The Spurs were choosing the lesser of two evils, and it worked. San Antonio won 2 of the first 3 games in the series.

Then Miami adjusted. Mike Miller (an excellent 3-point shooter) replaced Haslem in the starting lineup. In addition, Ray Allen (who is arguably the greatest 3-point shooter of all time) played over 33 minutes off the bench in game 4. Miami’s offensive adjustments appeared to stretch San Antonio’s defense. This meant more room for LeBron and Wade to attack the hoop.

Miami’s adjustments worked. They won 3 of the last 4 games to take the series in 7.

I didn’t have to be an analyst to notice the Spurs’ defensive strategy and Miami’s adjustment, and through the conclusion of the series, I didn’t run any numbers. Rather, it was my years of experience surveying the help defense before driving to the hoop, or deciding when to leave my man and help on an opponent’s shot, that were percolating in my brain, not some algorithm.

However, this experience started nagging at my mathematical side. I began to wonder if Miami’s offensive adjustment truly stretched the Spurs’ defense, and if so, how much? Heck, I was intrigued simply by the question of how to quantify defensive stretch.

I wondered how much more efficient Miami’s offense became after the adjustment. Was this improvement simply a product of making more 3s or was stretching the defense improving Miami’s 2-point efficiency? Again, how much?

I enlisted the help of Chris Baker, and we began working through the details. There was tedious data gathering, programming, mathematics, and statistics, but also a lot of basketball. Every step of the process was heavily influenced by our experience with and feel for the game.

When I do analytics, there is one question I ask more than any other. What would I do in the player’s position?

Most analysts don’t have professional playing experience, and they couldn’t make the plays that the pros make look easy, but that doesn’t mean the analysts can’t tap into the perspective of the player at some basic level or that they don’t have a feel for the game.

(You can read more about our work on floor spacing in Chapter 5 of Basketball Analytics: Spatial Tracking, or at the blog here, here, or here.)

2. Basketball analytics are creative

There are no statistics textbooks that tell you how to measure the defensive value of a basketball player. There is no standard way to measure the impact J.J. Redick has from the weak side when Chris Paul and DeAndre Jordan run the pick and roll.

An analyst’s value isn’t in the programming languages she or he knows or in the statistics degrees she or he holds. As mentioned above, an analyst must have an understanding of the game, but beyond that, there is a requirement for creativity at the intersection of a number of disciplines.

When building an objective draft model that projects the future pro impact of prospects, a good analyst will recognize that a regression identifying statistical markers linked to the success of past prospects might not always predict success for future prospects. Why? One reason is that the game is evolving.

NBA teams have learned to adjust their defenses since the abolition of the illegal defense rules. As a result, offenses have had to rely more on 3-point shooting to space the floor. Also, as the pool of available players became more adept at shooting 3s, it became a more efficient shot in and of itself. This opens up the opportunity for 3-point shooting to be a better predictor of a future prospect’s pro impact than it was for past prospects.

But, what exactly are defenses doing differently? What does this mean for team needs in terms of defensive personnel? How are offenses adjusting? Have teams found near optimal strategies for the current set of rules or is there still significant room for growth? If there is significant room for growth, how quickly will NBA teams catch on and adjust? The answers to these questions require an odd blend of basketball knowledge, technical expertise and psychology.

Basketball analytics is not a dry subject. We aren’t simply button pushers who operate fancy software. Good basketball analytics requires a great amount of creativity. There is no one approach to a given problem (and as a result, analysts often disagree).

3. The appropriate output is a conversation

If you are an NBA general manager, coach, trainer or player, and the majority of the analytics information you get is through numbers in a PDF or portal, you are doing it wrong.

If a team is considering taking Utah center Jakob Poeltl with the 7th overall selection in the 2016 NBA draft, an analyst can provide all sorts of numbers that might help influence that decision. That analyst can tell you all about Poeltl’s college production, from the box score and beyond (such as his efficiency as the roll man in the pick and roll). An analyst could approximate how much a comparable player will cost in free agency in 2018, 2019 and 2020.

An analyst could also run numbers on how often NBA teams play a true “big” center in Poeltl’s mold and how efficient those lineups are in comparison to “smaller” groups.

An analyst could run those numbers and lots more, but the most value an analyst could provide is to be part of a conversation with the decision makers of the organization on how Poeltl will and should fit within the organization in the coming years.

As mentioned above, the NBA game is evolving. The modern “pace and space” offenses mean defenses have to “chase.” There is a lot of value in a big man that can switch screens and guard other positions. There is a lot of value today, but there will be even more value in future years as more teams adapt. (For example, you can expect Luke Walton to run a different offense in L.A. than Byron Scott ran.)

Ultimately, the most value an analyst can provide is not a spreadsheet filled with percentages, but a conversation on how Poeltl’s role in the NBA 3 years from now will be significantly different from what a similar player’s role was 3 years ago.

4. The goal is collaboration

Above, I suggested that the appropriate output for analytics is a conversation. When I referred to a conversation on how the role of the traditional center is changing in the NBA, I meant a true two-way conversation with both sides listening, sharing information, and asking questions.

The conversation shouldn’t start after the analysis is complete. Let’s obliterate the “go-for-coffee” model where the general manager puts in the order, and the analyst makes the run.

I’ve had the pleasure of visiting several front offices in different professional sports. Almost always, I come out wondering, “Where are the white boards?” Coaches use them to scheme. I know sports teams bring them in for draft preparations, but I saw too many offices and too many conference rooms without big writing spaces.

Front office decisions can be challenging puzzles, and often, those puzzles have an undeniably quantitative component. Every offseason, a team has to consider how it will fit returning players, free agent targets, trade targets, and draft picks into a competitive roster and under the cap. So, order some food, gather the bright minds in the organization and head to the boards to brainstorm.

And while we’re on the topic, let me emphasize that communication is part of collaboration, but collaboration requires more than communication. Communication can mean that the general manager asks the analytics team to create a draft model that ranks prospects, and then the analytics team produces a clear presentation of the results.

Collaboration begins with the general manager, scouts, coaches, and analytics team in a room addressing questions like, “What are the short and long term objectives of the organization?” and “What characteristics do we want to prioritize in a prospect?” Collaboration continues with questions like, “How will we help Cheick Diallo continue to develop?” or “What would Buddy Hield’s role within the organization be this year? in 3 years?” or “What do we believe are Henry Ellenson’s defensive limitations?”

The goal of analytics is not a hostile takeover of front offices. Analytics thrive on basketball wisdom, and NBA teams are stacked with tremendous basketball minds. So, let’s collaborate!

I like peanut butter. That doesn’t imply I don’t want jelly. In fact, a sandwich with both is far better than eating either one alone. I like analytics, but that doesn’t mean I wouldn’t drop everything to learn from a great basketball mind like Phil Jackson, Larry Bird or Danny Ainge.

I’ve championed for years that every team should have a member of their analytics group travel with the team. The individual’s role is not to provide information to the team (although that’s a possibility too). Rather, the purpose would be for the analyst to see how the team interacts, the culture in the locker room, how the coaches and players communicate, what the players are focused on from the bench and on the court, the toll the season takes on the players’ bodies and minds, etc. The analyst should travel with the team to learn.

Basketball analytics won’t negotiate a contract with an agent, won’t run the drills in practice, and won’t make the shots in the game. The role of analytics is to support all of the talented basketball people in their current roles, not to replace them. The goal is collaboration, not competition.

Final Thoughts

I’m an analyst, but I’m not a robot. I play, watch, feel, and “smell” the game too. I talk about how a player is a leader, plays with intensity, or otherwise has characteristics “that can’t be measured.” I have gut feelings and instincts. I’m drawn to players that can make the spectacular dunk or block. I’m wowed by players that simply “look good” playing the game.

Analytics can’t come close to telling us everything that’s important in the game of basketball. When analytics does provide good information, it often agrees with traditional thinking. Sometimes, however, it suggests that something we believed was true isn’t actually so, that maybe our years of experience with the game have tinted the lens through which we evaluate performance, and that we may not have completely solved how best to adjust to the new talents of current players and the recent modifications to the rules.

I’m thankful for the times that analytics have proved me wrong.

College Prospect Ratings (CPR) 2016

By Steve Shea, Ph.D. (@SteveShea33)
April 6, 2016

College Prospect Ratings (CPR) is a formula that uses NCAA players’ on-the-court performance to quantify their NBA potential. Below, I will present the CPR ratings for 105 of the top prospects in the 2016 draft. Before I get to this year’s ratings, I go through some of the aspects of the model and present some of the model’s successes and failures among recent draft classes.

A Performance-based Model

CPR uses each player’s performance on the court (as measured by box-score stats) to approximate his pro potential. There can be a number of reasons a prospect does not perform well on the floor. Many of these are an indication that the prospect will not be a great pro. However, there are other reasons for a lack of performance that may not suggest lesser potential. The most extreme example is an injury that takes the player off the court entirely. This happened for high-profile prospects Kyrie Irving and Nerlens Noel in recent seasons. When an injury takes a player off the court, it’s going to hurt his CPR. In these cases, it’s important to understand that a lower CPR does not reflect a lower talent level for these prospects.

As a performance-based model, CPR will not like players like Skal Labissiere and Cheick Diallo. These players did not perform well this season. Any team that drafts them will be doing so based on indicators besides their on-the-court performance this past season.

Quality of Opposition

CPR does not adjust for quality of opposition. It’s true that certain players face different contexts, but I have not yet seen an appropriate way to measure this context.

Often, quality of opposition is factored into a model by measuring the quality of the teams the individual faced. For example, Andrew Harrison’s Kentucky team in 2014-15 faced tough competition. That Kentucky team had a strength of schedule score (according to Sports-Reference.com) of 8.67. In contrast, Steph Curry’s 2008-09 Davidson team did not face very difficult competition. Davidson had a strength of schedule score of -3.33.

In 2008-09 Curry shot 38.7% on 3s. In 2014-15, Andrew Harrison shot 38.3%. Since Harrison’s team faced a tougher schedule, should the model be more impressed by Harrison’s 3P%? I’d argue the exact opposite. Harrison played on a loaded Kentucky team. How often was the defense focused on stopping Harrison? How often was Harrison double-teamed? Almost never.

Curry was the offense at Davidson. Opponents schemed specifically for Curry. Curry may have been playing mid-major competition, but they were draped all over him, and he still managed to shoot an amazing percentage. If we were going to adjust for quality of opposition, I’d argue that Curry’s numbers should be inflated rather than Harrison’s.

In my experience, using measures such as strength of schedule in a draft model grossly underappreciates the context players like Steph Curry, Damian Lillard and C.J. McCollum played in, and thus, grossly underrates these players.

No Physical Measurements

CPR does not include height, weight, wingspan, or any other physical measures of the prospect. These measurements are important information, but mashing physical characteristics with on-the-court performance into one metric can be difficult to interpret.

Speaking about Providence’s Ben Bentil, a scout said, “He’s not going to be a power forward in the pros. He’s not 6-9. I’m hoping he’s 6-8 with a pair of sneakers on, so that means he’s going to have to be some form of a small forward.”

It sounds like what scouts said about Draymond Green in 2012. “The consensus is that Green won’t be able to guard either forward position because true small forwards will be quicker and true power forwards taller and able to post him and shoot over him.”

First, guarding in the post has far more to do with a player’s footwork, anticipation, awareness, athleticism, grit, length, etc. than with an inch of height. Second, the traditional five positions are an antiquated notion. In an NBA where the ability to switch screens is incredibly important, players like Green shouldn’t be labeled “tweeners.” They are versatile.

The scouts can determine whether an inch or two in height is important. The teams can decide to pass on Karl-Anthony Towns because he doesn’t have a big enough ass. CPR will focus on basketball performance.

CPR’s Successes

CPR has correctly identified numerous 2nd round picks that eventually went on to have pro careers that far exceeded the expected value of a 2nd-rounder. For example, in 2012, Draymond Green (CPR=5.0), Jae Crowder (CPR=4.7) and Will Barton (CPR=6.2) all went in the 2nd round, but CPR rated all 3 in the top 10 for the class. In retrospect, all three would have been great 1st round selections. CPR had Kyle Korver (CPR=5.2) as a first round talent in 2003, and thought Hassan Whiteside (CPR=14.6) was a ridiculous steal when he went 33rd overall in 2010.

CPR has made the right choice when many teams have missed. Here are just a few examples. In 2009, Minnesota selected Jonny Flynn (CPR=4.3) ahead of Steph Curry (CPR=10.6). In 2010, Golden State took Ekpe Udoh (CPR=4.8) when they could have had Paul George (CPR=8.9). In 2011, Phoenix took Markieff Morris (CPR=2.1), and Houston drafted Marcus Morris (CPR=2.3) right before Indiana drafted Kawhi Leonard (CPR=5.7). In 2012, Cleveland drafted Dion Waiters (CPR=1.8) 4th overall when Damian Lillard (CPR=4.8) went two spots later.

CPR correctly identifies superstar talent. Kevin Durant (CPR=38.6), Anthony Davis (CPR=24.1) and Carmelo Anthony (CPR=14.9) are the top 3 overall scores (among an incomplete run of recent draft classes). The “above 10” class also includes Blake Griffin (CPR=10.1), Tim Duncan (CPR=12.7), DeMarcus Cousins (CPR=10.9), and Kevin Love (CPR=14.5) among others.

CPR’s Failures

CPR doesn’t always find the late-round steals. CPR thought Chandler Parsons (CPR=1.7) was a 2nd round pick in 2011. That’s where he went, but his performance in the NBA has made that pick look great in retrospect.

CPR has missed at the top. CPR had Greg Oden (CPR=10.2) as the 2nd best prospect behind Kevin Durant in 2007. Oden went 1st overall. Unfortunately, Oden’s career was derailed by injuries.

In a poorly rated 2013 class, CPR thought Anthony Bennett was a top 3 pick (CPR=7.6). Bennett went 1st overall, but has been a complete bust thus far in his brief career.

2016 CPR

CPR offers a perspective that differs from traditional scouting. When CPR agrees with scouts, it provides added assurance on the prospect. When CPR disagrees with scouts, it should prompt teams to ask why and to take a second look at the player. With that in mind, here are the 2016 scores.

Player | School | CPR
Brandon Ingram | Duke | 9.0
Ben Simmons | LSU | 8.8
Henry Ellenson | Marquette | 8.4
Jamal Murray | Kentucky | 8.2
Jakob Poeltl | Utah | 7.5
Kay Felder | Oakland | 7.5
Patrick McCaw | UNLV | 7.3
Benjamin Bentil | Providence | 6.7
Dejounte Murray | Washington | 6.4
Buddy Hield | Oklahoma | 6.3
Pascal Siakam | New Mexico St. | 5.6
Denzel Valentine | Michigan State | 5.6
Grayson Allen | Duke | 5.6
Isaiah Whitehead | Seton Hall | 5.3
Dillon Brooks | Oregon | 5.3
Daniel Hamilton | UConn | 5.1
Marquese Chriss | Washington | 4.7
Tyler Ulis | Kentucky | 4.5
Diamond Stone | Maryland | 4.4
Malik Beasley | Florida State | 4.2
Kris Dunn | Providence | 4.2
Shawn Long | Louisiana | 3.9
Bryant Crawford | Wake Forest | 3.9
Melo Trimble | Maryland | 3.7
Jarrod Uthoff | Iowa | 3.7
Malachi Richardson | Syracuse | 3.7
David Walker | Northeastern | 3.6
Domantas Sabonis | Gonzaga | 3.6
Taurean Prince | Baylor | 3.3
Bennie Boatwright | USC | 3.3
Gary Payton II | Oregon St. | 3.2
Georges Niang | Iowa St. | 3.2
Dwayne Bacon | Florida State | 3.2
Kyle Wiltjer | Gonzaga | 3.2
Joel Bolomboy | Weber St. | 3.2
Jaylen Brown | California | 3.0
Jameel Warney | Stony Brook | 3.0
Dorian Finney-Smith | Florida | 3.0
Stephen Zimmerman | UNLV | 2.9
Chinanu Onuaku | Louisville | 2.8
Michael Gbinije | Syracuse | 2.8
Brice Johnson | UNC | 2.8
A.J. Hammons | Purdue | 2.7
Aaron Holiday | UCLA | 2.7
Wade Baldwin | Vanderbilt | 2.7
Edmond Sumner | Xavier | 2.6
Antonio Blakeney | LSU | 2.6
DeAndre Bembry | St. Joseph's | 2.5
Michael Carrera | South Carolina | 2.5
Yogi Ferrell | Indiana | 2.4
Ivan Rabb | California | 2.4
Anthony Barber | N.C. State | 2.4
Demetrius Jackson | Notre Dame | 2.4
Chris Boucher | Oregon | 2.4
Shake Milton | SMU | 2.3
Allonzo Trier | Arizona | 2.3
Josh Hart | Villanova | 2.3
James Webb III | Boise St. | 2.2
Jaron Blossomgame | Clemson | 2.2
Nigel Hayes | Wisconsin | 2.1
Isaac Copeland | Georgetown | 2.1
Malcolm Brogdon | Virginia | 2.0
Monte Morris | Iowa St. | 1.9
Ron Baker | Wichita St. | 1.9
Daniel Ochefu | Villanova | 1.9
Justin Jackson | UNC | 1.8
Jake Layman | Maryland | 1.7
Alex Caruso | Texas A&M | 1.7
Malik Newman | Mississippi St. | 1.7
Fred VanVleet | Wichita St. | 1.7
Perry Ellis | Kansas | 1.6
Thomas Bryant | Indiana | 1.6
Devin Robinson | Florida | 1.6
Damion Lee | Louisville | 1.5
Robert Carter | Maryland | 1.5
Danuel House | Texas A&M | 1.5
Matthew Fisher-Davis | Vanderbilt | 1.5
Troy Williams | Indiana | 1.4
Tim Quarterman | LSU | 1.4
Marcus Paige | UNC | 1.4
Luke Kornet | Vanderbilt | 1.3
John Egbunu | Florida | 1.3
Moses Kingsley | Arkansas | 1.3
Sheldon McClellan | Miami | 1.3
Isaiah Briscoe | Kentucky | 1.3
Isaiah Taylor | Texas | 1.3
Tyler Harris | Auburn | 1.2
Caris LeVert | Michigan | 1.2
Deyonta Davis | Michigan State | 1.1
Damian Jones | Vanderbilt | 1.1
Zach Auguste | Notre Dame | 1.1
Tyrone Wallace | California | 1.0
Devin Thomas | Wake Forest | 1.0
Wayne Selden | Kansas | 1.0
Shevon Thompson | George Mason | 1.0
Skal Labissiere | Kentucky | 0.9
Kaleb Tarczewski | Arizona | 0.8
Alex Poythress | Kentucky | 0.7
Tonye Jekiri | Miami | 0.7
Sviatoslav Mykhailiuk | Kansas | 0.7
Amida Brimah | UConn | 0.6
Carlton Bragg | Kansas | 0.5
Prince Ibeh | Texas | 0.4
Marcus Lee | Kentucky | 0.3
Cheick Diallo | Kansas | 0.3

Updated 2016 CPR

By Stephen Shea, Ph.D. (@SteveShea33)

College Prospect Ratings (CPR) are an objective measure of an NCAA prospect’s NBA potential. They are generated from a player’s projected position, his years of experience in college and the box score production captured in his game logs.

Below, I will present the updated ratings (as of March 29th), which for many players (including Ben Simmons) are their final CPR ratings.

Flashback to 2009

As Steph Curry is destroying the NBA, teams that passed on him in 2009 have to be wondering if they missed something pre-draft that would have provided some insight that Steph would develop into such an exceptional player. In particular, the Wolves, who drafted two point guards 5th and 6th overall, right before Curry went to the Warriors, have to wonder if their draft strategy was flawed.

No one saw Steph Curry becoming the all-time elite player that he is today, but there were reasons to suspect that he would be great. A draft model like CPR would have been one of them.

Here are the top 7 picks from the 2009 draft with their CPR scores (excluding Rubio).

Pick | Player | CPR
1 | Blake Griffin | 10.1
2 | Hasheem Thabeet | 6.4
3 | James Harden | 5.5
4 | Tyreke Evans | 7.2
5 | Ricky Rubio | -
6 | Jonny Flynn | 4.3
7 | Steph Curry | 10.6

On average, about 1 player per draft will rate above 10 in CPR. Without a doubt, a rating above 10 suggests a top 3 pick. In 2009, both Blake Griffin and Curry rated above 10, with Curry slightly edging out Griffin for the high score. The general rule of thumb is that integer differences matter in CPR, while decimal differences aren’t that significant. Griffin and Curry were rated close enough that the model wouldn’t object to the selection of Griffin over Curry.
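The rules of thumb above can be expressed as a tiny helper. This is only an illustration of the stated guidance (the one-point threshold comes straight from the text), not part of the CPR formula itself.

```python
# Illustration of the rules of thumb above, not part of the CPR formula itself.

def meaningfully_better(cpr_a: float, cpr_b: float) -> bool:
    """Integer differences matter in CPR; decimal differences don't."""
    return cpr_a - cpr_b >= 1.0

# Curry (10.6) vs. Griffin (10.1): the gap is noise, so the model wouldn't
# object to taking Griffin first.
# Curry (10.6) vs. Flynn (4.3): a decisive gap in Curry's favor.
```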

In contrast, CPR strongly favors Curry over the other NCAA players drafted above him (especially Jonny Flynn), and in retrospect, the model was right.

It’s also important to note that players can score well in CPR and not develop into solid NBA players, and players can score low and surprise. Hasheem Thabeet’s rating of 6.4 suggests that he should have been a mid-to-late lottery selection and that he could develop into a functional center, if not a star. To date, he hasn’t done it. In contrast, Harden’s rating of 5.5 suggests a late lottery selection, but he’s developed into the type of player that justifies his top 3 selection.

2016 Draft

Below are the updated CPR ratings for 35 of the top NCAA prospects.

Player | CPR
Brandon Ingram | 9.0
Ben Simmons | 8.8
Henry Ellenson | 8.4
Jamal Murray | 8.2
Jakob Poeltl | 7.5
Dejounte Murray | 6.4
Buddy Hield | 6.1
Denzel Valentine | 5.6
Grayson Allen | 5.6
Marquese Chriss | 4.7
Tyler Ulis | 4.5
Diamond Stone | 4.4
Malik Beasley | 4.2
Kris Dunn | 4.2
Melo Trimble | 3.7
Domantas Sabonis | 3.6
Taurean Prince | 3.3
Jaylen Brown | 3.0
Stephen Zimmerman | 2.9
Brice Johnson | 2.8
A.J. Hammons | 2.7
Wade Baldwin | 2.7
Thomas Bryant | 2.5
DeAndre Bembry | 2.5
Ivan Rabb | 2.4
Demetrius Jackson | 2.4
Nigel Hayes | 2.1
Malcolm Brogdon | 2.0
Malik Newman | 1.7
Caris LeVert | 1.2
Deyonta Davis | 1.1
Damian Jones | 1.1
Wayne Selden | 1.0
Skal Labissiere | 0.9
Cheick Diallo | 0.3

Ben Simmons dropped significantly from his midseason CPR of about 15.6. There are reasons for the drop that aren’t necessarily due to poor performance.

First, LSU concluded its season after their last conference tournament game on March 12. This meant that Simmons had less opportunity to demonstrate his pro potential and to improve his CPR score. Had LSU made the NCAA tournament, it’s likely Simmons’ CPR score would be higher.

The second reason for the drop is technical and suggests that he was overrated in the midseason report. CPR uses a player’s 3P% in its formula. At season’s end, there is a minimum number of 3-point attempts needed for this 3P% to factor positively into the formula. (We don’t want a player that went 1 for 2 on 3s to profile as an excellent 3-point threat.) Early in the season, I usually don’t require any minimum number of 3-point attempts, since CPR is dealing with small sample sizes everywhere. Later in the season, I usually require some prorated minimum, but I neglected to do this with Simmons in the midseason ratings referenced above.

On the season, Simmons was 1 for 3 (33.3%) on 3-pointers. Obviously, that’s not a large enough sample, and so CPR considers Simmons to not have demonstrated college 3-point shooting ability. This is a significant blow to Simmons’ rating.
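The prorated-minimum idea can be sketched as follows. CPR’s actual threshold and proration aren’t published, so the numbers here (a 30-attempt full-season minimum) are assumptions chosen purely for illustration.

```python
# Sketch of a prorated minimum on 3-point attempts. The 30-attempt
# full-season threshold is an assumption, not CPR's published parameter.

def qualifies_for_3pt_credit(attempts: int, games_played: int,
                             season_games: int, full_season_min: int = 30) -> bool:
    """Only let 3P% help a rating once attempts clear a prorated minimum."""
    prorated_min = full_season_min * games_played / season_games
    return attempts >= prorated_min

# Simmons' 3 attempts over a full season never clear the bar, so his 33.3%
# contributes nothing positive to his rating.
```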

CPR looks for excellence in statistical production, whether it’s steals, blocks, rebounds, 3P% or elsewhere. The final output grows exponentially with the accumulation of “excellence.” So, a player that has not profiled as excellent in anything would only get a small bump in CPR if he were a good 3-point shooter. In contrast, a player that has demonstrated excellence in 6 stats already would get a huge boost for adding 3-point shooting. Ben Simmons has demonstrated excellence in a number of categories. That’s why he has a CPR of 8.8, which is usually good enough to be in the top 3 in the draft class. If he had also demonstrated solid (but not exceptional) 3-point shooting, he would be about a 12 in CPR. In other words, CPR suggests that a Ben Simmons who could shoot 3s would be a better prospect than Blake Griffin was.
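The exponential effect described above can be shown with a toy score. The real CPR formula is not published; the multiplier below is invented solely to show why a seventh area of excellence is worth far more than a first.

```python
# Toy illustration only: the real CPR formula is not published.
# A score that multiplies by a fixed factor per area of "excellence"
# grows exponentially with the count of excellent stats.

GROWTH = 1.5  # invented multiplier, not a CPR parameter

def toy_score(excellence_count: int, base: float = 1.0) -> float:
    return base * GROWTH ** excellence_count

# Adding one more area of excellence is worth little to a player with none...
small_bump = toy_score(1) - toy_score(0)
# ...and a lot to a player who already has six.
big_boost = toy_score(7) - toy_score(6)
```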

Ingram is now the top-rated 2016 prospect. Jamal Murray, Hield and Poeltl saw significant improvements in their CPR scores since midseason. CPR likes Dejounte Murray as a late first round sleeper. There is nothing in the box score production to suggest Skal Labissiere or Cheick Diallo is going to be a great pro. Finally, CPR suggests Jaylen Brown is overrated by scouts that have him in the top 7.

How an NBA team can use an expected points model

By Chris Baker (@ChrisBakerAM) and Steve Shea (@SteveShea33)

March 16, 2015

 

“We played well tonight, but the shots didn’t fall.” – Every coach ever

We hate the use of “random” or “luck” when describing the outcomes of athletic events. When Curry makes a three, it’s not the same as walking up to a slot machine, pulling the lever and winning the jackpot. One event requires skill, and the other does not.

Of course, Curry has spent nearly his entire life preparing to knock down a jumper when given the opportunity, but the nonrandom contributions extend further. Any specific 3-point attempt can be the result of the efforts of several skilled Golden State players. Perhaps Klay Thompson picked off a pass and started the transition that led to an open shot for Curry on the wing. Maybe Andrew Bogut set a perfect screen to spring Curry in the corner. Maybe the defense simply made a mistake. Curry doesn’t get open “by chance.”

Basketball is not random. However, the difference between making and missing can be so small that we can’t necessarily fault a player for missing a shot. Even Curry will miss the occasional open catch-and-shoot corner 3. (I think…maybe…Is he human?)

One of the major themes in sports analytics is to evaluate the process and not be fooled by the results. Sometimes good process can produce poor results, and vice versa. For example, a team can prepare exceptionally well for the draft and still miss on the prospect they select. Another team can throw darts blindfolded to pick their prospect and hit. It doesn’t mean that throwing darts is the better draft preparation.

Players miss good shots and make bad shots. Yet, we assume that the player that went 8 for 10 from the field played well, and the player that went 3 for 10 did not. The first player may have made 5 contested mid-range jumpers where he usually shoots 28%. The second player may have worked hard to get open corner 3s where he usually shoots 46%, but tonight he went 0 for 4. Perhaps we should be praising the second player and not the first.
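The two hypothetical stat lines above can be scored directly. Crediting each attempt at the shooter’s usual percentage in that situation (an expected-points view) flips the verdict:

```python
# Credit each attempt at the shooter's usual percentage in that situation,
# regardless of whether tonight's shots fell.

def expected_points(attempts: int, usual_pct: float, shot_value: int) -> float:
    return attempts * usual_pct * shot_value

# Player 1 made 5 contested mid-range jumpers he usually hits at 28%.
p1_actual = 5 * 2                          # 10 points on the night
p1_expected = expected_points(5, 0.28, 2)  # about 2.8 expected points

# Player 2 went 0 for 4 on open corner 3s he usually hits at 46%.
p2_actual = 0
p2_expected = expected_points(4, 0.46, 3)  # about 5.5 expected points

# By process, player 2 generated the better shots despite the worse box score.
```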

An analogous situation occurs from the defensive perspective. We shouldn’t necessarily judge a defender based on how many shots his opponent makes. Instead, we should focus on the shots he forced his opponent to take. If those were bad shots, even if his opponent made an unusually high percentage of them, it was a solid defensive performance.

To shift the attention from the results (make or miss) to the process (the quality of the shot), we need an expected points model.

Expected points model

NBA.com used to provide shot logs for all players. These shot logs detailed the shot’s distance from the hoop, the distance to the closest defender, and whether or not the player dribbled into the shot. We used this information to calculate the average shooting percentage in all situations for all players for each of the previous two seasons.

With this information, we can calculate how many points a player is expected to score given his shot opportunities in a given game. We can then aggregate the expected points for the players to arrive at an expected total for the team. (We can calculate totals as well as rates, such as expected points per shot.)
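The aggregation from player-level expectations to a team total can be sketched like this. The data shapes and situation keys here are assumptions for illustration; the real shot logs recorded shot distance, defender distance, and dribbles.

```python
from collections import defaultdict

# Sketch under assumed data shapes. Each logged shot carries the shooter,
# a situation key (e.g. binned shot distance / defender distance / off the
# dribble), and the shot's point value; usual_pct holds each player's average
# FG% in that situation over the prior two seasons.

def expected_points_by_team(shots, usual_pct):
    totals = defaultdict(float)
    for shot in shots:
        pct = usual_pct[(shot["player"], shot["situation"])]
        totals[shot["team"]] += pct * shot["value"]
    return dict(totals)

shots = [
    {"team": "SAS", "player": "Duncan", "situation": "close_contested", "value": 2},
    {"team": "SAS", "player": "Green", "situation": "corner3_open", "value": 3},
]
usual_pct = {
    ("Duncan", "close_contested"): 0.50,
    ("Green", "corner3_open"): 0.45,
}
# expected_points_by_team(shots, usual_pct) gives roughly 2.35 points for SAS.
```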

Team analysis

No player is working in isolation. When one player gets open, he can often credit his teammates for creating the space. Similarly, defense is a team activity with rotations, switches and help. Thus, the team level may be the most appropriate domain to apply an expected points model. At least, it’s a good place to start.

The following plots the Spurs’ expected points per shot versus their actual points per shot by game in the 2014-15 season. (Note that the team total is the aggregation of the individual totals. In other words, the formula for expected points per shot pays attention to who’s taking the shot.)

[Figure: Spurs 2014-15 game log, expected vs. actual points per shot]

As expected, actual points per shot varies more than expected points per shot. There are games when the team’s actual points per shot far exceeds their expected points per shot and vice versa. For example, on April 15, 2015 against New Orleans, the Spurs had an expected points per shot of 1.04. That’s a poor number by their standards. However, that night the shots fell. They scored 1.22 points per shot. That was quite different from what took place March 17, 2015 against the New York Knicks. That night, the Spurs’ expected points per shot was 1.13. That shows that San Antonio was able to get (and chose to take) good shots. However, the shots didn’t fall against New York. The team scored 0.94 points per shot.

[Figure: Spurs offensive points per shot]

The question for San Antonio is how they want to evaluate their offense. Do they want to praise the performance against New Orleans (April 15) because the shots fell, or are they going to be more pleased with the offense against New York (March 17), where they were able to create better looks?

The situation is similar on the defensive side. On December 10, 2014, the Spurs held the Knicks to an expected points per shot of 0.93. However, that night the shots fell for New York. They scored 1.14 points per shot. On March 27, 2015, the Spurs allowed 1.08 expected points per shot from Dallas. However, the Mavs only scored 0.84 points per shot. Based on the actual points per shot, it appears as though the Spurs played much better defense against the Mavs. The expected numbers tell a different story.

[Figure: Spurs defensive points per shot]

It’s important to note that the opponents’ expected points per shot are based on the opposing players’ average numbers on the season. Thus, they are not restricted to performances against a particular team. For example, New York’s expected points per shot of 0.93 against the Spurs on December 10th was based on the shots each player got against San Antonio and what those players typically shoot in those situations (against the Spurs or not).

Not all contested shots are equal. When Serge Ibaka contests a shot at the rim, it looks very different than when Isaiah Thomas contests a shot at the rim. Thus, opponents’ expected points per shot and actual points per shot may differ.

Last season, Houston, Golden State, Oklahoma City, Chicago, and Milwaukee saw the biggest average per-game difference between opponents’ expected PPS and opponents’ actual PPS. For Houston, the difference was about 3 points per 100 shots (where 0.44 FTA counts as a “shot”). In other words, Houston’s opponents scored 3 fewer points per 100 shots than their expected points per shot suggested. Golden State, Oklahoma City, and Chicago were all around 2 points per 100 shots. Milwaukee was close to 1.5.

All five of the teams at the top of this metric were known for having great “length” and “positional versatility” on defense. Some may wonder why the Houston Rockets, which put a great emphasis on perimeter shooting in their offense, would go for players like Josh Smith, Corey Brewer and K.J. McDaniels. We’re seeing part of the answer in these numbers. Great length and quickness can certainly influence expected points. It can mean running more players off the 3-point line, or coaxing players to pull up in mid-range (as opposed to challenging the length at the hoop). However, what we’re capturing in this metric is that these types of defenders are doing even more.

At the other end, Minnesota gave up roughly 6 more points per 100 shots than the expected model predicted. New York and Orlando were the next worst at around 3 points per 100 shots.

San Antonio didn’t see a significant difference between their opponents’ expected points per shot and actual points per shot on average.
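These per-100-shot figures come from a simple rescaling of the difference between opponents’ expected and actual points per shot, using the 0.44-FTA shot definition. A minimal sketch:

```python
def shots(fga, fta):
    # Scoring attempts, counting 0.44 free-throw attempts as one "shot."
    return fga + 0.44 * fta

def diff_per_100_shots(opp_exp_pps, opp_act_pps):
    # Positive values mean opponents scored fewer points than expected.
    return 100 * (opp_exp_pps - opp_act_pps)

# Houston's 2015 averages: opponents expected 1.085 PPS, actual 1.054.
print(round(diff_per_100_shots(1.085, 1.054), 1))  # 3.1 points per 100 shots
```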

The table below displays the averages for all teams in each of the last two seasons.

Opponents' Expected vs. Actual Points Per Shot (PPS)

Season  Team  Opp. Exp. PPS  Opp. Act. PPS  Difference
2015    MIN   1.085          1.142          -0.057
2015    NYK   1.081          1.114          -0.033
2015    ORL   1.075          1.108          -0.032
2015    BRK   1.068          1.092          -0.024
2015    LAL   1.097          1.119          -0.022
2015    DEN   1.078          1.099          -0.021
2015    DET   1.067          1.087          -0.020
2015    CLE   1.058          1.073          -0.015
2015    TOR   1.075          1.090          -0.015
2015    CHO   1.041          1.053          -0.012
2015    MIA   1.080          1.092          -0.012
2015    BOS   1.057          1.068          -0.011
2015    SAC   1.081          1.091          -0.010
2015    PHO   1.076          1.084          -0.009
2015    ATL   1.054          1.060          -0.006
2015    MEM   1.054          1.059          -0.005
2015    DAL   1.081          1.085          -0.004
2015    SAS   1.042          1.045          -0.003
2015    UTA   1.060          1.061          -0.001
2015    LAC   1.076          1.075           0.001
2015    NOP   1.074          1.070           0.003
2015    WAS   1.047          1.043           0.004
2015    PHI   1.096          1.088           0.008
2015    IND   1.056          1.046           0.010
2015    POR   1.040          1.029           0.011
2015    MIL   1.080          1.065           0.016
2015    CHI   1.041          1.020           0.020
2015    OKC   1.083          1.061           0.022
2015    GSW   1.058          1.036           0.022
2015    HOU   1.085          1.054           0.031
2014    ORL   1.059          1.091          -0.032
2014    MIL   1.090          1.121          -0.031
2014    PHI   1.105          1.136          -0.031
2014    UTA   1.093          1.122          -0.029
2014    DET   1.094          1.120          -0.026
2014    ATL   1.066          1.092          -0.026
2014    MIN   1.077          1.101          -0.024
2014    BRK   1.084          1.104          -0.020
2014    NYK   1.100          1.120          -0.020
2014    CLE   1.076          1.093          -0.017
2014    WAS   1.077          1.092          -0.015
2014    BOS   1.082          1.097          -0.015
2014    MIA   1.087          1.100          -0.014
2014    DAL   1.102          1.113          -0.011
2014    SAC   1.098          1.108          -0.010
2014    SAS   1.032          1.040          -0.008
2014    MEM   1.068          1.074          -0.006
2014    CHA   1.050          1.056          -0.005
2014    NOP   1.117          1.120          -0.003
2014    LAL   1.092          1.094          -0.003
2014    PHO   1.087          1.086           0.001
2014    TOR   1.082          1.078           0.004
2014    POR   1.059          1.055           0.004
2014    DEN   1.094          1.087           0.006
2014    CHI   1.043          1.022           0.021
2014    GSW   1.066          1.044           0.022
2014    HOU   1.082          1.058           0.024
2014    LAC   1.087          1.057           0.030
2014    OKC   1.090          1.059           0.032
2014    IND   1.050          1.008           0.042

While actual and expected averages for opponents do not always align over a season, the expected model can still be quite useful for evaluating team defensive performance. For one, it reflects the extent to which a team forced difficult shots (e.g. contested, low-percentage opportunities).

Also, if not faced with significant injuries or trades, teams tend to keep player minutes and usage consistent. Thus, when comparing performances for a particular team, any added benefit from defenders that contest “better” remains close to constant.

Individual analysis

The chart below plots James Harden’s actual versus expected points per shot by game for the 2014-15 season. As at the team level, an individual’s actual points per shot varies much more than his expected points per shot.

[Figure: James Harden expected vs. actual points per shot by game, 2014-15]

On December 31, 2014, Charlotte held Harden to 1.11 expected points per shot. Harden averaged 1.21 points per shot that season. (Again, a “shot” includes 0.44 free-throw attempts.) Part of Charlotte’s success was that they held Harden to just 4 free-throw attempts.

Unfortunately for Charlotte, Harden scored 1.73 points per shot anyway. (It helped that he went 8-for-11 on threes.) Charlotte should certainly review the game film to see what they could have done better. However, the expected numbers imply that Charlotte defended Harden better than the actual numbers suggest.

On March 12, 2015, Utah held Harden to 0.80 points per shot. However, the expected model reveals that it may have been more a result of Harden having an off night than anything exceptional from Utah’s defense. Utah allowed Harden to get 1.31 expected points per shot that night.

In spite of the poor performance from Harden against Utah, the Jazz might want to go back and revisit how they defended him. If they continue to allow 1.31 expected points per shot from Harden, he’s going to score more than 0.80 points per shot.
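The actual side of these comparisons is straightforward to compute from a box score. A minimal sketch, using the same 0.44-FTA shot definition (the stat line below is made up for illustration, not one of Harden’s actual games):

```python
def actual_pps(points, fga, fta):
    # Actual points per shot, where 0.44 free-throw attempts count as a shot.
    return points / (fga + 0.44 * fta)

# Hypothetical stat line: 30 points on 20 field-goal attempts and
# 5 free-throw attempts.
print(round(actual_pps(30, 20, 5), 2))  # 1.35
```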

Final thoughts

Good process can occasionally yield poor results. Poor process can occasionally yield good results. When teams focus too much on the results, they can be misled. For example, if the shots aren’t falling for three straight nights, a coach might think he needs to mix things up. These changes may not be necessary, and the expected points model would go a long way toward determining whether the lack of offensive efficiency is due to something systematic or just a run of “bad luck.”

Additional notes

-We’re not sure what an “expected turnover” looks like, but actual turnovers could be added to both the expected and actual shot production to get an expected and actual offensive rating for the team or player.

-Here, we used an entire season as the baseline to judge games in that season. Teams would likely want to see expected production following each game as the season progresses. To do this, teams could use a rolling window of the prior 60 to 82 games (dating back to the previous season) as the baseline. Rookies would likely need an artificial prior until a decent sample could be gathered from their NBA minutes.

-More detailed information about player locations could help this model. For example, a defender contesting a shot at the rim from behind the offensive player is much different from a defender contesting from in front of the shooter. The shot logs did not contain this level of detail.

-There were some glitches in the data, but we did not find anything that would dramatically influence the numbers presented in this article.