> It is definitely a piece but I wonder how much value they place on them and/or how much they are utilized.

I hear this one GM has a brain doctor or something.
The Wizards, for example, hire the creator of PIPM then trade for Westbrook so he can take 10 mid-range jumpers each night and turn the ball over 6 times. Many teams employ sleep coaches or performance health directors yet players are out at bars, strip clubs or casinos until 4am when they are on the road. Like all performance tools, they are only valuable if they are implemented.
> One general note in favor of the utility of these various "all-in-one metrics" is that they're pretty strong at predicting future team performance. Essentially every NBA gambling group I know uses some flavor of these metrics. You can even use simple RAPM, add a basic aging curve, combine with basic minutes projections, and get pretty good-looking preseason win projections.

How well do they do at predicting the future performance of players who change teams?
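For anyone curious how simple that kind of preseason projection can be, here is a toy sketch in Python. Every number in it (the roster, the RAPM values, the crude aging bump, the rough 2.7-wins-per-point conversion) is invented for illustration, not anyone's actual model:

```python
# Toy preseason win projection: player RAPM + a crude aging curve + minutes
# projections -> team point margin -> wins. All inputs are invented.

def age_adjust(rapm, age):
    """Stand-in aging curve: small boost before the prime, small penalty after."""
    if age < 27:
        return rapm + 0.5
    if age > 30:
        return rapm - 0.5
    return rapm

def project_wins(roster):
    """roster: list of (rapm_per_100, age, minutes_per_game); minutes sum to 240."""
    # Expected team net rating per 100 possessions: each player's impact
    # weighted by the share of the 48-minute game he is on the floor.
    net_per_100 = sum(age_adjust(r, a) * m / 48.0 for r, a, m in roster)
    margin_per_game = net_per_100  # NBA teams play roughly 100 possessions/game
    # Rule of thumb: each point of per-game margin is worth ~2.7 wins over 82.
    return 41 + 2.7 * margin_per_game

roster = [
    (4.0, 28, 36), (2.5, 25, 34), (1.0, 31, 32),
    (0.5, 27, 30), (0.0, 23, 28), (-0.5, 29, 26),
    (-1.0, 33, 22), (-1.5, 22, 18), (-2.0, 26, 14),
]
print(round(project_wins(roster), 1))  # -> 52.6
```

Crude as it is, this is the whole skeleton: player values, an aging tweak, minutes shares, and a points-to-wins conversion.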
> Basketball is not a closed sport with separate, discrete events like baseball's at-bats, so the stats are less valuable for comparing players (who is the best, etc.).
>
> The analytics are getting better, though. For example, I have heard at countless coaching symposiums that the mid-range shot is terrible, useless, that the pull-up is the worst shot in basketball, and so on. Yet I often hear Kobe and MJ are the GOATs. I always argued against these conclusions, since my eyes told me most pull-ups came at the end of the shot clock. More advanced metrics now account for this. Kemba being able to make 40% (just recalling this from somewhere, it may be different this year) of contested pull-ups in the last 3 seconds of the clock is actually pretty good. I saw a stat at a coaching clinic once about how Bird, Reggie Miller, and MJ had whole seasons where they were insanely better than the league in late-shot-clock situations, but these were all shots that hurt their overall efficiency.
>
> They are valuable for a coach trying to improve a team or exploit another team's weaknesses. I use Hudl, a video streaming and sharing service, with my high school team, but the stats feature was even more useful. I believe if I had these stats a year earlier I might have been in the championship game in 2019, not a cancelled 2020 game.
>
> What the stats showed me was my bias for guys who are tough, athletic on-ball defenders. I was playing two players like this too much, and more importantly, their negative value was on offense and could be mitigated. The stats also allowed me to show the players that if they never, ever shot they would be more valuable, but the real solution was to stop taking particular shots and to recognize their strengths and weaknesses.

Great post, love hearing a real-world perspective.
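To put rough numbers on the late-clock point above: the 40% figure comes from the post, while the desperation-heave percentage is invented for comparison.

```python
# Points per attempt in the last 3 seconds of the shot clock, hypothetically.
pullup_pps = 0.40 * 2    # 40% on a contested two-point pull-up (from the post)
heave_pps = 0.20 * 3     # invented 20% on a desperation three
# 0.80 vs. 0.60 points per attempt: the "bad" pull-up is the better late-clock
# option, even though it drags down the shooter's overall efficiency numbers.
print(pullup_pps, heave_pps)
```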
> Essentially every NBA gambling group I know uses some flavor of these metrics.

Can you elaborate on the success of the gambling groups you have familiarity with?
> How well do they do at predicting the future performance of players who change teams?

Measuring how well a metric predicts individual player performance is hard and can be circular. The simple answer is that if you want to predict a player's RAPM next year, you take their trailing 3-year RAPM, add a simple aging adjustment, and then regress to the mean by about 25%. If the player has changed teams, you regress their RAPM to the mean by an additional ~20%. So that gives you a sense of the scale of how much RAPM is tied to team.
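That recipe is simple enough to write down as code. A sketch in Python using the numbers from the answer above; the aging adjustment is a stand-in since the post doesn't specify one, the helper name is mine, and I'm reading "an additional ~20%" as shrinking the retained share again:

```python
def project_rapm(trailing_3yr_rapm, age, changed_teams, league_mean=0.0):
    """Next-year RAPM projection per the recipe above (hypothetical helper)."""
    # Stand-in aging adjustment: boost before the prime, penalty after it.
    if age < 27:
        aged = trailing_3yr_rapm + 0.5
    elif age > 30:
        aged = trailing_3yr_rapm - 0.5
    else:
        aged = trailing_3yr_rapm
    keep = 1 - 0.25              # regress to the mean by about 25%...
    if changed_teams:
        keep *= 1 - 0.20         # ...and by an additional ~20% on a new team
    return league_mean + keep * (aged - league_mean)

# A hypothetical +4.0 RAPM 28-year-old projects to +3.0 if he stays,
# about +2.4 if he switches teams.
print(project_rapm(4.0, 28, False), project_rapm(4.0, 28, True))
```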
> Can you elaborate on the success of the gambling groups you have familiarity with?

As you can imagine, they're somewhat secretive, but generally the various NBA gambling consortiums I'm familiar with all work the same way. They have their own version of RAPM, with a fairly sophisticated prior, which they combine with daily minutes projections to get an estimate of each team's rotation strength. They blend this with a top-down estimate of team strength (which is not dependent on minutes) to capture the value of coaching/scheme. And then the rest is adjusting for rest/travel/etc., where the adjustments can get pretty sophisticated. Everyone has their own flavor of adjustments here, and nobody bets "blind", but that's the basic outline of what the groups I know of are/were doing.
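As a sketch of that outline in Python (every weight, the blend, the home-court value, and the rest adjustment are invented placeholders, not any group's actual numbers):

```python
# Bottom-up rotation strength blended with a top-down team rating, plus a
# crude rest adjustment: the skeleton of the pipeline described above.

def rotation_strength(players):
    """players: list of (rapm_per_100, projected_minutes); minutes sum to 240."""
    return sum(rapm * minutes / 48.0 for rapm, minutes in players)

def team_rating(bottom_up, top_down, blend=0.7):
    # The top-down share is meant to capture coaching/scheme value that a
    # purely player-based number misses.
    return blend * bottom_up + (1 - blend) * top_down

def predicted_margin(home, away, home_court=2.5, rest_edge_days=0):
    """home/away: (bottom_up, top_down) pairs; rest_edge_days favors home."""
    margin = team_rating(*home) - team_rating(*away) + home_court
    margin += 1.0 * rest_edge_days   # invented: ~1 point per day of rest edge
    return margin

print(predicted_margin((3.0, 2.0), (-1.0, 0.0)))
```

Real groups layer far more sophistication onto the adjustments, but the bottom-up/top-down blend is the core idea.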
> Measuring how well a metric predicts individual player performance is hard and can be circular. […]

You can make RAPM perfectly predictive of itself by making it a constant for every player.
That's somewhat circular however, since predicting future RAPM isn't the same as predicting future player performance. But it's one datapoint here.
> Long-form article on the evolution and current status of defensive stats in the NBA: https://www.theringer.com/nba/2021/5/11/22423517/nba-defense-analytics-nikola-jokic

That was an excellent article, thanks.
I don't think it mentions anything that hasn't been said before in this thread or other pop-up discussions on the topic but I thought it was interesting to see it all compressed into one article.
I was interested to learn that the biggest problem with tracking data is that there is too much information to organize sensibly.
> That was an excellent article, thanks.

Of course the real question is how close they come.
"The best advice overall might be to remain humble when assessing player defense, with numbers, film study, or ideally a combination of the two. Analysts have to approach the project as “knowing that you’re not really going to get it perfectly right,” says FiveThirtyEight’s Paine. “You’re just trying to get close.”
> Of course the real question is how close they come.
>
> As I think about it some, I wonder if the issue is trying to figure out a metric to talk about defense overall instead of thinking about defense as component parts. It may be one instance where raw stats could be more descriptive than trying to pin down an overall number. What I mean by this is that a guy with a lot of deflections is good at deflections, which is important. Same goes for steals and FG% allowed at the rim. And it may well be that a guy with a low FG% against is good at contesting shots. And to have a great defense, you need all of these in different players.
>
> It will be interesting to see how these stats evolve. It will be really interesting to see if anyone is ever going to be able to figure out who is the best at being in the correct spot on defense - so he can help cut off the lane on a drive or get to 3P shooters on rotation or the like.

Great post.
> Of course the real question is how close they come. […]

Great points. Some players excel in specific components of defense while having glaring weaknesses in others. Take TJ McConnell: among the best in the league, if not the best, at deflections, yet he can be exploited in switches due to size. How can you put a numerical value on the latter when schemes would result in a variety of doubles and/or soft helps? Then you have his shot contesting, which is tricky: he can be quick enough to "contest" a shot but isn't long enough to truly contest it. Which is of greater value, a hard contest by McConnell or a soft contest by, say, a Thybulle or Covington?
> Great post.

Thanks. I think the data you describe is being collected by the Second Spectrum cameras, but as the article notes, a lot of it isn't making its way to the public.
If we had average distance from the shooter when he receives the pass, it would provide some insight into how well defenses rotate (and perhaps particular guys). A lot of the meaningful data on defenses simply isn't being collected yet, I think.
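If the coordinates ever did become public, the stat described above would be easy to compute. A sketch assuming a made-up data format, with one (x, y) snapshot per catch (the function names and the data shape are mine, not any real API):

```python
import math

def closest_defender_distance(shooter_xy, defenders_xy):
    """Feet from the shooter to the nearest defender at the moment of the catch."""
    return min(math.dist(shooter_xy, d) for d in defenders_xy)

def avg_closeout_distance(catches):
    """catches: list of (shooter_xy, [defender_xy, ...]) snapshots."""
    return sum(closest_defender_distance(s, ds) for s, ds in catches) / len(catches)

# Two hypothetical catches: nearest defender 5 feet away, then 10 feet away.
catches = [((0, 0), [(3, 4), (20, 5)]), ((25, 10), [(31, 18)])]
print(avg_closeout_distance(catches))  # -> 7.5
```

Aggregated per defender or per shooter, a number like this would start to quantify how well rotations actually close space.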
> Great points. Some players excel in specific components of defense while having glaring weaknesses in others. […]

You - and to its credit, the article - raise the issue of scheme. If McConnell is playing in an aggressive trap-the-PnR defense, he's going to have better "metrics" than if he's playing in MIL's heavy drop coverage. And it's super tough to figure out scheme from play-to-play data.
> You would think that NBA teams spend enough time watching film that they'd have a pretty good idea if a guy is a good fit defensively for their schemes or not.

You'd think. In reality, it's probably not much different than when someone takes a hitter's spray chart and superimposes it on Fenway. This dude is going to hit 80 HRs!
> You'd think. In reality, it's probably not much different than when someone takes a hitter's spray chart and superimposes it on Fenway. […]

I always wonder how much an NBA coaches/scouting meeting resembles the scene in Moneyball, complete with the eye rolls.
I think that everyone agrees on this.

Measuring things accurately - or as accurately as possible - is difficult, but that doesn't mean it shouldn't be done. Regardless of whether the eye rolling is going on, there is far too much money at stake not to keep pushing for better metrics.
> To use baseball, the most analytically inclined sport we have, as an example, it is really hard for effort to come into question during the typical machinations of a game.

Weren't you around for the JD Drew era?
> I think that everyone agrees on this.

The black box complaint was addressed upthread for the metrics we widely discuss.
The effort is worthwhile. And evolving it takes time and bodies. Less black box would help.
> The black box complaint was addressed upthread for the metrics we widely discuss.

Defensive complexities exist in football and hockey as well, where it is difficult to capture how a teammate's actions affect another player. This is one part of the game where I'd find the eye test to be most effective if you have experience in these schemes. If you don't know what the players are supposed to be doing, you can watch until you're blue in the face and it won't matter, but a coach or a trained eye should be able to recognize whether each player's actions were correct/incorrect as well as fast/slow.
The important takeaway for me is that fans really aren't equipped to have a good-faith discussion about defense. If people associated with the teams are telling you how tough it is, it's highly unlikely that someone sitting in their mom's basement will have a better handle on this aspect of the game. I know we will have people that are pretty confident in their eye test, so I don't doubt we'll be discussing it. However, given the complexity of measuring defense (e.g., how do you account for one player looking lazy or lost on D when it was really a function of another player failing to help or switch properly?), it seems like a fruitless task.
> Defensive complexities exist in football and hockey as well, where it is difficult to capture how a teammate's actions affect another player. […]

This just goes down a rabbit hole, because it gets into biases even with experienced professionals (e.g., what if the player a person is evaluating is someone they cannot stand? We see this all the time in this forum, where posters are really fixated on one player as being a problem, and then every wrong thing he does is dissected in great detail, and sometimes it gets a bit irrational. It's a human trait, though, and afflicts the best of us). The eye test certainly has some value, but IMO it's fairly low, and you then have to apply a filter for the "tester", which just complicates things.
> The black box complaint was addressed upthread for the metrics we widely discuss.

I share your frustration, believe me.
> This just goes down a rabbit hole because it gets into biases even with experienced professionals. […]

Bill Belichick often criticizes football metrics based on play-by-play analysis of the All-22 film, on the grounds that the evaluators don't always know what a player's assignment was on a given play. So maybe the player blew something and the analyst missed it, or maybe the player did exactly what he was supposed to do and the analyst, not knowing this, dinged him for a missed assignment.
In short, it seems like an ideal defensive metric would remove any subjective analysis entirely but, alas, we aren't even close. As such, we will continue to get evaluations from everyone, including people who have never experienced a lick of organized basketball as well as the people you reference who have experience in the schemes.