How good is Al Horford on this team?

HomeRunBaker

bet squelcher
SoSH Member
Jan 15, 2004
30,272
Because certain of these metrics (e.g., defensive RPM) can be used to effectively project actual team results (within a reasonable margin of error). For example, the win projections which I post aren't perfect, but they do reasonably well, having beaten Vegas every year for 4 years now. These are driven in large part by metrics very similar to RPM (I run my own version of RPM, but it's very similar). There are many other win projection systems which work similarly.

I'm sympathetic to the fact that there are elements of teamwork which aren't being captured here, but that's letting the perfect be the enemy of the good. Despite the very real issues with individual player defensive stats, they still work reasonably well for the purposes of forecasting wins. Are they perfect? Certainly not. Is there "any value" there however? Of course there is.
Are you utilizing individual defensive stats to forecast wins, or are they disguised as team defensive stats? What are these numbers showing as to the year-to-year difference in a player's individual defensive numbers after changing teammates such as Kyrie, Crowder, Cousins, Gay, Noel, DeAndre Jordan (entirely new backcourt), Gobert (3 new starters), and others?
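(To make the quoted win-projection claim concrete, here is a minimal, purely hypothetical sketch of how RPM-style player impacts might be rolled up into a team win estimate. Nothing below is the poster's actual model; the roster numbers, the minutes weighting, the ~110 league-average rating, and the Pythagorean exponent are all illustrative assumptions.)

```python
# Hypothetical sketch only -- not the model described in the quote above.
# Assumes each player has a projected net impact per 100 possessions
# (RPM-like) and a projected minutes total for the season.

def project_team_wins(players, pythag_exponent=14.0, games=82):
    """players: list of (net_impact_per_100, projected_minutes) tuples."""
    total_minutes = sum(m for _, m in players)
    # Five players share the floor at any time, so the team's net rating is
    # roughly the minutes-weighted average player impact times five.
    team_net_rating = 5 * sum(impact * m for impact, m in players) / total_minutes
    # Convert net rating to an expected win fraction with a Pythagorean-style
    # formula around an assumed league-average ~110 points per 100 possessions.
    off = 110.0 + team_net_rating / 2
    deff = 110.0 - team_net_rating / 2
    win_pct = off**pythag_exponent / (off**pythag_exponent + deff**pythag_exponent)
    return win_pct * games

# Made-up ten-man rotation that works out to roughly a +2.4 team:
roster = [
    (5.0, 2600), (3.0, 2400), (1.5, 2200), (0.5, 2000), (-0.5, 1900),
    (-1.0, 1700), (-1.5, 1600), (-2.0, 1500), (-2.5, 1400), (-3.0, 1300),
]
print(round(project_team_wins(roster), 1))  # about 47 wins
```

With these assumptions a roughly +2.4 team projects to about 47 wins, which is in line with the usual rule of thumb of roughly 2.5-3 wins per point of net rating.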
 
Last edited:

bowiac

Caveat: I know nothing about what I speak
Lifetime Member
SoSH Member
Dec 18, 2003
12,945
New York, NY
Are you utilizing individual defensive stats to forecast wins, or are they disguised as team defensive stats? What are these numbers showing as to the year-to-year difference in a player's individual defensive numbers after changing teammates such as Kyrie, Crowder, Cousins, Gay, Noel, DeAndre Jordan (entirely new backcourt), Gobert (3 new starters), and others?
I use individual defensive stats to forecast the quality of each team's defense. With one minor exception, I don't use any team defensive stats at all (the one exception is I use team-level stats from preseason games for the preseason-adjustment, but this is mostly because I haven't gotten around to making this more granular yet). I have not found any predictive power to team-level defensive metrics once you already have individual player stats in the model. That's not to say there isn't any predictive power of course - just that I haven't found it.

In addition to generic regression to the mean for all players, I apply an additional regression-to-the-mean factor based on what % of a player's teammates are the same as in the previous two seasons (weighted by minutes), as well as a dummy variable for a coaching change. Effectively, what this means is that anyone changing teams (and thus changing coaches) gets regressed to the mean by about 40%. So if my forecast for you was a +2 defensive impact, but you changed teams (and thus teammates and coaches), then you'd end up at a +1.2 forecast. That number is based on what I've found to be the best fit to minimize errors out-of-sample (i.e., maximize projection accuracy).
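(A minimal sketch of the adjustment described above, for anyone who wants it spelled out: the ~40% figure for a full team change comes from the post, but the way the teammate-continuity share and the coaching-change dummy are combined below, including the 0.7/0.3 split, is an illustrative assumption, not the actual model specification.)

```python
# Rough sketch of the regression-to-the-mean adjustment described above.
# The 40% total for a full team change is from the post; the 0.7/0.3 split
# between roster turnover and the coaching change is assumed for illustration.

def adjust_forecast(raw_forecast, teammate_continuity, coach_changed,
                    max_extra_regression=0.40):
    """
    raw_forecast        -- projected impact before this adjustment (e.g. +2.0)
    teammate_continuity -- minutes-weighted share of teammates retained from
                           the previous two seasons, 0.0 (all new) to 1.0 (same)
    coach_changed       -- True if the player has a new head coach
    """
    turnover_term = 0.7 * (1.0 - teammate_continuity)
    coach_term = 0.3 * (1.0 if coach_changed else 0.0)
    extra_regression = max_extra_regression * (turnover_term + coach_term)
    return raw_forecast * (1.0 - extra_regression)

# A player who changes teams (all-new teammates, new coach) regresses ~40%:
print(round(adjust_forecast(2.0, teammate_continuity=0.0, coach_changed=True), 2))   # 1.2
# A player returning to an unchanged roster and coach keeps the raw forecast:
print(round(adjust_forecast(2.0, teammate_continuity=1.0, coach_changed=False), 2))  # 2.0
```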
 

HomeRunBaker

bet squelcher
SoSH Member
Jan 15, 2004
30,272
I use individual defensive stats to forecast the quality of each team's defense. With one minor exception, I don't use any team defensive stats at all (the one exception is I use team-level stats from preseason games for the preseason-adjustment, but this is mostly because I haven't gotten around to making this more granular yet). I have not found any predictive power to team-level defensive metrics once you already have individual player stats in the model. That's not to say there isn't any predictive power of course - just that I haven't found it.

In addition to generic regression to the mean for all players, I apply an additional regression-to-the-mean factor based on what % of a player's teammates are the same as in the previous two seasons (weighted by minutes), as well as a dummy variable for a coaching change. Effectively, what this means is that anyone changing teams (and thus changing coaches) gets regressed to the mean by about 40%. So if my forecast for you was a +2 defensive impact, but you changed teams (and thus teammates and coaches), then you'd end up at a +1.2 forecast. That number is based on what I've found to be the best fit to minimize errors out-of-sample (i.e., maximize projection accuracy).
See, now this is an example of the type of work I'd expect a team to perform internally...work that someone like yourself is more than qualified to do. I'm referring to the generic numbers that ESPN and others show...how do the players I mentioned above score in their Def RPM numbers before/after their teammates changed? That is my question.
 

Eddie Jurak

canderson-lite
Lifetime Member
SoSH Member
Dec 12, 2002
44,663
Melrose, MA
I use individual defensive stats to forecast the quality of each team's defense. With one minor exception, I don't use any team defensive stats at all (the one exception is I use team-level stats from preseason games for the preseason-adjustment, but this is mostly because I haven't gotten around to making this more granular yet). I have not found any predictive power to team-level defensive metrics once you already have individual player stats in the model. That's not to say there isn't any predictive power of course - just that I haven't found it.
Of course, you pick up team-related factors based on their impact on individual stats, so adding in team defensive metrics directly could be double counting.

In addition to generic regression to the mean for all players, I apply an additional regression-to-the-mean factor based on what % of a player's teammates are the same as in the previous two seasons (weighted by minutes), as well as a dummy variable for a coaching change. Effectively, what this means is that anyone changing teams (and thus changing coaches) gets regressed to the mean by about 40%. So if my forecast for you was a +2 defensive impact, but you changed teams (and thus teammates and coaches), then you'd end up at a +1.2 forecast. That number is based on what I've found to be the best fit to minimize errors out-of-sample (i.e., maximize projection accuracy).
What do you do for players with negative defensive ratings?

It makes total sense to assume that a +2 guy who changes teams might reasonably be expected to lose some value.

It makes less sense to assume that... Isaiah Thomas, say, would be more of a defensive liability had he stayed with the Celtics than he would be to his new team. (There could be team-specific reasons why something like this would be the case, but there's no generic reason to think changing teammates, no matter who they are, would help him.)
 

bowiac

Caveat: I know nothing about what I speak
Lifetime Member
SoSH Member
Dec 18, 2003
12,945
New York, NY
What do you do for players with negative defensive ratings?

It makes total sense to assume that a +2 guy who changes teams might reasonably be expected to lose some value.

It makes less sense to assume that... Isaiah Thomas, say, would be more of a defensive liability had he stayed with the Celtics than he would be to his new team. (There could be team-specific reasons why something like this would be the case, but there's no generic reason to think changing teammates, no matter who they are, would help him.)
I use the same regression to the mean for above and below average players. For the same reason that a +2 player may lose some value when they change teams (because we were capturing some team context there), a -2 player may gain value when they change teams (some of their -2 was actually team context dependent).

I don't see a fundamental difference between above and below average players in this respect. Their ratings are still somewhat context-dependent. It would be a pretty strange result if only good players had context-issues with their RPMs, while the bad players did not.
 

bowiac

Caveat: I know nothing about what I speak
Lifetime Member
SoSH Member
Dec 18, 2003
12,945
New York, NY
See, now this is an example of the type of work I'd expect a team to perform internally...work that someone like yourself is more than qualified to do. I'm referring to the generic numbers that ESPN and others show...how do the players I mentioned above score in their Def RPM numbers before/after their teammates changed? That is my question.
I haven't looked much at the in-season RPM numbers on ESPN, but for the end-of-season numbers, you'll find a similar pattern to what I referenced above: players who change teams can expect to regress about 40% of the way towards the mean between seasons (beyond what can be explained by aging). That's certainly substantial, but I think it still leaves dRPM as a pretty useful stat, even if "only" 60% of it carries over when players change teams.

The in-season numbers, I suspect, are far noisier at this point, however. This is partly because of sample-size issues, but also because of certain decisions the creator of RPM has made about how to handle in-season data (he uses a huge box-score component at this point in the season).
 

DJnVa

Dorito Dawg
SoSH Member
Dec 16, 2010
54,037
Hardwood Paroxysm (@hpbasketball) lists Al as #5 in his MVP rankings, setting off the LeBron fans.
 

Eddie Jurak

canderson-lite
Lifetime Member
SoSH Member
Dec 12, 2002
44,663
Melrose, MA
I don't see a fundamental difference between above and below average players in this respect. Their ratings are still somewhat context-dependent. It would be a pretty strange result if only good players had context-issues with their RPMs, while the bad players did not.
I guess the question is what mean are you regressing towards?

IT is probably a bad example because he could be an outlier, but it is hard to envision why it would be a good idea to assume he'll have more defensive value on the Cavs than he did on the Celtics.
 

bowiac

Caveat: I know nothing about what I speak
Lifetime Member
SoSH Member
Dec 18, 2003
12,945
New York, NY
I guess the question is what mean are you regressing towards?

IT is probably a bad example because he could be an outlier, but it is hard to envision why it would be a good idea to assume he'll have more defensive value on the Cavs than he did on the Celtics.
The mean is the NBA mean (0 in RPM terms). I'm sympathetic to your point that we shouldn't expect IT to be better defensively just by changing teams. The reason is that IT's defensive issues largely stem from physical limitations that basically no amount of coaching or scheme can fix. The counterpoint is someone like Kyrie, who also had horrific defensive numbers. In his case, there's probably a lot of value in assuming he'll improve just by changing teams, so for him, the regression to the mean factor is adding value.

In other words, I agree there's probably further improvement that's available here through a more granular slicing of the data to identify which players were maximizing their talents or not, but it's hard to figure out which players fall into which bucket. IT is a pretty extreme case, so we can place him reasonably easily. Kyrie is a pretty extreme case the other way (where nobody thought it was a physical issue). But most NBA players are somewhere in between, and categorizing them is tricky.
 

Eddie Jurak

canderson-lite
Lifetime Member
SoSH Member
Dec 12, 2002
44,663
Melrose, MA
In other words, I agree there's probably further improvement that's available here through a more granular slicing of the data to identify which players were maximizing their talents or not, but it's hard to figure out which players fall into which bucket. IT is a pretty extreme case, so we can place him reasonably easily. Kyrie is a pretty extreme case the other way (where nobody thought it was a physical issue). But most NBA players are somewhere in between, and categorizing them is tricky.
I'm just struggling to come up with the reality underlying a model that says players who change teams will trend towards average (that is, the better ones will see their performance worsen, the worse ones will see improvement, and those around league average won't be much different either way).

To the extent that individual player performance drives individual defensive ratings, this just doesn't make a whole lot of logical sense, at least to me.

What about regressing players towards their career norms instead? (Speaking theoretically here; not sure this would actually be feasible.)
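(A tiny hypothetical illustration of the two regression targets being debated here: the 40% shrinkage weight mirrors the figure mentioned upthread, and all of the player numbers are made up.)

```python
# Hypothetical comparison of the two regression targets under discussion.
# The 40% shrinkage weight echoes the figure cited earlier in the thread;
# nothing here reflects anyone's actual model.

def shrink(value, target, weight=0.40):
    """Move `value` a fraction `weight` of the way toward `target`."""
    return value + weight * (target - value)

last_season_drpm = -2.0   # a poor defender by dRPM (made-up number)
career_norm_drpm = -1.5   # his minutes-weighted career average (made-up)
league_mean_drpm = 0.0    # RPM is centered on the league average

print(round(shrink(last_season_drpm, league_mean_drpm), 2))  # -1.2, toward the league mean
print(round(shrink(last_season_drpm, career_norm_drpm), 2))  # -1.8, toward his own career norm
```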
 

bowiac

Caveat: I know nothing about what I speak
Lifetime Member
SoSH Member
Dec 18, 2003
12,945
New York, NY
What about regressing players towards their career norms instead? (Speaking theoretically here; not sure this would actually be feasible.)
If a player has spent their entire career with one team, a lot of team-context will be caught up in their career norms. The idea of regression to the mean is effectively an acknowledgement that we don't have a context-neutral talent evaluation for players, so all else being equal, regressing towards the NBA average will serve to minimize errors.

It's not "right", but it's less wrong than the alternatives.
 

JakeRae

Member
SoSH Member
Jul 21, 2005
8,135
New York, NY
As with most of the roster, Horford has seen his RPM drop with the team's recent struggles. He still leads the team, but has fallen to 22nd in the league in RPM (23rd in RPM wins). He's still having the best season of his career as measured by RPM (RPM didn't exist for the first half of his career, so that limitation is significant). Hopefully he can keep it up. This team could be tremendously deep 1-5 next year with continued development from Tatum and Brown, and with Hayward back.
 

Imbricus

Member
SoSH Member
Jan 26, 2017
4,861
Masslive.com, in its All-Star break grades, gives Horford the only "A" among the Celtics (I'm assuming they're grading on a mixture of what the player has done and his potential). Personally, considering where they graded everyone else, I think Horford's mark may be a little generous (Irving was a B+, for example), though he was probably an A through the first 15 or 20 games of the season. They also liked Jaylen at B+. Tatum was a B (I think he deserves higher, but maybe he was a victim of recency bias). I found it surprising that Rozier was a C+ and Smart a D. Both seem low, especially since Baynes got a B-. Anyway, some people here may find the article interesting during the doldrums before games resume.
 

Reverend

for king and country
Lifetime Member
SoSH Member
Jan 20, 2007
64,417
Masslive.com, in its All-Star break grades, gives Horford the only "A" among the Celtics (I'm assuming they're grading on a mixture of what the player has done and his potential). Personally, considering where they graded everyone else, I think Horford's mark may be a little generous (Irving was a B+, for example), though he was probably an A through the first 15 or 20 games of the season. They also liked Jaylen at B+. Tatum was a B (I think he deserves higher, but maybe he was a victim of recency bias). I found it surprising that Rozier was a C+ and Smart a D. Both seem low, especially since Baynes got a B-. Anyway, some people here may find the article interesting during the doldrums before games resume.
I can come up with any number of standards or rubrics that explain maybe half the grades (e.g., compared to potential, or expectations, or true talent level, etc.), as Imbricus mentioned, but I can't make sense of one standard yielding all of these grades.
 

PedroKsBambino

Well-Known Member
Lifetime Member
SoSH Member
Apr 17, 2003
31,335
If a player has spent their entire career with one team, a lot of team-context will be caught up in their career norms. The idea of regression to the mean is effectively an acknowledgement that we don't have a context-neutral talent evaluation for players, so all else being equal, regressing towards the NBA average will serve to minimize errors.

It's not "right", but it's less wrong than the alternatives.
Good to see evolution in the framing and application of all this. I've long been of the view that we can't really do a context-neutral assessment (the substitution patterns create sample-size challenges, and the team's strategies impact outcomes too), and thus positioning the metrics as an imperfect but still helpful assessment is much wiser and more accurate than some past framings. I do believe the analytics help improve decisions, and we also need to keep open the possibility in any particular case that the numbers cannot uncover the true talent level, or that other variables are in play that they cannot capture. This is one of the points Bill James made in his Fog article, which I've cited endlessly around here.