NBA Draft Game Thread (Spoilers allowed)

Eddie Jurak

canderson-lite
Lifetime Member
SoSH Member
Dec 12, 2002
44,715
Melrose, MA
bowiac said:
One question I'm interested in getting thoughts on is what constitutes "value" from a draft pick. I don't mean the question of VORP vs. RAPM etc... I mean, what are you aiming for? Possibilities:
 
1) Total value seasons 1-4. These are the rookie contract years.
 
2) Career value. Bird rights, RFA, and other cap quirks mean there's a lot of value to drafting a good player because of "incumbency", and this lasts beyond the rookie contract.
 
3) Peak value (best season, average of best 3 seasons, etc...). RAPM hates Durant as a rookie for instance, like one of the worst players in the entire league. Let's say that's correct, and he was terrible. Who cares? Insofar as you're trying to win a title, you mostly just care about peak performance. The lottery exacerbates this of course - if Durant hadn't been so terrible, maybe Westbrook is somewhere else. More generally, almost nobody's rookie season matters. This isn't baseball or football, where impact rookies happen all the time. NBA impact rookies are super rare.
 
My current preference is to look at seasons 3-7 I think - that's about when you hope to get good seasons out of a player, both max guys, and sub-max guys. But I'm interested in other thoughts.
What about taking a player's full career into account, but weighting the years differently? As you point out, incumbency has value. But not full value.
 
Also, should there just generally be a discount rate?  Near-term performance is worth more than future performance by some fractional amount (in general, teams would not trade a 3-win player in year 1 for a player with identical performance in year 2 or year 3).
 
Finally, rather than dropping the rookie year, what about taking the best 3 of 4 years (or whichever range of years you decide is best)?  In practice, for most players, "best 3 of 4" would be years 2-4 but that may not be true for everyone (particularly those who get hurt).
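The value definitions being kicked around above (rookie-window total, discounted career value, best 3 of 4) can be sketched in a few lines of Python. All per-season win values and the 0.9 discount rate are invented for illustration; nothing here comes from real player data.

```python
# Sketch: three ways to score a drafted player's value, using made-up
# per-season win totals. The 0.9 discount rate and all numbers below
# are illustrative placeholders, not real data.

def rookie_window(seasons, n=4):
    """Total value over the rookie contract (seasons 1..n)."""
    return sum(seasons[:n])

def discounted_career(seasons, rate=0.9):
    """Career value with near-term seasons weighted more heavily."""
    return sum(v * rate ** i for i, v in enumerate(seasons))

def best_k_of_n(seasons, k=3, n=4):
    """Best k of the first n seasons -- drops one bad (or injured) year."""
    return sum(sorted(seasons[:n], reverse=True)[:k])

# A slow starter (Durant-style rookie year) vs. a steady contributor:
slow_start = [-1, 4, 7, 9, 10, 10, 8]
steady     = [ 3, 4, 4, 5,  5,  4, 4]

for name, s in [("slow_start", slow_start), ("steady", steady)]:
    print(name, rookie_window(s), round(discounted_career(s), 1), best_k_of_n(s))
```

Note how the metrics diverge: the slow starter only narrowly beats the steady guy over the rookie window (19 vs. 16), but pulls far ahead on discounted career value, and best-3-of-4 erases his bad rookie year entirely.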
 

PedroKsBambino

Well-Known Member
Lifetime Member
SoSH Member
Apr 17, 2003
31,359
MainerInExile said:
 
Right, but we have no idea what is luck and what is skill.  Small sample sizes may mean the expected number of hits is 1 even with skill.  But for all we know, the GM is throwing darts at a list of players.  Compromise: 0.5*(regressed model) + 0.5*(unregressed model).
 
There's a few constraints here, as several have noted and many (though clearly not all) appreciate. 
 
1) In any given year, and at any given spot in the draft, a GM can only select the best guy available.  
 
2) Everyone agrees that picking Kawhi is better than picking Big Baby; it's bizarre to depict this as a choice between 'upside' and 'extra guy' when literally no one in the thread has said anything in favor of the latter.
 
3) Many selections are not between 'Kawhi' and 'Big Baby' because there isn't a Kawhi available -- and we need to have a way to credit a GM for picking 'Big Baby' instead of a stiff, which is a good pick given the options.
 
4) The N of picks by a GM during their career is small.  Ainge has, what, 40 picks, and that's likely at the highest end of current GMs.  Many likely have 20 or fewer.  Many of those picks certainly generate 0 VORP/WAR.  So the total VORP/WAR available from all picks isn't a huge number, and that's even more true when we are using VORP/WAR over average for slot.
 
5) Because the N of picks is small, and the VORP/WAR is limited (and the VORP/WAR over slot is even smaller), the impact of getting 1-2 'Kawhi' picks correct on the 'ranking' of the GM is very large.  If we were trying to figure out 'who has made the most impactful picks' (which is what bowiac designed) this would be fine---the impact of a Kawhi is in fact greater.  No one has really disputed that, I don't think.
 
However, remember point 1 above:  each GM can only pick the best guy available at their slot each year, and most of the time there isn't a Kawhi pick available.   So the question is whether picking the best guy available (or closer to the best guy available) represents 'good drafting' given the constraints of the actual draft pool or not.   To me, it's unrealistic to say that a GM should do better than the best guy available at their slot---which is the impact of the method bowiac used---even though we can also say a different GM picking in a different slot (and thus having different options) might generate more value picking someone who ends up being better. 
 
If I were going to put a lot more time into it (which I almost surely will not, but someone who is a spreadsheet guy might choose to) I'd think about a couple different measures.  One would be to smooth out the impact of the 'Kawhi' picks to better assess average pick value, on the theory that we are assessing ability to make EACH pick, and thus don't want a single great pick to overwhelm the others.  This is imperfect, but better than what we have because it reduces the chance those outliers give a false picture of true talent for picking players.
 
I'd also think about comparing picks to who was selected after, and perhaps calculating what % of available value each GM got each pick.  That might also need smoothing, because it's going to kill the guys who missed on 'Kawhi' perhaps more than it should, but I'd try and see what picture that presented.  I think one can argue that the best drafter likely claims the highest % of available value each pick---and thus, this is how best to look at a complex problem.  If someone wants to argue we should expect the best drafter to know when to trade up and get the huge-value 'Kawhi' picks too, that's not unreasonable--but I'm not sure how to measure that other than 'total impact of picks' and that's a little different to me.
 
I might also look at total value from picks, to see if there were big differences, and then look at why.  If we found a GM who clearly had a strategy of 'swinging for the fences' with high-risk/high-reward picks and hit on enough, I'd think about calling that a skill, not just chance.  Not sure we have the sample to do so, but perhaps.  So, the approach bowiac presented is a reasonable way to look at this, and I'd just go deeper into it than looking at the total, because I'm interested in trying to understand what is skill and what is likely just fortune (to the limited degree we can tell).
 
To me, all of those tell part of a story.  It's not as simple as 'high upside vs safe' because I'm not sure we can say GMs really know the difference when they pick, or have a repeatable skill at doing so...and thus, to me, we should look at the data and see if that shows up or not.  I think we want to look at some different ways of assessing quality and impact of picks and see what the net is.
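For anyone inclined to be the spreadsheet guy: the two measures sketched above (a smoothed total that caps outlier picks, and the average % of available value captured per pick) might look something like this. Every pick value here is invented for illustration, not real draft data.

```python
# Sketch of two of the measures above, on made-up picks. Each pick is
# (player_value, best_value_still_on_board); all numbers are hypothetical.

picks = [
    (8.0, 30.0),   # took a solid guy, missed a 'Kawhi' on the board
    (5.0,  6.0),   # near-best available
    (0.0,  2.0),   # a miss in a weak slot
    (30.0, 30.0),  # the 'Kawhi' hit
]

def capped_total(picks, cap=10.0):
    """Smoothed total: cap each pick so one outlier can't dominate."""
    return sum(min(v, cap) for v, _ in picks)

def pct_available(picks):
    """Average share of the best value still on the board captured per pick."""
    shares = [v / best for v, best in picks if best > 0]
    return sum(shares) / len(shares)

print(capped_total(picks))
print(round(pct_available(picks), 3))
```

With these toy numbers the cap turns the 30-value 'Kawhi' into a 10, so the total (23) reflects four picks rather than one; the %-of-available measure (0.525) punishes the slot where a 'Kawhi' was passed over, which is exactly the "kills the guys who missed" concern noted above.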
 

PedroKsBambino

Well-Known Member
Lifetime Member
SoSH Member
Apr 17, 2003
31,359
bowiac said:
So, first, all those flyers are going to count against him here - they're not "free" as it were. But in the abstract, let's say you draft LeBron in the second round, and he's such a great talent that he overwhelms all the misses. Then yes, that guy would rate better than GM A.
 
The issue is:
 
1) We don't know to what degree that's sheer luck. Some of it probably is, but there's also a skill element. We regress something like BABIP to the mean because we have a pretty good idea to what degree it's luck, and to what degree it's not. I have no clue to what degree drafting Kawhi Leonard is luck. Nor has anyone suggested a good way to find out. There's no way to use "math" to solve this problem without figuring out a method to figure out what we're regressing towards and how much.
 
2) By regressing to the mean (let's say 50%), you're going to end up rewarding GMs who consistently draft low-upside talent. By doing so, they are more likely to get a series of small "hits", but losing out on the big wins. Now you're taking away part of the main downside of that strategy (the lack of impact talent). There is skill in picking an NBA-quality player, but also skill in having a good draft strategy in the first place.
 
I think both these issues are pretty important. Issue #2 is particularly salient with respect to Ainge I suspect (as it seems he does draft a lot of low-upside talent). Given the importance of impact talent in the NBA, I'm not sure how smart that is.
 
Number 2 is an assumption you have made about what constitutes 'good' and also about how the data 'must' look at the end of the analysis. This makes you, in effect, Exponent seeking to prove a predetermined conclusion.  Which is ok, but is precisely why I've said you need to be able to step back from the assumptions, do the math, and then see what the story is once we do.  If the analysis actually ends up with lower-upside guys at the top, we can decide whether to adjust it or whether that is actually valuable for reasons that were not immediately apparent.  But refusing to do the analysis because the answer might surprise you is not how real analysis gets done, imo.
 
I think in this case we'd smooth the numbers because we recognize there is huge variability and the relationships are uncertain; we also do have an observed mean value to regress towards---though it is imperfect, obviously.  
 
To put those together, we can regress BABIP today because Voros did blue-sky thinking to help define what part of BABIP is luck and what is skill.  The assumptions being imposed here would prevent us from ever getting that piece in place, and that's why I think we should open the aperture.
 
Just since there has been some trouble reading and processing, I'll be extra explicit:  I think it is more valuable to get the high-upside guy, but I also think we need to do a bunch of things to determine whether, and in what part, doing that a time or two represents skill or luck.  We also need to be able to manage the reality of who is available at each pick for each GM, which is a variable (not a constant) across GMs.
 

bowiac

Caveat: I know nothing about what I speak
Lifetime Member
SoSH Member
Dec 18, 2003
12,945
New York, NY
I'm not sure why I'm bothering with you again, but here goes:
PedroKsBambino said:
Number 2 is an assumption you have made about what constitutes 'good' and also about how the data 'must' look at the end of the analysis. This makes you, in effect, Exponent seeking to prove a predetermined conclusion.  Which is ok, but is precisely why I've said you need to be able to step back from the assumptions, do the math, and then see what the story is once we do.  If the analysis actually ends up with lower-upside guys at the top, we can decide whether to adjust it or whether that is actually valuable for reasons that were not immediately apparent.  But refusing to do the analysis because the answer might surprise you is not how real analysis gets done, imo.
1) What is the assumption here? Please articulate it. I'm not making any assumptions here, so I want to understand what part is confusing you.
 
2) Please stop saying "do the math", and explain what that means. Do what math?
 

PedroKsBambino

Well-Known Member
Lifetime Member
SoSH Member
Apr 17, 2003
31,359
You said:
 
 
 
By regressing to the mean (let's say 50%), you're going to end up rewarding GMs who consistently draft low-upside talent. By doing so, they are more likely to get a series of small "hits", but losing out on the big wins. Now you're taking away part of the main downside of that strategy (the lack of impact talent). There is skill in picking an NBA-quality player, but also skill in having a good draft strategy in the first place.
 
There are multiple assumptions here, which were all noted previously.  One is that regressing will reward GMs who consistently draft low-upside talent.  A second is that this will result in small 'hits' and losing out on big wins.  A third (which is implicit) is that going for big wins is a better draft strategy.  Whether or not I agree with these, they are assumptions, and they are driving what you've done (which you seem not to recognize).
 
I have explained the math I believe should be done a couple times.  You are, of course, free not to do it, or continue to disingenuously suggest it has not been stated.
 

PedroKsBambino

Well-Known Member
Lifetime Member
SoSH Member
Apr 17, 2003
31,359
bowiac said:
Oh, I see. You just don't understand anything about regressing to the mean (or maybe your confusion goes deeper?). Hint: it causes big results to turn out smaller. This isn't an assumption, and it's not predetermining the conclusion. It's hard to discuss your second point, since you won't articulate it.
 
I understand regression.  I also understand that without running the numbers we actually don't know how different distributions of results will aggregate.  There are multiple years with multiple results in the dataset we're discussing.  So if the question were "what will the impact be on a single high-impact pick," your response would make sense.  But the assumption you made is not about that---it is about how that regression would play out across a whole lot of picks with different outcomes, and then apply to specific GMs.  That is (in fact) more complicated than saying regressing a single number will make it smaller.
 
In addition, I noted assumptions about what constitutes 'good' that have nothing to do with regressing a single number.
 
A different approach is to assume we know the answer and then refuse to do anything that might change that answer.
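For what it's worth, the 50%-regression question is easy to run on toy numbers. Here are two invented pick distributions, a boom/bust drafter and a steady one; everything below (pick values, the league-mean value) is a placeholder, not real data.

```python
# Toy run of 50% regression to the mean across two invented pick
# distributions. All values, including the league mean, are made up.

def regress(values, mean, weight=0.5):
    """Pull each observed pick value `weight` of the way toward the mean."""
    return [weight * v + (1 - weight) * mean for v in values]

boom_bust = [20.0, 0.0, 0.0, 0.0]   # one 'Kawhi', three whiffs
steady    = [4.0, 4.0, 4.0, 4.0]    # four modest hits

league_mean = 2.0  # hypothetical average value per pick
print(sum(regress(boom_bust, league_mean)))  # raw total was 20
print(sum(regress(steady, league_mean)))     # raw total was 16
```

With these particular numbers the boom/bust GM still ranks ahead after regression (14 vs. 12), but his raw 4-win lead is cut in half -- which is some support for both sides of the argument above: regression does compress the big hit, and how that shakes out across real GMs depends on the actual distributions.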
 

luckiestman

Son of the Harpy
SoSH Member
Jul 15, 2005
32,814
This process needs clearer definitions and theory. It reads too much like guys trying to prove their priors by searching for a model.
 
Before a single regression is run, the question should be clear. 
 
Once the question is clear, the theory should be developed. 
 
Once it is decided that the theory seems good, then data can be analyzed and the results are the results.
 
The result then only answers the question that was posed under the given assumptions. 
 
If the result seems crazy, the process should start over at step one. 
 
This seems like a nice exercise but I think the data is too scarce to get anything meaningful. 
 

bowiac

Caveat: I know nothing about what I speak
Lifetime Member
SoSH Member
Dec 18, 2003
12,945
New York, NY
luckiestman said:
This process needs clearer definitions and theory. It reads too much like guys trying to prove their priors by searching for a model.
I don't think there's any of that going on. As noted, I'm searching for advice on what a good theory is (and avoiding posting much in the way of results until I get some clarity there).

There's not much in the way of results yet, but the "first run" (straight VORP minus expected VORP) mostly passes the smell test. I don't think that's really all that good an approach, which is why I'm seeking advice on how to refine it.
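For concreteness, that "first run" metric (actual VORP minus the historical expectation for the slot, summed per GM) could be sketched like this. The expected-VORP-by-slot table, the GM names, and the picks are all invented placeholders.

```python
# Sketch of the 'first run' metric: actual VORP minus the expected VORP
# for the draft slot, summed per GM. The expectation table and the picks
# below are made up for illustration.

expected_vorp = {1: 15.0, 5: 8.0, 15: 3.0, 30: 1.0, 45: 0.5}

gm_picks = {
    "GM A": [(5, 20.0), (30, 0.0)],   # (slot, actual VORP)
    "GM B": [(1, 15.0), (15, 4.0)],
}

def value_over_slot(picks):
    """Sum of (actual - expected-for-slot) across a GM's picks."""
    return sum(actual - expected_vorp[slot] for slot, actual in picks)

for gm, picks in gm_picks.items():
    print(gm, value_over_slot(picks))
```

Note this is exactly the approach critiqued upthread: one big over-slot hit (GM A's pick at slot 5) swamps everything else, which is why smoothing or a %-of-available-value variant was suggested.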