Ok say you’re trying to game plan for a seven game series. You model a bunch of different approaches to your offensive and defensive schemes. In each of the models, you assume that three point shots are entirely independent events, each made with a true probability of 0.37 (or whatever the season average was), so a game’s makes are just a single binomial draw.
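To make that concrete, here’s a minimal sketch of the naive model - the 25 attempts per game is a made-up number, I’m just using numpy because it’s handy:

```python
# Naive model: every attempt is an independent coin flip at the season average,
# so a game's makes are just a Binomial draw. Numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

SEASON_PCT = 0.37    # season-long 3P%
SHOTS_PER_GAME = 25  # hypothetical attempts per game
N_GAMES = 7

makes = rng.binomial(SHOTS_PER_GAME, SEASON_PCT, size=N_GAMES)
print(makes / SHOTS_PER_GAME)  # per-game shooting under the "all independent" assumption
```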
Now say instead of doing that, you say each game is going to have its own “true” shooting percentage, which is itself a random draw centered on the season-average 0.37. So in Game 1, the team’s true shooting is actually 0.34. And in Game 2, the team’s true shooting is actually 0.42. The three point shots they take within a game are still independent events, but the probabilities of making them are more similar to each other than to the overall season average.
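Something like this - the Beta distribution and the concentration value are arbitrary choices on my part, just to show the one level of nesting:

```python
# Nested model: each game gets its own "true" 3P% drawn around the season
# average, and shots within that game share it. Beta + concentration are my
# own illustrative choices for "a random draw centered on 0.37".
import numpy as np

rng = np.random.default_rng(0)

SEASON_PCT = 0.37
SHOTS_PER_GAME = 25
N_GAMES = 7
CONCENTRATION = 80  # how tightly game-level shooting clusters around 0.37 (made up)

# Game-level "true" percentages, e.g. 0.34 one night, 0.42 the next.
game_pct = rng.beta(SEASON_PCT * CONCENTRATION,
                    (1 - SEASON_PCT) * CONCENTRATION,
                    size=N_GAMES)

# Within a game, shots are still independent - but they share that game's p.
makes = rng.binomial(SHOTS_PER_GAME, game_pct)
print(game_pct.round(3), makes / SHOTS_PER_GAME)
```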
Now let’s say you’re designing a team, figuring out how many bigs you need, how many 3-and-D guys, etc. And you simulate a bunch of consecutive seven game series. Instead of treating every single shot as an independent event, each series has its own “true” shooting percentage (again a random draw centered on the season average) and each game within that series has its own true shooting percentage (a draw centered on the series average).
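Stacking the two levels looks something like this - again, the Beta parameterization and the concentration numbers are arbitrary, they just control how far series and games wander from the season average:

```python
# Two levels of nesting: series-level "true" 3P% drawn around the season
# average, game-level drawn around its series, shots sharing the game-level p.
import numpy as np

rng = np.random.default_rng(0)

SEASON_PCT = 0.37
SHOTS_PER_GAME = 25
N_SERIES, N_GAMES = 4, 7
K_SERIES, K_GAME = 120, 80  # made-up concentration values

def beta_around(mean, k, size, rng):
    """Draw values centered on `mean`, with spread controlled by `k`."""
    return rng.beta(mean * k, (1 - mean) * k, size=size)

for s in range(N_SERIES):
    series_pct = beta_around(SEASON_PCT, K_SERIES, None, rng)  # series-level draw
    game_pct = beta_around(series_pct, K_GAME, N_GAMES, rng)   # game-level draws
    makes = rng.binomial(SHOTS_PER_GAME, game_pct)             # shots within games
    print(f"series {s}: true {series_pct:.3f}, games {(makes / SHOTS_PER_GAME).round(2)}")
```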
That’s what nested means. It’s entirely uncontroversial. An upper division stats class hammers this problem home. In any simple experimental design, you almost always end up with nested groups - samples that are more like each other than they are like the overall population, usually for no measurable reason. (That doesn’t mean the reason doesn’t exist, just that it’s too hard to measure or control for, and it’s not necessarily of interest for the time being.)
When you ignore the nesting and treat all events as independent, you end up overconfident in your outcomes. You assume that, say, a team that hits 50% of its threes one night can’t possibly do it again. Or that a team that hit a season low from three one night won’t do it again the very next game. Or that a candidate who appears to be down significantly in the polls can’t possibly win.
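You can see the overconfidence directly if you compare how often each model produces an extreme night (all numbers here are illustrative):

```python
# How often does a team hit 50%+ from three in a game under the two models?
import numpy as np

rng = np.random.default_rng(0)
SEASON_PCT, SHOTS, N, K = 0.37, 25, 200_000, 80

# Naive model: every shot independent at the season average.
naive = rng.binomial(SHOTS, SEASON_PCT, size=N) / SHOTS

# Nested model: each game's true percentage wanders around the average first.
game_pct = rng.beta(SEASON_PCT * K, (1 - SEASON_PCT) * K, size=N)
nested = rng.binomial(SHOTS, game_pct) / SHOTS

print("P(>=50% game), naive: ", (naive >= 0.5).mean())
print("P(>=50% game), nested:", (nested >= 0.5).mean())
# The nested model puts noticeably more weight on extreme nights, hot or cold.
```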
Again, the actual reason for the non-independence doesn’t matter (although as folks have speculated above, it would probably be really useful if you COULD measure it - then you might be able to say “tonight we move away from the 3 point offense”). The math is kind of shitty to deal with but it’s completely doable. When you run the models you just account for the fact that three point shots within a game, and games within a series, are more like each other than you’d expect by random chance. Those are called random effects. You can include other stuff too - opponent’s defense, expected shot quality, player usage, etc. - but that’s not what I’m talking about. Those are fixed effects. They’re measurable. They’re repeatable.

The lack of random effects is part of what sunk the banks and what sunk Clinton, and I have a strong suspicion it’s causing problems in game planning for a bunch of NBA teams. Again, I think if the Celtics had had a reasonable expectation that the three point shooting could go the way it did, they might have approached the series differently. Not accounting for the non-independence within games and series made them too confident that shooting percentages would regress. That is purely speculation on my part, but given that it’s been a problem in other sectors for so long (and given that apparently not a lot of people here have even heard of random effects?), I think it could legitimately be a problem in the NBA.
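If you want to see what “random effect for the grouping, fixed effect for the measurable stuff” looks like in practice, here’s a toy version using statsmodels’ linear mixed model. A real make/miss analysis would more likely use a binomial GLMM, and the data below is simulated purely to show the model spec:

```python
# Toy mixed model: "shot_quality" stands in for a measurable fixed effect,
# series membership is the random effect. Data is simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for series in range(20):
    series_effect = rng.normal(0, 0.03)      # unmeasured series-to-series bump (the random effect)
    for game in range(7):
        shot_quality = rng.normal(0, 1)      # stand-in for a measurable covariate
        game_pct = 0.37 + series_effect + 0.02 * shot_quality + rng.normal(0, 0.05)
        rows.append({"series": series, "game_pct": game_pct, "shot_quality": shot_quality})
df = pd.DataFrame(rows)

# Fixed effect: shot_quality. Random effect: a per-series intercept via groups=.
model = smf.mixedlm("game_pct ~ shot_quality", df, groups=df["series"])
print(model.fit().summary())
```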