Oh man. I kind of hate these discussions, because there are so many moving parts that by slightly shifting the priority we give to one detail or another, we can "prove" completely opposite positions. However, there is one really big element that I think gets mishandled almost every time. Not to single anyone out, but:
we often assume things are "mistakes" by using hindsight bias.
Hindsight is exactly the right tool to use to evaluate a GM. If well-informed fans have access to information, decision-making tools, and analysis methods that are genuinely just as good as the GM's, so that we can make equally well-informed decisions from newspaper articles and the backs of baseball cards, then the GM is grossly overpaid.
Whatever the yardstick, the ultimate decider of the quality of a GM's work has to be how well he has managed the odds and manufactured luck, maximizing upside and minimizing downside. Analyzing individual trades or deals in this sense is almost as useless as looking at individual at-bats. In aggregate, a good executive is supposed to get luckier* than other smart people with the same information. Otherwise, he's not earning his pay package.
The overall measure of a GM's performance has to be the outcome. It is his job to get luckier* than you or I could predict. Otherwise, all we are left with is deciding how closely his picks matched what mine were at the time, which is frankly foolish and undercuts the entire notion of objective evaluation, unless we are going to extend the argument to say that the entirety of baseball analysis is now a closed science over which I have complete mastery.
Please note that none of this is to say that "hindsight bias" is not a huge factor in armchair GM evaluations. Instead, I am saying that "hindsight bias" is only (or at least predominantly) manifest in analysis that is already too narrow or too speculative in scope to be of much use, anyway.
To put it in stark terms, analyzing a GM by analyzing individual trades is bad practice. Only in really obvious cases can we say definitively that player X was empirically worth more or less to the particular team, in context, than what he cost as of the minute he signed. And "really obvious cases" are not a great measure of the quality of a GM, for really obvious reasons.
When player X signs and gets injured, or under-performs, does that prove that those against the deal were right, or that the team got unlucky? When another player over-performs, or shows surprising durability, does that prove anything? When a player performs exactly to his projected value, does that prove anyone was right? And whose projection is the standard? When angry Bill calls up EEI and says that Wily Mo sucks, does that prove that he was smarter or better-informed than Theo? The answer is obviously no, and the fact is that even when smart, informed analysts do effectively the same thing, the variance is huge.
The point is not whether Theo was right or wrong on X, Y, or Z, but whether he got the entire alphabet better or worse than average, or than anyone else, or by whatever the measure is. And the final answer has to be about the total team performance, not about one trade at a time. A poker player or a fund manager can be wrong on most of his bets but still be massively ahead if he makes the good ones count. Conversely, a fund manager or a poker player who fails to perform better than average is a poor one, regardless of how "smart" he may be, or how clever any particular play was. In endeavors where the variables are too complex and too numerous to completely quantify, smarts are ultimately demonstrated by success or failure, if only because there is no other way to evaluate that is not ultimately speculative or self-referential.
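To make the poker comparison concrete, here is a toy simulation (my own numbers, purely illustrative, not anybody's actual track record): a bettor who loses the large majority of his individual bets, but whose wins are sized to count, ends up massively ahead in aggregate. Scoring him bet-by-bet would have called him "wrong" 70% of the time.

```python
import random

# Toy model: lose 70% of bets at -1 unit, win 30% at +4 units.
# Expected value per bet: 0.30 * 4 - 0.70 * 1 = +0.50 units.
random.seed(1)

WIN_PROB = 0.30
WIN_PAYOUT = 4.0
LOSS_COST = 1.0

def play(n_bets):
    """Return (final bankroll, number of winning bets) over n_bets."""
    bankroll, wins = 0.0, 0
    for _ in range(n_bets):
        if random.random() < WIN_PROB:
            bankroll += WIN_PAYOUT
            wins += 1
        else:
            bankroll -= LOSS_COST
    return bankroll, wins

bankroll, wins = play(1000)
print(f"won only {wins}/1000 bets, yet finished {bankroll:+.0f} units")
```

The same logic applies to the fund manager, and, per the argument above, to a GM's ledger of transactions: the bet-by-bet box score and the bottom line can point in opposite directions.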
No matter how many numbers you run and how fine a point you put on the details, the variance in such things ultimately exceeds the outcomes, such as the margin by which the division is won, or the percentage of the productive roster that is intact come the post-season. If we decide to attribute that entire variance to randomness, then we have defeated the very purpose of the job that we are trying to evaluate. IOW, if the job of the GM is to get the team to the post-season with a certain regularity or whatever, then we have to believe that the goal is achievable. And if that goal is achievable, then that's really the only measure we need of job performance.
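To put a rough number on "the variance exceeds the outcomes," here is a back-of-envelope sketch (treating games as independent coin flips, which real seasons only approximate): a true-talent .550 team's win total carries about six wins of one-sigma noise over 162 games, which is on the order of many division margins.

```python
import math
import random

# Back-of-envelope: season win totals for a fixed-talent team,
# modeled as 162 independent coin flips (a crude simplification).
GAMES = 162
TALENT = 0.550  # an 89-win team, on average

# Binomial standard deviation of the season win total:
sd = math.sqrt(GAMES * TALENT * (1 - TALENT))
print(f"expected wins: {GAMES * TALENT:.1f}, one-sigma noise: +/- {sd:.1f}")

# Simulate ten seasons of the *same* team to see the spread directly.
random.seed(1)
seasons = sorted(
    sum(random.random() < TALENT for _ in range(GAMES)) for _ in range(10)
)
print("ten simulated seasons:", seasons)
```

If six-ish wins of pure noise can decide a division, judging any single transaction in isolation is hopeless; only the aggregated record over time has a chance of outrunning the noise.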
Once we start to triage which details and individual outcomes were the result of luck vs decision-making skill, then we are making the assumption that there exists a complete set of knowable data, and that we are in possession of it. Not only is this false, but the entire presumption underlying this kind of analysis is so incredibly permeable to confirmation bias of every sort that it may as well be unmitigated opinion, regardless of the number of decimal places used.
This is not to diminish the value of advanced decision-making tools or analysis methods, but rather to say that these things are exactly that: tools and methods. We would like to think that the GM is using good ones, and using them appropriately, but they are ultimately in service to an end goal of some sort, and it is the outcome that proves the tool, not the other way around.
*In all cases, I am using "luck" as a shorthand for factors that affect outcome in ways that outside observers could not/did not account for in advance, including, but not limited to, actual luck.