There was a very intriguing post a few weeks ago at the Hardball Times by Derek Carty called "Using FIP to evaluate pitchers? I wouldn't." FIP stands for "fielding independent pitching," the idea that pitchers should be evaluated only on outcomes that don't involve the fielders. That includes HRs allowed. But the article said:
"Here's how things work: a pitcher can influence the rate of fly balls he gives up. By this logic, the more fly balls allowed, the more total balls will clear the fences for home runs (all else being equal). However, while a starting pitcher can control the rate of fly balls allowed, he cannot do a very good job of controlling the rate at which those fly balls become home runs (with very few exceptions).
To put it more simply, starting pitchers don't have any underlying ability to prevent home runs—the best they can do is prevent fly balls. If those fly balls are clearing the fence at too high a rate (or too low), we say that the pitcher has been unlucky (or lucky)."
I am not sure I completely agree with this. It could be that differences in flyballs allowed account for the differences in HR rates across pitchers. But whatever the reason, the year-to-year correlations of HR rates allowed by pitchers, although not as high as those for walk rates and strikeout rates, are not small.
I looked at year-to-year correlations, for various pairs of seasons, for pitchers who faced at least 500 batters in both of two consecutive seasons. The table below summarizes the results. Starting with the 1955 season, I eliminated IBBs from the calculations. HBP were counted as walks in all years. The columns show the year-to-year correlation of the rate allowed for each stat. The last line is the simple average of all the correlations.
Overall, the correlations are much higher for strikeout rates and walk rates (the denominator I used in all cases was batters faced). But the correlations do seem to be getting higher for the HR rates. It was very surprising to see how low they were in some of the earlier years.
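If you want to replicate this, here is a rough sketch of the calculation in Python with pandas. This is just an illustration, not the code I actually ran; the DataFrame layout and column names (pitcher_id, year, bf, so, bb, hr) are hypothetical.

    import pandas as pd

    # Hypothetical DataFrame `df`: one row per pitcher-season with columns
    # pitcher_id, year, bf (batters faced), so, bb, hr.
    def year_to_year_corr(df, stat, min_bf=500):
        # Per-batter rate for the chosen stat (so, bb, or hr).
        rates = df.assign(rate=df[stat] / df["bf"])
        # Shift a copy back one year so season Y lines up with season Y+1.
        nxt = rates.assign(year=rates["year"] - 1)
        pairs = rates.merge(nxt, on=["pitcher_id", "year"],
                            suffixes=("_y1", "_y2"))
        # Require the minimum batters faced in both seasons.
        pairs = pairs[(pairs["bf_y1"] >= min_bf) & (pairs["bf_y2"] >= min_bf)]
        return pairs["rate_y1"].corr(pairs["rate_y2"])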
One more thing I tried (and this really makes me think that we should keep looking at HR rates): I found a high correlation in HR rates from one period to the next when using more years. For that, I found all the pitchers who had 1000+ batters faced in both the 2003-05 period and the 2006-08 period. The correlations for walk rates and strikeout rates from period 1 to period 2 were 0.736 and 0.767, respectively. For HR rates it was 0.505. This seems high enough to say that, yes, pitchers do differ in the HR rates they allow, even if the reason is their flyball rates.
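Here is a sketch of the two-period version of the same calculation, using the same hypothetical DataFrame as above:

    import pandas as pd

    def period_corr(df, stat, years1, years2, min_bf=1000):
        # Pool each period's totals per pitcher, then rate = stat / batters faced.
        def pooled_rate(years):
            g = df[df["year"].isin(years)].groupby("pitcher_id")[[stat, "bf"]].sum()
            g = g[g["bf"] >= min_bf]  # e.g. 1000+ BF in the period
            return g[stat] / g["bf"]
        r1 = pooled_rate(years1)
        r2 = pooled_rate(years2)
        # Keep only pitchers who qualified in both periods.
        both = pd.concat([r1, r2], axis=1, join="inner", keys=["p1", "p2"])
        return both["p1"].corr(both["p2"])

    # e.g. period_corr(df, "hr", [2003, 2004, 2005], [2006, 2007, 2008])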
3 comments:
You hit the nail on the head, Cy: "This seems high enough to say that, yes, pitchers do differ in the HR rates they allow, even if the reason is their flyball rates." Pitchers do differ in their ability to prevent HRs, but this is due to their ability to control their fly ball rates. If you were to look at year-to-year correlations of HR/FB, they would be very low. As I noted in the original article, this applies only to SPs, with a few exceptions. Some RPs follow the rule as well, but exceptions are more likely among relievers.
Nice work :)
-Derek Carty
Dang, that was fast, Derek. Thanks for commenting. Any thoughts on why the HR correlations were so low in those early years I looked at? That has me puzzled. Maybe HR hitting was new and pitchers had not figured it out yet? Maybe they were still such rare events that you would not see much correlation.
I also wonder if I got a fairly high correlation for the two three-year periods because HRs are still such a rare occurrence (compared to walks and strikeouts) that it takes more observations to get a good read on pitchers. Is there any way you could look at the HR/flyball rate for two three-year periods?
Also, maybe we need to see a guy pitch 3-4 years before we know his "true" HR rate.
Hey Cy,
Sorry for the delay in responding here.
What we must remember when conducting any study on MLB players is that there is inherent, tremendous selection bias at play. We are dealing with human beings who are among the best in the world at playing baseball. While we know that pitchers have very little control over their BABIPs, if we were to drop Johan Santana into the middle of a little league season, I can guarantee you that he would post a BABIP well below average. The findings from today's MLB game that we analysts write about can only be assumed to be true for the years and subjects the studies are conducted upon.
This is such an important consideration that we can't even generalize our results onto the minor leagues, much less play from 80 years ago. It's very possible that the .250 BABIP a pitcher like Tim Lincecum posted at Double-A in 2006 was based on skill and not simply luck. If he was a major league caliber pitcher pitching to minor league caliber batters, the rules don't stay the same. The same is true for the game years and years ago.
The suggestions you provide might answer the "why" question, and as a fantasy analyst I really can't offer any others. I've never studied (or really even read anything about) anything other than the present game, but it's very possible that the game was simply different then, and so the statistical rules are different.
As to your second point, you're correct that the more observations of something we have, the more we can say about a player's true talent level for that particular skill. If we were creating a projection system for Ks and HRs, there would be a much heavier regression-to-the-mean component for HRs, because we have less information about a pitcher's HR ability than about his K ability. Adding more years would decrease that component, but a natural one would be introduced because most pitchers regress to the mean over time.
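To make the regression-to-the-mean idea concrete, here is a toy sketch of that kind of calculation; the function and the numbers are made up for illustration, not anyone's actual projection system.

    def shrink_to_mean(rate, bf, league_rate, regression_bf):
        # Mix the observed rate with the league rate; regression_bf acts like
        # a number of league-average "phantom" batters. Rarer events like HRs
        # get a bigger regression_bf, so the estimate leans harder on the mean.
        return (rate * bf + league_rate * regression_bf) / (bf + regression_bf)

    # Made-up example: a pitcher allowing HRs at .035 per BF over 600 BF,
    # league rate .028, heavy regression for HRs:
    # shrink_to_mean(0.035, 600, 0.028, 1200)  -> about .030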