And people say I shy away from the controversial topics.
This analysis is at the team level. Let's start with runs scored. The following regression equation shows the relationship between team runs per game and team OPS:
R/G = 13.02*OPS - 5.04
I used all teams from 1996-2009. For OBP I used (H + BB)/(AB + BB). The r-squared was .897 and the standard error was .1608 runs per game, which works out to 26.05 runs over a 162-game season.
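If you want to reproduce this kind of fit, here is a minimal sketch. It assumes a hypothetical file teams.csv of team-season totals with columns R, G, AB, H, BB, TB (the file and column names are made up for illustration) and uses the same simplified OBP as above, so the exact numbers will depend on your data source.

```python
# Minimal sketch of the R/G vs. OPS regression, assuming a hypothetical
# teams.csv with one row per team-season and columns R, G, AB, H, BB, TB.
import numpy as np
import pandas as pd

teams = pd.read_csv("teams.csv")
obp = (teams.H + teams.BB) / (teams.AB + teams.BB)   # simplified OBP, as in the post
slg = teams.TB / teams.AB
ops = obp + slg
rpg = teams.R / teams.G

slope, intercept = np.polyfit(ops, rpg, 1)           # least-squares line R/G = a*OPS + b
resid = rpg - (slope * ops + intercept)
r2 = np.corrcoef(ops, rpg)[0, 1] ** 2
se = np.sqrt((resid ** 2).sum() / (len(rpg) - 2))    # standard error of the regression
print(f"R/G = {slope:.2f}*OPS + ({intercept:.2f})")
print(f"r-squared = {r2:.3f}, SE = {se:.4f} runs/game = {162 * se:.2f} runs/season")
```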
Now when I used 1.8*OBP + SLG (call it adjusted OPS or ADJOPS) instead of OPS, the equation was:
R/G = 10.26*ADJOPS - 5.68
The r-squared was .907 and the standard error was .153 runs per game, which works out to 24.77 runs per season. So using 1.8*OBP + SLG is a bit better: the standard error is about 5% lower, or 1.28 runs per season lower. At roughly 10 runs per win, that is worth about a tenth of a win.
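Just to show where the season-level numbers come from, here is the arithmetic on the reported standard errors, using the usual rule of thumb of roughly 10 runs per win (small differences from the figures above are rounding in the reported SEs):

```python
# Convert the reported per-game standard errors to season terms.
se_ops, se_adj = 0.1608, 0.153            # SEs in runs per game
print(f"OPS:    {162 * se_ops:.2f} runs per season")            # about 26.05
print(f"ADJOPS: {162 * se_adj:.2f} runs per season")            # about 24.8
print(f"Improvement: {100 * (se_ops - se_adj) / se_ops:.1f}%")  # about 5%
print(f"In wins (at ~10 runs per win): {162 * (se_ops - se_adj) / 10:.2f}")  # about 0.13
```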
I have also used a team's OPS differential to predict winning percentage. That is, its hitting OPS - the OPS it allows its opponents. Here is the regression equation:
Pct = .5 + 1.26*OPSDIFF
I used all teams from 1989-2002. The r-squared was .798 and the standard error was .0311 in winning percentage, which works out to 5.04 wins over a 162-game season.
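The differential regression can be sketched the same way. Again this assumes a hypothetical teams.csv, here with wins, losses, and both the team's own and its opponents' totals; the column names are made up for illustration.

```python
# Sketch of the Pct vs. OPS-differential regression, assuming a hypothetical
# teams.csv with columns W, L, AB, H, BB, TB and opponent columns
# OPP_AB, OPP_H, OPP_BB, OPP_TB (one row per team-season).
import numpy as np
import pandas as pd

def ops(h, bb, ab, tb):
    return (h + bb) / (ab + bb) + tb / ab       # simplified OBP + SLG

teams = pd.read_csv("teams.csv")
ops_for = ops(teams.H, teams.BB, teams.AB, teams.TB)
ops_against = ops(teams.OPP_H, teams.OPP_BB, teams.OPP_AB, teams.OPP_TB)
opsdiff = ops_for - ops_against
pct = teams.W / (teams.W + teams.L)

slope, intercept = np.polyfit(opsdiff, pct, 1)
resid = pct - (slope * opsdiff + intercept)
se = np.sqrt((resid ** 2).sum() / (len(pct) - 2))
print(f"Pct = {intercept:.3f} + {slope:.2f}*OPSDIFF")
print(f"SE = {se:.4f} in winning pct = {162 * se:.2f} wins per season")
```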
Now the same analysis with the differential using 1.8*OBP + SLG (call it the ADJOPS differential):
Pct = .5 + .986*ADJOPSDIFF
The r-squared was .815 and the standard error was .0298. That works out to 4.83 wins per season. So again, as in the analysis of runs scored, 1.8*OBP + SLG does just a bit better.
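To put the slopes in concrete terms, here is what a given differential implies over a 162-game season under each equation. The +.030 OPS differential and +.040 ADJOPS differential are made-up illustrative values, not the same team measured two ways.

```python
# What the fitted slopes mean over a 162-game season (illustrative arithmetic,
# using made-up differentials, not actual teams).
games = 162
for name, slope, diff in [("OPS diff", 1.26, 0.030), ("ADJOPS diff", 0.986, 0.040)]:
    pct = 0.5 + slope * diff
    print(f"{name} of +{diff:.3f}: Pct = {pct:.3f}, about {games * pct:.0f} wins")
# Per .010 of differential: 1.26*.010*162 ~= 2.0 wins (OPS) and
# 0.986*.010*162 ~= 1.6 wins (ADJOPS); the slopes differ partly because
# 1.8*OBP + SLG runs on a higher scale than plain OPS.
```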
One reason why 1.8*OBP + SLG does only slightly better here is probably that the range of OBP and SLG across teams is not that great. For individual players, the ranges are much wider, so it might make more sense to use 1.8*OBP + SLG instead of OPS in those cases.
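A quick made-up example of why the weighting matters more for individuals: two hypothetical hitters with the same OPS but very different shapes.

```python
# Two hypothetical hitters with identical OPS; the 1.8 weight on OBP separates them.
hitters = {"high-OBP type": (0.400, 0.450), "high-SLG type": (0.330, 0.520)}
for name, (obp, slg) in hitters.items():
    print(f"{name}: OPS = {obp + slg:.3f}, 1.8*OBP + SLG = {1.8 * obp + slg:.3f}")
# Both come out at an .850 OPS, but the adjusted scale rates the
# high-OBP hitter 1.170 to 1.114 -- a gap plain OPS hides.
```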