For 2010-2012, here are the correlations between each stat and team runs per game (that is 90 team-seasons):

OPS: 0.95277

wOBA: 0.95058

This surprised me. wOBA definitely looks like it should do better than OPS since OPS is just OBP + SLG and OBP should get a bigger weight. wOBA is fairly sophisticated, providing a different run value for several different events.

In regressions, here are the standard errors on runs per game from each stat

OPS: 0.1364174

wOBA: 0.1394665

OPS is lower, so it is slightly more accurate.
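The comparison described above can be sketched in a few lines of code. The team-season data here are hypothetical placeholders, not the actual 2010-2012 numbers:

```python
import numpy as np

def compare_stat(stat, runs_per_game):
    """Return (correlation, regression standard error) of stat vs. runs/game."""
    stat = np.asarray(stat, dtype=float)
    rpg = np.asarray(runs_per_game, dtype=float)
    r = np.corrcoef(stat, rpg)[0, 1]
    # Simple linear regression: runs/game = a + b*stat
    b, a = np.polyfit(stat, rpg, 1)
    resid = rpg - (a + b * stat)
    # Standard error of the regression (n - 2 degrees of freedom)
    see = np.sqrt(np.sum(resid ** 2) / (len(rpg) - 2))
    return r, see

# Illustrative fake data: six team-seasons
ops = [0.700, 0.728, 0.745, 0.760, 0.780, 0.810]
rpg = [3.9, 4.2, 4.3, 4.5, 4.7, 5.1]
r, see = compare_stat(ops, rpg)
```

Running the same function on wOBA for the same team-seasons and comparing the two standard errors is the whole test.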

Since OPS doing a bit better is not what I expected, I looked at the years 2007-09 as well, and the results were just about the same, with OPS coming out on top by a very slim margin. Maybe I did something wrong, so if anyone else has done an analysis like this and got something different, please let me know.

1.8*OBP + SLG is said to be a good approximation of wOBA. The funny thing is that when I compared 1.8*OBP + SLG to OPS, OPS came in second (see **OPS vs. 1.8*OBP + SLG**), so I certainly expected wOBA to do better here.

**Update, July 30, 10:57 am central time:** I used the years 2003-2012. So that is 300 team seasons. Here are the correlations between each stat and team runs per game:

OPS: 0.95584

wOBA: 0.952973

1.8*OBP + SLG: 0.95714

So 1.8*OBP + SLG is better than wOBA. Not what I expected.

There is an interesting discussion of issues like this at Tom Tango's site. See **wOBA v Runs / PA, 2003-2012, Team Offense**.

Here is what **Fangraphs** says about wOBA, pasted from their site:

"Weighted On-Base Average combines all the different aspects of hitting into one metric, weighting each of them in proportion to their actual run value. While batting average, on-base percentage, and slugging percentage fall short in accuracy and scope, wOBA measures and captures offensive value more accurately and comprehensively.

The wOBA formula for the 2012 season was:

*wOBA = (0.691×uBB + 0.722×HBP + 0.884×1B + 1.257×2B + 1.593×3B + 2.058×HR) / (AB + BB – IBB + SF + HBP)*"
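The quoted 2012 formula translates directly into a function. The weights and denominator below follow the Fangraphs formula exactly; the example stat line is made up for illustration:

```python
def woba_2012(uBB, HBP, B1, B2, B3, HR, AB, BB, IBB, SF):
    """wOBA using the 2012 FanGraphs linear weights quoted above."""
    num = (0.691 * uBB + 0.722 * HBP + 0.884 * B1
           + 1.257 * B2 + 1.593 * B3 + 2.058 * HR)
    return num / (AB + BB - IBB + SF + HBP)

# Hypothetical player: 50 unintentional walks, 5 HBP, 100 singles,
# 30 doubles, 3 triples, 20 HR, 500 AB, 55 BB (5 intentional), 5 SF
w = woba_2012(50, 5, 100, 30, 3, 20, 500, 55, 5, 5)
```

Note that intentional walks are excluded from both the numerator (uBB) and, via the – IBB term, the denominator.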

## 9 comments:

Maybe it's just random? 90 seasons isn't a whole lot, and OPS is pretty good, almost as good as wOBA.

I think that is why I looked at 2007-2009 and got the same results, so that made 180 team-seasons. But maybe more should be looked at.

I used the years 2003-2012. So that is 300 team seasons. Here are the correlations between each stat and team runs per game:

OPS: 0.95584

wOBA: 0.952973

1.8*OBP + SLG: 0.95714

So 1.8*OBP + SLG is better than wOBA. Not what I expected.

Is there a way to get a standard error for the correlation coefficient?

What if you do each year separately, take the 10 coefficients, and calculate the SD of those. Then, the overall coefficient's SD is 1/3 that size, right? I bet that SD is large compared to the difference between .95584 and .95297.

Just a guess.

There probably is a way to get a standard error for the correlation coefficient. I think I would have to look in one of my stats textbooks.

If I did those 10 years separately like you say and got the result you suggest, what would it tell us?

A large SD would tell us that there's lots of randomness involved, so it's hard to tell the difference between the two stats with a sample size that small.

Or, what if you do this? For each season, see which of the two regression equations predicts team runs better for that year. Maybe wOBA will go 160-140, or something.

My guess is that OPS will predict just a bit better because the standard error of the regression is just a bit lower. If I get a chance, I will try what you say.

OPS was closer in 165 cases. But in around 260 cases, OPS and wOBA are within .10 runs per game of each other, and in about 160 cases they are within .05 of each other.

So, 165-135. Hmmm, let me see ... the SD of that is 8.6 games, so that's a bit less than 2 SD difference.

More significant than I thought!
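The arithmetic in that last exchange checks out; under a 50/50 null for 300 team-seasons, the binomial SD of the win count is:

```python
import math

n, wins = 300, 165
sd = math.sqrt(n * 0.5 * 0.5)   # binomial SD = sqrt(n*p*(1-p)) ≈ 8.66
z = (wins - n / 2) / sd         # 165 wins is about 1.73 SDs above even
```

So 165-135 sits just under the conventional 2-SD significance threshold, as the comment says.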
