MLB Hitters Are Not Living Up to Their Expected Offensive Production

When perfect contact is no longer perfect, maybe it’s time we question everything we think we know about baseball—especially the baseball.

Welcome to The Opener, where every weekday morning during the regular season you’ll get a fresh, topical story to start your day from one of SI.com’s MLB writers.

You see a ball that has been hit squarely, going back, back, back. You hear it. This is a home run. How could it not be? It’s gone. It simply has to be... until it dies on the track, nestled comfortably in the glove of a waiting outfielder.

If you’ve watched, oh, three innings from this season of MLB, you’ve seen this scenario play out at least once.

Yankees second baseman Gleyber Torres has a .222 batting average with a .314 expected average. His .444 actual slugging percentage is much lower than his expected slugging (.619).

Balls that seem as if they should be hits are instead ending up as outs. That’s clear in the frustrations voiced by some players and coaches—who have shared their doubts about the quality of the baseball itself. But it’s even clearer in the statistics: Specifically, the expected statistics, the Statcast numbers such as xBA and xSLG that show what a hitter’s performance “should” be based on batted-ball data. At the individual level, a gap between expected and actual stats can tell us about a player who is underperforming or overperforming, and what we should expect moving forward. At the league-wide level, however, there isn’t usually much of a gap at all: In a sample that large, what you “should” see is generally more or less what you actually do.

This year looks a bit different right now.

Take a look. This is how league-wide expected stats have compared to actual stats over the last few seasons:

At a glance: Typically, there’s little difference between what the data predicts and what actually happens on a league level. Yet this year, there’s a comparatively huge one. (The gap between real and expected batting average this season is equivalent to the difference between what would be the worst league offense in history and what would be the best in years.) But it’s not quite that simple! Here’s a breakdown of what we can—and can’t—know from the expected stats so far.

Is This Really the Biggest Gap We’ve Seen Between Expected and Actual Stats?

This might seem obvious from the chart above: Yes, duh, the gap is huge. But the real answer is… kind of, maybe, but not quite.

For context, these expected stats come from taking basic information about an individual batted ball—such as exit velocity, launch angle and the ballpark where it was hit—and comparing that to past data to determine the most likely outcome. But the key bit here is the data used to make those comparisons. The league calculates the baseline for that twice per year, according to MLB stats analyst Mike Petriello: once at the All-Star Break and once more after the end of the season. This means that before the All-Star Break—like, say, right now!—the baseline actually comes from past years.
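That comparison step can be sketched in code as a nearest-neighbor lookup over historical batted balls. To be clear, this is a hypothetical illustration, not MLB’s actual model: the `expected_ba` function, its distance weights and the toy `baseline` sample are all invented for this sketch.

```python
# Hypothetical sketch of an expected-batting-average lookup.
# NOT Statcast's real model: the actual system compares each batted ball
# against a baseline sample that MLB recalculates twice a year (at the
# All-Star Break and after the season ends).

def expected_ba(exit_velo_mph, launch_angle_deg, history, k=50):
    """Estimate hit probability for one batted ball from the k most
    similar batted balls in a baseline sample.

    history: list of (exit_velo_mph, launch_angle_deg, was_hit) tuples.
    """
    def distance(ev, la):
        # Scale each axis so neither exit velocity nor launch angle
        # dominates the similarity measure (weights are arbitrary here).
        return ((ev - exit_velo_mph) / 5.0) ** 2 + ((la - launch_angle_deg) / 3.0) ** 2

    neighbors = sorted(history, key=lambda b: distance(b[0], b[1]))[:k]
    return sum(1 for _, _, was_hit in neighbors if was_hit) / len(neighbors)

# Toy baseline: hard-hit line drives fall for hits, weak pop-ups don't.
baseline = ([(105, 25, True)] * 40 + [(104, 24, True)] * 30
            + [(85, 60, False)] * 50)
print(expected_ba(105, 26, baseline))  # → 1.0
```

Under a scheme like this, a ball can carry an xBA of 1.000 when every comparable batted ball in the baseline went for a hit, which is why the baseline sample itself matters so much: change the environment without changing the baseline, and “certain” hits can start dying in gloves.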

Yes, this is all very wonky, (literally) inside-baseball stuff. But it’s crucial to understanding what these numbers are saying. In other words: When you see there was basically no league-wide gap between expected and actual stats in 2021 and 2020, you’re seeing there was basically no gap once the baselines had been adjusted to reflect the offensive environment for those years, which has not yet happened for 2022. In a world that didn’t have any dramatic, year-to-year swings in offensive environments, this wouldn’t mean too much. Unfortunately, MLB is clearly not in that world. If there’s a big change in the offensive environment from one year to the next—for instance, if the baseball itself is different—the numbers will show notably different things.


The result? You can’t make a direct comparison right now between the expected stats for this season and for past ones. They simply reflect different contexts. The data from last year is set up to match the specific circumstances of last year, and the data from this year is, well, also set up to match the specific circumstances of last year. At least until July.

(If you’re wondering why MLB doesn’t update the baseline to reflect the current environment sooner: That’s been discussed, Petriello says, but it potentially would be confusing to have numbers shifting mid-season. The All-Star Break and end of the season provide natural points for that kind of re-orientation.)

White Sox center fielder Luis Robert’s actual slugging percentage is .230 lower than his expected mark (.689).

So What Do the Expected Stats Actually Tell Us Right Now?

Still a lot! When you look at an actual league batting average of .233 versus an expected batting average of .253, what you’re seeing is this year’s real performance versus what that same performance would have been expected to yield in last year’s offensive environment.

Which tells you that this year’s environment is really, really different.

If it were unchanged from last year? With this quality of contact from hitters, the ball would be flying a lot more, and there would be no serious conversations about why offense has collapsed. The league batting average would be over .250 and slugging would be over .430! Yet the same contact in this environment is leading to dramatically lower offensive numbers.
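To put that 20-point gap in rough concrete terms, here is a back-of-the-envelope conversion; the league at-bat total is a hypothetical round number chosen for illustration, not a real figure.

```python
# Rough scale of a 20-point league-wide batting-average gap.
actual_ba = 0.233     # actual league batting average cited above
expected_ba = 0.253   # expected league batting average cited above
league_at_bats = 160_000  # hypothetical round number for a full season

missing_hits = round((expected_ba - actual_ba) * league_at_bats)
print(missing_hits)  # → 3200
```

On that assumed at-bat total, the environment is swallowing on the order of 3,200 hits across the league over a full season.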

Why? How Much of That Might Be Defensive Positioning?

Speaking of the environment: What about the shift, which is more popular this year than ever? Sure, that wouldn’t affect would-be home runs that are now deep fly-outs, but it can do a lot to turn hits into outs elsewhere on the field, right?

Yes, but the numbers don’t suggest that’s a significant answer here. There’s a sizable gap between real and expected batting averages for every infield alignment: standard (.240 vs. .257), strategic or partial shift (.238 vs. .249) and full shift (.222 vs. .247). The difference is obviously biggest when teams use a full shift—defined as three infielders to one side of second base—but it’s still quite big when teams are in a standard infield alignment. In other words, the gap between real and expected batting average isn’t due to teams optimizing the shift. It appears to transcend that.
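The per-alignment comparison reduces to simple arithmetic. Using only the averages cited above, a quick sketch of each gap in batting-average points:

```python
# Gap between expected and actual batting average by infield alignment,
# using the figures cited in the article as (actual, expected) pairs.
alignments = {
    "standard": (0.240, 0.257),
    "strategic": (0.238, 0.249),
    "full shift": (0.222, 0.247),
}

for name, (actual, expected) in alignments.items():
    gap_points = round((expected - actual) * 1000)
    print(f"{name}: {gap_points} points below expected")
```

Even with a standard alignment, hitters are running 17 points below expectation, which is why defensive positioning alone can’t explain the gap.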

What Else Could Be Causing This?

Well, there’s no way to say definitively. But given that there have been no dramatic, large-scale changes from last season either in the player pool or in the dimensions of ballparks across the league… the baseball itself is left as the prime culprit. When you’re talking about something that affects all of MLB, with such a notable change from one year to the next, it’s hard to find any other plausible answer.

What Has This Meant in Practice?

Simply put, a lot of balls that you might expect to be hits are now outs. There have been 33 balls this season that ended up as outs despite having an xBA above .950—including one with an xBA of 1.000. Yes, a ball hit with an exit velocity and launch angle that had resulted in a hit in every comparable past instance… caught for a fly-out.

An Out With a Perfect xBA? What Does That Look Like??

Apologies to Ronald Acuña Jr. Ball may not lie, but if it changes from one season to the next, it doesn’t feel like it’s being entirely truthful.

More MLB Coverage:
The Pitching Coach Behind the Diamondbacks’ Revival
Angels’ Season Gains a Sense of Magic With Detmers’s No-Hitter
MLB Power Rankings: These Breakouts and Busts Are Shaking Things Up
There’s Nothing Typical About Nestor Cortes Jr.
Five-Tool Newsletter: Christian Yelich Is Back in MVP Form