Evaluating our pre-season standings projections

By: Dom Luszczyszyn

T.J. Oshie, Alex Ovechkin, and Evgeny Kuznetsov (Image: Getty Images)


The regular season is over, the playoffs are officially under way, but there’s still some unfinished business regarding the first 82 games for every team: evaluating some pre-season projections.

Each year, every major outlet puts out predictions and it’s very much a no-risk venture because rarely do you see anyone actually check back on how they did. A big part of better analysis is looking back on what went right or wrong and why that may have been. That’s exactly what we’re going to do here.

Before the season started, we posted a preview for each team where I had a section of my own for what my model, based on Game Score, projected. In that section was a handy chart for projected points (with “best” and “worst” case scenarios), playoff chances and each player’s value based on the model. Those were updated just before the season started to reflect opening night rosters. Here’s what the model projected to be the most likely.

[Chart: projected points, “best”/“worst” scenarios and playoff odds for each team]

Most likely is the operative term there, as it was never meant to be “this is what will happen”; it was more “this is what’s likely to happen, so expect large margins of error because hockey is crazy.” That’s why I put “best” and “worst” in quotes up there: the window for each was eight points on either side. In reality that should cover about two-thirds of the field (spoiler alert: this year it didn’t), but that still leaves about 10 teams that will be more than eight points off and have big outlier seasons. When we did the same thing two years ago, I wrote about the uncertainty of predictions and why they’re so often “wrong” in hockey, and this was the key chart.
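
The arithmetic behind that window can be sketched in a few lines. This assumes projection errors are roughly normally distributed, which is my simplification for illustration, not necessarily how the model itself works:

```python
from statistics import NormalDist

# Sketch assumption: points-vs-projection errors are roughly normal with a
# standard deviation of about 8, so the "best"/"worst" window of +/-8 points
# is about one standard deviation on each side of the projection.
sigma = 8.0   # assumed error spread in standings points
window = 8.0  # half-width of the "best"/"worst" window

dist = NormalDist(mu=0.0, sigma=sigma)
coverage = dist.cdf(window) - dist.cdf(-window)  # share inside the window
outliers = 30 * (1 - coverage)                   # 30 NHL teams that season

print(f"share of teams expected inside the window: {coverage:.0%}")
print(f"expected big-outlier seasons: {outliers:.1f}")
```

Under that assumption the window covers roughly 68 percent of teams, about two-thirds, and leaves nine or ten expected outlier seasons, which matches the intuition above.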

[Chart: the uncertainty of pre-season predictions]

Even one of the best teams from the pre-season can easily falter (Los Angeles) while one of the worst can be a surprise contender (Columbus). It’s because each game is so close (in about 75 percent of games the favourite has a 50 to 60 percent chance of winning) and is oftentimes decided by sheer randomness. A single bounce here or there changes everything, and a couple of those per year can tank a season.
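
To see how those single-game coin flips pile up over a season, here’s a minimal simulation. It assumes a simplified two-points-per-win scoring system with no overtime or loser points, and a team whose true nightly win probability is known exactly:

```python
import random

random.seed(42)

def simulate_season(win_prob: float, games: int = 82) -> int:
    """Season point total under a simplified 2-points-per-win model."""
    return sum(2 for _ in range(games) if random.random() < win_prob)

# A modest 55 percent favourite every single night, talent known perfectly.
totals = [simulate_season(0.55) for _ in range(10_000)]

mean = sum(totals) / len(totals)
sd = (sum((t - mean) ** 2 for t in totals) / len(totals)) ** 0.5

print(f"average points: {mean:.1f}")
print(f"season-to-season spread (one standard deviation): {sd:.1f}")
```

Even with the team’s talent pinned down exactly, randomness alone spreads its point totals by roughly nine points per season, which is about the size of the projection errors discussed next.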

On average, you’d expect to be wrong by about eight to nine points; that’s what it was for the 2015-16 season across three models and Vegas over/unders. This year was much more unpredictable. My model was off by 9.9 points on average, which sounds terrible but was actually better than all nine other sources I tracked, a group ranging from other models, sportsbooks and expert prognostications to a video game simulation and last year’s point totals. The key takeaway: maybe don’t put much stock into what next year’s EA Sports simulation says.
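
The yardstick in that comparison is mean absolute error: the average absolute gap between projected and actual points. A minimal sketch, using Arizona’s projected and actual totals from later in this post plus two hypothetical rows:

```python
# (projected, actual) standings points. Arizona's pair is from this post;
# "team_b" and "team_c" are hypothetical rows for illustration only.
results = {
    "ARI": (83, 70),
    "team_b": (95, 104),
    "team_c": (100, 96),
}

mae = sum(abs(proj - act) for proj, act in results.values()) / len(results)
print(f"mean absolute error: {mae:.1f} points")
```

Do the same over all 30 teams for each source and the lowest average wins.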

[Chart: average projection error by source]

Ten points is still a lot of room for error, so we’ve got some explaining to do. At the bottom of this post is a chart for each team, but first a short breakdown for each one with model expectations, the pre-season narrative and excuses for how it could’ve been so wrong.

Anaheim Ducks

Model Expectations: Playoff bubble team. New coach could make things worse. Old players resisting father time could make things better.

Pre-season narrative: Division contender that could fall back without Bruce Boudreau.

What actually happened: They won the division. Again. Why bother betting against this?

Excuses: The model had an age curve applied, downgrading the old guys who were still very good this year.

Who was right: Not the model.

Arizona Coyotes

Model Expectations: One of the worst teams in the league.

Pre-season narrative: One of the worst teams in the league.

What actually happened: One of the worst teams in the league.

Excuses: We projected 83, they got 70. The issue here is that the spread of actual results is much wider than the spread of true talent, so the worst projected teams bottom out around 75 to 85 points, leaving room for error in case they surprise. This year, the bottom five projected teams averaged 80 projected points but only 75 actual points: a five-point difference despite correctly calling four of the five bottom teams.

Who was right: Both.

Boston Bruins

Model Expectations: Elite team that is very likely to make the playoffs.

Pre-season narrative: Bubble team.

What actually happened: This is a fun one because the Bruins were in fact a bubble team, but their underlying numbers suggest they were much better than their record. They had the fifth or sixth best odds to win the Cup on Pinnacle just before the playoffs started, despite being a seventh seed.

Excuses: Best Corsi team with a terrible PDO deflated their point totals.

Who was right: Technically, both were right. Boston was off by five points, which pushed them toward the bubble, but I’m not listening to any arguments that say they shouldn’t have had more. This was a good team that very few believed in.

Buffalo Sabres

Model Expectations: Still terrible.

Pre-season narrative: A team on the rise that could surprise.

What actually happened: They were still terrible.

Excuses: Jack Eichel’s season-opening injury hurt a lot.

Who was right: Math was right in one of the closest predictions, off by just two points.

Calgary Flames

Model Expectations: Bubble team that looks more likely than not to make it.

Pre-season narrative: Bubble team that looks less likely than not to make it.

What actually happened: They made it.

Excuses: They might’ve been even better had Brian Elliott made a save in the first few months.

Who was right: This was actually tied for the most bang-on projection, off by just one point.

Carolina Hurricanes

Model Expectations: A team that’ll struggle to score and make saves and will still be bad.

Pre-season narrative: A team that’ll struggle to make the playoffs, but could surprise.

What actually happened: They still struggled to score and make saves, but they were a bit better than the model anticipated and closer to what most were saying.

Excuses: Sebastian Aho was even better than expected.

Who was right: This team was still not good, but I’ll lean toward the pre-season narrative.

Chicago Blackhawks

Model Expectations: Still elite and should challenge for the Central crown.

Pre-season narrative: Still elite and should challenge for the Central crown.

What actually happened: They won the entire West, because they’re the Blackhawks and that’s what they do. Better than anticipated.

Excuses: Off by 11 points thanks to conservatism.

Who was right: As much as everyone likes Chicago, I’m not sure they were de facto West favourites with the questions about their depth going into the season. I’d call this a wash.

Colorado Avalanche

Model Expectations: One of the worst teams in the league, but some star players could keep them afloat.

Pre-season narrative: One of the worst teams in the league with a lot of uncertainty thanks to a new coach.

What actually happened: They were not just the worst team in the league; they were the worst team I’ve ever seen. This is the most wrong I’ve ever seen the consensus of projections. Even the most bearish (USA Today) had them at 73 points, which ended up being off by a whopping 25.

Excuses: Most people don’t remember this, but this team was actually okay until they got hit with injuries to Erik Johnson and key forwards like Matt Duchene and Gabriel Landeskog. By the time they got back the season was already over.

Who was right: Pretty much everyone thought they’d be bad; no one thought they’d be that bad.

Columbus Blue Jackets

Model Expectations: Bubble team on the outside looking in.

Pre-season narrative: Tire-fire destined for the bottom five.

What actually happened: They were a top five team. In the league. They were the biggest positive surprise at 19 more points than expected – and I already had them at 89 because of a pretty strong forward group. That’s a lot higher than most would’ve had them.

Excuses: Zach Werenski was a number one D-man and Sergei Bobrovsky returned to Vezina form. That was the biggest difference and they were two things that were very hard to predict.

Who was right: Technically, no one. But technically, the model was much less wrong so I’m counting that as a win.

Dallas Stars

Model Expectations: Playoff team, albeit a vulnerable one.

Pre-season narrative: Small drop-off, but should still compete for the division.

What actually happened: They were a disaster and one of the biggest disappointments of the season. Their defense was an issue all year and the goalies were as bad as ever.

Excuses: They were hit hard by injuries early.

Who was right: Like Columbus, this is a matter of less wrong. The model knew they’d be a bit more vulnerable thanks to a depleted defense, but not to this degree.

Detroit Red Wings

Model Expectations: Potential lottery team, 24th overall.

Pre-season narrative: Potential bubble team that’s likely to miss the playoffs.

What actually happened: Finished 24th overall.

Excuses: This one was a gimme. The team wasn’t good and losing Datsyuk would hurt big time.

Who was right: The model was.

Edmonton Oilers

Model Expectations: Bubble team with a nearly 50/50 shot.

Pre-season narrative: Depends who you ask.

What actually happened: They nearly won the division and made the playoffs for the first time in a decade.

Excuses: McDavid became the best player in the world and they finally got competent goaltending. Not as easy to say that would happen at the start of the year.

Who was right: I can’t recall what the Edmonton narrative was, but I feel like it was close to a bubble team with a bit more pessimism. Very few were saying playoffs. I’d call this a draw.

Florida Panthers

Model Expectations: Would challenge for the division.

Pre-season narrative: Would challenge for the division.

What actually happened: Yikes.

Excuses: Huge injuries to Jonathan Huberdeau and later Aleksander Barkov were too hard to overcome. Front office turmoil was likely a distraction.

Who was right: Another huge disappointment this season, but I can’t say the prevailing wisdom was very different from what the projections were saying.

Los Angeles Kings

Model Expectations: Would challenge for the division.

Pre-season narrative: Would challenge for the division.

What actually happened: Yikes.

Excuses: Huge injury to Jonathan Quick was too hard to overcome. Deadline trades were puzzling. Low PDO was expected, but not that low as a lot of good players had near career lows.

Who was right: This was a 102-point team in 2015-16 and a perennial possession powerhouse. I’m very okay with a 100-plus-point projection despite how this season went. Their depth looked suspect, but a team with Anze Kopitar, Jeff Carter and Drew Doughty should’ve been better than 86 points. I’ll call this a wash; a large majority, including the model, missed.

Minnesota Wild

Model Expectations: Playoff bubble team, could be even better with Boudreau.

Pre-season narrative: Playoff bubble team, could be even better with Boudreau.

What actually happened: Were even better with Boudreau, almost winning the division.

Excuses: Hard to account for coaching, but he brought out the good in that forward group to get even better results than expected. As usual.

Who was right: Tie.

Montreal Canadiens

Model Expectations: Bounce-back season to challenge for the division.

Pre-season narrative: Bounce-back season to challenge for the division.

What actually happened: Won the division.

Excuses: None, we all nailed this. Good job team.

Who was right: Everyone’s a winner here.

Nashville Predators

Model Expectations: Could threaten for the division, but probably a mid-tier playoff team.

Pre-season narrative: The year for them to take the next big step.

What actually happened: Barely squeaked into the playoffs, but finished only three points off the projected total.

Excuses: This team is better than they played and it never looked like it fully clicked.

Who was right: They didn’t take the next big step, but I’m not sure if that was a consensus narrative. Opinions were a bit wide with them. They basically played as expected, maybe a little worse.

New Jersey Devils

Model Expectations: Bottom five team that might be inflated by Cory Schneider and Taylor Hall.

Pre-season narrative: Bottom five team that might be inflated by Cory Schneider and Taylor Hall.

What actually happened: Bottom five team that was sunk by Cory Schneider.

Excuses: Conservatism strikes again. They finished 28th while we had them 27th; they just got there with 12 fewer points than projected.

Who was right: Both.

New York Islanders

Model Expectations: Likely on the outside looking in for the playoffs with Kyle Okposo and Frans Nielsen.

Pre-season narrative: Should still be among the top eight in the East.

What actually happened: Just barely missed the playoffs.

Excuses: This was pretty bang on for both sides. They did miss, but it was with 94 points – a little higher than expected.

Who was right: Slight lean to the model.

New York Rangers

Model Expectations: Vulnerable team that might miss the playoffs if another team below them breaks out. Projected for eighth in the East.

Pre-season narrative: Window closing, but should still make it.

What actually happened: They finished with 102 points, they weren’t done yet.

Excuses: A team that confounds advanced stats, as they’re able to sustain a high shooting percentage year after year. Better data will help; for now, I’ll assume I’m always too low on them.

Who was right: The public was, and I’d like to issue a brief apology to THN, as I was the most vocal supporter of kicking them out of the playoffs in our season preview (though it was in favour of Boston).

Ottawa Senators

Model Expectations: Bubble team that likely misses.

Pre-season narrative: Probable lottery team.

What actually happened: They made the playoffs, easily.

Excuses: The new coach implemented The System that made everyone look a lot better than expected.

Who was right: It was the model. Very few were saying this team had much of a chance at the playoffs, but the numbers said it would be close. Ottawa had the highest difference between projected win total and their over/under line at the start of the year. Funny how that works now that 100 percent of Sens fans have yelled at me, despite me being among the first to believe in them.

Philadelphia Flyers

Model Expectations: Solid playoff team.

Pre-season narrative: Vulnerable playoff team.

What actually happened: They missed despite stringing together a 10-game winning streak at one point.

Excuses: Everyone regressed to their worst form. Even their best players couldn’t score. Might be a coaching issue, I’m not sure.

Who was right: I’d go with the public here. The model wasn’t off by much, this was an above average prediction, but I do think it liked them a bit more than most people at the time.

Pittsburgh Penguins

Model Expectations: Best team in the league.

Pre-season narrative: One of the best teams in the league.

What actually happened: They finished with the second best record in hockey despite injuries.

Excuses: Injuries. A healthy Pens team might’ve actually been the best team in hockey.

Who was right: I’ll say both here. They actually finished with more points than projected with 111 and that’s usually enough to lead the league. Who knows what happens with better health. And if that was the case, they might be a more popular Cup pick right now, so I think both sides are arguable.

San Jose Sharks

Model Expectations: Best team in the West.

Pre-season narrative: One of the best teams in the West.

What actually happened: At one point it was looking like they’d be West favourites, even if they weren’t leading in points. Then they fell apart in March and ended up with one more point than last season.

Excuses: None, this team was only four points worse than expected.

Who was right: A really good prediction, but their overall talent level downgraded this season, so I’d have to lean towards the pre-season narrative that the title for West’s best was much more up for debate.

St. Louis Blues

Model Expectations: Compete for the Central crown with 98 points.

Pre-season narrative: A small drop-off after a big year thanks to free-agent exodus.

What actually happened: Finished with 99 points, but were never competitive for the division.

Excuses: Conservatism means “compete for the division” comes with a 98-point projection, because you need room above and below and shouldn’t go higher. The number is right even if the placement is wrong, because it doesn’t account for a big outlier getting 100 or more points.

Who was right: This was tied for the most accurate prediction, off by just one point, but it was pretty much on point with what most people were saying. I’ll call it a draw.

Tampa Bay Lightning

Model Expectations: One of the best teams in the East.

Pre-season narrative: One of the best teams in the East.

What actually happened: Just missed the playoffs by one point.

Excuses: Their entire team was on IR.

Who was right: No one, blame the injury bug.

Toronto Maple Leafs

Model Expectations: Bubble team that likely misses.

Pre-season narrative: Probable lottery team.

What actually happened: They were actually good.

Excuses: Rookies. They’re very hard to project. With a correct rookie forecast for the Leafs’ big three, this model would’ve had them around 92 to 93 points. That’s how important that addition was.

Who was right: The model was closer than consensus opinion. A lot closer. But even it wasn’t optimistic enough. Very few were calling this a team capable of making the playoffs; the thought alone got some people branded as crazed homers on Twitter. But the signs were there if you paid attention: this was a break-even possession team with brutal shooting and goaltending talent that upgraded on both fronts with a legit starter and three star rookies with huge upside. The Leafs making the playoffs was more predictable than you’d think, but very few were willing to go there.

Vancouver Canucks

Model Expectations: 29th in the league.

Pre-season narrative: Really bad.

What actually happened: They finished 29th in the league and were really bad.

Excuses: They finished 10 points lower than projected despite landing in the correct spot. Part of that is what happens after the deadline, as bad teams start losing more because their rosters are depleted.

Who was right: Not the people in charge of assembling this team.

Washington Capitals

Model Expectations: One of the best teams in the league, but one that’ll regress from a big 120-point campaign.

Pre-season narrative: The best team in the league.

What actually happened: They got 120-ish points again.

Excuses: No team should be that good on true talent. Conservative forecasting should yield better results, and it helps when a team like Tampa Bay misses, but not with a team like Washington. The top five projected teams were projected for 103 points on average; they got 102 on average. Three were close, but two were off by 17, and Washington was one of them. It’s hard to tell in advance which team will miss by that much, though, and that’s another reason why projections run lower than actual totals.

Who was right: I’ll give this to the mainstream folks.

Winnipeg Jets

Model Expectations: Bubble team.

Pre-season narrative: Bubble team.

What actually happened: Finished as a bubble team because of a huge winning streak to end the season, but never actually looked like one.

Excuses: None, this was pretty close. Only off by four points which you can probably chalk entirely up to worse goaltending than expected.

Who was right: Both.

★★★

Times when the model was better: 8 -- BUF, CGY, CBJ, DAL, DET, NYI, OTT, TOR

Times when the narrative was better: 6 -- ANA, CAR, NYR, PHI, SJ, WSH

Times when they agreed: 16 -- ARI, BOS, CHI, COL, EDM, FLA, LA, MIN, MTL, NSH, NJ, PIT, STL, TB, VAN, WPG

The overall lesson here is this: models are very helpful as a starting point. From there, it’s important to use our qualitative knowledge to fine-tune our predictions. The 2016-17 season was one of the wildest in terms of unpredictability. Thanks to statistical modelling, we know just how unpredictable some of those events were, making them even more awe-inspiring or disappointing.

[Charts: projected vs. actual results for each team]
