I also, admittedly, have a tendency to want to post less when Sporting Kansas City is doing poorly. If they are running the table, I'll probably be updating every week.
Anywho, before we get into the numbers I need to talk about a change I made to the rater this offseason. I've been extremely reluctant to force Elo ratings to regress to the mean in the offseason. Most other sports Elo raters include this manual adjustment at the end of each season to account for time off, personnel changes, etc. I've never included it because Elo's system is self-correcting, and I've made the case that I like to watch those changes play out over the course of the season. For the past several seasons, though, I have noticed that the end-of-season ratings, when used in a Monte Carlo simulation of the following season before a single game has been played, give some extreme results. For instance, last year Chicago ended the season with a rating of 1384, which after Monte Carlo simulation resulted in a probability of winning the Supporters' Shield of 0% (and a chance of making the playoffs of just over 6%). On the other end, Seattle ended the season with a rating of 1629, which after Monte Carlo simulation resulted in a 36% chance of winning the Supporters' Shield. While I do believe Seattle have a significantly higher probability of winning than Chicago do, that spread is too extreme on both ends.
After playing around with several different factors in the rating formulas, I decided that end-of-season regression to the mean was, indeed, the best way to fix this issue. I've retroactively applied it to all years, and now my rater acts as if I'd been doing this all along. One particular consequence of this revisionist history is that every single Elo exhibit I've ever posted to this blog is now wrong. :D
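For the curious, the adjustment itself is simple. Here's a minimal sketch in Python; the mean of 1500 matches a standard Elo setup, but the one-third revert fraction is a hypothetical value for illustration (similar to what other sports Elo raters use), not the exact factor I settled on:

```python
# Minimal sketch of an end-of-season regression to the mean.
# ELO_MEAN and REVERT_FRACTION are illustrative assumptions, not
# the exact values used by my rater.
ELO_MEAN = 1500.0        # league-average rating in a standard Elo setup
REVERT_FRACTION = 1 / 3  # hypothetical: pull each team 1/3 of the way back

def regress_to_mean(rating: float) -> float:
    """Pull an end-of-season rating part of the way back toward the mean."""
    return rating + REVERT_FRACTION * (ELO_MEAN - rating)

# Example: last year's extremes get pulled toward each other.
print(regress_to_mean(1384))  # Chicago: 1384 -> ~1422.7
print(regress_to_mean(1629))  # Seattle: 1629 -> 1586.0
```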
So after redoing past calculations, implementing the end-of-season correction to year-end 2016 ratings, and loading in this season's schedule, we have the following exhibit with which to kick the season off:
I have to admit these results seem much more reasonable than their predecessors. One thing I wanted to point out: Kansas City, Colorado, Portland, and Minnesota all have nearly identical ratings, so their predicted finishes are driven almost entirely by strength of schedule. That's a fairly good case study in just how much home/away matchups against particular teams play into these predictions, even when all four teams play in the same conference and face broadly similar schedules.
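To make that schedule effect concrete: in a standard Elo formulation, a team's expected result depends on the rating gap plus a home-field bonus, so who you host versus who you visit shifts expected points even between equally rated teams. A quick sketch, assuming the textbook logistic formula and a hypothetical 65-point home bonus (my actual home-advantage constant may differ):

```python
# Standard Elo expected score with a home-field rating bonus.
# HOME_BONUS is a hypothetical value for illustration.
HOME_BONUS = 65.0

def expected_score(rating: float, opp_rating: float, at_home: bool) -> float:
    """Expected result (0..1) for a team against a single opponent."""
    adj = HOME_BONUS if at_home else -HOME_BONUS
    return 1.0 / (1.0 + 10.0 ** (-(rating - opp_rating + adj) / 400.0))

# Two equally rated (1500) teams with different draws against a 1629 side:
print(expected_score(1500, 1629, at_home=True))   # hosting:  ~0.41
print(expected_score(1500, 1629, at_home=False))  # visiting: ~0.25
```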
Now on to the Monte Carlo simulation:
These numbers are a lot more reasonable than the previous run's (see the deltas for how much they changed). While 5-1 odds for Seattle (about a 17% chance) are probably still too high and 667-1 odds for Chicago (about a 0.15% chance) probably too low, these are at least in the ballpark now.
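For anyone wondering what "Monte Carlo simulation" actually means here: the idea is to play out the whole season thousands of times, drawing each match result from the Elo-implied probabilities, and count how often each team tops the table. A bare-bones sketch under assumed details (a flat draw rate, no in-season rating updates, no tiebreakers; my actual simulation handles these differently):

```python
import random
from collections import Counter

# Hypothetical constants for illustration; the real rater's values differ.
HOME_BONUS = 65.0  # home-field Elo bonus
DRAW_PROB = 0.25   # flat draw rate (the real model is fancier than this)

def home_win_prob(home_elo: float, away_elo: float) -> float:
    """Elo-implied probability the home team wins, given the match isn't drawn."""
    return 1.0 / (1.0 + 10.0 ** ((away_elo - home_elo - HOME_BONUS) / 400.0))

def simulate_shield(schedule, ratings, n_sims=10_000):
    """Play out the season n_sims times; return each team's Shield probability.

    schedule: list of (home_team, away_team) tuples
    ratings:  dict mapping team name -> Elo rating
    """
    shields = Counter()
    for _ in range(n_sims):
        points = Counter({team: 0 for team in ratings})
        for home, away in schedule:
            roll = random.random()
            if roll < DRAW_PROB:                        # draw: 1 point each
                points[home] += 1
                points[away] += 1
            elif roll < DRAW_PROB + (1 - DRAW_PROB) * home_win_prob(
                    ratings[home], ratings[away]):      # home win: 3 points
                points[home] += 3
            else:                                       # away win: 3 points
                points[away] += 3
        shields[max(points, key=points.get)] += 1       # ignore tiebreakers
    return {team: shields[team] / n_sims for team in ratings}
```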
Those are all of the words I have for now. If you made it this far, thanks for reading.