Not affiliated with Epic Games, Psyonix, BLAST, or Rocket League.

RAE Ratings — Methodology

Published:

As the name suggests, the Region-Adjusted Elo Ratings model is based on the Elo rating system (though this is actually a misnomer*): a simple system that assumes each team has a hidden "true" strength that can be represented by a logistically related rating, and that attempts to find these ratings by updating them after every match based on both the result and the win probabilities estimated from the teams' ratings.

*The logistic model RAE and most other systems use is called the Bradley-Terry model. Arpad Elo's model was based on the assumption that individual performance for a given match is a normally distributed random variable.

I seriously recommend watching j3m's video on the Elo rating system to learn more about the statistics behind it, but the TL;DR is that the larger the gap between two teams' ratings, the less movement there is if the higher-rated team wins, and the more movement there is if the lower-rated team wins. At any given moment, the system is calibrated to create the smallest gap possible between true and estimated team strengths/ratings (and so the smallest gap possible between actual and predicted outcomes), based on all prior matches it has seen.

I'd like to give a disclaimer that I am aware a lot of the methodology I'm about to lay out is far from theoretically sound. The RAE Ratings system is built off of a spreadsheet I made in 2022 that wasn't even close to a real ratings system, and while it has been progressively improved upon and modified, some of the bones of it (specifically, the technical limitations) still trace back to that first iteration. If you've ever seen xkcd 2347, the "infrastructure" of the RAE Ratings system is built similarly.

Match Predictions

At its core, the RAE Ratings you see for any two given teams are the two inputs to the Elo win probability formula, where the probability of Team A beating Team B, given pregame ratings for Team A and Team B, is:

$$ PW_A(R_A,R_B) = \frac{1}{10^{\frac{R_B-R_A}{400}}+1} $$

Specifically, this is the probability that Team A wins in a best-of-five (Bo5). In order to extrapolate these odds to any series length, we need to get the odds of a team winning a single game.

Bo5-to-Bo1 odds conversion works as follows:

  • Find the odds of a team winning a Bo5 given its odds to win a Bo1, from 0 (0%) to 1 (100%). By finding formulas for the odds of each individual series result game-by-game, then adding and subtracting where appropriate to consider only where the team wins, this formula comes out to:

$$ P_{Bo5}=6P_{Bo1}^{5}-15P_{Bo1}^{4}+10P_{Bo1}^{3} $$

  • No analytic solution exists for finding the inverse of this function, so we can solve for the inverse of given Bo5 odds either graphically or by using a table of input-output pairs. The former method is used in the simulation code, while the latter is used on the spreadsheet (10,000 values) and website (100,000 values).
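As a minimal sketch of these two steps (the function and variable names here are my own; the actual RAE code is not public), the win probability formula and the Bo5-to-Bo1 conversion can be written as:

```python
# Sketch of the Elo win probability and the Bo5 <-> Bo1 conversion.
# Names are illustrative, not taken from the RAE code.

def p_bo5(r_a: float, r_b: float) -> float:
    """Probability that Team A beats Team B in a Bo5 (core Elo formula)."""
    return 1.0 / (10 ** ((r_b - r_a) / 400) + 1.0)

def bo1_to_bo5(p: float) -> float:
    """Odds of winning a Bo5 given Bo1 (single-game) odds p."""
    return 6 * p**5 - 15 * p**4 + 10 * p**3

def bo5_to_bo1(target: float, tol: float = 1e-9) -> float:
    """Invert bo1_to_bo5 numerically by bisection.

    The polynomial is monotonically increasing on [0, 1] but has no
    closed-form inverse, so a numerical solve (or a lookup table, as on
    the spreadsheet and website) is required.
    """
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if bo1_to_bo5(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

For example, a 100-point rating gap gives Bo5 odds of about 0.640, which inverts to Bo1 odds of about 0.576, consistent with the rating-difference table in this section.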

As in any Elo-based system, win probabilities are based on the raw difference in team ratings, rather than their absolute size. Some rating differences and their corresponding win probabilities can be seen below:

Rating Difference    Bo1 WP    Bo5 WP    Bo7 WP
50                   53.8%     57.1%     58.1%
100                  57.6%     64.0%     66.2%
200                  64.7%     76.0%     79.4%
300                  70.9%     84.9%     88.6%
400                  76.2%     90.9%     94.0%

These Bo1 odds are used in the simulation code and when calculating matchup odds on the website or the spreadsheet. For the sake of updating the team ratings themselves, only the core Elo win probability formula is used.

Rating Changes

The Bo5 odds based on the Elo win probability formula are then used to update each team's rating after a match.

In a standard Elo system, the net rating change between two teams after a match is always 0: the amount the winning team gains is exactly the amount the losing team loses. However, because team ratings in the RAE system naturally deflate (more on that later), the net rating gain after a match between two teams (except in forfeits, where neither team's rating changes) is actually around 3.5.

The specific formula for calculating a rating change after a match between two teams is:

$$ \Delta{R}=K(ResultScore-P_{Bo5}) $$

The ResultScore is based on the series score of the match, and is as follows:

Series Result    ResultScore
4-0              1.025
3-0              1.025
4-1              0.892
3-1              0.858
4-2              0.803
3-2              0.758
4-3              0.739
3-4              0.311
2-3              0.292
2-4              0.247
1-3              0.192
1-4              0.158
0-3              0.025
0-4              0.025
DNP*             0.000
*Only used for teams who miss Top 32
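Putting the pieces together, the rating change formula can be sketched as follows (a partial ResultScore lookup and illustrative names; this is not the RAE code itself):

```python
# Sketch of a post-match rating update using the formula
# delta = K * (ResultScore - P_Bo5). Names are illustrative.

RESULT_SCORE = {  # series score -> ResultScore (subset of the full table)
    (3, 0): 1.025, (3, 1): 0.858, (3, 2): 0.758,
    (2, 3): 0.292, (1, 3): 0.192, (0, 3): 0.025,
}

def expected_bo5(r_a: float, r_b: float) -> float:
    """Core Elo Bo5 win probability for Team A."""
    return 1.0 / (10 ** ((r_b - r_a) / 400) + 1.0)

def rating_change(r_a: float, r_b: float, score: tuple, k: float = 67.5) -> float:
    """Rating delta for Team A after a series (A's games won listed first)."""
    return k * (RESULT_SCORE[score] - expected_bo5(r_a, r_b))
```

With these illustrative numbers, a 1550-rated team sweeping a 1500-rated team 3-0 at LAN K-Score gains about 30.6 points while the loser drops about 27.2, a net gain of K × 0.05 = 3.375: the slight inflation described above.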

The most important part in adjusting ResultScore values is finding a balance between rewarding winning a match outright and rewarding winning by a large margin.

The K in the formula stands for K-Score, a constant in all Elo systems that determines how reactive it is to individual results. The higher the number, the more teams' ratings move match-to-match. It's important to note that this is an Elo system, not a Glicko system, so this score is the same for all teams, regardless of how much data they have. This means that an unproven bubble team that begins getting great results may take several events to get to a rating that accurately estimates their skill level.

Here are the K-Scores for specific events:

Event                        K-Score
Online Opens, Kickoff LAN    55
Majors, EWC, Worlds          67.5
Region Ratings               25

Because LAN events provide rare and valuable data on international matchups, they impact ratings more, especially once the movement caused by region rating adjustments is added on.

Combined, changes to the values for ResultScore and K-Score make up the majority of minor changes to the rating system. K-Score especially has gone through many iterations and rounds of testing, such as weighting Worlds more or giving less weight to Split 2.

Pre-Top 16

The RAE Ratings system only tracks matches in the Top 16 (or "main event") of an RLCS Open, and all matches at RLCS LANs and EWC. For teams that miss an online Top 16, their rating will change by either:

  • Simulating a loss to a Non-Qualifying Team in their region (more on that later) by a series score equal to their Swiss record, if they are eliminated 17th-32nd. Their rating cannot drop lower than the rating of the Non-Qualifying Team in their region.
  • Simulating a loss to a Non-Qualifying Team in their region using the DNP ResultScore (0.000), if they are eliminated before Top 32. Their rating cannot drop lower than the rating of the Non-Qualifying Team in their region.
  • Making no change, if a team does not sign up for an event or forfeits all matches in an event.

This, combined with new players starting at their region's Non-Qualifying Team rating, is where the deflationary nature of RAE Ratings comes from: across an entire region, teams that miss Top 16 drop in rating without any other team gaining it, taking rating points out of the system. This is why those losses need to be countered by slight rating inflation for teams who make Top 16.

The deflation issue is one I've spent a lot of time trying to fix, and in its current state the system is mostly stable.

Roster Moves

One of the toughest parts of any Elo system in a team-based competition is adjusting the model after a roster move occurs. This is where the RAE model struggles the most — it fundamentally does not have player ratings due to the limitations of running it on a spreadsheet, so it has to take other measures to update after roster moves. While the system adjusts, it is poor at grading these transfers until it has gathered data on the new teams.

Non-Qualifying Teams

In order to allow for new players to smoothly enter the ratings system, RAE keeps track of a "Non-Qualifying Team" in each region, functioning as an estimation of the RAE Rating of a bubble team in that region (somewhere around Top 24-Top 32). They also serve as the rating floor for every team and player in that region.

Non-Qualifying Teams are treated the same as any other team in the model, with the added caveat that their only result each online event is a single 1-3 loss to themselves, dropping their rating by about 17 points. This, too, is a source of ratings deflation, though much less than in the past — in 2025 and 2024, the single-event drop for a Non-Qualifying Team was 28 points.

Each time a player makes their first RLCS event, their rating is set to the rating of their region's Non-Qualifying Team.

Calculating Player Ratings

The rating of a team after an outside-of-window transfer is the arithmetic mean of the rating of the most recent team of each of its three players. If a player has not made an RLCS Top 16 before, the regional Non-Qualifying Team rating is used.

This system is imperfect, but is especially so in cases where a player has played with teammates of vastly different skill levels in recent events; in that case (and others, such as a player having not played an RLCS event in a long time), I use a weighted average of the regional Non-Qualifying Team rating and the rating of the player's previous team(s) to determine their contribution to the new team rating, with the exact weighting being set on a case-by-case basis.

In rare cases (usually when a bubble player from a major region makes a main event in a minor region, with results indicating a significantly higher skill level than that region's Non-Qualifying Team rating), I will arbitrarily set a player's rating.
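The averaging described above can be sketched as follows (a hypothetical helper with names of my own; the per-player weights on the Non-Qualifying Team rating are set case-by-case, not by any fixed rule):

```python
# Sketch of a post-transfer team rating. Names are illustrative.

def new_team_rating(player_ratings, nqt_rating, nqt_weights=None):
    """Arithmetic mean of each player's most recent team rating.

    player_ratings: most recent team rating per player, or None if the
                    player has never made an RLCS Top 16.
    nqt_rating:     the region's Non-Qualifying Team rating.
    nqt_weights:    optional per-player weight blending toward the NQT
                    rating (the case-by-case adjustment described above).
    """
    if nqt_weights is None:
        nqt_weights = [0.0] * len(player_ratings)
    contributions = []
    for r, w in zip(player_ratings, nqt_weights):
        if r is None:  # debut player: use the NQT rating outright
            contributions.append(nqt_rating)
        else:          # otherwise blend toward the NQT rating as weighted
            contributions.append((1 - w) * r + w * nqt_rating)
    return sum(contributions) / len(contributions)
```

For example, a roster of players whose last teams were rated 1600 and 1500 plus a debut player in a region with a 1400 NQT rating would start at 1500.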

Transfer Window Ratings

For the transfer window between Split 1 and Split 2, ratings of teams that make a legal transfer (adding/removing a single player) move only halfway to their calculated rating as seen above.

This is admittedly poor methodology, but tends to create more ratings stability among top teams (usually preventing them from dropping after picking up a good player from a worse team), which in turn mirrors results in early events. Lower-rated teams tend not to have the same level of stability of results.

Ultimately, this is another symptom of RAE Ratings not having individual player ratings, which makes it difficult to accurately gauge player skill for roster moves.

Substitutes

In cases where a team has a player substitute for another in a single event or match, the team's rating for that event/match is calculated as described in Calculating Player Ratings, using the ratings of the two starters and the substitute. Changes in rating after each match are then applied with 2/3 weighting to the starting, non-substitute roster, since 1/3 of the starting team's rating comes from the player who is inactive for the match, whose rating is considered "frozen".
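A minimal sketch of the substitute handling, with illustrative names (not the RAE code):

```python
# Sketch of substitute handling. Names are illustrative.

def event_team_rating(active_starter_ratings, sub_rating):
    """Rating used while the sub plays: mean of the two active starters
    and the substitute, per Calculating Player Ratings."""
    return (sum(active_starter_ratings) + sub_rating) / 3

def apply_sub_match_delta(starting_roster_rating, delta):
    """Only 2/3 of each match's rating change flows to the starting
    roster; the inactive starter's third of the rating is frozen."""
    return starting_roster_rating + (2 / 3) * delta
```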

Preseason Ratings

At the start of each season, to counteract deflation and make intra-region ratings denser (to allow for quick rankings adjustments at the start of the season, recognizing that teams and players after an offseason are harder to accurately rate), team ratings are shifted towards their region rating (which usually happens to be close to the average rating of the regions' top 12 teams).

Specifically, a team's start-of-season RAE Rating is calculated as follows:

$$ R_{Preseason}=M_{p=1}(M_{p=1.5}(R_{P1},R_{P2},R_{P3},R_{Region}),M_{p=1}(R_{P1},R_{P2},R_{P3})) $$

Mp represents the generalized mean/power mean, where:

$$ M_{p}(x_{1},\dots,x_{n})=\left(\frac{1}{n}\sum_{i=1}^{n}x_{i}^{p}\right)^{1/p} $$

Mp=1 is a standard arithmetic mean, while Mp=1.5 is a power mean that biases slightly towards higher values. The effect is to pull lower ratings within a region up without dragging top teams' ratings as far down; pulling strong teams down too far was an issue the RAE Ratings system had at the beginning of RLCS 2024 and RLCS 2025 (making strong minor region teams, such as FURIA and Falcons, rated lower internationally than they should have been).
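The preseason formula can be sketched directly from the definitions above (illustrative names, not the RAE code):

```python
# Sketch of the preseason rating shift toward the region rating.

def power_mean(values, p):
    """Generalized mean M_p; p=1 is the arithmetic mean, p>1 biases
    toward higher values."""
    return (sum(x ** p for x in values) / len(values)) ** (1 / p)

def preseason_rating(p1, p2, p3, region):
    """Mean of (a) the p=1.5 power mean of the three player ratings and
    the region rating, and (b) the plain mean of the player ratings."""
    pulled = power_mean([p1, p2, p3, region], 1.5)
    plain = power_mean([p1, p2, p3], 1.0)
    return (pulled + plain) / 2
```

As a sanity check, a team rated exactly at its region rating is unchanged, a team below it moves up, and a team above it moves down by less than a plain average toward the region would imply.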

Once a season is finished, the parameters and methodology used for RAE Ratings for that season are "frozen," and the final ratings for that season are used as the inputs for the next season, regardless of any methodological changes that the model undergoes thereafter.

Region Ratings

Systematic region ratings (ones generated rather than entered manually) were first introduced to RAE in 2024 using region seeds as the primary input for matches between regions. In 2025, this was replaced with a system that used team ratings to update region ratings.

Region ratings are made using data from all international matches starting at the 2021-22 Fall Major and are updated live as international LANs occur.

Region Ratings Before RLCS 2024

Because RAE Ratings for RLCS 2021-22 and RLCS 2022-23 aren't robust enough to update region ratings accurately, a different, seeding-based system was used for those seasons.

To avoid a lengthy adjustment period and allow for more fine-tuning of region ratings early on, region ratings prior to the RLCS 2021-22 Fall Major were set as follows:

Region                        Initial Region Rating
North America                 1500
Europe                        1500
South America                 1400
Middle East & North Africa    1400
Oceania                       1400
Asia-Pacific                  1100
Sub-Saharan Africa            1000

Each region is given an "effective rating" for each match, based solely on its rating and the seed of the team playing. Specifically, this effective rating is given by the formula:

$$ R_{E}=R-55(S-1) $$

Where S is the seed of the team playing. Put plainly, this means that the region's effective rating for a given match is its current rating, minus 55 rating points for each seed its team is below 1.
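As a one-line sketch of that formula (illustrative name):

```python
def effective_region_rating(region_rating: float, seed: int) -> float:
    """Pre-2024 effective rating: R_E = R - 55(S - 1), i.e. 55 points
    off the region rating for each seed its team is below 1."""
    return region_rating - 55 * (seed - 1)
```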

Region Ratings Starting RLCS 2024

The effective rating of each region in a match for all matches starting in RLCS 2024 is the RAE Rating of the team playing. Like with preseason team ratings, these are frozen at the end of each season, regardless of methodological changes.

Updating Region Ratings

The effective ratings of both regions in a match are then used to calculate their probabilities of winning, which are in turn used in the same rating change formula as team ratings.

Region ratings do not experience deflation, so their ResultScore table is slightly modified to reflect this:

Series Result    Region ResultScore
4-0              1.000
3-0              1.000
4-1              0.867
3-1              0.833
4-2              0.778
3-2              0.733
4-3              0.714
3-4              0.286
2-3              0.267
2-4              0.222
1-3              0.167
1-4              0.133
0-3              0.000
0-4              0.000

The K-Score for region ratings is also lower than for team ratings (25 vs 67.5 for LAN matches) to avoid overadjusting due to a good event by a specific team.

Updating Team Ratings

For teams not participating at a LAN, their change in rating after a LAN event is the same as their region's change in rating:

$$ R_f=R_i+(RR_f-RR_i) $$

Teams participating at a LAN are only affected by changes in region rating half as much; that is, their change in rating due to changes in region rating is equivalent to half of the change in region rating.
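Both cases can be sketched together (illustrative names; "at_lan" marks teams that played at the event):

```python
# Sketch of passing a region-rating change through to team ratings.

def apply_region_delta(team_rating: float, region_delta: float,
                       at_lan: bool) -> float:
    """Non-LAN teams absorb the full region-rating change; teams that
    played at the LAN absorb only half of it."""
    factor = 0.5 if at_lan else 1.0
    return team_rating + factor * region_delta
```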

Cross-Region Transfers

Players and teams are considered to be a part of the region which they most recently played an event in. For instance, after moving to MENA for the LCQ during RLCS 2025, Synergy was considered to be a MENA team for the sake of updating their rating after the World Championship.

Event Simulations

For the odds posted on the RAE spreadsheets, articles, and live event hubs, 1,000,000 simulations of the event are run. Some mid-day updates may be based on 100,000 simulations.

Because there are over 500 million different possible brackets for a single online regional, these simulations are meant to model RAE's theoretical odds for each event as closely as possible. I estimate the current margin of error for any given percentage to be at or lower than 0.5%, with percentages closer to 50% having more variance.

The simulations are run using a proprietary Python script; they are not native to the website, and thus are manually updated. As stated in the match predictions section, they calculate the Bo5 odds for a given match and graphically solve the Bo5-to-Bo1 equation to find the appropriate Bo1 odds.
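As a simplified illustration of the game-by-game approach (this is my own sketch, not the proprietary script), a single Bo5 can be simulated from Bo1 odds like so:

```python
# Sketch of simulating one Bo5 game-by-game from Bo1 odds.
import random

def simulate_bo5(p_bo1: float, rng: random.Random) -> bool:
    """Returns True if the team with per-game win probability p_bo1
    takes the series (first to 3 game wins)."""
    wins = losses = 0
    while wins < 3 and losses < 3:
        if rng.random() < p_bo1:
            wins += 1
        else:
            losses += 1
    return wins == 3
```

Averaged over many runs, this converges to the Bo5 polynomial from the match predictions section (e.g. Bo1 odds of about 0.576 yield Bo5 odds of about 0.640).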

Team ratings tend to be rounded to the nearest integer as teams, ratings, and split points are input manually.

Major Qualification Odds

When generating major qualification odds, the simulation script will check for the top 5/4/2/1 teams (depending on the region) and mark the appropriate amount as qualified to the major in each simulation.

Tiebreakers are the only complication in this; the script accounts for all tiebreaker scenarios with 6 teams or fewer and 5 spots or fewer, in accordance with the official rulebook. For any greater number of teams, the list of tied teams is truncated by rating to the largest number the script can run a tiebreaker bracket for. This isn't an exact solution, but it happens in so few simulations that the impact is marginal.


Version History

RAE Ratings for RLCS 2026

1.0 | Updated RAE Ratings for RLCS 2026, adjusting K-Scores and ResultScores. Added incorporation of Swiss Stage results for teams 17th-32nd. Modified preseason rating formula to prevent minor region deflation.
January 19, 2026