
2018-2020 #PGRZ Frequently Asked Questions

Have questions about #PGRZ? Hopefully the answers you want are here!
By Andrew “PracticalTAS” Nestico
As stated here, the #PGRZ will be revealed 10 players at a time over the next two weeks.

What tournaments were taken into account for #PGRZ?

A total of 94 offline events were included: all offline World Tour events, every offline open bracket with 64 or more entrants, and three notable invitationals (Summit of Power, Tokyo International Film Festival, and FDJ Masters League). Events were weighted based on their entrant counts and on whether they were World Tour events or invitationals. For the full details, check out our PGRZ Tournament Tier System.

How does the PGstats algorithm work?

Broadly, it runs an iterative calculation that determines the strength of each player based on their wins, losses, and outplacings within the season. Its inputs are each player’s matches and final standings at each PGR tournament. The actual calculation is not available to the public (see below).
Placements: Players are not rewarded directly on the actual numerical value of their placing at a given tournament; rather, their placement score is derived based on the other players in attendance and whether they outplaced or were outplaced by them. The person who finishes 1st receives head-to-head points against everyone else who attended; the person who finishes 2nd receives points against everyone else except the first-place finisher, and so on.
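The outplacement scheme above can be sketched as a simple pairing over the final standings. This is an illustration of the description, not PGstats code; the function name and data are made up.

```python
def outplacement_pairs(standings):
    """standings: list of players ordered from 1st place downward.
    Returns (winner, loser) pairs: the 1st-place finisher earns an
    outplacement against everyone below, the 2nd against everyone
    except 1st, and so on."""
    pairs = []
    for i, higher in enumerate(standings):
        for lower in standings[i + 1:]:
            pairs.append((higher, lower))
    return pairs

# With four attendees, "A" outplaces B, C, and D; "B" outplaces C and D; etc.
pairs = outplacement_pairs(["A", "B", "C", "D"])
```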
Each player is given a head-to-head score against all other players who attended at least one tournament in common. This score is based on the given players’ set count, with outplacements treated as partial wins (a win/outplacement at a larger tournament is worth more than one at a smaller event). From these, we generate an objective list of each player’s score or “strength”, the value of a win against that player. Thus, if two players have identical overall records against the top 50 but one player’s wins have come against stronger opponents, that player will have a higher score and will be higher on the list.
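One way to picture this kind of iterative strength calculation is an update loop where each win is credited by the current strength of the opponent. Everything here (the update rule, the floor constant, the data) is an assumption for illustration only; the real PGstats formula is private.

```python
def iterate_strengths(h2h, rounds=50, floor=0.1):
    """h2h maps (winner, loser) -> set score in (0, 1]; outplacements
    would enter as partial wins. Returns a normalized strength per
    player in which beating strong opponents counts for more."""
    players = {p for pair in h2h for p in pair}
    strength = {p: 1.0 for p in players}
    for _ in range(rounds):
        new = {}
        for p in players:
            # Credit each win by the *current* strength of the opponent,
            # so identical records against stronger opposition score higher.
            credit = sum(score * strength[opp]
                         for (winner, opp), score in h2h.items()
                         if winner == p)
            # A small floor keeps winless players from zeroing out.
            new[p] = floor + credit
        total = sum(new.values())
        strength = {p: v / total for p, v in new.items()}
    return strength

# Toy season: A beat B and C, B beat C. The loop ranks A > B > C.
strengths = iterate_strengths({("A", "B"): 1.0, ("A", "C"): 1.0, ("B", "C"): 1.0})
```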
The algorithm also includes a confidence factor, which can be conceptualized as consistency multiplied by attendance. It measures how confident the algorithm is that a player will advance in bracket to reach the other PGR-eligible players in attendance. Your confidence goes up when you defeat non-PGR-eligible players and goes down when you lose to them. Your confidence also goes down if you did not attend enough events; this removes the incentive to protect your confidence score by avoiding tournaments after a few good results.
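The "consistency multiplied by attendance" framing above could be modeled as below. Every constant and parameter name here is a made-up assumption; the actual confidence formula is not public.

```python
def confidence(wins_vs_noneligible, losses_vs_noneligible,
               events_attended, min_events=8):
    """Toy confidence factor: consistency against non-PGR-eligible
    players, scaled down when attendance falls short of min_events."""
    total = wins_vs_noneligible + losses_vs_noneligible
    # Consistency: winning against non-eligible players raises it,
    # losing to them lowers it.
    consistency = wins_vs_noneligible / total if total else 0.0
    # Attendance penalty: sitting on a few good results caps your score.
    attendance = min(events_attended / min_events, 1.0)
    return consistency * attendance

# A player who never dropped a set but attended only half the
# threshold of events ends up with half the confidence.
half_attendance = confidence(10, 0, 4)
```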

Other Notes

  • If you DQ out of an event, your matches do not count as having been played.
  • If you drop out of an event without having lost a match or played a single PGR-eligible player, your placement for that event is disregarded.
  • Because single-elimination events are inherently more volatile, an outplacing at one is worth less than an outplacing at a similarly sized double-elimination event.
  • Events which are not open to the public, including invitationals, do not contribute towards players’ confidence factors.

Why is the algorithm private?

There are two main reasons why the algorithm isn’t shared with the public:
  1. If it were, players might change their behaviour. If they knew going into a tournament exactly how their performance would affect their rank, they could play to maximize their rank instead of playing to win. We have already seen some of this in Smash Ultimate, where we recently decided to retire the algorithm in favor of a Melee-style panel-based ranking.
  2. If it were, people could calculate the PGR at the conclusion of the season and “spoil” the reveal. This is also something we’d like to avoid, if possible.

What does the algorithm take into account?

The algorithm’s only inputs are the wins, losses, and placements at PGR tournaments of the players who attended.

Why doesn't a panel change the rankings at the end?

If you have a panel change the results of the algorithm at the end, then there’s no real reason to have an algorithm at all. The point of the algorithm is to be objective; introducing a subjective panel into the mix will leave you with the worst of both worlds.

Does the algorithm have any form of recency bias?

It does not. We have tested including some form of recency bias in the past, but determined that it resulted in the algorithm favoring whoever peaked at the last big event. The confidence factor is our substitute for any form of recency bias, as it penalizes players who did not attend a large number of big events (whether those events were at the beginning of the game’s lifespan or at the end).

I want to know more about the PGRZ methodology and its details.

Feel free to DM @PracticalTAS on Twitter.