My views on the current IFPA ranking system, which rewards more play and doesn’t punish poor play, are well known.
Many people have said if you don’t like it run your own ranking system – so here it is.
The IFPA has a Power 100 system. It looks at all of the players ranked in the top 250 and compares their head-to-head records to give a “win percentage”. I think this is a much better indicator of a player’s strength.
The way it works is that if the top 250 played in a single comp and a player finished 1st, they would be recorded as 249 wins (as they finished higher than the 249 others competing), 0 ties and 0 losses, giving a percentage of 100%. If a player finishes 125th they would have a record of 125 wins, 0 ties and 124 losses, giving a percentage of 125/249 = 50.2%. The ranking system compares ALL competitions where 2 or more players in the top 250 have competed. Ties are ignored in the calculations.
Obviously if a comp only consists of 2 players, their percentages for THAT competition would be 100% and 0%, but their wins and losses would only increase by 1, not 249 as in the previous example. This gives a natural weighting to competitions with more players in the top 250, as the final percentage is based on the TOTAL number of wins and losses, not on individual tournament percentages.
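The aggregation described above can be sketched in a few lines of Python. This is my own illustration of the method, not IFPA code, and the player names and finishing orders are made up purely for the example:

```python
# Sketch of the head-to-head "win percentage" calculation described above.
# Each tournament is a list of players in finishing order, winner first.
# Names and results below are illustrative, not real IFPA data.

def win_percentage(tournaments):
    """Aggregate wins and losses across ALL tournaments, then divide
    once at the end -- so bigger fields naturally carry more weight."""
    wins, losses = {}, {}
    for standings in tournaments:
        for i, player in enumerate(standings):
            # A player beats everyone who finished below them and
            # loses to everyone who finished above them; ties ignored.
            wins[player] = wins.get(player, 0) + len(standings) - 1 - i
            losses[player] = losses.get(player, 0) + i
    return {
        p: wins[p] / (wins[p] + losses[p])
        for p in wins
        if wins[p] + losses[p] > 0
    }

# A 3-player comp followed by a 2-player comp:
pcts = win_percentage([["A", "B", "C"], ["A", "B"]])
# A: 3 wins, 0 losses -> 100%.  B: 1 win, 2 losses -> 33.3%.  C: 0%.
```

Note how B’s 100%-vs-C result in the first comp is not averaged with anything; it just adds one win to B’s running total, which is the weighting effect described above.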
With a LOT of help from Paul @Wizcat, and with access to the IFPA records for ALL of the registered tournaments (no matter where they were held) direct from their database, I have applied the same system to the UK players.
Because it takes so long to run through all of the data, a couple of conditions had to be put on which players are compared.
To qualify for the ranking, players must be ranked in the top 150 in the UK and must have competed in 9 tournaments in the last 3 years. That ‘qualifies’ 94 players for inclusion in the report. It’s worth noting that of all the UK players, only 47 have played 20 comps or more.
The reasons behind this are to ensure that the report does not take forever to run, comparing and searching for results of players who may have only competed in the UK League or at a single event, and that statistical anomalies are filtered out from the outset.
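For clarity, the qualification rule above amounts to a simple filter. A minimal sketch, assuming a hypothetical per-player record (the field names here are my own invention, not the actual IFPA database schema):

```python
# Sketch of the qualification filter described above. The dict keys
# ("uk_rank", "events_last_3_years") are assumed field names, not
# the real IFPA schema; the sample players are made up.

def qualifies(player, max_uk_rank=150, min_events=9):
    """Top 150 in the UK AND at least 9 tournaments in the last 3 years."""
    return (player["uk_rank"] <= max_uk_rank
            and player["events_last_3_years"] >= min_events)

field = [
    {"name": "P1", "uk_rank": 12,  "events_last_3_years": 25},
    {"name": "P2", "uk_rank": 200, "events_last_3_years": 30},  # rank too low
    {"name": "P3", "uk_rank": 90,  "events_last_3_years": 4},   # too few events
]
qualified = [p["name"] for p in field if qualifies(p)]  # only P1 qualifies
```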
The full final ranking table is attached, but the top 20 stands at:
Craig Pullen 85.64%
Andrew Foster 84.46%
William Dutton 77.33%
Matt Vince 76.98%
Wayne Johns 74.61%
Martin Ayub 74.46%
Rich Mallett 73.17%
Nick Marshall 71.28%
Greg Mott 70.03%
Garry Speight 66.67%
Andrew Shillabeer 65.59%
Peter Blakemore 65.08%
David Mainwaring 63.65%
Phil Dixon 62.75%
Chris Poyntz 62.52%
Clive Bush 61.11%
Tim Porter 60.68%
Kate R-Jackson 60.32%
Andrew Wilson 59.88%
Ivan Miles 58.23%
Phenomenal records for Craig @roadshow16 and Andy @PUP
Feel free to make of it what you want; it’s certainly not going to change anyone’s life.
Personally, looking at the players I know and have played against the most, I feel this is a much more accurate measure (backed up by the data) of each player’s comparative strength than the current WPPR ranking system.