Antweight Ranking Table
Moderators: BeligerAnt, petec, administrator
If you had the SQL tables robots, aws and fights, you could use the following structure to keep details of robots, plus an archive of their past AWS performances and their results in the most recent AWS.
robots - name, weight, length, width, height, speed, websiteurl, etc.
aws - number, location, date
fights - RobotWinner, RobotLoser, AWSNumber
Calculating points could possibly be done in the SQL query (I'm not too familiar with the mathematical functions in SQL) or in PHP after you have retrieved the data from the tables.
Disclaimer: I came up with the above very quickly and hence have not thought it through properly. No guarantee I haven't missed something MASSIVE and it will in fact not work at all!
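A minimal sketch of that schema using SQLite from Python (the table and column names follow the post, trimmed to a few columns; the sample fights and the win-count query are my own illustration - the points formula could equally live in the SQL or in PHP afterwards):

```python
import sqlite3

# Schema sketched above, reduced to a few representative columns
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE robots (name TEXT PRIMARY KEY, weight REAL, websiteurl TEXT);
CREATE TABLE aws    (number INTEGER PRIMARY KEY, location TEXT, date TEXT);
CREATE TABLE fights (RobotWinner TEXT, RobotLoser TEXT, AWSNumber INTEGER);
""")

# Hypothetical sample fights for illustration only
cur.executemany("INSERT INTO fights VALUES (?, ?, ?)",
                [("Anty", "Bug", 33), ("Anty", "Crusher", 33), ("Bug", "Crusher", 33)])

# Tally wins per robot; a points formula could be applied here or later in PHP
for name, wins in cur.execute(
        "SELECT RobotWinner, COUNT(*) AS wins FROM fights "
        "GROUP BY RobotWinner ORDER BY wins DESC"):
    print(name, wins)
```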
Mike - Bobblebot.co.uk
-
- Posts: 1134
- Joined: Tue Jan 20, 2004 12:00 am
- Location: London
- Contact:
The problem comes in double elimination when someone goes all the way to the final winning all their fights, while someone else loses in the first round and then wins through to the final. This only happens with 8 or more robots, as in the example below.
Would a scoring system based on final place, like F1, be more suitable, or a ratio of wins to losses?
=================================================
ie 8 robots
A B C D E F G H, winners in bold
Round 1
A v B
C v D
E v F
G v H
Round 2
Winners--------Losers
A v C-----------B v D
E v G-----------F v H
------------------B v C
------------------F v G
Round 3
Winners--------Losers
A v E-----------B v F
------------------B v E
Final B must win twice
A v B
Replay
A v B
Results
A 1st 4 wins 1 loss {10points 80%}
B 2nd 5 wins 2 loss {8points 71%}
E 3rd 3 wins 2 loss {6points 60%}
F 4th 2 wins 2 loss {4points 50%}
C 5th= 1 win 2 loss {2points 33%}
G 5th= 1 win 2 loss {2points 33%}
D 7th= 0 win 2 loss {1point 0%}
H 7th= 0 win 2 loss {1point 0%}
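Both suggestions can be computed directly from the standings above. A quick sketch (the place-to-points mapping is an assumption read off the figures in braces; the robot letters and win/loss records come from the example):

```python
# Final standings from the 8-robot example: robot -> (place, wins, losses)
records = {
    "A": (1, 4, 1), "B": (2, 5, 2), "E": (3, 3, 2), "F": (4, 2, 2),
    "C": (5, 1, 2), "G": (5, 1, 2), "D": (7, 0, 2), "H": (7, 0, 2),
}
# Assumed F1-style table, matching the points shown in braces above
place_points = {1: 10, 2: 8, 3: 6, 4: 4, 5: 2, 7: 1}

for robot, (place, wins, losses) in records.items():
    ratio = wins / (wins + losses)  # win ratio as the alternative metric
    print(robot, place_points[place], f"{ratio:.0%}")
```

Note that B collects more wins than A (5 vs 4) yet finishes second; both the place points (10 vs 8) and the win ratio (80% vs 71%) still rank A ahead, which is the behaviour the post is after.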
TEAM GEEK!
- Simon Windisch
- Posts: 1806
- Joined: Tue Apr 15, 2003 12:00 am
- Location: Reading
- Contact:
-
- Posts: 187
- Joined: Thu Jan 27, 2011 5:27 pm
- Location: aldershot
- BeligerAnt
- Posts: 1872
- Joined: Wed May 15, 2002 12:00 am
- Location: Brighton
- Contact:
This thread has produced a lot of posts over the last few days, so it must be interesting (and complicated)!
My 2p's worth:
Since RW101 is undoubtedly the home of the AWS, we should give PeteC first refusal on hosting, especially since there is already the (albeit very out of date) antweight database.
Since Oliver has always posted the results on his site, we should give him second refusal. Note, both Pete & Oliver have been consistently involved with ants since AWS1
I'm happy to modify the AntLog spreadsheets to provide data in a more usable format for import into a database. I don't want to make AntLog a db in its own right - competitions are too fluid for anything other than a simple logging application in my experience.
Double elimination makes it a little awkward to work out placings beyond third. I think the best solution is to award a number of points based on the round that a robot is eliminated in (i.e. second loss). No-one is eliminated in Round 1. Those eliminated in round 2 have lost 2 fights so score zero. Those eliminated in round 3 score 1 point etc. This provides scoring based on overall performance rather than number of fights, so it doesn't matter which route a robot takes to the final, it still gets maximum points if it wins the AWS.
This solution would also award more points for winning a big AWS than a small one. Is this a good thing?
Rankings should definitely be over a limited period, either a calendar year or a "rolling year" (last 3 AWS's).
I really think we should keep this simple, at least to start with, as it could easily grow into something very complex if we let it! For example, parsing the current AntLog spreadsheets to get details of each fight is not straightforward because the spreadsheet layout depends on the size of the group, and the winner can only be determined by comparing cells. This could lead us down a long and tortuous route to obtain data that is only marginally interesting. A simple ranking system is a good start, and if it proves popular we could look at something more complex later on.
Off to think about making the spreadsheet output more database-friendly...
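That elimination-round scheme is straightforward to express in code. A sketch, assuming each robot's round of elimination (second loss) is already known; the rule that the unbeaten champion scores one more than the best eliminated robot is my reading of the proposal, not something stated outright:

```python
def elimination_points(eliminated_in_round):
    """Map robot -> round of second loss (None for the unbeaten champion).

    As proposed above: eliminated in round 2 -> 0 points, round 3 -> 1 point,
    and so on. No-one can be eliminated in round 1 of a double-elimination draw.
    """
    points = {robot: rnd - 2
              for robot, rnd in eliminated_in_round.items() if rnd is not None}
    top = max(points.values(), default=0)
    for robot, rnd in eliminated_in_round.items():
        if rnd is None:
            points[robot] = top + 1  # assumed: the champion tops the table
    return points

# Hypothetical outcomes: A never eliminated, D out in round 2, etc.
print(elimination_points({"A": None, "B": 5, "E": 4, "D": 2}))
```

Because points depend only on how far a robot survives, it doesn't matter whether it reached the final through the winners' or losers' bracket.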
Gary, Team BeligerAnt
- joey_picus
- Posts: 1137
- Joined: Tue Jan 13, 2009 1:51 pm
- Location: Lancaster, Lancashire
- Contact:
BeligerAnt wrote: Note, both Pete & Oliver have been consistently involved with ants since AWS1
I think they could well be the two people who have been involved in Robot Wars, in any form, for the longest amount of time.
Regarding the big AWS/small AWS thing: I would personally have thought it was broadly a good thing - coming first out of 65 robots is arguably far more of an achievement than first out of 20 or 30 robots. However, if the competition sizes are so disparate that a robot ranking about 20th in a bigger competition gets more points than the winner of a smaller AWS, then you do have problems, so... statistics isn't my strong point, but I thought I'd better throw it out there?
Joey McConnell-Farber - Team Picus Telerobotics - http://picus.org.uk/ - @joey_picus
"These dreams go on when I close my eyes...every second of the night, I live another life"
- peterwaller
- Posts: 3213
- Joined: Fri Feb 15, 2002 12:00 am
- Location: Aylesbury Bucks
- Contact:
First, I think this is a good thing to get organised properly. Maybe we could look at the results over the last year using each of the methods that look promising and compare the rankings to see which is most representative.
One problem will be that some people, like me, recycle names, so they would continue to count even for a completely new design, whereas others have a new name every time they enter.
- BeligerAnt
- Posts: 1872
- Joined: Wed May 15, 2002 12:00 am
- Location: Brighton
- Contact:
So, I've been playing with the results from AWS33 and it looks fairly promising, with some caveats.
With a field of 55 robots, 46 score some points:
9 score 0
15 score 1
8 score 2
7 score 3
4 score 4
4 score 5
2 score 6
2 score 7
1 scores 8
1 scores 9 (3rd place)
1 scores 10 (runner-up)
1 scores 11 (winner)
I can modify the spreadsheets to automatically calculate the points and produce a sorted list of robots and points (on a separate sheet) at the click of a button.
The number of points gained in a group will depend on the group size (i.e. the number of rounds in the group stage). Since we often need different-sized groups, the number of points may vary. The final stage handles this by basing its scoring on the highest-scoring group. This seems to be the only sensible solution.
i.e.
Groups 1-7 = 7 way, max score = 3
Group 8 = 6 way, max score = 2
First round of finals scores 4 points
Edit: Here's the results file from AWS33:
http://homepage.ntlworld.com/g0xan/aws3 ... points.csv
One slight issue occurs to me... The robot names need to be entered the same at each event, even down to capitalisation, punctuation etc. Variations in spelling could lead to some "interesting" (i.e. undesired!) results.
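One common way to blunt the spelling problem is to normalise names before matching them across events. A sketch (the function name and the exact rules are my own; genuine renames would still need a manual alias list):

```python
import re

def normalise_name(raw):
    """Fold case and collapse punctuation/whitespace so that variants such as
    'Anty-Two', 'anty two' and 'ANTY  TWO' all compare equal."""
    return re.sub(r"[^a-z0-9]+", " ", raw.lower()).strip()

print(normalise_name("Anty-Two") == normalise_name("anty  two"))  # True
```

The spreadsheets could apply the same normalisation at data-entry time, so the exported CSV already contains canonical names.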
Gary, Team BeligerAnt