
9 May 2010

A first step in statistics

In the NBA, they have shot charts for each game which show where each shot was attempted from and whether it was made or missed. Here's an example for a Cavs win over the Celtics.

In ultimate, when the statistical revolution gains steam, we should track how and where teams score with the disc and where they turn over the disc.

Where does the disc go dead/get dropped? Where is it thrown from? Are sideline hucks more risky? Is a team getting stuck on the sideline?
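Here's a rough Python sketch of what that tracking could look like. Everything in it is my own invention (the coordinate convention, the 5 m "sideline" band, the ThrowEvent record); it's just one way to log throws so that a question like "are sideline hucks more risky?" becomes a few lines of code:

    from dataclasses import dataclass

    # Hypothetical throw log entry. Coordinates are metres: x is distance
    # downfield towards the attacking endzone, y is distance from the
    # left sideline (a WFDF field is 37 m wide).
    @dataclass
    class ThrowEvent:
        thrower: str
        receiver: str
        x_from: float
        y_from: float
        x_to: float
        y_to: float
        completed: bool  # False = turnover (drop, block, throwaway)

    FIELD_WIDTH = 37.0

    def sideline_huck_risk(events, band=5.0, huck_dist=30.0):
        """Completion rate of hucks released near a sideline
        vs. hucks released from the middle of the field."""
        hucks = [e for e in events if e.x_to - e.x_from >= huck_dist]
        near = lambda e: e.y_from < band or e.y_from > FIELD_WIDTH - band
        side = [e for e in hucks if near(e)]
        mid = [e for e in hucks if not near(e)]
        rate = lambda g: sum(e.completed for e in g) / len(g) if g else float("nan")
        return rate(side), rate(mid)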

We could also log the time between throws and catches, to identify how quickly the disc is actually moving in an offence, and whether this correlates positively or negatively with scoring a goal, for a particular team.
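Again as a hypothetical sketch (the (touch_times, scored) format is mine, and this is a plain Pearson correlation, nothing fancier):

    from statistics import mean

    def disc_speed_vs_scoring(possessions):
        """possessions: list of (touch_times, scored) pairs, where
        touch_times are clock times (seconds) of each catch in one
        possession and scored says whether it ended in a goal."""
        xs, ys = [], []
        for times, scored in possessions:
            if len(times) < 2:
                continue
            gaps = [b - a for a, b in zip(times, times[1:])]
            xs.append(mean(gaps))             # average seconds per throw
            ys.append(1.0 if scored else 0.0)
        # Pearson correlation, computed by hand to stay dependency-free
        mx, my = mean(xs), mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        if var_x == 0 or var_y == 0:
            return float("nan")
        return cov / (var_x * var_y) ** 0.5

A negative value would mean that faster disc movement (shorter gaps between touches) goes together with more goals for that team.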

14 October 2008

2008 Uni Games Part 3 - the point differentials

In Part 2 of my Uni Games review, I looked at the five teams with the best point differential per game.

Team A: +7.9
Team B: +5.4
Team C: +4.6
Team D: +3.5
Team E: +3.5

These teams were, in order, Sydney Uni, Flinders, Monash, UWA, Adelaide.

Most folks correctly guessed Sydney at the top; after that it was a mix of correct and incorrect guesses.

Here is a graph. All game results and the graph are here. Scores came from AFDA.
The teams are ordered left to right by where they finished the tournament. So Flinders (1st) is the left most column, Sydney (2nd) is next, then Adelaide, Melbourne and so on all the way across to QUT (19th).

The point differential measured here is a good predictor of ability, condensed into a single number.

Let's look at an example. Deakin had 5 wins and 6 losses for the tournament, while La Trobe went 5-5. They seem pretty close, right? La Trobe only finished one spot higher. But La Trobe was losing to the best teams by only a few points and generally thumping the lower-placed teams, while Deakin never got closer than 6 points to any team that finished in the top 8.

The point differential shows this: La Trobe +3.1 and Deakin -2.6.
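For the record, the stat is nothing exotic, just average margin; a couple of lines of Python compute it from raw scores:

    def point_diff_per_game(results):
        """results: list of (our_score, their_score) for one team's games."""
        return sum(ours - theirs for ours, theirs in results) / len(results)

    # e.g. a team that went 15-10, 9-15 and 15-13:
    point_diff_per_game([(15, 10), (9, 15), (15, 13)])  # (+5 - 6 + 2) / 3 = +0.33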

What else do we notice?

Flinders had a point differential of +5.4, lower than Sydney Uni (+7.9), who had swept through all their opponents more easily than anyone else. So Flinders' win in the final can be considered an upset, given Sydney's scoring ability.

Melbourne Uni, the team that finished 4th, finished higher than point differential would predict, while Monash finished lower (7th). Melbourne's one-point quarterfinal victory over Monash (the single game that changes final position the most) was responsible for that.

Point differential isn't strictly comparing apples with apples. Some players get injured. Some teams rest their stars for parts of games. And the teams did not play all other opponents.

What is consistent is that the top 8 teams each played 3-4 pool games, 4-5 crossover games against strong opponents, and then 3 games against strong opponents. So comparisons between them are pretty reliable. La Trobe, Deakin, Murdoch and Ballarat had a different run, playing 3 weaker opponents to finish the tournament. La Trobe, in particular, feasted on RMIT and ECU on Thursday (15-2 wins in both games) while the top 8 teams were playing strong opponents.

The remaining 7 teams had weaker opponents for the majority of the tournament.
 
A glance at the graph of point differential does show 9 elite teams though, which matches well with subjective observation at the tournament.

And the graph does point out how much better ANU were than their final placing of 15th suggests.

Is point differential useful? Well, next time you are at a tournament and want to predict the outcome of key games such as semis and finals, check out the scoring margins from previous games. There will always be occasional upsets, but on average, the team with the higher point differential goes through.
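That rule of thumb in code form (a crude heuristic, not a calibrated model; the numbers below are the ones from this post):

    def predict_winner(team_a, team_b, diff):
        """diff maps team name -> point differential per game."""
        return team_a if diff[team_a] >= diff[team_b] else team_b

    diff = {"Sydney Uni": 7.9, "Flinders": 5.4}
    predict_winner("Sydney Uni", "Flinders", diff)  # -> "Sydney Uni"
    # ...which is exactly the prediction the 2008 final upset.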

20 August 2008

The stars of Worlds

Strap on your helmet, fasten your seatbelts, we're entering Stats World...

Here are the player statistics for WUGC2008. I have put the data online as a spreadsheet too, for anyone to analyse. It is a list of the players, in order of goals caught plus assists thrown, per game played.

You can pull some interesting stories out of these numbers, despite their limitations.

Let's sift through this data a bit to find the stars of Worlds 2008.

Firstly, Juniors should be in a separate category. The junior divisions are still mostly uneven playing fields, with a wide range of abilities and athleticism on many teams. Some junior players will go on to become superstars but don't show it now, whereas the older divisions have a bit more "what you see is what you get". Many of the non-North American juniors have only been playing for 1 or 2 years.

So we'll pull them out of this list, and put them in their own category.

Now to the level of ultimate played. The big stats from players on weak teams are less representative of elite play. There is a decently sized pool of players who could throw or catch numerous goals playing with weaker players against weaker teams. But only the best can do so on a strong roster in the top pools. So I have pulled out players whose team finished in the bottom third of their division, an arbitrary cut-off point (I'm all ears for a more objective method for filtering out weak games).

Those left behind are the stars.
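For anyone who wants to redo this filtering on the spreadsheet, here is all it amounts to in Python (the dict field names are mine, and the two-thirds cut-off is the same arbitrary one as above):

    def find_stars(players, finish, division_size):
        """players: dicts with 'name', 'division', 'team', 'goals',
        'assists', 'games'. finish[(division, team)] is the team's
        final place, division_size[division] the number of teams."""
        kept = []
        for p in players:
            if p["division"] == "Juniors":
                continue  # juniors get their own category
            if finish[(p["division"], p["team"])] > (2 / 3) * division_size[p["division"]]:
                continue  # drop players from bottom-third teams
            kept.append(p)
        return sorted(kept,
                      key=lambda p: (p["goals"] + p["assists"]) / p["games"],
                      reverse=True)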

Somewhere down the track, WFDF may put in the manpower to track who plays which point, and we can get more reliable goals per point (GPP) and assists per point (APP) stats, as opposed to the current per-game stats. Actually, these current numbers are not even true GPG and APG stats, as we don't know who sat out games in this list.

In this statistically measured future, I feel we need a scaling factor to account for the far larger number of possessions a player starting a point on O faces compared to a D player.

Alternatively, we need to separate the stats into Goals per Offensive Point (GPOP) and Goals per Defensive Point (GPDP), and likewise for assists.
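If that point-level data ever existed, the split would be straightforward to compute. A sketch, with an invented data format since WFDF records nothing like this today:

    def goals_per_point(points, player):
        """points: dicts with 'on_field' (set of player names),
        'started_on_offence' (bool, for the player's team) and 'scorer'.
        Returns (GPOP, GPDP); assists work the same way via 'assister'."""
        o_pts = d_pts = o_goals = d_goals = 0
        for pt in points:
            if player not in pt["on_field"]:
                continue
            if pt["started_on_offence"]:
                o_pts += 1
                o_goals += pt["scorer"] == player
            else:
                d_pts += 1
                d_goals += pt["scorer"] == player
        gpop = o_goals / o_pts if o_pts else 0.0
        gpdp = d_goals / d_pts if d_pts else 0.0
        return gpop, gpdp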

I'll finish with some questions.

How good an indicator of player performance is goals plus assists per game? I did a simple comparison to subjective opinion in 2006. The Australian selectors for the 2005 World Games picked 6 men from across the country that year. One year later, 5 of those players were in the top 6 Australian male scorers at World Clubs (I'll try and put those stats online soon too). That's a solid correlation. I am certain we can find better measures though.

How does Ultistats show this info? Does it have different stats? I can't remember what it showed when I last used it.

And lastly, does this list have anyone at the top who looks ridiculously out of place?

I believe it passes the "laugh test".

Rank  Division  Team  No.  Name               Goals  Assists  Points  Avg/Game
   1  Open      JPN    12  Yohei Kichikawa       23       35      58      5.8
   2  Masters   GBR    80  Merrick Cardew        23       29      52      4.73
   3  Masters   NZL    46  Gary Jarvis            5       44      49      4.45
   4  Open      JPN    10  Masahiro Matsuno      23       20      43      4.3
   5  Masters   NZL    47  Shane Vuletich        20       26      46      4.18
   6  Women     JPN     8  Sanako Inomata        12       29      41      4.1
   7  Open      CAN     7  Michael Grant         21       19      40      4
   8  Women     AUS    14  Diana Worman          20       22      42      3.82
   9  Women     GER    44  Sara Wickström        14       24      38      3.8
  10  Mixed     CAN    98  Brendan Wong          28       10      38      3.8
  11  Open      VEN    11  Pablo Saade            5       32      37      3.7
  12  Women     COL    37  Andrea Trujillo       15       22      37      3.7
  13  Women     AUS     2  Lauren Brown          28       11      39      3.55
  14  Open      SWE    17  David Wesley          31        8      39      3.55
  15  Women     JPN     9  Eri Hirai             31        4      35      3.5