As I’ve previously discussed, my main Expected Goals (ExpG) Model is based entirely on where shots on target were taken from. This makes it really handy to begin measuring the quality of saves a keeper makes as distance and angle of shot are two big factors in how difficult a save is. The model contains nearly 13500 Premier League saves and now includes direct free kicks and penalties too.
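To illustrate the shape of a location-based model like this, here's a toy sketch: a logistic function of shot distance and angle. The coefficients are purely illustrative and not fitted to my data — the point is just that distance and angle alone can drive a conversion probability.

```python
import math

def shot_expg(distance_m, angle_deg):
    """Toy ExpG for a shot on target: logistic in distance and visible-goal angle.
    Coefficients are illustrative only, not fitted to any real dataset."""
    z = 1.2 - 0.11 * distance_m + 0.02 * angle_deg
    return 1 / (1 + math.exp(-z))

# A shot from 6m with a wide angle should rate far higher than one from 20m
close_shot = shot_expg(6, 30)
long_shot = shot_expg(20, 30)
```

Any real model would be fitted on thousands of shots and could add further variables, but even this skeleton captures the intuition: closer and more central means harder to save.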
I can put keepers’ saves through the model and rank each keeper on the saves they’ve made this season. Here’s the table:
Over a season, ExpG doesn’t seem to favour keepers who play for ‘good’ or ‘bad’ defensive teams. Chelsea and Southampton conceded very few shots all told this season. Everton were middling in this regard while Sunderland and West Ham conceded loads. Keepers from these teams make up the Top 6 here.
I have looked at the % of shots keepers face from each of the 5 major location zones (see this piece on Petr Cech for details) and the spread isn’t huge. All keepers, no matter who they play for, have to deal with a similar proportion of shots from each major zone.
Few in the football analytics community like save data, as saving shots has been shown not to be a repeatable skill from one season to the next (even when the shot data takes location into account). I’m a believer that, in time, it’s a skill that shows through. @DanKennett did a great piece on Statsbomb recently hinting that this might be the case. I asked Dan to do a Premier League only version of his graphic:
While Dan’s data only takes into account shot volume (not location or anything else) it’s by far the biggest bank of data (6 years) I’ve seen for individual keepers. For the unfunnelitiated (made up word alert) amongst you, any keeper outside the curved lines is two standard deviations above or below the mean. This is where we can start to surmise that these keepers might be showing statistically significant skill (or lack of it) compared to their peers. I don’t know about you but those names above and around those curved lines look about right to me.
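For anyone wanting to draw those curved lines themselves: I don't know Dan's exact method, but the standard funnel-plot approach treats each shot on target as an independent coin flip, so the control limits around the league mean save rate narrow as shots faced grows. A minimal sketch (the 0.70 mean save rate below is a hypothetical figure, not taken from Dan's data):

```python
import math

def funnel_limits(mean_save_rate, shots_faced, z=2.0):
    """2-standard-deviation control limits for a keeper's save rate,
    assuming each shot on target is an independent Bernoulli trial."""
    se = math.sqrt(mean_save_rate * (1 - mean_save_rate) / shots_faced)
    return mean_save_rate - z * se, mean_save_rate + z * se

# Hypothetical league mean save rate of 0.70, keeper facing 500 shots
lo, hi = funnel_limits(0.70, 500)
```

Evaluate this across a range of shots-faced values and you get the funnel: wide limits for keepers with small samples, tight limits for the six-year veterans, which is exactly why the long-serving names outside the lines are the interesting ones.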
My own data goes back 4 years. I’ve ranked all keepers who’ve made 300+ saves and adjusted for shot location:
You’ll note that over time the number of keepers who have conceded fewer goals than expected has been whittled down to just 3. And they’re the ‘Big 3’ too. As with strikers, it seems that simply maintaining performance just above ‘average’ over time marks you out as a special player.
Doing not so well in Dan’s data and not so well in mine too, is Newcastle’s Tim Krul. To my eyes, the Dutchman is one of the most lackadaisical keepers around. As we’ve seen, though, making saves is a volatile business. And what was the best keeping performance of last season in a single game according to my model? Yep, it was Krul pulling out all the stops at Tottenham. Here goes:
Krul made 14 saves that day. The model suggests that we could expect 3.6 goals on average from shots from those positions. Simulating those shots many thousands of times over suggests there was less than 2% chance that Krul would keep that clean sheet. On that day at least, he was well up for it and focused!
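The simulation behind that 2% figure is simple to sketch: treat each shot as an independent chance of a goal equal to its ExpG, simulate the 14 shots many thousands of times, and count how often none go in. The per-shot values below are hypothetical (chosen to sum to 3.6), not the model's actual numbers for that game:

```python
import random

def clean_sheet_prob(shot_xgs, n_sims=100_000, seed=42):
    """Monte Carlo estimate of the chance a keeper concedes zero goals,
    treating each shot as an independent Bernoulli(ExpG) trial."""
    random.seed(seed)
    clean = sum(
        all(random.random() > p for p in shot_xgs)
        for _ in range(n_sims)
    )
    return clean / n_sims

# Hypothetical ExpG values for 14 shots totalling 3.6 (not the real data)
shots = [0.45, 0.40, 0.35, 0.35, 0.30, 0.30, 0.25,
         0.25, 0.25, 0.20, 0.15, 0.15, 0.10, 0.10]
p = clean_sheet_prob(shots)
```

With independent shots the exact answer is just the product of (1 − ExpG) over all 14, and for values like these it comes out comfortably under 2% — the simulation only really earns its keep once you start asking about one goal, two goals and so on.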
Another performance that deserves special mention is David Marshall’s against Aston Villa back in February. While lots of keepers made a greater number of saves in a match this season, Cardiff’s No.1 faced the highest average shot quality of any keeper who made half a dozen or more saves in a single match. He faced 6 shots with an average ExpG of 0.4 per shot. Simulations suggest there was just a 5% chance of him keeping that particular clean sheet. Unfortunately, I only have limited footage from that day (I’m working on it) but there’s enough here to give a good idea:
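For Marshall's game you don't even need the simulation: with six independent shots each at 0.4 ExpG, the clean-sheet probability is simply 0.6 multiplied by itself six times.

```python
# Six independent shots, each with ExpG 0.4: the clean sheet requires
# saving (or not conceding) all six, so multiply (1 - 0.4) six times
p_clean = (1 - 0.4) ** 6
```

That works out at roughly 4.7%, which is where the 5% figure comes from.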
Two absolutely great saves in there. However, I also wanted to highlight what this model doesn’t take into account – Marshall’s role in shots that go off target. Remember the model just measures saves. I love Marshall’s positioning in relation to his defence for both the efforts that end up going off target here. Many keepers would fly out in these situations. Marshall maintains his composure, edging forward and standing up as long as possible forcing the striker to make his decision. He knows his defenders are covering all the other angles and bases. I can’t stand it when the keeper makes the striker’s decision for him!
My model doesn’t pretend to cover all bases when it comes to measuring keeper performance. The data to do keepers full justice simply isn’t available in the public domain. All shots from a particular location aren’t the same. There’s no indication of defensive pressure in the data – where are the defenders in relation to the striker of the ball, for instance? The model simply gives a benchmark average against which to compare players. Given how few variables it takes into account, though, it looks like it does a decent job over time. Put more data variables in and I think you could make a really good model.
I often wonder how scouts measure and compare what they’re watching. I’ve read The Nowhere Men by Michael Calvin recently. The book gives the impression that keeper scouting isn’t that specialised – scouts that judge outfield players will also file reports on goalkeepers. I may be wrong.
Unless you know how often you can expect ‘that’ save to be made, how do you measure it? How many times do keepers get watched? Does a performance analyst sit there and cut tape of every save they’ve ever made? I sincerely doubt it. Do scouts even look at saves? Are they more interested in command of the box (whatever that means – I suspect it means different things to different people)? Watch a game and you’ll see how a keeper deals with pressure when coming for crosses, for example. ‘Crosses claimed’ numbers on a scouting software package don’t tell you that.
With both on and off ball data being collected now, you could build a model to give you a greater idea of all of this. Scouts and coaches can’t watch every game and remember every detail of performance. A model can pinpoint exactly what you want your scouts to look out for, and you can also use it in training for the keepers you already have. If the model’s serving its purpose, it can tell you things you don’t already know – its strength should be in seeing what you can’t.
In the coming weeks I’ll look at holes in certain keepers’ games that even my simple model has identified. Some keepers over the last 4 seasons have been beaten at a statistically significant level from certain shot locations. Most of the time, it’s down in part to poor decision making and positioning by the keepers – not just an AWOL defence.
As usual hit me with your thoughts @footballfactman on Twitter or in the comments below.