New to PFF?
Offering a tour of PFF's Premium Stats, Sam Monson helps you navigate the numbers.
New To PFF? Let’s Take the Tour
Free Agency is a little like Christmas at PFF. It’s the one time of the year everybody wants a subscription to be able to tear off the wrapping paper on their team’s new signing and really see what they’ve got.
We get a lot of new subscriptions in March every year, so it seems like a good opportunity to take a little spin around the PFF Premium section and set out a few rules of thumb for navigating and understanding PFF data.
Rankings: Context and Limitations
The first place many people will head is our ‘By Position’ pages. This is where you can see a ranking of each position we chart for every season of the PFF era (2007 onward; ’07 is only partially complete as yet, but is being added in its entirety over the coming weeks). You can see exactly where that new guard you signed ranked compared with the rest of the NFL. Ditto your new corner, receiver, running back and so on.
The problem is the rankings need a bit of context. When the site first went up, Neil Hornsby, PFF’s founder, didn’t want to have position rankings at all. He figured people would see the numbers out of context and run with only half the story. That’s exactly what happens all too often, but with a little extra thought the rankings can still be excellent tools.
Let’s take the cornerback rankings as an example. Last season Vontae Davis’ season grade landed him inside the Top 5 at the end of the year, but 72% of his positive grade came in one ludicrously good game against Peyton Manning’s Broncos in Week 7. You won’t find a PFF staff member who would argue that Davis was among the five best corners in football last season because they’re conscious of the context.
Position rankings are arranged by overall ranking by default. That values the grade for each category we evaluate equally, but often people don’t believe they should hold equal value. If you don’t care whether your corner plays the run, then click to sort the rankings by coverage grade instead. Maybe you don’t care if your tight end can block – click to sort by receiving only. Sorting by coverage alone last year takes Earl Thomas from ninth overall to inside the Top 5 safeties and drops Revis from the No. 1 spot to fifth among corners.
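That re-sorting idea is simple enough to sketch in a few lines of Python. The player names, grade columns and numbers below are all invented for illustration, not PFF’s actual data or schema:

```python
# Hypothetical cornerback rows; the grade columns mimic PFF's category
# breakdown, but every number here is made up for illustration.
corners = [
    {"name": "Corner A", "overall": 12.3, "coverage": 4.1, "run_defense": 8.2},
    {"name": "Corner B", "overall": 9.8, "coverage": 9.0, "run_defense": 0.8},
    {"name": "Corner C", "overall": 10.5, "coverage": 7.7, "run_defense": 2.8},
]

# Default view: sorted by overall grade, best first.
by_overall = sorted(corners, key=lambda p: p["overall"], reverse=True)

# If coverage is all you care about, re-sort by that column instead.
by_coverage = sorted(corners, key=lambda p: p["coverage"], reverse=True)

print([p["name"] for p in by_overall])   # Corner A leads overall
print([p["name"] for p in by_coverage])  # Corner B leads in coverage
```

Note how the leader changes depending on which column you sort by, which is exactly why the default overall ordering is only one view of the same players.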
The rankings also don’t account for a player’s role compared to his peers – we leave it to you to decide how much weight to put on that. Joe Thomas is effectively left on his own at left tackle, rarely ever seeing tight end help, and the Browns a year ago weren’t exactly quick at getting the ball out. By the objective measures we have he’s as good as anyone, and that’s about as tough a situation as any OT faces. By contrast, the Denver offensive line graded well for us, but to what extent Manning’s quick release and ability to manage pressure aided them is for you to decide.
The rankings are a great tool, but should never be presented as the whole story.
Dive a Little Deeper
Don’t limit yourself to just the season stats. Andy Dalton has a barely above-average grade for 2013, but don’t assume that grade comes from playing at that level consistently throughout the season. Instead, Dalton has the game-by-game chart of a coach killer: when he is good, he is really very good, but when he is bad it’s disastrous, and he’s rarely more than a game or two from either extreme.
On the other hand, there are players out there who have a robotic consistency. Evan Mathis isn’t just the best guard in football, but he’s a guy who practically never has an off-day. He hasn’t graded negatively overall since the final game of the 2010 season when he played just 26 snaps for the Bengals.
At the top right of any player’s page there is a drop-down menu for statistics. The options vary depending on the position that player plays, but there is far more information on each player than what appears on his player page by default. Play around with every drop-down menu you can find.
Don’t Cross Compare Grades
The PFF grading system is a simple one: 0.0 is the average grade. Anything positive is better than average, anything negative is below it, and the further a grade sits from 0.0, the better or worse the performance. However, because grades are normalized to account for the number of snaps played, each position’s grades are on a different scale – they cannot be directly compared. A +20.0 for a QB is different from a +20.0 for a running back, defensive tackle, safety and so on.
Grades can be compared within each position group, hence the ranking pages, but they don’t compare directly across positions.
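The within-position-only rule can be illustrated with a toy snippet. The grades below are invented (only the +25.5/-4.5 Rivers figures mentioned later in this piece are real PFF numbers, and they are not reused here):

```python
# Invented grades for illustration: a +20.0 QB and a +20.0 RB are not
# comparable to each other, but each can be ranked within his own group.
grades = {
    "QB": {"QB A": 25.5, "QB B": 20.0, "QB C": -4.5},
    "RB": {"RB A": 20.0, "RB B": 6.1, "RB C": 1.2},
}

def rank_within_position(position):
    """Return players at one position, best grade first."""
    players = grades[position]
    return sorted(players, key=players.get, reverse=True)

# The same +20.0 grade is second among these QBs but first among the RBs,
# so the raw number means nothing outside its own position group.
print(rank_within_position("QB"))
print(rank_within_position("RB"))
```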
Stats Lie – All the Time
Seems a little odd coming from PFF, right? Believe it or not we’re not actually a stats company. We provide plenty of stats, but they get produced almost as a by-product to the real work we do – player grading and football analysis. We grade every player, on every play, in every game (I think I remember reading that somewhere…), and while we’re doing that we figure we might as well take down a whole host of unique data.
People often criticize PFF by pointing out that Moneyball can’t work in the NFL the way it has in baseball; there are too many variables. Believe me when I tell you we are the last people to whom that needs to be pointed out. Baseball can easily be distilled into a few numbers that accurately reflect key trends over time. Football virtually never works like that – there are just too many moving parts on any given play and too few data points. Identical situations present themselves too rarely to produce bullet-proof statistics.
Take our coverage numbers for example. They’re certainly indicative – the best players usually have good coverage numbers and the worst usually very bad ones – but they lie all the time, and the smaller the sample size you’re looking at the more fragile they are.
Maybe a corner blew his coverage, got killed by a receiver off the line, but the receiver dropped a perfectly thrown pass for what should have been a touchdown. The stats say 0-for-1, but we all know that corner blew it and got lucky. We always point people back toward the grades. The grades add intelligence to the numbers, accounting for what really happened on the play, not just the resulting statistic or the overall outcome.
If a quarterback threw a great deep pass but the receiver bobbled it into the hands of the safety that was otherwise beaten on the play, the quarterback can earn a significant positive for a throw that resulted in an interception. It works in reverse too. If that same quarterback fires a ball straight to a waiting defender who should have had an easy pick-six on a plate, but he drops it, that won’t absolve the quarterback from a terrible throw and he will receive the same downgrade as if the defender had caught it.
Stats are nice, but no one statistic can come close to telling the whole story. Our grades get you far closer.
Performance Changes Year to Year
Just because PFF says your favorite player was the 87th-ranked receiver last season doesn’t mean we think he sucks. We’re just analyzing what happened in any given period of play. Good players play poorly sometimes and terrible players can manage to hit the jackpot 16 times running and put together a pretty good season. We aren’t analyzing talent, and the grades aren’t necessarily predictive.
In 2012 Philip Rivers graded out at -4.5 for the season; a year later he was at +25.5 and trailed only Manning and Brees in our overall quarterback rankings. Players fluctuate wildly in performance from one season to the next, often for reasons none of us can know. Just because a player grades poorly or well one year doesn’t necessarily mean that is his baseline.
Those are a few of the big topics covered. Now dive in and enjoy the football data!
Follow Sam on Twitter: @PFF_Sam