Overrating Athletes And Predicting Success: Trouble In The Scouting Ranks
Basketball, Bull City, Football — By Joe Simmons on May 19, 2014 at 2:39 pm
Very few people in life can say that they get paid to critique athletes. As a color analyst for a few networks, I often get told by my production manager to stop being a fan and become more critical of the players I am watching. The thing about me is that I just love being a fan. I do have my critical moments, but being a fan is the greatest thing about sports. There are so many ways to be a part of the team and the process.
Earlier this week, I was on Twitter venting about how high school athletes are rated. Plenty of people were criticizing the NFL draft because X-PLAYER was the number one rated athlete out of high school and didn’t get drafted, or Y-PLAYER was a five-star recruit out of high school and didn’t go until the seventh round.
The gap between high school athletics and professional athletics is enormous. In football especially, the difference between getting hit by a 16-year-old and getting hit by a 25-year-old veteran is pretty noticeable. However, my issue isn’t with the fact that these kids went undrafted, but with the whole rating process itself.
Since I started working at all three levels, I’ve noticed something about these high school athletes and their ratings. Too many of them are rated too high. When you sort players into tiers (stars), very few should land in the highest category. In fact, the highest category should be reserved for the top one percent. Oversaturating a tier not only waters it down; it invites questions about how a player earned that status in the first place.
I’m a little old-school myself. Production is essential when I rate a kid. When a coach asks me what I think, I tell that coach my view based on what I see. Combine numbers are useful, but if they don’t translate to on-field production, they are just measures of potential; they tell you nothing about how the player actually plays the game. When a kid runs a 4.4 40-yard dash, I expect to see breakaway speed on the football field or the ability to create separation. If I watch tape and I don’t see that, then I have questions about the time or the kid.
Even in basketball, how often has the number one rated player in high school let us down? The past few seasons bear that out. It’s not that these kids aren’t good. It’s not even that they aren’t the best in their age group. It’s that the level of competition isn’t consistent across the board, so a kid can get overrated because he or she had measurables and dominated weak competition on the regular.
Harrison Barnes in 2010, Michael Kidd-Gilchrist in 2011, Nerlens Noel in 2012 and Andrew Wiggins in 2013 were all the top-rated players in high school basketball. These kids are good, but the first three have yet to cement their impact in the NBA. Even in college, none of them were what we thought they would be. Granted, they all only stayed a year or two, but out of high school they were all projected as consensus number one picks. If Wiggins goes number one, he will be the only one of the group to do so.
Competition and measurables are important. Playing against older players usually exposes a player’s weaknesses. These rating systems all need a little work. I know that none of them will ever be perfect, but so many people are eager to put these kids on pedestals that they forget to make them earn their way onto these lists. Some evaluators do a great job of it, while others put these kids in situations they can never live up to. It’s not the kids’ fault that they have been overrated.
The fan in me hates to see these kids fall short and have to drop down to the level where they should have been placed in the first place. So maybe it’s time to reevaluate the entire ranking process.