Last Thursday and Friday, I was lucky enough to be among the 22 media members selected to participate in the NCAA's annual Men's Basketball Mock Selection, an exercise the NCAA uses to educate and inform the media about the selection process.
The exercise reinforced a notion I have developed over the last two years of interacting with members of the men's basketball committee: no process is vetted as thoroughly as the selection of the field for the men's basketball tournament.
Each of the 10 members of the committee is assigned primary and secondary conferences and is expected to keep up to date with everything going on in those conferences throughout the year.
Those members watch as many games as possible, and with the Horizon League Network, ESPN and ESPN Full Court packages, I know they watch plenty of Horizon League games.
The Horizon League monitors hold routine conference calls with the league and review the league's weekly release, so I can speak to their preparation firsthand. While the conference monitors watch the Horizon League Network or the ESPN networks to keep track of the Horizon League, I fill out an online report each week to fill in any gaps.
The NCAA and the Men's Basketball Committee ask all 31 conferences to compile Conference Monitoring Reports: detailed analyses of each team's best wins, worst losses, key injuries, other factors influencing outcomes and upcoming games to watch.
When in the room, committee members have tools on top of tools to help differentiate teams that are remarkably close in comparison. While the RPI is used as a sorting tool, Sagarin, Pomeroy, Palm, Sukop, LMRC and other ranking systems are available.
Yes, the RPI is all over the materials, whether in determining top-50 wins, sub-200 losses and the like. While Jeff Tourial (the West Coast Conference's Director of Communications) and I largely kept it simple, the colleagues next to me, Matt Norlander and Randy McClure, made a dogged effort to use every available tool.
In all likelihood, the duo simulated the work of an actual committee member more faithfully than just about anyone else in the room (I know Jeff and I would have made a concerted effort to use more sources had we had more time). And while consulting every ranking was time-consuming in the Mock, the actual committee, with the rankings readily available on the massive overhead projector, has the full selection week to reference outside numbers as much as it likes.
With this in mind, the 10 members of the men’s basketball committee become virtual experts when it comes to the 60 or so teams that will wind up under consideration and will use a variety of criteria to compare and vet each team.
To further complicate matters, members of the committee with conflicts must leave the room when their teams are discussed; Jeff Hathaway, chair of the committee this year, will in all likelihood miss plenty of discussion when Big East teams come up, since he works for the conference.
The Mock Selection takes a week’s worth of preparation and bracket creation completed by the committee and condenses it into 24 hours. Because of that, there is no way to accurately simulate just how thorough the vetting process truly is when those 10 members gather in Indianapolis each March. With six days to discuss each and every nuance of every team under consideration, the committee can be confident that it has built the best field of 68 possible.
How can I be sure of that statement? Because in the short 24-hour span that was meant to simulate six days, I found the process of selection to border on the obsessive.
Initially, committee members will give reports on all the teams from their respective conferences they feel are locks for the tournament or deserve consideration. For instance, when discussing the Horizon League, one might have said, “there are no locks, but Cleveland State deserves consideration.”
With that in mind, the monitor will give a detailed report on Cleveland State, breaking down the Vikings’ season, noting any injuries (the D’Aundray Brown injury would receive a detailed report), and answering any other questions committee members may have.
After all the reports, committee members then vote on the teams they believe to be locks, with any team receiving all but two of the eligible votes advancing straight to the at-large pool.
From there, the most thorough vetting process outside of military operations commences: members build a "List 8," each voting for the next eight teams they believe should be in the field.
The eight teams receiving the most votes then advance to a "Rank 8," where each member ranks those teams from 1 to 8. The four teams with the fewest points advance into the at-large pool, with the other four remaining in "purgatory."
Then it's back to the List 8, with the top four vote-getters heading into a new Rank 8 alongside the teams in purgatory. Again, the top four teams advance into the at-large pool.
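For the procedurally minded, here is a minimal sketch of one List 8/Rank 8 cycle in Python. The ballot format, the point tally and the tie handling are simplifying assumptions of mine for illustration; the committee's actual balloting software is not public.

```python
from collections import Counter

def list_vote(ballots, n_advance):
    """'List 8' step: each ballot is a set of eight team names a member
    believes should be in the field next. The n_advance most-named teams
    move on (eight in the first round, four in later rounds)."""
    votes = Counter(team for ballot in ballots for team in ballot)
    return [team for team, _ in votes.most_common(n_advance)]

def rank_vote(rankings):
    """'Rank 8' step: each ranking maps the eight candidates to places
    1-8 (1 = best). The four teams with the fewest total points enter
    the at-large pool; the other four remain in 'purgatory'."""
    points = Counter()
    for ranking in rankings:
        for team, place in ranking.items():
            points[team] += place
    ordered = sorted(points, key=points.get)
    return ordered[:4], ordered[4:]  # (into the field, purgatory)
```

Repeating that cycle (rank eight, bank the best four, refill from purgatory plus the next list's top four vote-getters) is roughly how the at-large pool grows toward 37; recused members simply would not submit ballots for those rounds.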
Of course, writing this makes the process out to be extremely simple. Within the room, it is anything but. Since the top at-large teams are all placed into the pool at once, the Under Consideration pool includes teams that have remarkably similar resumes.
Consider these blind resumes, built from just some of the available metrics (yes, they're RPI-based metrics, but still):
Team A
- RPI: 44
- vs. RPI 1-50: 2-2
- vs. RPI 51-100: 3-1
- Non-conference strength of schedule: 253

Team B
- RPI: 41
- vs. RPI 1-50: 3-3
- vs. RPI 51-100: 2-2
- Non-conference strength of schedule: 237

Team C
- RPI: 36
- vs. RPI 1-50: 1-5
- vs. RPI 51-100: 5-1
- Non-conference strength of schedule: 57
How should you evaluate that? Which aspect do you weigh more heavily? Is it the top-50 wins? Is it a team going out and challenging itself in non-conference play? Where do injuries factor in? These questions, and a host of others, factor into both the "List 8" and "Rank 8" scenarios as you try to pare the field to the most deserving 37 at-large teams. (For the record, Team A is New Mexico, Team B is Kansas State and Team C is Alabama.)
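As a purely illustrative exercise, the disagreement is easy to reproduce in code. The scoring function and weights below are my own arbitrary assumptions, not anything the committee uses; nudging a single weight reorders the three teams.

```python
# The blind resumes from above as data (records are wins-losses).
teams = {
    "New Mexico":   {"rpi": 44, "top50": (2, 2), "next50": (3, 1), "nc_sos": 253},
    "Kansas State": {"rpi": 41, "top50": (3, 3), "next50": (2, 2), "nc_sos": 237},
    "Alabama":      {"rpi": 36, "top50": (1, 5), "next50": (5, 1), "nc_sos": 57},
}

def score(team, w_sched=0.0):
    """Arbitrary toy metric: reward top-50 win margin and, optionally, a
    tougher (numerically lower) non-conference schedule. It ignores the
    51-100 record and injuries entirely; that is exactly the kind of
    omission the room would argue about."""
    wins, losses = team["top50"]
    return (wins - losses) - w_sched * team["nc_sos"] / 100

# Top-50 record only: New Mexico and Kansas State edge Alabama.
print(sorted(teams, key=lambda n: score(teams[n]), reverse=True))
# Weight schedule strength heavily: Alabama jumps to the front.
print(sorted(teams, key=lambda n: score(teams[n], w_sched=3.0), reverse=True))
```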
Now, the 10-member committee has six days (and, really, a whole season) to hash these questions out, using a variety of tools to make its decisions. We had those same tools, but our process was compressed far beyond anything the committee faces over March 6-11.
To harp on the selection committee for using the RPI as a metric seems to miss the forest for the trees; at the end of the day, committee members use it as one of several available tools, as Committee Chairman Jeff Hathaway expressed. The sheer amount of preparation, whether it is watching as many games as possible, talking with conferences or getting feedback from coaches, ensures that the RPI is not the determining factor in whether or not a team reaches the NCAA Tournament.
Are there better metrics, or an easier way of tabulating all the metrics into one giant formula that encapsulates everything? Quite possibly. But at the end of the day, based on my experience, the RPI is not the be-all and end-all in determining whether a team is worthy of at-large selection to the NCAA Tournament.
Instead, the onus is placed where it belongs: on the teams. Build a strong non-conference schedule (which for some "mid-majors" is easier said than done, given past successes), win games both in and out of conference, and prove yourself to be among the best 37 teams in the country.
Is it tough? Certainly. But with the committee able to watch virtually every game and a wealth of data at its disposal, no stone is left unturned in filling the field. And once you're in, as the Horizon League has demonstrated over the years, anything is possible.