The NCAA tells us that there are 5 primary criteria for selection. In a nutshell they are won-lost percentage, head-to-head results, results versus common opponents, results versus ranked opponents and Strength of Schedule (SOS). The NCAA also tells us that all of the criteria will be evaluated and that they are “not listed in priority order”. This is an interesting statement because, if you read into it, there is a priority order; they just don’t tell us what it is. It could also mean that all of the criteria are weighed equally, but if that were the case, why not just say so?
I firmly believe that the NCAA does have a priority order and that they elect not to publicize it so they don’t have to answer to esteemed journalists like myself. (Note – I entered “third-rate hack blogger” but Word autocorrected it to “esteemed journalists”. Not sure how to fix that.) It also allows the NCAA to do one more thing I firmly believe they do: change the priority order. I don’t mean change it each year based on feedback and lessons-learned conferences…I mean I believe, at times, they change the order week-to-week. I also believe different RACs can be given different priorities. Every RAC member seems to share two things in common: they don’t talk about the inner workings of the RAC meetings, and they get very frustrated with the process. There is absolutely no reason the RAC meetings should be frustrating. There are five criteria and they are very easy to apply (the way the NCAA applies them currently). You can certainly be frustrated with the result, but the process is pretty straightforward (the way the NCAA does it currently). Even an esteemed journalist like myself can do it. (Dang, autocorrect again.)
I’ve gleaned enough over the years to understand that the frustration is rooted in the changing priorities handed down by the NCAA and the selection committee. A RAC comes up with its rankings and is told that it misapplied the criteria. The ranking is then corrected, and the RAC uses this new knowledge the next week only to be told it made a mistake that seems to contradict what was done the previous week. You can see how this would be frustrating.
The next logical question is: why is this done? If I were a RAC chair, I would certainly consider manipulating the rankings when things are close to put the teams with the best chance at NCAA selection first. Selectability isn’t one of the 5 criteria, but I think it’s a smart move by a RAC chair. The worst thing you can do for your region is have a highly regionally ranked team that is going to lose the comparison battle with the other regions and block every team sitting behind it. I struggle to think of any other reason why priorities would change and, frankly, even my reasoning is suspect. If I put my conspiracy hat on for a second, I can see a situation where the NCAA is trying to minimize flights to the regionals and leans on the criteria when it sees things might get out of hand. If that happens, it would be especially true for the West Region. Another good conspiracy would be for one of the regional chairs to figure out where their strengths lie inside the region and make that the priority. Then, when it comes to the selection process, their region is set up better than the other regions. All nice cloak-and-dagger sort of stuff, but I find it hard to believe any of it is true. With that said, the frustration exists.
One of my favorite activities over the last year-plus has been doing case studies on the decisions made by the RACs. I have all of the information (criteria) they have, so my results should be the same, but they seldom are. I have no doubt that the time pressure to update the RAC rankings, complete the entire selection process, and then allocate regional host sites is a factor in mistakes being made. Very little time exists between the conference finals and the selection show. I also firmly believe that some regional chairs are just better at the selection game than others. What I view as a mistake is another person’s ability to get an unworthy team from their region into the tournament.
Back to the priority of the criteria. I believe that win/loss percentage and SOS are priorities 1A and 1B, followed by head-to-head and then results versus ranked opponents. Results versus common opponents is the ugly stepsister of the criteria, which is shameful because…
Did you notice a few paragraphs up that I kept adding the phrase “the way the NCAA applies them currently”? Two of the five criteria (results versus ranked opponents and results versus common opponents) allow for some wiggle room in an otherwise cut-and-dried selection process. The problem is that everything I’ve ever heard (and the way my case studies play out) says this isn’t done. Results versus ranked opponents leaves the door open for wins and losses to be weighed differently depending on the opponent. Everyone can understand that a win against CMS (ranked #1 in the West) is not equal to a win against the 12th team in the New England Region, right? Unfortunately, it appears (year after year) that the ranked-win criterion becomes just another percentage, like win/loss. In the process, this criterion gets devalued when I believe it might be the most important of the criteria if used correctly. Results versus common opponents is another criterion where comparisons can be made and original thought can be applied. Why did Team A, which the criteria say is better than Team B, lose to Team C, a team that Team B beat? That is a red flag to me and would cause me to look at the criteria more closely. Instead, results versus common opponents typically becomes an outlier data point that is discarded by the RACs because it takes time to work through and this extra step isn’t supported by the NCAA. To me, it’s a way to open the discussion and bring art into the process when it’s most required.
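To make the common-opponent idea concrete, here is a minimal sketch of the kind of “red flag” check I have in mind. This is purely hypothetical: the NCAA and the RACs do not run anything like this, and the team names and data structure are invented for illustration.

```python
# Hypothetical sketch of a common-opponent "red flag" check.
# Nothing here reflects the NCAA's actual process; the team names,
# the results structure, and the ranking order are all invented.

def common_opponent_red_flags(ranking, results):
    """Flag cases where a higher-ranked team lost to a common
    opponent that a lower-ranked team beat.

    ranking: list of team names, best first
    results: dict mapping (team, opponent) -> 'W' or 'L'
    """
    flags = []
    for i, team_a in enumerate(ranking):
        for team_b in ranking[i + 1:]:
            # Opponents both teams have played
            a_opps = {opp for (t, opp) in results if t == team_a}
            b_opps = {opp for (t, opp) in results if t == team_b}
            for team_c in a_opps & b_opps:
                if results[(team_a, team_c)] == 'L' and results[(team_b, team_c)] == 'W':
                    flags.append((team_a, team_b, team_c))
    return flags

# Made-up example: A is ranked ahead of B, but A lost to C while
# B beat C -- exactly the scenario that deserves a closer look.
ranking = ['Team A', 'Team B']
results = {
    ('Team A', 'Team C'): 'L',
    ('Team B', 'Team C'): 'W',
}
print(common_opponent_red_flags(ranking, results))
# [('Team A', 'Team B', 'Team C')]
```

A flag doesn’t mean the ranking is wrong; it just marks the spot where the discussion (the “art”) should happen.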
For those readers who check out the comments (or follow Volleytalk), you’ll have seen that our friend from New England, “Ned”, has created a Quality Win Index he’s playing with this year. Basically, he assigns points (and takes away points) for wins and losses depending on where the opponent is ranked by Pablo. Pablo can be found on the Rich Kern site and attempts to rank all of the D3 volleyball teams. What Ned is doing is basically trying to get the concept of good wins and bad losses into the selection discussion. I 100% support this, as I have often felt that bad losses (defined by me in the past as losses to teams not regionally ranked) can indicate problems with a team that the criteria hide. (Bad losses probably should be losses to teams unranked or ranked 9th or lower so that the regions are treated fairly.) None of this would be needed, however, if the NCAA selection committee would use results versus ranked opponents and results versus common opponents correctly. The current criteria allow for these discussions, but until the NCAA pushes this direction down to the RACs, teams that don’t deserve selection will continue to be selected. I think if we could change the application of these two criteria, then the five criteria would be sufficient and the priority order would be pretty much equal. Having equal criteria would reduce the misapplication issue and hopefully eliminate the frustration felt inside the RACs.
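Ned hasn’t published an exact formula here, so the following is only a rough sketch of the idea behind a Quality Win Index, assuming a simple tiered scheme: bigger rewards for beating highly Pablo-ranked teams, bigger penalties for losing to low-ranked ones. The tier cut-offs and point values are placeholders, not Ned’s actual numbers.

```python
# Rough sketch of a Quality Win Index in the spirit of what Ned
# described: points for wins and deductions for losses, scaled by
# the opponent's Pablo rank. The tiers and point values below are
# invented placeholders, not Ned's actual formula.

def quality_win_index(results):
    """results: list of (outcome, opponent_pablo_rank) tuples,
    where outcome is 'W' or 'L'."""
    score = 0.0
    for outcome, rank in results:
        if outcome == 'W':
            if rank <= 10:        # beating a top-10 Pablo team
                score += 3.0
            elif rank <= 50:
                score += 1.5
            else:
                score += 0.5      # routine win, small credit
        else:  # loss
            if rank <= 10:
                score -= 0.5      # losing to an elite team hurts little
            elif rank <= 50:
                score -= 1.5
            else:
                score -= 3.0      # a "bad loss" to a low-ranked team
    return score

# Made-up season: two good wins, one routine win, one bad loss.
season = [('W', 4), ('W', 22), ('W', 120), ('L', 160)]
print(quality_win_index(season))  # 3.0 + 1.5 + 0.5 - 3.0 = 2.0
```

The point isn’t the specific numbers; it’s that a scheme like this makes good wins and bad losses visible, which is exactly what the ranked-opponent criterion could do if it were applied as more than a percentage.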
I love this time of year! Name some names, quote people out of context, break some eggs! I see a Pulitzer!
The notion that you would rank your region based on how its teams compare with other regions is a great concept that I will admit I have not considered in past years (one more variable for the list). That would, of course, require someone on high (I wonder who?) managing the shifting criteria priority the NCAA is apparently pushing (not exactly what you said? too late).
Remember our leader’s mantra: results, results, results!
To be clear…I know that multiple RACs in past years have been very frustrated with the process. The way the process is applied, it shouldn’t be. It will produce (and has produced) results that are strange, but the process is straightforward. I was just spitballing possible reasons, although none of them really make sense.
Yes, our leader, as always, is correct. But when the current application of the selection process doesn’t always reward results, it makes sense to change the application.
I understand. I have to kid a bit or it would make us crazy (if it hasn’t already). The NCAA has set a moving target and that is frustrating.
Any chance you’ll do an article on the #1s from each region before Wednesday’s release?
My little secret is that I do want to do more national stuff, but finding the time is hard with what I already spend on the West alone. With that said, you did just put the bug in my ear…
🙂
Over four hours later and I only have two of the rankings done. I’m having fun but I really dislike you at this moment.