2015 D3 Season: NATIONAL PERSPECTIVE

Started by D3soccerwatcher, February 08, 2015, 12:49:03 AM

Previous topic - Next topic

0 Members and 1 Guest are viewing this topic.

TennesseeJed

Quote from: Flying Weasel on October 21, 2015, 11:09:44 PM
(4) These rankings are numbers driven, that is clear.  So if you want to understand them (which is different from agreeing with them) you can't just make general, subjective comparisons (like OWU's and Kenyon's schedule look about the same) and then throw your hands up and say it doesn't make sense.  You need to quantify things, and when you do, a lot more things will make sense (again, it doesn't mean you have to like it or agree with it).  As RH illustrated, there was a significant numerical difference in Kenyon's and OWU's non-conference schedule even if subjectively we might find them similar.

Putting Kenyon and OWU aside entirely, the primary criteria listed by the NCAA are all quantitative; however, they do not provide any quantitative disclosure for the rankings other than SoS and Win%, and neither figure, by itself, gets you to the rankings.  Taken together, for the current week, they should, but there's no guidance as to how each data point was evaluated or rated or, if used in combination, how they were weighted.  In a comment you made in reply to me on another post, you indicated (not verbatim, my best attempt to restate; apologies if I misstate your sentiment here) that you thought the committees used the raw data (Win% and SoS) as inputs in their consideration and deliberations but ultimately made the decisions on rankings [subjectively] within the committees, guided by the data.  If they are going off of ranked numerical statistics, they haven't given any of us the formulas that get them to the results.  So, I think they're kind of numbers driven, but not entirely.  I'd agree to numbers-directed or numbers-influenced.  That's not the same as numbers driven.  I'm actually arguing for an even tighter and stronger "numbers-driven" approach, where there's little left for the committee to do once the numbers have been tallied.  As RH said in a related post, there's no way for most of the committee member coaches to see so many of the games that they're genuinely familiar with each team's performance and SoS, so a numbers-based approach is a better, less biased approach.  RH seems to suggest in a few posts that SoS (not sure if he's saying it does or should, or both) far outweighs a team's performance in ranking decisions.  I'd argue that that can't be, and isn't right, or Allegheny and Wittenberg would potentially sit atop the Great Lakes region.  Similarly, if performance far outweighed SoS, PSU-Behrend might be the leader.
Looking at both factors in concert, and as I said earlier, I would expect (and completely accept) that OWU should have a high rank in the GL region.  I think we all agree that the two metrics, taken together, are an appropriate basis, on virtually all dimensions of the argument, for assessing rank.  (Adding RvR next week will make it even better.)  The simplest numbers-driven approach would be to simply take the two numbers together to get a scaled, SoS-adjusted performance rank.  The rankings, as I've indicated elsewhere, do not follow this rank at all.  So, if it's a numbers-driven process, they must follow some other function or calculation that assigns value to each metric.  I'm simply arguing that, if that's the case, disclose the calcs, or function, or weights...  Or, alternatively, if the approach is numbers-guided and the committees use the two metrics as general guidance, but they collaborate to determine, subjectively, what the committee thinks the rankings ought to be, then just disclose that.  I just want somebody to tell me clearly and concisely what is behind the rankings with enough detail that, if I wanted to reconstruct all the calcs, I could get to the same result.

I'm not saying that you're wrong about it being numbers-driven.  It's just that none of us can prove you right, nor my hypothetical assertions about it being numbers-influenced, or even totally subjective, wrong.  The NCAA hasn't disclosed its approach, so we just really don't know...
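The "simplest numbers-driven approach" described above can be sketched in a few lines. To be clear, the equal weighting and the teams below are entirely hypothetical; the NCAA discloses no such formula, which is the whole complaint:

```python
# Hypothetical SoS-adjusted performance score. The NCAA publishes Win% and
# SOS but not how (or whether) it combines them, so the 50/50 weighting
# here is purely illustrative.
def combined_score(win_pct, sos, w_perf=0.5, w_sos=0.5):
    """Blend a team's winning percentage with its strength of schedule."""
    return w_perf * win_pct + w_sos * sos

# Made-up teams: a strong record against a weak slate vs. a weaker record
# against a strong slate.
teams = {
    "Team A": (0.900, 0.480),
    "Team B": (0.700, 0.620),
}

ranking = sorted(teams, key=lambda t: combined_score(*teams[t]), reverse=True)
```

Under these made-up weights, Team A (0.690) edges Team B (0.660); shift the weights toward SOS and the order flips, which is exactly why undisclosed weights make the published rankings impossible to reconstruct from the data sheets alone.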

Flying Weasel

Quote from: NCAC New England on October 22, 2015, 12:16:50 AM
You may have understood and accepted how the numbers worked years ago.  I'm confronted with it for the first time, so give me half a day.

I didn't realize this was the first time you were confronted with this.  I thought you had been following this for a few years.  I can perfectly understand newcomers to the process/criteria/rankings/etc. having trouble coming to grips with it.  I just honestly didn't think you were a newcomer to all this.  Apologies for not taking you to be a newbie.

And I understand how people could be surprised, because I know people built up expectations based on things not being considered by the committee and without considering things the committee does consider.  And I know that no matter how much those in the know try to prepare others, most just have to experience it for themselves.  I mean, the importance of SOS (and wins vs. ranked) has been repeated by myself and Mr. Right (and maybe some others) so many times, but maybe it takes seeing a Kenyon get "punished" for a sub-.500 SOS with a low ranking, or seeing Whitworth go unranked, for the reality of what we were saying to really sink in.

Anyway, have your half day.  Didn't mean to take it away from you, I just honestly thought you would have had it before this year.

Quote from: NCAC New England on October 22, 2015, 12:16:50 AM
And, yes, in fact, I still think the schedules are comparable.  And I don't know how a school intentionally plans for this without knowing what records its opponents will have in a following year.

But you can think they are comparable to your heart's content.  That's your choice.  However, if you want to come closer to an understanding of (not agreeing with) how the committee comes up with their rankings, you need to set aside what you think/feel for a second and quantify their schedules, ideally in a manner as consistent as possible with how the NCAA quantifies their schedules.  And let me be clear: I'm not defending the committee, I'm not saying I agree with the rankings (I often don't), and I've often stated my opinion on the ways things like the SOS calculation are flawed.  So, in that sense I sympathize with you and your frustrations, but I also have allowed myself to (a) recognize the criteria for what they are, (b) get a vague but helpful sense of how the committee applies/weights the criteria, and (c) have more realistic expectations for what the results of the process can/will be.  Nevertheless, I've never gotten into the game of predicting their rankings, because the jump from the criteria, and my sense of how they apply it, to the rankings themselves is still a bit too much and too hard to predict.  What is much easier to predict is the at-large selections, because the three weekly regional rankings provide pretty reliable foreshadowing.

Flying Weasel

#1322
Quote from: TennesseeJed on October 22, 2015, 12:33:36 AM
Quote from: Flying Weasel on October 21, 2015, 11:09:44 PM
(4) These rankings are numbers driven, that is clear.  So if you want to understand them (which is different from agreeing with them) you can't just make general, subjective comparisons (like OWU's and Kenyon's schedule look about the same) and then throw your hands up and say it doesn't make sense.  You need to quantify things, and when you do, a lot more things will make sense (again, it doesn't mean you have to like it or agree with it).  As RH illustrated, there was a significant numerical difference in Kenyon's and OWU's non-conference schedule even if subjectively we might find them similar.

Putting Kenyon and OWU aside entirely, the primary criteria listed by the NCAA are all quantitative; however, they do not provide any quantitative disclosure for the rankings other than SoS and Win%, and neither figure, by itself, gets you to the rankings.  Taken together, for the current week, they should, but there's no guidance as to how each data point was evaluated or rated or, if used in combination, how they were weighted.  In a comment you made in reply to me on another post, you indicated (not verbatim, my best attempt to restate; apologies if I misstate your sentiment here) that you thought the committees used the raw data (Win% and SoS) as inputs in their consideration and deliberations but ultimately made the decisions on rankings [subjectively] within the committees, guided by the data.  If they are going off of ranked numerical statistics, they haven't given any of us the formulas that get them to the results.  So, I think they're kind of numbers driven, but not entirely.  I'd agree to numbers-directed or numbers-influenced.  That's not the same as numbers driven.  I'm actually arguing for an even tighter and stronger "numbers-driven" approach, where there's little left for the committee to do once the numbers have been tallied.  As RH said in a related post, there's no way for most of the committee member coaches to see so many of the games that they're genuinely familiar with each team's performance and SoS, so a numbers-based approach is a better, less biased approach.  RH seems to suggest in a few posts that SoS (not sure if he's saying it does or should, or both) far outweighs a team's performance in ranking decisions.  I'd argue that that can't be, and isn't right, or Allegheny and Wittenberg would potentially sit atop the Great Lakes region.  Similarly, if performance far outweighed SoS, PSU-Behrend might be the leader.
Looking at both factors in concert, and as I said earlier, I would expect (and completely accept) that OWU should have a high rank in the GL region.  I think we all agree that the two metrics, taken together, are an appropriate basis, on virtually all dimensions of the argument, for assessing rank.  (Adding RvR next week will make it even better.)  The simplest numbers-driven approach would be to simply take the two numbers together to get a scaled, SoS-adjusted performance rank.  The rankings, as I've indicated elsewhere, do not follow this rank at all.  So, if it's a numbers-driven process, they must follow some other function or calculation that assigns value to each metric.  I'm simply arguing that, if that's the case, disclose the calcs, or function, or weights...  Or, alternatively, if the approach is numbers-guided and the committees use the two metrics as general guidance, but they collaborate to determine, subjectively, what the committee thinks the rankings ought to be, then just disclose that.  I just want somebody to tell me clearly and concisely what is behind the rankings with enough detail that, if I wanted to reconstruct all the calcs, I could get to the same result.

I'm not saying that you're wrong about it being numbers-driven.  It's just that none of us can prove you right, nor my hypothetical assertions about it being numbers-influenced, or even totally subjective, wrong.  The NCAA hasn't disclosed its approach, so we just really don't know...


A few comments in response . . .

When I started using the term "numbers-driven" I simply meant the decision-making process (for rankings or at-large selections) seems to be much more quantitative than qualitative or subjective. I have no idea the breakdown, e.g. 75% quantitative, 25% subjective.  It just seems to skew heavily towards quantitative.  That's what I meant/mean by numbers-driven.

I'm just guessing when I say "I think the committee . . ."  I have no inside info.  I am trying to infer and deduce what the process may be like.  It's just my feeling, nothing more, that they do NOT have some super formula that combines a quantification of each of the primary criteria, weighs and factors them, and spits out who should be ranked where, and that's that.  I just think they at some point must discuss the data the criteria generate in a conference call.  I can imagine that discussion being less thorough and more hurried in week 1 and being much more detailed and lengthy on Selection Sunday.  I think they have established, more informally than formally, which criteria are given more weight or are more determinative.

I have to laugh at the cry for more transparency.  I'm not laughing at you, and I'm not entirely disagreeing with you.  I'm just thinking that the current process provides much more transparency than what came before it, and some measure of predictability.  Remembering those days before regional rankings leading up to Selection Sunday existed, before there was a formula for quantifying strength of schedule, before data sheets were released, etc., it's funny that everything done to provide transparency and predictability is now considered by a new generation and a new set of eyes to lack the necessary transparency and predictability.  Imagine not having the three regional rankings, no SOS formula, no data sheets, and just checking in on the Monday before the tournament to find out who got selected for at-large berths.  That's how it was for many years.  Talk about surprises, confusion, bewilderment, etc.

Finally, I am not trying to prove myself right (right about what, exactly?  I'm not making any concrete claims) and everybody else wrong (I actually agree with most of what everyone else thinks is wrong about the process and its outcomes) as we banter back and forth.  I've been following all this for a very long time, since this approach began back in . . . 2003, right?  So what I say comes from observing this over all those years.  I share my deductions, my suspicions, my perspectives, etc. for whatever they are worth.  I usually try to focus on sharing and educating, not winning an argument or putting anybody in their place.  If I erred in the direction of the latter today, that's my mistake.

TennesseeJed

#1323
Quote from: Flying Weasel on October 22, 2015, 01:45:56 AM
Quote from: TennesseeJed on October 22, 2015, 12:33:36 AM
Quote from: Flying Weasel on October 21, 2015, 11:09:44 PM
(4) These rankings are numbers driven, that is clear.  So if you want to understand them (which is different from agreeing with them) you can't just make general, subjective comparisons (like OWU's and Kenyon's schedule look about the same) and then throw your hands up and say it doesn't make sense.  You need to quantify things, and when you do, a lot more things will make sense (again, it doesn't mean you have to like it or agree with it).  As RH illustrated, there was a significant numerical difference in Kenyon's and OWU's non-conference schedule even if subjectively we might find them similar.

Putting Kenyon and OWU aside entirely, the primary criteria listed by the NCAA are all quantitative; however, they do not provide any quantitative disclosure for the rankings other than SoS and Win%, and neither figure, by itself, gets you to the rankings.  Taken together, for the current week, they should, but there's no guidance as to how each data point was evaluated or rated or, if used in combination, how they were weighted.  In a comment you made in reply to me on another post, you indicated (not verbatim, my best attempt to restate; apologies if I misstate your sentiment here) that you thought the committees used the raw data (Win% and SoS) as inputs in their consideration and deliberations but ultimately made the decisions on rankings [subjectively] within the committees, guided by the data.  If they are going off of ranked numerical statistics, they haven't given any of us the formulas that get them to the results.  So, I think they're kind of numbers driven, but not entirely.  I'd agree to numbers-directed or numbers-influenced.  That's not the same as numbers driven.  I'm actually arguing for an even tighter and stronger "numbers-driven" approach, where there's little left for the committee to do once the numbers have been tallied.  As RH said in a related post, there's no way for most of the committee member coaches to see so many of the games that they're genuinely familiar with each team's performance and SoS, so a numbers-based approach is a better, less biased approach.  RH seems to suggest in a few posts that SoS (not sure if he's saying it does or should, or both) far outweighs a team's performance in ranking decisions.  I'd argue that that can't be, and isn't right, or Allegheny and Wittenberg would potentially sit atop the Great Lakes region.  Similarly, if performance far outweighed SoS, PSU-Behrend might be the leader.
Looking at both factors in concert, and as I said earlier, I would expect (and completely accept) that OWU should have a high rank in the GL region.  I think we all agree that the two metrics, taken together, are an appropriate basis, on virtually all dimensions of the argument, for assessing rank.  (Adding RvR next week will make it even better.)  The simplest numbers-driven approach would be to simply take the two numbers together to get a scaled, SoS-adjusted performance rank.  The rankings, as I've indicated elsewhere, do not follow this rank at all.  So, if it's a numbers-driven process, they must follow some other function or calculation that assigns value to each metric.  I'm simply arguing that, if that's the case, disclose the calcs, or function, or weights...  Or, alternatively, if the approach is numbers-guided and the committees use the two metrics as general guidance, but they collaborate to determine, subjectively, what the committee thinks the rankings ought to be, then just disclose that.  I just want somebody to tell me clearly and concisely what is behind the rankings with enough detail that, if I wanted to reconstruct all the calcs, I could get to the same result.

I'm not saying that you're wrong about it being numbers-driven.  It's just that none of us can prove you right, nor my hypothetical assertions about it being numbers-influenced, or even totally subjective, wrong.  The NCAA hasn't disclosed its approach, so we just really don't know...


A few comments in response . . .

When I started using the term "numbers-driven" I simply meant the decision-making process (for rankings or at-large selections) seems to be much more quantitative than qualitative or subjective. I have no idea the breakdown, e.g. 75% quantitative, 25% subjective.  It just seems to skew heavily towards quantitative.  That's what I meant/mean by numbers-driven.

I'm just guessing when I say "I think the committee . . ."  I have no inside info.  I am trying to infer and deduce what the process may be like.  It's just my feeling, nothing more, that they do NOT have some super formula that combines a quantification of each of the primary criteria, weighs and factors them, and spits out who should be ranked where, and that's that.  I just think they at some point must discuss the data the criteria generate in a conference call.  I can imagine that discussion being less thorough and more hurried in week 1 and being much more detailed and lengthy on Selection Sunday.  I think they have established, more informally than formally, which criteria are given more weight or are more determinative.

I have to laugh at the cry for more transparency.  I'm not laughing at you, and I'm not entirely disagreeing with you.  I'm just thinking that the current process provides much more transparency than what came before it, and some measure of predictability.  Remembering those days before regional rankings leading up to Selection Sunday existed, before there was a formula for quantifying strength of schedule, before data sheets were released, etc., it's funny that everything done to provide transparency and predictability is now considered by a new generation and a new set of eyes to lack the necessary transparency and predictability.  Imagine not having the three regional rankings, no SOS formula, no data sheets, and just checking in on the Monday before the tournament to find out who got selected for at-large berths.  That's how it was for many years.  Talk about surprises, confusion, bewilderment, etc.

Finally, I am not trying to prove myself right (right about what, exactly?  I'm not making any concrete claims) and everybody else wrong (I actually agree with most of what everyone else thinks is wrong about the process and its outcomes) as we banter back and forth.  I've been following all this for a very long time, since this approach began back in . . . 2003, right?  So what I say comes from observing this over all those years.  I share my deductions, my suspicions, my perspectives, etc. for whatever they are worth.  I usually try to focus on sharing and educating, not winning an argument or putting anybody in their place.  If I erred in the direction of the latter today, that's my mistake.

FW, I didn't mean to overplay the point about it being you-right-vs.-me-wrong, and I also did not mean to try to pin you down on your words about process, so sorry if it came across that way.  My frustration is certainly not with you.  On the contrary, I appreciate all your efforts to make this as transparent as you have.  You've been a student and participant far longer than me.  I'm really just reacting to some things I'm learning as a newer observer that strike me as odd, problematic, undesirable, etc., and that create some surprises in the output that don't square with my understanding of the criteria and how they are [or, probably more accurately, how I think they "should be"] applied.  My real beef is w/ the NCAA for not making this more transparent.  It's probably unrealistic for me to expect anything different from them, and I take your point that the process is more transparent than it used to be.  I also fully accept the point made by you or RH that the coaches should fully understand the process, particularly if they've served on regional committees before, and that there should be no surprises for them.  Whether I like the process and results or not, I think I've taken up enough air time on my points and I'm perfectly content to move on.  Thanks for taking the time to reply, and again, for all the work to try to demystify this process.  I apologize if my responses made this feel like a competition or argument.  Definitely not my intent.

Flying Weasel

No problems, no worries, TennesseeJed. I like a good, strong/opinionated exchange and don't get my feathers ruffled too easily or quickly. 

Mid-Atlantic Fan

Quote from: Flying Weasel on October 22, 2015, 08:51:20 AM
No problems, no worries, TennesseeJed. I like a good, strong/opinionated exchange and don't get my feathers ruffled too easily or quickly.

FW, what do you make of Messiah's chances after the release of the rankings?  Obviously their record vs. ranked will hurt them drastically, but they should remain ranked in the region.  If Wash. Lee gets ranked, that will give them 1 win; if not, they are looking at 0-3-0 right now.  Luckily for them, CMU is not ranked, or they would be 0-4-0, which is a possibility.  If Hopkins slips and Gettysburg climbs, that will at least give them 0-3-1.  Thoughts?

Mr.Right

Messiah will have no shot at a Pool C because of their record vs. ranked.

Mr.Right

The one coach that I know of (I am sure there are more) who has the committee standards figured out is Amherst's Justin Serpone.  He actually was the Nescac representative and New England representative a few years back.  This is what he does:

5 Nescac road games
5 Nescac home games
5 Non-Conference road games

So 5 home and 10 away.  He also picks weaker teams from weaker leagues that will have above-.500 records.  He knows he most likely will beat those teams and keep his SOS and OWP somewhat high, at least into the .580 range.  It is a very smart and commendable move by Serpone.  I cannot even remember the last time he played a non-conference game at home.

Shooter McGavin

Quote from: Mr.Right on October 22, 2015, 01:14:32 PM
The one coach that I know of (I am sure there are more) who has the committee standards figured out is Amherst's Justin Serpone.  He actually was the Nescac representative and New England representative a few years back.  This is what he does:

5 Nescac road games
5 Nescac home games
5 Non-Conference road games

So 5 home and 10 away.  He also picks weaker teams from weaker leagues that will have above-.500 records.  He knows he most likely will beat those teams and keep his SOS and OWP somewhat high, at least into the .580 range.  It is a very smart and commendable move by Serpone.  I cannot even remember the last time he played a non-conference game at home.

That's not good for the parents... too many road trips.  But yes, smart by a coach, because of the home vs. away multipliers, etc.
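For anyone new to the multiplier point, a road game weights the opponent more heavily than a home game in the SOS math. The sketch below is a simplified illustration only: it assumes away games count 1.25x and home games 0.75x (only the 1.25 road factor is mentioned in this thread; the home value is an assumption), and it averages only opponents' winning percentages, whereas the actual NCAA calculation also folds in opponents' opponents' results.

```python
# Simplified SOS sketch: a weighted average of opponents' winning
# percentages, where assumed site multipliers (away 1.25, home 0.75,
# neutral 1.0) reward road-heavy schedules.
def weighted_sos(opponents):
    """opponents: list of (opp_win_pct, site), with site in {'H', 'A', 'N'}."""
    mult = {"H": 0.75, "A": 1.25, "N": 1.0}
    total = sum(mult[site] * wp for wp, site in opponents)
    weight = sum(mult[site] for _, site in opponents)
    return total / weight

# Identical opponents, different venues: the road-heavy slate scores higher.
road_heavy = weighted_sos([(0.600, "A"), (0.550, "A"), (0.400, "H")])
home_heavy = weighted_sos([(0.600, "H"), (0.550, "H"), (0.400, "A")])
```

This is the arithmetic behind a Serpone-style 10-away/5-home schedule: the same modest opponents produce a noticeably better SOS when most of them are played on the road.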

Shooter McGavin

Texas Dallas down 1-0 at half...uh oh

PaulNewman

I was looking at the page with the 7 remaining unbeaten teams, and another "number" to throw into the mixer jumped out at me: GF and GA.

F&M -- 14-0 -- 1.000 -- GF 33 GA 4

Amherst 12-0 -- 1.000 -- GF 32 GA 2

Calvin 15-0-1 -- .969 -- GF 56 GA 4

Whitworth 12-0-1 -- .962 -- GF 37 GA 3

Whitworth has given up the 2nd-lowest number of goals in the country.  Calvin's 4 GA is also very impressive given the Knights have played 16 games.
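The winning percentages in that list follow the standard convention of counting a tie as half a win; a quick check against the figures above:

```python
def win_pct(wins, losses, ties):
    """Winning percentage with ties counted as half a win."""
    return (wins + 0.5 * ties) / (wins + losses + ties)

# Reproduces the figures quoted above (rounded to three places).
print(round(win_pct(14, 0, 0), 3))   # F&M 14-0: 1.0
print(round(win_pct(15, 0, 1), 3))   # Calvin 15-0-1: 0.969
print(round(win_pct(12, 0, 1), 3))   # Whitworth 12-0-1: 0.962
```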

lastguyoffthebench


I am still baffled by Messiah's SOS.  Considerably weak, but playing E-town and Lycoming on the road with the 1.25 factor bolsters their numbers.  They are the beneficiaries, much like Calvin and Kean, of playing so many road games.

I still believe that Messiah wins the AQ and Lyco will get an at-large bid.   



Mr.Right

Quote from: lastguyoffthebench on October 23, 2015, 01:47:19 PM

I am still baffled by Messiah's SOS.  Considerably weak, but playing E-town and Lycoming on the road with the 1.25 factor bolsters their numbers.  They are the beneficiaries, much like Calvin and Kean, of playing so many road games.

I still believe that Messiah wins the AQ and Lyco will get an at-large bid.


Yes, the Messiah SOS is completely baffling, especially when the past few years they have had a much stronger schedule and a much lower SOS...  I will say this again: Lycoming IMO is a bubble team, but on the right side of the bubble.

lastguyoffthebench

Quote from: lastguyoffthebench on August 25, 2015, 12:35:28 PM
Player with most goals in 2015?  Mike Ryan  (Golz, Colofranson, someone from CSS).     

Team with the most shutouts?  Kenyon

Will a team go unbeaten before the tourney this year?   No

Longest unbeaten streak?  Wheaton Ill (gonna be tough going into Loras and getting the W).

Over/under Messiah losses... 3.5?  (over)

Over/under Tufts losses... 4  (over)

FINAL FOUR PREDICTIONS:  Wheaton Ill, Trinity, Amherst, Rutgers-Camden    (Elite 8:  Loras, Ohio Wesleyan, SLU, Oneonta St).   Messiah bows out in Sweet 16.

I'll have to remind myself to check this come tournament time... haha.


Well, these predictions are somewhat balanced between completely accurate and totally off the wall...