2015 D3 Season: NATIONAL PERSPECTIVE

Started by D3soccerwatcher, February 08, 2015, 12:49:03 AM

Off Pitch

Quote from: NCAC New England on October 21, 2015, 08:54:23 PM
And your post reveals one of the biggest underlying flaws.  Midd's .520 OOC number, while comparatively lower, is still way too HIGH.  They played teams that play in the weakest conferences in the country, so a team may be 8-6, 6-8, or even 12-3 in that very weak conference and that counts the same as a 7-7 UAA or NESCAC team.

Actually, no.  That is the reason that it is not just OWP, but the OOWP is also factored in.  The UAA or NESCAC team will have a much higher OOWP than a team from the weakest conference.
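
If it helps to see the mechanics, here's a rough sketch in Python.  The 2/3 OWP, 1/3 OOWP weighting is my assumption about the usual split (I can't point to it in the manual), and every record below is invented, but it shows how the OOWP term narrows the gap between a weak-conference champ and a middling NESCAC team:

def win_pct(w, l, t):
    # Winning percentage with ties counted as half a win.
    return (w + 0.5 * t) / (w + l + t)

def sos_credit(owp, oowp):
    # Assumed blend: 2/3 opponents' winning pct., 1/3 opponents' opponents'
    # winning pct.  My assumption, not an official figure.
    return (2 * owp + oowp) / 3

# Hypothetical opponents: a 12-3 champ from a weak league whose own opponents
# play .400 ball, vs. a 7-7 NESCAC team whose opponents play .560.
weak_champ = sos_credit(win_pct(12, 3, 0), 0.400)
nescac_mid = sos_credit(win_pct(7, 7, 0), 0.560)

# Raw OWP gap is .800 vs. .500; blending in OOWP shrinks it to ~.667 vs. ~.520.
print(round(weak_champ, 3), round(nescac_mid, 3))  # 0.667 0.52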

PaulNewman

Quote from: Off Pitch on October 21, 2015, 09:52:46 PM
Quote from: NCAC New England on October 21, 2015, 08:54:23 PM
And your post reveals one of the biggest underlying flaws.  Midd's .520 OOC number, while comparatively lower, is still way too HIGH.  They played teams that play in the weakest conferences in the country, so a team may be 8-6, 6-8, or even 12-3 in that very weak conference and that counts the same as a 7-7 UAA or NESCAC team.

Actually, no.  That is the reason that it is not just OWP, but the OOWP is also factored in.  The UAA or NESCAC team will have a much higher OOWP than a team from the weakest conference.

Not sure I see why that would necessarily be the case.  If a top team from a weak conference boosts the OWP, and that team's opponents are from conferences where wins and losses are distributed pretty typically, why would that not result in a net advantage for the team playing a top team in a weak conference?

TennesseeJed

#1307
Quote from: Off Pitch on October 21, 2015, 08:50:16 PM
Quote from: NCAC New England on October 21, 2015, 06:52:40 PM
Midd had a higher (slightly) SOS than Wesleyan.  They play the exact same in-conference schedule.  The only difference you could note is that Midd has already played Amherst and Wes hasn't yet.  Otherwise, Midd's OOC games were Norwich, Green Mtn, Colby-Sawyer, Castleton, and Plymouth State.  Wes played ECSU, Haverford (away, right, so even more credit), John Jay, WNEC, and Salve Regina.  I'm sure the numbers were all plugged in correctly but which OOC schedule looks tougher, even significantly tougher?

Wesleyan out-of-conference opponents' record:   40-31-2   0.562
Wesleyan remaining schedule:                    22-3-1    0.865

Middlebury out-of-conference opponents' record: 38-35-3   0.520
Middlebury remaining schedule:                  14-8-4    0.615

Given that their respective SOSs are roughly equal, one can reasonably assume that Wesleyan's SOS will indeed surpass Middlebury's by the end of the regular season.  They have the same in-conference schedule, but they have not yet played the same in-conference schedule.

Not trying to nitpick the analysis above at all, but I don't think those calcs prove or support the point you're trying to make.  Looking at opponents' records in aggregate is not the way OWP's or OOWP's (and therefore SoS's) are calculated, and viewing them that way distorts much of what's going on with OWP's and OOWP's, and therefore SoS's, at least IMO.  If you're saying that aggregate records (unweighted for home vs. away, and calc'd as a single average) are a proxy for OWP and/or OOWP, I understand the spirit of what you're trying to show, but I respectfully disagree with the conclusion.

The OWP's and OOWP's used to figure SoS's are themselves averages of the individual OWP's for each team played and for each team's opponents' opponents.  In other words, there will be an OWP and an OOWP figure for each team played over the season, with an adjustment applied for home vs. away for each.  When the last 3 or so games are played, each contributes an additional OWP and OOWP figure (again, each adjusted for home or away) to the season as a whole.  (So, by now, a team with 13 games in its existing record will have 13 OWP's and 13 OOWP's on a spreadsheet, each adjusted for home/away.  Each additional game contributes one additional set of OWP's and OOWP's.)  For each weekly SoS calculation, all the weighted OWP-OOWP figures for each opponent are averaged to get the team's SoS.

The methodology above obfuscates the incremental nature of the calculation.  The actual SoS will be far less sensitive to the 14th, 15th, and 16th observations because each adds incrementally less to the total.  The aggregate approach assumes that the past and the future are roughly equal, and it ignores whether the team played 2 opponents with flawless records and a whole bunch of teams with weak records, or all teams with average records.  The individual game OWP's and OOWP's will be far more sensitive.

Additionally, and importantly, even if Middlebury and Wesleyan have the exact same remaining schedules in terms of the final 3 games, the SoS's will be impacted significantly (as much as 45-50% on a team-by-team basis) by whether each team plays each opponent at home or away.  As a purely hypothetical example:

Date     Opponent   Opp's OWP   Midd Location   Wes Location   Wtd Opp OWP for Midd    Wtd Opp OWP for Wes
10/24    ABC        .500        Home            Away           .500 * .85 = .425       .500 * 1.2 = .600
10/27    XYZ        .600        Away            Home           .600 * 1.2 = .720       .600 * .85 = .510
10/31    123        .400        Neutral         Neutral        .400 (no adjustment)    .400 (no adjustment)

Therefore, you can see that it's not just the strength of the opponent that matters, but also the location of the game against that opponent.  For Middlebury, playing a .400 team at a neutral site is virtually the same as playing a .500 team at home.  Wesleyan is much better off, SoS-wise, playing a .500 team away than playing a .600 team at home.  This says NOTHING about whether they win or lose the games--that's all captured in their W/L%, which the NCAA doesn't appear to care too much about relative to SoS.  What it says, and what I'm so surprised about, is that the OWP credit for the exact same team can vary from .425 to .600, or from .510 to .720, depending on whether you play them at home or away.  That's roughly equivalent to saying that the exact same team simultaneously has a 6-6-0 record for Wesleyan and a 9-3-0 record for Middlebury.  How is that possible?
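
Since it's easier to sanity-check in code than in a table, here's that same hypothetical as a few lines of Python (the .85/1.2/1.0 multipliers are the ones from my made-up example above, not necessarily the NCAA's exact values):

MULTIPLIER = {"home": 0.85, "away": 1.20, "neutral": 1.00}

def weighted_owp(opponent_owp, location):
    # Opponent's OWP adjusted for where YOU played them.
    return opponent_owp * MULTIPLIER[location]

games = [  # (opponent, opponent's OWP, Midd location, Wes location)
    ("ABC", 0.500, "home", "away"),
    ("XYZ", 0.600, "away", "home"),
    ("123", 0.400, "neutral", "neutral"),
]

for opp, owp, midd_loc, wes_loc in games:
    print(opp, round(weighted_owp(owp, midd_loc), 3), round(weighted_owp(owp, wes_loc), 3))
# ABC 0.425 0.6
# XYZ 0.72 0.51
# 123 0.4 0.4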

Assuming the numbers above were real numbers that were going to be used by each school for determining their incremental SoS's over the next 3 weeks of the season, prior to conference championships, each game's figure would be averaged in along with all the existing figures from past weeks.  So, if Midd and Wes had both played 13 games prior to 10/24, Midd would add .425 as the 14th OWP for determining SoS, and Wesleyan would add .600, for the game they each played against team ABC.  The following week, Midd would add .720 and Wesleyan .510 to all prior weeks (including the 10/24 games and OWP's) and then rerun the averages.  Since the denominator of the average goes up by 1 or 2 each week (equal to the number of games played that week), each incremental OWP contributes less to the overall average, making it harder and harder for any single game to move the overall SoS.
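
Here's that incremental averaging spelled out, using the numbers from my table and a made-up .550 figure for each team's first 13 weighted OWP's:

# Incremental averaging sketch.  The .550 prior OWPs are invented; the
# .425/.600/.720/.510 figures come from the hypothetical table above.
midd = [0.550] * 13 + [0.425]           # 10/24: ABC at home for Midd
wes = [0.550] * 13 + [0.600]            # 10/24: ABC away for Wesleyan

print(round(sum(midd) / len(midd), 3))  # 0.541
print(round(sum(wes) / len(wes), 3))    # 0.554

midd.append(0.720)                      # 10/27: XYZ away for Midd
wes.append(0.510)                       # 10/27: XYZ at home for Wesleyan

# The 15th observation moves each average less than the 14th did.
print(round(sum(midd) / len(midd), 3))  # 0.553
print(round(sum(wes) / len(wes), 3))    # 0.551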

What can, and does, change is that all previous opponents are continuing to play games each week too, so past OWP's and OOWP's are not static--they must be updated each week too, making your existing SoS good only for a one week period (actually only until one previous opponent plays one new game).  So, if previous opponents win more games, it will positively impact your OWP and SoS.  Similarly, if a team that had a strong OWP and SoS weakens as the season goes on, then it will negatively impact your OWP and SoS as well.
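
And to make the "nothing is static" point concrete, a sketch of the weekly rebuild.  Names, records, and multipliers are all illustrative; I'm also using each opponent's current overall record as its OWP (the real calc excludes games against you) and leaving the OOWP side out for brevity:

def win_pct(w, l, t):
    return (w + 0.5 * t) / (w + l + t)

def team_sos(schedule, records):
    # Average of location-weighted OWPs over all games played so far.
    mult = {"home": 0.85, "away": 1.20, "neutral": 1.00}
    weighted = [win_pct(*records[opp]) * mult[loc] for opp, loc in schedule]
    return sum(weighted) / len(weighted)

schedule = [("ABC", "home"), ("XYZ", "away")]

# Week 1: ABC sits at 6-6-0 and XYZ at 9-3-0.
print(round(team_sos(schedule, {"ABC": (6, 6, 0), "XYZ": (9, 3, 0)}), 3))

# Week 2: you were idle, but ABC won twice and XYZ lost twice -- your SoS
# moves anyway, because every opponent's OWP is recomputed from scratch.
print(round(team_sos(schedule, {"ABC": (8, 6, 0), "XYZ": (9, 5, 0)}), 3))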

dontshootthegoose

Whitewater with the regionally ranked win out at Dubuque tonight, and Oshkosh loses 1-0 at home to Luther.  Welcome to the NCAA Tournament, Whitewater!  (Pool B)

Flying Weasel

Quote from: TennesseeJed on October 21, 2015, 06:41:30 PM
. . . it's clear to me from the numbers that are out there today, the regional rank has little to do w/ the numbers in several different regions.

Which regions?  Which teams/rankings?

Ryan Harmanis

Quote from: NCAC New England on October 21, 2015, 08:25:33 PM
The objection isn't exactly to the criteria, although that is likely a complaint too.  It's to how the criterion is formulated in a way that doesn't meet even a gross eye test.  How could Midd's SOS be higher than Wesleyan's?  Or OWU's substantially higher than Kenyon's when you look at the 2 schedules side by side?  So how the SOS is reached seems as problematic as whether or not that criterion is too highly valued.

I think we're on the same page; I'm referring to SOS as constructed, not in the abstract--it has to be a part of the rankings in some form.  The question, aside from the weight it gets, is subjectivity versus objectivity.  How much leeway do we want to give the committee?  These guys don't get to watch everyone else play, and there's such a disparity from game to game in how teams play that you can't just leave it down to coaches having seen a team play once--that's not a good indicator.  Plus, once you start relying on subjectivity you open up a whole new can of worms with potential for favoritism, reputation/talent versus production, etc.  With very firm, data-based criteria, the guys on the committee matter less, even though it might come at the expense of, as you put it, common sense.  I agree that there's probably a better middle road than what we're using now, but in a sport where even the most informed people have seen only a handful of teams multiple times, numbers are probably the safer route and do the least harm.

As an anecdotal example, here's the problem with eyeballing things.  Without checking, I'd probably have agreed that there's not much difference between Kenyon's and OWU's schedules.  Then I looked at the non-conference.  Kenyon's non-conference: combined record 42-52-10, Win% 0.452.  OWU's non-conference: 86-59-7, Win% 0.589.  That's not close at all.  Granted, OWU lost to Ohio Northern and Thomas More, which accounts for most of the difference, but that also means that if you gave OWU Kenyon's schedule, they're undefeated.  Now, if you add in Case Western for Kenyon, the schedule jumps to 53-54-11, Win% 0.496, but still below .500.  And that would have been a tougher game for Kenyon than anything they've seen outside DePauw.
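
For anyone double-checking those figures, they work out if you count a tie as half a win:

# Win% with ties counted as half a win reproduces the figures above.
def win_pct(w, l, t):
    return (w + 0.5 * t) / (w + l + t)

print(round(win_pct(42, 52, 10), 3))  # Kenyon non-conference: 0.452
print(round(win_pct(86, 59, 7), 3))   # OWU non-conference:    0.589
print(round(win_pct(53, 54, 11), 3))  # Kenyon + Case:         0.496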

Regardless, Kenyon still very much controls their own destiny.  First, they can win the NCAC tournament again.  Second, even if they don't, I don't think Pool C is out of the question.  If we use 2014 and 2013 as baselines, there have been teams with SOS barely above .500 that get at-large bids - Texas-Dallas (.516) in 2014, and Gordon (.517) and Salisbury (.521) in 2013.  Future games with OWU (maybe x2), Wabash (good win%) and a conference semi with DePauw/Denison would, I assume, get the SOS into the .520 range that puts them in the conversation.

If I'm Whitworth, I couldn't care less--there's no conference tournament and they have a 4-point lead in the NWC, so the automatic bid is easily attainable.  Maybe it prevents them from hosting, but Trinity gets punished like that regularly, and it's just an unfortunate outcome based on geography.

PaulNewman

RH, a couple of problems with your analysis.  If you gave Kenyon OWU's schedule, Kenyon would not have to be undefeated to have a claim.  They could be ahead, even, or a spot behind, but not 6 spots.  Secondly, yes, I think at a minimum Kenyon would have split with TMC and ONU.  Also, at least 12 of the losses you're counting in Kenyon's opponents' record belong to a single team (Waynesburg).  You saying Kenyon might still be in the running for a Pool C sort of shows how laughable this is.  This morning, I can't imagine that you could have imagined writing that.  I think you personally have Kenyon in your own top 5 nationally.  And what if the teams we're talking about were reversed in these circumstances?

Flying Weasel

A few points.

(1) TennesseeJed already made this point quite strongly a few posts up, but it bears repeating.  Simply having the same remaining opponents does not mean the same impact on SOS, because of the home and away multipliers.  Having, for example, Amherst away and Colby at home is significantly different from having Amherst at home and Colby on the road.

(2) Again, TJ or someone mentioned this in passing, but it bears repeating because of how often winning or losing comes up when discussing SOS.  SOS has nothing to do with whether you won or lost (or will win or lose) against your opponents.  A win at home versus a tough opponent doesn't contribute any more or any less to a team's SOS than a loss at home to that same opponent.

(3) Your winning pct. does matter.  No one is getting ranked on SOS alone.  Babson has a very good SOS at .609, much, much better than five of New England's ranked teams, but their winning pct. is a poor .533.  They didn't get ranked.  I'm the first to preach that SOS and wins (not so much record) vs. ranked opponents seem to be the most predictive criteria, and I'm also on record as thinking SOS is weighted too heavily, but winning pct. isn't ignored.

(4) These rankings are numbers driven, that is clear.  So if you want to understand them (which is different from agreeing with them), you can't just make general, subjective comparisons (like saying OWU's and Kenyon's schedules look about the same) and then throw your hands up and say it doesn't make sense.  You need to quantify things, and when you do, a lot more will make sense (again, that doesn't mean you have to like it or agree with it).  As RH illustrated, there was a significant numerical difference between Kenyon's and OWU's non-conference schedules even if subjectively we might find them similar.

(5) The committees didn't spring anything on us.  These criteria and their application have been in place for a long time.  I just don't get much of the surprise, shock, confusion, outrage.  (Again, that's not me saying I agree with the rankings, the process, the criteria, etc.)

Off Pitch

Quote from: NCAC New England on October 21, 2015, 10:06:48 PM
Not sure I see why that would necessarily be the case.  If a top team from a weak conference boosts the OWP, and that team's opponents are from conferences where wins and losses are distributed pretty typically, why would that not result in a net advantage for the team playing a top team in a weak conference?

The teams in the weak conference and the strong one must each play some non-conference games.  The weak conference's teams will not fare as well as the strong conference's teams against non-conference opponents.  If the conference is truly weak, its teams will collectively have a poor record.  Comparatively, the stronger conference will feast on weaker out-of-conference competition.  Therefore, the weak conference will produce a weaker OOWP.
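
A toy version of that mechanism, with every record invented and conference games assumed to net out to .500 within each league:

def win_pct(w, l, t):
    return (w + 0.5 * t) / (w + l + t)

# Non-conference records (W, L, T) for each league's members -- all invented.
weak_conf = [(1, 4, 0), (2, 3, 0), (0, 5, 0), (2, 2, 1)]
strong_conf = [(4, 1, 0), (3, 1, 1), (5, 0, 0), (3, 2, 0)]

def collective(records):
    # Conference games cancel out within a league, so the non-conference
    # results are what separate the two leagues' collective winning pct.
    w, l, t = (sum(r[i] for r in records) for i in range(3))
    return win_pct(w, l, t)

print(round(collective(weak_conf), 3))    # 0.275 -- drags OOWP down
print(round(collective(strong_conf), 3))  # 0.775 -- props OOWP up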

PaulNewman

FW, regarding #4, I'm not trying to just throw up my hands.  When a difference regarding SOS is noted by some formulation, the computed difference doesn't make sense, and a subjective but reasonable comparative look at the schedules raises a question about how the results were reached, I don't think it's unreasonable to question exactly how the results are derived.  I DO get that that is how they are going to be derived, but trying to see if there are in fact any flaws is a natural reaction.  I also don't think the committees have no input; at a minimum they could probably quickly determine "oh, that's odd, why is their SOS so low?", then see that 10+ opponent losses came from 1 team and 16 total from two teams or whatever, and take that into account.  The "outrage" is what I think any serious/passionate fan would experience when a team everyone on this board expected to fall no lower than #2 or #3 regionally--one that has been in the conversation as the #1 team in the country--lands where it did.  It's easy to be dismissive of the results when they fall very favorably for your own team, and I'll be the first to admit that I probably wouldn't have thought about it much at all if it had happened to someone else's team, although in fairness to myself, while not being outraged I probably would have noted being blown away by something really unexpected.

TennesseeJed

Quote from: Flying Weasel on October 21, 2015, 10:27:24 PM
Quote from: TennesseeJed on October 21, 2015, 06:41:30 PM
. . . it's clear to me from the numbers that are out there today, the regional rank has little to do w/ the numbers in several different regions.

Which regions?  Which teams/rankings?

The East and Great Lakes were the two I was specifically referring to, but I think I could find others based on comments on various threads.  Not trying to get too in the weeds about any one team or ranking, nor to focus the conversation on "my" team (though I clearly have one...as most of us do).

My general problem is that the framework you so carefully explained does not enable anyone to understand the process or outcomes well enough to replicate the actual regional rankings or predict future rankings with any degree of confidence.  When you try to compare approaches across regions, there doesn't appear to be any explanation or consistency as to why one team may have been left out entirely, or why its rank is low or high relative to other teams in its region, nor anything that helps you understand rankings for teams in other regions.  There is no indication in what the NCAA put out in the Pre-C Manual, nor in what you wrote (unless I missed it), that SoS is more heavily weighted than any of the other primary factors.  I don't have a problem if it is; I'd just like the NCAA to be transparent and consistent in how they're applying the criteria across teams and regions, and in my own analysis I just don't see it.

I see that there is generally favor for teams with higher SoS's, which I understand.  I don't see much of a link between Win% and rank, which surprises me.  I also don't see any correlation between the rankings and the ranked combination of Win% and SoS, which would have been my hope and expectation.  It's the only figure, using the data available at the time of this ranking (RvR's will be added later) and provided by the NCAA (and detailed in the Pre-C Manual), that puts all teams on an equal playing field.  Win% alone says nothing of schedule difficulty.  SoS says nothing about how a team itself performed--it only compares how each team's opponents did.  The combination of the two clearly states how each team performed relative to the competition it faced (see the sketch below).  The rankings are not consistent with that metric as far as I can tell, at least not in a number of cases where the ranks do not reflect the teams' strength-of-schedule-based performances (again, assuming my calcs are correct...I accept that this may all be a case of operator error on my part...   ::)).
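
To be concrete about what I mean by that combination, here's the kind of back-of-the-envelope check I ran.  A plain unweighted average of Win% and SoS is my own choice, not anything prescribed in the Pre-C Manual, and the numbers are invented:

# Rank teams by a simple average of Win% and SoS -- my own combination,
# not the NCAA's.  All figures invented for illustration.
teams = {
    "Team A": (0.850, 0.480),  # (Win%, SoS)
    "Team B": (0.700, 0.610),
    "Team C": (0.640, 0.590),
}

ranked = sorted(teams.items(), key=lambda kv: (kv[1][0] + kv[1][1]) / 2, reverse=True)
for name, (wp, s) in ranked:
    print(name, round((wp + s) / 2, 3))
# Team A 0.665
# Team B 0.655
# Team C 0.615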

I think the SoS calcs are problematic for a number of reasons which I've written about elsewhere, most significantly the size and bias of the home and away multipliers, which I think are far too large and end up biasing all SoS's in favor of teams with a higher percentage of away games (assuming I understand the math correctly).  But at least they're consistently applied--I wouldn't agree that they're fair, given the bias, but at least the bias is transparent.

If you want to get more into specific teams and regions I'm happy to dig in further with you but I'd rather do it offline.  Feel free to email or PM me if you'd like to discuss.

Ryan Harmanis

NCAC, I totally get it.  I'm not at all arguing it from a personal perspective, just from what the committee has to go on.  The criteria are pretty clear, and right now Kenyon's SOS is just so, so, so low, I don't know what choice the committee has. The second they say "okay, we know Kenyon is better than seventh, let's bump teams down," they're going against the criteria for an NCAA bid.  And how does the committee then explain things to teams so that they can set up their schedule for future seasons?  The exercise above was just to illustrate the difficulty with taking a "let's do what we think, rather than what the book and the numbers say" approach to things.

This conversation does seem to pop up regularly, so you would think it would lead coaches to schedule to avoid it as much as possible.  As for role reversal, I know that OWU intentionally schedules to avoid this problem.  That's the obvious solution, aside from the most obvious, which is to win the AQ and make this whole exercise moot.  I agree that, at a minimum, the Kenyon-Case game should have been rescheduled.  Anyway, I'm with you that the regional rankings, as currently constructed, don't seem lined up with the general opinion on these teams.  We'll see if that remains the case once the regular season wraps and record-versus-ranked comes into play.

As for the outrage, maybe the added press, the website, and another set of national rankings increase the contrast between the regional rankings and perception?  As FW noted, this should surprise no one, especially coaches who have served on the NCAA committee.  This is just the setup we have right now, and it's been this way for years.  After seeing Kenyon's and Whitworth's SOS, I'm more surprised that Kenyon is ranked at all than that they're ranked 7th.  Again, not based on where I would rank them, as I've had both teams in the Top 10 just about every Friday and would put both at/near the top of their regions.  But if I was on the committee, rather than just ranking the teams as I see fit, I don't know what I'd do differently.

Flying Weasel

Quote from: NCAC New England on October 21, 2015, 10:56:36 PM
You saying Kenyon might still be in the running for a Pool C sort of shows how laughable this is.  This morning, I can't imagine that you could have imagined writing that.  I think you personally have Kenyon in your own top 5 nationally.
Whether RH could have imagined that or not this morning is irrelevant if he never looked at the numbers.  If you noticed, I avoided making any predictions as to rankings while so many on here were trying to do so.  And I'm not saying nobody should have tried to make predictions.  It can be fun and a challenging exercise--go for it!  And I know attempts were made to estimate SOS's.  But I knew that I didn't have the time or energy to dig into the numbers enough (read, compile tons of data and compute like crazy) to feel I had any solid basis for making predictions.  If anyone couldn't imagine that this numbers-driven ranking process could produce rankings that were not compatible with their subjective sense of how teams should be ranked, well, I don't know what to say.

Flying Weasel

Quote from: NCAC New England on October 21, 2015, 11:25:13 PM
FW, regarding #4, I'm not trying to just throw up my hands.  When a difference regarding SOS is noted by some formulation, the computed difference doesn't make sense, and a subjective but reasonable comparative look at the schedules raises a question about how the results were reached, I don't think it's unreasonable to question exactly how the results are derived.  I DO get that that is how they are going to be derived, but trying to see if there are in fact any flaws is a natural reaction.  I also don't think the committees have no input; at a minimum they could probably quickly determine "oh, that's odd, why is their SOS so low?", then see that 10+ opponent losses came from 1 team and 16 total from two teams or whatever, and take that into account.  The "outrage" is what I think any serious/passionate fan would experience when a team everyone on this board expected to fall no lower than #2 or #3 regionally--one that has been in the conversation as the #1 team in the country--lands where it did.  It's easy to be dismissive of the results when they fall very favorably for your own team, and I'll be the first to admit that I probably wouldn't have thought about it much at all if it had happened to someone else's team, although in fairness to myself, while not being outraged I probably would have noted being blown away by something really unexpected.
It seemed to me you were content to subjectively and generally say that OWU's and Kenyon's schedules were comparable and therefore conclude that the difference in their positions in the rankings didn't make sense.  That just seemed an odd and unproductive approach (sans hard, cold numbers) when it's been oft stated how numbers-based these rankings are.  My point is that we didn't need to have the rankings come out to know the flaws in the SOS calculations, to know the flaws in an over-reliance on numbers, etc.  Where was the consternation last week?  We already knew everything we needed to know to be upset/frustrated/fill-in-the-blank.  The criteria didn't change on us at the last minute.  It's been this way for years.  My opinion about the criteria, the SOS calculation, the seemingly hyper-quantitative process, etc. hasn't changed one bit having now seen this week's rankings.

PaulNewman

Quote from: Flying Weasel on October 21, 2015, 11:39:37 PM
Quote from: NCAC New England on October 21, 2015, 10:56:36 PM
You saying Kenyon might still be in the running for a Pool C sort of shows how laughable this is.  This morning, I can't imagine that you could have imagined writing that.  I think you personally have Kenyon in your own top 5 nationally.
Whether RH could have imagined that or not this morning is irrelevant if he never looked at the numbers.  If you noticed, I avoided making any predictions as to rankings while so many on here were trying to do so.  And I'm not saying nobody should have tried to make predictions.  It can be fun and a challenging exercise--go for it!  And I know attempts were made to estimate SOS's.  But I knew that I didn't have the time or energy to dig into the numbers enough (read, compile tons of data and compute like crazy) to feel I had any solid basis for making predictions.  If anyone couldn't imagine that this numbers-driven ranking process could produce rankings that were not compatible with their subjective sense of how teams should be ranked, well, I don't know what to say.

And if you can't imagine why anyone would be surprised when the results came out, I don't know what to say.

And, yes, in fact, I still think the schedules are comparable.  And I don't know how a school intentionally plans for this without knowing what records its opponents will have in a following year.  And I think TMC's schedule has been more difficult than both, and I'd be just as surprised/outraged/fill-in-the-blank if I were a TMC fan.  TMC should probably be #1 and in no scenario today lower than #2.  Otherwise, now that it's happened, I'm trying to understand how the numbers are derived.  You may have understood and accepted how the numbers worked years ago.  I'm confronted with it for the first time, so give me half a day.