UAA soccer 2021

Started by D3_Slack, September 11, 2021, 10:34:05 AM

blooter442

Quote from: Buck O. on October 18, 2021, 09:03:53 AM
OK, let me try this again.  I am NOT suggesting that the NCAA should use the Massey rankings.  I'm not.  Really, I'm not.  The fact that I referred to them to illustrate my point does not mean that I'm saying the NCAA should actually use the rankings straight from Massey's website.

What I AM saying is that a selection procedure that penalizes a loss to a good team more than a loss to a mediocre or poor team has it backwards, and the NCAA's current procedures can do just that.

I think that a better system would recognize that Mt. St. Vincent is not a very good team, and would therefore penalize NYU more for that loss than for a loss to a stronger team.  Such a system would therefore reach the same conclusion as the Massey ratings.  But, again, that does NOT mean that it would use the Massey rankings, or any other external ranking system.

I tend to agree. Under the current system I get why RvR is important — if a team goes 0-6 against "good teams" then perhaps they aren't competitive against the top sides — but from my understanding, a loss to a good team hits both the RvR results and the winning pct., whereas a loss to a less competitive team hits just the winning pct. How to rectify that, I'm not sure, but I do think it's a bit of a loophole.

jknezek

#121
Quote from: Buck O. on October 18, 2021, 09:03:53 AM
Quote from: deiscanton on October 17, 2021, 11:29:45 AM
Quote from: Buck O. on October 17, 2021, 08:35:28 AM
Quote from: deiscanton on October 15, 2021, 04:45:58 AM
Quote from: Buck O. on October 14, 2021, 11:01:56 PM
Quote from: PaulNewman on October 12, 2021, 10:16:41 PM
That is a stunning result, though I'll admit I know less than nothing about Mt St Vincent.  And the box score references the NYU GK on both goals.
Doesn't actually hurt NYU much except with silly poll voters.  And maybe their confidence takes a mild hit, but it will hardly impact RvR or at-large chances at all.

That seems to be an indictment of the current procedure for selecting teams.  One game doesn't mean everything, but losing to the team that is currently ranked #263 by Massey should be a significant hit.

NCAA Division III does not allow selection committees to use outside polls or outside computer algorithms as part of the primary or secondary selection criteria. 

That is not what I was suggesting.  I referred to Mt. St. Vincent's standing in the Massey rankings to illustrate the point that Mt. St. Vincent is not a very good team (despite their glossy record, which has been compiled against weak competition other than the NYU game).  My point remains that losses to worse teams should hurt more than losses to better teams, and to the extent that the current tournament selection procedures lead to the opposite outcome, it demonstrates the need to revise those procedures.

Buck O, you are still referring to an outside ranking (Massey) to compare the teams as to who is better or worse, which, under NCAA DIII selection rules set by the DIII Championships Committee, is not allowed for selection purposes, for various reasons.

OK, let me try this again.  I am NOT suggesting that the NCAA should use the Massey rankings.  I'm not.  Really, I'm not.  The fact that I referred to them to illustrate my point does not mean that I'm saying the NCAA should actually use the rankings straight from Massey's website.

What I AM saying is that a selection procedure that penalizes a loss to a good team more than a loss to a mediocre or poor team has it backwards, and the NCAA's current procedures can do just that.

I think that a better system would recognize that Mt. St. Vincent is not a very good team, and would therefore penalize NYU more for that loss than for a loss to a stronger team.  Such a system would therefore reach the same conclusion as the Massey ratings.  But, again, that does NOT mean that it would use the Massey rankings, or any other external ranking system.

To be fair, the NCAA criteria do, sort of, address this. A Regionally Ranked team would be considered a "good team," and those RESULTS are very important to the NCAA for at-large bids. A loss to a Regionally Ranked team can sometimes be more important than a win over a non-ranked opponent. What you may object to is that everyone who is not Regionally Ranked is treated the same. So there is no real difference, outside the hit to your SOS, between an 0-16 team and an unranked 13-3 type team. I can see what you are saying, and I don't tend to disagree in an ideal world, though soccer is a funny game.

There are lots of times a dominant team can lose 1-0 because the ball just wouldn't go in the net despite a 20-2 advantage in shots or a 10-1 advantage in SOG. I'm just not a big fan of massively penalizing a team for a single bad result. It's more or less a 15- or 20-game season. Most teams get a fluky result in that span. Maybe it's a tie, maybe it's a grind-it-out 1-0 win against an overmatched opponent, maybe it's a loss that just looks really bad from the score line, but when you look at the stats you see it was just bad luck or a hot goalie.

Plus there is limited time for the committees to do their work. Do we really want them deciding what are good and bad losses? Or do we want an RPI or NET type automated system? We already have SOS, and most of us can point to issues with it that can skew results. And that's the problem. You have to understand that what the NCAA is doing is trying to get the most milk with the minimum moo, and that always involves compromise. You can't have committees spending days on this; they don't have time. You don't really want outside black-box systems deciding important things. You want a consistent, transparent, applicable set of relatively easy-to-understand rules that can be applied year after year regardless of how the committee members rotate.

While I don't think the NCAA system is perfect, I think it works well enough. When you pair it with the AQs, you generally get a well-thought-out, well-earned field. Are there questions at the margins? Yes, and there always will be no matter what system you use.


jknezek

#122
Quote from: blooter442 on October 18, 2021, 09:38:26 AM
Quote from: Buck O. on October 18, 2021, 09:03:53 AM
OK, let me try this again.  I am NOT suggesting that the NCAA should use the Massey rankings.  I'm not.  Really, I'm not.  The fact that I referred to them to illustrate my point does not mean that I'm saying the NCAA should actually use the rankings straight from Massey's website.

What I AM saying is that a selection procedure that penalizes a loss to a good team more than a loss to a mediocre or poor team has it backwards, and the NCAA's current procedures can do just that.

I think that a better system would recognize that Mt. St. Vincent is not a very good team, and would therefore penalize NYU more for that loss than for a loss to a stronger team.  Such a system would therefore reach the same conclusion as the Massey ratings.  But, again, that does NOT mean that it would use the Massey rankings, or any other external ranking system.

I tend to agree. Under the current system I get why RvR is important — if a team goes 0-6 against "good teams" then perhaps they aren't competitive against the top sides — but from my understanding, a loss to a good team hits both the RvR results and the winning pct., whereas a loss to a less competitive team hits just the winning pct. How to rectify that, I'm not sure, but I do think it's a bit of a loophole.

Yes, but you have to look at the other side. Losses to RROs generally HELP SOS, whereas a win against a less competitive team can hurt your SOS. So it's not as black and white as it appears. And as you know, the committee is not looking only at RECORD against RROs; it is looking at RESULTS. That's an important distinction. Being 13-3 with an 0-2 mark against RROs is generally considered a better resume than 13-3 with no games against RROs, all other things being the same.

Flying Weasel

Quote from: blooter442 on October 18, 2021, 09:38:26 AM
Quote from: Buck O. on October 18, 2021, 09:03:53 AM
OK, let me try this again.  I am NOT suggesting that the NCAA should use the Massey rankings.  I'm not.  Really, I'm not.  The fact that I referred to them to illustrate my point does not mean that I'm saying the NCAA should actually use the rankings straight from Massey's website.

What I AM saying is that a selection procedure that penalizes a loss to a good team more than a loss to a mediocre or poor team has it backwards, and the NCAA's current procedures can do just that.

I think that a better system would recognize that Mt. St. Vincent is not a very good team, and would therefore penalize NYU more for that loss than for a loss to a stronger team.  Such a system would therefore reach the same conclusion as the Massey ratings.  But, again, that does NOT mean that it would use the Massey rankings, or any other external ranking system.

I tend to agree. Under the current system I get why RvR is important — if a team goes 0-6 against "good teams" then perhaps they aren't competitive against the top sides — but from my understanding, a loss to a good team hits both the RvR results and the winning pct., whereas a loss to a less competitive team hits just the winning pct. How to rectify that, I'm not sure, but I do think it's a bit of a loophole.

Responding to the highlighted part above: often a loss to a less competitive team will hit both the winning pct. and the Strength of Schedule. The exception is a below-average team with a gaudy record compiled largely against a cupcake schedule.
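To put rough numbers on that exception, here's a minimal sketch (hypothetical records, and simplified relative to the actual computation): because SOS is built from opponents' winning percentages, a weak 12-1 team fattened on cupcakes feeds your SOS more than a stronger 9-6 team that played a murderer's row.

```python
# Hypothetical records for illustration -- not actual 2021 results. Since
# D3 SOS is driven by opponents' winning pct (OWP), an opponent's record
# is what feeds your SOS, not how good they actually are. (Simplified: the
# real OWP calculation excludes games against the team being rated, among
# other details.)

def win_pct(wins, losses, ties=0):
    """Winning percentage, counting a tie as half a win."""
    return (wins + 0.5 * ties) / (wins + losses + ties)

cupcake_fed = win_pct(12, 1)    # ~0.923 fed into your OWP by a weak 12-1 team
battle_tested = win_pct(9, 6)   # 0.600 fed into your OWP by a stronger 9-6 team
print(f"{cupcake_fed:.3f} vs {battle_tested:.3f}")
```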

PaulNewman

I think, big picture, most of us agree.  I don't expect the committees to do even more or the criteria to get even more nuanced (except of course when my team gets squeezed out, and then something in the moment will feel arcane again).

But in the example that was real in this discussion, Mt. St. Vincent DOES have a good record, and so in that regard the NYU loss helps in one way while not really counting much against them.  To me, it almost plays out like it was an exhibition game.  I also don't necessarily agree (and I'm probably contradicting something I posted on this earlier) that a shock loss to a very inferior opponent should count MORE than a loss to a national-contender peer, but in this case I agree with Buck O. that the weighting of how much it counts feels a little off.

Flying Weasel

Quote from: jknezek on October 18, 2021, 09:41:21 AM
Quote from: Buck O. on October 18, 2021, 09:03:53 AM
OK, let me try this again.  I am NOT suggesting that the NCAA should use the Massey rankings.  I'm not.  Really, I'm not.  The fact that I referred to them to illustrate my point does not mean that I'm saying the NCAA should actually use the rankings straight from Massey's website.

What I AM saying is that a selection procedure that penalizes a loss to a good team more than a loss to a mediocre or poor team has it backwards, and the NCAA's current procedures can do just that.

I think that a better system would recognize that Mt. St. Vincent is not a very good team, and would therefore penalize NYU more for that loss than for a loss to a stronger team.  Such a system would therefore reach the same conclusion as the Massey ratings.  But, again, that does NOT mean that it would use the Massey rankings, or any other external ranking system.

To be fair, the NCAA criteria do, sort of, address this. A Regionally Ranked team would be considered a "good team," and those RESULTS are very important to the NCAA for at-large bids. A loss to a Regionally Ranked team can sometimes be more important than a win over a non-ranked opponent. What you may object to is that everyone who is not Regionally Ranked is treated the same. So there is no real difference, outside the hit to your SOS, between an 0-16 team and an unranked 13-3 type team. I can see what you are saying, and I don't tend to disagree in an ideal world, though soccer is a funny game.

There are lots of times a dominant team can lose 1-0 because the ball just wouldn't go in the net despite a 20-2 advantage in shots or a 10-1 advantage in SOG. I'm just not a big fan of massively penalizing a team for a single bad result. It's more or less a 15- or 20-game season. Most teams get a fluky result in that span. Maybe it's a tie, maybe it's a grind-it-out 1-0 win against an overmatched opponent, maybe it's a loss that just looks really bad from the score line, but when you look at the stats you see it was just bad luck or a hot goalie.

Plus there is limited time for the committees to do their work. Do we really want them deciding what are good and bad losses? Or do we want an RPI or NET type automated system? We already have SOS, and most of us can point to issues with it that can skew results. And that's the problem. You have to understand that what the NCAA is doing is trying to get the most milk with the minimum moo, and that always involves compromise. You can't have committees spending days on this; they don't have time. You don't really want outside black-box systems deciding important things. You want a consistent, transparent, applicable set of relatively easy-to-understand rules that can be applied year after year regardless of how the committee members rotate.

While I don't think the NCAA system is perfect, I think it works well enough. When you pair it with the AQs, you generally get a well-thought-out, well-earned field. Are there questions at the margins? Yes, and there always will be no matter what system you use.

A lot of astute observations in the 2nd and 3rd paragraphs of jknezek's post, and I completely concur with them.  I hesitate to add anything because it would be redundant.  But . . . I totally agree that a loss can look extremely bad if you only look at the opponent and the result, when in fact it was a case of total domination with bad finishing luck that day.  And even if it isn't one of those cases, and it was a more general off-day, I still don't think too much weight should be put on one single loss.  Now, if a team has a few such losses during the season, that's different.

I also think it's absolutely worth noting the tough job the committee has in making at-large selections on a very quick turnaround, with the members themselves having head coaching or AD responsibilities (that don't let up) and not having all the time in the world to crunch numbers and do deep dives.  That's not to say I agree with every one of the selection committee's subjective decisions, but I think the process overall is very serviceable given the parameters.

I'd personally make changes to the SOS computations, but no matter what you do, there will always be outliers and exceptions that skirt the intended impact/outcome of the process (e.g., a below-average opponent with a gaudy record actually helping a team's SOS).  Usually, in the context of the full season and body of work, these won't make a difference in who gets selected, but, when choosing between "bubble" teams, they could.  That's just part of the risk of being on the "bubble".

deiscanton

#126
Thanks for the discussion this morning about whether it would be worthwhile for the DIII Championships Committee to consider tweaking the selection criteria in future years to better weigh wins and losses in DIII sport seasons.

This morning, I was reading the minutes of the DIII Championships Committee for the September 13-14, 2021 meeting.  At that meeting, the Championships Committee, upon recommendation of the DIII Men's Basketball Committee, voted to institute a pilot program for each sport committee this year on how to release the first week's regional rankings.

As you are aware, this week only 4 of the 5 primary criteria can be used in ranking teams: (1) DIII winning pct., (2) DIII strength of schedule (2/3 OWP, 1/3 OOWP), (3) DIII head-to-head, and (4) common DIII opponents.  Results vs ranked DIII teams cannot be used this week, as no regional ranking has been released yet.
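For anyone who wants to see that weighting spelled out, here's a minimal sketch in code (the handbook computation has further adjustments, such as excluding games against the team being rated, that are omitted here):

```python
# Minimal sketch of the stated weighting: SOS = (2/3)*OWP + (1/3)*OOWP,
# where OWP is opponents' winning pct and OOWP is opponents' opponents'
# winning pct. Illustrative only; not the full handbook computation.

def d3_sos(owp: float, oowp: float) -> float:
    """Strength of schedule from OWP (weight 2/3) and OOWP (weight 1/3)."""
    return (2.0 / 3.0) * owp + (1.0 / 3.0) * oowp

print(round(d3_sos(owp=0.550, oowp=0.510), 4))  # 0.5367
```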

Therefore, rather than listing the regionally ranked teams in numerical order this week (which would most likely be inaccurate, as one of the primary criteria cannot be used), the Championships Committee has agreed to let each sport committee publish its list of regionally ranked teams for the first week of regional rankings in alphabetical order.  The list will then go back to the numerical order we are all accustomed to in the Week 2 regional rankings, once results vs ranked teams can be used.  This year is a pilot for this method of releasing the regional rankings, and the Championships Committee will review the pilot at the end of the academic year to decide whether or not it will continue in future years.



jknezek

Quote from: deiscanton on October 18, 2021, 12:44:19 PM
Thanks for the discussion this morning about whether it would be worthwhile for the DIII Championships Committee to consider tweaking the selection criteria in future years to better weigh wins and losses in DIII sport seasons.

This morning, I was reading the minutes of the DIII Championships Committee for the September 13-14, 2021 meeting.  At that meeting, the Championships Committee, upon recommendation of the DIII Men's Basketball Committee, voted to institute a pilot program for each sport committee this year on how to release the first week's regional rankings.

As you are aware, this week only 4 of the 5 primary criteria can be used in ranking teams: (1) DIII winning pct., (2) DIII strength of schedule (2/3 OWP, 1/3 OOWP), (3) DIII head-to-head, and (4) common DIII opponents.  Results vs ranked DIII teams cannot be used this week, as no regional ranking has been released yet.

Therefore, rather than listing the regionally ranked teams in numerical order this week (which would most likely be inaccurate, as one of the primary criteria cannot be used), the Championships Committee has agreed to let each sport committee publish its list of regionally ranked teams for the first week of regional rankings in alphabetical order.  The list will then go back to the numerical order we are all accustomed to in the Week 2 regional rankings, once results vs ranked teams can be used.  This year is a pilot for this method of releasing the regional rankings, and the Championships Committee will review the pilot at the end of the academic year to decide whether or not it will continue in future years.

It's an interesting tweak. It makes sense from a programming point of view, since including results vs ranked teams when you are setting the very first Regional Rankings is kind of an endless loop.
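To make that loop concrete, here's a toy sketch (the scoring function is entirely hypothetical; the committee deliberates rather than computes): results vs ranked teams can only be credited once a prior ranking exists, which is exactly what week 1 lacks.

```python
# Toy illustration of the week 1 "endless loop" (hypothetical scoring, not
# the committee's actual process): RvR needs a previously released ranking,
# so the first ranking must be built from the other criteria alone.

def weekly_ranking(teams, prev_ranking=None):
    """Rank teams; credit results vs previously ranked opponents if available."""
    ranked = set(prev_ranking or [])

    def score(team):
        base = team["win_pct"] + team["sos"]  # stand-ins for the usable criteria
        # Hypothetical RvR credit: +1 per win over a previously ranked team.
        rvr = sum(1 for opp, won in team["results"] if won and opp in ranked)
        return base + rvr

    return [t["name"] for t in sorted(teams, key=score, reverse=True)]

teams = [
    {"name": "A", "win_pct": 0.80, "sos": 0.50, "results": [("B", True)]},
    {"name": "B", "win_pct": 0.75, "sos": 0.60, "results": [("A", False)]},
]
week1 = weekly_ranking(teams)         # ['B', 'A'] -- no RvR available yet
week2 = weekly_ranking(teams, week1)  # ['A', 'B'] -- A's win over ranked B now counts
print(week1, week2)
```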

Gregory Sager

Quote from: Maine Soccer Fan on October 17, 2021, 06:35:45 PM
From a ref's point of view there is another way to handle it. All chatter from the bench redounds to the head coach: "I heard something, head coach. I'm not absolutely certain who said it, so the red is yours, head coach."

That's basically the same thing, except that adopting the baseball rule would offer the benefit of having the actual perpetrator -- the assistant coach -- get tossed alongside the head coach.
"To see what is in front of one's nose is a constant struggle." -- George Orwell

College Soccer Observer

The problem with that suggestion is that it will make referees less likely to eject for bad behavior.  Right now, assistant coaches are disposable.  Like it or not, the perception exists that coaches have influence over assignments, so many refs will be reluctant to toss head coaches for that reason.

Buck O.

Quote from: Flying Weasel on October 18, 2021, 10:25:09 AM
Quote from: jknezek on October 18, 2021, 09:41:21 AM
Quote from: Buck O. on October 18, 2021, 09:03:53 AM
OK, let me try this again.  I am NOT suggesting that the NCAA should use the Massey rankings.  I'm not.  Really, I'm not.  The fact that I referred to them to illustrate my point does not mean that I'm saying the NCAA should actually use the rankings straight from Massey's website.

What I AM saying is that a selection procedure that penalizes a loss to a good team more than a loss to a mediocre or poor team has it backwards, and the NCAA's current procedures can do just that.

I think that a better system would recognize that Mt. St. Vincent is not a very good team, and would therefore penalize NYU more for that loss than for a loss to a stronger team.  Such a system would therefore reach the same conclusion as the Massey ratings.  But, again, that does NOT mean that it would use the Massey rankings, or any other external ranking system.

To be fair, the NCAA criteria do, sort of, address this. A Regionally Ranked team would be considered a "good team," and those RESULTS are very important to the NCAA for at-large bids. A loss to a Regionally Ranked team can sometimes be more important than a win over a non-ranked opponent. What you may object to is that everyone who is not Regionally Ranked is treated the same. So there is no real difference, outside the hit to your SOS, between an 0-16 team and an unranked 13-3 type team. I can see what you are saying, and I don't tend to disagree in an ideal world, though soccer is a funny game.

There are lots of times a dominant team can lose 1-0 because the ball just wouldn't go in the net despite a 20-2 advantage in shots or a 10-1 advantage in SOG. I'm just not a big fan of massively penalizing a team for a single bad result. It's more or less a 15- or 20-game season. Most teams get a fluky result in that span. Maybe it's a tie, maybe it's a grind-it-out 1-0 win against an overmatched opponent, maybe it's a loss that just looks really bad from the score line, but when you look at the stats you see it was just bad luck or a hot goalie.

Plus there is limited time for the committees to do their work. Do we really want them deciding what are good and bad losses? Or do we want an RPI or NET type automated system? We already have SOS, and most of us can point to issues with it that can skew results. And that's the problem. You have to understand that what the NCAA is doing is trying to get the most milk with the minimum moo, and that always involves compromise. You can't have committees spending days on this; they don't have time. You don't really want outside black-box systems deciding important things. You want a consistent, transparent, applicable set of relatively easy-to-understand rules that can be applied year after year regardless of how the committee members rotate.

While I don't think the NCAA system is perfect, I think it works well enough. When you pair it with the AQs, you generally get a well-thought-out, well-earned field. Are there questions at the margins? Yes, and there always will be no matter what system you use.

A lot of astute observations in the 2nd and 3rd paragraphs of jknezek's post, and I completely concur with them.  I hesitate to add anything because it would be redundant.  But . . . I totally agree that a loss can look extremely bad if you only look at the opponent and the result, when in fact it was a case of total domination with bad finishing luck that day.  And even if it isn't one of those cases, and it was a more general off-day, I still don't think too much weight should be put on one single loss.  Now, if a team has a few such losses during the season, that's different.

I also think it's absolutely worth noting the tough job the committee has in making at-large selections on a very quick turnaround, with the members themselves having head coaching or AD responsibilities (that don't let up) and not having all the time in the world to crunch numbers and do deep dives.  That's not to say I agree with every one of the selection committee's subjective decisions, but I think the process overall is very serviceable given the parameters.

I'd personally make changes to the SOS computations, but no matter what you do, there will always be outliers and exceptions that skirt the intended impact/outcome of the process (e.g., a below-average opponent with a gaudy record actually helping a team's SOS).  Usually, in the context of the full season and body of work, these won't make a difference in who gets selected, but, when choosing between "bubble" teams, they could.  That's just part of the risk of being on the "bubble".

I took the liberty of bolding one sentence in your response, because I agree with that.  A loss, even if it is a bad loss, is just one game, and sometimes funny things happen in soccer.  So I wouldn't rule NYU out of the NCAA tournament based on one off day.  (They have the inside line on the UAA's AQ, even with the loss to Chicago, but I would select them for a Pool C bid based upon their current body of work, even with the loss to MSV, if they didn't have the AQ.)

But I still think that a loss to a poor or mediocre team should hurt more than a loss to a pretty good team (i.e., a team good enough to be included in the RvR), and that doesn't necessarily happen under the current system.  While the NET system certainly has its faults, as does almost any ranking system (I've posted here in the past about various odd things I've seen in the Massey rankings), I think systems like this should be adaptable to D3 soccer, and the use of an automated tool like that would help.  It wouldn't fully determine which teams are invited--it doesn't do that for the D1 basketball tournament either--but it tries to answer the question directly--which teams are better than which other teams--rather than attacking the problem indirectly and inadvertently creating loopholes, as the current procedures do.

Gregory Sager

Quote from: College Soccer Observer on October 18, 2021, 07:12:06 PM
The problem with that suggestion is that it will make referees less likely to eject for bad behavior.  Right now, assistant coaches are disposable.  Like it or not, the perception exists that coaches have influence over assignments, so many refs will be reluctant to toss head coaches for that reason.

The idea that coaches have influence over assignments may be true of some leagues, but it isn't true of others. If the league employs an outside service to supply its referees, or has a referee assigner who is not connected in any way to the league's coaches, then the refs have the ability to red-card coaches without having to worry about their job security. If there are leagues in which the coaches do have that kind of pull, then perhaps the NCAA's baseball ejection rule should just be employed for soccer on a league-by-league basis in circuits where the refs don't have to look over their shoulders.
"To see what is in front of one's nose is a constant struggle." -- George Orwell

deiscanton

The Tuesday polls for the week from D3Soccer.com and the United Soccer Coaches are up (nice to have those to see which teams people think are the best in the country in DIII this week), but tomorrow we get the first rankings from the NCAA DIII Men's Soccer Committee.  Tomorrow's rankings are done using 4 of the 5 primary criteria that are used to select teams for the 21 at-large bids to the DIII Men's Soccer Tournament; 43 of the 64 teams in the field will qualify automatically by winning their conference AQ.

As far as where UAA teams stand this week in the D3Soccer.com and United Soccer Coaches national polls (the ones for fun):

D3Soccer.com Top 25

Emory-- #9-- 652 points (up 1 spot from last week)

Chicago-- #11-- 477 points (up 2 spots from last week)

NYU-- #14-- 465 points (down 9 spots from last week)

Wash U-- #16-- 402 points (down 4 spots from last week)

Rochester-- #25-- 154 points (Enters top 25)

United Soccer Coaches national poll

Wash U-- #8 (up 1 spot)

NYU-- #12 (down 5 spots)

Emory-- #14 (up 1 spot)

Chicago-- #16  (remains the same)

Also of interest: the United Soccer Coaches Regional poll for Region IV this week now has Montclair State at #1 in the region and NYU at #2; however, the United Soccer Coaches poll does not use the same criteria that the NCAA DIII Men's Soccer Committee is required to use to rank teams.

deiscanton

Tonight we have our first non-conference game of the mid-season UAA break as Emory travels to Piedmont.  Piedmont's season will most likely end after Saturday's game vs Covenant, as the Lions are currently in 5th place in the USA South West Division--1 point behind LaGrange--with 1 league game remaining.  Piedmont has already lost to 3 of the 4 teams above them in the division table (Maryville (TN), Huntingdon, and LaGrange), so the Lions will have to defeat Covenant, currently in 2nd place in the West Division, on Saturday and then get help elsewhere to qualify for the USA South Conference Tournament (the top 4 teams in each division of the USA South advance to the first-round games on Saturday, October 30, 2021).

Hopefully, Emory will be able to take care of business tonight and come out with a win....


deiscanton

#134
The DIII Men's Soccer Committee has released the Week 1 regional rankings, and as I wrote in an earlier post, the Championships Committee decided last month to have each sport committee release this year's Week 1 regional rankings with the teams in alphabetical order rather than the numerical rank order we are accustomed to seeing.

The teams will be listed in numerical rank order starting with the Week 2 Regional Rankings.

The following UAA men's soccer teams made the DIII Men's Soccer Committee's Week 1 list of regionally ranked teams (in alphabetical order):

1.)  Carnegie Mellon-- Region VII

2.)  Case Western Reserve University-- Region VII

3.)  Chicago-- Region VIII

4.)  Emory-- Region VI

5.)  NYU-- Region IV

6.)  Rochester-- Region III

7.)  Wash U-- Region VIII

Congratulations to the 7 of the 8 UAA Men's Soccer teams that made the Regional Rankings this week.

The Brandeis men's soccer team was the only UAA soccer team (men or women) not to be ranked in an evaluation region this week.  (All 8 UAA women's soccer teams made the regional rankings as released today by the DIII Women's Soccer Committee.)