RPI...and how it is used
  • RPI...and how it is used

    First, my caveat. I am no RPI guru like several of you guys.

    I assume that RPI must have some validity as a measure of relative team strength when ranking teams that do not play each other head to head (nor common opponents in many instances).

    But how well does RPI correlate with actual performance in a 64-team field? My guess is that it is a fairly valid indicator of playoff performance, with the usual outlier or two.

    And, once you decide that RPI has some merit in staging a playoff, how much does RPI play into those decisions?

    From what I read, RPI is a "moving target". The committee relied heavily on RPI one year, relied more on "other factors" the next year. It appears that RPI is a tool...and one that can back up a committee decision when they need that or a tool that can be just one of many when they feel RPI doesn't give them the teams that they want.

    This link provides some insight into this year's proceedings, which seemed a little less reliant than last year's on using RPI as a "hammer".



    The Big Ten's commissioner, Jim Delany, just made a play for an extension of the CWS to ten teams with an automatic slot for two "cold weather" teams.

    There is a lot of thought around the web about some teams/conferences being advantaged and some being disadvantaged by the current system.

    As non-BCS guys, what do you think about the current system?

  • #2
    There has been a brief discussion already here:





    I don't think anyone advocates Delany's idea of adding teams based on geography. Moving the starting date back, sure. Adjusting the RPI formula, sure.



    • #3
      Thanks for the link...the average daily high (usually afternoons) is 20 degrees warmer in Tallahassee in February than it is in Wichita, and 25 degrees warmer than in Columbus, Ohio.



      • #4
        Now this is interesting...the average daily high and low in Charlottesville (U. of Va.) in February are almost exactly those of Wichita. I am guessing that the latitudes are fairly close.



        • #5
          In order to understand geography's impact and Delany's argument, I compared Ohio State's and Florida's early schedules.

          Ohio State, in February, is colder than Florida, no doubt...and Ohio State didn't play in Columbus in February, nor March, really.

          OSU was scheduled to play 7 games in February, all in Florida.

          OSU was scheduled to play 15 games in March...3 of these games were in Florida, 3 in North Carolina, and 8 in California. It was March 29 before the first game was played in Ohio.

          Florida's first game was on Feb 18th, and so was OSU's....

          OSU was scheduled to play 13 games prior to March 15th and went 6-5 with 2 rain outs, all of them played in Florida or NC.

          Florida had 16 games prior to March 15th and went 14-2.

          By the end of the regular season OSU had played 50 games, with 4 games cancelled due to rain; Florida had played 56.

          It seems that OSU took their team on the road almost continuously in February and March...playing neutral fields mainly in Florida and as a visitor in California.

          While OSU got plenty of warm weather playing during February and March, it wasn't at home, and I am sure that practice was curtailed because of travel.

          Actually, after looking at OSU's February/March schedule...I applaud these guys for keeping up their coursework while being on the road to play 21 out-of-state games in less than six weeks. About every Friday and weekend.



          • #6
            San Francisco, Wichita and Washington D.C. are all close to the same latitude.

            I think there must be something a bit screwy with the RPI.

            For example, this year WSU's resume seemed to be markedly better than K-State's, yet K-State's RPI was in the 40s and WSU's was in the 70s or so.

            The MVC was the 7th best conference in college baseball. That would seem to me to mean that the 2nd place team (WSU) would be an automatic for the NCAA tourney.

            From: http://blogs.kansas.com/lutz/2011/05...s-not-compute/

            "Kansas State, for instance, is just 7-14 against the rest of the NCAA Tournament field, 6-12 against fellow Big 12 teams and 1-2 in non-conference games. That’s right, K-State played only three non-conference games this season against teams that made the NCAA Tournament, one each against Creighton, California and Coastal Carolina.

            Wichita State was 10-5 against teams in the NCAA Tournament and 8-3 against non-conference teams in the field: 2-0 vs. Alcorn State; 2-0 vs. Arizona; 1-0 vs. Dallas Baptist; 1-1 vs. Oral Roberts; 1-1 vs. Kansas State; and 1-1 vs. Oklahoma State.

            The Wildcats were 2-11 in games against Oklahoma State, Baylor, Texas A&M and Texas."



            • #7
              Yeah...that doesn't make much sense re the RPI differential between KSU and WSU...



              • #8
                In February, Florida was the "Grapefruit League" for some northern schools....

                OSU played Western Michigan (4 games), Yale, Army, and Illinois State in Florida in February.



                • #9
                  Originally posted by ABC
                  I think there must be something a bit screwy with the RPI.

                  For example, this year WSU's resume seemed to be markedly better than K-State, yet their RPI was in the 40s and WSU's was in the 70s or so.
                  WSU had the potential to have a decent RPI, but they screwed the proverbial pooch against Top 100-200 competition, going 10-10. That was their RPI killer.

                  Meanwhile KSU took care of business and beat the teams they should beat.

                  The MVC was the 7th best conference in college baseball. That would seem to mean to me that the 2nd place team (WSU) would be an automatic to the NCAA tourney.
                  The MVC being 7th means that the conference can't be blamed for holding WSU back (whereas in 2010 the MVC's strength probably hurt WSU's chances).



                  • #10
                    Re: RPI...and how it is used

                    Originally posted by billybud
                    First, my caveat. I am no RPI guru like several of you guys.

                    I assume that RPI must have some validity as a measure of relative team strength when ranking teams that do not play each other head to head (nor common opponents in many instances).
                    RPI has no validity or statistical basis (e.g. it assigns no predictive probabilities). It is an arbitrary metric that uses opponents' won-lost records rather than their actual strength. People make the mistake of trying to use RPI as a measure of team strength, but it is driven by schedule.

                    If you use ISR, Sagarin, Massey, KenPom, etc., these ratings all have the ability to assign a probability of one team beating another by comparing their ratings. They all attempt to model behavior.

                    RPI can be manipulated. The key: play the best teams you know you can beat (and don't play really bad teams). You can actually lose games (against good teams) and your RPI will increase. 75% of the RPI is not based on anything your team does, but on how your opponents fared.

                    RPI has a bias toward power conferences (because its formulation makes opponents' schedules the most important component).
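                    To make the weighting concrete, here is a minimal sketch of the basic published formula (25% own winning percentage, 50% opponents', 25% opponents' opponents'), without the NCAA's undisclosed bonus/penalty adjustments. The four-team schedule is invented for illustration.

```python
# Sketch of the basic RPI formula: 25% own winning pct (WP),
# 50% opponents' WP (OWP), 25% opponents' opponents' WP (OOWP) --
# so 75% of the number comes from how your opponents fared.
# The four-team schedule below is made up for illustration.
from collections import defaultdict

games = [("A", "B"), ("A", "C"), ("B", "C"),   # (winner, loser)
         ("A", "D"), ("D", "B"), ("C", "D")]

opponents = defaultdict(list)
for w, l in games:
    opponents[w].append(l)
    opponents[l].append(w)

def wp(team, exclude=None):
    """Winning pct, optionally excluding games vs. `exclude`
    (head-to-head games are removed when computing an opponent's WP)."""
    w = sum(1 for a, b in games if a == team and b != exclude)
    l = sum(1 for a, b in games if b == team and a != exclude)
    return w / (w + l) if w + l else 0.0

def rpi(team):
    opps = opponents[team]
    owp = sum(wp(o, exclude=team) for o in opps) / len(opps)
    oowp = sum(sum(wp(oo) for oo in opponents[o]) / len(opponents[o])
               for o in opps) / len(opps)
    return 0.25 * wp(team) + 0.50 * owp + 0.25 * oowp
```

                    Because 75% of the number sits in the OWP and OOWP terms, a loss to a strong opponent can still raise your RPI, which is exactly the manipulation described above.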




                    From what I read, RPI is a "moving target". The committee relied heavily on RPI one year, relied more on "other factors" the next year. It appears that RPI is a tool...and one that can back up a committee decision when they need that or a tool that can be just one of many when they feel RPI doesn't give them the teams that they want.
                    My perception is that the selection committee uses RPI when it supports their decisions, and throws it out when it doesn't.



                    • #11
                      OK...if I follow, Sagarin is valid to predict the winner when two teams meet each other (relative strength), but RPI isn't?

                      No one really knows Sagarin's proprietary algorithms, but he does use Bayesian rules in his math and uses a team's opponents' and opponents' opponents' win-loss records. I believe that, in football, the only "measure of strength" used is the win-loss column. One of Sagarin's methods (the Predictor) also uses margin of victory, which Sagarin claims makes it more predictive than his BCS algorithm.

                      If I were going to look at whether RPI was indicative of relative postseason success (a possible measure of RPI's validity as a relative strength measure)...I would look at RPI vs. postseason wins and determine whether there was a correlation between RPI and the "sweet sixteen" or super regionals, and between RPI and the CWS participants.

                      It might be difficult to isolate a single number that describes the degree of relationship between RPI and postseason results.

                      There may be too many other factors, depth of pitching in a tournament situation vs regular series, difference in quality of matched opponents, etc.

                      But, if RPI has any validity as a relative "strength index", one should see weaker RPIs falling off in the playoffs in favor of stronger RPIs.

                      One would then have to determine at what level of confidence the data occurs...is it repeated over time? In successive years, does RPI behave the same way?
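                      The check described above can be sketched in a few lines: correlate RPI rank with postseason wins and look at the sign. The (rank, wins) pairs below are invented purely to show the mechanics; a real study would use actual tournament results over several seasons.

```python
# Sketch of the validation idea: if RPI measures strength, a better
# (lower) RPI rank should go with more postseason wins, i.e. a
# negative correlation. The sample data is hypothetical.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# hypothetical 64-team-field sample: (RPI rank, postseason wins)
field = [(1, 5), (4, 4), (9, 3), (15, 2), (22, 2),
         (31, 1), (40, 1), (48, 0), (55, 0), (63, 0)]
r = pearson([rank for rank, _ in field], [wins for _, wins in field])
# r comes out negative here; repeating this over successive seasons
# would show whether the relationship holds, and how strongly
```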



                      • #12
                        Baseball RPI is figured differently than it is in basketball. In basketball, road wins count more than home wins. If this were the case in baseball, the warm-weather schools who only go on the road for conference games would be penalized - OMG! The BCS powers-that-be simply cannot stand for that to happen. JMO.
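                        The basketball-style weighting described above can be sketched like this. The 1.4/0.6 weights are the ones commonly cited for the NCAA basketball RPI (road wins and home losses weighted up, home wins and road losses weighted down), and the two 2-2 sample schedules are hypothetical.

```python
# Basketball-style weighted winning pct: a road win counts as 1.4
# wins and a home win as only 0.6; a home loss counts as 1.4 losses
# and a road loss as only 0.6. Baseball's published formula, as the
# post notes, does not use this adjustment.

def weighted_wp(results):
    """results: list of (won, at_home) game tuples."""
    w = sum(0.6 if home else 1.4 for won, home in results if won)
    l = sum(1.4 if home else 0.6 for won, home in results if not won)
    return w / (w + l) if w + l else 0.0

# two hypothetical 2-2 teams: all games on the road vs. all at home
road_trip  = [(True, False), (True, False), (False, False), (False, False)]
home_stand = [(True, True), (True, True), (False, True), (False, True)]
# road_trip rates .700 and home_stand .300 -- the hit a stay-at-home
# warm-weather schedule would take if baseball adopted this rule
```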



                        • #13
                          Originally posted by Shox32
                          Baseball RPI is figured differently than it is in basketball. In basketball, road wins count more than home wins. If this were the case in baseball, the warm-weather schools who only go on the road for conference games would be penalized - OMG! The BCS powers-that-be simply cannot stand for that to happen. JMO.
                          Baseball does have some "secret" rewards and penalties.

                          Boyd Nation, with help from someone in the Texas athletic department, broke the formula a long time ago. I can't remember the Texas guy's name, but he would give Boyd copies of the RPI as the NCAA handed it out periodically, and Boyd would attempt to match it. I also think they found that the NCAA tweaked these rewards/penalties slightly every year. That is why you will see differences between Boyd's RPI and Warren Nolan's - Boyd's is probably more accurate.



                          • #14
                            Originally posted by billybud
                            If I was going to look at whether RPI was indicative of relative post season success (a possible measure of RPI validity as a relative strength measure)...I would look at RPI vs post season wins and determine if there was a correlation between RPI and the "sweet sixteen" or super regionals and RPI and the CWS participants.
                            I looked a little at the regional results. RPI predicted the winner about 2/3 of the time (and of course didn't predict any upsets). Is that good? Or could you have predicted the winners of each game better?

                            If you used Boyd Nation's ISR, you would have had a better idea of at least which regionals were open to the #1 seed losing. Though in the TCU regional, DBU pulled off some long odds.

                            Reg Super Final Champ Team
                            34.0 5.2 0.7 0.2 UCLA
                            41.4 12.5 3.3 1.2 Fresno State
                            21.3 5.7 1.0 0.3 UC Irvine
                            3.3 0.1 0.0 0.0 San Francisco

                            52.9 22.2 2.1 0.6 Texas Christian
                            40.8 17.0 1.7 0.5 Oklahoma
                            4.8 0.8 0.0 0.0 Dallas Baptist
                            1.6 0.2 0.0 0.0 Oral Roberts


                            58.0 23.4 2.4 0.8 Georgia Tech
                            33.8 10.4 0.9 0.3 Southern Mississippi
                            8.1 1.8 0.1 0.0 Mississippi State
                            0.2 0.0 0.0 0.0 Austin Peay State
                            Of course, there was the Rice regional, where the ISR didn't predict much chance of Rice being upset.

                            72.2 50.6 11.3 4.5 Rice
                            10.4 2.9 0.1 0.0 Baylor
                            17.4 6.3 0.5 0.1 California
                            0.0 0.0 0.0 0.0 Alcorn State
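                            Tables like the one above are typically built by converting a rating gap into a single-game win probability and multiplying those probabilities along every bracket path. Boyd Nation's actual ISR-to-probability method isn't given in this thread, so the logistic curve, its scale constant, and the ratings below are illustrative assumptions.

```python
import math

# One common way to turn two team ratings into a single-game win
# probability: a logistic curve over the rating difference. Bracket
# odds then come from chaining these per-game probabilities along
# every path. Ratings and the scale constant are made up.

def win_prob(rating_a, rating_b, scale=10.0):
    """P(team A beats team B) under a logistic model of the gap."""
    return 1.0 / (1.0 + math.exp(-(rating_a - rating_b) / scale))

# hypothetical regional favorite (115) vs. a weak 4-seed (95)
p = win_prob(115.0, 95.0)   # a 20-point gap -> roughly 0.88 per game
# chaining several such games is why even a ~72% regional favorite
# like Rice still loses a meaningful fraction of the time
```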



                            • #15
                              Keeeerist!

                              If no one knows how RPI is formulated, how can you critique it?

                              Only in its relative effectiveness as a determinant (IMHO).


                              If RPI is used to seed teams and to put bubble teams into the playoff, it should be transparent.

