October 17, 2008

A look at the components of the BCS

Eight sets of rankings are used in the BCS standings: the USA Today coaches' poll and the Harris Interactive poll each make up one-third, while the final third comes from a composite of six computer rankings.
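In rough terms, each component is expressed as a share of its maximum possible points and the three shares are simply averaged. The Python below is only an illustrative sketch of that arithmetic, not the BCS's own software, and the example numbers are hypothetical.

    # A minimal sketch: each component is a share of its maximum possible points,
    # and the BCS average is the plain mean of the three shares.
    def bcs_average(coaches_pct, harris_pct, computer_pct):
        return (coaches_pct + harris_pct + computer_pct) / 3.0

    # Hypothetical example: 95 percent in both human polls, 88 percent in the computers.
    print(round(bcs_average(0.95, 0.95, 0.88), 4))  # 0.9267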

The BCS rankings have undergone several tweaks since being unveiled in 1998. Most recently, the BCS replaced The Associated Press poll with the Harris poll, put greater weight on the human polls, removed strength of schedule and decided to use only computer polls that don't consider margin of victory.

But for the third consecutive season, the BCS has made no changes to the overall formula. Here's a run-down of each component.

THE COACHES' POLL

Sixty-one coaches rank teams one through 25 weekly in the coaches' poll, compiled by USA Today.

The voters are distributed somewhat evenly among the 11 conferences. The Big Ten, Big 12 and SEC have seven each; the ACC, Conference USA and Mid-American have six each; the Pac-10 has five; the Big East, Mountain West, Sun Belt and WAC have four each; and independents have one (Notre Dame's Charlie Weis).

The coaches are selected by the American Football Coaches Association and reviewed by USA Today. The only coaches not allowed to vote are coaches of teams on major probation (those teams also aren't eligible to be ranked). The AFCA urges all coaches to make themselves available for the poll, but some decline to take part (usually first-year coaches). There are no tenure requirements.

About a quarter of the voters change each season, either because coaches decline to take part or because they are no longer FBS head coaches. Eleven coaches who voted last season are not voting this season, including seven who are no longer coaching.

Coaches' ballots are kept secret until the final regular-season poll.

The coaches vote in a preseason poll, then vote weekly throughout the season.

THE HARRIS POLL

To replace The Associated Press Top 25, which removed itself from the BCS formula after the 2004 season, the BCS turned to the Harris Interactive poll.

The Harris poll has 114 voters, all with some connection to college football: former players, coaches, athletic directors, conference commissioners and sports information directors, as well as media members.

Harris Interactive randomly selects voters from among the more than 300 people nominated by conference offices and FBS universities.

Unlike the coaches, Harris poll voters do not submit votes until midseason.

Before the season, Harris Interactive releases the names of the 114 voters but does not make public any information that describes the voters' location or connection to college football.

As in the coaches' poll, each voter's final ballot will be made public.

THE COMPUTERS

Six computer rankings make up the final third of the BCS formula. For each team, the best and worst computer rankings are thrown out, and the sum of the remaining four is divided by 100 (the maximum possible points) to produce the team's BCS computer percentage.
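Assuming the point scale the BCS has used, in which each computer awards 25 points for a No. 1 ranking down to 1 point for No. 25, the arithmetic looks like the sketch below (illustrative Python, not the BCS's own software).

    # Illustrative sketch of the computer component, assuming 25 points for a
    # No. 1 ranking down to 1 point for No. 25 (0 if unranked by that computer).
    def computer_pct(points_from_six_computers):
        pts = sorted(points_from_six_computers)
        trimmed = pts[1:-1]              # throw out the best and worst rankings
        return sum(trimmed) / 100.0      # 100 = 4 remaining rankings x 25 points each

    # Hypothetical example: No. 1 in four computers, No. 2 and No. 4 in the others.
    print(computer_pct([25, 25, 25, 25, 24, 22]))  # 0.99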

The computers are included in the equation with the intention of removing human bias.

Though the BCS removed strength of schedule as a stand-alone component in 2004, schedule strength is included in some form in each of the computer rankings. The BCS also removed margin of victory from the formula and uses only computers that do not take the final score into account.

Each computer ranking has its own formula, using different ways to calculate strength of schedule and, in some cases, conference strength.

Here is a brief description of each computer ranking:

Anderson & Hester: Its organizers say their rankings do not prejudge teams. The rankings are not compiled until the fifth week of the season, reflecting a team's "actual accomplishments on the field, not its perceived potential." The strength-of-schedule component judges the records of each team's opponents and those opponents' opponents, and it puts added weight on conference play. Conference strength is rated by each league's non-conference record and the difficulty of the league's non-conference schedule. The Anderson & Hester rankings also take home and road records into account in evaluating strength of schedule.
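The exact Anderson & Hester weights are not spelled out here, but the general idea of grading a schedule by opponents' and opponents' opponents' records can be sketched as follows. The two-thirds/one-third split in this illustrative Python is a common convention, not necessarily theirs, and it leaves out their conference and home/road adjustments.

    # Illustrative only: a generic opponents / opponents' opponents schedule grade.
    def win_pct(records):
        # records: list of (wins, losses) tuples
        wins = sum(w for w, l in records)
        games = sum(w + l for w, l in records)
        return wins / games if games else 0.0

    def schedule_strength(opponent_records, opp_opponent_records):
        # Weight opponents' records twice as heavily as opponents' opponents' records.
        return (2 * win_pct(opponent_records) + win_pct(opp_opponent_records)) / 3

    # Hypothetical example: opponents are a combined 30-18, their opponents 45-45.
    print(round(schedule_strength([(30, 18)], [(45, 45)]), 3))  # 0.583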

Billingsley Report: The Billingsley Report is one of three computer polls that give teams a starting position. Rather than using preseason rankings, teams start where they finished the previous season. Even so, there is much more movement early on than in the human polls. For example, defending national champion LSU started at No. 1 this season, but the Tigers dropped after they defeated Appalachian State because other teams defeated higher-ranked opponents. In its strength-of-schedule component, this poll considers the rank of a team's opponents rather than simply their records. For example, a team would get more value from defeating Oregon State than UTEP, though both are 3-3.

Colley Matrix: The founder, Wes Colley, has a Ph.D. in astrophysical sciences from Princeton. He bills his rankings as bias-free, ignoring opinion and past performance. Strength of schedule has a strong influence on the final ranking, but it is calculated from simple won-lost records. In the formula, a four-loss team playing opponents with better winning percentages can be ranked ahead of a two-loss team facing weaker opponents. The rankings claim to be free of bias toward conference, tradition or region. All teams are assumed to be equal at the beginning of the season, so, for example, Florida starts at the same point as Florida International. As a result, early rankings bear little resemblance to the human polls. The model does not account for home and road wins. Colley won't be surprised if his rankings differ from the AP and coaches' polls, the two most commonly cited rankings, since he says the human polls reinforce each voter's perceptions.
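Colley has published the basic method, and it can be sketched in a few lines: every game adjusts a matrix built only from wins, losses and who played whom, and solving the resulting linear system yields the ratings. The Python below is an outside reconstruction for illustration, not Colley's BCS code.

    # Sketch of the published Colley method: ratings come from a linear system
    # built only from wins, losses and schedules. Not Colley's actual BCS code.
    import numpy as np

    def colley_ratings(teams, games):
        # teams: list of names; games: list of (winner, loser) pairs
        idx = {t: i for i, t in enumerate(teams)}
        n = len(teams)
        C = 2.0 * np.eye(n)                 # diagonal starts at 2 for every team
        b = np.ones(n)                      # right-hand side starts at 1
        for winner, loser in games:
            w, l = idx[winner], idx[loser]
            C[w, w] += 1; C[l, l] += 1      # each game played adds to both diagonals
            C[w, l] -= 1; C[l, w] -= 1      # and links the two teams off the diagonal
            b[w] += 0.5; b[l] -= 0.5        # b_i = 1 + (wins_i - losses_i) / 2
        return dict(zip(teams, np.linalg.solve(C, b)))

    # Hypothetical example: A beat B and B beat C, so A > B > C.
    print(colley_ratings(["A", "B", "C"], [("A", "B"), ("B", "C")]))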

Massey Ratings: Like some of its computer counterparts, the Massey Ratings are designed to evaluate past performance, not to predict future outcomes. The rankings take only the score, site and date of each game into account. BCS computers are not permitted to use margin of victory, but points scored and points allowed are a factor for Massey, including calculations for a team's home-field advantage. The site of the game is included in his formula for schedule strength. Preseason ratings based on each team's ranking from the previous postseason are included, but he says that effect is "damped out completely" by the end of the season. Though margin of victory isn't a factor, the final score is used to calculate the probability that a team would win a rematch under the same conditions. Massey says the scores help assign an objective value to non-quantifiable elements, such as motivation or weather. A team that wins a close, high-scoring game receives a lower probability of winning a rematch than a team that wins 10-0. In his formula, there are diminishing returns for a team that runs up the score: the difference between winning 30-0 and 56-3 is minimal, though there is an advantage to winning comfortably. Conference strength does not play a role, but non-conference records do factor into the rankings; therefore, conferences with better non-conference records produce stronger schedule strength.
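The diminishing-returns idea described above can be illustrated with any saturating curve: the rematch probability climbs quickly for comfortable wins and then flattens. The Python below is a generic illustration under that assumption, not Massey's actual formula, and the scale parameter is arbitrary.

    # Illustrative only: a generic saturating transform, not Massey's formula.
    # Comfortable wins earn a clear edge; running up the score adds little more.
    import math

    def rematch_win_probability(points_for, points_against, scale=10.0):
        margin = points_for - points_against
        return 1.0 / (1.0 + math.exp(-margin / scale))

    for score in [(17, 14), (10, 0), (30, 0), (56, 3)]:
        print(score, round(rematch_win_probability(*score), 3))
    # (17, 14) -> 0.574   a close win earns only a modest edge
    # (10, 0)  -> 0.731
    # (30, 0)  -> 0.953
    # (56, 3)  -> 0.995   little extra credit beyond a comfortable win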

Sagarin Ratings: Jeff Sagarin's ratings are published by USA Today and probably are the best-known computer rankings. Sagarin produces multiple rankings, including one that takes margin of victory into account. The margin-of-victory version is the one featured in USA Today, and he considers it his "best effort," but the BCS uses the version without a margin-of-victory component. The inputs to Sagarin's BCS rankings are wins, losses and site. Site is classified as home, away, neutral or "close-by," which would cover an LSU game in New Orleans but not Florida-Georgia in Jacksonville or Oklahoma-Texas in Dallas, which count as neutral-site games. Because the ratings compound the records of a team's opponents and those opponents' opponents, strength of schedule is implicit rather than a separate entity in the formula. Road wins receive more weight, as do undefeated teams. Before the BCS releases its standings, Sagarin removes preseason rankings from the formula so each team starts on equal footing.

Wolfe: The Wolfe rankings use a method called the "maximum likelihood estimate." The formula determines the likelihood of one team defeating another, using win/loss data collected on all 697 four-year teams. These rankings take into account conference strength and location of the game.
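Wolfe's implementation isn't detailed here, but a "maximum likelihood estimate" in this setting usually means a Bradley-Terry style model: each team gets a rating, the chance that one team beats another is its rating divided by the sum of the two ratings, and the ratings are fit to the season's wins and losses. The Python below is a minimal sketch of that standard approach, not Wolfe's code; the conference-strength and game-location factors he also uses are left out.

    # Sketch of a Bradley-Terry style maximum-likelihood fit, the standard approach
    # for this kind of model; Wolfe's actual implementation may differ.
    # P(i beats j) = r_i / (r_i + r_j); ratings are chosen to fit observed results.
    def fit_ratings(teams, games, iterations=200):
        # games: list of (winner, loser) pairs
        ratings = {t: 1.0 for t in teams}
        wins = {t: 0 for t in teams}
        opponents = {t: [] for t in teams}      # every opponent faced, with repeats
        for w, l in games:
            wins[w] += 1
            opponents[w].append(l)
            opponents[l].append(w)
        for _ in range(iterations):
            new = {}
            for t in teams:
                denom = sum(1.0 / (ratings[t] + ratings[o]) for o in opponents[t])
                new[t] = wins[t] / denom if denom else 1.0
            mean = sum(new.values()) / len(new)
            ratings = {t: r / mean for t, r in new.items()}   # keep the scale fixed
        return ratings

    # Hypothetical example: A goes 2-1, B goes 1-1, C goes 1-2.
    result = fit_ratings(["A", "B", "C"],
                         [("A", "B"), ("B", "C"), ("C", "A"), ("A", "C")])
    print({t: round(r, 2) for t, r in result.items()})  # A rates highest, C lowest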

David Fox is a national writer for Rivals.com. He can be reached at dfox@rivals.com.