Computer-based ratings are necessarily objective in that they treat all input equally. Their only problem is that, until there is enough input to treat, their results are as unreliable as those of subjective humans who can only process the smaller subset of data available to them. We are nearly to the point in the 2016 season where the computer rankings start "making sense," so I am now including the meta-ranking analysis of the computer rankings Dr. Massey reports at http://www.masseyratings.com/cbase/compare.htm.
Through week six all but six team-pairs are connected by no worse than an A played B played C played D played E chain. By week seven all team-pairs will be connected by no worse than an opponents' opponents' opponents' opponent relationship. To see how far we've come in terms of inter-regional scheduling, note that as late as 1999 more team-pairs were at distance five after the season was over than are now, less than half-way through the season.
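The chain lengths above are just shortest paths in the schedule graph: teams are nodes and each played game is an edge. A minimal sketch of computing those distances with breadth-first search (the team names and toy schedule here are made up for illustration):

```python
from collections import deque

def pair_distances(games):
    """Breadth-first search over the schedule graph: teams are nodes,
    a played game is an edge.  Returns the length of the shortest
    'A played B played C ...' chain for every connected team-pair."""
    graph = {}
    for a, b in games:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)

    dist = {}
    for start in graph:
        seen = {start: 0}
        queue = deque([start])
        while queue:
            team = queue.popleft()
            for opp in graph[team]:
                if opp not in seen:
                    seen[opp] = seen[team] + 1
                    queue.append(opp)
        for other, d in seen.items():
            if start < other:  # record each unordered pair once
                dist[(start, other)] = d
    return dist

# Toy schedule forming a chain A-B-C-D-E; A and E are at distance four
games = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]
print(pair_distances(games)[("A", "E")])  # 4
```

A distance of one means the teams met head-to-head; distance two is an opponents' opponent relationship, and so on up the chain.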
It was Boyd Nation's Breadcrumbs Back to Omaha column "A Look At the Distance Matrix" that piqued my interest in schedule topology and its impact on advanced rating systems.
The "average" or "consensus" ranking for each team is determined using a least squares fit based on paired comparisons between teams for each of the listed ranking systems. If a team is ranked by all systems, the consensus is equal to the arithmetic average ranking. When a team is not ranked by a particular system, its consensus will be lowered accordingly. He also reports the median team rank.
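In the fully-ranked case the consensus described above reduces to a simple average. A minimal sketch of that special case, alongside the median, using made-up team names and rankings (this is only the degenerate case, not the least-squares fit Dr. Massey uses when systems omit teams):

```python
from statistics import mean, median

# Hypothetical ranks (1 = best) assigned by three ranking systems.
rankings = {
    "Team A": [1, 2, 1],
    "Team B": [2, 1, 3],
    "Team C": [3, 3, 2],
}

# When every system ranks every team, the consensus is just the
# arithmetic average rank; the median rank is reported alongside it.
consensus = {t: mean(r) for t, r in rankings.items()}
medians = {t: median(r) for t, r in rankings.items()}

for team in sorted(consensus, key=consensus.get):
    print(team, round(consensus[team], 2), medians[team])
```

When a system leaves a team unranked, this simple average no longer applies; the least-squares fit over paired comparisons handles those gaps by effectively penalizing the missing rankings.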
There are many ways to combine ordinal ranks into one, and a famous theorem (Arrow's impossibility theorem) proves that which one is "right" depends upon your definition of "right." In other words, there is no unambiguous way to combine lists of ordered ranks into a composite list that will meet every definition of "right." So to supplement Dr. Massey's composite I calculate these meta-rankings:
These composite "meta-rankings" are included (with the computers from Dr. Massey's list) in the Computer Ratings by Retrodictive Ranking Violations report. For each team this report lists:
One can compare two ordinal lists by counting the number of team-pairs that are in the same order in both lists ("concordant" pairs) and those which are in opposite order ("discordant" pairs). The number of discordant pairs is called the distance between the two lists. I report the distance for each of the composite ranks compared to the computers' rankings, and for the computer rankings compared to each other. To give perspective, I also include the complement of that metric: the percentage of team-pairs that are concordant between the two rankings. See Computer Ranking Correlations.
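The concordant/discordant counting above can be sketched directly; the team names and ranks here are invented for illustration:

```python
from itertools import combinations

def concordance(rank_a, rank_b):
    """Compare two ordinal rankings of the same teams.  A team-pair is
    concordant if both lists order it the same way, discordant otherwise
    (ties count as discordant here).  The discordant count is the
    distance between the two lists (the Kendall tau distance)."""
    concordant = discordant = 0
    for t1, t2 in combinations(rank_a, 2):
        # Positive product: both lists order the pair the same way.
        same = (rank_a[t1] - rank_a[t2]) * (rank_b[t1] - rank_b[t2])
        if same > 0:
            concordant += 1
        else:
            discordant += 1
    return concordant, discordant

# Two hypothetical computer rankings of four teams (1 = best);
# the second ranking swaps only X and Y.
a = {"W": 1, "X": 2, "Y": 3, "Z": 4}
b = {"W": 1, "X": 3, "Y": 2, "Z": 4}
c, d = concordance(a, b)
print(d, c / (c + d))  # distance 1; 5 of 6 pairs concordant
```

The percentage reported as the complement is just concordant pairs divided by total pairs, so two identical lists score 100% and two reversed lists score 0%.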
In memory of