I started this research not because they are biased but because their research process is slow and deeply flawed. You can find my main critique of their approach here:
Seven serious problems with Consumer Reports
One key difference: their current ratings are based on a survey conducted in April 2008. Our results cover owner experiences through June 2009.
So, do you want to know how reliable a car was over a year ago, when it was over a year younger and had 14,000 fewer miles on it, or how reliable it has been lately?
Regarding "Seven Serious Problems with Consumer Reports":
1. "Serious problems" - times in the shop and days in the shop may be a function of the competence of the technician or the availability of parts, and may not have a one-for-one relationship with the car itself.
2. "Relative ratings" - I will contend that the average member (subscriber) of Consumer Reports does not use CR exclusively and does understand that the ratings are relative. I don't care that your mathematics shows that the "difference was just one-tenth of a serious problem per car." That is still a difference of 10%, and that is significant in my book.
3. "Ranges" - again, ranges are just the starting point. An intelligent consumer will search for and read more than just what is depicted by the blobs. And, just as in #1, trips to the shop and days in the shop may be a function of the competence of the technician or the availability of parts, and may not have a one-for-one relationship with the car itself.
4. "Only averages" - the report that the average eight-year-old domestic-brand model had fewer than one-and-a-half "serious problems" per year (on page 17 of the 2005 auto issue) means that over the eight-year life of the car it had a dozen serious problems. You might want that car, but I surely don't. Compare that to my 1998 Dodge Intrepid. In the 11 years that I have owned it, it has had one serious problem: a transmission speed sensor.
And odds are based on the law of averages. How are you going to generate odds without using averages?
5. "Survey (in)frequency" - I disagree with your conclusion that people won't accurately remember what happened. People who are astute enough to subscribe to CR pay attention to the products they have bought and will remember what happened. My 1993 Magnavox TV had a problem with the sound in 1999. The problem was a fried silicon-controlled rectifier in the amp circuit, which, incidentally, I fixed by unsoldering the bad one and soldering in a replacement.
6. "Stale information" - if someone is interested in a hot new design, the hot new design will influence the purchase decision far more than reliability statistics. And reliability statistics may go out the window if the design is hot enough and new enough. Otherwise, you would have no source for gathering your statistics.
Most of us know that hot new designs may suffer from serious problems in the beginning but that, over time, those problems will probably be fixed or eliminated. Gathering data from a universe that is too small can significantly skew the results and the conclusions drawn from them.
7. "Fossilization" - if the members (subscribers) were not confident in, or at least content with, the reports it publishes, CR would have gone out of business a long time ago.