
Jan 07

IN THE NEWS: It Turns Out Health Care Provider Ratings Aren’t So Reliable…

Photo credit: MS Office

One of the things that we discuss often around the office is how reliable (or rather, how unreliable) crowdsourced review sites for service providers can be. For example, I’m a frequent contributor to Yelp!, and I use it to discover new restaurants and businesses around town or to find local hotspots when I’m traveling. But I’ll be the first to admit that until a business’s reviews reach a critical mass of dozens of independent opinions, the ratings you’ll find can be quite unreliable.

Case in point: yesterday, my wife and I stopped in at a sandwich shop in Columbia, MO that was highly rated by 14 Yelp users (and was, in fact, the highest-rated quick service restaurant in town). The reviews all talked about how good the food is, but most failed to mention that the place is dingy and dirty, that the booth seating is worn to the point of having visible tears, that there’s no seating for small children and that there’s no restroom. This was a three- or four-star restaurant at best. (Honestly, I felt like even the food was overrated; the hyped-up veggie sub was good, but far from the best I’ve ever eaten.)

The problem with user reviews on sites like Yelp! (or even on retail sites like Amazon.com) is that they’re completely unvetted. The reader has no idea why the writer posted the review, or how qualified the reviewer is to offer an opinion at all. Some reviewers are just trying to be helpful, but others have realized that they can use reviews to their advantage. When I worked in product marketing, I was repeatedly contacted by people who labeled themselves “top Amazon.com reviewers” and wanted free product in exchange for an Amazon.com review.

It’s also quite common for small businesses to attempt to boost their profiles by having friends and employees write “sock puppet” reviews that extol the virtues of the business, fooling readers of review sites into thinking they’re going to be delighted. This sort of behavior typically catches up with genuinely bad businesses quickly (as real reviewers gradually add to the body of work), but mediocre businesses rarely inspire people to write negative reviews to counter the positive ones, since disappointed customers are more likely to assume they just visited on an off day.

All of this brings me to an article I saw on NPR’s website over the weekend about user review sites for physicians. Simply put, the article says that studies are showing these sites don’t work and aren’t worth consulting, because they tend to attract a small number of negative reviews rather than a large volume of honest feedback. For example, if only three patients have reviewed a urologist and one of them gives a negative review, that single review carries a tremendous amount of weight in the online profile. But it is unlikely to be representative of how the majority of patients feel, and in fact may be entirely idiosyncratic.
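To see just how much leverage one review has at these sample sizes, here’s a minimal sketch with made-up numbers (purely illustrative, not data from the studies cited):

```python
def average_rating(ratings):
    """Mean star rating for a list of 1-5 star reviews."""
    return sum(ratings) / len(ratings)

# Three patients have reviewed: two 5-star raves and one 1-star complaint.
small_sample = [5, 5, 1]
print(average_rating(small_sample))  # 3.67 -- one review drags a 5.0 down by more than a star

# The same single complaint among a hundred mostly-positive reviews barely registers.
large_sample = [5] * 99 + [1]
print(average_rating(large_sample))  # 4.96
```

With three reviews, one unhappy patient controls a third of the score; with a hundred, the same patient controls one percent of it.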

Here’s a snippet from the article that cites some of the findings of the studies:

The 500 urologists surveyed averaged 2.4 reviews on 10 physician-ranking websites, with total reviews per doctor ranging from 64 to zero. The reviews were overwhelmingly positive, at 86 percent. But the negative reviews focused more on things like office decor than whether the doctor delivered good health care.

But this study suggests that information from crowdsourced doctor-ranking sites should be taken with a grain of salt, Ellimoottil says. He became interested in the topic when he Googled a prominent surgeon for a research project. “He had a Healthgrades ranking that was 5 out of 5. The next one down was Vitals, and it was 2 out of 5. The reviews were 180 degrees different.”

Another study published in early 2012 in the Journal of Medical Internet Research found some correlation between online ratings and quality of Virginia doctors. Overall, the ratings were positive. Patients were most critical about punctuality and staff. Still, like the study of urologists’ ratings, this one found ratings were often based on the experience of just a few patients, about three, on average.

The problem is that the patient-physician relationship is difficult to rate because of the complexity of the service being offered. Other types of user reviews make sense because they measure a simple aspect of a transaction: a product review is about the value of a product; a restaurant review is about the experience of going to a restaurant; a transaction review on eBay is about helping others know whether an individual can be trusted.

But a physician’s primary job is to ensure that a patient remains well. The experience of waiting to see the physician can be uncomfortable, and the physician might not be very good at interacting with people (and yet still be an excellent healer). The patient has no real way of gauging a physician’s level of skill or competence, and can only rely on the subjective experience of seeing the doctor or the testimonials of others.

Health care quality can be objectively measured, and health care organizations do measure it. But those measures are outcomes-based rather than perceptual, and they’re neither widely available to patients nor easy for them to understand.

While it’s a nice idea that a physician can be rated online like a restaurant or a product, the reality is that any site purporting to grade doctors should focus on objective criteria and use testimonials only as supporting commentary. And even then, those testimonials should be filtered to flag extremely negative comments as the experience of one person, not necessarily representative of the physician’s patients as a whole.
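As a rough illustration of the kind of filtering I have in mind, here’s a hypothetical sketch; the function name, threshold, and cutoff values are my own invention, not any ratings site’s actual logic:

```python
def flag_outlier_reviews(ratings, min_sample=10, max_gap=2.0):
    """Flag ratings that sit far below the mean when the sample is small.

    Hypothetical heuristic: with fewer than `min_sample` reviews, any rating
    more than `max_gap` stars below the average gets marked as one patient's
    experience rather than a representative signal.
    """
    mean = sum(ratings) / len(ratings)
    if len(ratings) >= min_sample:
        return []  # enough reviews that no single one dominates the score
    return [r for r in ratings if mean - r > max_gap]

print(flag_outlier_reviews([5, 5, 1]))         # [1] -- flag the lone complaint
print(flag_outlier_reviews([5] * 99 + [1]))    # []  -- large sample, nothing flagged
```

The point isn’t this particular threshold; it’s that a site grading doctors could acknowledge, right in the interface, when a score rests on too few shoulders to mean much.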