COB (or Coordination of Benefits) is a core managed care function that is often ignored (and, to complicate things further, varies by state). It is a required process by which managed care companies identify whether members have other coverage (i.e., should someone else get the bill?). Given the high dollars here, you would think companies would be focused on driving this as a cost management or profit initiative.
Good companies find that 2.5% or more of their population has secondary insurance. I have seen analysis suggesting that if you include claims that should be billed to workers' compensation, auto insurers, etc., the number could be as high as 15%, but that seems really high.
An interesting fact that one of our experts shared with me: claims data could explain over 65% of the variance in COB responses for a working-age population, while it could explain only just over 40% for seniors. They have found some incredible correlations, creating ROIs for clients in the 2,000% range. [Not bad. If I could get my boss a 20:1 return, I think he would pay attention.]
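To make the arithmetic behind that bracketed aside concrete, here is a minimal sketch with entirely hypothetical numbers, using the loose industry convention where a 2,000% ROI means $20 recovered for every $1 spent (i.e., a 20:1 return):

```python
def roi_percent(recovered: float, program_cost: float) -> float:
    """ROI as a percentage of program cost (loose usage:
    $20 recovered per $1 spent -> 2,000%)."""
    return 100.0 * recovered / program_cost

# Hypothetical: a COB program costing $50,000 that recovers $1,000,000
print(roi_percent(1_000_000, 50_000))  # 2000.0 -> a 20:1 return
```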
Of course, as a patient, all I care about is that my claim gets paid, and I don’t get a bill from my provider.
Again…I may be preaching to the choir, but this is why data matters. This is why you need metrics. This is why you need to know your baseline and track how you improve this.
And, always make sure you understand the definition, the data sources, and the data quality. I remember running data standardization processes for Sprint back in the mid-1990s; it took a while just to get agreement on what a customer was, from both a business and a systems perspective. Another example came up when I was comparing two vendors on how they defined success. They each attempted to report a standard abandonment metric for people who came to a website to take a survey (i.e., how many abandoned the process before completion). One seemed dramatically better than the other.
Upon research, we found that one broke the survey into sections and offered an exit on each page. As long as the consumer exited at a planned opt-out point, they were a "success" and had not abandoned the survey (even though they hadn't completed it). The other counted only those who finished the survey. No surprise which one had the better score.
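The two vendors' definitions can be sketched side by side. This is a hypothetical reconstruction (the field names and sample sessions are mine, not from either vendor), but it shows how the same traffic produces very different "abandonment" rates depending on whether planned opt-outs count as successes:

```python
def abandonment_strict(sessions):
    """Vendor A's style: abandoned = did not finish the survey, period."""
    finished = sum(1 for s in sessions if s["completed"])
    return (len(sessions) - finished) / len(sessions)

def abandonment_lenient(sessions):
    """Vendor B's style: exiting at a planned opt-out point still
    counts as a success, so only mid-page quitters are 'abandoned'."""
    ok = sum(1 for s in sessions if s["completed"] or s["exited_at_opt_out"])
    return (len(sessions) - ok) / len(sessions)

# Hypothetical traffic: 10 visitors -- 4 finish, 4 bail at a planned
# opt-out point, 2 quit mid-page.
sessions = (
    [{"completed": True,  "exited_at_opt_out": False}] * 4
    + [{"completed": False, "exited_at_opt_out": True}] * 4
    + [{"completed": False, "exited_at_opt_out": False}] * 2
)

print(abandonment_strict(sessions))   # 0.6 -- 6 of 10 never finished
print(abandonment_lenient(sessions))  # 0.2 -- only 2 of 10 "abandoned"
```

Same data, same word, a threefold difference in the reported rate. That is why pinning down the definition has to come before comparing the numbers.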

February 8, 2008 

