by Jared Staheli
October 16th, 2014
It is common knowledge that fraud is a large problem for payers, who lose money both to fraudulent claims themselves and to the cost of recovering them. According to a Forbes article, a health care actuary estimated that private insurers annually “lose perhaps 1 to 1.5 percent in fraud.” The problem is worse for public insurers: the same actuary estimates that “Medicare and Medicaid may be closer to 10 to 15 percent.” According to the recently released Medicare Fee-for-Service 2013 Improper Payment Rate Report, the improper payment rate increased from 8.5% in 2012 to 10.1% in 2013. Part of the reason for this gap between private and public payers is that public insurers are only beginning to use statistical tools that have been standard in the private sector for years. Part of this change came from the PPACA (Patient Protection and Affordable Care Act), which enables HHS (the Department of Health and Human Services) to adjudicate claims before making a payment, rather than paying immediately and then pursuing only obviously fraudulent claims. Using statistical software to analyze claims would save money not only by avoiding payment of fraudulent claims, but also by reducing the millions spent on inefficiently recovering improperly paid claims.
It can be helpful to understand the statistical processes that would allow for a reduction in fraud, and perhaps bring public insurers closer to the fraud rate experienced by their private-sector counterparts. An interesting development is that not only are public insurers now using these tools; the tools themselves are getting better. Because the problem of fraud is so expansive, a large number of outliers show up in traditional statistical models, skewing results and making fraudulent claims appear to fall within the range of honest ones. An article from Advance Healthcare Network outlines new techniques designed to address this weakness of traditional models. These include the interestingly named “Multivariate Outlier Detection Using Robust Mahalanobis Distance,” which allows for better detection of a group of outliers that would otherwise distort the estimated distribution. In layman's terms, fraudulent claims can no longer hide behind the skewed average that they and other fraudulent claims create. Because fees vary across regions, not every outlier group is the result of fraud; clustering analyzes the groups of outliers detected by the technique above to see whether a given group is worth a fraud investigation. Perhaps the most fascinating development in fraud detection is the progress in artificial intelligence (AI), which allows techniques to refine themselves as they gather more information. Machine learning techniques build on work in neuroscience and computer science; mirroring the processes of the human brain will allow fraud detection to become much more efficient, saving taxpayers millions of dollars.
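To make the two-step idea concrete, here is a minimal sketch of robust Mahalanobis outlier detection followed by clustering of the flagged claims. It uses synthetic data and hypothetical claim features (billed amount and visit count); the Minimum Covariance Determinant estimator stands in for whatever robust estimator a real payer system would use, so treat this as an illustration of the technique rather than any insurer's actual method.

```python
# Sketch: robust Mahalanobis distance to flag outlying claims, then
# clustering to see whether the flagged claims form a dense group.
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Synthetic honest claims: (billed amount, visit count) from one distribution.
honest = rng.multivariate_normal(
    [200.0, 3.0], [[400.0, 5.0], [5.0, 1.0]], size=500)
# A group of suspicious claims far from the honest mass. In a classical
# (non-robust) fit, a group this large would drag the mean and covariance
# toward itself and partly mask its own distances.
suspicious = rng.multivariate_normal(
    [900.0, 12.0], [[100.0, 0.0], [0.0, 1.0]], size=25)
claims = np.vstack([honest, suspicious])

# Minimum Covariance Determinant fits location/scatter on the most
# concentrated subset of the data, so the outlier group cannot skew it.
mcd = MinCovDet(random_state=0).fit(claims)
d2 = mcd.mahalanobis(claims)  # squared robust Mahalanobis distances

# For roughly normal data, squared distances follow a chi-square
# distribution with p degrees of freedom; flag the 97.5th-percentile tail.
threshold = chi2.ppf(0.975, df=claims.shape[1])
flagged = claims[d2 > threshold]

# Cluster the flagged claims: a dense group may indicate organized fraud,
# or simply a region with a different fee schedule, which is exactly why
# flagged groups are reviewed rather than automatically denied.
labels = DBSCAN(eps=50.0, min_samples=5).fit_predict(flagged)
n_clusters = len(set(labels) - {-1})  # -1 marks isolated noise points
print(f"flagged {len(flagged)} of {len(claims)} claims, "
      f"{n_clusters} dense cluster(s) among them")
```

On this synthetic data the planted group sits far enough from the honest claims that all of it is flagged and recovered as a dense cluster, while the handful of honest claims in the distribution's natural tail mostly end up as isolated noise points.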
For more detailed information on these techniques, as well as a few additional methods not discussed in this article, click here.
To read the full Forbes article, click here.