Pooling data from across the insurance industry can help surface fraud that might otherwise go unnoticed. (AndreyPopov/iStockPhoto/Getty Images)
Fraudulent life and health insurance claims cost Canadian insurers millions of dollars each year, but the industry is stepping up efforts to fight back.
The Canadian Life and Health Insurance Association Inc. (CLHIA) is expanding its project to fight fraud by pooling insurance records, with artificial intelligence (AI) analytics playing a central role.
CLHIA’s pooled data program takes data from multiple providers in Canada and analyzes it to detect patterns of potential fraud across the industry.
The program began with a limited scope in 2021 and is now expanding to include more insurers and more types of claims. To accomplish that, the CLHIA continues to work with its original program partner, Shift Technology, which provides AI-powered analytics software for the data analysis.
Shift Technology’s head of U.S. health care customer success, Jesse Montgomery, says pooling data from across the industry can help surface fraud that might otherwise go unnoticed.
“As an individual provider, you’re not necessarily going to see something in your data,” he says. “But if you can bring 10, 20 or more insurers together and leverage that data collectively, you’re going to find patterns you wouldn’t find yourself.”
Insurance fraud takes many forms. Much of it happens through simple misrepresentation on applications, such as failing to declare pre-existing conditions or lifestyle habits. Other, more sophisticated schemes include “stacking,” in which people take out multiple lower-value policies to maximize their total coverage while avoiding the increased scrutiny of underwriters.
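The pooled database is designed to surface exactly these cross-insurer patterns. As a rough sketch only, and not the CLHIA program’s or Shift’s actual logic, stacking could in principle be flagged by grouping pooled policy records by applicant and counting policies across insurers. The data, field names and thresholds below are hypothetical:

```python
# Rough illustration: spot possible "stacking" in pooled policy data by counting
# how many modest policies one applicant holds across different insurers.
# Hypothetical data and thresholds; not the CLHIA program's actual logic.
import pandas as pd

# Hypothetical pooled records contributed by several insurers.
policies = pd.DataFrame({
    "applicant_id": ["A1", "A1", "A1", "B7", "B7"],
    "insurer":      ["Ins1", "Ins2", "Ins3", "Ins1", "Ins2"],
    "face_amount":  [90_000, 95_000, 85_000, 500_000, 20_000],
})

summary = policies.groupby("applicant_id").agg(
    policy_count=("insurer", "nunique"),
    total_coverage=("face_amount", "sum"),
)

# Many modest policies spread across insurers is a pattern no single insurer sees alone.
flagged = summary[(summary["policy_count"] >= 3) & (summary["total_coverage"] > 200_000)]
print(flagged)
```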
Sometimes, insurance agents are implicated. This year, the Financial Services Regulatory Authority of Ontario revoked the licence of a financial advisor in London, Ont., who was accused of defrauding clients by collecting money from them without investing it. Other tactics include “churning,” in which advisors inappropriately replace clients’ policies with newer ones to increase their commissions.
More often, though, the fraud happens through a mixture of stolen identities and misrepresentations to payers. In June, U.S. authorities disrupted transnational criminal organizations responsible for US$12-billion in fraudulent health insurance claims. The groups used stolen identities to submit the claims, the U.S. Department of Justice said.
The CLHIA says insurers paid $36.6-billion in supplementary health claims in 2023, and it estimates that fraudulent activity costs the industry millions of dollars each year.
AI’s role as an offensive and defensive tool
AI is becoming more necessary to catch life and health insurance fraud because criminals can use the same technology to make such fraud more convincing, Mr. Montgomery says.
Generating synthetic (made-up) identity data is already big business, for example, and generative AI technology can make both textual and visual information more convincing.
“They’re using it in its simplest form to change the claims that are being submitted. But on the other end of the spectrum, they’re using it to fabricate invoices and medical records, generate x-rays and other related documentation,” he says.
AI can spot life and health insurance fraud by catching anomalies that human analysts might miss, especially when spread over large numbers of claims and applications, Mr. Montgomery says.
He recalls one case in which a health care provider was submitting claims to a payer but reusing the same documentation with slight changes.
“To an individual reviewer, they’re not going to be able to see that because they’re not looking at everything together and they’re not able to see all of those similarities,” he says.
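Shift does not publish its detection methods, but the idea of catching recycled documentation can be sketched with standard text-similarity tooling. The example below, which assumes scikit-learn and invented claim narratives, scores every pair of documents and flags pairs that are nearly identical:

```python
# Minimal sketch: flag claim documents that are near-duplicates of one another.
# Illustrative only; this is not Shift Technology's implementation.
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical claim narratives submitted by one provider.
claims = [
    "Patient presented with lower back pain, 6 physiotherapy sessions billed.",
    "Patient presented with lower back pain, 8 physiotherapy sessions billed.",
    "Routine dental cleaning and examination, single visit.",
]

# Represent each document as a TF-IDF vector and compare every pair.
vectors = TfidfVectorizer().fit_transform(claims)
similarity = cosine_similarity(vectors)

THRESHOLD = 0.9  # arbitrary cut-off for "suspiciously similar"
for i, j in combinations(range(len(claims)), 2):
    if similarity[i, j] > THRESHOLD:
        print(f"Claims {i} and {j} are {similarity[i, j]:.0%} similar - review manually")
```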
AI can spot trends in wording within a document. It can also inspect a document’s metadata, the invisible information embedded in electronic files that reveals details such as who created the document, and when and where.
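As an illustration of what metadata inspection involves, and not a description of Shift’s software, the sketch below uses the open-source pypdf library to pull the author, producer and creation date out of submitted PDFs. Identical metadata across invoices that supposedly come from different clinics would be a red flag; the file paths and field choices are hypothetical:

```python
# Minimal sketch: pull producer/creation metadata from claim PDFs so documents
# claiming different origins but sharing identical metadata stand out.
# Illustrative only; assumes the open-source pypdf library, not Shift's tooling.
from pathlib import Path

from pypdf import PdfReader

def extract_metadata(pdf_path: Path) -> dict:
    """Return basic embedded metadata from a PDF file."""
    info = PdfReader(pdf_path).metadata or {}
    return {
        "author": info.get("/Author"),
        "producer": info.get("/Producer"),   # software that generated the file
        "created": info.get("/CreationDate"),
    }

# Hypothetical batch of invoices submitted by "different" clinics.
for path in Path("claims_inbox").glob("*.pdf"):
    print(path.name, extract_metadata(path))
```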
Shift uses multiple AI algorithms to analyze other data, including images. The company deals with a lot of X-ray imagery, for example.
“You have to look at the different aspects of the image and how they compare to other similar images to be able to find the anomalies that are likely fraud, waste and abuse,” he says.
Beyond medical images, many administrative documents arrive as scanned images themselves. AI can spot similarities among them, such as images that were skewed slightly in the same way during scanning, or gradients that look identical across multiple documents.
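One simple, publicly documented technique for that kind of visual comparison is perceptual hashing, which reduces each image to a compact fingerprint that changes little when a document is re-scanned or slightly skewed. The sketch below, assuming the Pillow and imagehash libraries and a hypothetical folder of scans, flags pairs of images whose fingerprints are almost the same; it illustrates the general idea rather than Shift’s proprietary models:

```python
# Minimal sketch: use perceptual hashing to flag scanned documents that are
# visually near-identical (e.g. the same invoice re-scanned at a slight skew).
# Illustrative only; assumes the Pillow and imagehash libraries.
from itertools import combinations
from pathlib import Path

import imagehash
from PIL import Image

# Hypothetical folder of scanned claim documents.
paths = sorted(Path("scanned_claims").glob("*.png"))
hashes = {p.name: imagehash.phash(Image.open(p)) for p in paths}

MAX_DISTANCE = 6  # small Hamming distance means the images look nearly the same
for (name_a, hash_a), (name_b, hash_b) in combinations(hashes.items(), 2):
    if hash_a - hash_b <= MAX_DISTANCE:
        print(f"{name_a} and {name_b} look nearly identical - flag for review")
```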
As Mr. Montgomery notes, fraudsters don’t have to play by the same regulatory rules facing legitimate businesses. That makes strength in numbers a necessity.