Multinational companies in regulated industries like pharmaceuticals and insurance have been somewhat reluctant to adopt generative artificial intelligence to handle legally sensitive documents due to the risk of AI-generated mistakes.
That seems to be changing. Take Novo Nordisk, the Danish drugmaker behind Ozempic. For years the company has tested chatbots such as OpenAI’s ChatGPT and models such as Meta Platforms’ Llama to help write the documents it files to regulators when submitting a drug for approval. But the technology has been prone to errors, said Louise Lind Skov, who leads the company’s tech strategy. Sometimes it took employees more time to correct the AI’s errors than it would have taken to do everything by hand in the first place, she said.
It wasn’t until Novo started testing Anthropic’s Claude 3.5 Sonnet model last fall that the company found the number of errors fell significantly, said Waheed Jowiya, a strategy director overseeing its use of AI. Novo Nordisk now uses Claude to draft clinical study reports based on data that human researchers collected during a study. These documents, which describe the results of a drug trial, can be hundreds of pages apiece.
Humans still oversee Claude as it drafts the report, essentially by pointing and clicking at data and asking Claude to describe it in plain English, Jowiya said. Novo uses Claude through an interface that it built using Amazon Web Services software, he said.
Novo also uses a common method for reducing AI mistakes: retrieval-augmented generation. For instance, when Claude generates a clinical definition of obesity that a human expert determines is good, the expert tells Claude to reuse that description in any future documents concerning obesity trials.
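The reuse pattern described above can be sketched in a few lines: once a human approves a snippet, it is stored under a topic and retrieved into the prompt for any future document on that topic, so the model is told to reuse vetted language instead of regenerating it. This is a minimal illustration only; all names here are hypothetical, and Novo Nordisk’s actual pipeline (Claude accessed through an interface built on Amazon Web Services) is not public.

```python
class ApprovedSnippetStore:
    """Stores human-approved text snippets keyed by topic (a minimal
    stand-in for the retrieval step of retrieval-augmented generation)."""

    def __init__(self):
        self._snippets = {}  # topic -> approved text

    def approve(self, topic, text):
        """Record a snippet a human reviewer has signed off on."""
        self._snippets[topic.lower()] = text

    def retrieve(self, topic):
        """Return the approved snippet for a topic, or None if absent."""
        return self._snippets.get(topic.lower())


def build_prompt(store, topic, instruction):
    """Prepend any approved snippet to the model instruction, directing
    the model to reuse the vetted text verbatim."""
    snippet = store.retrieve(topic)
    if snippet is None:
        return instruction
    return (
        f"Use this approved definition verbatim for '{topic}':\n"
        f"{snippet}\n\n{instruction}"
    )


# Illustrative usage: the BMI threshold is the standard clinical
# definition, used here only as example text.
store = ApprovedSnippetStore()
store.approve(
    "obesity",
    "Obesity is defined as a body-mass index (BMI) of 30 or higher.",
)
prompt = build_prompt(
    store, "obesity",
    "Draft the background section of the clinical study report.",
)
```

In a production system the dictionary lookup would typically be replaced by semantic search over an embedding index, but the contract is the same: retrieval feeds approved context into generation.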
From: Ozempic Maker Says AI Is Finally Reliable Enough to Produce Sensitive Documents — The Information.