European hospitals launch Microsoft-backed AI network to agree privacy guardrails

Artificial intelligence, it is widely assumed, will soon unleash the biggest transformation in healthcare provision since the medical sector began professionalizing in the wake of the 1918 flu pandemic.

The catch is that bringing this about will require new institutional channels for knowledge, engineering, and ethical collaboration that don’t yet exist.

This week’s HLTH Europe show in Amsterdam saw the European launch of the Microsoft-backed Trustworthy & Responsible AI Network (TRAIN) consortium, which wants to meet this need.

The institutions that have signed up to be part of TRAIN are Erasmus MC and University Medical Center Utrecht in the Netherlands, Sweden’s Sahlgrenska University Hospital and Skåne University Hospital, Finland’s HUS Helsinki University Hospital, Italy’s Università Vita-Salute San Raffaele, and patient advocacy non-profit Foundation 29.

This follows the launch of TRAIN in the US in March, which saw a who’s who of famous medical organizations and hospitals add their names, including Boston Children’s Hospital, Cleveland Clinic, Johns Hopkins Medicine, and Mount Sinai Health System.

Hovering over all this, of course, is Microsoft as the technology partner, styling itself as an enabler rather than a leader but still an important influence.

What is TRAIN?

The consortium’s stated goals cover a mixture of technical and ethical issues that are often mentioned in AI announcements:

  • Collaborating to develop tools to “enable trustworthy and responsible AI.”
  • Agreeing on guardrails that put limits on how AI should be used.
  • Sharing best practices on the outcomes of AI in healthcare, including how to avoid the bugbear of bias. This will be done through a “federated AI outcomes registry.”

In short, members won’t share data or algorithms but there will be a collective system allowing expertise and learning to be shared.

It’s very similar in outlook to another AI collective, the Coalition for Health AI (CHAI), whose US launch in March listed 20 non-profit medical institutions, including several that are also members of TRAIN. CHAI’s engineering partners include Microsoft (again) alongside Amazon, Google, and CVS Health.

How might AI be used in healthcare?

According to AI optimists, the technology will allow long and arduous drug discovery pipelines to be shortened, cutting the cost of drug development. Meanwhile, the huge bureaucracy associated with patient care and medical records will be automated by machines.

Clinical decision making and diagnostics will speed up by an order of magnitude and become more accurate.

Everyone remembers the guesswork and uncertainty of the pandemic. In the future, this might disappear as AI-driven analytics makes predictions about viral evolution before it happens.

While it’s true that AI has been over-hyped across the tech sector as a whole, in healthcare many important elements are already in place.

Underpinning all this is data, the element that fuels AI but also threatens it if security and privacy of patient records are put at risk in any way.

Tech party

An obvious issue is the involvement and influence of big tech, in this case Microsoft. In comments sent to CIO, Microsoft listed a number of responsible AI (RAI) tools it is making available to TRAIN members, including through the open source Responsible AI Dashboard project.

Microsoft’s involvement would focus on assisting the deployment of the underlying technologies, confirmed the company’s vice president of healthcare, David Rhew. A major issue for him was that the demands of AI could lead to a multi-tier system in which only the largest institutions thrive.

“There is a need to ensure that the AI revolution will not only benefit well-resourced facilities but that low-resourced organizations will also be able to take advantage of AI and implement it responsibly,” said Rhew.

“Some academic medical centers (AMCs) and healthcare organizations already have processes in place to test and approve AI algorithms. However, many of these organizations need help to scale their processes in order to meet the needs of AI’s growing use.”

Within this, basic processes such as assessing AI guardrails could quickly turn out to be hugely complex.

“Industry complexities can be addressed from every angle when public, private, educational and research organizations from various backgrounds across the industry come together in collaborative partnership,” said Rhew.

This comes as Europe tightens its rules around AI with the AI Act. But the bigger problem is a regulatory landscape already fragmented across the US’s Health Insurance Portability and Accountability Act (HIPAA), the EU’s Medical Device Regulation (MDR), and GDPR. Rhew’s comments from Microsoft’s news release emphasized this theme.

“The primary goal for TRAIN is to enable individuals and organizations to operationalize responsible AI principles through technology-based guardrails. TRAIN will also enable organizations to collaborate through federated, privacy-preserving approaches,” he was quoted as saying.

“The formation of TRAIN in Europe will help foster trust and confidence in the application of AI in health and ensure that data privacy is maintained.”

© Foundry