Medics are using more and more AI tools, but how safe is your data?

Google's search facility has transformed our lives. Whether it's the times of shows at your local cinema or academic research via Google Scholar, information is available at our fingertips.
What’s not to like?
Well, when it comes to trust in large technology companies, we need to hit the pause button on our enthusiasm. Last autumn Facebook admitted a massive data breach affecting more than 50 million people. Meanwhile in March, we found out that Cambridge Analytica had exploited Facebook to harvest millions of people's profiles. Worryingly, the social network had not alerted users when it discovered the breach.
Nearly 80 per cent of internet users search online for health-related information, leading to the catchphrase "Dr Google will see you now". You probably don't mind that the information you seek may lead to some pointed advertising on your social media feed. But what if Apple, Google or Facebook decide to move into the management of your private medical information? Their involvement in health care can then continue through a doctor's diagnosis and even into monitoring a patient's chronic condition.
This is already happening, sometimes with unedifying results: in October Google announced its DeepMind Health unit would be absorbed by Google Health.
DeepMind Health was previously part of the artificial intelligence-focused research group DeepMind, a Google sibling; both divisions are owned by the organisation’s holding company Alphabet. At its launch, the unit’s chief, Mustafa Suleyman, reassured the public of its intentions: “DeepMind operates autonomously from Google, and we’ve been clear from the outset that at no stage will patient data ever be linked or associated with Google accounts, products or services,” he wrote.
Google says the restructure is necessary to allow DeepMind's flagship health app, Streams, to expand globally. The app, which was created to help doctors and nurses monitor patients for acute kidney injury (AKI), a severe condition, has since grown to offer a full digital dashboard for patient records.
So information patients initially consented to for a restricted use has now been absorbed into a global monolith. As a clinical task management and alerts app, Streams was developed on the back of personal health information generated by real patients in the UK’s National Health Service. Hardly a trustworthy move on Google’s part.
Writing recently in the online journal Stat, Michael L Millenson, an associate professor of medicine at Northwestern University Feinberg School of Medicine, said: "What's true about the way in which Google and its tech brethren handle your information today may not remain the way they use that information in the future."
The ubiquity of tech’s invasion of our medical privacy is striking. Let’s say you were recently diagnosed with diabetes. Your doctor may have diagnosed it using Google AI applied to your electronic medical record. They may then engage with a Google app (enabled with artificial intelligence) designed for remote professionals to detect diabetes-related eye disease. And, of course, at home you can choose to receive daily diabetes reminders from your Google Assistant.
Now at one level this is obviously good for your health. However, you may have quite reasonably chosen not to reveal your diabetes diagnosis to certain people or organisations. But on the back of some lax practices by a tech company, you may, in the future, find your personal health information readily accessible on the web.
As patients and citizens, we should have the right to full transparency about the digitised data collected about our bodies and ourselves. We should be able to choose with whom our information is shared and with whom we wish to engage. And as Millenson points out, “we deserve clear standards of accountability that forthrightly address the unprecedented medical and quasi-medical relationships now emerging”.
The situation may improve under the EU's new General Data Protection Regulation (GDPR). For now, it's up to us to be as tech savvy as possible while being extremely careful about which technology companies we trust with our personal medical information.