When The New York Times and The Observer broke the news in March 2018 that a little-known consulting firm named Cambridge Analytica had used private data from millions of Facebook users, allegedly without their consent, few readers could have foreseen the major implications for their health care. They certainly could not have known that Cambridge Analytica’s client list extended far beyond the victorious political campaigns of Ted Cruz and Donald Trump, to a major healthcare provider, New York University’s Langone Hospital, and a major insurance provider, the London-traded Hiscox Ltd.1 Even now, the public remains largely unaware of the threats to their personal health information from new and rapidly developing technologies.
For the past 2 years, our research has been uncovering these technological risks to health information privacy and raising concerns about potential abuses. For most of human history, personal health information has been collected, maintained, and analyzed predominantly by medical professionals within the controlled environment of diagnosis and treatment. Our laws reflect this limited scope. The Health Insurance Portability and Accountability Act (HIPAA), which famously governs the use, storage, and sharing of our health records, applies only to healthcare providers, insurance plans, and their “business associates.” The age of technological innovation has shattered this contained sphere of access. Now we have fitness trackers, smartphones, and smartwatches that collect biometric data; direct-to-consumer genetic testing services that analyze and sell our entire family history; and software applications and websites that consumers use to motivate, monitor, and share their progress toward a healthy lifestyle. HIPAA does not apply to any of these digital actors. We have increasingly shared our most sensitive personal information with corporations, governments, and neighbors—sometimes knowingly, often unwittingly—without any of the protections we previously demanded when the same data were in the hands of licensed experts.
In a forthcoming book chapter and an article in the American Journal of Law & Medicine, we identify five categories of harms that could arise from this “digital health revolution.”2
The first type of harm is the very act of privacy violation itself. Confidentiality is a foundational principle of our healthcare system. Without it, patients lose trust in physicians and medical institutions and shy away from research. Only when they feel their personal history is secure can patients make informed decisions with confidence in their caregivers. Secure communication is, therefore, at the core of our society’s health, the medical industry’s economic sustainability, and our personal autonomy to make choices about our bodies.
Second, firms may have greater opportunity to discriminate against employees based on preexisting conditions, disabilities, or genetic information. Such discrimination is prohibited under the Affordable Care Act (ACA), the Americans with Disabilities Act (ADA), and the Genetic Information Nondiscrimination Act (GINA), respectively, and employers are barred from requesting this information to prevent such misuse. Each law has an exception, however, allowing employers to collect these data from employees participating in “voluntary” workplace wellness programs, many of which now involve digital devices and websites that track private behaviors. Even if this information is used illegally, it is difficult to prove intentional discrimination if it has been accessed legally.
Third, new technologies can have adverse health impacts if: (1) they are used without a doctor’s oversight; (2) they are construed as medical advice; or (3) they are tied to medical diagnosis or treatment without demonstrating the necessary efficacy. The experimental evidence is mixed, for example, on whether fitness trackers actually improve their users’ exercise activity. The track record is even less certain for the apps and watches that companies are currently designing to monitor vital signs and other disease markers.
Fourth, devices that collect precise geographic information can put consumers’ physical safety at risk, especially if these data are not sufficiently encrypted or consumers are unaware of how widely they are being shared among their network of fellow device users. By one estimate, 70% of such devices transmit personal information without encryption.3
Finally, the digital health movement may exacerbate societal inequalities, both by penalizing the least advantaged consumers, who have the least access to and understanding of the new technologies, and by enabling employers and other institutions to reward the most active users, implicitly raising premiums and lowering wages for vulnerable populations.
Regulators have not been blind to these risks. The Food and Drug Administration (FDA), in particular, has considered whether to regulate “digital health” under its statutory authority over “medical devices.” To date, however, it has not done so. The 21st Century Cures Act reinforced the FDA’s reasoning by distinguishing products intended “for maintaining or encouraging a healthy lifestyle” from the FDA’s traditional definition of “medical devices” as intended for “the diagnosis, cure, mitigation, prevention, or treatment of a disease or condition.” The latest FDA guidance reasons that such “general wellness” devices pose “low risk” to users.
To read this article in its entirety, please visit our website.
-Anthony W. Orlando, Ph.D., and Arnold J. Rosoff, J.D.
This article originally appeared in the February issue of The American Journal of Medicine.