New and Future Healthcare Industry Security Threats
The healthcare industry constantly faces security concerns. Current threats include phishing, ransomware, data breaches, and DDoS attacks. With recent technological advancements, however, healthcare must prepare for a new threat: artificial intelligence (AI). The emergence of AI tools like ChatGPT will certainly change the game.
According to ChatGPT itself, the technology has the “potential to revolutionize healthcare in various ways,” including areas involving medical data, personalized healthcare, patient information and engagement, and clinical research, to name a few. Hospitals are embracing this new technology and its potential to improve different areas of care, with many respected institutions currently piloting programs to take full advantage of AI’s capabilities.
However, the integration of AI opens up a plethora of potential cybersecurity risks. A security intelligence company recently reported a rise in botnets, a next-generation threat that requires no coding skills at all: a cyber-Frankenstein that stitches together AI and existing code libraries to trigger ransomware attacks, keylogging, and more.
To safeguard against such attacks, the healthcare industry must fully understand the extent of the risks associated with AI and how to combat them.
AI Risks
Financial and Patient Care Consequences
Any hacker can create a fictitious email with embedded links to a malicious program. AI and machine learning (ML) algorithms make it easy to subject a hospital or clinic to phishing attacks, fake patient or patient-record data, and similar schemes, and if a hospital or clinic staffer is caught unaware, the results can be disastrous. An assistant director of the FBI’s cyber division stated, “Attacks can occur at every stage of machine learning (ML) and AI development and deployment cycles.” Hospitals, clinics, and healthcare partners should be watching for these scams, which put both an organization’s finances and patient care at risk.
Potential Scams and Their Consequences
A recent AHIMA SmartBrief confirmed that the average healthcare data breach now costs $10.93 million, an 8% jump from a year ago. By comparison, the average cost of a data breach across all industries is $4.45 million. In the first six months of 2023, the U.S. Department of Health and Human Services (HHS) reported more than 300 cyberattacks, with the top 10 alone affecting over 30 million Americans.
Product Abuse and Synthetic Accounts
The Health-ISAC Executive Summary Annual Threat Report identifies product abuse and synthetic accounts as security threats on the rise. Product abuse typically involves an organization with an internet-facing product (such as a web login portal) being targeted by a hacker who takes over accounts via proxy networks, compromised user credentials, or customized crimeware.
Synthetic accounts are fake credit profiles that exploit lax verification and authorization to direct medical billing and other health-related payments to accounts with no real patient behind them. Both threats can be amplified with AI or ChatGPT-style programs; if a profile looks real enough, what is there to suspect?
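To make the account-takeover pattern concrete, here is a minimal detection sketch in Python. It assumes a simplified login-event log (timestamp, source IP, username, success flag) and illustrative thresholds; real portals would layer on device fingerprinting, proxy-network reputation, and velocity checks.

# Sketch: flag possible credential-stuffing / account-takeover attempts by
# counting failed logins per source IP over a sliding window. The log format
# and thresholds are illustrative assumptions, not any product's API.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # look-back window (assumption)
MAX_FAILURES = 20                # failures per IP before alerting (assumption)

def find_suspect_ips(events):
    """events: iterable of (timestamp, source_ip, username, success) tuples."""
    failures = defaultdict(list)   # source_ip -> timestamps of recent failures
    suspects = {}
    for ts, ip, user, success in sorted(events):
        if success:
            continue
        bucket = failures[ip]
        bucket.append(ts)
        # Discard failures that have aged out of the sliding window.
        while bucket and ts - bucket[0] > WINDOW:
            bucket.pop(0)
        if len(bucket) >= MAX_FAILURES:
            suspects[ip] = len(bucket)
    return suspects

# Example: 25 rapid failures from one address, each with a different username,
# the pattern typical of stuffed credentials or synthetic accounts.
now = datetime.now()
log = [(now + timedelta(seconds=i), "203.0.113.7", f"user{i}", False)
       for i in range(25)]
print(find_suspect_ips(log))   # {'203.0.113.7': 25}

A burst like this is the kind of signal a security team might feed into rate limiting or step-up authentication before any account is actually drained.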
Public Data Breaches
Patient records can make scammers a lot of money. About 80% of data breaches expose personal information, and stolen medical records can sell for up to $1,000 per record on the black market. According to the HIPAA Journal, an average of 1.94 healthcare data breaches of 500 or more records were reported per day in 2022. Granting access without verifying who holds it, and where the technology comes from, poses incredibly dangerous risks.
Facial Recognition
Recently, a government agency suffered an attack in which malicious actors exploited facial recognition models. They stole nearly $77 million by combining high-definition face images bought on the black market with AI-generated videos that fooled facial recognition software. Apply this to the healthcare industry: a forgery convincing enough could cause a data breach costing up to $7.13 million, according to the Ponemon Institute.
Malicious Use of ChatGPT
Individual patients can also be harmed by cyberattacks. Along with the above risk of losing a medical record, patients could have their personal and sensitive data released to unauthorized parties, causing financial and personal harm. Even worse, a patient could be deliberately and maliciously misdiagnosed by ChatGPT, receive the wrong treatment, and be harmed physically. If a medical device using AI, such as surgical robotics, were tampered with, the results could be fatal. This may be an extreme case, but it remains a possibility given the many ways AI is being integrated into healthcare delivery, and by extension the many ways it can be abused.
A radio host from Georgia recently filed a defamation lawsuit against ChatGPT’s maker after the chatbot falsely tied him to an embezzlement case he had nothing to do with. This speaks to the broader individual harm ChatGPT can cause if left unregulated.
AI Benefits
We have addressed some of the risks of using artificial intelligence in healthcare, but there are benefits as well. For example, ChatGPT, the most recognizable AI tool, can analyze cybersecurity incidents and make strategic recommendations far faster than humans starting from scratch, helping inform both immediate and long-term defense measures.
AI can help businesses, including healthcare organizations, better understand and improve their security by drafting reports of security incidents from the data fed to it, freeing up IT staff to focus on other critical activities.
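As a rough illustration, an incident-report draft could be generated with a few lines of Python. This sketch assumes the OpenAI chat-completions REST endpoint and an API key in the OPENAI_API_KEY environment variable; the model name and prompt are placeholders, and no PHI or confidential log data should ever be sent to a third-party service without the appropriate agreements in place.

# Sketch: drafting a first-pass security-incident summary with an LLM.
import os
import requests

def draft_incident_report(raw_events: str) -> str:
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",   # placeholder model name (assumption)
            "messages": [
                {"role": "system",
                 "content": ("You are a security analyst. Summarize the incident "
                             "log below: timeline, affected systems, and "
                             "recommended next steps. Do not invent details.")},
                {"role": "user", "content": raw_events},
            ],
        },
        timeout=60,
    )
    resp.raise_for_status()
    # The draft still needs review by a human analyst before distribution.
    return resp.json()["choices"][0]["message"]["content"]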
Generative AI is anticipated to be able to take cybersecurity intelligence from multiple sources almost instantaneously, identify patterns in the data, create a cheat sheet of sorts of adversarial tactics, techniques, and procedures (TTPs), and recommend cyberdefense strategies. This would allow security teams to adjust their security controls quickly.
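A minimal sketch of that “cheat sheet” idea: merge reports from several intelligence feeds and tally which MITRE ATT&CK techniques recur, so defenders know where to tighten controls first. The feed format here is an illustrative assumption.

# Sketch: aggregate threat intel and rank recurring ATT&CK techniques.
from collections import Counter

# Three toy feeds; T1566 = phishing, T1110 = brute force, T1486 = ransomware.
feed_a = [{"source": "vendor-a", "techniques": ["T1566", "T1110"]}]
feed_b = [{"source": "isac", "techniques": ["T1566", "T1486"]}]
feed_c = [{"source": "osint", "techniques": ["T1110", "T1566"]}]

def top_techniques(*feeds, n=3):
    counts = Counter()
    for feed in feeds:
        for report in feed:
            counts.update(report["techniques"])
    return counts.most_common(n)

print(top_techniques(feed_a, feed_b, feed_c))
# [('T1566', 3), ('T1110', 2), ('T1486', 1)] -> phishing defenses come first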
In healthcare, AI can diagnose complex patient issues, especially in the E.R., and recommend treatment options much more quickly and accurately than a physician or other healthcare worker running tests and other diagnostic procedures to diagnose a patient’s ailment.
Some hospitals are using AI to ease medical workers’ documentation burden. Others use it to integrate data from images, reports, and spreadsheets, to create automated transcripts of patient encounters, or to detect irregularities in patient care. In other words, artificial intelligence can help interpret data, build predictive models, streamline tasks and procedures, and enhance communication with patients.
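As a toy illustration of detecting irregularities, the sketch below flags vital-sign readings that deviate sharply from a patient’s baseline using a simple z-score test; production systems would use far richer models and clinically validated thresholds.

# Sketch: flag readings far from the mean. Values and threshold are
# illustrative assumptions, not clinical guidance.
from statistics import mean, stdev

def flag_anomalies(readings, threshold=2.5):
    """Return (index, value) pairs more than `threshold` std-devs from the mean."""
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [(i, r) for i, r in enumerate(readings)
            if abs(r - mu) / sigma > threshold]

heart_rate = [72, 75, 71, 74, 73, 70, 140, 72, 74]   # one suspicious spike
print(flag_anomalies(heart_rate))   # [(6, 140)]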
Some physicians are predicting that in the future, AI will be used to diagnose a wide range of anomalies and afflictions and provide continuously adjusted treatments in milliseconds versus the human time required to gather and review data, make a diagnosis, and determine treatment options.
Of course, no one expects AI to replace humans in care delivery in the foreseeable future, but it can assist humans in delivering better, tech-enabled healthcare services.
According to one hospital, in-depth security strategies and engaged executive champions help prioritize cybersecurity efforts within the medical practice. Education about AI and ML programs is on the rise among healthcare industry veterans, as well as congressional representatives, so both groups can understand the new technology and prevent cyberattacks carried out with such programs.
The AI Disclosure Act, which has been referred to the House Committee on Energy and Commerce for consideration, is intended to alleviate concerns about AI and ML programs by proposing that they be regulated at the federal level. This could help combat possible disinformation cyberattack campaigns, and would require any AI usage to be disclosed to patients.
In any case, the most important defense against a cyberattack, not only for the healthcare industry but for any internet-connected enterprise, is diligent awareness and an industry-wide dedication to security.
IKS Health’s Role in the Healthcare Industry
Security and safeguarding customer data have always been our highest priority at IKS. We are committed to continually improving our security posture through a multilayered approach. Today, IKS has implemented policies, controls, technologies, and training in a comprehensive security maturity model validated by annual third-party SOC 2 Type 2, ISO 27001, and KLAS Censinet audits.
IKS utilizes an internationally recognized Information Security Management System, with policies and procedures for every aspect of security, encryption of data at every stage, frequent vulnerability scanning, and tabletop Business Continuity and Security Incident exercises for all areas of the business. The deployment of physical and environmental security remains key in our efforts against cyberattacks. We also maintain a robust security and compliance education program for all employees that fosters an environment where security and compliance come first.
Learn more about IKS’s commitment to security and how we serve and support our clients.
Marty Serro - Chief Information Officer, Chief Security Officer
Marty has over 35 years of diversified technology management experience in support, development, security, and implementation across varied industries. During his tenure, he has built a global support organization with innovative tools and a high-touch customer support infrastructure. Under his leadership, our security team has built an industry-leading security framework that ensures client data protection at all times. Marty leads the company’s SOC 2, ISO 27001, and HITRUST annual certifications and has established a robust security education and training program for all staff.