Gordon Krass – President and CEO, IntelliGuard™
Artificial Intelligence (A.I.) is becoming more and more integrated into our lives. Its applications continue to expand across every industry, and it has a growing effect on each of us as individuals. While the benefits of A.I. are beyond debate, it is important to discuss the impact it is having on us as individuals.
To begin, let’s understand what A.I. is and how it is used today. A.I. consists of software algorithms that work off large data sets, commonly known as Big Data. What sets A.I. apart from other software is that it has learning capabilities, known as Machine Learning or Deep Learning: it improves over time, attempting to mimic human cognition. Using large data sets and vast computational power, A.I. has now surpassed what we as humans can do in narrow tasks. However, it works only within a single domain and is not capable of generalized intelligence or common sense. In other words, it’s not human.
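To make “learning from data” concrete, here is a minimal sketch in Python using the scikit-learn library. The toy numbers and labels are invented purely for illustration; real systems train on millions of records, but the principle is the same: no rules are programmed in, and the model infers a pattern from examples.

    # Minimal sketch of machine learning: the model is not given rules;
    # it infers a pattern from example data. All numbers are invented.
    from sklearn.linear_model import LogisticRegression

    X = [[1.0, 0.2], [0.9, 0.4], [0.2, 0.9], [0.1, 0.8]]  # toy observations
    y = [1, 1, 0, 0]                                      # known outcomes

    model = LogisticRegression()
    model.fit(X, y)                     # "learning": fit parameters to data
    print(model.predict([[0.8, 0.3]]))  # apply the learned pattern -> [1]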
There are already remarkable applications of A.I. in a number of industries. Within healthcare, we are seeing A.I. technologies that can read radiology scans to identify certain cancerous tumors and predict their rate of growth. A.I. is being used to identify high-risk diabetes patients. Another recent example is using A.I. to identify high-risk discharge patients in an effort to prevent readmission. The use of A.I. is in its infancy and will proliferate quickly over time.
While there are many wonderful applications for A.I., there are some recent trends that should give us pause. Companies have begun promoting the use of A.I. for surveillance and/or profiling. Specifically, they are using A.I. to monitor clinicians’ administration and management of controlled substances. Using data from EMR and medication dispensing systems, along with other relevant data feeds, they create a score for each individual clinician to determine who may be diverting. While the vendors have benign names for this scoring system, I will call it out for what it really is: a Criminal Propensity Score. Every clinician who dispenses medications in a hospital setting gets a score, and the higher the score, the higher the presumed likelihood that you are a criminal. High-scoring individuals have their names sent to administrators, who will step up surveillance and may even use the score to remove you from your job or, worse, press criminal charges. The problem is that a high score may not be an accurate reflection of your clinical role, or may be built on inaccurate data sources. Damaging someone’s reputation or career on that basis would be unjustified.
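To see how such a scoring system might work mechanically, here is a deliberately simplified sketch. The clinician IDs, counts, threshold, and the z-score approach are all my assumptions for illustration; no vendor’s actual algorithm is being reproduced here.

    # Hypothetical per-clinician "diversion score": compare each clinician's
    # controlled-substance dispensing count against the peer average and
    # flag statistical outliers. Illustrative only; data is invented.
    from statistics import mean, stdev

    dispense_counts = {"RN-001": 112, "RN-002": 98, "RN-003": 105,
                       "RN-004": 160, "RN-005": 101}

    mu = mean(dispense_counts.values())
    sigma = stdev(dispense_counts.values())

    # The "score" is how many standard deviations above peers you sit.
    scores = {rn: (n - mu) / sigma for rn, n in dispense_counts.items()}

    FLAG_THRESHOLD = 1.5  # arbitrary cutoff: above this, a name goes to admin
    flagged = [rn for rn, s in scores.items() if s > FLAG_THRESHOLD]
    print(flagged)  # ['RN-004']

Note that in this toy example RN-004 is flagged purely for a higher count, which could just as easily reflect a busier unit or a float assignment as any wrongdoing.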
I know diversion is an issue, and technology can be a compelling method to mitigate it. However, putting a criminal score on a clinician is fundamentally wrong. A.I. is only as good as the data that feeds its predictive analytics, and these systems assume that data is always 100% accurate. The data that feeds these surveillance systems comes from automated medication dispensing systems relying on bar-code technology, and it is well known that bar-code systems used in hospitals are not accurate. In a recent conversation, a supply chain expert informed me that in best-practice cases bar-code accuracy is 80–85%, that very few organizations reach even that level, and that the real number is more like 65%. So now we are using inaccurate data to identify potential diverters within the clinical staff? It is deeply disturbing that a hospital using these A.I. systems could be ruining a clinician’s career with accusations driven by inaccurate data. Sorry, your score is too high.
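A back-of-the-envelope calculation shows how bad this gets. Assume, purely for illustration, the 65% accuracy figure above and a hypothetical system that flags any clinician whose records show more than 20 discrepancies in 100 transactions. The binomial math says an entirely innocent clinician is almost certain to be flagged:

    # If each record independently has a 35% chance of being wrong (the 65%
    # accuracy figure above), how likely is an innocent clinician to exceed
    # a flagging threshold? All parameters are assumed for illustration.
    from math import comb

    n = 100          # dispensing records reviewed for one clinician
    p_error = 0.35   # per-record error rate implied by 65% accuracy
    threshold = 20   # hypothetical cutoff: >20 discrepancies triggers review

    p_flagged = sum(comb(n, k) * p_error**k * (1 - p_error)**(n - k)
                    for k in range(threshold + 1, n + 1))
    print(f"Chance an innocent clinician is flagged: {p_flagged:.1%}")
    # ~99.9% -- with data this noisy, a high score says almost nothing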
Also keep in mind the law of unintended consequences. Let’s think of this in terms we are all familiar with: credit scores. Credit scores were introduced in the financial industry as A.I. was beginning to be used. You can take a person’s credit history, which has a very accurate data source, and create predictive scores that indicate creditworthiness and the likelihood that the person will make payments on time. As the practice of credit scoring grew, so did consumer awareness. We learned which behaviors moved the score in both directions, not to mention what happens when bad data gets into our credit profile. Good luck fixing that. For the most part, this has made us more responsible consumers. We now get updates on our credit score, which we look forward to because we did something we hoped would move our score upward. We have bragging parties: I’m 790. Well, I’m 809. This scoring system drives our behavior. Will the same thing happen within hospitals that use these scoring systems? Will a nurse withhold pain meds from a patient in pain before the scheduled time because administering them would increase the nurse’s score? Will anesthesiologists reduce the medication they deliver to patients to avoid the risks associated with an increasing score? If you think this would never happen, think again. Add the human element of self-preservation, and you begin to understand the law of unintended consequences.
Just because we can doesn’t mean we should. There are better ways to reduce the risks associated with diversion: technologies that rely on factual data and proof, not accusations. Systems that kick out a criminal score are dangerous and violate our individual rights. Clinicians, nurses, doctors, and other hospital professionals should object to this type of surveillance, and administrators should think through the implications and liabilities they will subject themselves and their employees to before committing to these A.I. systems.