Addressing workers’ concerns about AI

By Bill Surrette

Artificial intelligence (AI) and machine learning (ML) solutions are being adopted across every industry today. Quite often, these initiatives involve deploying ML models into operational settings, where the model output ends up as a widget on the screen or a number on a report put in front of hundreds, if not thousands, of front-line employees. These could be underwriters, loan officers, fraud investigators, nurses, teachers, claims adjusters, or attorneys. No industry is immune to these transformations.

These initiatives are typically driven from the top down. Management monitors KPIs and looks for ways to improve them, and increasingly, AI/ML initiatives are identified as a means to this end. Certainly, there’s plenty of communication among executive, finance, data science, and operational leaders about these initiatives. Unfortunately, in many of the organizations I’ve worked with, the group most commonly left out of the discussion is the front-line employees.

As initiatives are rolled out and the widgets or other indicators are incorporated into modified daily standard operating procedures (SOPs), the impact of these initiatives on the morale of front-line workers is often overlooked. If managers don’t proactively educate the workforce toward a healthy perspective, they leave those workers’ interpretations to chance.

In this article, I’ll describe some of the common, often latent, reactions employees have to AI/ML initiatives, along with an approach that management can adopt to foster a positive, informed mindset. I won’t spend much time on the consequences of failing to manage the reception of AI by the workforce. Managers already know that every initiative they roll out will be only as successful as the front-line workers want it to be. Fail to get them on board, and these initiatives are doomed.

The spectrum of reactions

When corporate leaders unveil new AI/ML initiatives that will impact the daily routines of front-line professionals, a spectrum of reactions emerges:

- Insecurity: Some employees worry that the initiative means management thinks they are doing a bad job.
- Fear for job security: Others see the initiative as the first step toward eliminating their roles.
- Feeling second-guessed: Some bristle at the implication that an AI system knows more about their customers than they do.
- Skepticism: Still others are simply wary of yet another corporate initiative.

All of these reactions will be present in your workforce when you roll out a new AI/ML initiative. Each should be addressed, and a healthy, informed perspective shared across the organization.

Fostering a healthy perspective

If we don’t want the employees to adopt these perspectives—which are largely driven by a lack of understanding—we have to decide what perspective we do want them to have and then give them the training that is required.

Communicating a healthy perspective on an AI/ML initiative may look something like the following, which was written for an operation such as an insurance claims organization.

As the thousands of calls and emails flow through every day, we need to take the right actions at the right times. Knowing what actions to take and how to handle each claim comes from knowledge, and we gain this knowledge through experience, that is, through the data we collect from each interaction. This knowledge is, of course, part of what makes our employees on the front lines so valuable; our traditional source of operational knowledge is our employees’ minds. But there is another source of data: the servers in our buildings or in a remote data center. Our employees have observations and insights that don’t exist on those servers. Similarly, those servers hold insights that don’t exist in any of our minds.

It would be irresponsible to let the information on those servers just sit there. That’s like an oil company sitting on a reserve and refusing to drill. If the server is the oil reserve, machine learning is the drill that extracts the signal. So the next time you face one of the many decisions you make each day about what action to take, we want you to lean on your experience and apply your judgment, and, of course, we want you to be as informed as possible when making that judgment.

That’s what these AI/ML models do: they extract the patterns present in the data sitting in our data centers, and once we put those patterns in front of you, it’s up to you to take them the rest of the way to the right decision. The models will tell you, “Of all the claims we get that look like this, 80% go into litigation.” (Note that this means 20% do not!) Your job requires judgment; we are giving you all the information we have so that you can make the most informed judgment possible.
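Stepping outside the sample message for a moment, technically minded readers can picture what sits behind a statement like “80% of claims that look like this go into litigation.” The sketch below is purely illustrative and assumes a hypothetical historical claims dataset with made-up features; it is not drawn from any real claims system. It makes the same point the message makes: the model surfaces a probability, and the adjuster still makes the call.

# Illustrative only: a toy model that estimates litigation risk from
# hypothetical historical claims, leaving the decision to the adjuster.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Made-up history of past claims and whether each one ended up in litigation.
history = pd.DataFrame({
    "claim_amount":      [5000, 42000, 1200, 87000, 3500, 60000],
    "attorney_involved": [0,    1,     0,    1,     0,    1],
    "litigated":         [0,    1,     0,    1,     0,    0],
})

# The model learns patterns from the historical data.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(history[["claim_amount", "attorney_involved"]], history["litigated"])

# A new claim arrives; the model reports a probability, not a decision.
new_claim = pd.DataFrame({"claim_amount": [55000], "attorney_involved": [1]})
risk = model.predict_proba(new_claim)[0, 1]
print(f"Estimated litigation risk: {risk:.0%}")  # the adjuster decides what to do with it

In practice the features, model, and thresholds would come from the data science team; what matters for the front-line worker is that the output is a risk estimate, not a verdict.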

Being real with employees

Let’s take a minute to see how we can directly address some of the concerns we discussed earlier.

For the insecure folks in the group who worry that the AI/ML initiative means they are doing a bad job, let them know that you hear them, reassure them that this is not the case, and reiterate the perspective above.

The concerns about job security are, at some level, well-founded. Is AI taking anyone’s job today? The short answer is no. Companies typically don’t roll out AI/ML initiatives and then look to lay people off, but we all know that AI does threaten some jobs. We should acknowledge this to our teams, lest we seem disconnected and out of touch. At the same time, we should stress that AI is just a new technology, like the many other technologies that have been introduced over the centuries. Over time, some jobs will get automated away.

I have heard it said that if a machine can do a job, it should. This makes sense because people are far too capable to be assigned to rote tasks. Let the machine do whatever it can so that people are freed up to do the truly nuanced and complex work. What this means is that, as the years pass, our workforces will typically change slowly through attrition, not abruptly through layoffs. So, will AI replace any of us next week? Probably not. Will we have fewer job prospects in 10 to 20 years because of AI? Potentially yes. The answer is to make sure we keep evolving our skills to stay current and in demand.

Regarding the perception that AI knows more about customers than the employees do, reiterate the perspective above: AI is a complement to human judgment, not a replacement. Augmented intelligence is a more apt name because the human remains in the loop.

For those employees skeptical of corporate initiatives in general, recognize that this skepticism can stem from many sources and is a separate issue to address, not one specific to AI/ML.

Education and communication are key

In the journey of AI/ML integration, effective communication with everyone, including the front-line worker, is paramount if these initiatives are to succeed. Organizations can hold town halls, all-hands meetings, focus groups, and educational sessions. They can publish wiki articles, send regular email newsletters, and record video interviews with peers. They can even offer ongoing classes on AI/ML, not to turn front-line workers into data scientists but to open the black box and demystify the technology. Educate your teams on what AI is and what it isn’t. The less opaque we make it, the less mysterious and threatening it will be.

Successful integration of AI and ML models into daily operations hinges on understanding and addressing the reactions of front-line employees. When front-line workers are not informed, we leave it to their imaginations to make sense of it all. All too often, imagination fills in the blanks with fears, insecurities, and anxieties, and people assume the worst.

Educate front-line workers on the realities of AI. They will be happier and more productive, and your AI/ML initiatives will be more successful.

Bill Surrette is senior data scientist at CLARA Analytics, provider of artificial intelligence technology for insurance claims optimization. Bill has more than 20 years of experience in the property and casualty insurance industry, having worked for several of the largest carriers in the US in both actuarial and data science roles. He also spent several years at an AI/ML startup consulting with customers on their data science projects.

Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.
