Artificial intelligence has the potential to ease the workload of both technicians in the shop and employees in the back office, whether by helping to find parts and filter VMRS codes or by scheduling interviews. But there’s more to making AI useful for your shop than simply selecting which technology to use.
Leveraging AI requires knowing not only when and how to use it but, as with any new technology, what the associated risks are and how those risks could be passed on to the fleet. According to the 2024 AI C-Suite Survey Report from Littler Mendelson, a law firm that focuses on labor and employment law, some company executives are avoiding AI entirely due to the risk of litigation.
“As the adoption of artificial intelligence (AI) spreads across corporate America, the risks are growing in kind,” the report stated. “Yet while AI is increasingly on C-suite and boardroom agendas, fewer than half of organizations report having policies in place that can help mitigate these risks.”
To understand how best to use AI, weigh the risks, and evaluate whether it’s right for your shop, here are some highlights from the report.
How to use AI in the shop
For some context, Littler’s report drew from 330 responses from executives throughout the U.S., including chief executive officers, chief legal officers, general counsels, chief human resources officers, chief operating officers, and chief technology officers. The survey covered both predictive and generative AI, with particular attention to executives’ perceptions of the value and risks of each form of the technology. As a general overview, 65% of the organizations polled said they regularly used generative AI, 44% of executives said their company had policies in place regarding generative AI, and 85% of respondents said they were concerned about litigation surrounding AI in HR functions.
One of the key factors the report examined was whether the polled companies have policies for AI usage. Of those polled, 31% did not have a policy, and of that group, 48% said they lacked one because they perceived AI use as a low risk to the organization. That might seem to be the case for fleets and shops using AI, especially in the shop, where AI might only be assisting with parts and inventory sorting or establishing predictive maintenance programs.
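As a rough illustration of that shop-floor use case, here is a minimal sketch of the kind of rule-based flagging a predictive maintenance program might start with. The unit data, field names, and thresholds below are hypothetical; in practice they would come from a fleet’s own telematics and service history.

```python
# Minimal sketch of rule-based predictive-maintenance flagging.
# Unit data, field names, and thresholds are hypothetical examples;
# a real program would pull from telematics and service records.

units = [
    {"unit": "T-101", "miles_since_service": 18_500, "active_fault_codes": 0},
    {"unit": "T-102", "miles_since_service": 24_200, "active_fault_codes": 2},
    {"unit": "T-103", "miles_since_service": 9_800,  "active_fault_codes": 1},
]

SERVICE_INTERVAL_MILES = 20_000   # assumed PM interval
FAULT_CODE_THRESHOLD = 1          # assumed tolerance before review

def needs_attention(unit: dict) -> bool:
    """Flag a unit that is past its PM interval or logging fault codes."""
    return (unit["miles_since_service"] >= SERVICE_INTERVAL_MILES
            or unit["active_fault_codes"] > FAULT_CODE_THRESHOLD)

for u in units:
    if needs_attention(u):
        print(f"{u['unit']}: schedule for inspection")
```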
However, it’s just as important for a fleet to consider when and how it leverages AI as it is for any other organization, especially given how new the technology is.
“Maybe 10 or 15 years from now, we’ll be able to get AI into a system specifically built where all of this predictive modeling is done,” Brian Antonellis, senior vice president of fleet operations at Fleet Advantage, told Fleet Maintenance in March. “But we want to make sure we don’t go too fast. We’re protective of our customers’ information, and we want to make sure the analytics we’re making are the right ones.”
These kinds of concerns make having a policy dictating when and how employees use AI critical to operations, and according to Littler’s report, company executives feel the same way. The number of organizations with an employee policy for generative AI has jumped significantly since 2023, when the firm’s Annual Employer Survey found that only 10% of companies had some sort of AI policy in place.
Read more: How AI is becoming the ultimate shop assistant
According to Littler, the increased necessity for such policies could be due to the rapid adoption, development, and accessibility of generative AI over the past year. And some legal experts, such as Nicole A. Ozer, the technology and civil liberties director for the American Civil Liberties Union of Northern California, expect that there could be increased litigation surrounding the technology.
But creating these policies might be tricky going forward.
“Creating an effective policy also requires technical knowledge and due diligence,” the report stated. “The latter poses additional challenges due to what is often a lack of transparency on the developer side or via vendors, who may be biased when articulating their tools’ risks.”
One way to build the technical literacy required for strong AI policies and usage is through training: not just on how to craft an effective prompt, but on how to protect a fleet or shop from liability while using this technology.
In the report, Littler found that 78% of respondents included data privacy topics in their AI training, 76% covered confidentiality and the protection of proprietary information, and 71% included information security/cybersecurity.
The potential and risks of AI in HR
Another key way AI can assist a shop is in recruiting: creating recruitment ads, scheduling interviews through chat-based hiring platforms, and collecting candidate contact information. Whiterail Recruits, for example, is a marketing services company that uses tools like these to help fleets recruit drivers and technicians more efficiently.
And according to Littler, companies are using AI in this way: 42% are using AI to create HR-related materials, such as job descriptions and onboarding documents, and 30% are using AI for recruiting, such as resume screening and candidate assessments.
But that doesn’t mean using AI in HR comes without risks, especially regarding potential regulation. In the survey, 52% of executives reported that regulatory uncertainty decreased their company’s use of AI in HR to a large or moderate extent.
A couple of examples include laws in Illinois (Amendment HB3773) and Maryland (HB1202), which regulate the use of AI in interviewing, while New York City requires employers that use AI for candidate screening to conduct an annual bias audit and inform candidates that AI is being used (Local Law 144 of 2021). Additionally, Colorado is expected to require an impact assessment for AI tools and risk management processes in 2026 (Senate Bill 24-205), and California is considering over two dozen pieces of AI-related legislation, one of which was recently vetoed by Gov. Gavin Newsom.
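For a sense of the math behind those bias audits, audits in the style of Local Law 144 center on selection-rate impact ratios: each group’s rate of advancement divided by the rate of the most-advanced group. Below is a minimal sketch of that calculation, using hypothetical screening data; the group names and numbers are illustrative only, not a compliant audit methodology.

```python
# Minimal sketch of the impact-ratio calculation used in
# selection-rate bias audits (e.g., NYC Local Law 144 style).
# The candidate data below is hypothetical, for illustration only.

from collections import defaultdict

# Each record: (demographic category, whether the AI tool advanced the candidate)
candidates = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

# Selection rate per group: candidates advanced / candidates screened
totals, advanced = defaultdict(int), defaultdict(int)
for group, was_advanced in candidates:
    totals[group] += 1
    if was_advanced:
        advanced[group] += 1

rates = {g: advanced[g] / totals[g] for g in totals}
best = max(rates.values())

# Impact ratio: each group's selection rate relative to the
# highest-rate group (1.0 means parity; lower values signal disparity)
for group, rate in sorted(rates.items()):
    print(f"{group}: selection rate {rate:.2f}, impact ratio {rate / best:.2f}")
```

A shop would not run this audit itself in most cases; the point is that the underlying metric is simple enough that fleets can ask vendors exactly which numbers their audits report.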