Agency and Generative AI: Does your Business Have a Rogue Artificial Employee?

05.09.24

By Matthew Mitchell

For all their power, for all the advanced technology under the hood, and for all the information they consume, process, and synthesize, generative AI systems like ChatGPT sometimes get it wrong. One phenomenon, termed “hallucination,” occurs when an AI system extrapolates to fill gaps in its source knowledge, producing outputs that are inaccurate yet appear credible on their face.

Hallucinations are particularly problematic in customer-facing systems such as chatbots. As these systems become more sophisticated, it becomes increasingly difficult for customers to tell whether they are speaking with a live person or an artificial intelligence. As the use of chatbots and similar systems grows, businesses must navigate the risks posed by artificial systems that could interact with customers in misleading or otherwise inappropriate ways.

An Unsettled Legal Landscape Regarding Liability for Customer-Facing Artificial Intelligence

Traditionally, the doctrine of respondeat superior has been used, subject to exceptions, to hold businesses accountable for the unlawful actions of their employees committed within the scope of employment. The doctrine and its development over time present interesting yet unsettled questions as applied to artificial intelligence. As generative systems such as chatbots become less and less distinguishable from people, who should be liable in the event an artificial system engages in misleading or borderline fraudulent conduct?

For example, should the system be considered an employee with apparent authority to act on behalf of the business using it? If not, should the developer of the system be liable for its flaws, both to the company using the technology and to that business's customers?

While questions such as these will need to be answered in the future, the law, as it currently stands, is woefully behind in developing a cohesive framework for direct and derivative liability for those developing and deploying artificial intelligence. What seems clear, however, is that businesses must be cognizant of the risks of any system they employ that gives customers the impression they are dealing with a human being. Absent a disclaimer that an artificial system is being used, the possibility that a customer will rely to their detriment on inaccurate information provided by the system remains very real, and it is unclear who should bear liability for the actions of a system that, in many ways, is designed to mimic a human being.

What Businesses Can Do To Protect Themselves From Inaccurate AI-Generated Content 

To reduce the risks that can arise from inaccurate or harmful AI-generated content, businesses should develop and implement clear and thorough written policies regarding their use of the technology. Specifically, businesses should:

  • Enact risk management protocols and review systems to ensure all AI-generated content is thoroughly vetted for inaccuracies or problematic language. 
  • Educate all employees about the unreliability of these tools’ outputs, their propensity to “hallucinate,” and the legal risks that can arise from such errors. 
  • Carefully review any agreements with generative AI software vendors and consider adding indemnification clauses for injuries or damages caused by the software’s use.
  • Identify areas where potential AI biases could lead to discriminatory outcomes (especially in hiring or credit and lending decisions) and work with experienced counsel to carefully and regularly review the technology’s use in these contexts to identify potential problems. 
  • Carefully vet external AI vendors and inquire about what efforts they have taken to reduce biases in their algorithms and programs.
  • Mandate thorough human review of any AI-assisted decision-making processes.

If you have any questions or concerns regarding your business’s use of generative AI and the steps to reduce potential exposure arising from harmful or inaccurate content, please contact Matthew Mitchell at Maddin Hauser.