Mainebiz

August 19, 2024

Issue link: https://nebusinessmedia.uberflip.com/i/1525350


Sponsored Content

Addressing the AI in MAIne

By Justin B. Cary, Attorney, Drummond Woodsum | AI Practice Group

When ChatGPT was released in November of 2022, it felt like the world would change. A powerful technology, generative artificial intelligence (GAI), emerged beneath our fingertips. GAI is a subset of artificial intelligence (AI) that allows users to rapidly create new content in multimodal formats with simple commands. ChatGPT, a popular GAI product, promised to fill our workdays with less drudgery and more productivity. It largely delivered on that promise, streamlining to-do lists. With acuity, ChatGPT drafted letters notifying customers of outstanding bills, created personalized marketing brochures, analyzed large datasets, and conversed about actionable insights, among many other things.

But now, close to two years since the release of ChatGPT, an informal survey reveals that fewer than 25% of Maine businesses have experimented with GAI. When asked why, common reasons include the uncertainty associated with the nascent state of the technology and the risks of misuse.

As an attorney at Drummond Woodsum, I appreciate skepticism of lofty promises. In fact, last year, as I presented locally and nationally on my experience in AI/GAI law, I cautioned restraint in adopting GAI technology into businesses. I spoke to hundreds of employers about GAI's susceptibility to misuse, the unreliability of GAI outputs, the myriad data privacy and intellectual property issues with GAI, and the propensity for bias.

There is good reason to be cautious. Over the last year and a half, a parade of high-profile cases about GAI misuse has marched through the headlines. A few of these cases became canon. In Mata v. Avianca, a lawyer was sanctioned for submitting a legal brief that included non-existent citations generated by ChatGPT. In the Samsung GAI conversational leak, Samsung employees accidentally leaked sensitive company information by inputting confidential data into ChatGPT. The list goes on. These cases seem to confirm that GAI is Pandora's box: with increased capabilities come misuse, lawsuits, corner-cutting, and unauthorized disclosures.

But closer examination reveals a more important lesson. The through-line in each case is an individual's unfamiliarity with the technology. In Avianca, lawyers failed to grasp that GAI models are prone to hallucinations (the creation of fake but plausible information without disclosure to the user). In the Samsung case, employees misunderstood the data privacy implications of submitting a prompt to ChatGPT, and Samsung lacked a policy to guide employee conduct. In every GAI-run-amok case, decisionmakers failed to understand a basic capability or deficiency of GAI.

Like the internet, the true risk of GAI is not found in the technology; it is found in how humans understand it, or fail to do so. In many cases, the problem is not the presence of artificial intelligence but the absence of human comprehension.

Taking a passive approach to GAI has legal risks: you might not use GAI to streamline work, but your employees will. GAI materializes unbidden, whether as GAI-created deepfakes or scams, GAI-interfacing software with lax data privacy protections, or undisclosed subcontractor use of GAI, catching the uninitiated decisionmaker flat-footed. The passive approach also has business risks: money will likely be spent on repackaged GAI services that are free elsewhere, time wasted on tasks that GAI could aid, and competitive edges lost to GAI-savvy competitors.

To remedy this, our Artificial Intelligence Practice Group has been training businesses on best practices for using GAI, drafting policies for GAI in the workplace, counseling clients on negotiating data privacy agreements and service agreements with an eye to GAI-related issues, and providing presentations and webinars that teach the first-draft approach to GAI and IP/confidential-information-safe prompt engineering.

While clients tell me that these services help, the first step is free: create an account with a trusted GAI service (try ChatGPT, Google's Gemini, or Microsoft Edge) and safely experiment. Ask questions you know the answers to and check the GAI's work.

Then, if you find yourself perplexed, concerned, or excited about what GAI means for your business, consider reaching out to Drummond Woodsum's Artificial Intelligence Law Practice Group. Call 207-253-0568 or email jcary@dwmlaw.com to start the conversation.
