Microsoft to Indemnify Users of Copilot AI Software – Leveraging Indemnity to Help Manage Generative AI Legal Risk
The rapid growth of generative AI (GAI) has taken the world by storm. The uses of GAI are many, as are the legal issues. If your employees are using GAI, they may be exposing your company to unwanted and potentially avoidable legal risks. Some companies are simply saying no to employee use of AI, an approach reminiscent of how some companies “managed” employee use of open source software years ago. Banning a valuable technology is the “safer” approach, but it prevents a company from obtaining that technology’s many benefits. For many GAI-related legal issues, the risks can be managed by developing a thoughtful policy on employee use of GAI.
Solving Open Source Problems with AI Code Generators – Legal Issues and Solutions
AI-based code generators are a powerful application of generative AI. These tools assist developers by using AI models to auto-complete or suggest code based on developer inputs or tests. They raise at least three types of potential legal issues.
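For context on how these tools typically operate, the sketch below (a minimal illustration, not the internals of any particular product) asks a hosted model to complete a partially written Python function using the OpenAI Python client; the model name, prompt wording, and example function are assumptions chosen for illustration.

```python
# Minimal sketch of an AI code-suggestion call, for illustration only.
# The model name, prompt wording, and partial function are assumptions,
# not the internals of any particular code-generation product.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The "developer input" a tool might send: a partially written function.
partial_code = (
    "def parse_config(path: str) -> dict:\n"
    "    # TODO: load a JSON config file and return it as a dict\n"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever your tool uses
    messages=[
        {"role": "system", "content": "Complete the user's Python function."},
        {"role": "user", "content": partial_code},
    ],
)

# The suggested completion is what a developer would review before accepting it,
# including checking whether it reproduces licensed open source code.
print(response.choices[0].message.content)
```

As the post’s framing suggests, the legal questions turn largely on what the underlying model was trained on and what a returned suggestion may reproduce, which is why the developer’s review of each suggestion matters.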
Valve Rejects Games with AI Assets Over Copyright Concerns
Valve has reportedly adopted a policy of rejecting games that use AI-generated content over infringement concerns. A developer posted on the “aigamedev” subreddit that, after submitting a game with some assets that were obviously AI-generated, he received a rejection notice from Valve citing copyright concerns.
The Need for Generative AI Development Policies and the FTC’s Investigative Demand to OpenAI
The Federal Trade Commission (FTC) has been active in enforcement actions involving various AI-related issues. For examples, see Training AI Models – Just Because It’s “Your” Data Doesn’t Mean You Can Use It and You Don’t Need a Machine to Predict What the FTC Might Do About Unsupported AI Claims. The FTC has also issued a report to Congress (Report) warning about various AI issues. The Report outlines significant concerns that AI tools can be inaccurate, biased, and discriminatory by design, and that they can incentivize reliance on increasingly invasive forms of commercial surveillance. Most recently, the FTC instituted an investigation into the generative AI (GAI) practices of OpenAI through a 20-page investigative demand letter (Letter).
Congress Proposes National Commission to Create AI Guardrails
The U.S. Congress has introduced a bipartisan bill that would create a National AI Commission (“Commission”). A focus of the Commission will be to ensure, through regulation, that the United States mitigates the risks and possible harms of AI, protects its leadership in AI innovation, and takes a leading role in establishing necessary, long-term guardrails. The Commission will also review the Federal Government’s current approach to artificial intelligence oversight and regulation, how that approach is distributed across agencies, and whether those agencies have the capacity and alignment to address such oversight and regulation.
Training AI Models – Just Because It’s Your Data Doesn’t Mean You Can Use It
Many companies are sitting on a trove of customer data and are realizing that this data can be valuable for training AI models. However, what some companies have not thought through is whether they can actually use that data for this purpose. Often this data was collected over many years, long before the company thought to use it for training AI. The potential problem is that the privacy policies in effect when the data was collected may not have contemplated this use. Using customer data in a manner that exceeds, or is otherwise not permitted by, the privacy policy in effect at the time the data was collected can be problematic, and has led to class action lawsuits and FTC enforcement actions. In some cases, the FTC has imposed a penalty known as “algorithmic disgorgement” on companies that use data to train AI models without proper authorization. The penalty is severe: it requires deletion not only of the improperly used data but also of the models and algorithms built with it, which can be an incredibly costly result.
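As a purely illustrative sketch (the field names, policy versions, and opt-out flag below are hypothetical, not drawn from any particular privacy policy or FTC order), one way to operationalize this concern is to gate the training set on the privacy-policy version in effect when each record was collected:

```python
# Hypothetical sketch: include a customer record in an AI training set only if
# the privacy policy in effect when it was collected permitted that use and the
# customer has not since opted out. Field names and policy versions are
# illustrative assumptions, not drawn from any actual policy or FTC order.
from dataclasses import dataclass

# Privacy-policy versions that expressly permitted use of customer data for model training.
POLICY_VERSIONS_ALLOWING_TRAINING = {"2023-01", "2024-06"}


@dataclass
class CustomerRecord:
    customer_id: str
    policy_version: str  # policy version in effect when the data was collected
    opted_out: bool      # whether the customer later opted out of training use
    payload: dict        # the data that would feed the model


def eligible_for_training(record: CustomerRecord) -> bool:
    """Return True only if training use was authorized at collection and not revoked."""
    return (
        record.policy_version in POLICY_VERSIONS_ALLOWING_TRAINING
        and not record.opted_out
    )


def build_training_set(records: list[CustomerRecord]) -> list[dict]:
    """Filter records so only properly authorized data reaches the training pipeline."""
    return [r.payload for r in records if eligible_for_training(r)]
```

Keeping a documented, auditable filter like this in the training pipeline is one way to show that only properly authorized data reached the models.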
ChatUSG: What Companies Doing Business with the Government Need to Know About Artificial Intelligence
While you were asking ChatGPT to create a three-course menu for the upcoming book club you’re hosting or to explain the Rule Against Perpetuities, several federal government agencies announced initiatives related to the use of artificial intelligence (AI) and automated systems, focusing on the potential threats stemming from the misuse of this powerful technology. As the development and use of AI become integrated into our daily lives and employee work routines, and as companies begin to leverage such technology in the solutions they provide to the government, it is important to understand the developing federal compliance infrastructure and the potential risks stemming from the misuse of AI and automated systems.
Celebrity “Faces Off” Against Deep Fake AI App Over Right of Publicity
Generative AI (GAI) applications have raised numerous copyright issues, including whether training GAI models constitutes infringement or is permitted under fair use, who is liable if the output infringes (the tool provider or the user), and whether the output is copyrightable. These are not the only legal issues that can arise. Another issue that has arisen with various GAI applications involves the right of publicity; a recently filed class action provides one example.
Another Federal Agency Issues Request for Comments on AI
The National Telecommunications and Information Administration (NTIA) has issued a Request for Comments (RFC) on Artificial Intelligence (“AI”) system accountability measures and policies, advancing its efforts to ensure that AI systems work as claimed and without causing harm. The RFC targets self-regulatory, regulatory, and other measures and policies that can provide reliable evidence that AI systems are legal, effective, ethical, safe, and otherwise trustworthy. It also seeks policies that support the development of AI audits, assessments, certifications, and other mechanisms that create earned trust that AI systems work as claimed (much as financial audits create trust in financial statements).
Will eBook Ruling Impact Fair Use Analysis for Generative AI?
Scanning books to create a searchable database of books constitutes fair use. Scanning books to create eBooks does not. Will scanning images (or other copyright-protected content) to create a generative AI model for use in creating images be deemed fair use?
Copyright Office Artificial Intelligence Initiative and Resource Guide
On March 16, 2023, the U.S. Copyright Office (USCO) launched a new AI Initiative to examine the copyright law and policy issues raised by artificial intelligence (AI), including the scope of copyright in works generated using AI tools and the use of copyrighted materials in AI training. According to the USCO: “This initiative is in direct response to the recent striking advances in generative AI technologies and their rapidly growing use by individuals and businesses.” It is also a response to requests from Congress and the public.