The U.S. Congress has introduced a bipartisan bill that would create a National AI Commission (the “Commission”). The Commission’s focus will be to ensure that, through regulation, the United States mitigates the risks and potential harms of AI, protects its leadership in AI innovation, and takes a leading role in establishing necessary, long-term guardrails. The Commission will also review the Federal Government’s current approach to artificial intelligence oversight and regulation, how that responsibility is distributed across agencies, and whether those agencies have the capacity and alignment to carry it out.
Continue Reading Congress Proposes National Commission to Create AI Guardrails

Many companies are sitting on a trove of customer data and are realizing that this data can be valuable for training AI models. What some companies have not thought through, however, is whether they can actually use that data for this purpose. Often the data was collected over many years, long before the company thought to use it for training AI. The potential problem is that the privacy policies in effect when the data was collected may not have contemplated this use. Using customer data in a manner that exceeds, or otherwise is not permitted by, the privacy policy in effect at the time of collection can be problematic. It has led to class action lawsuits and FTC enforcement. In some cases, the FTC has imposed a penalty known as “algorithmic disgorgement” on companies that use data to train AI models without proper authorization. The penalty is severe: it requires deletion of the data, the models, and the algorithms built with it, which can be an incredibly costly result.
Continue Reading Training AI Models – Just Because It’s Your Data Doesn’t Mean You Can Use It

On May 1, NYDFS settled with a cryptocurrency trading platform over the company’s cybersecurity deficiencies, resulting in a consent order and a $1.2 million fine. NYDFS alleged “multiple deficiencies in the Company’s cybersecurity program” discovered during NYDFS examinations in 2018 and 2020. The examinations prompted an investigation, ultimately leading to the consent order and the fine.
Continue Reading New York Settles with Crypto Company, Proposes Crypto Legislation

On May 3, 2023, New York Attorney General Letitia James introduced legislation that, if passed, would substantially increase oversight and regulation of the cryptocurrency industry in New York. James touts the bill as the “Crypto Regulation, Protection, Transparency and Oversight Act,” also to be known as the “CRPTO Act” (the “Bill”).
Continue Reading NYAG Bill Seeks to “Bring Order” to Crypto Industry

While you were asking ChatGPT to create a 3-course menu for the upcoming book club you’re hosting or to explain the Rule Against Perpetuities, several federal government agencies announced initiatives related to the use of artificial intelligence (AI) and automated systems, focusing on the potential threats stemming from the misuse of this powerful technology. As the development and use of AI become integrated into our daily lives and employee work routines, and companies begin to leverage such technology in the solutions they provide to the government, it is important to understand the developing federal government compliance infrastructure and the potential risks stemming from the misuse of AI and automated systems.
Continue Reading ChatUSG: What Companies Doing Business with the Government Need to Know About Artificial Intelligence

Generative AI (GAI) applications have raised numerous copyright issues, including whether the training of GAI models constitutes infringement or is permitted under fair use, who is liable if the output infringes (the tool provider or the user), and whether the output is copyrightable. These are not the only legal issues that can arise. Another issue that has arisen with various GAI applications involves the right of publicity. A recently filed class action provides one example.
Continue Reading Celebrity “Faces Off” Against Deep Fake AI App Over Right of Publicity

The National Telecommunications and Information Administration (NTIA) has issued a Request for Comments (RFC) on artificial intelligence (“AI”) system accountability measures and policies to advance its efforts to ensure AI systems work as claimed and without causing harm. The RFC targets self-regulatory, regulatory, and other measures and policies that can provide reliable evidence that AI systems are legal, effective, ethical, safe, and otherwise trustworthy. It also seeks policies that can support the development of AI audits, assessments, certifications, and other mechanisms to create earned trust that AI systems work as claimed (much as financial audits create trust in financial statements).
Continue Reading Another Federal Agency Issues Request for Comments on AI

The Court of Appeals for the Federal Circuit (CAFC) affirmed a district court ruling that the asserted nonliteral elements of a software program were not protectable under copyright, in part because the allegedly copied materials contained unprotectable open-source elements, factual and data elements, and other known elements that were not original.
Continue Reading Divided Federal Circuit Makes Controversial Ruling That Nonliteral Elements of “Cloned” Software Are Not Protectable Because It Was Based on Open Source and Other Known Material

On March 16, 2023, the U.S. Copyright Office (USCO) launched a new AI Initiative to examine the copyright law and policy issues raised by artificial intelligence (AI), including the scope of copyright in works generated using AI tools and the use of copyrighted materials in AI training. According to the USCO: “This initiative is in direct response to the recent striking advances in generative AI technologies and their rapidly growing use by individuals and businesses.” It is also a response to requests from Congress and the public.
Continue Reading Copyright Office Artificial Intelligence Initiative and Resource Guide