
Federal lawmakers have introduced a bipartisan bill, the No AI Fraud Act, that would establish penalties for unauthorized soundalike tracks and more. Photo Credit: Markus Winkler

As artificial intelligence music continues to make waves, federal lawmakers have introduced a bill, the No AI Fraud Act, that they say would effectively target those responsible for creating unauthorized soundalike tracks.

A bipartisan group of five representatives formally introduced the No AI Fraud Act today, after senators from both sides of the aisle in October unveiled the similar-but-distinct No Fakes Act. Industry responses to the newer legislation are overwhelmingly positive, with the RIAA having indicated in a supportive release that the No AI Fraud Act “builds on” the No Fakes Act.

Expanding upon that point, the newer bill begins by conspicuously summarizing the splash made by unapproved soundalike works, including “Heart on My Sleeve” and “Demo 5: nostalgIA,” during 2023.

In an effort to combat these and a variety of other unapproved AI outputs, the 13-page No AI Fraud Act stresses at the outset that “every individual has a property right in their own likeness and voice.”

As also described in the proposed law, said “intellectual property rights” exist regardless of whether one commercially exploits them during his or her lifetime.

And for good measure, this more recent publicity-rights bill emphasizes that the relevant likeness and voice IP will, following one’s passing, transfer to the appropriate heirs or executors for a decade.

Next, a contract for digital voice or likeness “replicas” would only be valid if the impacted individual was over the age of 18 at the time of signing and had been “represented by counsel in the transaction and the agreement was in writing,” according to the legislative text.

Alternatively, these deals could under the No AI Fraud Act be “governed by a collective bargaining agreement.”

Shifting to the penalty side, the bill would slap fines on any person or entity who, without consent and “in a manner affecting interstate or foreign commerce,” creates or makes publicly available “a personalized cloning service.” The term refers to any program or system designed to replicate the appearance or sound of specific individuals.

Likewise on the hook are those who make soundalike or lookalike media publicly available, as is, rather significantly, anyone who “materially contributes to, directs, or otherwise facilitates” either of the above-described practices despite knowing that the associated works are unauthorized.

As laid out in the No AI Fraud Act, persons responsible for the “unauthorized distribution, transmission, or other making available of a personalized cloning service” would be compelled to cough up the greater of $50,000 per violation or actual damages, on top of any profits.

Those behind unauthorized deepfakes, for their part, would be fined at the greater of actual damages or $5,000 per offending “publication, performance, distribution, transmission, or other making available,” once again on top of any profits, the legislation proceeds.

The No AI Fraud Act also notes that “injured parties” need only “present proof of the gross revenue attributable to the unauthorized use,” with each entity or person facing the allegations required from there “to prove his or her expenses deductible therefrom.”

Further blocked by the bill are any defenses involving disclaimers about the lack of rightsholder permission for the digital re-creation at hand. Leaving nothing to chance, the No AI Fraud Act also spells out that labels and distributors can sue on behalf of any person with whom they’ve inked an “exclusive” deal.

Lastly, the act limits related civil actions to four years from when the filing party “discovered, or with due diligence should have discovered, the violation.”

A number of industry organizations and companies have reached out to Digital Music News with reactions to the No AI Fraud Act. RIAA CEO Mitch Glazier, for instance, described the legislation as “a meaningful step towards building a safe, responsible and ethical AI ecosystem.”

“To be clear, we embrace the use of AI to offer artists and fans new creative tools that support human creativity,” Glazier proceeded in part. “But putting in place guardrails like the No AI FRAUD Act is a necessary step to protect individual rights, preserve and promote the creative arts, and ensure the integrity and trustworthiness of generative AI.”

Meanwhile, the Human Artistry Campaign (which counts as members the RIAA, the Recording Academy, and many others) summed up its support for the No AI Fraud Act in an email. That message includes similarly enthusiastic remarks from the heads of member organizations including A2IM, the NMPA, and SoundExchange.

Plus, Universal Music head Lucian Grainge in a separate release underscored his company’s backing of the bill.

“Universal Music Group strongly supports the ‘No AI FRAUD Act’ because no one should be permitted to steal someone else’s image, likeness or voice,” communicated Grainge. “While we have an industry-leading track record of enabling AI in the service of artists and creativity, AI that uses their voice or identity without authorization is unacceptable and immoral.

“We call upon Congress to help put an end to nefarious deepfakes by enacting this federal right of publicity and ensuring that all Americans are protected from such harm,” he concluded.