FTC Launches Inquiry Into Generative AI Partnerships and Investments


The FTC has opened an inquiry into five tech companies over their partnerships and investments involving generative AI: Alphabet, Amazon, Anthropic, Microsoft, and OpenAI.

The Federal Trade Commission has announced the launch of an inquiry into five companies — Alphabet, Amazon, OpenAI, Anthropic, and Microsoft — requiring them to provide information about investments and partnerships involving generative AI companies and major cloud service providers.

The inquiry will “scrutinize corporate partnerships and investments with AI providers to build a better internal understanding of these relationships and their impact on the competitive landscape.”

Specifically, the FTC is seeking information related to:

  • Specific investments or partnerships, along with the agreements and strategic rationale behind them
  • The practical implications of a specific partnership or investment
  • Analysis of the transactions’ competitive impact and the competitive dynamics of key products and services
  • Information provided to any other government entity in connection with any investigation or other request for such information

The five companies will have 45 days from the date they received the order to respond.

As if on cue, the agency’s inquiry comes hot on the heels of AI-generated fake nude images of Taylor Swift circulating on social media, an incident that saw X/Twitter scrambling to contain the images’ spread.

That incident sparked comment from the White House, with Press Secretary Karine Jean-Pierre telling ABC News that the things that can be accomplished with AI technology are “alarming.”

“While social media companies make their own independent decisions about content management, we believe they have an important role to play in enforcing their own rules to prevent the spread of misinformation, and non-consensual, intimate imagery of real people,” said Jean-Pierre.

More government officials are likely to take up this and similar issues — the bipartisan “Preventing Deepfakes of Intimate Images Act,” introduced last year by Rep. Joe Morelle (D-NY), has been referred to the House Committee on the Judiciary.

“We’re certainly hopeful the Taylor Swift news will help spark momentum and grow support for our bill, (which) would address her exact situation with both criminal and civil penalties,” a spokesperson for Morelle told ABC.

Generative AI technology continues to advance, making it increasingly easy to produce AI-assisted deepfakes that can cause significant damage to real people. As a result, more companies like Microsoft — whose Microsoft Designer tool, intended to make social media posts easier to create, was used to generate the fake Taylor Swift images — may have their feet held to the fire.