September was a watershed moment for advocates of AI ethics. The FTC issued Section 6(b) orders to seven major tech companies that produce consumer-facing companion chatbots. This is the first major federal inquiry specifically targeting AI companion chatbots designed to mimic human characteristics, emotions, and intentions. The FTC isn’t pulling punches, either: it’s going after some of AI’s heaviest hitters. Alphabet, Meta Platforms, Instagram, OpenAI, Snap, Character Technologies, and Elon Musk’s xAI all received orders.

The commissioners voted unanimously to issue the orders, signalling bipartisan regulatory concern over AI. They gave companies 45 days to explain various aspects of their AI operations, including how they monetize user engagement, process user inputs, generate outputs in response to user inquiries, and develop and approve their AI characters. They must also report on how they measure, test, and monitor for negative impacts before and after deployment, as well as how they mitigate negative impacts, particularly to children.

The reports must also describe how the companies inform users and parents about potential negative impacts, what their data collection and handling practices are, and how they use or share personal information. In short, there’s broad concern over potential harms to privacy and to vulnerable users.

How Does This Affect B2B Companies?

We’re both surprised and not surprised that the FTC is doing this. On one hand, the Trump administration is strongly anti-regulation, especially when it comes to AI. On the other, multiple families have filed lawsuits alleging that irresponsibly designed AI chatbots encouraged their teenage children to take their own lives. It’s high time something was done.

AIs that play fast and loose with vulnerable people’s emotional states are clearly a consumer-harm issue. But how might this affect B2B companies?

Any assumption that B2B organizations can ignore consumer-facing AI regulation is dangerous, warned Mayer Brown attorneys Christa Bieker and Christopher Leach in 2022. Section 5 of the FTC Act contains no express limitation on who qualifies as a consumer, and the Commission has brought enforcement actions against B2B payment processors and small-business lending companies, for example.

And this action suggests that the FTC won’t be as hands-off about addressing AI’s harms as some of us might have expected.

In 2023, then-chair Lina M. Khan said that “There is no AI exemption to the laws on the books, and the FTC will vigorously enforce the law to combat unfair or deceptive practices or unfair methods of competition.” It remains to be seen how much of that will stick in the new administration, but there’s clearly still an appetite for holding businesses to account.

Companies Whose AIs Ingest Data Improperly Risk “Algorithmic Disgorgement”

It’ll be a sad and rare person indeed who falls in love with their accounts receivable chatbot. Nevertheless, irresponsible use of AI could still cause problems for B2B enterprises.

For example, the FTC has established an enforcement remedy, known as algorithmic disgorgement, that it can use to force companies to delete AI models developed with improperly obtained data. It has so far been applied mainly in consumer cases, but as we’ve learned, businesses can also be consumers.

Use ISO 42001 to Document Best Practice

There are measures that companies can take to protect themselves by ensuring that they develop and deploy their AIs responsibly. One of the most powerful is to document responsible practice from the ground up. ISO 42001, an international standard specifically for artificial intelligence management systems (AIMS), addresses AI challenges including ethics, transparency, continuous learning, bias, and accountability. These are precisely the concerns driving the FTC inquiry.

For example, organizations following this standard must identify and document potential harms before deployment, assess disproportionate impacts on protected groups, regularly reassess risks, and implement documented mitigations. That’s exactly what the FTC is demanding from chatbot providers.
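To make that concrete, here’s a minimal sketch of what a machine-readable risk register along those lines might look like. The schema, field names, and example values are our own illustration; ISO 42001 doesn’t mandate any particular format.

```python
# A minimal, hypothetical risk-register entry; the schema is our own
# illustration, not a format mandated by ISO 42001.
from dataclasses import dataclass
from datetime import date

@dataclass
class AIRiskEntry:
    system_name: str            # the chatbot or model under review
    harm: str                   # potential harm identified before deployment
    affected_groups: list[str]  # groups assessed for disproportionate impact
    likelihood: str             # e.g. "low" / "medium" / "high"
    severity: str
    mitigation: str             # the documented control
    last_reviewed: date         # supports the "regularly reassess" requirement

risk_register = [
    AIRiskEntry(
        system_name="customer-support chatbot",
        harm="Gives misleading advice on payment terms",
        affected_groups=["small-business customers"],
        likelihood="medium",
        severity="high",
        mitigation="Human review of all payment-related responses",
        last_reviewed=date(2025, 9, 1),
    ),
]
```

The point is less the code than the discipline: each entry records a harm, who it affects, the documented mitigation, and when it was last reviewed, which is exactly the paper trail a regulator will ask to see.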

Similarly, ISO 42001’s Annex A.7 controls mandate documentation of data acquisition sources, along with transformation, usage, and storage. The annex also covers data quality management, including the impact of any biases in the system.
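Again purely as an illustration, a provenance log in that spirit might record each dataset’s source, transformations, usage, and storage, and flag gaps automatically. The keys and the completeness check below are hypothetical, not something the standard defines.

```python
# A hypothetical provenance record covering the Annex A.7 themes of
# acquisition, transformation, usage, and storage; the keys and the
# completeness check are illustrative, not defined by the standard.
REQUIRED_FIELDS = {"source", "transformations", "usage", "storage", "bias_review"}

provenance_entry = {
    "dataset": "support-transcripts-2025",
    "source": "First-party chat logs, collected under customer terms of service",
    "transformations": ["PII redaction", "deduplication"],
    "usage": "Fine-tuning the customer-support chatbot",
    "storage": "Encrypted object store, 90-day retention policy",
    "bias_review": "Over-represents enterprise accounts; flagged for rebalancing",
}

def missing_documentation(entry: dict) -> list[str]:
    """Return any required documentation fields absent from an entry."""
    return sorted(REQUIRED_FIELDS - entry.keys())

print(missing_documentation(provenance_entry))  # [] when the record is complete
```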

Even companies that don’t deal with consumers directly would do well to look at the FTC’s current investigation through a B2B lens. Chatbots aren’t just new interfaces to existing enterprise systems. Rather, they’re far more powerful in terms of how they can collect and massage data, and far more fluid and natural in their language. That creates new modes of interaction with business customers and with employees.

FTC Section 6(b) orders don’t signal legal action; they’re fact-finding tools that help flesh out the Commission’s knowledge. However, if the Commission doesn’t like what it finds, enforcement measures could follow. And that could set precedents for all kinds of AI usage affecting all types of consumers (even small businesses). In a volatile regulatory landscape where policymakers are grappling with new technology, it pays to be cautious and to implement controls.