
Bipartisan Amendment Would Exclude AI From Section 230 Protections

Chris Agee

A controversial provision in the Communications Decency Act — known as Section 230 — has come under bipartisan fire in recent years for effectively shielding social media platforms from legal action over content posted by their users.

As artificial intelligence quickly establishes itself as the most consequential technological and cultural advancement since social media, two senators — one Republican and one Democrat — are taking preemptive steps to avoid a similar issue down the road.

In a new bill introduced by Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT), the lawmakers seek to update Section 230 to specifically exclude AI companies.


Fabricated images and “deepfake” audio or video files have already begun to surface, and critics are concerned that such misinformation will be used to sow confusion, destroy reputations and stoke global division.

The amendment put forward by Hawley and Blumenthal would allow the creators of such content to be held criminally or civilly responsible. In a statement advocating for the No Section 230 Immunity for AI Act, Blumenthal wrote that, if enacted, it would “empower Americans harmed by generative AI models to sue AI companies in federal or state court.”

Hawley issued a press release citing the perceived need for early and decisive action.

“We can’t make the same mistakes with generative AI as we did with Big Tech on Section 230,” he wrote. “When these new technologies harm innocent people, the companies must be held accountable. Victims deserve their day in court and this bipartisan proposal will make that a reality.”


For his part, Blumenthal said it is crucial that AI companies be held accountable for the potentially harmful consequences their creations could inflict.

“AI companies should be forced to take responsibility for business decisions as they’re developing products — without any Section 230 legal shield,” he wrote in his own press release.

The Connecticut Democrat went on to describe the proposed legislation as “the first step in our effort to write the rules of AI and establish safeguards as we enter this new era,” adding: “AI platform accountability is a key principle of a framework for regulation that targets risk and protects the public.”