
Tennessee Republican Sen. Marsha Blackburn said it is “essential” to advance a federal preemption standard as U.S. states begin to respond to growing voter concerns about the risks associated with artificial intelligence.
Earlier this week, California Governor Gavin Newsom signed a series of bills focused on these concerns, while vetoing some of the tougher AI measures lawmakers had hoped for, including safeguards for chatbots, psychological-risk labeling for social media apps, and a requirement that device manufacturers offer age-verification tools in their app stores.
Utah and Texas have also enacted legislation introducing AI safeguards for minors, suggesting other states may follow suit.
“The reason states have stepped in, whether it’s to protect consumers or to protect children, is because the federal government hasn’t been able to pass federal preemption legislation,” Blackburn said Wednesday at the CNBC AI Summit in Nashville. “Until Congress says no to the big tech platforms, states must step into the gap.”
Blackburn has long been an advocate for legislation on children’s online safety and social media regulation, introducing the Kids Online Safety Act in 2022, which aims to establish guidelines to protect minors from harmful content on platforms. The bipartisan bill passed the Senate with an overwhelming majority, and Blackburn said she is “hopeful that the House will take up this bill and pass it,” even as big tech companies seek to block the bill in both chambers.
Blackburn said the concerns the bill was intended to address around social media have only been compounded by the rise of AI.
Sen. Marsha Blackburn (R-Tenn.) speaks at a gathering hosted by Accountable Tech and Design It For Us in Washington, D.C., on January 31, 2024, calling on technology and social media companies to take steps to protect children and teens online.
Jemal Countess | Getty Images Entertainment | Getty Images
“One of the things I’ve heard from a lot of people involved in this is that we need to enact an online consumer privacy bill so that people can set up firewalls to protect the virtual you, as I call it,” she said, adding, “Once the LLM scoops [your data and information], they’re going to use it to train their models.”
Blackburn also highlighted several other ways to protect the information used by AI, including legislation focused on how AI uses a person’s name, image, and likeness without their consent.
“We need a way to protect information in the virtual space just like we do in the physical space,” she said.
With the rapid advancement of AI, Blackburn acknowledged that regulation will need to “focus on end-use utilization and legislate the framework that way, rather than focusing on a particular delivery system or a particular technology.”
It also means responding to the way AI companies change their products. Earlier this week, OpenAI CEO Sam Altman said the company could “safely relax” most restrictions because it had been able to mitigate “serious mental health issues,” adding that the company is “not the elected moral police of the world.”
Blackburn said lawmakers are increasingly hearing from “parents who know what is happening to their children and who say they cannot afford to ignore what their kids have seen and experienced in chatbots, in virtual worlds, in the metaverse.”
“I’ve talked to a lot of parents now who say their kids shouldn’t have a cellphone until they’re 16, and many see it the same way as driving a car,” she said. “They’re not going to let their kids have that, because as a society we have to have rules and laws in place to protect children and minors.”
