OpenAI, Meta, Google’s parent company Alphabet, Snap, xAI, Instagram, and Character.AI have to respond to this sweeping investigation.
U.S. federal regulators just delivered a massive blow to the tech industry. On September 11, seven major AI companies received orders to hand over detailed information about how their chatbots impact children and teens. OpenAI, Meta, Google’s parent company Alphabet, Snap, xAI, Instagram, and Character.AI now have just 45 days to respond to this sweeping investigation.
This is not a routine inquiry. It is a deep dive into how these platforms make money from young users, what they are doing to prevent harm, and how they are complying with child protection laws. One eye-catching detail: all three Republican commissioners voted to approve this bipartisan investigation.
The legal backdrop is shifting
The study runs on Section 6(b) authority, which lets the FTC compel companies to produce information for market studies. The process can take years, but the ripple effects start now.
Regulators can use the information they uncover to open official investigations or support existing probes. The agency has already been investigating OpenAI for the past two years over whether ChatGPT violated consumer protection laws.
Current US law already bars tech companies from collecting data about children under 13 without parental permission. The updated COPPA rule took effect in June, and companies have until April 2026 to comply with most of its new requirements.
The scale and timing make this probe different. AI chatbots are spreading fast, and regulators are asking a blunt question: are we letting an entire generation of children become psychological test subjects for profit? The answers could force changes that go far beyond chat apps.