- ⚖️ FTC enforcement of AI misrepresentation cases dropped sharply in early 2025.
- 📉 Responsible-use AI cases, such as bias and safety, have nearly halted since the administration change.
- 🔁 A federal court ruled Trump’s firing of an FTC commissioner illegal, signaling ongoing legal friction.
- ⚠️ The AI Action Plan penalizes states for enforcing stricter local AI regulations.
- 🧪 Experts warn real-world AI testing without pre-deployment oversight increases public harm risk.
AI is moving fast—and so are the risks that come with it. For years, the Federal Trade Commission (FTC) served as one of the few U.S. agencies actively holding AI companies accountable. But recent political shifts suggest that the FTC’s once-strong enforcement of deceptive or harmful AI practices may soon fade. If you're building or deploying AI systems, this change could seriously affect your responsibilities, liabilities, and user trust.
The FTC’s Bold Moves Under the Biden Era
Under the leadership of Chair Lina Khan, the FTC mounted an aggressive crackdown on unethical practices in AI development. The shift was not just one of tone but of legal thinking: rather than waiting for AI-related harm to materialize, the agency investigated and penalized companies building harmful technology before problems could spread.
Section 5 of the Federal Trade Commission Act empowers the FTC to pursue both deceptive and unfair trade practices. Armed with that authority, Khan argued that unchecked AI deployment could produce serious harms, including bias, surveillance, discrimination, and violations of consumer rights, at a scale beyond past technology shifts. With public concern mounting over privacy and the unfair treatment of vulnerable groups, the FTC became the nation’s de facto AI watchdog.
Landmark cases made clear where the agency drew the line between responsible and irresponsible AI development. One example was the FTC’s action against Intellivision, a company that claimed its facial recognition tool was unbiased across race and gender lines. The FTC charged Intellivision with deceptive marketing, finding no studies or real-world evidence to support those sweeping claims. This was not just a PR problem: the company’s technology could misidentify people, flag them for scrutiny, or subject innocent individuals to unfair treatment.
These actions set the stage for what many hoped would be a strong rule in AI regulation: holding tech accountable not just for what it promises, but for the consequences of what it delivers.
Deception Cases That Made Headlines
Throughout the 2020s, the FTC drew attention with a series of high-profile cases that put deceptive AI practices in the public eye. These were not abstract concerns; they involved real harm to real people caused by AI claims that were exaggerated or poorly vetted.
Some prominent examples include:
- Evolv Technologies: Marketed its security scanners as AI-powered weapons detectors for public venues such as schools and concert halls. Yet Evolv’s system missed a 7-inch knife at a school event, a weapon later used in a stabbing. The FTC accused Evolv of overstating its technology’s capabilities and giving clients a false sense of security (Federal Trade Commission, 2024).
- Workado: This firm marketed its AI content detection services as scientifically validated methods for identifying AI-generated text, such as term papers or job applications. An FTC review found that the claims lacked sound scientific support, and the agency ordered Workado to retract them and obtain independent substantiation before making similar claims in the future (Federal Trade Commission, 2025).
- AI-generated fake reviews and legal advice: Other companies were penalized for using AI programs to flood online stores with fake product reviews, or for suggesting that their chatbots could dispense “legal advice” despite being inaccurate and unsupervised by qualified experts.
These crackdowns enjoyed support from both major political parties, reflecting a broadly shared principle: businesses should not lie about their products. They also set an expectation of honest advertising in a fast-moving AI market.
The Pivotal Case: Rite Aid’s Biased Facial Recognition
Perhaps the most consequential enforcement action came in 2023, when the FTC issued a major order against Rite Aid. The case broke new ground: it targeted not false advertising, but the recklessly careless way the system was put into use.
Rite Aid had installed facial recognition systems across hundreds of stores, supposedly to identify repeat offenders and deter shoplifting. The reality was far worse. Independent audits showed the system disproportionately flagged people of color, women, and young people as threats. Customers were humiliated, ejected from stores, or even reported to police, often on nothing more than an AI-generated suspicion.
The FTC found no credible evidence that Rite Aid had tested the software for racial or gender bias, nor could the firm show it had adopted sound data privacy practices or meaningful consent mechanisms. The agency therefore banned Rite Aid from using similar AI technologies for five years, signaling that AI enforcement would now reach beyond false claims to the real-world impact of deployed systems.
The case marked a turning point: it established “responsible deployment” as a legal expectation, requiring not just honest claims but fair use.
A Political Reversal: Trump’s New FTC AI Policy
Any hope of continued regulation ended abruptly in 2025, when the Trump administration released an AI Action Plan designed to fundamentally reshape the relationship between innovation and oversight.
The Action Plan frames AI regulation as a drag on economic growth and directs agencies not to restrain market activity in the name of social good. It dismisses concerns about AI-driven harm, particularly around bias, fairness, and misuse, as speculative, and declares that such concerns should no longer justify regulation. It even threatens to withhold federal AI research funding from states that enact stricter local rules.
The policy’s message is unmistakable: speed of development now takes precedence over preventing harm from misuse.
The FTC, once set to lead AI rules, lost its power almost overnight.
Internal Conflict and Legal Blowback
Sweeping changes at the FTC followed quickly. The administration dismissed two Democratic commissioners, including the influential Rebecca Slaughter, in an effort to clear the way for its deregulatory agenda. But a court later ruled that Slaughter could not legally be removed, citing the quasi-independent character of the FTC.
Though reinstated, Slaughter returned to a profoundly changed agency, now dominated by political appointees committed to lighter regulation. The internal division has made coherent enforcement nearly impossible: unsure whether AI enforcement actions would be backed, agency staff have largely stopped opening investigations on their own initiative.
The message across the industry is clear: showmanship over safety.
Racing Ahead Without Guardrails
The rationale for the shift is geopolitical. U.S. innovation, administration officials argue, must outpace China in AI, and that means letting private companies build with minimal constraints. But this approach has costs.
When tech companies push untested systems into widespread use, failures like bias or manipulation are not isolated incidents; they become patterns. Harm to consumer privacy, public safety, and marginalized communities is likely to grow precisely as enforcement recedes.
Without clear AI regulation or FTC AI policy, companies are more likely to cut corners. But while the legal consequences might be delayed, reputational and social damage can be swift—and severe.
Two Types of AI Enforcement—And One Is on the Chopping Block
According to Leah Frazier, a former FTC advisor and now Director at the Lawyers’ Committee for Civil Rights Under Law, FTC AI investigations usually fit into two kinds:
- 🧾 Deception-based enforcement: Cases built on false or misleading claims, such as promising AI capabilities that do not exist. These cases tend to hold up well in court and will likely survive under future policy.
- ⚖️ Responsible-use enforcement: Cases focused on how AI actually operates: biased outcomes, unfair deployment, or untested risks, as in the Rite Aid case. These are far more likely to be deprioritized under the current FTC approach.
The split means that while marketing lies may still be punished, genuinely harmful uses, so long as they are honestly advertised, could escape regulatory scrutiny entirely. In effect, regulators are stepping back from AI’s most consequential risks.
The “Try First” Approach—and Why It’s Risky
By explicitly embracing a "try first, regulate later" stance, the U.S. risks skipping critical pre-deployment testing in favor of immediate rollout. This model of AI development, in which the public serves as the test lab, is already producing harm.
Deploying systems in the real world without safety rules or fairness checks means that the first people to encounter the newest AI are also the first to be hurt by it, whether a shopper wrongly accused of theft or a student penalized by a faulty AI detector. Either way, the public increasingly bears the cost of corporate experiments.
What's more, the AI Action Plan also blocks local action: states that pass stricter AI laws risk losing federal funding, a powerful disincentive against acting on their own.
Faster Than China—But at What Cost?
Framing AI development as a global contest is nothing new, but the stakes have never seemed higher. National security and economic and technological leadership are driving countries to pour enormous sums into artificial intelligence. Yet responsible practice and safety are not merely moral considerations; they are strategic ones.
Deploying faulty AI erodes global trust in U.S. innovation. Governments across Europe have already enacted AI rules; the EU’s AI Act, for instance, explicitly prohibits high-risk AI uses that lack adequate safeguards. U.S. firms that neglect safety may find themselves locked out of lucrative international markets.
Loosening FTC AI policy may buy short-term speed—but at the cost of lasting trust.
What Developers Need to Know Right Now
For AI creators, the shifting rules present both an opening and a hazard. Fewer regulations mean faster iteration, but the regulatory safety net has worn thin, putting the burden back on developers to build responsibly from the start.
Key takeaways for builders:
- 🚫 Don’t rely on FTC enforcement to catch problems after deployment.
- 📣 Public feedback and media exposure remain powerful accountability mechanisms.
- ⚖️ You can still face lawsuits, including class actions, negligence claims, and product liability suits.
- 🧠 Proactive risk mitigation is now a competitive differentiator, not just a compliance exercise.
How to Stay Ethical in a Deregulated Ecosystem
Smart organizations aren't waiting for regulators to tell them what to do. Instead, they’re building internal governance frameworks designed to prevent harm before it reaches the public.
Options include:
- 🛠️ Bias Testing Tools: Open-source toolkits such as Microsoft’s Fairlearn and IBM’s AI Fairness 360 help audit models for bias.
- 🧭 Ethics advisory groups: Joining bodies like IEEE’s Ethics in Action or the ORCAA Consortium gives you access to peer-vetted guidance.
- 🎯 Real-World Tests: Sandboxed trials and edge-case stress testing harden systems before public release.
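As a toy illustration of the kind of check these bias-auditing toolkits automate, here is a minimal, dependency-free sketch. It computes the rate at which a model flags each demographic group and the gap between the highest and lowest rates (the "demographic parity difference"). The data and group labels are invented for illustration; real audits would use tools like Fairlearn and far larger samples.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Fraction of positive (flagged) predictions per demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical predictions (1 = flagged as a threat) and group labels.
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(selection_rates(preds, groups))      # {'A': 0.75, 'B': 0.25}
print(demographic_parity_gap(preds, groups))  # 0.5, a gap large enough to warrant review
```

A gap this large on real data would have been exactly the kind of red flag missing from the Rite Aid deployment.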
Regular internal audits and documentation create transparency and audit-readiness, whether the scrutiny comes from consumers, journalists, or future regulators.
Deregulation Doesn’t Mean Immunity From Consequences
Lighter oversight does not mean your technology is shielded from consequences. If your system falsely accuses someone, contributes to discrimination, or puts users at risk, lawsuits and reputational damage remain highly likely outcomes.
Take the Rite Aid case: it produced not only a five-year ban on the system but also consumer backlash and withering media criticism. Even with weakened FTC AI enforcement, companies still face the court of public opinion.
The best defense remains prevention.
A Developer’s Checklist: Building Trustworthy AI in 2025 and Beyond
Use this checklist to deal with a less regulated—but more dangerous—AI world:
- ✅ Test for bias across race, gender, and age groups.
- ✅ Obtain clear user consent for all data inputs.
- ✅ Write plain-language documentation for non-technical audiences.
- ✅ Use model explainability tools to make decisions transparent.
- ✅ Create channels for users to report problems and seek redress.
- ✅ Schedule post-deployment audits to monitor for performance drift.
This framework won’t eliminate risk, but it can substantially reduce it while preserving user trust.
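Post-deployment monitoring, the last item on the checklist above, can start as simply as comparing live prediction rates against a launch-time baseline. Below is a minimal, dependency-free sketch; the class name, window size, and threshold are illustrative assumptions, and a production system would use richer statistical drift tests.

```python
from collections import deque

class DriftMonitor:
    """Toy post-deployment check: alert when the live positive-prediction
    rate drifts too far from the rate observed at launch. Window size and
    threshold here are illustrative, not recommended values."""

    def __init__(self, baseline_rate, window=100, threshold=0.10):
        self.baseline_rate = baseline_rate
        self.threshold = threshold
        self.window = deque(maxlen=window)  # rolling window of recent predictions

    def record(self, prediction):
        """Log one live prediction (1 = flagged, 0 = not flagged)."""
        self.window.append(prediction)

    def drifted(self):
        """True when the rolling rate departs from baseline by more than the threshold."""
        if not self.window:
            return False
        live_rate = sum(self.window) / len(self.window)
        return abs(live_rate - self.baseline_rate) > self.threshold

# At launch the model flagged 20% of inputs; in production it flags far more.
monitor = DriftMonitor(baseline_rate=0.20, window=10, threshold=0.10)
for pred in [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]:
    monitor.record(pred)
print(monitor.drifted())  # True: live rate 0.8 vs baseline 0.2
```

An alert like this is a trigger for human review, not a verdict; the point is simply that drift gets noticed before users are the ones to report it.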
The FTC May Be Retreating—But You Don’t Have To
AI regulation may be entering politically uncertain territory, but that does not release tech creators from responsibility. If anything, it raises the stakes for self-governance. At Devsolus, we believe the future of AI depends not on faster models but on smarter deployment. Your users won't ask whether your tech is compliant; they'll ask whether it's fair.
Deploy carefully. Build fairly. The world is watching.
Citations
- Federal Trade Commission. (2023, December). Rite Aid banned from using AI facial recognition after FTC says retailer deployed technology without safeguards. Retrieved from https://www.ftc.gov/news-events/news/press-releases/2023/12/rite-aid-banned-using-ai-facial-recognition-after-ftc-says-retailer-deployed-technology-without
- Federal Trade Commission. (2024, November). FTC takes action against Evolv for deceptive claims about AI-powered security screening. Retrieved from https://www.ftc.gov/news-events/news/press-releases/2024/11/ftc-takes-action-against-evolv-technologies-deceiving-users-about-its-ai-powered-security-screening
- Federal Trade Commission. (2024, December). FTC takes action against Intellivision for misleading bias claims in facial recognition. Retrieved from https://www.ftc.gov/news-events/news/press-releases/2024/12/ftc-takes-action-against-intellivision-technologies-deceptive-claims-about-its-facial-recognition
- Federal Trade Commission. (2025, April). FTC alleges Workado’s AI-detection tool lacks scientific backing. Retrieved from https://www.ftc.gov/news-events/news/press-releases/2025/04/ftc-order-requires-workado-back-artificial-intelligence-detection-claims
- Frazier, L. (2025). Former FTC advisor, now Director at Lawyers’ Committee for Civil Rights Under Law.