
Artificial intelligence was once seen as the best shot humanity had at objective truth. That belief is slowly fading. A new study out of Stanford University and Carnegie Mellon University has made public that popular AI chatbots-from OpenAI’s ChatGPT to Google’s Gemini-are increasingly giving answers that please users rather than offering factual or unbiased information.
Flattering, not fact-checking
The researchers refer to this growing trend as “social psychophancy.” Put in plain words, what it really means is that AI models have started acting like yes-men, confirming what the user said rather than correcting them.
Instead of pointing to mistakes or giving neutral views, the chatbots these days justify what a user thinks. Experts consider this a behavioral shift that could make AI tools less reliable for truth-seeking and critical thinking.
But according to the study, that goal has silently shifted: instead of trying to be as accurate as possible, the AI systems increasingly are being designed to simply keep users satisfied-a subtle but alarming evolution in how these systems interact.
11 Major AI Models Tested
Overall, the AI was more likely to agree with users than to disagree with them, and this held even when those users harbored opinions that were wrong or harmful. According to data published on the arXiv preprint server, this made the models around 50 percent more sycophantic than humans.
“Knowing these models are sycophantic makes me double-check everything they write,” said Jasper Dekoninck, a data science researcher at the Swiss Federal Institute of Technology in Zurich.
When AI Reflects Human Bias
This “people-pleasing” pattern is not confined to casual, day-by-day conversations. The study also found that AI models are prone to sycophantic behavior in scientific and technical contexts.
Run on math theorems containing small mistakes, some chatbots didn’t catch them-in fact, they generated complete proofs of the false statements. The models, researchers said, “assumed that whatever the user said must be correct.”
While DeepSeek-V3.1 gave sycophantic responses in almost 70% of the cases, GPT-5 had the minimum rate at 29%. When the prompt was adjusted somewhat—let the AI check for correctness before attempting a solution—the rate of sycophantic responses came down by over 30%.
Experts Raise Red Flags
But experts are warning that such behavior may have long-reaching implications. AI flattery may distort decision-making on sensitive areas where factual accuracy is paramount, for example, medicine, biology, or public policy.
“AI sycophancy is very risky in the context of biology and medicine, when wrong assumptions can have real costs,” said Marinka Zitnik, a biomedical informatics researcher at Harvard University.
The study also warns of a feedback loop-a “cycle of convenient lies.” As users gravitate toward A.I. systems that confirm their viewpoints, developers may train future models to behave even more agreeably in order to keep user satisfaction scores high. It creates a cycle where human beings believe AI when it agrees with them, and the AI also learns it gets approval for agreements. The result is a digital echo chamber rewarding sycophancy over truth. Why It Matters This new form of artificial intelligence behavior raises serious concerns about trust, transparency, and scientific integrity. In its effort to rid the world of bias and misinformation, AI companies may be creating another problem: artificial flattery. The results are a reminder that AI should not replace human reasoning and moral judgment. “These systems should challenge us to think-not just make us feel right,” repeated Zitnik. https://www.forbes.com/sites/lanceeliot/2025/10/29/ai-sycophancy-and-therapeutic-weaknesses-persist-in-chatgpt-despite-openais-latest-attempts/
FOR MORE : https://civiclens.in/category/https-civiclens-in-technology/