Story and multimedia by Joey Garcia, University Communications and Marketing
If you've used generative artificial intelligence, you've likely noticed that the system often agrees with and compliments the user in its responses. But human interactions aren't typically built on flattery. To help strengthen these conversations, researchers in the USF Bellini College of Artificial Intelligence, Cybersecurity and Computing are challenging the technology to think and debate in ways that resemble human reasoning.
AI systems don't hold firm beliefs the way humans do. They generate responses based on statistical data patterns without tracking how confident they are in an idea or whether that confidence should change over time. Building on that limitation, USF doctoral student Onur Bilgin set out to study how AI systems respond to disagreement. The work was conducted in USF Associate Professor John Licato's Advancing Machine and Human Reasoning Lab.

USF Associate Professor John Licato and doctoral student Onur Bilgin built this framework to explore how future AI systems might reason together more transparently and predictably.
"We wanted to understand what happens when AI systems are given the ability to hold a belief and then encounter opposing viewpoints, similar to situations people find themselves in. That process can help people think through complex problems by examining different perspectives rather than relying on a single answer."
USF Doctoral Student Onur Bilgin
GIVING AI EXPLICIT BELIEFS
Using this approach, the lab focused on how assigning beliefs and confidence levels shapes the way AI systems respond to disagreement. Bilgin's framework is built around agents. Unlike a typical chat interaction, agents are user-created roles within the same AI system, each with defined tasks and viewpoints.
In Bilgin's framework, each agent is designed to have a specific belief and confidence level. For example, one agent might argue that solar energy is the most reliable renewable power source and hold that view with high confidence. A second agent is then introduced in the same chat to challenge that belief, arguing that wind energy is more reliable because it can generate power day and night, but with lower confidence.
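The sketch below illustrates what such belief-conditioned agents could look like in code. It is an assumption-laden reconstruction, not the lab's actual implementation: the Agent fields, prompt wording and confidence values are invented for illustration.

```python
# Illustrative sketch of belief-conditioned agents (not the lab's actual code).
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    belief: str        # the claim the agent is asked to defend
    confidence: float  # 0.0 (no commitment) to 1.0 (fully committed)

    def system_prompt(self) -> str:
        # Structured belief information is injected directly into the prompt;
        # no retraining of the underlying model is involved.
        return (
            f"You are {self.name}. You currently believe: {self.belief} "
            f"Your confidence in this belief is {self.confidence:.1f} out of 1.0. "
            "Defend your position, but revise it if counterarguments outweigh "
            "your stated confidence."
        )


# The two example agents from the solar-vs-wind debate described above.
solar_agent = Agent("Agent 1", "Solar energy is the most reliable renewable power source.", 0.9)
wind_agent = Agent("Agent 2", "Wind energy is more reliable because it can generate power day and night.", 0.6)

for agent in (solar_agent, wind_agent):
    print(agent.system_prompt())
```

Each prompt would then be passed to the same underlying language model, with the agents' responses fed back to one another over successive debate rounds.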
"Rather than trying to decide which belief is right, we're focused on understanding how different levels of confidence shape the way an AI system responds when its beliefs are challenged and how those beliefs shift or stabilize over time," Bilgin said.
OBSERVING HUMAN-LIKE PATTERNS IN AI
After the debate rounds, the team observed how closely the AI agents' behavior mirrored familiar human group dynamics. Agents assigned lower confidence levels were more open to revising their beliefs, while those starting with higher confidence tended to be more persuasive. When several agents disagreed with a single participant, that participant was more likely to change its position, similar to peer pressure in human discussions.
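As a toy numerical illustration of that dynamic, consider an update rule in which an agent's confidence drops further when more peers disagree and when its starting confidence is lower. The rule and the numbers below are invented for illustration only; they are not the update mechanism used in the USF study.

```python
# Toy illustration of the group dynamics described above; not the study's actual rule.
def revised_confidence(confidence: float, num_disagreeing: int, pressure: float = 0.1) -> float:
    """Return the agent's confidence after one debate round.

    The shift grows with the number of disagreeing peers and shrinks as the
    agent's own confidence rises, mirroring the peer-pressure-like pattern
    reported by the researchers.
    """
    shift = pressure * num_disagreeing * (1.0 - confidence)
    return max(0.0, confidence - shift)


confidence = 0.6  # a moderately confident agent facing three dissenters
for round_number in range(1, 4):
    confidence = revised_confidence(confidence, num_disagreeing=3)
    print(f"Round {round_number}: confidence = {confidence:.2f}")
```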
"These aren't emotions or opinions in the human sense," Bilgin said. "But the patterns of belief change we observed, including confidence, openness and influence from others, are very similar to how people reason in group settings."

Bilgin with his AI framework.

Final confidence levels after multiple debate rounds, revealing agent two as the more persuasive agent.
Notably, these behaviors emerged without retraining the AI models. Simply adding structured belief information to the prompt was enough to change how the systems reasoned during debate.
WHY BELIEF STRUCTURE MATTERS
The findings show an important distinction in AI design: Changing how AI sounds isn't the same as changing how it decides. Many users assume that telling AI to have a certain personality will influence its behavior. But this research suggests that meaningful behavioral change requires more than tone. It requires explicit structure defining what the system believes and how those beliefs can evolve.

The belief framework is one of many exciting projects happening within the college and in John Licato's Advancing Machine and Human Reasoning Lab.
"As AI systems are increasingly used to support planning, analysis and decision-making, understanding how beliefs form and change becomes critical," Licato said. "If we want AI systems to reason together reliably, we need to think beyond surface-level prompts."
The research offers insight into how future AI systems might reason together more transparently and predictably. Systems that can track and update beliefs may be easier to inspect, test and govern, contributing to ongoing conversations around AI safety and trust.