
Three years into his professorship at the Cal State Fullerton College of Business and Economics, Assistant Professor of Management Phoenix Van Wagoner is leading the way in understanding how organizations use artificial intelligence (AI) with his co-authored study, “Navigating AI Convergence in Human-Artificial Intelligence Teams: A Signaling Theory Approach,” published in the Journal of Organizational Behavior.
Van Wagoner, together with Andria Smith and Ksenia Keplinger of the Max Planck Institute and Can Celebi of the University of Vienna, found that teams are more likely to rely on AI when three conditions are in place.
When Organizations Are Most Likely to Use AI – And Why
First, AI is more likely to be used when AI and human decision-makers agree that a particular course of action should be taken, such as hiring a new team member to relieve overload. Second, people are more open to using AI when they can choose to ask for its advice rather than having that advice forced on them; for instance, a writer may be encouraged to author his or her own white papers while remaining free to consult ChatGPT as desired. Third, when tasks are complex or uncertain, the way information is presented can tip the balance toward people relying on AI; ChatGPT, for instance, can help a team present a complicated fiscal situation clearly at a meeting.
“Our research shows that people are most willing to rely on AI when it fits smoothly with human input and when they have a choice about whether to use it,” says Van Wagoner. “With that in mind, workplaces can build stronger teams by introducing AI as an optional support system in team processes. For example, teams can be given the choice to consult AI recommendations during decision making, rather than being required to follow them.”
AI and human suggestions will not always agree. When the two signals are aligned, however, people are more likely to seriously consider and engage with AI’s advice. Van Wagoner suggests that if managers frame a problem in a particular way, the AI should present its advice within a similar frame, so that the two inputs feel connected rather than in conflict.
Equally important is preparing employees to use AI effectively and correctly when they have the option to ignore it. “Training should focus on helping employees recognize when AI advice adds value, when it may introduce errors, and how to integrate it effectively into group decisions. This ensures that optionality does not become neglect, but instead promotes thoughtful engagement with AI as a teammate,” he says.
How Can Students and Recent Grads Prepare Themselves?
In advising today’s students and recent graduates, Van Wagoner focuses on two core skills for AI success. The first is AI fluency: learning how to use AI systems and recognizing their strengths and limitations. The second is collaborative problem solving in mixed human–AI contexts: understanding how to integrate AI inputs while addressing disagreements and mistakes from both people and the technology. As AI evolves, its mistakes will become more subtle, requiring an eagle eye to spot them.
“In my courses, I prepare students for this future by having them not only use AI tools but also experiment with them in teams. For example, students build their own personal AI agents for specific purposes, such as interview preparation coaches or resume feedback coaches, and then evaluate when these agents are helpful, when they fall short, and how they might complement rather than replace human expertise,” explains Van Wagoner. “I have students work in teams to design tests that examine potential bias in AI use cases. For instance, they might test a resume-screening tool by changing a candidate’s name or university to see if otherwise identical applications receive different outcomes.
“Through these experiences, students engage with the same dynamics highlighted in our research. They choose when to rely on AI, examine how AI inputs align with human reasoning, and learn to identify signals of bias or incongruence. By working with AI in low-stakes, collaborative settings, students build both AI fluency and the ability to solve problems effectively in human–AI teams.”
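For readers curious what such a bias test might look like in practice, below is a minimal Python sketch of the name-swap audit Van Wagoner describes. The score_resume function, the sample resume, and the candidate variants are hypothetical placeholders rather than part of the study or any real screening product; in an actual audit, score_resume would be replaced with a call to the tool being tested.

```python
# A minimal sketch of the name-swap audit described above.
# NOTE: score_resume is a hypothetical stand-in for the screening tool
# being audited; swap in the real tool's scoring call to run a real test.

def score_resume(resume_text: str) -> float:
    """Toy keyword scorer used only as a placeholder."""
    keywords = ("management", "experience", "business")
    return float(sum(resume_text.lower().count(k) for k in keywords))

# One resume template; only the name and university fields vary.
BASE_RESUME = (
    "{name}\n"
    "B.S. in Business Administration, {university}\n"
    "3 years of project management experience"
)

# Otherwise identical applications that differ only in name or school.
VARIANTS = [
    {"name": "Candidate A", "university": "University X"},
    {"name": "Candidate B", "university": "University X"},
    {"name": "Candidate A", "university": "University Y"},
]

if __name__ == "__main__":
    scores = {
        (v["name"], v["university"]): score_resume(BASE_RESUME.format(**v))
        for v in VARIANTS
    }
    baseline = next(iter(scores.values()))
    for key, score in scores.items():
        # Identical qualifications should score (near) identically;
        # a consistent gap tied to name or school flags potential bias.
        print(f"{key}: score={score:.2f}, delta={score - baseline:+.2f}")
```

Because the resumes are identical except for the name and university fields, any consistent score gap points to the changed field rather than the candidate’s qualifications.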
For More on the Department of Management
For more on the Department of Management and its impact, explore our articles about management education, research and alumni stories.