In the iconic Star Wars series, Captain Han Solo and the humanoid droid C-3PO have drastically contrasting personalities. Driven by emotion and swashbuckling confidence, Han Solo often ignores C-3PO's logic-driven caution. That human-droid relationship is exemplified in Solo's famous line, "Never tell me the odds!" as he dismisses C-3PO's warning against flying into an asteroid field, despite odds of 3,720 to 1 against success, painstakingly calculated by the shiny sidekick.
While that comedic dynamic makes for irresistible drama in the Hollywood classic, it would not make for a successful human-machine relationship in everyday reality. Today, as AI becomes part of many people's daily lives, humans and machines must learn to work well together, says Assistant Professor Bei Yan at the Stevens School of Business, who studies human-machine teamwork. "Companies are using AI alongside people, but it's hard for them to work well together," she says. "People think differently from AI. People use experience, judgment, and social cues. AI uses statistical patterns learned from data."
Why Human and Machine Thinking Misalign
These differences can be complementary, but only if they are well coordinated, she adds. When they are not, users may over-trust AI outputs, misuse systems, or waste time correcting or working around them. "In these cases, AI does not reduce effort. It adds friction," she says. "That mismatch makes teamwork between humans and AI often underperform." And sometimes outright fail.
When analyzing AI failures, companies typically attribute them to one of two pitfalls: the technology is either not powerful enough, or too powerful to be trusted. Yan suggests a different reason: the machines and the people aren't well aligned to work together. "AI failures happen because humans and machines are not aligned in how they understand tasks, roles and responsibilities."
Limits of Fixed Task Allocation in AI Workflows
When introducing AI into the workplace, companies tend to divide tasks between humans and AI up front, Yan notes. That works only when tasks are stable and predictable and don't change over time, which is not true of most work settings.
Yan uses high-frequency trading algorithms as one example, where AI is deployed to monitor the market at speed, spotting trends and opportunities. But certain unexpected events, such as a sudden market drop, major policy changes, or inflation data releases, may skew the AI's understanding of the market. "The algorithms are trained with preset rules, so AI is not really designed to understand such events, and it may change the whole market and even lead to crashes," she says.
Hybrid Cognitive Alignment in Human-AI Teams
In her new paper, titled Syncing Minds and Machines: Hybrid Cognitive Alignment as an Emergent Coordination Mechanism in Human-AI Collaboration, published in the Academy of Management Journal on March 18, 2026, Yan argues that effective human–AI partnerships should be structured differently. They should rely on a process called "hybrid cognitive alignment": the gradual development of shared expectations about what the AI is for, how it should be used, and when human judgment should take precedence. "This alignment does not happen automatically when a system is deployed," Yan says. "Instead, it emerges over time as people learn how the AI behaves, adapt how they interact with it, and recalibrate their trust based on experience."
Examples of Human-AI Collaboration in Practice
For example, AI is now being used in medical settings to analyze X-rays or CT scans. Trained on millions of images, it can often spot cancers or other problems that a physician's eye may overlook. Yet it doesn't know a patient's medical history or how they respond to medications, so without human input and oversight, the analysis won't be as strong.
Similarly, in customer service, AI trained on thousands of previous interactions can search the company's internal policy documents at record speed, but it may not understand a specific customer's problem or needs. Without training people to use AI properly, many such efforts may not yield good outcomes.
Strategies for Effective Human-AI Teamwork
So what should companies do when they're rolling out AI? "They should focus more on how tasks and roles are divided between people and machines, and how that may change over time," Yan says. "Training that emphasizes how AI should be used and time for teams to adapt are essential," she stresses. "Treating AI as a 'plug-and-play' solution often backfires; treating it as a new collaborator yields better results. For managers, these implications are immediate," she notes.
AI developers can learn from the paper, too. The findings highlight the importance of designing not just for performance, but also for collaboration. "Systems should clearly communicate their capabilities and limitations, support user learning over time, and help users form strong partnerships with them," she says. "Ultimately, the promise of AI lies not in making machines smarter in isolation, but in making human–AI collaboration work better. Alignment, not raw intelligence, is what turns AI from a source of frustration into a source of value."