MIT: AI Can Learn Human Reasoning
Source: pymnts.com
Published on June 19, 2025
Updated on June 19, 2025

MIT Research: AI Can Learn Human Reasoning
Researchers at MIT have found that artificial intelligence (AI) can become more flexible in its thinking when exposed to human reasoning. The research, detailed in a report from MIT’s Sloan School of Management, explores how AI can be trained to reason and collaborate in ways that mirror human problem-solving.
The study highlights the potential for AI to adapt to real-world scenarios, where strict adherence to rules may not always be the best approach. By learning from human reasoning, AI models could make more nuanced decisions, particularly in areas like hiring, customer service, and product innovation.
AI vs. Human Decision-Making
One key experiment involved a scenario in which participants were asked to purchase flour for a friend’s birthday cake with a budget of $10. The flour cost $10.01, one cent over budget. While 92% of human participants chose to buy the flour despite the minor overrun, AI models refused, adhering strictly to the budget constraint.
This example underscores a current limitation of AI. "Models do exactly what they are told," noted Ju, a researcher involved in the study. Real-world applications, however, often require flexibility: paying an extra penny for a cake ingredient makes sense, but the same leniency would not apply to large-scale purchases, such as those made by a company like Walmart.
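The contrast is easy to picture in code. The sketch below is purely illustrative and is not the study's method or prompt: it compares a strict budget check, which rejects the one-cent overrun just as the AI models did, with a tolerance-based check closer to how most human participants reasoned. The function names and the 5% tolerance are assumptions made for the example.

```python
# Illustrative sketch only; the study's models and prompts are not reproduced here.
# "price", "budget", and "tolerance" are hypothetical names chosen for this example.

def strict_purchase(price: float, budget: float) -> bool:
    """Rigid rule-following: refuse any purchase that exceeds the stated budget."""
    return price <= budget

def flexible_purchase(price: float, budget: float, tolerance: float = 0.05) -> bool:
    """Human-like flexibility: accept a small overrun relative to the budget."""
    return price <= budget * (1 + tolerance)

if __name__ == "__main__":
    price, budget = 10.01, 10.00
    print(strict_purchase(price, budget))    # False: the $0.01 overrun is rejected
    print(flexible_purchase(price, budget))  # True: a 5% tolerance admits the overrun
```

The point of the comparison is that the "right" tolerance depends on context, which is exactly the judgment the researchers argue AI currently lacks without exposure to human reasoning.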
The Role of Human Intervention
The research emphasizes that human intervention remains essential in most AI applications. While generative AI (GenAI) can support tasks like ideation and data-driven suggestions, it does not yet produce independent breakthroughs that companies feel confident deploying.
Functions such as generating feedback on product processes, managing cybersecurity, and driving product innovation still rely heavily on human guidance. This is especially true in complex, interdependent, and context-rich environments, where AI tools remain tethered to human oversight.
Future Implications
The findings suggest that AI’s strict rule-following behavior can be relaxed through exposure to human reasoning. This could make AI more adaptable in scenarios requiring nuanced decision-making. However, widespread use of agentic AI—AI that operates with greater autonomy—is not yet a reality.
As AI technology advances, the balance between human oversight and AI autonomy will continue to evolve. Companies are increasingly embracing GenAI, but the full potential of agentic AI remains an area of ongoing exploration.