Do We Need Humanlike AI? Experts Say It Depends
Experts debate whether agentic AI should act like humans.
For the fourth year in a row, MIT Sloan Management Review and Boston Consulting Group (BCG) have assembled an international panel of AI experts, including academics and practitioners, to help us understand how responsible artificial intelligence (RAI) is being implemented across organizations worldwide. In spring 2025, we also fielded a global executive survey, yielding 1,221 responses, to gauge the extent to which organizations are addressing responsible AI. In our most recent article, we explored whether accountability for agentic AI requires new management approaches.
As noted in our September article, although there is no agreed-upon definition, agentic AI generally refers to AI systems that are capable of pursuing goals autonomously by making decisions, taking actions, and adapting to dynamic environments without constant human oversight. This time, in our final article this year, we dive deeper into agentic AI systems that exhibit humanlike qualities or characteristics, especially in their behavior, communication style, appearance of empathy, and persuasive capabilities. Such systems are also known as anthropomorphic AI. Concerns over increasingly humanlike AI are drawing headlines and legal action, including wrongful death suits against popular chatbot providers and Federal Trade Commission enforcement actions, such as a recent fine against a company that claimed its “robot lawyer” surpassed the expertise of a human lawyer.
