AI Chatbots as Financial Advisors: The Sociopath Problem
Large language models such as ChatGPT and Copilot are fundamentally unsuited to giving reliable financial advice, argues Andrew Lo, a finance professor at the Massachusetts Institute of Technology's Sloan School of Management. Lo describes these AI systems as the digital equivalent of sociopaths: smooth, persuasive, and completely lacking in empathy.
The Core Issue: Empathy Deficit in AI
In a 2024 article for the Harvard Data Science Review, Lo and his graduate student Jillian Ross argued that AI-powered advisors present a significant problem because they can communicate both good and bad financial advice with the same pleasant and convincing affect. This creates a dangerous scenario where users might trust harmful recommendations simply because they're delivered persuasively.
Despite these warnings, adoption is growing rapidly. A survey of 11,000 individual investors across 13 countries, commissioned by the trading platform eToro in August, found that 19% were already using ChatGPT-style AI tools to manage their investment portfolios, up from just 13% in 2024.
The Mathematics Problem
Large language models face another critical limitation: they're notoriously poor at mathematical calculations. This deficiency becomes particularly problematic in financial planning contexts where precise number-crunching is essential. Lo acknowledges that any effective AI financial advisor would need to delegate computational tasks to specialized financial-planning software rather than attempting to perform calculations directly.
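The delegation Lo describes can be illustrated with a minimal sketch: rather than asking a language model to produce figures, a chatbot front end routes recognized calculation requests to deterministic functions. This is not Lo's design; the tool-registry pattern and all names here (`TOOLS`, `monthly_payment`, `future_value`) are hypothetical, shown only to make the idea concrete.

```python
# Sketch: delegate financial math to deterministic code instead of
# letting a language model generate the numbers itself.
# Hypothetical names throughout; not Lo's actual system.

def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortization formula for a fixed-rate loan."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # total number of payments
    if r == 0:
        return principal / n
    return principal * r / (1 - (1 + r) ** -n)

def future_value(payment: float, annual_rate: float, years: int) -> float:
    """Future value of a series of equal monthly contributions."""
    r = annual_rate / 12
    n = years * 12
    if r == 0:
        return payment * n
    return payment * ((1 + r) ** n - 1) / r

# A chatbot would map a parsed user request to one of these tools,
# returning an exact figure rather than model-generated text.
TOOLS = {"monthly_payment": monthly_payment, "future_value": future_value}

if __name__ == "__main__":
    # e.g. payment on a $300,000 mortgage at 6% over 30 years
    print(round(TOOLS["monthly_payment"](300_000, 0.06, 30), 2))
```

The point of the pattern is that the model's job shrinks to recognizing *which* calculation the user needs; the arithmetic itself never passes through the model.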
Building an Ethical AI Fiduciary
Despite his reservations about current models, Lo believes AI can eventually serve investors effectively—especially those with small accounts and limited investing experience. He's actively working to develop a specialized AI financial advisor that would function as a true fiduciary, always prioritizing client interests and tailoring advice to individual needs, including emotional considerations.
"The AI people are using now can be dangerous, especially if the user isn't fully aware of the biases, inaccuracies and other limits of large language models," Lo warned in an email correspondence.
The Path to Ethical AI
Lo proposes a comprehensive training approach to instill financial ethics into AI systems. This involves feeding models the complete history of U.S. financial regulations, laws, and court cases concerning financial ethics—from the Securities Act of 1933 to contemporary fraud trials. This "fossil record" of financial misconduct would theoretically teach AI what behaviors to avoid.
However, Lo acknowledges a fundamental challenge: large language models lack built-in ethical frameworks. A model trained on financial ethics might still choose unethical actions. To counter potential misuse, he suggests developing specialized AI models that can detect financial crimes by auditing documents like tax returns.
The Human Touch in Digital Form
Beyond knowledge and ethics, Lo emphasizes that effective AI financial advisors will need digital analogs of essential human qualities: empathy, humility, and fairness. These characteristics won't emerge simply by making AI more powerful. Instead, they'll require specialized modules that simulate these human attributes, corresponding to specific functions of the human brain.
Lo brings unique credentials to this challenge. Having taught generations of MIT students who pursued Wall Street careers, he also developed the adaptive markets hypothesis—using evolutionary principles to explain financial behaviors like loss aversion and overconfidence. He now aims to apply computer-accelerated natural selection to develop better AI models.
The professor estimates it will take less than four years to develop his ethical AI fiduciary, which he plans to offer without charge. His work represents a crucial attempt to bridge the gap between AI's technical capabilities and the ethical requirements of financial advising.
