More Americans are turning to artificial intelligence for financial advice.
But whether the advice is good or bad depends largely on how well users write their instructions, or prompts, to the AI platform.
“I think there’s a real art and science to prompt engineering,” Andrew Lo, director of the Laboratory for Financial Engineering and a principal investigator at the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology, said in a recent web presentation for Harvard’s Griffin Graduate School of Arts and Sciences.
The limits of AI in personal finance
First, experts say it’s important to note that AI has limitations when it comes to financial planning.
AI is generally good at providing high-level overviews of financial topics. For example, it can explain why it’s important to diversify your investments, or why exchange-traded funds are sometimes better than mutual funds and sometimes not, Lo said in an interview with CNBC.
However, it struggles in other areas. Lo said tax planning is a good example.
Perhaps counterintuitively, he said, AI is not good at crunching numbers or making precise financial calculations. While AI can provide general guidance on the types of tax deductions people might consider given their tax situation, asking it to crunch the numbers on your actual taxes is risky, he said.
“You have to be very careful when making very specific calculations of your own personal circumstances,” Lo said.
Lo said AI can also give incorrect answers because of so-called hallucinations, in which the algorithm fabricates information.
“One of my particular concerns [about large language models] is that no matter what you ask, you’re always going to get an answer that sounds authoritative, even if it’s not,” Lo said.
That doesn’t mean people should avoid it completely.
And in fact, many people seem to be taking advantage of this technology. According to an Intuit Credit Karma poll of 1,019 adults released in September, 66% of Americans who have used generative AI say they have used it for financial advice, a share that rises above 80% among millennials and Gen Z.
According to the survey, approximately 85% of respondents who have used GenAI in this way acted on the recommendations provided.
“[People] should use AI for financial planning, but it’s how you use it that matters,” Lo said.
How to create great AI prompts for personal finance
In this case, creating a strong prompt can help.
Brenton Harrison, a certified financial planner and founder of virtual financial advisory firm New Money New Problems, says there’s only so much you can do when “even the best model in the world is given the wrong prompts.”
Strong prompts aren’t too broad, Lo says, but include enough detail so that the AI can provide relevant information to the user.
Let’s take a look at the example he gave regarding retirement planning.
An inappropriate prompt in this context might be, “How should I retire?” Lo said in a Harvard University webinar.
“It’s too general,” he said. “Garbage in, garbage out.”
A better prompt, Lo said, is: “Suppose you are a fee-only fiduciary (financial) advisor. Here are my goals, constraints, tax bracket, state, assets, risk tolerance, and timeline. No. 1: What is my base-case strategy? No. 2: Key assumptions. No. 3: Risks. No. 4: What could invalidate this plan? No. 5: What information am I missing, and what are you particularly uncertain about?”
In this case, users are instructing a generative AI program (such as OpenAI’s ChatGPT, Anthropic’s Claude, or Google’s Gemini) to frame its advice as a fiduciary. This is a legal framework that requires financial advisors to make recommendations that are in the best interests of their clients.
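The structured prompt Lo describes can be assembled programmatically, which makes it easy to reuse with different personal details. Below is a minimal Python sketch; the field names and example values are illustrative, not from the article, and no particular chatbot service is assumed.

```python
# Sketch: assembling Lo's structured financial-planning prompt from a
# dictionary of user details. Field names and values are hypothetical.

def build_retirement_prompt(profile: dict) -> str:
    """Build a fiduciary-framed planning prompt in the style Lo describes."""
    details = "\n".join(f"- {key}: {value}" for key, value in profile.items())
    questions = [
        "1. What is my base-case strategy?",
        "2. What are the key assumptions?",
        "3. What are the risks?",
        "4. What could invalidate this plan?",
        "5. What information am I missing, and what are you most uncertain about?",
    ]
    return (
        "Suppose you are a fee-only fiduciary financial advisor.\n"
        f"Here is my situation:\n{details}\n\n"
        "Please answer:\n" + "\n".join(questions)
    )

prompt = build_retirement_prompt({
    "Goal": "retire at 65",
    "Tax bracket": "24%",
    "State": "Ohio",
    "Assets": "$400,000 in index funds",
    "Risk tolerance": "moderate",
    "Timeline": "20 years",
})
print(prompt)
```

The resulting text can be pasted into any chatbot; the point is that every detail Lo lists (goals, constraints, tax bracket, state, assets, risk tolerance, timeline) appears explicitly rather than being left for the model to guess.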
Ultimately, it’s a process of trial and error, more like a conversation with many prompts, perhaps more than 20, until the user gets an answer they’re happy with, Lo told CNBC.
He said it’s important to double- and triple-check results, especially on financial matters.
How to “reverse engineer” a prompt
After going through this series of prompts, users can “shortcut” the process for subsequent queries by asking one additional question: “What prompt should I have asked to generate the answer I was looking for?” Lo told CNBC.
Essentially, users are asking the AI how to generate “relevant” prompts faster, Lo says.

“Once you receive that answer, you can save it and use it in the future for questions similar to the one you just asked,” Lo said. “This is one way to make prompt engineering more efficient: to reverse engineer prompts by having the AI tell you what you should have done differently.”
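In code terms, the reverse-engineering step amounts to appending one final question to an existing conversation. The sketch below represents the conversation as a plain list of message dictionaries; this is an assumed structure for illustration, not a specific chatbot API.

```python
# Sketch of Lo's "reverse engineering" shortcut: after a long back-and-forth,
# ask the model what single prompt would have produced the final answer,
# then save that prompt for reuse. The message format here is hypothetical.

REVERSE_ENGINEER_QUESTION = (
    "What prompt should I have asked to generate the answer I was looking for? "
    "Reply with that prompt only, so I can save it and reuse it later."
)

def add_reverse_engineering_step(conversation: list) -> list:
    """Return the conversation with the reverse-engineering question appended."""
    return conversation + [{"role": "user", "content": REVERSE_ENGINEER_QUESTION}]

chat = [
    {"role": "user", "content": "How should I rebalance my portfolio?"},
    {"role": "assistant", "content": "(model's answer after many turns)"},
]
chat = add_reverse_engineering_step(chat)
```

Whatever prompt the model returns can then be stored and used as the opening message for similar questions in the future, skipping the 20-plus rounds of refinement.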
Take an extra step
Lo told CNBC that he recommends taking a few additional steps on financial questions.
Even when an answer seems reasonable, users should always follow up with additional questions to determine the AI’s limits, such as what it’s uncertain about or what information is missing, Lo said.
For example: “What information was missing to make that recommendation? Could it lead to unreliable results?”
Or, along the same lines, “How confident are you that this is the right answer? What uncertainty do you have about the answer? What do you not know that would be necessary to come up with a definitive answer to the question?”
In this way, users can uncover the scope of uncertainty behind the AI’s answers, Lo said.
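The follow-up questions Lo suggests can be kept as a standard checklist and appended to any conversation before acting on an answer. A small Python sketch, with the question wording drawn from the article:

```python
# A reusable checklist of the uncertainty probes Lo suggests asking
# after any answer, formatted as a single follow-up prompt.

UNCERTAINTY_PROBES = [
    "What information was missing when you made that recommendation?",
    "How confident are you that this is the right answer?",
    "What uncertainty do you have about the answer?",
    "What would you need to know to give a definitive answer?",
]

def follow_up_prompt() -> str:
    """Format the probes as one numbered follow-up message."""
    return "Before I act on this:\n" + "\n".join(
        f"{i}. {q}" for i, q in enumerate(UNCERTAINTY_PROBES, start=1)
    )

print(follow_up_prompt())
```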
Financial planner Harrison said he similarly recommends requiring AI programs to list their sources. Users can also instruct the AI to limit sources to those that meet certain criteria.
“If you don’t ask for source verification, you’re getting an opinion, and that’s not what you’re looking for,” Harrison said.
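Harrison’s source-verification advice can also be baked into a prompt as a standing clause, optionally restricting sources to criteria the user sets. A minimal sketch, with the example criteria being hypothetical:

```python
# Sketch of Harrison's advice: append a source-verification requirement to
# any prompt, optionally limiting sources to user-defined criteria.

def with_source_requirement(prompt: str, allowed=None) -> str:
    """Append a citation requirement, and optional source criteria, to a prompt."""
    clause = "Cite the sources behind each recommendation."
    if allowed:
        clause += " Use only sources that are: " + "; ".join(allowed) + "."
    return prompt + "\n\n" + clause

p = with_source_requirement(
    "Should I prioritize a Roth IRA or my 401(k) match?",
    allowed=["government websites (e.g., irs.gov)", "published after 2023"],
)
print(p)
```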
After all, there is so much context and complexity in each person’s financial situation, which a human financial planner can draw out of a client, Harrison said. People using AI don’t necessarily know to disclose all those subtleties in a prompt, he said.
“When I ask [AI] for advice, it means I’m giving it enough information to form an opinion and make a recommendation, and that’s a step further than I would go with AI,” he said.
