Artificial intelligence (AI) search tools repeatedly give incorrect information on pensions because they cannot access information on websites effectively, according to research by Quietroom.
The firm said the solution was to write content that both humans and AI tools can easily understand.
Quietroom tested OpenAI’s new Operator tool, an AI ‘agent’ designed to carry out tasks for the user, on UK pension websites.
The company found that the agent was unable to convert the content published on those websites into accurate answers for users, partly because of barriers created by the way the information was presented.
In one test, the company asked Operator to calculate when a deferred member could retire from a scheme.
The way the information was displayed on screen (via ‘accordions’, or boxes that expand when clicked) made it impossible for Operator to read the content inside them properly, resulting in an incorrect answer.
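The research does not say how the schemes’ accordions were implemented, but the failure mode is easy to picture: if panel text is injected by JavaScript only when a reader clicks, a tool working from the raw page source never sees it. The sketch below illustrates this; the URL and the “accordion-panel” class name are hypothetical, not taken from the research.

```python
# A minimal sketch (not from the Quietroom research) of why accordion
# content can be invisible to a tool reading the raw page source.
# The URL and the "accordion-panel" class name are hypothetical.
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example-scheme.example/when-can-i-retire").text
soup = BeautifulSoup(html, "html.parser")

# Look for the text a member would see after expanding the accordion.
panel = soup.find(class_="accordion-panel")

if panel is None or not panel.get_text(strip=True):
    # If the panel is injected client-side on click, the answer exists
    # for a human reader but is simply absent from the fetched HTML,
    # leaving the tool to guess, which is where wrong answers begin.
    print("Accordion content not found in the page source.")
else:
    print(panel.get_text(strip=True))
```

By contrast, content delivered in a native HTML <details>/<summary> element stays in the markup whether or not it is expanded, so a non-rendering reader can still find it.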
Separate tests involved simple questions put to ChatGPT and Google about a pension scheme – testers had verified the information on the relevant websites prior to the exercise.
Although the information was on the websites, the AI tools were unable to find it because of the way the content was organised.
But instead of replying that they had not found the information, the tools made up incorrect answers based on content from other schemes. Quietroom found this happened repeatedly across various pension scheme sites.
Other failures included AI agents giving answers from a different scheme that shared the same initials, offering to perform complex calculations and then producing incorrect results, and directing members to third parties – including disreputable financial advisers – instead of to the schemes in question.
The issue of AI inaccuracies is all the more pertinent because the Financial Conduct Authority’s Consumer Duty makes it clear that firms remain accountable for outcomes where AI is concerned, Quietroom warned. Where members make decisions based on wrong information, schemes are accountable for the content that led them astray.
“The solution isn’t to write for robots, but to write better for humans,” said Quietroom director Simon Grover.
“Our research shows that AI does a much better job accurately summarising or explaining content if that content is already clear, consistent, well-structured and in short sentences.”