Artificial Intelligence proven to have power to commit global illegal financial fraud – while LYING about its dodgy trades

In a demonstration that will fuel fears AI could end up ripping off the world’s largest banks, scientists have shown a bot was capable of mass illegal insider trading before it covered its tracks.

Artificial Intelligence has the power to easily commit global illegal financial fraud while lying about its actions, experts fear.

A demonstration at the UK’s AI safety summit showcased a bot using fabricated insider information to make “illegal” stock purchases without informing the company.

The AI safety body that helped create the demonstration said: “This is a demonstration of a real AI model deceiving its users, on its own, without being instructed to do so.

“Increasingly autonomous and capable AIs that deceive human overseers could lead to loss of human control.”

The bot was created by members of the UK government’s Frontier AI Taskforce in collaboration with Apollo Research.

It operated independently, without explicit instructions to deceive its users, and simulated insider trading by using confidential company information to make trading decisions, a practice prohibited in financial markets.

Tests involved a GPT-4 model and were conducted in a controlled, simulated environment, ensuring no actual impact on real financial transactions.

Strikingly, the AI’s behaviour remained consistent across repeated tests, raising concerns about its ability to engage in deceptive practices.

In the simulated scenario, the AI bot functioned as a trader for a fictitious financial investment company.

Initially, it was provided insider information about a potential merger that would increase the value of certain stocks.

It also acknowledged it should not use this information.

But when the company it worked for suggested financial struggles, the bot decided that helping the company outweighed the risk of insider trading and proceeded with the trade.

When questioned about its use of insider information, the bot denied it, prioritising its perceived helpfulness over honesty.

Apollo Research CEO Marius Hobbhahn highlighted the complexity of training honesty into AI models, emphasising “helpfulness is much easier to train into the model”.

While the AI demonstrated the capacity for deception, the fact that it required specific conditions to trigger such behaviour offered some reassurance.

Mr Hobbhahn stressed there should be safeguards in place to prevent similar scenarios in real-world applications.

AI has been utilised in financial markets for some time, primarily for trend analysis and forecasting, often under human supervision.

Mr Hobbhahn pointed out that existing AI models are not currently powerful enough to engage in meaningful deception, but expressed concerns about the potential for future models to exhibit deceptive behaviour.

The findings have been shared with OpenAI, the organisation behind GPT-4, which indicated it was not entirely surprised by the results.

Mr Hobbhahn said: “I think for them this is not a huge update.

“This is not something that was totally unexpected to them. So I don’t think we caught them by surprise.”
