
New Report Finds ChatGPT Can Be Manipulated To Instruct People How To Commit Crimes

This can't be good...


A Norwegian tech firm has found that ChatGPT can be manipulated into instructing users on how to commit a number of crimes, CNN reported.

The firm, Strise, ran experiments showing that the chatbot can be tricked into giving detailed advice on committing certain crimes, including laundering money and exporting weapons in violation of sanctions, such as those against Russia, which ban cross-border payments and arms sales.

The experiments raised red flags. Strise’s co-founder and chief executive, Marit Rødevand, said it was eye-opening how easy the process is. “It is really effortless. It’s just an app on my phone,” Rødevand said. “It’s like having a corrupt financial adviser on your desktop.”

Strise sells software that financial clients like PwC Norway and Handelsbanken use to fight money laundering. While OpenAI, the company behind ChatGPT, has placed safeguards on the platform to keep the chatbot from answering certain questions, those blocks can be sidestepped by asking questions indirectly or by taking on a persona. “We’re constantly making ChatGPT better at stopping deliberate attempts to trick it without losing its helpfulness or creativity,” an OpenAI spokesperson said.

“Our latest (model) is our most advanced and safest yet, significantly outperforming previous models in resisting deliberate attempts to generate unsafe content.” 

This isn’t the first time the chatbot has been called out as a dangerous tool. Since its launch in 2022, ChatGPT has been criticized as being too accessible to criminals. ChatGPT makes it “significantly easier for malicious actors to better understand and subsequently carry out various types of crime,” a report from Europol, the European Union’s law enforcement agency, said in March 2023.

“Being able to dive deeper into topics without having to manually search and summarize the vast amount of information found on classical search engines can speed up the learning process significantly.” According to Straight Arrow News, a similar report raised concerns after researchers found a way to “jailbreak” ChatGPT and obtain instructions on how to create a bomb.

As reports continue to be released, OpenAI has stood its ground, saying it is aware of the power its technology holds and is working to address the risks as best it can. The company’s policy warns that accounts can be suspended or terminated if certain violations occur.

