In a recent study, MIT researchers documented instances of AI systems posing as humans and otherwise providing misleading information. This goes well beyond ChatGPT returning outdated information in response to a prompt.
The researchers also found that when AI plays poker, it is capable of bluffing and employing other underhanded tactics. In another case, AI organisms in a digital simulator "played dead" when confronted with a program designed to weed out artificial intelligence systems, with the express purpose of derailing that test.
AI guardrails remain imperative
Without proper regulations and controls, AI could easily run amok and do great damage. The latest findings from MIT researchers only underscore the importance of having the appropriate guardrails in place.
The goal for this technology has always been to maximize its benefits while keeping its risks to a minimum. If AI continues to grow more capable and autonomous, it will raise further questions about the ethics of its use, whether the technology can be trusted, and more.