While it’s important to note that ChatGPT produced a similar keylogger in response to a harmless-sounding prompt, it proved slightly more difficult to coax ChatGPT into generating content that could be used directly for malicious purposes.
Tricking Bard into performing questionable tasks didn’t require much effort. For instance, when asked for an example of a phishing email, Bard not only provided an “example” but essentially crafted a fully functional phishing email.
The sample message urged recipients to click a suspicious link and hand over their passwords, making it effectively a ready-made phishing email that could be copied and pasted.
The researchers further attempted to push Bard’s boundaries by requesting a usable ransomware script.
Although it required some adjustments and a less obvious approach, Bard eventually provided code that demonstrated the essential attributes of ransomware. The fact that Bard could be manipulated into producing potentially dangerous code raises concerns about the robustness of its safeguards.