Bard or ChatGPT: Cybercriminals Give Their Perspectives
Six months ago, the question, “Which is your preferred AI?” would have sounded ridiculous. Today, a day doesn’t go by without hearing about “ChatGPT” or “Bard.” Large language models (LLMs) have dominated the conversation ever since the introduction of ChatGPT. So, which is the best LLM?
The answer may be found in a surprising source – the dark web. Threat actors have been debating and arguing as to which LLM best fits their specific needs.
Hallucinations: Are They Only Found on ChatGPT?
In our ChatGPT Masterclass we discussed the good, the bad, and the ugly of ChatGPT, looking at how both threat actors and security researchers can use it, as well as some of the issues that arise when using it.
Users of LLMs quickly discovered “AI hallucinations,” where the model produces wrong or made-up answers, sometimes to relatively simple questions. While the model responds quickly and appears very confident in its answer, a simple search (or knowledge of the topic) will prove it wrong.
What was initially perceived as the ultimate problem-solving wizard now faces skepticism in some of its applications, and threat actors have been talking about it as well. In a recent discussion in a Russian underground forum, a participant asked about the community’s preference when it comes to choosing between ChatGPT and Bard.
Good day, Gentlemen. I’ve become interested in hearing about Bard from someone who has done relatively deep testing on both of the most popular AI chatbot solutions – ChatGPT and Bard. Regarding ChatGPT, I have encountered its “blunders” and shortcomings myself more than once, but it would be very interesting to hear how Bard behaves in the sphere of coding, conversational training, text generation, whether it makes up answers, whether it really has an up-to-date database and other bonuses or negatives noticed during product testing.
The first reply claimed that Bard is better but has similar issues to ChatGPT:
Bard truly codes better than ChatGPT, even more complex things. However, it doesn’t understand Russian. Bard also occasionally makes things up. Or it refuses to answer, saying, “I can’t do this, after all, I’m a chatbot,” but then when you restart it, it works fine. The bot is still partly raw.
The next participant in this discussion (let’s call him ‘W’), however, had a lot to say about the current capabilities of LLMs and their practical use.
All these artificial intelligences are still raw. I think in about 5 years it will be perfect to use them. As a de-facto standard. Bard also sometimes generates made-up nonsense and loses the essence of the conversation. I haven’t observed such behavior with ChatGPT. But if I had to choose between Bard and GPT, I’d choose Bard. First of all, you can ask it questions non-stop, while ChatGPT has limits. Although maybe there are versions somewhere without limits. I don’t know. I’ve interacted with ChatGPT version 3. I haven’t tried version 4 yet. And the company seems to have canceled the fifth version. The advantages of Bard are that it gives, so to speak, what ChatGPT refuses to give, citing the law. I want to test the Chinese counterpart but I haven’t had the opportunity yet.
The member who provided the first reply in this conversation chimed in to make fun of some of the current views on ChatGPT:
The topic of coding on neural networks and the specifics of neural networks (as the theory and practice of AI and their creation and training) is extremely relevant now. You read some analysts and sometimes you’re amazed at the nonsense they write about it all. I remember one wrote about how ChatGPT will replace Google and, supposedly, the neural network knows everything and it can be used as a Wikipedia. These theses are easily debunked by simply asking the bot a question, like who is this expert, and then the neural network either invents nonsense or refuses to answer this question citing ethics, and that’s very funny.
This comment brought ‘W’ back into the discussion.
Partially true. In fact, Google itself plans to get rid of links in search results. There will be a page with a bot. This is a new type of information search, but they will not completely get rid of links, there will be a page where there will only be 10 links. I don’t know if this is good or bad. Probably bad, if there will only be 10 of them in that search result. That is, there won’t be the usual deep search.
For example, it’s no longer interesting to use Google in its pure form. Bing has a cool search – a must-have. But sometimes I forget about it and use good old Google. Probably I would use Bing if it wasn’t tied to an account, Windows, and the Edge browser. After all, I’m not always on Windows, it would be hell to adapt this to Linux.
+++ I have already encountered the fact that the neural network itself starts to make up nonsense.
Another member summarized the discussion as he sees it, writing in English:
Bard to search the web. ChatGPT to generate content. Both are very limited to write code from scratch. But, as wXXXX said, we must to wait some years to use it in our daily life.
In our next masterclass session, Diana Kelley and I will dive into the different aspects of AI: how and why these “AI hallucinations” happen, what buyers of this technology need to ask vendors who claim to use LLMs, as well as the concerns raised by cybercriminals in this discussion.