Dating strategies are gaining a new dimension. According to a survey by cybersecurity company Kaspersky, 3 out of 4 users of dating apps (such as Tinder, Bumble and Inner Circle) declared an interest in using ChatGPT or other AI platforms to improve their flirting skills and increase their chances with potential romantic partners.
According to the study, men are the most drawn to the technology, with 54% saying they would use the tool to come across as smarter and funnier. Among women, the figure was 51%.
Ironically, despite the high interest in the tactic, most respondents said they were “concerned” about its use, calling the practice “dishonest” and warning that it could usher in a new era of AI-powered catfishing.
The hidden dangers of ChatGPT
Users’ reservations are, in fact, well founded. As recent news shows, artificial intelligence tools have been used not only in romance scams but in a variety of other online frauds. Below are the most common cases.
Catfishing
Catfishing involves creating fake online identities to lure people into relationships – often for personal gain or simple emotional manipulation. With ChatGPT, criminals can build fake profiles with the help of artificial intelligence and develop more realistic conversations and backstories to win over their targets.
Once they have gained their victims’ trust, fraudsters exploit it to ask for money or personal information, then take advantage of the situation to carry out their malicious schemes.
Phishing
Another common way of exploiting ChatGPT for malicious purposes is the creation of phishing emails. While phishing itself is nothing new, artificial intelligence platforms can craft messages so compelling that even experienced cyber threat intelligence professionals are sometimes deceived.
The main problem with these fake communications is the potential risks they pose to users. Through phishing, after all, criminals can cause data breaches, financial losses, malware propagation, and even reputational damage.
Malware
As if that weren’t enough, ChatGPT can also be used for other harmful activities, such as creating malicious programs, as demonstrated by cybersecurity specialist Leonardo La Rosa, who created a “computer virus factory” using the platform.
According to La Rosa, it was possible to develop code for keyloggers (used to capture everything users type) and ransomware (used to “hijack” documents on a device and then demand a “ransom payment”), as well as techniques for hiding these malicious files from antivirus software.
How to protect against threats created by ChatGPT?
Protecting against the dangers posed by ChatGPT requires a combination of vigilance, awareness, and cybersecurity tools. First of all, be cautious when talking to strangers on the internet – whether the conversation is romantic or not.
Second, it is crucial for users to stay up to date with new technological trends, especially when it comes to online scams. Knowing what criminals are doing today makes it easier to spot the warning signs in your online communications.
Finally, cybersecurity tools can help users avoid downloading malware and steer clear of fraudulent websites that imitate genuine pages.
By combining these tactics, users can safeguard themselves from the major risks posed by ChatGPT. The risks of romantic disappointment, however, are another matter entirely…