
Unmasking the Dark Side of AI: Social Engineering through ChatGPT

Imagine yourself as a developer constantly trying to improve workflows and meet your company's rising expectations. What if the AI and automation tools you rely on to work more efficiently were turned against you? With ChatGPT, that is exactly what is happening.

The most recent Netskope Cloud and Threat Report indicates that social engineering remained the most common method of malware delivery during Q1 2023, with attackers using chat, email, collaboration tools, and search engines to trick their targets into downloading malware. These tactics take advantage of topical issues or significant events to disguise harmful software as legitimate files or web pages, convincing the target that it is benign.

With all the buzz surrounding ChatGPT, it was only a matter of time before attackers began to exploit the excitement around the AI chatbot. For example, campaigns might distribute malware while impersonating ChatGPT, or phishing URLs might promise free access to the service or other AI tools.

This lucrative opportunity spurred attackers' inventiveness, and multiple malicious OpenAI chatbot-themed campaigns have been launched so far in 2023.


ChatGPT Isn’t a Magical Solution

At least in the view of the general public, ChatGPT is at the cutting edge of the generative AI trend. It is a language model created by OpenAI and trained on a vast amount of text data. This state-of-the-art AI technology generates human-like responses to queries or prompts using a deep neural network, enabling a more complex and nuanced interaction between humans and machines.

However, as generative AI has grown in popularity, especially ChatGPT, there are also worries that the technology could be abused for malicious purposes such as hacking. Recently, Italy became the first Western nation to temporarily ban ChatGPT, and most businesses understand how critical it is to put new security measures in place to thwart attacks.


Social Media is The Optimal Catalyst

After ChatGPT's initial release at the end of 2022, numerous malicious campaigns utilising ChatGPT were identified beginning in late February 2023. These campaigns distributed malicious content through a variety of methods, including fake social media pages linking to typosquatted or misleading domains that mimicked the legitimate OpenAI website. There were also fraudulent ChatGPT subscription phishing pages built to steal credit card information, as well as the customary swarm of malicious mobile applications that use the ChatGPT logo and promise AI functionality to lure users into downloading them.
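Lookalike domains of this kind can often be caught with simple string-similarity checks before a user ever reaches them. The sketch below is illustrative, not a production detector: the function name, the 0.8 threshold, and the use of Python's difflib ratio are all assumptions.

```python
from difflib import SequenceMatcher

LEGITIMATE = "openai.com"

def looks_typosquatted(domain: str, target: str = LEGITIMATE, threshold: float = 0.8) -> bool:
    """Flag domains that closely resemble, but are not, a legitimate domain."""
    domain = domain.lower().strip(".")
    if domain == target or domain.endswith("." + target):
        return False  # the real domain or a genuine subdomain of it
    # Similarity ratio in [0, 1]; near-matches above the threshold are suspicious
    ratio = SequenceMatcher(None, domain, target).ratio()
    return ratio >= threshold

print(looks_typosquatted("openai.com"))   # the real domain
print(looks_typosquatted("opena1.com"))   # one-character lookalike
print(looks_typosquatted("example.com"))  # unrelated domain
```

In practice a blocklist feed or edit-distance check against a whole brand list would replace the single hard-coded target, but the principle is the same: near-matches to a high-value domain deserve scrutiny.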

For the record, the first attempts to use ChatGPT for illegal activity were discovered in January 2023. Rather than launching fraudulent campaigns with ChatGPT as the lure, these primarily sought to weaponise the AI tool itself: threat actors reportedly concentrated on bypassing its restrictions to develop new malicious tools and polymorphic malware.


Seizing Social Media Accounts

Attackers have not restricted themselves to creating phoney social media pages; they have also used ChatGPT-themed attacks to abuse browser extensions. One intriguing campaign, uncovered in March 2023, used a malicious fork of the open-source extension “ChatGPT for Google” containing code designed to harvest Facebook session cookies. Notably, the malicious extension was hosted in the official Chrome Web Store, where it was downloaded more than 9,000 times before being taken down, and was advertised through deceptive sponsored search results on Google. This is another illustration of how SEO poisoning is once again becoming popular with threat actors, and it wasn't the only Google Search Ads campaign with a ChatGPT theme.
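An extension cannot steal session cookies without declaring broad permissions in its manifest, so a quick manifest audit is a useful first line of review before installing. A minimal sketch, assuming a Chrome-style manifest.json; the risky-permission list and helper name are illustrative choices, not an official checklist:

```python
import json

# Permissions that warrant extra scrutiny in an extension review
RISKY = {"cookies", "webRequest", "history", "tabs"}

def audit_manifest(manifest_json: str) -> list[str]:
    """Return human-readable warnings for risky permissions in an extension manifest."""
    manifest = json.loads(manifest_json)
    perms = set(manifest.get("permissions", [])) | set(manifest.get("host_permissions", []))
    warnings = [f"requests '{p}' permission" for p in sorted(RISKY & perms)]
    if "<all_urls>" in perms or "*://*/*" in perms:
        warnings.append("can read and change data on all websites")
    return warnings

sample = ('{"name": "ChatGPT for Google (fork)", '
          '"permissions": ["cookies", "storage"], '
          '"host_permissions": ["*://*.facebook.com/*"]}')
for warning in audit_manifest(sample):
    print(warning)
```

A "cookies" permission combined with facebook.com host access is exactly the combination the malicious fork needed, which is why pairing permissions with host scope is more telling than either on its own.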


ChatGPT-Powered Malware

Publishing ads that appear legitimate and encourage free downloads of malware posing as legitimate software is one of the most effective ways to exploit a compromised Facebook account. In another ChatGPT campaign, detected in mid-April, threat actors used hijacked community or company Facebook pages to market and distribute the malware-as-a-service RedLine stealer, disguised as a ChatGPT client and its counterpart Google Bard. Coming full circle: Facebook accounts have been hijacked using the buzz surrounding ChatGPT, and those accounts have then been used to advertise malware downloads.


Using Trojanized Installers (such as ChatGPT) to Distribute the Bumblebee Malware

Another typical tactic used by attackers is to embed malware inside the installers of trustworthy applications. With all the attention being paid to ChatGPT, it was only a matter of time before threat actors began distributing trojanised installers for the OpenAI chatbot. A similar campaign was found later in April, and this time the infection chain depended on malicious Google Ads that directed users to bogus download pages for trojanised versions of well-known programmes such as Zoom, Cisco AnyConnect, Citrix Workspace, and ChatGPT. The bogus pages would deliver the malicious loader known as Bumblebee, which is typically used to gain initial network access and launch ransomware attacks.


Taking Credentials Saved in Google Chrome

Consider that between November 2022 and early April 2023, the number of newly created and squatted domains related to the AI chatbot grew by 910% per month. That gives you an idea of the scale of the growth in ChatGPT-themed abuse. Additionally, researchers from Meta have identified and blocked around 10 malware families using ChatGPT and other AI-related themes since March 2023. This pattern is persisting and will certainly result in the regular discovery of new themed attacks. A similar campaign was identified in late April, delivering a new info stealer that imitated a ChatGPT Windows desktop client and was able to copy saved login information from the Google Chrome login data file.


Constantly Poisoning Google Ads With ChatGPT-Themed Malware

As we've seen, the most effective way to spread malware attacks themed around ChatGPT and other AI-based tools like Midjourney is through the misuse of Google Ads. This was possibly one of the earliest methods of exploiting ChatGPT as a lure to persuade people to install malware. By February 2023, it had already been established that such a campaign was being run by the financially motivated threat actor known as Void Rabisu, with the intention of distributing the RomCom implant, a backdoor used to deliver ransomware that has also been deployed against Ukraine.

A malicious advertising campaign themed around AI technologies such as ChatGPT, Midjourney, and DALL-E was spotted in Google's search engine in May 2023, according to multiple security researchers. The malicious advertisements lured users into running a fake installer, which ultimately installed the RedLine info stealer (again). Interestingly, this campaign used a variety of evasion tactics; for instance, a non-malicious variant of the domain was served if the link did not originate from the Google Ads redirector. Additionally, the campaign misused Telegram's API for command-and-control (C&C) communication, an evasion strategy that blends malicious traffic with legitimate traffic, improving the likelihood of escaping detection.
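Telegram Bot API calls have a recognisable URL shape (api.telegram.org/bot&lt;token&gt;/&lt;method&gt;), which makes them screenable in web proxy logs even when the traffic itself is HTTPS to a legitimate host. A minimal sketch of such a screen; the log line format, helper name, and example token are hypothetical:

```python
import re

# Bot API URLs embed a numeric bot ID, a colon, and a token before the method name
BOT_API = re.compile(r"api\.telegram\.org/bot\d+:[A-Za-z0-9_-]+/\w+")

def find_bot_traffic(log_lines):
    """Return proxy log lines that contain Telegram Bot API calls."""
    return [line for line in log_lines if BOT_API.search(line)]

logs = [
    "GET https://api.telegram.org/bot123456:AAExampleToken/sendDocument",
    "GET https://web.telegram.org/  (ordinary user traffic)",
]
for hit in find_bot_traffic(logs):
    print(hit)
```

Bot API hits from hosts that have no business automating Telegram are worth investigating; ordinary Telegram web or app usage does not match this pattern, which keeps the false-positive rate manageable.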


Attacks Are Not Just Malware-Based

Financial fraudsters quickly adapted their methods to jump on the chatbot train, planning sophisticated investment frauds that target people seeking an AI-powered financial specialist to create an additional source of passive income. The ChatGPT hook proved compelling for these criminals too.

Early in March 2023, one such campaign was uncovered targeting users in numerous European nations. It combined a phishing theme, the abuse of human frailty (in this case, the promise of easy money), and the enthusiasm surrounding ChatGPT's capabilities. The attacks began with unsolicited emails containing links to a fake OpenAI website. After being quickly assessed by a fake chatbot, the user was forwarded to a call centre that offered unlikely earnings in exchange for an entry fee of at least €250, leaving victims vulnerable to further attacks by the fraudsters.


What Can You Do?

Attackers are continually looking for fresh, high-profile events to exploit as lures for their operations, since social engineering attacks dominate the current threat landscape. The introduction of ChatGPT (and other AI tools) was a golden opportunity, and threat actors wasted no time seizing it. This poses a risk for people and businesses as they try to stop these attacks and identify new vulnerabilities. To lessen the danger, take the following actions:

  1. Make users aware of the social engineering methods that may be used against them and the company. At the corporate level, establish a clear method and channel for users to quickly report, and get feedback on, anything they find suspicious.
  2. Examine every HTTP and HTTPS download, together with all web and cloud traffic, to stop malware from entering the network either directly or through a compromised endpoint.
  3. Configure policies to restrict downloading from applications not used in the organisation, limiting the danger to only the necessary applications and instances (company vs. personal).
  4. Prevent downloads of risky file types from newly registered domains, newly observed domains, and other risky categories to lower the overall risk surface.
  5. Make sure that all security defences cooperate and exchange intelligence to streamline security operations.
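The newly-registered-domain rule in the list above boils down to a simple age-based policy check. In practice the registration date would come from a WHOIS/RDAP lookup or a threat-intelligence feed; in this sketch it is passed in directly, and the 30-day threshold is only an example value:

```python
from datetime import date, timedelta

MIN_AGE_DAYS = 30  # example policy: block domains registered less than 30 days ago

def should_block(creation_date: date, today: date) -> bool:
    """Return True if the domain is too newly registered to trust downloads from."""
    return (today - creation_date) < timedelta(days=MIN_AGE_DAYS)

today = date(2023, 5, 1)
print(should_block(date(2023, 4, 20), today))  # registered 11 days ago -> blocked
print(should_block(date(2015, 4, 30), today))  # long-established -> allowed
```

Most secure web gateways expose this as a built-in "newly registered domain" category, so a home-grown check like this is usually only needed for custom tooling.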

It is clear that ChatGPT-themed attacks have grown in favour among attackers and, much like phoney utilities and banking schemes, will remain in their toolbox for years to come. However, many campaigns have distinguishable characteristics, and putting good security practices in place will help businesses defend against these attacks.


ChatGPT is Not Going Anywhere – and Neither Are the Threats

With better efficiency, velocity, and task management, ChatGPT and generative AI have the power to transform the software and cybersecurity sectors. However, the technology also carries significant risks, especially in the hands of attackers who can exploit its potential to produce spear-phishing emails, write malware, and design convincing ransomware campaigns.

As enterprises increasingly adopt these technologies, it is vital that we prioritise cybersecurity measures and develop strong defences in light of the evolving threat landscape. With the correct strategy, we can make use of ChatGPT's and generative AI's advantages while reducing risks and ensuring a safe and secure cyber ecosystem.
