
Artificial intelligence (AI) makes creating new content, such as text or images, as easy as typing a short prompt. That capability means big productivity gains for individuals, but bad actors can exploit the same tools to create elaborate cyber scams.
Evidence suggests cyberattacks are on the rise: between March 2024 and March 2025, Microsoft thwarted approximately $4 billion in fraud attempts, many of them AI-enhanced.
“We’ve seen it where a bunch of people are using AI really well to improve their lives, which is what we want, but in the hands of bad actors, they’re using AI to supercharge their scams,” Kelly Bissell, CVP of Fraud and Abuse at Microsoft, told ZDNET.
Also: Navigating AI-powered cyber threats in 2025: 4 expert security tips for businesses
On Wednesday, Microsoft published its Cyber Signals report titled ‘AI-Driven Deception: Emerging Fraud Threats and Countermeasures’ to help people identify common attacks and learn what preventative measures they can take. You can find a roundup of the attacks identified in the report and tips to stay safe online below.
E-commerce fraud
If you have encountered AI-generated content, whether an image or text, you have likely seen how realistic it can be. Bad actors can use this capability to create fraudulent websites that are visually indistinguishable from real ones, complete with AI-generated product descriptions, images, and even reviews. Since building such a site requires no technical expertise and little time, consumers are more likely to come across these scams than in the past.
There are ways to stay protected, including using a browser with built-in mitigations. For example, Microsoft Edge offers website typo protection and domain impersonation protection, which use deep learning to warn users about fake websites. Edge also has a Scareware Blocker, which blocks scam pages and popup screens.
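Edge's protections rely on trained models, but the core idea behind typo and impersonation detection can be illustrated with a far simpler heuristic. The Python sketch below is a toy example, not Edge's actual implementation; the trusted-domain list and the distance threshold are assumptions chosen for illustration. It flags domains that nearly, but not exactly, match a well-known one, which is the classic typosquatting pattern.

```python
# Toy typosquatting check: flag domains suspiciously close to well-known
# ones. Real browser protections use trained models; this edit-distance
# heuristic only illustrates the underlying idea.

TRUSTED_DOMAINS = ["microsoft.com", "paypal.com", "amazon.com"]  # assumed list

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def looks_like_impersonation(domain: str, max_distance: int = 2) -> bool:
    """Flag domains that nearly, but not exactly, match a trusted one."""
    for trusted in TRUSTED_DOMAINS:
        distance = edit_distance(domain.lower(), trusted)
        if 0 < distance <= max_distance:
            return True
    return False

print(looks_like_impersonation("rnicrosoft.com"))  # True: 'rn' mimics 'm'
print(looks_like_impersonation("microsoft.com"))   # False: exact match is fine
```

A real detector would also weigh signals such as domain age, certificate details, and page content, but even this crude check catches common lookalikes.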
Microsoft also identifies proactive measures users can take. Avoid impulse buying: fraudulent sites often manufacture a false sense of urgency with countdown timers and similar tactics. Avoid payment mechanisms that lack fraud protections, such as direct bank transfers or cryptocurrency. And be cautious about clicking on ads without verifying them first.
“AI for bad can actually target ‘Sabrina’ and what you do because of all your public information that you work on, customize an ad for you, and they can set up a website and pay for an ad within the search engine pretty easily for Sabrina or lots of Sabrinas,” Bissell said as an example.
Employment fraud
Bad actors can create fake job listings in seconds using AI. To make these listings even more convincing, attackers post them on reputable job platforms using stolen credentials and bolster them with auto-generated descriptions and even AI-driven interviews and emails, according to the report.
Microsoft suggests that job-listing platforms implement multi-factor authentication for employer accounts, so bad actors can't hijack legitimate listings, as well as fraud-detection technologies to flag fraudulent content.
Also: How AI agents help hackers steal your confidential data – and what to do about it
Until those measures are widely adopted, users can look out for warning signs, such as an employment offer that requests personal information, like bank account or payment details, under the guise of background-check fees or identity verification.
Other warning signs include unsolicited job offers or interview requests via text or email. Users can take a proactive step by verifying the legitimacy of the employer and recruiter, cross-checking their details on LinkedIn, Glassdoor, and other official websites.
Bissell's rule of thumb: if an offer sounds too good to be true, such as a great salary for minimal experience, it probably is.
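None of these checks requires machine learning; the warning signs above can be encoded as a simple screen. The sketch below is a purely illustrative heuristic, not Microsoft's fraud-detection system; the phrase lists, category names, and `job_offer_red_flags` function are hypothetical choices made for this example.

```python
# Illustrative red-flag screen for job offers, encoding the warning signs
# above as simple keyword checks. This is a toy heuristic, not a real
# fraud-detection system; the phrases and categories are assumptions.

RED_FLAG_PHRASES = {
    "requests payment or bank details": [
        "background check fee", "bank account", "processing fee", "gift card",
    ],
    "too good to be true": [
        "no experience necessary", "guaranteed income", "earn thousands weekly",
    ],
    "unsolicited urgency": [
        "respond immediately", "offer expires today", "act now",
    ],
}

def job_offer_red_flags(message: str) -> list[str]:
    """Return the categories of red flags found in a job message."""
    text = message.lower()
    return [category
            for category, phrases in RED_FLAG_PHRASES.items()
            if any(phrase in text for phrase in phrases)]

offer = ("Congratulations! No experience necessary, guaranteed income. "
         "Just pay the background check fee to start. Act now!")
flags = job_offer_red_flags(offer)
print(f"{len(flags)} red flag categories found: {flags}")  # 3 categories hit
```

Production systems would combine signals like these with account reputation and listing history rather than raw keyword matches, but the principle of scoring multiple independent red flags is the same.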
Tech support scams
These scams trick users into thinking they need technical support services for problems that do not exist through advanced social engineering ploys via text, email, and other channels. The bad actors then gain remote access to the person’s computer, allowing them to view information and install malware.
Even though this type of attack does not necessarily involve AI, it is still highly effective at targeting victims. For example, Microsoft Threat Intelligence observed the ransomware-focused cybercriminal group Storm-1811 posing as IT support from legitimate organizations in voice phishing (vishing) attacks, convincing users to hand over access to their computers via Quick Assist. The group also used Microsoft Teams to launch vishing attacks on targeted users.
Microsoft said it has mitigated such attacks by “suspending identified accounts and tenants associated with inauthentic behavior.” Even so, the company warns that unsolicited tech support offers are likely scams.
The report recommends proactive measures users can take: opting for Remote Help instead of Quick Assist, blocking full-control requests in Quick Assist, and taking advantage of digital fingerprinting capabilities.
Advice for companies
AI is evolving rapidly, and its advanced capabilities can help your organization stay protected. Bissell said every company should consider implementing AI as soon as possible to stay ahead of the curve.
“An important piece of advice for companies is, in this cat and mouse game, they’ve got to adopt AI for defensive purposes now because, if they don’t, then they’re going to be at a disadvantage from the attackers,” said Bissell.