
What is vishing, and why is it a rising concern in 2025?

In recent months, there has been a marked increase in cyber-attacks leveraging social engineering tactics, particularly those involving voice-based deception.

In their 2024 Annual Threat Intelligence Report, NCC Group analysts spotlighted how rapid advances in artificial intelligence (AI) are fuelling a new generation of social engineering tactics. AI-driven tools are making phishing schemes more convincing, while deepfake technology and generative language models make voice-based attacks like vishing increasingly difficult to detect and defend against.

While technical controls have improved at blocking phishing emails from reaching inboxes, an attack vector gaining traction is voice phishing, or vishing, which refers to telephone-based social engineering. Threat actors use this method to bypass email controls and directly target employees. Groups such as Scattered Spider have drawn particular attention for their use of this tactic.

As adversaries pivot to more direct and personalised approaches, understanding the evolving threat landscape around vishing is critical for both security teams and employees alike.

How threat actors use social engineering and OSINT

Unlike phishing, vishing uses phone calls as the primary attack vector. Threat actors impersonate trusted sources such as IT Helpdesks, senior executives, or service providers to manipulate targets into revealing sensitive information or taking actions that compromise security.

These attacks are often supported by Open-Source Intelligence (OSINT). Threat actors use social media platforms like LinkedIn to identify employees, understand their roles, and map the organisational structure.

Vishing: 3 scenarios to watch out for

Targeting

Vishing attacks can be as simple as contacting the IT Helpdesk, posing as legitimate employees who have "upgraded their mobile phone and lost access to their MFA app" or "lost access to their password manager."

In organisations without a comprehensive caller policy or with undertrained helpdesk teams, these calls can lead to unauthorised access being granted.


Caller verification policy

Your organisation's policy for verifying callers should be treated as public knowledge. Threat actors can place multiple calls to IT helpdesks over time, gradually piecing the policy together.

While each individual call may seem inconsequential, the cumulative effect allows threat actors to map out verification processes, identify gaps, and ultimately bypass security controls more effectively.

After enumerating the verification policy, threat actors can research staff and the organisation online, or through calls to other users such as the Reception or Customer Services teams, to obtain the information required to pass identity verification with the IT helpdesk.

Verification questions such as "Who is your line manager?", "What was your start date?", or "Can you confirm your job title?" are common but trivial to answer using publicly available information from social media.

Alternatively, threat actors may call end users directly, posing as members of the IT helpdesk team. These calls can coincide with phishing emails or SMS messages (smishing) to increase urgency and credibility. A typical script might involve requesting the MFA PIN from a user:

"Hey Adam, it's Rory from helpdesk. I've been forwarded a ticket from networks as they're experiencing some issues with your MFA device, it appears that there's an error with the syncing of the MFA pin.

“So that we can make sure you don't lose access would you mind opening up your MFA app and letting me know what numbers are currently displaying, it may be worth waiting till the next cycle. I will then make sure everything is in sync."


Caller ID spoofing and deepfakes

Phone number spoofing is simple, allowing threat actors to make their calls appear to come from legitimate and known telephone numbers.

Moreover, real-time deepfake voice cloning can now be used to impersonate individuals within organisations.

With only a few minutes of recorded speech, often gathered from public podcasts, social media, or corporate videos, threat actors can create AI-generated voices that are nearly indistinguishable from the real person.

Best practices to prevent voice phishing attacks

While it is not possible to block all spoofed calls or prevent deepfakes, organisations can mitigate vishing risks with a structured approach:

  1. Policies: A comprehensive policy should be in place outlining how IT helpdesk staff and end users verify the identity of incoming callers, along with the specific steps to follow when handling such requests.
  2. Call verification: Caller verification questions should not rely on information considered public, such as job role, line manager, or start date. They should be unique to each staff member and not include generic questions about the organisation.
  3. Three-way video calls: Include the line manager, the user, and the IT Helpdesk in a three-way video call, in which the line manager asks the user questions only they would know, to establish the user's identity.
  4. Awareness training: Ensure staff can recognise social engineering tactics and verify unusual requests through secondary channels.
  5. Call-back policies: Instruct employees to end calls and call back on numbers recorded internally.
  6. IAM controls: Prevent the helpdesk from resetting passwords or granting access without secondary approval.
  7. Monitoring and reporting: Encourage prompt reporting of suspicious calls.
  8. Trigger alerts: All password reset requests (successful or not) should trigger an email to the user, alerting them that someone is attempting to reset their password (a minimal sketch of this control follows this list).
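
To illustrate the trigger-alert control in point 8, here is a minimal sketch in Python. It assumes a generic internal SMTP relay, and the addresses and function name are illustrative placeholders rather than part of any specific helpdesk or IAM product; a real implementation would hook into your identity provider's reset workflow.

```python
# Minimal sketch: email the account owner whenever a password reset is
# requested on their account, whether or not the reset succeeds.
# SMTP_HOST, ALERT_SENDER, and notify_password_reset are illustrative
# assumptions, not references to a specific product or API.
import smtplib
from email.message import EmailMessage

SMTP_HOST = "smtp.internal.example"       # assumed internal mail relay
ALERT_SENDER = "it-security@example.com"  # assumed security mailbox


def notify_password_reset(user_email: str, requested_by: str, completed: bool) -> None:
    """Alert the account owner that a password reset was requested."""
    msg = EmailMessage()
    msg["From"] = ALERT_SENDER
    msg["To"] = user_email
    msg["Subject"] = "A password reset was requested on your account"
    outcome = "was completed" if completed else "was NOT completed"
    msg.set_content(
        f"A password reset was requested by '{requested_by}' and {outcome}.\n"
        "If you did not ask for this, contact the IT helpdesk immediately "
        "using the internally published number."
    )
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)


# Example usage: the reset workflow would call this after every request.
# notify_password_reset("adam@example.com", requested_by="helpdesk-agent-12", completed=False)
```

Sending the alert on every request, not just successful resets, gives users early warning when someone is probing their account through the helpdesk.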

How to build a culture resilient to cyber-security threats

As technical controls become better at stopping phishing emails from reaching inboxes, and staff have received years of phishing awareness training, threat actors are turning to more creative voice-based social engineering techniques, including deepfakes.

Without strict verification policies, ongoing staff training, and periodic social engineering assessments, organisations leave gaps that threat actors can exploit.

Recognising that the voice on the other end of the phone may not be who they claim to be is the first step in building a more resilient security culture.

To learn more about social engineering prevention, visit nccgroup.com

Get more support with online protection for your business.

This material is published by NatWest Group plc (“NatWest Group”), for information purposes only and should not be regarded as providing any specific advice. Recipients should make their own independent evaluation of this information and no action should be taken, solely relying on it. This material should not be reproduced or disclosed without our consent. It is not intended for distribution in any jurisdiction in which this would be prohibited. Whilst this information is believed to be reliable, it has not been independently verified by NatWest Group and NatWest Group makes no representation or warranty (express or implied) of any kind, as regards the accuracy or completeness of this information, nor does it accept any responsibility or liability for any loss or damage arising in any way from any use made of or reliance placed on, this information. Unless otherwise stated, any views, forecasts, or estimates are solely those of NatWest Group, as of this date and are subject to change without notice. Copyright © NatWest Group. All rights reserved.
