AI in cyberattacks: What does it mean for you as an IT manager?

How AI shifts the threat landscape

Threat actors' use of AI is nothing new, but it has changed the threat landscape in several ways.

AI-driven social engineering takes a step forward.

AI-generated communication intended to deceive is contextually adapted, linguistically accurate, and difficult to distinguish from legitimate messages, even for trained eyes. The Nordics previously had an advantage: our small languages acted as a natural filter. That advantage is disappearing. Large language models handle the Scandinavian languages with ease, so language can no longer be considered a protection.

AI makes cyberattacks adaptive.

Malware adapts its behaviour to the environment it encounters, and automated vulnerability analysis works faster than even the sharpest security teams can react. As a result, the time margin between intrusion and escalation is shrinking.

Attackers scale without compromising precision.

AI blurs the boundary between broad, generic campaigns and targeted attacks. Spear phishing can be conducted on a large scale, tailored to each recipient's role, industry, and organisation. But the scaling does not stop at social engineering. AI scans for and identifies vulnerabilities with a power we have not seen before – automated, continuous, and faster than manual processes can respond. This is perhaps the single greatest shift: attackers gain industrial capacity without industrial cost. It demands a corresponding build-up of awareness and resilience on the organisational side.

The attack surface broadens organisationally.

The ability to adapt means that threats increasingly fall outside the traditional scope of the IT department. This might involve deep fake calls to the chief financial officer, AI-generated supplier invoices matching ongoing projects, or manipulated voice messages or emails from a colleague.

Why existing protections give a false sense of security against AI-driven cyber threats

When the threat landscape shifts, it is reasonable to ask whether your existing security solutions are sufficient. For most organisations, the answer is uncomfortable. Not because protections are lacking, but because they were built for a different type of threat.

When attacks no longer follow known patterns, the accuracy of rule-based detection and signature-based tools, for example, decreases. This also affects vendor relationships. The fact that an MSSP or SOC provider has AI capabilities says little unless they can demonstrate concretely how those capabilities improve detection and response.
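The difference between signature matching and behaviour-based detection can be made concrete with a toy example. The sketch below is not any vendor's implementation: it is a minimal z-score test over an invented metric (hourly failed logins per account). Real anomaly detection in an EDR or SIEM is far more sophisticated, but the principle is the same: flag deviations from a learned baseline instead of matching a list of known-bad patterns.

```python
from statistics import mean, stdev

def is_anomalous(history, value, threshold=3.0):
    """Flag a value that deviates more than `threshold` standard
    deviations from the historical baseline (a simple z-score test)."""
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return value != mu  # flat baseline: any change is a deviation
    return abs(value - mu) / sigma > threshold

# Hourly failed-login counts for one account (illustrative data only)
baseline = [2, 3, 1, 2, 4, 2, 3, 2]
print(is_anomalous(baseline, 3))   # a normal hour -> False
print(is_anomalous(baseline, 40))  # a sudden spike -> True
```

A signature-based tool would miss the spike unless it matched a known pattern; the baseline approach flags it regardless of whether the technique behind it has been seen before.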

Training efforts must be adapted so that they are no longer based on yesterday's scenarios. Looking for spelling mistakes or strange senders does not prepare your staff for attacks that have become smarter. Incident planning is also affected: plans that assume human analysis at every step risk being too slow when the attack chain is automated.

Strategic questions to address internally to meet AI in cyberattacks

The question is therefore not whether you have protection in place, but whether you set the right requirements for it. Here are some questions to raise internally to adapt your cybersecurity to a threat landscape that changes every day.

  • What do we require from our suppliers? Move the conversation from "Do you have AI?" to "Show me how your detection capabilities have evolved over the past year. What types of attacks do you identify today that would have gone unnoticed twelve months ago?" Ask for concrete examples rather than feature lists, and put the same question to every supplier.

  • Is our detection capability adapted for unknown patterns? Behaviour-based analysis and anomaly detection sound good in a product presentation. The question is whether they exist as actual, tested capabilities in your environment.

  • How quickly can we act? We do not mean on paper, but in practice. If an AI-driven attack escalates within minutes, do our decision paths and mandates work then?
  • Who owns security issues that are not technical? CEO fraud, invoice manipulation and deepfakes aimed at the management team are business issues that require business mandates. In many organisations this falls through the cracks: IT sees the threats but lacks decision-making authority, while management holds the mandate but does not see the threats. That gap has to be closed.
  • Do we have control over our own AI? Many organisations have introduced both AI agents and tools without ongoing supervision. Which agents are installed and what do they have access to? Is there behavioural control that ensures they do what they are supposed to and do not leak data? What was intended to increase efficiency can become an attack surface in itself if it is not monitored and maintained.
  • Are we building resilience among employees? Technology is not everything. Humans are still the most common entry point in an attack, and the most underestimated line of defence. But training efforts must match today's threat landscape to build actual resilience.
  • Are we testing ourselves on the right things? If penetration tests and simulations do not include AI-generated social engineering or adaptive attacks, you are testing on a reality that no longer exists.
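The inventory question above can be made concrete. The sketch below is a hypothetical audit routine, assuming you maintain an approved inventory of AI agents and their declared access scopes; the agent names and scope strings are invented for illustration, and in practice the data would come from your asset register and your identity provider's permission records.

```python
# Approved AI agents and the access scopes granted at approval time.
# Names and scopes are hypothetical, for illustration only.
APPROVED = {
    "support-chatbot": {"tickets:read"},
    "code-assistant": {"repos:read"},
}

def audit_agents(deployed):
    """Compare deployed agents against the approved inventory.

    Returns (unknown, excessive): agents never approved, and agents
    whose current access exceeds what was approved for them."""
    unknown, excessive = [], []
    for name, scopes in deployed.items():
        if name not in APPROVED:
            unknown.append(name)
        elif not scopes <= APPROVED[name]:  # subset check on scope sets
            excessive.append(name)
    return unknown, excessive

deployed = {
    "support-chatbot": {"tickets:read", "crm:write"},  # scope creep
    "shadow-summarizer": {"mail:read"},                # never approved
}
print(audit_agents(deployed))  # (['shadow-summarizer'], ['support-chatbot'])
```

Running a check like this on a schedule turns "do we have control over our own AI?" from a rhetorical question into a recurring report: anything unknown or over-scoped is an attack surface you did not decide to have.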

It doesn't start with a tool

No organisation solves this overnight, and that is not the point. The point is that the IT manager who understands how AI changes the playing field, and drives these questions internally, builds a significantly more resilient organisation. It doesn't start with a new tool, but with an updated picture of what you are actually facing.

5 common questions and answers about AI in cyberattacks and what it means for the IT manager

  • How is AI used in cyberattacks today?
    AI is used, among other things, to generate convincing phishing emails, create deepfakes of voice and video, automate vulnerability scanning, and adapt malware in real time to the environment it runs in. This makes attacks faster, more accurate, and harder to detect with traditional protection.
  • Why are traditional security solutions not sufficient against AI-driven attacks?
    Rule-based detection and signature-based tools are designed to recognise known threat patterns. AI-driven attacks dynamically change their behaviour and do not follow these patterns, which means they often pass unnoticed by conventional protective layers.
  • What can I, as an IT manager, do to protect the organisation against AI-based cyber threats?
    Start by evaluating whether your detection capabilities handle unknown patterns, ensure that incident plans work in practice during rapid attack progressions, push to clarify security responsibilities and mandates beyond the IT department, and require suppliers to demonstrate actual AI-driven detection capabilities.
  • How does AI affect social engineering and phishing?
    AI makes it possible to create linguistically perfect and contextually adapted phishing messages on a large scale. The Nordic language filter, which previously provided some natural protection, has largely disappeared, since large language models handle Scandinavian languages without difficulty.
  • What risks do the organisation's own AI tools pose?
    AI agents and AI tools introduced without ongoing oversight can themselves become an attack surface. If there is no control over what the tools have access to and how they handle data, there is a risk of data leakage and unwanted behaviours that an attacker can exploit.