
Artificial Intelligence (AI) – The Perfect Tool or the Perfect Risk?

One of the major topics nowadays is AI. Some praise it as the savior of our digital life, while others think of the typical scenario from the Terminator movies, where an AI – Skynet – takes over control and begins a war against humanity.

Okay, to be honest, both scenarios are a bit exaggerated.

Let’s look at the facts: AI is currently being built into more and more use cases. You can find it in Microsoft tools (in the form of CoPilot), in the well-known browser-based solution ChatGPT, or in offerings from emerging companies like xAI. If you visit trade fairs, you will see “AI” mentioned here and there, and of course a product is only considered state-of-the-art and cool if something on it is labeled “AI.” You can also observe this trend by doing a Google search for “top buzzword 2024.” At least in our search, the top result was “AI.” 😊

But don’t worry – we are neither trying to portray “AI” in a bad light nor to promote it as a cure-all. Since we have been asked about it during several cybersecurity trainings and events, we want to shed some light on the often unclear discussions.

Understanding the Terms

Quite often, people and companies mix up AI-related terms – sometimes even deliberately. Here are some definitions:

Artificial Intelligence (AI) can be broadly classified into Strong AI and Weak AI, while Machine Learning (ML) serves as a core technology within AI.

  • Strong AI (Artificial General Intelligence – AGI)

Strong AI refers to an advanced form of AI that possesses human-like intelligence, reasoning, and self-awareness. It can learn and apply knowledge across different domains without human intervention. However, AGI remains theoretical and has not yet been achieved.

  • Weak AI (Narrow AI)

Weak AI is designed for specific tasks and lacks general intelligence or consciousness. It powers many real-world applications, such as virtual assistants (Siri, Alexa), recommendation systems, and self-driving cars. These AI systems operate within predefined boundaries and do not possess independent thinking.

  • Machine Learning (ML)

ML is a subset of AI that enables computers to learn from data and improve their performance over time without being explicitly programmed. It is the driving force behind many Weak AI applications, including image recognition, fraud detection, and predictive analytics (a small example follows below).
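
To make “learning from data instead of explicit programming” tangible, here is a minimal, purely illustrative Python sketch using scikit-learn (our tooling choice for this post; the transaction features, values, and the fraud scenario are all invented):

```python
# Minimal supervised-learning sketch: the classifier "learns" that large
# transfers at odd hours are suspicious, purely from labeled examples.
from sklearn.ensemble import RandomForestClassifier

# Toy training data: [amount_eur, hour_of_day]; label 1 = fraudulent.
# All values are invented for illustration only.
X_train = [
    [25, 14], [40, 10], [15, 18], [60, 12],       # normal daytime purchases
    [9500, 3], [8000, 2], [12000, 4], [7000, 1],  # large night-time transfers
]
y_train = [0, 0, 0, 0, 1, 1, 1, 1]

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The model was never given an explicit rule, yet it flags a new
# night-time transfer as suspicious based on the learned pattern.
print(model.predict([[9000, 3], [30, 15]]))  # -> [1 0]
```

The point is not the specific model, but that the decision rule emerges from the data rather than from a programmer writing it down.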

Returning to our introduction, what we currently see labeled as “AI” (e.g., ChatGPT, CoPilot) is Weak AI, while Skynet would be considered Strong AI or even an Artificial Superintelligence, which goes some steps beyond Strong AI.

Phew, so at least in the next couple of years, machines will not take over control of the planet. 😉

AI and Cybersecurity: Should We Worry?

How does this apply to cybersecurity and Weak AI? Should we be concerned? Will AI-based tools break into our systems in a fully automated way?

As AI technology advances, cybersecurity faces both new opportunities and serious challenges. Weak AI can power modern cybersecurity tools that help to detect and prevent cyber threats faster than ever. However, cybercriminals are also leveraging AI to create automated hacking systems, deepfake attacks, and AI-generated phishing scams.
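
To give a flavor of the defensive side, here is a small, purely illustrative Python sketch using scikit-learn’s IsolationForest – the kind of unsupervised anomaly detection that many “AI-powered” security tools build on. The feature names and all numbers are invented for this example:

```python
# Sketch of the defensive side: unsupervised anomaly detection over
# login events, a technique behind many "AI-powered" monitoring tools.
from sklearn.ensemble import IsolationForest

# Toy feature vectors: [logins_per_hour, failed_login_ratio_percent]
# for a handful of accounts; the numbers are made up for illustration.
normal_activity = [[3, 1], [5, 0], [4, 2], [2, 1], [6, 3], [3, 0]]

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(normal_activity)

# A burst of logins with many failures (e.g., credential stuffing)
# falls outside the learned baseline and is scored as an anomaly (-1).
print(detector.predict([[4, 1], [120, 80]]))  # -> [ 1 -1]
```

Commercial detection products combine many such models with far richer telemetry, but the underlying principle – learn a baseline, flag deviations – is the same.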

So there is an ongoing digital arms race, in which you hopefully stay ahead of the attackers with your defences.

Observation #1: Changes in the Threat Landscape

Fully automated AI-driven attacks are not yet widespread, and because of the characteristics of Weak AI, they usually follow predefined methods, which limits their risk.

However, it is important to mention that AI tools have made hacking easier than ever before. Around the year 2000, attackers needed a strong skillset to identify vulnerabilities, be familiar with toolkits, and perform complex command-line tasks. This changed significantly with the rework of the Metasploit framework around 2010 and the introduction of Kali Linux in 2013, which together provided a ready-to-use “Swiss Army Knife” for hackers. Attackers still needed to know how to use the tools and tailor exploits, but execution became much simpler because the toolkit already set up elements like listeners for reverse shells.

Now, with AI, the main requirement is to write a good query, and the AI can provide step-by-step guidance. Even individuals without any IT background can ramp up their attacks in no time.

A small example: one could ask ChatGPT, “How can I execute the EternalBlue exploit via Metasploit on Kali to see if my machine is vulnerable?” Besides a detailed explanation, it might also describe what to do post-exploitation, such as retrieving the SAM database (the Windows password database) or how to obtain passwords in clear text. This can be done in under two minutes with the free version of ChatGPT – scary, right?

You might think, “Yes, but you asked very specific questions…” We tried the same approach for S7 PLCs, and after a brief warning about the legality of hacking, ChatGPT provided information on exploitable vulnerabilities and a step-by-step guide on how to use them.

It should be evident that the likelihood of attacks has increased significantly due to AI, as it removes much of the research and know-how required for simpler, well-known attacks, making them accessible to almost anyone.

Observation #2: Awareness Is More Important Than Ever

With the rise of AI, not only are technical aspects changing – social factors are also affected. Forbidding AI usage entirely in an organization is not a real solution, as it could have significant impacts on business competitiveness. Think about what happened when computers replaced typewriters. The same could occur with organizations that refuse to consider AI in their operations.

When talking about awareness and AI, two areas are particularly important:

  • AI Usage by Attackers in Social Engineering
  • Data Leakage Through the Use of AI

Let’s start with Social Engineering. You might believe you are well-versed in cybersecurity, but imagine suddenly receiving a video call from an unknown number. You answer, and there is your supervisor speaking about a certain topic, then asking you to do something. Would you be suspicious? Perhaps you would verify via their known number, but consider all the people in an organization who are not “Cybersecurity Pros.” They could easily fall victim to deepfakes and AI-based social engineering attacks.

Data Leakage by Accident is another issue. AI can be a very helpful tool, but users must remember that public AI systems learn from the queries entered into them. This can lead to situations where confidential information – like source code – could later be revealed to another user if it helps solve their request. Of course, this depends on the AI in use, but the risk exists, and users need to be aware of it. In tests with ChatGPT, it was confirmed that personally identifiable information, confidential source code, and even confidential emails from C-level executives have been shared in queries.
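
One pragmatic safeguard – alongside clear usage policies and training – is to scrub obvious secrets from prompts before they ever leave the organization. The following Python sketch is purely illustrative; the regex patterns and the example prompt are invented, and real data-loss-prevention tooling is far more thorough:

```python
# Illustrative sketch: redact obvious secrets before a prompt is sent
# to a public AI service. Patterns and the example prompt are invented.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "<EMAIL>"),           # e-mail addresses
    (re.compile(r"\b(?:\d[ -]?){13,19}\b"), "<CARD-OR-ACCOUNT>"),  # long digit runs
    (re.compile(r"(?i)(api[_-]?key|password)\s*[:=]\s*\S+"), r"\1=<REDACTED>"),
]

def scrub(prompt: str) -> str:
    """Replace every matched secret with a harmless placeholder."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

raw = "Summarize this ticket: user jane.doe@example.com reports password=Tr0ub4dor not working"
print(scrub(raw))
# -> Summarize this ticket: user <EMAIL> reports password=<REDACTED> not working
```

Such filtering is no substitute for awareness, but it lowers the chance that a password or a customer’s e-mail address ends up in someone else’s training data.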

Yes, AI systems can be a great benefit to workplace processes, but we must ensure confidentiality is maintained!

A quick note on another topic: AI can also make mistakes. We have tested AI for reviewing some documents and found that the answers were not always convincing. We encourage everyone not to blindly trust what the AI tells you.

Conclusion

If you would like to learn more about AI and cybersecurity, please feel free to reach out to us directly. We plan to publish more insights soon, including an in-depth look at the disruptive potential of AI. Stay tuned!
