The impact of artificial intelligence developments on cybersecurity

In order to thwart potential cyberattacks and better protect businesses and individuals against cyberthreats, security players are closely monitoring developments in AI

by Konstantin Berlin, June 18, 2023

Progress in artificial intelligence (AI) in recent years has been dazzling, and AI now sits at the heart of the technological landscape. According to Gartner, investment in AI software products was expected to reach $62bn in 2022 – only a tiny fraction of the technology's potential. As AI capabilities become more accessible and give cybersecurity specialists new defensive tools, malicious actors are also taking advantage of this democratised access. In order to thwart potential cyberattacks and better protect businesses and individuals against cyberthreats, security players are closely monitoring developments in AI and measuring its influence on cybersecurity.

Defenders do not have a monopoly on innovation

Ten years ago, no one would have imagined AI solutions reaching their current levels of complexity and sophistication, let alone becoming integrated into such a wide range of technological solutions. Gartner predicts that generative AI alone will create 10 per cent of all data generated by 2025, up from just 1 per cent today. This surge in AI uptake will be driven by increasingly democratised access to AI models and by falling computational costs for training and running them.

Researchers at organisations with large compute clusters, such as Google and OpenAI, have shown that training AI models with huge parameter counts on large amounts of data can push the performance of generative models to levels almost indistinguishable from human-generated content. These models have been demonstrated to generate realistic-sounding articles, working source code from text descriptions, and even artistic images capable of winning art competitions. At the same time, innovations such as the Stable Diffusion model are rapidly lowering the cost of training and using such models.

With costs dropping rapidly, cybercriminals will not be far behind in adopting these new AI approaches into their toolsets. Such adoption will lead to new types of low-cost, harder-to-detect attacks. For example, cybercriminals could cheaply and automatically generate millions of realistic fake social profiles or phishing websites that a human would not spot at first pass. New types of AI models can now synthesise realistic voice or video from only a small set of samples, making targeted "whaling" attacks much easier.

Moving forward, while protecting yourself now

New research will be needed into ways of effectively detecting fake, AI-generated content. This is already an active area of research, with defenders building AI models of their own to spot machine-generated material. At the same time, some security approaches and expectations may need to change. For example, expecting end users to identify attacks on their own, without additional support tools, will become even less practical in the face of hyper-realistic, artificially generated fake content.
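To make the "AI against AI" detection idea above concrete, here is a minimal, hedged sketch of a classifier that flags machine-generated text. The tiny corpus, the TF-IDF features and the logistic-regression model are illustrative assumptions only; production detectors are trained on far larger datasets and typically rely on transformer-based models.

```python
# A minimal sketch of training a classifier to flag machine-generated text.
# The corpus, features and model below are illustrative assumptions, not a
# production detector; real systems use far larger datasets and stronger models.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny placeholder corpus: label 1 = machine-generated, 0 = human-written.
texts = [
    "Our synergistic platform leverages cutting-edge paradigms to empower growth.",
    "I finally fixed the leaking tap this weekend; it took three trips to the shop.",
    "As an AI language model, I can certainly help you draft that announcement.",
    "The meeting ran long again; the notes are in the usual folder.",
]
labels = [1, 0, 1, 0]

# Word and bigram frequencies feed a simple linear classifier.
detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
detector.fit(texts, labels)

# Score a suspect message: the second column is the probability of "machine-generated".
suspect = "Certainly! Here is a concise summary of the quarterly results."
print(detector.predict_proba([suspect])[0, 1])
```

In practice, a classifier like this would be only one signal among many, which is why the layered-defence advice below still applies.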
Meanwhile, while we wait for new methods that can detect these deepfake-driven attacks as early as possible, the current cybersecurity advice remains as true as ever: keep multiple layers of protection active at every stage of the cyberattack, because even the cleverest of adversaries will eventually need to do something so clearly malicious that no amount of AI magic will be able to disguise it.

Konstantin Berlin is the head of AI at Sophos