Tracking the Rapidly Evolving AI Cybersecurity and Regulatory Landscapes

It’s been almost two years since the initial release of ChatGPT, one of the world’s most accessible Large Language Models (LLMs), and the advancement of such public-facing generative artificial intelligence (GenAI) models has only accelerated. Accelerating in tandem are calls to regulate AI: to slow its development and think more deeply about the technology’s deployment and any expected, or more importantly, unexpected impacts it might have on society and the environment.

AI-Driven Cybersecurity Risks

That said, AI will continue to develop regardless of policy or regulation at the state and organizational levels, so it’s imperative to understand the vulnerabilities both to and from AI: how this powerful new technology might be used to exploit, or to harden, the digital infrastructures society depends on to function. From this understanding, a clearer picture emerges of the threats within the rapidly evolving, AI-driven security and regulatory landscapes.

AI-Driven Threats to AI

Let’s start by looking at two AI-driven threats to AI identified by the UK Department for Science, Innovation and Technology, whose study found that vulnerabilities to AI at each stage of its lifecycle had not yet been thoroughly evaluated. From the design stage through development, deployment, and ongoing maintenance, the study breaks down how malicious actors can poison data and exploit different aspects of AI, enabling cyberattacks on companies and agencies that cause denial of service and breaches of personal data, “potentially resulting in financial losses, reputational damage, or privacy violations.”

The first AI-driven threat to AI arises in the development stage of the lifecycle: insecure AI code recommendations. Given the resource constraints most IT staffs face, turning to GenAI tools to support coding projects is quickly becoming the norm. Sometimes, the Department’s study explains, tools like GitHub Copilot “may inadvertently learn insecure coding patterns, leading to recommendations of code with security vulnerabilities.” There are several procedures an agency or organization can put in place to reduce the chances of implementing insecure, AI-generated code, and it all starts with putting a human in the loop (HITL) of decision-making.
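To make the risk concrete, here is a hypothetical sketch (in Python, using the standard sqlite3 module) of the kind of pattern a coding assistant might suggest alongside its safer equivalent. The function names and the users table are illustrative, not drawn from any real Copilot output.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str) -> list:
    # The kind of pattern an assistant can pick up from its training
    # data: building SQL by string interpolation, which leaves the
    # query open to SQL injection (e.g. username = "' OR '1'='1").
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str) -> list:
    # The safer equivalent: a parameterized query lets the database
    # driver handle escaping of the untrusted input.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

A reviewer who knows to look for string-built SQL will catch the first version immediately, which is exactly the value of keeping a human in the loop.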

The most obvious procedure to put in place is a robust code review, led by the most experienced team member and with several fail-safes in place. Whether your team plans on manual review or some form of automated static analysis to assess potential security vulnerabilities, making sure multiple people have signed off on code before integrating it into your live environment is an essential step in securing your platform.
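For teams weighing the automated option, here is a minimal, illustrative sketch of what static analysis does under the hood: walking a program’s syntax tree and flagging calls that deserve a reviewer’s attention. The SUSPECT_CALLS watch list is an arbitrary assumption for the example; in practice you would reach for an established tool such as Bandit or Semgrep rather than rolling your own.

```python
import ast
import sys

# An illustrative watch list; real analyzers ship hundreds of rules.
SUSPECT_CALLS = {"eval", "exec", "system", "popen"}

def flag_suspect_calls(source: str, filename: str = "<code>") -> list[str]:
    """Walk the syntax tree and report calls to risky functions."""
    findings = []
    for node in ast.walk(ast.parse(source, filename)):
        if isinstance(node, ast.Call):
            # Handle both bare names (eval) and attributes (os.system).
            name = getattr(node.func, "id", None) or getattr(node.func, "attr", "")
            if name in SUSPECT_CALLS:
                findings.append(f"{filename}:{node.lineno}: call to {name}()")
    return findings

if __name__ == "__main__":
    for path in sys.argv[1:]:
        with open(path) as f:
            for finding in flag_suspect_calls(f.read(), path):
                print(finding)
```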

The second vulnerability stems from model decay and concept drift, or “the deterioration of an AI model’s effectiveness over time due to shifts in the underlying data.” Without proper upkeep, and given the naturally decreasing performance of AI models, attackers can manipulate the predictions a model makes, intentionally creating confusing and meaningless results. To track how much your AI model might be drifting, IBM recommends automating continuous monitoring of your model for drift, including tests throughout its lifecycle to ensure your tool is performing as it should. Regularly verifying that your AI model is performing up to par is essential for catching early signs of decay or drift.
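As an illustration of what such automated monitoring can look like, the sketch below compares a live feature’s distribution against a training-time baseline using a two-sample Kolmogorov-Smirnov test. The synthetic data, the single monitored feature, and the significance threshold are all assumptions made for the example.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_has_drifted(baseline: np.ndarray, live: np.ndarray,
                        alpha: float = 0.01) -> bool:
    """Flag drift when the live window's distribution differs
    significantly from the training-time baseline."""
    _statistic, p_value = ks_2samp(baseline, live)
    return p_value < alpha

# Synthetic data for the example: the live window's mean and spread
# have shifted relative to what the model was trained on.
rng = np.random.default_rng(42)
baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_window = rng.normal(loc=0.4, scale=1.2, size=1_000)

if feature_has_drifted(baseline, live_window):
    print("Drift detected: investigate and consider retraining.")
```

In production, a check like this would run on a schedule for every monitored feature, with alerts feeding into your retraining pipeline.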

Predictions for How GenAI Will Drive Innovation in Cyberattacks and Defenses

Turning now to how GenAI models can be powerful tools for conducting more sophisticated cyberattacks and developing targeted malware, let’s look at three key judgements from the UK National Cyber Security Centre (NCSC). The NCSC evaluated AI’s broader influence on the cyber threat landscape, judging how likely different impacts might be, from reconnaissance to social engineering. One key judgement is that “AI will almost certainly [95 – 100% likelihood of occurring] increase the volume and heighten the impact of cyberattacks over the next two years. However, the impact on the cyber threat will remain uneven.” The impact will be uneven because AI will also be leveraged by state actors, government agencies, companies, and schools to defend their staff’s personal data and other proprietary digital assets.

Implementing procedures like adversarial testing and red team exercises before integrating your model into the live environment is one effective way developers have combated malicious manipulation of AI models. What do those methods actually do? During development, trusted partners or internal teams use AI tools to try to break or manipulate your model before it goes live, which is an effective way of assessing and patching existing vulnerabilities. Beyond pre-launch testing, leveraging AI-powered threat detection and response tools, along with behavioral analytics for continuous monitoring, to detect anomalies in your model or the live environment it operates in is another effective approach.
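As a rough illustration of the pre-launch side, the sketch below probes a stand-in model with small random perturbations and counts how often its decision flips, a crude robustness signal. The toy linear classifier, the epsilon, and the trial count are all assumptions for the example; real red team exercises go far deeper than this.

```python
import numpy as np

def predict(x: np.ndarray) -> int:
    """Stand-in for the model under test: a toy linear classifier."""
    weights = np.array([0.8, -0.5, 0.3])
    return int(x @ weights > 0)

def count_prediction_flips(x: np.ndarray, trials: int = 100,
                           epsilon: float = 0.05) -> int:
    """Apply small random perturbations to an input and count how
    often the model's decision changes. Inputs that flip easily
    mark regions where an attacker has the most leverage."""
    rng = np.random.default_rng(0)
    original = predict(x)
    flips = 0
    for _ in range(trials):
        noise = rng.uniform(-epsilon, epsilon, size=x.shape)
        if predict(x + noise) != original:
            flips += 1
    return flips

# An input near the decision boundary is the interesting case.
sample = np.array([0.1, 0.2, -0.1])
print(f"{count_prediction_flips(sample)}/100 perturbations flipped the prediction")
```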

A second key judgement is that “AI will almost certainly make cyberattacks against the UK more impactful because threat actors will be able to analyze exfiltrated data faster and more effectively and use it to train [malicious] AI models.” Being able to identify high-value digital assets in a mire of data within seconds is one way AI will increase the effectiveness and efficiency of cyberattacks, a process that could eventually be fully automated with an AI model. That said, “AI can [also] improve the detection and triage of cyberattacks and identify malicious emails and phishing campaigns, ultimately making them easier to counteract,” with several frameworks like AI4CYBER available as starting points.
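On the defensive side, here is a minimal sketch of what AI-assisted phishing detection can look like: a bag-of-words classifier trained to separate phishing from legitimate messages. The six inline emails are invented for the example; a production system would train on a large labeled corpus and combine many more signals than word frequency.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data for the example; production systems learn
# from large labeled corpora of real messages.
emails = [
    "Verify your account now or it will be suspended",
    "Urgent: confirm your password at this link",
    "You have won a prize, click here to claim it",
    "Meeting notes attached from this morning's standup",
    "Quarterly budget review scheduled for Friday",
    "Lunch order form for the team offsite",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = phishing, 0 = legitimate

# Bag-of-words features feeding a linear classifier.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(emails, labels)

incoming = "Confirm your account password immediately"
verdict = classifier.predict([incoming])[0]
print("phishing" if verdict == 1 else "legitimate")
```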

A third and final key judgement from the NCSC’s assessment of the cyber threat landscape is that “AI lowers the barrier for novice cyber criminals, hackers-for-hire and hacktivists to carry out effective access and information gathering operations.” Adding to the global cyber threat is the fact that GenAI is already making it easier for less skilled coders to create more sophisticated ransomware and malware.

AI can also be used to bolster cyber defenses.

Pair this with well-intentioned organizations like the United States Artificial Intelligence Institute that could unwittingly train many opportunistic cyber criminals, and the threat of an increase in attack volume emerges clearly. To keep your workforce on top of these emerging, AI-accelerated cybersecurity risks, regular, robust, and relevant training is an effective answer to the more frequent, though less sophisticated, attacks by novice cyber criminals. If your team members can easily spot a phishing email or abnormal activity on your platform, then your organization’s chances of being compromised by internal human error will decrease.

Final Thoughts and Future Directions

To match the pace of these emerging threats, government institutions and companies worldwide are using the methods highlighted in this first of three cybersecurity-focused blogs to improve risk management in AI’s current and future development. Norm setting by state representatives at international summits is one way this movement is taking form at the global level. For example, the declaration by the European Union and 28 other countries attending the Bletchley AI Safety Summit in November 2023 “highlighted the importance of ensuring that AI is designed, developed, deployed, and used in a manner that is safe, human-centric, trustworthy, and responsible for the benefit of all.” As an active member of the United Nations Global Compact and a deployer of AI-assisted tools, Echo360 considers itself part of this movement to set security-focused norms for AI’s ongoing development and deployment.

Stay tuned for the next blog in this EchoInsights series, where we’ll take a deeper look at the technologies being deployed in today’s AI-accelerated cybersecurity risk and regulatory landscapes.
