AI Security - University of KZN MSc Lecture Series (part 1 of 4)

Artwork/ Image by Eduardo Romero

Last month I was invited to give a 2-part lecture on AI Security at UKZN. This was a big honour for me, since I'm a fan of their AI and quantum researchers (e.g., Amira).

Thanks to Prof Manoj Maharaj for inviting various industry guest lecturers to UKZN. Upcoming speakers include numerous global thought leaders (e.g., Prof Petruccione).

The 3-hour lecture was divided into 4 components, which I will cover in 4 blog posts:

  1. Defensive AI;
  2. Offensive AI;
  3. AI Security; &
  4. Responsible AI.

I recommend reading this article alongside the original slides, which you can get here.


AI Security

Let's start with AI Security, since I ran out of time during the lecture and couldn't complete this section.

This article covers the following:

  1. Scope of assessment;
  2. Types of assessment;
  3. Review components;
  4. Your requirements; &
  5. Review methodology.

Scope of AI security assessment

At Snode Technologies we often need to perform security assessments on AI designs, implementations and existing AI operations (e.g., AI red team testing). This includes:

Snode AI security scope of assessment

You (the student) should also look at the scope other frameworks define for secure AI adoption.

NIST AI Risk Management Framework, knowledge base view

Types of AI Security assessments

There are three (3) types of assessments we offer Snode clients, each with unique strengths and each suited to a different point in your AI development lifecycle, namely:

  1. AI Design Review;
  2. AI Red Teaming; &
  3. AI Risk Assessment.

Snode AI Assessments - AI Design Review, AI Risk Assessment and AI Red Teaming

I will do a future series of articles covering each methodology in more detail.

AI Security - components for review

Here are some of the high-level control areas we look to cover during these assessments. Together they highlight the full spectrum of issues addressed in a comprehensive review.

Component and control coverage for AI reviews

Understanding the AI requirements

As with any assessment, there are three forces driving your security requirements: business, regulatory and security (risk) considerations.

Understanding your AI security requirements.

So why assess alignment to the business requirement? Well, one of the most commonly overlooked risks with clients is model drift. Model drift occurs when a model doesn't align with the business objective(s), or moves out of alignment over time (degradation).
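
As a minimal sketch (not Snode's method; the names and the 0.2 alert threshold are illustrative assumptions), one common way to watch for this kind of drift is to compare the model's current score distribution against the distribution recorded when the model was last validated against the business objective:

```python
# Illustrative drift check: compare today's score distribution to a baseline
# window using the Population Stability Index (PSI). Threshold is an assumption.
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Larger values indicate the two distributions have diverged (possible drift)."""
    edges = np.histogram_bin_edges(baseline, bins=bins)   # bucket both windows identically
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)               # avoid log(0) on empty buckets
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Hypothetical example: validation-time scores vs. this week's production scores.
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=5_000)
current_scores = rng.beta(3, 4, size=5_000)                # the distribution has shifted
psi = population_stability_index(baseline_scores, current_scores)
if psi > 0.2:                                               # illustrative alert threshold
    print(f"Possible model drift (PSI={psi:.2f}) - re-check alignment with business objectives")
```

In practice, the drift signal you monitor should also be tied to the business metric the model was meant to move, not only to its raw score distribution.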

Regulatory and statutory requirements, such as the EU AI Act, also need to be addressed.

Security requirements need to address the risks associated with the environment, domain and context of the AI application. Here are a few high-level examples (a small sketch follows the list):

  1. Is there a risk to human life? Such as an autonomous vehicle use case.
  2. Is there potential for abuse? Such as misinformation or disinformation.
  3. Is there a privacy concern? Such as facial or biometric data disclosures.
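
The sketch below is purely hypothetical: it shows one way such context questions could be captured and mapped to areas that deserve extra depth during the assessment. The field names and the mapping are my own illustration, not a Snode or regulatory taxonomy.

```python
# Hypothetical risk-context record used to tune the depth of an AI security review.
from dataclasses import dataclass

@dataclass
class AIRiskContext:
    safety_critical: bool   # e.g., an autonomous vehicle use case
    abuse_potential: bool   # e.g., misinformation or disinformation
    sensitive_data: bool    # e.g., facial or other biometric data

    def focus_areas(self) -> list:
        """Map the answers to the review areas that warrant extra attention."""
        areas = ["baseline security controls"]
        if self.safety_critical:
            areas += ["fail-safe behaviour", "adversarial robustness testing"]
        if self.abuse_potential:
            areas += ["content provenance", "misuse monitoring"]
        if self.sensitive_data:
            areas += ["privacy impact assessment", "data minimisation"]
        return areas

# Example: a facial-recognition access control system.
print(AIRiskContext(safety_critical=False, abuse_potential=True, sensitive_data=True).focus_areas())
```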

High-level AI review methodology

At a high level, the review methodology consists of the following activities:

  1. Business - strategic objective and application alignment.
  2. Culture - AI change management, AI training and talent.
  3. Compliance - AI/IS regulatory and statutory requirements.
  4. Model specific risk - tampering, drift, poisoning, theft, etc.
  5. Infrastructure - underlying or supporting technology risk.
  6. Data risk - with collection, transmission, storage, bias, etc.

High-level assessment approach with sequencing differences.

If we perform an AI design review, we tend to move downwards, starting with the business requirement. For red teaming, however, we move upwards, starting with operational effectiveness and then performing a root-cause analysis against the earlier stages.
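
Here is a rough sketch of that sequencing difference, assuming the six activities listed above (the stage names are my own shorthand, not a formal Snode taxonomy):

```python
# Illustrative only: the same checklist, walked top-down for a design review and
# bottom-up (from the operational end) for red teaming.
REVIEW_STAGES = [
    "business alignment",
    "culture and training",
    "compliance",
    "model-specific risk",
    "infrastructure",
    "data risk",
]

def assessment_order(kind: str) -> list:
    """Return the sequence of review activities for a given assessment type."""
    if kind == "design_review":
        return REVIEW_STAGES                   # downwards, from the business requirement
    if kind == "red_team":
        return list(reversed(REVIEW_STAGES))   # upwards, tracing root causes back through earlier stages
    raise ValueError(f"unknown assessment type: {kind}")

print(assessment_order("red_team"))
```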

I recommend to clients that they perform all three assessments at the appropriate point in the AI development lifecycle (e.g., the AI design review directly after the design phase).

Disclaimer - if I got something wrong,...

AI Assessment References

NIST AIRC - AI RMF
The NIST AIRC supports AI actors in the development and deployment of trustworthy and responsible AI technologies.
NVIDIA Deep Learning Institute and Training Solutions
We provide hands-on training in AI, accelerated computing, and accelerated data science.
Google’s AI Red Team: the ethical hackers making AI safer
Today, we're publishing information on Google's AI Red Team for the first time.