Governing the AI Juggernaut: DHS Throws a Lifeline, or Does It?

Cyber Strategy Institute
May 2, 2024

You Won’t Believe the AI Risks Homeland Security Ignored

The U.S. Department of Homeland Security’s guidance, Mitigating Artificial Intelligence (AI) Risk: Safety and Security Guidelines for Critical Infrastructure Owners and Operators, provides a robust framework for critical infrastructure owners and operators to mitigate risks associated with artificial intelligence (AI) systems. The guidance organizes its recommendations around four main functions: Govern, Map, Measure, and Manage, in accordance with the NIST AI Risk Management Framework.

It draws on sector-specific AI risk assessments to highlight cross-sector use cases and adoption patterns, and it analyzes three overarching risk categories: attacks using AI, attacks targeting AI systems, and AI design/implementation failures.

Detailed risk subcategories, along with potential mitigation strategies, are provided for each category. The guidance highlights important practices such as "secure by design", understanding how AI is used, measuring risks, implementing solutions, and promoting continuous AI risk management.

However, the guidance has gaps that organizations will need to address sooner rather than later: it does a good job of establishing a strategic risk framework that organizations can use, but it does not cover tactical threats or lay out how to mitigate them. So let's look at the six key areas we think are the most important from the CISA guidance.

AI Use Cases and Adoption Patterns (Key Area 1)

(From report pages 7–8)

Summary:

  • Identifies 10 categories of AI use cases across critical infrastructure sectors based on risk assessments.

Operational Awareness: This involves using AI to gain a clearer understanding of an organization’s operations. For instance, AI can be used to monitor network traffic and identify unusual activity, enhancing cybersecurity.
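
To make this concrete, here is a minimal anomaly-detection sketch in the spirit of the network-monitoring example above. It assumes scikit-learn is available and that network flows have already been reduced to numeric features; the feature names, values, and thresholds are illustrative, not drawn from the DHS guidance.

```python
# Minimal anomaly-detection sketch for network flow records (illustrative only).
# Assumes scikit-learn is installed and flows are already numeric feature vectors.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per flow: [bytes_sent, bytes_received, duration_seconds]
normal_flows = rng.normal(loc=[5_000, 20_000, 30], scale=[1_000, 5_000, 10], size=(500, 3))
suspicious_flows = np.array([[900_000.0, 1_000.0, 2.0],     # large outbound transfer
                             [50.0, 50.0, 3_600.0]])         # long, near-idle connection

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_flows)

# predict() returns -1 for flows the model flags as anomalous and 1 otherwise.
print(model.predict(suspicious_flows))
```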

Performance Optimization: This involves using AI to improve the efficiency and effectiveness of processes or systems. For example, AI can be used to optimize supply chain operations, reducing costs, and improving delivery times.

Automation of Operations: This refers to using AI to automate routine tasks and processes in an organization, such as data entry or report generation. For example, AI can be used to automate the process of sorting and analyzing large amounts of data.

Event Detection: This refers to the use of AI to detect specific events or changes in a system or environment. For example, AI can be used in health monitoring systems to detect abnormal heart rates.

Forecasting: This is the use of AI to predict future trends or events based on current and historical data. For instance, AI can be used to forecast sales trends based on past sales data.

Research & Development (R&D): This refers to the use of AI in the development of new products, services, or technologies. For instance, AI can be used in the pharmaceutical industry to expedite the drug discovery process.

Systems Planning: This refers to the use of AI in the planning and design of systems, such as IT infrastructure. For example, AI can be used to predict the performance of a proposed system under various conditions.

Customer Service Automation: This involves using AI to automate aspects of customer service, such as answering frequently asked questions or processing orders. For example, chatbots are a common application of AI in customer service automation.

Modeling & Simulation: This involves using AI to create models and simulations of real-world scenarios. For example, AI can be used to simulate traffic patterns for urban planning purposes.

Physical Security: This refers to the use of AI in maintaining the physical security of a facility or area. For example, AI can be used in surveillance systems to detect intruders or suspicious activity.

  • Most common: Operational awareness, performance optimization, automation of operations.
  • Lower adoption for more complex use cases like forecasting, modeling, simulation.
  • An overall increasing trend in the adoption of AI is expected.

Potential Gaps:

  • Limited insights on emerging generative AI capabilities and use cases.
  • Lack of detailed analysis on specific sector use cases and sector-wide adoption levels.

Cross-Sector AI Risk Categories (Key Area 2)

(From report pages 8–10, 16–21)

Summary:

  • Establishes 3 overarching risk categories: Attacks using AI, Attacks on AI, AI design/implementation failures.
  1. Attacks Using AI: This risk category refers to the use of AI to automate, enhance, plan, or scale physical attacks on or cyber compromises of critical infrastructure. Common attack vectors include AI-enabled cyber compromises, automated physical attacks, and AI-enabled social engineering.
  2. Attacks Targeting AI Systems: This risk category largely focuses on targeted attacks on AI systems supporting critical infrastructure. Common attack vectors include adversarial manipulation of AI algorithms, evasion attacks, and interruption of service attacks.
  3. Failures in AI Design and Implementation: This risk category stems from deficiencies or inadequacies in the planning, structure, implementation, execution, or maintenance of an AI tool or system leading to malfunctions or other unintended consequences that affect critical infrastructure operations. Common methods of design and implementation failure include autonomy, brittleness, and inscrutability.
  • Provides detailed risk subcategories and example mitigation strategies for each risk category.
  • Highlights risks like AI-enabled cyberattacks, adversarial model manipulation, and autonomy failures; a minimal evasion-attack sketch follows this list.
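
To make the evasion-attack category concrete, here is a minimal fast-gradient-sign-style sketch against a toy PyTorch classifier. It assumes PyTorch is available; the model, input, and epsilon are placeholders rather than anything specified in the DHS guidance.

```python
# Minimal evasion-attack (FGSM-style) sketch against a toy classifier (illustrative only).
# The model, input, label, and epsilon below are placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 2))     # stand-in for a deployed AI model
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True)  # a legitimate-looking input
y = torch.tensor([0])                       # its true label

# The attacker nudges the input in the direction that increases the model's loss,
# keeping the perturbation small enough that the input still looks normal.
loss = loss_fn(model(x), y)
loss.backward()
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).detach()

print("original prediction:", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
```

With a real model and a tuned epsilon, the perturbed input can flip the prediction while remaining nearly indistinguishable from the original; input validation, adversarial training, and output monitoring are the kinds of mitigations the guidance gestures toward.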

Potential Gaps:

  • Mitigations focused on current risks may lack coverage for novel/unanticipated AI failure modes.
  • Limited mitigations for risks beyond security/safety, such as bias, privacy, and ethical concerns.

Govern AI Risk Management (Key Area 3)

(From report pages 11, 22–23)

Summary:

  • Establish policies, processes for anticipating/managing AI benefits and risks.
  • Foster a “secure by design” culture prioritizing AI safety/security outcomes.
  • Delineate roles and responsibilities with vendors for safe AI operations.
  • Invest in a skilled, diverse workforce for AI risk management.

Potential Gaps:

  • Lacks concrete practices for embedding “secure by design” in development lifecycles.
  • Limited guidance on governing risks from AI systems procured externally.

Map AI Use Context and Risks (Key Area 4)

(From report pages 11–12, 23–25)

Summary:

  • Inventory AI use cases and document context-specific risks (a minimal inventory-record sketch follows this list).
  • Assess potential safety, security, societal impacts like bias, privacy risks.
  • Review vendor supply chains and incorporate vendor risk assessments.
  • Define human oversight processes for AI systems.
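
As a rough illustration of what such an inventory entry might look like in practice, here is a minimal sketch using a Python dataclass. The field names and example values are assumptions for illustration, not a schema defined by the guidance.

```python
# Minimal AI use-case inventory record sketch (illustrative only).
# Field names and example values are assumptions, not prescribed by the guidance.
from dataclasses import dataclass, field

@dataclass
class AIUseCaseRecord:
    name: str
    business_owner: str
    vendor: str                                  # "internal" if built in-house
    data_sources: list = field(default_factory=list)
    context_specific_risks: list = field(default_factory=list)
    human_oversight: str = "unspecified"         # e.g. "analyst reviews all alerts"

inventory = [
    AIUseCaseRecord(
        name="network anomaly detection",
        business_owner="SOC lead",
        vendor="internal",
        data_sources=["netflow", "dns logs"],
        context_specific_risks=["false negatives during traffic spikes"],
        human_oversight="analyst triages every alert before action",
    )
]

for record in inventory:
    print(record.name, "-", record.human_oversight)
```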

Potential Gaps:

  • Minimal details on methodologies for context-mapping and impact assessments.
  • Limited coverage of mapping risks from complex AI system integrations.

Measure AI Risks (Key Area 5)

(From report pages 12–13, 25–27)

Summary:

  • Define metrics to detect and track AI risks, errors, and negative impacts (a minimal metric-tracking sketch follows this list).
  • Continuous testing for cybersecurity and compliance vulnerabilities.
  • Evaluate risk controls and identify gaps where new metrics are needed.
  • Test safety, security impacts including adversarial red-teaming.
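
As one rough illustration of the kind of metric the Measure function calls for, here is a minimal rolling-accuracy tracker. It assumes predictions and ground-truth labels become available over time; the window size, threshold, and alerting behavior are illustrative choices, not requirements from the guidance.

```python
# Minimal metric-tracking sketch for a deployed model (illustrative only).
# The window size, threshold, and alert behavior are placeholder assumptions.
from collections import deque

WINDOW = 500            # number of recent predictions to track
MIN_ACCURACY = 0.90     # illustrative alert threshold

recent_results = deque(maxlen=WINDOW)

def record_prediction(predicted_label, true_label):
    """Track rolling accuracy and flag when it drops below the threshold."""
    recent_results.append(predicted_label == true_label)
    accuracy = sum(recent_results) / len(recent_results)
    if len(recent_results) == WINDOW and accuracy < MIN_ACCURACY:
        print(f"ALERT: rolling accuracy {accuracy:.2%} is below {MIN_ACCURACY:.0%}")
    return accuracy

# Example usage with dummy labels
for pred, truth in [(1, 1), (0, 1), (1, 1)]:
    print(record_prediction(pred, truth))
```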

Potential Gaps:

  • Lack of standardized measurement benchmarks across critical infrastructure.
  • Limited guidance on evaluating AI risks that are difficult to measure.

Manage AI Risks (Key Area 6)

(From report pages 13–15, 27–28)

Summary:

  • Prioritize identified risks based on potential impact.
  • Implement security controls — encryption, data validation, defensive AI.
  • Apply mitigations to vendor AI systems pre-deployment.
  • Monitor AI inputs/outputs and maintain process redundancy (a minimal input/output validation sketch follows this list).
  • Develop incident response plans for AI system failures.
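
To show what monitoring AI inputs and outputs can mean in code, here is a minimal guardrail sketch around a model call. The valid ranges, the stand-in model, and the fallback behavior are assumptions for illustration, not controls mandated by the guidance.

```python
# Minimal input/output validation sketch around a model call (illustrative only).
# The ranges, stand-in model, and fallback value are placeholder assumptions.

INPUT_RANGE = (0.0, 100.0)     # hypothetical valid sensor range
OUTPUT_RANGE = (0.0, 1.0)      # hypothetical valid score range

def fake_model(value):
    """Stand-in for a deployed AI model."""
    return value / 100.0

def guarded_inference(value, fallback=0.0):
    """Reject out-of-range inputs and clamp implausible outputs."""
    low, high = INPUT_RANGE
    if not (low <= value <= high):
        print(f"rejected input {value}: outside {INPUT_RANGE}")
        return fallback
    score = fake_model(value)
    out_low, out_high = OUTPUT_RANGE
    if not (out_low <= score <= out_high):
        print(f"clamping implausible output {score}")
        score = min(max(score, out_low), out_high)
    return score

print(guarded_inference(42.0))   # normal path
print(guarded_inference(-5.0))   # rejected input falls back to the default
```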

Potential Gaps:

  • Mitigations focused on current capabilities may need updating for future AI advances.
  • Limited insights on prioritizing risks when facing resource constraints.

Threats to AI

While a risk framework is helpful, it does not go far enough to ensure that organizations fully understand how they need to address risk from an AI threat viewpoint.

https://cyberstrategyinstitute.com/secure-my-ai/

AI Risks Homeland Security Got Right

The CISA guidance does not go into specific details on countering top AI threats like malware, ransomware, data breaches, adversarial attacks, insider threats, denial of service, social engineering, or IoT threats. Nor does it provide in-depth technical mitigations around code review, device/file protection, software distribution practices, and the like.

The guidance takes a higher-level, risk-based approach to AI threat management for critical infrastructure owners/operators. It categorizes risks into 3 broad buckets:

  1. Attacks using AI (like AI-enabled cyberattacks, physical attacks).
  2. Attacks targeting AI systems (adversarial manipulation, evasion attacks).
  3. AI design/implementation failures (autonomy issues, brittleness, inscrutability).

It then maps these risks to high-level mitigation areas like:

  • Governance (policies, secure by design, workforce training).
  • Risk mapping (documenting use cases, impacts, vendor assessments).
  • Risk measurement (testing, metrics, red teaming).
  • Risk management (prioritization, security controls, monitoring).

AI Risks Homeland Security Ignored

However, it stops short of delving into technical specifics on how to detect and remediate the specific AI threat vectors listed above. The guidance is more focused on establishing an overarching risk management framework.

To address this gap, the guidance could be supplemented with resources that provide tactical mitigations tied to prevalent AI threat typologies. These could cover aspects like:

  • Secure coding practices to build resilience against AI malware/adversarial attacks
  • Data protection controls to prevent training data poisoning/model theft (a minimal data-integrity sketch follows this list)
  • AI system hardening and isolation to reduce disruption from denial of service
  • Deception technologies to detect/deflect AI-driven social engineering
  • Strict access controls and insider threat programs for critical AI assets
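
As a small example of a data protection control against training data poisoning, here is a minimal integrity-check sketch: fingerprint an approved dataset and refuse to train if the file changes. The file names are hypothetical, and hashing alone is only one narrow control within a broader data protection program.

```python
# Minimal training-data integrity sketch (illustrative only).
# File names are hypothetical; hashing is one simple control against silent tampering.
import hashlib
from pathlib import Path

def dataset_fingerprint(path):
    """Return the SHA-256 digest of a dataset file."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# At approval time: record the fingerprint alongside the dataset.
approved = dataset_fingerprint("training_data.csv")
Path("training_data.csv.sha256").write_text(approved)

# Before each training run: refuse to train if the data has changed.
current = dataset_fingerprint("training_data.csv")
expected = Path("training_data.csv.sha256").read_text()
if current != expected:
    raise RuntimeError("training data fingerprint mismatch; possible tampering")
```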

An overall defense-in-depth strategy blending these technical measures with the governance practices from the CISA guidance could comprehensively address top AI threats across policy, process, and technology domains.

Summary:

The U.S. Department of Homeland Security’s guidance provides a robust risk-based framework for critical infrastructure owners and operators to govern, map, measure, and manage risks associated with artificial intelligence (AI) systems. Aligned with the NIST AI Risk Management Framework, it establishes guidelines across four key functions, drawing insights from sector-specific assessments. The guidance analyzes three overarching risk categories — attacks using AI, attacks targeting AI systems, and AI design/implementation failures — offering detailed risk subcategories and potential mitigation strategies.

However, the guidance largely stops short of delving into technical specifics on detecting and remediating top AI threat vectors like malware, ransomware, data breaches, adversarial attacks, insider threats, denial of service, social engineering exploits, and IoT vulnerabilities. While it highlights high-level mitigation areas like governance, risk mapping, measurement through testing/red teaming, and risk management controls, there is a gap in prescribing tactical countermeasures. To thoroughly address these threats, the guidance could be supplemented with resources covering secure coding practices, data protection controls, AI system hardening, deception technologies, strict access management, and blended defense-in-depth strategies.

The guidance undoubtedly provides a strong foundation by establishing consistent practices, terminologies, and a unifying framework for AI risk governance across critical infrastructure. However, addressing identified gaps around emerging AI threat vectors through future guidance iterations is crucial. Combining these with sector-tailored resources will cement national preparedness and fortify the security posture of AI-powered critical systems against sophisticated attacks.

Ultimately, robust implementation of these guidelines complemented by comprehensive technical countermeasures tailored to the latest AI threat landscape is vital. Diligent execution by owners and operators can ensure the resilience of critical AI infrastructure against attacks, failures, and unintended consequences as adoption accelerates.

This is where Cyber Strategy Institute comes in to support your efforts. We bring a unique and time-tested approach and framework that you can implement.

Use this link to Book a Call Today! https://cyberstrategyinstitute.com/contact/
