
Achieving Compliance with AI TRiSM: the EU AI Act and US Executive Order on AI

Elad Schulman
Thursday, March 14 · 5 min read

It was only a matter of time before regulation began to catch up with the LLM security issues we address regularly on this blog. That time has arrived, with legislation on both sides of the Atlantic now defining the duties and responsibilities that come with the responsible use of GenAI and LLM technologies.

Both the EU AI Act and the US Executive Order on AI set out to establish standards for the use of these technologies, emphasizing privacy, safety and transparency. For vendors and users of AI and LLMs, these laws also restrict certain types and uses of GenAI and LLM technology and make organizations responsible for ensuring compliance.

Understanding the EU AI Act

This week, the European Union’s parliament approved the world’s first major set of regulatory ground rules for the much-publicized artificial intelligence technologies now at the forefront of tech investment.

The Act imposes basic transparency requirements on users and providers of General Purpose AI Systems (GPAIS). These include disclosing to natural persons (human users) that they are interacting with a chatbot, or that outputs have been generated by AI-based tools. It also requires that all data be processed in line with existing data privacy regulations. Providers of these models must publish technical documentation and a summary of the training data underlying their models.
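To make the disclosure requirement concrete, here is a minimal sketch of how a chatbot backend might label its output for users and downstream tools. The function name, response shape and disclosure wording are our own illustrative assumptions; the Act prescribes the obligation, not a format:

```python
# Hypothetical sketch of the chatbot-disclosure duty in practice.
# The disclosure text and response fields are illustrative, not mandated.

AI_DISCLOSURE = (
    "You are interacting with an AI chatbot. "
    "Responses are generated automatically and may contain errors."
)

def wrap_chat_response(model_output: str) -> dict:
    """Attach an AI-use disclosure to every chatbot response."""
    return {
        "disclosure": AI_DISCLOSURE,   # shown to the user alongside the output
        "content": model_output,
        "ai_generated": True,          # machine-readable label for downstream tools
    }
```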

3 Categories of Risk in the EU AI Act

In addition to this, the Act outlines three categories of risk (a brief classification sketch follows the list):

🚫​ Unacceptable risk: the Act prohibits models that fall in this category, with possible exceptions for law enforcement. Models that present unacceptable risk include those that enable behavioral manipulation, social scoring, and certain types of biometric identification.

❗❗ High risk: this category is potentially the most important, and it includes GPAIS and widely used models like ChatGPT. It also covers products governed by European product safety legislation that are now augmented by AI, such as those in the aviation, automotive and other industries. High-risk AI systems in these domains will be subject to strict standards that include logging of user information, documentation and data quality assessment.

❗ Limited risk: these are systems that pose a lower level of risk to safety and privacy. Basic standards of transparency still apply across the board, however, including to these systems.
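As a thought experiment, an internal compliance tool might map use cases to these tiers and derive the obligations each one triggers. The use-case names and duty lists below are assumptions for illustration only, not the Act’s own taxonomy, and real classification requires legal review:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict logging/documentation duties
    LIMITED = "limited"            # baseline transparency duties

# Illustrative mapping only; actual tiering is a legal determination.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "biometric_identification": RiskTier.UNACCEPTABLE,
    "general_purpose_chatbot": RiskTier.HIGH,
    "aviation_assistant": RiskTier.HIGH,
    "marketing_copy_generator": RiskTier.LIMITED,
}

def obligations(use_case: str) -> list[str]:
    """Return the (illustrative) duties a given use case would trigger."""
    tier = USE_CASE_TIERS[use_case]
    if tier is RiskTier.UNACCEPTABLE:
        return ["do not deploy"]
    duties = ["disclose AI use to users"]  # applies across the board
    if tier is RiskTier.HIGH:
        duties += [
            "log user interactions",
            "maintain technical documentation",
            "assess training-data quality",
        ]
    return duties
```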

Overall, the terms of the Act are expected to restrain adoption of LLM technologies to some extent. But in the long term, organizations that are able to comply with the Act while integrating these systems stand to gain a major competitive advantage.

What steps can you take to achieve compliance? Good question!

Begin with a thorough risk assessment that takes existing AI deployments into account. Any initiatives likely to conflict with the Act should be eliminated. Legal and procurement leaders will need to collaborate closely with security and privacy teams to evaluate all AI vendors in light of these requirements.

It will become increasingly necessary to maintain an accurate inventory of these vendors. The good news here is that existing GDPR compliance will go a long way toward ensuring compliance with the Act.
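In practice, such an inventory can start as a structured record per vendor or system, tied to the GDPR records you likely already keep. This sketch shows one hypothetical shape; the field names are our assumptions, not a mandated schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIVendorRecord:
    """One row in an AI vendor/system inventory (hypothetical schema)."""
    vendor: str
    system: str
    risk_tier: str                  # e.g. "high" or "limited" under the EU AI Act
    processes_personal_data: bool   # ties into existing GDPR records
    gdpr_dpa_in_place: bool         # data processing agreement signed?
    notes: list[str] = field(default_factory=list)

inventory = [
    AIVendorRecord("ExampleCo", "support-chatbot", "high",
                   processes_personal_data=True, gdpr_dpa_in_place=True),
]

# Flag vendors whose GDPR paperwork is missing: likely gaps under the Act too.
gaps = [r for r in inventory
        if r.processes_personal_data and not r.gdpr_dpa_in_place]
```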

Organizations should also leverage the growing market for AI trust, risk and security management (TRiSM) technologies. This new category includes tools designed specifically to handle LLM-specific cybersecurity risks. Solutions like these can help reduce exposure to actual cybersecurity threats, as well as ensure greater transparency, reliability and fairness in the use of AI models and LLMs.
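For a flavor of what one TRiSM-style control looks like under the hood, here is a toy sketch of screening prompts for sensitive data before they reach an external LLM. This is illustrative only, with deliberately naive patterns; it is not how any particular product (ours included) is implemented:

```python
import re

# Toy patterns for illustration; production tools use far richer detection.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
}

def screen_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact sensitive data from a prompt before sending it to an LLM."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED_{label.upper()}]", prompt)
    return prompt, findings  # findings can feed audit logs for transparency duties
```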

Exploring the US Executive Order on AI

The White House has issued its own directive to address the use of AI models in the United States: the 2023 Executive Order on AI. The stated goal of the EO is to encourage safe, secure, and trustworthy development of AI. Like the EU AI Act, the EO will impact US and non-US entities, in both the public and private sectors.

The order lists the following objectives:

➡️​ AI Safety and Security: This goal focuses on creating a framework to ensure AI systems are developed and deployed in a manner that prioritizes safety, requiring developers to disclose safety test results to enhance transparency and government oversight.

➡️​ Privacy Protection: The Executive Order aims to safeguard personal data against the increased risks posed by AI, including unauthorized access and misuse, by establishing robust privacy protection measures.

➡️​ Threat Protection: This aspect of the order is dedicated to mitigating AI-enabled threats, such as fraud and cyber vulnerabilities, through the development of new standards and practices that bolster AI system security.

➡️​ Innovation and Competition: The goal is to foster an environment that encourages the ethical innovation and development of AI technologies, while also ensuring competitive practices that keep American businesses at the forefront.

➡️​ Equity and Civil Rights: This objective addresses the need to prevent AI from perpetuating discrimination and inequality, promoting the development of AI in a way that advances civil rights and ensures equitable outcomes for all individuals.

How Can You Better Prepare for the AI Executive Order? Another good question!

Security is a team sport: getting all stakeholders on the same page is the first step. This includes legal, procurement, security and many other departments. Because these players tend to have varying degrees of AI understanding, this may prove to be a complex organizational challenge.

Establishing dedicated committees (an AI task force) made up of these roles will give organizations the ability to make transparent, critical decisions about AI development that satisfy each department’s essential compliance actions.

Navigating Compliance and Innovation with Lasso Security

As these laws begin to have a global impact, organizations need to find ways to manage their requirements without sacrificing their innovation and technology advantages. To make this vision a reality, Lasso Security has partnered with OpenPolicy and other members of the OpenPolicy AI Coalition to join the U.S. AI Safety Institute Consortium (AISIC), led by the Department of Commerce's National Institute of Standards and Technology (NIST).

The consortium aims to foster the development and deployment of safe, trustworthy AI by bringing together over 200 AI stakeholders, including AI creators, users, academics, and industry researchers. This collaboration aligns with President Biden's Executive Order to set safety standards and protect the innovation ecosystem, supporting the trusted deployment and development of AI solutions to mitigate emerging threats and advance AI responsibly.

These partnerships strengthen our commitment to enabling safe and secure AI usage, guided by our deep expertise as early pioneers in LLM security. By leveraging Lasso’s platform, your organization remains well-prepared and ahead of the curve in complying with AI regulations. With our expert guidance and insights, you can streamline role-based access and your organization's regulatory policies, avoiding potential future penalties.

Book your call with our team to start future-proofing your organization for a new era of regulated artificial intelligence.

Let's Talk