How to ensure a safe and secure use of AI?

Apr. 3, 2024

The success, progress, and rapid adoption of Artificial Intelligence (AI) have propelled technology like never before. These developments bring profound changes to the way companies operate and evolve, presenting manifold opportunities – as well as an array of risks. How can organizations set up structures capable of safely and securely integrating this powerful new tool? Bogdan Tobol, Global Product Manager for Cybersecurity at Bureau Veritas, explores this topic.

A total market volume expected to reach $738.80bn by 2030[1], impacting 40% of the workforce worldwide[2] while potentially adding 7% to global GDP over a 10-year period[3]: Artificial Intelligence (AI) has seen an exponential rise in the last few years, and is set to have a deep and lasting effect on the world’s economy.

As organizations increasingly rely on AI-powered systems in their day-to-day activities, the need for robust regulatory frameworks and efficient international standards has become paramount. These regulations and certifications not only ensure the effective implementation of AI technologies but also address concerns surrounding privacy, data protection, ethical considerations, and cybersecurity. From the General Data Protection Regulation (GDPR) in Europe[4] to the AI Risk Management Framework established by the National Institute of Standards and Technology (NIST)[5] in the United States, governments and standardization bodies worldwide are actively shaping the landscape of AI development and application.

AIMS, their goals, and associated threats

The goals for AI research are clear: problem-solving and reasoning, planning and decision-making, as well as natural language processing (NLP) and general intelligence, among others, are part of the field’s objectives. Artificial Intelligence Management Systems (AIMS) are frameworks which help manage and optimize the deployment, monitoring, and performance of AI within organizations. 

AIMS thus play an essential role in enabling organizations to effectively harness the power of AI, ensuring scalability, reliability, and responsible use of AI technologies. Along with such Management Systems as ITSMS (Information Technology Service Management System), ISMS (Information Security), SCMS (Supply Chain), and PIMS (Personal Information), AIMS contribute to efficiency, performance improvement, and compliance in day-to-day operations. “Most of these Management Systems are subject to stringent standards such as ISO 20000-1, ISO 27001, etc.,” states Bogdan Tobol, Global Product Manager for Cybersecurity at Bureau Veritas. “Organizations and lawmakers now need to make sure AI is subject to the same requirements, ensuring a safe and beneficial use of the technology.”

The Global Product Manager is keen to point out that, with regard to safety, the industry is not starting from a blank slate: “As surprising as it may sound, no specific risks exist for AI in terms of cybersecurity – they are the same threats that affect any IT system, and they are already clearly defined.” They range from the general, such as reputational or legal liabilities involving all stakeholders, to the more technical, such as supply chain vulnerabilities, sensitive information disclosure, and many other issues common to Large Language Model (LLM) applications.


Thoroughness, training, and stringency: keys to a secure use of AI

The upside to this situation lies in the fact that, since the risks are known, so are the solutions. Guidelines and best practices already exist for other IT systems, and they can be adapted and applied for greater AI cybersecurity. The first step, of course, lies in evaluating the threats relevant to each specific organization. “It is imperative that both the AIMS and its environment are subjected to a full risk assessment,” insists Bogdan Tobol: “Where the AI exists, how it is developed, how it is maintained.” Once weaknesses have been correctly identified, it is possible to devise a strategy for optimum security.
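To make this concrete, below is a minimal risk-register sketch in Python. The threat names echo those mentioned earlier in this article; the 1-to-5 scoring scale and the example scores are illustrative assumptions, not a Bureau Veritas methodology.

```python
from dataclasses import dataclass

@dataclass
class Threat:
    """One entry in a hypothetical AIMS risk register."""
    name: str
    likelihood: int  # assumed scale: 1 (rare) to 5 (almost certain)
    impact: int      # assumed scale: 1 (negligible) to 5 (severe)

    @property
    def risk_score(self) -> int:
        # Classic likelihood-times-impact scoring; a real assessment
        # would follow the organization's chosen methodology.
        return self.likelihood * self.impact

# Example threats drawn from the article; the scores are invented.
register = [
    Threat("Supply chain vulnerability", likelihood=3, impact=4),
    Threat("Sensitive information disclosure", likelihood=4, impact=5),
    Threat("Reputational / legal liability", likelihood=2, impact=5),
]

# Rank threats so that mitigation effort targets the highest risks first.
for threat in sorted(register, key=lambda t: t.risk_score, reverse=True):
    print(f"{threat.name}: score {threat.risk_score}")
```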

In addition, “organizations should be thorough in the testing of their processes,” says Bogdan Tobol. “Training also plays a crucial role in cybersecurity, and, once again, it is the same for AI as for any other IT system: helping all employees and stakeholders understand what AI is, what it can do and how to manage it is vital for the safe deployment and utilization of any AIMS.” Most importantly, once the safeguards are in place, they require vigilant monitoring and adaptation – aligning with, and ideally anticipating, the evolving landscape of AI technologies and cybersecurity risks.
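As a small illustration of that kind of vigilance, the sketch below flags safeguards whose last review has lapsed. The safeguard names, the 90-day cadence, and the dates are hypothetical placeholders for whatever an organization’s own policy defines.

```python
from datetime import date, timedelta

# Assumed review cadence; an actual cadence would come from the
# organization's security policy and risk appetite.
REVIEW_INTERVAL = timedelta(days=90)

# Each safeguard records when it was last tested or reviewed.
safeguards = {
    "access controls on training data": date(2024, 1, 15),
    "staff AI-awareness training": date(2023, 11, 2),
    "model output filtering": date(2024, 3, 20),
}

today = date(2024, 4, 3)  # fixed date so the example is reproducible
for name, last_review in safeguards.items():
    if today - last_review > REVIEW_INTERVAL:
        print(f"OVERDUE: '{name}' last reviewed on {last_review}")
```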

The most recent of these evolutions comes from the European Union, with the Artificial Intelligence Act. Currently in the process of being formally adopted and translated[6], this Act aims to set up a framework for the use and supply of AI systems in the EU. Providing a classification of AI-associated risks – from “limited” to “unacceptable”[7] – the Act requires companies to meet certain requirements in terms of risk management, testing, technical robustness, training data and governance, transparency, human oversight, and cybersecurity. Any AI system not meeting these criteria may not be placed on the market or put into service. Companies have about a year to prepare, as the AI Act is expected to come into force in May 2025[8].
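As a simplified illustration of that tiered approach, the sketch below maps each risk level to the kind of obligation it triggers. The tier names follow the Act’s four-level classification; the one-line summaries are loose paraphrases for illustration, not legal text.

```python
# Illustrative, non-authoritative summary of the EU AI Act's risk tiers.
AI_ACT_TIERS = {
    "unacceptable": "Prohibited: the system may not be placed on the EU market.",
    "high": "Strict requirements: risk management, testing, data governance, "
            "transparency, human oversight, and cybersecurity.",
    "limited": "Transparency obligations, e.g. disclosing that users are "
               "interacting with an AI system.",
    "minimal": "No specific obligations; voluntary codes of conduct apply.",
}

def obligations_for(tier: str) -> str:
    """Look up the simplified obligation summary for a given risk tier."""
    return AI_ACT_TIERS.get(tier.lower(), "Unknown tier: classify the system first.")

print(obligations_for("high"))
```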

Such rules are bound to become more numerous – and more stringent – in the future. Adopting formal standards and certifications helps companies maintain a safe and secure framework for their AIMS, as well as compliance with new legislation. “Strong norms should be adopted in organizations, specific to their own needs and vulnerabilities,” explains Bogdan Tobol. “For the correct deployment of AI, a standard such as ISO/IEC 42001 requires that several elements be in place. First, the product’s lifecycle should be sustainable, with continuous improvement. Transparency and strong governance are other vital aspects of this standard, helping prove that, beyond its strategic advantage, AI implementation in the organization promotes an ethical and responsible use of the technology. The key lies in striking the right balance between compliance and innovation.”

Cybersecurity risks cannot be entirely eliminated, stresses Bogdan Tobol, especially in the field of AI, “but they can be prevented, accepted or managed – thus saving considerable time and capital, and avoiding damages.” Organizations would therefore be wise to establish rigorous standards promptly, and to set up processes for their continuous refinement and optimization, in order to keep pace with ever-evolving technologies and conditions.

Bureau Veritas: Guiding organizations in managing cyber risks

At Bureau Veritas, we believe cyber challenges need to be addressed across all dimensions. We therefore provide companies with independent cybersecurity verification, helping them protect their systems, assets, products, and supply chains. Through our expertise and our impartiality, our clients are able to gain insight into their vulnerabilities, mitigate risk areas and demonstrate to their stakeholders tangible actions, commitments, and processes.