Demystifying the AI regulatory landscape


With the UK’s AI Safety Summit taking place at the start of November at Bletchley Park, hot on the heels of an Executive Order from President Biden on the same topic, the debate around AI safety and regulation has been growing ever louder and more complex.
Even before these recent events, an increasingly intricate global AI regulatory framework was already taking shape, with every country moving at its own speed. Rather than fixating on long-term objectives, the industry needs greater simplicity and focus now, starting with three key areas that should be its first priorities: the AI models themselves, the data being consumed, and the ultimate outcomes these combinations produce.
Across all three areas, we must keep accountability, reliability, impartiality, transparency, privacy, and security top of mind if we are to have any hope of navigating today’s complex landscape of AI regulation.
Reaching clarity
Before we can begin to have a realistic discussion about ‘AI’, it’s important to be crystal clear about which technologies are actually being referred to. Given the significant variation among AI and Machine Learning (ML) models, there is already a growing concern that these terms are being mistakenly conflated. With the record speed at which ChatGPT has been adopted, people are already using ChatGPT as shorthand for AI or ML as a whole – the way we might use Google to refer to all search engines.
Regulators need to implement guidelines that help standardize the language around AI, which will help with understanding the model being used and, ultimately, with regulating risk parameters for these models. Otherwise, a model has the potential to take the exact same data set and draw wildly different conclusions, based on biases, conscious or unconscious, that are ingrained from the outset. More importantly, without a clear understanding of the model, a business cannot determine if outputs from the platform fit within its own risk and ethics criteria.
In the automotive sector, well-defined autonomy levels for autonomous vehicles have been put in place, enabling car manufacturers to innovate within clearly delineated parameters. Given the wide spectrum encompassed by AI, ranging from ML data processing to generative AI, regulators have a unique opportunity to inject clarity into this complex domain.
While particular regulations pertaining to AI models may appear to be somewhat limited at present, it is crucial to factor in regulations that govern the ultimate outcomes of these models. For instance, an HR tool employing machine learning for job candidate screening might inadvertently expose a company to discrimination-related legal issues unless rigorous bias-mitigation measures are in place. Similarly, a machine learning tool adept at detecting personal data within images of passports, driver’s licenses, and credit cards must strictly adhere to data protection regulations.
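To make the hiring example concrete, here is a minimal sketch of one bias-mitigation check a team might run on a screening model’s outputs: the “four-fifths” adverse-impact ratio, often used as a rough fairness heuristic. The groups, outcomes, and 0.8 threshold below are illustrative assumptions, not a legal test or a complete mitigation program.

```python
# Illustrative sketch only: a simple adverse-impact check on an ML screening
# tool's decisions. Groups, outcomes, and the 0.8 threshold are hypothetical.
from collections import defaultdict

def selection_rates(candidates):
    """Return the share of candidates selected per demographic group."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in candidates:
        total[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / total[g] for g in total}

def adverse_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the highest rate."""
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

# Hypothetical screening outcomes: (group, model said "advance to interview")
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
rates = selection_rates(outcomes)
print(rates)                         # per-group selection rates
print(adverse_impact_flags(rates))   # True where a group may warrant review
```

A check like this is only a starting point; the wider point stands that the outcomes of a model, not just the model itself, are what existing law already scrutinizes.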
The AI data ecosystem
Before regulatory action is taken to oversee the development and deployment of AI, it is judicious to examine how existing regulations might be extended to AI. These tools heavily rely on a dependable data supply chain. IT and security leaders are already grappling with the challenge of adhering to a slew of data-related legislation, including HIPAA, GLBA, COPPA, CCPA, and GDPR. Since the advent of GDPR in 2018, Chief Information Security Officers (CISOs) and IT leaders have been mandated to provide transparent insights into the data they collect, process, and store, along with specifying the purpose behind these data-handling processes. Furthermore, GDPR empowers individuals with the right to control the use of their data. Understandably, leaders are concerned about the potential impact of deploying AI and ML tools on their ability to comply with these existing regulatory requirements.
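As a rough illustration of how that existing transparency obligation might be applied to AI tools, the sketch below models a GDPR-style record of processing for an ML system. The field names, categories, and the example vendor are assumptions made for illustration, not legal guidance.

```python
# A minimal sketch of tracking AI data flows against existing obligations,
# modelled loosely on a GDPR-style record of processing. All entries are
# hypothetical examples, not legal guidance.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ProcessingRecord:
    system: str                 # the ML tool or AI service in use
    purpose: str                # why the data is collected and processed
    data_categories: List[str]  # kinds of personal data involved
    lawful_basis: str           # basis relied on (consent, contract, ...)
    retention: str              # how long the data is kept
    third_parties: List[str] = field(default_factory=list)  # processors involved

registry: List[ProcessingRecord] = [
    ProcessingRecord(
        system="candidate-screening-ml",
        purpose="Rank job applications for recruiter review",
        data_categories=["CV text", "contact details"],
        lawful_basis="legitimate interests",
        retention="12 months after the role is filled",
        third_parties=["hypothetical-ai-vendor.example"],
    ),
]

# A simple transparency check: every AI system touching personal data
# should have a stated purpose and retention period on file.
for record in registry:
    assert record.purpose and record.retention, f"Incomplete record for {record.system}"
```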
Both businesses and regulators are in pursuit of clarity. They seek to understand how existing regulations apply to AI tools and how any modifications might affect their status as data processors. AI companies are encouraged to exhibit transparency with customers, showcasing how their tools comply with existing regulations through partnership agreements and terms of service, particularly with regard to data collection, storage, processing, and the extent to which customers can exert control over these processes.
Ethical progress
In the absence of unambiguous regulatory guidelines in the AI landscape, the onus falls on technology leaders to champion self-regulation and ethical AI practices within their organizations. The objective is to ensure that AI technologies yield positive outcomes for society at large. Many companies have already released their own guiding principles for responsible AI use, and these consistently underscore the importance of accountability, reliability, impartiality, transparency, privacy, and security.
Technology leaders, if they have not already started, should embark on an evaluation of the ramifications of integrating AI into their products. It is advisable for companies to establish internal governance committees focused on AI ethics. These committees should assess the tools and their application within the organization, review processes, and devise strategies in anticipation of broader regulatory measures.
While the establishment of a regulatory body, akin to the International Atomic Energy Agency (IAEA) or the European Medicines Agency (EMA), was not a focus at the AI Safety Summit, it could prove instrumental in crafting a worldwide framework for AI regulation. Such an entity could foster standardization and delineate the criteria for ongoing evaluations of AI tools to ensure continued compliance as the models evolve and mature.
The path to an enlightened future
AI harbors the potential to revolutionize our lives, yet it must not come at the expense of the fundamental tenets underpinning data rights and privacy as we understand them today. Regulators must strike a fine balance that safeguards individuals without stifling innovation.
After the deliberations among government and industry leaders at Bletchley Park, my primary aspiration is to witness a heightened emphasis on transparency within the existing AI landscape. Instead of relying solely on goodwill and voluntary codes of conduct, AI companies should be compelled to furnish comprehensive disclosures regarding the models and technologies underpinning their tools. This approach would further empower businesses and customers to make well-informed decisions regarding adoption and enhance their autonomy over their data.