
Getting companies ready for artificial intelligence

Regulation of the use of artificial intelligence is on its way, with the EU parliament already having agreed on a legal proposal, and companies’ legal departments should start preparing now for the coming rules. This is the conclusion of guidelines for companies compiled by research and advisory firm Gartner.

The firm says it has identified four critical areas that companies should address: transparency, risk management, human control and privacy risks.

“While laws in many jurisdictions may not come into effect until 2025, legal leaders can get started while they wait for finalized regulation to take shape”, says Laura Cohn, senior principal at consultancy Gartner.

She says these four issues should be addressed when preparing internal rules for use of AI:

Embed Transparency in AI Use

Transparency about AI use is emerging as a critical tenet of proposed legislation worldwide. Legal leaders need to think about how their organizations will make it clear to any humans when they are interacting with AI. 

Using AI in marketing and hiring are examples where companies need to update their websites’ privacy notices and terms and conditions.
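As a purely illustrative sketch (not part of Gartner’s guidance), such a disclosure can also be built into the product itself, for example by attaching an explicit AI notice to every chatbot reply so users always know they are interacting with a machine. The function and wording below are hypothetical.

```python
# Hypothetical sketch: attach an explicit AI disclosure to every chatbot reply,
# so users are always told they are interacting with an AI system.
from dataclasses import dataclass


@dataclass
class ChatReply:
    text: str                # the answer shown to the user
    generated_by_ai: bool    # machine-readable flag for downstream systems
    disclosure: str          # human-readable disclosure shown alongside the text


def with_ai_disclosure(answer: str) -> ChatReply:
    """Wrap a model-generated answer with a visible AI-use disclosure."""
    return ChatReply(
        text=answer,
        generated_by_ai=True,
        disclosure="This response was generated by an AI assistant.",
    )


if __name__ == "__main__":
    reply = with_ai_disclosure("Our store opens at 9:00.")
    print(f"{reply.text}\n[{reply.disclosure}]")
```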

Ensure Risk Management Is Continuous
Companies should put in place risk management controls that span the lifecycle of any high-risk AI tool. “One approach to this may be an algorithmic impact assessment (AIA) that documents decision making, demonstrates due diligence, and will reduce present and future regulatory risk and other liability”, Cohn says.

“Since legal leaders typically don’t own the business process they embed controls for, consulting the relevant business units is vital.”
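A minimal sketch of what such an AIA record might capture, combining the documented decisions with the business owner that Gartner advises consulting. The fields and risk scale below are illustrative assumptions, not a prescribed AIA format.

```python
# Illustrative sketch of an algorithmic impact assessment (AIA) record:
# it documents who owns the process, what was decided, and at what risk level.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class ImpactAssessment:
    system_name: str                  # AI tool being assessed
    business_owner: str               # unit that owns the process (the one to consult)
    risk_level: str                   # e.g. "minimal", "limited", "high" (assumed scale)
    decisions: list[str] = field(default_factory=list)   # design/mitigation decisions taken
    reviewed_on: date = field(default_factory=date.today)

    def add_decision(self, note: str) -> None:
        """Record a decision so due diligence can be demonstrated later."""
        self.decisions.append(note)


if __name__ == "__main__":
    aia = ImpactAssessment("CV screening model", "HR", "high")
    aia.add_decision("Human review required before any rejection is sent.")
    print(aia)
```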

Governance That Includes Human Oversight and Accountability
“One risk that is very clear in using large language model (LLM) tools is that they can get it very wrong while sounding superficially plausible,” said Cohn. “That’s why regulators are demanding human oversight which should provide internal checks on the output of AI tools.”

Companies can appoint an AI point person to help technical teams design and implement human controls. 
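A hedged sketch of what such a human control could look like in practice: AI output is only released automatically when confidence is high, and everything else is held for human sign-off. The threshold and the review queue are assumptions made for illustration, not a specified mechanism.

```python
# Illustrative human-in-the-loop gate: AI output is released automatically only
# when confidence is high; anything else is queued for a human reviewer.
REVIEW_QUEUE: list[dict] = []          # stand-in for a real review workflow
CONFIDENCE_THRESHOLD = 0.9             # assumed cut-off, set by the AI point person


def release_or_escalate(output: str, confidence: float) -> str:
    """Return the output directly or escalate it for human review."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return output
    REVIEW_QUEUE.append({"output": output, "confidence": confidence})
    return "Held for human review."


if __name__ == "__main__":
    print(release_or_escalate("Contract clause looks standard.", 0.95))
    print(release_or_escalate("Applicant does not meet criteria.", 0.6))
    print(f"{len(REVIEW_QUEUE)} item(s) awaiting human sign-off")
```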

The guidelines also include establishing a digital ethics advisory board of legal, operations, IT, marketing and outside experts to help project teams manage ethical issues, and making sure the board of directors is aware of any findings.

Guard Against Data Privacy Risks
“It’s clear that regulators want to protect the data privacy of individuals when it comes to AI use,” said Cohn. “It will be key for legal leaders to stay on top of any newly prohibited practices, such as biometric monitoring in public spaces.”

Legal leaders should manage privacy risk by applying privacy-by-design principles to AI initiatives. For example, require privacy impact assessments early in the project or assign privacy team members at the start to assess privacy risks.
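As an illustrative sketch of privacy by design in a project workflow (an assumption, not Gartner’s wording), a new AI initiative could be blocked from kickoff until a privacy impact assessment has been completed and a privacy owner assigned:

```python
# Illustrative gate: an AI project cannot move past kickoff until a privacy
# impact assessment (PIA) has been completed and a privacy owner assigned.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AIProject:
    name: str
    pia_completed: bool = False           # has a privacy impact assessment been done?
    privacy_owner: Optional[str] = None   # privacy team member assigned at the start


def can_start(project: AIProject) -> bool:
    """Privacy-by-design check applied before any development begins."""
    return project.pia_completed and project.privacy_owner is not None


if __name__ == "__main__":
    draft = AIProject("Chatbot pilot")
    print(can_start(draft))                                   # False: no PIA, no owner
    draft.pia_completed, draft.privacy_owner = True, "J. Doe"
    print(can_start(draft))                                   # True
```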

“With public versions of LLM tools, organizations should alert the workforce that any information they enter may become a part of the training dataset. That means sensitive or proprietary information used in prompts could find its way into responses for users outside the business. Therefore, it’s critical to establish guidelines, inform staff of the risks involved, and provide direction on how to safely deploy such tools.”
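One way such guidelines can be backed up technically is a simple prompt filter that redacts obviously sensitive strings before anything is sent to a public LLM tool. This is a minimal sketch under assumed patterns; no real provider API is called, and a real deployment would maintain its own rules.

```python
# Illustrative prompt guard: redact obviously sensitive strings before a prompt
# leaves the company and could end up in a public LLM provider's training data.
import re

# Assumed patterns for demonstration only.
SENSITIVE_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    "PROJECT_CODENAME": re.compile(r"\bProject [A-Z][a-z]+\b"),
}


def redact(prompt: str) -> str:
    """Replace sensitive matches with placeholders before the prompt is sent out."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt


if __name__ == "__main__":
    raw = "Summarise Project Falcon and email the result to cfo@example.com"
    print(redact(raw))
```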

European Union and AI

The EU parliament recently agreed on what will be one of the first laws regulating the use of AI. The next step is for the parliament and member states’ ministers in the European Council to agree on the final legislation.


The EU rules include the right to make complaints about AI systems and a requirement that providers disclose when content has been generated by AI.

Generative AI that can produce texts looking like they were written by humans would have to comply with transparency requirements, like showing that the content was generated by AI. Providers would need to design the model to prevent it from generating illegal content and to publish summaries of the copyrighted data used for training.

Bans on using AI

The draft approved by the Parliament wants a uniform definition for AI designed to be technology-neutral, so that it can apply to the AI systems of today and tomorrow.

The rules follow a risk-based approach and set obligations for providers and users depending on the level of risk the AI can generate.

AI systems with an unacceptable level of risk to people’s safety would be prohibited, including systems that deploy manipulative techniques, exploit people’s vulnerabilities or are used for social scoring (classifying people based on their social behaviour, socio-economic status or personal characteristics).

Bans on intrusive and discriminatory uses of AI include:

  • “Real-time” and “post” remote biometric identification systems in publicly accessible spaces;
  • biometric categorisation systems using sensitive characteristics (e.g. gender, race, ethnicity, citizenship status, religion, political orientation);
  • predictive policing systems (based on profiling, location or past criminal behaviour);
  • emotion recognition systems in law enforcement, border management, the workplace, and educational institutions; and
  • untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases (violating human rights and the right to privacy).
