Future regulation of the use of artificial intelligence
The focus on generative artificial intelligence like ChatGPT, predicted to have a great but controversial impact on content creation, includes discussions about ethics and safety when content is machine-created. In the years ahead, companies will face increased regulatory pressure around their AI models, predicts Michael Schmidt, Chief Technology Officer at US-based AI cloud firm DataRobot, in an article published by the World Economic Forum.
“Organizations that are prepared to take on uncertainty – from market conditions to geopolitical unrest and everything in between – will be the ones best suited to serve their customers, employees, and shareholders.”
He writes that in the years ahead, the ramifications of heightened societal awareness of AI, increased regulatory pressure, growing momentum of investment in the space, and AI’s continued boost to employee productivity may come to a head. “Practical and applied AI concerns will become paramount to enable continued value from AI growth.”
“Algorithmic bias has been a growing subject of discussion and debate in the use of AI. There are clear ways to approach questions of AI fairness by using data and models with guardrails in place, as well as suggested steps organizations can take to mitigate issues of uncovered bias.”
He notes that the largest source of bias in an AI system is the data it was trained on. That data might have historical patterns of bias encoded in its outcomes. Ultimately, machine learning gains knowledge from data, but that data comes from us – our decisions and systems.
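For a concrete sense of what checking training data for such historical bias can look like, the sketch below (not from Schmidt’s article) compares positive-outcome rates across a hypothetical protected attribute; the column names, toy data and the 0.8 rule-of-thumb threshold are illustrative assumptions, not a standard audit.

```python
import pandas as pd

# Hypothetical hiring dataset: 'group' is a protected attribute,
# 'hired' is the historical outcome a model would learn from.
data = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B", "B", "A"],
    "hired": [1, 1, 0, 0, 0, 1, 0, 1],
})

# Historical positive-outcome rate per group in the training data.
rates = data.groupby("group")["hired"].mean()

# Disparate-impact ratio: lowest group rate divided by highest.
# A common (illustrative) rule of thumb flags ratios below 0.8.
ratio = rates.min() / rates.max()
print(rates)
print(f"Disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Potential bias in historical outcomes - review before training.")
```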
“Because of the expanding use of the technology and society’s heightened awareness of AI, you can expect to see organizations auditing their systems and local governments working to ensure AI bias does not negatively impact their residents. In New York City for example, a new law will go into effect in 2023 penalizing organizations that have AI bias in their hiring tools.”
“In the year ahead, I expect companies to face increased regulatory pressure around their AI models. Regulatory changes are likely to include requirements around both explanations for individual predictions as well as detailed records and tracking of the history and lineage of how models were trained.”
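As an illustration of what the record-keeping half of such a requirement might involve in practice, here is a minimal sketch (an assumed example, not DataRobot’s tooling) that logs a training run’s data snapshot hash, hyperparameters and validation metrics to an append-only audit file; the file names and fields are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def lineage_record(train_path: str, params: dict, metrics: dict) -> dict:
    """Build an audit record tying a trained model to its exact data snapshot."""
    with open(train_path, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    return {
        "trained_at": datetime.now(timezone.utc).isoformat(),
        "training_data": train_path,
        "training_data_sha256": data_hash,  # identifies the data the model saw
        "hyperparameters": params,
        "validation_metrics": metrics,
    }

# Demo: write a tiny (hypothetical) training file, then log one training run.
with open("hiring_data.csv", "w") as f:
    f.write("group,hired\nA,1\nB,0\n")

record = lineage_record("hiring_data.csv", {"max_depth": 4}, {"auc": 0.87})
with open("model_lineage.jsonl", "a") as log:
    log.write(json.dumps(record) + "\n")
print(record)
```

A fuller governance setup would also tie each served prediction back to such a record, which is what makes per-prediction explanations auditable after the fact.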
“Increased AI regulation will ultimately be welcomed by the industry as evidenced by 81% of tech leaders saying they would like to see increased government regulation in a recent DataRobot survey.”
However, Schmidt writes, more companies are now aware that they may need to react as voluntary guidelines are converted into regulations.
“Because of this, I predict most companies will need to invest in systems with model governance in place. By investing in systems that have the appropriate guardrails, companies can continue to focus on technological innovation with the peace of mind that their systems comply with legal and regulatory obligations.”
“In 2023, I expect to see continued momentum in AI investments, particularly among businesses most directly impacted by economic and supply chain disruptions, as well as mature industries generally able to scale AI adoption the most, such as financial services, retail, healthcare, and manufacturing.”
However, he also predicts that, while some investments will progress, certain AI technology trends will remain experimental.
“Looking at financial services for example, I expect that use-cases will turn to AI systems that can improve accuracy of fraud detection and speed up laborious reporting processes.”
“Looking at technology trends, generative AI is receiving tremendous interest based on newly developed deep learning models (from OpenAI and others). However, I predict these models are still too new to be practical for most enterprises because of a few challenges.”
“The first being the fact that it’s difficult to ensure their behaviour on necessary issues like bias and fairness; despite efforts, current versions can be easy to break. This means businesses will need to truly trust providers of these models since they will have no hope to build or create their own.”
“Adapting these models for desired use-cases is also difficult for most to get right. While I expect to see companies continue to work with generative AI, I believe applications will continue to be experimental for many enterprises in the coming year until the business cases and their expected return on investment is better understood.”
“Overall however, businesses that focus on building an AI mentality across the organization by continuing to make investments in the space and fully integrating AI into their operations (including assessing new developments) will be better suited to handle market uncertainty and drive long-term success.”