Businesses’ use of artificial intelligence could take off in 2023. Adoption has so far been lower than expected as the private sector waits for governments to establish regulations. This is likely to change in 2023 as regulatory regimes such as the EU AI Act, the US AI Bill of Rights and Chinese regulations come into effect, writes Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning and Member of the Executive Committee at the World Economic Forum.
“The AI developments in 2022 have laid the groundwork for significant progress and refinements in 2023.”
A WEF report on the use of the Internet of Things (IoT) says the technology offers plenty of benefits to society, governments and businesses, but a lack of confidence in areas including privacy and security may stifle progress.
“In 2023, we can expect several industries, beyond tech itself, to further embrace the technology. In some sectors, like care of the elderly and healthcare, we will likely see many positive advancements”, writes Firth-Butterfield.
“This includes enhancing care through tasks like reading x-rays and improving administration systems, which will lower costs and ensure regular treatments, among other benefits. We can also expect to see AI developments in manufacturing, pharmaceuticals, mobility and in verticals such as human resources.”
“However, we must take care not to exacerbate the North-South divide, as so many businesses in the Global South are small and medium-sized enterprises which don’t have the money, the computer-readable data or the skills to incorporate AI.”
She says we can also expect more dialogue over how best to use AI in society. In some sectors, this will require difficult conversations.
“Law enforcement agencies, for example, are increasing their use of AI-powered facial recognition systems, which have sparked widespread concern over privacy and surveillance. AI-powered weapons are also being deployed in civilian settings.”
“Therefore, surveys like the one being carried out by the United Nations Interregional Crime and Justice Research Institute and the INTERPOL Innovation Centre are a good first step in giving the public a seat at the table. In this case, the agencies are asking the public how they feel about the use of AI by police. The feedback will inform their work to develop a Toolkit for Responsible AI Innovation in Law Enforcement – an innovative and practical guide to support law enforcement agencies in carefully navigating this complex topic.”
“Above all, the fundamental question for 2023 remains not whether AI systems can be accurate and efficient, but rather how they should be used and to what degree”, Firth-Butterfield writes.
She says last year saw breakthroughs from AI tools such as ChatGPT, which generates text and code. “These advances have generated discussion on how AI can be used and expanded. As more industries embrace this technology, we must ensure systems are inclusive and ethical.”
“In 2022, we were presented with several stunning developments in AI. Some believe that these advances push the limits of what we have now towards the holy grail of artificial general intelligence (a machine that can mimic the thinking and problem-solving capacities of humans but faster and more accurately).”
She highlights four key developments:
- DALL-E, the AI that can create pictures from language prompts. Many of us enjoyed playing with the tool and embracing the ability it gave us to design in new ways. Others worried about AI taking over human creativity. Moreover, since DALL-E pulls images from the web, there is concern that cultures with little online representation will be left out of these models and become less represented in the world.
- ChatGPT, DALL-E’s literate, code-writing “sibling”. Whilst the former creates new images, the latter creates text and code. These texts can be newspaper articles, students’ essays, speeches, scientific papers and more, again generated from a written prompt provided by the user. The concerns that surround DALL-E also apply to ChatGPT: it is a “black box”, so we have no understanding of how it works. But there is no doubt that if ChatGPT improves efficiency and is used in the right way, it will be a hugely important tool.
- AI development company DeepMind created an algorithm that codes very well. The system, AlphaCode, can beat 72% of human coders in average competitions and recently solved about 30% of the problems in a highly complex coding competition against humans. Whilst this figure may seem low, the algorithm will keep improving rapidly: think of AlphaGo, which can now beat every human Go player. It is unlikely that AI will take over programming completely, but it will dramatically cut the number of humans needed to code.
- Gato, described as a generalist agent by its creators at DeepMind, is an important development because, whereas today’s powerful algorithms do one or two things exceedingly well, Gato can do many. These include playing Atari games, captioning images, chatting with users, stacking blocks with a robotic arm and more. This development moves us away from narrow, single-task AI and into the realm of an AI that can perform multiple tasks.
The IoT report says that understanding the opportunities and potential risks of IoT and related technologies is critical to maximizing their benefits while recognizing and minimizing the risks associated with their use.
“This has many implications, particularly regarding questions about the ethical use of the technology, security and privacy, and equal access to the technology.”
The findings about IoT include:
- The COVID-19 pandemic has changed the face of IoT and related technologies, spurring new use cases and applications and bolstering demand in areas such as health, manufacturing and consumer IoT.
- The increase in innovation of IoT devices and related technologies presents plenty of benefits to society, governments and businesses, but the lack of confidence in areas including privacy and security may stifle progress.
- Rapid advances in IoT technology have challenged the ability to regulate industries and implement industry standards. The survey points to ethical and responsible use as the area with the largest perceived governance gap.
- The pandemic shed light on user data vulnerabilities, leading users to prioritize privacy and security when using IoT devices and applications. In turn, governments and businesses have had to respond with regulations and updates to systems and devices to build justified user trust.
- The second-largest perceived governance gap is in cybersecurity. Growing reliance on connected devices and related technologies has made organizations, governments and individual users increasingly susceptible to cyberthreats, making the ability of connected devices and related technologies to protect individuals from cyberattacks a leading concern.
- Equal access to technology and its benefits is another area that needs to be prioritized.