We’re only at the beginning of what artificial intelligence can accomplish; whatever limitations it has today will be gone before we know it, Bill Gates writes in a blog post for the World Economic Forum. He joins other tech leaders in describing AI as the next giant step in technological development. But he does not trust market forces alone to put AI to its best use, arguing that governments and philanthropy are needed to steer it.
“The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.”
“In addition to helping people be more productive, AI can reduce some of the world’s worst inequities”, he writes.
“Globally, the worst inequity is in health: 5 million children under the age of 5 die every year. That’s down from 10 million two decades ago, but it’s still a shockingly high number. Nearly all of these children were born in poor countries and die of preventable causes like diarrhea or malaria. It’s hard to imagine a better use of AIs than saving the lives of children.”
He addresses scepticism and fears around AI, saying that “first, we should try to balance fears about the downsides of AI—which are understandable and valid—with its ability to improve people’s lives.”
“To make the most of this remarkable new technology, we’ll need to both guard against the risks and spread the benefits to as many people as possible.”
“Market forces won’t naturally produce AI products and services that help the poorest. The opposite is more likely. With reliable funding and the right policies, governments and philanthropy can ensure that AIs are used to reduce inequity. Just as the world needs its brightest people focused on its biggest problems, we will need to focus the world’s best AIs on its biggest problems.”
He writes about the shortcomings of current AI models, such as strange and incorrect answers. “But none of these are fundamental limitations of artificial intelligence”, he writes. “Developers are working on them, and I think we’re going to see them largely fixed in less than two years and possibly much faster.”
“Other concerns are not simply technical. For example, there’s the threat posed by humans armed with AI. Like most inventions, artificial intelligence can be used for good purposes or malign ones. Governments need to work with the private sector on ways to limit the risks.”