“Move fast and break things” is Facebook founder Mark Zuckerberg’s famous mantra stressing the importance of speedy development and experimentation. With Microsoft and Alphabet now racing to introduce generative artificial intelligence in search on Bing and Google, several observers warn that deploying this technology in search too early really could break things – like accuracy and trust.
Basically, the race is about Microsoft trying to take search market share from market leader Google.
Chatbots like the one built into Bing (based on OpenAI’s ChatGPT) and Google’s Bard do not think. They use algorithms trained on text scraped from the internet – in the worst case including misinformation – to create plausible, but not necessarily correct, answers.
Microsoft has launched a first version of its Bing search engine that incorporates ChatGPT, developed by OpenAI, a company Microsoft has invested heavily in. Google has just launched its own generative AI chatbot, called Bard, saying it will first be tested by a limited number of users.
OpenAI’s chief technology officer, Mira Murati, said in an interview with Time magazine that the bot “may make up facts” as it writes sentences. She explained that OpenAI’s ChatGPT generates responses by predicting the likely next word in a sentence.
Using generative AI chatbots to look up answers to questions from an infinitely large library of facts is to miss the point of their usefulness, Jonathan May, Research Associate Professor of Computer Science at the University of Southern California, argues in an article for the World Economic Forum.
They are essentially programmed to write plausible sentences rather than necessarily true ones, his article stresses.
“As a computer scientist, I often field complaints that reveal a common misconception about large language models like ChatGPT and its older brethren GPT3 and GPT2: that they are some kind of “super Googles,” or digital versions of a reference librarian, looking up answers to questions from some infinitely large library of facts, or smooshing together pastiches of stories and characters.”
“They don’t do any of that – at least, they were not explicitly designed to”, he writes.
“A language model like ChatGPT, which is more formally known as a “generative pretrained transformer” (that’s what the G, P and T stand for), takes in the current conversation, forms a probability for all of the words in its vocabulary given that conversation, and then chooses one of them as the likely next word. Then it does that again, and again, and again, until it stops.”
“So it doesn’t have facts, per se. It just knows what word should come next. Put another way, ChatGPT doesn’t try to write sentences that are true. But it does try to write sentences that are plausible.”
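May’s description of the generate-next-word loop can be made concrete with a deliberately tiny sketch. The hand-made bigram table below is a hypothetical stand-in for the billions of learned parameters in a real GPT, but the loop is the same shape he describes: form a probability for each candidate next word given the text so far, pick one, and repeat until there is nothing left to predict.

```python
import random

# Toy stand-in for a learned language model (NOT OpenAI's actual model):
# for each word, the probability of each possible next word.
NEXT_WORD_PROBS = {
    "the": {"sky": 0.5, "moon": 0.3, "cat": 0.2},
    "sky": {"is": 1.0},
    "moon": {"is": 1.0},
    "cat": {"is": 1.0},
    "is": {"blue": 0.6, "green": 0.4},  # plausible-sounding, not checked for truth
}

def generate(prompt: str, max_words: int = 10, seed: int = 0) -> str:
    """Repeatedly sample a likely next word until the model has no continuation."""
    rng = random.Random(seed)
    words = prompt.split()
    for _ in range(max_words):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if probs is None:  # no known next word: the model "stops"
            break
        choices, weights = zip(*probs.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Note that nothing in this loop consults a store of facts: “the sky is green” can come out just as readily as “the sky is blue”, which is exactly the plausible-but-not-necessarily-true behaviour May describes.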
Alphabet and Google CEO Sundar Pichai was enthusiastic when announcing Bard, but added that the company will combine external feedback with its own internal testing to make sure Bard’s responses “meet a high bar for quality, safety and groundedness in real-world information”.
Explaining further how generative AI can be useful, Pichai did not address the question of correctness, but said it can be helpful for synthesizing insights for questions where there is no single right answer.
He said that soon, you’ll see AI-powered features in Search that distill complex information and multiple perspectives into easy-to-digest formats, so you can quickly understand the big picture and learn more from the web: whether that’s seeking out additional perspectives or going deeper on a related topic.