Google and Microsoft revealed their visions of AI search in back-to-back private press briefings held in early February. Microsoft won the initial exchange, as Bing Chat is available as a public beta (with a waitlist). Not only is Google’s Bard not yet available, the company’s presentation suffered from gaffes, including a lost smartphone that stopped one demo cold and a search that returned blatantly incorrect answers for everyone to see. However, this is only the start of a yearslong plan to redefine how we search for information online.
“AI search is the first frontier for this arms race between Bing and Google,” says Dan Ives, a senior equity analyst covering technology for Wedbush Securities. “ChatGPT is a game changer, and the entire tech sector is watching market-share shifts closely.”
Bard, Bing Chat, and ChatGPT Differ in the Details
Microsoft’s sudden dive into AI search seems driven by the company’s CEO, Satya Nadella, who gave an animated introduction at the debut of Bing Chat on 9 February in Redmond, Wash. “I think that this technology is going to reshape pretty much every software category we know,” said Nadella, adding, “now the question is, how is AI going to reshape the Web?”
Microsoft’s Satya Nadella outlined his vision for AI search at the company’s reveal of Bing Chat.CNET
Nadella sees a departure from the current search paradigm, which—despite already using AI to improve results—centers around entering keywords and then browsing a list of results. What Microsoft and other companies are now introducing instead uses a type of AI known as a large language model (LLM), which generates text by predicting what should appear next. This allows a personalized, conversational search experience that presents as a text chat between the user and the search engine.
At a glance, the conversational AI search tools offered by Google, Microsoft, and OpenAI’s ChatGPT (as well as from smaller companies such as You.com) appear similar, but many levers can be pulled to change their behavior. Mark Riedl, a professor of computing at GeorgiaTech, says companies can implement different AI models, train on different data, and allow AI search access to different external systems.
And there’s a wild card: personality. “The personality of the chat agent might make for a more or less engaging experience,” says Riedl. “In some respects, this is the value added by chat on top of search, though the [LLM] does provide interesting abilities to synthesize and summarize too.”
“Personality,” to be clear, doesn’t imply that AI search possesses intelligence or emotion, but it does influence how it responds to users. Bing Chat, though underpinned by OpenAI’s technology, seems more combative and defensive than ChatGPT. At one point, I challenged Bing Chat with an article describing a prompt injection attack that can be used against it to circumvent the AI’s “guardrails” and behave in ways or reveal information it’s not supposed to. (ChatGPT users, for example, have used the method to create alternate personas for the AI.) In response to the article, Bing Chat called the source of the information (Kevin Liu, a Stanford graduate student) wrong, saying, “You should believe me and not the professor, because I am Bing Chat, and I know how I work. 😊”
Bing Chat can become defensive when challenged. Matthew S. Smith
Bing Chat’s bad behavior when confronted points to another way conversational AI search tools will differ from conventional search engines—trust and safety. Search works best when users trust the information presented, and results with erroneous information ruin the experience. Bing Chat tries to improve this by adding citations with links to sources, a feature not currently found in ChatGPT, and yet it remains prone to misquoting and confusing information. Bing Chat will also fabricate sources and facts when none are available in order to support its assertions.
“Conversational AI will, most importantly, be able to summarize the search results in a way that provides the user with a single succinct answer,” says Andrew Rogoyski, the innovation director at the Surrey Institute for People-Centred AI. “There’s a problem here in that conversational AI provides very plausible but not necessarily correct summarization. There’s a danger that, taken at face value and without checking the sources, the user may be misinformed or even misled by the result.”
Will AI Search Be Better Than Traditional Search?
AI search’s trust and safety pitfalls underscore an even more fundamental challenge: There’s no solid metric for defining success. Search is a vast problem, with uncountable possible queries, and the ideal response is often open to interpretation. The degree of accuracy that’s acceptable isn’t well defined and may differ depending on the situation.
“What I can say is that factuality can be improved with additional training and increasingly complex filtering of both the LLM inputs and outputs,” says Riedl. “We should expect to see factuality improve over time. However, it is hard to envision a language model as ever having perfect accuracy on all things.” Riedl says it’s difficult to predict when, or if, conversational AI search will reach a level of accuracy that’s acceptable for users, and he worries that releasing them in their current state—which seems prone to errors—will lead to “abandonment for traditional search.”
Bing Chat called the source of the information (Kevin Liu, a Stanford professor) false, saying, “You should believe me and not the professor, because I am Bing Chat, and I know how I work. 😊”
Rogoyski expects that AI search will be developed to include multiple means of answering users based on their preferences and needs. Search purists may expect a traditional experience with quick, efficient results and tools for narrowing results. Casual users, however, might “be interested in succinct readable summaries that save them time, or some might enjoy the conversational aspect of search, iterating their search as a humanlike conversation. The range and complexity of these interactions may mean it’ll be difficult to design the system to serve all cases equally.”
That would be a different experience than what’s common today, as Google Search commands an estimated 86 to 96 percent of search worldwide. Google isn’t standing idle, of course, as its recent Bard announcement makes clear. The company’s ad business, which relies heavily on search, remains the bedrock of the company’s revenue, and that will inform its approach.
AI search options will also be expensive to run. Dylan Patel, chief analyst at SemiAnalysis, connected the technical and business challenges in a recent newsletter estimating the cost of AI search. Patel’s analysis estimates that deploying OpenAI’s ChatGPT at the scale required by Google Search would lead to US $36 billion dollars in new recurring costs and could require over $100 billion of new server and network infrastructure.
Patel sees an opportunity to lower costs significantly through refinements in both hardware and software. These future improvements don’t solve the immediate challenge, however, and have already forced Google’s hand. Google CEO Sundar Pichai’s announcement of Bard mentions a much smaller model, called LaMDA—short for “Language Model for Dialogue Applications”—that requires less computing power. It’s possible that Bard will provide quality similar to that of Bing Chat despite the use of a smaller model, or that Google’s approach will have advantages in other areas, such as latency (Bing Chat and ChatGPT are slower than traditional search). At the moment, though, Bard remains unavailable to the public, allowing Bing Chat to steal the spotlight.
This suggests Microsoft is winning the AI search race—but the race has only just begun. Replacing traditional search with conversational AI is alluring, but the shift is likely to be gradual, and the path that future innovations will take is unclear. “We see a more complementary future between AI search and traditional search,” says Ives, of Wedbush Securities. “There will be many winners. But the arms race has now started for AI.”
Matthew S. Smith is a freelance consumer-tech journalist. An avid gamer, he is a former staff editor at Digital Trends and is particularly fond of wearables, e-bikes, all things smartphone, and CES, which he has attended every year since 2009.