Ready or Not: AI Enters the Workforce

Chatbots may not be ready for the office, but that’s not stopping people from using them

[Illustration: a large chatbot looms over six office workers in cubicles. iStock/IEEE Spectrum]

AI is coming to your office—and, if you use Microsoft Teams or Google Workspace, it might already be there.

That’s the reality behind a trio of announcements from OpenAI, Microsoft, and Google. Each has unveiled “enterprise-grade” AI tools aimed at corporations and other large organizations. In the case of Microsoft and Google, these AI tools plug directly into Microsoft 365 and Google Workspace, the popular productivity platforms used by hundreds of millions of workers worldwide.

“There are [tools] coming through for things like [Microsoft] Teams, and tools like Google Bard, which is enabling the knowledge worker. That’s everybody who’s sitting and working with a computer. Whether it’s somebody writing an email to a customer or evaluating a set of options […] it doesn’t mean I can completely outsource the decision to AI, but I can be better prepared,” says Rajesh Kandaswamy, the AI strategy adviser at Aithena Strategy.

The hydra of workplace AI

Microsoft was first out of the gate, announcing Microsoft 365 Copilot in March 2023, but now it has a pair of major challengers. OpenAI announced ChatGPT Enterprise on 28 August 2023. Google responded immediately, announcing the general availability of Google Duet AI for Google Workspace the following day.

[Image: Microsoft promotes Microsoft 365 Copilot for creating presentations, meeting notes, and spreadsheets. Microsoft]

AI models have assisted with numerous workplace tasks for years, from analyzing data to automating production. Different models can detect flaws in machines from their rhythms, improve warehouse picking machines, and explore the mysteries of biology. But these examples, though extraordinarily useful, are specific, and they usually demand users with significant technical expertise.

“One of the reasons why generative AI is so popular is because of the wide applicability and many broad use cases it can be used for versus narrow AIs, which are very narrow in scope and applicability,” says Andy Thurai, the vice president and principal analyst at Constellation Research. A large language model (LLM), such as OpenAI’s GPT-4, has the potential to be useful for tasks as wide-ranging as sentiment analysis, code generation, and office automation. Just as important, LLMs provide these capabilities through easy-to-learn interfaces.

This could prove key for Microsoft 365 Copilot and Google’s Duet AI, catchall names for an extraordinarily broad range of AI-powered features spanning text generation, image generation, and data analysis. An engineer looking to explain the capabilities of a system in a meeting, for example, could ask Copilot or Duet AI to outline a presentation based on documentation already hosted in their company’s workspace. The capability is new, but the interface used to accomplish it is familiar.

Isn’t this just ChatGPT in a suit?

For all the anticipation, the expansion of generative AI into common productivity tools is already drawing criticism. It’s not hard to understand why: ChatGPT Enterprise might look more professional but, at its core, it’s still the same LLM users have accessed through OpenAI’s Web portal for nearly a year—and it’s still prone to familiar mistakes.

“Companies assume they will save, but when they do the return-on-investment calculation, with all costs involved, sometimes it might cost them more,” warns Thurai. “If you employ folks to verify, and validate, the code or content generated by AI, how much are you really saving?”

Consider our hypothetical engineer’s hypothetical presentation. Using AI to generate the presentation might save time, but the engineer would be smart to verify that the data in the presentation is correct. That’s especially true if the engineer’s employer hasn’t trained its AI tools on the company’s own data.

“If you employ folks to verify, and validate, the code or content generated by AI, how much are you really saving?” —Andy Thurai, vice president and principal analyst at Constellation Research

Adding AI to productivity software also spurs worries of a busywork arms race. A sarcastic cartoon by Tom Fishburne encapsulates the problem: It depicts an office worker pleased as generative AI expands a few bullet points into an email, while their colleague is equally pleased with AI’s ability to condense an email to a few bullet points.

[Animation: A person uses Google Duet AI to summarize financial information and generate a report. Duet AI can also create a slideshow presentation from a document, if asked. Google]

Google, Microsoft, and OpenAI aren’t oblivious to these problems, and they have taken some steps to mitigate them. ChatGPT Enterprise, for instance, provides a fourfold increase in the context window (an important gain, because the context window limits how much input the model can consider while generating a response), adds shareable templates to standardize workflows, and provides an analytics dashboard to reveal usage trends or problems. All three companies are also taking steps to ease worries over data security and privacy: In particular, each vows not to train its general-use AI models on client data.
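To see why a larger context window matters, consider a toy sketch of the constraint: a model can only ingest a fixed token budget, so any input beyond that budget must be cut or summarized before the model ever sees it. The whitespace "tokenizer" and the window sizes below are illustrative inventions, not how any of these products actually count tokens (real LLMs use subword tokenizers).

```python
# Toy illustration of a context window: longer inputs must be
# truncated to fit a fixed token budget. Whitespace tokens are a
# stand-in for the subword tokens real LLMs use.

def fits_context(text: str, window: int) -> bool:
    """Return True if the text fits within the token budget."""
    return len(text.split()) <= window

def truncate_to_window(text: str, window: int) -> str:
    """Keep only the most recent tokens that fit in the window."""
    tokens = text.split()
    return " ".join(tokens[-window:])

prompt = "summarize the quarterly report for the board meeting tomorrow"
small_window, large_window = 4, 16  # a fourfold difference, as an example

print(fits_context(prompt, small_window))        # 9 tokens > 4: False
print(fits_context(prompt, large_window))        # 9 tokens <= 16: True
print(truncate_to_window(prompt, small_window))  # only the last 4 tokens survive
```

With the small window, most of the prompt is silently dropped; quadrupling the window lets the whole request through, which is the practical benefit ChatGPT Enterprise is advertising.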

Companies play catch-up

While generative AI’s downsides may give some pause, the opposite problem is perhaps more pressing. Workers are using free generative AI tools to find their own solutions to the challenges of their jobs, with or without support from employers. This has caused a few headaches: Samsung blocked employees from accessing ChatGPT in March 2023 after finding at least two engineers had used it to troubleshoot confidential code.

“Employees have all played with ChatGPT, and they’re finding clever ways to use it,” says Kandaswamy. “Management is saying, whoa, we haven’t figured this thing out. We need to be worried about compliance, IP issues, security, privacy, and that’s all valid. But the genie’s out of the bottle.”

The current moment reminds Kandaswamy of the first Internet browsers, like Netscape Navigator (which made its own play for enterprise with Netscape Communicator). Easy access to the Web opened a whole world of possibilities—and a world of worries about worker distraction and network security. But companies were forced to embrace the Web, knowing employees would choose to use it whether it was allowed or not.

And the comparison to Web browsers is instructive on another point: It’s still early days for generative AI. The first offerings from Google, Microsoft, and OpenAI take the obvious route, molding existing platforms into generative AI tools. But Meta remains unaccounted for in this arena, and small up-and-comers like MosaicML and Anthropic (which just announced Claude Pro, an alternative to the consumer-level ChatGPT subscription) could have their own take on the idea.

“As a matter of fact, my prediction is probably someone from the left field will completely level the playing field and take over from those major players, like Meta did with its Llama 2 model, which is leading all the performance and usage charts,” says Thurai.
