AI Is Transforming Office Communications. Here’s What Two Experts Want Employers to Know.

Programs like ChatGPT are already having an impact on business communications. But while AI can create workflow efficiencies, it can also introduce privacy risks and inaccurate information. How can companies ensure their employees are using it effectively?
(Illustration: Niklas Wesner)

The recent and swift adoption of generative AI has introduced businesses to free tools, like ChatGPT, that can write emails, draft articles and analyze data, to name a few of the limitless potential uses. Some organizations have already rolled out policies on how they intend to use these large language models, or LLMs; others may not even realize their employees are already using them.

Here, Jodie Lobana, chair of the McMaster Artificial Intelligence Society advisory board, and Molly Reynolds, partner and privacy and cybersecurity lead at Torys LLP, discuss how organizations can mitigate the risks and maximize the effectiveness of generative AI in the workplace.

Jodie Lobana, chair of the McMaster Artificial Intelligence Society advisory board (Illustration: David Sparshott)

JODIE LOBANA: I’ve started experimenting with ChatGPT for basic communications like emails and social-media posts. I use it almost like it’s a personal assistant—I give it a rough draft, it enhances the work and I go back and forth making adjustments. Crucially, I review everything before it gets sent out. AI can generate false information or misinterpret prompts, so you cannot rely on it to produce final products.

MOLLY REYNOLDS: There isn’t a law firm in North America that has authorized its lawyers to use ChatGPT to write factums (legal documents presenting the facts and arguments in a given case). But in a recent, well-publicized case, a lawyer did just that: ChatGPT fabricated the references it cited, and the lawyer submitted the factum to court without realizing it. Every business has to stay on top of anticipated uses and make its policies clear. You don’t want clandestine use.

J.L.: Everyone who uses ChatGPT should keep in mind that the company behind it, OpenAI, has stated that it can read your conversations for the purpose of developing the tool.

M.R.: Here’s a rule of thumb: If you would be happy for every part of the information you’re giving ChatGPT to be made public, it may be appropriate to use the tool. But every business is going to have a different risk tolerance.


Molly Reynolds, partner and privacy and cybersecurity lead at Torys LLP (Illustration: David Sparshott)

J.L.: There are a few simple tips for enhancing privacy while using these tools. The first is removing identifiable information, whether it belongs to you or your clients. For example, when you prompt ChatGPT, use a placeholder like “ABC company” instead of the actual name. There’s also a setting to opt out of OpenAI using your data to train its models.

M.R.: At our firm, whether or not you use placeholders, ChatGPT must not be used if you’re dealing with confidential or privileged information. There’s always the possibility of identifying someone based on what else is in the database. But I think we will start to see a lot more custom LLMs developed for larger companies. That will make a big difference from a safety perspective. These companies will have product teams that onboard the entire staff, which means everyone will be properly trained on how to use the tool safely and effectively.

J.L.: Training is key for responsible use of this software. The next generation of the workforce should jump on the bandwagon as soon as possible to start learning these skills. Even now, I would much rather hire someone who’s savvy with AI than someone who isn’t.

M.R.: We’ll start seeing companies wrestle with how to train junior employees now that the work they would have done can be automated. At a marketing firm, doing several rounds of revisions on an article may have been an important training exercise for entry-level staff. But, looking ahead, there’s a good chance a lot of that work will be automated. In client-services businesses like content creation, can you then charge people for supervising an automated tool? And if you can, do you have to charge them less?

J.L.: There’s an important conversation to be had about the potential loss of skills if we adopt these tools en masse. We want to hold on to human creativity and voice. There’s something personal that gets lost when we depend too much on ChatGPT, even with basics like email. And transparency is key. Whether you use ChatGPT for editing, research or something else, it’s important to note that usage somewhere in the final document.

M.R.: There’s room for standardization around basic disclosure. We may see people who regularly use these tools for business communications incorporate a disclaimer that gets put in email footers, for instance—if only so no one can be accused of trying to deceive their counterparty.

Liza Agrba
Liza Agrba is an award-winning freelance writer based in Toronto with over a decade of experience covering food, business and culture. Her work regularly appears in The Globe and Mail, Maclean’s, and Toronto Life, among others.