Managing Artificial Intelligence in the Workplace
Steven B. Zwickel
April, 2026
Last year I was invited to give a presentation based on the Communication-Based Model of Leadership I created. {See <https://stevenbzwickel.blogspot.com/search?q=leadership>}
I have given this presentation many times since I created it almost 20 years ago, and I realized that it needed to be updated and refreshed before I presented it in 2026. The world has changed, and my presentation needed to reflect those changes.
I decided that I needed to add a section about being a leader in a world where people don’t feel safe. I have written about the importance of feeling safe before {See “It’s Better to Be Safe,” August, 2020 <https://stevenbzwickel.blogspot.com/search?q=safety>}, so it seemed logical to include in my presentation ideas about what leaders can do to help people in the workplace feel safer.
Then I realized that a major source of anxiety and fear nowadays seems to be coming from Artificial Intelligence (AI). It seems everywhere you look there are articles about AI and how (A) it will make life easier, (B) it is taking away people’s jobs, (C) it is making people stupid, or (D) it will end civilization as we know it, any of which may, or may not, be true.
All of these seem to add up to a major change in the workplace: AI makes people feel unsafe because they fear losing their job security, their privacy, their right to fair treatment, and their ability to trust the reality they see online.
I decided to add a section to my presentation about dealing with AI in the workplace. This blog entry summarizes what I told my audience: what I think leaders (supervisors, managers, bosses—anyone in charge of other people) need to think about.
To start with, I told my audience that you can’t keep people from being afraid. You can reassure them that you will do what you can to help with job security, privacy, and the right to fair treatment. Dealing with the other psychological/emotional impacts of AI may require major lifestyle changes (such as reducing or eliminating use of phones and social media, which often make people feel insecure about who they are, what they do, where they live, etc., because that is how advertisers get people to buy goods and services).
I explained that, as near as anyone can tell, AI is here to stay and that it will continue to cause problems. Although it is “easy to use,” it can be a struggle to get useful responses from a chatbot. I consider myself good at framing questions so that I get usable answers—the result of going through law school and graduate work in social work. But I have also struggled to get reliable responses from chatbots, so I know that AI can be a real waste of time.
I also know that AI sometimes hallucinates—it comes up with bad responses and false information. This can lead to a loss of the writer’s credibility, as in recent cases where attorneys were disciplined for citing legal precedents that didn’t exist. It can also lead to an organization making bad business decisions, which can result in wasting time and losing money.
I am a realist, so I know AI is not going away; we are going to have to learn to live with it. {I have no hope that our government will ever come up with a plan for regulating or controlling AI, so there is no point in waiting for legislation to make our lives safer}.
That means that leaders must now consider how to adjust to the reality of AI. If they do nothing, workers will continue to waste time trying to get useful answers, the company will continue to be at risk of losing money due to bad business decisions, and the organization may lose credibility if there are errors and misinformation in what it distributes.
Like it or not, use of AI in the workplace is inevitable, so let me suggest ways an organization might approach these problems.
I believe organizations must have a policy regarding the use of AI. I realize that this technology is brand new and no one knows what it will become, but if you don’t have a policy in place, people will not know what is considered acceptable. Start with a policy and revise it as AI use evolves.
Insist that anyone in the organization who uses AI must disclose that they did so. The rule ought to be: “If you put your name on it, it had better be yours, and if you used AI, you need to make that clear.” Call me old-fashioned, but I think that people ought not to get credit for someone, or something, else’s work.
Train people to use AI efficiently. The most important skill is knowing how to ask a question {a prompt} in a way that gets a usable answer. Different types of questions yield different answers, so people need to learn to frame their questions properly. Of course, training costs money; the company won’t like this! And training takes time and effort; no one will like this!
Users of AI must conduct due diligence. Chatbots hallucinate! That means all sources must be checked for validity. To protect the organization from lawsuits, any use of patented or copyrighted material must be noted, and AI users should be prepared to answer questions about all of their sources. The danger of not doing so is that relying on misinformation could cost a company a lot of money.