
ChatGPT and the dark side of AI

Note: A version of this post appeared on HRExecutive.com in January 2023.

There has been a lot written about the emergence of ChatGPT and the impact it will have on everything from college term papers to journalism. It’s a fascinating topic that fills me with both amazement at the technology and a sense of dread about what it means for the human condition.

The reality is that AI is ubiquitous, and investment is pouring in. Microsoft recently invested heavily in OpenAI (the group responsible for ChatGPT), and the global AI market is projected to reach $1,581.7B by 2030. Forbes reports that there has been a 14x increase in the number of active AI startups since 2000, and 72% of executives believe that AI will be the most significant business advantage of the future. It’s reminiscent of the early days of the dot-com era, when everyone scrambled both to profit from the explosion in technical capabilities and to avoid being left behind as their competitors adopted new approaches to work.

As someone who works with organizations to transform their business, I see the tremendous opportunities associated with AI. In digital transformations, AI can automate much of the transactional work needed to move historical data and test configurations. It can drive efficiency in business processes through robotic process automation (RPA), enabling automation where delivered functionality is not present. Additionally, AI can power Tier 0 support in shared services through chatbots that answer basic questions employees might have.

Unfortunately, in the rush to exploit the benefits of new technology, there is an inherent danger in moving too quickly to assess risk. And given the legal lag that comes with rapidly emerging technologies, regulation is not equipped to proactively address some of the challenges associated with AI before it does unintentional harm, despite the fact that 81% of tech leaders would like to see more regulation. This regulatory delay means it’s up to business leaders to act responsibly. Channeling my inner Dr. Ian Malcolm, I want to highlight some of the areas where businesses should be prepared to address the impact of AI.


Cybersecurity

Cybersecurity remains a key concern for businesses: with increased incidents of ransomware attacks and data theft, the need to safeguard infrastructure is top of mind. Ironically, AI has the potential to boost cyber-protection by analyzing the patterns of attacks to separate real threats from the noise. Unfortunately, the sophistication that allows AI to safeguard against attacks also means it is more successful at infiltrating whatever protection tools an organization may have in place.

In addition to being able to rapidly adapt to any blockers, AI has found success in bypassing security altogether by reaching out to human beings – via email. AI-generated phishing emails have higher open rates than the old-fashioned kind. I have seen this evolution up close and can confirm that these emails look 100% authentic. Gone are the days of emails with subjects like “I wAnt sH@re This wiTh yoou” – the subject, content and sender are perfectly spoofed. Even Amazon has seen an increase in threats, as AI has gotten better at using personal information shared online to craft incredibly sophisticated and personal messages that entice a reader to open malware.

To help combat this, leaders need a zone defense – ensure your cybersecurity tools are updated AND take the time to educate your employees about how to recognize sophisticated phishing attacks. This means double-checking with the supposed sender that they intended to share a file, scrutinizing the content for any mistakes, and generally being more cautious. While this may slow the pace of business in the short term, it will certainly save hours of work and reputational repair in the long term.


Bias

In the early days, AI was touted as the solution to help break bias in HR technology, particularly in the hiring process. The hope was that removing humans from the screening process would lead to a more equitable consideration of candidates based on merit, not emotion. Stripping away everything except relevant factors would magically lead to a more diverse workforce.

Sadly, that prediction was sorely misguided. In fact, investigations into the impact of AI have found a negative effect on hiring equity. Amazon scrapped its screening tool early on because it found that the AI actively discriminated against female candidates, and this year a new law goes into effect in New York City that penalizes organizations found to have AI bias in their hiring process.  HR is not the only industry facing scrutiny, as healthcare is under fire for similar bias. An algorithm designed to help identify high-risk patients to offer more care was found to discriminate against poorer patients because one of the key factors tied to the algorithm was total spend on healthcare – ignoring the fact that poorer patients will often put off treatment because they cannot afford it.  

The reality is, AI is still programmed by people and people are biased. That doesn’t mean AI can’t add value to your hiring process or your business processes, it just means you need to recognize the potential risk and take steps to mitigate it. Use AI to automate administrative tasks, regularly audit your outcomes for potential bias, and continue to train employees on the importance of bias recognition.  
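For leaders wondering what “audit your outcomes” looks like in practice, here is a minimal sketch of one common screening heuristic – the “four-fifths rule,” which flags any group whose selection rate falls below 80% of the highest group’s rate. The group names and numbers are invented for illustration; a real audit would use your own hiring data and legal guidance.

```python
def selection_rates(outcomes):
    """outcomes: {group: (selected, total)} -> {group: selection rate}"""
    return {g: sel / tot for g, (sel, tot) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the
    highest group's rate. A screening heuristic, not a legal determination."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (r / top) < threshold for g, r in rates.items()}

# Invented example data: (candidates selected, candidates screened)
outcomes = {"group_a": (50, 100), "group_b": (20, 80)}
flags = four_fifths_flags(outcomes)
# group_b's rate (0.25) is half of group_a's (0.50), so group_b is flagged
```

A check like this is only a starting point – flagged results should trigger a closer look at the inputs and training data driving the model, not just the output numbers.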

Creativity and Analysis 

Human beings are complex, emotional creatures. Our awareness of our own mortality has driven us to the arts and philosophy. We are the species of Plato and Socrates. We think, therefore we are. We find beauty in words and music, and patterns in numbers and behaviors. The ability to create, to grow, to analyze is what separates us from other life on our planet.

Yet every time we invent something new, we tend to give it power over us. Electricity was a wonderment that allowed society to grow by automating our housework and making it safer to travel at night. But have you ever noticed how quiet it is when the power goes out? Or how much we connect with our loved ones when all we can do is sit and play card games by candlelight? The pattern has continued, whether it be telephones, the internet, computers – inventions that were designed to make our lives easier may actually make our lives less rich. AI will no doubt continue this cycle unless we strive to break it. 

AI is generating artwork, creating images both sublime and ridiculous. ChatGPT is a marvel – the content it generates is remarkably coherent. I’ve seen it write interview questions that would rival the best recruiters and compose a fairly eloquent argument about how it will destabilize work in the future. Buzzfeed has fully embraced AI as a content generation tool, announcing that it will use OpenAI as a core component of its site, causing its stock to jump 150%. As people continue to feed it information, it gets better and better, learning from what it consumes and inching ever closer to mimicking human beings.  

I can’t help but think – is that what we really want? 

AI is a superior curator of information. It can quickly and effectively scrape data from multiple sources and produce a cohesive, concise summary of what it has gleaned. It can write simple copy and produce on-demand blog posts, news updates, and other content that would take people hours to complete. What’s missing in all of this is the answer to the question “so what?” I’ve been reading a lot of AI-generated content recently to evaluate its potential impact on business, and what I see missing is true analysis. AI can show you the what, but it is not very good at finding the why.

Leaders should be wary of leaning too much on AI to help make deeper, strategic decisions. AI is an excellent tool for aggregating and summarizing information in a consumable format, but ultimately, there needs to be a bridge between that information and how to act on it – and that bridge is the human element. I’m reminded of the story of the soldier who acted contrary to what technology told him SHOULD be done…and ended up averting a nuclear war. The moral of the story is that AI is a tool – not the actor. We shouldn’t abdicate our accountability to an algorithm.

Despite all the caveats, I’m incredibly excited to see what’s next for AI. The possibilities are endless, but as we know from the immortal words of Peter Parker’s family, with great power comes great responsibility. And that responsibility is ours. 


  • Mary Faulkner

    A principal with IA, Mary has more than fifteen years of experience working within organizations undergoing HR transformation. At her core, Mary is a builder and a problem solver. Her HR experience includes operations, learning and development, leadership and organizational development, and performance management.

