Why We Haven't Seen Organisations Looking for ChatGPT Experience Yet

A question was posed at a data conference I attended: "Are employers looking for professionals with ChatGPT experience?" By experience, the asker meant 'prompt engineering'.

On reflection, in my experience as a data & analytics technical recruiter, this has not yet been a requirement of any of my clients, and it made me wonder why. Several reasons come to mind, and they're not always the ones you might think.

The advent of artificial intelligence (AI) has radically transformed the landscape of numerous industries and professions, including data analytics, customer service, and digital marketing. OpenAI's ChatGPT is one such AI that has been garnering a lot of attention recently. This powerful language model can write like a human, answer queries, and even generate chunks of code, and, importantly, is available free or at minimal cost.

Despite ChatGPT's capabilities, businesses aren't exactly rushing to hire professionals with experience using it. Why not? Here are the five primary reasons I've seen: security, inaccuracy, unknown quantity, liability, and policy.

1. Security

ChatGPT is undeniably a powerful tool with incredible potential. However, its inherent design can present security risks. The model learns by ingesting large amounts of data, but it doesn't necessarily distinguish between secure, private data and publicly accessible information. This lack of discernment could lead to the disclosure of sensitive data or potential misuse of information. Additionally, integrating such an AI into an organisation's systems could expose them to malicious actors who may manipulate the model for their own gain. Data governance and privacy are of the utmost importance today and trump the need for speed.

2. Inaccuracy

As an AI language model, ChatGPT learns from the data it's fed. It's important to note that the model doesn't understand context the way a human does. As a result, the responses it generates can sometimes be inaccurate, incomplete, or even nonsensical; in the development world this is known as a hallucination. While this can be acceptable in casual conversation, inaccuracy can be detrimental in professional settings where precision is vital. There are documented examples of ChatGPT spouting absolute nonsense and even fabricating legal precedent.

3. Unknown Quantity

ChatGPT represents an emerging technology, and as such, there is a degree of unpredictability associated with its application. It's hard to quantify the exact impact or return on investment of using ChatGPT in a business context. Until there are proven, quantifiable benefits and widespread success stories, organisations may be hesitant to invest resources in training or hiring for ChatGPT-related roles. There are already horror stories of professionals deploying code written by ChatGPT that then caused an array of downstream catastrophes.

4. Liability

Liability issues pose another challenge. For instance, if ChatGPT produces content that's offensive, harmful, or incorrect, who is responsible? Is it the organisation that uses the AI? The individual who interacted with it? Or the entity that created and maintains the AI? Navigating these liability issues and their implications can be daunting and is a legitimate concern for organisations considering the use of AI like ChatGPT.

5. Policy

Navigating the policy landscape around AI like ChatGPT presents a significant challenge for organisations. The absence of robust regulatory policies and the need for intricate internal guidelines create an environment of uncertainty. This, combined with the risks of potential policy violations and the agility required to keep up with rapidly evolving regulations, makes integrating such AI a complex task. As AI policies mature, we can expect these barriers to lower, but for now they remain a deterrent for organisations seeking ChatGPT expertise.

In conclusion, while ChatGPT has its share of potential benefits, concerns around security, inaccuracy, its unknown quantity, liability, and policy hold organisations back from actively seeking ChatGPT experience. However, as the technology matures and these concerns are addressed, we may witness a shift in this trend. Until then, organisations and professionals alike need to keep a balanced perspective on the use and integration of AI like ChatGPT in their daily operations.

In a professional context, I would recommend asking management about ChatGPT use and being particularly careful with the data you share with it. Anything you enter may be retained by OpenAI and, depending on your settings, used to train future models, so treat any sensitive information as potentially exposed.

Query wisely!
