The potential risks of using ChatGPT at work

What is ChatGPT?

ChatGPT is a chatbot powered by artificial intelligence (AI), created by OpenAI. It uses natural language processing to simulate conversational responses to prompts provided by users.

As a result, ChatGPT can provide written content for users. This can vary from creative writing to business-related materials, like proposals and marketing plans. It can also generate code in programming languages such as Python and Java, as well as legal documents such as style contracts, wills and dispositions.

Every time a user interacts with ChatGPT, it learns by analysing responses and improves through machine learning.

Developers can build applications on top of ChatGPT by integrating with its API (Application Programming Interface) to address specific tasks or add functionality. For example, a healthcare app could use ChatGPT to understand user queries and provide personalised responses, while a virtual assistant app could use it to give users a more natural, conversational experience when issuing commands.
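As a rough illustration of what such an integration involves, the sketch below calls the ChatGPT API from Python using OpenAI’s openai package (version 1.x signatures). The model name, the prompts and the healthcare framing are illustrative assumptions, not details of any particular product.

```python
# A minimal sketch of a ChatGPT API integration, assuming the openai
# Python package (v1.x) and an API key in the OPENAI_API_KEY variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_user_query(query: str) -> str:
    """Send a user's question to the model with an app-specific system prompt."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; an app would choose the model that fits its needs
        messages=[
            # The system message tailors the model to the application's domain,
            # e.g. a healthcare app giving general (non-diagnostic) guidance.
            {"role": "system",
             "content": "You are a helpful assistant for a health app. Do not give diagnoses."},
            {"role": "user", "content": query},
        ],
    )
    return response.choices[0].message.content

print(answer_user_query("What are common causes of a persistent cough?"))
```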

An example of an app with ChatGPT integrated is Duolingo, a language-learning platform, which has recently launched a new subscription tier powered by GPT-4. The chatbot comes with two features, Explain My Answer and Roleplay. Explain My Answer provides users with detailed grammatical explanations when they make mistakes, while Roleplay enables learners to interact with virtual human-like characters in real-world situations such as airports or restaurants, allowing them to practise speaking in a more realistic context.

Another example is Ada Support, a customer service automation company that uses ChatGPT to automate billions of customer interactions for hundreds of clients. It leverages AI to provide tailored responses that can resolve more complex interactions which require contextually correct responses.

With all of these capabilities, ChatGPT may seem like a perfect solution for tasks at work, whether that is help drafting an email or assistance with programming-related tasks. Yet it is not without risk.

The risks of using AI-driven applications at work

  • Unintentional infringement of intellectual property rights

Using AI-driven applications may result in unintentional infringement of intellectual property rights: generated output may resemble, and be based upon, existing content in which third parties have rights, leading to accusations of plagiarism or copyright infringement. This can create legal risks for employers if employees use that content. For example, Getty Images has initiated legal proceedings against Stability AI, the creator of the popular AI art tool Stable Diffusion, for alleged copyright infringement. Getty Images claims that Stability AI “unlawfully copied and processed” millions of copyright-protected images from Getty Images’ site to train its software.

  • Incorrect responses

Another risk is the possibility of the AI providing incorrect material. An AI-driven model’s capacity to provide information is dependent on the quality of the information it was trained on. Despite being trained on a vast amount of online data, an AI model’s knowledge base may have limitations. In addition, chatbots may not always understand the context of requests. This can result in the model providing correct but irrelevant responses. AI-driven applications may not be able to verify the information in training data and may derive their responses from inaccurate data, which could lead to significant problems for a business if these issues go unnoticed.

Despite many AI-driven applications warning users of the possibility of incorrect responses, it is easy for users to forget this until they encounter an obviously wrong answer, such as one citing non-existent source material. If employees rely on AI-generated responses without fact-checking or reviewing them as they would other sources, the risk of errors being disseminated increases significantly.

  • Leaking confidential information

Although many AI-driven applications do not explicitly retain information from their conversations, they continuously learn through machine learning algorithms. This means if proprietary code or confidential information is shared during “conversations”, it may be accessible through the trained model, creating issues related to confidentiality and data privacy.

According to reports in Korean media, there have been three leaks of confidential information by Samsung employees through ChatGPT. In two instances, employees input code and requested a fix or optimisation, while in the third, an employee requested a summary of meeting notes. ChatGPT cannot delete specific prompts from its history, and the proprietary code entered by Samsung could potentially be used to train the model and surface in results provided to other users. This raises the question of whether Samsung’s proprietary code is now inadvertently available to anyone who uses ChatGPT, making it de facto open source.

This highlights the importance of data security and the need for companies to ensure that their employees are aware of the risks.
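One partial technical safeguard, alongside awareness training, is to screen prompts for obviously sensitive material before they leave the organisation. The sketch below is purely illustrative: the redact_prompt helper and its patterns are hypothetical, and simple pattern-matching will never catch every kind of confidential content (proprietary source code, for instance), so it complements rather than replaces clear policies.

```python
import re

# Hypothetical examples of patterns an organisation might flag; real rules
# would be far broader and still imperfect, so policy and training remain key.
SENSITIVE_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),   # email addresses
    re.compile(r"\bPROJ-[A-Z]{2,}-\d+\b"),        # internal project codes (invented format)
]

def redact_prompt(prompt: str) -> str:
    """Replace matches of known sensitive patterns before a prompt is sent out."""
    for pattern in SENSITIVE_PATTERNS:
        prompt = pattern.sub("[REDACTED]", prompt)
    return prompt

print(redact_prompt("Summarise the notes from jane.doe@example.com on PROJ-AB-1234."))
# -> "Summarise the notes from [REDACTED] on [REDACTED]."
```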

How should the risks be managed?

With these growing concerns, companies have been taking precautions. For example, it has been reported that Walmart issued an internal memo warning its workers not to share confidential information with chatbots, and certain law firms have limited their lawyers’ use of the application.

However, ChatGPT (as with other AI-driven applications) is expected to remain relevant: OpenAI is reportedly set to release an intermediate version, GPT-4.5, in September or October 2023, bridging the gap between GPT-4 and GPT-5. In light of this, employers should carefully consider how to address the use of AI-driven applications in the workplace.

If prohibition is not considered appropriate, well-defined policies and procedures should be established to ensure appropriate use. This could involve incorporating clauses in employee confidentiality agreements and policies that explicitly prohibit the input of confidential information into AI-driven applications. Generated responses should be reviewed regularly for accuracy and relevance, and employees should receive sufficient training on how to use AI effectively. In some cases, it may be necessary to restrict the use of AI-driven applications in certain areas and to review and revise data processing notices and policies. Where AI may be used for high-risk purposes, employers should keep a record of the interaction, including the prompt that was used. These additional measures can help ensure transparency and accountability in the event of any issues.
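For the record-keeping suggested above, even a very simple append-only log of prompts and responses can help. The sketch below shows one possible shape for such a log; the JSON Lines format, the field names and the log_ai_interaction helper are assumptions for illustration, not a prescribed standard.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_interaction_log.jsonl")  # illustrative location

def log_ai_interaction(user: str, purpose: str, prompt: str, response: str) -> None:
    """Append a single AI interaction, including the prompt used, to an audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "purpose": purpose,   # e.g. why the tool was used and the assessed risk level
        "prompt": prompt,
        "response": response,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```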

Employers should also be aware that AI-driven applications are, in essence, automated decision-making systems, and their use can therefore be subject to the Data Protection Act 2018, the General Data Protection Regulation (GDPR) and other applicable laws, depending on where they are used and where the data subjects involved are located. If personal data is being processed, there must be adequate compliance protocols addressing individuals’ rights, for example establishing a legal basis for the processing, carrying out a Data Protection Impact Assessment where applicable, and determining the impact and relevance of any automation employed.

Legal regulation

In the UK, there is currently no AI-specific legislation. Instead, the UK government aims to take a pro-innovation approach to AI. Rather than establishing a new body solely for AI, it intends to divide responsibility for governing the technology between its existing regulators for human rights, health and safety, and competition. This strategy is intended to avoid impeding the technology while promoting its safe and innovative use. To guide regulators, the Department for Science, Innovation and Technology has outlined in its white paper five principles to promote the safe and innovative use of AI in the industries they oversee.

They are as follows:

  • safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed
  • transparency and explainability: organisations developing and deploying AI should be able to communicate when and how it is used and explain a system’s decision-making process in an appropriate level of detail that matches the risks posed by the use of AI
  • fairness: AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes
  • accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes
  • contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI

In explaining these principles, Science, Innovation and Technology Secretary Michelle Donelan said:

“Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow.”

Although the government is avoiding a “heavy-handed” approach to regulating AI, employers should take some responsibility to manage risks associated with the misuse of AI, such as intellectual property infringement and damage to reputation. Given the constantly evolving technological landscape, staying informed is crucial.

To discuss how AI-driven applications may impact upon your business, and to determine how you should approach meeting your legal obligations, get in touch with a member of our intellectual property and technology team.
