ChatGPT: Understanding legal, ethical implications

CHAT Generative Pre-trained Transformer (ChatGPT) is a language model developed by OpenAI — an organisation based in San Francisco, United States — and has been creating quite the buzz lately.

Driven by AI technology, ChatGPT was launched in November 2022 to generate human-like text in a conversational way. The dialogue format allows ChatGPT not only to compose essays and answer follow-up questions but also to admit its mistakes and challenge incorrect premises.

ChatGPT attracted more than one million users in its first five days after launch, so ethical issues surrounding its usage were bound to arise.

I asked ChatGPT: "What are the legal and ethical issues of ChatGPT?"

The answer:

1. There are several legal and ethical issues surrounding the use of ChatGPT and other large language models. One legal issue is related to potential copyright infringement, as the model may generate text that is similar or identical to existing copyrighted content.

2. Another legal issue is related to potential privacy violations, as the model may generate personal information or sensitive data that could be used to identify or harm individuals. Ethically, there is concern about the potential misuse of the model, such as in the creation of deep fake audio or text or in spreading false or misleading information.

3. There is also concern about unintended bias in the training data, which could lead to the model perpetuating harmful stereotypes or discrimination. Another ethical concern is related to the transparency of the model's decision-making process, as it can be difficult for users to understand how the model arrived at a particular output.

4. Furthermore, GPT-3 and other similar models have been trained on a massive amount of data, which may have come from questionable sources, such as scraping of websites or other data collection methods which could have legal and ethical issues; and,

5. The use of ChatGPT raises legal and ethical issues related to copyright, privacy, misuse, bias, and transparency. It is important for users to be aware of these issues and take steps to mitigate them.

As you can see, half of the write-up was already done for me by the AI chatbot. The key points that I was looking for were all provided by it. The idea of writing something with ChatGPT came to me after coming across a peer-reviewed scientific publication co-authored by ChatGPT.

This will not be my first collaboration on an opinion editorial, but it is my first collaboration with a robot.

Let's discuss the points my co-author made. The first legal concern related to ChatGPT is the risk of violating intellectual property rights.

As ChatGPT is trained using a large amount of text data, such as books, articles and other written materials, some of the training data may be copyrighted. If this is the case, the output generated by ChatGPT could be considered an infringement of those works' copyright, which could result in legal action against those who use or contribute to the use of ChatGPT.

Another legal risk associated with ChatGPT is its capacity to reproduce personal information from its training datasets for its users. This could violate data protection laws in many countries, including the Malaysian Personal Data Protection Act 2010.

Regarding ethical concerns, my co-author, ChatGPT, summarised them well. Yes, ChatGPT's ability to generate conversational text raises ethical concerns about its potential to produce fake news or other misleading content.

This could have severe ramifications, such as harming reputations, spreading false information, or even inciting violence.

The legal risks associated with using ChatGPT for these purposes are uncertain, but its users could face legal repercussions if they deploy the technology in these ways.

It is also important to remember that the ChatGPT model does not have thoughts or opinions of its own. Everything depends on the user and how they use it. Hence the call for transparency in the model's decision-making process, as it can be difficult for users and third parties to understand how the model arrived at a particular output.

Last but not least is the question of whether ChatGPT, as a language model, should be considered a co-author of any publication. To answer this, it is essential to establish the criteria for determining authorship within the scientific community.

Generally speaking, an individual is considered a co-author when they have made a substantial and meaningful contribution to the research or project.

However, in the case of ChatGPT, it is crucial to recognise that it is just a tool designed to assist in the analysis and summarisation of existing data. While it may aid in the research process, it is not responsible for the discovery of new information.

As such, it does not necessarily qualify as a co-author in the traditional sense. Instead, it should be acknowledged as a valuable tool that has contributed to the project's success.

If ChatGPT deserves authorship, Microsoft Word deserves it too for providing us with the platform to organise and write documents more efficiently.

Grammarly, another AI-powered writing assistant, would also deserve authorship: it helps produce context-specific suggestions, checks grammar and spelling, and corrects wordiness, punctuation and writing tone.

Excel, R or Python would deserve to be co-authors for calculating statistics or analysing the data behind a quantitative scientific publication.

An opposing viewpoint is that tools such as ChatGPT can not only enhance and modify existing texts but also generate original content through their advanced analytical and machine-learning capabilities. In this sense, ChatGPT's contribution extends beyond merely improving existing work; it makes a unique contribution of its own.

So, let the debate go on.

What is more interesting is that, as a language model, ChatGPT cannot take on legal liability, as it is a machine and not a legal entity. It can assist with writing and generating text, but any content produced by ChatGPT would be the responsibility of the person or entity using the model.

So, if a question of integrity arises, ChatGPT cannot be held liable for any breach of integrity. Does this point alone not disqualify ChatGPT from being a co-author of a publication? I leave that question to the reader.

As for solutions: organisations like OpenAI will continue to develop language models such as ChatGPT, and people will continue to use them, so it will become increasingly important to address the legal risks associated with their use. This includes ensuring that companies and individuals use the technology ethically, legally and responsibly.

It will also be important for organisations developing language models and governments to provide guidelines, regulations and best practices to mitigate such risks and to ensure the responsible use of the technology.

This will help to protect the rights and interests of all stakeholders, including developers, users, and the general public.

The writer is a lecturer in the Department of Business Law & Taxation at Monash University Malaysia and an enthusiast of law and technology
