Beyond Boundaries: Generative AI for
Sustainable Academic Advancements

Mushtaq Bilal

mushtaq@sdu.dk

University of Southern Denmark

doi: https://doi.org/10.26439/ciis2023.7076

ABSTRACT. Since its launch in November 2022, OpenAI’s chatbot, the Chat Generative Pre-trained Transformer (commonly known as ChatGPT), has become one of the most popular generative AI applications in the world (OpenAI, n. d.). Academics across the world are concerned about how ChatGPT is dramatically changing the pedagogical and research landscape. In this paper, I discuss some of the best practices for using ChatGPT for academic purposes.

KEYWORDS: ChatGPT, OpenAI, generative AI

MÁS ALLÁ DE LAS FRONTERAS: INTELIGENCIAS ARTIFICIALES GENERATIVAS PARA EL AVANCE ACADÉMICO SOSTENIBLE

RESUMEN. Desde que fue lanzado en noviembre del 2022, el chatbot transformador generativo preentrenado (Chat Generative Pre-trained Transformer, en inglés) de OpenAI, más conocido como ChatGPT, se ha convertido en una de las aplicaciones de inteligencia artificial generativa más populares del mundo (OpenAI, s. f.). Académicos a lo largo y ancho del mundo han expresado su preocupación acerca de cómo ChatGPT está transformando dramáticamente el panorama pedagógico y de la investigación. En este artículo discuto algunas de las mejores prácticas para usar ChatGPT con propósitos académicos.

PALABRAS CLAVE: ChatGPT, OpenAI, inteligencias artificiales generativas

Introduction

I have been writing about how to use AI apps for academic purposes for more than a year now. Below are six points that I think we need to understand in order to use AI apps smartly.

1. Use AI for Structure and Not for Content

When it comes to using AI apps for academic writing, understanding the difference between structure and content is crucial. It is a bit tricky to understand this difference because structure and content are intricately intertwined.

Content cannot exist without a structure, and we will have no structure if we have no content. We always have a lot of content based on the research that we are doing. But that content does not mean much if we do not structure it in the form of a research paper or a monograph.

Large language models like ChatGPT are trained on huge amounts of human-generated text. These models have a very good understanding of how we communicate, especially the way we structure our communication. But since these apps use a predictive model, the content they produce is mostly predictable. Predictable content, for our purposes, is of little use. Predictable structure, on the other hand, is very useful.

We have to learn to use generative AI for structure, not to generate content. For example, you can ask ChatGPT to give you an outline for a journal article, but you should not ask it to write the article for you.
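To make this concrete, here is a minimal Python sketch of what asking for structure rather than content can look like when calling a model programmatically. It assumes version 1 of OpenAI’s official `openai` package and an API key in the environment; the model name and the prompt wording are my own illustrative choices, not something prescribed in this paper.

```python
# A minimal sketch: ask a model for structure (an outline), not for content.
# Assumptions: the official `openai` Python package (v1+) is installed and an
# OPENAI_API_KEY environment variable is set; the model name and the prompts
# are illustrative choices only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model will do
    messages=[
        {
            "role": "system",
            "content": (
                "You help researchers plan papers. Return only a "
                "section-by-section outline; do not write any prose."
            ),
        },
        {
            "role": "user",
            "content": (
                "Suggest an outline for a journal article on the effects "
                "of remote work on team productivity."
            ),
        },
    ],
)

# The model supplies the structure; the author still writes the content.
print(response.choices[0].message.content)
```

The point of the prompt is the constraint: the model is asked for an outline to react to and revise, while the actual argument and evidence remain the author’s work.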

2. Outsource Academic Labor to AI but Not Thinking

Imagine you have to look up a few resources related to your research project. You can go to the library and browse the physical catalog. Suppose you find a few relevant papers. You go to the shelf to pick up physical copies of the relevant journals.

This whole process, as you can imagine, is quite laborious. You could have easily done all this on an app like Google Scholar or PubMed.

AI-powered apps are to Google Scholar what Google Scholar is to a brick-and-mortar library. I will give you the example of an AI-powered app called Scite. Suppose you come across a paper published by two Nobel laureates working in a prestigious lab. Because of their Nobel Prizes and their stature, most of us would assume that they have presented irrefutable evidence. Now imagine you want to find out whether there is any evidence that contrasts with the claims of these Nobel laureates. You would have to read a lot of papers to find that out, and Google Scholar will not be much help.

The Scite app, however, will show you in a matter of seconds the supporting and contrasting evidence for the claims made by those Nobel laureates (AI for Research, n. d.).
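To make the idea concrete, here is a toy Python sketch of what such “smart citations” amount to: each citing statement is classified as supporting, contrasting, or merely mentioning a claim, and the classifications are tallied. The records and field names below are invented placeholders; the real classification is done by Scite’s own models over full-text papers.

```python
# A toy illustration of the idea behind Scite-style "smart citations":
# instead of merely counting citations, classify each citing statement as
# supporting, contrasting, or merely mentioning the cited claim, then tally.
# The records below are invented placeholders, not data from Scite.
from collections import Counter

# Hypothetical citing statements for one target paper (placeholder data).
citation_statements = [
    {"citing_doi": "10.1000/aaa", "classification": "supporting"},
    {"citing_doi": "10.1000/bbb", "classification": "contrasting"},
    {"citing_doi": "10.1000/ccc", "classification": "mentioning"},
    {"citing_doi": "10.1000/ddd", "classification": "supporting"},
]

tally = Counter(s["classification"] for s in citation_statements)
print(dict(tally))
# A nonzero "contrasting" count is the signal that the claim is disputed
# and that those citing papers deserve a closer (human) read.
```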

In this case, we are using AI to outsource our labor but not our thinking. We cannot outsource our thinking because of the point I made earlier about predictable content.

3. Treat AI as a Research Assistant, Not a Supervisor

Imagine you hire a research assistant and assign them a task. They complete it. Will you check your assistant’s work, or will you simply take what they did and put it into your journal article or research report? Chances are you will check it and give them feedback.

Think of AI apps as your research assistants and not your supervisors. I try to imagine AI apps as smart, willing, eager-to-learn research assistants. They can do certain tasks very efficiently, but I still have to check their output.

4. Do Not Over-Rely on AI and Do Not Forget to Use Your Common Sense

It hardly needs to be said that we should use our common sense, but when it comes to AI, you would be surprised by the number of people who absolutely refuse to use their common sense.

Let me give you an example. On the ChatGPT homepage, it is clearly stated that the chatbot “may occasionally generate incorrect information” (ChatGPT, n. d.). In their naiveté, the makers of ChatGPT assumed that anyone using it would read this.

Many people did not bother to read it. Among them was a New York lawyer who used ChatGPT to supplement his legal research (Lawyer Who Used ChatGPT, 2023). ChatGPT gave him fake citations to cases that did not even exist. He did not stop there. He asked ChatGPT to give him case reports for those fake citations, and ChatGPT complied, generating fake reports for the fake citations.

The lawyer took this bundle of fakery and submitted it to a federal court. As for the judge to whom this fakery was submitted, let’s just say that he was not happy.
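The broader lesson for academics is to verify every reference a chatbot produces before using it. As one illustration of my own (court cases would require a legal database, but scholarly references usually carry DOIs), the Python sketch below checks DOIs against the public Crossref API; the DOIs in the list are placeholders.

```python
# A minimal sketch: never reuse a chatbot's citations without checking them.
# Each DOI is looked up in the public Crossref API; a 404 response means
# Crossref has no record under that DOI. The DOIs below are placeholders.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

chatbot_citations = ["10.1000/placeholder-one", "10.1234/placeholder-two"]
for doi in chatbot_citations:
    status = "found in Crossref" if doi_exists(doi) else "NOT found; verify by hand"
    print(f"{doi}: {status}")
```

Even a positive match only confirms that the record exists; whether the paper actually says what the chatbot claims still requires reading it.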

5. AI Is Neither the Fantasized Utopia nor the Feared Dystopia

When it comes to AI, a lot of people tend to think in extremes. They believe AI will either solve all their problems (press a button and it writes a research paper for you) or take over the world, leaving us to be ruled by robots.

Neither of these positions is helpful. Instead of thinking in these extremes, we should try to understand AI apps for what they actually are.

6. Engage With AI Apps

This brings me to my final point: we should engage with these apps. AI apps are here to stay, and if we do not engage with them, we will not be able to equip our students with the latest tools that they will need in the marketplace.

Finally, we should try to combine artificial intelligence with human intelligence and not with human stupidity.

References

AI for Research. (n. d.). Scite. Retrieved November 4, 2023, from https://scite.ai

ChatGPT. (n. d.). OpenAI. Retrieved November 4, 2023, from https://openai.com/chatgpt

Lawyer who used ChatGPT faces penalty for made up citations. (2023, June 8). The New York Times. Retrieved November 4, 2023, from https://www.nytimes.com/2023/06/08/nyregion/lawyer-chatgpt-sanctions.html