April 27, 2023

«We need to learn how to deal with AI tools like ChatGPT because they will not disappear.»

Since December last year, the artificial intelligence tool ChatGPT has been making headlines. It independently writes texts of such high quality that even experts cannot tell whether they were written by a chatbot or a real person. Prof. Torsten Schwede teaches and conducts research at the Biozentrum of the University of Basel. As a computational scientist and Vice President for Research, he is concerned with the possibilities and challenges of artificial intelligence.

Prof. Dr. Torsten Schwede. © Universität Basel, Florian Moritz

Did the development of an artificial intelligence (AI) tool like ChatGPT surprise you personally?
I think we were all surprised by the rapid pace of this development and above all by the quality and versatility of this chat system. Hardly anyone really expected this. 

Why not?
The concept of artificial intelligence is actually not new, but with ChatGPT and image generators such as Midjourney and DALL-E, it has now entered the mainstream. AI has been finding its way into our daily lives for a long time. We order cinema tickets by phone via speech recognition. When I write a text, the word processing software corrects my spelling and grammar. And Netflix suggests films that I would probably find interesting. All of these are examples of AI or, more precisely, machine learning. The current developments have ushered in a new era, with AI now generating texts and images so realistic that they are often no longer recognizable as «artificial».

How exactly does AI learn?
Initially, these systems are «trained» with large volumes of data. This teaches an artificial neural network, or an algorithm, to classify data, for instance to distinguish between pictures of cats and dogs. Moreover, it learns which properties of the data are relevant for making this distinction. The systems we talk about today don't merely detect patterns but also generate new outputs. ChatGPT's learning is based on a large dataset of text drawn from several sources, such as books, articles, Wikipedia and journals. The publicly accessible implementation on Bing combines the language model with an Internet search engine to give it access to current information.
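
A minimal sketch of this training step, using scikit-learn and synthetic data: two clusters of 2-D feature vectors stand in for cat and dog images (real systems learn from pixels; all names and numbers here are illustrative). The network adjusts its internal weights until it can separate the two classes, then generalizes to unseen examples.

```python
# Toy illustration of supervised training: a small neural network learns
# to separate two labelled classes. Synthetic 2-D points stand in for
# cat/dog images; the principle is the same.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Two clusters of feature vectors, labelled 0 ("cat") and 1 ("dog").
cats = rng.normal(loc=[-1.0, -1.0], scale=0.7, size=(200, 2))
dogs = rng.normal(loc=[1.0, 1.0], scale=0.7, size=(200, 2))
X = np.vstack([cats, dogs])
y = np.array([0] * 200 + [1] * 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)  # the network adjusts its weights to fit the labels
print(f"accuracy on unseen examples: {clf.score(X_test, y_test):.2f}")
```

The same principle, fitting parameters to labelled examples and then evaluating on held-out data, underlies far larger systems.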

Isn’t that problematic in view of the fact that information on the internet is always biased to some extent?
When using AI, the biases ingrained in the training data and in Internet searches always pose a great challenge. If I use a language model to write a summary of a particular text, bias is less of a problem, because I have limited the scope and precisely defined the data source. But if I pose a general question, the language model will reflect what it has learnt from the training data and found on the Internet, with all the consequences that entails. An AI application will readily act as an amplifier of existing problems and imbalances.

How is this currently being managed? 
In the development of ChatGPT, the responses generated by the AI are evaluated by the developers as to whether they meet our expectations in terms of content and form. The AI then learns from this feedback to respond «more like a person» and, furthermore, not to respond to certain questions at all. This process is called «alignment» and is a very exciting topic in AI research right now.
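
To make this feedback loop concrete, here is a deliberately simplified sketch, not OpenAI's actual pipeline: human ratings of sample answers are used to fit a toy «reward model», which then ranks new candidate answers. All texts and scores below are invented for illustration.

```python
# Toy "learning from human feedback": fit a simple reward model to human
# ratings of answers, then use it to rank new candidate answers.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

rated_answers = [
    ("lol idk, google it yourself", 1.0),                     # unhelpful
    ("I cannot help with that request.", 2.0),                # evasive
    ("Here is a clear, step-by-step explanation: ...", 5.0),  # helpful
    ("A concise summary with sources would be: ...", 4.5),    # helpful
]
texts, scores = zip(*rated_answers)

vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(texts)
reward_model = Ridge().fit(features, scores)  # learn what raters prefer

candidates = [
    "lol whatever",
    "Here is a clear explanation with sources: ...",
]
rewards = reward_model.predict(vectorizer.transform(candidates))
best = max(zip(rewards, candidates))
print(f"preferred answer: {best[1]!r} (predicted reward {best[0]:.2f})")
```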

Have you ever tried ChatGPT yourself?
Of course, as soon as it became available. My eye-opener came last year when I was writing Christmas cards late one evening. Just for fun, I entered «write a Christmas card for a former colleague of mine» into ChatGPT and realized that the proposed text was considerably better than my own. That's when I appreciated that we are slowly nearing the point where AIs become really useful in our daily lives.

Does this also apply to research?
AI has been in regular use in research for many years. What fascinates me most is that AI applications are no longer limited to tasks humans can do, such as playing chess or the board game Go, or recognizing patterns in X-ray images. In my field of research, we have been using the AI tool «AlphaFold» to model protein structures for quite some time.

And what does «AlphaFold» do? 
This AI tool can predict the 3-dimensional structure of a protein with high accuracy, and this also works for previously completely unknown proteins. Until then, no one had described a practically workable approach for reliably predicting such protein structures. The AI system «learnt» this ability directly from the experimental data. This is fascinating on the one hand and quite frustrating on the other: we can now efficiently predict 3-D protein structures, yet we still don't really understand in detail how the AI achieves this, nor the molecular mechanism of protein folding.
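
Predicted structures are freely available through the AlphaFold Protein Structure Database run by EMBL-EBI and DeepMind. A minimal sketch of fetching one, assuming the public REST endpoint and the «pdbUrl» field as documented at alphafold.ebi.ac.uk; P69905 (human hemoglobin alpha) is merely an example accession:

```python
# Fetch an AlphaFold-predicted structure from the public AlphaFold DB.
# Endpoint and field names follow the documented API; verify before relying
# on them in production code.
import requests

uniprot_id = "P69905"  # example accession: human hemoglobin alpha
meta = requests.get(
    f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_id}", timeout=30
).json()

pdb_url = meta[0]["pdbUrl"]  # the response is a list of model entries
structure = requests.get(pdb_url, timeout=30).text
with open(f"{uniprot_id}_alphafold.pdb", "w") as fh:
    fh.write(structure)
print(f"saved predicted structure for {uniprot_id} ({len(structure)} bytes)")
```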

Are the AI text generators a gain for science or do they pose more of a problem?
I tend to see more opportunities here. In research, for example, an AI tool could considerably reduce the workload if it not only found and listed literature sources but also provided an appropriate summary. I can well imagine that AI will soon play a role in translating scientific texts, making it easier to publish in a foreign language. The challenge, of course, is quality control. Ultimately, as scientists, we are responsible for our own work and cannot claim that ChatGPT must have misunderstood something.

How can we protect ourselves from false information that can always appear in chatbot-compiled texts?
We currently can't entirely prevent this, but it is likely only a question of time. In the field of science, we already have the first language models that also provide precise source information for statements, citing scientific publications. Using GPT-4 on Bing, for example, one can choose whether one primarily wants factually faithful or rather more creative answers.

How important is the ChatGPT issue for the University of Basel? 
The topic has certainly sparked much discussion. Currently, various working groups are trying to clarify how our university should deal with this issue. 

Is there a tendency as to how this should be handled?
Especially in teaching, there are, of course, several challenges. The working group «AI in teaching» is concerned with how AI-based tools can be used in teaching and learning, and with what adaptations would be necessary to prevent students from misusing such tools in their work. A general ban on their use is not planned. From my point of view, chatbots are a bigger problem for schools than for universities. Linguistic expression should already have been taught at school, whereas in university courses the focus lies on skills such as critical thinking, formulating hypotheses and problem solving.

Is it possible to recognize chatbot-generated texts?
I think it's an illusion to believe that we will one day have algorithms that can reliably distinguish between human- and machine-generated texts. The race between text-generating language models and detection software is a continuous game of cat and mouse, unless one agrees to build a kind of «watermark» into AI-generated texts.

Can you imagine that ChatGPT or other AI systems might be integrated into the curriculum?
This discussion, in fact, reminds me a little of the introduction of programmable pocket calculators during my «Matura». At that time, the math teachers discussed whether we students should be allowed to use such programmable devices or should rather do the calculations by hand. In retrospect, we must say that those students who learnt programming ultimately gained more for life. I think we are now in a similar situation.

What can we learn from this for the here and now?
We need to understand AI tools and learn how to use them appropriately, because they will not disappear. I think we can't avoid integrating them into a modern curriculum. In other words: I see no reason not to use new, helpful technologies if they enable us to be more effective and creative in our work. But this requires an understanding of how these tools work, so that we can recognize their limits and the problems associated with them.

In the future, teachers will need to be aware that students' texts could increasingly be AI-generated …
Yes, that is true. However, I'm convinced that students go to university because they want to learn and to develop intellectually. There have always been a few students trying to cheat their way through. One of my colleagues put it aptly: «With chatbots, we have a democratization of the status quo.»

What did he mean?
Until now, only well-off students could afford a ghostwriter to help with their homework. Now, everyone has professional assistance at hand. The question for the assessor remains the same: did the person do the work themselves or not?

Your view of the developments seems to be rather positive. Do you also see any risks? 
I think it becomes a problem if we uncritically treat some dataset, or the Internet, as representing «reality» when training machine-learning systems. The AI then quickly reflects the biases in the data, or the many opinions on the Internet, instead of sound facts. And when the models trained in this way are, in turn, used to generate new texts and images that can hardly be distinguished from «real» ones, at some point it becomes difficult to separate fact from fiction. The AI-generated images of the Pope in a trendy down jacket were a first taste of what is to come.

How can we escape from this cycle?
It is important that we know how AI works and never stop critically questioning its practical applications. It would be a first step forward if the systems also included a fact check and AI-generated work were marked with a watermark. The European Union is currently working on a legal framework for AI. The «Artificial Intelligence Act» aims to define risk-based rules for data quality, transparency, human supervision and accountability. I think this approach is more appropriate and realistic than the moratorium called for by some experts in an open letter, which would suspend the training of AI systems more powerful than GPT-4 for six months.

Looking into the future: Will man and machine merge more and more?
We should have no illusions: the merging is already underway. Whether I carry a smartphone a centimeter above or below my skin doesn't really make a big difference. Some of us already feel that we can't live without these devices and don't want to give up the broadening of our cognitive abilities that constant access to the cloud provides. In fact, aren't we already cyborgs who just haven't noticed it yet?

Contact: Communications, Katrin Bühler and Heike Sacher
 

Examples of AI-based projects and AI use at the University of Basel
 

  • Ivan Dokmanić, Professor of Data Analytics at the Department of Mathematics and Computer Science, researches applications of artificial intelligence in big data research, medical imaging and geoimaging.
     
  • Malte Helmert, Professor of Artificial Intelligence at the Department of Mathematics and Computer Science, investigates various aspects of artificial intelligence, with a focus on action planning and solving optimization problems.
     
  • In her project «Visual Politician – how do politicians present themselves on social media?», Prof. Stefanie Bailer of the Department of Political Science uses AI to evaluate politicians' tweets and photos. Her goal is to understand decision-making in the choice of politicians.
     
  • The Center for Data Analytics (CeDA) supports researchers in performing modern statistical analysis and in the use of AI methods.
     
  • Heiko Schuldt, Professor of Computer Science, and his team have developed a new type of multimedia search engine called vitrivr.
     
  • At the University Heart Center of the University Hospital Basel, AI has been applied to detect cardiac arrhythmias – even before Apple introduced this capability on its Apple Watch.
     
  • Furthermore, AI is employed for tumor recognition in biomedical image diagnostics at the University of Basel.
     
  • At the Biozentrum, structural biologists use the AI tool «AlphaFold» as standard to predict protein structures.
     
  • At the Faculty of Law of the University of Basel, Prof. Nadja Braun-Binder studies the role of comprehensible algorithms as part of a legal framework for the use of artificial intelligence.
     
  • Prof. Sabine Gless explores human-robot interactions from a legal perspective, in the context of legal culpability and establishing guilt in criminal law.
     
  • Prof. Alfred Früh addresses the question of how the use of AI in the invention process influences the protection of intellectual property.
     
  • The interfaculty research network «Responsible Digital Society» focuses on the digital transformation and its social, ethical, legal, economic, psychological, and political consequences.