This session, which is organized jointly by the ISCB Publication Committee and the ISCB Science in Society Committee, focuses on large language models (LLMs) and the consequences of their use in science and teaching. The session does not cover applications of LLMs in computational biology research; a list of conference talks addressing that topic will be appended to this abstract as the conference program develops.
LLMs are statistical learning models that can process and generate human-like language, pre-trained on large text corpora. The availability of LLMs such as ChatGPT has ushered in a new phase of AI pervading society and of humans interacting with computers, raising many open questions and problems. Here we focus on issues pertaining to the use of LLMs in the scientific community. The session will begin with an introduction to the architecture of LLMs, the capabilities and limitations that follow from this architecture, and general issues with their use. After two presentations on the use of LLMs in scientific publishing and in teaching, respectively, the session will conclude with a discussion of ethical issues surrounding the use of this new technology.
Chairs: Thomas Lengauer & Ragothaman Yennamalli
Notes added post-session:
1) The talk by David Leslie was based on his paper "Does the sun rise for ChatGPT? Scientific discovery in the age of generative AI" in the journal AI & Ethics, which is freely accessible at https://link.springer.com/article/10.1007/s43681-023-00315-3
2) If you have comments on the session or questions for the presenters, please fill out the online form at https://forms.gle/k2hvkcaJPhCFMLUi8