In recent years, large language models (LLMs) with billions of parameters have become increasingly adept at reading and generating text. These models are also becoming important tools for computational biology. With the emergence of freely available text generation tools, the International Society for Computational Biology (ISCB) has decided to create an acceptable use policy for these models. ISCB accepts that this is a fast-moving area of research and that this policy is likely to be subject to change.
The ISCB Acceptable Use of Large Language Models Policy applies to all scientific research submissions to ISCB conferences, as well as research submissions to the ISCB/OUP journal Bioinformatics Advances. OUP has also adopted the policy for research submissions to the OUP journal Bioinformatics.
ISCB strongly encourages its affiliated groups and affiliated conferences to apply the Policy to scientific research submissions for their individual conferences and journals.
When using commercial LLMs, such as ChatGPT or Gemini, submitted data may be reused, so it is important that confidential or personal information is not shared. This is particularly important with respect to peer review: the NIH currently forbids the use of LLMs in peer review for this reason (see the NIH policy). Many institutions have also developed further policies that may apply.
Below we list the acceptable and unacceptable uses of LLMs and related technologies. Note that acceptable use cases only apply where confidentiality is not an issue.
These models are developing rapidly, and it is difficult to foresee how they will be adopted. It is therefore likely that these guidelines will change in the future. At present, we do not intend to systematically detect usage of these models, but we will investigate reported instances on a case-by-case basis.
Updated 3 April 2025