Leading Professional Society for Computational Biology and Bioinformatics
Connecting, Training, Empowering, Worldwide

ISCB Policy for Acceptable Use of Large Language Models

In recent years, large language models (LLMs) with billions of parameters have become increasingly adept at reading and generating text. These models are also beginning to have an important influence as tools for computational biology. With the emergence of freely available text generation tools, the International Society for Computational Biology (ISCB) has decided to create an acceptable use policy for these models. ISCB accepts that this is a fast-moving area of research and that this policy is likely to be subject to change.

The ISCB Acceptable Use of Large Language Models Policy applies to all scientific research submissions to ISCB conferences, as well as research submissions to the ISCB/OUP journal Bioinformatics Advances. OUP has also adopted the policy for research submissions to the OUP journal Bioinformatics.

ISCB strongly encourages its affiliated groups and affiliated conferences to apply the Policy to scientific research submissions for their individual conferences and journals.

Confidentiality

When using commercial LLMs, such as ChatGPT or Gemini, submitted data may be reused, so it is important that confidential or personal information is not shared. This is particularly important with respect to peer review; the NIH currently forbids the use of LLMs in peer review for this reason (see NIH policy). Many institutions have also developed further policies that may apply.

Below we list the acceptable and unacceptable uses of LLMs and related technologies. Note that acceptable use cases only apply where confidentiality is not an issue.

Unacceptable Uses

  • It is not acceptable to use LLMs or related technologies to draft paper sections. In essence, papers MUST be written by humans.
  • It is not acceptable to use LLMs or related technologies to carry out reviewing activities, such as scientific peer reviews and promotion and tenure reviews. First, these activities are an important part of the scientific process and require scientific judgement. Second, review processes are in general confidential and their contents should not be shared with third parties, including commercial LLM providers.
  • LLMs cannot be listed as authors as they do not fulfill the requirements of authorship as laid out in the ICMJE guidelines.

Acceptable Uses

  • As an algorithmic technique in your research, e.g., using LLMs for protein structure prediction
  • As an aid to correct written text (spell checkers, grammar checkers)
  • As an aid to language translation; however, the human is responsible for the accuracy of the final text
  • As an evaluation technique (to assist in finding inconsistencies or other anomalies)
  • It is permissible to include LLM generated text snippets as examples in research papers where appropriate, but these MUST be clearly labeled and their use explained.
  • As an aid to code writing; however, the human is responsible for the code.
  • As an aid to creating documentation for code; however, the human is responsible for the correctness of the documentation.
  • To discover background information on a topic, subject to verification from trusted sources.

The development of these models is changing rapidly and it is not easy to foresee how they may be adopted. Therefore, it is likely that these guidelines will be subject to change in the future. At present, we do not intend to systematically detect usage of these models, but we will investigate reported instances on a case-by-case basis.

Updated 3 April 2025
