ISMB/ECCB 2025




Schedule for DREAM


All sessions take place on 2025-07-21 in Room 12 (Dream Challenges track).

14:00–14:15
Title: Benchmarking foundation models in biology: where we are, and where we want to go with the community
Presenter: Julio Saez-Rodriguez
Abstract: The AI promise of powerful solutions to biomedical and healthcare-related problems must be accompanied by transparent evaluation and proof of reproducibility of the corresponding algorithms. Machine learning (ML) algorithms are typically evaluated by assessing their performance on prediction tasks. Applying this benchmarking paradigm to foundation models (FMs) is not straightforward: FMs are typically trained using self-supervised methods that do not need labeled ground-truth data, and the models are therefore embodiments of the phenomena that gave rise to the data. Training of FMs is usually followed by fine-tuning for specific tasks using more traditional ML methodologies, and the fine-tuned models can be benchmarked as such. However, that assessment would fail to elucidate whether possible failures reside in the FM or in the refined model. In this era of FMs, there is a need to rethink rigorous evaluation, both in what it means to validate and in how we validate. One strategy could be a concerted, continuous effort of the communities developing and using these models, crowdsourcing microtasks that test their limits from every possible perspective within their domain of competence. The aim of this DREAM session at ISMB is to explore this strategy together and define as a community a roadmap toward such a critically needed benchmark of foundation models.

14:15–14:45 (Keynote)
Title: Building Foundation Models for Single-cell Omics and Imaging
Presenter: Bo Wang
Abstract: This talk delves into the innovative use of generative AI to propel biomedical research forward. Harnessing single-cell sequencing data, we developed scGPT, a foundation model that extracts biological insights from an extensive dataset of over 33 million cells. Analogous to how words form text, genes define cells, effectively bridging the technological and biological realms. The strategic application of scGPT via transfer learning significantly boosts its efficacy in diverse applications such as cell-type annotation, multi-batch integration, and gene network inference. Additionally, the talk will spotlight MedSAM, a state-of-the-art segmentation foundation model. Designed for universal application, MedSAM excels across various medical imaging tasks and modalities. It showed unprecedented advances in 30 segmentation tasks, considerably outperforming existing models. Notably, MedSAM is capable of zero-shot and few-shot segmentation, enabling it to identify previously unseen tumor types and swiftly adapt to novel imaging modalities. Collectively, these breakthroughs emphasize the importance of developing versatile and efficient foundation models, which are poised to address the expanding needs of imaging and omics data and thus drive continuous innovation in biomedical analysis.

14:45–15:00
Title: Predicting Perturbation Effects: Are We Really There?
Presenter: Maria Brbic
Abstract: TBA

15:00–15:15
Title: AI Alliance: Benchmarking foundation models for drug discovery
Presenter: Pablo Meyer-Rojas
Abstract: The AI Alliance is focused on fostering an open community and enabling developers and researchers to accelerate responsible innovation in AI while ensuring scientific rigor, trust, safety, security, diversity, and economic competitiveness. We bring together a critical mass of compute, data, tools, and talent to accelerate and advocate for open innovation in AI. Together with DREAM Challenges, we aim to create a world-class research community that harnesses the potential of AI foundation models, transforms the field of drug discovery, and accelerates scientific progress by driving interdisciplinary collaboration on AI-powered drug discovery projects in the open. IBM Research biomedical foundation model (BMFM) technologies leverage multi-modal data of different types, including drug-like small molecules and proteins (covering more than a billion molecules in total), as well as DNA and single-cell RNA sequences.

15:15–15:30
Title: Benchmarking in Service of Virtual Cell Models: Challenges, Opportunities, and a Path Forward
Presenter: Elizabeth Fahsbender
Abstract: Realizing the vision of AI-powered virtual cells demands robust and biologically meaningful benchmarks that ensure models are reliable, reproducible, and relevant. This talk will present key insights from a recent community workshop convened by the Chan Zuckerberg Initiative, which identified critical challenges to benchmarking in this space, ranging from data heterogeneity and systemic bias to evaluation-metric gaps and ecosystem fragmentation. We will highlight a set of community-driven recommendations and describe how CZI is beginning to address them through targeted investments in infrastructure, high-quality data generation, and community coordination. These efforts aim to catalyze progress toward a trustworthy benchmarking ecosystem that can accelerate foundation model development for cell biology.

15:30–16:00 (Keynote)
Title: Deep learning models of regulatory DNA: A critical analysis of model design choices
Presenter: Anshul Kundaje
Abstract: Gene expression is tightly regulated by complexes of proteins that interpret complex sequence syntax encoded in regulatory DNA. Genetic variants influencing traits and diseases often disrupt this syntax. Several deep learning models have been developed to decipher regulatory DNA and identify functional variants. Most models use supervised learning to map sequences to cell-specific regulatory activity measured by genome-wide molecular profiling experiments. The general trend in model design is toward larger, multi-task, supervised models with expansive receptive fields. Further, emerging self-supervised DNA language models (DNALMs) promise foundational representations for probing and fine-tuning on limited datasets. However, rigorous evaluation of these models against lightweight alternatives on biologically relevant tasks has been lacking. In this talk, I will demonstrate that lightweight, single-task CNNs are competitive with, or significantly outperform, massive supervised transformer models and fine-tuned DNALMs on critical prediction tasks. Additionally, I will show that the multi-task, supervised models learn causally inconsistent features, impairing counterfactual prediction, interpretation, and design. In contrast, our lightweight, single-task models are causally consistent and provide robust, interpretable insights into regulatory syntax and genetic variation, enabling scalable novel discoveries.

16:40–16:55
Title: Benchmarking Multi-Modal Large Language Models for Metastatic Breast Cancer Prognosis
Presenter: Justin Guinney
Abstract: Inputs into cancer prognostic models are primarily structured data, such as demographic and clinicopathological features, and lack the richer, temporal context often found in unstructured clinical notes. We hypothesize that creating a temporal clinical patient note from structured data that preserves longitudinal and clinical contextual information, and coupling it with a large language model (LLM) trained to prognosticate overall survival (OS), may improve model accuracy with an interpretable embedding space. In this study, we benchmark different LLMs and fine-tuning strategies to develop optimal models for predicting overall survival from time of metastasis in a large cohort of de-identified patients with metastatic breast cancer.

16:55–17:30
Title: Crowdsourcing Experiment
Presenter: Gustavo Stolovitzky
Abstract: We will collectively conduct a small-scale community experiment to simulate a large-scale crowdsourcing initiative to benchmark foundation models.

17:30–18:00
Title: Evaluating and Benchmarking Foundation Models
Presenters (all speakers): Bo Wang, Anshul Kundaje, Justin Guinney, Katrina Kalantar, Maria Brbic, Luca Foschini
Abstract: Speakers will share their opinions about best practices for evaluating foundation models in biomedicine and engage in conversation with attendees.
