Posters - Schedules

Poster Presentation Times
Session A: Monday, July 24, between 18:00 CEST and 19:00 CEST
Session B: Tuesday, July 25, between 18:00 CEST and 19:00 CEST
Session C: Wednesday, July 26, between 18:00 CEST and 19:00 CEST

Session A Poster Set-up and Dismantle
Session A Posters set up: Monday, July 24, between 08:00 CEST and 08:45 CEST
Session A Posters dismantle: Monday, July 24, at 19:00 CEST

Session B Poster Set-up and Dismantle
Session B Posters set up: Tuesday, July 25, between 08:00 CEST and 08:45 CEST
Session B Posters dismantle: Tuesday, July 25, at 19:00 CEST

Session C Poster Set-up and Dismantle
Session C Posters set up: Wednesday, July 26, between 08:00 CEST and 08:45 CEST
Session C Posters dismantle: Wednesday, July 26, at 19:00 CEST
Virtual
A-062: Personal Galaxy service as a job on HPC computing nodes
Track: BioInfo-Core
  • Chieh-Wei Huang, National Center for High-performance Computing, Taiwan
  • Chao-Chun Chuang, National Center for High-performance Computing, Taiwan
  • Yu-Hsiang Chi, National Center for High-performance Computing, Taiwan
  • Yi-Cheng Hsiao, National Tsing Hua University, Taiwan
  • Chang-Wei Yeh, National Center for High-performance Computing, Taiwan
  • Chia-Lee Yang, National Center for High-performance Computing, Taiwan
  • Yu-Tai Wang, National Center for High-performance Computing, Taiwan


Presentation Overview:

Galaxy is a web-based analysis platform for life science research. Its usual service model is a centralized site that serves all users, which prevents users from customizing their own Galaxy instance. As at most HPC-based core facilities, our policy forbids running personal Galaxy services on a login node. Moreover, Galaxy submits jobs through a single administrative account, which makes it hard for the system to attribute usage fees to individual users.
Therefore, we established a web-based platform, DJXpert, on our HPC system.
DJXpert submits a Slurm job that launches Singularity containers and uses SSH forwarding to connect users' browsers to the containers via DJXpert. In this way, each user's own Galaxy service runs on computing nodes under that user's account, which makes it straightforward for the system to attribute usage fees. To scale up, we modified the Galaxy container to use the HPC's job scheduler, allowing Galaxy itself to submit jobs to other computing nodes. When services are initialized, the separation of the database from the main program ensures that users keep their previously built sites after a restart.
DJXpert's service model can be extended to other interactive tools, such as RStudio and Jupyter Notebook, to make HPC-based core services more user-friendly.
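A minimal sketch of this pattern, assuming a hypothetical image name, paths, port, and partition (an illustration of the approach, not DJXpert's actual implementation):

```python
# Hypothetical sketch of the DJXpert pattern: submit a Slurm batch job that
# launches a per-user Galaxy container, then reach it via SSH port forwarding.
# Image name, paths, port, and partition are illustrative assumptions.
import subprocess
import textwrap

BATCH_SCRIPT = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --job-name=personal-galaxy
    #SBATCH --partition=interactive
    #SBATCH --time=08:00:00

    # Bind a per-user directory so the database survives restarts,
    # kept separate from the main program inside the image.
    singularity run \\
        --bind "$HOME/galaxy_data:/galaxy/database" \\
        galaxy.sif --port 8080
    """)

def submit_personal_galaxy() -> str:
    """Submit the job under the user's own account, so accounting
    attributes usage fees to that user, not a shared admin account."""
    with open("galaxy_job.sbatch", "w") as fh:
        fh.write(BATCH_SCRIPT)
    out = subprocess.run(["sbatch", "galaxy_job.sbatch"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()  # e.g. "Submitted batch job 12345"

if __name__ == "__main__":
    print(submit_personal_galaxy())
    # Once the job runs on node <node>, the platform would open a tunnel
    # so the browser can reach the container, e.g.:
    #   ssh -L 8080:<node>:8080 user@hpc-login
```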

A-064: Standardizing and harmonizing NGS analysis workflows in the German Human Genome-Phenome Archive (GHGA) – A national secure infrastructure for omics data
Track: BioInfo-Core
  • Kübra Narcı, Deutsches Krebsforschungszentrum, Germany
  • Florian Heyl, Deutsches Krebsforschungszentrum, Germany
  • Christian Mertes, Technical University Munich, Germany
  • Paul Menges, Deutsches Krebsforschungszentrum, Germany
  • Luiz Gadelha, Deutsches Krebsforschungszentrum, Germany
  • Vangelis Theodorakis, Technical University Munich, Germany
  • Daniel Huebschmann, Deutsches Krebsforschungszentrum, Germany
  • Ivo Buchhalter, Deutsches Krebsforschungszentrum, Germany


Presentation Overview:

With increasing volumes of human omics data, there is an urgent need for adequate data-sharing resources, as well as for standardized and harmonized data processing. Within the federated European Genome-Phenome Archive (EGA), the German Human Genome-Phenome Archive (GHGA) strives to provide (i) the necessary secure IT infrastructure for Germany, (ii) an ethico-legal framework to handle omics data in a data-protection-compliant but open and FAIR manner, (iii) a harmonized metadata schema, and (iv) standardized workflows to process incoming omics data uniformly.
GHGA aims to be more than an archive. GHGA will build on cloud computing infrastructures managed in a network of data generators. Researchers will have controlled access to raw and processed sequence data produced by recognized GA4GH-compliant NGS workflows. To this end, GHGA is working with the nf-core community to co-develop and standardize bioinformatics workflows for data analysis, benchmarking, statistical analysis, and visualization. In addition, continuous integration and deployment, applied to synthetic and experimental datasets, will be used to test and benchmark workflows and guarantee their quality. Finally, by delivering on these goals, namely secure IT infrastructure, an ethico-legal framework, metadata schemas, and standardized, reproducible workflows, GHGA will enable cross-project analysis and promote new collaborations and research projects.
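For context, nf-core pipelines ship with bundled test profiles, so a continuous-integration smoke test of a workflow against a small dataset can be as simple as the following sketch (pipeline name and pinned revision are illustrative, not GHGA's actual configuration):

```python
# Sketch of a CI smoke test for an nf-core pipeline: run the bundled `test`
# profile against its small packaged dataset. The pipeline name and pinned
# revision are illustrative assumptions, not GHGA's actual configuration.
import subprocess

subprocess.run(
    ["nextflow", "run", "nf-core/rnaseq",
     "-profile", "test,docker",   # bundled small test dataset + containers
     "-r", "3.12.0"],             # pin a release for reproducibility
    check=True,
)
```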

A-065: OTP@EOSC: A cloud-ready data management and processing platform for sensitive cancer-genomics data
Track: BioInfo-Core
  • Sven Twardziok, Berlin Institute of Health at Charité – Universitätsmedizin Berlin, Germany
  • Valentin Schneider-Lunitz, Berlin Institute of Health at Charité – Universitätsmedizin Berlin, Germany
  • Philip R. Kensche, German Cancer Research Center (DKFZ), Germany
  • Ivo Buchhalter, German Cancer Research Center (DKFZ), Germany


Presentation Overview:

As the volume of data in life sciences and healthcare continues to grow unprecedentedly, the need for efficient and secure data sharing and collaborative analysis has become increasingly critical. This is especially true for cancer genome sequencing data due to its size, complexity, and the high-security requirements of sensitive genomic data. To address these issues, we have developed a cloud platform that runs workflows where the data is stored. This eliminates the need to download and share data for local analysis. Our platform follows the GA4GH WES standard based on OTP, WESkit, and our relevant ICGC cancer genomics workflows. We have demonstrated its effectiveness by processing sensitive genomics data in the cloud using publicly available cell culture datasets. The demonstration platform is available to interested researchers. By integrating the software stack into the EOSC-life services and making it available as an EOSC service, other researchers and platform providers can quickly deploy their own sensitive genomics platform in the cloud. Our project demonstrates the feasibility and benefits of using a customized cloud-ready platform for processing sensitive cancer genomics data.
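Because the platform exposes the GA4GH WES standard (via WESkit), a client can submit and monitor a workflow run with plain HTTP requests. A minimal client sketch, assuming a hypothetical endpoint, workflow URL, and parameters (authentication, required for sensitive data, is omitted):

```python
# Minimal GA4GH WES client sketch: submit a workflow run and poll its status.
# Endpoint URL, workflow URL, and parameters are hypothetical.
import json
import time
import requests

WES_BASE = "https://wes.example.org/ga4gh/wes/v1"  # hypothetical WESkit endpoint

def submit_run() -> str:
    resp = requests.post(
        f"{WES_BASE}/runs",
        data={
            "workflow_url": "https://example.org/workflows/alignment/main.nf",
            "workflow_type": "NFL",             # Nextflow, per the WES spec
            "workflow_type_version": "22.10.4",
            "workflow_params": json.dumps({"input": "cell_line_reads.fastq.gz"}),
        },
    )
    resp.raise_for_status()
    return resp.json()["run_id"]

def wait_for(run_id: str) -> str:
    """Poll until the run reaches a terminal state defined by the WES spec."""
    while True:
        state = requests.get(f"{WES_BASE}/runs/{run_id}/status").json()["state"]
        if state in {"COMPLETE", "EXECUTOR_ERROR", "SYSTEM_ERROR", "CANCELED"}:
            return state
        time.sleep(30)

if __name__ == "__main__":
    run_id = submit_run()
    print(run_id, wait_for(run_id))
```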

A-066: UTAP2: User-friendly Transcriptome Analysis Pipeline
Track: BioInfo-Core
  • Jordana Lindner, Weizmann Institute of Science, Israel
  • Bareket Dassa, Weizmann Institute of Science, Israel
  • Jaime Prilusky, Weizmann Institute of Science, Israel
  • Noa Wigoda, Weizmann Institute of Science, Israel
  • Ester Feldmesser, Weizmann Institute of Science, Israel
  • Gil Stelzer, Weizmann Institute of Science, Israel
  • Dena Leshkowitz, Weizmann Institute of Science, Israel


Presentation Overview:

To enable fast and user-friendly analysis of transcriptome and epigenome NGS sequence data, we developed an intuitive and scalable pipeline that executes the full process. Transcriptome analysis starts from cDNA sequences derived from RNA (for the TruSeq, bulk MARS-Seq, Ribo-Seq, and SCRB-Seq protocols) and ends with gene counts and/or differentially expressed genes. Output files are organized in structured folders, and result summaries are provided in rich, comprehensive reports containing dozens of plots, tables, and links. In addition, the pipeline supports epigenome sequence analysis for ChIP-Seq and ATAC-Seq, including alignment and peak detection.
The User-friendly Transcriptome Analysis Pipeline (UTAP2) can be easily installed from a single Singularity image that includes all the necessary software and files in a Miniconda environment, enabling execution of our Snakemake workflows while taking advantage of parallel cluster resources.
UTAP2 is an intuitive web-based platform currently installed on the Weizmann Institute's cluster, where it is used extensively by the institute's researchers. It is also available as an open-source application for the biomedical research community, enabling researchers with limited programming skills to efficiently and accurately analyse transcriptome and epigenome sequence data. To learn more about UTAP2 installation and features, see https://utap2.readthedocs.io/en/latest/.
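As a rough illustration of the single-image deployment model, a run reduces to one container call; the image name, internal path, and arguments below are assumptions, not UTAP2's documented interface (see the link above for actual instructions):

```python
# Sketch of the single-image model: everything (Snakemake, tools, reference
# files) ships in one Singularity image, so a run is a single exec call.
# Image name, internal Snakefile path, and config values are hypothetical.
import subprocess

subprocess.run(
    [
        "singularity", "exec", "utap2.sif",      # hypothetical image name
        "snakemake",
        "--snakefile", "/opt/utap2/Snakefile",   # hypothetical path inside image
        "--cores", "16",                         # use parallel cluster cores
        "--config", "protocol=bulk_MARS-Seq",
    ],
    check=True,
)
```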

A-067: Flaski - web Apps for life sciences
Track: BioInfo-Core
  • Ayesha Iqbal, Max Planck Institute for Biology of Ageing, Germany
  • Camila Duitama Gonzalez, Institut Pasteur, France
  • Franziska Metge, Radboud University Medical Center, Netherlands
  • Yun Wang, Max Planck Institute for Biology of Ageing, Germany
  • Jorge Boucas, Max Planck Institute for Biology of Ageing, Germany


Presentation Overview:

Flaski is a Flask- and Dash-based collection of interactive web apps for data analysis and visualisation in life sciences, with session management and versioning. Flaski's current range of apps includes general-purpose plotting (e.g. scatter plots, heatmaps), data-rich apps (e.g. an RNA-Seq database), machine learning (e.g. PCA, t-SNE), and submission forms for backend jobs (e.g. RNA-Seq, AlphaFold). App-to-app communication ensures easy maintenance and reuse of general apps across the stack. Flaski is built around interactions between users with and without coding experience: sessions created over the web interface can be downloaded and worked on further in Python as standard Plotly objects, and vice versa. Flaski is responsive; depending on your data size, it works well on anything from a desktop display down to a mobile phone. Flaski is open source under the MIT license and can be used without restrictions at https://flaski.age.mpg.de.
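The web-to-Python round trip rests on figures being standard Plotly objects; generically, such an exchange looks like the following sketch (this shows the underlying Plotly mechanism, not Flaski's specific session format):

```python
# Generic illustration of the web-to-Python round trip: a figure serialised
# by a web app can be reloaded in Python as a standard Plotly object,
# modified programmatically, and serialised again. This demonstrates the
# underlying Plotly mechanism, not Flaski's actual session format.
import plotly.graph_objects as go
import plotly.io as pio

# A figure as it might be built interactively in a web app, then downloaded.
fig = go.Figure(go.Scatter(x=[1, 2, 3], y=[4, 1, 7], mode="markers"))
fig.write_json("session_figure.json")

# Later, in a local Python session: load, tweak, and re-export.
fig2 = pio.read_json("session_figure.json")
fig2.update_layout(title="Adjusted offline in Python")
fig2.write_json("session_figure_v2.json")
```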

A-068: Learnings from Genomics England's experience developing WGS analysis pipelines to support a National-scale Genomic Medicine Service
Track: BioInfo-Core
  • Francisco Javier Lopez, Genomics England, United Kingdom
  • Adrianto Wirawan, Genomics England, United Kingdom
  • Augusto Rendon, Genomics England, United Kingdom


Presentation Overview:

Building WGS analysis pipelines to support a national-scale clinical service is not a trivial task. In an academic or research environment, developers typically focus on creating functional and reproducible workflows; resources are limited, and once the project is finished, results are published and in most cases that is the end of the journey. Developing WGS analysis pipelines fit for clinical-service purposes poses some critical challenges imposed by requirements such as service sustainability, contractual turnaround times, evolving with new technology paradigms, team growth, and strict regulation, among others. Genomics England is a company wholly owned by the Department of Health and Social Care of the United Kingdom. Upon successful completion of the 100,000 Genomes Project, GEL started a transformation process to support the Genomic Medicine Service for the NHS. Addressing the additional clinical-service requirements mentioned above has been a challenging task for Genomics England's pipelines; although great progress has been made, some of these challenges still impact our activities. In this presentation we will walk through the learnings from more than eight years of experience developing WGS analysis pipelines for a national-scale clinical purpose.

A-069: IlluQC: a tool for monitoring next generation sequencing quality data
Track: BioInfo-Core
  • Aina Montalbán-Casafont, Hospital Clínic de Barcelona, Spain
  • Joan Anton Puig-Butillé, Hospital Clínic de Barcelona, Spain
  • Cèlia Badenas, Hospital Clínic de Barcelona, Spain
  • Judit García-Villoria, Hospital Clínic de Barcelona, Spain
  • Xavier Solé, Hospital Clínic de Barcelona, Spain
  • José Luis Villanueva-Cañas, Hospital Clínic de Barcelona, Spain


Presentation Overview:

Background: Quality control of next-generation sequencing (NGS) experiments is crucial to obtain reliable and accurate results. However, quality metrics are rarely explored across sequencing runs over time, thus hindering the identification of changing trends in quality parameters. Here we present IlluQC, a tool that monitors the quality of Illumina NGS workflows, from wet laboratory procedures to bioinformatics analyses.

Methods: IlluQC is a web tool developed with a Shiny-based interface and a PostgreSQL back-end. The tool gathers information from Illumina InterOp files, such as cluster density and Q30, and incorporates other metrics as well, such as those generated by FastQC or library-quality measures. The tool will be open-source and freely available.

Results: IlluQC provides a user-friendly interactive dashboard to visualise the key metrics of quality-control checkpoints. The dashboard includes several QC tracks, such as Instruments, Experiments, and Samples, allowing time-dependent exploration of quality metrics. In addition, it makes it easy to quantify the activity of the laboratory.

Conclusions: IlluQC is a useful tool for sequencing facilities or research groups with high sequencing activity. It provides context for quality metrics, enabling the assessment of laboratory performance in terms of quality. IlluQC can also serve as a decision-support system for the laboratory.
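For context, run-level metrics like those IlluQC gathers can be read from an InterOp folder with Illumina's `interop` Python bindings; a sketch of that kind of extraction follows (run-folder path is hypothetical, and this is not IlluQC's actual code):

```python
# Sketch of extracting run-level QC metrics from Illumina InterOp files,
# using Illumina's `interop` Python bindings (pip install interop). This
# illustrates the kind of collection IlluQC performs; it is not IlluQC's
# actual code, and the run-folder path is hypothetical.
from interop import py_interop_run_metrics, py_interop_summary

run_metrics = py_interop_run_metrics.run_metrics()
run_metrics.read("/data/runs/230701_M00001_0042_000000000-ABCDE")

summary = py_interop_summary.run_summary()
py_interop_summary.summarize_run_metrics(run_metrics, summary)

# Percentage of bases >= Q30 across the run, and mean cluster density
# (K/mm^2) for lane 1 of read 1.
print("%>=Q30:", summary.total_summary().percent_gt_q30())
print("Cluster density:", summary.at(0).at(0).density().mean())
```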

A-070: Workflow and Pipeline Development for Core Sequencing Service Support at the Ontario Institute for Cancer Research
Track: BioInfo-Core
  • Lawrence Heisler, Ontario Institute for Cancer Research, Canada
  • Peter Ruzanov, Ontario Institute for Cancer Research, Canada
  • Xuemei Luo, Ontario Institute for Cancer Research, Canada
  • Beatriz Lujan-Toro, Ontario Institute for Cancer Research, Canada
  • Gavin Peng, Ontario Institute for Cancer Research, Canada
  • Richard Jovelin, Ontario Institute for Cancer Research, Canada
  • Heather Armstrong, Ontario Institute for Cancer Research, Canada
  • Alexis Varsava, Ontario Institute for Cancer Research, Canada
  • Dillan Cooke, Ontario Institute for Cancer Research, Canada
  • Felix Beaudry, Ontario Institute for Cancer Research, Canada
  • Iain Bancarz, Ontario Institute for Cancer Research, Canada
  • Alexander Fortuna, Ontario Institute for Cancer Research, Canada
  • Michael Laszloffy, Ontario Institute for Cancer Research, Canada
  • Morgan Taschuk, Ontario Institute for Cancer Research, Canada


Presentation Overview:

The Genome Sequence Informatics (GSI) group at the Ontario Institute for Cancer Research is responsible for building, executing, and monitoring analysis pipelines that process high-throughput sequencing data, both for quality assessment and in support of clinical and research project aims. Analysis is conducted in a production environment where the identification of newly available data automatically launches processes for efficient generation of project-specific deliverables. Pipeline development is continuous, to improve the quality and efficiency of our services and to expand our analysis offerings. We have developed a set of practices for the development, testing, monitoring, and resolution of issues that minimizes interruptions to our production environment and ensures the quick and efficient integration of new and modified workflows. This presentation will describe our workflow development cycle in relation to our unique infrastructure and in support of our service goals.

A-071: Porting workflow managers to the cloud at a national diagnostic genomics medical service – strategy and learnings
Track: BioInfo-Core
  • Luke Paul Buttigieg, Genomics England, United Kingdom
  • Ricardo H. Ramírez González, Genomics England, United Kingdom


Presentation Overview:

Genomics England provides whole genome sequencing diagnostics to the Genomic Medicine Service (UK), a free-at-the-point-of-care, nationwide genomic diagnostic testing service with the ambitious target of processing 300,000 samples by 2025. Currently, all clinical bioinformatics is processed using an internally developed workflow engine (Bertha) certified to clinical standards. We are migrating to a new solution (Genie) that combines off-the-shelf products with custom functionality, so we can focus on our core mission of enabling equitably accessed genomic medicine for all. Genie should help us support new use cases more quickly, across different infrastructures such as the cloud, and it uses a standard workflow definition language.

We have developed an approach to migrate at speed in an agile and iterative fashion. The initial phase involves using, directly from Genie, the same Singularity image containing the bioinformatics workflow's logic that Bertha uses, which reduces the risk of code divergence. We use an automated comparison-testing framework that checks the new system against the existing one to detect regressions. Later, we will iteratively refactor the workflows, breaking up the Singularity image and optimising for performance.

In this poster, we will describe this migration strategy, our risk management, and the lessons learnt while working through this large-scale effort.
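The comparison-testing idea can be illustrated with a generic sketch: run the same sample through both engines and diff the normalised outputs. Everything below is hypothetical (simple tab-separated variant tables, made-up paths), not Genomics England's actual framework:

```python
# Generic regression-comparison sketch: normalise outputs of the old (Bertha)
# and new (Genie) runs of the same sample and report differences. Entirely
# hypothetical; assumes small tab-separated variant tables for illustration.
from pathlib import Path

def load_variants(path: Path) -> set[tuple[str, ...]]:
    """Normalise a variant table to a set of (chrom, pos, ref, alt) keys,
    ignoring line order and comment lines."""
    keys = set()
    for line in path.read_text().splitlines():
        if line and not line.startswith("#"):
            fields = line.split("\t")
            if len(fields) >= 4:
                chrom, pos, ref, alt = fields[:4]
                keys.add((chrom, pos, ref.upper(), alt.upper()))
    return keys

def compare(old: Path, new: Path) -> bool:
    a, b = load_variants(old), load_variants(new)
    for key in sorted(a - b):
        print("missing in new:", key)
    for key in sorted(b - a):
        print("extra in new:  ", key)
    return a == b

if __name__ == "__main__":
    ok = compare(Path("bertha/sample1.tsv"), Path("genie/sample1.tsv"))
    print("PASS" if ok else "FAIL")
```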

A-072: Rebuilding production to scale
Track: BioInfo-Core
  • Michael Laszloffy, Ontario Institute for Cancer Research, Canada
  • Heather Armstrong, Ontario Institute for Cancer Research, Canada
  • Iain Bancarz, Ontario Institute for Cancer Research, Canada
  • Dillan Cooke, Ontario Institute for Cancer Research, Canada
  • Alexander Fortuna, Ontario Institute for Cancer Research, Canada
  • Lawrence Heisler, Ontario Institute for Cancer Research, Canada
  • Lars Jorgensen, Ontario Institute for Cancer Research, Canada
  • Xuemei Luo, Ontario Institute for Cancer Research, Canada
  • Andre Masella, Ontario Institute for Cancer Research, Canada
  • Angie Mosquera, Ontario Institute for Cancer Research, Canada
  • Peter Ruzanov, Ontario Institute for Cancer Research, Canada
  • Alexis Varsava, Ontario Institute for Cancer Research, Canada
  • Morgan Taschuk, Ontario Institute for Cancer Research, Canada


Presentation Overview:

Advancements in sequencing technology and increases in throughput are driving the need for software infrastructure to scale. At the Ontario Institute for Cancer Research, operations have grown from sequencing 60 terabases in 2012 to over 440 terabases in 2022. From tracking and analyzing thousands of new samples a year to supporting bioinformaticians developing new analysis workflows, adaptation and automation are required.

We will share our multi-year undertaking to rebuild our production analysis system from an aging, unmaintainable, and unscalable system into a modular, performant, and maintainable one. Through the use of distributed infrastructure and a distributed data model, we were able to rebuild the system gradually while keeping production operational. We used benchmarking and automated testing to help evaluate changes and performance. As the system grew into many components, monitoring and alerting became necessary for day-to-day operations.

In 2022, our infrastructure handled the analysis of more than 440 terabases of raw sequencing data and processed nearly one petabyte of data. We continue to expand and adapt the system to meet our ever-growing needs; sustainable data management is our next major challenge.

A-073: Available tools to assess career progression in bioinformatics core facilities
Track: BioInfo-Core
  • Patricia Carvajal-López, EMBL-EBI, United Kingdom
  • Marta Lloret-Llinares, EMBL-EBI, United Kingdom
  • Cath Brooksbank, EMBL-EBI, United Kingdom


Presentation Overview:

Bioinformatics core facilities (BCFs) play an essential role in enabling research in the life sciences. As deep learning and high-throughput sequencing methodologies are increasingly applied to analyse molecular data, there is a growing need for highly specialised services from BCFs and their supporting teams.

As BCF scientists transition from entry-level to managerial roles, some tools exist that partially describe the competencies they require as they progress through this career. One of them is the ISCB bioinformatics competency framework, published on the EMBL-EBI Competency Hub (https://competency.ebi.ac.uk). However, the current version of the framework (V3) does not reflect the competency proficiency needed by BCF scientists as they progress in their careers. The Curriculum Task Force of the ISCB Education Committee continues to work on providing updated frameworks for bioinformatics core competencies. Nonetheless, the current stage of analysing the competencies related to the career progression of BCF scientists requires the active participation of this community of practice.

The effort of developing a well-defined competency framework, along with training for this community, will be essential to support the career progression of BCF specialists who, in turn, enable research and development within the life sciences.

A-074: Pipelines for Processing Sensitive Human Sequencing Data (using BioContainers and Snakemake)
Track: BioInfo-Core
  • Tina Visnovska, Department of Clinical Molecular Biology, Medical Division, Akershus University Hospital, Lørenskog, Norway


Presentation Overview:

When high-throughput sequencing data are obtained from human samples in a hospital-based research environment, many restrictions on data storage and analysis are in place to ensure that patients' personal information is kept private. In our department this means that analyses have to be performed on a high-performance computing cluster dedicated to working with sensitive data. Developers have no internet access from within the cluster and only limited possibilities to use or install bioinformatics tools. As the availability of sequencing data increases over time, so does the number of research groups in our department using such data in their research. This creates demand for reusable software for analysing next-generation sequencing data that can be easily deployed in the independent project environments of the cluster dedicated to sensitive data.

Here, a suite of publicly available containerised pipelines is presented. With these pipelines, raw RNA-seq, ATAC-seq, ChIP-seq, and metatranscriptomic data generated on an Illumina instrument can be analysed. Singularity is used for containerisation (with BioContainers deployed when available) and Snakemake orchestrates the pipelines' respective workflows. The pipelines are version controlled and used in several research projects in our department.
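In general terms, running such a pipeline looks like the following invocation, where Snakemake executes each rule inside its Singularity/BioContainers image; because the cluster is air-gapped, images would be pre-fetched into a local prefix directory. Paths and config values are illustrative, not the pipelines' documented interface:

```python
# Sketch of running one containerised pipeline: Snakemake executes each rule
# inside its Singularity/BioContainers image. On an air-gapped cluster the
# images would be pre-pulled into a local prefix directory. Paths and config
# values are illustrative assumptions.
import subprocess

subprocess.run(
    [
        "snakemake",
        "--snakefile", "rnaseq/Snakefile",              # hypothetical checkout
        "--use-singularity",                            # run rules in containers
        "--singularity-prefix", "/cluster/containers",  # pre-pulled images
        "--cores", "32",
        "--configfile", "project_config.yaml",
    ],
    check=True,
)
```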

A-435: Challenges in Spatial Single-Cell Transcriptomics Analysis
Track: BioInfo-Core
  • Meeta Mistry, Harvard T.H. Chan School of Public Health Bioinformatics Core, United States
  • Victor Barrera, Harvard T.H. Chan School of Public Health Bioinformatics Core, Spain
  • James M. Billingsley, Harvard T.H. Chan School of Public Health Bioinformatics Core, United States
  • Emma Berdan, Harvard T.H. Chan School of Public Health Bioinformatics Core, United States
  • Shannan Ho Sui, Harvard T.H. Chan School of Public Health Bioinformatics Core, United States


Presentation Overview:

Spatial single-cell transcriptomics has revolutionized our understanding of complex biological systems by capturing gene expression patterns in their native spatial context. Technologies that preserve spatial information are categorized as imaging-based or sequencing-based, each with significant differences in protocol, experimental throughput, and spatial resolution. Visium by 10x Genomics integrates gene expression profiling with histological imaging for visualizing expression patterns in intact tissue sections. CosMx by NanoString Technologies utilizes barcoded oligonucleotide probes for high-resolution mapping of gene expression patterns.
These technologies have only recently emerged in biomedical research, so the computational methods for analyzing the data are still in their infancy. While we are able to use existing tools and approaches to some extent, processing the data requires modifications and special considerations. Furthermore, data from each of the spatial transcriptomics technologies present unique challenges and require custom solutions. In this poster we present results from the analysis of spatial transcriptomics datasets from three different platforms. We compare and contrast the analysis workflows, discuss the challenges in dissecting the spatial organization of cell types, and provide considerations for future analyses.