Posters
Poster numbers will be assigned May 30th.
If you cannot find your poster below, you probably have not yet confirmed that you will be attending ISMB/ECCB 2015.
To confirm your poster, find the confirmation link in the poster acceptance email and follow the instructions.
If you need further assistance, please contact submissions@iscb.org and provide your poster title or submission ID.
Category P - 'Other'
P01 - The Job Management System (JMS): a web-based workflow management system for Torque
Short Abstract: Complex computational workflows have become vital to modern bioinformatics research. These workflows are often made up of numerous standalone tools that are resource intensive and can require days or even weeks of computing time. As such, more and more researchers are turning to High Performance Computing (HPC) clusters, where they can take advantage of the aggregated resources of many powerful computers. Resource managers such as Torque (http://www.adaptivecomputing.com/) and SLURM have been developed to facilitate the submission and management of jobs on a cluster. These tools are command-line based, however, and can present a steep learning curve for new users.
We have developed the Job Management System (JMS), a web interface to Torque that allows even casual users to run jobs on a cluster. In addition to this, JMS is a workflow management system that allows users to upload scripts and tools and string them into complex, computational pipelines. Pipelines can be created and run directly via the interface, without any need for complicated configuration files, and can be saved and shared with other users. In addition, results from workflows that have been run can be shared, facilitating collaboration between groups. JMS also provides a RESTful web API for programmatic access.
Although JMS can be used by researchers from any field, it is currently being tailored towards bioinformatics with the introduction of multiple bioinformatics pipelines and result viewers. The JMS source code is freely available at https://github.com/RUBi-ZA/JMS.
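Since JMS exposes a RESTful web API for programmatic access, jobs can in principle be submitted by posting a structured request body. The sketch below illustrates the general shape of such a request; the field names, endpoint, and parameters are illustrative assumptions, not the documented JMS schema.

```python
import json

# Hypothetical job-submission payload for a REST API such as the one JMS
# provides. The key names ("tool", "parameters", "resources") and the
# example tool are assumptions for illustration only.
def build_job_request(tool_name, parameters, nodes=1, walltime="01:00:00"):
    """Assemble a JSON body describing a cluster job."""
    return json.dumps({
        "tool": tool_name,
        "parameters": parameters,
        "resources": {"nodes": nodes, "walltime": walltime},
    })

body = build_job_request("blastp", {"query": "input.fasta", "db": "nr"})
print(body)
```

In practice such a body would be POSTed to the API with an HTTP client; the point is that a plain JSON document is enough to describe a job, which is what makes a web interface to a resource manager like Torque feasible.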
P02 - Recent developments in UniProt: improving access to protein knowledge
Short Abstract: UniProt provides the scientific community with a comprehensive, high-quality and freely accessible resource of protein sequence and functional information. It facilitates scientific discovery by organising biological knowledge and enabling researchers to rapidly comprehend complex areas of biology. Information is integrated from a range of sources such as scientific literature, protein sequence analysis tools, other databases and automatic annotation systems to provide an integrated overview of available protein knowledge. To ensure that the resource continues to meet the needs of our users, a number of new developments have been introduced over the last year. These include: (1) the launch of a new website to allow easier navigation with better visibility and usability of functionality and improved presentation of information content; (2) introduction of an annotation score for all entries in UniProt to represent the amount of knowledge known about each protein and to allow easy identification of proteins which are the best characterized and most informative; (3) improved representation of the evidence supporting UniProt data; (4) improved coverage of identified peptides in the human proteome. An overview of these highlights from the past year will be presented. All data are freely available at www.uniprot.org.
P03 - ELIXIR Tools & Data Services Registry
Short Abstract: ELIXIR, the European infrastructure for biological information, is building a registry of analytical tools and data services for the life sciences. We aim to facilitate discovery and usability of resources across the spectrum: for biomedical research infrastructures and disciplines, for diverse types of application software and interfaces, and for use by scientists, technicians, software developers and managers. The registry will include resources spanning all of biomedical research from molecules to systems biology and personalised medicine.
P04 - MetaboloView: integrated analysis platform for large-scale metabolomics cohort datasets
Short Abstract: Metabolomics is the study of identifying and characterizing small molecules produced by cellular processes; it can detect changes in the behavior of metabolic pathways, as differences in observed metabolites, before phenotypic changes appear. It can therefore be used, for example, to detect the development of a disease, and it now plays an important role in many fields. Several large metabolomics cohort studies, such as the HUSERMET project and the Tsuruoka metabolomic cohort study, have been conducted with the aim of elucidating, for example, key components that characterize diseases and discovering drug targets. Several software packages, such as MZmine and MetDAT, have been developed; however, few can efficiently handle large-scale datasets and are suitable for the analysis of cohort data.
Here, we present MetaboloView, a fast, integrated viewer and analysis platform for large-scale metabolomics cohort datasets. The software provides functionality tailored to the analysis and visualization of cohort datasets, for example, per-molecule intensity histograms. In addition to cohort-specific features, the software automatically performs standard analyses, such as identification of observed molecules and multivariate analyses including PCA, and reports a summary of the input metabolome dataset. The software is designed to be flexible and can be customized with plugins, so user-defined analysis methods can easily be incorporated into MetaboloView's analysis pipeline.
P05 - Biological Dynamics Markup Language (BDML): an open format for representing quantitative data on biological dynamics
Short Abstract: Quantitative data on biological dynamics are crucial for understanding mechanisms of biological phenomena. With recent advances in live-cell imaging and bioimage informatics, there is an explosion of quantitative data on the spatiotemporal dynamics of molecules, cells and organisms in a wide variety of biological phenomena. However, these data have not been well used because they are provided in data-specific formats that require dedicated software tools. We present Biological Dynamics Markup Language (BDML) as an open and unified format for representing quantitative biological dynamics data. BDML is based on XML. Its machine-readability and extensibility enable efficient development and evaluation of software tools for data visualization and analysis. We also present the Systems Science of Biological Dynamics database (SSBD; http://ssbd.qbic.riken.jp). SSBD provides over 300 BDML datasets, including those on nuclear division dynamics in C. elegans, D. melanogaster, zebrafish and mouse. We will discuss how BDML and SSBD will accelerate systems biology.
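Because BDML is XML-based, generic tools can extract trajectories without data-specific parsers. The toy document below is only BDML-like: the element and attribute names are invented for illustration and do not reproduce the actual BDML schema.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical BDML-like document. Element names ("object",
# "measurement") and attributes are assumptions, not the real schema.
doc = """<bdml>
  <object id="nucleus1">
    <measurement t="0.0" x="1.0" y="2.0" z="0.5"/>
    <measurement t="1.0" x="1.2" y="2.1" z="0.6"/>
  </object>
</bdml>"""

root = ET.fromstring(doc)
# Extract a (time, x-position) trajectory with a stock XML library --
# no format-specific software needed.
track = [(float(m.get("t")), float(m.get("x")))
         for m in root.iter("measurement")]
print(track)  # [(0.0, 1.0), (1.0, 1.2)]
```

This machine-readability is precisely what lets visualization and analysis tools be developed and evaluated against BDML datasets efficiently.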
P06 - GeneYenta: A Phenotype Based Rare Disease Case Matching Tool Based on Online Dating Algorithms For the Acceleration of Exome Interpretation
Short Abstract: Advances in next generation sequencing (NGS) technologies have helped reveal causal variants for genetic diseases. In order to establish causality, it is often necessary to compare genomes of unrelated individuals with similar disease phenotypes to identify common disrupted genes. When working with cases of rare genetic disorders, finding similar individuals can be extremely difficult. We introduce a web tool, GeneYenta, which facilitates the matchmaking process, allowing clinicians to coordinate detailed comparisons for phenotypically similar cases. Importantly, the system is focused on phenotype annotation, with explicit limitations on highly confidential data that create barriers to participation. The procedure for matching of patient phenotypes, inspired by online dating services, uses an ontology-based semantic case matching algorithm with attribute weighting. We evaluate the capacity of the system using a curated reference data set and nineteen clinician-entered cases, comparing four matching algorithms. We find that the inclusion of clinician weights can augment phenotype matching.
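The idea of attribute-weighted case matching can be sketched in a few lines. The function below is a deliberately simplified toy, a flat weighted overlap of phenotype terms; the real GeneYenta algorithm is ontology-based (it exploits the term hierarchy), and the term identifiers and weights here are invented for illustration.

```python
# Toy weighted phenotype matching: clinician-assigned weights boost the
# contribution of terms a clinician considers important. All term IDs and
# weights below are hypothetical examples.
def match_score(case_a, case_b, weights):
    """Weighted overlap of annotated phenotype terms, normalised to [0, 1]."""
    shared = case_a & case_b
    total = case_a | case_b
    num = sum(weights.get(t, 1.0) for t in shared)
    den = sum(weights.get(t, 1.0) for t in total)
    return num / den if den else 0.0

weights = {"HP:0001250": 2.0, "HP:0000252": 1.5}  # clinician emphasis
a = {"HP:0001250", "HP:0000252", "HP:0001249"}
b = {"HP:0001250", "HP:0001249"}
print(round(match_score(a, b, weights), 3))  # 0.667
```

With uniform weights the score would reduce to a plain Jaccard index; the weighting is what lets clinician judgment influence which matches rank highest.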
P07 - The EDAM Ontology
Short Abstract: EDAM is an ontology of well established, familiar concepts that are prevalent within bioinformatics, including types of data and data identifiers, data formats, operations and topics. EDAM is a simple ontology - essentially a set of terms with synonyms and definitions - organised into an intuitive hierarchy for convenient use by curators, software developers and end-users.
EDAM aims to unify semantically the bioinformatics concepts in common use, provide curators with a comprehensive controlled vocabulary that is broadly applicable, and support new and powerful search, browse and query functions.
P08 - The current state of genomic data sharing: a focus on data accessibility
Short Abstract: The success of next generation sequencing technologies potentially opens up new horizons in clinical research and practice. But before one can really benefit from genomic clinics of the future, multiple issues must be addressed.
Human genomics research relies on the availability of genomic datasets that are needed to test a hypothesis. Although a large amount of data is generated around the world, individual researchers still often lack access to it. Exemplary collaborative practices demonstrated during the realisation of the Human Genome Project do not reflect the state of data sharing in the community today: data sharing is not the default, but the exception.
Data sharing has continually been recognised as important, not only for the advancement of scientific knowledge, but also for the preservation of information: verification of conclusions and safeguarding against misconduct.
But data sharing in human genomics is a multifaceted challenge. Ethical considerations combined with the uniqueness of the genome of an individual require special precautions to enable sharing whilst protecting data privacy.
Here, we investigate the current extent of human genomic data sharing by examining the data handling processes and needs of human genomics researchers in different settings. We explore how researchers are including data access and data sharing in their current workflows and whether any bottlenecks need to be addressed to enable more efficient data collaborations.
P10 - Modelling electron transfer reactions in top-down Mass Spectrometry
Short Abstract: Top-down proteomics offers methods for protein sequence determination. These methods are particularly useful for detecting degradation products and for discovering both sequence variants and combinations of post-translational modifications. They all rely on the ability of the mass spectrometer to preselect ions in certain mass ranges, followed by their fragmentation and detection. Among fragmentation techniques, electron transfer is known for its high cleavage specificity. However, no formal quantitative model of that fragmentation process has yet been proposed.
We fill this gap by developing a probabilistic model of the electron transfer reactions. We break up the fragmentation process into a sequence of consecutive reaction events that occur in time and act on individual ions. The frequency of reaction events depends on the whole population of considered ions, with a general rule that highly charged ions react away more quickly. During one reaction event, the type of reaction is established out of the following four: fragmentation event (ETD), fragmentation followed by electron transfer between daughter ions, electron transfer not causing fragmentation (ETnoD), and proton transfer reaction (PTR). An individual ion can face several reaction events - a reaction pathway. The probability of occurrence of individual pathways is governed by the intensities of individual reactions.
Using state-of-the-art Bayesian MCMC routines, we can infer reaction intensities from real data. By doing so we provide answers to process-specific questions, e.g. when to stop the reaction to make the process most informative. The model can easily be generalised to answer other questions.
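The reaction-event picture above can be sketched as a small stochastic simulation. This is a toy only: the intensities are invented numbers (the actual model infers them from data via Bayesian MCMC), and reducing every event to "charge drops by one" is a simplifying assumption, not the model's full dynamics.

```python
import random

# Hypothetical reaction intensities for the four event types named in the
# abstract; these values are illustrative, not fitted.
INTENSITIES = {"ETD": 0.3, "ETD+ET": 0.1, "ETnoD": 0.4, "PTR": 0.2}

def simulate_pathway(charge, rng):
    """Follow one ion through reaction events until its charge is spent.

    Each event's type is drawn in proportion to its intensity; a higher
    initial charge allows more events, mirroring the rule that highly
    charged ions react away more quickly.
    """
    pathway = []
    while charge > 1:
        r = rng.random() * sum(INTENSITIES.values())
        for reaction, weight in INTENSITIES.items():
            if r < weight:
                pathway.append(reaction)
                break
            r -= weight
        charge -= 1  # simplification: every event lowers the charge by one
    return pathway

print(simulate_pathway(charge=4, rng=random.Random(0)))
```

A sampler like this is the forward direction of the model; inference runs the other way, recovering the intensities from observed spectra.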
P12 - Comprehensive study of sphingolipid metabolism in urothelial bladder cancer
Short Abstract: Sphingolipids (SL) are complex bioactive lipids. A notable body of work has been devoted to studying the influence of sphingolipid metabolism on cellular fate. Disruptions in the metabolic pathways controlling the dynamic balance between proapoptotic and prosurvival SLs are considered to underlie various diseases. Indeed, sphingolipids are known to have critical implications for the pathogenesis and treatment of cancer.
Here we present the results of a comparative analysis of the SL metabolism pathway in conventional transitional cell carcinoma (cTCC) and micropapillary urothelial carcinoma (MPUC). cTCC is the most common tumor of the entire urinary system, while MPUC is a variant characterized by aggressive biological behavior. Preliminary outcomes suggest a correlation between the level of disruption of the SL metabolism pathway and the specific cancer subtype.
Deregulation of the sphingolipid metabolism pathway is analyzed in a wider regulome context by means of Bayesian modeling. We propose an integrative genomic model assessing the influence of a given gene on patient survival. Model inference is performed by an efficient MCMC method and is based on publicly available data from different platforms (the TCGA database).
Finally, we confirmed that the expression of genes encoding SL metabolism enzymes and of proteins associated with their activity are both important factors in the progression of these cancer types.
P13 - Population genetics of response to environmental stress in sexual and asexual populations: a comparison.
Short Abstract: In our study we focus on the differences between asexual and sexual organisms in their response to environmental stress. Our study is performed within the framework of Fisher's geometric model, a well-established class of models in population genetics. Organisms' phenotypes are modelled in a geometric setting, as real-valued vectors that are randomly shifted by mutations. The distance between each organism's phenotype and the global optimal phenotype forms the basis of the fitness function.
In our study we model environmental stress by shifting the optimal phenotype and observe the adaptive response from the studied populations. In our mathematical approach we model populations as probability measures and apply tools from the theory of Lebesgue measure and integration to derive an analytical formula for the fixed-point, equilibrium population. We formally prove convergence and stability at the equilibrium point.
The mathematical analysis is complemented by a computational approach, which is used to study the effects of finite population sizes on the theoretical equilibrium, and also to study the cases which have proved mathematically intractable.
We shall present a comprehensive analysis of the differences in response to environmental stress, between sexual and asexual organisms, as well as attempt to study the advantages of one mode of reproduction over the other in conditions of environmental stress.
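The core ingredients of Fisher's geometric model as described above fit in a few lines. The Gaussian form of the fitness function and all parameter values below are conventional choices for illustration, not taken from the poster, and this sketch covers only individual fitness and mutation, not the measure-theoretic population dynamics.

```python
import math
import random

def fitness(phenotype, optimum, sigma=1.0):
    """Fitness decays with squared distance to the optimal phenotype.

    The Gaussian form exp(-d^2 / 2*sigma^2) is a standard convention in
    Fisher's geometric model, assumed here for illustration.
    """
    d2 = sum((p - o) ** 2 for p, o in zip(phenotype, optimum))
    return math.exp(-d2 / (2 * sigma ** 2))

def mutate(phenotype, step=0.1, rng=random):
    """A mutation randomly shifts the phenotype vector."""
    return [p + rng.gauss(0.0, step) for p in phenotype]

optimum = [0.0, 0.0]
parent = [1.0, 1.0]
# Environmental stress is modelled as a shift of the optimum; the
# displaced population must then re-adapt toward it.
stressed_optimum = [3.0, 0.0]
print(fitness(parent, optimum), fitness(parent, stressed_optimum))
```

Iterating mutation and fitness-weighted reproduction over a finite population, with or without recombination, gives the computational counterpart used to probe the regimes that resist analytical treatment.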
View Posters By Category
- A) Bioinformatics of Disease and Treatment
- B) Comparative Genomics
- C) Education
- D) Epigenetics
- E) Functional Genomics
- F) Genome Organization and Annotation
- G) Genetic Variation Analysis
- H) Metagenomics
- I) Open Science and Citizen Science
- J) Pathogen informatics
- K) Population Genetics Variation and Evolution
- L) Protein Structure and Function Prediction and Analysis
- M) Proteomics
- N) Sequence Analysis
- O) Systems Biology and Networks
- P) Other