Posters

Poster numbers will be assigned May 30th.
If you cannot find your poster below, that probably means you have not yet confirmed you will be attending ISMB/ECCB 2015. To confirm your poster, find the poster acceptance email, which contains a confirmation link; click it and follow the instructions.

If you need further assistance, please contact submissions@iscb.org and provide your poster title or submission ID.

Category I - 'Open Science and Citizen Science'
I01 - Using the Power of Big Data and Crowdsourcing for Catalyzing Breakthroughs in Amyotrophic Lateral Sclerosis (ALS)
Neta Zach, Prize4Life, Israel
Robert Kueffner, Helmholtz Zentrum, Germany
Hagit Alon, Prize4life, Israel
Raquel Norel, IBM T.J. Watson Research Center, United States
Alexander Sherman, Clinical Research Institute, Massachusetts General Hospital, United States
Jason Walker, Clinical Research Institute, Massachusetts General Hospital, United States
Ervin Sinai, Clinical Research Institute, Massachusetts General Hospital, United States
Igor Katsovskiy, Clinical Research Institute, Massachusetts General Hospital, United States
David Schoefeld, Harvard University, United States
Melanie Leitner, Biogen Idec, United States
Gustavo Stolovitzky, IBM T.J. Watson Research Center, United States
Short Abstract: Amyotrophic lateral sclerosis (ALS) is a fatal neurodegenerative disease with significant heterogeneity in its progression. This heterogeneity makes research, clinical care and drug development difficult.
To overcome this challenge, we developed the Pooled Resource Open-access ALS Clinical Trials (PRO-ACT) platform. The PRO-ACT database includes demographics, family history, vital signs, clinical assessments, lab-based measurements, treatment arm, and survival information from 8,600 ALS patients. The database was launched open access in December 2012, and since then over 400 researchers from over 40 countries have requested the data. We used these data to launch the DREAM ALS Prediction Prize4Life, a crowdsourcing challenge seeking the development of more accurate tools for estimating disease progression at the individual patient level.
The DREAM ALS Prediction Prize4Life drew over 1,000 registrants, who used three months of clinical data to predict disease progression a year later. In a simulation, the winning algorithms could reduce the number of patients needed for future clinical trials by 20%. The best-performing methods also predicted disease progression for a representative subsample of patients consistently better than a group of world-leading clinicians. Finally, the algorithms uncovered several novel predictors of disease progression that may shed light on the mechanisms behind the disease. The algorithms are now being used by several clinical trials and clinics.
These results demonstrate the value of large datasets and DREAM Challenges for developing a better understanding of ALS natural history, prognostic factors and disease variables. We are now working on another crowdsourced Challenge to further these findings.
I02 - Bioinformatics application which searches Co-Occurrence-based Co-Operational Formation with Advanced Method (COCOFAM)
Junseok Park, KAIST, Korea, Rep
Dongjin Jang, KAIST, Korea, Rep
Sungji Choo, KAIST, Korea, Rep
Sunghwa Bae, KAIST, Korea, Rep
Doheon Lee, KAIST, Korea, Rep
Short Abstract: The massive size of text corpora is a time-consuming issue in text-mining research. It is important to reduce the corpus through pre-processing to the documents that contain at least one term pair of interest. There have been many text-mining applications in the bioinformatics domain, but most have limitations in scalability and flexibility. We introduce a bioinformatics application that searches for Co-Occurrence-based Co-Operational Formation with an Advanced Method (COCOFAM) to solve these problems. COCOFAM can be used in a co-operational mode and a stand-alone mode. (1) The co-operational mode is used to work with other users over the internet. It behaves the same as the stand-alone mode on the user side but differently on the server side: once the co-operational mode is turned on, the application gathers previously computed co-occurrence results and then runs the rest of the job on the local machine. Finally, it shares the results with other users through a remote transmission system. COCOFAM thus helps users find co-occurrences more efficiently and build large biomedical text data sets for others. (2) The stand-alone mode processes pre-processing jobs locally. A user simply inputs a file listing word pairs and receives results from PubMed abstracts or customized raw texts. The tool can be executed in stand-alone Hadoop mode or run on other Hadoop systems. All modes provide statistical significance through Fisher's exact test. Our application will soon be downloadable at the COCOFAM web page, http://cocofam.kaist.ac.kr.
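The abstract notes that all COCOFAM modes report statistical significance via Fisher's exact test. As a minimal sketch of that test (not COCOFAM's actual code, and with made-up document counts), the one-sided p-value for a term pair's co-occurrence can be computed from a 2x2 table of document counts:

```python
from math import comb

def fisher_exact_greater(a, b, c, d):
    """One-sided (over-representation) Fisher's exact test on the
    2x2 table [[a, b], [c, d]] via the hypergeometric distribution."""
    n = a + b + c + d           # total documents
    row1, col1 = a + b, a + c   # table margins
    denom = comb(n, col1)
    # Sum P(X = k) over all tables at least as extreme as the observed one.
    return sum(comb(row1, k) * comb(n - row1, col1 - k)
               for k in range(a, min(row1, col1) + 1)) / denom

# Hypothetical counts for a term pair (A, B); the numbers are invented:
both, a_only, b_only, neither = 30, 70, 50, 9850
p = fisher_exact_greater(both, a_only, b_only, neither)
print(f"one-sided p = {p:.3g}")
```

With these invented counts the pair co-occurs in 30 documents where roughly 1 would be expected by chance, so the p-value is extremely small, flagging the pair as significantly co-occurring.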
I03 - Zegami: Image centric data analysis of large biological data sets
Stephen Taylor, Computational Biology Research Group Oxford University,
Roger Noble, Coritsu UK, Australia
Short Abstract: New, sophisticated imaging and visualization techniques yield large, heterogeneous, multidimensional data sets that need to be viewed, analyzed, annotated, queried and shared. We present Zegami, a browser-based, platform-independent viewer that allows seamless querying and visualization of thousands of images and their associated metadata. The application uses the latest HTML5, Javascript and Deep Zoom techniques to empower the user to combine visual information and database querying in a compelling user interface. Zegami is being used in gene expression visualization, Oxford Nanopore sequencing, colocalisation, optical mapping, in-situ RNA sequencing and many other areas.

For more information and demos of the software in action please see zegami.com and demo.zegami.com.
I04 - A General Concept for Consistent Documentation of Computational Analyses
Peter Ebert, Max Planck Institute for Informatics, Germany
Fabian Müller, Max Planck Institute for Informatics, Germany
Karl Nordström, Department of Genetics, Saarland University, Germany
Thomas Lengauer, Max Planck Institute for Informatics, Germany
Marcel H. Schulz, Cluster of Excellence on Multimodal Computing and Interaction, Saarland University, Germany
Short Abstract: The ever-growing amount of data in the field of life sciences demands standardized ways of high-throughput computational analysis. This standardization requires a thorough documentation of each step in the computational analysis to enable researchers to understand and reproduce the results. However, due to the heterogeneity in software setups and the high rate of change during tool development, reproducibility is hard to achieve. One reason is that there is no common agreement in the research community on how to document computational studies. In many cases, simple flat files or other unstructured text documents are provided by researchers as documentation, which are often missing software dependencies, versions and sufficient documentation to understand the workflow and parameter settings.
As a solution we suggest a simple and modest approach for documenting and verifying computational analysis pipelines. We propose a two-part scheme that defines a computational analysis using a Process and an Analysis metadata document, which jointly describe all necessary details to reproduce the results. We separate the metadata specifying the process from the metadata describing an actual analysis run, thereby reducing the effort of manual documentation to an absolute minimum. Our approach is independent of a specific software environment, results in human readable XML documents that can easily be shared with other researchers and allows an automated validation to ensure consistency of the metadata. Since our approach has been designed with little to no assumptions concerning the workflow of an analysis, we expect it to be applicable in a wide range of computational research fields.
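As an illustration of the two-part scheme, a reusable Process document might describe the pipeline itself while a separate Analysis document records one concrete run; the element and attribute names below are hypothetical and are not the authors' actual XML schema:

```xml
<!-- Process document: reusable description of the pipeline (hypothetical schema) -->
<process id="align-pipeline" version="1.0">
  <step name="map-reads">
    <software name="bwa" version="0.7.12"/>
    <parameter name="threads"/>  <!-- declared here, bound per run -->
  </step>
</process>

<!-- Analysis document: metadata for one concrete run of that process -->
<analysis process="align-pipeline">
  <input>sample01.fastq.gz</input>
  <binding parameter="threads" value="8"/>
  <output>sample01.bam</output>
</analysis>
```

Separating the two documents means the Process part is written once per pipeline, each run adds only a small Analysis record, and a validator has a fixed schema against which to check every run for consistency.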
I05 - e!DAL - a framework to store, share and publish research data
Daniel Arend, Leibniz Institute of Plant Genetics and Crop Plant Research (IPK) Gatersleben, Germany
Jinbo Chen, IPK Gatersleben, Germany
Christian Colmsee, IPK Gatersleben, Germany
Uwe Scholz, IPK Gatersleben, Germany
Matthias Lange, IPK Gatersleben, Germany
Short Abstract: In the first part of the presentation we will describe the minimal requirements of a long-term storage infrastructure for research data in the life sciences, which we worked out based on our experiences in a modern plant research institute. We will show that these features guarantee the permanent reusability of digital data and present how we realize them in the e!DAL-API we have developed. Besides the theoretical background, we will mention some technical details of the reference implementation of the API and show some first applications in the fields of Systems Biology and Plant Phenotyping.
The second part of the talk will be a short outlook on new functions and applications of the e!DAL-API that we have developed since the publication of the article. Furthermore, we present some statistics from the first productive use case of e!DAL to show the benefits of the API.
I06 - The Farr Commons: Healthcare assets all in one place - linked and discoverable!
Norman Morisson, University of Manchester,
Matthew Gamble, University of Manchester,
John Ainsworth, University of Manchester,
Carole Goble, University of Manchester,
Iain Buchan, University of Manchester,
Short Abstract: The Farr Commons (www.farrcommons.org) is an online shared space for building and exchanging Research Objects (www.researchobject.org): semantically rich aggregations of the digital assets created by the researchers and data scientists of the Farr Institute, an organisation that comprises 24 academic institutions and two MRC units across the UK.

The knowledge captured in these digital assets may be the result of intensive human activity. However, because they are digital, given the appropriate infrastructure, the costs for dissemination and reuse are negligible in comparison.

Here we present an initial prototype of the Farr Commons that builds upon the popular CKAN data catalogue. In this initial phase the commons focuses on discovery and citation by ensuring that there is a persistent unique identifier for each digital asset. The identifier enables digital assets to be found, shared and attributed, allowing them to be cited by other scholarly works. Each digital asset will have associated (i) provenance, which, minimally, identifies the creator(s) of the asset, those who have subsequently modified it, and how it was modified, and (ii) descriptive metadata to facilitate indexing, discovery and reuse. These three basic rules are sufficient to create a functioning data science commons that will enhance the working practices of Farr Data Scientists.
