Leading Professional Society for Computational Biology and Bioinformatics
Connecting, Training, Empowering, Worldwide

ROCKY 2019 | Dec 5 – 7, 2019 | Aspen/Snowmass, CO | Keynote Speakers

KEYNOTE SPEAKERS


Links within this page:
Joseph Allison
Kevin Bretonnel Cohen
Justin Guinney
Joyce C. Ho
Kirk E. Jordan
Oluwatosin Oluwadare
Heinrich Röder
Joint Presentation: Nimisha Schneider and Ted Foss

JOSEPH ALLISON, PhD
Bioinformatic Scientist
SomaLogic, Inc.
Colorado, USA

Biography (web)

A Story of Proteomic Statistical Process Control at SomaLogic

The processes that have been implemented to support and run the SomaScan® Assay at SomaLogic are incredibly robust, but before the introduction of Umbrella, post-hoc tracking of these processes and their longitudinal stability was spread across the company's collective consciousness. Umbrella is a bespoke full-stack data science platform designed to coalesce that collective knowledge and report back to the company at large. These reports range from simple control charting to unsupervised ML pipelines that detect nuanced assay artifacts, identify potential causes, and document the impact on our SOMAmer® reagents' signals. It is this system that enables us to robustly distinguish between biological signal and assay artifact.
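
As an illustration of the simplest layer of such a system, the sketch below implements Shewhart-style 3-sigma control charting over per-run assay signals. Umbrella itself is proprietary; the data layout, column names, and thresholds here are hypothetical.

    import numpy as np
    import pandas as pd

    def control_limits(baseline: pd.Series, k: float = 3.0):
        """Mean +/- k standard deviations, estimated from a trusted baseline window."""
        mu, sigma = baseline.mean(), baseline.std(ddof=1)
        return mu - k * sigma, mu + k * sigma

    def flag_out_of_control(runs: pd.DataFrame, n_baseline: int = 50) -> pd.DataFrame:
        """Flag runs whose signal falls outside the 3-sigma limits, per reagent."""
        flags = {}
        for reagent in runs.columns:
            lo, hi = control_limits(runs[reagent].iloc[:n_baseline])
            flags[reagent] = (runs[reagent] < lo) | (runs[reagent] > hi)
        return pd.DataFrame(flags)

    # Simulated signals: 200 runs of 3 hypothetical control reagents, with a
    # step-change artifact injected into one reagent partway through.
    rng = np.random.default_rng(0)
    runs = pd.DataFrame(rng.normal(1000, 25, size=(200, 3)),
                        columns=["reagent_a", "reagent_b", "reagent_c"])
    runs.loc[150:, "reagent_b"] += 120  # simulated assay artifact
    print(flag_out_of_control(runs).sum())  # out-of-control counts per reagent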


KEVIN BRETONNEL COHEN, PhD
Director, Biomedical Text Mining Group
Computational Bioscience Program
University of Colorado School of Medicine
USA

Biography (web)

Two Existential Threats to Biomedical Text Mining...and How to Address Them with Natural Language Processing

The reproducibility crisis calls into question some of the most fundamental use cases of biomedical natural language processing: if 65% of the scientific literature is questionable, what is the point of mining it? Meanwhile, computational research has mostly been immune to the crisis, but there is no a priori reason to expect that state of affairs to continue. This talk proposes that natural language processing itself can address this issue on both fronts. But how?


JUSTIN GUINNEY, PhD
VP, Computational Oncology, Sage Bionetworks
Affiliate Associate Professor, University of Washington
Director, DREAM Challenges
Washington, USA

Biography (web)

The Model-to-Data Paradigm: Overcoming Data Access Barriers in Biomedical Competitions

Data competitions often rely on the physical distribution of data to challenge participants, a significant limitation given that much data is proprietary, sensitive, and often non-shareable. To address this, the DREAM Challenges organization has advanced a challenge framework called model-to-data (M2D), which requires participants to submit re-runnable algorithms instead of model predictions. The DREAM organization has successfully completed multiple M2D-based challenges and is expanding this approach to unlock highly sensitive and non-distributable human data for use in biomedical data challenges.
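
To make the M2D idea concrete, the sketch below shows what the entry point of a re-runnable submission might look like: the organizers run the participant's code (typically packaged as a container) against hidden data and collect a predictions file. The mount paths, file names, and columns here are hypothetical, not the actual DREAM specification.

    # Hypothetical entry point for a model-to-data submission: the organizers
    # mount the hidden data read-only, run this script inside a container,
    # and collect the predictions file. Paths and columns are illustrative only.
    import pandas as pd
    import joblib

    INPUT_DIR = "/data"          # hidden challenge data, mounted read-only
    OUTPUT_DIR = "/output"       # where the harness expects predictions
    MODEL_PATH = "model.joblib"  # model weights shipped inside the container

    def main():
        patients = pd.read_csv(f"{INPUT_DIR}/patients.csv")  # hypothetical file
        features = patients.drop(columns=["person_id"])
        model = joblib.load(MODEL_PATH)  # trained elsewhere, frozen at submission
        scores = model.predict_proba(features)[:, 1]
        out = pd.DataFrame({"person_id": patients["person_id"], "score": scores})
        out.to_csv(f"{OUTPUT_DIR}/predictions.csv", index=False)

    if __name__ == "__main__":
        main()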

The EHR DREAM Challenge is the first M2D challenge utilizing EHR data. We are asking participants to predict patient mortality within six months of their last hospital visit, using data from the University of Washington Medical System enterprise data warehouse (2009-2019), covering over 1.3 million patients and 22 million visits. Given the highly sensitive nature of EHR data and risks of re-identifiability, the M2D approach is being used to ensure data privacy and protections.

Prior to launching the EHR Challenge, we completed a feasibility study and developed 3 models: one using demographic information; one using demographic information and 4 common chronic diseases; and one using demographic information and the top 20 indications. Model performance, measured by area under the receiver operating characteristic curve (AUROC), was 0.682, 0.794, and 0.723, respectively. This demonstrated the technical robustness of the challenge architecture and the ability to generate and evaluate predictive algorithms in a secure manner. The EHR Challenge was launched in September 2019 and will close in January 2020.
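
As a hedged illustration of how a demographics-only baseline of this kind might be fit and scored with AUROC, here is a sketch on synthetic data; the real feasibility models and their features are described only at the level given above.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a demographics-only mortality baseline;
    # the features and the outcome model are invented for the demo.
    rng = np.random.default_rng(42)
    n = 10_000
    age = rng.integers(18, 95, n)
    sex = rng.integers(0, 2, n)               # binary-coded for illustration
    X = np.column_stack([age, sex])
    p = 1 / (1 + np.exp(-0.06 * (age - 70)))  # toy risk rising with age
    y = rng.random(n) < p

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))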



JOYCE C. HO, PhD
Assistant Professor, Computer Science Department
Emory University
Georgia, USA

CV (.pdf)

Developing an Evidence Matching Framework Using Web-based Medical Literature

Researchers are discovering new disease subgroups from secondary analyses of electronic health records. However, such subgroups need to be validated or aligned with the current literature. We developed a scalable framework that produces evidence sets (i.e., sets of relevant articles) using a large corpus of online medical literature. In this talk, I will discuss some of the challenges associated with term representation and mining biomedical text. I will also present a case study of our framework to validate EHR-based phenotypes.
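
A minimal sketch of the retrieval step at the heart of such a framework, using TF-IDF vectors and cosine similarity to rank abstracts against a phenotype description; the actual framework's term representation is more sophisticated, and the corpus and query below are toy stand-ins.

    # Toy sketch of evidence retrieval: rank literature abstracts by cosine
    # similarity to a phenotype description in TF-IDF space.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    abstracts = [  # stand-ins for a large corpus of medical literature
        "Heart failure with preserved ejection fraction in elderly patients.",
        "Type 2 diabetes mellitus and insulin resistance phenotypes.",
        "Chronic kidney disease progression in diabetic cohorts.",
    ]
    phenotype = "EHR-derived subgroup: diabetes with early renal decline"

    vectorizer = TfidfVectorizer(stop_words="english")
    doc_vecs = vectorizer.fit_transform(abstracts)
    query_vec = vectorizer.transform([phenotype])

    scores = cosine_similarity(query_vec, doc_vecs).ravel()
    for idx in scores.argsort()[::-1]:  # evidence set, best match first
        print(f"{scores[idx]:.3f}  {abstracts[idx]}")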



KIRK E. JORDAN, PhD
IBM Distinguished Engineer
Data Centric Solutions
IBM T.J. Watson Research
and
Chief Science Officer
IBM Research UK


Biography (web)

Intelligent Simulations - Incorporating AI into Computational Simulations

We are at an inflection point in the way we compute today and the way we will compute tomorrow. Computer systems will increasingly consist of heterogeneous components, adding to their complexity, yet we need to make high-end systems much more accessible. Compounding this machine complexity, ever more compute is required to tackle the increasing complexity of the problems and data driving scientific investigation, especially in the life sciences. How will we address this ever-increasing complexity? IBM Research is looking at the future of computing through a new lens consisting of bits, neurons, and qubits. In this talk I will describe how IBM Research views the future of computing, and I will outline how we are increasing the use of Artificial Intelligence (AI) and Machine Learning (ML) to address some of this complexity while enabling us to tackle the interesting science questions we and our collaborators are currently investigating, with some examples in the life sciences.
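
As one illustrative pattern for folding ML into simulation (a generic sketch, not IBM's specific method): train a cheap surrogate on input/output pairs from an expensive simulator, then use it to screen candidate inputs before committing full simulation runs.

    # Illustrative surrogate-modeling sketch: replace an expensive simulation
    # step with a regressor trained on previously computed runs.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def expensive_simulation(x: np.ndarray) -> np.ndarray:
        """Stand-in for a costly physics code (toy closed form here)."""
        return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1] ** 2)

    rng = np.random.default_rng(1)
    X_train = rng.uniform(-1, 1, size=(50, 2))  # a few real simulator runs
    y_train = expensive_simulation(X_train)

    surrogate = GaussianProcessRegressor().fit(X_train, y_train)

    # Screen many candidate inputs cheaply; only the most promising would be
    # passed back to the full simulator for verification.
    candidates = rng.uniform(-1, 1, size=(10_000, 2))
    pred = surrogate.predict(candidates)
    print("candidate to simulate next:", candidates[pred.argmax()])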



OLUWATOSIN OLUWADARE, PhD
Assistant Professor

Department of Computer Science and Bachelor of Innovation
College of Engineering and Applied Science
University of Colorado
USA

CV (.pdf)

3D Chromosome and Genome Structure Modeling

To improve the understanding of chromosome organization within a cell, chromosome conformation capture techniques such as 3C, 4C, 5C, and Hi-C have been developed. These technologies help to determine the spatial positioning and interaction of genes and chromosome regions within a genome. Using next-generation sequencing strategies such as high-throughput and parallel sequencing, Hi-C can profile read pair interactions on an "all-versus-all" basis; that is, it can profile interactions for all read pairs in an entire genome. The development of chromosome conformation capture techniques, particularly Hi-C, has substantially benefited the study of spatial proximity, interaction, and genome organization across many cell types. In recent years, numerous genome structure construction algorithms have been developed to explain the roles of three-dimensional (3D) structure in the cell and to explain differences between diseased and normal cells. Three-dimensional inference involves the reconstruction of a genome's 3D structure, or in some cases an ensemble of structures, from contact interaction frequencies represented in a two-dimensional matrix. To solve this 3D inference problem, we developed 3DMax, an optimization-based algorithm that performed better than existing tools for chromosome and genome 3D structure prediction. 3DMax has been packaged as a software tool and is publicly available to the research community. It performs well with noisy data, and its performance is unaffected by changing normalization methods, which is not the case for many other existing methods.
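
For intuition, the sketch below shows the generic optimization structure of this 3D inference problem: convert contact frequencies to wish distances via an inverse power law, then fit coordinates that reproduce those distances. This is not the 3DMax algorithm itself, whose maximum-likelihood objective differs in detail; it only illustrates the shared setup.

    # Generic sketch of 3D inference from a contact matrix: convert contact
    # frequencies to wish distances (d ~ 1/f**alpha), then optimize 3D
    # coordinates to match them.
    import numpy as np
    from scipy.optimize import minimize

    def reconstruct(contacts: np.ndarray, alpha: float = 1.0) -> np.ndarray:
        n = contacts.shape[0]
        with np.errstate(divide="ignore"):
            wish = 1.0 / contacts ** alpha          # frequency -> distance
        mask = np.isfinite(wish) & ~np.eye(n, dtype=bool)

        def stress(flat):
            xyz = flat.reshape(n, 3)
            d = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=-1)
            return np.sum((d[mask] - wish[mask]) ** 2)

        x0 = np.random.default_rng(0).normal(size=n * 3)
        res = minimize(stress, x0, method="L-BFGS-B")
        return res.x.reshape(n, 3)

    # Toy contact matrix for a 10-bead chain: nearby beads interact most.
    idx = np.arange(10)
    contacts = 1.0 / (np.abs(idx[:, None] - idx[None, :]) + 1.0) ** 2
    structure = reconstruct(contacts)
    print(structure.shape)  # (10, 3): one 3D point per genomic bin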


HEINRICH RÖDER, PhD
Biodesix, Inc.
Colorado, USA

Biography (web)

Development of Clinically Relevant Tests from Human Serum Samples: A Look at the Circulating Proteome

Therapeutics for the treatment of cancer patients have been transformed with the introduction of immunotherapies, starting with checkpoint inhibition. Instead of targeting the tumor itself, the mechanism of action for immunotherapies relies on reactivation of the immune system such that the host can re-engage the cancer using the complex system that evolved to heal human disease. In oncology, clinical trials have proven that immunotherapies are effective at reducing tumor burden and extending survival in cancer patients across many indications. However, not all patients benefit from all immunotherapies. Specifically, there is a subgroup of patients whose lack of response, referred to as primary immunotherapy resistance (PIR), may be attributed to a compromised immune system. A test identifying patients with PIR, prior to treatment with specific immunotherapies, would be useful for guiding therapeutic decision making.

Biodesix uses a hypothesis-free approach to build clinically relevant tests, creating multivariate classifiers related to deep learning that reflect the complexity of biological interactions without bias from expectations about their mechanisms. Mass spectral data collected from human serum samples are analyzed by the Diagnostic Cortex®, a robust machine-learning-based data analytics platform, to design classifiers with clinical relevance. Using this approach, we have developed multiple independently validated tests to identify patients with melanoma and lung cancer who have particularly poor outcomes on anti-PD-1 immunotherapy and therefore may be unsuitable candidates for treatment with checkpoint inhibition.

These tests stratify patients into different immunological phenotypes with different outcomes on immunotherapies. We applied ideas similar to GSEA (Gene Set Enrichment Analysis) to mass spectral data (PSEA: Protein Set Enrichment Analysis) to gain biological insight into the processes detectable in the circulating proteome related to these phenotypes. We found that host immunological functions, such as the acute phase response, wound healing, and the complement system, are related to test classification labels, indicating that treatment success does not depend solely on a single molecule, protein, or signaling pathway. Our systems biology approach combining proteomics and machine learning methods is hypothesis generating and requires further external validation; however, our findings have been supported by independent research groups using orthogonal approaches.
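
A minimal sketch of the enrichment idea (not the exact PSEA procedure): rank all measured proteins by their association with the test's classification labels, then ask whether a predefined functional set ranks unusually high. The data and protein set below are synthetic.

    import numpy as np
    from scipy.stats import mannwhitneyu

    rng = np.random.default_rng(7)
    n_samples, n_proteins = 100, 500
    X = rng.normal(size=(n_samples, n_proteins))  # protein signal matrix
    labels = rng.integers(0, 2, n_samples)        # test classification labels

    # Make a hypothetical "acute phase response" set truly label-associated.
    acute_phase = np.arange(20)                   # indices of the protein set
    X[np.ix_(labels == 1, acute_phase)] += 0.8

    # Association score per protein: mean signal difference between groups.
    score = X[labels == 1].mean(axis=0) - X[labels == 0].mean(axis=0)

    # Does the set's association strength exceed the background's?
    in_set = np.abs(score[acute_phase])
    background = np.abs(np.delete(score, acute_phase))
    stat, p = mannwhitneyu(in_set, background, alternative="greater")
    print(f"enrichment p-value for the protein set: {p:.2e}")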


JOINT PRESENTATION:
 
NIMISHA SCHNEIDER, PhD
QuartzBio, part of Precision for Medicine
Maryland, USA

Biography (.pdf)
TED FOSS, PhD
Director, Systems and Data Integration
QuartzBio, part of Precision for Medicine
Maryland, USA

LinkedIn profile (web)

Coupling Data-Driven and Mechanistic Modeling Approaches Through the Application of a Scalable, Knowledge-Driven Framework and High-Throughput Public Omics Data Sources

Advances in high-throughput measurement technologies (-omics data) have made it possible, and increasingly affordable, to generate high-complexity, high-volume data for medical research, and these data are increasingly available to researchers through public sources. Rapidly mining these data for useful mechanistic insights can be a challenge, given their complexity; for example, The Cancer Genome Atlas (TCGA) contains multi-omic and clinical profiles of more than 11,000 patients across 33 cancer subtypes, comprising over 500,000 files and 1 billion measurements. This talk will outline (1) how we are building the infrastructure and methods that couple data-driven approaches with a knowledgebase to enable mechanistic modeling of public data sources, (2) the challenges we face when modeling these kinds of data, e.g., overfitting of models due to large feature sets on small sample sizes, and (3) a case study on one approach in which we mined TCGA for mechanistic insights.
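
A small sketch of the overfitting problem named in (2), with far more features than samples, and one standard mitigation, cross-validated L1 regularization; the data below are synthetic, not TCGA.

    # With far more features than samples (common in multi-omics), an
    # unregularized fit memorizes noise; cross-validated regularization
    # (lasso-penalized logistic regression here) is one standard mitigation.
    import numpy as np
    from sklearn.linear_model import LogisticRegressionCV
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(3)
    n_samples, n_features = 80, 5_000            # p >> n, as in omics panels
    X = rng.normal(size=(n_samples, n_features))
    # Only the first 5 features truly carry signal in this synthetic setup.
    y = (X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_samples)) > 0

    model = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10, cv=5)
    print(cross_val_score(model, X, y, cv=5).mean())  # honest out-of-fold accuracy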
   