In 2009, a typical study involved a single modality and 100 cells. Today, it's common for dozens of
studies to be performed on 1,000,000 cells. This exponential growth of data and modalities has
caused time-to-discovery to slow, as loading, saving, replicating, and recovering hundreds of gigabytes,
or even terabytes, of data to and from storage takes minutes to hours.
The new category of Big Memory Computing consists of DRAM and much lower-cost persistent memory
virtualized into a pool of software-defined memory.
Once the memory is virtualized, in-memory snapshots allow terabytes of bioscience data to be loaded,
saved, replicated, and recovered in seconds. This lets bioinformatics pipelines work with massive data sets
at the speed of memory, and it transforms memory into a high-availability data storage tier.
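To give a rough sense of why an in-memory snapshot can be orders of magnitude faster than a round trip to storage, the sketch below contrasts serializing a matrix to disk and reloading it with capturing it via a copy-on-write fork. The fork is only a stand-in for a software-defined memory snapshot, not MemVerge's actual mechanism or API, and the file path and matrix dimensions are arbitrary assumptions for illustration.

```python
import os
import time
import numpy as np

# Stand-in for a large single-cell expression matrix (~80 MB of float64).
data = np.random.rand(5_000, 2_000)

# Conventional approach: persist to storage, then reload from it.
t0 = time.time()
np.save("/tmp/matrix.npy", data)          # hypothetical scratch path
reloaded = np.load("/tmp/matrix.npy")
print(f"save + reload via storage: {time.time() - t0:.3f} s")

# Snapshot-style approach: a copy-on-write fork captures the process's
# memory image near-instantly; pages are shared until modified.
# (Requires a POSIX system; illustrative analogy only.)
t0 = time.time()
pid = os.fork()
if pid == 0:
    # Child sees `data` immediately, with no copy or re-read from storage.
    _ = data[0, 0]
    os._exit(0)
os.waitpid(pid, 0)
print(f"copy-on-write snapshot: {time.time() - t0:.3f} s")
```

The point of the sketch is the asymmetry: the storage path scales with data volume, while the snapshot path is nearly constant-time, which is what allows terabyte-scale working sets to be captured and recovered in seconds.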
In this presentation, MemVerge shows how Analytical Biosciences, Penn State University, SeekGene, and
TGen accelerated their time-to-discovery and/or increased the availability of their genomic analysis
pipelines by implementing Big Memory Computing.