Genomics pipelines and data integration: challenges and opportunities in the research setting.

Jeremy Davis-Turak, Sean Courtney, E. Starr Hazard, W. Bailey Glen Jr., Willian da Silveira, Timothy Wesselman, Bethany Wolfe, Dongjun Chung, Gary Hardiman

Research output: Contribution to journal › Article

26 Citations (Scopus)

Abstract

The emergence and mass utilization of high-throughput (HT) technologies, including sequencing technologies (genomics) and mass spectrometry (proteomics, metabolomics, lipidomics), has allowed geneticists, biologists, and biostatisticians to bridge the gap between genotype and phenotype on a massive scale. These new technologies have brought rapid advances in our understanding of cell biology, evolutionary history, and microbial environments, and they are increasingly providing new insights and applications in clinical care and personalized medicine.

Areas covered: The very success of this industry also translates into daunting big data challenges for researchers and institutions that extend beyond the traditional academic focus of algorithms and tools. The main obstacles revolve around analysis provenance, data management of massive datasets, ease of use of software, and the interpretability and reproducibility of results.

Expert commentary: The authors review the challenges associated with implementing bioinformatics best practices in a large-scale setting and highlight the opportunity to establish bioinformatics pipelines that incorporate data tracking and auditing, enabling greater consistency and reproducibility in basic research, translational, and clinical settings.
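To illustrate the kind of data tracking and auditing the abstract refers to, the following is a minimal sketch, not taken from the article: a hypothetical pipeline wrapper that checksums inputs and outputs and appends a provenance record for each step to an audit log. All function names, file names, and the choice of tools in the usage comment are assumptions for illustration only.

```python
import hashlib
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path


def sha256sum(path: Path) -> str:
    """Checksum a file so results can be traced back to the exact input data."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def run_step(cmd: list[str], inputs: list[Path], outputs: list[Path], log: Path) -> None:
    """Run one pipeline step and append a provenance record to an audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "command": cmd,
        "inputs": {str(p): sha256sum(p) for p in inputs},
    }
    subprocess.run(cmd, check=True)  # fail loudly so errors are visible in the audit trail
    record["outputs"] = {str(p): sha256sum(p) for p in outputs}
    with log.open("a") as fh:
        fh.write(json.dumps(record) + "\n")  # one JSON line per executed step


# Hypothetical usage: sort an alignment file and record what was run, on what, producing what.
# run_step(["samtools", "sort", "-o", "sample.sorted.bam", "sample.bam"],
#          inputs=[Path("sample.bam")],
#          outputs=[Path("sample.sorted.bam")],
#          log=Path("audit.jsonl"))
```

An append-only log of commands, input checksums, and output checksums is one simple way to make an analysis reproducible and auditable; production workflow systems provide richer versions of the same idea.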
Original language: English
Pages (from-to): 225-237
Number of pages: 13
Journal: Expert Review of Molecular Diagnostics
Volume: 17
Early online date: 25 Jan 2017
DOIs
Publication status: Published - 17 Mar 2017

