WebmedCentral Editor

Dr. Gerald Lushington

Principal Consultant
LiS Consulting
2933 Lankford Dr.

Brief Biography:

Gerald Lushington / Professional Preparation

Gerry Lushington received a Ph.D. in theoretical chemistry in 1995 under Fritz Grein at the University of New Brunswick.  After a brief postdoctoral stint in which he refined his GSTEPS software suite for characterizing molecular magnetism, he undertook defense contractor positions at the U.S. Army Research Laboratory (1996) and the Ohio Supercomputer Center (1997-2001), supporting military research in computational chemistry and materials science.  In these roles he was a three-time recipient of DOD HPCMP Challenge Awards for studies on a variety of topics, including the development of prophylactic and therapeutic countermeasures for organophosphorus neurotoxicity. The latter work gave him the impetus to redirect his career toward the life sciences, where he has embraced computational biology, chemical informatics, drug discovery and structural biology.

Gerald Lushington / Molecular Modeling & Informatics

In more than eleven years as a research laboratory director at the University of Kansas (2001 - 2012), Dr. Lushington developed diverse interests in the methods and applications of computational life sciences techniques.  In addition to directing five shared research laboratories focused on molecular modeling, analytical chemistry, chemical informatics and bioinformatics, he held courtesy faculty appointments in the departments of Medicinal Chemistry and Chemistry.  In June 2012 he commenced his current role as principal consultant of his firm, LiS Consulting, which provides contract research, consulting and technical writing services to a broad range of clients (currently 11 distinct groups or institutions) in the pharmaceutical sciences and biotechnology arena.


Academic positions:

Professor of Human Nutrition (adjunct), Kansas State University

Principal consultant, LiS Consulting



Research interests:

Research philosophy

Computational science fundamentally entails translating basic information from one analytical level into actionable insight at another.  When such translation is performed with care and rigor, it can complement the original research with useful validation while simultaneously extending the impact of the originating information to broader application areas.  Consequently, informaticians are afforded uniquely important prospects in collaborative multidisciplinary research. Such opportunities have been the foundation and fuel of my career, and embody my aspirations for the future.

Gerald Lushington's research focus

The core of my research program has entailed seeking novel ways to bridge the distinct but complementary paradigms of statistical and deterministic modeling (sometimes known as informatics and simulation).  Computational power has flourished to the point where one can rapidly process massive volumes of biological data to extract and assemble information for next-generation simulation models.  The keys to accomplishing this are identifying basic properties (so-called descriptors) of the objects or phenomena of interest that are an appropriate basis for a model, and knowing which experimental data best encapsulate the property or process trends that one wishes to train in the model.  The resulting models then form the basis for improved simulations or better assessment, characterization or classification of objects or phenomena.
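The training loop described above can be sketched in miniature. The following is a purely illustrative example, not Dr. Lushington's actual methodology: it fits a least-squares line relating a single hypothetical descriptor (here labeled as a lipophilicity value) to an invented activity measurement, then uses the trained model to assess a new compound.

```python
def fit_univariate(x, y):
    """Least-squares fit of y ~ slope * x + intercept for one descriptor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)                      # variance term
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))   # covariance term
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical training set: descriptor values vs. measured activity.
# The numbers are invented toy data chosen to lie exactly on a line.
logp = [1.0, 2.0, 3.0, 4.0]
activity = [3.0, 5.0, 7.0, 9.0]

slope, intercept = fit_univariate(logp, activity)
predicted = slope * 2.5 + intercept  # assess a new compound  -> 6.0
```

In practice one would select among thousands of candidate descriptors and fit far richer models, but the principle is the same: choose descriptors that form an appropriate basis, then train against the experimental data that best encapsulate the trend of interest.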

Specific methods and applications

While we maintain a broad interest in life sciences applications, chemical biology is the primary passion of our lab, embracing problems such as the characterization of small-molecule interactions with protein receptors, the elucidation of structure- and property-based pharmacophores and toxicophores, and the development of assay-specific chemical diversity metrics for guiding combinatorial synthesis and library development.

To this end, we have been actively developing novel forms of the Comparative Binding Energy (COMBINE) methodology in order to predict the binding conformers of protein inhibitors (or substrates) with substantially greater accuracy than conventional molecular docking techniques afford.  Our specific innovations include extending the COMBINE methodology to rationalize inhibition trends among covalently bound inhibitors or substrates, developing multi-conformational COMBINE models to characterize affinity perturbations arising from ligand or receptor dynamics, and analyzing COMBINE coefficients for the purpose of pharmacophore optimization.

Our efforts on assay-specific diversity modeling are more recent; however, in preliminary studies on two assays published in the NIH's PubChem database, we have applied clustering algorithms and a systematic refinement algorithm to identify low-dimensional (fewer than five descriptors) molecular property space representations, condensed from collections of thousands of metrics that quantify molecular structure and physicochemical attributes.  These property space representations significantly enrich the distribution of active compounds into clusters with a reduced incidence of inactives relative to conventional diversity analyses, yielding a rapid computational prescreen that can be applied to proposed chemical libraries to evaluate their potential suitability for specific medicinal targets.
These techniques collectively have demonstrable value in improving the efficiency of target discovery and drug design processes, and thus form the basis for much of our future plans.
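The enrichment idea behind such a prescreen can be illustrated with a toy sketch. This is not the lab's actual refinement algorithm: it simply buckets compounds into grid cells of a (hypothetical, pre-scaled) two-descriptor space, then compares the active fraction of the richest cell against the overall hit rate; all compound data below are invented.

```python
from collections import defaultdict

def grid_clusters(compounds, bins=2):
    """Bucket descriptor vectors (pre-scaled to [0, 1]) into grid cells.

    compounds: list of (descriptor_vector, is_active) pairs.
    Returns a dict mapping each cell to its members' activity flags.
    """
    cells = defaultdict(list)
    for vec, active in compounds:
        cell = tuple(min(int(v * bins), bins - 1) for v in vec)
        cells[cell].append(active)
    return cells

def best_enrichment(compounds, bins=2):
    """Active fraction of the richest cell, relative to the overall hit rate."""
    cells = grid_clusters(compounds, bins)
    overall = sum(active for _, active in compounds) / len(compounds)
    best = max(sum(members) / len(members) for members in cells.values())
    return best / overall

# Invented example: actives concentrated in one region of descriptor space.
compounds = [
    ((0.10, 0.10), True),  ((0.20, 0.15), True),  ((0.15, 0.20), True),
    ((0.80, 0.90), False), ((0.90, 0.80), False), ((0.70, 0.85), False),
    ((0.10, 0.90), False), ((0.90, 0.10), True),
]
print(best_enrichment(compounds))  # -> 2.0 (best cell is twice the base rate)
```

A real workflow would replace the grid with proper clustering and search over descriptor subsets, but the figure of merit is the same: a representation is useful when it concentrates actives into clusters well above the background rate.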

In application areas where structural and activity data are not robust enough to support methods such as those described above, we have actively applied de novo simulations to infer the molecular origins of specific effects and bioactivity observations.  A key application area of ours has been the use of sophisticated molecular dynamics simulations to rationalize the antimicrobial efficacy of specific peptide formulations.  These simulations provide a basis for understanding how peptides can selectively compromise microbial cell membranes in a therapeutically desirable manner.

Open Source Software

Our research activities have benefited tremendously from the emergence of powerful, sophisticated and reliable open source scientific software.  The open source model is proving remarkably responsive to the needs of dynamic fields such as the life and biomedical sciences, since such projects both serve and feed upon the energy of the research community.  To vigorously promote these important activities, we maintain a detailed blog on new and emerging open source software tools that have become important in our pursuit of molecular modeling and informatics research.


Any other information:

My next invited talk:  Proteomics-2013, on Bridging gaps in the chemical interaction matrix via biclustering across activity-modulated feature space

Note that I also offer technical writing and proofreading services.


What I think of the idea behind WebmedCentral:

Clearly the volume of material now being submitted for publication is far greater than in prior generations, and this is stretching the capacity of credible peer review as we know it.  It is hard to say what the right solution to this problem will be, but it is necessary to explore innovations such as the WMC concept.

Please also see my recent blog post (scroll down to the 12/14/2012 post on post-publication peer-review).


Home Page: