How two distinct, yet complementary approaches could revolutionize R&D

Mechanistic modeling and artificial intelligence (AI) are both transforming medicine. AI is increasingly being applied to drug discovery and diagnosis, for instance to screen high-volume compound libraries or to detect patterns in medical images. Mechanistic models of biological systems and treatment effects are starting to inform target selection and pre-clinical and clinical trial design, and to guide how clinical data is translated into the real world.

Mechanistic models and AI are different

The two approaches are fundamentally different. AI – often treated as synonymous with “Big Data” – is essentially a number-crunching game. It derives patterns and correlations from high volumes of data, but cannot itself uncover causality or underlying mechanisms. Correlations can be useful – for example, in teasing out the subsets of a given class of biomarkers most strongly associated with coronary artery disease – and they can prompt further experimentation. But AI applied to observational (as opposed to experimental) data cannot by itself determine how or why things happen.

Mechanistic (or physics-based) models are built using knowledge and information about the way the world works. They encode well-established laws and relationships and can be used to generate testable predictions.[1] Mechanistic models can project beyond the data they are fed, elucidating how (and, at the mechanistic level, why) certain scenarios are likely to arise.
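To make this concrete, here is a minimal illustration in Python (the model and every value in it are hypothetical, chosen only for illustration): first-order drug elimination is a well-established pharmacological relationship, and encoding it yields predictions that can be tested against measurements.

```python
# A minimal, hypothetical illustration of a mechanistic model: first-order
# drug elimination, dC/dt = -k * C, whose solution is C(t) = C0 * exp(-k * t).
import numpy as np

def predict_concentration(c0, k, t):
    """Predicted drug concentration at time t (solution of dC/dt = -k*C)."""
    return c0 * np.exp(-k * t)

# A testable prediction: concentration after 6 hours, given a 10 mg/L
# starting concentration and an assumed elimination rate of 0.2 per hour.
print(predict_concentration(c0=10.0, k=0.2, t=6.0))  # ~3.01 mg/L
```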

Models and AI both have limitations

Neither approach is perfect. Models are limited by our existing knowledge of the world and by our ability to represent it coherently and accurately using mathematical equations. AI’s utility in fields like biology and medicine is limited by the amount, quality and selection of input data (including the problem of hidden data – relevant data that is overlooked or missing).

Ballooning data volumes – think genomics and all the other ‘omics’ – should be good for AI. After all, more data is usually better. But, in medicine, we don’t properly understand the significance of much of that data. The function of most genes, for instance, remains a mystery. So does the relevance of many of the tens of millions of proteins in a single living cell to any given health condition. So, even with large amounts of high-quality data, selecting which data to use – let alone drawing conclusions from any resulting patterns – can be misleading.

Models and AI complement one another

But mechanistic modeling and AI can also enhance one another.

AI can generate data and knowledge to help calibrate, or ‘tune’, a mechanistic model. Calibration involves selecting input values (parameters) that ensure a ‘best fit’ between model output and actual experimental output. But sometimes, there aren’t enough of the right kinds of input data available to get the tuning exactly right. AI can bring additional information, for example by crunching data around related compounds or mechanisms to generate plausible value ranges or distributions. It doesn’t replace missing data, but it can help uncover some likely characteristics of that data – enough for model calibration, if not for model design.
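As a hypothetical sketch of what such calibration can look like in practice – reusing the toy elimination model above, with invented measurements and an invented AI-derived parameter range – the problem becomes a bounded fit:

```python
# A hypothetical calibration sketch: fit C(t) = C0 * exp(-k * t) to a few
# measurements, constraining the search to an AI-suggested plausible range.
import numpy as np
from scipy.optimize import least_squares

t_obs = np.array([0.5, 1.0, 2.0, 4.0, 8.0])  # sampling times (hours)
c_obs = np.array([9.1, 8.3, 6.9, 4.8, 2.3])  # measured concentrations (mg/L)

def residuals(params):
    c0, k = params
    return c0 * np.exp(-k * t_obs) - c_obs   # model output minus experiment

# The bounds stand in for plausible ranges inferred (e.g., by ML) from
# related compounds; they narrow the search rather than replace the data.
fit = least_squares(residuals, x0=[10.0, 0.1],
                    bounds=([5.0, 0.05], [15.0, 0.5]))
c0_hat, k_hat = fit.x
print(f"calibrated C0 = {c0_hat:.2f} mg/L, k = {k_hat:.3f} per hour")
```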

Similarly, models can generate data to feed into AI, helping to uncover patterns that neither tool could find alone. They can also uncover and explain latent variables – variables that are not directly observable, but are inferred from those that are.
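A hypothetical sketch of this direction, again using the toy model (the virtual patients are simulated, and the choice of regressor is illustrative): the mechanistic model generates labeled training data, and an ML model learns to infer the latent elimination rate from observable measurements alone.

```python
# A hypothetical sketch: a mechanistic model generates synthetic training
# data, and an ML regressor learns to infer a latent variable (the
# elimination rate k) from the observable concentration profile.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])            # sampling times (hours)

# Simulate 1,000 virtual patients from the mechanistic model.
k_true = rng.uniform(0.05, 0.5, size=1000)         # latent parameter
profiles = 10.0 * np.exp(-np.outer(k_true, t))     # observable outputs
profiles += rng.normal(0, 0.2, profiles.shape)     # measurement noise

# Learn the mapping from observations back to the latent variable.
reg = RandomForestRegressor(n_estimators=100).fit(profiles, k_true)
new_profile = 10.0 * np.exp(-0.2 * t)              # an unseen observation
print(f"inferred k = {reg.predict([new_profile])[0]:.3f}")  # close to 0.2
```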

As AI becomes more widespread, it may throw up unexpected or unexplained patterns in data, whose origins could then be explored using mechanistic models. AI is unlikely to directly fill gaps in our knowledge – and thus in our mechanistic models. But it could provide the seed of a hypothesis for further investigation. (AI-derived hypotheses can be tested more easily and convincingly using mechanistic models, where available, than by re-running AI on other datasets.)

Using complementary approaches may also help reduce bias – a risk common to most data-mining efforts. Bias may occur in the selection of data used to build models or run algorithms. It may also creep into the design of the algorithms themselves, perhaps hidden in correlations. (Racial bias was found in algorithms used to predict recidivism in the US, for example.) Validating and cross-checking the output of two different systems is more likely to uncover such anomalies than relying on either one alone.

Mechanistic models and AI should evolve in tandem

For mechanistic models and AI to mutually reinforce each other in the ways described, they must evolve at similar rates.

Some, like Professor Peter Coveney at the Centre for Computational Science at University College London, argue that AI and Big Data projects are racing ahead, offering an alluring short-cut to ‘knowledge’ that is not underpinned by any real grasp of the underlying systems. That may not matter in movie selection, but it does in drug development.

It takes much longer to build and validate a mechanistic disease or treatment model, based on knowledge and existing data, than to feed trillions of data points into a powerful pattern-finding computer.

Yet as knowledge, analytics and processing power evolve, mechanistic models, too, are becoming more sophisticated and widespread. Regulators are embracing their potential to help expedite many aspects of drug R&D.

The greatest progress – and health impact – will come from continuing to model and elucidate the mechanisms underlying complex systems like our own selves, while also harnessing the undoubted power of AI and Big Data.


1. Some argue that machine learning can generate predictions or ‘new knowledge’. But these are often difficult to replicate or validate, given the ‘black box’ nature of ML algorithms. Further, in highly complex fields like biology (unlike, say, social media or online retail), inputting the appropriate range, type and volume of data in order to generate meaningful patterns or correlations is difficult without an understanding of underlying system characteristics.