How we do it
We apply the standard methodology for in silico trials with the most robust and versatile technology platform in our industry.
#EffectModel #diseasemodels #virtualpatients #WISE #GitHealth #SimWork #Haskell
The methodological standard for in silico clinical trials
A new standard has emerged from the Effect Model, a law discovered by Pr. Jean-Pierre BOISSEL, co-founder of NOVA, which rationalizes the results of in silico trials.
It took several centuries from ideation to the establishment of the double-blind, placebo-controlled RCT design as the gold standard. The Chaldean king Nebuchadnezzar II of the Neo-Babylonian Empire (c.605 BCE – c.562 BCE) had the intuition of comparing treated and untreated groups of soldiers to assess the effect of two different diets on their physical condition. Much later, the French philosopher Michel de Montaigne (16th century) intuited the importance of the placebo effect in assessing treatment efficacy. Then came the famous Lind trial in 1747, followed by a series of decisive contributions (Pearson, Claude Bernard, etc.) up to the landmark piece of legislation by the US Food & Drug Administration in 1962.
Pr. Jean-Pierre BOISSEL, who was instrumental in formalizing the placebo-controlled RCT design, discovered the Effect Model law, which is emerging as the industry standard for in silico clinical trials.
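The core idea of the Effect Model law can be sketched numerically: each (virtual) patient is characterised by Rc, the probability of the clinical event without treatment, which a model maps to Rt, the probability under treatment; the absolute benefit is AB = Rc − Rt. The sketch below is purely illustrative — the logistic (constant odds-ratio) link and the `beta` parameter are hypothetical choices for readability, not the functional form NOVA actually uses.

```python
def treated_rate(rc: float, beta: float = 0.5) -> float:
    """Hypothetical monotone link Rt = f(Rc): constant odds ratio beta.
    beta < 1 means the treatment reduces the odds of the event."""
    odds = rc / (1.0 - rc)
    treated_odds = beta * odds
    return treated_odds / (1.0 + treated_odds)

def absolute_benefit(rc: float, beta: float = 0.5) -> float:
    """AB = Rc - Rt: events prevented per patient treated."""
    return rc - treated_rate(rc, beta)

# The absolute benefit varies across patients even though the relative
# effect (the odds ratio beta) is constant -- the core insight behind
# ranking patients by expected individual benefit.
for rc in (0.1, 0.3, 0.6):
    print(f"Rc={rc:.1f}  Rt={treated_rate(rc):.3f}  AB={absolute_benefit(rc):.3f}")
```

With such a link, AB vanishes at both extremes (Rc near 0 or 1) and peaks at intermediate risk, which is why the same treatment yields very different benefits across a heterogeneous population.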
An innovative approach to modeling
With the emergence of Modeling & Simulation applied to drug R&D, it feels like data and computer scientists are taking over the industry. We hold the contrarian belief that data alone are an unreliable base material for predictive modeling, and that raw computing power won’t fix the failed R&D paradigm:
- Machine learning algorithms operate as black boxes
- Data-driven models are based on correlation analysis – in the words of Kitano “Although clustering analysis provides insight into the ‘correlation’ among genes and biological phenomena, it does not reveal the ‘causality’ of regulatory relationships”
- Algorithms can only be tasked with making inductive predictions based on past data, which is of limited use when exploring novel targets and mechanisms of action
- Data are time- and context-dependent, which severely impedes the predictive power of these algorithms when applied in different situations
- Data collected in an observational context are virtually useless for inferring reliable conclusions in the absence of an unbiased control group of patients
Mathematical representations of human physiology and pathology should be grounded in knowledge extracted from the scientific literature. Disease and treatment models should be multi-scale (from genes to cells, tissues and organs) and mechanistic (i.e. causal, e.g. “IkB kinase phosphorylates IkB, resulting in a dissociation of NF-kappaB from the complex with its inhibitor”).
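A mechanistic (causal) model of the kind described above can be sketched as a small system of rate equations. The toy model below encodes only the quoted causal statement — IKK activity releases NF-kappaB from its complex with IkB — integrated with a simple Euler scheme; the rate constants, species names, and step size are hypothetical, chosen for readability rather than biological accuracy.

```python
def simulate(ikk: float, k_release: float = 1.0, k_rebind: float = 0.5,
             complex0: float = 1.0, free0: float = 0.0,
             dt: float = 0.01, steps: int = 1000) -> float:
    """Integrate a two-species toy model:
        d[complex]/dt = -k_release * IKK * [complex] + k_rebind * [free]
        d[free]/dt    = +k_release * IKK * [complex] - k_rebind * [free]
    Total mass is conserved; returns the final free (active) NF-kB level.
    """
    c, f = complex0, free0
    for _ in range(steps):
        release = k_release * ikk * c   # IKK-driven dissociation
        rebind = k_rebind * f           # re-association with inhibitor
        c += dt * (rebind - release)
        f += dt * (release - rebind)
    return f

# Causality, not correlation: increasing the upstream cause (IKK
# activity) mechanistically increases the downstream effect (free NF-kB).
low, high = simulate(ikk=0.2), simulate(ikk=2.0)
```

The point of such a sketch is that the model's behaviour follows from stated causal mechanisms, so interventions (e.g. inhibiting IKK) can be simulated directly rather than inferred from correlations in past data.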
How we do it
The Effect Model enables us, based on the indicators available for each patient — whether clinical, genetic, imaging or biomarker data — to drill down to the individual level and go much further towards an individualized prescription.
Our work is to get to the bottom of these mechanisms: to identify, step by step, the sequences that lead to the disease and to the treatment that can be given.
Beyond its central role in running the models, the virtual population has other qualities. The first is the ability to bring together data that are today scattered across dispersed databases…
In the European project SysClad, Novadiscovery was tasked with exploring in silico the efficacy of partially blocking the mTOR signaling pathway for the prevention of lung transplant rejection.