When embarking on a new R&D program, drug developers face a strategic and moral imperative to assemble the best, most relevant available data and insights. The explosion in scientific literature and information sources makes this challenging, yet also underscores its importance: exposing human subjects to experimental treatments without fully capturing the best of what science has already uncovered is ethically questionable and commercially unwise.
Curated knowledge repositories allow R&D experts to systematically capture validated information, filter out unreliable data and irrelevant claims, and remain alert to new developments. Building and maintaining knowledge repositories provides competitive advantages and is also highly collaborative, requiring – and promoting – cooperation among internal teams and with external thought leaders.
Such repositories underpin detailed computational disease models that can help predict the impact of new treatments on disease. They also help bio-modelers determine which pieces of knowledge are reliable and most relevant to the task at hand.
The stronger the knowledge repository, the more accurate the model, making development faster, more efficient, and less burdensome for patients.
Jinkō and knowledge management
Knowledge management is at the heart of nova’s clinical trial simulation platform jinkō. Jinkō allows scientists to capture and assess the latest scientific and medical knowledge in a systematic, transparent and traceable manner. This “white box” environment is important because knowledge is dynamic. It emerges from observations and experiments, and can change. So sharing knowledge – and assessments of the value of any given piece of it – is key to its meaningful application.
Jinkō is designed to optimize the human curation of scientific knowledge and to facilitate the application of that knowledge in clinical trial simulations. At the core of this knowledge management is a structured knowledge review – a shared repository of curated knowledge on which given disease or treatment models are based.
The platform guides users through three steps to create a structured knowledge review:
- Selection, upload and collation of scientific source materials (papers, reviews, meta-analyses); automatic extraction of key information from PDFs
- Extraction of relevant sections of text or data; review and rewriting as applicable
- Systematic evaluation of the quality of chosen extracts or claims (“Strength of evidence”)
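To make the outcome of these steps concrete, the entries of a structured knowledge review could be sketched with a small data model. This is a hypothetical illustration, not jinkō's actual schema: the field names, claim types, and 1–5 score scale are assumptions.

```python
from dataclasses import dataclass
from enum import Enum

class ClaimType(Enum):
    STATEMENT = "statement"    # backed by evidence
    HYPOTHESIS = "hypothesis"  # proposed explanation

@dataclass
class Extract:
    source: str            # citation or identifier of the source publication
    text: str              # the extracted (possibly rewritten) passage
    claim_type: ClaimType  # statement vs. hypothesis classification
    evidence_score: int    # qualitative strength-of-evidence rating, e.g. 1 (weak) to 5 (strong)

# Example: one curated, scored entry in a structured knowledge review
entry = Extract(
    source="doi:10.0000/example",  # placeholder identifier
    text="Cytokine X is elevated in patients with condition Y.",
    claim_type=ClaimType.STATEMENT,
    evidence_score=4,
)
print(entry.claim_type.value, entry.evidence_score)
```

Keeping each extract paired with its source and score is what lets other team members later verify, re-score, or annotate individual components of the review.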
Strength of evidence
The evaluation step is especially critical to ensuring robust knowledge and disease models.
Extracts from scientific publications are first classified as statements (backed by evidence) or hypotheses (proposed explanations). Each is then scored according to the strength of the evidence supporting it, using a qualitative yet systematic scoring system.
These steps result in a centralized knowledge source – the structured review – with fully annotated, scored components. Other team members can verify and annotate these components, producing a document that is dynamic and shared – reflecting the qualities of the knowledge captured within it.
Knowledge-based in silico models
This document forms the kernel of knowledge-based in silico models, which are essentially mathematical representations of the assertions captured in the source material.
These mathematical models power clinical trial simulations. They capture and formalize the best and latest scientific knowledge, curated by humans, in a useful structure that generates practical applications from that knowledge.
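As a deliberately toy illustration of what "mathematical representation" can mean here – not any actual jinkō model – a single curated claim such as "drug D reduces the growth rate of biomarker B" might become a term in a one-equation model. All names and parameter values below are assumptions for illustration.

```python
# Toy in silico model: biomarker B grows logistically toward a carrying
# capacity; treatment scales the growth rate by (1 - effect).
# Integrated with a simple forward-Euler scheme.

def simulate(effect: float, b0: float = 0.1, growth: float = 0.3,
             capacity: float = 1.0, days: int = 20, dt: float = 0.1) -> float:
    """Return the biomarker level after `days`, given a treatment effect in [0, 1]."""
    b = b0
    for _ in range(int(days / dt)):
        # logistic growth, damped by the treatment effect
        db = growth * (1.0 - effect) * b * (1.0 - b / capacity)
        b += db * dt
    return b

untreated = simulate(effect=0.0)  # placebo arm
treated = simulate(effect=0.5)    # treatment halves the growth rate
print(f"untreated: {untreated:.3f}, treated: {treated:.3f}")
```

Running the same model under different treatment assumptions is, in miniature, what a clinical trial simulation does: each simulated arm is the model evaluated under a different intervention, with the model's terms traceable back to scored claims in the knowledge review.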
Purely data-driven approaches to R&D – most prominently, AI and machine learning – have a place in drug discovery and diagnosis. But today, they still lack the context and dynamics required to accurately model biological processes and generate accurate, clinically relevant predictions.